juju-0.7.orig/.testr.conf

[DEFAULT]
test_command=./test --reporter=subunit $LISTOPT $IDLIST
test_list_option=-n

juju-0.7.orig/COPYING

                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

                            Preamble

The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.

A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.

The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.

An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.

The precise terms and conditions for copying, distribution and modification follow.

                       TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. 
The Corresponding Source for a work in source code form is that same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

  a) The work must carry prominent notices stating that you modified it, and giving a relevant date.

  b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".

  c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy.
  This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

  d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

  a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.

  b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.

  c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.

  d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.

  e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

  a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or

  b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or

  c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or

  d) Limiting the use for publicity purposes of names of licensors or authors of the material; or

  e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or

  f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all.
For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements.

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <http://www.gnu.org/licenses/>.
juju-0.7.orig/Makefile

PEP8=pep8
COVERAGE_FILES=`find juju -name "*py" | grep -v "tests\|lib/mocker.py\|lib/testing.py"`

all:
	@echo "You've just watched the fastest build on earth."

tests:
	./test

coverage:
	python -c "import coverage as c; c.main()" run ./test
	python -c "import coverage as c; c.main()" html -d htmlcov $(COVERAGE_FILES)
	gnome-open htmlcov/index.html

ftests:
	./test --functional

tags:
	@ctags --python-kinds=-iv -R juju

etags:
	@ctags -e --python-kinds=-iv -R juju

present_pep8=$(shell which $(PEP8))
present_pyflakes=$(shell which pyflakes)

warn_missing_linters:
	@test -n "$(present_pep8)" || echo "WARNING: $(PEP8) not installed."
	@test -n "$(present_pyflakes)" || echo "WARNING: pyflakes not installed."

# "check": Check uncommitted changes for lint.
check_changes=$(shell bzr status -S | grep '^[ +]*[MN]' | awk '{print $$2;}' | grep "\\.py$$")

check: warn_missing_linters
	@test -z $(present_pep8) || (echo $(check_changes) | xargs -r $(PEP8) --repeat)
	@test -z $(present_pyflakes) || (echo $(check_changes) | xargs -r pyflakes)

# "review": Check all changes compared to trunk for lint.
review_changes=$(shell bzr status -S -r ancestor:$(JUJU_TRUNK) | grep '^[ +]*[MN]' | awk '{print $$2;}' | grep "\\.py$$")

review: warn_missing_linters
	#@test -z $(present_pep8) || (echo $(review_changes) | xargs -r $(PEP8) --repeat)
	@test -z $(present_pyflakes) || (echo $(review_changes) | xargs -r pyflakes)

ptests_changes=$(shell bzr status -S -r branch::prev | grep -P '^[ +]*[MN]' | awk '{print $$2;}' | grep "test_.*\\.py$$")

ptests:
	@echo $(ptests_changes) | xargs -r ./test

btests_changes=$(shell bzr status -S -r ancestor:$(JUJU_TRUNK)/ | grep "test.*\\.py$$" | awk '{print $$2;}')

btests:
	@./test $(btests_changes)

.PHONY: tags check review warn_missing_linters

juju-0.7.orig/README

juju
====

Welcome to juju, we hope you enjoy your stay.

You can always get the latest juju code by running::

    $ bzr branch lp:juju

The juju bug tracker is at https://bugs.launchpad.net/juju

Documentation for getting set up and running juju can be found at
http://juju.ubuntu.com/docs

====
Juju Developers

Except where stated otherwise, the files contained within this source
code tree are covered under the following copyright and license:

Copyright 2010, 2011 Canonical Ltd.  All Rights Reserved.

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.

juju-0.7.orig/bin/ (directory)
juju-0.7.orig/juju/ (directory)
juju-0.7.orig/misc/ (directory)
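The Makefile's check and review targets above shell out to bzr to list changed Python files, then feed them to pep8 and pyflakes. The following is a rough Python equivalent of that grep/awk pipeline, offered only as a sketch — the bzr invocation and the "-S" status codes are taken from the Makefile, everything else here is illustrative::

    import re
    import subprocess

    def changed_python_files(revision=None):
        """List modified/new .py files, mirroring the Makefile's grep/awk chain."""
        cmd = ["bzr", "status", "-S"]
        if revision:
            cmd += ["-r", revision]
        out = subprocess.check_output(cmd)
        files = []
        for line in out.splitlines():
            # "-S" prints short-form lines such as " M juju/errors.py";
            # keep only modified (M) or new (N) entries, as the grep does.
            match = re.match(r"^[ +]*[MN]\s+(\S+)$", line)
            if match and match.group(1).endswith(".py"):
                files.append(match.group(1))
        return files

    if __name__ == "__main__":
        files = changed_python_files()
        if files:
            subprocess.call(["pyflakes"] + files)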
""" try: from setuptools import find_packages return find_packages() except ImportError: pass packages = [] for directory, subdirectories, files in os.walk("juju"): if "__init__.py" in files: packages.append(directory.replace(os.sep, '.')) return packages setup( name="juju", version=__version__, description="Cloud automation and orchestration", author="Juju Developers", author_email="juju@lists.ubuntu.com", url="https://launchpad.net/juju", license="GPL", packages=find_packages(), scripts=glob("./bin/*"), classifiers=[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: System Administrators", "Intended Audience :: Information Technology", "Programming Language :: Python", "Topic :: Database", "Topic :: Internet :: WWW/HTTP", ], ) juju-0.7.orig/test0000755000000000000000000000302512135220114012316 0ustar 00000000000000#!/usr/bin/env python import os import sys from twisted.scripts.trial import run from juju.tests.common import zookeeper_test_context from juju.lib.testing import TestCase FUNCTIONAL = '--functional' def main(args): if not "ZOOKEEPER_PATH" in os.environ: # Look for a system install of ZK env_path = "/etc/zookeeper/conf/environment" if os.path.exists(env_path): print "Using system zookeeper classpath from %s" % env_path os.environ["ZOOKEEPER_PATH"] = "system" else: print ("Environment variable ZOOKEEPER_PATH must be defined " "and should point to directory of Zookeeper installation") exit() matched = [arg for arg in args if arg.startswith("juju")] if FUNCTIONAL in sys.argv: sys.argv.remove(FUNCTIONAL) sys.argv.append("juju.ftests") elif matched: pass else: packages = [p for p in os.listdir("juju") \ if os.path.isdir("juju%s%s"%(os.sep, p))] packages.remove("ftests") sys.argv.extend(["juju.%s"%p for p in packages]) if 'JUJU_TEST_TIMEOUT' in os.environ: try: TestCase.timeout = float(os.environ['JUJU_TEST_TIMEOUT']) except ValueError: print ("JUJU_TEST_TIMEOUT must be a number") exit() with zookeeper_test_context( os.environ["ZOOKEEPER_PATH"], os.environ.get("ZOOKEEPER_TEST_PORT", 28181)) as zk: run() if __name__ == "__main__": main(sys.argv[1:]) juju-0.7.orig/bin/close-port0000755000000000000000000000047512135220114014204 0ustar 00000000000000#!/usr/bin/env python # We avoid using PYTHONPATH because it can cause side effects on hook execution import os, sys if "JUJU_PYTHONPATH" in os.environ: sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":")) from juju.hooks.commands import close_port if __name__ == '__main__': close_port() juju-0.7.orig/bin/config-get0000755000000000000000000000047612135220114014140 0ustar 00000000000000#!/usr/bin/env python # We avoid using PYTHONPATH because it can cause side effects on hook execution import os, sys if "JUJU_PYTHONPATH" in os.environ: sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":")) from juju.hooks.commands import config_get if __name__ == '__main__': config_get() juju-0.7.orig/bin/juju0000755000000000000000000000027612135220114013071 0ustar 00000000000000#!/usr/bin/env python import sys from juju.control import main from juju.errors import JujuError try: main(sys.argv[1:]) except JujuError, error: sys.exit("error: %s" % (error,)) juju-0.7.orig/bin/juju-admin0000755000000000000000000000027712135220114014160 0ustar 00000000000000#!/usr/bin/env python import sys from juju.control import admin from juju.errors import JujuError try: admin(sys.argv[1:]) except JujuError, error: sys.exit("error: %s" % (error,)) 
juju-0.7.orig/bin/juju-log

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import log

if __name__ == '__main__':
    log()

juju-0.7.orig/bin/open-port

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import open_port

if __name__ == '__main__':
    open_port()

juju-0.7.orig/bin/relation-get

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import relation_get

if __name__ == '__main__':
    relation_get()

juju-0.7.orig/bin/relation-ids

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import relation_ids

if __name__ == '__main__':
    relation_ids()

juju-0.7.orig/bin/relation-list

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import relation_list

if __name__ == '__main__':
    relation_list()

juju-0.7.orig/bin/relation-set

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import relation_set

if __name__ == '__main__':
    relation_set()

juju-0.7.orig/bin/unit-get

#!/usr/bin/env python

# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
    sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))

from juju.hooks.commands import unit_get

if __name__ == '__main__':
    unit_get()

juju-0.7.orig/juju/__init__.py

#
__version__ = '0.7'

juju-0.7.orig/juju/agents/ (directory)
juju-0.7.orig/juju/charm/ (directory)
juju-0.7.orig/juju/control/ (directory)
juju-0.7.orig/juju/environment/ (directory)
""" class JujuError(Exception): """All errors in juju are subclasses of this. This error should not be raised by itself, though, since it means pretty much nothing. It's useful mostly as something to catch instead. """ class IncompatibleVersion(JujuError): """Raised when there is a mismatch in versions using the topology. This mismatch will occur when the /topology node has the key version set to a version different from juju.state.topology.VERSION in the code itself. This scenario can occur when a new client accesses an environment deployed with previous code, or upon the update of the code in the environment itself. Although this checking is done at the level of the topology, upon every read, the error is defined here because of its generality. Doing the check in the topology is just because of the centrality of that piece within juju. """ def __init__(self, current, wanted): self.current = current self.wanted = wanted def __str__(self): return ( "Incompatible juju protocol versions (found %r, want %r)" % ( self.current, self.wanted)) class FileNotFound(JujuError): """Raised when a file is not found, obviously! :-) @ivar path: Path of the directory or file which wasn't found. """ def __init__(self, path): self.path = path def __str__(self): return "File was not found: %r" % (self.path,) class CharmError(JujuError): """An error occurred while processing a charm.""" def __init__(self, path, message): self.path = path self.message = message def __str__(self): return "Error processing %r: %s" % (self.path, self.message) class CharmInvocationError(CharmError): """A charm's hook invocation exited with an error""" def __init__(self, path, exit_code, signal=None): self.path = path self.exit_code = exit_code self.signal = signal def __str__(self): if self.signal is None: return "Error processing %r: exit code %s." % ( self.path, self.exit_code) else: return "Error processing %r: signal %s." % ( self.path, self.signal) class CharmUpgradeError(CharmError): """Something went wrong trying to upgrade a charm""" def __init__(self, message): self.message = message def __str__(self): return "Cannot upgrade charm: %s" % self.message class FileAlreadyExists(JujuError): """Raised when something refuses to overwrite an existing file. @ivar path: Path of the directory or file which wasn't found. 
""" def __init__(self, path): self.path = path def __str__(self): return "File already exists, won't overwrite: %r" % (self.path,) class NoConnection(JujuError): """Raised when the CLI is unable to establish a Zookeeper connection.""" class InvalidHost(NoConnection): """Raised when the CLI cannot connect to ZK because of an invalid host.""" class InvalidUser(NoConnection): """Raised when the CLI cannot connect to ZK because of an invalid user.""" class EnvironmentNotFound(NoConnection): """Raised when the juju environment cannot be found.""" def __init__(self, info="no details available"): self._info = info def __str__(self): return "juju environment not found: %s" % self._info class EnvironmentPending(NoConnection): """Raised when the juju environment is not accessible.""" class ConstraintError(JujuError): """Machine constraints are inappropriate or incomprehensible""" class UnknownConstraintError(ConstraintError): """Constraint name not recognised""" def __init__(self, name): self.name = name def __str__(self): return "Unknown constraint: %r" % self.name class ProviderError(JujuError): """Raised when an exception occurs in a provider.""" class CloudInitError(ProviderError): """Raised when a cloud-init file is misconfigured""" class MachinesNotFound(ProviderError): """Raised when a provider can't fulfil a request for machines.""" def __init__(self, instance_ids): self.instance_ids = list(instance_ids) def __str__(self): return "Cannot find machine%s: %s" % ( "" if len(self.instance_ids) == 1 else "s", ", ".join(map(str, self.instance_ids))) class ProviderInteractionError(ProviderError): """Raised when an unexpected error occurs interacting with a provider""" class CannotTerminateMachine(JujuError): """Cannot terminate machine because of some reason""" def __init__(self, id, reason): self.id = id self.reason = reason def __str__(self): return "Cannot terminate machine %d: %s" % (self.id, self.reason) class InvalidPlacementPolicy(JujuError): """The provider does not support the user specified placement policy. 
""" def __init__(self, user_policy, provider_type, provider_policies): self.user_policy = user_policy self.provider_type = provider_type self.provider_policies = provider_policies def __str__(self): return ( "Unsupported placement policy: %r " "for provider: %r, supported policies %s" % ( self.user_policy, self.provider_type, ", ".join(self.provider_policies))) class ServiceError(JujuError): """Some problem with an upstart service""" class SSLVerificationError(JujuError): """User friendly wrapper for SSL certificate errors Unfortunately the SSL exceptions on certificate validation failure are not very useful, just being: ('SSL routines','SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed') """ def __init__(self, ssl_error): # TODO: pass and report hostname that did not validate self.ssl_error = ssl_error def __str__(self): return ("Bad HTTPS certificate, " "set 'ssl-hostname-verification' to false to permit") class SSLVerificationUnsupported(JujuError): """Verifying https certificates unsupported as txaws lacks support""" def __str__(self): return ("HTTPS certificates cannot be verified as txaws.client.ssl is" " missing.\n" "Upgrade txaws or set 'ssl-hostname-verification' to false.") juju-0.7.orig/juju/ftests/0000755000000000000000000000000012135220114013676 5ustar 00000000000000juju-0.7.orig/juju/hooks/0000755000000000000000000000000012135220114013511 5ustar 00000000000000juju-0.7.orig/juju/lib/0000755000000000000000000000000012135220114013134 5ustar 00000000000000juju-0.7.orig/juju/machine/0000755000000000000000000000000012135220114013772 5ustar 00000000000000juju-0.7.orig/juju/providers/0000755000000000000000000000000012135220114014403 5ustar 00000000000000juju-0.7.orig/juju/state/0000755000000000000000000000000012135220114013506 5ustar 00000000000000juju-0.7.orig/juju/tests/0000755000000000000000000000000012135220114013530 5ustar 00000000000000juju-0.7.orig/juju/unit/0000755000000000000000000000000012135220114013345 5ustar 00000000000000juju-0.7.orig/juju/agents/__init__.py0000644000000000000000000000000212135220114015750 0ustar 00000000000000# juju-0.7.orig/juju/agents/base.py0000644000000000000000000002657312135220114015150 0ustar 00000000000000import argparse import os import logging import stat import sys import yaml import zookeeper from twisted.application import service from twisted.internet.defer import inlineCallbacks, returnValue from twisted.scripts._twistd_unix import UnixApplicationRunner, UnixAppLogger from twisted.python.log import PythonLoggingObserver from txzookeeper import ZookeeperClient from txzookeeper.managed import ManagedClient from juju.control.options import setup_twistd_options from juju.errors import NoConnection, JujuError from juju.lib.zklog import ZookeeperHandler from juju.lib.zk import CLIENT_SESSION_TIMEOUT from juju.state.environment import GlobalSettingsStateManager def load_client_id(path): try: with open(path) as f: return yaml.load(f.read()) except IOError: return None def save_client_id(path, client_id): parent = os.path.dirname(path) if not os.path.exists(parent): os.makedirs(parent) with open(path, "w") as f: f.write(yaml.dump(client_id)) os.chmod(path, stat.S_IRUSR | stat.S_IWUSR) class TwistedOptionNamespace(object): """ An argparse namespace implementation that is compatible with twisted config dictionary usage. 
""" def __getitem__(self, key): return self.__dict__[key] def __setitem__(self, key, value): self.__dict__[key] = value def get(self, key, default=None): return self.__dict__.get(key, default) def has_key(self, key): return key in self.__dict__ class AgentLogger(UnixAppLogger): def __init__(self, options): super(AgentLogger, self).__init__(options) self._loglevel = options.get("loglevel", logging.DEBUG) def _getLogObserver(self): if self._logfilename == "-": log_file = sys.stdout else: log_file = open(self._logfilename, "a") # Setup file logger log_handler = logging.StreamHandler(log_file) formatter = logging.Formatter( "%(asctime)s: %(name)s@%(levelname)s: %(message)s") log_handler.setFormatter(formatter) # Also capture zookeeper logs (XXX not compatible with rotation) zookeeper.set_log_stream(log_file) zookeeper.set_debug_level(0) # Configure logging. root = logging.getLogger() root.addHandler(log_handler) root.setLevel(logging.getLevelName(self._loglevel)) # Twisted logging is painfully verbose on twisted.web, and # there isn't a good way to distinguish different channels # within twisted, so just utlize error level logging only for # all of twisted. twisted_log = logging.getLogger("twisted") twisted_log.setLevel(logging.ERROR) observer = PythonLoggingObserver() return observer.emit class AgentRunner(UnixApplicationRunner): application = None loggerFactory = AgentLogger def createOrGetApplication(self): return self.application class BaseAgent(object, service.Service): name = "juju-agent-unknown" client = None # Flag when enabling persistent topology watches, testing aid. _watch_enabled = True # Distributed debug log handler _debug_log_handler = None @classmethod def run(cls): """Runs the agent as a unix daemon. Main entry point for starting an agent, parses cli options, and setups a daemon using twistd as per options. """ parser = argparse.ArgumentParser() cls.setup_options(parser) config = parser.parse_args(namespace=TwistedOptionNamespace()) runner = AgentRunner(config) agent = cls() agent.configure(config) runner.application = agent.as_app() runner.run() @classmethod def setup_options(cls, parser): """Configure the argparse cli parser for the agent.""" return cls.setup_default_options(parser) @classmethod def setup_default_options(cls, parser): """Setup default twistd daemon and agent options. This method is intended as a utility for subclasses. @param parser an argparse instance. @type C{argparse.ArgumentParser} """ setup_twistd_options(parser, cls) setup_default_agent_options(parser, cls) def as_app(self): """ Return the agent as a C{twisted.application.service.Application} """ app = service.Application(self.name) self.setServiceParent(app) return app def configure(self, options): """Configure the agent to handle its cli options. Invoked called before the service is started. @param options @type C{TwistedOptionNamespace} an argparse namespace corresponding to a dict. """ if not options.get("zookeeper_servers"): raise NoConnection("No zookeeper connection configured.") if not os.path.exists(options.get("juju_directory", "")): raise JujuError( "Invalid juju-directory %r, does not exist." % ( options.get("juju_directory"))) if options["session_file"] is None: raise JujuError("No session file specified") self.config = options @inlineCallbacks def _kill_existing_session(self): try: # We might have died suddenly, in which case the session may # still be alive. If this is the case, shoot it in the head, so # it doesn't interfere with our attempts to recreate our state. 
    @inlineCallbacks
    def _kill_existing_session(self):
        try:
            # We might have died suddenly, in which case the session may
            # still be alive. If this is the case, shoot it in the head,
            # so it doesn't interfere with our attempts to recreate our
            # state. (We need to be able to recreate our state *anyway*,
            # and it's much simpler to force ourselves to recreate it
            # every time than it is to mess around partially recreating
            # partial state.)
            client_id = load_client_id(self.config["session_file"])
            if client_id is None:
                return
            temp_client = yield ZookeeperClient().connect(
                self.config["zookeeper_servers"], client_id=client_id)
            yield temp_client.close()
        except zookeeper.ZooKeeperException:
            # We don't really care what went wrong; just that we're not
            # able to connect using the old session, and therefore we
            # should be ok to start a fresh one without transient state
            # hanging around.
            pass

    @inlineCallbacks
    def connect(self):
        """Return an authenticated connection to the juju zookeeper."""
        yield self._kill_existing_session()
        self.client = yield ManagedClient(
            session_timeout=CLIENT_SESSION_TIMEOUT).connect(
                self.config["zookeeper_servers"])
        save_client_id(
            self.config["session_file"], self.client.client_id)

        principals = self.config.get("principals", ())
        for principal in principals:
            self.client.add_auth("digest", principal)

        # bug work around to keep auth fast
        if principals:
            yield self.client.exists("/")

        returnValue(self.client)

    def start(self):
        """Callback invoked on the agent's startup.

        The agent will already be connected to zookeeper. Subclasses are
        responsible for implementing.
        """
        raise NotImplementedError

    def stop(self):
        """Callback invoked when the agent is shutting down."""
        pass

    # Twisted IService implementation, used for delegates to maintain
    # naming conventions.

    @inlineCallbacks
    def startService(self):
        yield self.connect()

        # Start the global settings watch prior to starting the agent.
        # Allows for debug log to be enabled early.
        if self.get_watch_enabled():
            yield self.start_global_settings_watch()

        yield self.start()

    @inlineCallbacks
    def stopService(self):
        try:
            yield self.stop()
        finally:
            if self.client and self.client.connected:
                self.client.close()
            session_file = self.config["session_file"]
            if os.path.exists(session_file):
                os.unlink(session_file)

    def set_watch_enabled(self, flag):
        """Set boolean flag for whether this agent should watch zookeeper.

        This is mainly used for testing, to allow for setting up the
        various data scenarios before enabling an agent watch which will
        be observing state.
        """
        self._watch_enabled = bool(flag)

    def get_watch_enabled(self):
        """Returns a boolean for whether the agent should set state watches.

        The meaning of this flag is typically agent specific, as each
        agent has separate watches it would like to establish on agent
        specific state within zookeeper. In general if this flag is False,
        the agent should refrain from establishing a watch on startup.
        This flag is typically used by tests to isolate and test the watch
        behavior independent of the agent startup, via construction of
        test data.
        """
        return self._watch_enabled

    def start_global_settings_watch(self):
        """Start watching the runtime state for configuration changes."""
        self.global_settings_state = GlobalSettingsStateManager(self.client)
        return self.global_settings_state.watch_settings_changes(
            self.on_global_settings_change)

    @inlineCallbacks
    def on_global_settings_change(self, change):
        """On global settings change, take action."""
        if (yield self.global_settings_state.is_debug_log_enabled()):
            yield self.start_debug_log()
        else:
            self.stop_debug_log()
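    # The settings watch above is how an operator's debug-log toggle
    # reaches every agent: each change to the global settings node fires
    # on_global_settings_change, which flips the zookeeper-backed log
    # handler below on or off. Roughly (names reused from this class):
    #
    #   yield agent.start_global_settings_watch()
    #   # debug logging gets enabled in the global settings node...
    #   # -> on_global_settings_change(change) runs
    #   # -> is_debug_log_enabled() returns True -> start_debug_log()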
""" if self._debug_log_handler is not None: returnValue(None) context_name = self.get_agent_name() self._debug_log_handler = ZookeeperHandler( self.client, context_name) yield self._debug_log_handler.open() log_root = logging.getLogger() log_root.addHandler(self._debug_log_handler) def stop_debug_log(self): """Disable any configured debug log handler. """ if self._debug_log_handler is None: return handler, self._debug_log_handler = self._debug_log_handler, None log_root = logging.getLogger() log_root.removeHandler(handler) def get_agent_name(self): """Return the agent's name and context such that it can be identified. Subclasses should override this to provide additional context and unique naming. """ return self.__class__.__name__ def setup_default_agent_options(parser, cls): principals_default = os.environ.get("JUJU_PRINCIPALS", "").split() parser.add_argument( "--principal", "-e", action="append", dest="principals", default=principals_default, help="Agent principals to utilize for the zookeeper connection") servers_default = os.environ.get("JUJU_ZOOKEEPER", "") parser.add_argument( "--zookeeper-servers", "-z", default=servers_default, help="juju Zookeeper servers to connect to ($JUJU_ZOOKEEPER)") juju_home = os.environ.get("JUJU_HOME", "/var/lib/juju") parser.add_argument( "--juju-directory", default=juju_home, type=os.path.abspath, help="juju working directory ($JUJU_HOME)") parser.add_argument( "--session-file", default=None, type=os.path.abspath, help="like a pidfile, but for the zookeeper session id") juju-0.7.orig/juju/agents/dummy.py0000644000000000000000000000051712135220114015357 0ustar 00000000000000 from .base import BaseAgent class DummyAgent(BaseAgent): """A do nothing juju agent. A bit like a dog, it just lies around basking in the sun, doing nothing, nonetheless its quite content. :-) """ def start(self): """nothing to see here, move along.""" if __name__ == '__main__': DummyAgent.run() juju-0.7.orig/juju/agents/machine.py0000644000000000000000000001023112135220114015622 0ustar 00000000000000import logging import os from twisted.internet.defer import inlineCallbacks from juju.errors import JujuError from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager from juju.unit.deploy import UnitDeployer from .base import BaseAgent log = logging.getLogger("juju.agents.machine") class MachineAgent(BaseAgent): """A juju machine agent. The machine agent is responsible for monitoring service units assigned to a machine. If a new unit is assigned to machine, the machine agent will download the charm, create a working space for the service unit agent, and then launch it. Additionally the machine agent will monitor the running service unit agents on the machine, via their ephemeral nodes, and restart them if they die. """ name = "juju-machine-agent" unit_agent_module = "juju.agents.unit" @property def units_directory(self): return os.path.join(self.config["juju_directory"], "units") @property def unit_state_directory(self): return os.path.join(self.config["juju_directory"], "state") @inlineCallbacks def start(self): """Start the machine agent. Creates state directories on the machine, retrieves the machine state, and enables watch on assigned units. """ if not os.path.exists(self.units_directory): os.makedirs(self.units_directory) if not os.path.exists(self.unit_state_directory): os.makedirs(self.unit_state_directory) # Get state managers we'll be utilizing. 
self.service_state_manager = ServiceStateManager(self.client) self.unit_deployer = UnitDeployer( self.client, self.get_machine_id(), self.config["juju_directory"]) yield self.unit_deployer.start() # Retrieve the machine state for the machine we represent. machine_manager = MachineStateManager(self.client) self.machine_state = yield machine_manager.get_machine_state( self.get_machine_id()) # Watch assigned units for the machine. if self.get_watch_enabled(): self.machine_state.watch_assigned_units( self.watch_service_units) # Connect the machine agent, broadcasting presence to the world. yield self.machine_state.connect_agent() log.info("Machine agent started id:%s" % self.get_machine_id()) @inlineCallbacks def watch_service_units(self, old_units, new_units): """Callback invoked when the assigned service units change. """ if old_units is None: old_units = set() log.debug( "Units changed old:%s new:%s", old_units, new_units) stopped = old_units - new_units started = new_units - old_units for unit_name in stopped: log.debug("Stopping service unit: %s ...", unit_name) try: yield self.unit_deployer.kill_service_unit(unit_name) except Exception: log.exception("Error stopping unit: %s", unit_name) for unit_name in started: log.debug("Starting service unit: %s ...", unit_name) try: yield self.unit_deployer.start_service_unit(unit_name) except Exception: log.exception("Error starting unit: %s", unit_name) def get_machine_id(self): """Get the id of the machine as known within the zk state.""" return self.config["machine_id"] def get_agent_name(self): return "Machine:%s" % (self.get_machine_id()) def configure(self, options): super(MachineAgent, self).configure(options) if not options.get("machine_id"): msg = ("--machine-id must be provided in the command line, " "or $JUJU_MACHINE_ID in the environment") raise JujuError(msg) @classmethod def setup_options(cls, parser): super(MachineAgent, cls).setup_options(parser) machine_id = os.environ.get("JUJU_MACHINE_ID", "") parser.add_argument( "--machine-id", default=machine_id) return parser if __name__ == "__main__": MachineAgent().run() juju-0.7.orig/juju/agents/provision.py0000644000000000000000000002263112135220114016255 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks, returnValue, succeed from zookeeper import NoNodeException from juju.environment.config import EnvironmentsConfig from juju.errors import ProviderError from juju.lib.twistutils import concurrent_execution_guard from juju.state.errors import MachineStateNotFound, StopWatcher from juju.state.firewall import FirewallManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager from .base import BaseAgent log = logging.getLogger("juju.agents.provision") class ProvisioningAgent(BaseAgent): name = "juju-provisoning-agent" _current_machines = () # time in seconds machine_check_period = 60 def get_agent_name(self): return "provision:%s" % (self.environment.type) @inlineCallbacks def start(self): self._running = True self.environment = yield self.configure_environment() self.provider = self.environment.get_machine_provider() self.machine_state_manager = MachineStateManager(self.client) self.service_state_manager = ServiceStateManager(self.client) self.firewall_manager = FirewallManager( self.client, self.is_running, self.provider) if self.get_watch_enabled(): self.machine_state_manager.watch_machine_states( self.watch_machine_changes) self.service_state_manager.watch_service_states( 
                self.firewall_manager.watch_service_changes)
            from twisted.internet import reactor
            reactor.callLater(
                self.machine_check_period, self.periodic_machine_check)
            log.info("Started provisioning agent")
        else:
            log.info("Started provisioning agent without watches enabled")

    def stop(self):
        log.info("Stopping provisioning agent")
        self._running = False
        return succeed(True)

    def is_running(self):
        """Whether this agent is running or not."""
        return self._running

    @inlineCallbacks
    def configure_environment(self):
        """The provisioning agent configures its environment on start or change.

        The environment contains the configuration the agent needs to
        interact with its machine provider, in order to do its work. This
        configuration data is deployed lazily over an encrypted connection
        upon first usage.

        The agent waits for this data to exist before completing its
        startup.
        """
        try:
            get_d, watch_d = self.client.get_and_watch("/environment")
            environment_data, stat = yield get_d
            watch_d.addCallback(self._on_environment_changed)
        except NoNodeException:
            # Wait till the environment node appears. play twisted gymnastics
            exists_d, watch_d = self.client.exists_and_watch("/environment")
            stat = yield exists_d
            if stat:
                environment = yield self.configure_environment()
            else:
                watch_d.addCallback(
                    lambda result: self.configure_environment())
            if not stat:
                environment = yield watch_d
            returnValue(environment)

        config = EnvironmentsConfig()
        config.parse(environment_data)
        returnValue(config.get_default())

    @inlineCallbacks
    def _on_environment_changed(self, event):
        """Reload the environment if its data changes."""
        if event.type_name == "deleted":
            return
        self.environment = yield self.configure_environment()
        self.provider = self.environment.get_machine_provider()

    def periodic_machine_check(self):
        """A periodic check of machine states and provider machines.

        In addition to the on demand changes to zookeeper states that are
        monitored by L{watch_machine_changes}, the periodic machine check
        performs non zookeeper related verification by periodically
        checking the current provider machine states against the last
        known zookeeper state.

        Primarily this helps in recovering from transient error conditions
        which may have prevented processing of an individual machine state,
        as well as verifying the current state of the provider's running
        machines against the zk state, thus pruning unused resources.
        """
        from twisted.internet import reactor
        d = self.process_machines(self._current_machines)
        d.addBoth(
            lambda result: reactor.callLater(
                self.machine_check_period, self.periodic_machine_check))
        return d

    @inlineCallbacks
    def watch_machine_changes(self, old_machines, new_machines):
        """Watches and processes machine state changes.

        This function is used to subscribe to topology changes, and
        specifically changes to machines within the topology. It performs
        work against the machine provider to ensure that the currently
        running state of the juju cluster corresponds to the topology via
        creation and deletion of machines.

        The subscription utilized is a permanent one, meaning that this
        function will automatically be rescheduled to run whenever a
        topology state change happens that involves machines.

        This function also caches the current set of machines as an agent
        instance attribute.

        @param old_machines machine ids as existed in the previous topology.

        @param new_machines machine ids as exist in the current topology.
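
        For example (a sketch of the set semantics): a change from
        machines ``[0, 1]`` to ``[1, 2]`` means machine 0 left the
        topology and machine 2 joined it; both cases are reconciled by
        the ``process_machines`` call below, which starts missing
        provider machines and shuts down unreferenced ones.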
""" if not self._running: raise StopWatcher() log.debug("Machines changed old:%s new:%s", old_machines, new_machines) self._current_machines = new_machines try: yield self.process_machines(self._current_machines) except Exception: # Log and effectively retry later in periodic_machine_check log.exception( "Got unexpected exception in processing machines," " will retry") @concurrent_execution_guard("_processing_machines") @inlineCallbacks def process_machines(self, current_machines): """Ensure the currently running machines correspond to state. At the end of each process_machines execution, verify that all running machines within the provider correspond to machine_ids within the topology. If they don't then shut them down. Utilizes concurrent execution guard, to ensure that this is only being executed at most once per process. """ # XXX this is obviously broken, but the margins of 80 columns prevent # me from describing. hint think concurrent agents, and use a lock. # map of instance_id -> machine try: provider_machines = yield self.provider.get_machines() except ProviderError: log.exception("Cannot get machine list") return provider_machines = dict( [(m.instance_id, m) for m in provider_machines]) instance_ids = [] for machine_state_id in current_machines: try: instance_id = yield self.process_machine( machine_state_id, provider_machines) except (MachineStateNotFound, ProviderError): log.exception("Cannot process machine %s", machine_state_id) continue instance_ids.append(instance_id) # Terminate all unused juju machines running within the cluster. unused = set(provider_machines.keys()) - set(instance_ids) for instance_id in unused: log.info("Shutting down machine id:%s ...", instance_id) machine = provider_machines[instance_id] try: yield self.provider.shutdown_machine(machine) except ProviderError: log.exception("Cannot shutdown machine %s", instance_id) continue @inlineCallbacks def process_machine(self, machine_state_id, provider_machine_map): """Ensure a provider machine for a machine state id. For each machine_id in new machines which represents the current state of the topology: * Check to ensure its state reflects that it has been launched. If it hasn't then create the machine and update the state. * Watch the machine's assigned services so that changes can be applied to the firewall for service exposing support. """ # fetch the machine state machine_state = yield self.machine_state_manager.get_machine_state( machine_state_id) instance_id = yield machine_state.get_instance_id() # Verify a machine id has state and is running, else launch it. 
        if instance_id is None or instance_id not in provider_machine_map:
            log.info("Starting machine id:%s ...", machine_state.id)
            constraints = yield machine_state.get_constraints()
            machines = yield self.provider.start_machine(
                {"machine-id": machine_state.id,
                 "constraints": constraints})
            instance_id = machines[0].instance_id
            yield machine_state.set_instance_id(instance_id)

        # The firewall manager also needs to be checked for any
        # outstanding retries on this machine
        yield self.firewall_manager.process_machine(machine_state)
        returnValue(instance_id)


if __name__ == '__main__':
    ProvisioningAgent().run()
juju-0.7.orig/juju/agents/tests/0000755000000000000000000000000012135220114015011 5ustar 00000000000000juju-0.7.orig/juju/agents/unit.py0000644000000000000000000002064412135220114015206 0ustar 00000000000000import os
import logging

from twisted.internet.defer import inlineCallbacks, returnValue

from juju.errors import JujuError
from juju.state.service import ServiceStateManager, RETRY_HOOKS
from juju.hooks.protocol import UnitSettingsFactory
from juju.hooks.executor import HookExecutor

from juju.unit.address import get_unit_address
from juju.unit.lifecycle import UnitLifecycle, HOOK_SOCKET_FILE
from juju.unit.workflow import UnitWorkflowState

from juju.agents.base import BaseAgent

log = logging.getLogger("juju.agents.unit")


def unit_path(juju_path, unit_state):
    return os.path.join(
        juju_path, "units", unit_state.unit_name.replace("/", "-"))


class UnitAgent(BaseAgent):
    """A juju Unit Agent.

    Provides for the management of a charm, via hook execution in response
    to external events in the coordination space (zookeeper).
    """

    name = "juju-unit-agent"

    @classmethod
    def setup_options(cls, parser):
        super(UnitAgent, cls).setup_options(parser)
        unit_name = os.environ.get("JUJU_UNIT_NAME", "")
        parser.add_argument("--unit-name", default=unit_name)

    @property
    def unit_name(self):
        return self.config["unit_name"]

    def get_agent_name(self):
        return "unit:%s" % self.unit_name

    def configure(self, options):
        """Configure the unit agent."""
        super(UnitAgent, self).configure(options)
        if not options.get("unit_name"):
            msg = ("--unit-name must be provided in the command line, "
                   "or $JUJU_UNIT_NAME in the environment")
            raise JujuError(msg)
        self.executor = HookExecutor()
        self.api_factory = UnitSettingsFactory(
            self.executor.get_hook_context,
            self.executor.get_invoker,
            logging.getLogger("unit.hook.api"))
        self.api_socket = None
        self.workflow = None

    @inlineCallbacks
    def start(self):
        """Start the unit agent process."""
        service_state_manager = ServiceStateManager(self.client)

        # Retrieve our unit and configure working directories.
        service_name = self.unit_name.split("/")[0]
        self.service_state = yield service_state_manager.get_service_state(
            service_name)

        self.unit_state = yield self.service_state.get_unit_state(
            self.unit_name)
        self.unit_directory = os.path.join(
            self.config["juju_directory"], "units",
            self.unit_state.unit_name.replace("/", "-"))

        self.state_directory = os.path.join(
            self.config["juju_directory"], "state")

        # Setup the server portion of the cli api exposed to hooks.
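        # The api is served over a UNIX domain socket inside the unit's
        # working directory; a sketch of the resulting path, assuming the
        # default JUJU_HOME and a unit named "wordpress/0":
        #   /var/lib/juju/units/wordpress-0/<HOOK_SOCKET_FILE>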
        socket_path = os.path.join(self.unit_directory, HOOK_SOCKET_FILE)
        if os.path.exists(socket_path):
            os.unlink(socket_path)
        from twisted.internet import reactor
        self.api_socket = reactor.listenUNIX(socket_path, self.api_factory)

        # Setup the unit state's address
        address = yield get_unit_address(self.client)
        yield self.unit_state.set_public_address(
            (yield address.get_public_address()))
        yield self.unit_state.set_private_address(
            (yield address.get_private_address()))

        # Inform the system, we're alive.
        yield self.unit_state.connect_agent()

        # Start paying attention to the debug-log setting
        if self.get_watch_enabled():
            yield self.unit_state.watch_hook_debug(self.cb_watch_hook_debug)

        self.lifecycle = UnitLifecycle(
            self.client, self.unit_state, self.service_state,
            self.unit_directory, self.state_directory, self.executor)

        self.workflow = UnitWorkflowState(
            self.client, self.unit_state, self.lifecycle,
            self.state_directory)

        # Set up correct lifecycle and executor state given the persistent
        # unit workflow state, and fire any starting transitions if necessary.
        with (yield self.workflow.lock()):
            yield self.workflow.synchronize(self.executor)

        if self.get_watch_enabled():
            yield self.unit_state.watch_resolved(self.cb_watch_resolved)
            yield self.service_state.watch_config_state(
                self.cb_watch_config_changed)
            yield self.unit_state.watch_upgrade_flag(
                self.cb_watch_upgrade_flag)

    @inlineCallbacks
    def stop(self):
        """Stop the unit agent process."""
        if self.lifecycle.running:
            yield self.lifecycle.stop(fire_hooks=False, stop_relations=False)
        yield self.executor.stop()
        if self.api_socket:
            yield self.api_socket.stopListening()
        yield self.api_factory.stopFactory()

    @inlineCallbacks
    def cb_watch_resolved(self, change):
        """Update the unit's state, when it is resolved.

        Resolved operations form the basis of error recovery for unit
        workflows. A resolved operation can optionally specify hook
        execution. The unit agent runs the error recovery transition if
        the unit is not in a running state.
        """
        # Would be nice if we could fold this into an atomic
        # get and delete primitive.
        # Check resolved setting
        resolved = yield self.unit_state.get_resolved()
        if resolved is None:
            returnValue(None)

        # Clear out the setting
        yield self.unit_state.clear_resolved()

        with (yield self.workflow.lock()):
            if (yield self.workflow.get_state()) == "started":
                returnValue(None)
            try:
                log.info("Resolved detected, firing retry transition")
                if resolved["retry"] == RETRY_HOOKS:
                    yield self.workflow.fire_transition_alias("retry_hook")
                else:
                    yield self.workflow.fire_transition_alias("retry")
            except Exception:
                log.exception("Unknown error while transitioning for resolved")

    @inlineCallbacks
    def cb_watch_hook_debug(self, change):
        """Update the hooks to be debugged when the settings change."""
        debug = yield self.unit_state.get_hook_debug()
        debug_hooks = debug and debug.get("debug_hooks") or None
        self.executor.set_debug(debug_hooks)

    @inlineCallbacks
    def cb_watch_upgrade_flag(self, change):
        """Update the unit's charm when requested."""
        upgrade_flag = yield self.unit_state.get_upgrade_flag()
        if not upgrade_flag:
            log.info("No upgrade flag set.")
            return

        log.info("Upgrade detected")
        # Clear the flag immediately; this means that upgrade requests will
        # be *ignored* by units which are not "started", and will need to be
        # reissued when the units are in acceptable states.
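        # (A sketch of the flag payload, inferred from the checks below;
        # "force" requests the upgrade even for non-started units:
        #   upgrade_flag = {"force": False}
        # )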
yield self.unit_state.clear_upgrade_flag() new_id = yield self.service_state.get_charm_id() old_id = yield self.unit_state.get_charm_id() if new_id == old_id: log.info("Upgrade ignored: already running latest charm") return with (yield self.workflow.lock()): state = yield self.workflow.get_state() if state != "started": if upgrade_flag["force"]: yield self.lifecycle.upgrade_charm( fire_hooks=False, force=True) log.info("Forced upgrade complete") return log.warning( "Cannot upgrade: unit is in non-started state %s. Reissue " "upgrade command to try again.", state) return log.info("Starting upgrade") if (yield self.workflow.fire_transition("upgrade_charm")): log.info("Upgrade complete") else: log.info("Upgrade failed") @inlineCallbacks def cb_watch_config_changed(self, change): """Trigger hook on configuration change""" # Verify it is running with (yield self.workflow.lock()): current_state = yield self.workflow.get_state() log.debug("Configuration Changed") if current_state != "started": log.debug( "Configuration updated on service in a non-started state") returnValue(None) yield self.workflow.fire_transition("configure") if __name__ == '__main__': UnitAgent.run() juju-0.7.orig/juju/agents/tests/__init__.py0000644000000000000000000000000212135220114017112 0ustar 00000000000000# juju-0.7.orig/juju/agents/tests/common.py0000644000000000000000000000323712135220114016660 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks, succeed from txzookeeper.tests.utils import deleteTree from juju.agents.base import TwistedOptionNamespace from juju.state.tests.common import StateTestBase from juju.tests.common import get_test_zookeeper_address class AgentTestBase(StateTestBase): agent_class = None juju_directory = None setup_environment = True @inlineCallbacks def setUp(self): self.juju_directory = self.makeDir() yield super(AgentTestBase, self).setUp() assert self.agent_class, "Agent Class must be specified on test" if self.setup_environment: yield self.push_default_config() self.agent = self.agent_class() self.options = yield self.get_agent_config() self.agent.configure(self.options) self.agent.set_watch_enabled(False) def tearDown(self): if self.agent.client and self.agent.client.connected: self.agent.client.close() if self.client.connected: deleteTree("/", self.client.handle) self.client.close() def get_agent_config(self): options = TwistedOptionNamespace() options["juju_directory"] = self.juju_directory options["zookeeper_servers"] = get_test_zookeeper_address() options["session_file"] = self.makeFile() return succeed(options) @inlineCallbacks def debug_pprint_tree(self, path="/", indent=1): children = yield self.client.get_children(path) for n in children: print " " * indent, "/" + n yield self.debug_pprint_tree( os.path.join(path, n), indent + 1) juju-0.7.orig/juju/agents/tests/test_base.py0000644000000000000000000005445312135220114017347 0ustar 00000000000000import argparse import json import logging import os import stat import sys import yaml from twisted.application.app import AppLogger from twisted.application.service import IService, IServiceCollection from twisted.internet.defer import ( fail, succeed, Deferred, inlineCallbacks, returnValue) from twisted.python.components import Componentized from twisted.python import log import zookeeper from txzookeeper import ZookeeperClient from juju.lib.testing import TestCase from juju.lib.mocker import MATCH from juju.tests.common import get_test_zookeeper_address from juju.agents.base import ( BaseAgent, 
TwistedOptionNamespace, AgentRunner, AgentLogger) from juju.agents.dummy import DummyAgent from juju.errors import NoConnection, JujuError from juju.lib.zklog import ZookeeperHandler from juju.agents.tests.common import AgentTestBase MATCH_APP = MATCH(lambda x: isinstance(x, Componentized)) MATCH_HANDLER = MATCH(lambda x: isinstance(x, ZookeeperHandler)) class BaseAgentTest(TestCase): @inlineCallbacks def setUp(self): yield super(BaseAgentTest, self).setUp() self.juju_home = self.makeDir() self.change_environment(JUJU_HOME=self.juju_home) def test_as_app(self): """The agent class can be accessed as an application.""" app = BaseAgent().as_app() multi_service = IService(app, None) self.assertTrue(IServiceCollection.providedBy(multi_service)) services = list(multi_service) self.assertEqual(len(services), 1) def test_twistd_default_options(self): """The agent cli parsing, populates standard twistd options.""" parser = argparse.ArgumentParser() BaseAgent.setup_options(parser) # Daemon group self.assertEqual( parser.get_default("logfile"), "%s.log" % BaseAgent.name) self.assertEqual(parser.get_default("pidfile"), "") self.assertEqual(parser.get_default("loglevel"), "DEBUG") self.assertFalse(parser.get_default("nodaemon")) self.assertEqual(parser.get_default("rundir"), ".") self.assertEqual(parser.get_default("chroot"), None) self.assertEqual(parser.get_default("umask"), '0022') self.assertEqual(parser.get_default("uid"), None) self.assertEqual(parser.get_default("gid"), None) self.assertEqual(parser.get_default("euid"), None) self.assertEqual(parser.get_default("prefix"), BaseAgent.name) self.assertEqual(parser.get_default("syslog"), False) # Development Group self.assertFalse(parser.get_default("debug")) self.assertFalse(parser.get_default("profile")) self.assertFalse(parser.get_default("savestats")) self.assertEqual(parser.get_default("profiler"), "cprofile") # Hidden defaults self.assertEqual(parser.get_default("reactor"), "epoll") self.assertEqual(parser.get_default("originalname"), None) # Agent options self.assertEqual(parser.get_default("principals"), []) self.assertEqual(parser.get_default("zookeeper_servers"), "") self.assertEqual(parser.get_default("juju_directory"), self.juju_home) self.assertEqual(parser.get_default("session_file"), None) def test_twistd_flags_correspond(self): parser = argparse.ArgumentParser() BaseAgent.setup_options(parser) args = [ "--profile", "--savestats", "--nodaemon"] options = parser.parse_args(args, namespace=TwistedOptionNamespace()) self.assertEqual(options.get("savestats"), True) self.assertEqual(options.get("nodaemon"), True) self.assertEqual(options.get("profile"), True) def test_agent_logger(self): parser = argparse.ArgumentParser() BaseAgent.setup_options(parser) log_file_path = self.makeFile() options = parser.parse_args( ["--logfile", log_file_path, "--session-file", self.makeFile()], namespace=TwistedOptionNamespace()) def match_observer(observer): return isinstance(observer.im_self, log.PythonLoggingObserver) def cleanup(observer): # post test cleanup of global state. 
log.removeObserver(observer) logging.getLogger().handlers = [] original_log_with_observer = log.startLoggingWithObserver def _start_log_with_observer(observer): self.addCleanup(cleanup, observer) # by default logging will replace stdout/stderr return original_log_with_observer(observer, 0) app = self.mocker.mock() app.getComponent(log.ILogObserver, None) self.mocker.result(None) start_log_with_observer = self.mocker.replace( log.startLoggingWithObserver) start_log_with_observer(MATCH(match_observer)) self.mocker.call(_start_log_with_observer) self.mocker.replay() agent_logger = AgentLogger(options) agent_logger.start(app) # We suppress twisted messages below the error level. output = open(log_file_path).read() self.assertFalse(output) # also verify we didn't mess with the app logging. app_log = logging.getLogger() app_log.info("Good") # and that twisted errors still go through. log.err("Something bad happened") output = open(log_file_path).read() self.assertIn("Good", output) self.assertIn("Something bad happened", output) def test_custom_log_level(self): parser = argparse.ArgumentParser() BaseAgent.setup_options(parser) options = parser.parse_args( ["--loglevel", "INFO"], namespace=TwistedOptionNamespace()) self.assertEqual(options.loglevel, "INFO") def test_twistd_option_namespace(self): """ The twisted option namespace bridges argparse attribute access, to twisted dictionary access for cli options. """ options = TwistedOptionNamespace() options.x = 1 self.assertEqual(options['x'], 1) self.assertEqual(options.get('x'), 1) self.assertEqual(options.get('y'), None) self.assertRaises(KeyError, options.__getitem__, 'y') options['y'] = 2 self.assertEqual(options.y, 2) self.assertTrue(options.has_key('y')) self.assertFalse(options.has_key('z')) def test_runner_attribute_application(self): """The agent runner retrieve the application as an attribute.""" runner = AgentRunner({}) self.assertEqual(runner.createOrGetApplication(), None) runner.application = 21 self.assertEqual(runner.createOrGetApplication(), 21) def test_run(self): """Invokes the run class method on an agent. This will create an agent instance, parse the cli args, passes them to the agent, and starts the agent runner. """ self.change_args( "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), "--session-file", self.makeFile()) runner = self.mocker.patch(AgentRunner) runner.run() mock_agent = self.mocker.patch(BaseAgent) def match_args(config): self.assertEqual(config["zookeeper_servers"], get_test_zookeeper_address()) return True mock_agent.configure(MATCH(match_args)) self.mocker.passthrough() self.mocker.replay() BaseAgent.run() def test_full_run(self): """Verify a functional agent start via the 'run' method. This test requires Zookeeper running on the default port of localhost. The mocked portions are to prevent the daemon start from altering the test environment (sys.stdout/sys.stderr, and reactor start). 
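
        The flow exercised is roughly (a sketch of the calls made by
        BaseAgent.run above):

            config = parser.parse_args(namespace=TwistedOptionNamespace())
            agent.configure(config)
            runner.application = agent.as_app()
            runner.run()  # daemonizes and starts the reactor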
""" zookeeper.set_debug_level(0) started = Deferred() class DummyAgent(BaseAgent): started = False def start(self): started.callback(self) def validate_started(agent): self.assertTrue(agent.client.connected) started.addCallback(validate_started) self.change_args( "es-agent", "--nodaemon", "--zookeeper-servers", get_test_zookeeper_address(), "--session-file", self.makeFile()) runner = self.mocker.patch(AgentRunner) logger = self.mocker.patch(AppLogger) logger.start(MATCH_APP) runner.startReactor(None, sys.stdout, sys.stderr) logger.stop() self.mocker.replay() DummyAgent.run() return started @inlineCallbacks def test_stop_service_stub_closes_agent(self): """The base class agent, stopService will the stop method. Additionally it will close the agent's zookeeper client if the client is still connected. """ mock_agent = self.mocker.patch(BaseAgent) mock_client = self.mocker.mock(ZookeeperClient) session_file = self.makeFile() # connection is closed after agent.stop invoked. with self.mocker.order(): mock_agent.stop() self.mocker.passthrough() # client existence check mock_agent.client self.mocker.result(mock_client) # client connected check mock_agent.client self.mocker.result(mock_client) mock_client.connected self.mocker.result(True) # client close mock_agent.client self.mocker.result(mock_client) mock_client.close() # delete session file mock_agent.config self.mocker.result({"session_file": session_file}) self.mocker.replay() agent = BaseAgent() yield agent.stopService() self.assertFalse(os.path.exists(session_file)) @inlineCallbacks def test_stop_service_stub_ignores_disconnected_agent(self): """The base class agent, stopService will the stop method. If the client is not connected then no attempt is made to close it. """ mock_agent = self.mocker.patch(BaseAgent) mock_client = self.mocker.mock(ZookeeperClient) session_file = self.makeFile() # connection is closed after agent.stop invoked. 
with self.mocker.order(): mock_agent.stop() # client existence check mock_agent.client self.mocker.result(mock_client) # client connected check mock_agent.client self.mocker.result(mock_client) mock_client.connected self.mocker.result(False) mock_agent.config self.mocker.result({"session_file": session_file}) self.mocker.replay() agent = BaseAgent() yield agent.stopService() self.assertFalse(os.path.exists(session_file)) def test_run_base_raises_error(self): """The base class agent, raises a notimplemented error when started.""" client = self.mocker.patch(ZookeeperClient) client.connect(get_test_zookeeper_address()) client_mock = self.mocker.mock() self.mocker.result(succeed(client_mock)) client_mock.client_id self.mocker.result((123, "abc")) self.mocker.replay() agent = BaseAgent() agent.set_watch_enabled(False) agent.configure({ "zookeeper_servers": get_test_zookeeper_address(), "juju_directory": self.makeDir(), "session_file": self.makeFile()}) d = agent.startService() self.failUnlessFailure(d, NotImplementedError) return d def test_connect_cli_option(self): """The zookeeper server can be passed via cli argument.""" mock_client = self.mocker.mock() client = self.mocker.patch(ZookeeperClient) client.connect("x2.example.com") self.mocker.result(succeed(mock_client)) mock_client.client_id self.mocker.result((123, "abc")) self.mocker.replay() agent = BaseAgent() agent.configure({"zookeeper_servers": "x2.example.com", "juju_directory": self.makeDir(), "session_file": self.makeFile()}) result = agent.connect() self.assertEqual(result.result, mock_client) self.assertEqual(agent.client, mock_client) def test_nonexistent_directory(self): """If the juju directory does not exist an error should be raised. """ juju_directory = self.makeDir() os.rmdir(juju_directory) data = {"zookeeper_servers": get_test_zookeeper_address(), "juju_directory": juju_directory, "session_file": self.makeFile()} self.assertRaises(JujuError, BaseAgent().configure, data) def test_bad_session_file(self): """If the session file cannot be created an error should be raised. 
""" data = {"zookeeper_servers": get_test_zookeeper_address(), "juju_directory": self.makeDir(), "session_file": None} self.assertRaises(JujuError, BaseAgent().configure, data) def test_directory_cli_option(self): """The juju directory can be configured on the cli.""" juju_directory = self.makeDir() self.change_args( "es-agent", "--zookeeper-servers", get_test_zookeeper_address(), "--juju-directory", juju_directory, "--session-file", self.makeFile()) agent = BaseAgent() parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args(namespace=TwistedOptionNamespace()) agent.configure(options) self.assertEqual( agent.config["juju_directory"], juju_directory) def test_directory_env(self): """The juju directory passed via environment.""" self.change_args("es-agent") juju_directory = self.makeDir() self.change_environment( JUJU_HOME=juju_directory, JUJU_ZOOKEEPER=get_test_zookeeper_address()) agent = BaseAgent() parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args( ["--session-file", self.makeFile()], namespace=TwistedOptionNamespace()) agent.configure(options) self.assertEqual( agent.config["juju_directory"], juju_directory) def test_connect_env(self): """Zookeeper connection information can be passed via environment.""" self.change_args("es-agent") self.change_environment( JUJU_HOME=self.makeDir(), JUJU_ZOOKEEPER="x1.example.com", JUJU_PRINCIPALS="admin:abc agent:xyz") client = self.mocker.patch(ZookeeperClient) client.connect("x1.example.com") self.mocker.result(succeed(client)) client.client_id self.mocker.result((123, "abc")) client.add_auth("digest", "admin:abc") client.add_auth("digest", "agent:xyz") client.exists("/") self.mocker.replay() agent = BaseAgent() agent.set_watch_enabled(False) parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args( ["--session-file", self.makeFile()], namespace=TwistedOptionNamespace()) agent.configure(options) d = agent.startService() self.failUnlessFailure(d, NotImplementedError) return d def test_connect_closes_running_session(self): self.change_args("es-agent") self.change_environment( JUJU_HOME=self.makeDir(), JUJU_ZOOKEEPER="x1.example.com") session_file = self.makeFile() with open(session_file, "w") as f: f.write(yaml.dump((123, "abc"))) mock_client_1 = self.mocker.mock() client = self.mocker.patch(ZookeeperClient) client.connect("x1.example.com", client_id=(123, "abc")) self.mocker.result(succeed(mock_client_1)) mock_client_1.close() self.mocker.result(None) mock_client_2 = self.mocker.mock() client.connect("x1.example.com") self.mocker.result(succeed(mock_client_2)) mock_client_2.client_id self.mocker.result((456, "def")) self.mocker.replay() agent = BaseAgent() agent.set_watch_enabled(False) parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args( ["--session-file", session_file], namespace=TwistedOptionNamespace()) agent.configure(options) d = agent.startService() self.failUnlessFailure(d, NotImplementedError) return d def test_connect_handles_expired_session(self): self.change_args("es-agent") self.change_environment( JUJU_HOME=self.makeDir(), JUJU_ZOOKEEPER="x1.example.com") session_file = self.makeFile() with open(session_file, "w") as f: f.write(yaml.dump((123, "abc"))) client = self.mocker.patch(ZookeeperClient) client.connect("x1.example.com", client_id=(123, "abc")) self.mocker.result(fail(zookeeper.SessionExpiredException())) mock_client = self.mocker.mock() client.connect("x1.example.com") 
self.mocker.result(succeed(mock_client)) mock_client.client_id self.mocker.result((456, "def")) self.mocker.replay() agent = BaseAgent() agent.set_watch_enabled(False) parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args( ["--session-file", session_file], namespace=TwistedOptionNamespace()) agent.configure(options) d = agent.startService() self.failUnlessFailure(d, NotImplementedError) return d def test_connect_handles_nonsense_session(self): self.change_args("es-agent") self.change_environment( JUJU_HOME=self.makeDir(), JUJU_ZOOKEEPER="x1.example.com") session_file = self.makeFile() with open(session_file, "w") as f: f.write(yaml.dump("cheesy wotsits")) client = self.mocker.patch(ZookeeperClient) client.connect("x1.example.com", client_id="cheesy wotsits") self.mocker.result(fail(zookeeper.ZooKeeperException())) mock_client = self.mocker.mock() client.connect("x1.example.com") self.mocker.result(succeed(mock_client)) mock_client.client_id self.mocker.result((456, "def")) self.mocker.replay() agent = BaseAgent() agent.set_watch_enabled(False) parser = argparse.ArgumentParser() agent.setup_options(parser) options = parser.parse_args( ["--session-file", session_file], namespace=TwistedOptionNamespace()) agent.configure(options) d = agent.startService() self.failUnlessFailure(d, NotImplementedError) return d def test_zookeeper_hosts_not_configured(self): """a NoConnection error is raised if no zookeeper host is specified.""" agent = BaseAgent() self.assertRaises( NoConnection, agent.configure, {"zookeeper_servers": None}) def test_watch_enabled_accessors(self): agent = BaseAgent() self.assertTrue(agent.get_watch_enabled()) agent.set_watch_enabled(False) self.assertFalse(agent.get_watch_enabled()) @inlineCallbacks def test_session_file_permissions(self): session_file = self.makeFile() agent = DummyAgent() agent.configure({ "session_file": session_file, "juju_directory": self.makeDir(), "zookeeper_servers": get_test_zookeeper_address()}) yield agent.startService() mode = os.stat(session_file).st_mode mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO self.assertEquals(mode & mask, stat.S_IRUSR | stat.S_IWUSR) yield agent.stopService() self.assertFalse(os.path.exists(session_file)) class AgentDebugLogSettingsWatch(AgentTestBase): agent_class = BaseAgent @inlineCallbacks def get_log_entry(self, number, wait=True): entry_path = "/logs/log-%010d" % number exists_d, watch_d = self.client.exists_and_watch(entry_path) exists = yield exists_d if not exists and wait: yield watch_d elif not exists: returnValue(False) data, stat = yield self.client.get(entry_path) returnValue(json.loads(data)) def test_get_agent_name(self): self.assertEqual(self.agent.get_agent_name(), "BaseAgent") @inlineCallbacks def test_runtime_watching_toggles_log(self): """Redundant changes with regard to the current configuration are ignored.""" yield self.agent.connect() root_log = logging.getLogger() mock_log = self.mocker.replace(root_log) mock_log.addHandler(MATCH_HANDLER) self.mocker.result(True) mock_log.removeHandler(MATCH_HANDLER) self.mocker.result(True) mock_log.addHandler(MATCH_HANDLER) self.mocker.result(True) self.mocker.replay() yield self.agent.start_global_settings_watch() yield self.agent.global_settings_state.set_debug_log(True) yield self.agent.global_settings_state.set_debug_log(True) yield self.agent.global_settings_state.set_debug_log(False) yield self.agent.global_settings_state.set_debug_log(False) yield self.agent.global_settings_state.set_debug_log(True) # Give a moment 
for watches to fire. yield self.sleep(0.1) @inlineCallbacks def test_log_enable_disable(self): """The log can be enabled and disabled.""" root_log = logging.getLogger() root_log.setLevel(logging.DEBUG) self.capture_logging(None, level=logging.DEBUG) yield self.agent.connect() self.assertFalse((yield self.client.exists("/logs"))) yield self.agent.start_debug_log() root_log.debug("hello world") yield self.agent.stop_debug_log() root_log.info("goodbye") root_log.info("world") entry = yield self.get_log_entry(0) self.assertTrue(entry) self.assertEqual(entry["levelname"], "DEBUG") entry = yield self.get_log_entry(1, wait=False) self.assertFalse(entry) # Else zookeeper is closing on occassion in teardown yield self.sleep(0.1) juju-0.7.orig/juju/agents/tests/test_dummy.py0000644000000000000000000000044212135220114017555 0ustar 00000000000000from juju.lib.testing import TestCase from juju.agents.dummy import DummyAgent class DummyTestCase(TestCase): def test_start_dummy(self): """ Does nothing. """ agent = DummyAgent() result = agent.start() self.assertEqual(result, None) juju-0.7.orig/juju/agents/tests/test_machine.py0000644000000000000000000002165612135220114020040 0ustar 00000000000000import argparse import logging import os from twisted.internet.defer import ( inlineCallbacks, returnValue, fail, Deferred) from juju.agents.base import TwistedOptionNamespace from juju.agents.machine import MachineAgent from juju.errors import JujuError from juju.charm.bundle import CharmBundle from juju.charm.directory import CharmDirectory from juju.charm.publisher import CharmPublisher from juju.charm.tests import local_charm_id from juju.charm.tests.test_repository import RepositoryTestBase from juju.lib.mocker import MATCH from juju.machine.tests.test_constraints import ( dummy_constraints, series_constraints) from juju.state.machine import MachineStateManager, MachineState from juju.state.service import ServiceStateManager from juju.tests.common import get_test_zookeeper_address from .common import AgentTestBase MATCH_BUNDLE = MATCH(lambda x: isinstance(x, CharmBundle)) class MachineAgentTest(AgentTestBase, RepositoryTestBase): agent_class = MachineAgent @inlineCallbacks def setUp(self): yield super(MachineAgentTest, self).setUp() self.output = self.capture_logging(level=logging.DEBUG) environment = self.config.get_default() # Load the environment with the charm state and charm binary self.provider = environment.get_machine_provider() self.storage = self.provider.get_file_storage() self.charm = CharmDirectory(self.sample_dir1) self.publisher = CharmPublisher(self.client, self.storage) yield self.publisher.add_charm(local_charm_id(self.charm), self.charm) charm_states = yield self.publisher.publish() self.charm_state = charm_states[0] # Create a service from the charm from which we can create units for # the machine. 
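        # (Unit names follow the "<service>/<index>" convention, so the
        # first unit added to this service will be "fatality-blog/0", as
        # the watch tests below assert.)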
self.service_state_manager = ServiceStateManager(self.client) self.service = yield self.service_state_manager.add_service_state( "fatality-blog", self.charm_state, dummy_constraints) @inlineCallbacks def get_agent_config(self): # gets invoked by AgentTestBase.setUp options = yield super(MachineAgentTest, self).get_agent_config() machine_state_manager = MachineStateManager(self.client) self.machine_state = yield machine_state_manager.add_machine_state( series_constraints) self.change_environment( JUJU_MACHINE_ID="0", JUJU_HOME=self.juju_directory) options["machine_id"] = str(self.machine_state.id) # Start the agent with watching enabled returnValue(options) @inlineCallbacks def test_start_begins_watch_and_initializes_directories(self): self.agent.set_watch_enabled(True) mock_machine_state = self.mocker.patch(MachineState) mock_machine_state.watch_assigned_units( self.agent.watch_service_units) self.mocker.replay() yield self.agent.startService() self.assertTrue(os.path.isdir(self.agent.units_directory)) self.assertTrue(os.path.isdir(self.agent.unit_state_directory)) self.assertIn( "Machine agent started id:%s" % self.agent.get_machine_id(), self.output.getvalue()) yield self.agent.stopService() def test_agent_machine_id_environment_extraction(self): self.change_args("es-agent") parser = argparse.ArgumentParser() self.agent.setup_options(parser) config = parser.parse_args(namespace=TwistedOptionNamespace()) self.assertEqual( config["machine_id"], "0") def test_get_agent_name(self): self.assertEqual(self.agent.get_agent_name(), "Machine:0") def test_agent_machine_id_cli_error(self): """ If the machine id can't be found, a detailed error message is given. """ # initially setup by get_agent_config in setUp self.change_environment(JUJU_MACHINE_ID="") self.change_args("es-agent", "--zookeeper-servers", get_test_zookeeper_address(), "--juju-directory", self.makeDir(), "--session-file", self.makeFile()) parser = argparse.ArgumentParser() self.agent.setup_options(parser) options = parser.parse_args(namespace=TwistedOptionNamespace()) e = self.assertRaises( JujuError, self.agent.configure, options) self.assertIn( ("--machine-id must be provided in the command line," " or $JUJU_MACHINE_ID in the environment"), str(e)) def test_agent_machine_id_cli_extraction(self): """Command line passing of machine id works and has precedence over environment arg passing.""" self.change_environment(JUJU_MACHINE_ID=str(21)) self.change_args("es-agent", "--machine-id", "0") parser = argparse.ArgumentParser() self.agent.setup_options(parser) config = parser.parse_args(namespace=TwistedOptionNamespace()) self.assertEqual( config["machine_id"], "0") def test_machine_agent_knows_its_machine_id(self): self.assertEqual(self.agent.get_machine_id(), "0") @inlineCallbacks def test_watch_new_service_unit(self): """ Adding a new service unit is detected by the watch. 
""" from juju.unit.deploy import UnitDeployer mock_deployer = self.mocker.patch(UnitDeployer) mock_deployer.start_service_unit("fatality-blog/0") test_deferred = Deferred() def test_complete(service_name): test_deferred.callback(True) self.mocker.call(test_complete) self.mocker.replay() self.agent.set_watch_enabled(True) yield self.agent.startService() # Create a new service unit self.service_unit = yield self.service.add_unit_state() yield self.service_unit.assign_to_machine(self.machine_state) yield test_deferred self.assertIn( "Units changed old:set([]) new:set(['fatality-blog/0'])", self.output.getvalue()) def test_watch_new_service_unit_error(self): """ An error while starting a new service is logged """ # Inject an error into the service deployment from juju.unit.deploy import UnitDeployer mock_deployer = self.mocker.patch(UnitDeployer) mock_deployer.start_service_unit("fatality-blog/0") self.mocker.result(fail(SyntaxError("Bad"))) self.mocker.replay() yield self.agent.startService() yield self.agent.watch_service_units(None, set(["fatality-blog/0"])) self.assertIn("Starting service unit: %s" % "fatality-blog/0", self.output.getvalue()) self.assertIn("Error starting unit: %s" % "fatality-blog/0", self.output.getvalue()) self.assertIn("SyntaxError: Bad", self.output.getvalue()) @inlineCallbacks def test_service_unit_removed(self): """ Service unit removed with manual invocation of watch_service_units. """ from juju.unit.deploy import UnitDeployer mock_deployer = self.mocker.patch(UnitDeployer) started = Deferred() mock_deployer.start_service_unit("fatality-blog/0") self.mocker.call(started.callback) stopped = Deferred() mock_deployer.kill_service_unit("fatality-blog/0") self.mocker.call(stopped.callback) self.mocker.replay() # Start the agent with watching enabled self.agent.set_watch_enabled(True) yield self.agent.startService() # Create a new service unit self.service_unit = yield self.service.add_unit_state() yield self.service_unit.assign_to_machine(self.machine_state) # Need to ensure no there's no concurrency creating an overlap # between assigning, unassigning to machine, since it is # possible then for the watch in the machine agent to not # observe *any* change in this case ("you cannot reliably see # every change that happens to a node in ZooKeeper") yield started # And now remove it yield self.service_unit.unassign_from_machine() yield stopped @inlineCallbacks def test_watch_removed_service_unit_error(self): """ An error while removing a service unit is logged """ from juju.unit.deploy import UnitDeployer mock_deployer = self.mocker.patch(UnitDeployer) mock_deployer.kill_service_unit("fatality-blog/0") self.mocker.result(fail(OSError("Bad"))) self.mocker.replay() yield self.agent.startService() yield self.agent.watch_service_units(set(["fatality-blog/0"]), set()) self.assertIn("Stopping service unit: %s" % "fatality-blog/0", self.output.getvalue()) self.assertIn("Error stopping unit: %s" % "fatality-blog/0", self.output.getvalue()) self.assertIn("OSError: Bad", self.output.getvalue()) juju-0.7.orig/juju/agents/tests/test_provision.py0000644000000000000000000004564112135220114020464 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks, fail, succeed from twisted.internet import reactor from juju.agents.provision import ProvisioningAgent from juju.environment.environment import Environment from juju.environment.errors import EnvironmentsConfigError from juju.machine.tests.test_constraints import dummy_cs, series_constraints from juju.errors 
import ProviderInteractionError from juju.lib.mocker import MATCH from juju.providers.dummy import DummyMachine from juju.state.errors import StopWatcher from juju.state.machine import MachineState, MachineStateManager from juju.state.tests.test_service import ServiceStateManagerTestBase from .common import AgentTestBase MATCH_MACHINE = MATCH(lambda x: isinstance(x, DummyMachine)) MATCH_MACHINE_STATE = MATCH(lambda x: isinstance(x, MachineState)) MATCH_SET = MATCH(lambda x: isinstance(x, set)) class ProvisioningTestBase(AgentTestBase): agent_class = ProvisioningAgent @inlineCallbacks def setUp(self): yield super(ProvisioningTestBase, self).setUp() self.machine_manager = MachineStateManager(self.client) def add_machine_state(self, constraints=None): return self.machine_manager.add_machine_state( constraints or series_constraints) class ProvisioningAgentStartupTest(ProvisioningTestBase): setup_environment = False @inlineCallbacks def setUp(self): yield super(ProvisioningAgentStartupTest, self).setUp() yield self.agent.connect() @inlineCallbacks def test_agent_waits_for_environment(self): """ When the agent starts it waits for the /environment node to exist. As soon as it does, the agent will fetch the environment, and deserialize it into an environment object. """ env_loaded_deferred = self.agent.configure_environment() reactor.callLater( 0.3, self.push_default_config, with_constraints=False) result = yield env_loaded_deferred self.assertTrue(isinstance(result, Environment)) self.assertEqual(result.name, "firstenv") @inlineCallbacks def test_agent_with_existing_environment(self): """An agent should load an existing environment to configure itself.""" yield self.push_default_config() def verify_environment(result): self.assertTrue(isinstance(result, Environment)) self.assertEqual(result.name, "firstenv") d = self.agent.configure_environment() d.addCallback(verify_environment) yield d @inlineCallbacks def test_agent_with_invalid_environment(self): yield self.client.create("/environment", "WAHOO!") d = self.agent.configure_environment() yield self.assertFailure(d, EnvironmentsConfigError) def test_agent_with_nonexistent_environment_created_concurrently(self): """ If the environment node does not initially exist but it is created while the agent is processing the NoNodeException, it should detect this and configure normally. 
""" exists_and_watch = self.agent.client.exists_and_watch mock_client = self.mocker.patch(self.agent.client) mock_client.exists_and_watch("/environment") def inject_creation(path): self.push_default_config(with_constraints=False) return exists_and_watch(path) self.mocker.call(inject_creation) self.mocker.replay() def verify_configured(result): self.assertTrue(isinstance(result, Environment)) self.assertEqual(result.type, "dummy") # mocker magic test d = self.agent.configure_environment() d.addCallback(verify_configured) return d class ProvisioningAgentTest(ProvisioningTestBase): @inlineCallbacks def setUp(self): yield super(ProvisioningAgentTest, self).setUp() self.agent.set_watch_enabled(False) yield self.agent.startService() self.output = self.capture_logging("juju.agents.provision", logging.DEBUG) def test_get_agent_name(self): self.assertEqual(self.agent.get_agent_name(), "provision:dummy") @inlineCallbacks def test_watch_machine_changes_processes_new_machine_id(self): """The agent should process a new machine id by creating it""" machine_state0 = yield self.add_machine_state() machine_state1 = yield self.add_machine_state() yield self.agent.watch_machine_changes( None, [machine_state0.id, machine_state1.id]) self.assertIn( "Machines changed old:None new:[0, 1]", self.output.getvalue()) self.assertIn("Starting machine id:0", self.output.getvalue()) machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 2) instance_id = yield machine_state0.get_instance_id() self.assertEqual(instance_id, 0) instance_id = yield machine_state1.get_instance_id() self.assertEqual(instance_id, 1) @inlineCallbacks def test_watch_machine_changes_ignores_running_machine(self): """ If there is an existing machine instance and state, when a new machine state is added, the existing instance is preserved, and a new instance is created. """ machine_state0 = yield self.add_machine_state() machines = yield self.agent.provider.start_machine( {"machine-id": machine_state0.id}) machine = machines.pop() yield machine_state0.set_instance_id(machine.instance_id) machine_state1 = yield self.add_machine_state() machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 1) yield self.agent.watch_machine_changes( None, [machine_state0.id, machine_state1.id]) machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 2) instance_id = yield machine_state1.get_instance_id() self.assertEqual(instance_id, 1) @inlineCallbacks def test_watch_machine_changes_terminates_unused(self): """ Any running provider machine instances without corresponding machine states are terminated. """ # start an unused machine within the dummy provider instance yield self.agent.provider.start_machine({"machine-id": "machine-1"}) yield self.agent.watch_machine_changes(None, []) self.assertIn("Shutting down machine id:0", self.output.getvalue()) machines = yield self.agent.provider.get_machines() self.assertFalse(machines) @inlineCallbacks def test_watch_machine_changes_stop_watches(self): """Verify that the watches stops once the agent stops.""" yield self.agent.start() yield self.agent.stop() yield self.assertFailure( self.agent.watch_machine_changes(None, []), StopWatcher) @inlineCallbacks def test_new_machine_state_removed_while_processing(self): """ If the machine state is removed while the event is processing the state, the watch function should process it normally. 
""" yield self.agent.watch_machine_changes( None, [0]) machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 0) @inlineCallbacks def test_process_machines_non_concurrency(self): """ Process machines should only be executed serially by an agent. """ machine_state0 = yield self.add_machine_state() machine_state1 = yield self.add_machine_state() call_1 = self.agent.process_machines([machine_state0.id]) # The second call should return immediately due to the # instance attribute guard. call_2 = self.agent.process_machines([machine_state1.id]) self.assertEqual(call_2.called, True) self.assertEqual(call_2.result, False) # The first call should have started a provider machine yield call_1 machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 1) instance_id_0 = yield machine_state0.get_instance_id() self.assertEqual(instance_id_0, 0) instance_id_1 = yield machine_state1.get_instance_id() self.assertEqual(instance_id_1, None) def test_new_machine_state_removed_while_processing_get_provider_id(self): """ If the machine state is removed while the event is processing the state, the watch function should process it normally. """ yield self.agent.watch_machine_changes( None, [0]) machines = yield self.agent.provider.get_machines() self.assertEquals(len(machines), 0) @inlineCallbacks def test_on_environment_change_agent_reconfigures(self): """ If the environment changes the agent reconfigures itself """ provider = self.agent.provider yield self.push_default_config() yield self.sleep(0.2) self.assertNotIdentical(provider, self.agent.provider) @inlineCallbacks def test_machine_state_reflects_invalid_provider_state(self): """ If a machine state has an invalid instance_id, it should be detected, and a new machine started and the machine state updated with the new instance_id. """ m1 = yield self.add_machine_state() yield m1.set_instance_id("zebra") m2 = yield self.add_machine_state() yield self.agent.watch_machine_changes(None, [m1.id, m2.id]) m1_instance_id = yield m1.get_instance_id() self.assertEqual(m1_instance_id, 0) m2_instance_id = yield m2.get_instance_id() self.assertEqual(m2_instance_id, 1) def test_periodic_task(self): """ The agent schedules period checks that execute the process machines call. """ mock_reactor = self.mocker.patch(reactor) mock_reactor.callLater(self.agent.machine_check_period, self.agent.periodic_machine_check) mock_agent = self.mocker.patch(self.agent) mock_agent.process_machines(()) self.mocker.result(succeed(None)) self.mocker.replay() # mocker magic test self.agent.periodic_machine_check() @inlineCallbacks def test_transient_provider_error_on_start_machine(self): """ If there's an error when processing changes, the agent should log the error and continue. 
""" machine_state0 = yield self.add_machine_state( dummy_cs.parse(["cpu=10"]).with_series("series")) machine_state1 = yield self.add_machine_state( dummy_cs.parse(["cpu=20"]).with_series("series")) mock_provider = self.mocker.patch(self.agent.provider) mock_provider.start_machine({ "machine-id": 0, "constraints": { "arch": "amd64", "cpu": 10, "mem": 512, "provider-type": "dummy", "ubuntu-series": "series"}}) self.mocker.result(fail(ProviderInteractionError())) mock_provider.start_machine({ "machine-id": 1, "constraints": { "arch": "amd64", "cpu": 20, "mem": 512, "provider-type": "dummy", "ubuntu-series": "series"}}) self.mocker.passthrough() self.mocker.replay() yield self.agent.watch_machine_changes( [], [machine_state0.id, machine_state1.id]) machine1_instance_id = yield machine_state1.get_instance_id() self.assertEqual(machine1_instance_id, 0) self.assertIn( "Cannot process machine 0", self.output.getvalue()) @inlineCallbacks def test_transient_provider_error_on_shutdown_machine(self): """ A transient provider error on shutdown will be ignored and the shutdown will be reattempted (assuming similiar state conditions) on the next execution of process machines. """ yield self.agent.provider.start_machine({"machine-id": 1}) mock_provider = self.mocker.patch(self.agent.provider) mock_provider.shutdown_machine(MATCH_MACHINE) self.mocker.result(fail(ProviderInteractionError())) mock_provider.shutdown_machine(MATCH_MACHINE) self.mocker.passthrough() self.mocker.replay() try: yield self.agent.process_machines([]) except: self.fail("Should not raise") machines = yield self.agent.provider.get_machines() self.assertTrue(machines) yield self.agent.process_machines([]) machines = yield self.agent.provider.get_machines() self.assertFalse(machines) self.assertIn( "Cannot shutdown machine 0", self.output.getvalue()) @inlineCallbacks def test_transient_provider_error_on_get_machines(self): machine_state0 = yield self.add_machine_state() mock_provider = self.mocker.patch(self.agent.provider) mock_provider.get_machines() self.mocker.result(fail(ProviderInteractionError())) mock_provider.get_machines() self.mocker.passthrough() self.mocker.replay() try: yield self.agent.process_machines([machine_state0.id]) except: self.fail("Should not raise") instance_id = yield machine_state0.get_instance_id() self.assertEqual(instance_id, None) yield self.agent.process_machines( [machine_state0.id]) instance_id = yield machine_state0.get_instance_id() self.assertEqual(instance_id, 0) self.assertIn( "Cannot get machine list", self.output.getvalue()) @inlineCallbacks def test_transient_unhandled_error_in_process_machines(self): """Verify that watch_machine_changes handles the exception. Provider implementations may use libraries like txaws that do not handle every error. However, this should not stop the watch from re-establishing itself, as will be the case if the exception is not caught. """ machine_state0 = yield self.add_machine_state() machine_state1 = yield self.add_machine_state() # Simulate a failure scenario seen occasionally when working # with OpenStack and txaws mock_agent = self.mocker.patch(self.agent) # Simulate transient error mock_agent.process_machines([machine_state0.id]) self.mocker.result(fail( TypeError("'NoneType' object is not iterable"))) # Let it succeed on second try. 
In this case, the scenario is # that the watch triggered before the periodic_machine_check # was run again mock_agent.process_machines([machine_state0.id, machine_state1.id]) self.mocker.passthrough() self.mocker.replay() # Verify that watch_machine_changes does not fail even in the case of # the transient error, although no work was done try: yield self.agent.watch_machine_changes([], [machine_state0.id]) except: self.fail("Should not raise") instance_id = yield machine_state0.get_instance_id() self.assertEqual(instance_id, None) # Second attempt, verifiy it did in fact process the machine yield self.agent.watch_machine_changes( [machine_state0.id], [machine_state0.id, machine_state1.id]) self.assertEqual((yield machine_state0.get_instance_id()), 0) self.assertEqual((yield machine_state1.get_instance_id()), 1) # But only after attempting and failing the first time self.assertIn( "Got unexpected exception in processing machines, will retry", self.output.getvalue()) self.assertIn( "'NoneType' object is not iterable", self.output.getvalue()) @inlineCallbacks def test_start_agent_with_watch(self): mock_reactor = self.mocker.patch(reactor) mock_reactor.callLater( self.agent.machine_check_period, self.agent.periodic_machine_check) self.mocker.replay() self.agent.set_watch_enabled(True) yield self.agent.start() machine_state0 = yield self.add_machine_state() exists_d, watch_d = self.client.exists_and_watch( "/machines/%s" % machine_state0.internal_id) yield exists_d # Wait for the provisioning agent to wake and modify # the machine id. yield watch_d instance_id = yield machine_state0.get_instance_id() self.assertEqual(instance_id, 0) class FirewallManagerTest( ProvisioningTestBase, ServiceStateManagerTestBase): @inlineCallbacks def setUp(self): yield super(FirewallManagerTest, self).setUp() self.agent.set_watch_enabled(False) yield self.agent.startService() @inlineCallbacks def test_watch_service_changes_is_called(self): """Verify FirewallManager is called when services change""" from juju.state.firewall import FirewallManager mock_manager = self.mocker.patch(FirewallManager) seen = [] def record_watch_changes(old_services, new_services): seen.append((old_services, new_services)) return succeed(True) mock_manager.watch_service_changes(MATCH_SET, MATCH_SET) self.mocker.count(3, 3) self.mocker.call(record_watch_changes) mock_reactor = self.mocker.patch(reactor) mock_reactor.callLater( self.agent.machine_check_period, self.agent.periodic_machine_check) self.mocker.replay() self.agent.set_watch_enabled(True) yield self.agent.start() # Modify services, while subsequently poking to ensure service # watch is processed on each modification yield self.add_service("wordpress") while len(seen) < 1: yield self.poke_zk() mysql = yield self.add_service("mysql") while len(seen) < 2: yield self.poke_zk() yield self.service_state_manager.remove_service_state(mysql) while len(seen) < 3: yield self.poke_zk() self.assertEqual( seen, [(set(), set(["wordpress"])), (set(["wordpress"]), set(["mysql", "wordpress"])), (set(["mysql", "wordpress"]), set(["wordpress"]))]) @inlineCallbacks def test_process_machine_is_called(self): """Verify FirewallManager is called when machines are processed""" from juju.state.firewall import FirewallManager mock_manager = self.mocker.patch(FirewallManager) seen = [] def record_machine(machine): seen.append(machine) return succeed(True) mock_manager.process_machine(MATCH_MACHINE_STATE) self.mocker.call(record_machine) self.mocker.replay() machine_state = yield self.add_machine_state() 
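        # The mocked FirewallManager.process_machine above records each
        # machine state it is handed; process_machines below should hand
        # it exactly one, as asserted next.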
yield self.agent.process_machines([machine_state.id]) self.assertEqual(seen, [machine_state]) juju-0.7.orig/juju/agents/tests/test_unit.py0000644000000000000000000007011712135220114017407 0ustar 00000000000000import argparse import logging import os from twisted.internet.defer import inlineCallbacks, returnValue from juju.agents.unit import UnitAgent from juju.agents.base import TwistedOptionNamespace from juju.charm import get_charm_from_path from juju.charm.url import CharmURL from juju.errors import JujuError from juju.hooks.executor import HookExecutor from juju.lib import serializer from juju.state.environment import GlobalSettingsStateManager from juju.state.errors import ServiceStateNotFound from juju.state.service import NO_HOOKS, RETRY_HOOKS from juju.unit.lifecycle import UnitLifecycle from juju.unit.workflow import UnitWorkflowState from juju.agents.tests.common import AgentTestBase from juju.control.tests.test_upgrade_charm import CharmUpgradeTestBase from juju.hooks.tests.test_invoker import get_cli_environ_path from juju.tests.common import get_test_zookeeper_address from juju.unit.tests.test_charm import CharmPublisherTestBase from juju.unit.tests.test_workflow import WorkflowTestBase class UnitAgentTestBase(AgentTestBase, WorkflowTestBase): agent_class = UnitAgent @inlineCallbacks def setUp(self): self.patch(HookExecutor, "LOCK_PATH", os.path.join(self.makeDir(), "hook.lock")) yield super(UnitAgentTestBase, self).setUp() settings = GlobalSettingsStateManager(self.client) yield settings.set_provider_type("dummy") self.change_environment( PATH=get_cli_environ_path(), JUJU_ENV_UUID="snowflake", JUJU_UNIT_NAME="mysql/0") @inlineCallbacks def tearDown(self): if self.agent.api_socket: yield self.agent.api_socket.stopListening() yield super(UnitAgentTestBase, self).tearDown() @inlineCallbacks def get_agent_config(self): yield self.setup_default_test_relation() options = yield super(UnitAgentTestBase, self).get_agent_config() options["unit_name"] = str(self.states["unit"].unit_name) returnValue(options) def write_empty_hooks(self, start=True, stop=True, install=True, **kw): # NB Tests that use this helper method must properly wait on # the agent being stopped (yield self.agent.stopService()) to # avoid the environment being restored while asynchronously # the stop hook continues to execute. Otherwise # JUJU_UNIT_NAME, which hook invocation depends on, will # not be available and the stop hook will fail (somewhat # mysteriously!). The alternative is to set stop=False so that # the stop hook will not be created when writing the empty # hooks. 
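        # For reference, every hook emitted here is a minimal two-line
        # shell script of the form (``start`` shown; <output_file> is the
        # temp file created below):
        #
        #   #!/bin/bash
        #   echo start >> <output_file>
        #
        # which lets parse_output() recover the execution order later.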
output_file = self.makeFile() if install: self.write_hook( "install", "#!/bin/bash\necho install >> %s" % output_file) if start: self.write_hook( "start", "#!/bin/bash\necho start >> %s" % output_file) if stop: self.write_hook( "stop", "#!/bin/bash\necho stop >> %s" % output_file) for k in kw.keys(): hook_name = k.replace("_", "-") self.write_hook( hook_name, "#!/bin/bash\necho %s >> %s" % (hook_name, output_file)) return output_file def parse_output(self, output_file): return filter(None, open(output_file).read().split("\n")) class UnitAgentTest(UnitAgentTestBase): @inlineCallbacks def test_agent_start_stop_start_service(self): """Verify workflow state when starting and stopping the unit agent.""" self.write_empty_hooks() yield self.agent.startService() current_state = yield self.agent.workflow.get_state() self.assertEqual(current_state, "started") self.assertTrue(self.agent.lifecycle.running) self.assertTrue(self.agent.executor.running) workflow = self.agent.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) relation_state = yield workflow.get_state() self.assertEquals(relation_state, "up") yield self.agent.stopService() current_state = yield self.agent.workflow.get_state() # NOTE: stopping the unit agent does *not* imply that the service # should not continue to run; ie don't transition to "stopped", and # don't mark the relation states as "down" self.assertEqual(current_state, "started") self.assertFalse(self.agent.lifecycle.running) self.assertFalse(self.agent.executor.running) relation_state = yield workflow.get_state() self.assertEquals(relation_state, "up") # and check we can restart as well yield self.agent.startService() current_state = yield self.agent.workflow.get_state() self.assertEqual(current_state, "started") self.assertTrue(self.agent.lifecycle.running) self.assertTrue(self.agent.executor.running) relation_state = yield workflow.get_state() self.assertEquals(relation_state, "up") yield self.agent.stopService() current_state = yield self.agent.workflow.get_state() self.assertEqual(current_state, "started") self.assertFalse(self.agent.lifecycle.running) self.assertFalse(self.agent.executor.running) relation_state = yield workflow.get_state() self.assertEquals(relation_state, "up") @inlineCallbacks def test_agent_start_from_started_workflow(self): lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) workflow = UnitWorkflowState( self.client, self.states["unit"], lifecycle, os.path.join(self.juju_directory, "state")) with (yield workflow.lock()): yield workflow.fire_transition("install") yield lifecycle.stop(fire_hooks=False, stop_relations=False) yield self.agent.startService() current_state = yield self.agent.workflow.get_state() self.assertEqual(current_state, "started") self.assertTrue(self.agent.lifecycle.running) self.assertTrue(self.agent.executor.running) @inlineCallbacks def test_agent_start_from_error_workflow(self): lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) workflow = UnitWorkflowState( self.client, self.states["unit"], lifecycle, os.path.join(self.juju_directory, "state")) with (yield workflow.lock()): yield workflow.fire_transition("install") self.write_exit_hook("stop", 1) yield workflow.fire_transition("stop") yield self.agent.startService() current_state = yield self.agent.workflow.get_state() self.assertEqual(current_state, "stop_error") 
        self.assertFalse(self.agent.lifecycle.running)
        self.assertTrue(self.agent.executor.running)

    def test_agent_unit_name_environment_extraction(self):
        """Verify extraction of unit name from the environment."""
        self.change_args("unit-agent")
        self.change_environment(JUJU_UNIT_NAME="rabbit/1")
        parser = argparse.ArgumentParser()
        self.agent.setup_options(parser)
        options = parser.parse_args(namespace=TwistedOptionNamespace())
        self.assertEqual(options["unit_name"], "rabbit/1")

    def test_agent_unit_name_cli_extraction_error(self):
        """Failure to extract the unit name results in a nice error
        message.
        """
        # We don't want JUJU_UNIT_NAME set, so that the expected
        # JujuError will be raised
        self.change_environment(
            PATH=get_cli_environ_path())
        self.change_args(
            "unit-agent",
            "--juju-directory", self.makeDir(),
            "--zookeeper-servers", get_test_zookeeper_address(),
            "--session-file", self.makeFile())
        parser = argparse.ArgumentParser()
        self.agent.setup_options(parser)
        options = parser.parse_args(namespace=TwistedOptionNamespace())
        e = self.assertRaises(JujuError, self.agent.configure, options)
        self.assertEquals(
            str(e),
            "--unit-name must be provided in the command line, or "
            "$JUJU_UNIT_NAME in the environment")

    def test_agent_unit_name_cli_extraction(self):
        """The unit agent can parse its unit-name from the cli.
        """
        self.change_args("unit-agent", "--unit-name", "rabbit/1")
        parser = argparse.ArgumentParser()
        self.agent.setup_options(parser)
        options = parser.parse_args(namespace=TwistedOptionNamespace())
        self.assertEqual(options["unit_name"], "rabbit/1")

    def test_get_agent_name(self):
        self.assertEqual(self.agent.get_agent_name(), "unit:mysql/0")

    def test_agent_invalid_unit_name(self):
        """If the unit agent is given an invalid unit name, an error
        message is raised."""
        options = {}
        options["juju_directory"] = self.juju_directory
        options["zookeeper_servers"] = get_test_zookeeper_address()
        options["session_file"] = self.makeFile()
        options["unit_name"] = "rabbit-1"
        agent = self.agent_class()
        agent.configure(options)
        return self.assertFailure(agent.startService(), ServiceStateNotFound)

    @inlineCallbacks
    def test_agent_records_address_on_startup(self):
        """On startup the agent will record the unit's addresses.
        """
        yield self.agent.startService()
        self.assertEqual(
            (yield self.agent.unit_state.get_public_address()),
            "localhost")
        self.assertEqual(
            (yield self.agent.unit_state.get_private_address()),
            "localhost")

    @inlineCallbacks
    def test_agent_executes_install_and_start_hooks_on_startup(self):
        """On initial startup, the unit agent executes install and start
        hooks.
        """
        output_file = self.write_empty_hooks()
        hooks_complete = self.wait_on_hook(
            sequence=["install", "config-changed", "start"],
            executor=self.agent.executor)
        yield self.agent.startService()
        # Verify the hook has executed.
        yield hooks_complete
        # config-changed is not mentioned in the output below as the
        # hook is optional and not written by default
        self.assertEqual(self.parse_output(output_file),
                         ["install", "start"])
        yield self.assertState(self.agent.workflow, "started")
        yield self.agent.stopService()

    @inlineCallbacks
    def test_agent_install_error_transitions_install_error(self):
        self.write_hook("install", "#!/bin/bash\nexit 1\n")
        hooks_complete = self.wait_on_hook(
            "install", executor=self.agent.executor)
        yield self.agent.startService()
        # Verify the hook has executed.
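        # (Aside, a sketch of the mechanics: wait_on_hook above returned
        # a Deferred that fires once the executor has run the named hook,
        # so yielding it parks the test until "install" completes.)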
yield hooks_complete yield self.assertState(self.agent.workflow, "install_error") @inlineCallbacks def test_agent_executes_relation_changed_hook(self): """If a relation changes after the unit is started, a relation change hook is executed.""" self.write_empty_hooks() file_path = self.makeFile() self.write_hook("app-relation-changed", ("#!/bin/sh\n" "echo $JUJU_REMOTE_UNIT >> %s\n" % file_path)) yield self.agent.startService() hook_complete = self.wait_on_hook( "app-relation-changed", executor=self.agent.executor) wordpress_states = yield self.add_opposite_service_unit( self.states) # Verify the hook has executed. yield hook_complete self.assertEqual(open(file_path).read().strip(), wordpress_states["unit"].unit_name) @inlineCallbacks def test_agent_executes_config_changed_hook(self): """Service config changes fire a config-changed hook.""" self.agent.set_watch_enabled(True) self.write_empty_hooks() file_path = self.makeFile() self.write_hook("config-changed", ("#!/bin/sh\n" "config-get foo >> %s\n" % file_path)) yield self.agent.startService() transition_complete = self.wait_on_state( self.agent.workflow, "started") service = self.states["service"] config = yield service.get_config() config["foo"] = "bar" yield config.write() # Verify the hook has executed, and transition has completed. yield transition_complete self.assertEqual(open(file_path).read().strip(), "bar") @inlineCallbacks def test_agent_can_execute_config_changed_in_relation_hook(self): """Service config changes fire a config-changed hook.""" self.agent.set_watch_enabled(True) self.write_empty_hooks() file_path = self.makeFile() self.write_hook("app-relation-changed", ("#!/bin/sh\n" "config-get foo >> %s\n" % file_path)) # set service config service = self.states["service"] config = yield service.get_config() config["foo"] = "bar" yield config.write() yield self.agent.startService() hook_complete = self.wait_on_hook( "app-relation-changed", executor=self.agent.executor) # trigger the hook that will read service options yield self.add_opposite_service_unit(self.states) # Verify the hook has executed. yield hook_complete self.assertEqual(open(file_path).read().strip(), "bar") @inlineCallbacks def test_agent_hook_api_usage(self): """If a relation changes after the unit is started, a relation change hook is executed.""" self.write_empty_hooks() file_path = self.makeFile() self.write_hook("app-relation-changed", "\n".join( ["#!/bin/sh", "echo `relation-list` >> %s" % file_path, "echo `relation-set greeting=hello`", "echo `relation-set planet=earth`", "echo `relation-get planet %s` >> %s" % ( self.states["unit"].unit_name, file_path)])) yield self.agent.startService() hook_complete = self.wait_on_hook( "app-relation-changed", executor=self.agent.executor) yield self.add_opposite_service_unit(self.states) # Verify the hook has executed. 
yield hook_complete # Verify hook output output = open(file_path).read().strip().split("\n") self.assertEqual(output, ["wordpress/0", "earth"]) # Verify zookeeper state contents = yield self.states["unit_relation"].get_data() self.assertEqual( {"greeting": "hello", "planet": "earth", "private-address": "mysql-0.example.com"}, serializer.load(contents)) self.failUnlessIn("wordpress/0", output) @inlineCallbacks def test_agent_executes_depart_hook(self): """If a relation changes after the unit is started, a relation change hook is executed.""" self.write_empty_hooks(app_relation_changed=True) file_path = self.makeFile() self.write_hook("app-relation-broken", ("#!/bin/sh\n" "echo broken hook >> %s\n" % file_path)) yield self.agent.startService() hook_complete = self.wait_on_hook( "app-relation-changed", executor=self.agent.executor) yield self.add_opposite_service_unit(self.states) yield hook_complete # Watch the unit relation workflow complete workflow_complete = self.wait_on_state( self.agent.lifecycle.get_relation_workflow( self.states["relation"].internal_id), "departed") yield self.relation_manager.remove_relation_state( self.states["relation"]) hook_complete = self.wait_on_hook( "app-relation-broken", executor=self.agent.executor) # Verify the hook has executed. yield hook_complete self.assertEqual(open(file_path).read().strip(), "broken hook") # Wait for the workflow transition to complete. yield workflow_complete @inlineCallbacks def test_agent_debug_watch(self): """The unit agent subscribes to changes to the hook debug settings. """ self.agent.set_watch_enabled(True) yield self.agent.startService() yield self.states["unit"].enable_hook_debug(["*"]) # Wait for watch to fire invoke callback and reset yield self.sleep(0.1) # Check the propogation to the executor self.assertNotEquals( self.agent.executor.get_hook_path("x"), "x") class UnitAgentResolvedTest(UnitAgentTestBase): @inlineCallbacks def test_resolved_unit_already_running(self): """If the unit already running the setting is cleared, and no transition is performed. """ self.write_empty_hooks() start_deferred = self.wait_on_hook( "start", executor=self.agent.executor) self.agent.set_watch_enabled(True) yield self.agent.startService() yield start_deferred self.assertEqual( "started", (yield self.agent.workflow.get_state())) yield self.agent.unit_state.set_resolved(RETRY_HOOKS) # Wait for watch to fire and reset yield self.sleep(0.1) self.assertEqual( "started", (yield self.agent.workflow.get_state())) self.assertEqual( None, (yield self.agent.unit_state.get_resolved())) @inlineCallbacks def test_resolved_install_error(self): """If the unit has an install error it will automatically be transitioned to the installed state after the recovery. 
""" self.write_empty_hooks() install_deferred = self.wait_on_hook( "install", executor=self.agent.executor) self.write_hook("install", "#!/bin/sh\nexit 1") self.agent.set_watch_enabled(True) yield self.agent.startService() yield install_deferred self.assertEqual( "install_error", (yield self.agent.workflow.get_state())) install_deferred = self.wait_on_state(self.agent.workflow, "started") self.write_hook("install", "#!/bin/sh\nexit 0") yield self.agent.unit_state.set_resolved(RETRY_HOOKS) yield install_deferred self.assertEqual("started", (yield self.agent.workflow.get_state())) # Ensure we clear out background activity from the watch firing yield self.poke_zk() @inlineCallbacks def test_resolved_start_error(self): """If the unit has a start error it will automatically be transitioned to started after the recovery. """ self.write_empty_hooks() hook_deferred = self.wait_on_hook( "start", executor=self.agent.executor) self.write_hook("start", "#!/bin/sh\nexit 1") self.agent.set_watch_enabled(True) yield self.agent.startService() yield hook_deferred self.assertEqual( "start_error", (yield self.agent.workflow.get_state())) state_deferred = self.wait_on_state(self.agent.workflow, "started") yield self.agent.unit_state.set_resolved(NO_HOOKS) yield state_deferred self.assertEqual("started", (yield self.agent.workflow.get_state())) # Resolving to the started state from the resolved watch will cause the # lifecycle start to execute in the background context, wait # for it to finish. yield self.sleep(0.1) @inlineCallbacks def test_resolved_stopped(self): """If the unit has a stop error it will automatically be transitioned to stopped after the recovery. """ self.write_empty_hooks() self.write_hook("stop", "#!/bin/sh\nexit 1") hook_deferred = self.wait_on_hook( "start", executor=self.agent.executor) self.agent.set_watch_enabled(True) yield self.agent.startService() yield hook_deferred hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor) with (yield self.agent.workflow.lock()): yield self.agent.workflow.fire_transition("stop") yield hook_deferred self.assertEqual("stop_error", (yield self.agent.workflow.get_state())) state_deferred = self.wait_on_state(self.agent.workflow, "stopped") self.write_hook("stop", "#!/bin/sh\nexit 0") yield self.agent.unit_state.set_resolved(RETRY_HOOKS) yield state_deferred self.assertEqual("stopped", (yield self.agent.workflow.get_state())) # Ensure we clear out background activity from the watch firing yield self.poke_zk() @inlineCallbacks def test_hook_error_on_resolved_retry_remains_in_error_state(self): """If the unit has an install error it will automatically be transitioned to started after the recovery. 
""" self.write_empty_hooks() self.write_hook("stop", "#!/bin/sh\nexit 1") hook_deferred = self.wait_on_hook( "start", executor=self.agent.executor) self.agent.set_watch_enabled(True) yield self.agent.startService() yield hook_deferred hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor) with (yield self.agent.workflow.lock()): yield self.agent.workflow.fire_transition("stop") yield hook_deferred self.assertEqual("stop_error", (yield self.agent.workflow.get_state())) hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor) yield self.agent.unit_state.set_resolved(RETRY_HOOKS) yield hook_deferred # Ensure we clear out background activity from the watch firing yield self.poke_zk() self.assertEqual("stop_error", (yield self.agent.workflow.get_state())) class UnitAgentUpgradeTest( UnitAgentTestBase, CharmPublisherTestBase, CharmUpgradeTestBase): @inlineCallbacks def setUp(self): yield super(UnitAgentTestBase, self).setUp() settings = GlobalSettingsStateManager(self.client) yield settings.set_provider_type("dummy") self.makeDir(path=os.path.join(self.juju_directory, "charms")) @inlineCallbacks def wait_for_log(self, logger_name, message, level=logging.DEBUG): output = self.capture_logging(logger_name, level=level) while message not in output.getvalue(): yield self.sleep(0.1) @inlineCallbacks def mark_charm_upgrade(self): # Create a new version of the charm repository = self.increment_charm(self.charm) # Upload the new charm version charm = yield repository.find(CharmURL.parse("local:series/mysql")) charm, charm_state = yield self.publish_charm(charm.path) # Mark the unit for upgrade yield self.states["service"].set_charm_id(charm_state.id) yield self.states["unit"].set_upgrade_flag() @inlineCallbacks def test_agent_upgrade_watch(self): """The agent watches for unit upgrades.""" yield self.mark_charm_upgrade() self.agent.set_watch_enabled(True) hook_done = self.wait_on_hook( "upgrade-charm", executor=self.agent.executor) yield self.agent.startService() yield hook_done yield self.assertState(self.agent.workflow, "started") @inlineCallbacks def test_agent_upgrade(self): """The agent can succesfully upgrade its charm.""" log_written = self.wait_for_log("juju.agents.unit", "Upgrade complete") hook_done = self.wait_on_hook( "upgrade-charm", executor=self.agent.executor) self.agent.set_watch_enabled(True) yield self.agent.startService() yield self.mark_charm_upgrade() yield hook_done yield log_written self.assertIdentical( (yield self.states["unit"].get_upgrade_flag()), False) new_charm = get_charm_from_path( os.path.join(self.agent.unit_directory, "charm")) self.assertEqual( self.charm.get_revision() + 1, new_charm.get_revision()) @inlineCallbacks def test_agent_upgrade_version_current(self): """If the unit is running the latest charm, do nothing.""" log_written = self.wait_for_log( "juju.agents.unit", "Upgrade ignored: already running latest charm") old_charm_id = yield self.states["unit"].get_charm_id() self.agent.set_watch_enabled(True) yield self.agent.startService() yield self.states["unit"].set_upgrade_flag() yield log_written self.assertIdentical( (yield self.states["unit"].get_upgrade_flag()), False) self.assertEquals( (yield self.states["unit"].get_charm_id()), old_charm_id) @inlineCallbacks def test_agent_upgrade_bad_unit_state(self): """The upgrade fails if the unit is in a bad state.""" # Upload a new version of the unit's charm repository = self.increment_charm(self.charm) charm = yield repository.find(CharmURL.parse("local:series/mysql")) charm, 
charm_state = yield self.publish_charm(charm.path) old_charm_id = yield self.states["unit"].get_charm_id() log_written = self.wait_for_log( "juju.agents.unit", "Cannot upgrade: unit is in non-started state configure_error. " "Reissue upgrade command to try again.") self.agent.set_watch_enabled(True) yield self.agent.startService() # Mark the unit for upgrade, with an invalid state. with (yield self.agent.workflow.lock()): yield self.agent.workflow.fire_transition("error_configure") yield self.states["service"].set_charm_id(charm_state.id) yield self.states["unit"].set_upgrade_flag() yield log_written self.assertIdentical( (yield self.states["unit"].get_upgrade_flag()), False) self.assertEquals( (yield self.states["unit"].get_charm_id()), old_charm_id) @inlineCallbacks def test_agent_force_upgrade_bad_unit_state(self): """The upgrade runs if forced and the unit is in a bad state.""" # Upload a new version of the unit's charm repository = self.increment_charm(self.charm) charm = yield repository.find(CharmURL.parse("local:series/mysql")) charm, charm_state = yield self.publish_charm(charm.path) old_charm_id = yield self.states["unit"].get_charm_id() output = self.capture_logging("juju.agents.unit", level=logging.DEBUG) self.agent.set_watch_enabled(True) yield self.agent.startService() # Mark the unit for upgrade, with an invalid state. with (yield self.agent.workflow.lock()): yield self.agent.workflow.fire_transition("error_configure") yield self.states["service"].set_charm_id(charm_state.id) yield self.states["unit"].set_upgrade_flag(force=True) # Its hard to watch something with no hooks and no state changes. yield self.sleep(0.1) self.assertIdentical( (yield self.states["unit"].get_upgrade_flag()), False) self.assertIn("Forced upgrade complete", output.getvalue()) self.assertEquals( (yield self.states["unit"].get_charm_id()), "local:series/mysql-2") self.assertEquals(old_charm_id, "local:series/dummy-1") @inlineCallbacks def test_agent_upgrade_no_flag(self): """An upgrade stops if there is no upgrade flag set.""" log_written = self.wait_for_log( "juju.agents.unit", "No upgrade flag set") old_charm_id = yield self.states["unit"].get_charm_id() self.agent.set_watch_enabled(True) yield self.agent.startService() yield log_written self.assertIdentical( (yield self.states["unit"].get_upgrade_flag()), False) new_charm_id = yield self.states["unit"].get_charm_id() self.assertEquals(new_charm_id, old_charm_id) juju-0.7.orig/juju/charm/__init__.py0000644000000000000000000000011412135220114015565 0ustar 00000000000000from provider import get_charm_from_path __all__ = ["get_charm_from_path"] juju-0.7.orig/juju/charm/base.py0000644000000000000000000000335712135220114014754 0ustar 00000000000000from juju.errors import CharmError def get_revision(file_content, metadata, path): if file_content is None: return metadata.obsolete_revision try: result = int(file_content.strip()) if result >= 0: return result except (ValueError, TypeError): pass raise CharmError(path, "invalid charm revision %r" % file_content) class CharmBase(object): """Abstract base class for charm implementations. """ _sha256 = None def _unsupported(self, attr): raise NotImplementedError("%s.%s not supported" % (self.__class__.__name__, attr)) def get_revision(self): """Get the revision, preferably from the revision file. Will fall back to metadata if not available. """ self._unsupported("get_revision()") def set_revision(self, revision): """Update the revision file, if possible. Some subclasses may not be able to do this. 
""" self._unsupported("set_revision()") def as_bundle(self): """Transform this charm into a charm bundle, if possible. Some subclasses may not be able to do this. """ self._unsupported("as_bundle()") def compute_sha256(self): """Compute the sha256 for this charm. Every charm subclass must implement this. """ self._unsupported("compute_sha256()") def get_sha256(self): """Return the cached sha256, or compute it if necessary. If the sha256 value for this charm is not yet cached, the compute_sha256() method will be called to compute it. """ if self._sha256 is None: self._sha256 = self.compute_sha256() return self._sha256 juju-0.7.orig/juju/charm/bundle.py0000644000000000000000000000540412135220114015306 0ustar 00000000000000import hashlib import tempfile import os import stat from zipfile import ZipFile, BadZipfile from juju.charm.base import CharmBase, get_revision from juju.charm.config import ConfigOptions from juju.charm.metadata import MetaData from juju.errors import CharmError from juju.lib.filehash import compute_file_hash class CharmBundle(CharmBase): """ZIP-archive that contains charm directory content.""" type = "bundle" def __init__(self, path): self.path = isinstance(path, file) and path.name or path try: zf = ZipFile(path, 'r') except BadZipfile, exc: raise CharmError(path, "must be a zip file (%s)" % exc) if "metadata.yaml" not in zf.namelist(): raise CharmError( path, "charm does not contain required file 'metadata.yaml'") self.metadata = MetaData() self.metadata.parse(zf.read("metadata.yaml")) try: revision_content = zf.read("revision") except KeyError: revision_content = None self._revision = get_revision( revision_content, self.metadata, self.path) if self._revision is None: raise CharmError(self.path, "has no revision") self.config = ConfigOptions() if "config.yaml" in zf.namelist(): self.config.parse(zf.read("config.yaml")) def get_revision(self): return self._revision def compute_sha256(self): """Return the SHA256 digest for this charm bundle. The digest is extracted out of the final bundle file itself. """ return compute_file_hash(hashlib.sha256, self.path) def extract_to(self, directory_path): """Extract the bundle to directory path and return a CharmDirectory handle""" from .directory import CharmDirectory zf = ZipFile(self.path, "r") for info in zf.infolist(): mode = info.external_attr >> 16 if stat.S_ISLNK(mode): source = zf.read(info.filename) target = os.path.join(directory_path, info.filename) # Support extracting over existing charm. 
# TODO: a directory changed to a file needs install manifests if os.path.exists(target): os.remove(target) os.symlink(source, target) continue # Preserve mode extract_path = zf.extract(info, directory_path) os.chmod(extract_path, mode) return CharmDirectory(directory_path) def as_bundle(self): return self def as_directory(self): """Returns the bundle as a CharmDirectory using a temporary path""" dn = tempfile.mkdtemp(prefix="tmp-charm-") return self.extract_to(dn) juju-0.7.orig/juju/charm/config.py0000644000000000000000000001570512135220114015307 0ustar 00000000000000import copy import os import sys import yaml from juju.lib import serializer from juju.lib.format import YAMLFormat from juju.lib.schema import (SchemaError, KeyDict, Dict, String, Constant, OneOf, Int, Float) from juju.charm.errors import ( ServiceConfigError, ServiceConfigValueError) OPTION_SCHEMA = KeyDict({ "type": OneOf(Constant("string"), Constant("str"), # Obsolete Constant("int"), Constant("boolean"), Constant("float")), "default": OneOf(String(), Int(), Float()), "description": String(), }, optional=["default", "description"], ) # Schema used to validate ConfigOptions specifications CONFIG_SCHEMA = KeyDict({ "options": Dict(String(), OPTION_SCHEMA), }) WARNED_STR_IS_OBSOLETE = False class ConfigOptions(object): """Represents the configuration options exposed by a charm. The intended usage is that Charm provide access to these objects and then use them to `validate` inputs provided in the `juju set` and `juju deploy` code paths. """ def __init__(self): self._data = {} def as_dict(self): return copy.deepcopy(self._data) def load(self, pathname): """Construct a ConfigOptions instance from a YAML file. If is currently allowed for `pathname` to be missing. An empty file with no allowable options will be assumed in that case. """ data = None if os.path.exists(pathname): with open(pathname) as fh: data = fh.read() else: pathname = None data = "options: {}\n" if not data: raise ServiceConfigError( pathname, "Missing required service options metadata") self.parse(data, pathname) return self def parse(self, data, pathname=None): """Load data into the config object. Data can be a properly encoded YAML string or an dict, such as one returned by `get_serialization_data`. Each call to `parse` replaces any existing data. `data`: Python dict or YAML encoded dict containing a valid config options specification. `pathname`: optional pathname included in some errors """ if isinstance(data, basestring): try: raw_data = serializer.yaml_load(data) except yaml.MarkedYAMLError, e: # Capture the path name on the error if present. if pathname is not None: e.problem_mark = serializer.yaml_mark_with_path( pathname, e.problem_mark) raise elif isinstance(data, dict): raw_data = data else: raise ServiceConfigError( pathname or "", "Unknown data type for config options: %s" % type(data)) data = self.parse_serialization_data(raw_data, pathname) self._data = data # validate defaults self.get_defaults() def parse_serialization_data(self, data, pathname=None): """Verify we have sensible option metadata. Returns the `options` dict from within the YAML data. """ if not data or not isinstance(data, dict): raise ServiceConfigError( pathname or "", "Expected YAML dict of options metadata") try: data = CONFIG_SCHEMA.coerce(data, []) except SchemaError, error: raise ServiceConfigError( pathname or "", "Invalid options specification: %s" % error) # XXX Drop this after everyone has migrated their config to 'string'. 
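        # The module-level flag below keeps the warning to at most one
        # emission per process, however many charms with an obsolete
        # 'str' option type get parsed.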
global WARNED_STR_IS_OBSOLETE if not WARNED_STR_IS_OBSOLETE: for name, info in data["options"].iteritems(): for field, value in info.iteritems(): if field == "type" and value == "str": sys.stderr.write( "WARNING: Charm is using obsolete 'str' type " "in config.yaml. Rename it to 'string'. %r \n" % ( pathname or "")) WARNED_STR_IS_OBSOLETE = True break return data["options"] def _validate_one(self, name, value): # see if there is a type associated with the option kind = self._data[name].get("type", "string") if kind not in validation_kinds: raise ServiceConfigValueError( "Unknown service option type: %s" % kind) # apply validation validator = validation_kinds[kind] value, valid = validator(value, self._data[name]) if not valid: # Return value such that it roundtrips; this allows us to # report back the boolean false instead of the Python # output format, False raise ServiceConfigValueError( "Invalid value for %s: %s" % ( name, YAMLFormat().format_raw(value))) return value def get_defaults(self): """Return a mapping of option: default for all options.""" d = {} for name, options in self._data.items(): if "default" in options: d[name] = self._validate_one(name, options["default"]) return d def validate(self, options): """Validate options using the loaded validation data. This method validates all the provided options, and returns a new dictionary with values properly typed. If a provided option is unknown or its value fails validation, ServiceConfigError is raised. """ d = {} for option, value in options.items(): if option not in self._data: raise ServiceConfigValueError( "%s is not a valid configuration option." % (option)) d[option] = self._validate_one(option, value) return d def get_serialization_data(self): return dict(options=self._data.copy()) # Validators return (type mapped value, valid boolean) def validate_str(value, options): if isinstance(value, basestring): return value, True return value, False def validate_int(value, options): try: return int(value), True except ValueError: return value, False def validate_float(value, options): try: return float(value), True except ValueError: return value, False def validate_boolean(value, options): if isinstance(value, bool): return value, True if value.lower() == "true": return True, True if value.lower() == "false": return False, True return value, False # maps service option types to callables validation_kinds = { "string": validate_str, "str": validate_str, # Obsolete "int": validate_int, "float": validate_float, "boolean": validate_boolean, } juju-0.7.orig/juju/charm/directory.py0000644000000000000000000001126112135220114016037 0ustar 00000000000000import os import stat import zipfile import tempfile from juju.charm.base import CharmBase, get_revision from juju.charm.bundle import CharmBundle from juju.charm.config import ConfigOptions from juju.charm.errors import InvalidCharmFile from juju.charm.metadata import MetaData class CharmDirectory(CharmBase): """Directory that holds charm content. 
:param path: Path to charm directory The directory must contain the following files:: - ``metadata.yaml`` """ type = "dir" def __init__(self, path): self.path = path self.metadata = MetaData(os.path.join(path, "metadata.yaml")) revision_content = None revision_path = os.path.join(self.path, "revision") if os.path.exists(revision_path): with open(revision_path) as f: revision_content = f.read() self._revision = get_revision( revision_content, self.metadata, self.path) if self._revision is None: self.set_revision(0) elif revision_content is None: self.set_revision(self._revision) self.config = ConfigOptions() self.config.load(os.path.join(path, "config.yaml")) self._temp_bundle = None self._temp_bundle_file = None def get_revision(self): return self._revision def set_revision(self, revision): self._revision = revision with open(os.path.join(self.path, "revision"), "w") as f: f.write(str(revision) + "\n") def make_archive(self, path): """Create archive of directory and write to ``path``. :param path: Path to archive - build/* - This is used for packing the charm itself and any similar tasks. - */.* - Hidden files are all ignored for now. This will most likely be changed into a specific ignore list (.bzr, etc) """ zf = zipfile.ZipFile(path, 'w', zipfile.ZIP_DEFLATED) for dirpath, dirnames, filenames in os.walk(self.path): relative_path = dirpath[len(self.path) + 1:] if relative_path and not self._ignore(relative_path): zf.write(dirpath, relative_path) for name in filenames: archive_name = os.path.join(relative_path, name) if not self._ignore(archive_name): real_path = os.path.join(dirpath, name) self._check_type(real_path) if os.path.islink(real_path): self._check_link(real_path) self._write_symlink( zf, os.readlink(real_path), archive_name) else: zf.write(real_path, archive_name) zf.close() def _check_type(self, path): """Check the path """ s = os.stat(path) if stat.S_ISDIR(s.st_mode) or stat.S_ISREG(s.st_mode): return path raise InvalidCharmFile( self.metadata.name, path, "Invalid file type for a charm") def _check_link(self, path): link_path = os.readlink(path) if link_path[0] == "/": raise InvalidCharmFile( self.metadata.name, path, "Absolute links are invalid") path_dir = os.path.dirname(path) link_path = os.path.join(path_dir, link_path) if not link_path.startswith(os.path.abspath(self.path)): raise InvalidCharmFile( self.metadata.name, path, "Only internal symlinks are allowed") def _write_symlink(self, zf, link_target, link_path): """Package symlinks with appropriate zipfile metadata.""" info = zipfile.ZipInfo() info.filename = link_path info.create_system = 3 # Preserve the pre-existing voodoo mode in a slightly clearer form. info.external_attr = (stat.S_IFLNK | 0755) << 16 zf.writestr(info, link_target) def _ignore(self, path): if path == "build" or path.startswith("build/"): return True if path.startswith('.'): return True def as_bundle(self): if self._temp_bundle is None: prefix = "%s-%d.charm." % (self.metadata.name, self.get_revision()) temp_file = tempfile.NamedTemporaryFile(prefix=prefix) self.make_archive(temp_file.name) self._temp_bundle = CharmBundle(temp_file.name) # Attach the life time of temp_file to self: self._temp_bundle_file = temp_file return self._temp_bundle def as_directory(self): return self def compute_sha256(self): """ Compute sha256, based on the bundle. 
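
        A directory has no canonical byte representation of its own, so
        the digest is computed over the zip archive produced by
        as_bundle().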
""" return self.as_bundle().compute_sha256() juju-0.7.orig/juju/charm/errors.py0000644000000000000000000000467412135220114015361 0ustar 00000000000000from juju.errors import CharmError, JujuError class CharmNotFound(JujuError): """A charm was not found in the repository.""" # This isn't semantically an error with a charm error, its an error # even finding the charm. def __init__(self, repository_path, charm_name): self.repository_path = repository_path self.charm_name = charm_name def __str__(self): return "Charm '%s' not found in repository %s" % ( self.charm_name, self.repository_path) class CharmURLError(CharmError): def __init__(self, url, message): self.url = url self.message = message def __str__(self): return "Bad charm URL %r: %s" % (self.url, self.message) class MetaDataError(CharmError): """Raised when an error in the info file of a charm is found.""" def __init__(self, *args): super(CharmError, self).__init__(*args) def __str__(self): return super(CharmError, self).__str__() class InvalidCharmHook(CharmError): """A named hook was not found to be valid for the charm.""" def __init__(self, charm_name, hook_name): self.charm_name = charm_name self.hook_name = hook_name def __str__(self): return "Charm %r does not contain hook %r" % ( self.charm_name, self.hook_name) class InvalidCharmFile(CharmError): """An invalid file was found in a charm.""" def __init__(self, charm_name, file_path, msg): self.charm_name = charm_name self.file_path = file_path self.msg = msg def __str__(self): return "Charm %r invalid file %r %s" % ( self.charm_name, self.file_path, self.msg) class NewerCharmNotFound(CharmError): """A newer charm was not found.""" def __init__(self, charm_id): self.charm_id = charm_id def __str__(self): return "Charm %r is the latest revision known" % self.charm_id class ServiceConfigError(CharmError): """Indicates an issue related to definition of service options.""" class ServiceConfigValueError(JujuError): """Indicates an issue related to values of service options.""" class RepositoryNotFound(JujuError): """Indicates inability to locate an appropriate repository""" def __init__(self, specifier): self.specifier = specifier def __str__(self): if self.specifier is None: return "No repository specified" return "No repository found at %r" % self.specifier juju-0.7.orig/juju/charm/metadata.py0000644000000000000000000002105612135220114015616 0ustar 00000000000000import logging import os import yaml from juju.charm.errors import MetaDataError from juju.errors import FileNotFound from juju.lib import serializer from juju.lib.format import is_valid_charm_format from juju.lib.schema import ( SchemaError, Bool, Constant, Dict, Int, KeyDict, OneOf, UnicodeOrString) log = logging.getLogger("juju.charm") UTF8_SCHEMA = UnicodeOrString("utf-8") SCOPE_GLOBAL = "global" SCOPE_CONTAINER = "container" INTERFACE_SCHEMA = KeyDict({ "interface": UTF8_SCHEMA, "limit": OneOf(Constant(None), Int()), "scope": OneOf(Constant(SCOPE_GLOBAL), Constant(SCOPE_CONTAINER)), "optional": Bool()}, optional=["scope"]) class InterfaceExpander(object): """Schema coercer that expands the interface shorthand notation. We need this class because our charm shorthand is difficult to work with (unfortunately). So we coerce shorthand and then store the desired format in ZK. 
Supports the following variants:: provides: server: riak admin: http foobar: interface: blah provides: server: interface: mysql limit: optional: false In all input cases, the output is the fully specified interface representation as seen in the mysql interface description above. """ def __init__(self, limit): """Create relation interface reshaper. @limit: the limit for this relation. Used to provide defaults for a given kind of relation role (peer, provider, consumer) """ self.limit = limit def coerce(self, value, path): """Coerce `value` into an expanded interface. Helper method to support each of the variants, either the charm does not specify limit and optional, such as foobar in the above example; or the interface spec is just a string, such as the ``server: riak`` example. """ if not isinstance(value, dict): return { "interface": UTF8_SCHEMA.coerce(value, path), "limit": self.limit, "scope": SCOPE_GLOBAL, "optional": False} else: # Optional values are context-sensitive and/or have # defaults, which is different than what KeyDict can # readily support. So just do it here first, then # coerce. if "limit" not in value: value["limit"] = self.limit if "optional" not in value: value["optional"] = False value["scope"] = value.get("scope", SCOPE_GLOBAL) return INTERFACE_SCHEMA.coerce(value, path) SCHEMA = KeyDict({ "name": UTF8_SCHEMA, "revision": Int(), "summary": UTF8_SCHEMA, "description": UTF8_SCHEMA, "format": Int(), "peers": Dict(UTF8_SCHEMA, InterfaceExpander(limit=1)), "provides": Dict(UTF8_SCHEMA, InterfaceExpander(limit=None)), "requires": Dict(UTF8_SCHEMA, InterfaceExpander(limit=1)), "subordinate": Bool(), }, optional=set( ["format", "provides", "requires", "peers", "revision", "subordinate"])) class MetaData(object): """Represents the charm info file. The main metadata for a charm (name, revision, etc) is maintained in the charm's info file. This class is able to parse, validate, and provide access to data in the info file. """ def __init__(self, path=None): self._data = {} if path is not None: self.load(path) @property def name(self): """The charm name.""" return self._data.get("name") @property def obsolete_revision(self): """The charm revision. The charm revision acts as a version, but unlike e.g. package versions, the charm revision is a monotonically increasing integer. This should not be stored in metadata any more, but remains for backward compatibility's sake. """ return self._data.get("revision") @property def summary(self): """The charm summary.""" return self._data.get("summary") @property def description(self): """The charm description.""" return self._data.get("description") @property def format(self): """Optional charm format, defaults to 1""" return self._data.get("format", 1) @property def provides(self): """The charm provides relations.""" return self._data.get("provides") @property def requires(self): """The charm requires relations.""" return self._data.get("requires") @property def peers(self): """The charm peers relations.""" return self._data.get("peers") @property def is_subordinate(self): """Indicates the charm requires a contained relationship. This property will effect the deployment options of its charm. When a charm is_subordinate it can only be deployed when its contained relationship is satisfied. See the subordinates specification. """ return self._data.get("subordinate", False) def get_serialization_data(self): """Get internal dictionary representing the state of this instance. 
This is useful to embed this information inside other storage-related dictionaries. """ return dict(self._data) def load(self, path): """Load and parse the info file. @param path: Path of the file to load. Internally, this function will pass the content of the file to the C{parse()} method. """ if not os.path.isfile(path): raise FileNotFound(path) with open(path) as f: self.parse(f.read(), path) def parse(self, content, path=None): """Parse the info file described by the given content. @param content: Content of the info file to parse. @param path: Optional path of the loaded file. Used when raising errors. @raise MetaDataError: When errors are found in the info data. """ try: self.parse_serialization_data( serializer.yaml_load(content), path) except yaml.MarkedYAMLError, e: # Capture the path name on the error if present. if path is not None: e.problem_mark = serializer.yaml_mark_with_path( path, e.problem_mark) raise if "revision" in self._data and path: log.warning( "%s: revision field is obsolete. Move it to the 'revision' " "file." % path) if self.provides: for rel in self.provides: if rel.startswith("juju-"): raise MetaDataError( "Charm %s attempting to provide relation in " "implicit relation namespace: %s" % (self.name, rel)) interface = self.provides[rel]["interface"] if interface.startswith("juju-"): raise MetaDataError( "Charm %s attempting to provide interface in implicit namespace: " "%s (relation: %s)" % (self.name, interface, rel)) if self.is_subordinate: proper_subordinate = False if self.requires: for relation_data in self.requires.values(): if relation_data.get("scope") == SCOPE_CONTAINER: proper_subordinate = True if not proper_subordinate: raise MetaDataError( "%s labeled subordinate but lacking scope:container `requires` relation" % path) if not is_valid_charm_format(self.format): raise MetaDataError("Charm %s uses an unknown format: %s" % ( self.name, self.format)) def parse_serialization_data(self, serialization_data, path=None): """Parse the unprocessed serialization data and load in this instance. @param serialization_data: Unprocessed data matching the metadata schema. @param path: Optional path of the loaded file. Used when raising errors. @raise MetaDataError: When errors are found in the info data. """ try: self._data = SCHEMA.coerce(serialization_data, []) except SchemaError, error: if path: path_info = " %s:" % path else: path_info = "" raise MetaDataError("Bad data in charm info:%s %s" % (path_info, error)) juju-0.7.orig/juju/charm/provider.py0000644000000000000000000000147012135220114015666 0ustar 00000000000000"""Charm Factory Register a set of input handlers and spawn the correct charm implementation. """ from juju.errors import CharmError import os.path def _is_bundle(filename): """is_bundle(filename) -> boolean""" return os.path.isfile(filename) and filename.endswith(".charm") def get_charm_from_path(specification): """ Given the specification of a charm (usually a pathname) map it to an implementation and create an instance of the proper type. 
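
    For example (illustrative paths)::

        get_charm_from_path("./mysql.charm")        # -> CharmBundle
        get_charm_from_path("./repo/series/mysql")  # -> CharmDirectory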
""" if _is_bundle(specification): from .bundle import CharmBundle return CharmBundle(specification) elif os.path.isdir(specification): from .directory import CharmDirectory return CharmDirectory(specification) raise CharmError( specification, "unable to process %s into a charm" % specification) juju-0.7.orig/juju/charm/publisher.py0000644000000000000000000001037012135220114016030 0ustar 00000000000000import logging from zookeeper import NodeExistsException, NoNodeException from twisted.internet.defer import ( DeferredList, inlineCallbacks, returnValue, succeed, FirstError) from juju.lib import under from juju.state.charm import CharmStateManager from juju.state.errors import CharmStateNotFound, StateChanged log = logging.getLogger("juju.charm") class CharmPublisher(object): """ Publishes a charm to an environment. """ def __init__(self, client, storage): self._client = client self._storage = storage self._charm_state_manager = CharmStateManager(self._client) self._charm_add_queue = [] self._charm_state_cache = {} @classmethod @inlineCallbacks def for_environment(cls, environment): provider = environment.get_machine_provider() storage = provider.get_file_storage() client = yield provider.connect() returnValue(cls(client, storage)) @inlineCallbacks def add_charm(self, charm_id, charm): """Schedule a charm for addition to an juju environment. Returns true if the charm is scheduled for upload, false if the charm is already present in juju. """ self._charm_add_queue.append((charm_id, charm)) if charm_id in self._charm_state_cache: returnValue(False) try: state = yield self._charm_state_manager.get_charm_state( charm_id) except CharmStateNotFound: pass else: log.info("Using cached charm version of %s" % charm.metadata.name) self._charm_state_cache[charm_id] = state returnValue(False) returnValue(True) def _publish_one(self, charm_id, charm): if charm_id in self._charm_state_cache: return succeed(self._charm_state_cache[charm_id]) bundle = charm.as_bundle() charm_file = open(bundle.path, "rb") charm_store_path = under.quote( "%s:%s" % (charm_id, bundle.get_sha256())) def close_charm_file(passthrough): charm_file.close() return passthrough def get_charm_url(result): return self._storage.get_url(charm_store_path) d = self._storage.put(charm_store_path, charm_file) d.addBoth(close_charm_file) d.addCallback(get_charm_url) d.addCallback(self._cb_store_charm_state, charm_id, bundle) d.addErrback(self._eb_verify_duplicate, charm_id, bundle) return d def publish(self): """Publish all added charms to provider storage and zookeeper. Returns the charm_state of all scheduled charms. """ publish_deferreds = [] for charm_id, charm in self._charm_add_queue: publish_deferreds.append(self._publish_one(charm_id, charm)) publish_deferred = DeferredList(publish_deferreds, fireOnOneErrback=1, consumeErrors=1) # callbacks and deferreds to unwind the dlist publish_deferred.addCallback(self._cb_extract_charm_state) publish_deferred.addErrback(self._eb_extract_error) return publish_deferred def _cb_extract_charm_state(self, result): return [r[1] for r in result] def _eb_extract_error(self, failure): failure.trap(FirstError) return failure.value.subFailure def _cb_store_charm_state(self, charm_url, charm_id, charm): return self._charm_state_manager.add_charm_state( charm_id, charm, charm_url) @inlineCallbacks def _eb_verify_duplicate(self, failure, charm_id, charm): """Detects duplicates vs. 
conflicts, raises stateerror on conflict.""" failure.trap(NodeExistsException) try: charm_state = \ yield self._charm_state_manager.get_charm_state(charm_id) except NoNodeException: # Check if the state goes away due to concurrent removal msg = "Charm removed concurrently during publish, please retry." raise StateChanged(msg) if charm_state.get_sha256() != charm.get_sha256(): msg = "Concurrent upload of charm has different checksum %s" % ( charm_id) raise StateChanged(msg) juju-0.7.orig/juju/charm/repository.py0000644000000000000000000001710312135220114016253 0ustar 00000000000000import json import logging import os import tempfile import urllib import urlparse import yaml from twisted.internet.defer import fail, inlineCallbacks, returnValue, succeed from twisted.web.client import downloadPage, getPage from twisted.web.error import Error from txaws.client.ssl import VerifyingContextFactory from juju.charm.provider import get_charm_from_path from juju.charm.url import CharmURL from juju.errors import FileNotFound from juju.lib import under from .errors import ( CharmNotFound, CharmError, RepositoryNotFound, ServiceConfigValueError) log = logging.getLogger("juju.charm") CS_STORE_URL = "https://store.juju.ubuntu.com" def _makedirs(path): try: os.makedirs(path) except OSError: pass def _cache_key(charm_url): charm_url.assert_revision() return under.quote("%s.charm" % charm_url) class LocalCharmRepository(object): """Charm repository in a local directory.""" type = "local" def __init__(self, path): if path is None or not os.path.isdir(path): raise RepositoryNotFound(path) self.path = path def _collection(self, collection): path = os.path.join(self.path, collection.series) if not os.path.exists(path): return for dentry in os.listdir(path): if dentry.startswith("."): continue dentry_path = os.path.join(path, dentry) try: yield get_charm_from_path(dentry_path) except FileNotFound: continue # There is a broken charm in the repo, but that # shouldn't stop us from continuing except yaml.YAMLError, e: # Log yaml errors for feedback to developers. log.warning("Charm %r has a YAML error: %s", dentry, e) continue except (CharmError, ServiceConfigValueError), e: # Log invalid config.yaml and metadata.yaml semantic errors log.warning("Charm %r has an error: %r %s", dentry, e, e) continue except CharmNotFound: # This could just be a random directory/file in the repo continue except Exception, e: # Catch all (perms, unknowns, etc) log.warning( "Unexpected error while processing %s: %r", dentry, e) def find(self, charm_url): """Find a charm with the given name. If multiple charms are found with different versions, the most recent one (greatest revision) will be returned. 
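        For example (a sketch only; both methods return Deferreds), given a
        repo = LocalCharmRepository(path) holding revisions 1 and 2 of a
        "sample" charm::

            repo.find(CharmURL.parse("local:series/sample"))    # fires with rev 2
            repo.find(CharmURL.parse("local:series/sample-1"))  # fires with rev 1
            repo.latest(CharmURL.parse("local:series/sample"))  # fires with 2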
""" assert charm_url.collection.schema == "local", "schema mismatch" latest = None for charm in self._collection(charm_url.collection): if charm.metadata.name == charm_url.name: if charm.get_revision() == charm_url.revision: return succeed(charm) if (latest is None or latest.get_revision() < charm.get_revision()): latest = charm if latest is None or charm_url.revision is not None: return fail(CharmNotFound(self.path, charm_url)) return succeed(latest) def latest(self, charm_url): d = self.find(charm_url.with_revision(None)) d.addCallback(lambda c: c.get_revision()) return d def __str__(self): return "local charm repository: %s" % self.path class RemoteCharmRepository(object): cache_path = os.path.expanduser("~/.juju/cache") type = "store" def __init__(self, url_base, cache_path=None): self.url_base = url_base if cache_path is not None: self.cache_path = cache_path self.no_stats = bool(os.environ.get("JUJU_TESTING")) def __str__(self): return "charm store" @inlineCallbacks def _get_info(self, charm_url): charm_id = str(charm_url) url = "%s/charm-info?charms=%s" % ( self.url_base, urllib.quote(charm_id)) if self.no_stats: url += "&stats=0" try: host = urlparse.urlparse(url).hostname all_info = json.loads( (yield getPage( url, contextFactory=VerifyingContextFactory(host)))) charm_info = all_info[charm_id] for warning in charm_info.get("warnings", []): log.warning("%s: %s", charm_id, warning) errors = charm_info.get("errors", []) if errors: raise CharmError(charm_id, "; ".join(errors)) returnValue(charm_info) except Error: raise CharmNotFound(self.url_base, charm_url) @inlineCallbacks def _download(self, charm_url, cache_path): url = "%s/charm/%s" % (self.url_base, urllib.quote(charm_url.path)) downloads = os.path.join(self.cache_path, "downloads") _makedirs(downloads) f = tempfile.NamedTemporaryFile( prefix=_cache_key(charm_url), suffix=".part", dir=downloads, delete=False) f.close() downloading_path = f.name host = urlparse.urlparse(url).hostname if self.no_stats: url += "?stats=0" try: yield downloadPage( url, downloading_path, contextFactory=VerifyingContextFactory(host)) except Error: raise CharmNotFound(self.url_base, charm_url) os.rename(downloading_path, cache_path) @inlineCallbacks def find(self, charm_url): info = yield self._get_info(charm_url) revision = info["revision"] if charm_url.revision is None: charm_url = charm_url.with_revision(revision) else: assert revision == charm_url.revision, "bad url revision" cache_path = os.path.join(self.cache_path, _cache_key(charm_url)) cached = os.path.exists(cache_path) if not cached: yield self._download(charm_url, cache_path) charm = get_charm_from_path(cache_path) assert charm.get_revision() == revision, "bad charm revision" if charm.get_sha256() != info["sha256"]: os.remove(cache_path) name = "%s (%s)" % ( charm_url, "cached" if cached else "downloaded") raise CharmError(name, "SHA256 mismatch") returnValue(charm) @inlineCallbacks def latest(self, charm_url): info = yield self._get_info(charm_url.with_revision(None)) returnValue(info["revision"]) def resolve(vague_name, repository_path, default_series): """Get a Charm and associated identifying information :param str vague_name: a lazily specified charm name, suitable for use with :meth:`CharmURL.infer` :param repository_path: where on the local filesystem to find a repository (only currently meaningful when `charm_name` is specified with `"local:"`) :type repository_path: str or None :param str default_series: the Ubuntu series to insert when `charm_name` is inadequately specified. 
    :return: a tuple of the resolved repository (a
        :class:`LocalCharmRepository` or :class:`RemoteCharmRepository`)
        and the inferred :class:`juju.charm.url.CharmURL`, which together
        contain all information necessary to locate the charm and specify
        its source.
    """
    url = CharmURL.infer(vague_name, default_series)
    if url.collection.schema == "local":
        repo = LocalCharmRepository(repository_path)
    elif url.collection.schema == "cs":
        # The eventual charm store url, pointed to an elastic ip for now.
        repo = RemoteCharmRepository(CS_STORE_URL)
    return repo, url
juju-0.7.orig/juju/charm/tests/0000755000000000000000000000000012135220114014622 5ustar 00000000000000juju-0.7.orig/juju/charm/url.py0000644000000000000000000001071312135220114014636 0ustar 00000000000000
import copy
import re

from juju.charm.errors import CharmURLError

_USER_RE = re.compile("^[a-z0-9][a-zA-Z0-9+.-]+$")
_SERIES_RE = re.compile("^[a-z]+([a-z-]+[a-z])?$")
_NAME_RE = re.compile("^[a-z][a-z0-9]*(-[a-z0-9]*[a-z][a-z0-9]*)*$")


class CharmCollection(object):
    """Holds enough information to specify a repository and location

    :attr str schema: Defines which repository; valid values are "cs"
        (for the Juju charm store) and "local" (for a local repository).
    :attr user: Remote repositories can have sections owned by
        individual users.
    :type user: str or None
    :attr series: Which version of Ubuntu is targeted by charms in this
        collection.
    """

    def __init__(self, schema, user, series):
        self.schema = schema
        self.user = user
        self.series = series

    def __str__(self):
        if self.user is None:
            return "%s:%s" % (self.schema, self.series)
        return "%s:~%s/%s" % (self.schema, self.user, self.series)


class CharmURL(object):
    """Holds enough information to specify a charm.

    :attr collection: Where to look for the charm.
    :type collection: :class:`CharmCollection`
    :attr str name: The charm's name.
    :attr revision: The charm's revision, if specified.
:type revision: int or None """ def __init__(self, collection, name, revision): self.collection = collection self.name = name self.revision = revision def __str__(self): if self.revision is None: return "%s/%s" % (self.collection, self.name) return "%s/%s-%s" % (self.collection, self.name, self.revision) @property def path(self): return str(self).split(":", 1)[1] def with_revision(self, revision): other = copy.deepcopy(self) other.revision = revision return other def assert_revision(self): if self.revision is None: raise CharmURLError(str(self), "expected a revision") @classmethod def parse(cls, string): """Turn an unambiguous string representation into a CharmURL.""" def fail(message): raise CharmURLError(string, message) if not isinstance(string, basestring): fail("not a string type") if ":" not in string: fail("no schema specified") schema, rest = string.split(":", 1) if schema not in ("cs", "local"): fail("invalid schema") parts = rest.split("/") if len(parts) not in (2, 3): fail("invalid form") user = None if parts[0].startswith("~"): if schema == "local": fail("users not allowed in local URLs") user = parts[0][1:] if not _USER_RE.match(user): fail("invalid user") parts = parts[1:] if len(parts) != 2: fail("no series specified") revision = None series, name = parts if not _SERIES_RE.match(series): fail("invalid series") if "-" in name: maybe_name, maybe_revision = name.rsplit("-", 1) if maybe_revision.isdigit(): name, revision = maybe_name, int(maybe_revision) if not _NAME_RE.match(name): fail("invalid name") return cls(CharmCollection(schema, user, series), name, revision) @classmethod def infer(cls, vague_name, default_series): """Turn a potentially fuzzy alias into a CharmURL.""" try: # it might already be a valid URL string return cls.parse(vague_name) except CharmURLError: # ok, it's not, we have to do some work pass if vague_name.startswith("~"): raise CharmURLError( vague_name, "a URL with a user must specify a schema") if ":" in vague_name: schema, rest = vague_name.split(":", 1) else: schema, rest = "cs", vague_name url_string = "%s:%s" % (schema, rest) parts = rest.split("/") if len(parts) == 1: url_string = "%s:%s/%s" % (schema, default_series, rest) elif len(parts) == 2: if parts[0].startswith("~"): url_string = "%s:%s/%s/%s" % ( schema, parts[0], default_series, parts[1]) try: return cls.parse(url_string) except CharmURLError as err: err.message += " (URL inferred from '%s')" % vague_name raise juju-0.7.orig/juju/charm/tests/__init__.py0000644000000000000000000000016212135220114016732 0ustar 00000000000000def local_charm_id(charm): return "local:series/%s-%s" % ( charm.metadata.name, charm.get_revision()) juju-0.7.orig/juju/charm/tests/repository/0000755000000000000000000000000012135220114017041 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/test_base.py0000644000000000000000000000516212135220114017151 0ustar 00000000000000from juju.charm.base import CharmBase, get_revision from juju.charm.metadata import MetaData from juju.errors import CharmError from juju.lib import serializer from juju.lib.testing import TestCase class MyCharm(CharmBase): pass class CharmBaseTest(TestCase): def setUp(self): self.charm = MyCharm() def assertUnsupported(self, callable, attr_name): try: callable() except NotImplementedError, e: self.assertEquals(str(e), "MyCharm.%s not supported" % attr_name) else: self.fail("MyCharm.%s didn't fail" % attr_name) def test_unsupported(self): self.assertUnsupported(self.charm.as_bundle, "as_bundle()") self.assertUnsupported(self.charm.get_sha256, 
"compute_sha256()") self.assertUnsupported(self.charm.compute_sha256, "compute_sha256()") self.assertUnsupported(self.charm.get_revision, "get_revision()") self.assertUnsupported( lambda: self.charm.set_revision(1), "set_revision()") def test_compute_and_cache_sha256(self): """ The value computed by compute_sha256() on a child class is returned by get_sha256, and cached permanently. """ sha256 = ["mysha"] class CustomCharm(CharmBase): def compute_sha256(self): return sha256[0] charm = CustomCharm() self.assertEquals(charm.get_sha256(), "mysha") sha256 = ["anothervalue"] # Should still be the same, since the old one was cached. self.assertEquals(charm.get_sha256(), "mysha") class GetRevisionTest(TestCase): def assert_good_content(self, content, value): self.assertEquals(get_revision(content, None, None), value) def assert_bad_content(self, content): err = self.assertRaises( CharmError, get_revision, content, None, "path") self.assertEquals( str(err), "Error processing 'path': invalid charm revision %r" % content) def test_with_content(self): self.assert_good_content("0\n", 0) self.assert_good_content("123\n", 123) self.assert_bad_content("") self.assert_bad_content("-1\n") self.assert_bad_content("three hundred and six or so") def test_metadata_fallback(self): metadata = MetaData() self.assertEquals(get_revision(None, metadata, None), None) metadata.parse( serializer.yaml_dump( {"name": "x", "summary": "y", "description": "z","revision": 33}, )) self.assertEquals(get_revision(None, metadata, None), 33) juju-0.7.orig/juju/charm/tests/test_bundle.py0000644000000000000000000002153312135220114017510 0ustar 00000000000000import os import hashlib import inspect import shutil import stat import zipfile from juju.lib import serializer from juju.lib.testing import TestCase from juju.lib.filehash import compute_file_hash from juju.charm.metadata import MetaData from juju.charm.bundle import CharmBundle from juju.errors import CharmError from juju.charm.directory import CharmDirectory from juju.charm.provider import get_charm_from_path from juju.charm import tests repository_directory = os.path.join( os.path.dirname(inspect.getabsfile(tests)), "repository") sample_directory = os.path.join(repository_directory, "series", "dummy") class BundleTest(TestCase): def setUp(self): directory = CharmDirectory(sample_directory) # add sample directory self.filename = self.makeFile(suffix=".charm") directory.make_archive(self.filename) def copy_charm(self): dir_ = os.path.join(self.makeDir(), "sample") shutil.copytree(sample_directory, dir_) return dir_ def test_initialization(self): bundle = CharmBundle(self.filename) self.assertEquals(bundle.path, self.filename) def test_error_not_zip(self): filename = self.makeFile("@#$@$") err = self.assertRaises(CharmError, CharmBundle, filename) self.assertEquals( str(err), "Error processing %r: must be a zip file (File is not a zip file)" % filename) def test_error_zip_but_doesnt_have_metadata_file(self): filename = self.makeFile() zf = zipfile.ZipFile(filename, 'w') zf.writestr("README.txt", "This is not a valid charm.") zf.close() err = self.assertRaises(CharmError, CharmBundle, filename) self.assertEquals( str(err), "Error processing %r: charm does not contain required " "file 'metadata.yaml'" % filename) def test_no_revision_at_all(self): filename = self.makeFile() zf_dst = zipfile.ZipFile(filename, "w") zf_src = zipfile.ZipFile(self.filename, "r") for name in zf_src.namelist(): if name == "revision": continue zf_dst.writestr(name, zf_src.read(name)) zf_src.close() 
zf_dst.close() err = self.assertRaises(CharmError, CharmBundle, filename) self.assertEquals( str(err), "Error processing %r: has no revision" % filename) def test_revision_in_metadata(self): filename = self.makeFile() zf_dst = zipfile.ZipFile(filename, "w") zf_src = zipfile.ZipFile(self.filename, "r") for name in zf_src.namelist(): if name == "revision": continue content = zf_src.read(name) if name == "metadata.yaml": data = serializer.yaml_load(content) data["revision"] = 303 content = serializer.yaml_dump(data) zf_dst.writestr(name, content) zf_src.close() zf_dst.close() charm = CharmBundle(filename) self.assertEquals(charm.get_revision(), 303) def test_competing_revisions(self): zf = zipfile.ZipFile(self.filename, "a") zf.writestr("revision", "999") data = serializer.yaml_load(zf.read("metadata.yaml")) data["revision"] = 303 zf.writestr("metadata.yaml", serializer.yaml_dump(data)) zf.close() charm = CharmBundle(self.filename) self.assertEquals(charm.get_revision(), 999) def test_cannot_set_revision(self): charm = CharmBundle(self.filename) self.assertRaises(NotImplementedError, charm.set_revision, 123) def test_bundled_config(self): """Make sure that config is accessible from a bundle.""" from juju.charm.tests.test_config import sample_yaml_data bundle = CharmBundle(self.filename) self.assertEquals(bundle.config.get_serialization_data(), sample_yaml_data) def test_info(self): bundle = CharmBundle(self.filename) self.assertTrue(bundle.metadata is not None) self.assertTrue(isinstance(bundle.metadata, MetaData)) self.assertEquals(bundle.metadata.name, "dummy") self.assertEqual(bundle.type, "bundle") def test_as_bundle(self): bundle = CharmBundle(self.filename) self.assertEquals(bundle.as_bundle(), bundle) def test_executable_extraction(self): sample_directory = os.path.join( repository_directory, "series", "varnish-alternative") charm_directory = CharmDirectory(sample_directory) source_hook_path = os.path.join(sample_directory, "hooks", "install") self.assertTrue(os.access(source_hook_path, os.X_OK)) bundle = charm_directory.as_bundle() directory = bundle.as_directory() hook_path = os.path.join(directory.path, "hooks", "install") self.assertTrue(os.access(hook_path, os.X_OK)) config_path = os.path.join(directory.path, "config.yaml") self.assertFalse(os.access(config_path, os.X_OK)) def get_charm_sha256(self): return compute_file_hash(hashlib.sha256, self.filename) def test_compute_sha256(self): sha256 = self.get_charm_sha256() bundle = CharmBundle(self.filename) self.assertEquals(bundle.compute_sha256(), sha256) def test_charm_base_inheritance(self): """ get_sha256() should be implemented in the base class, and should use compute_sha256 to calculate the digest. 
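        The pattern under test, reduced to a sketch (illustrative names
        only, not the real implementation)::

            class Hashed(object):
                _sha256 = None

                def get_sha256(self):
                    if self._sha256 is None:
                        # compute once via the subclass hook...
                        self._sha256 = self.compute_sha256()
                    return self._sha256  # ...then serve the cached digest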
""" sha256 = self.get_charm_sha256() bundle = CharmBundle(self.filename) self.assertEquals(bundle.get_sha256(), sha256) def test_file_handle_as_path(self): sha256 = self.get_charm_sha256() fh = open(self.filename) bundle = CharmBundle(fh) self.assertEquals(bundle.get_sha256(), sha256) def test_extract_to(self): filename = self.makeFile() charm = get_charm_from_path(self.filename) f2 = charm.extract_to(filename) # f2 should be a charm directory self.assertInstance(f2, CharmDirectory) self.assertInstance(f2.get_sha256(), basestring) self.assertEqual(f2.path, filename) def test_extract_symlink(self): extract_dir = self.makeDir() charm_path = self.copy_charm() sym_path = os.path.join(charm_path, 'foobar') os.symlink('metadata.yaml', sym_path) charm_dir = CharmDirectory(charm_path) bundle = charm_dir.as_bundle() bundle.extract_to(extract_dir) self.assertIn("foobar", os.listdir(extract_dir)) self.assertTrue(os.path.islink(os.path.join(extract_dir, "foobar"))) self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')), 'metadata.yaml') # Verify we can extract it over again os.remove(sym_path) os.symlink('./config.yaml', sym_path) charm_dir = CharmDirectory(charm_path) bundle = charm_dir.as_bundle() bundle.extract_to(extract_dir) self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')), './config.yaml') def test_extract_symlink_mode(self): # lp:973260 - charms packed by different tools that record symlink # mode permissions differently (ie the charm store) don't extract # correctly. charm_path = self.copy_charm() sym_path = os.path.join(charm_path, 'foobar') os.symlink('metadata.yaml', sym_path) charm_dir = CharmDirectory(charm_path) normal_path = charm_dir.as_bundle().path zf_src = zipfile.ZipFile(normal_path, "r") foreign_path = os.path.join(self.makeDir(), "store.charm") zf_dst = zipfile.ZipFile(foreign_path, "w") for info in zf_src.infolist(): if info.filename == "foobar": # This is what the charm store does: info.external_attr = (stat.S_IFLNK | 0777) << 16 zf_dst.writestr(info, zf_src.read(info.filename)) zf_src.close() zf_dst.close() bundle = CharmBundle(foreign_path) extract_dir = self.makeDir() bundle.extract_to(extract_dir) self.assertIn("foobar", os.listdir(extract_dir)) self.assertTrue(os.path.islink(os.path.join(extract_dir, "foobar"))) self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')), 'metadata.yaml') def test_as_directory(self): filename = self.makeFile() charm = get_charm_from_path(self.filename) f2 = charm.as_directory() # f2 should be a charm directory self.assertInstance(f2, CharmDirectory) self.assertInstance(f2.get_sha256(), basestring) # verify that it was extracted to a new temp dirname self.assertNotEqual(f2.path, filename) fn = os.path.split(f2.path)[1] # verify that it used the expected prefix self.assertStartsWith(fn, "tmp") juju-0.7.orig/juju/charm/tests/test_config.py0000644000000000000000000001746212135220114017512 0ustar 00000000000000from StringIO import StringIO import sys import yaml from juju.lib import serializer from juju.lib.testing import TestCase from juju.charm.config import ConfigOptions from juju.charm.errors import ServiceConfigError, ServiceConfigValueError sample_configuration = """ options: title: default: My Title description: A descriptive title used for the service. type: string outlook: description: No default outlook. type: string username: default: admin001 description: The name of the initial account (given admin permissions). type: string skill-level: description: A number indicating skill. 
type: int """ sample_yaml_data = serializer.yaml_load(sample_configuration) sample_config_defaults = {"title": "My Title", "username": "admin001"} class ConfigOptionsTest(TestCase): def setUp(self): self.config = ConfigOptions() def test_load(self): """Validate we can load data or get expected errors.""" # load valid data filename = self.makeFile(sample_configuration) self.config.load(filename) self.assertEqual(self.config.get_serialization_data(), sample_yaml_data) # test with dict based data self.config.parse(sample_yaml_data) self.assertEqual(self.config.get_serialization_data(), sample_yaml_data) # and with an unhandled type self.assertRaises(TypeError, self.config.load, 1.234) def test_load_file(self): sample_path = self.makeFile(sample_configuration) config = ConfigOptions() config.load(sample_path) self.assertEqual(config.get_serialization_data(), sample_yaml_data) # and an expected exception # on an empty file empty_file = self.makeFile("") error = self.assertRaises(ServiceConfigError, config.load, empty_file) self.assertEqual( str(error), ("Error processing %r: " "Missing required service options metadata") % empty_file) # a missing filename is allowed config = config.load("missing_file") def test_defaults(self): self.config.parse(sample_configuration) defaults = self.config.get_defaults() self.assertEqual(defaults, sample_config_defaults) def test_defaults_validated(self): e = self.assertRaises( ServiceConfigValueError, self.config.parse, serializer.yaml_dump( {"options": { "foobar": { "description": "beyond what?", "type": "string", "default": True}}})) self.assertEqual( str(e), "Invalid value for foobar: true") def test_as_dict(self): # load valid data filename = self.makeFile(sample_configuration) self.config.load(filename) # Verify dictionary serialization schema_dict = self.config.as_dict() self.assertEqual( schema_dict, serializer.yaml_load(sample_configuration)["options"]) # Verify the dictionary is a copy # Poke at embedded objects schema_dict["outlook"]["default"] = 1 schema2_dict = self.config.as_dict() self.assertFalse("default" in schema2_dict["outlook"]) def test_parse(self): """Verify that parse checks and raises.""" # no options dict self.assertRaises( ServiceConfigError, self.config.parse, {"foo": "bar"}) # and with bad data expected exceptions error = self.assertRaises(yaml.YAMLError, self.config.parse, "foo: [1, 2", "/tmp/zamboni") self.assertIn("/tmp/zamboni", str(error)) def test_validate(self): sample_input = {"title": "Helpful Title", "outlook": "Peachy"} self.config.parse(sample_configuration) data = self.config.validate(sample_input) # This should include an overridden value, a default and a new value. 
self.assertEqual(data, {"outlook": "Peachy", "title": "Helpful Title"}) # now try to set a value outside the expected sample_input["bad"] = "value" error = self.assertRaises(ServiceConfigValueError, self.config.validate, sample_input) self.assertEqual(error.message, "bad is not a valid configuration option.") # validating with an empty instance # the service takes no options config = ConfigOptions() self.assertRaises( ServiceConfigValueError, config.validate, sample_input) def test_validate_float(self): self.config.parse(serializer.yaml_dump( {"options": { "score": { "description": "A number indicating score.", "type": "float"}}})) error = self.assertRaises(ServiceConfigValueError, self.config.validate, {"score": "arg"}) self.assertEquals(str(error), "Invalid value for score: arg") data = self.config.validate({"score": "82.1"}) self.assertEqual(data, {"score": 82.1}) def test_validate_string(self): self.config.parse(sample_configuration) error = self.assertRaises(ServiceConfigValueError, self.config.validate, {"title": True}) self.assertEquals(str(error), "Invalid value for title: true") data = self.config.validate({"title": u"Good"}) self.assertEqual(data, {"title": u"Good"}) def test_validate_boolean(self): self.config.parse(serializer.yaml_dump( {"options": { "active": { "description": "A boolean indicating activity.", "type": "boolean"}}})) error = self.assertRaises(ServiceConfigValueError, self.config.validate, {"active": "Blob"}) self.assertEquals(str(error), "Invalid value for active: Blob") data = self.config.validate({"active": "False"}) self.assertEqual(data, {"active": False}) data = self.config.validate({"active": "True"}) self.assertEqual(data, {"active": True}) data = self.config.validate({"active": True}) self.assertEqual(data, {"active": True}) def test_validate_integer(self): self.config.parse(sample_configuration) error = self.assertRaises(ServiceConfigValueError, self.config.validate, {"skill-level": "NaN"}) self.assertEquals(str(error), "Invalid value for skill-level: NaN") data = self.config.validate({"skill-level": "9001"}) # its over 9000! self.assertEqual(data, {"skill-level": 9001}) def test_validate_with_obsolete_str(self): """ Test the handling for the obsolete 'str' option type (it's 'string' now). Remove support for it after a while, and take this test with it. """ config = serializer.yaml_load(sample_configuration) config["options"]["title"]["type"] = "str" obsolete_config = serializer.yaml_dump(config) sio = StringIO() self.patch(sys, "stderr", sio) self.config.parse(obsolete_config) data = self.config.validate({"title": "Helpful Title"}) self.assertEqual(data["title"], "Helpful Title") self.assertIn("obsolete 'str'", sio.getvalue()) # Trying it again, it should not warn since we don't want # to pester the charm author. 
sio.truncate(0) self.config.parse(obsolete_config) data = self.config.validate({"title": "Helpful Title"}) self.assertEqual(data["title"], "Helpful Title") self.assertEqual(sio.getvalue(), "") juju-0.7.orig/juju/charm/tests/test_directory.py0000644000000000000000000002067612135220114020252 0ustar 00000000000000import gc import os import hashlib import inspect import shutil import zipfile from juju.errors import CharmError, FileNotFound from juju.charm.errors import InvalidCharmFile from juju.charm.metadata import MetaData from juju.charm.directory import CharmDirectory from juju.charm.bundle import CharmBundle from juju.lib import serializer from juju.lib.filehash import compute_file_hash from juju.charm import tests from juju.charm.tests.test_repository import RepositoryTestBase sample_directory = os.path.join( os.path.dirname( inspect.getabsfile(tests)), "repository", "series", "dummy") class DirectoryTest(RepositoryTestBase): def setUp(self): super(DirectoryTest, self).setUp() # Ensure the empty/ directory exists under the dummy sample # charm. Depending on how the source code is exported, # empty directories may be ignored. empty_dir = os.path.join(sample_directory, "empty") if not os.path.isdir(empty_dir): os.mkdir(empty_dir) def copy_charm(self): dir_ = os.path.join(self.makeDir(), "sample") shutil.copytree(sample_directory, dir_) return dir_ def delete_revision(self, dir_): os.remove(os.path.join(dir_, "revision")) def set_metadata_revision(self, dir_, revision): metadata_path = os.path.join(dir_, "metadata.yaml") with open(metadata_path) as f: data = serializer.yaml_load(f.read()) data["revision"] = 999 with open(metadata_path, "w") as f: f.write(serializer.yaml_dump(data)) def test_metadata_is_required(self): directory = self.makeDir() self.assertRaises(FileNotFound, CharmDirectory, directory) def test_no_revision(self): dir_ = self.copy_charm() self.delete_revision(dir_) charm = CharmDirectory(dir_) self.assertEquals(charm.get_revision(), 0) with open(os.path.join(dir_, "revision")) as f: self.assertEquals(f.read(), "0\n") def test_nonsense_revision(self): dir_ = self.copy_charm() with open(os.path.join(dir_, "revision"), "w") as f: f.write("shifty look") err = self.assertRaises(CharmError, CharmDirectory, dir_) self.assertEquals( str(err), "Error processing %r: invalid charm revision 'shifty look'" % dir_) def test_revision_in_metadata(self): dir_ = self.copy_charm() self.delete_revision(dir_) self.set_metadata_revision(dir_, 999) log = self.capture_logging("juju.charm") charm = CharmDirectory(dir_) self.assertEquals(charm.get_revision(), 999) self.assertIn( "revision field is obsolete. Move it to the 'revision' file.", log.getvalue()) def test_competing_revisions(self): dir_ = self.copy_charm() self.set_metadata_revision(dir_, 999) log = self.capture_logging("juju.charm") charm = CharmDirectory(dir_) self.assertEquals(charm.get_revision(), 1) self.assertIn( "revision field is obsolete. 
Move it to the 'revision' file.", log.getvalue()) def test_set_revision(self): dir_ = self.copy_charm() charm = CharmDirectory(dir_) charm.set_revision(123) self.assertEquals(charm.get_revision(), 123) with open(os.path.join(dir_, "revision")) as f: self.assertEquals(f.read(), "123\n") def test_info(self): directory = CharmDirectory(sample_directory) self.assertTrue(directory.metadata is not None) self.assertTrue(isinstance(directory.metadata, MetaData)) self.assertEquals(directory.metadata.name, "dummy") self.assertEquals(directory.type, "dir") def test_make_archive(self): # make archive from sample directory directory = CharmDirectory(sample_directory) f = self.makeFile(suffix=".charm") directory.make_archive(f) # open archive in .zip-format and assert integrity from zipfile import ZipFile zf = ZipFile(f) self.assertEqual(zf.testzip(), None) # assert included included = [info.filename for info in zf.infolist()] self.assertEqual( set(included), set(("metadata.yaml", "empty/", "src/", "src/hello.c", "config.yaml", "hooks/", "hooks/install", "revision"))) def test_as_bundle(self): directory = CharmDirectory(self.sample_dir1) charm_bundle = directory.as_bundle() self.assertEquals(type(charm_bundle), CharmBundle) self.assertEquals(charm_bundle.metadata.name, "sample") self.assertIn("sample-1.charm", charm_bundle.path) total_compressed = 0 total_uncompressed = 0 zip_file = zipfile.ZipFile(charm_bundle.path) for n in zip_file.namelist(): info = zip_file.getinfo(n) total_compressed += info.compress_size total_uncompressed += info.file_size self.assertTrue(total_compressed < total_uncompressed) def test_as_bundle_file_lifetime(self): """ The temporary bundle file created should have a life time equivalent to that of the directory object itself. """ directory = CharmDirectory(self.sample_dir1) charm_bundle = directory.as_bundle() gc.collect() self.assertTrue(os.path.isfile(charm_bundle.path)) del directory gc.collect() self.assertFalse(os.path.isfile(charm_bundle.path)) def test_compute_sha256(self): """ Computing the sha256 of a directory will use the bundled charm, since the hash of the file itself is needed. """ directory = CharmDirectory(self.sample_dir1) sha256 = directory.compute_sha256() charm_bundle = directory.as_bundle() self.assertEquals(type(charm_bundle), CharmBundle) self.assertEquals(compute_file_hash(hashlib.sha256, charm_bundle.path), sha256) def test_as_bundle_with_relative_path(self): """ Ensure that as_bundle works correctly with relative paths. """ current_dir = os.getcwd() os.chdir(self.sample_dir2) self.addCleanup(os.chdir, current_dir) charm_dir = "../%s" % os.path.basename(self.sample_dir1) directory = CharmDirectory(charm_dir) charm_bundle = directory.as_bundle() self.assertEquals(type(charm_bundle), CharmBundle) self.assertEquals(charm_bundle.metadata.name, "sample") def test_charm_base_inheritance(self): """ get_sha256() should be implemented in the base class, and should use compute_sha256 to calculate the digest. 
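        The digest is a plain streaming hash of the bundled file; a
        self-contained equivalent of the compute_file_hash() helper used
        below (a sketch; the real helper in juju.lib.filehash may differ
        in detail)::

            import hashlib

            def compute_file_hash(hash_type, path, chunk_size=65536):
                digest = hash_type()
                with open(path, "rb") as f:
                    data = f.read(chunk_size)
                    while data:
                        digest.update(data)
                        data = f.read(chunk_size)
                return digest.hexdigest()

            # compute_file_hash(hashlib.sha256, "some.charm")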
""" directory = CharmDirectory(self.sample_dir1) bundle = directory.as_bundle() digest = compute_file_hash(hashlib.sha256, bundle.path) self.assertEquals(digest, directory.get_sha256()) def test_as_directory(self): directory = CharmDirectory(self.sample_dir1) self.assertIs(directory.as_directory(), directory) def test_config(self): """Validate that ConfigOptions are available on the charm""" from juju.charm.tests.test_config import sample_yaml_data directory = CharmDirectory(sample_directory) self.assertEquals(directory.config.get_serialization_data(), sample_yaml_data) def test_file_type(self): charm_dir = self.copy_charm() os.mkfifo(os.path.join(charm_dir, "foobar")) directory = CharmDirectory(charm_dir) e = self.assertRaises(InvalidCharmFile, directory.as_bundle) self.assertIn("foobar' Invalid file type for a charm", str(e)) def test_internal_symlink(self): charm_path = self.copy_charm() external_file = self.makeFile(content='baz') os.symlink(external_file, os.path.join(charm_path, "foobar")) directory = CharmDirectory(charm_path) e = self.assertRaises(InvalidCharmFile, directory.as_bundle) self.assertIn("foobar' Absolute links are invalid", str(e)) def test_extract_symlink(self): charm_path = self.copy_charm() external_file = self.makeFile(content='lorem ipsum') os.symlink(external_file, os.path.join(charm_path, "foobar")) directory = CharmDirectory(charm_path) e = self.assertRaises(InvalidCharmFile, directory.as_bundle) self.assertIn("foobar' Absolute links are invalid", str(e)) juju-0.7.orig/juju/charm/tests/test_errors.py0000644000000000000000000000474612135220114017562 0ustar 00000000000000import os from juju.charm.errors import ( CharmURLError, CharmNotFound, InvalidCharmHook, NewerCharmNotFound, RepositoryNotFound, ServiceConfigError, InvalidCharmFile, MetaDataError) from juju.errors import CharmError, JujuError from juju.lib.testing import TestCase class CharmErrorsTest(TestCase): def test_NewerCharmNotFound(self): error = NewerCharmNotFound("local:name:21") self.assertEquals( str(error), "Charm 'local:name:21' is the latest revision known") self.assertTrue(isinstance(error, CharmError)) def test_CharmURLError(self): error = CharmURLError("foobar:/adfsa:slashot", "bad magic") self.assertEquals( str(error), "Bad charm URL 'foobar:/adfsa:slashot': bad magic") self.assertTrue(isinstance(error, CharmError)) def test_CharmNotFound(self): error = CharmNotFound("/path", "cassandra") self.assertEquals( str(error), "Charm 'cassandra' not found in repository /path") self.assertTrue(isinstance(error, JujuError)) def test_InvalidCharmHook(self): error = InvalidCharmHook("mysql", "magic-relation-changed") self.assertEquals( str(error), "Charm 'mysql' does not contain hook 'magic-relation-changed'") self.assertTrue(isinstance(error, CharmError)) def test_InvalidCharmFile(self): error = InvalidCharmFile("mysql", "hooks/foobar", "bad file") self.assertEquals( str(error), "Charm 'mysql' invalid file 'hooks/foobar' bad file") self.assertTrue(isinstance(error, CharmError)) def test_MetaDataError(self): error = MetaDataError("foobar is bad") self.assertEquals( str(error), "foobar is bad") self.assertTrue(isinstance(error, CharmError)) def test_RepositoryNotFound(self): error = RepositoryNotFound(None) self.assertEquals(str(error), "No repository specified") self.assertTrue(isinstance(error, JujuError)) path = os.path.join(self.makeDir(), "missing") error = RepositoryNotFound(path) self.assertEquals(str(error), "No repository found at %r" % path) self.assertTrue(isinstance(error, JujuError)) def 
test_ServiceConfigError(self):
        error = ServiceConfigError("foobar", "blah")
        self.assertEquals(str(error), "Error processing 'foobar': blah")
        self.assertTrue(isinstance(error, JujuError))
juju-0.7.orig/juju/charm/tests/test_metadata.py0000644000000000000000000003477012135220114020026 0ustar 00000000000000
# -*- encoding: utf-8 -*-
import os
import yaml
import inspect

from juju.charm import tests
from juju.charm.metadata import (
    MetaData, MetaDataError, InterfaceExpander, SchemaError)
from juju.errors import FileNotFound
from juju.lib.testing import TestCase
from juju.lib import serializer

test_repository_path = os.path.join(
    os.path.dirname(inspect.getabsfile(tests)), "repository")
sample_path = os.path.join(
    test_repository_path, "series", "dummy", "metadata.yaml")
sample_configuration = open(sample_path).read()


class MetaDataTest(TestCase):

    def setUp(self):
        self.metadata = MetaData()
        self.sample = sample_configuration

    def change_sample(self):
        """Return a context manager for hacking the sample data.

        This should be used as follows:

            with self.change_sample() as data:
                data["some-key"] = "some-data"

        The changed sample file content will be available in self.sample
        once the context is done executing.
        """

        class HackManager(object):

            def __enter__(mgr):
                mgr.data = serializer.yaml_load(self.sample)
                return mgr.data

            def __exit__(mgr, exc_type, exc_val, exc_tb):
                self.sample = serializer.yaml_dump(mgr.data)
                return False

        return HackManager()

    def test_path_argument_loads_charm_info(self):
        info = MetaData(sample_path)
        self.assertEquals(info.name, "dummy")

    def test_check_basic_info_before_loading(self):
        """
        Attributes should be set to None before anything is loaded.
        """
        self.assertEquals(self.metadata.name, None)
        self.assertEquals(self.metadata.obsolete_revision, None)
        self.assertEquals(self.metadata.summary, None)
        self.assertEquals(self.metadata.description, None)
        self.assertEquals(self.metadata.is_subordinate, False)
        self.assertEquals(self.metadata.format, 1)

    def test_parse_and_check_basic_info(self):
        """
        Parsing the content file should work. :-) Basic information will
        be available as attributes of the info file.
        """
        self.metadata.parse(self.sample)
        self.assertEquals(self.metadata.name, "dummy")
        self.assertEquals(self.metadata.obsolete_revision, None)
        self.assertEquals(self.metadata.summary, u"That's a dummy charm.")
        self.assertEquals(self.metadata.description,
                          u"This is a longer description which\n"
                          u"potentially contains multiple lines.\n")
        self.assertEquals(self.metadata.is_subordinate, False)

    def test_is_subordinate(self):
        """Validate rules for detecting proper subordinate charms are working"""
        logging_path = os.path.join(
            test_repository_path, "series", "logging", "metadata.yaml")
        logging_configuration = open(logging_path).read()
        self.metadata.parse(logging_configuration)
        self.assertTrue(self.metadata.is_subordinate)

    def test_subordinate_without_container_relation(self):
        """Validate rules for detecting proper subordinate charms are working

        Case where no container relation is specified.
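        For reference (a sketch; field values are illustrative), the minimal
        metadata shape that satisfies the subordinate rule pairs
        "subordinate: true" with a container-scoped `requires` entry::

            subordinate: true
            requires:
              logging-directory:
                interface: logging
                scope: container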
""" with self.change_sample() as data: data["subordinate"] = True error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample, "some/path") self.assertIn("some/path labeled subordinate but lacking scope:container `requires` relation", str(error)) def test_scope_constraint(self): """Verify the scope constrain is parsed properly.""" logging_path = os.path.join( test_repository_path, "series", "logging", "metadata.yaml") logging_configuration = open(logging_path).read() self.metadata.parse(logging_configuration) # Verify the scope settings self.assertEqual(self.metadata.provides[u"logging-client"]["scope"], "global") self.assertEqual(self.metadata.requires[u"logging-directory"]["scope"], "container") self.assertTrue(self.metadata.is_subordinate) def assert_parse_with_revision(self, with_path): """ Parsing the content file should work. :-) Basic information will be available as attributes of the info file. """ with self.change_sample() as data: data["revision"] = 123 log = self.capture_logging("juju.charm") self.metadata.parse(self.sample, "some/path" if with_path else None) if with_path: self.assertIn( "some/path: revision field is obsolete. Move it to the " "'revision' file.", log.getvalue()) self.assertEquals(self.metadata.name, "dummy") self.assertEquals(self.metadata.obsolete_revision, 123) self.assertEquals(self.metadata.summary, u"That's a dummy charm.") self.assertEquals(self.metadata.description, u"This is a longer description which\n" u"potentially contains multiple lines.\n") self.assertEquals( self.metadata.get_serialization_data()["revision"], 123) def test_parse_with_revision(self): self.assert_parse_with_revision(True) self.assert_parse_with_revision(False) def test_load_calls_parse_calls_parse_serialzation_data(self): """ We'll break the rules a little bit here and test the implementation itself just so that we don't have to test *everything* twice. If load() calls parse() which calls parse_serialzation_data(), then whatever happens with parse_serialization_data(), happens with the others. """ serialization_data = {"Hi": "there!"} yaml_data = serializer.yaml_dump(serialization_data) path = self.makeFile(yaml_data) mock = self.mocker.patch(self.metadata) mock.parse(yaml_data, path) self.mocker.passthrough() mock.parse_serialization_data(serialization_data, path) self.mocker.replay() self.metadata.load(path) # Do your magic Mocker! def test_metadata_parse_error_includes_path_with_load(self): broken = ("""\ description: helo name: hi requires: {interface: zebra revision: 0 summary: hola""") path = self.makeFile() e = self.assertRaises( yaml.YAMLError, self.metadata.parse, broken, path) self.assertIn(path, str(e)) def test_schema_error_includes_path_with_load(self): """ When using load(), the exception message should mention the path name which was attempted. """ with self.change_sample() as data: data["revision"] = "1" filename = self.makeFile(self.sample) error = self.assertRaises(MetaDataError, self.metadata.load, filename) self.assertEquals(str(error), "Bad data in charm info: %s: revision: " "expected int, got '1'" % filename) def test_load_missing_file(self): """ When using load(), the exception message should mention the path name which was attempted. """ filename = self.makeFile() error = self.assertRaises(FileNotFound, self.metadata.load, filename) self.assertEquals(error.path, filename) def test_name_summary_and_description_are_utf8(self): """ Textual fields are decoded to unicode by the schema using UTF-8. 
""" value = u"áéíóú" str_value = value.encode("utf-8") with self.change_sample() as data: data["name"] = str_value data["summary"] = str_value data["description"] = str_value self.metadata.parse(self.sample) self.assertEquals(self.metadata.name, value) self.assertEquals(self.metadata.summary, value) self.assertEquals(self.metadata.description, value) def test_get_serialized_data(self): """ The get_serialization_data() function should return an object which may be passed to parse_serialization_data() to restore the state of the instance. """ self.metadata.parse(self.sample) serialization_data = self.metadata.get_serialization_data() self.assertEquals(serialization_data["name"], "dummy") def test_provide_implicit_relation(self): """Verify providing a juju-* reserved relation errors""" with self.change_sample() as data: data["provides"] = {"juju-foo": {"interface": "juju-magic", "scope": "container"}} # verify relation level error error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample) self.assertIn("Charm dummy attempting to provide relation in implicit relation namespace: juju-foo", str(error)) # verify interface level error with self.change_sample() as data: data["provides"] = {"foo-rel": {"interface": "juju-magic", "scope": "container"}} error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample) self.assertIn( "Charm dummy attempting to provide interface in implicit namespace: juju-magic (relation: foo-rel)", str(error)) def test_format(self): # Defaults to 1 self.metadata.parse(self.sample) self.assertEquals(self.metadata.format, 1) # Explicitly set to 1 with self.change_sample() as data: data["format"] = 1 self.metadata.parse(self.sample) self.assertEquals(self.metadata.format, 1) # Explicitly set to 2 with self.change_sample() as data: data["format"] = 2 self.metadata.parse(self.sample) self.assertEquals(self.metadata.format, 2) # Explicitly set to 3; however this is an unknown format for Juju with self.change_sample() as data: data["format"] = 3 error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample) self.assertIn("Charm dummy uses an unknown format: 3", str(error)) class ParseTest(TestCase): """Test the parsing of some well-known sample files""" def get_metadata(self, charm_name): """Get the associated metadata for a given charm, eg ``wordpress``""" metadata = MetaData(os.path.join( test_repository_path, "series", charm_name, "metadata.yaml")) self.assertEqual(metadata.name, charm_name) return metadata def test_mysql_sample(self): """Test parse of a relation written in shorthand format. 
Such relations are defined as follows:: provides: server: mysql """ metadata = self.get_metadata("mysql") self.assertEqual(metadata.peers, None) self.assertEqual( metadata.provides["server"], {"interface": "mysql", "limit": None, "optional": False, "scope": "global"}) self.assertEqual(metadata.requires, None) def test_riak_sample(self): """Test multiple interfaces defined in long form, with defaults.""" metadata = self.get_metadata("riak") self.assertEqual( metadata.peers["ring"], {"interface": "riak", "limit": 1, "optional": False, "scope": "global"}) self.assertEqual( metadata.provides["endpoint"], {"interface": "http", "limit": None, "optional": False, "scope": "global"}) self.assertEqual( metadata.provides["admin"], {"interface": "http", "limit": None, "optional": False, "scope": "global"}) self.assertEqual(metadata.requires, None) def test_wordpress_sample(self): """Test multiple interfaces defined in long form, without defaults.""" metadata = self.get_metadata("wordpress") self.assertEqual(metadata.peers, None) self.assertEqual( metadata.provides["url"], {"interface": "http", "limit": None, "optional": False, "scope": "global"}) self.assertEqual( metadata.requires["db"], {"interface": "mysql", "limit": 1, "optional": False, "scope": "global"}) self.assertEqual( metadata.requires["cache"], {"interface": "varnish", "limit": 2, "optional": True, "scope": "global"}) def test_interface_expander(self): """Test rewriting of a given interface specification into long form. InterfaceExpander uses `coerce` to do one of two things: - Rewrite shorthand to the long form used for actual storage - Fills in defaults, including a configurable `limit` This test ensures test coverage on each of these branches, along with ensuring the conversion object properly raises SchemaError exceptions on invalid data. 
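        The expansion itself, as a standalone sketch (mirroring the expected
        results below, not the real implementation)::

            def expand(value, limit=None):
                spec = value if isinstance(value, dict) else {"interface": value}
                out = dict(spec)
                out.setdefault("limit", limit)
                out.setdefault("optional", False)
                out.setdefault("scope", "global")
                return out

            assert expand("http") == {
                "interface": "http", "limit": None,
                "optional": False, "scope": "global"}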
""" expander = InterfaceExpander(limit=None) # shorthand is properly rewritten self.assertEqual( expander.coerce("http", ["provides"]), {"interface": "http", "limit": None, "optional": False, "scope": "global"}) # defaults are properly applied self.assertEqual( expander.coerce( {"interface": "http"}, ["provides"]), {"interface": "http", "limit": None, "optional": False, "scope": "global"}) self.assertEqual( expander.coerce( {"interface": "http", "limit": 2}, ["provides"]), {"interface": "http", "limit": 2, "optional": False, "scope": "global"}) self.assertEqual( expander.coerce( {"interface": "http", "optional": True, "scope": "global"}, ["provides"]), {"interface": "http", "limit": None, "optional": True, "scope": "global"}) # invalid data raises SchemaError self.assertRaises( SchemaError, expander.coerce, 42, ["provides"]) self.assertRaises( SchemaError, expander.coerce, {"interface": "http", "optional": None, "scope": "global"}, ["provides"]) self.assertRaises( SchemaError, expander.coerce, {"interface": "http", "limit": "none, really"}, ["provides"]) # can change `limit` default expander = InterfaceExpander(limit=1) self.assertEqual( expander.coerce("http", ["consumes"]), {"interface": "http", "limit": 1, "optional": False, "scope": "global"}) juju-0.7.orig/juju/charm/tests/test_provider.py0000644000000000000000000000176212135220114020073 0ustar 00000000000000import os import inspect from juju.lib.testing import TestCase from juju.charm import tests from juju.charm.provider import get_charm_from_path sample_directory = os.path.join( os.path.dirname(inspect.getabsfile(tests)), "repository", "series", "dummy") class CharmFromPathTest(TestCase): def test_charm_from_path(self): # from a directory charm = get_charm_from_path(sample_directory) assert charm.get_sha256() filename = self.makeFile(suffix=".charm") charm.make_archive(filename) # and from a bundle charm = get_charm_from_path(filename) self.assertEquals(charm.path, filename) self.assertInstance(charm.get_sha256(), basestring) # and validate the implementation detail that calling it twice # doesn't result in an error after caching the callable charm = get_charm_from_path(filename) self.assertEquals(charm.path, filename) self.assertInstance(charm.get_sha256(), basestring) juju-0.7.orig/juju/charm/tests/test_publisher.py0000644000000000000000000001776312135220114020246 0ustar 00000000000000import fcntl import os import zookeeper from twisted.internet.defer import inlineCallbacks, fail from twisted.python.failure import Failure from txzookeeper.tests.utils import deleteTree from juju.charm.bundle import CharmBundle from juju.charm.directory import CharmDirectory from juju.charm.publisher import CharmPublisher from juju.charm.tests import local_charm_id from juju.lib import under, serializer from juju.providers.dummy import FileStorage from juju.state.charm import CharmStateManager from juju.state.errors import StateChanged from juju.environment.tests.test_config import ( EnvironmentsConfigTestBase, SAMPLE_ENV) from juju.lib.mocker import MATCH from .test_repository import RepositoryTestBase def _count_open_files(): count = 0 for sfd in os.listdir("/proc/self/fd"): ifd = int(sfd) if ifd < 3: continue try: fcntl.fcntl(ifd, fcntl.F_GETFD) count += 1 except IOError: pass return count class CharmPublisherTest(RepositoryTestBase): @inlineCallbacks def setUp(self): super(CharmPublisherTest, self).setUp() zookeeper.set_debug_level(0) self.charm = CharmDirectory(self.sample_dir1) self.charm_id = local_charm_id(self.charm) self.charm_key = 
under.quote(self.charm_id)
        # provider storage key
        self.charm_storage_key = under.quote(
            "%s:%s" % (self.charm_id, self.charm.get_sha256()))
        self.client = self.get_zookeeper_client()
        self.storage_dir = self.makeDir()
        self.storage = FileStorage(self.storage_dir)
        self.publisher = CharmPublisher(self.client, self.storage)
        yield self.client.connect()
        yield self.client.create("/charms")

    def tearDown(self):
        deleteTree("/", self.client.handle)
        self.client.close()
        super(CharmPublisherTest, self).tearDown()

    @inlineCallbacks
    def test_add_charm_and_publish(self):
        open_file_count = _count_open_files()
        yield self.publisher.add_charm(self.charm_id, self.charm)
        result = yield self.publisher.publish()
        self.assertEquals(_count_open_files(), open_file_count)

        children = yield self.client.get_children("/charms")
        self.assertEqual(children, [self.charm_key])
        fh = yield self.storage.get(self.charm_storage_key)
        bundle = CharmBundle(fh)
        self.assertEqual(self.charm.get_sha256(), bundle.get_sha256())
        self.assertEqual(
            result[0].bundle_url, "file://%s/%s" % (
                self.storage_dir, self.charm_storage_key))

    @inlineCallbacks
    def test_published_charm_sans_unicode(self):
        yield self.publisher.add_charm(self.charm_id, self.charm)
        yield self.publisher.publish()
        data, stat = yield self.client.get("/charms/%s" % self.charm_key)
        self.assertNotIn("unicode", data)

    @inlineCallbacks
    def test_add_charm_with_concurrent(self):
        """
        Publishing a charm that has been published concurrently after
        add_charm works fine; it will write to storage regardless. Using
        the sha256 as part of the storage key helps ensure uniqueness of
        the bits. The sha256 is also stored with the charm state. This
        relation between the charm state and the binary bits guarantees
        that any published charm in zookeeper will use the binary bits
        that it was published with.
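        The conflict check exercised here (a sketch of _eb_verify_duplicate
        in juju/charm/publisher.py): on NodeExistsException the publisher
        re-reads the charm state and compares digests::

            if charm_state.get_sha256() != charm.get_sha256():
                raise StateChanged(
                    "Concurrent upload of charm has different checksum %s"
                    % charm_id)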
""" yield self.publisher.add_charm(self.charm_id, self.charm) concurrent_publisher = CharmPublisher( self.client, self.storage) charm = CharmDirectory(self.sample_dir1) yield concurrent_publisher.add_charm(self.charm_id, charm) yield self.publisher.publish() # modify the charm to create a conflict scenario self.makeFile("zebra", path=os.path.join(self.sample_dir1, "junk.txt")) # assert the charm now has a different sha post modification modified_charm_sha = charm.get_sha256() self.assertNotEqual( modified_charm_sha, self.charm.get_sha256()) # verify publishing raises a stateerror def verify_failure(result): if not isinstance(result, Failure): self.fail("Should have raised state error") result.trap(StateChanged) return True yield concurrent_publisher.publish().addBoth(verify_failure) # verify the zk state charm_nodes = yield self.client.get_children("/charms") self.assertEqual(charm_nodes, [self.charm_key]) content, stat = yield self.client.get( "/charms/%s" % charm_nodes[0]) # assert the checksum matches the initially published checksum self.assertEqual( serializer.yaml_load(content)["sha256"], self.charm.get_sha256()) store_path = os.path.join(self.storage_dir, self.charm_storage_key) self.assertTrue(os.path.exists(store_path)) # and the binary bits where stored modified_charm_storage_key = under.quote( "%s:%s" % (self.charm_id, modified_charm_sha)) modified_store_path = os.path.join( self.storage_dir, modified_charm_storage_key) self.assertTrue(os.path.exists(modified_store_path)) @inlineCallbacks def test_add_charm_with_concurrent_removal(self): """ If a charm is published, and it detects that the charm exists already exists, it will attempt to retrieve the charm state to verify there is no checksum mismatch. If concurrently the charm is removed, the publisher should fail with a statechange error. """ manager = self.mocker.patch(CharmStateManager) manager.get_charm_state(self.charm_id) self.mocker.passthrough() def match_charm_bundle(bundle): return isinstance(bundle, CharmBundle) def match_charm_url(url): return url.startswith("file://") manager.add_charm_state( self.charm_id, MATCH(match_charm_bundle), MATCH(match_charm_url)) self.mocker.result(fail(zookeeper.NodeExistsException())) manager.get_charm_state(self.charm_id) self.mocker.result(fail(zookeeper.NoNodeException())) self.mocker.replay() yield self.publisher.add_charm(self.charm_id, self.charm) yield self.failUnlessFailure(self.publisher.publish(), StateChanged) @inlineCallbacks def test_add_charm_already_known(self): """Adding an existing charm, is an effective noop, as its not added to the internal publisher queue. 
""" output = self.capture_logging("juju.charm") # Do an initial publishing of the charm scheduled = yield self.publisher.add_charm(self.charm_id, self.charm) self.assertTrue(scheduled) result = yield self.publisher.publish() self.assertEqual(result[0].name, self.charm.metadata.name) publisher = CharmPublisher(self.client, self.storage) scheduled = yield publisher.add_charm(self.charm_id, self.charm) self.assertFalse(scheduled) scheduled = yield publisher.add_charm(self.charm_id, self.charm) self.assertFalse(scheduled) result = yield publisher.publish() self.assertEqual(result[0].name, self.charm.metadata.name) self.assertEqual(result[1].name, self.charm.metadata.name) self.assertIn( "Using cached charm version of %s" % self.charm.metadata.name, output.getvalue()) class EnvironmentPublisherTest(EnvironmentsConfigTestBase): def setUp(self): super(EnvironmentPublisherTest, self).setUp() self.write_config(SAMPLE_ENV) self.config.load() self.environment = self.config.get("myfirstenv") zookeeper.set_debug_level(0) @inlineCallbacks def test_publisher_for_environment(self): publisher = yield CharmPublisher.for_environment(self.environment) self.assertTrue(isinstance(publisher, CharmPublisher)) juju-0.7.orig/juju/charm/tests/test_repository.py0000644000000000000000000005513412135220114020462 0ustar 00000000000000import json import os import inspect import shutil from twisted.internet.defer import fail, inlineCallbacks, succeed from twisted.web.error import Error from txaws.client.ssl import VerifyingContextFactory from juju.charm.directory import CharmDirectory from juju.charm.errors import CharmNotFound, CharmURLError, RepositoryNotFound from juju.charm.repository import ( LocalCharmRepository, RemoteCharmRepository, resolve, CS_STORE_URL) from juju.charm.url import CharmURL from juju.charm import provider from juju.errors import CharmError from juju.lib import under from juju.charm import tests from juju.lib.mocker import ANY, MATCH from juju.lib.testing import TestCase unbundled_repository = os.path.join( os.path.dirname(inspect.getabsfile(tests)), "repository") class RepositoryTestBase(TestCase): @inlineCallbacks def setUp(self): yield super(RepositoryTestBase, self).setUp() self.bundled_repo_path = self.makeDir() os.mkdir(os.path.join(self.bundled_repo_path, "series")) self.unbundled_repo_path = self.makeDir() os.rmdir(self.unbundled_repo_path) shutil.copytree(unbundled_repository, self.unbundled_repo_path) self.sample_dir1 = os.path.join( self.unbundled_repo_path, "series", "old") self.sample_dir2 = os.path.join( self.unbundled_repo_path, "series", "new") class LocalRepositoryTest(RepositoryTestBase): def setUp(self): super(LocalRepositoryTest, self).setUp() # bundle sample charms CharmDirectory(self.sample_dir1).make_archive( os.path.join(self.bundled_repo_path, "series", "old.charm")) CharmDirectory(self.sample_dir2).make_archive( os.path.join(self.bundled_repo_path, "series", "new.charm")) # define repository objects self.repository1 = LocalCharmRepository(self.unbundled_repo_path) self.repository2 = LocalCharmRepository(self.bundled_repo_path) self.output = self.capture_logging("juju.charm") def assert_there(self, name, repo, revision, latest_revision=None): url = self.charm_url(name) charm = yield repo.find(url) self.assertEquals(charm.get_revision(), revision) latest = yield repo.latest(url) self.assertEquals(latest, latest_revision or revision) @inlineCallbacks def assert_not_there(self, name, repo, revision=None): url = self.charm_url(name) msg = "Charm 'local:series/%s' not found in 
repository %s" % ( name, repo.path) err = yield self.assertFailure(repo.find(url), CharmNotFound) self.assertEquals(str(err), msg) if revision is None: err = yield self.assertFailure(repo.latest(url), CharmNotFound) self.assertEquals(str(err), msg) def charm_url(self, name): return CharmURL.parse("local:series/" + name) def test_no_path(self): err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, None) self.assertEquals(str(err), "No repository specified") def test_bad_path(self): path = os.path.join(self.makeDir(), "blah") err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, path) self.assertEquals(str(err), "No repository found at %r" % path) with open(path, "w"): pass err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, path) self.assertEquals(str(err), "No repository found at %r" % path) def test_find_inappropriate_url(self): url = CharmURL.parse("cs:foo/bar") err = self.assertRaises(AssertionError, self.repository1.find, url) self.assertEquals(str(err), "schema mismatch") def test_completely_missing(self): return self.assert_not_there("zebra", self.repository1) def test_unknown_files_ignored(self): self.makeFile( "Foobar", path=os.path.join(self.repository1.path, "series", "zebra")) return self.assert_not_there("zebra", self.repository1) @inlineCallbacks def test_random_error_logged(self): get_charm = self.mocker.replace(provider.get_charm_from_path) get_charm(ANY) self.mocker.throw(SyntaxError("magic")) self.mocker.count(0, 3) self.mocker.replay() yield self.assertFailure( self.repository1.find(self.charm_url("zebra")), CharmNotFound) self.assertIn( "Unexpected error while processing", self.output.getvalue()) self.assertIn( "SyntaxError('magic',)", self.output.getvalue()) def test_unknown_directories_ignored(self): self.makeDir( path=os.path.join(self.repository1.path, "series", "zebra")) return self.assert_not_there("zebra", self.repository1) @inlineCallbacks def test_broken_charm_metadata_ignored(self): charm_path = self.makeDir( path=os.path.join(self.repository1.path, "series", "zebra")) fh = open(os.path.join(charm_path, "metadata.yaml"), "w+") fh.write("""\ description: helo name: hi requires: {interface: zebra revision: 0 summary: hola""") fh.close() yield self.assertFailure( self.repository1.find(self.charm_url("zebra")), CharmNotFound) output = self.output.getvalue() self.assertIn( "Charm 'zebra' has a YAML error", output) self.assertIn( "%s/series/zebra/metadata.yaml" % self.repository1.path, output) @inlineCallbacks def test_broken_charm_config_ignored(self): """YAML Errors propogate to the log, but the search continues.""" fh = open( os.path.join( self.repository1.path, "series", "mysql", "config.yaml"), "w+") fh.write("""\ description: helo name: hi requires: {interface: zebra revision: 0 summary: hola""") fh.close() yield self.repository1.find(self.charm_url("sample")) output = self.output.getvalue() self.assertIn( "Charm 'mysql' has a YAML error", output) self.assertIn( "%s/series/mysql/config.yaml" % self.repository1.path, output) @inlineCallbacks def test_ignore_dot_files(self): """Dot files are ignored when browsing the repository.""" fh = open( os.path.join( self.repository1.path, "series", ".foo"), "w+") fh.write("Something") fh.close() yield self.repository1.find(self.charm_url("sample")) output = self.output.getvalue() self.assertNotIn("Charm '.foo' has an error", output) @inlineCallbacks def test_invalid_charm_config_ignored(self): fh = open( os.path.join( self.repository1.path, "series", "mysql", "config.yaml"), "w+") 
fh.write("foobar: {}") fh.close() stream = self.capture_logging("juju.charm") yield self.assertFailure( self.repository1.find(self.charm_url("mysql")), CharmNotFound) output = stream.getvalue() self.assertIn( "Charm 'mysql' has an error", output) self.assertIn( "%s/series/mysql/config.yaml" % self.repository1.path, output) def test_repo_type(self): self.assertEqual(self.repository1.type, "local") @inlineCallbacks def test_success_unbundled(self): yield self.assert_there("sample", self.repository1, 2) yield self.assert_there("sample-1", self.repository1, 1, 2) yield self.assert_there("sample-2", self.repository1, 2) yield self.assert_not_there("sample-3", self.repository1, 2) @inlineCallbacks def test_success_bundled(self): yield self.assert_there("sample", self.repository2, 2) yield self.assert_there("sample-1", self.repository2, 1, 2) yield self.assert_there("sample-2", self.repository2, 2) yield self.assert_not_there("sample-3", self.repository2, 2) @inlineCallbacks def test_no_revision_gets_latest(self): yield self.assert_there("sample", self.repository1, 2) yield self.assert_there("sample-1", self.repository1, 1, 2) yield self.assert_there("sample-2", self.repository1, 2) yield self.assert_not_there("sample-3", self.repository1, 2) revision_path = os.path.join( self.repository1.path, "series/old/revision") with open(revision_path, "w") as f: f.write("3") yield self.assert_there("sample", self.repository1, 3) yield self.assert_not_there("sample-1", self.repository1, 3) yield self.assert_there("sample-2", self.repository1, 2, 3) yield self.assert_there("sample-3", self.repository1, 3) class RemoteRepositoryTest(RepositoryTestBase): def setUp(self): super(RemoteRepositoryTest, self).setUp() self.cache_path = os.path.join( self.makeDir(), "notexistyet") self.download_path = os.path.join(self.cache_path, "downloads") def delete(): if os.path.exists(self.cache_path): shutil.rmtree(self.cache_path) self.addCleanup(delete) self.charm = CharmDirectory( os.path.join(self.unbundled_repo_path, "series", "dummy")) with open(self.charm.as_bundle().path, "rb") as f: self.bundle_data = f.read() self.sha256 = self.charm.as_bundle().get_sha256() self.getPage = self.mocker.replace("twisted.web.client.getPage") self.downloadPage = self.mocker.replace( "twisted.web.client.downloadPage") def repo(self, url_base): return RemoteCharmRepository(url_base, self.cache_path) def cache_location(self, url_str, revision): charm_url = CharmURL.parse(url_str) cache_key = under.quote( "%s.charm" % (charm_url.with_revision(revision))) return os.path.join(self.cache_path, cache_key) def charm_info(self, url_str, revision, warnings=None, errors=None): info = {"revision": revision, "sha256": self.sha256} if errors: info["errors"] = errors if warnings: info["warnings"] = warnings return json.dumps({url_str: info}) def mock_charm_info(self, url, result): def match_context(value): return isinstance(value, VerifyingContextFactory) self.getPage(url, contextFactory=MATCH(match_context)) self.mocker.result(result) def mock_download(self, url, error=None): def match_context(value): return isinstance(value, VerifyingContextFactory) self.downloadPage(url, ANY, contextFactory=MATCH(match_context)) if error: return self.mocker.result(fail(error)) def download(_, path, contextFactory): self.assertTrue(path.startswith(self.download_path)) with open(path, "wb") as f: f.write(self.bundle_data) return succeed(None) self.mocker.call(download) @inlineCallbacks def assert_find_uncached(self, dns_name, url_str, info_url, find_url): 
self.mock_charm_info(info_url, succeed(self.charm_info(url_str, 1))) self.mock_download(find_url) self.mocker.replay() repo = self.repo(dns_name) charm = yield repo.find(CharmURL.parse(url_str)) self.assertEquals(charm.get_sha256(), self.sha256) self.assertEquals(charm.path, self.cache_location(url_str, 1)) self.assertEquals(os.listdir(self.download_path), []) @inlineCallbacks def assert_find_cached(self, dns_name, url_str, info_url): os.makedirs(self.cache_path) cache_location = self.cache_location(url_str, 1) shutil.copy(self.charm.as_bundle().path, cache_location) self.mock_charm_info(info_url, succeed(self.charm_info(url_str, 1))) self.mocker.replay() repo = self.repo(dns_name) charm = yield repo.find(CharmURL.parse(url_str)) self.assertEquals(charm.get_sha256(), self.sha256) self.assertEquals(charm.path, cache_location) def assert_find_error(self, dns_name, url_str, err_type, message): self.mocker.replay() repo = self.repo(dns_name) d = self.assertFailure(repo.find(CharmURL.parse(url_str)), err_type) def verify(error): self.assertEquals(str(error), message) d.addCallback(verify) return d @inlineCallbacks def assert_latest(self, dns_name, url_str, revision): self.mocker.replay() repo = self.repo(dns_name) result = yield repo.latest(CharmURL.parse(url_str)) self.assertEquals(result, revision) def assert_latest_error(self, dns_name, url_str, err_type, message): self.mocker.replay() repo = self.repo(dns_name) d = self.assertFailure(repo.latest(CharmURL.parse(url_str)), err_type) def verify(error): self.assertEquals(str(error), message) d.addCallback(verify) return d def test_find_plain_uncached_no_stat(self): self.change_environment(JUJU_TESTING="1") return self.assert_find_uncached( "https://somewhe.re", "cs:series/name", "https://somewhe.re/charm-info?charms=cs%3Aseries/name&stats=0", "https://somewhe.re/charm/series/name-1?stats=0") def test_find_plain_uncached(self): return self.assert_find_uncached( "https://somewhe.re", "cs:series/name", "https://somewhe.re/charm-info?charms=cs%3Aseries/name", "https://somewhe.re/charm/series/name-1") def test_find_revision_uncached(self): return self.assert_find_uncached( "https://somewhe.re", "cs:series/name-1", "https://somewhe.re/charm-info?charms=cs%3Aseries/name-1", "https://somewhe.re/charm/series/name-1") def test_find_user_uncached(self): return self.assert_find_uncached( "https://somewhereel.se", "cs:~user/srs/name", "https://somewhereel.se/charm-info?charms=cs%3A%7Euser/srs/name", "https://somewhereel.se/charm/%7Euser/srs/name-1") def test_find_plain_cached(self): return self.assert_find_cached( "https://somewhe.re", "cs:series/name", "https://somewhe.re/charm-info?charms=cs%3Aseries/name") def test_find_revision_cached(self): return self.assert_find_cached( "https://somewhe.re", "cs:series/name-1", "https://somewhe.re/charm-info?charms=cs%3Aseries/name-1") def test_find_user_cached(self): return self.assert_find_cached( "https://somewhereel.se", "cs:~user/srs/name", "https://somewhereel.se/charm-info?charms=cs%3A%7Euser/srs/name") def test_find_info_http_error(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", fail(Error("500"))) return self.assert_find_error( "https://anoth.er", "cs:series/name", CharmNotFound, "Charm 'cs:series/name' not found in repository https://anoth.er") @inlineCallbacks def test_find_info_store_warning(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name-1", succeed(self.charm_info( "cs:series/name-1", 1, warnings=["omg", "halp"]))) 
self.mock_download("https://anoth.er/charm/series/name-1") self.mocker.replay() repo = self.repo("https://anoth.er") log = self.capture_logging("juju.charm") charm = yield repo.find(CharmURL.parse("cs:series/name-1")) self.assertIn("omg", log.getvalue()) self.assertIn("halp", log.getvalue()) self.assertEquals(charm.get_sha256(), self.sha256) def test_find_info_store_error(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name-101", succeed(self.charm_info( "cs:series/name-101", 101, errors=["oh", "noes"]))) return self.assert_find_error( "https://anoth.er", "cs:series/name-101", CharmError, "Error processing 'cs:series/name-101': oh; noes") def test_find_info_bad_revision(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name-99", succeed(self.charm_info("cs:series/name-99", 1))) return self.assert_find_error( "https://anoth.er", "cs:series/name-99", AssertionError, "bad url revision") def test_find_download_error(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(json.dumps({"cs:series/name": {"revision": 123}}))) self.mock_download( "https://anoth.er/charm/series/name-123", Error("999")) return self.assert_find_error( "https://anoth.er", "cs:series/name", CharmNotFound, "Charm 'cs:series/name-123' not found in repository " "https://anoth.er") def test_find_charm_revision_mismatch(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(json.dumps({"cs:series/name": {"revision": 99}}))) self.mock_download("https://anoth.er/charm/series/name-99") return self.assert_find_error( "https://anoth.er", "cs:series/name", AssertionError, "bad charm revision") @inlineCallbacks def test_find_downloaded_hash_mismatch(self): cache_location = self.cache_location("cs:series/name-1", 1) self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(json.dumps( {"cs:series/name": {"revision": 1, "sha256": "NO YUO"}}))) self.mock_download("https://anoth.er/charm/series/name-1") yield self.assert_find_error( "https://anoth.er", "cs:series/name", CharmError, "Error processing 'cs:series/name-1 (downloaded)': SHA256 " "mismatch") self.assertFalse(os.path.exists(cache_location)) @inlineCallbacks def test_find_cached_hash_mismatch(self): os.makedirs(self.cache_path) cache_location = self.cache_location("cs:series/name-1", 1) shutil.copy(self.charm.as_bundle().path, cache_location) self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(json.dumps( {"cs:series/name": {"revision": 1, "sha256": "NO YUO"}}))) yield self.assert_find_error( "https://anoth.er", "cs:series/name", CharmError, "Error processing 'cs:series/name-1 (cached)': SHA256 mismatch") self.assertFalse(os.path.exists(cache_location)) def test_latest_plain(self): self.mock_charm_info( "https://somewhe.re/charm-info?charms=cs%3Afoo/bar", succeed(self.charm_info("cs:foo/bar", 99))) return self.assert_latest("https://somewhe.re", "cs:foo/bar-1", 99) def test_latest_user(self): self.mock_charm_info( "https://somewhereel.se/charm-info?charms=cs%3A%7Efee/foo/bar", succeed(self.charm_info("cs:~fee/foo/bar", 123))) return self.assert_latest( "https://somewhereel.se", "cs:~fee/foo/bar", 123) def test_latest_revision(self): self.mock_charm_info( "https://somewhereel.se/charm-info?charms=cs%3A%7Efee/foo/bar", succeed(self.charm_info("cs:~fee/foo/bar", 123))) return self.assert_latest( "https://somewhereel.se", "cs:~fee/foo/bar-99", 123) def test_latest_http_error(self): 
self.mock_charm_info( "https://andanoth.er/charm-info?charms=cs%3A%7Eblib/blab/blob", fail(Error("404"))) return self.assert_latest_error( "https://andanoth.er", "cs:~blib/blab/blob", CharmNotFound, "Charm 'cs:~blib/blab/blob' not found in repository " "https://andanoth.er") @inlineCallbacks def test_latest_store_warning(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(self.charm_info( "cs:series/name", 1, warnings=["eww", "yuck"]))) self.mocker.replay() repo = self.repo("https://anoth.er") log = self.capture_logging("juju.charm") revision = yield repo.latest(CharmURL.parse("cs:series/name-1")) self.assertIn("eww", log.getvalue()) self.assertIn("yuck", log.getvalue()) self.assertEquals(revision, 1) def test_latest_store_error(self): self.mock_charm_info( "https://anoth.er/charm-info?charms=cs%3Aseries/name", succeed(self.charm_info( "cs:series/name", 1, errors=["blam", "dink"]))) return self.assert_latest_error( "https://anoth.er", "cs:series/name-1", CharmError, "Error processing 'cs:series/name': blam; dink") def test_repo_type(self): self.mocker.replay() self.assertEqual(self.repo("http://fbaro.com").type, "store") class ResolveTest(RepositoryTestBase): def assert_resolve_local(self, vague, default, expect): path = self.makeDir() repo, url = resolve(vague, path, default) self.assertEquals(str(url), expect) self.assertTrue(isinstance(repo, LocalCharmRepository)) self.assertEquals(repo.path, path) def test_resolve_local(self): self.assert_resolve_local( "local:series/sample", "default", "local:series/sample") self.assert_resolve_local( "local:sample", "default", "local:default/sample") def assert_resolve_remote(self, vague, default, expect): repo, url = resolve(vague, None, default) self.assertEquals(str(url), expect) self.assertTrue(isinstance(repo, RemoteCharmRepository)) self.assertEquals(repo.url_base, CS_STORE_URL) def test_resolve_remote(self): self.assert_resolve_remote( "sample", "default", "cs:default/sample") self.assert_resolve_remote( "series/sample", "default", "cs:series/sample") self.assert_resolve_remote( "cs:sample", "default", "cs:default/sample") self.assert_resolve_remote( "cs:series/sample", "default", "cs:series/sample") self.assert_resolve_remote( "cs:~user/sample", "default", "cs:~user/default/sample") self.assert_resolve_remote( "cs:~user/series/sample", "default", "cs:~user/series/sample") def test_resolve_nonsense(self): error = self.assertRaises( CharmURLError, resolve, "blah:whatever", None, "series") self.assertEquals( str(error), "Bad charm URL 'blah:series/whatever': invalid schema (URL " "inferred from 'blah:whatever')") juju-0.7.orig/juju/charm/tests/test_url.py0000644000000000000000000001276412135220114017047 0ustar 00000000000000from juju.charm.errors import CharmURLError from juju.charm.url import CharmCollection, CharmURL from juju.lib.testing import TestCase class CharmCollectionTest(TestCase): def test_str(self): self.assertEquals( str(CharmCollection("foo", "bar", "baz")), "foo:~bar/baz") self.assertEquals( str(CharmCollection("ping", None, "pong")), "ping:pong") class CharmURLTest(TestCase): def assert_url(self, url, schema, user, series, name, rev): self.assertEquals(url.collection.schema, schema) self.assertEquals(url.collection.user, user) self.assertEquals(url.collection.series, series) self.assertEquals(url.name, name) self.assertEquals(url.revision, rev) def assert_error(self, err, url_str, message): self.assertEquals( str(err), "Bad charm URL %r: %s" % (url_str, message)) def assert_parse(self, 
string, schema, user, series, name, rev): url = CharmURL.parse(string) self.assert_url(url, schema, user, series, name, rev) self.assertEquals(str(url), string) self.assertEquals(url.path, string.split(":", 1)[1]) def test_parse(self): self.assert_parse( "cs:~user/series/name", "cs", "user", "series", "name", None) self.assert_parse( "cs:~user/series/name-0", "cs", "user", "series", "name", 0) self.assert_parse( "cs:series/name", "cs", None, "series", "name", None) self.assert_parse( "cs:series/name-0", "cs", None, "series", "name", 0) self.assert_parse( "cs:series/name0", "cs", None, "series", "name0", None) self.assert_parse( "cs:series/n0-0n-n0", "cs", None, "series", "n0-0n-n0", None) self.assert_parse( "local:series/name", "local", None, "series", "name", None) self.assert_parse( "local:series/name-0", "local", None, "series", "name", 0) def assert_cannot_parse(self, string, message): err = self.assertRaises(CharmURLError, CharmURL.parse, string) self.assert_error(err, string, message) def test_cannot_parse(self): self.assert_cannot_parse( None, "not a string type") self.assert_cannot_parse( "series/name-1", "no schema specified") self.assert_cannot_parse( "bs:~user/series/name-1", "invalid schema") self.assert_cannot_parse( "cs:~1/series/name-1", "invalid user") self.assert_cannot_parse( "cs:~user/1/name-1", "invalid series") self.assert_cannot_parse( "cs:~user/series/name-1-2", "invalid name") self.assert_cannot_parse( "cs:~user/series/name-1-n-2", "invalid name") self.assert_cannot_parse( "cs:~user/series/name--a-2", "invalid name") self.assert_cannot_parse( "cs:~user/series/huh/name-1", "invalid form") self.assert_cannot_parse( "cs:~user/name", "no series specified") self.assert_cannot_parse( "cs:name", "invalid form") self.assert_cannot_parse( "local:~user/series/name", "users not allowed in local URLs") self.assert_cannot_parse( "local:~user/name", "users not allowed in local URLs") self.assert_cannot_parse( "local:name", "invalid form") def test_revision(self): url1 = CharmURL.parse("cs:foo/bar") error = self.assertRaises(CharmURLError, url1.assert_revision) self.assertEquals( str(error), "Bad charm URL 'cs:foo/bar': expected a revision") url2 = url1.with_revision(0) url1.collection.schema = "local" # change url1, verify deep copied url2.assert_revision() self.assertEquals(str(url2), "cs:foo/bar-0") url3 = url2.with_revision(999) url3.assert_revision() self.assertEquals(str(url3), "cs:foo/bar-999") def assert_infer(self, string, schema, user, series, name, rev): url = CharmURL.infer(string, "default") self.assert_url(url, schema, user, series, name, rev) def test_infer(self): self.assert_infer( "name", "cs", None, "default", "name", None) self.assert_infer( "name-0", "cs", None, "default", "name", 0) self.assert_infer( "series/name", "cs", None, "series", "name", None) self.assert_infer( "series/name-0", "cs", None, "series", "name", 0) self.assert_infer( "cs:name", "cs", None, "default", "name", None) self.assert_infer( "cs:name-0", "cs", None, "default", "name", 0) self.assert_infer( "cs:~user/name", "cs", "user", "default", "name", None) self.assert_infer( "cs:~user/name-0", "cs", "user", "default", "name", 0) self.assert_infer( "local:name", "local", None, "default", "name", None) self.assert_infer( "local:name-0", "local", None, "default", "name", 0) def test_cannot_infer(self): err = self.assertRaises( CharmURLError, CharmURL.infer, "name", "invalid!series") self.assertEquals( str(err), "Bad charm URL 'cs:invalid!series/name': invalid series (URL " "inferred from 
'name')") err = self.assertRaises( CharmURLError, CharmURL.infer, "~user/name", "default") self.assertEquals( str(err), "Bad charm URL '~user/name': a URL with a user must specify a " "schema") juju-0.7.orig/juju/charm/tests/repository/series/0000755000000000000000000000000012135220114020333 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/configtest/0000755000000000000000000000000012135220114022500 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/0000755000000000000000000000000012135220114021466 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/0000755000000000000000000000000012135220114022333 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/logging/0000755000000000000000000000000012135220114021761 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/mysql/0000755000000000000000000000000012135220114021500 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/0000755000000000000000000000000012135220114024014 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/0000755000000000000000000000000012135220114023313 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/new/0000755000000000000000000000000012135220114021124 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/old/0000755000000000000000000000000012135220114021111 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/riak/0000755000000000000000000000000012135220114021261 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/varnish/0000755000000000000000000000000012135220114022005 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/0000755000000000000000000000000012135220114024321 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/wordpress/0000755000000000000000000000000012135220114022363 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/configtest/config.yaml0000644000000000000000000000023512135220114024631 0ustar 00000000000000options: foo: type: string default: "foo-default" description: "Foo" bar: type: string default: "bar-default" description: "Bar" juju-0.7.orig/juju/charm/tests/repository/series/configtest/hooks/0000755000000000000000000000000012135220114023623 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/configtest/metadata.yaml0000644000000000000000000000017412135220114025146 0ustar 00000000000000name: configtest summary: "Testing Defaults" description: "Test for bug #873643" provides: website: interface: http juju-0.7.orig/juju/charm/tests/repository/series/configtest/revision0000644000000000000000000000000212135220114024251 0ustar 000000000000001 juju-0.7.orig/juju/charm/tests/repository/series/dummy/.dir/0000755000000000000000000000000012135220114022322 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/.ignored0000644000000000000000000000000112135220114023105 0ustar 00000000000000#juju-0.7.orig/juju/charm/tests/repository/series/dummy/build/0000755000000000000000000000000012135220114022565 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/config.yaml0000644000000000000000000000054312135220114023621 0ustar 00000000000000options: title: {default: My Title, description: A descriptive title used for the service., type: string} outlook: {description: No default outlook., type: string} username: {default: admin001, description: The 
name of the initial account (given admin permissions)., type: string} skill-level: {description: A number indicating skill., type: int} juju-0.7.orig/juju/charm/tests/repository/series/dummy/empty/0000755000000000000000000000000012135220114022624 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/hooks/0000755000000000000000000000000012135220114022611 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/metadata.yaml0000644000000000000000000000021412135220114024131 0ustar 00000000000000name: dummy summary: "That's a dummy charm." description: | This is a longer description which potentially contains multiple lines. juju-0.7.orig/juju/charm/tests/repository/series/dummy/revision0000644000000000000000000000000112135220114023236 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/dummy/src/0000755000000000000000000000000012135220114022255 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/.dir/ignored0000644000000000000000000000000012135220114023662 0ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/build/ignored0000644000000000000000000000000012135220114024125 0ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/dummy/hooks/install0000755000000000000000000000003112135220114024177 0ustar 00000000000000#!/bin/bash echo "Done!" juju-0.7.orig/juju/charm/tests/repository/series/dummy/src/hello.c0000644000000000000000000000011412135220114023520 0ustar 00000000000000#include <stdio.h> main() { printf ("Hello World!\n"); return 0; } juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/config.yaml0000644000000000000000000000015712135220114024467 0ustar 00000000000000options: blog-title: {default: My Title, description: A descriptive title used for the blog., type: string} juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/metadata.yaml0000644000000000000000000000043312135220114024777 0ustar 00000000000000name: funkyblog summary: "Blog engine" description: "A funky blog engine" provides: url: interface: http limit: optional: false requires: write-db: interface: mysql limit: 1 optional: false read-db: interface: mysql limit: 1 optional: false juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/revision0000644000000000000000000000000112135220114024103 0ustar 000000000000003juju-0.7.orig/juju/charm/tests/repository/series/logging/.ignored0000644000000000000000000000000112135220114023400 0ustar 00000000000000#juju-0.7.orig/juju/charm/tests/repository/series/logging/hooks/0000755000000000000000000000000012135220114023104 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/logging/metadata.yaml0000644000000000000000000000060112135220114024422 0ustar 00000000000000name: logging summary: "Subordinate logging test charm" description: | This is a longer description which potentially contains multiple lines. subordinate: true provides: logging-client: interface: logging requires: logging-directory: interface: logging scope: container juju-info-fallback: interface: juju-info scope: containerjuju-0.7.orig/juju/charm/tests/repository/series/logging/revision0000644000000000000000000000000112135220114023531 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/logging/hooks/install0000755000000000000000000000003112135220114024472 0ustar 00000000000000#!/bin/bash echo "Done!"
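The dummy and logging test charms above show the minimal on-disk layout the repository tests rely on: a metadata.yaml, a revision file, and executable hooks. A minimal sketch (the checkout path is hypothetical) of loading such a directory with the CharmDirectory API these tests exercise:

from juju.charm.directory import CharmDirectory

charm = CharmDirectory("repository/series/dummy")  # hypothetical path
print charm.metadata.name        # "dummy", from metadata.yaml above
print charm.get_revision()       # 1, from the revision file above
charm.make_archive("/tmp/dummy.charm")  # same bundling step LocalRepositoryTest uses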
juju-0.7.orig/juju/charm/tests/repository/series/mysql/config.yaml0000644000000000000000000000160712135220114023635 0ustar 00000000000000options: query-cache-size: default: -1 type: int description: Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own. awesome: default: false type: boolean description: Set true to make this database engine truly awesome tuning-level: default: safest type: string description: Valid values are 'safest', 'fast', and 'unsafe'. If set to safest, all settings are tuned to have maximum safety at the cost of performance. Fast will turn off most controls, but may lose data on crashes. unsafe will turn off all protections. monkey-madness: default: 0.5 type: float description: The amount of randomness to be desired in any data that is returned, from 0 (sane) to 1 (monkeys running the asylum). juju-0.7.orig/juju/charm/tests/repository/series/mysql/metadata.yaml0000644000000000000000000000016412135220114024145 0ustar 00000000000000name: mysql summary: "Database engine" description: "A pretty popular database" provides: server: mysql format: 1 juju-0.7.orig/juju/charm/tests/repository/series/mysql/revision0000644000000000000000000000000112135220114023250 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/metadata.yaml0000644000000000000000000000025412135220114026461 0ustar 00000000000000name: mysql-alternative summary: "Database engine" description: "A pretty popular database" provides: prod: interface: mysql dev: interface: mysql limit: 2 juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/revision0000644000000000000000000000000112135220114025564 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/config.yaml0000644000000000000000000000160712135220114025450 0ustar 00000000000000options: query-cache-size: default: -1 type: int description: Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own. awesome: default: false type: boolean description: Set true to make this database engine truly awesome tuning-level: default: safest type: string description: Valid values are 'safest', 'fast', and 'unsafe'. If set to safest, all settings are tuned to have maximum safety at the cost of performance. Fast will turn off most controls, but may lose data on crashes. unsafe will turn off all protections. monkey-madness: default: 0.5 type: float description: The amount of randomness to be desired in any data that is returned, from 0 (sane) to 1 (monkeys running the asylum). juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/metadata.yaml0000644000000000000000000000017612135220114025763 0ustar 00000000000000name: mysql-format-v2 summary: "Database engine" description: "A pretty popular database" provides: server: mysql format: 2 juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/revision0000644000000000000000000000000112135220114025063 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/new/metadata.yaml0000644000000000000000000000021612135220114023567 0ustar 00000000000000name: sample summary: "That's a sample charm." description: | This is a longer description which potentially contains multiple lines. 
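The "old" and "new" sample charms share this metadata and differ only in their revision files (1 and 2), which is exactly what test_no_revision_gets_latest exercises. A minimal sketch (hypothetical repository path; both repository calls return Twisted Deferreds, as in the tests) of how those revision files drive lookup:

from twisted.internet.defer import inlineCallbacks
from juju.charm.repository import LocalCharmRepository
from juju.charm.url import CharmURL

@inlineCallbacks
def show_sample_revisions(repo_path):
    repo = LocalCharmRepository(repo_path)  # hypothetical path
    url = CharmURL.parse("local:series/sample")  # no revision requested
    latest = yield repo.latest(url)  # 2, from the "new" charm's revision file
    charm = yield repo.find(url)     # an unrevisioned URL resolves to the newest match
    print latest, charm.get_revision()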
juju-0.7.orig/juju/charm/tests/repository/series/new/revision0000644000000000000000000000000212135220114022675 0ustar 000000000000002 juju-0.7.orig/juju/charm/tests/repository/series/old/metadata.yaml0000644000000000000000000000021612135220114023554 0ustar 00000000000000name: sample summary: "That's a sample charm." description: | This is a longer description which potentially contains multiple lines. juju-0.7.orig/juju/charm/tests/repository/series/old/revision0000644000000000000000000000000212135220114022662 0ustar 000000000000001 juju-0.7.orig/juju/charm/tests/repository/series/riak/metadata.yaml0000644000000000000000000000031712135220114023726 0ustar 00000000000000name: riak summary: "K/V storage engine" description: "Scalable K/V Store in Erlang with Clocks :-)" provides: endpoint: interface: http admin: interface: http peers: ring: interface: riak juju-0.7.orig/juju/charm/tests/repository/series/riak/revision0000644000000000000000000000000112135220114023031 0ustar 000000000000007juju-0.7.orig/juju/charm/tests/repository/series/varnish/metadata.yaml0000644000000000000000000000015712135220114024454 0ustar 00000000000000name: varnish summary: "Database engine" description: "Another popular database" provides: webcache: varnish juju-0.7.orig/juju/charm/tests/repository/series/varnish/revision0000644000000000000000000000000112135220114023555 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/hooks/0000755000000000000000000000000012135220114025444 5ustar 00000000000000juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/metadata.yaml0000644000000000000000000000017312135220114026766 0ustar 00000000000000name: varnish-alternative summary: "Database engine" description: "Another popular database" provides: webcache: varnish juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/revision0000644000000000000000000000000112135220114026071 0ustar 000000000000001juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/hooks/install0000755000000000000000000000003512135220114027036 0ustar 00000000000000#!/bin/bash echo hello worldjuju-0.7.orig/juju/charm/tests/repository/series/wordpress/config.yaml0000644000000000000000000000015712135220114024517 0ustar 00000000000000options: blog-title: {default: My Title, description: A descriptive title used for the blog., type: string} juju-0.7.orig/juju/charm/tests/repository/series/wordpress/metadata.yaml0000644000000000000000000000043712135220114025033 0ustar 00000000000000name: wordpress summary: "Blog engine" description: "A pretty popular blog engine" provides: url: interface: http limit: optional: false requires: db: interface: mysql limit: 1 optional: false cache: interface: varnish limit: 2 optional: true juju-0.7.orig/juju/charm/tests/repository/series/wordpress/revision0000644000000000000000000000000112135220114024133 0ustar 000000000000003juju-0.7.orig/juju/control/__init__.py0000644000000000000000000001203212135220114016155 0ustar 00000000000000import argparse import logging import sys import zookeeper from .command import Commander from .utils import ParseError from juju.environment.config import EnvironmentsConfig from juju import __version__ import add_relation import add_unit import bootstrap import config_get import config_set import constraints_get import constraints_set import debug_hooks import debug_log import deploy import destroy_environment import destroy_service import expose import open_tunnel import remove_relation import remove_unit import resolved 
import scp import status import ssh import terminate_machine import unexpose import upgrade_charm import initialize SUBCOMMANDS = [ add_relation, add_unit, bootstrap, config_get, config_set, constraints_get, constraints_set, debug_log, debug_hooks, deploy, destroy_environment, destroy_service, expose, open_tunnel, remove_relation, remove_unit, resolved, scp, status, ssh, terminate_machine, unexpose, upgrade_charm ] ADMIN_SUBCOMMANDS = [ initialize] log = logging.getLogger("juju.control.cli") class JujuParser(argparse.ArgumentParser): def add_subparsers(self, **kwargs): kwargs.setdefault("parser_class", argparse.ArgumentParser) return super(JujuParser, self).add_subparsers(**kwargs) def error(self, message): self.print_help(sys.stderr) self.exit(2, '%s: error: %s\n' % (self.prog, message)) class JujuFormatter(argparse.HelpFormatter): def _metavar_formatter(self, action, default_metavar): """Override to get rid of redundant printing of positional args. """ if action.metavar is not None: result = action.metavar elif default_metavar == "==SUPPRESS==": result = "" else: result = default_metavar def format(tuple_size): if isinstance(result, tuple): return result else: return (result, ) * tuple_size return format def setup_parser(subcommands, **kw): """Set up a command line argument/option parser.""" parser = JujuParser(formatter_class=JujuFormatter, **kw) parser.add_argument( "--verbose", "-v", default=False, action="store_true", help="Enable verbose logging") parser.add_argument( "--version", action="version", version='juju %s' % (__version__)) parser.add_argument( "--log-file", "-l", default=sys.stderr, type=argparse.FileType('a'), help="Log output to file") subparsers = parser.add_subparsers() for module in subcommands: configure_subparser = getattr(module, "configure_subparser", None) passthrough = getattr(module, "passthrough", None) if configure_subparser: sub_parser = configure_subparser(subparsers) else: sub_parser = subparsers.add_parser( module.__name__.split('.')[-1], help=module.command.__doc__) sub_parser.set_defaults( command=Commander(module.command, passthrough=passthrough), parser=sub_parser) return parser def setup_logging(options): level = logging.DEBUG if options.verbose else logging.INFO logging.basicConfig( format="%(asctime)s %(levelname)s %(message)s", level=level, stream=options.log_file) if level is not logging.DEBUG: zookeeper.set_debug_level(0) def admin(args): """juju Admin command line interface entry point. The admin cli is used to provide an entry point into infrastructure tools like initializing the zookeeper layout, launching machine and provisioning agents, etc. It's not intended to be used by end users but is consumed internally by the framework. """ parser = setup_parser( subcommands=ADMIN_SUBCOMMANDS, prog="juju-admin", description="juju cloud orchestration internal tools") parser.set_defaults(log=log) options = parser.parse_args(args) setup_logging(options) options.command(options) def main(args): """The main end user cli command for juju users.""" parser = setup_parser( subcommands=SUBCOMMANDS, prog="juju", description="juju cloud orchestration admin") # Some commands, like juju ssh, do a further parse on options by # delegating to another command (such as the underlying ssh). But we # first need to parse all args non-strictly just to determine which # command is being used.
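# Illustrative example (not in the original source): a command line like # "juju ssh -L8080:localhost:80 wordpress/0" must reach the ssh subcommand # with "-L8080:localhost:80" intact, so this first pass below cannot reject # options it does not recognize.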
options, extra = parser.parse_known_args(args) if options.command.passthrough: try: # Augments options with subparser specific passthrough parsing options.command.passthrough(options, extra) except ParseError, e: options.parser.error(str(e)) else: # Otherwise, be strict options = parser.parse_args(args) env_config = EnvironmentsConfig() env_config.load_or_write_sample() options.environments = env_config options.log = log setup_logging(options) options.command(options) juju-0.7.orig/juju/control/add_relation.py0000644000000000000000000000511312135220114017045 0ustar 00000000000000"""Implementation of add-relation juju subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment from juju.state.errors import NoMatchingEndpoints, AmbiguousRelation from juju.state.relation import RelationStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): """Configure add-relation subcommand""" sub_parser = subparsers.add_parser("add-relation", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to add the relation in.") sub_parser.add_argument( "--verbose", help="Provide additional information when running the command.") sub_parser.add_argument( "descriptors", nargs=2, metavar="<service name>[:<relation name>]", help="Define the relation endpoints to be joined.") return sub_parser def command(options): """Add a relation between services in juju.""" environment = get_environment(options) return add_relation( options.environments, environment, options.verbose, options.log, *options.descriptors) @inlineCallbacks def add_relation(env_config, environment, verbose, log, *descriptors): """Add relation between relation endpoints described by `descriptors`""" provider = environment.get_machine_provider() client = yield provider.connect() relation_state_manager = RelationStateManager(client) service_state_manager = ServiceStateManager(client) endpoint_pairs = yield service_state_manager.join_descriptors( *descriptors) if verbose: log.info("Endpoint pairs: %s", endpoint_pairs) if len(endpoint_pairs) == 0: raise NoMatchingEndpoints() elif len(endpoint_pairs) > 1: for pair in endpoint_pairs[1:]: if not (pair[0].relation_name.startswith("juju-") or pair[1].relation_name.startswith("juju-")): raise AmbiguousRelation(descriptors, endpoint_pairs) # At this point we just have one endpoint pair. We need to pick # just one of the endpoints if it's a peer endpoint, since that's # our current API - join descriptors takes two descriptors, but # add_relation_state takes one or two endpoints. TODO consider # refactoring.
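# Illustrative shape only (names hypothetical): for "wordpress mysql", # join_descriptors returns pairs along the lines of # (RelationEndpoint("wordpress", "mysql", "db", "client"), # RelationEndpoint("mysql", "mysql", "server", "server")), while a peer # relation yields the same endpoint twice, which is why the pair is # collapsed to a single endpoint below.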
endpoints = endpoint_pairs[0] if endpoints[0] == endpoints[1]: endpoints = endpoints[0:1] yield relation_state_manager.add_relation_state(*endpoints) yield client.close() log.info("Added %s relation to all service units.", endpoints[0].relation_type) juju-0.7.orig/juju/control/add_unit.py0000644000000000000000000000432212135220114016210 0ustar 00000000000000"""Implementation of add unit subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control import legacy from juju.control.utils import get_environment, sync_environment_state from juju.errors import JujuError from juju.state.placement import place_unit from juju.state.service import ServiceStateManager def configure_subparser(subparsers): """Configure add-unit subcommand""" sub_parser = subparsers.add_parser("add-unit", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "--num-units", "-n", default=1, type=int, metavar="NUM", help="Number of service units to add.") sub_parser.add_argument( "service_name", help="Name of the service a unit should be added for") return sub_parser def command(options): """Add a new service unit.""" environment = get_environment(options) return add_unit( options.environments, environment, options.verbose, options.log, options.service_name, options.num_units) @inlineCallbacks def add_unit(config, environment, verbose, log, service_name, num_units): """Add a unit of a service to the environment. """ provider = environment.get_machine_provider() placement_policy = provider.get_placement_policy() client = yield provider.connect() try: yield legacy.check_environment( client, provider.get_legacy_config_keys()) yield sync_environment_state(client, config, environment.name) service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state(service_name) if (yield service_state.is_subordinate()): raise JujuError("Subordinate services acquire units from " "their principal service.") for i in range(num_units): unit_state = yield service_state.add_unit_state() yield place_unit(client, placement_policy, unit_state) log.info("Unit %r added to service %r", unit_state.unit_name, service_state.service_name) finally: yield client.close() juju-0.7.orig/juju/control/bootstrap.py0000644000000000000000000000244112135220114016436 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import legacy from juju.control.utils import expand_constraints, get_environment def configure_subparser(subparsers): """Configure bootstrap subcommand""" sub_parser = subparsers.add_parser("bootstrap", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "--constraints", help="default hardware constraints for this environment.", default=[], type=expand_constraints) return sub_parser @inlineCallbacks def command(options): """ Bootstrap machine providers in the specified environment. """ environment = get_environment(options) provider = environment.get_machine_provider() legacy_keys = provider.get_legacy_config_keys() if legacy_keys: legacy.error(legacy_keys) constraint_set = yield provider.get_constraint_set() constraints = constraint_set.parse(options.constraints) constraints = constraints.with_series(environment.default_series) options.log.info( "Bootstrapping environment %r (origin: %s type: %s)..." 
% ( environment.name, environment.origin, environment.type)) yield provider.bootstrap(constraints) juju-0.7.orig/juju/control/command.py0000644000000000000000000000443112135220114016040 0ustar 00000000000000from twisted.internet import defer from twisted.python.failure import Failure from argparse import Namespace from StringIO import StringIO import sys class Commander(object): """Command container. Command objects are constructed in the argument parser in package __init__ and used to control the execution of juju command line activities. Keyword Arguments: callback -- a callable object which will be triggered in the reactor loop when Commander.__call__ is invoked. """ def __init__(self, callback, passthrough=False): if not callable(callback): raise ValueError( "Commander callback argument must be a callable") self.callback = callback self.passthrough = passthrough self.options = None self.exit_code = 0 def __call__(self, options): from twisted.internet import reactor if not options or not isinstance(options, Namespace): raise ValueError( "%s.__call__ must be passed a valid argparse.Namespace" % self.__class__.__name__) self.options = options options.log.debug("Initializing %s runtime" % options.parser.prog) reactor.callWhenRunning(self._run) reactor.run() sys.exit(self.exit_code) def _run(self): d = defer.maybeDeferred(self.callback, self.options) d.addBoth(self._handle_exit) return d def _handle_exit(self, result, stream=None): from twisted.internet import reactor if stream is None: stream = sys.stderr if isinstance(result, Failure): if self.options.verbose: tracebackIO = StringIO() result.printTraceback(file=tracebackIO) stream.write(tracebackIO.getvalue()) self.options.log.error(tracebackIO.getvalue()) self.options.log.error(result.getErrorMessage()) if self.exit_code == 0: self.exit_code = 1 else: command_name = self.callback.__module__.rsplit('.', 1)[-1] self.options.log.info("%r command finished successfully" % command_name) if reactor.running: reactor.stop() juju-0.7.orig/juju/control/config_get.py0000644000000000000000000000551412135220114016531 0ustar 00000000000000import argparse from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment from juju.lib.format import YAMLFormat from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "get", formatter_class=argparse.RawDescriptionHelpFormatter, help=config_get.__doc__, description=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to utilize.") sub_parser.add_argument( "--schema", "-s", action="store_true", default=False, help="Display the schema only") sub_parser.add_argument( "service_name", help="The name of the service to retrieve settings for") return sub_parser def command(options): """Get service config options. Charms may define dynamic options which may be tweaked at deployment time, or over the lifetime of the service. This command displays the current values of these settings in YAML format. $ juju get wordpress {'service': 'wordpress', 'charm': 'local:series/wordpress-3', 'settings': {'blog-title': { 'description': 'A descriptive title used for the blog.', 'type': 'string', 'value': 'Hello World'}}} """ environment = get_environment(options) return config_get(environment, options.service_name, options.schema) @inlineCallbacks def config_get(environment, service_name, display_schema): """Get service settings.
""" provider = environment.get_machine_provider() client = yield provider.connect() try: # Get the service service_manager = ServiceStateManager(client) service = yield service_manager.get_service_state(service_name) # Retrieve schema charm = yield service.get_charm_state() schema = yield charm.get_config() schema_dict = schema.as_dict() display_dict = {"service": service.service_name, "charm": (yield service.get_charm_id()), "settings": schema_dict} # Get current settings settings = yield service.get_config() settings = dict(settings.items()) # Merge current settings into schema/display dict for k, v in schema_dict.items(): # Display defaults for unset values. if k in settings: v['value'] = settings[k] else: v['value'] = None if 'default' in v: if v['default'] == settings[k]: v['default'] = True else: del v['default'] print YAMLFormat().format(display_dict) finally: yield client.close() juju-0.7.orig/juju/control/config_set.py0000644000000000000000000000671112135220114016545 0ustar 00000000000000import argparse import yaml from twisted.internet.defer import inlineCallbacks from juju.charm.errors import ServiceConfigValueError from juju.control.utils import get_environment from juju.lib import serializer from juju.lib.format import get_charm_formatter from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "set", help=config_set.__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, description=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to status.") sub_parser.add_argument( "service_name", help="The name of the service the options apply to.") sub_parser.add_argument("--config", type=argparse.FileType("r"), help=( "a filename containing a YAML dict of values " "for the current service_name")) sub_parser.add_argument("service_options", nargs="*", help="""name=value for option to set""") return sub_parser def command(options): """Set service options. Service charms may define dynamic options which may be tweaked at deployment time, or over the lifetime of the service. This command allows changing these settings. $ juju set option=value [option=value] or $ juju set --config local.yaml """ environment = get_environment(options) if options.config: if options.service_options: raise ServiceConfigValueError( "--config and command line options cannot " "be used in a single invocation") yaml_data = options.config.read() try: data = serializer.yaml_load(yaml_data) except yaml.YAMLError: raise ServiceConfigValueError( "Config file %r not valid YAML" % options.config.name) if not data or not isinstance(data, dict): raise ServiceConfigValueError( "Config file %r invalid" % options.config.name ) data = data.get(options.service_name) if data: # set data directly options.service_options = data return config_set(environment, options.service_name, options.service_options) @inlineCallbacks def config_set(environment, service_name, service_options): """Set service settings. """ provider = environment.get_machine_provider() client = yield provider.connect() # Get the service and the charm service_manager = ServiceStateManager(client) service = yield service_manager.get_service_state(service_name) charm = yield service.get_charm_state() charm_format = (yield charm.get_metadata()).format formatter = get_charm_formatter(charm_format) # Use the charm's ConfigOptions instance to validate the # arguments to config_set. Invalid options passed to this method # will thrown an exception. 
if isinstance(service_options, dict): options = service_options else: options = formatter.parse_keyvalue_pairs(service_options) config = yield charm.get_config() options = config.validate(options) # Apply the change state = yield service.get_config() state.update(options) yield state.write() juju-0.7.orig/juju/control/constraints_get.py0000644000000000000000000000511712135220114017632 0ustar 00000000000000import argparse import sys from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment, sync_environment_state from juju.lib import serializer from juju.state.environment import EnvironmentStateManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "get-constraints", help=command.__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, description=constraints_get.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to affect") sub_parser.add_argument( "entities", nargs="*", help="names of machines, units or services") return sub_parser def command(options): """Show currently applicable constraints""" environment = get_environment(options) return constraints_get( options.environments, environment, options.entities, options.log) @inlineCallbacks def constraints_get(env_config, environment, entity_names, log): """ Show the complete set of applicable constraints for each specified entity. This will show the final computed values of all constraints (including internal constraints which cannot be set directly via set-constraints). """ provider = environment.get_machine_provider() client = yield provider.connect() result = {} try: yield sync_environment_state(client, env_config, environment.name) if entity_names: msm = MachineStateManager(client) ssm = ServiceStateManager(client) for name in entity_names: if name.isdigit(): kind = "machine" entity = yield msm.get_machine_state(name) elif "/" in name: kind = "service unit" entity = yield ssm.get_unit_state(name) else: kind = "service" entity = yield ssm.get_service_state(name) log.info("Fetching constraints for %s %s", kind, name) constraints = yield entity.get_constraints() result[name] = dict(constraints) else: esm = EnvironmentStateManager(client) log.info("Fetching constraints for environment") constraints = yield esm.get_constraints() result = dict(constraints) contents = serializer.yaml_dump(result) sys.stdout.write(contents) finally: yield client.close() juju-0.7.orig/juju/control/constraints_set.py0000644000000000000000000000646312135220114017653 0ustar 00000000000000import argparse from twisted.internet.defer import inlineCallbacks from juju.control import legacy from juju.control.utils import get_environment, sync_environment_state from juju.state.environment import EnvironmentStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "set-constraints", help=command.__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, description=constraints_set.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to affect") sub_parser.add_argument( "--service", "-s", default=None, help="Service to set constraints on") sub_parser.add_argument( "constraints", nargs="+", help="name=value for constraint to set") return sub_parser def command(options): """Set machine constraints for the environment, or for a named service. 
""" environment = get_environment(options) env_config = options.environments return constraints_set( env_config, environment, options.service, options.constraints) @inlineCallbacks def constraints_set(env_config, environment, service_name, constraint_strs): """ Machine constraints allow you to pick the hardware to which your services will be deployed. Examples: $ juju set-constraints --service-name mysql mem=8G cpu=4 $ juju set-constraints instance-type=t1.micro Available constraints vary by provider type, and will be ignored if not understood by the current environment's provider. The current set of available constraints across all providers is: On Amazon EC2: * arch (CPU architecture: i386/amd64/arm; amd64 by default) * cpu (processing power in Amazon ECU; 1 by default) * mem (memory in [MGT]iB; 512M by default) * instance-type (unset by default) * ec2-zone (unset by default) On Orchestra: * orchestra-classes (unset by default) On MAAS: * maas-name (unset by default) Service settings, if specified, will override environment settings, which will in turn override the juju defaults of mem=512M, cpu=1, arch=amd64. New constraints set on an entity will completely replace that entity's pre-existing constraints. To override an environment constraint with the juju default when setting service constraints, just specify "name=" (rather than just not specifying the constraint at all, which will cause it to inherit the environment's value). To entirely unset a constraint, specify "name=any". """ provider = environment.get_machine_provider() constraint_set = yield provider.get_constraint_set() constraints = constraint_set.parse(constraint_strs) client = yield provider.connect() try: yield legacy.check_constraints(client, constraint_strs) yield sync_environment_state(client, env_config, environment.name) if service_name is None: esm = EnvironmentStateManager(client) yield esm.set_constraints(constraints) else: ssm = ServiceStateManager(client) service = yield ssm.get_service_state(service_name) yield service.set_constraints(constraints) finally: yield client.close() juju-0.7.orig/juju/control/debug_hooks.py0000644000000000000000000001304112135220114016710 0ustar 00000000000000""" Command for debugging hooks on a service unit. """ import base64 import os from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.utils import get_ip_address_for_unit from juju.control.utils import get_environment from juju.charm.errors import InvalidCharmHook from juju.state.charm import CharmStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser("debug-hooks", help=command.__doc__) sub_parser.add_argument( "-e", "--environment", help="juju environment to operate in.") sub_parser.add_argument( "unit_name", help="Name of unit") sub_parser.add_argument( "hook_names", default=["*"], nargs="*", help="Name of hook, defaults to all") return sub_parser @inlineCallbacks def validate_hooks(client, unit_state, hook_names): # Assemble a list of valid hooks for the charm. 
valid_hooks = ["start", "stop", "install", "config-changed"] service_manager = ServiceStateManager(client) endpoints = yield service_manager.get_relation_endpoints( unit_state.service_name) endpoint_names = [ep.relation_name for ep in endpoints] for endpoint_name in endpoint_names: valid_hooks.extend([ endpoint_name + "-relation-joined", endpoint_name + "-relation-changed", endpoint_name + "-relation-departed", endpoint_name + "-relation-broken", ]) # Verify the debug names. for hook_name in hook_names: if hook_name in valid_hooks: continue break else: returnValue(True) # We dereference to the charm to give a fully qualified error # message. I wish this was a little easier to dereference, the # service_manager.get_relation_endpoints effectively does this # already. service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state( unit_state.service_name) charm_id = yield service_state.get_charm_id() charm_manager = CharmStateManager(client) charm = yield charm_manager.get_charm_state(charm_id) raise InvalidCharmHook(charm.id, hook_name) @inlineCallbacks def command(options): """Interactively debug a hook remotely on a service unit. """ environment = get_environment(options) provider = environment.get_machine_provider() client = yield provider.connect() # Verify unit and retrieve ip address options.log.debug("Retrieving unit and machine information.") ip_address, unit = yield get_ip_address_for_unit( client, provider, options.unit_name) # Verify hook name if options.hook_names != ["*"]: options.log.debug("Verifying hook names...") yield validate_hooks(client, unit, options.hook_names) # Enable debug log options.log.debug( "Enabling hook debug on unit (%r)..." % options.unit_name) yield unit.enable_hook_debug(options.hook_names) # If we don't have an ipaddress the unit isn't up yet, wait for it. if not ip_address: options.log.info("Waiting for unit") # Wait and verify the agent is running. while 1: exists_d, watch_d = unit.watch_agent() exists = yield exists_d if exists: options.log.info("Unit running") break yield watch_d # Refetch the unit address ip_address, unit = yield get_ip_address_for_unit( client, provider, options.unit_name) # Connect via ssh and start tmux. options.log.info("Connecting to remote machine %s...", ip_address) # Encode the script as base64 so that we can deliver it with a single # ssh command while still retaining standard input on the terminal fd. script = SCRIPT.replace("{unit_name}", options.unit_name) script_b64 = base64.encodestring(script).replace("\n", "").strip() cmd = '"F=`mktemp`; echo %s | base64 -d > \$F; . \$F"' % script_b64 # Yield to facilitate testing. yield os.system( "ssh -t ubuntu@%s 'sudo /bin/bash -c %s'" % (ip_address, cmd)) options.log.info("Debug session ended.") # Ends hook debugging. yield client.close() SCRIPT = r""" # Wait for tmux to be installed. while [ ! -f /usr/bin/tmux ]; do sleep 1 done if [ ! -f ~/.tmux.conf ]; then if [ -f /usr/share/byobu/profiles/tmux ]; then # Use byobu/tmux profile for familiar keybindings and branding echo "source-file /usr/share/byobu/profiles/tmux" > ~/.tmux.conf else # Otherwise, use the legacy juju/tmux configuration cat > ~/.tmux.conf < arrow key set-option -s escape-time 0 END fi fi # The beauty below is a workaround for a bug in tmux (1.5 in Oneiric) or # epoll that doesn't support /dev/null or whatever. Without it the # command hangs. 
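# (Piping through cat gives tmux non-tty stdio so new-session returns
# cleanly; "|| true" tolerates a session that already exists, e.g. a
# second debug-hooks invocation for the same unit.)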
tmux new-session -d -s {unit_name} 2>&1 | cat > /dev/null || true tmux attach -t {unit_name} """ juju-0.7.orig/juju/control/debug_log.py0000644000000000000000000001043412135220114016351 0ustar 00000000000000""" Command for distributed debug logging output via the cli. """ from fnmatch import fnmatch import logging import sys from twisted.internet.defer import inlineCallbacks from juju.control.options import ensure_abs_path from juju.control.utils import get_environment from juju.state.environment import GlobalSettingsStateManager from juju.lib.zklog import LogIterator def configure_subparser(subparsers): """Configure debug-log subcommand""" sub_parser = subparsers.add_parser("debug-log", help=command.__doc__, description=debug_log.__doc__) sub_parser.add_argument( "-e", "--environment", help="juju environment to operate in.") sub_parser.add_argument( "-r", "--replay", default=False, action="store_true", help="Display all existing logs first.") sub_parser.add_argument( "-i", "--include", action="append", help=("Filter log messages to only show these log channels or agents." "Multiple values can be specified, also supports unix globbing.") ) sub_parser.add_argument( "-x", "--exclude", action="append", help=("Filter log messages to exclude these log channels or agents." "Multiple values can be specified, also supports unix globbing.") ) sub_parser.add_argument( "-l", "--level", default="DEBUG", choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"], help="Log level to show") sub_parser.add_argument( "-n", "--limit", type=int, help="Show n log messages and exit.") sub_parser.add_argument( "-o", "--output", default="-", help="File to log to, defaults to stdout", type=ensure_abs_path) return sub_parser def command(options): """Distributed juju debug log watching.""" environment = get_environment(options) return debug_log( options.environments, environment, options.log, options) @inlineCallbacks def debug_log(config, environment, log, options): """ Enables a distributed log for all agents in the environment, and displays all log entries that have not been seen yet. """ provider = environment.get_machine_provider() client = yield provider.connect() log.info("Enabling distributed debug log.") settings_manager = GlobalSettingsStateManager(client) yield settings_manager.set_debug_log(True) if not options.limit: log.info("Tailing logs - Ctrl-C to stop.") iterator = LogIterator(client, replay=options.replay) # Setup the logging output with the user specified file. if options.output == "-": log_file = sys.stdout else: log_file = open(options.output, "a") handler = logging.StreamHandler(log_file) log_level = logging.getLevelName(options.level) handler.setLevel(log_level) formatter = logging.Formatter( "%(asctime)s %(context)s: %(name)s %(levelname)s: %(message)s") handler.setFormatter(formatter) def match(data): local_name = data["context"].split(":")[-1] if options.exclude: for exclude in options.exclude: if fnmatch(local_name, exclude) or \ fnmatch(data["context"], exclude) or \ fnmatch(data["name"], exclude): return False if options.include: for include in options.include: if fnmatch(local_name, include) or \ fnmatch(data["context"], include) or \ fnmatch(data["name"], include): return True return False return True count = 0 try: while True: entry = yield iterator.next() if not match(entry): continue # json doesn't distinguish lists v. tuples but python string # formatting doesn't accept lists. 
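# makeLogRecord() below rebuilds a LogRecord from the stored dict so
# the handler can apply its level filter and format the entry exactly
# like a locally emitted log message.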
entry["args"] = tuple(entry["args"]) record = logging.makeLogRecord(entry) if entry["levelno"] < handler.level: continue handler.handle(record) count += 1 if options.limit is not None and count == options.limit: break finally: yield settings_manager.set_debug_log(False) client.close() juju-0.7.orig/juju/control/deploy.py0000644000000000000000000001545412135220114015725 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks from juju.control import legacy from juju.control.utils import ( expand_constraints, expand_path, get_environment, sync_environment_state) from juju.charm.errors import ServiceConfigValueError from juju.charm.publisher import CharmPublisher from juju.charm.repository import resolve from juju.errors import CharmError from juju.lib import serializer from juju.state.endpoint import RelationEndpoint from juju.state.placement import place_unit from juju.state.relation import RelationStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): sub_parser = subparsers.add_parser("deploy", help=command.__doc__, description=deploy.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to deploy the charm in.") sub_parser.add_argument( "--num-units", "-n", default=1, type=int, metavar="NUM", help="Number of service units to deploy.") sub_parser.add_argument( "-u", "--upgrade", default=False, action="store_true", help="Deploy the charm on disk, increments revision if needed") sub_parser.add_argument( "--repository", help="Directory for charm lookup and retrieval", default=os.environ.get("JUJU_REPOSITORY"), type=expand_path) sub_parser.add_argument( "--constraints", help="Hardware constraints for the service", default=[], type=expand_constraints) sub_parser.add_argument( "charm", nargs=None, help="Charm name") sub_parser.add_argument( "service_name", nargs="?", help="Service name of deployed charm") sub_parser.add_argument( "--config", help="YAML file containing service options") return sub_parser def command(options): """ Deploy a charm to juju! """ environment = get_environment(options) return deploy( options.environments, environment, options.repository, options.charm, options.service_name, options.log, options.constraints, options.config, options.upgrade, num_units=options.num_units) def parse_config_options(config_file, service_name, charm): if not os.path.exists(config_file) or \ not os.access(config_file, os.R_OK): raise ServiceConfigValueError( "Config file %r not accessible." % config_file) with open(config_file) as fh: options = serializer.yaml_load(fh.read()) if not options or not isinstance(options, dict) or \ service_name not in options: raise ServiceConfigValueError( "Invalid options file passed to --config.\n" "Expected a YAML dict with service name (%r)." % service_name) # Validate and type service options and return return charm.config.validate(options[service_name]) @inlineCallbacks def deploy(env_config, environment, repository_path, charm_name, service_name, log, constraint_strs, config_file=None, upgrade=False, num_units=1): """Deploy a charm within an environment. This will publish the charm to the environment, creating a service from the charm, and get it set to be launched on a new machine. If --repository is not specified, it will be taken from the environment variable JUJU_REPOSITORY. 
""" repo, charm_url = resolve( charm_name, repository_path, environment.default_series) log.info("Searching for charm %s in %s" % (charm_url, repo)) charm = yield repo.find(charm_url) if upgrade: if repo.type != "local" or charm.type != "dir": raise CharmError( charm.path, "Only local directory charms can be upgraded on deploy") charm.set_revision(charm.get_revision() + 1) charm_id = str(charm_url.with_revision(charm.get_revision())) # Validate config options prior to deployment attempt service_options = {} service_name = service_name or charm_url.name if config_file: service_options = parse_config_options( config_file, service_name, charm) charm = yield repo.find(charm_url) charm_id = str(charm_url.with_revision(charm.get_revision())) provider = environment.get_machine_provider() placement_policy = provider.get_placement_policy() constraint_set = yield provider.get_constraint_set() constraints = constraint_set.parse(constraint_strs) client = yield provider.connect() try: yield legacy.check_constraints(client, constraint_strs) yield legacy.check_environment( client, provider.get_legacy_config_keys()) yield sync_environment_state(client, env_config, environment.name) # Publish the charm to juju storage = yield provider.get_file_storage() publisher = CharmPublisher(client, storage) yield publisher.add_charm(charm_id, charm) result = yield publisher.publish() # In future we might have multiple charms be published at # the same time. For now, extract the charm_state from the # list. charm_state = result[0] # Create the service state service_manager = ServiceStateManager(client) service_state = yield service_manager.add_service_state( service_name, charm_state, constraints) # Use the charm's ConfigOptions instance to validate service # options.. Invalid options passed will thrown an exception # and prevent the deploy. state = yield service_state.get_config() charm_config = yield charm_state.get_config() # return the validated options with the defaults included service_options = charm_config.validate(service_options) state.update(service_options) yield state.write() # Create desired number of service units if (yield service_state.is_subordinate()): log.info("Subordinate %r awaiting relationship " "to principal for deployment.", service_name) else: for i in xrange(num_units): unit_state = yield service_state.add_unit_state() yield place_unit(client, placement_policy, unit_state) # Check if we have any peer relations to establish if charm.metadata.peers: relation_manager = RelationStateManager(client) for peer_name, peer_info in charm.metadata.peers.items(): yield relation_manager.add_relation_state( RelationEndpoint(service_name, peer_info["interface"], peer_name, "peer")) log.info("Charm deployed as service: %r", service_name) finally: yield client.close() juju-0.7.orig/juju/control/destroy_environment.py0000644000000000000000000000223712135220114020541 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.utils import get_environment def configure_subparser(subparsers): """Configure destroy-environment subcommand""" sub_parser = subparsers.add_parser( "destroy-environment", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") return sub_parser @inlineCallbacks def command(options): """ Terminate all machines and resources for an environment. 
""" environment = get_environment(options) provider = environment.get_machine_provider() value = raw_input( "WARNING: this command will destroy the %r environment (type: %s).\n" "This includes all machines, services, data, and other resources. " "Continue [y/N] " % ( environment.name, environment.type)) if value.strip().lower() not in ("y", "yes"): options.log.info("Environment destruction aborted") returnValue(None) options.log.info("Destroying environment %r (type: %s)..." % ( environment.name, environment.type)) yield provider.destroy_environment() juju-0.7.orig/juju/control/destroy_service.py0000644000000000000000000000513012135220114017630 0ustar 00000000000000"""Implementation of destroy service subcommand""" from twisted.internet.defer import inlineCallbacks from juju.state.errors import UnsupportedSubordinateServiceRemoval from juju.state.relation import RelationStateManager from juju.state.service import ServiceStateManager from juju.control.utils import get_environment def configure_subparser(subparsers): """Configure destroy-service subcommand""" sub_parser = subparsers.add_parser("destroy-service", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to add the relation in.") sub_parser.add_argument( "service_name", help="Name of the service to stop") return sub_parser def command(options): """Destroy a running service, its units, and break its relations.""" environment = get_environment(options) return destroy_service( options.environments, environment, options.verbose, options.log, options.service_name) @inlineCallbacks def destroy_service(config, environment, verbose, log, service_name): provider = environment.get_machine_provider() client = yield provider.connect() service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state(service_name) if (yield service_state.is_subordinate()): # We can destroy the service if does not have relations. # That implies that principals have already been torn # down (or were never added). 
relation_manager = RelationStateManager(client) relations = yield relation_manager.get_relations_for_service( service_state) if relations: principal_service = None # if we have a container we can destroy the subordinate # (revisit in the future) for relation in relations: if relation.relation_scope != "container": continue services = yield relation.get_service_states() remote_service = [s for s in services if s.service_name != service_state.service_name][0] if not (yield remote_service.is_subordinate()): principal_service = remote_service break if principal_service: raise UnsupportedSubordinateServiceRemoval( service_state.service_name, principal_service.service_name) yield service_manager.remove_service_state(service_state) log.info("Service %r destroyed.", service_state.service_name) juju-0.7.orig/juju/control/expose.py0000644000000000000000000000275712135220114015736 0ustar 00000000000000"""Implementation of expose subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment from juju.state.service import ServiceStateManager def configure_subparser(subparsers): """Configure expose subcommand""" sub_parser = subparsers.add_parser("expose", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "service_name", help="Name of the service that should be exposed.") return sub_parser def command(options): """Expose a service to the internet.""" environment = get_environment(options) return expose( options.environments, environment, options.verbose, options.log, options.service_name) @inlineCallbacks def expose( config, environment, verbose, log, service_name): """Expose a service.""" provider = environment.get_machine_provider() client = yield provider.connect() try: service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state(service_name) already_exposed = yield service_state.get_exposed_flag() if not already_exposed: yield service_state.set_exposed_flag() log.info("Service %r was exposed.", service_name) else: log.info("Service %r was already exposed.", service_name) finally: yield client.close() juju-0.7.orig/juju/control/initialize.py0000644000000000000000000000267212135220114016570 0ustar 00000000000000from base64 import b64decode import os from twisted.internet.defer import inlineCallbacks from txzookeeper import ZookeeperClient from juju.lib import serializer from juju.state.initialize import StateHierarchy def configure_subparser(subparsers): sub_parser = subparsers.add_parser("initialize", help=command.__doc__) sub_parser.add_argument( "--instance-id", required=True, help="Provider instance id for the bootstrap node") sub_parser.add_argument( "--admin-identity", required=True, help="Admin access control identity for zookeeper ACLs") sub_parser.add_argument( "--constraints-data", required=True, help="Base64-encoded yaml dump of the environment constraints data") sub_parser.add_argument( "--provider-type", required=True, help="Environment machine provider type") return sub_parser @inlineCallbacks def command(options): """ Initialize Zookeeper hierarchy """ zk_address = os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181") client = yield ZookeeperClient(zk_address).connect() try: constraints_data = serializer.load(b64decode(options.constraints_data)) hierarchy = StateHierarchy( client, options.admin_identity, options.instance_id, constraints_data, options.provider_type) yield hierarchy.initialize() finally: yield 
client.close() juju-0.7.orig/juju/control/legacy.py0000644000000000000000000000230412135220114015663 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.errors import JujuError from juju.state.environment import EnvironmentStateManager _ERROR = """ Your environments.yaml contains deprecated keys; they must not be used other than in legacy deployments. The affected keys are: %s This error can be resolved according to the instructions available at: https://juju.ubuntu.com/DeprecatedEnvironmentSettings """ def error(keys): raise JujuError(_ERROR % "\n ".join(sorted(keys))) @inlineCallbacks def check_environment(client, keys): if not keys: return esm = EnvironmentStateManager(client) if not (yield esm.get_in_legacy_environment()): error(keys) @inlineCallbacks def check_constraints(client, constraint_strs): if not constraint_strs: return esm = EnvironmentStateManager(client) if (yield esm.get_in_legacy_environment()): raise JujuError( "Constraints are not valid in legacy deployments. To use machine " "constraints, please deploy your environment again from scratch. " "You can continue to use this environment as before, but any " "attempt to set constraints will fail.") juju-0.7.orig/juju/control/open_tunnel.py0000644000000000000000000000160512135220114016750 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, Deferred from juju.control.utils import get_environment def configure_subparser(subparsers): sub_parser = subparsers.add_parser("open-tunnel", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to operate on.") # TODO Coming next: #sub_parser.add_argument( # "unit_or_machine", nargs="*", help="Name of unit or machine") return sub_parser @inlineCallbacks def command(options): """Establish a tunnel to the environment. """ environment = get_environment(options) provider = environment.get_machine_provider() yield provider.connect(share=True) options.log.info("Tunnel to the environment is open. " "Press CTRL-C to close it.") yield hanging_deferred() def hanging_deferred(): # Hang forever. return Deferred() juju-0.7.orig/juju/control/options.py0000644000000000000000000000711612135220114016120 0ustar 00000000000000""" Argparse implementation of twistd standard unix options. """ import os import argparse from twisted.python.util import uidFromString, gidFromString from twisted.scripts._twistd_unix import _umask def ensure_abs_path(path): """ Ensure the parent directory to the given path exists. 
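Intermediate directories are created as needed; the special value "-" is returned unchanged, denoting stdout.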
Returns the absolute file location to the given path """ if path == "-": return path path = os.path.abspath(path) parent_dir = os.path.dirname(path) if not os.path.exists(parent_dir): os.makedirs(parent_dir) return path def setup_twistd_options(parser, agent): """ Mimic the standard twisted options with some sane defaults """ # Standard twisted app options development_group = parser.add_argument_group("Development options") development_group.add_argument( "--debug", "-b", action="store_true", help="Run the application in the python debugger", ) development_group.add_argument( "--profile", "-p", action="store_true", help="Run in profile mode, dumping results to specified file", ) development_group.add_argument( "--savestats", "-s", action="store_true", help="Save the Stats object rather than text output of the profiler", ) # Standard unix daemon options unix_group = parser.add_argument_group("Unix Daemon options") unix_group.add_argument( "--rundir", "-d", default=".", help="Change to supplied directory before running", type=os.path.abspath, ) unix_group.add_argument( "--pidfile", default="", help="Path to the pid file", ) unix_group.add_argument( "--logfile", default="%s.log" % agent.name, help="Log to a specified file, - for stdout", type=ensure_abs_path, ) unix_group.add_argument( "--loglevel", default="DEBUG", choices=("DEBUG", "INFO", "ERROR", "WARNING", "CRITICAL"), help="Log level") unix_group.add_argument( "--chroot", default=None, help="Chroot to a supplied directory before running", type=os.path.abspath, ) unix_group.add_argument( "--umask", default='0022', type=_umask, help="The (octal) file creation mask to apply.", ) unix_group.add_argument( "--uid", "-u", default=None, type=uidFromString, help="The uid to run as.", ) unix_group.add_argument( "--gid", "-g", default=None, type=gidFromString, help="The gid to run as.", ) unix_group.add_argument( "--nodaemon", "-n", default=False, dest="nodaemon", action="store_true", help="Don't daemonize (stay in foreground)", ) unix_group.add_argument( "--syslog", default=False, action="store_true", help="Log to syslog, not to file", ) unix_group.add_argument( "--sysprefix", dest="prefix", default=agent.name, help="Syslog prefix [default: %s]" % (agent.name), ) # Hidden options expected by twistd, with sane defaults parser.add_argument( "--save", default=True, action="store_false", dest="no_save", help=argparse.SUPPRESS, ) parser.add_argument( "--profiler", default="cprofile", help=argparse.SUPPRESS, ) parser.add_argument( "--reactor", "-r", default="epoll", help=argparse.SUPPRESS, ) parser.add_argument( "--originalname", help=argparse.SUPPRESS, ) parser.add_argument( "--euid", help=argparse.SUPPRESS, ) juju-0.7.orig/juju/control/remove_relation.py0000644000000000000000000000670012135220114017615 0ustar 00000000000000"""Implementation of remove-relation juju subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment from juju.state.errors import (AmbiguousRelation, NoMatchingEndpoints, UnsupportedSubordinateServiceRemoval) from juju.state.relation import RelationStateManager from juju.state.service import ServiceStateManager def configure_subparser(subparsers): """Configure remove-relation subcommand""" sub_parser = subparsers.add_parser("remove-relation", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to add the relation in.") sub_parser.add_argument( "--verbose", help="Provide additional information when running the command.") 
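# Each descriptor names a relation endpoint, optionally qualified by a
# relation name to disambiguate, e.g. "wordpress mysql" or
# "wordpress:db mysql:db" (service names illustrative).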
sub_parser.add_argument( "descriptors", nargs=2, metavar="<service>[:<relation>]", help="Define the relation endpoints for the relation to be removed.") return sub_parser def command(options): """Remove a relation between services in juju.""" environment = get_environment(options) return remove_relation( options.environments, environment, options.verbose, options.log, *options.descriptors) @inlineCallbacks def remove_relation(env_config, environment, verbose, log, *descriptors): """Remove relation between relation endpoints described by `descriptors`""" provider = environment.get_machine_provider() client = yield provider.connect() relation_state_manager = RelationStateManager(client) service_state_manager = ServiceStateManager(client) endpoint_pairs = yield service_state_manager.join_descriptors( *descriptors) if verbose: log.info("Endpoint pairs: %s", endpoint_pairs) if len(endpoint_pairs) == 0: raise NoMatchingEndpoints() elif len(endpoint_pairs) > 1: raise AmbiguousRelation(descriptors, endpoint_pairs) # At this point we just have one endpoint pair. We need to pick # just one of the endpoints if it's a peer endpoint, since that's # our current API - join descriptors takes two descriptors, but # add_relation_state takes one or two endpoints. TODO consider # refactoring. endpoints = endpoint_pairs[0] if endpoints[0] == endpoints[1]: endpoints = endpoints[0:1] relation_state = yield relation_state_manager.get_relation_state( *endpoints) # Look at both endpoints; if we are dealing with a container relation, # decide if one end is a principal. service_pair = [] # ordered such that sub, principal is_container = False has_principal = False for ep in endpoints: if ep.relation_scope == "container": is_container = True service = yield service_state_manager.get_service_state( ep.service_name) if (yield service.is_subordinate()): service_pair.append(service) else: service_pair.insert(0, service) has_principal = True if is_container and len(service_pair) == 2 and has_principal: sub, principal = service_pair raise UnsupportedSubordinateServiceRemoval(sub.service_name, principal.service_name) yield relation_state_manager.remove_relation_state(relation_state) yield client.close() log.info("Removed %s relation from all service units.", endpoints[0].relation_type) juju-0.7.orig/juju/control/remove_unit.py0000644000000000000000000000363512135220114016763 0ustar 00000000000000"""Implementation of remove unit subcommand""" from twisted.internet.defer import inlineCallbacks from juju.state.errors import UnsupportedSubordinateServiceRemoval from juju.state.service import ServiceStateManager, parse_service_name from juju.control.utils import get_environment def configure_subparser(subparsers): """Configure remove-unit subcommand""" sub_parser = subparsers.add_parser("remove-unit", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "unit_names", nargs="+", metavar="SERVICE_UNIT", help="Name of the service unit to remove.") return sub_parser def command(options): """Remove a service unit.""" environment = get_environment(options) return remove_unit( options.environments, environment, options.verbose, options.log, options.unit_names) @inlineCallbacks def remove_unit(config, environment, verbose, log, unit_names): provider = environment.get_machine_provider() client = yield provider.connect() try: service_manager = ServiceStateManager(client) for unit_name in unit_names: service_name = parse_service_name(unit_name) service_state = yield
service_manager.get_service_state( service_name) unit_state = yield service_state.get_unit_state(unit_name) if (yield service_state.is_subordinate()): container = yield unit_state.get_container() raise UnsupportedSubordinateServiceRemoval( unit_state.unit_name, container.unit_name) yield service_state.remove_unit_state(unit_state) log.info("Unit %r removed from service %r", unit_state.unit_name, service_state.service_name) finally: yield client.close() juju-0.7.orig/juju/control/resolved.py0000644000000000000000000001004012135220114016240 0ustar 00000000000000"""Implementation of resolved subcommand""" import argparse from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.utils import get_environment from juju.state.service import ServiceStateManager, RETRY_HOOKS, NO_HOOKS from juju.state.relation import RelationStateManager from juju.state.errors import RelationStateNotFound from juju.unit.workflow import is_unit_running, is_relation_running def configure_subparser(subparsers): """Configure resolved subcommand""" sub_parser = subparsers.add_parser( "resolved", help=command.__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, description=resolved.__doc__) sub_parser.add_argument( "--retry", "-r", action="store_true", help="Retry failed hook.") sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "service_unit_name", help="Name of the service unit that should be resolved") sub_parser.add_argument( "relation_name", nargs="?", default=None, help="Name of the unit relation that should be resolved") return sub_parser def command(options): """Mark an error as resolved in a unit or unit relation.""" environment = get_environment(options) return resolved( options.environments, environment, options.verbose, options.log, options.service_unit_name, options.relation_name, options.retry) @inlineCallbacks def resolved( config, environment, verbose, log, unit_name, relation_name, retry): """Mark an error as resolved in a unit or unit relation. If one of a unit's charm non-relation hooks returns a non-zero exit status, the entire unit can be considered to be in a non-running state. As a resolution, the unit can be manually returned to a running state via the juju resolved command. Optionally this command can rerun the failed hook. This resolution also applies separately to each of the unit's relations. If one of the relation hooks failed, there is no notion of retrying (the change is gone), but resolving will allow additional relation hooks for that relation to proceed.
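For example (unit and relation names illustrative):

    $ juju resolved wordpress/0           # resolve a unit error
    $ juju resolved --retry wordpress/0   # also rerun the failed hook
    $ juju resolved wordpress/0 db        # resolve an error in the db relation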
""" provider = environment.get_machine_provider() client = yield provider.connect() service_manager = ServiceStateManager(client) relation_manager = RelationStateManager(client) unit_state = yield service_manager.get_unit_state(unit_name) service_state = yield service_manager.get_service_state( unit_name.split("/")[0]) retry = retry and RETRY_HOOKS or NO_HOOKS if not relation_name: running, workflow_state = yield is_unit_running(client, unit_state) if running: log.info("Unit %r already running: %s", unit_name, workflow_state) client.close() returnValue(False) yield unit_state.set_resolved(retry) log.info("Marked unit %r as resolved", unit_name) returnValue(True) # Check for the matching relations service_relations = yield relation_manager.get_relations_for_service( service_state) service_relations = [ sr for sr in service_relations if sr.relation_name == relation_name] if not service_relations: raise RelationStateNotFound() # Verify the relations are in need of resolution. resolved_relations = {} for service_relation in service_relations: unit_relation = yield service_relation.get_unit_state(unit_state) running, state = yield is_relation_running(client, unit_relation) if not running: resolved_relations[unit_relation.internal_relation_id] = retry if not resolved_relations: log.warning("Matched relations are all running") client.close() returnValue(False) # Mark the relations as resolved. yield unit_state.set_relation_resolved(resolved_relations) log.info( "Marked unit %r relation %r as resolved", unit_name, relation_name) client.close() juju-0.7.orig/juju/control/scp.py0000644000000000000000000000607712135220114015217 0ustar 00000000000000from argparse import RawDescriptionHelpFormatter import os from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.utils import ( get_environment, get_ip_address_for_machine, get_ip_address_for_unit, parse_passthrough_args, ParseError) def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "scp", help=command.__doc__, usage=("%(prog)s [-h] [-e ENV] " "[remote_host:]file1 ... [remote_host:]file2"), formatter_class=RawDescriptionHelpFormatter, description=( "positional arguments:\n" " [remote_host:]file The remote host can the name of either\n" " a Juju unit/machine or a remote system")) sub_parser.add_argument( "--environment", "-e", help="Environment to operate on.", metavar="ENV") return sub_parser def passthrough(options, extra): """Second parsing phase to parse `extra` to passthrough to scp itself. Partitions into flags and file specifications. """ flags, positional = parse_passthrough_args(extra, "cFiloPS") if not positional: raise ParseError("too few arguments") options.scp_flags = flags options.paths = positional def open_scp(flags, paths): # XXX - TODO - Might be nice if we had the ability to get the user's # private key path and utilize it here, ie the symmetric end to # get user public key. args = ["scp"] # Unlike ssh, choose not to share connections by default, given # that the target usage may be for large files. The user's ssh # config would probably be the best place to get this anyway. 
args.extend(flags) args.extend(paths) os.execvp("scp", args) @inlineCallbacks def _expand_unit_or_machine(client, provider, path): """Expands service unit or machine ID into DNS name""" parts = path.split(":") if len(parts) > 1: remote_system = parts[0] ip_address = None if remote_system.isdigit(): # machine id, will not pick up dotted IP addresses ip_address, _ = yield get_ip_address_for_machine( client, provider, remote_system) elif "/" in remote_system: # service unit ip_address, _ = yield get_ip_address_for_unit( client, provider, remote_system) if ip_address: returnValue("ubuntu@%s:%s" % (ip_address, ":".join(parts[1:]))) returnValue(path) # no need to expand @inlineCallbacks def command(options): """Use scp to copy files to/from given unit or machine. """ # Unlike juju ssh, no attempt to verify liveness of the agent, # instead it's just a matter of whether the underlying scp will work # or not. environment = get_environment(options) provider = environment.get_machine_provider() client = yield provider.connect() try: paths = [(yield _expand_unit_or_machine(client, provider, path)) for path in options.paths] open_scp(options.scp_flags, paths) finally: yield client.close() juju-0.7.orig/juju/control/ssh.py0000644000000000000000000000702412135220114015220 0ustar 00000000000000from argparse import RawDescriptionHelpFormatter import os from twisted.internet.defer import inlineCallbacks from juju.control.utils import ( get_environment, get_ip_address_for_machine, get_ip_address_for_unit, parse_passthrough_args, ParseError) from juju.state.errors import MachineStateNotFound from juju.state.sshforward import prepare_ssh_sharing def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "ssh", help=command.__doc__, usage=("%(prog)s [-h] [-e ENV] unit_or_machine [command]"), formatter_class=RawDescriptionHelpFormatter, description=( "positional arguments:\n" " unit_or_machine Name of unit or machine\n" " [command] Optional command to run on machine")) sub_parser.add_argument( "--environment", "-e", help="Environment to operate on.", metavar="ENV") return sub_parser def passthrough(options, extra): """Second parsing phase to parse `extra` to passthrough to ssh itself. Partitions into flags, unit_or_machine, and optional ssh command. """ flags, positional = parse_passthrough_args(extra, "bcDeFIiLlmOopRSWw") if not positional: raise ParseError("too few arguments") options.ssh_flags = flags options.unit_or_machine = positional.pop(0) options.ssh_command = positional # if any def open_ssh(flags, ip_address, ssh_command): # XXX - TODO - Might be nice if we had the ability to get the user's # private key path and utilize it here, ie the symmetric end to # get user public key. args = ["ssh"] args.extend(prepare_ssh_sharing()) args.extend(flags) args.extend(["ubuntu@%s" % ip_address]) args.extend(ssh_command) os.execvp("ssh", args) @inlineCallbacks def command(options): """Launch an ssh shell on the given unit or machine. 
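For example (names illustrative), "juju ssh 0" opens a shell on machine 0, while "juju ssh mysql/0 uptime" runs a single command on that unit.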
""" environment = get_environment(options) provider = environment.get_machine_provider() client = yield provider.connect() label = machine = unit = None # First check if it's a juju machine id if options.unit_or_machine.isdigit(): options.log.debug( "Fetching machine address using juju machine id.") ip_address, machine = yield get_ip_address_for_machine( client, provider, options.unit_or_machine) machine.get_ip_address = get_ip_address_for_machine label = "machine" # Next check if it's a unit elif "/" in options.unit_or_machine: options.log.debug( "Fetching machine address using unit name.") ip_address, unit = yield get_ip_address_for_unit( client, provider, options.unit_or_machine) unit.get_ip_address = get_ip_address_for_unit label = "unit" else: raise MachineStateNotFound(options.unit_or_machine) agent_state = machine or unit # Now verify the relevant agent is operational via its agent. exists_d, watch_d = agent_state.watch_agent() exists = yield exists_d if not exists: # If not wait on it. options.log.info("Waiting for %s to come up." % label) yield watch_d # Double check the address we have is valid, else refetch. if ip_address is None: ip_address, machine = yield agent_state.get_ip_address( client, provider, options.unit_or_machine) yield client.close() options.log.info("Connecting to %s %s at %s", label, options.unit_or_machine, ip_address) open_ssh(options.ssh_flags, ip_address, options.ssh_command) juju-0.7.orig/juju/control/status.py0000644000000000000000000006173512135220114015757 0ustar 00000000000000from fnmatch import fnmatch import argparse import functools import json import sys import yaml from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.utils import get_environment from juju.errors import ProviderError from juju.state.errors import UnitRelationStateNotFound from juju.state.charm import CharmStateManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager, parse_service_name from juju.state.relation import RelationStateManager from juju.unit.workflow import WorkflowStateClient # a minimal registry for renderers # maps from format name to callable renderers = {} def configure_subparser(subparsers): sub_parser = subparsers.add_parser( "status", help=status.__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, description=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to status.") sub_parser.add_argument("--output", help="An optional filename to output " "the result to", type=argparse.FileType("w"), default=sys.stdout) sub_parser.add_argument("--format", help="Select an output format", default="yaml" ) sub_parser.add_argument("scope", nargs="*", help="""scope of status request, service or unit""" """ must match at least one of these""") return sub_parser def command(options): """Output status information about a deployment. This command will report on the runtime state of various system entities. $ juju status will return data on entire default deployment. $ juju status -e DEPLOYMENT2 will return data on the DEPLOYMENT2 envionment. """ environment = get_environment(options) renderer = renderers.get(options.format) if renderer is None: formats = sorted(renderers.keys()) formats = ", ".join(formats) raise SystemExit( "Unsupported render format %s (valid formats: %s)." 
% ( options.format, formats)) return status(environment, options.scope, renderer, options.output, options.log) @inlineCallbacks def status(environment, scope, renderer, output, log): """Display environment status information. """ provider = environment.get_machine_provider() client = yield provider.connect() try: # Collect status information command = StatusCommand(client, provider, log) state = yield command(scope) #state = yield collect(scope, provider, client, log) finally: yield client.close() # Render renderer(state, output, environment) def digest_scope(scope): """Parse scope used to filter status information. `scope`: a list of name specifiers. see collect() Returns a tuple of (service_filter, unit_filter). The values in either filter list will be passed as a glob to fnmatch """ services = [] units = [] if scope is not None: for value in scope: if "/" in value: units.append(value) else: services.append(value) return (services, units) class StatusCommand(object): def __init__(self, client, provider, log): """ Callable status command object. `client`: ZK client connection `provider`: machine provider for the environment `log`: a Python stdlib logger. """ self.client = client self.provider = provider self.log = log self.service_manager = ServiceStateManager(client) self.relation_manager = RelationStateManager(client) self.machine_manager = MachineStateManager(client) self.charm_manager = CharmStateManager(client) self._reset() def _reset(self, scope=None): # init per-run state # self.state is assembled by the various process methods # intermediate access to state is made more convenient # using these references to its internals. self.service_data = {} # service name: service info self.machine_data = {} # machine id: machine state self.unit_data = {} # unit_name :unit_info # used in collecting subordinate (which are added to state in a two # phase pass) self.subordinates = {} # service : set(principal service names) self.state = dict(services=self.service_data, machines=self.machine_data) # Filtering info self.seen_machines = set() self.filter_services, self.filter_units = digest_scope(scope) @inlineCallbacks def __call__(self, scope=None): """Extract status information into nested dicts for rendering. `scope`: an optional list of name specifiers. Globbing based wildcards supported. Defaults to all units, services and relations. """ self._reset(scope) # Pass 1 Gather Data (including principals and subordinates) # this builds unit info and container relationships # which is assembled in pass 2 below yield self._process_services() # Pass 2: Nest information according to principal/subordinates # rules self._process_subordinates() yield self._process_machines() returnValue(self.state) @inlineCallbacks def _process_services(self): """ For each service gather the following information:: : charm: exposed: relations: units: """ services = yield self.service_manager.get_all_service_states() for service in services: if len(self.filter_services): found = False for filter_service in self.filter_services: if fnmatch(service.service_name, filter_service): found = True break if not found: continue yield self._process_service(service) @inlineCallbacks def _process_service(self, service): """ Gather the service info (described in _process_services). 
`service`: ServiceState instance """ relation_data = {} service_data = self.service_data charm_id = yield service.get_charm_id() charm = yield self.charm_manager.get_charm_state(charm_id) service_data[service.service_name] = ( dict(units={}, charm=charm.id, relations=relation_data)) if (yield service.is_subordinate()): service_data[service.service_name]["subordinate"] = True yield self._process_expose(service) relations, rel_svc_map = yield self._process_relation_map( service) unit_matched = yield self._process_units(service, relations, rel_svc_map) # after filtering units check if any matched or remove the # service from the output if self.filter_units and not unit_matched: del service_data[service.service_name] return yield self._process_relations(service, relations, rel_svc_map) @inlineCallbacks def _process_units(self, service, relations, rel_svc_map): """ Gather unit information for a service:: : agent-state: machine: open-ports: ["port/protocol", ...] public-address: subordinates: `service`: ServiceState intance `relations`: list of ServiceRelationState instance for this service `rel_svc_map`: maps relation internal ids to the remote endpoint service name. This references the name of the remote endpoint and so is generated per service. """ units = yield service.get_all_unit_states() unit_matched = False for unit in units: if len(self.filter_units): found = False for filter_unit in self.filter_units: if fnmatch(unit.unit_name, filter_unit): found = True break if not found: continue yield self._process_unit(service, unit, relations, rel_svc_map) unit_matched = True returnValue(unit_matched) @inlineCallbacks def _process_unit(self, service, unit, relations, rel_svc_map): """ Generate unit info for a single unit of a single service. `unit`: ServiceUnitState see `_process_units` for an explanation of other arguments. """ u = self.unit_data[unit.unit_name] = dict() container = yield unit.get_container() if container: u["container"] = container.unit_name self.subordinates.setdefault(unit.service_name, set()).add(container.service_name) machine_id = yield unit.get_assigned_machine_id() u["machine"] = machine_id unit_workflow_client = WorkflowStateClient(self.client, unit) unit_state = yield unit_workflow_client.get_state() if not unit_state: u["agent-state"] = "pending" else: unit_connected = yield unit.has_agent() u["agent-state"] = unit_state.replace("_", "-") \ if unit_connected else "down" exposed = self.service_data[service.service_name].get("exposed") open_ports = yield unit.get_open_ports() if exposed: u["open-ports"] = ["{port}/{proto}".format(**port_info) for port_info in open_ports] elif open_ports: # Ensure a hint is provided that there are open ports if # not exposed by setting the key in the output self.service_data[service.service_name]["exposed"] = False u["public-address"] = yield unit.get_public_address() # indicate we should include information about this # machine later self.seen_machines.add(machine_id) # collect info on each relation for the service unit yield self._process_unit_relations(service, unit, relations, rel_svc_map) @inlineCallbacks def _process_relation_map(self, service): """Generate a mapping from a services relations to the service name of the remote endpoints. 
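For example (names illustrative), a wordpress service related to mysql over its "db" relation maps that relation's internal id to "mysql".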
returns: ([ServiceRelationState, ...], mapping) """ relation_data = self.service_data[service.service_name]["relations"] relation_mgr = self.relation_manager relations = yield relation_mgr.get_relations_for_service(service) rel_svc_map = {} for relation in relations: rel_services = yield relation.get_service_states() # A single related service implies a peer relation. More # imply a bi-directional provides/requires relationship. # In the later case we omit the local side of the relation # when reporting. if len(rel_services) > 1: # Filter out self from multi-service relations. rel_services = [ rsn for rsn in rel_services if rsn.service_name != service.service_name] if len(rel_services) > 1: raise ValueError("Unexpected relationship with more " "than 2 endpoints") rel_service = rel_services[0] relation_data.setdefault(relation.relation_name, set()).add( rel_service.service_name) rel_svc_map[relation.internal_relation_id] = ( rel_service.service_name) returnValue((relations, rel_svc_map)) @inlineCallbacks def _process_relations(self, service, relations, rel_svc_map): """Generate relation information for a given service Each service with relations will have a relations dict nested under it with one or more relations described:: relations: : - """ relation_data = self.service_data[service.service_name]["relations"] for relation in relations: rel_services = yield relation.get_service_states() # A single related service implies a peer relation. More # imply a bi-directional provides/requires relationship. # In the later case we omit the local side of the relation # when reporting. if len(rel_services) > 1: # Filter out self from multi-service relations. rel_services = [ rsn for rsn in rel_services if rsn.service_name != service.service_name] if len(rel_services) > 1: raise ValueError("Unexpected relationship with more " "than 2 endpoints") rel_service = rel_services[0] relation_data.setdefault( relation.relation_name, set()).add( rel_service.service_name) rel_svc_map[relation.internal_relation_id] = ( rel_service.service_name) # Normalize the sets back to lists for r in relation_data: relation_data[r] = sorted(relation_data[r]) @inlineCallbacks def _process_unit_relations(self, service, unit, relations, rel_svc_map): """Collect UnitRelationState information per relation and per unit. Includes information under each unit for its relations including its relation state and information about any possible errors. see `_process_relations` for argument information """ u = self.unit_data[unit.unit_name] relation_errors = {} for relation in relations: try: relation_unit = yield relation.get_unit_state(unit) except UnitRelationStateNotFound: # This exception will occur when relations are # established between services without service # units, and therefore never have any # corresponding service relation units. # UPDATE: common with subordinate services, and # some testing scenarios. continue relation_workflow_client = WorkflowStateClient( self.client, relation_unit) workflow_state = yield relation_workflow_client.get_state() rel_svc_name = rel_svc_map.get(relation.internal_relation_id) if rel_svc_name and workflow_state not in ("up", None): relation_errors.setdefault( relation.relation_name, set()).add(rel_svc_name) if relation_errors: # Normalize sets and store. u["relation-errors"] = dict( [(r, sorted(relation_errors[r])) for r in relation_errors]) def _process_subordinates(self): """Properly nest subordinate units under their principal service's unit nodes. 
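(Subordinate units are reported under the principal unit in whose container they run, rather than under their own service's unit list.)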
Services and units are generated in one pass, then iterated by this method to structure the output data to reflect actual unit containment. Subordinate units will include the follow:: subordinate: true subordinate-to: - Principal services that have subordinates will include:: subordinates: : agent-state: """ service_data = self.service_data for unit_name, u in self.unit_data.iteritems(): container = u.get("container") if container: d = self.unit_data[container].setdefault("subordinates", {}) d[unit_name] = u # remove key that don't appear in output or come from container for key in ("container", "machine", "public-address"): u.pop(key, None) else: service_name = parse_service_name(unit_name) service_data[service_name]["units"][unit_name] = u for sub_service, principal_services in self.subordinates.iteritems(): service_data[sub_service]["subordinate-to"] = sorted( principal_services) service_data[sub_service].pop("units", None) @inlineCallbacks def _process_expose(self, service): """Indicate if a service is exposed or not.""" exposed = yield service.get_exposed_flag() if exposed: self.service_data[service.service_name].update(exposed=exposed) returnValue(exposed) @inlineCallbacks def _process_machines(self): """Gather machine information. machines: : agent-state: dns-name: instance-id: instance-state: """ machines = yield self.machine_manager.get_all_machine_states() provider_machines = yield self._process_provider_machines() for machine_state in machines: if (self.filter_services or self.filter_units) and \ machine_state.id not in self.seen_machines: continue yield self._process_machine(machine_state, provider_machines) @inlineCallbacks def _process_provider_machines(self): """Retrieve known provider machines into map[instance-id] = machine. """ index = {} try: provider_machines = yield self.provider.get_machines() except ProviderError: self.log.exception( "Can't retrieve machine information from provider") returnValue(index) # missing is only when requesting by id. 
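# Index by instance id so each machine state below can be joined to
# its provider record without issuing per-machine provider requests.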
for m in provider_machines: index[m.instance_id] = m returnValue(index) @inlineCallbacks def _process_machine(self, machine_state, provider_machines): """ `machine_state`: MachineState instance """ instance_id = yield machine_state.get_instance_id() m = {"instance-id": instance_id \ if instance_id is not None else "pending"} if instance_id is None: self.machine_data[machine_state.id] = m return pm = provider_machines.get(instance_id) if pm is None: self.log.exception( "Machine provider information missing: machine %s" % ( machine_state.id)) self.machine_data[machine_state.id] = m return m["dns-name"] = pm.dns_name m["instance-state"] = pm.state if (yield machine_state.has_agent()): # if the agent's connected, we're fine m["agent-state"] = "running" else: units = ( yield machine_state.get_all_service_unit_states()) for unit in units: unit_workflow_client = WorkflowStateClient( self.client, unit) if (yield unit_workflow_client.get_state()): # for unit to have a state, its agent must # have run, which implies the machine agent # must have been running correctly at some # point in the past m["agent-state"] = "down" break # otherwise we're probably just still waiting if not 'agent-state' in m: m["agent-state"] = "not-started" self.machine_data[machine_state.id] = m def render_yaml(data, filelike, environment): # remove the root nodes empty name yaml.dump( data, filelike, default_flow_style=False, Dumper=yaml.CSafeDumper) renderers["yaml"] = render_yaml def jsonify(data, filelike, pretty=True, **kwargs): args = dict(skipkeys=True) args.update(kwargs) if pretty: args["sort_keys"] = True args["indent"] = 4 return json.dump(data, filelike, **args) def render_json(data, filelike, environment): jsonify(data, filelike) renderers["json"] = render_json # Supplement kwargs provided to pydot.Cluster/Edge/Node. # The first key is used as the data type selector. DEFAULT_STYLE = { "service_container": { "bgcolor": "#dedede", }, "service": { "color": "#772953", "shape": "component", "style": "filled", "fontcolor": "#ffffff", }, "unit": { "color": "#DD4814", "fontcolor": "#ffffff", "shape": "box", "style": "filled", }, "subunit": { "color": "#c9c9c9", "fontcolor": "#ffffff", "shape": "box", "style": "filled", "rank": "same" }, "relation": { "dir": "none"} } def safe_dot_label(name): """Convert a name to a label safe for use in DOT. Works around an issue where service names like wiki-db will produce DOT items with names like cluster_wiki-db where the trailing '-' invalidates the name. """ return name.replace("-", "_") def render_dot( data, filelike, environment, format="dot", style=DEFAULT_STYLE): """Render a graphiz output of the status information. """ try: import pydot except ImportError: raise SystemExit("You need to install the pydot " "library to support DOT visualizations") dot = pydot.Dot(graph_name=environment.name) # first create a cluster for each service seen_relations = set() for service_name, service in data["services"].iteritems(): cluster = pydot.Cluster( safe_dot_label(service_name), shape="component", label="%s service" % service_name, **style["service_container"]) snode = pydot.Node(safe_dot_label(service_name), label="<%s
%s>" % ( service_name, service["charm"]), **style["service"]) cluster.add_node(snode) for unit_name, unit in service.get("units", {}).iteritems(): subordinates = unit.get("subordinates") if subordinates: container = pydot.Subgraph() un = pydot.Node(safe_dot_label(unit_name), label="<%s
%s>" % ( unit_name, unit.get("public-address")), **style["unit"]) container.add_node(un) for sub in subordinates: s = pydot.Node(safe_dot_label(sub), label="<%s
>" % (sub), **style["subunit"]) container.add_node(s) container.add_edge(pydot.Edge(un, s, **style["relation"])) cluster.add_subgraph(container) else: un = pydot.Node(safe_dot_label(unit_name), label="<%s
%s>" % ( unit_name, unit.get("public-address")), **style["unit"]) cluster.add_node(un) cluster.add_edge(pydot.Edge(snode, un)) dot.add_subgraph(cluster) # now map the relationships for kind, relation in service["relations"].iteritems(): if not isinstance(relation, list): relation = (relation,) for rel in relation: src = safe_dot_label(rel) dest = safe_dot_label(service_name) descriptor = ":".join(tuple(sorted((src, dest)))) #kind = safe_dot_label("%s/%s" % (descriptor, kind)) if descriptor not in seen_relations: seen_relations.add(descriptor) dot.add_edge(pydot.Edge( src, dest, label=kind, **style["relation"] )) if format == "dot": filelike.write(dot.to_string()) else: filelike.write(dot.create(format=format)) renderers["dot"] = render_dot renderers["svg"] = functools.partial(render_dot, format="svg") renderers["png"] = functools.partial(render_dot, format="png") juju-0.7.orig/juju/control/terminate_machine.py0000644000000000000000000000505112135220114020075 0ustar 00000000000000"""Implementation of terminate-machine subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control.utils import sync_environment_state, get_environment from juju.errors import CannotTerminateMachine from juju.state.errors import MachineStateNotFound from juju.state.machine import MachineStateManager def configure_subparser(subparsers): """Configure terminate-machine subcommand""" sub_parser = subparsers.add_parser( "terminate-machine", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="Environment to terminate machines.") sub_parser.add_argument( "machine_ids", metavar="ID", type=int, nargs="*", help="Machine IDs to terminate") return sub_parser def command(options): """Terminate machines in an environment.""" environment = get_environment(options) return terminate_machine( options.environments, environment, options.verbose, options.log, options.machine_ids) @inlineCallbacks def terminate_machine(config, environment, verbose, log, machine_ids): """Terminates the machines in `machine_ids`. Like the underlying code in MachineStateManager, it's permissible if the machine ID is already terminated or even never running. If we determine this is not desired behavior, presumably propagate that back to the state manager. XXX However, we currently special case support of not terminating the "root" machine, that is the one running the provisioning agent. At some point, this will be managed like any other service, but until then it seems best to ensure it's not terminated at this level. 
""" provider = environment.get_machine_provider() client = yield provider.connect() terminated_machine_ids = [] try: yield sync_environment_state(client, config, environment.name) machine_state_manager = MachineStateManager(client) for machine_id in machine_ids: if machine_id == 0: raise CannotTerminateMachine( 0, "environment would be destroyed") removed = yield machine_state_manager.remove_machine_state( machine_id) if not removed: raise MachineStateNotFound(machine_id) terminated_machine_ids.append(machine_id) finally: yield client.close() if terminated_machine_ids: log.info( "Machines terminated: %s", ", ".join(str(id) for id in terminated_machine_ids)) juju-0.7.orig/juju/control/tests/0000755000000000000000000000000012135220114015210 5ustar 00000000000000juju-0.7.orig/juju/control/unexpose.py0000644000000000000000000000277412135220114016300 0ustar 00000000000000"""Implementation of unexpose subcommand""" from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment from juju.state.service import ServiceStateManager def configure_subparser(subparsers): """Configure unexpose subcommand""" sub_parser = subparsers.add_parser("unexpose", help=command.__doc__) sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "service_name", help="Name of the service that should be unexposed.") return sub_parser def command(options): """Remove internet access to a service.""" environment = get_environment(options) return unexpose( options.environments, environment, options.verbose, options.log, options.service_name) @inlineCallbacks def unexpose( config, environment, verbose, log, service_name): """Unexpose a service.""" provider = environment.get_machine_provider() client = yield provider.connect() try: service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state(service_name) already_exposed = yield service_state.get_exposed_flag() if already_exposed: yield service_state.clear_exposed_flag() log.info("Service %r was unexposed.", service_name) else: log.info("Service %r was not exposed.", service_name) finally: yield client.close() juju-0.7.orig/juju/control/upgrade_charm.py0000644000000000000000000001161212135220114017222 0ustar 00000000000000"""Implementation of charm-upgrade subcommand""" import os from twisted.internet.defer import inlineCallbacks from juju.control.utils import get_environment, expand_path from juju.charm.directory import CharmDirectory from juju.charm.errors import NewerCharmNotFound from juju.charm.publisher import CharmPublisher from juju.charm.repository import resolve from juju.charm.url import CharmURL from juju.state.service import ServiceStateManager from juju.unit.workflow import is_unit_running def configure_subparser(subparsers): """Configure charm-upgrade subcommand""" sub_parser = subparsers.add_parser("upgrade-charm", help=command.__doc__, description=upgrade_charm.__doc__) sub_parser.add_argument( "--dry-run", "-n", action="store_true", help="Dry-Run, show which charm would be deployed for upgrade.") sub_parser.add_argument( "--force", action="store_true", default=False, help="Force an upgrade, regardless of unit state, no hooks executed.") sub_parser.add_argument( "--environment", "-e", help="juju environment to operate in.") sub_parser.add_argument( "--repository", help="Directory for charm lookup and retrieval", default=os.environ.get('JUJU_REPOSITORY'), type=expand_path) sub_parser.add_argument( "service_name", help="Name of the 
service that should be upgraded") return sub_parser def command(options): """Upgrade a service's charm.""" environment = get_environment(options) return upgrade_charm( options.environments, environment, options.verbose, options.log, options.repository, options.service_name, options.dry_run, options.force) @inlineCallbacks def upgrade_charm( config, environment, verbose, log, repository_path, service_name, dry_run, force): """Upgrades a service's charm. First determines if an upgrade is available, then updates the service charm reference, and marks the units as needing upgrades. If --repository is not specified, it will be taken from the environment variable JUJU_REPOSITORY. """ provider = environment.get_machine_provider() client = yield provider.connect() service_manager = ServiceStateManager(client) service_state = yield service_manager.get_service_state(service_name) old_charm_id = yield service_state.get_charm_id() old_charm_url = CharmURL.parse(old_charm_id) old_charm_url.assert_revision() repo, charm_url = resolve( str(old_charm_url.with_revision(None)), repository_path, environment.default_series) new_charm_url = charm_url.with_revision( (yield repo.latest(charm_url))) if charm_url.collection.schema == "local": if old_charm_url.revision >= new_charm_url.revision: new_revision = old_charm_url.revision + 1 charm = yield repo.find(new_charm_url) if isinstance(charm, CharmDirectory): if dry_run: log.info("%s would be set to revision %s", charm.path, new_revision) else: log.info("Setting %s to revision %s", charm.path, new_revision) charm.set_revision(new_revision) new_charm_url.revision = new_revision new_charm_id = str(new_charm_url) # Verify its newer than what's deployed if not new_charm_url.revision > old_charm_url.revision: if dry_run: log.info("Service already running latest charm %r", old_charm_id) else: raise NewerCharmNotFound(old_charm_id) elif dry_run: log.info("Service would be upgraded from charm %r to %r", old_charm_id, new_charm_id) # On dry run, stop before modifying state. 
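    # Recap of the control flow above (a sketch; charm names and
    # revisions are illustrative): for a local charm already at the
    # deployed revision, the revision was auto-bumped, e.g.
    #     local:series/mysql-1  ->  local:series/mysql-2
    # while for store charms new_charm_url came from repo.latest(). If
    # the result was not strictly newer, NewerCharmNotFound was raised
    # above (or, on a dry run, merely logged).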
if not dry_run: # Publish the new charm storage = provider.get_file_storage() publisher = CharmPublisher(client, storage) charm = yield repo.find(new_charm_url) yield publisher.add_charm(new_charm_id, charm) result = yield publisher.publish() charm_state = result[0] # Update the service charm reference yield service_state.set_charm_id(charm_state.id) # Update the service configuration # Mark the units for upgrades units = yield service_state.get_all_unit_states() for unit in units: if force: # Save some roundtrips if not dry_run: yield unit.set_upgrade_flag(force=force) continue running, state = yield is_unit_running(client, unit) if not force and not running: log.info( "Unit %r is not in a running state (state: %r), won't upgrade", unit.unit_name, state or "uninitialized") continue if not dry_run: yield unit.set_upgrade_flag() juju-0.7.orig/juju/control/utils.py0000644000000000000000000001225512135220114015565 0ustar 00000000000000import os from itertools import tee from twisted.internet.defer import inlineCallbacks, returnValue from juju.environment.errors import EnvironmentsConfigError from juju.state.errors import ServiceUnitStateMachineNotAssigned from juju.state.environment import EnvironmentStateManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager def get_environment(options): env_name = options.environment or os.environ.get("JUJU_ENV") environment = options.environments.get(env_name) if environment is None and options.environment: raise EnvironmentsConfigError( "Invalid environment %r" % options.environment) elif environment is None: environment = options.environments.get_default() return environment def sync_environment_state(client, config, name): """Push the local environment config to zookeeper. This needs to be done: * On any command which can cause the provisioning agent to take action against the provider (ie create/destroy a machine), because the PA needs to use credentials stored in the environment config to do so. * On any command which uses constraints-related code (even if indirectly) because Constraints objects are provider-specific, and need to be created with the help of a MachineProvider; and the only way state code can get a MachineProvider is by getting one from ZK (we certainly don't want to thread the relevant provider from juju.control and/or the PA itself all the way through the state code). So, we sync, to ensure that state code can use an EnvironmentStateManager to get a provider. """ esm = EnvironmentStateManager(client) return esm.set_config_state(config, name) @inlineCallbacks def get_ip_address_for_machine(client, provider, machine_id): """Returns public DNS name and machine state for the machine id. :param client: a connected zookeeper client. :param provider: the `MachineProvider` in charge of the juju. :param machine_id: machine ID of the desired machine to connect to. :return: tuple of the DNS name and a `MachineState`. """ manager = MachineStateManager(client) machine_state = yield manager.get_machine_state(machine_id) instance_id = yield machine_state.get_instance_id() provider_machine = yield provider.get_machine(instance_id) returnValue((provider_machine.dns_name, machine_state)) @inlineCallbacks def get_ip_address_for_unit(client, provider, unit_name): """Returns public DNS name and unit state for the service unit. :param client: a connected zookeeper client. :param provider: the `MachineProvider` in charge of the juju. :param unit_name: service unit running on a machine to connect to. 
:return: tuple of the DNS name and a `MachineState`. :raises: :class:`juju.state.errors.ServiceUnitStateMachineNotAssigned` """ manager = ServiceStateManager(client) service_unit = yield manager.get_unit_state(unit_name) machine_id = yield service_unit.get_assigned_machine_id() if machine_id is None: raise ServiceUnitStateMachineNotAssigned(unit_name) returnValue( ((yield service_unit.get_public_address()), service_unit)) def expand_path(p): return os.path.abspath(os.path.expanduser(p)) def expand_constraints(s): if s: return s.split(" ") return [] class ParseError(Exception): """Used to support returning custom parse errors in passthrough parsing. Enables similar support to what is seen in argparse, without using its internals. """ def parse_passthrough_args(args, flags_taking_arg=()): """Scans left to right, partitioning flags and positional args. :param args: Unparsed args from argparse :param flags_taking_arg: One character flags that combine with arguments. :return: tuple of flags and positional args :raises: :class:`juju.control.utils.ParseError` TODO May need to support long options for other passthrough commands. """ args = iter(args) flags_taking_arg = set(flags_taking_arg) flags = [] positional = [] while True: args, peek_args = tee(args) try: peeked = peek_args.next() except StopIteration: break if peeked.startswith("-"): flags.append(args.next()) # Only need to consume the next arg if the flag both takes # an arg (say -L) and then it has an extra arg following # (8080:localhost:80), rather than being combined, such as # -L8080:localhost:80 if len(peeked) == 2 and peeked[1] in flags_taking_arg: try: flags.append(args.next()) except StopIteration: raise ParseError( "argument -%s: expected one argument" % peeked[1]) else: # At this point no more flags for the command itself (more # can follow after the first positional arg, as seen in # working with ssh, for example), so consume the rest and # stop parsing options positional = list(args) break return flags, positional juju-0.7.orig/juju/control/tests/__init__.py0000644000000000000000000000000212135220114017311 0ustar 00000000000000# juju-0.7.orig/juju/control/tests/common.py0000644000000000000000000000431712135220114017057 0ustar 00000000000000from twisted.internet import reactor from twisted.internet.defer import Deferred, inlineCallbacks from juju.environment.tests.test_config import EnvironmentsConfigTestBase from juju.charm.tests.test_repository import RepositoryTestBase from juju.state.tests.test_service import ServiceStateManagerTestBase class ControlToolTest(EnvironmentsConfigTestBase): def setup_cli_reactor(self): """Mock mock out reactor start and stop. This is necessary when executing the CLI via tests since commands will run a reactor as part of their execution, then shut it down. Obviously this would cause issues with running multiple tests under Twisted Trial. Returns a a `Deferred` that a test can wait on until the reactor is mocked stopped. This means that code running in the context of a mock reactor run is in fact complete, and assertions and tearDown can now be done. 
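        Typical call pattern in a test (a sketch mirroring the tests in
        this package; the subcommand is illustrative):

            wait_on_stopped = self.setup_cli_reactor()
            self.setup_exit(0)
            self.mocker.replay()
            main(["status"])
            yield wait_on_stopped  # the CLI run is now fully complete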
""" mock_reactor = self.mocker.patch(reactor) mock_reactor.run() mock_reactor.stop() wait_on_stopped = Deferred() def f(): wait_on_stopped.callback("reactor has stopped") self.mocker.call(f) reactor.running = True return wait_on_stopped def setUp(self): self.log = self.capture_logging() return super(ControlToolTest, self).setUp() def setup_exit(self, code=0): mock_exit = self.mocker.replace("sys.exit") mock_exit(code) class MachineControlToolTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(MachineControlToolTest, self).setUp() # Dummy out the construction of our root machine (id=0), this # will go away in a later release. Right now, there's no # service unit holding it, so we have to special case. yield self.add_machine_state() @inlineCallbacks def destroy_service(self, service_name): """Destroys the service equivalently to destroy-service subcommand.""" service_state = yield self.service_state_manager.get_service_state( service_name) yield self.service_state_manager.remove_service_state(service_state) juju-0.7.orig/juju/control/tests/sample_cluster.yaml0000644000000000000000000000335712135220114021126 0ustar 00000000000000 machines: 0: {dns-name: ec2-50-19-158-109.compute-1.amazonaws.com, instance-id: i-215dd84f} 1: {dns-name: ec2-50-17-16-228.compute-1.amazonaws.com, instance-id: i-8d58dde3} 2: {dns-name: ec2-72-44-49-114.compute-1.amazonaws.com, instance-id: i-9558ddfb} 3: {dns-name: ec2-50-19-47-106.compute-1.amazonaws.com, instance-id: i-6d5bde03} 4: {dns-name: ec2-174-129-132-248.compute-1.amazonaws.com, instance-id: i-7f5bde11} 5: {dns-name: ec2-50-19-152-136.compute-1.amazonaws.com, instance-id: i-755bde1b} 6: {dns-name: ec2-50-17-168-124.compute-1.amazonaws.com, instance-id: i-4b5bde25} services: demo-wiki: charm: local:mediawiki-62 relations: {cache: wiki-cache, db: wiki-db, website: wiki-balancer} units: demo-wiki/0: machine: 2 relations: cache: {state: up} db: {state: up} website: {state: up} state: started demo-wiki/1: machine: 6 relations: cache: {state: up} db: {state: up} website: {state: up} state: started wiki-balancer: charm: local:haproxy-13 relations: {reverseproxy: demo-wiki} units: wiki-balancer/0: machine: 4 relations: reverseproxy: {state: up} state: started wiki-cache: charm: local:memcached-10 relations: {cache: demo-wiki} units: wiki-cache/0: machine: 3 relations: cache: {state: up} state: started wiki-cache/1: machine: 5 relations: cache: {state: up} state: started wiki-db: charm: local:mysql-93 relations: {db: demo-wiki} units: wiki-db/0: machine: 1 relations: db: {state: up} state: started juju-0.7.orig/juju/control/tests/test_add_relation.py0000644000000000000000000002201212135220114021243 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks from juju.control import main, add_relation from juju.control.tests.common import ControlToolTest from juju.charm.tests.test_repository import RepositoryTestBase from juju.state.tests.test_service import ServiceStateManagerTestBase class ControlAddRelationTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlAddRelationTest, self).setUp() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_add_relation_method(self): """Test adding a relation via the supporting method in the cmd obj.""" environment = self.config.get("firstenv") yield self.add_service_from_charm("mysql") yield 
self.add_service_from_charm("wordpress") yield add_relation.add_relation( self.config, environment, False, logging.getLogger("juju.control.cli"), "mysql", "wordpress") self.assertIn( "Added mysql relation to all service units.", self.output.getvalue()) @inlineCallbacks def test_add_peer_relation(self): """Test that services that peer can have that relation added.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("riak") main(["add-relation", "riak", "riak"]) yield wait_on_reactor_stopped self.assertIn( "Added riak relation to all service units.", self.output.getvalue()) @inlineCallbacks def test_add_relation(self): """Test that the command works when run from the CLI itself.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("wordpress") main(["add-relation", "mysql", "wordpress"]) yield wait_on_reactor_stopped self.assertIn( "Added mysql relation to all service units.", self.output.getvalue()) @inlineCallbacks def test_verbose_flag(self): """Test the verbose flag.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("riak") main(["--verbose", "add-relation", "riak:ring", "riak:ring"]) yield wait_on_reactor_stopped self.assertIn("Endpoint pairs", self.output.getvalue()) self.assertIn( "Added riak relation to all service units.", self.output.getvalue()) @inlineCallbacks def test_use_relation_name(self): """Test that the descriptor can be qualified with a relation_name.""" yield self.add_service_from_charm("mysql-alternative") yield self.add_service_from_charm("wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("riak") main(["add-relation", "mysql-alternative:dev", "wordpress"]) yield wait_on_reactor_stopped self.assertIn( "Added mysql relation to all service units.", self.output.getvalue()) @inlineCallbacks def test_add_relation_multiple(self): """Test that the command can be used to create multiple relations.""" environment = self.config.get("firstenv") yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("wordpress") yield self.add_service_from_charm("varnish") yield add_relation.add_relation( self.config, environment, False, logging.getLogger("juju.control.cli"), "mysql", "wordpress") self.assertIn( "Added mysql relation to all service units.", self.output.getvalue()) wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "wordpress", "varnish"]) yield wait_on_reactor_stopped self.assertIn( "Added varnish relation to all service units.", self.output.getvalue()) # test for various errors def test_with_no_args(self): """Test that two descriptor arguments are required for command.""" # in argparse, before reactor startup self.assertRaises(SystemExit, main, ["add-relation"]) self.assertIn( "juju add-relation: error: too few arguments", self.stderr.getvalue()) def test_too_many_arguments_provided(self): """Test command rejects more than 2 descriptor arguments.""" self.assertRaises( SystemExit, main, ["add-relation", "foo", "fum", "bar"]) self.assertIn( "juju: error: unrecognized arguments: bar", self.stderr.getvalue()) @inlineCallbacks def test_missing_service_added(self): """Test command fails if a service is missing.""" yield 
self.add_service_from_charm("mysql") # but not wordpress wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "wordpress", "mysql"]) yield wait_on_reactor_stopped self.assertIn( "Service 'wordpress' was not found", self.output.getvalue()) @inlineCallbacks def test_no_common_relation_type(self): """Test command fails if the services cannot be added in a relation.""" yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("riak") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "riak", "mysql"]) yield wait_on_reactor_stopped self.assertIn("No matching endpoints", self.output.getvalue()) @inlineCallbacks def test_ambiguous_pairing(self): """Test command fails if more than one way to connect services.""" yield self.add_service_from_charm("mysql-alternative") yield self.add_service_from_charm("wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "wordpress", "mysql-alternative"]) yield wait_on_reactor_stopped self.assertIn( "Ambiguous relation 'wordpress mysql-alternative'; could refer " "to:\n 'wordpress:db mysql-alternative:dev' (mysql client / " "mysql server)\n 'wordpress:db mysql-alternative:prod' (mysql " "client / mysql server)", self.output.getvalue()) @inlineCallbacks def test_missing_charm(self): """Test command fails if service is added w/o corresponding charm.""" yield self.add_service("mysql_no_charm") yield self.add_service_from_charm("wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "wordpress", "mysql_no_charm"]) yield wait_on_reactor_stopped self.assertIn("No matching endpoints", self.output.getvalue()) @inlineCallbacks def test_relation_added_twice(self): """Test command fails if it's run twice.""" yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("wordpress") yield add_relation.add_relation( self.config, self.config.get("firstenv"), False, logging.getLogger("juju.control.cli"), "mysql", "wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-relation", "wordpress", "mysql"]) yield wait_on_reactor_stopped self.assertIn( "Relation mysql already exists between wordpress and mysql", self.output.getvalue()) @inlineCallbacks def test_invalid_environment(self): """Test command with an environment that hasn't been set up.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() main(["add-relation", "--environment", "roman-candle", "wordpress", "mysql"]) yield wait_on_reactor_stopped self.assertIn( "Invalid environment 'roman-candle'", self.output.getvalue()) @inlineCallbacks def test_relate_to_implicit(self): """Validate we can implicitly relate to an implicitly provided relation""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("logging") main(["add-relation", "mysql", "logging"]) yield wait_on_reactor_stopped self.assertIn( "Added juju-info relation to all service units.", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_add_unit.py0000644000000000000000000001630412135220114020414 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.lib.serializer import yaml_dump from 
juju.state.environment import EnvironmentStateManager from .common import MachineControlToolTest class ControlAddUnitTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlAddUnitTest, self).setUp() self.service_state1 = yield self.add_service_from_charm("mysql") self.service_unit1 = yield self.service_state1.add_unit_state() self.machine_state1 = yield self.add_machine_state() yield self.service_unit1.assign_to_machine(self.machine_state1) self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_add_unit(self): """ 'juju add-unit ' will add a new service unit of the given service. """ unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 1) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() # trash environment to check syncing yield self.client.delete("/environment") main(["add-unit", "mysql"]) yield finished # verify the env state was synced esm = EnvironmentStateManager(self.client) yield esm.get_config() # verify the unit and its machine assignment. unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 2) topology = yield self.get_topology() unit = yield self.service_state1.get_unit_state("mysql/1") machine_id = topology.get_service_unit_machine( self.service_state1.internal_id, unit.internal_id) self.assertNotEqual(machine_id, None) self.assertIn( "Unit 'mysql/1' added to service 'mysql'", self.output.getvalue()) yield self.assert_machine_assignments("mysql", [1, 2]) @inlineCallbacks def test_add_multiple_units(self): """ 'juju add-unit ' will add a new service unit of the given service. """ unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 1) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "--num-units", "5", "mysql"]) yield finished # verify the unit and its machine assignment. 
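        # (The topology maps internal service and unit ids to machine
        # ids, so a non-None machine id below proves the new unit was
        # actually assigned to a machine.)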
unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 6) topology = yield self.get_topology() unit = yield self.service_state1.get_unit_state("mysql/1") machine_id = topology.get_service_unit_machine( self.service_state1.internal_id, unit.internal_id) self.assertNotEqual(machine_id, None) for i in xrange(1, 6): self.assertIn( "Unit 'mysql/%d' added to service 'mysql'" % i, self.output.getvalue()) yield self.assert_machine_assignments("mysql", [1, 2, 3, 4, 5, 6]) @inlineCallbacks def test_add_unit_unknown_service(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "volcano"]) yield finished self.assertIn( "Service 'volcano' was not found", self.output.getvalue()) @inlineCallbacks def test_add_unit_subordinate_service(self): yield self.add_service_from_charm("logging") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "logging"]) yield finished self.assertIn( "Subordinate services acquire units from " "their principal service.", self.output.getvalue()) @inlineCallbacks def test_add_unit_reuses_machines(self): """Verify that if machines are not in use, add-unit uses them.""" # add machine to wordpress, then destroy and reallocate later # in this test to mysql as mysql/1's machine wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() wordpress_machine_state = yield self.add_machine_state() yield wordpress_unit_state.assign_to_machine(wordpress_machine_state) yield wordpress_unit_state.unassign_from_machine() finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "mysql"]) yield finished self.assertIn( "Unit 'mysql/1' added to service 'mysql'", self.output.getvalue()) yield self.assert_machine_assignments("wordpress", [None]) yield self.assert_machine_assignments("mysql", [1, 2]) @inlineCallbacks def test_policy_from_environment(self): config = { "environments": {"firstenv": { "placement": "local", "type": "dummy"}}} yield self.push_config("firstenv", config) ms0 = yield self.machine_state_manager.get_machine_state(0) yield self.service_unit1.unassign_from_machine() yield self.service_unit1.assign_to_machine(ms0) unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 1) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "mysql"]) yield finished # Verify the local policy was used topology = yield self.get_topology() unit = yield self.service_state1.get_unit_state("mysql/1") machine_id = topology.get_service_unit_machine( self.service_state1.internal_id, unit.internal_id) self.assertNotEqual(machine_id, None) self.assertIn( "Unit 'mysql/1' added to service 'mysql'", self.output.getvalue()) # adding a second unit still assigns to machine 0 with local policy yield self.assert_machine_assignments("mysql", [0, 0]) @inlineCallbacks def test_legacy_option_in_legacy_env(self): yield self.client.delete("/constraints") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["add-unit", "mysql"]) yield finished unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 2) @inlineCallbacks def test_legacy_option_in_fresh_env(self): local_config = { "environments": {"firstenv": { "some-legacy-key": "blah", "type": "dummy"}}} self.write_config(yaml_dump(local_config)) self.config.load() finished = self.setup_cli_reactor() 
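        # A fresh environment whose config carries deprecated keys should
        # refuse the add-unit: only the deprecation warning is expected,
        # and the unit count must stay at one.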
self.setup_exit(0) self.mocker.replay() main(["add-unit", "mysql"]) yield finished output = self.output.getvalue() self.assertIn( "Your environments.yaml contains deprecated keys", output) unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 1) juju-0.7.orig/juju/control/tests/test_admin.py0000644000000000000000000000271712135220114017720 0ustar 00000000000000from juju import control from juju.control import setup_logging, admin, setup_parser from juju.lib.mocker import ANY from .common import ControlToolTest class DummySubcommand(object): @staticmethod def configure_subparser(subparsers): subparser = subparsers.add_parser("dummy") subparser.add_argument("--opt1", default=1) return subparser @staticmethod def command(*args): """Doc String""" pass class AdminCommandOptionTest(ControlToolTest): def test_admin_subcommand_execution(self): """ Setup an admin subcommand, and verify that's it invoked. """ self.setup_cli_reactor() self.setup_exit(0) self.patch(control, "ADMIN_SUBCOMMANDS", [DummySubcommand]) setup_logging_mock = self.mocker.mock(setup_logging) setup_parser_mock = self.mocker.proxy(setup_parser) self.patch(control, "setup_logging", setup_logging_mock) self.patch(control, "setup_parser", setup_parser_mock) command_mock = self.mocker.proxy(DummySubcommand.command) self.patch(DummySubcommand, "command", command_mock) setup_parser_mock( subcommands=ANY, prog="juju-admin", description="juju cloud orchestration internal tools") self.mocker.passthrough() setup_logging_mock(ANY) command_mock(ANY) self.mocker.passthrough() self.mocker.replay() admin(["dummy"]) juju-0.7.orig/juju/control/tests/test_bootstrap.py0000644000000000000000000000746712135220114020654 0ustar 00000000000000 from twisted.internet.defer import inlineCallbacks, succeed from juju.control import main from juju.lib.serializer import yaml_dump as dump from juju.providers.dummy import MachineProvider from .common import ControlToolTest class ControlBootstrapTest(ControlToolTest): @inlineCallbacks def test_bootstrap(self): """ 'juju-control bootstrap' will invoke the bootstrap method of all configured machine providers in all environments. """ config = { "environments": { "firstenv": { "type": "dummy", "default-series": "homer"}, "secondenv": { "type": "dummy", "default-series": "marge"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(0) provider = self.mocker.patch(MachineProvider) provider.bootstrap({ "ubuntu-series": "homer", "provider-type": "dummy", "arch": "arm", "cpu": 2.0, "mem": 512.0}) self.mocker.result(succeed(True)) self.mocker.replay() self.capture_stream("stderr") main(["bootstrap", "-e", "firstenv", "--constraints", "arch=arm cpu=2"]) yield finished lines = filter(None, self.log.getvalue().split("\n")) self.assertEqual( lines, [("Bootstrapping environment 'firstenv' " "(origin: distro type: dummy)..."), "'bootstrap' command finished successfully"]) @inlineCallbacks def test_bootstrap_multiple_environments_no_default_specified(self): """ If there are multiple environments but no default, and no environment specified on the cli, then an error message is given. 
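        Expected behaviour (sketch; the message mirrors the assertion
        below):

            $ juju bootstrap
            There are multiple environments and no explicit default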
""" config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}, "secondenv": { "type": "dummy", "admin-secret": "marge"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() output = self.capture_logging() main(["bootstrap"]) yield finished msg = "There are multiple environments and no explicit default" self.assertIn(msg, self.log.getvalue()) self.assertIn(msg, output.getvalue()) @inlineCallbacks def test_bootstrap_invalid_environment_specified(self): """ If the environment specified does not exist an error message is given. """ config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() output = self.capture_logging() main(["bootstrap", "-e", "thirdenv"]) yield finished msg = "Invalid environment 'thirdenv'" self.assertIn(msg, self.log.getvalue()) self.assertIn(msg, output.getvalue()) @inlineCallbacks def test_bootstrap_legacy_config_keys(self): """ If the environment specified does not exist an error message is given. """ config = { "environments": { "firstenv": { "type": "dummy", "some-legacy-key": "blah"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() output = self.capture_logging() main(["bootstrap"]) yield finished msg = "Your environments.yaml contains deprecated keys" self.assertIn(msg, self.log.getvalue()) self.assertIn(msg, output.getvalue()) juju-0.7.orig/juju/control/tests/test_config_get.py0000644000000000000000000000636112135220114020733 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.charm.tests import local_charm_id from juju.lib import serializer from .common import MachineControlToolTest class ControlJujuGetTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlJujuGetTest, self).setUp() self.output = self.capture_logging() @inlineCallbacks def test_get_service_config(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() self.service_state = yield self.add_service_from_charm("wordpress") config = yield self.service_state.get_config() # The value which isn't in the config won't be displayed. 
settings = {"blog-title": "Hello World", "world": 123} config.update(settings) yield config.write() output = self.capture_stream("stdout") main(["get", "wordpress"]) yield finished data = serializer.yaml_load(output.getvalue()) self.assertEqual( {"service": "wordpress", "charm": "local:series/wordpress-3", 'settings': {'blog-title': { 'description': 'A descriptive title used for the blog.', 'type': 'string', 'value': 'Hello World'}}}, data) @inlineCallbacks def test_get_service_config_with_no_value(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() self.service_state = yield self.add_service_from_charm( "dummy", local_charm_id(self.charm)) config = yield self.service_state.get_config() config["title"] = "hello movie" config["skill-level"] = 24 yield config.write() output = self.capture_stream("stdout") main(["get", "dummy"]) yield finished data = serializer.yaml_load(output.getvalue()) self.assertEqual( {"service": "dummy", "charm": "local:series/dummy-1", "settings": { 'outlook': { 'description': 'No default outlook.', 'type': 'string', 'value': None}, 'skill-level': { 'description': 'A number indicating skill.', 'value': 24, 'type': 'int'}, 'title': { 'description': ('A descriptive title used ' 'for the service.'), 'value': 'hello movie', 'type': 'string'}, 'username': { 'description': ('The name of the initial account (given ' 'admin permissions).'), 'value': 'admin001', 'default': True, 'type': 'string'}}}, data) @inlineCallbacks def test_set_invalid_service(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["get", "whatever"]) yield finished self.assertIn("Service 'whatever' was not found", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_config_set.py0000644000000000000000000002452712135220114020753 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.control.config_set import config_set from juju.lib import serializer from .common import MachineControlToolTest class ControlJujuSetTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlJujuSetTest, self).setUp() self.service_state = yield self.add_service_from_charm("wordpress") self.service_unit = yield self.service_state.add_unit_state() self.environment = self.config.get_default() self.stderr = self.capture_stream("stderr") self.output = self.capture_logging() @inlineCallbacks def test_set_and_get(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "wordpress", "blog-title=Hello Tribune?"]) yield finished # Verify the state is accessible state = yield self.service_state.get_config() self.assertEqual(state, {"blog-title": "Hello Tribune?"}) @inlineCallbacks def test_set_with_config_file(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() config_file = self.makeFile(serializer.yaml_dump( dict(wordpress={"blog-title": "Hello World"}))) main(["set", "wordpress", "--config=%s" % config_file]) yield finished # Verify the state is accessible state = yield self.service_state.get_config() self.assertEqual(state, {"blog-title": "Hello World"}) @inlineCallbacks def test_set_with_invalid_file(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() # missing the service_name dict (will do nothing to values) config_file = self.makeFile( serializer.yaml_dump({"blog-title": "Hello World"})) main(["set", "wordpress", "--config=%s" % config_file]) yield finished state = yield 
self.service_state.get_config() self.assertEqual(state, {'blog-title': 'My Title'}) @inlineCallbacks def test_set_with_garbage_file(self): finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() # file exists but is not valid YAML config_file = self.makeFile("blah") main(["-v", "set", "wordpress", "--config=%s" % config_file]) yield finished self.assertIn( "Config file %r invalid" % config_file, self.stderr.getvalue()) state = yield self.service_state.get_config() self.assertEqual(state, {'blog-title': 'My Title'}) @inlineCallbacks def test_config_and_cli_options_errors(self): """Verify --config and cli kvpairs can't be used together""" finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() # valid file, but incorrect cli usage config_file = self.makeFile(serializer.yaml_dump(dict( wordpress={"blog-title": "Hello World"}))) main(["-v", "set", "wordpress", "blog-title=Test", "--config=%s" % config_file]) yield finished self.assertIn( "--config and command line options", self.stderr.getvalue()) @inlineCallbacks def test_set_invalid_option(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "wordpress", "blog-roll=What's a blog-roll?"]) yield finished # Make sure we got an error message to the user self.assertIn("blog-roll is not a valid configuration option.", self.output.getvalue()) @inlineCallbacks def test_set_invalid_service(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "whatever", "blog-roll=What's a blog-roll?"]) yield finished self.assertIn("Service 'whatever' was not found", self.output.getvalue()) @inlineCallbacks def test_set_valid_option(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "wordpress", "blog-title=My title"]) yield finished # Verify the state is accessible state = yield self.service_state.get_config() self.assertEqual(state, {"blog-title": "My title"}) @inlineCallbacks def test_multiple_calls_with_defaults(self): """Bug #873643 Calling config set multiple times would result in the subsequent calls resetting values to defaults if the values were not explicitly set in each call. This verifies that each value need not be present in each call for proper functioning. 
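        Sketch of the scenario this guards against (values illustrative):

            juju set configtest foo="new foo"   # must leave bar at its default
            juju set configtest bar="new bar"   # must keep foo == "new foo"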
""" # apply all defaults as done through deploy self.service_state = yield self.add_service_from_charm("configtest") self.service_unit = yield self.service_state.add_unit_state() # Publish the defaults as deploy should have done charm = yield self.service_state.get_charm_state() config_options = yield charm.get_config() defaults = config_options.get_defaults() state = yield self.service_state.get_config() yield state.update(defaults) yield state.write() # Now perform two update in each case moving one value away # from their default and checking the end result is as expected yield config_set(self.environment, "configtest", ["foo=new foo"]) # force update yield state.read() self.assertEqual(state, {"foo": "new foo", "bar": "bar-default"}) # Now perform two update in each case moving one value away # from their default and checking the end result is as expected yield config_set(self.environment, "configtest", ["bar=new bar"]) # force update yield state.read() self.assertEqual(state, {"foo": "new foo", "bar": "new bar"}) @inlineCallbacks def test_boolean_option_str_format_v1(self): """Verify possible to set a boolean option with format v1""" self.service_state = yield self.add_service_from_charm("mysql") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql", "awesome=true"]) yield finished state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": True, "monkey-madness": 0.5, "query-cache-size": -1, "tuning-level": "safest"}) @inlineCallbacks def test_int_option_coerced_format_v1(self): """Verify int option possible in format v1""" self.service_state = yield self.add_service_from_charm("mysql") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql", "query-cache-size=10"]) yield finished self.assertEqual(self.stderr.getvalue(), "") state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": False, "monkey-madness": 0.5, "query-cache-size": 10, "tuning-level": "safest"}) @inlineCallbacks def test_float_option_str_format_v1(self): """Verify possible to set a float option with format v1""" self.service_state = yield self.add_service_from_charm("mysql") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql", "monkey-madness=0.99999999"]) yield finished state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": False, "monkey-madness": 0.99999999, "query-cache-size": -1, "tuning-level": "safest"}) @inlineCallbacks def test_valid_options_format_v2(self): """Verify that config settings can be properly parsed and applied""" self.service_state = yield self.add_service_from_charm( "mysql-format-v2") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql-format-v2", "query-cache-size=100", "awesome=true", "tuning-level=unsafe", "monkey-madness=0.97"]) yield finished self.assertEqual(self.stderr.getvalue(), "") state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": True, "monkey-madness": 0.97, "query-cache-size": 100, "tuning-level": "unsafe"}) @inlineCallbacks def test_invalid_float_option_format_v2(self): """Verify that config settings reject invalid floats""" self.service_state = yield self.add_service_from_charm( "mysql-format-v2") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql-format-v2", "monkey-madness=barrels of monkeys"]) yield finished self.assertEqual( self.output.getvalue(), 
"Invalid value for monkey-madness: barrels of monkeys\n") state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": False, "monkey-madness": 0.5, "query-cache-size": -1, "tuning-level": "safest"}) @inlineCallbacks def test_invalid_int_option_format_v2(self): """Verify that config settings reject invalid ints""" self.service_state = yield self.add_service_from_charm( "mysql-format-v2") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set", "mysql-format-v2", "query-cache-size=big"]) yield finished self.assertEqual( self.output.getvalue(), "Invalid value for query-cache-size: big\n") state = yield self.service_state.get_config() self.assertEqual( state, {"awesome": False, "monkey-madness": 0.5, "query-cache-size": -1, "tuning-level": "safest"}) juju-0.7.orig/juju/control/tests/test_constraints_get.py0000644000000000000000000001052612135220114022033 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.lib import serializer from juju.machine.tests.test_constraints import dummy_cs from juju.state.environment import EnvironmentStateManager from .common import MachineControlToolTest env_log = "Fetching constraints for environment" machine_log = "Fetching constraints for machine 1" service_log = "Fetching constraints for service mysql" unit_log = "Fetching constraints for service unit mysql/0" class ConstraintsGetTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ConstraintsGetTest, self).setUp() env_constraints = dummy_cs.parse(["mem=1024"]) esm = EnvironmentStateManager(self.client) yield esm.set_constraints(env_constraints) self.expect_env = { "arch": "amd64", "cpu": 1.0, "mem": 1024.0, "provider-type": "dummy", "ubuntu-series": None} service_constraints = dummy_cs.parse(["cpu=10"]) service = yield self.add_service_from_charm( "mysql", constraints=service_constraints) # unit will snapshot the state of service when added unit = yield service.add_unit_state() self.expect_unit = { "arch": "amd64", "cpu": 10.0, "mem": 1024.0, "provider-type": "dummy", "ubuntu-series": "series"} # machine gets its own constraints machine_constraints = dummy_cs.parse(["cpu=15", "mem=8G"]) machine = yield self.add_machine_state( constraints=machine_constraints.with_series("series")) self.expect_machine = { "arch": "amd64", "cpu": 15.0, "mem": 8192.0, "provider-type": "dummy", "ubuntu-series": "series"} yield unit.assign_to_machine(machine) # service gets new constraints, leaves unit untouched yield service.set_constraints(dummy_cs.parse(["mem=16G"])) self.expect_service = { "arch": "amd64", "cpu": 1.0, "mem": 16384.0, "provider-type": "dummy", "ubuntu-series": "series"} self.log = self.capture_logging() self.stdout = self.capture_stream("stdout") self.finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() def assert_messages(self, *messages): log = self.log.getvalue() for message in messages: self.assertIn(message, log) @inlineCallbacks def test_env(self): main(["get-constraints"]) yield self.finished result = serializer.load(self.stdout.getvalue()) self.assertEquals(result, self.expect_env) self.assert_messages(env_log) @inlineCallbacks def test_service(self): main(["get-constraints", "mysql"]) yield self.finished result = serializer.load(self.stdout.getvalue()) self.assertEquals(result, {"mysql": self.expect_service}) self.assert_messages(service_log) @inlineCallbacks def test_unit(self): main(["get-constraints", "mysql/0"]) yield self.finished result = 
serializer.load(self.stdout.getvalue()) self.assertEquals(result, {"mysql/0": self.expect_unit}) self.assert_messages(unit_log) @inlineCallbacks def test_machine(self): main(["get-constraints", "1"]) yield self.finished result = serializer.load(self.stdout.getvalue()) self.assertEquals(result, {"1": self.expect_machine}) self.assert_messages(machine_log) @inlineCallbacks def test_all(self): main(["get-constraints", "mysql", "mysql/0", "1"]) yield self.finished result = serializer.load(self.stdout.getvalue()) expect = {"mysql": self.expect_service, "mysql/0": self.expect_unit, "1": self.expect_machine} self.assertEquals(result, expect) self.assert_messages(service_log, unit_log, machine_log) @inlineCallbacks def test_syncs_environment(self): """If the environment were not synced, it would be impossible to create the Constraints, so tool success proves sync.""" yield self.client.delete("/environment") main(["get-constraints", "mysql/0"]) yield self.finished result = serializer.load(self.stdout.getvalue()) self.assertEquals(result, {"mysql/0": self.expect_unit}) self.assert_messages(unit_log) juju-0.7.orig/juju/control/tests/test_constraints_set.py0000644000000000000000000000514212135220114022045 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.state.environment import EnvironmentStateManager from .common import MachineControlToolTest class ControlSetConstraintsTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlSetConstraintsTest, self).setUp() self.service_state = yield self.add_service_from_charm("mysql") self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_set_service_constraints(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set-constraints", "--service", "mysql", "cpu=8", "mem=1G"]) yield finished constraints = yield self.service_state.get_constraints() expect = { "arch": "amd64", "cpu": 8, "mem": 1024, "provider-type": "dummy", "ubuntu-series": "series"} self.assertEquals(constraints, expect) @inlineCallbacks def test_bad_service_constraint(self): initial_constraints = yield self.service_state.get_constraints() finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() main(["set-constraints", "--service", "mysql", "arch=proscenium"]) yield finished self.assertIn( "Bad 'arch' constraint 'proscenium': unknown architecture", self.output.getvalue()) constraints = yield self.service_state.get_constraints() self.assertEquals(constraints, initial_constraints) @inlineCallbacks def test_environment_constraint(self): yield self.client.delete("/environment") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set-constraints", "arch=arm", "cpu=any"]) yield finished esm = EnvironmentStateManager(self.client) yield esm.get_config() constraints = yield esm.get_constraints() self.assertEquals(constraints, { "ubuntu-series": None, "provider-type": "dummy", "arch": "arm", "cpu": None, "mem": 512.0}) @inlineCallbacks def test_legacy_environment(self): yield self.client.delete("/constraints") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["set-constraints", "arch=arm", "cpu=any"]) yield finished self.assertIn( "Constraints are not valid in legacy deployments.", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_control.py0000644000000000000000000002412112135220114020301 0ustar 00000000000000import logging import time import 
os from StringIO import StringIO from argparse import Namespace from twisted.internet.defer import inlineCallbacks from juju.environment.errors import EnvironmentsConfigError from juju.control import setup_logging, main, setup_parser from juju.control.options import ensure_abs_path from juju.control.command import Commander from juju.state.tests.common import StateTestBase from juju.lib.testing import TestCase from juju import __version__ from .common import ControlToolTest class ControlInitiailizationTest(ControlToolTest): # The EnvironmentTestBase will replace our $HOME, so that tests will # write properly to a temporary directory. def test_write_sample_config(self): """ When juju-control is called without a valid environment configuration, it should write one down and raise an error to let the user know it should be edited. """ try: main(["bootstrap"]) except EnvironmentsConfigError, error: self.assertTrue(error.sample_written) else: self.fail("EnvironmentsConfigError not raised") class ControlOutputTest(ControlToolTest, StateTestBase): @inlineCallbacks def setUp(self): yield super(ControlOutputTest, self).setUp() yield self.push_default_config() def test_sans_args_produces_help(self): """ When juju-control is called without arguments, it produces a standard help message. """ stderr = self.capture_stream("stderr") self.mocker.replay() try: main([]) except SystemExit, e: self.assertEqual(e.args[0], 2) else: self.fail("Should have exited") output = stderr.getvalue() self.assertIn("add-relation", output) self.assertIn("destroy-environment", output) self.assertIn("juju cloud orchestration admin", output) def test_version(self): stderr = self.capture_stream("stderr") try: main(['--version']) except SystemExit, e: self.assertEqual(e.args[0], 0) else: self.fail("Should have exited") output = stderr.getvalue() self.assertIn(__version__, output) def test_custom_parser_does_not_extend_to_subcommand(self): stderr = self.capture_stream("stderr") self.mocker.replay() try: main(["deploy"]) except SystemExit, e: self.assertEqual(e.args[0], 2) else: self.fail("Should have exited") output = stderr.getvalue() self.assertIn("juju deploy: error: too few arguments", output) class ControlCommandOptionTest(ControlToolTest): def test_command_suboption(self): """ The argparser setup, invokes command module configure_subparser functions to allow the command to delineate additional cli options. 
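        The contract each command module follows (a sketch mirroring, for
        example, juju.control.unexpose):

            def configure_subparser(subparsers):
                sub_parser = subparsers.add_parser(
                    "name", help=command.__doc__)
                sub_parser.add_argument("--opt1", default=1)
                return sub_parser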
""" from juju.control import destroy_environment, bootstrap def configure_destroy_environment(subparsers): subparser = subparsers.add_parser("destroy-environment") subparser.add_argument("--opt1", default=1) return subparser def configure_bootstrap(subparsers): subparser = subparsers.add_parser("bootstrap") subparser.add_argument("--opt2", default=2) return subparser self.patch(destroy_environment, "configure_subparser", configure_destroy_environment) self.patch(bootstrap, "configure_subparser", configure_bootstrap) parser = setup_parser(subcommands=[destroy_environment, bootstrap]) shutdown_opts = parser.parse_args(["destroy-environment"]) bootstrap_opts = parser.parse_args(["bootstrap"]) missing = object() self.assertEquals(shutdown_opts.opt1, 1) self.assertEquals(getattr(shutdown_opts, "opt2", missing), missing) self.assertEquals(bootstrap_opts.opt2, 2) self.assertEquals(getattr(bootstrap_opts, "opt1", missing), missing) class ControlUtilityTest(TestCase): def test_ensured_abs_path(self): parent_dir = self.makeDir() file_path = self.makeFile(dirname=parent_dir) os.rmdir(parent_dir) self.assertEqual(ensure_abs_path(file_path), file_path) self.assertTrue(os.path.exists(parent_dir)) def test_ensure_abs_path_with_stdout_symbol(self): self.assertEqual(ensure_abs_path("-"), "-") def test_ensured_abs_path_with_existing(self): temp_dir = self.makeDir() self.assertTrue(os.path.exists(temp_dir)) file_path = os.path.join(temp_dir, "zebra.txt") self.assertEqual(ensure_abs_path(file_path), file_path) self.assertTrue(os.path.exists(temp_dir)) def test_ensure_abs_path_relative(self): current_dir = os.path.abspath(os.getcwd()) self.addCleanup(os.chdir, current_dir) temp_dir = self.makeDir() os.chdir(temp_dir) file_path = ensure_abs_path("zebra.txt") self.assertEqual(file_path, os.path.join(temp_dir, "zebra.txt")) class ControlLoggingTest(TestCase): def test_logging_format(self): self.log = StringIO() setup_logging(Namespace(verbose=False, log_file=self.log)) # ensure that we use gtm regardless of system settings logging.getLogger().handlers[0].formatter.converter = time.gmtime record = logging.makeLogRecord( {"created": 0, "msecs": 0, "levelno": logging.INFO}) logging.getLogger().handle(record) self.assertEqual( self.log.getvalue(), "1970-01-01 00:00:00,000 Level None \n") def test_default_logging(self): """ Default log-level is informational. """ self.log = self.capture_logging() setup_logging(Namespace(verbose=False, log_file=None)) root = logging.getLogger() name = logging.getLevelName(root.getEffectiveLevel()) self.assertEqual(name, "INFO") def test_verbose_logging(self): """ When verbose logging is enabled, the log level is set to debugging. """ setup_logging(Namespace(verbose=True, log_file=None)) root = logging.getLogger() self.assertEqual(logging.getLevelName(root.level), "DEBUG") custom = logging.getLogger("custom") self.assertEqual(custom.getEffectiveLevel(), root.getEffectiveLevel()) def test_default_loggers(self): """ Verify that the default loggers are bound when the logging system is started. 
""" root = logging.getLogger() self.assertEqual(root.handlers, []) setup_logging(Namespace(verbose=False, log_file=None)) self.assertNotEqual(root.handlers, []) def tearDown(self): # remove the logging handlers we installed root = logging.getLogger() root.handlers = [] class AttrDict(dict): def __getattr__(self, key): return self[key] class TestCommander(ControlToolTest): def get_sample_namespace(self): # Command expects these objects forming a non-obvious contract # with the runtime self.debugStream = debugStream = StringIO() debugStream.__call__ = debugStream.write self.infoStream = infoStream = StringIO() infoStream.__call__ = infoStream.write self.errorStream = errorStream = StringIO() errorStream.__call__ = errorStream.write log = AttrDict(debug=debugStream, info=infoStream, error=errorStream) return Namespace(log=log, verbose=False, parser=AttrDict(prog="juju")) def test_invalid_callback(self): # non callable callback self.failUnlessRaises(ValueError, Commander, time.daylight) # valid callback Commander(time.time) def test_run_invalid_call(self): c = Commander(time.time) # called with invalid options self.failUnlessRaises(ValueError, c, None) def change_value(self, options): self.test_value = 42 return self.test_value @inlineCallbacks def deferred_callback(self, options): self.test_value = 42 yield self.test_value @inlineCallbacks def deferred_callback_with_exception(self, options): raise Exception("Some generic error condition") def test_call_without_deferred(self): self.test_value = None self.setup_cli_reactor() self.setup_exit(0) com = Commander(self.change_value) ns = self.get_sample_namespace() self.mocker.replay() com(ns) self.assertEqual(self.test_value, 42) def test_call_with_deferrred(self): self.test_value = None self.setup_cli_reactor() self.setup_exit(0) com = Commander(self.deferred_callback) ns = self.get_sample_namespace() self.mocker.replay() com(ns) self.assertEqual(self.test_value, 42) def test_call_with_deferrred_exception(self): self.test_value = None self.setup_cli_reactor() self.setup_exit(1) com = Commander(self.deferred_callback_with_exception) ns = self.get_sample_namespace() self.mocker.replay() com(ns) # verify that the exception message is all that comes out of stderr self.assertEqual(self.errorStream.getvalue(), "Some generic error condition") def test_verbose_successful(self): self.test_value = None self.setup_cli_reactor() self.setup_exit(0) com = Commander(self.deferred_callback) ns = self.get_sample_namespace() ns.verbose = True self.mocker.replay() com(ns) self.assertEqual(self.test_value, 42) def test_verbose_error_with_traceback(self): self.test_value = None self.setup_cli_reactor() err = self.capture_stream("stderr") self.setup_exit(1) com = Commander(self.deferred_callback_with_exception) ns = self.get_sample_namespace() ns.verbose = True self.mocker.replay() com(ns) self.assertIn("traceback", err.getvalue()) juju-0.7.orig/juju/control/tests/test_debug_hooks.py0000644000000000000000000001613112135220114021114 0ustar 00000000000000import logging import os from twisted.internet.defer import ( inlineCallbacks, returnValue, succeed, Deferred) from juju.control import main from juju.charm.tests.test_repository import RepositoryTestBase from juju.environment.environment import Environment from juju.state.service import ServiceUnitState from juju.lib.mocker import ANY from juju.control.tests.common import ControlToolTest from juju.state.tests.test_service import ServiceStateManagerTestBase class ControlDebugHookTest( ServiceStateManagerTestBase, 
ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlDebugHookTest, self).setUp() self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() # Setup a machine in the provider self.provider_machine = (yield self.provider.start_machine( {"machine-id": 0, "dns-name": "antigravity.example.com"}))[0] # Setup the zk tree with a service, unit, and machine. self.service = yield self.add_service_from_charm("mysql") self.unit = yield self.service.add_unit_state() self.machine = yield self.add_machine_state() yield self.machine.set_instance_id(0) yield self.unit.assign_to_machine(self.machine) # capture the output. self.output = self.capture_logging( "juju.control.cli", level=logging.INFO) self.stderr = self.capture_stream("stderr") self.setup_exit(0) @inlineCallbacks def test_debug_hook_invalid_hook_name(self): """If an invalid hook name is used, an appropriate error message is raised that references the charm. """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) finished = self.setup_cli_reactor() self.mocker.replay() main(["debug-hooks", "mysql/0", "bad-happened"]) yield finished self.assertIn( "Charm 'local:series/mysql-1' does not contain hook " "'bad-happened'", self.output.getvalue()) @inlineCallbacks def test_debug_hook_invalid_unit_name(self): """An invalid unit causes an appropriate error. """ finished = self.setup_cli_reactor() self.mocker.replay() main(["debug-hooks", "mysql/42"]) yield finished self.assertIn( "Service unit 'mysql/42' was not found", self.output.getvalue()) @inlineCallbacks def test_debug_hook_invalid_service(self): """An invalid service causes an appropriate error. """ finished = self.setup_cli_reactor() self.mocker.replay() main(["debug-hooks", "magic/42"]) yield finished self.assertIn( "Service 'magic' was not found", self.output.getvalue()) @inlineCallbacks def test_debug_hook_unit_agent_not_available(self): """Simulate that the unit agent isn't available when the command is run. The command will set the debug flag, and wait for the unit agent to be available. """ mock_unit = self.mocker.patch(ServiceUnitState) # First time, doesn't exist, will wait on watch mock_unit.watch_agent() self.mocker.result((succeed(False), succeed(True))) # Second time, setup the unit address mock_unit.watch_agent() def setup_unit_address(): set_d = self.unit.set_public_address("x1.example.com") exist_d = Deferred() set_d.addCallback(lambda result: exist_d.callback(True)) return (exist_d, succeed(True)) self.mocker.call(setup_unit_address) # Intercept the ssh call self.patch(os, "system", lambda x: True) #mock_environment = self.mocker.patch(Environment) #mock_environment.get_machine_provider() #self.mocker.result(self.provider) finished = self.setup_cli_reactor() self.mocker.replay() main(["debug-hooks", "mysql/0"]) yield finished self.assertIn("Waiting for unit", self.output.getvalue()) self.assertIn("Unit running", self.output.getvalue()) self.assertIn("Connecting to remote machine x1.example.com", self.output.getvalue()) @inlineCallbacks def test_debug_hook(self): """The debug cli will set up the unit debug setting and ssh to a screen. """ system_mock = self.mocker.replace(os.system) system_mock(ANY) def on_ssh(command): self.assertStartsWith(command, "ssh -t ubuntu@x2.example.com") # In the function, os.system yields to facilitate testing.
self.assertEqual( (yield self.unit.get_hook_debug()), {"debug_hooks": ["*"]}) returnValue(True) self.mocker.call(on_ssh) finished = self.setup_cli_reactor() self.mocker.replay() # Setup the unit address. yield self.unit.set_public_address("x2.example.com") main(["debug-hooks", "mysql/0"]) yield finished self.assertIn( "Connecting to remote machine", self.output.getvalue()) self.assertIn( "Debug session ended.", self.output.getvalue()) @inlineCallbacks def test_standard_named_debug_hook(self): """A hook can be debugged by name. """ yield self.verify_hook_debug("start") self.mocker.reset() self.setup_exit(0) yield self.verify_hook_debug("stop") self.mocker.reset() self.setup_exit(0) yield self.verify_hook_debug("server-relation-changed") self.mocker.reset() self.setup_exit(0) yield self.verify_hook_debug("server-relation-changed", "server-relation-broken") self.mocker.reset() self.setup_exit(0) yield self.verify_hook_debug("server-relation-joined", "server-relation-departed") @inlineCallbacks def verify_hook_debug(self, *hook_names): """Utility function to verify hook debugging by name """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) system_mock = self.mocker.replace(os.system) system_mock(ANY) def on_ssh(command): self.assertStartsWith(command, "ssh -t ubuntu@x11.example.com") self.assertEqual( (yield self.unit.get_hook_debug()), {"debug_hooks": list(hook_names)}) returnValue(True) self.mocker.call(on_ssh) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.set_public_address("x11.example.com") args = ["debug-hooks", "mysql/0"] args.extend(hook_names) main(args) yield finished self.assertIn( "Connecting to remote machine", self.output.getvalue()) self.assertIn( "Debug session ended.", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_debug_log.py0000644000000000000000000002056712135220114020562 0ustar 00000000000000import json from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.control.tests.common import ControlToolTest from juju.lib.tests.test_zklog import LogTestBase class ControlDebugLogTest(LogTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlDebugLogTest, self).setUp() yield self.push_default_config() @inlineCallbacks def test_replay(self): """ Older logs can be replayed without affecting the current position pointer. 
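        Conceptually (a simplified, in-memory sketch for illustration;
        the real implementation reads entries from ZooKeeper and keeps
        the cursor in the /logs node, as exercised below):

            entries = [str(i) for i in range(20)]   # the log
            state = {"next-log-index": 15}          # persisted cursor

            def replay(limit):
                # Read from the start without touching the cursor.
                return entries[:limit]

            assert replay(5) == ["0", "1", "2", "3", "4"]
            assert state["next-log-index"] == 15    # unchanged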
""" self.log = yield self.get_configured_log() for i in range(20): self.log.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") yield self.client.set("/logs", json.dumps({"next-log-index": 15})) main(["debug-log", "--replay", "--limit", "5"]) yield cli_done output = stream.getvalue().split("\n") for i in range(5): self.assertIn("unit:mysql/0: test-zk-log INFO: %s" % i, output[i]) # We can run it again and get the same output self.mocker.reset() cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") yield self.client.set("/logs", json.dumps({"next-log-index": 15})) main(["debug-log", "--replay", "--limit", "5"]) yield cli_done output2 = stream.getvalue().split("\n") self.assertEqual(output, output2) content, stat = yield self.client.get("/logs") self.assertEqual(json.loads(content), {"next-log-index": 15}) @inlineCallbacks def test_include_agent(self): """Messages can be filtered to include only certain agents.""" log = yield self.get_configured_log("hook.output", "unit:cassandra/10") log2 = yield self.get_configured_log("hook.output", "unit:cassandra/1") # example of an agent context name sans ":" log3 = yield self.get_configured_log("unit.workflow", "mysql/1") for i in range(5): log.info(str(i)) for i in range(5): log2.info(str(i)) for i in range(5): log3.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "--include", "cassandra/1", "--limit", "4"]) yield cli_done output = stream.getvalue() self.assertNotIn("mysql/1", output) self.assertNotIn("cassandra/10", output) self.assertIn("cassandra/1", output) @inlineCallbacks def test_include_log(self): """Messages can be filtered to include only certain log channels.""" log = yield self.get_configured_log("hook.output", "unit:cassandra/1") log2 = yield self.get_configured_log("unit.workflow", "unit:mysql/1") log3 = yield self.get_configured_log("provisioning", "agent:provision") for i in range(5): log.info(str(i)) for i in range(5): log2.info(str(i)) for i in range(5): log3.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "--include", "unit.workflow", "-i", "agent:provision", "--limit", "8"]) yield cli_done output = stream.getvalue() self.assertNotIn("cassandra/1", output) self.assertIn("mysql/1", output) self.assertIn("provisioning", output) @inlineCallbacks def test_exclude_agent(self): """Messages can be filterd to exclude certain agents.""" log = yield self.get_configured_log("hook.output", "unit:cassandra/1") log2 = yield self.get_configured_log("unit.workflow", "unit:mysql/1") for i in range(5): log.info(str(i)) for i in range(5): log2.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "--exclude", "cassandra/1", "--limit", "4"]) yield cli_done output = stream.getvalue() self.assertNotIn("cassandra/1", output) self.assertIn("mysql/1", output) @inlineCallbacks def test_exclude_log(self): """Messages can be filtered to exclude certain log channels.""" log = yield self.get_configured_log("hook.output", "unit:cassandra/1") log2 = yield self.get_configured_log("provisioning", "agent:provision") log3 = yield self.get_configured_log("unit.workflow", "unit:mysql/1") for i in range(5): log.info(str(i)) for i in range(5): 
log2.info(str(i)) for i in range(5): log3.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "-x", "unit:cass*", "-x", "provisioning", "--limit", "4"]) yield cli_done output = stream.getvalue() self.assertNotIn("cassandra/1", output) self.assertNotIn("provisioning", output) self.assertIn("mysql/1", output) @inlineCallbacks def test_complex_filter(self): """Include and exclude filters can be combined.""" log = yield self.get_configured_log("hook.output", "unit:cassandra/1") log2 = yield self.get_configured_log("hook.output", "unit:cassandra/2") log3 = yield self.get_configured_log("hook.output", "unit:cassandra/3") for i in range(5): log.info(str(i)) for i in range(5): log2.info(str(i)) for i in range(5): log3.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main( ["debug-log", "-i", "cassandra/*", "-x", "cassandra/1", "-n", "8"]) yield cli_done output = stream.getvalue() self.assertNotIn("cassandra/1", output) self.assertIn("cassandra/2", output) self.assertIn("cassandra/3", output) @inlineCallbacks def test_log_level(self): """Messages can be filtered by log level.""" log = yield self.get_configured_log() for i in range(5): log.info(str(i)) for i in range(5): log.debug(str(i)) for i in range(5): log.warning(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "--level", "WARNING", "--limit", "4"]) yield cli_done output = stream.getvalue().split("\n") for i in range(4): self.assertIn("WARNING", output[i]) @inlineCallbacks def test_log_file(self): """Logs can be sent to a file.""" log = yield self.get_configured_log() for i in range(5): log.info(str(i)) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() file_path = self.makeFile() main(["debug-log", "--output", file_path, "--limit", "4"]) yield cli_done output = open(file_path).read().split("\n") for i in range(4): self.assertIn("unit:mysql/0: test-zk-log INFO: %s" % i, output[i]) @inlineCallbacks def test_log_object(self): """Messages that utilize string interpolation are rendered correctly.
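        This relies on the stdlib logging module deferring
        %-interpolation until a handler actually formats the record, so
        arbitrary objects are only str()-ed at render time. For
        example:

            import logging
            logging.basicConfig(level=logging.INFO)

            class Foobar(object):
                def __init__(self, v):
                    self._v = v
                def __str__(self):
                    return "Foobar:%s" % self._v

            # Foobar(21) is kept as an argument on the log record and
            # only converted to "Foobar:21" when the message is
            # rendered by a handler.
            logging.info("found a %s", Foobar(21))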
""" class Foobar(object): def __init__(self, v): self._v = v def __str__(self): return str("Foobar:%s" % self._v) log = yield self.get_configured_log("unit.workflow", "unit:mysql/1") log.info("found a %s", Foobar(21)) log.info("jack jumped into a %s", Foobar("cauldron")) cli_done = self.setup_cli_reactor() self.setup_exit() self.mocker.replay() stream = self.capture_stream("stdout") main(["debug-log", "--limit", "2"]) yield cli_done output = stream.getvalue() self.assertIn("Foobar:21", output) self.assertIn("Foobar:cauldron", output) juju-0.7.orig/juju/control/tests/test_deploy.py0000644000000000000000000005624012135220114020124 0ustar 00000000000000import logging import os from twisted.internet.defer import inlineCallbacks, succeed from juju.control import deploy, main from juju.environment.environment import Environment from juju.environment.config import EnvironmentsConfig from juju.errors import CharmError from juju.charm.directory import CharmDirectory from juju.charm.repository import RemoteCharmRepository from juju.charm.url import CharmURL from juju.charm.errors import ServiceConfigValueError from juju.lib import serializer from juju.state.charm import CharmStateManager from juju.state.environment import EnvironmentStateManager from juju.state.errors import ServiceStateNameInUse, ServiceStateNotFound from juju.state.service import ServiceStateManager from juju.state.relation import RelationStateManager from juju.lib.mocker import MATCH from .common import MachineControlToolTest class ControlDeployTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlDeployTest, self).setUp() config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer", "placement": "unassigned", "default-series": "series"}}} yield self.push_config("firstenv", config) def test_deploy_multiple_environments_none_specified(self): """ If multiple environments are configured, with no default, one must be specified for the deploy command. 
""" self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}, "secondenv": { "type": "dummy", "admin-secret": "marge"}}} self.write_config(serializer.dump(config)) stderr = self.capture_logging() main(["deploy", "--repository", self.unbundled_repo_path, "mysql"]) self.assertIn("There are multiple environments", stderr.getvalue()) def test_no_repository(self): self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main(["deploy", "local:redis"]) self.assertIn("No repository specified", stderr.getvalue()) def test_repository_from_environ(self): """ test using environment to set a default repository """ self.change_environment(JUJU_REPOSITORY=self.unbundled_repo_path) self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main([ "deploy", "local:redis"]) self.assertNotIn("No repository specified", stderr.getvalue()) def test_charm_not_found(self): self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main([ "deploy", "--repository", self.unbundled_repo_path, "local:redis"]) self.assertIn( "Charm 'local:series/redis' not found in repository", stderr.getvalue()) def test_nonsense_constraint(self): self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main([ "deploy", "--repository", self.unbundled_repo_path, "local:sample", "--constraints", "arch=arm tweedledee tweedledum"]) self.assertIn( "Could not interpret 'tweedledee' constraint: need more than 1 " "value to unpack", stderr.getvalue()) @inlineCallbacks def test_deploy_service_name_conflict(self): """Raise an error if a service name conflicts with an existing service """ environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "beekeeper", logging.getLogger("deploy"), []) # deploy the service a second time to generate a name conflict d = deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "beekeeper", logging.getLogger("deploy"), []) error = yield self.failUnlessFailure(d, ServiceStateNameInUse) self.assertEqual( str(error), "Service name 'beekeeper' is already in use") @inlineCallbacks def test_deploy_no_service_name_short_charm_name(self): """Uses charm name as service name if possible.""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", None, logging.getLogger("deploy"), []) service = yield ServiceStateManager( self.client).get_service_state("sample") self.assertEqual(service.service_name, "sample") @inlineCallbacks def test_deploy_no_service_name_long_charm_name(self): """Uses charm name as service name if possible.""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:series/sample", None, logging.getLogger("deploy"), []) service = yield ServiceStateManager( self.client).get_service_state("sample") self.assertEqual(service.service_name, "sample") def xtest_deploy_with_nonexistent_environment_specified(self): self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}, "secondenv": { "type": "dummy", 
"admin-secret": "marge"}}} self.write_config(serializer.dump(config)) stderr = self.capture_logging() main(["deploy", "--environment", "roman-candle", "--repository", self.unbundled_repo_path, "sample"]) self.assertIn("Invalid environment 'roman-candle'", stderr.getvalue()) def test_deploy_with_environment_specified(self): self.setup_cli_reactor() self.setup_exit(0) command = self.mocker.replace("juju.control.deploy.deploy") config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}, "secondenv": { "type": "dummy", "admin-secret": "marge"}}} self.write_config(serializer.dump(config)) def match_config(config): return isinstance(config, EnvironmentsConfig) def match_environment(environment): return isinstance(environment, Environment) and \ environment.name == "secondenv" command(MATCH(match_config), MATCH(match_environment), self.unbundled_repo_path, "local:sample", None, MATCH(lambda x: isinstance(x, logging.Logger)), ["cpu=36", "mem=64G"], None, False, num_units=1) self.mocker.replay() self.mocker.result(succeed(True)) main(["deploy", "--environment", "secondenv", "--repository", self.unbundled_repo_path, "--constraints", "cpu=36 mem=64G", "local:sample"]) @inlineCallbacks def test_deploy(self): """Create service, and service unit on machine from charm""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), ["cpu=123"]) topology = yield self.get_topology() service_id = topology.find_service_with_name("myblog") self.assertEqual(service_id, "service-%010d" % 0) exists = yield self.client.exists("/services/%s" % service_id) self.assertTrue(exists) service_state_manager = ServiceStateManager(self.client) service_state = yield service_state_manager.get_service_state("myblog") charm_id = yield service_state.get_charm_id() self.assertEquals(charm_id, "local:series/sample-2") constraints = yield service_state.get_constraints() expect_constraints = { "arch": "amd64", "cpu": 123, "mem": 512, "provider-type": "dummy", "ubuntu-series": "series"} self.assertEquals(constraints, expect_constraints) machine_ids = topology.get_machines() self.assertEqual( machine_ids, ["machine-%010d" % 0, "machine-%010d" % 1]) exists = yield self.client.exists("/machines/%s" % machine_ids[0]) self.assertTrue(exists) unit_ids = topology.get_service_units(service_id) self.assertEqual(unit_ids, ["unit-%010d" % 0]) exists = yield self.client.exists("/units/%s" % unit_ids[0]) self.assertTrue(exists) @inlineCallbacks def test_deploy_upgrade(self): """A charm can be deployed and get the latest version""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), []) yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog2", logging.getLogger("deploy"), [], upgrade=True) services = ServiceStateManager(self.client) service1 = yield services.get_service_state("myblog") s1_charm_id = yield service1.get_charm_id() service2 = yield services.get_service_state("myblog2") s2_charm_id = yield service2.get_charm_id() self.assertNotEqual(s1_charm_id, s2_charm_id) charms = CharmStateManager(self.client) charm1 = yield charms.get_charm_state(s1_charm_id) charm2 = yield charms.get_charm_state(s2_charm_id) self.assertEqual(charm1.revision + 1, charm2.revision) @inlineCallbacks def test_deploy_upgrade_bundle(self): """The upgrade option is invalid with a 
charm bundle.""" # bundle sample charms output = self.capture_logging("deploy") CharmDirectory(self.sample_dir1).make_archive( os.path.join(self.bundled_repo_path, "series", "old.charm")) CharmDirectory(self.sample_dir2).make_archive( os.path.join(self.bundled_repo_path, "series", "new.charm")) environment = self.config.get("firstenv") error = yield self.assertFailure( deploy.deploy( self.config, environment, self.bundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), [], upgrade=True), CharmError) self.assertIn("Searching for charm", output.getvalue()) self.assertIn("Only local directory charms can be upgraded on deploy", str(error)) @inlineCallbacks def test_deploy_upgrade_remote(self): """The upgrade option is invalid with a remote charm.""" repo = self.mocker.mock(RemoteCharmRepository) repo.type self.mocker.result("store") resolve = self.mocker.replace("juju.control.deploy.resolve") resolve("cs:sample", None, "series") self.mocker.result((repo, CharmURL.infer("cs:sample", "series"))) repo.find(MATCH(lambda x: isinstance(x, CharmURL))) self.mocker.result(CharmDirectory(self.sample_dir1)) self.mocker.replay() environment = self.config.get("firstenv") error = yield self.assertFailure(deploy.deploy( self.config, environment, None, "cs:sample", "myblog", logging.getLogger("deploy"), [], upgrade=True), CharmError) self.assertIn("Only local directory charms can be upgraded on deploy", str(error)) @inlineCallbacks def test_deploy_multiple_units(self): """Create service, and service unit on machine from charm""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), [], num_units=5) topology = yield self.get_topology() service_id = topology.find_service_with_name("myblog") self.assertEqual(service_id, "service-%010d" % 0) exists = yield self.client.exists("/services/%s" % service_id) self.assertTrue(exists) # Verify standard placement policy - unit placed on a new machine machine_ids = topology.get_machines() self.assertEqual( set(machine_ids), set(["machine-%010d" % i for i in xrange(6)])) for i in xrange(6): self.assertTrue( (yield self.client.exists("/machines/%s" % machine_ids[i]))) unit_ids = topology.get_service_units(service_id) self.assertEqual( set(unit_ids), set(["unit-%010d" % i for i in xrange(5)])) for i in xrange(5): self.assertTrue( (yield self.client.exists("/units/%s" % unit_ids[i]))) @inlineCallbacks def test_deploy_sends_environment(self): """Uses charm name as service name if possible.""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", None, logging.getLogger("deploy"), []) env_state_manager = EnvironmentStateManager(self.client) env_config = yield env_state_manager.get_config() self.assertEquals(serializer.load(env_config.serialize("firstenv")), serializer.load(self.config.serialize("firstenv"))) @inlineCallbacks def test_deploy_reuses_machines(self): """Verify that if machines are not in use, deploy uses them.""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:mysql", None, logging.getLogger("deploy"), []) yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:wordpress", None, logging.getLogger("deploy"), []) yield self.destroy_service("mysql") yield self.destroy_service("wordpress") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, 
"local:wordpress", None, logging.getLogger("deploy"), []) yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:mysql", None, logging.getLogger("deploy"), []) yield self.assert_machine_assignments("wordpress", [1]) yield self.assert_machine_assignments("mysql", [2]) def test_deploy_missing_config(self): """Missing config files should prevent the deployment""" stderr = self.capture_logging() self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() # missing config file main(["deploy", "--config", "missing", "--repository", self.unbundled_repo_path, "local:sample"]) self.assertIn("Config file 'missing'", stderr.getvalue()) @inlineCallbacks def test_deploy_with_bad_config(self): """Valid config options should be available to the deployed service.""" config_file = self.makeFile( serializer.dump(dict(otherservice=dict(application_file="foo")))) environment = self.config.get("firstenv") failure = deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), [], config_file) error = yield self.assertFailure(failure, ServiceConfigValueError) self.assertIn( "Expected a YAML dict with service name ('myblog').", str(error)) @inlineCallbacks def test_deploy_with_invalid_config(self): """Can't deploy with config that doesn't pass charm validation.""" config_file = self.makeFile( serializer.dump(dict(myblog=dict(application_file="foo")))) environment = self.config.get("firstenv") failure = deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:sample", "myblog", logging.getLogger("deploy"), [], config_file) error = yield self.assertFailure(failure, ServiceConfigValueError) self.assertIn( "application_file is not a valid configuration option", str(error)) yield self.assertFailure( ServiceStateManager(self.client).get_service_state("myblog"), ServiceStateNotFound) @inlineCallbacks def test_deploy_with_config(self): """Valid config options should be available to the deployed service.""" config_file = self.makeFile(serializer.dump(dict( myblog=dict(outlook="sunny", username="tester01")))) environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:dummy", "myblog", logging.getLogger("deploy"), [], config_file) # Verify that options in the yaml are available as state after # the deploy call (successfully applied) service = yield ServiceStateManager( self.client).get_service_state("myblog") config = yield service.get_config() self.assertEqual(config["outlook"], "sunny") self.assertEqual(config["username"], "tester01") # a default value from the config.yaml self.assertEqual(config["title"], "My Title") @inlineCallbacks def test_deploy_with_default_config(self): """Valid config options should be available to the deployed service.""" environment = self.config.get("firstenv") # Here we explictly pass no config file but the services # associated config.yaml defines default which we expect to # find anyway. 
yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:dummy", "myblog", logging.getLogger("deploy"), [], None) # Verify that options in the yaml are available as state after # the deploy call (successfully applied) service = yield ServiceStateManager( self.client).get_service_state("myblog") config = yield service.get_config() self.assertEqual(config["title"], "My Title") @inlineCallbacks def test_deploy_adds_peer_relations(self): """Deploy automatically adds peer relations.""" environment = self.config.get("firstenv") yield deploy.deploy( self.config, environment, self.unbundled_repo_path, "local:riak", None, logging.getLogger("deploy"), []) service_manager = ServiceStateManager(self.client) service_state = yield service_manager.get_service_state("riak") relation_manager = RelationStateManager(self.client) relations = yield relation_manager.get_relations_for_service( service_state) self.assertEqual(len(relations), 1) self.assertEqual(relations[0].relation_name, "ring") @inlineCallbacks def test_deploy_policy_from_environment(self): config = { "environments": {"firstenv": { "placement": "local", "type": "dummy", "default-series": "series"}}} yield self.push_config("firstenv", config) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["deploy", "--environment", "firstenv", "--repository", self.unbundled_repo_path, "local:sample", "beekeeper"]) yield finished # and verify it's placed on machine 0 (as per local policy) service = yield self.service_state_manager.get_service_state( "beekeeper") units = yield service.get_all_unit_states() unit = units[0] machine_id = yield unit.get_assigned_machine_id() self.assertEqual(machine_id, 0) @inlineCallbacks def test_deploy_informs_with_subordinate(self): """Verify a subordinate charm doesn't deploy and that the user is properly notified.
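        For context, a charm marks itself as subordinate in its
        metadata; a sketch of a hypothetical metadata.yaml (fields
        abbreviated, for illustration only):

            name: logging
            subordinate: true
            requires:
              juju-info:
                interface: juju-info
                scope: container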
""" log = self.capture_logging() finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() # missing config file main(["deploy", "--repository", self.unbundled_repo_path, "local:logging"]) yield finished self.assertIn( "Subordinate 'logging' awaiting relationship to " "principal for deployment.\n", log.getvalue()) # and verify no units assigned to service service_state = yield self.service_state_manager.get_service_state("logging") self.assertEqual(service_state.service_name, "logging") units = yield service_state.get_unit_names() self.assertEqual(units, []) def test_deploy_legacy_keys_in_legacy_env(self): yield self.client.delete("/constraints") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["deploy", "--repository", self.unbundled_repo_path, "local:sample", "beekeeper"]) yield finished service_manager = ServiceStateManager(self.client) yield service_manager.get_service_state("beekeeper") @inlineCallbacks def test_deploy_legacy_keys_in_fresh_env(self): yield self.push_default_config() local_config = { "environments": {"firstenv": { "type": "dummy", "some-legacy-key": "blah", "default-series": "series"}}} self.write_config(serializer.dump(local_config)) self.config.load() finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() stderr = self.capture_logging() main(["deploy", "--repository", self.unbundled_repo_path, "local:sample", "beekeeper"]) yield finished self.assertIn( "Your environments.yaml contains deprecated keys", stderr.getvalue()) service_manager = ServiceStateManager(self.client) yield self.assertFailure( service_manager.get_service_state("beekeeper"), ServiceStateNotFound) @inlineCallbacks def test_deploy_constraints_in_legacy_env(self): yield self.client.delete("/constraints") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() stderr = self.capture_logging() main(["deploy", "--repository", self.unbundled_repo_path, "local:sample", "beekeeper", "--constraints", "arch=i386"]) yield finished self.assertIn( "Constraints are not valid in legacy deployments.", stderr.getvalue()) service_manager = ServiceStateManager(self.client) yield self.assertFailure( service_manager.get_service_state("beekeeper"), ServiceStateNotFound) juju-0.7.orig/juju/control/tests/test_destroy_environment.py0000644000000000000000000001200712135220114022736 0ustar 00000000000000from twisted.internet.defer import succeed, inlineCallbacks from juju.lib.serializer import dump from juju.lib.mocker import MATCH from juju.providers.dummy import MachineProvider from juju.control import main from .common import ControlToolTest class ControlDestroyEnvironmentTest(ControlToolTest): @inlineCallbacks def test_destroy_multiple_environments_no_default(self): """With multiple environments a default needs to be set or passed. """ config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main(["destroy-environment"]) yield finished self.assertIn( "There are multiple environments and no explicit default", stderr.getvalue()) @inlineCallbacks def test_destroy_invalid_environment(self): """If an invalid environment is specified, an error message is given. 
""" config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() stderr = self.capture_logging() main(["destroy-environment", "-e", "thirdenv"]) yield finished self.assertIn("Invalid environment 'thirdenv'", stderr.getvalue()) @inlineCallbacks def test_destroy_environment_prompt_no(self): """If a user returns no to the prompt, destroy-environment is aborted. """ config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() self.setup_exit(0) mock_raw = self.mocker.replace(raw_input) mock_raw(MATCH(lambda x: x.startswith( "WARNING: this command will destroy the 'secondenv' " "environment (type: dummy)"))) self.mocker.result("n") self.mocker.replay() main(["destroy-environment", "-e", "secondenv"]) yield finished self.assertIn( "Environment destruction aborted", self.log.getvalue()) @inlineCallbacks def test_destroy_environment(self): """Command will terminate instances in only one environment.""" config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() envs = set(("firstenv", "secondenv")) def track_destroy_environment_call(self): envs.remove(self.environment_name) return succeed(True) provider = self.mocker.patch(MachineProvider) provider.destroy_environment() self.mocker.call(track_destroy_environment_call, with_object=True) self.setup_exit(0) mock_raw = self.mocker.replace(raw_input) mock_raw(MATCH(lambda x: x.startswith( "WARNING: this command will destroy the 'secondenv' " "environment (type: dummy)"))) self.mocker.result("y") self.mocker.replay() main(["destroy-environment", "-e", "secondenv"]) yield finished self.assertIn("Destroying environment 'secondenv' (type: dummy)...", self.log.getvalue()) self.assertEqual(envs, set(["firstenv"])) @inlineCallbacks def test_destroy_default_environment(self): """Command works with default environment, if specified.""" config = { "default": "thirdenv", "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}, "thirdenv": {"type": "dummy"}}} self.write_config(dump(config)) finished = self.setup_cli_reactor() envs = set(("firstenv", "secondenv", "thirdenv")) def track_destroy_environment_call(self): envs.remove(self.environment_name) return succeed(True) provider = self.mocker.patch(MachineProvider) provider.destroy_environment() self.mocker.call(track_destroy_environment_call, with_object=True) self.setup_exit(0) mock_raw = self.mocker.replace(raw_input) mock_raw(MATCH(lambda x: x.startswith( "WARNING: this command will destroy the 'thirdenv' " "environment (type: dummy)"))) self.mocker.result("y") self.mocker.replay() main(["destroy-environment"]) yield finished self.assertIn("Destroying environment 'thirdenv' (type: dummy)...", self.log.getvalue()) self.assertEqual(envs, set(["firstenv", "secondenv"])) juju-0.7.orig/juju/control/tests/test_destroy_service.py0000644000000000000000000001267312135220114022043 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from .common import ControlToolTest from juju.charm.tests.test_repository import RepositoryTestBase from juju.state.tests.test_service import ServiceStateManagerTestBase from juju.providers import dummy # for coverage/trial interaction class ControlStopServiceTest( 
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlStopServiceTest, self).setUp() self.service_state1 = yield self.add_service_from_charm("mysql") self.service1_unit = yield self.service_state1.add_unit_state() self.service_state2 = yield self.add_service_from_charm("wordpress") yield self.add_relation( "database", (self.service_state1, "db", "server"), (self.service_state2, "db", "client")) self.output = self.capture_logging() @inlineCallbacks def test_stop_service(self): """ 'juju destroy-service' destroys the named service and its service units. """ topology = yield self.get_topology() service_id = topology.find_service_with_name("mysql") self.assertNotEqual(service_id, None) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["destroy-service", "mysql"]) yield finished topology = yield self.get_topology() self.assertFalse(topology.has_service(service_id)) exists = yield self.client.exists("/services/%s" % service_id) self.assertFalse(exists) self.assertIn("Service 'mysql' destroyed.", self.output.getvalue()) @inlineCallbacks def test_stop_unknown_service(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["destroy-service", "volcano"]) yield finished self.assertIn( "Service 'volcano' was not found", self.output.getvalue()) @inlineCallbacks def test_destroy_subordinate_service(self): log_service = yield self.add_service_from_charm("logging") lu1 = yield log_service.add_unit_state() yield self.add_relation( "juju-info", "container", (self.service_state1, "juju-info", "server"), (log_service, "juju-info", "client"), ) finished = self.setup_cli_reactor() topology = yield self.get_topology() service_id = topology.find_service_with_name("logging") self.assertNotEqual(service_id, None) self.setup_exit(0) self.mocker.replay() main(["destroy-service", "logging"]) yield finished service_id = topology.find_service_with_name("logging") topology = yield self.get_topology() self.assertTrue(topology.has_service(service_id)) exists = yield self.client.exists("/services/%s" % service_id) self.assertTrue(exists) self.assertEquals( "Unsupported attempt to destroy subordinate service 'logging' " "while principal service 'mysql' is related.\n", self.output.getvalue()) @inlineCallbacks def test_destroy_principal_with_subordinates(self): log_service = yield self.add_service_from_charm("logging") lu1 = yield log_service.add_unit_state() yield self.add_relation( "juju-info", (self.service_state1, "juju-info", "server"), (log_service, "juju-info", "client")) finished = self.setup_cli_reactor() topology = yield self.get_topology() service_id = topology.find_service_with_name("mysql") logging_id = topology.find_service_with_name("logging") self.assertNotEqual(service_id, None) self.assertNotEqual(logging_id, None) self.setup_exit(0) self.mocker.replay() main(["destroy-service", "mysql"]) yield finished service_id = topology.find_service_with_name("mysql") topology = yield self.get_topology() self.assertFalse(topology.has_service(service_id)) exists = yield self.client.exists("/services/%s" % service_id) self.assertFalse(exists) # Verify the subordinate state was not removed as well. # Destroy should allow the destruction of subordinate services # with no relations. This means removing the principal and then # breaking the relation will allow for actual removal from # Zookeeper. See test_destroy_subordinate_without_relations.
exists = yield self.client.exists("/services/%s" % logging_id) self.assertTrue(exists) self.assertIn("Service 'mysql' destroyed.", self.output.getvalue()) @inlineCallbacks def test_destroy_subordinate_without_relations(self): """Verify we can remove a subordinate w/o relations.""" yield self.add_service_from_charm("logging") finished = self.setup_cli_reactor() topology = yield self.get_topology() logging_id = topology.find_service_with_name("logging") self.assertNotEqual(logging_id, None) self.setup_exit(0) self.mocker.replay() main(["destroy-service", "logging"]) yield finished topology = yield self.get_topology() self.assertFalse(topology.has_service(logging_id)) exists = yield self.client.exists("/services/%s" % logging_id) self.assertFalse(exists) juju-0.7.orig/juju/control/tests/test_expose.py0000644000000000000000000000652512135220114020134 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.control.tests.common import ControlToolTest from juju.lib import serializer from juju.state.tests.test_service import ServiceStateManagerTestBase class ExposeControlTest( ServiceStateManagerTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(ExposeControlTest, self).setUp() config = { "environments": {"firstenv": {"type": "dummy"}}} self.write_config(serializer.dump(config)) self.config.load() self.service_state = yield self.add_service_from_charm("wordpress") self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_expose_service(self): """Test subcommand sets the exposed flag for service.""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["expose", "wordpress"]) yield finished exposed_flag = yield self.service_state.get_exposed_flag() self.assertTrue(exposed_flag) self.assertIn("Service 'wordpress' was exposed.", self.output.getvalue()) @inlineCallbacks def test_expose_service_twice(self): """Test subcommand can run multiple times, keeping service exposed.""" yield self.service_state.set_exposed_flag() exposed_flag = yield self.service_state.get_exposed_flag() self.assertTrue(exposed_flag) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["expose", "wordpress"]) yield finished exposed_flag = yield self.service_state.get_exposed_flag() self.assertTrue(exposed_flag) self.assertIn("Service 'wordpress' was already exposed.", self.output.getvalue()) # various errors def test_expose_with_no_args(self): """Test subcommand takes at least one service argument.""" # in argparse, before reactor startup self.assertRaises(SystemExit, main, ["expose"]) self.assertIn( "juju expose: error: too few arguments", self.stderr.getvalue()) def test_expose_with_too_many_args(self): """Test subcommand takes at most one service argument.""" self.assertRaises( SystemExit, main, ["expose", "foo", "fum"]) self.assertIn( "juju: error: unrecognized arguments: fum", self.stderr.getvalue()) @inlineCallbacks def test_expose_unknown_service(self): """Test subcommand fails if service does not exist.""" finished = self.setup_cli_reactor() self.setup_exit(0) # XXX change when bug 697093 is fixed self.mocker.replay() main(["expose", "foobar"]) yield finished self.assertIn( "Service 'foobar' was not found", self.output.getvalue()) @inlineCallbacks def test_invalid_environment(self): """Test command with an environment that hasn't been set up.""" finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() main(["expose", 
"--environment", "roman-candle", "wordpress"]) yield finished self.assertIn( "Invalid environment 'roman-candle'", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_initialize.py0000644000000000000000000000334112135220114020763 0ustar 00000000000000from base64 import b64encode from twisted.internet.defer import succeed from txzookeeper import ZookeeperClient from juju.state.initialize import StateHierarchy from juju.lib.serializer import dump from juju.control import admin from .common import ControlToolTest class AdminInitializeTest(ControlToolTest): def test_initialize(self): """The admin cli dispatches the initialize method with arguments.""" client = self.mocker.patch(ZookeeperClient) hierarchy = self.mocker.patch(StateHierarchy) self.setup_cli_reactor() client.connect() self.mocker.result(succeed(client)) hierarchy.initialize() self.mocker.result(succeed(True)) client.close() self.capture_stream('stderr') self.setup_exit(0) self.mocker.replay() constraints_data = b64encode(dump({ "ubuntu-series": "foo", "provider-type": "bar"})) admin(["initialize", "--instance-id", "foobar", "--admin-identity", "admin:genie", "--constraints-data", constraints_data, "--provider-type", "dummy"]) def test_bad_constraints_data(self): """Test that failing to unpack --constraints-data aborts initialize""" client = self.mocker.patch(ZookeeperClient) self.setup_cli_reactor() client.connect() self.mocker.result(succeed(client)) self.capture_stream('stderr') self.setup_exit(1) self.mocker.replay() admin(["initialize", "--instance-id", "foobar", "--admin-identity", "admin:genie", "--constraints-data", "zaphod's just this guy, you know?", "--provider-type", "dummy"]) juju-0.7.orig/juju/control/tests/test_open_tunnel.py0000644000000000000000000000245112135220114021151 0ustar 00000000000000 from juju.control import main, open_tunnel from juju.lib.serializer import dump from juju.providers.dummy import MachineProvider from .common import ControlToolTest class OpenTunnelTest(ControlToolTest): def test_open_tunnel(self): """ 'juju-control bootstrap' will invoke the bootstrap method of all configured machine providers in all environments. """ config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}}} self.write_config(dump(config)) self.setup_cli_reactor() self.setup_exit(0) provider = self.mocker.patch(MachineProvider) provider.connect(share=True) hanging_deferred = self.mocker.replace(open_tunnel.hanging_deferred) def callback(deferred): deferred.callback(None) return deferred hanging_deferred() self.mocker.passthrough(callback) self.mocker.replay() self.capture_stream("stderr") main(["open-tunnel"]) lines = filter(None, self.log.getvalue().split("\n")) self.assertEqual( lines, ["Tunnel to the environment is open. 
Press CTRL-C to close it.", "'open_tunnel' command finished successfully"]) juju-0.7.orig/juju/control/tests/test_remove_relation.py0000644000000000000000000002120612135220114022014 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks, returnValue from juju.charm.tests.test_repository import RepositoryTestBase from juju.control import main, remove_relation from juju.control.tests.common import ControlToolTest from juju.lib import serializer from juju.machine.tests.test_constraints import dummy_constraints from juju.state.errors import ServiceStateNotFound from juju.state.tests.test_service import ServiceStateManagerTestBase class ControlRemoveRelationTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlRemoveRelationTest, self).setUp() config = { "environments": { "firstenv": { "type": "dummy", "admin-secret": "homer"}}} self.write_config(serializer.dump(config)) self.config.load() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def add_relation_state(self, *service_names): for service_name in service_names: # probe if service already exists try: yield self.service_state_manager.get_service_state( service_name) except ServiceStateNotFound: yield self.add_service_from_charm(service_name) endpoint_pairs = yield self.service_state_manager.join_descriptors( *service_names) endpoints = endpoint_pairs[0] if endpoints[0] == endpoints[1]: endpoints = endpoints[0:1] relation_state = (yield self.relation_state_manager.add_relation_state( *endpoints))[0] returnValue(relation_state) @inlineCallbacks def assertRemoval(self, relation_state): topology = yield self.get_topology() self.assertFalse(topology.has_relation(relation_state.internal_id)) @inlineCallbacks def test_remove_relation(self): """Test that the command works when run from the CLI itself.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() relation_state = yield self.add_relation_state("mysql", "wordpress") yield self.add_relation_state("varnish", "wordpress") main(["remove-relation", "mysql", "wordpress"]) yield wait_on_reactor_stopped self.assertIn( "Removed mysql relation from all service units.", self.output.getvalue()) yield self.assertRemoval(relation_state) @inlineCallbacks def test_remove_peer_relation(self): """Test that services that peer can have that relation removed.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() relation_state = yield self.add_relation_state("riak", "riak") main(["remove-relation", "riak", "riak"]) yield wait_on_reactor_stopped self.assertIn( "Removed riak relation from all service units.", self.output.getvalue()) yield self.assertRemoval(relation_state) @inlineCallbacks def test_remove_relation_command(self): """Test removing a relation via supporting method in the cmd obj.""" relation_state = yield self.add_relation_state("mysql", "wordpress") environment = self.config.get("firstenv") yield remove_relation.remove_relation( self.config, environment, False, logging.getLogger("juju.control.cli"), "mysql", "wordpress") self.assertIn( "Removed mysql relation from all service units.", self.output.getvalue()) yield self.assertRemoval(relation_state) @inlineCallbacks def test_verbose_flag(self): """Test the verbose flag.""" relation_state = yield self.add_relation_state("riak", "riak") wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0) self.mocker.replay() main(["--verbose", "remove-relation", "riak:ring", "riak:ring"]) yield wait_on_reactor_stopped self.assertIn("Endpoint pairs", self.output.getvalue()) self.assertIn( "Removed riak relation from all service units.", self.output.getvalue()) yield self.assertRemoval(relation_state) # test for various errors def test_with_no_args(self): """Test two descriptor arguments are required for command.""" self.assertRaises(SystemExit, main, ["remove-relation"]) self.assertIn( "juju remove-relation: error: too few arguments", self.stderr.getvalue()) def test_too_many_arguments_provided(self): """Test that command rejects more than 2 descriptor arguments.""" self.assertRaises( SystemExit, main, ["remove-relation", "foo", "fum", "bar"]) self.assertIn( "juju: error: unrecognized arguments: bar", self.stderr.getvalue()) @inlineCallbacks def test_missing_service(self): """Test command fails if a service in the relation is missing.""" yield self.add_service_from_charm("mysql") # but not wordpress wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-relation", "wordpress", "mysql"]) yield wait_on_reactor_stopped self.assertIn( "Service 'wordpress' was not found", self.output.getvalue()) @inlineCallbacks def test_no_common_relation_type(self): """Test command fails if no common relation between services.""" yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("riak") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-relation", "riak", "mysql"]) yield wait_on_reactor_stopped self.assertIn("No matching endpoints", self.output.getvalue()) @inlineCallbacks def test_ambiguous_pairing(self): """Test command fails because the relation is ambiguous.""" yield self.add_service_from_charm("mysql-alternative") yield self.add_service_from_charm("wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-relation", "wordpress", "mysql-alternative"]) yield wait_on_reactor_stopped self.assertIn( "Ambiguous relation 'wordpress mysql-alternative'; could refer " "to:\n 'wordpress:db mysql-alternative:dev' (mysql client / " "mysql server)\n 'wordpress:db mysql-alternative:prod' (mysql " "client / mysql server)", self.output.getvalue()) @inlineCallbacks def test_missing_charm(self): """Test command fails because service has no corresponding charm.""" yield self.add_service("mysql_no_charm") yield self.add_service_from_charm("wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-relation", "wordpress", "mysql_no_charm"]) yield wait_on_reactor_stopped self.assertIn("No matching endpoints", self.output.getvalue()) @inlineCallbacks def test_remove_relation_missing_relation(self): """Test that the command works when run from the CLI itself.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("wordpress") main(["remove-relation", "mysql", "wordpress"]) yield wait_on_reactor_stopped self.assertIn( "Relation not found", self.output.getvalue()) @inlineCallbacks def test_remove_subordinate_relation_with_principal(self): yield self.add_service_from_charm("wordpress") log_charm = yield self.get_subordinate_charm() yield self.service_state_manager.add_service_state( "logging", log_charm, dummy_constraints) yield 
self.add_relation_state("logging", "wordpress") wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-relation", "logging", "wordpress"]) yield wait_on_reactor_stopped self.assertIn("Unsupported attempt to destroy " "subordinate service 'wordpress' while " "principal service 'logging' is related.", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_remove_unit.py0000644000000000000000000002056412135220114021164 0ustar 00000000000000import logging import sys from twisted.internet.defer import inlineCallbacks import zookeeper from .common import ControlToolTest from juju.charm.tests.test_repository import RepositoryTestBase from juju.control import main from juju.state.endpoint import RelationEndpoint from juju.state.tests.test_service import ServiceStateManagerTestBase class ControlRemoveUnitTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlRemoveUnitTest, self).setUp() self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() # Setup some service units. self.service_state1 = yield self.add_service_from_charm("mysql") self.service_unit1 = yield self.service_state1.add_unit_state() self.service_unit2 = yield self.service_state1.add_unit_state() self.service_unit3 = yield self.service_state1.add_unit_state() # Add an assigned machine to one of them. self.machine = yield self.add_machine_state() yield self.machine.set_instance_id(0) yield self.service_unit1.assign_to_machine(self.machine) # Setup a machine in the provider matching the assigned. self.provider_machine = yield self.provider.start_machine( {"machine-id": 0, "dns-name": "antigravity.example.com"}) self.output = self.capture_logging(level=logging.DEBUG) self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_remove_unit(self): """ 'juju remove-unit ' will remove the given unit. """ unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 3) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "mysql/0"]) yield finished topology = yield self.get_topology() self.assertFalse(topology.has_service_unit( self.service_state1.internal_id, self.service_unit1.internal_id)) topology = yield self.get_topology() self.assertTrue(topology.has_service_unit( self.service_state1.internal_id, self.service_unit2.internal_id)) self.assertFalse( topology.get_service_units_in_machine(self.machine.internal_id)) self.assertIn( "Unit 'mysql/0' removed from service 'mysql'", self.output.getvalue()) @inlineCallbacks def test_remove_multiple_units(self): """ 'juju remove-unit ...' removes desired units. 
""" unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 3) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "mysql/0", "mysql/2"]) yield finished topology = yield self.get_topology() self.assertFalse(topology.has_service_unit( self.service_state1.internal_id, self.service_unit1.internal_id)) topology = yield self.get_topology() self.assertTrue(topology.has_service_unit( self.service_state1.internal_id, self.service_unit2.internal_id)) self.assertFalse( topology.get_service_units_in_machine(self.machine.internal_id)) self.assertIn( "Unit 'mysql/0' removed from service 'mysql'", self.output.getvalue()) self.assertIn( "Unit 'mysql/2' removed from service 'mysql'", self.output.getvalue()) @inlineCallbacks def test_remove_unassigned_unit(self): """Remove unit also works if the unit is unassigned to a machine. """ unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 3) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "mysql/1"]) yield finished # verify the unit and its machine assignment. unit_names = yield self.service_state1.get_unit_names() self.assertEqual(len(unit_names), 2) topology = yield self.get_topology() topology = yield self.get_topology() self.assertFalse(topology.has_service_unit( self.service_state1.internal_id, self.service_unit2.internal_id)) topology = yield self.get_topology() self.assertTrue(topology.has_service_unit( self.service_state1.internal_id, self.service_unit1.internal_id)) @inlineCallbacks def test_remove_unit_unknown_service(self): """If the service doesn't exist, return an appropriate error message. """ finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "volcano/0"]) yield finished self.assertIn( "Service 'volcano' was not found", self.output.getvalue()) @inlineCallbacks def test_remove_unit_with_subordinate(self): wordpress = yield self.add_service_from_charm("wordpress") logging = yield self.add_service_from_charm("logging") wordpress_ep = RelationEndpoint("wordpress", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") relation_state, service_states = (yield self.relation_state_manager.add_relation_state( wordpress_ep, logging_ep)) wp1 = yield wordpress.add_unit_state() yield logging.add_unit_state(container=wp1) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "logging/0"]) yield finished self.assertIn( "Unsupported attempt to destroy subordinate service " "'logging/0' while principal service 'wordpress/0' is related.", self.output.getvalue()) @inlineCallbacks def test_remove_unit_bad_parse(self): """Verify that a bad service unit name results in an appropriate error. """ finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "volcano-0"]) yield finished self.assertIn( "Not a proper unit name: 'volcano-0'", self.output.getvalue()) @inlineCallbacks def test_remove_unit_unknown_unit(self): """If the unit doesn't exist an appropriate error message is returned. 
""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["remove-unit", "mysql/3"]) yield finished self.assertIn( "Service unit 'mysql/3' was not found", self.output.getvalue()) @inlineCallbacks def test_zookeeper_logging_default(self): """By default zookeeper logging is turned off, unless in verbose mode. """ log_file = self.makeFile() def reset_zk_log(): zookeeper.set_debug_level(0) zookeeper.set_log_stream(sys.stdout) self.addCleanup(reset_zk_log) finished = self.setup_cli_reactor() self.setup_exit(0) # Do this as late as possible to prevent zk background logging # from causing problems. zookeeper.set_debug_level(zookeeper.LOG_LEVEL_INFO) zookeeper.set_log_stream(open(log_file, "w")) self.mocker.replay() main(["remove-unit", "mysql/3"]) yield finished output = open(log_file).read() self.assertEqual(output, "") @inlineCallbacks def test_zookeeper_logging_enabled(self): """By default zookeeper logging is turned off, unless in verbose mode. """ log_file = self.makeFile() zookeeper.set_debug_level(10) zookeeper.set_log_stream(open(log_file, "w")) def reset_zk_log(): zookeeper.set_debug_level(0) zookeeper.set_log_stream(sys.stdout) self.addCleanup(reset_zk_log) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["-v", "remove-unit", "mysql/3"]) yield finished output = open(log_file).read() self.assertTrue(output) self.assertIn("ZOO_DEBUG", output) juju-0.7.orig/juju/control/tests/test_resolved.py0000644000000000000000000003257712135220114020462 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue from juju.control import main from juju.control.tests.common import ControlToolTest from juju.charm.tests.test_repository import RepositoryTestBase from juju.state.service import RETRY_HOOKS, NO_HOOKS from juju.state.tests.test_service import ServiceStateManagerTestBase from juju.state.errors import ServiceStateNotFound from juju.unit.workflow import UnitWorkflowState, RelationWorkflowState from juju.unit.lifecycle import UnitRelationLifecycle from juju.hooks.executor import HookExecutor class ControlResolvedTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlResolvedTest, self).setUp() yield self.add_relation_state("wordpress", "mysql") yield self.add_relation_state("wordpress", "varnish") self.service1 = yield self.service_state_manager.get_service_state( "mysql") self.service_unit1 = yield self.service1.add_unit_state() self.service_unit2 = yield self.service1.add_unit_state() self.unit1_workflow = UnitWorkflowState( self.client, self.service_unit1, None, self.makeDir()) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("started") self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") self.executor = HookExecutor() @inlineCallbacks def add_relation_state(self, *service_names): for service_name in service_names: try: yield self.service_state_manager.get_service_state( service_name) except ServiceStateNotFound: yield self.add_service_from_charm(service_name) endpoint_pairs = yield self.service_state_manager.join_descriptors( *service_names) endpoints = endpoint_pairs[0] endpoints = endpoint_pairs[0] if endpoints[0] == endpoints[1]: endpoints = endpoints[0:1] relation_state = (yield self.relation_state_manager.add_relation_state( *endpoints))[0] returnValue(relation_state) 
@inlineCallbacks def get_named_service_relation(self, service_state, relation_name): if isinstance(service_state, str): service_state = yield self.service_state_manager.get_service_state( service_state) rels = yield self.relation_state_manager.get_relations_for_service( service_state) rels = [sr for sr in rels if sr.relation_name == relation_name] if len(rels) == 1: returnValue(rels[0]) returnValue(rels) @inlineCallbacks def setup_unit_relations(self, service_relation, *units): """ Given a service relation and set of unit tuples in the form unit_state, unit_relation_workflow_state, will add unit relations for these units and update their workflow state to the desired/given state. """ for unit, state in units: unit_relation = yield service_relation.add_unit_state(unit) lifecycle = UnitRelationLifecycle( self.client, unit.unit_name, unit_relation, service_relation.relation_ident, self.makeDir(), self.makeDir(), self.executor) workflow_state = RelationWorkflowState( self.client, unit_relation, service_relation.relation_name, lifecycle, self.makeDir()) with (yield workflow_state.lock()): yield workflow_state.set_state(state) @inlineCallbacks def test_resolved(self): """ 'juju resolved ' will schedule a unit for retrying from an error state. """ # Push the unit into an error state with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("start_error") self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() self.assertEqual( (yield self.service_unit1.get_resolved()), None) main(["resolved", "mysql/0"]) yield finished self.assertEqual( (yield self.service_unit1.get_resolved()), {"retry": NO_HOOKS}) self.assertIn( "Marked unit 'mysql/0' as resolved", self.output.getvalue()) @inlineCallbacks def test_resolved_retry(self): """ 'juju resolved --retry ' will schedule a unit for retrying from an error state with a retry of hooks executions. """ with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("start_error") self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() self.assertEqual( (yield self.service_unit1.get_resolved()), None) main(["resolved", "--retry", "mysql/0"]) yield finished self.assertEqual( (yield self.service_unit1.get_resolved()), {"retry": RETRY_HOOKS}) self.assertIn( "Marked unit 'mysql/0' as resolved", self.output.getvalue()) @inlineCallbacks def test_relation_resolved(self): """ 'juju relation ' will schedule the broken unit relations for being resolved. """ service_relation = yield self.get_named_service_relation( self.service1, "server") yield self.setup_unit_relations( service_relation, (self.service_unit1, "down"), (self.service_unit2, "up")) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("start_error") self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() self.assertEqual( (yield self.service_unit1.get_relation_resolved()), None) main(["resolved", "--retry", "mysql/0", service_relation.relation_name]) yield finished self.assertEqual( (yield self.service_unit1.get_relation_resolved()), {service_relation.internal_relation_id: RETRY_HOOKS}) self.assertEqual( (yield self.service_unit2.get_relation_resolved()), None) self.assertIn( "Marked unit 'mysql/0' relation 'server' as resolved", self.output.getvalue()) @inlineCallbacks def test_resolved_relation_some_already_resolved(self): """ 'juju resolved ' will mark resolved all down units that are not already marked resolved. 
""" service2 = yield self.service_state_manager.get_service_state( "wordpress") service_unit1 = yield service2.add_unit_state() service_relation = yield self.get_named_service_relation( service2, "db") yield self.setup_unit_relations( service_relation, (service_unit1, "down")) service_relation2 = yield self.get_named_service_relation( service2, "cache") yield self.setup_unit_relations( service_relation2, (service_unit1, "down")) yield service_unit1.set_relation_resolved( {service_relation.internal_relation_id: NO_HOOKS}) self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "--retry", "wordpress/0", "cache"]) yield finished self.assertEqual( (yield service_unit1.get_relation_resolved()), {service_relation.internal_relation_id: NO_HOOKS, service_relation2.internal_relation_id: RETRY_HOOKS}) self.assertIn( "Marked unit 'wordpress/0' relation 'cache' as resolved", self.output.getvalue()) @inlineCallbacks def test_resolved_relation_some_already_resolved_conflict(self): """ 'juju resolved ' will mark resolved all down units that are not already marked resolved. """ service2 = yield self.service_state_manager.get_service_state( "wordpress") service_unit1 = yield service2.add_unit_state() service_relation = yield self.get_named_service_relation( service2, "db") yield self.setup_unit_relations( service_relation, (service_unit1, "down")) yield service_unit1.set_relation_resolved( {service_relation.internal_relation_id: NO_HOOKS}) self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "--retry", "wordpress/0", "db"]) yield finished self.assertEqual( (yield service_unit1.get_relation_resolved()), {service_relation.internal_relation_id: NO_HOOKS}) self.assertIn( "Service unit 'wordpress/0' already has relations marked as resol", self.output.getvalue()) @inlineCallbacks def test_resolved_unknown_service(self): """ 'juju resolved ' will report if a service is invalid. """ self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "zebra/0"]) yield finished self.assertIn("Service 'zebra' was not found", self.output.getvalue()) @inlineCallbacks def test_resolved_unknown_unit(self): """ 'juju resolved ' will report if a unit is invalid. """ self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "mysql/5"]) yield finished self.assertIn( "Service unit 'mysql/5' was not found", self.output.getvalue()) @inlineCallbacks def test_resolved_unknown_unit_relation(self): """ 'juju resolved ' will report if a relation is invalid. """ self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() self.assertEqual( (yield self.service_unit1.get_resolved()), None) main(["resolved", "mysql/0", "magic"]) yield finished self.assertIn("Relation not found", self.output.getvalue()) @inlineCallbacks def test_resolved_already_running(self): """ 'juju resolved ' will report if the unit is already running. 
""" # Just verify we don't accidentally mark up another unit of the service unit2_workflow = UnitWorkflowState( self.client, self.service_unit2, None, self.makeDir()) with (yield unit2_workflow.lock()): unit2_workflow.set_state("start_error") self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "mysql/0"]) yield finished self.assertEqual( (yield self.service_unit2.get_resolved()), None) self.assertEqual( (yield self.service_unit1.get_resolved()), None) self.assertNotIn( "Unit 'mysql/0 already running: started", self.output.getvalue()) @inlineCallbacks def test_resolved_already_resolved(self): """ 'juju resolved ' will report if the unit is already resolved. """ # Mark the unit as resolved and as in an error state. yield self.service_unit1.set_resolved(RETRY_HOOKS) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("start_error") unit2_workflow = UnitWorkflowState( self.client, self.service_unit1, None, self.makeDir()) with (yield unit2_workflow.lock()): unit2_workflow.set_state("start_error") self.assertEqual( (yield self.service_unit2.get_resolved()), None) self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "mysql/0"]) yield finished self.assertEqual( (yield self.service_unit1.get_resolved()), {"retry": RETRY_HOOKS}) self.assertNotIn( "Marked unit 'mysql/0' as resolved", self.output.getvalue()) self.assertIn( "Service unit 'mysql/0' is already marked as resolved.", self.output.getvalue(), "") @inlineCallbacks def test_resolved_relation_already_running(self): """ 'juju resolved ' will report if the relation is already running. """ service2 = yield self.service_state_manager.get_service_state( "wordpress") service_unit1 = yield service2.add_unit_state() service_relation = yield self.get_named_service_relation( service2, "db") yield self.setup_unit_relations( service_relation, (service_unit1, "up")) self.setup_exit(0) finished = self.setup_cli_reactor() self.mocker.replay() main(["resolved", "wordpress/0", "db"]) yield finished self.assertIn("Matched relations are all running", self.output.getvalue()) self.assertEqual( (yield service_unit1.get_relation_resolved()), None) juju-0.7.orig/juju/control/tests/test_scp.py0000644000000000000000000001115512135220114017411 0ustar 00000000000000import logging import os from twisted.internet.defer import inlineCallbacks from juju.environment.environment import Environment from juju.state.tests.test_service import ServiceStateManagerTestBase from juju.lib.mocker import ARGS, KWARGS from juju.control import main from .common import ControlToolTest class SCPTest(ServiceStateManagerTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(SCPTest, self).setUp() self.setup_exit(0) self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() # Setup a machine in the provider self.provider_machine = (yield self.provider.start_machine( {"machine-id": 0, "dns-name": "antigravity.example.com"}))[0] # Setup the zk tree with a service, unit, and machine. self.service = yield self.add_service_from_charm("mysql") self.unit = yield self.service.add_unit_state() yield self.unit.set_public_address( "%s.example.com" % self.unit.unit_name.replace("/", "-")) self.machine = yield self.add_machine_state() yield self.machine.set_instance_id(0) yield self.unit.assign_to_machine(self.machine) # capture the output. 
self.output = self.capture_logging( "juju.control.cli", level=logging.INFO) self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_scp_unit_name(self): """Verify scp command is invoked against the host for a unit name.""" # Verify expected call against scp mock_exec = self.mocker.replace(os.execvp) mock_exec("scp", [ "scp", "ubuntu@mysql-0.example.com:/foo/*", "10.1.2.3:."]) # But no other calls calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.connect_agent() main(["scp", "mysql/0:/foo/*", "10.1.2.3:."]) yield finished self.assertEquals(calls, []) @inlineCallbacks def test_scp_machine_id(self): """Verify scp command is invoked against the host for a machine ID.""" # We need to do this because separate instances of DummyProvider don't # share instance state. mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) # Verify expected call against scp mock_exec = self.mocker.replace(os.execvp) mock_exec( "scp", ["scp", "ubuntu@antigravity.example.com:foo/*", "10.1.2.3:."]) # But no other calls calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.connect_agent() main(["scp", "0:foo/*", "10.1.2.3:."]) yield finished self.assertEquals(calls, []) @inlineCallbacks def test_passthrough_args(self): """Verify that args are passed through to the underlying scp command. For example, something like the following command should be valid:: $ juju scp -o "ConnectTimeout 60" foo mysql/0:/foo/bar """ # Verify expected call against scp mock_exec = self.mocker.replace(os.execvp) mock_exec("scp", [ "scp", "-r", "-o", "ConnectTimeout 60", "foo", "ubuntu@mysql-0.example.com:/foo/bar"]) # But no other calls calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() main(["scp", "-r", "-o", "ConnectTimeout 60", "foo", "mysql/0:/foo/bar"]) yield finished self.assertEquals(calls, []) class ParseErrorsTest(ServiceStateManagerTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(ParseErrorsTest, self).setUp() self.stderr = self.capture_stream("stderr") def test_passthrough_args_parse_error(self): """Verify that bad passthrough args will get an argparse error.""" e = self.assertRaises( SystemExit, main, ["scp", "-P", "mysql/0"]) self.assertEqual(e.code, 2) self.assertIn("juju scp: error: too few arguments", self.stderr.getvalue()) juju-0.7.orig/juju/control/tests/test_ssh.py0000644000000000000000000003110712135220114017420 0ustar 00000000000000import logging import os from twisted.internet.defer import inlineCallbacks, succeed, Deferred from juju.environment.environment import Environment from juju.charm.tests.test_repository import RepositoryTestBase from juju.state.machine import MachineState from juju.state.service import ServiceUnitState from juju.state.tests.test_service import ServiceStateManagerTestBase from juju.lib.mocker import ARGS, KWARGS from juju.control import main from .common import ControlToolTest class ControlShellTest( ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase): @inlineCallbacks def setUp(self): yield super(ControlShellTest, self).setUp() 
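        # (This setUp mirrors SCPTest.setUp in test_scp.py: one provider
        # machine, a mysql service with a single assigned unit, and captured
        # logging output for the assertions in each test.)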
self.setup_exit(0) self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() # Setup a machine in the provider self.provider_machine = (yield self.provider.start_machine( {"machine-id": 0, "dns-name": "antigravity.example.com"}))[0] # Setup the zk tree with a service, unit, and machine. self.service = yield self.add_service_from_charm("mysql") self.unit = yield self.service.add_unit_state() yield self.unit.set_public_address( "%s.example.com" % self.unit.unit_name.replace("/", "-")) self.machine = yield self.add_machine_state() yield self.machine.set_instance_id(0) yield self.unit.assign_to_machine(self.machine) # capture the output. self.output = self.capture_logging( "juju.control.cli", level=logging.INFO) @inlineCallbacks def test_shell_with_unit(self): """ 'juju ssh mysql/0' will execute ssh against the machine hosting the unit. """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@mysql-0.example.com"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.connect_agent() main(["ssh", self.unit.unit_name]) yield finished self.assertEquals(calls, []) self.assertIn( "Connecting to unit mysql/0 at mysql-0.example.com", self.output.getvalue()) @inlineCallbacks def test_passthrough_args(self): """Verify that args are passed through to the underlying ssh command. For example, something like the following command should be valid:: $ juju ssh -L8080:localhost:8080 -o "ConnectTimeout 60" mysql/0 ls / """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "-L8080:localhost:8080", "-o", "ConnectTimeout 60", "ubuntu@mysql-0.example.com", "ls *"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.connect_agent() main(["ssh", "-L8080:localhost:8080", "-o", "ConnectTimeout 60", self.unit.unit_name, "ls *"]) yield finished self.assertEquals(calls, []) self.assertIn( "Connecting to unit mysql/0 at mysql-0.example.com", self.output.getvalue()) @inlineCallbacks def test_shell_with_unit_and_unconnected_unit_agent(self): """If a unit doesn't have a connected unit agent, the ssh command will wait till one exists before connecting. 
""" mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_unit = self.mocker.patch(ServiceUnitState) mock_unit.watch_agent() self.mocker.result((succeed(False), succeed(True))) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@mysql-0.example.com"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.connect_agent() main(["ssh", "mysql/0"]) yield finished self.assertEquals(calls, []) self.assertIn( "Waiting for unit to come up", self.output.getvalue()) @inlineCallbacks def test_shell_with_machine_and_unconnected_machine_agent(self): """If a machine doesn't have a connected machine agent, the ssh command will wait till one exists before connecting. """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_machine = self.mocker.patch(MachineState) mock_machine.watch_agent() self.mocker.result((succeed(False), succeed(True))) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@antigravity.example.com"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.machine.connect_agent() main(["ssh", "0"]) yield finished self.assertEquals(calls, []) self.assertIn( "Waiting for machine to come up", self.output.getvalue()) @inlineCallbacks def test_shell_with_unit_and_unset_dns(self): """If a machine agent isn't connects, its also possible that the provider machine may not yet have a dns name, if the instance hasn't started. In that case after the machine agent has connected, verify the provider dns name is valid.""" mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_unit = self.mocker.patch(ServiceUnitState) mock_unit.watch_agent() address_set = Deferred() @inlineCallbacks def set_unit_dns(): yield self.unit.set_public_address("mysql-0.example.com") address_set.callback(True) self.mocker.call(set_unit_dns) self.mocker.result((succeed(False), succeed(False))) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@mysql-0.example.com"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.unit.set_public_address(None) main(["ssh", "mysql/0"]) # Wait till we've set the unit address before connecting the agent. yield address_set yield self.unit.connect_agent() yield finished self.assertEquals(calls, []) self.assertIn( "Waiting for unit to come up", self.output.getvalue()) @inlineCallbacks def test_shell_with_machine_and_unset_dns(self): """If a machine agent isn't connects, its also possible that the provider machine may not yet have a dns name, if the instance hasn't started. 
In that case after the machine agent has connected, verify the provider dns name is valid.""" mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_machine = self.mocker.patch(MachineState) mock_machine.watch_agent() def set_machine_dns(): self.provider_machine.dns_name = "antigravity.example.com" self.mocker.call(set_machine_dns) self.mocker.result((succeed(False), succeed(False))) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@antigravity.example.com"]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() self.provider_machine.dns_name = None yield self.machine.connect_agent() main(["ssh", "0"]) yield finished self.assertEquals(calls, []) self.assertIn( "Waiting for machine to come up", self.output.getvalue()) @inlineCallbacks def test_shell_with_machine_id(self): """ 'juju ssh ' will execute ssh against the machine with the corresponding id. """ mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) mock_exec = self.mocker.replace(os.execvp) mock_exec("ssh", [ "ssh", "-o", "ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "ubuntu@antigravity.example.com", ]) # Track unwanted calls: calls = [] mock_exec(ARGS, KWARGS) self.mocker.count(0, None) self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs))) finished = self.setup_cli_reactor() self.mocker.replay() yield self.machine.connect_agent() main(["ssh", "0"]) yield finished self.assertEquals(calls, []) self.assertIn( "Connecting to machine 0 at antigravity.example.com", self.output.getvalue()) @inlineCallbacks def test_shell_with_unassigned_unit(self): """If the service unit is not assigned, attempting to connect, raises an error.""" finished = self.setup_cli_reactor() self.mocker.replay() unit_state = yield self.service.add_unit_state() main(["ssh", unit_state.unit_name]) yield finished self.assertIn( "Service unit 'mysql/1' is not assigned to a machine", self.output.getvalue()) @inlineCallbacks def test_shell_with_invalid_machine(self): """If the machine does not exist, attempting to connect, raises an error.""" mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) finished = self.setup_cli_reactor() self.mocker.replay() main(["ssh", "1"]) yield finished self.assertIn("Machine 1 was not found", self.output.getvalue()) class ParseErrorsTest(ServiceStateManagerTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(ParseErrorsTest, self).setUp() self.stderr = self.capture_stream("stderr") def test_passthrough_args_parse_error(self): """Verify that bad passthrough args will get an argparse error.""" e = self.assertRaises( SystemExit, main, ["ssh", "-L", "mysql/0"]) self.assertEqual(e.code, 2) self.assertIn("juju ssh: error: too few arguments", self.stderr.getvalue()) juju-0.7.orig/juju/control/tests/test_status.py0000644000000000000000000011161012135220114020144 0ustar 00000000000000from fnmatch import fnmatch import inspect import json import logging import os from StringIO import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from juju.agents.base 
import TwistedOptionNamespace from juju.agents.machine import MachineAgent from juju.environment.environment import Environment from juju.control import status from juju.control import tests from juju.lib import serializer from juju.state.endpoint import RelationEndpoint from juju.state.environment import GlobalSettingsStateManager from juju.state.tests.test_service import ServiceStateManagerTestBase from juju.tests.common import get_test_zookeeper_address from juju.unit.workflow import ZookeeperWorkflowState from .common import ControlToolTest tests_path = os.path.dirname(inspect.getabsfile(tests)) sample_path = os.path.join(tests_path, "sample_cluster.yaml") sample_cluster = serializer.load(open(sample_path, "r")) def dump_stringio(stringio, filename): """Debug utility to dump a StringIO to a filename.""" fp = open(filename, "w") fp.write(stringio.getvalue()) fp.close() @inlineCallbacks def collect(scope, provider, client, log): """Collect and return status info as dict""" # provided for backwards compatibility with # original API # used only in testing s = status.StatusCommand(client, provider, log) state = yield s(scope) returnValue(state) class StatusTestBase(ServiceStateManagerTestBase, ControlToolTest): # Status tests setup a large tree every time, make allowances for it. # TODO: create minimal trees needed per test. timeout = 10 @inlineCallbacks def setUp(self): yield super(StatusTestBase, self).setUp() settings = GlobalSettingsStateManager(self.client) yield settings.set_provider_type("dummy") self.log = self.capture_logging() self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() self.machine_count = 0 self.output = StringIO() @inlineCallbacks def set_unit_state(self, unit_state, state, port_protos=()): unit_state.set_public_address( "%s.example.com" % unit_state.unit_name.replace("/", "-")) workflow_client = ZookeeperWorkflowState(self.client, unit_state) with (yield workflow_client.lock()): yield workflow_client.set_state(state) for port_proto in port_protos: yield unit_state.open_port(*port_proto) @inlineCallbacks def add_relation_unit_states(self, relation_state, unit_states, states): for unit_state, state in zip(unit_states, states): relation_unit_state = yield relation_state.add_unit_state( unit_state) workflow_client = ZookeeperWorkflowState( self.client, relation_unit_state) with (yield workflow_client.lock()): yield workflow_client.set_state(state) @inlineCallbacks def add_relation_with_relation_units( self, source_endpoint, source_units, source_states, dest_endpoint, dest_units, dest_states): relation_state, service_relation_states = \ yield self.relation_state_manager.add_relation_state( *[source_endpoint, dest_endpoint]) source_relation_state, dest_relation_state = service_relation_states yield self.add_relation_unit_states( source_relation_state, source_units, source_states) yield self.add_relation_unit_states( dest_relation_state, dest_units, dest_states) @inlineCallbacks def add_unit(self, service, machine, with_agent=lambda _: True, units=None, container=None): unit = yield service.add_unit_state(container=container) self.assertTrue(machine or container) if machine is not None: yield unit.assign_to_machine(machine) if with_agent(unit.unit_name): yield unit.connect_agent() if units is not None: units.setdefault(service.service_name, []).append(unit) returnValue(unit) @inlineCallbacks def build_topology(self, base=None, skip_unit_agents=()): """Build a simulated topology with a default machine configuration. 
This method returns a dict that can be used to get handles to the constructed objects. """ state = {} # build out the topology using the state managers m1 = yield self.add_machine_state() m2 = yield self.add_machine_state() m3 = yield self.add_machine_state() m4 = yield self.add_machine_state() m5 = yield self.add_machine_state() m6 = yield self.add_machine_state() m7 = yield self.add_machine_state() # inform the provider about the machine yield self.provider.start_machine({"machine-id": 0, "dns-name": "steamcloud-1.com"}) yield self.provider.start_machine({"machine-id": 1, "dns-name": "steamcloud-2.com"}) yield self.provider.start_machine({"machine-id": 2, "dns-name": "steamcloud-3.com"}) yield self.provider.start_machine({"machine-id": 3, "dns-name": "steamcloud-4.com"}) yield self.provider.start_machine({"machine-id": 4, "dns-name": "steamcloud-5.com"}) yield self.provider.start_machine({"machine-id": 5, "dns-name": "steamcloud-6.com"}) yield self.provider.start_machine({"machine-id": 6, "dns-name": "steamcloud-7.com"}) yield m1.set_instance_id(0) yield m2.set_instance_id(1) yield m3.set_instance_id(2) yield m4.set_instance_id(3) yield m5.set_instance_id(4) yield m6.set_instance_id(5) yield m7.set_instance_id(6) state["machines"] = [m1, m2, m3, m4, m5, m6, m7] # "Deploy" services wordpress = yield self.add_service_from_charm("wordpress") mysql = yield self.add_service_from_charm("mysql") yield mysql.set_exposed_flag() # but w/ no open ports varnish = yield self.add_service_from_charm("varnish") yield varnish.set_exposed_flag() # w/o additional metadata memcache = yield self.add_service("memcache") state["services"] = dict(wordpress=wordpress, mysql=mysql, varnish=varnish, memcache=memcache) def with_unit(name): for pattern in skip_unit_agents: if fnmatch(name, pattern): return False return True units = {} wpu = yield self.add_unit(wordpress, m1, with_unit, units) myu1 = yield self.add_unit(mysql, m2, with_unit, units) myu2 = yield self.add_unit(mysql, m3, with_unit, units) vu1 = yield self.add_unit(varnish, m4, with_unit, units) vu2 = yield self.add_unit(varnish, m5, with_unit, units) mc1 = yield self.add_unit(memcache, m6, with_unit, units) mc2 = yield self.add_unit(memcache, m7, with_unit, units) state["units"] = units # add unit states to services and assign to machines # Set the lifecycle state and open ports, if any, for each unit state. yield self.set_unit_state(wpu, "started", [(80, "tcp"), (443, "tcp")]) yield self.set_unit_state(myu1, "started") yield self.set_unit_state(myu2, "stop_error") yield self.set_unit_state(vu1, "started", [(80, "tcp")]) yield self.set_unit_state(vu2, "started", [(80, "tcp")]) yield self.set_unit_state(mc1, None) yield self.set_unit_state(mc2, "installed") # Wordpress integrates with each of the following # services. Each relation endpoint is used to define the # specific relation to be established. 
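        # (RelationEndpoint is constructed throughout these tests with the
        # argument order (service_name, relation_type, relation_name,
        # relation_role[, relation_scope]); the calls below omit the scope,
        # while "container" is passed elsewhere in this module for
        # subordinate relations.)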
mysql_ep = RelationEndpoint( "mysql", "client-server", "db", "server") memcache_ep = RelationEndpoint( "memcache", "client-server", "cache", "server") varnish_ep = RelationEndpoint( "varnish", "client-server", "proxy", "client") wordpress_db_ep = RelationEndpoint( "wordpress", "client-server", "db", "client") wordpress_cache_ep = RelationEndpoint( "wordpress", "client-server", "cache", "client") wordpress_proxy_ep = RelationEndpoint( "wordpress", "client-server", "proxy", "server") # Create relation service units for each of these relations yield self.add_relation_with_relation_units( mysql_ep, [myu1, myu2], ["up", "departed"], wordpress_db_ep, [wpu], ["up"]) yield self.add_relation_with_relation_units( memcache_ep, [mc1, mc2], ["up", "down"], wordpress_cache_ep, [wpu], ["up"]) yield self.add_relation_with_relation_units( varnish_ep, [vu1, vu2], ["up", "up"], wordpress_proxy_ep, [wpu], ["up"]) state["relations"] = dict( wordpress=[wpu], mysql=[myu1, myu2], varnish=[vu1, vu2], memcache=[mc1, mc2] ) returnValue(state) def mock_environment(self): mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) class StatusTest(StatusTestBase): @inlineCallbacks def add_provider_machine(self): m = yield self.add_machine_state() yield self.provider.start_machine( {"machine-id": self.machine_count, "dns-name": "steamcloud-%s.com" % self.machine_count}) m.set_instance_id(self.machine_count) self.machine_count += 1 returnValue(m) @inlineCallbacks def test_status_provider_machine(self): """Verify only one call to the provider for n machines. """ yield self.add_provider_machine() yield self.add_provider_machine() mock_provider = self.mocker.patch(self.provider) mock_provider.get_machines() self.mocker.count(1) self.mocker.passthrough() self.mocker.replay() state = yield collect( None, mock_provider, self.client, None) self.assertEqual( state, {'services': {}, 'machines': { 0: {'agent-state': 'not-started', 'instance-state': 'unknown', 'instance-id': 0, 'dns-name': 'steamcloud-0.com'}, 1: {'agent-state': 'not-started', 'instance-state': 'unknown', 'instance-id': 1, 'dns-name': 'steamcloud-1.com'}}}) @inlineCallbacks def test_peer_relation(self): """Verify status works with peer relations. 
""" m1 = yield self.add_provider_machine() m2 = yield self.add_provider_machine() riak = yield self.add_service_from_charm("riak") riak_u1 = yield self.add_unit(riak, m1) riak_u2 = yield self.add_unit(riak, m2, with_agent=lambda _: False) yield self.set_unit_state(riak_u1, "started") yield self.set_unit_state(riak_u2, "started") _, (peer_rel,) = yield self.relation_state_manager.add_relation_state( RelationEndpoint("riak", "peer", "ring", "peer")) riak_u1_relation = yield peer_rel.add_unit_state(riak_u1) riak_u1_workflow = ZookeeperWorkflowState( self.client, riak_u1_relation) with (yield riak_u1_workflow.lock()): yield riak_u1_workflow.set_state("up") yield peer_rel.add_unit_state(riak_u2) state = yield collect( ["riak"], self.provider, self.client, None) self.assertEqual( state["services"]["riak"], {"charm": "local:series/riak-7", "relations": {"ring": ["riak"]}, "units": {"riak/0": {"machine": 0, "public-address": "riak-0.example.com", "agent-state": "started"}, "riak/1": {"machine": 1, "public-address": "riak-1.example.com", "agent-state": "down"}}}) @inlineCallbacks def test_service_with_multiple_instances_of_named_relation(self): m1 = yield self.add_provider_machine() m2 = yield self.add_provider_machine() m3 = yield self.add_provider_machine() mysql = yield self.add_service_from_charm("mysql") mysql_ep = RelationEndpoint( "mysql", "client-server", "db", "server") mysql_1 = yield self.add_unit(mysql, m1) myblog = yield self.add_service_from_charm( "myblog", charm_name="wordpress") myblog_db_ep = RelationEndpoint( "myblog", "client-server", "db", "client") myblog_1 = yield self.add_unit(myblog, m2) teamblog = yield self.add_service_from_charm( "teamblog", charm_id=(yield myblog.get_charm_id())) teamblog_db_ep = RelationEndpoint( "teamblog", "client-server", "db", "client") teamblog_1 = yield self.add_unit(teamblog, m3) yield self.add_relation_with_relation_units( mysql_ep, [mysql_1], ["up"], myblog_db_ep, [myblog_1], ["up"]) yield self.add_relation_with_relation_units( mysql_ep, [mysql_1], ["up"], teamblog_db_ep, [teamblog_1], ["up"]) state = yield collect(None, self.provider, self.client, None) self.assertEqual( state["services"]["mysql"]["units"]["mysql/0"], {"agent-state": "pending", "machine": 0, "public-address": None}) self.assertEqual( state["services"]["mysql"]["relations"], {"db": ["myblog", "teamblog"]}) @inlineCallbacks def test_service_with_multiple_rels_to_same_endpoint(self): m1 = yield self.add_provider_machine() m2 = yield self.add_provider_machine() mysql = yield self.add_service_from_charm("mysql") mysql_ep = RelationEndpoint( "mysql", "client-server", "db", "server") mysql_1 = yield self.add_unit(mysql, m1) myblog = yield self.add_service_from_charm( "myblog", charm_name="funkyblog") write_db_ep = RelationEndpoint( "myblog", "client-server", "write-db", "client") read_db_ep = RelationEndpoint( "myblog", "client-server", "read-db", "client") myblog_1 = yield self.add_unit(myblog, m2) yield self.add_relation_with_relation_units( mysql_ep, [mysql_1], ["down"], write_db_ep, [myblog_1], ["up"]) yield self.add_relation_with_relation_units( mysql_ep, [mysql_1], ["down"], read_db_ep, [myblog_1], ["up"]) state = yield collect(None, self.provider, self.client, None) # Even though there are two relations to this service we # collapse to one the additional displays are redundant. 
self.assertEqual( state["services"]["mysql"]["relations"], {"db": ["myblog"]}) self.assertEqual( state["services"]["mysql"]["units"]["mysql/0"], {"agent-state": "pending", "machine": 0, "relation-errors": {"db": ["myblog"]}, "public-address": None}) @inlineCallbacks def test_collect(self): yield self.build_topology(skip_unit_agents=("varnish/1",)) agent = MachineAgent() options = TwistedOptionNamespace() options["juju_directory"] = self.makeDir() options["zookeeper_servers"] = get_test_zookeeper_address() options["session_file"] = self.makeFile() options["machine_id"] = "0" agent.configure(options) agent.set_watch_enabled(False) agent.client = self.client yield agent.start() # collect everything state = yield collect(None, self.provider, self.client, None) services = state["services"] self.assertIn("wordpress", services) self.assertIn("varnish", services) self.assertIn("mysql", services) # and verify the specifics of a single service self.assertTrue("mysql" in services) units = list(services["mysql"]["units"]) self.assertEqual(len(units), 2) self.assertEqual(state["machines"][0], {"instance-id": 0, "instance-state": "unknown", "dns-name": "steamcloud-1.com", "agent-state": "running"}) self.assertEqual(services["mysql"]["relations"], {"db": ["wordpress"]}) self.assertEqual(services["wordpress"]["relations"], {"cache": ["memcache"], "db": ["mysql"], "proxy": ["varnish"]}) self.assertEqual( services["varnish"], {"units": {"varnish/1": { "machine": 4, "agent-state": "down", "open-ports": ["80/tcp"], "public-address": "varnish-1.example.com"}, "varnish/0": { "machine": 3, "agent-state": "started", "public-address": "varnish-0.example.com", "open-ports": ["80/tcp"]}}, "exposed": True, "charm": "local:series/varnish-1", "relations": {"proxy": ["wordpress"]}}) self.assertEqual( services["wordpress"], {"charm": "local:series/wordpress-3", "exposed": False, "relations": { "cache": ["memcache"], "db": ["mysql"], "proxy": ["varnish"]}, "units": { "wordpress/0": { "machine": 0, "public-address": "wordpress-0.example.com", "agent-state": "started"}}}) self.assertEqual( services["memcache"], {"charm": "local:series/dummy-1", "relations": {"cache": ["wordpress"]}, "units": { "memcache/0": { "machine": 5, "public-address": "memcache-0.example.com", "agent-state": "pending"}, "memcache/1": { "machine": 6, "public-address": "memcache-1.example.com", "relation-errors": { "cache": ["wordpress"]}, "agent-state": "installed"}}} ) @inlineCallbacks def test_collect_filtering(self): yield self.build_topology() # collect by service name state = yield collect( ["wordpress"], self.provider, self.client, None) # Validate that only the expected service is present # in the state self.assertEqual(state["machines"].keys(), [0]) self.assertEqual(state["services"].keys(), ["wordpress"]) # collect by unit name state = yield collect(["*/0"], self.provider, self.client, None) self.assertEqual(set(state["machines"].keys()), set([0, 1, 3, 5])) self.assertEqual(set(state["services"].keys()), set(["memcache", "varnish", "mysql", "wordpress"])) # collect by unit name state = yield collect(["*/1"], self.provider, self.client, None) self.assertEqual(set(state["machines"].keys()), set([2, 4, 6])) # verify that only the proper units and services are present self.assertEqual( state["services"], {"memcache": { "charm": "local:series/dummy-1", "relations": {"cache": ["wordpress"]}, "units": { "memcache/1": { "machine": 6, "agent-state": "installed", "public-address": "memcache-1.example.com", "relation-errors": {"cache": ["wordpress"]}}}}, 
"mysql": { "exposed": True, "charm": "local:series/mysql-1", "relations": {"db": ["wordpress"]}, "units": { "mysql/1": { "machine": 2, "public-address": "mysql-1.example.com", "open-ports": [], "agent-state": "stop-error", "relation-errors": {"db": ["wordpress"]}}}}, "varnish": { "exposed": True, "charm": "local:series/varnish-1", "relations": {"proxy": ["wordpress"]}, "units": { "varnish/1": { "machine": 4, "public-address": "varnish-1.example.com", "open-ports": ["80/tcp"], "agent-state": "started", }}}}) # filter a missing service state = yield collect( ["cluehammer"], self.provider, self.client, None) self.assertEqual(set(state["machines"].keys()), set([])) self.assertEqual(set(state["services"].keys()), set([])) # filter a missing unit state = yield collect(["*/7"], self.provider, self.client, None) self.assertEqual(set(state["machines"].keys()), set([])) self.assertEqual(set(state["services"].keys()), set([])) @inlineCallbacks def test_collect_with_unassigned_machines(self): yield self.build_topology() # get a service's units and unassign one of them wordpress = yield self.service_state_manager.get_service_state( "wordpress") units = yield wordpress.get_all_unit_states() # There is only a single wordpress machine in the topology. unit = units[0] machine_id = yield unit.get_assigned_machine_id() yield unit.unassign_from_machine() yield unit.set_public_address(None) # test that the machine is in state information w/o assignment state = yield collect(None, self.provider, self.client, None) # verify that the unassigned machine appears in the state self.assertEqual(state["machines"][machine_id], {"dns-name": "steamcloud-1.com", "instance-id": 0, "instance-state": "unknown", "agent-state": "not-started"}) # verify that we have a record of the unassigned service; # but note that unassigning this machine without removing the # service unit and relation units now produces other dangling # records in the topology self.assertEqual( state["services"]["wordpress"]["units"], {"wordpress/0": {"machine": None, "public-address": None, "agent-state": "started"}}) @inlineCallbacks def test_collect_with_removed_unit(self): yield self.build_topology() # get a service's units and unassign one of them wordpress = yield self.service_state_manager.get_service_state( "wordpress") units = yield wordpress.get_all_unit_states() # There is only a single wordpress machine in the topology. 
unit = units[0] machine_id = yield unit.get_assigned_machine_id() yield wordpress.remove_unit_state(unit) # test that wordpress has no assigned service units state = yield collect(None, self.provider, self.client, None) self.assertEqual( state["services"]["wordpress"], {"charm": "local:series/wordpress-3", "relations": {"cache": ["memcache"], "db": ["mysql"], "proxy": ["varnish"]}, "units": {}}) # but its machine is still available as reported by status seen_machines = set() for service, service_data in state["services"].iteritems(): for unit, unit_data in service_data["units"].iteritems(): seen_machines.add(unit_data["machine"]) self.assertIn(machine_id, state["machines"]) self.assertNotIn(machine_id, seen_machines) @inlineCallbacks def test_provider_pending_machine_state(self): # verify that we get some error reporting if the provider # doesn't have proper machine info yield self.build_topology() # add a new machine to the topology (but not the provider) # and status it m8 = yield self.add_machine_state() wordpress = yield self.service_state_manager.get_service_state( "wordpress") wpu = yield wordpress.add_unit_state() yield wpu.assign_to_machine(m8) # test that we identify we don't have machine state state = yield collect( None, self.provider, self.client, logging.getLogger()) self.assertEqual(state["machines"][7]["instance-id"], "pending") @inlineCallbacks def test_render_yaml(self): yield self.build_topology() self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.render_yaml, self.output, None) state = serializer.yaml_load(self.output.getvalue()) self.assertEqual(set(state["machines"].keys()), set([0, 1, 2, 3, 4, 5, 6])) services = state["services"] self.assertEqual(set(services["memcache"].keys()), set(["charm", "relations", "units"])) self.assertEqual(set(services["mysql"].keys()), set(["exposed", "charm", "relations", "units"])) self.assertEqual(set(services["varnish"].keys()), set(["exposed", "charm", "relations", "units"])) self.assertEqual(set(services["wordpress"].keys()), set(["charm", "exposed", "relations", "units"])) for service in services.itervalues(): self.assertGreaterEqual( # may also include "exposed" key set(service.keys()), set(["units", "relations", "charm"])) self.assertTrue(service["charm"].startswith("local:series/")) self.assertEqual(state["machines"][0], {"instance-id": 0, "instance-state": "unknown", "dns-name": "steamcloud-1.com", "agent-state": "down"}) self.assertEqual(services["mysql"]["relations"], {"db": ["wordpress"]}) self.assertEqual(services["mysql"]["units"]["mysql/1"]["open-ports"], []) self.assertEqual(services["wordpress"]["relations"], {"cache": ["memcache"], "db": ["mysql"], "proxy": ["varnish"]}) @inlineCallbacks def test_render_json(self): yield self.build_topology() self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.render_json, self.output, None) state = json.loads(self.output.getvalue()) self.assertEqual(set(state["machines"].keys()), set([unicode(i) for i in [0, 1, 2, 3, 4, 5, 6]])) services = state["services"] self.assertEqual(set(services["memcache"].keys()), set(["charm", "relations", "units"])) self.assertEqual(set(services["mysql"].keys()), set(["exposed", "charm", "relations", "units"])) self.assertEqual(set(services["varnish"].keys()), set(["exposed", "charm", "relations", "units"])) self.assertEqual(set(services["wordpress"].keys()), set(["charm", "exposed", "relations", "units"])) for service in services.itervalues(): 
self.assertTrue(service["charm"].startswith("local:series/")) self.assertEqual(state["machines"][u"0"], {"instance-id": 0, "instance-state": "unknown", "dns-name": "steamcloud-1.com", "agent-state": "down"}) self.assertEqual(services["mysql"]["relations"], {"db": ["wordpress"]}) self.assertEqual(services["mysql"]["units"]["mysql/1"]["open-ports"], []) self.assertEqual(services["wordpress"]["relations"], {"cache": ["memcache"], "db": ["mysql"], "proxy": ["varnish"]}) self.assertEqual( services["varnish"], { "exposed": True, "units": {"varnish/1": { "machine": 4, "public-address": "varnish-1.example.com", "open-ports": ["80/tcp"], "agent-state": "started"}, "varnish/0": { "machine": 3, "public-address": "varnish-0.example.com", "open-ports": ["80/tcp"], "agent-state": "started"}, }, "charm": "local:series/varnish-1", "relations": {"proxy": ["wordpress"]}}) @inlineCallbacks def test_render_dot(self): yield self.build_topology() self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.render_dot, self.output, None) result = self.output.getvalue() #dump_stringio(self.output, "/tmp/ens.dot") # make mild assertions about the expected DOT output # because the DOT language is simple we can test that some # relationships are present self.assertIn('memcache -> "memcache/1"', result) self.assertIn('varnish -> "varnish/0"', result) self.assertIn('varnish -> "varnish/1"', result) # test that relationships are being rendered self.assertIn("wordpress -> memcache", result) self.assertIn("mysql -> wordpress", result) # assert that properties were applied to a relationship # self.assertIn("wordpress -> varnish [dir=none, " # "label=\"varnish:wordpress/proxy\"]", # result) # verify that the renderer picked up the DNS name of the # machines (and they are associated with the proper machine) self.assertIn( '"mysql/0" [color="#DD4814", fontcolor="#ffffff", ' "shape=box, style=filled, label=mysql-0." "example.com>]", result) self.assertIn( '"mysql/1" [color="#DD4814", fontcolor="#ffffff", shape=box, style=filled, label=mysql-1.example.com>]', result) # Check the charms are present in the service node. self.assertIn( 'memcache [color="#772953", fontcolor="#ffffff", shape=component, style=filled, label=local:series/dummy-1>]', result) self.assertIn( 'varnish [color="#772953", fontcolor="#ffffff", shape=component, style=filled, label=local:series/varnish-1>]',result) self.assertIn( 'mysql [color="#772953", fontcolor="#ffffff", shape=component, style=filled, label=local:series/mysql-1>]', result) self.assertIn("local:series/dummy-1", result) def test_render_dot_bad_clustering(self): """Test around Bug #792448. Deployment producing bad status dot output, but sane normal output. """ self.mocker.replay() output = StringIO() renderer = status.renderers["dot"] renderer(sample_cluster, output, self.environment, format="dot") # Verify that the invalid names were properly corrected self.assertIn("subgraph cluster_wiki_db {", output.getvalue()) self.assertIn('wiki_cache -> "wiki_cache/0"', output.getvalue()) @inlineCallbacks def test_render_svg(self): yield self.build_topology() self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.renderers["svg"], self.output, None) # look for a hint the process completed. 
self.assertIn("", self.output.getvalue()) @inlineCallbacks def test_subordinate_status_output(self): state = yield self.build_topology() # supplement status with additional subordinates # add logging to mysql and wordpress logging = yield self.add_service_from_charm("logging") mysql_ep = RelationEndpoint("mysql", "client-server", "juju-info", "server") wordpress_db_ep = RelationEndpoint("wordpress", "client-server", "juju-info", "server") logging_ep = RelationEndpoint("logging", "client-server", "juju-info", "client", "container") my_log_rel, my_log_services = ( yield self.relation_state_manager.add_relation_state( mysql_ep, logging_ep)) wp_log_rel, wp_log_services = ( yield self.relation_state_manager.add_relation_state( wordpress_db_ep, logging_ep)) units = state["units"] log_units = units.setdefault("logging", {}) wp1 = iter(units["wordpress"]).next() mu1, mu2 = list(units["mysql"]) yield self.add_unit(logging, None, container=mu1, units=log_units) yield self.add_unit(logging, None, container=wp1, units=log_units) yield self.add_unit(logging, None, container=mu2, units=log_units) self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.render_yaml, self.output, None) state = serializer.load(self.output.getvalue()) # verify our changes log_state = state["services"]["logging"] self.assertEqual(set(log_state["relations"]["juju-info"]), set(["mysql", "wordpress"])) self.assertEqual(set(log_state["subordinate-to"]), set(["mysql", "wordpress"])) wp_state = state["services"]["wordpress"] self.assertEqual(wp_state["relations"]["juju-info"], ["logging"]) wp_subs = wp_state["units"]["wordpress/0"]["subordinates"] logging_sub = wp_subs["logging/1"] # this assertion verifies that we don't see keys we don't # expect as well self.assertEqual(logging_sub, {"agent-state": "pending"}) @inlineCallbacks def test_subordinate_status_output_no_container(self): state = yield self.build_topology() # supplement status with additional subordinates # add logging to mysql and wordpress logging = yield self.add_service_from_charm("logging") mysql_ep = RelationEndpoint("mysql", "client-server", "juju-info", "server") wordpress_db_ep = RelationEndpoint("wordpress", "client-server", "juju-info", "server") logging_ep = RelationEndpoint("logging", "client-server", "juju-info", "client", "container") my_log_rel, my_log_services = ( yield self.relation_state_manager.add_relation_state( mysql_ep, logging_ep)) wp_log_rel, wp_log_services = ( yield self.relation_state_manager.add_relation_state( wordpress_db_ep, logging_ep)) units = state["units"] log_units = units.setdefault("logging", {}) wp1 = iter(units["wordpress"]).next() mu1, mu2 = list(units["mysql"]) yield self.add_unit(logging, None, container=mu1, units=log_units) yield self.add_unit(logging, None, container=wp1, units=log_units) yield self.add_unit(logging, None, container=mu2, units=log_units) # remove mysql/0 yield state["services"]["mysql"].remove_unit_state(mu1) self.mock_environment() self.mocker.replay() yield status.status(self.environment, [], status.render_yaml, self.output, None) output = serializer.load(self.output.getvalue()) self.assertNotIn(mu1.unit_name, output["services"]["mysql"]["units"]) self.assertIn(mu2.unit_name, output["services"]["mysql"]["units"]) juju-0.7.orig/juju/control/tests/test_terminate_machine.py0000644000000000000000000002254112135220114022301 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks from juju.control import main, terminate_machine from 
juju.control.tests.common import MachineControlToolTest from juju.errors import CannotTerminateMachine from juju.state.errors import MachineStateInUse, MachineStateNotFound from juju.state.environment import EnvironmentStateManager class ControlTerminateMachineTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(ControlTerminateMachineTest, self).setUp() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_terminate_machine_method(self): """Verify that underlying method works as expected.""" environment = self.config.get("firstenv") mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) wordpress_service_state = \ yield self.add_service_from_charm("wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() wordpress_machine_state = yield self.add_machine_state() yield wordpress_unit_state.assign_to_machine(wordpress_machine_state) yield wordpress_unit_state.unassign_from_machine() yield terminate_machine.terminate_machine( self.config, environment, False, logging.getLogger("juju.control.cli"), [2]) yield self.assert_machine_states([0, 1], [2]) @inlineCallbacks def test_terminate_machine_method_root(self): """Verify supporting method throws `CannotTerminateMachine`.""" environment = self.config.get("firstenv") mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) ex = yield self.assertFailure( terminate_machine.terminate_machine( self.config, environment, False, logging.getLogger("juju.control.cli"), [0]), CannotTerminateMachine) self.assertEqual( str(ex), "Cannot terminate machine 0: environment would be destroyed") @inlineCallbacks def test_terminate_machine_method_in_use(self): """Verify supporting method throws `MachineStateInUse`.""" environment = self.config.get("firstenv") mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) ex = yield self.assertFailure( terminate_machine.terminate_machine( self.config, environment, False, logging.getLogger("juju.control.cli"), [1]), MachineStateInUse) self.assertEqual(ex.machine_id, 1) yield self.assert_machine_states([0, 1], []) @inlineCallbacks def test_terminate_machine_method_unknown(self): """Verify supporting method throws `MachineStateInUse`.""" environment = self.config.get("firstenv") mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) ex = yield self.assertFailure( terminate_machine.terminate_machine( self.config, environment, False, logging.getLogger("juju.control.cli"), [42]), MachineStateNotFound) self.assertEqual(ex.machine_id, 42) yield self.assert_machine_states([0, 1], []) @inlineCallbacks def test_terminate_unused_machine(self): """Verify a typical allocation, unassignment, and then termination.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() 
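        # After replay() above, no further mocker expectations may be
        # recorded; the state-building calls below run for real against the
        # test ZooKeeper tree, and only the reactor and exit hooks are
        # mocked.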
wordpress_service_state = \ yield self.add_service_from_charm("wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() wordpress_machine_state = yield self.add_machine_state() yield wordpress_unit_state.assign_to_machine(wordpress_machine_state) riak_service_state = yield self.add_service_from_charm("riak") riak_unit_state = yield riak_service_state.add_unit_state() riak_machine_state = yield self.add_machine_state() yield riak_unit_state.assign_to_machine(riak_machine_state) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield wordpress_unit_state.unassign_from_machine() yield mysql_unit_state.unassign_from_machine() yield self.assert_machine_states([0, 1, 2, 3], []) # trash environment to check syncing yield self.client.delete("/environment") main(["terminate-machine", "1", "3"]) yield wait_on_reactor_stopped # check environment synced esm = EnvironmentStateManager(self.client) yield esm.get_config() self.assertIn( "Machines terminated: 1, 3", self.output.getvalue()) yield self.assert_machine_states([0, 2], [1, 3]) @inlineCallbacks def test_attempt_terminate_unknown_machine(self): """Try to terminate an unknown machine and get a not-found error in the log.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) # XXX should be 1, see bug #697093 self.mocker.replay() mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) main(["terminate-machine", "42", "1"]) yield wait_on_reactor_stopped self.assertIn("Machine 42 was not found", self.output.getvalue()) yield self.assert_machine_states([1], []) @inlineCallbacks def test_attempt_terminate_root_machine(self): """Try to terminate the root machine and get the corresponding error in the log.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) # XXX should be 1, see bug #697093 self.mocker.replay() mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) main(["terminate-machine", "0", "1"]) yield wait_on_reactor_stopped self.assertIn( "Cannot terminate machine 0: environment would be destroyed", self.output.getvalue()) yield self.assert_machine_states([0, 1], []) @inlineCallbacks def test_do_nothing(self): """Verify terminate-machine can take no args and then does nothing.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) main(["terminate-machine"]) yield wait_on_reactor_stopped yield self.assert_machine_states([0, 1], []) def test_wrong_arguments_provided_non_integer(self): """Test command rejects non-integer args.""" self.assertRaises( SystemExit, main, ["terminate-machine", "bar"]) self.assertIn( "juju terminate-machine: error: argument ID: " "invalid int value: 'bar'", self.stderr.getvalue()) @inlineCallbacks def test_invalid_environment(self): """Test command
with an environment that hasn't been set up.""" wait_on_reactor_stopped = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.add_machine_state() yield mysql_unit_state.assign_to_machine(mysql_machine_state) wordpress_service_state = \ yield self.add_service_from_charm("wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() wordpress_machine_state = yield self.add_machine_state() yield wordpress_unit_state.assign_to_machine(wordpress_machine_state) main(["terminate-machine", "--environment", "roman-candle", "1", "2"]) yield wait_on_reactor_stopped self.assertIn( "Invalid environment 'roman-candle'", self.output.getvalue()) yield self.assert_machine_states([0, 1, 2], []) juju-0.7.orig/juju/control/tests/test_unexpose.py0000644000000000000000000000656012135220114020476 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.control import main from juju.control.tests.common import ControlToolTest from juju.lib import serializer from juju.state.tests.test_service import ServiceStateManagerTestBase class UnexposeControlTest( ServiceStateManagerTestBase, ControlToolTest): @inlineCallbacks def setUp(self): yield super(UnexposeControlTest, self).setUp() config = { "environments": {"firstenv": {"type": "dummy"}}} self.write_config(serializer.dump(config)) self.config.load() self.service_state = yield self.add_service_from_charm("wordpress") self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_unexpose_service(self): """Test subcommand clears exposed flag for service.""" yield self.service_state.set_exposed_flag() exposed_flag = yield self.service_state.get_exposed_flag() self.assertTrue(exposed_flag) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["unexpose", "wordpress"]) yield finished exposed_flag = yield self.service_state.get_exposed_flag() self.assertFalse(exposed_flag) self.assertIn("Service 'wordpress' was unexposed.", self.output.getvalue()) @inlineCallbacks def test_unexpose_service_not_exposed(self): """Test subcommand keeps an unexposed service still unexposed.""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["unexpose", "wordpress"]) yield finished exposed_flag = yield self.service_state.get_exposed_flag() self.assertFalse(exposed_flag) self.assertIn("Service 'wordpress' was not exposed.", self.output.getvalue()) # various errors def test_unexpose_with_no_args(self): """Test subcommand takes at least one service argument.""" # in argparse, before reactor startup self.assertRaises(SystemExit, main, ["unexpose"]) self.assertIn( "juju unexpose: error: too few arguments", self.stderr.getvalue()) def test_unexpose_with_too_many_args(self): """Test subcommand takes at most one service argument.""" self.assertRaises( SystemExit, main, ["unexpose", "foo", "fum"]) self.assertIn( "juju: error: unrecognized arguments: fum", self.stderr.getvalue()) @inlineCallbacks def test_unexpose_unknown_service(self): """Test subcommand fails if service does not exist.""" finished = self.setup_cli_reactor() self.setup_exit(0) # XXX change when bug 697093 is fixed self.mocker.replay() main(["unexpose", "foobar"]) yield finished self.assertIn( "Service 'foobar' was not found", self.output.getvalue()) @inlineCallbacks def test_invalid_environment(self): """Test command 
with an environment that hasn't been set up.""" finished = self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() main(["unexpose", "--environment", "roman-candle", "wordpress"]) yield finished self.assertIn( "Invalid environment 'roman-candle'", self.output.getvalue()) juju-0.7.orig/juju/control/tests/test_upgrade_charm.py0000644000000000000000000004502212135220114021425 0ustar 00000000000000import json import os from twisted.internet.defer import inlineCallbacks, succeed from juju.charm.directory import CharmDirectory from juju.charm.repository import LocalCharmRepository, CS_STORE_URL from juju.charm.tests.test_metadata import test_repository_path from juju.charm.url import CharmURL from juju.control import main from juju.errors import FileNotFound from juju.environment.environment import Environment from juju.lib.mocker import ANY from juju.lib.serializer import dump from juju.unit.workflow import UnitWorkflowState from .common import MachineControlToolTest class CharmUpgradeTestBase(object): def add_charm( self, metadata, revision, repository_dir=None, bundle=False, config=None): """ Helper method to create a charm in the given repo. """ if repository_dir is None: repository_dir = self.makeDir() series_dir = os.path.join(repository_dir, "series") os.mkdir(series_dir) charm_dir = self.makeDir() with open(os.path.join(charm_dir, "metadata.yaml"), "w") as f: f.write(dump(metadata)) with open(os.path.join(charm_dir, "revision"), "w") as f: f.write(str(revision)) if config: with open(os.path.join(charm_dir, "config.yaml"), "w") as f: f.write(dump(config)) if bundle: CharmDirectory(charm_dir).make_archive( os.path.join(series_dir, "%s.charm" % metadata["name"])) else: os.rename(charm_dir, os.path.join(series_dir, metadata["name"])) return LocalCharmRepository(repository_dir) def increment_charm(self, charm): metadata = charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, charm.get_revision() + 1) return repository class ControlCharmUpgradeTest( MachineControlToolTest, CharmUpgradeTestBase): @inlineCallbacks def setUp(self): yield super(ControlCharmUpgradeTest, self).setUp() self.service_state1 = yield self.add_service_from_charm("mysql") self.service_unit1 = yield self.service_state1.add_unit_state() self.unit1_workflow = UnitWorkflowState( self.client, self.service_unit1, None, self.makeDir()) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("started") self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_charm_upgrade(self): """ 'juju charm-upgrade ' will schedule a charm for upgrade. 
""" repository = self.increment_charm(self.charm) mock_environment = self.mocker.patch(Environment) mock_environment.get_machine_provider() self.mocker.result(self.provider) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "--repository", repository.path, "mysql"]) yield finished # Verify the service has a new charm reference charm_id = yield self.service_state1.get_charm_id() self.assertEqual(charm_id, "local:series/mysql-2") # Verify the provider storage has been updated charm = yield repository.find(CharmURL.parse("local:series/mysql")) storage = self.provider.get_file_storage() try: yield storage.get( "local_3a_series_2f_mysql-2_3a_%s" % charm.get_sha256()) except FileNotFound: self.fail("New charm not uploaded") # Verify the upgrade flag on the service units. upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertTrue(upgrade_flag) @inlineCallbacks def test_missing_repository(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "mysql"]) yield finished self.assertIn("No repository specified", self.output.getvalue()) @inlineCallbacks def test_repository_from_environ(self): repository = self.increment_charm(self.charm) self.change_environment(JUJU_REPOSITORY=repository.path) finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "mysql"]) yield finished self.assertNotIn("No repository specified", self.output.getvalue()) @inlineCallbacks def test_upgrade_charm_with_unupgradeable_units(self): """If there are units that won't be upgraded, they will be reported, other units will be upgraded. """ repository = self.increment_charm(self.charm) service_unit2 = yield self.service_state1.add_unit_state() finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "--repository", repository.path, "mysql"]) yield finished # Verify report of unupgradeable units self.assertIn( ("Unit 'mysql/1' is not in a running state " "(state: 'uninitialized'), won't upgrade"), self.output.getvalue()) # Verify flags only set on upgradeable unit. value = (yield service_unit2.get_upgrade_flag()) self.assertFalse(value) value = (yield self.service_unit1.get_upgrade_flag()) self.assertTrue(value) @inlineCallbacks def test_force_charm_upgrade(self): repository = self.increment_charm(self.charm) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("start_error") finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "--repository", repository.path, "--force", "mysql"]) yield finished value = (yield self.service_unit1.get_upgrade_flag()) self.assertEqual(value, {'force': True}) @inlineCallbacks def test_upgrade_charm_unknown_service(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "--repository", self.makeDir(), "volcano"]) yield finished self.assertIn( "Service 'volcano' was not found", self.output.getvalue()) @inlineCallbacks def test_upgrade_charm_unknown_charm(self): """If a charm is not found in the repository, an error is given. 
""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() repository_dir = self.makeDir() os.mkdir(os.path.join(repository_dir, "series")) main(["upgrade-charm", "--repository", repository_dir, "mysql"]) yield finished self.assertIn( "Charm 'local:series/mysql' not found in repository", self.output.getvalue()) @inlineCallbacks def test_upgrade_charm_unknown_charm_dryrun(self): """If a charm is not found in the repository, an error is given. """ finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() repository_dir = self.makeDir() os.mkdir(os.path.join(repository_dir, "series")) main(["upgrade-charm", "--repository", repository_dir, "mysql", "--dry-run"]) yield finished self.assertIn( "Charm 'local:series/mysql' not found in repository", self.output.getvalue()) @inlineCallbacks def test_upgrade_charm_dryrun_reports_unupgradeable_units(self): """If there are units that won't be upgraded, dry-run will report them. """ repository = self.increment_charm(self.charm) service_unit2 = yield self.service_state1.add_unit_state() finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() main(["upgrade-charm", "-n", "--repository", repository.path, "mysql"]) yield finished # Verify dry run self.assertIn( "Service would be upgraded from charm", self.output.getvalue()) # Verify report of unupgradeable units self.assertIn( ("Unit 'mysql/1' is not in a running state " "(state: 'uninitialized'), won't upgrade"), self.output.getvalue()) # Verify no flags have been set. value = (yield service_unit2.get_upgrade_flag()) self.assertFalse(value) value = (yield self.service_unit1.get_upgrade_flag()) self.assertFalse(value) @inlineCallbacks def test_apply_new_charm_defaults(self): finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() # Add a charm and its service. 
metadata = {"name": "haiku", "summary": "its short", "description": "but with cadence"} repository = self.add_charm( metadata=metadata, revision=1, config={ "options": { "foo": {"type": "string", "default": "foo-default", "description": "Foo"}, "bar": {"type": "string", "default": "bar-default", "description": "Bar"}, } }) charm_dir = yield repository.find(CharmURL.parse("local:series/haiku")) service_state = yield self.add_service_from_charm( "haiku", charm_dir=charm_dir) # Update a config value config = yield service_state.get_config() config["foo"] = "abc" yield config.write() # Upgrade the charm repository = self.add_charm( metadata=metadata, revision=2, config={ "options": { "foo": {"type": "string", "default": "foo-default", "description": "Foo"}, "bar": {"type": "string", "default": "bar-default", "description": "Bar"}, "dca": {"type": "string", "default": "default-dca", "description": "Airport"}, } }) main(["upgrade-charm", "--repository", repository.path, "haiku"]) yield finished config = yield service_state.get_config() self.assertEqual( config, {"foo": "abc", "dca": "default-dca", "bar": "bar-default"}) @inlineCallbacks def test_latest_local_dry_run(self): """Do nothing; log that local charm would be re-revisioned and used""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 1) main(["upgrade-charm", "--dry-run", "--repository", repository.path, "mysql"]) yield finished charm_path = os.path.join(repository.path, "series", "mysql") self.assertIn( "%s would be set to revision 2" % charm_path, self.output.getvalue()) self.assertIn( "Service would be upgraded from charm 'local:series/mysql-1' to " "'local:series/mysql-2'", self.output.getvalue()) with open(os.path.join(charm_path, "revision")) as f: self.assertEquals(f.read(), "1") upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) @inlineCallbacks def test_latest_local_live_fire(self): """Local charm should be re-revisioned and used; log that it was""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 1) main(["upgrade-charm", "--repository", repository.path, "mysql"]) yield finished charm_path = os.path.join(repository.path, "series", "mysql") self.assertIn( "Setting %s to revision 2" % charm_path, self.output.getvalue()) with open(os.path.join(charm_path, "revision")) as f: self.assertEquals(f.read(), "2\n") upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertTrue(upgrade_flag) @inlineCallbacks def test_latest_local_leapfrog_dry_run(self): """Do nothing; log that local charm would be re-revisioned and used""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 0) main(["upgrade-charm", "--dry-run", "--repository", repository.path, "mysql"]) yield finished charm_path = os.path.join(repository.path, "series", "mysql") self.assertIn( "%s would be set to revision 2" % charm_path, self.output.getvalue()) self.assertIn( "Service would be upgraded from charm 'local:series/mysql-1' to " "'local:series/mysql-2'", self.output.getvalue()) with open(os.path.join(charm_path, "revision")) as f: self.assertEquals(f.read(), "0") upgrade_flag = yield 
self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) @inlineCallbacks def test_latest_local_leapfrog_live_fire(self): """Local charm should be re-revisioned and used; log that it was""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 0) main(["upgrade-charm", "--repository", repository.path, "mysql"]) yield finished charm_path = os.path.join(repository.path, "series", "mysql") self.assertIn( "Setting %s to revision 2" % charm_path, self.output.getvalue()) with open(os.path.join(charm_path, "revision")) as f: self.assertEquals(f.read(), "2\n") upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertTrue(upgrade_flag) @inlineCallbacks def test_latest_local_bundle_dry_run(self): """Do nothing; log that nothing would be done""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 1, bundle=True) main(["upgrade-charm", "--dry-run", "--repository", repository.path, "mysql"]) yield finished self.assertIn( "Service already running latest charm", self.output.getvalue()) upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) @inlineCallbacks def test_latest_local_bundle_live_fire(self): """Do nothing; log that nothing was done""" finished = self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() metadata = self.charm.metadata.get_serialization_data() metadata["name"] = "mysql" repository = self.add_charm(metadata, 1, bundle=True) main(["upgrade-charm", "--repository", repository.path, "mysql"]) yield finished self.assertIn( "Charm 'local:series/mysql-1' is the latest revision known", self.output.getvalue()) upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) class RemoteUpgradeCharmTest(MachineControlToolTest): @inlineCallbacks def setUp(self): yield super(RemoteUpgradeCharmTest, self).setUp() charm = CharmDirectory(os.path.join( test_repository_path, "series", "mysql")) self.charm_state_manager.add_charm_state( "cs:series/mysql-1", charm, "") self.service_state1 = yield self.add_service_from_charm( "mysql", "cs:series/mysql-1") self.service_unit1 = yield self.service_state1.add_unit_state() self.unit1_workflow = UnitWorkflowState( self.client, self.service_unit1, None, self.makeDir()) with (yield self.unit1_workflow.lock()): yield self.unit1_workflow.set_state("started") self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() self.output = self.capture_logging() self.stderr = self.capture_stream("stderr") @inlineCallbacks def test_latest_dry_run(self): """Do nothing; log that nothing would be done""" finished = self.setup_cli_reactor() self.setup_exit(0) getPage = self.mocker.replace("twisted.web.client.getPage") getPage( CS_STORE_URL + "/charm-info?charms=cs%3Aseries/mysql", contextFactory=ANY) self.mocker.result(succeed(json.dumps( {"cs:series/mysql": {"revision": 1, "sha256": "whatever"}}))) self.mocker.replay() main(["upgrade-charm", "--dry-run", "mysql"]) yield finished self.assertIn( "Service already running latest charm", self.output.getvalue()) upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) @inlineCallbacks def test_latest_live_fire(self): """Do nothing; log that nothing was done""" finished = 
self.setup_cli_reactor() self.setup_exit(0) getPage = self.mocker.replace("twisted.web.client.getPage") getPage(CS_STORE_URL + "/charm-info?charms=cs%3Aseries/mysql", contextFactory=ANY) self.mocker.result(succeed(json.dumps( {"cs:series/mysql": {"revision": 1, "sha256": "whatever"}}))) self.mocker.replay() main(["upgrade-charm", "mysql"]) yield finished self.assertIn( "Charm 'cs:series/mysql-1' is the latest revision known", self.output.getvalue()) upgrade_flag = yield self.service_unit1.get_upgrade_flag() self.assertFalse(upgrade_flag) juju-0.7.orig/juju/control/tests/test_utils.py0000644000000000000000000001715412135220114017771 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks, returnValue from juju.environment.tests.test_config import EnvironmentsConfigTestBase from juju.control.tests.common import ControlToolTest from juju.control.utils import ( get_environment, get_ip_address_for_machine, get_ip_address_for_unit, expand_path, parse_passthrough_args, ParseError) from juju.environment.config import EnvironmentsConfig from juju.environment.errors import EnvironmentsConfigError from juju.lib.serializer import yaml_dump as dump from juju.lib.testing import TestCase from juju.state.errors import ServiceUnitStateMachineNotAssigned from juju.state.tests.test_service import ServiceStateManagerTestBase class FakeOptions(object): pass class LookupTest(ServiceStateManagerTestBase, EnvironmentsConfigTestBase): @inlineCallbacks def setUp(self): yield super(LookupTest, self).setUp() self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() @inlineCallbacks def start_machine(self, dns_name): machine_state = yield self.add_machine_state() provider_machine, = yield self.provider.start_machine( {"machine-id": machine_state.id, "dns-name": dns_name}) yield machine_state.set_instance_id(provider_machine.instance_id) returnValue(machine_state) @inlineCallbacks def test_get_ip_address_for_machine(self): """Verify can retrieve dns name, machine state with machine id.""" machine_state = yield self.add_machine_state() provider_machine, = yield self.provider.start_machine( {"machine-id": machine_state.id, "dns-name": "steamcloud-1.com"}) yield machine_state.set_instance_id(provider_machine.instance_id) dns_name, lookedup_machine_state = yield get_ip_address_for_machine( self.client, self.provider, machine_state.id) self.assertEqual(dns_name, "steamcloud-1.com") self.assertEqual(lookedup_machine_state.id, machine_state.id) @inlineCallbacks def test_get_ip_address_for_unit(self): """Verify can retrieve dns name, unit state with unit name.""" service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() machine_state = yield self.start_machine("steamcloud-1.com") yield unit_state.assign_to_machine(machine_state) yield unit_state.set_public_address("steamcloud-1.com") dns_name, lookedup_unit_state = yield get_ip_address_for_unit( self.client, self.provider, "wordpress/0") self.assertEqual(dns_name, "steamcloud-1.com") self.assertEqual(lookedup_unit_state.unit_name, "wordpress/0") @inlineCallbacks def test_get_ip_address_for_unit_with_unassigned_machine(self): """Service unit exists, but it doesn't have an assigned machine.""" service_state = yield self.add_service("wordpress") yield service_state.add_unit_state() e = yield self.assertFailure( get_ip_address_for_unit(self.client, self.provider, "wordpress/0"), ServiceUnitStateMachineNotAssigned) self.assertEqual( str(e), "Service unit 'wordpress/0' is 
not assigned to a machine") class PathExpandTest(TestCase): def test_expand_path(self): self.assertEqual( os.path.abspath("."), expand_path(".")) self.assertEqual( os.path.expanduser("~/foobar"), expand_path("~/foobar")) class GetEnvironmentTest(ControlToolTest): def test_get_environment_from_environment(self): self.change_environment(JUJU_ENV="secondenv") config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) env_config = EnvironmentsConfig() env_config.load_or_write_sample() options = FakeOptions() options.environment = None options.environments = env_config environment = get_environment(options) self.assertEqual(environment.name, "secondenv") def test_get_environment(self): config = { "environments": {"firstenv": {"type": "dummy"}}} self.write_config(dump(config)) env_config = EnvironmentsConfig() env_config.load_or_write_sample() options = FakeOptions() options.environment = None options.environments = env_config environment = get_environment(options) self.assertEqual(environment.name, "firstenv") def test_get_environment_default_with_multiple(self): config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) env_config = EnvironmentsConfig() env_config.load_or_write_sample() options = FakeOptions() options.environment = None options.environments = env_config error = self.assertRaises( EnvironmentsConfigError, get_environment, options) self.assertIn( "There are multiple environments and no explicit default", str(error)) def test_get_nonexistant_environment(self): config = { "environments": {"firstenv": {"type": "dummy"}, "secondenv": {"type": "dummy"}}} self.write_config(dump(config)) env_config = EnvironmentsConfig() env_config.load_or_write_sample() options = FakeOptions() options.environment = "volcano" options.environments = env_config error = self.assertRaises( EnvironmentsConfigError, get_environment, options) self.assertIn("Invalid environment 'volcano'", str(error)) class ParsePassthroughArgsTest(ControlToolTest): def test_parse_typical_ssh(self): """Verify that flags and positional args are properly partitioned.""" ssh_flags = "bcDeFIiLlmOopRSWw" self.assertEqual( parse_passthrough_args( ["-L8080:localhost:80", "-o", "Volume 11", "mysql/0", "ls a*"], ssh_flags), (["-L8080:localhost:80", "-o", "Volume 11"], ["mysql/0", "ls a*"])) self.assertEqual( parse_passthrough_args( ["-L8080:localhost:80", "-aC26", "0", "foobar", "do", "123"], ssh_flags), (["-L8080:localhost:80", "-aC26"], ["0", "foobar", "do", "123"])) self.assertEqual( parse_passthrough_args(["mysql/0"], ssh_flags), ([], ["mysql/0"])) self.assertEqual( parse_passthrough_args( ["mysql/0", "command", "-L8080:localhost:80"], ssh_flags), ([], ["mysql/0", "command", "-L8080:localhost:80"])) def test_parse_flag_taking_args(self): """Verify that arg-taking flags properly combine with args""" # some sample flags, from the ssh command ssh_flags = "bcDeFIiLlmOopRSWw" for flag in ssh_flags: # This flag properly combines, either of the form -Xabc or -X abc self.assertEqual( parse_passthrough_args( ["-" + flag + "XYZ", "-1X", "-" + flag, "XYZ", "mysql/0"], ssh_flags), (["-" + flag + "XYZ", "-1X", "-" + flag, "XYZ"], ["mysql/0"])) # And requires that it is combined e = self.assertRaises( ParseError, parse_passthrough_args, ["-" + flag], ssh_flags) self.assertEqual(str(e), "argument -%s: expected one argument" % flag) 
juju-0.7.orig/juju/environment/__init__.py0000644000000000000000000000000012135220114017031 0ustar 00000000000000juju-0.7.orig/juju/environment/config.py0000644000000000000000000002463112135220114016557 0ustar 00000000000000import os import uuid import yaml from juju.environment.environment import Environment from juju.environment.errors import EnvironmentsConfigError from juju.errors import FileAlreadyExists, FileNotFound from juju.lib import serializer from juju.lib.schema import ( Constant, Dict, Int, KeyDict, OAuthString, OneOf, SchemaError, SelectDict, String, Bool) DEFAULT_CONFIG_PATH = "~/.juju/environments.yaml" SAMPLE_CONFIG = """\ environments: sample: type: ec2 control-bucket: %(control-bucket)s admin-secret: %(admin-secret)s default-series: precise ssl-hostname-verification: true """ _EITHER_PLACEMENT = OneOf(Constant("unassigned"), Constant("local")) # See juju.providers.openstack.credentials for definition and more details _OPENSTACK_AUTH_MODE = OneOf( Constant("userpass"), Constant("keypair"), Constant("legacy"), Constant("rax"), ) SCHEMA = KeyDict({ "default": String(), "environments": Dict(String(), SelectDict("type", { "ec2": KeyDict({ "control-bucket": String(), "admin-secret": String(), "access-key": String(), "secret-key": String(), "region": OneOf( Constant("us-east-1"), Constant("us-west-1"), Constant("us-west-2"), Constant("eu-west-1"), Constant("sa-east-1"), Constant("ap-northeast-1"), Constant("ap-southeast-1")), "ec2-uri": String(), "s3-uri": String(), "ssl-hostname-verification": OneOf( Constant(True), Constant(False)), "placement": _EITHER_PLACEMENT, "default-series": String()}, optional=[ "access-key", "secret-key", "region", "ec2-uri", "s3-uri", "placement", "ssl-hostname-verification"]), "openstack": KeyDict({ "control-bucket": String(), "admin-secret": String(), "access-key": String(), "secret-key": String(), "default-instance-type": String(), "default-image-id": OneOf(String(), Int()), "auth-url": String(), "project-name": String(), "use-floating-ip": Bool(), "auth-mode": _OPENSTACK_AUTH_MODE, "region": String(), "default-series": String(), "ssl-hostname-verification": Bool(), }, optional=[ "access-key", "secret-key", "auth-url", "project-name", "auth-mode", "region", "use-floating-ip", "ssl-hostname-verification", "default-instance-type"]), "openstack_s3": KeyDict({ "control-bucket": String(), "admin-secret": String(), "access-key": String(), "secret-key": String(), "default-instance-type": String(), "default-image-id": OneOf(String(), Int()), "auth-url": String(), "combined-key": String(), "s3-uri": String(), "use-floating-ip": Bool(), "auth-mode": _OPENSTACK_AUTH_MODE, "region": String(), "default-series": String(), "ssl-hostname-verification": Bool(), }, optional=[ "access-key", "secret-key", "combined-key", "auth-url", "s3-uri", "project-name", "auth-mode", "region", "use-floating-ip", "ssl-hostname-verification", "default-instance-type"]), "maas": KeyDict({ "maas-server": String(), "maas-oauth": OAuthString(), "admin-secret": String(), "placement": _EITHER_PLACEMENT, # MAAS currently only provisions precise; any other default-series # would just lead to errors down the line. "default-series": String()}, optional=["placement"]), "local": KeyDict({ "admin-secret": String(), "data-dir": String(), "placement": Constant("local"), "default-series": String()}, optional=["placement"]), "dummy": KeyDict({})}))}, optional=["default"]) class EnvironmentsConfig(object): """An environment configuration, with one or more environments. 
""" def __init__(self): self._config = None self._loaded_path = None def get_default_path(self): """Return the default environment configuration file path.""" return os.path.expanduser(DEFAULT_CONFIG_PATH) def _get_path(self, path): if path is None: return self.get_default_path() return path def load(self, path=None): """Load an enviornment configuration file. @param path: An optional environment configuration file path. Defaults to ~/.juju/environments.yaml This method will call the C{parse()} method with the content of the loaded file. """ path = self._get_path(path) if not os.path.isfile(path): raise FileNotFound(path) with open(path) as file: self.parse(file.read(), path) def parse(self, content, path=None): """Parse an enviornment configuration. @param content: The content to parse. @param path: An optional environment configuration file path, used when raising errors. @raise EnvironmentsConfigError: On other issues. """ if not isinstance(content, basestring): self._fail("Configuration must be a string", path, repr(content)) try: config = serializer.yaml_load(content) except yaml.YAMLError, error: self._fail(error, path=path, content=content) if not isinstance(config, dict): self._fail("Configuration must be a dictionary", path, content) try: config = SCHEMA.coerce(config, []) except SchemaError, error: self._fail(error, path=path) self._config = config self._loaded_path = path def _fail(self, error, path, content=None): if path is None: path_info = "" else: path_info = " %s:" % (path,) error = str(error) if content: error += ":\n%s" % content raise EnvironmentsConfigError( "Environments configuration error:%s %s" % (path_info, error)) def get_names(self): """Return the names of environments available in the configuration.""" return sorted(self._config["environments"].iterkeys()) def get(self, name): """Retrieve the Environment with the given name. @return: The Environment, or None if one isn't found. """ environment_config = self._config["environments"].get(name) if environment_config is not None: return Environment(name, environment_config) return None def get_default(self): """Get the default environment for this configuration. The default environment is either the single defined environment in the configuration, or the one explicitly named through the "default:" option in the outermost scope. @raise EnvironmentsConfigError: If it can't determine a default environment. """ environments_config = self._config.get("environments") if len(environments_config) == 1: return self.get(environments_config.keys()[0]) default = self._config.get("default") if default: if default not in environments_config: raise EnvironmentsConfigError( "Default environment '%s' was not found: %s" % (default, self._loaded_path)) return self.get(default) raise EnvironmentsConfigError("There are multiple environments and no " "explicit default (set one explicitly?): " "%s" % self._loaded_path) def write_sample(self, path=None): """Write down a sample configuration file. @param path: An optional environment configuration file path. 
Defaults to ~/.juju/environments.yaml """ path = self._get_path(path) dirname = os.path.dirname(path) if os.path.exists(path): raise FileAlreadyExists(path) if not os.path.exists(dirname): os.mkdir(dirname, 0700) defaults = { "control-bucket": "juju-%s" % (uuid.uuid4().hex), "admin-secret": "%s" % (uuid.uuid4().hex), "default-series": "precise" } with open(path, "w") as file: file.write(SAMPLE_CONFIG % defaults) os.chmod(path, 0600) def load_or_write_sample(self): """Try to load the configuration, and if it doesn't work dump a sample. This method will try to load the environment configuration from the default location, and if it doesn't work, it will write down a sample configuration there. This is handy for a default initialization. """ try: self.load() except FileNotFound: self.write_sample() raise EnvironmentsConfigError("No environments configured. Please " "edit: %s" % self.get_default_path(), sample_written=True) def serialize(self, name=None): """Serialize the environments configuration. Optionally an environment name can be specified and only that environment will be serialized. Serialization dispatches to the individual environments as they may serialize information not contained within the original config file. """ if not name: names = self.get_names() else: names = [name] config = self._config.copy() config["environments"] = {} for name in names: environment = self.get(name) if environment is None: raise EnvironmentsConfigError( "Invalid environment %r" % name) data = environment.get_serialization_data() # all environment data should be contained # in a nested dict under the environment name. assert data.keys() == [name] config["environments"].update(data) return serializer.dump(config) juju-0.7.orig/juju/environment/environment.py0000644000000000000000000000354712135220114017661 0ustar 00000000000000from juju.lib.loader import get_callable class Environment(object): """An environment on which machines can be run.""" def __init__(self, name, environment_config): self._name = name self._environment_config = environment_config self._machine_provider = None @property def name(self): """The name of this environment.""" return self._name def get_serialization_data(self): provider = self.get_machine_provider() data = {self.name: provider.get_serialization_data()} return data @property def type(self): """The type of the environment.""" return self._environment_config["type"] def get_machine_provider(self): """Return a MachineProvider instance for the given provider name. The returned instance will be retrieved from the module named after the type of the given machine provider. """ if self._machine_provider is None: provider_type = self._environment_config["type"] MachineProvider = get_callable( "juju.providers.%s.MachineProvider" % provider_type) self._machine_provider = MachineProvider( self._name, self._environment_config) return self._machine_provider @property def placement(self): """The name of the default placement policy.
If the environment doesn't have a default unit placement policy None is returned """ return self._environment_config.get("placement") @property def default_series(self): """The Ubuntu series to run on machines in this environment.""" return self._environment_config.get("default-series") @property def origin(self): """Returns the origin of the code.""" return self._environment_config.get("juju-origin", "distro") juju-0.7.orig/juju/environment/errors.py0000644000000000000000000000047112135220114016622 0ustar 00000000000000from juju.errors import JujuError class EnvironmentsConfigError(JujuError): """Raised when the environment configuration file has problems.""" def __init__(self, message, sample_written=False): super(EnvironmentsConfigError, self).__init__(message) self.sample_written = sample_written juju-0.7.orig/juju/environment/tests/0000755000000000000000000000000012135220114016074 5ustar 00000000000000juju-0.7.orig/juju/environment/tests/__init__.py0000644000000000000000000000000012135220114020173 0ustar 00000000000000juju-0.7.orig/juju/environment/tests/data/0000755000000000000000000000000012135220114017005 5ustar 00000000000000juju-0.7.orig/juju/environment/tests/test_config.py0000644000000000000000000007025312135220114020761 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks from juju.environment import environment from juju.environment.config import EnvironmentsConfig, SAMPLE_CONFIG from juju.environment.environment import Environment from juju.environment.errors import EnvironmentsConfigError from juju.errors import FileNotFound, FileAlreadyExists from juju.lib import serializer from juju.state.environment import EnvironmentStateManager from juju.lib.testing import TestCase DATA_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "data")) SAMPLE_ENV = """ environments: myfirstenv: type: dummy foo: bar mysecondenv: type: dummy nofoo: 1 """ SAMPLE_MAAS = """ environments: sample: type: maas maas-server: somewhe.re maas-oauth: foo:bar:baz admin-secret: garden default-series: precise """ SAMPLE_LOCAL = """ ensemble: environments environments: sample: type: local admin-secret: sekret default-series: oneiric """ SAMPLE_OPENSTACK = """ environments: sample: type: openstack admin-secret: sekret control-bucket: container default-image-id: 42 default-series: precise """ class EnvironmentsConfigTestBase(TestCase): @inlineCallbacks def setUp(self): yield super(EnvironmentsConfigTestBase, self).setUp() release_path = os.path.join(DATA_DIR, "lsb-release") self.patch(environment, "LSB_RELEASE_PATH", release_path) self.old_home = os.environ.get("HOME") self.tmp_home = self.makeDir() self.change_environment(HOME=self.tmp_home, PATH=os.environ["PATH"]) self.default_path = os.path.join(self.tmp_home, ".juju/environments.yaml") self.other_path = os.path.join(self.tmp_home, ".juju/other-environments.yaml") self.config = EnvironmentsConfig() def write_config(self, config_text, other_path=False): if other_path: path = self.other_path else: path = self.default_path parent_name = os.path.dirname(path) if not os.path.exists(parent_name): os.makedirs(parent_name) with open(path, "w") as file: file.write(config_text) # The following methods expect to be called *after* a subclass has set # self.client. 
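# Illustrative sketch (not from the archive): a subclass whose setUp has
# already connected self.client can seed a minimal environment state with
#
#     yield self.push_default_config()
#
# which writes a one-environment dummy config, loads it, and pushes it
# (plus default constraints) through the EnvironmentStateManager.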
def push_config(self, name, config): self.write_config(serializer.yaml_dump(config)) self.config.load() esm = EnvironmentStateManager(self.client) return esm.set_config_state(self.config, name) @inlineCallbacks def push_env_constraints(self, *constraint_strs): esm = EnvironmentStateManager(self.client) constraint_set = yield esm.get_constraint_set() yield esm.set_constraints(constraint_set.parse(constraint_strs)) @inlineCallbacks def push_default_config(self, with_constraints=True): config = { "environments": {"firstenv": { "type": "dummy", "storage-directory": self.makeDir()}}} yield self.push_config("firstenv", config) if with_constraints: yield self.push_env_constraints() class EnvironmentsConfigTest(EnvironmentsConfigTestBase): def test_get_default_path(self): self.assertEquals(self.config.get_default_path(), self.default_path) def compare_config(self, config1, sample_config2): config1 = serializer.yaml_load(config1) config2 = serializer.yaml_load( sample_config2 % config1["environments"]["sample"]) self.assertEqual(config1, config2) def setup_ec2_credentials(self): self.change_environment( AWS_ACCESS_KEY_ID="foobar", AWS_SECRET_ACCESS_KEY="secrat") def test_load_with_nonexistent_default_path(self): """ Raise an error if load() is called without a path and the default doesn't exist. """ try: self.config.load() except FileNotFound, error: self.assertEquals(error.path, self.default_path) else: self.fail("FileNotFound not raised") def test_load_with_nonexistent_custom_path(self): """ Raise an error if load() is called with non-existing path. """ path = "/non/existent/custom/path" try: self.config.load(path) except FileNotFound, error: self.assertEquals(error.path, path) else: self.fail("FileNotFound not raised") def test_write_sample_environment_default_path(self): """ write_sample() should write a pre-defined sample configuration file. """ self.config.write_sample() self.assertTrue(os.path.isfile(self.default_path)) with open(self.default_path) as file: self.compare_config(file.read(), SAMPLE_CONFIG) dir_path = os.path.dirname(self.default_path) dir_stat = os.stat(dir_path) self.assertEqual(dir_stat.st_mode & 0777, 0700) stat = os.stat(self.default_path) self.assertEqual(stat.st_mode & 0777, 0600) def test_write_sample_contains_secret_key_and_control_bucket(self): """ write_sample() should write a pre-defined sample with an ec2 machine provider type, a unique s3 control bucket, and an admin secret key. """ uuid_factory = self.mocker.replace("uuid.uuid4") uuid_factory().hex self.mocker.result("abc") uuid_factory().hex self.mocker.result("xyz") self.mocker.replay() self.config.write_sample() self.assertTrue(os.path.isfile(self.default_path)) with open(self.default_path) as file: config = serializer.yaml_load(file.read()) self.assertEqual( config["environments"]["sample"]["type"], "ec2") self.assertEqual( config["environments"]["sample"]["control-bucket"], "juju-abc") self.assertEqual( config["environments"]["sample"]["admin-secret"], "xyz") def test_write_sample_environment_with_default_path_and_existing_dir(self): """ write_sample() should not fail if the config directory already exists. """ os.mkdir(os.path.dirname(self.default_path)) self.config.write_sample() self.assertTrue(os.path.isfile(self.default_path)) with open(self.default_path) as file: self.compare_config(file.read(), SAMPLE_CONFIG) def test_write_sample_environment_with_custom_path(self): """ write_sample() may receive an argument with a custom path. 
""" path = os.path.join(self.tmp_home, "sample-file") self.config.write_sample(path) self.assertTrue(os.path.isfile(path)) with open(path) as file: self.compare_config(file.read(), SAMPLE_CONFIG) def test_write_sample_wont_overwrite_existing_configuration(self): """ write_sample() must never overwrite an existing file. """ path = self.other_path os.makedirs(os.path.dirname(path)) with open(path, "w") as file: file.write("previous content") try: self.config.write_sample(path) except FileAlreadyExists, error: self.assertEquals(error.path, path) else: self.fail("FileAlreadyExists not raised") def test_load_empty_environments(self): """ load() must raise an error if there are no enviroments defined in the configuration file. """ # Use a different path to ensure the error message is right. self.write_config(""" environments: """, other_path=True) try: self.config.load(self.other_path) except EnvironmentsConfigError, error: self.assertEquals(str(error), "Environments configuration error: %s: " "environments: expected dict, got None" % self.other_path) else: self.fail("EnvironmentsConfigError not raised") def test_load_environments_with_wrong_type(self): """ load() must raise an error if the "environments:" option in the YAML configuration file doesn't have a mapping underneath it. """ # Use a different path to ensure the error message is right. self.write_config(""" environments: - list """, other_path=True) try: self.config.load(self.other_path) except EnvironmentsConfigError, error: self.assertEquals(str(error), "Environments configuration error: %s: " "environments: expected dict, got ['list']" % self.other_path) else: self.fail("EnvironmentsConfigError not raised") def test_wb_parse(self): """ We'll have an exception, and use mocker here to test the implementation itself, because we don't want to repeat the same tests for both parse() and load(), so we'll just verify that one calls the other internally. """ mock = self.mocker.patch(self.config) mock.parse(SAMPLE_ENV, self.other_path) self.write_config(SAMPLE_ENV, other_path=True) self.mocker.replay() self.config.load(self.other_path) def test_parse_errors_without_filename(self): """ parse() may receive None as the file path, in which case the error should not mention it. """ # Use a different path to ensure the error message is right. try: self.config.parse(""" environments: """) except EnvironmentsConfigError, error: self.assertEquals(str(error), "Environments configuration error: " "environments: expected dict, got None") else: self.fail("EnvironmentsConfigError not raised") def test_get_environment_names(self): """ get_names() should return of the environments names contained in the configuration file. """ self.write_config(SAMPLE_ENV) self.config.load() self.assertEquals(self.config.get_names(), ["myfirstenv", "mysecondenv"]) def test_get_non_existing_environment(self): """ Trying to get() a non-existing configuration name should return None. """ self.config.parse(SAMPLE_ENV) self.assertEquals(self.config.get("non-existing"), None) def test_load_and_get_environment(self): """ get() should return an Environment instance. """ self.write_config(SAMPLE_ENV) self.config.load() self.assertEquals(type(self.config.get("myfirstenv")), Environment) def test_load_or_write_sample_with_non_existent_config(self): """ When an environment configuration does not exist, the method load_or_write_sample() must write down a sample configuration file, and raise an error to let the user know his request did not work, and he should edit this file. 
""" try: self.config.load_or_write_sample() except EnvironmentsConfigError, error: self.assertEquals(str(error), "No environments configured. Please edit: %s" % self.default_path) self.assertEquals(error.sample_written, True) with open(self.default_path) as file: self.compare_config(file.read(), SAMPLE_CONFIG) else: self.fail("EnvironmentsConfigError not raised") def test_environment_config_error_sample_written_defaults_to_false(self): """ The error raised by load_or_write_sample() has a flag to let the calling site know if a sample file was actually written down or not. It must default to false, naturally. """ error = EnvironmentsConfigError("message") self.assertFalse(error.sample_written) def test_load_or_write_sample_will_load(self): """ load_or_write_sample() must load the configuration file if it exists. """ self.write_config(SAMPLE_ENV) self.config.load_or_write_sample() self.assertTrue(self.config.get("myfirstenv")) def test_get_default_with_single_environment(self): """ get_default() must return the one defined environment, when it's indeed a single one. """ config = serializer.yaml_load(SAMPLE_ENV) del config["environments"]["mysecondenv"] self.write_config(serializer.yaml_dump(config)) self.config.load() env = self.config.get_default() self.assertEquals(env.name, "myfirstenv") def test_get_default_with_named_default(self): """ get_default() must otherwise return the environment named through the "default:" option. """ config = serializer.yaml_load(SAMPLE_ENV) config["default"] = "mysecondenv" self.write_config(serializer.yaml_dump(config)) self.config.load() env = self.config.get_default() self.assertEquals(env.name, "mysecondenv") def test_default_is_schema_protected(self): """ The schema should mention the "default:" option as a string. """ config = serializer.yaml_load(SAMPLE_ENV) config["default"] = 1 self.write_config(serializer.yaml_dump(config)) error = self.assertRaises(EnvironmentsConfigError, self.config.load) self.assertEquals( str(error), "Environments configuration error: %s: " "default: expected string, got 1" % self.default_path) def test_get_default_with_named_but_missing_default(self): """ get_default() must raise an error if the environment named through the "default:" option isn't found. """ config = serializer.yaml_load(SAMPLE_ENV) config["default"] = "non-existent" # Use a different path to ensure the error message is right. self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) try: self.config.get_default() except EnvironmentsConfigError, error: self.assertEquals(str(error), "Default environment 'non-existent' was not found: " + self.other_path) else: self.fail("EnvironmentsConfigError not raised") def test_get_default_without_computable_default(self): """ get_default() must raise an error if there are multiple defined environments and no explicit default was defined. """ # Use a different path to ensure the error message is right. self.write_config(SAMPLE_ENV, other_path=True) self.config.load(self.other_path) try: self.config.get_default() except EnvironmentsConfigError, error: self.assertEquals(str(error), "There are multiple environments and no explicit default " "(set one explicitly?): " + self.other_path) else: self.fail("EnvironmentsConfigError not raised") def test_ensure_provider_types_are_set(self): """ The schema should refuse to receive a configuration which contains a machine provider configuration without any type information. """ config = serializer.yaml_load(SAMPLE_ENV) # Delete the type. 
del config["environments"]["myfirstenv"]["type"] self.write_config(serializer.yaml_dump(config), other_path=True) try: self.config.load(self.other_path) except EnvironmentsConfigError, error: self.assertEquals(str(error), "Environments configuration error: %s: " "environments.myfirstenv.type: required value not found" % self.other_path) else: self.fail("EnvironmentsConfigError not raised") def test_serialize(self): """The config should be able to serialize itself.""" self.write_config(SAMPLE_ENV) self.config.load() config = self.config.serialize() serialized = serializer.yaml_load(SAMPLE_ENV) for d in serialized["environments"].values(): d["dynamicduck"] = "magic" self.assertEqual(serializer.yaml_load(config), serialized) def test_serialize_environment(self): """ The config serialization can take an environment name, in which case that environment is serialized in isolation into a valid config file that can be loaded. """ self.write_config(SAMPLE_ENV) self.config.load() data = serializer.yaml_load(SAMPLE_ENV) del data["environments"]["mysecondenv"] data["environments"]["myfirstenv"]["dynamicduck"] = "magic" self.assertEqual( serializer.yaml_load(self.config.serialize("myfirstenv")), data) def test_load_serialized_environment(self): """ Serialize an environment, and then load it again via an EnvironmentsConfig. """ self.write_config(SAMPLE_ENV) self.config.load() serialized = self.config.serialize("myfirstenv") config = EnvironmentsConfig() config.parse(serialized) self.assertTrue( isinstance(config.get("myfirstenv"), Environment)) self.assertFalse( isinstance(config.get("mysecondenv"), Environment)) def test_serialize_unknown_environment(self): """Serializing an unknown environment raises an error.""" self.write_config(SAMPLE_ENV) self.config.load() self.assertRaises( EnvironmentsConfigError, self.config.serialize, "zebra") def test_serialize_custom_variables_outside_environment(self): """Serializing captures custom variables out of the environment.""" data = serializer.yaml_load(SAMPLE_ENV) data["default"] = "myfirstenv" self.write_config(serializer.yaml_dump(data)) self.config.load() serialized = self.config.serialize() config = EnvironmentsConfig() config.parse(serialized) environment = config.get_default() self.assertEqual(environment.name, "myfirstenv") def test_invalid_configuration_data_raise_environment_config_error(self): self.write_config("ZEBRA") self.assertRaises(EnvironmentsConfigError, self.config.load) def test_nonstring_configuration_data_raise_environment_config_error(self): error = self.assertRaises( EnvironmentsConfigError, self.config.parse, None) self.assertIn( "Configuration must be a string:\nNone", str(error)) def test_yaml_load_error_raise_environment_config_error(self): self.write_config("\0") error = self.assertRaises(EnvironmentsConfigError, self.config.load) self.assertIn( "control characters are not allowed", str(error)) def test_ec2_verifies_region(self): # sample doesn't include credentials self.setup_ec2_credentials() self.config.write_sample() with open(self.default_path) as file: config = serializer.yaml_load(file.read()) config["environments"]["sample"]["region"] = "ap-southeast-2" self.write_config(serializer.yaml_dump(config), other_path=True) e = self.assertRaises(EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("expected 'us-east-1', got 'ap-southeast-2'", str(e)) with open(self.default_path) as file: config = serializer.yaml_load(file.read()) # Authorized keys are required for environment serialization. 
config["environments"]["sample"]["authorized-keys"] = "mickey" config["environments"]["sample"]["region"] = "ap-southeast-1" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) data = self.config.get_default().get_serialization_data() self.assertEqual(data["sample"]["region"], "ap-southeast-1") def assert_ec2_sample_config(self, delete_key): self.config.write_sample() with open(self.default_path) as file: config = serializer.yaml_load(file.read()) del config["environments"]["sample"][delete_key] self.write_config(serializer.yaml_dump(config), other_path=True) try: self.config.load(self.other_path) except EnvironmentsConfigError, error: self.assertEquals( str(error), "Environments configuration error: %s: " "environments.sample.%s: required value not found" % (self.other_path, delete_key)) else: self.fail("Did not properly require " + delete_key) def test_ec2_sample_config_without_admin_secret(self): self.assert_ec2_sample_config("admin-secret") def test_ec2_sample_config_without_default_series(self): self.assert_ec2_sample_config("default-series") def test_ec2_sample_config_without_control_buckets(self): self.assert_ec2_sample_config("control-bucket") def test_ec2_verifies_placement(self): # sample doesn't include credentials self.setup_ec2_credentials() self.config.write_sample() with open(self.default_path) as file: config = serializer.yaml_load(file.read()) config["environments"]["sample"]["placement"] = "random" self.write_config(serializer.yaml_dump(config), other_path=True) e = self.assertRaises(EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("expected 'unassigned', got 'random'", str(e)) with open(self.default_path) as file: config = serializer.yaml_load(file.read()) # Authorized keys are required for environment serialization. 
config["environments"]["sample"]["authorized-keys"] = "mickey" config["environments"]["sample"]["placement"] = "local" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) data = self.config.get_default().get_serialization_data() self.assertEqual(data["sample"]["placement"], "local") def test_ec2_respects_default_series(self): # sample doesn't include credentials self.setup_ec2_credentials() self.config.write_sample() with open(self.default_path) as f: config = serializer.yaml_load(f.read()) config["environments"]["sample"]["default-series"] = "astounding" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) provider = self.config.get_default().get_machine_provider() self.assertEqual(provider.config["default-series"], "astounding") def test_ec2_respects_ssl_hostname_verification(self): self.setup_ec2_credentials() self.config.write_sample() with open(self.default_path) as f: config = serializer.yaml_load(f.read()) config["environments"]["sample"]["ssl-hostname-verification"] = True self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) provider = self.config.get_default().get_machine_provider() self.assertEqual(provider.config["ssl-hostname-verification"], True) def test_maas_schema_requires(self): requires = "maas-server maas-oauth admin-secret default-series".split() for require in requires: config = serializer.yaml_load(SAMPLE_MAAS) del config["environments"]["sample"][require] self.write_config(serializer.yaml_dump(config), other_path=True) try: self.config.load(self.other_path) except EnvironmentsConfigError as error: self.assertEquals(str(error), "Environments configuration error: %s: " "environments.sample.%s: " "required value not found" % (self.other_path, require)) else: self.fail("Did not properly require %s when type == maas" % require) def test_maas_default_series(self): config = serializer.yaml_load(SAMPLE_MAAS) config["environments"]["sample"]["default-series"] = "magnificent" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) provider = self.config.get_default().get_machine_provider() self.assertEqual(provider.config["default-series"], "magnificent") def test_maas_verifies_placement(self): config = serializer.yaml_load(SAMPLE_MAAS) config["environments"]["sample"]["placement"] = "random" self.write_config(serializer.yaml_dump(config), other_path=True) e = self.assertRaises( EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("expected 'unassigned', got 'random'", str(e)) config["environments"]["sample"]["placement"] = "local" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) data = self.config.get_default().placement self.assertEqual(data, "local") def test_lxc_requires_data_dir(self): """lxc dev only supports local placement.""" config = serializer.yaml_load(SAMPLE_LOCAL) self.write_config(serializer.yaml_dump(config), other_path=True) error = self.assertRaises( EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("data-dir: required value not found", str(error)) def test_lxc_verifies_placement(self): """lxc dev only supports local placement.""" config = serializer.yaml_load(SAMPLE_LOCAL) config["environments"]["sample"]["placement"] = "unassigned" self.write_config(serializer.yaml_dump(config), other_path=True) error = self.assertRaises( EnvironmentsConfigError, self.config.load, self.other_path) 
self.assertIn("expected 'local', got 'unassigned'", str(error)) def test_openstack_requires_default_image_id(self): """A VM image must be supplied for openstack provider.""" config = serializer.yaml_load(SAMPLE_OPENSTACK) del config["environments"]["sample"]["default-image-id"] self.write_config(serializer.yaml_dump(config), other_path=True) error = self.assertRaises( EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("default-image-id: required value not found", str(error)) def test_openstack_ignores_placement(self): """The placement config is not verified for openstack provider.""" config = serializer.yaml_load(SAMPLE_OPENSTACK) config["environments"]["sample"]["placement"] = "whatever" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) def test_openstack_s3_requires_default_image_id(self): """A VM image must be supplied for openstack_s3 provider.""" config = serializer.yaml_load(SAMPLE_OPENSTACK) config["environments"]["sample"]["type"] = "openstack_s3" del config["environments"]["sample"]["default-image-id"] self.write_config(serializer.yaml_dump(config), other_path=True) error = self.assertRaises( EnvironmentsConfigError, self.config.load, self.other_path) self.assertIn("default-image-id: required value not found", str(error)) def test_openstack_s3_ignores_placement(self): """The placement config is not verified for openstack_s3 provider.""" config = serializer.yaml_load(SAMPLE_OPENSTACK) config["environments"]["sample"]["type"] = "openstack_s3" config["environments"]["sample"]["placement"] = "whatever" self.write_config(serializer.yaml_dump(config), other_path=True) self.config.load(self.other_path) juju-0.7.orig/juju/environment/tests/test_environment.py0000644000000000000000000000556512135220114022064 0ustar 00000000000000from juju.environment.tests.test_config import ( EnvironmentsConfigTestBase, SAMPLE_ENV) from juju.providers import dummy class EnvironmentTest(EnvironmentsConfigTestBase): def test_attributes(self): self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") self.assertEquals(env.name, "myfirstenv") self.assertEquals(env.type, "dummy") self.assertEquals(env.origin, "distro") def test_get_machine_provider(self): """ get_machine_provider() should return a MachineProvider instance imported from a module named after the "type:" provided in the machine provider configuration. """ self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") machine_provider = env.get_machine_provider() self.assertEquals(type(machine_provider), dummy.MachineProvider) def test_get_machine_provider_passes_config_into_provider(self): """ get_machine_provider() should pass the machine provider configuration when constructing the MachineProvider. """ self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") dummy_provider = env.get_machine_provider() self.assertEquals(dummy_provider.config.get("foo"), "bar") def test_get_machine_provider_should_cache_results(self): """ get_machine_provider() must cache its results. """ self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") machine_provider1 = env.get_machine_provider() machine_provider2 = env.get_machine_provider() self.assertIdentical(machine_provider1, machine_provider2) def test_get_serialization_data(self): """ Getting the serialization data returns a dictionary with the environment configuration. 
""" self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") data = env.get_serialization_data() self.assertEqual( data, {"myfirstenv": {"type": "dummy", "foo": "bar", "dynamicduck": "magic"}}) def test_get_serialization_data_errors_passthrough(self): """Serialization errors are raised to the caller. """ self.write_config(SAMPLE_ENV) self.config.load() env = self.config.get("myfirstenv") mock_env = self.mocker.patch(env) mock_env.get_machine_provider() mock_provider = self.mocker.mock(dummy.MachineProvider) self.mocker.result(mock_provider) mock_provider.get_serialization_data() self.mocker.throw(SyntaxError()) self.mocker.replay() self.assertRaises(SyntaxError, env.get_serialization_data) juju-0.7.orig/juju/environment/tests/data/lsb-release0000644000000000000000000000015112135220114021123 0ustar 00000000000000DISTRIB_ID=Ubuntu DISTRIB_RELEASE=99.10 DISTRIB_CODENAME="tremendous" DISTRIB_DESCRIPTION="Ubuntu 99.10" juju-0.7.orig/juju/ftests/__init__.py0000644000000000000000000000000212135220114015777 0ustar 00000000000000# juju-0.7.orig/juju/ftests/test_aws.py0000644000000000000000000000716212135220114016107 0ustar 00000000000000""" Validate AWS (EC2/S3) Assumptions via api usage exercise. Requirements for functional test - valid amazon credentials present in the environment as AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID """ import random from twisted.internet.defer import inlineCallbacks from twisted.python.failure import Failure from txaws.service import AWSServiceRegion from txaws.s3.exception import S3Error from juju.lib.testing import TestCase class AWSFunctionalTest(TestCase): def setUp(self): region = AWSServiceRegion() self.ec2 = region.get_ec2_client() self.s3 = region.get_s3_client() class EC2SecurityGroupTest(AWSFunctionalTest): def setUp(self): super(EC2SecurityGroupTest, self).setUp() self.security_group = "juju-test-%s" % (random.random()) @inlineCallbacks def tearDown(self): yield self.ec2.delete_security_group(self.security_group) @inlineCallbacks def test_create_and_get_group(self): """Verify input/outputs of creating a security group. Specifically we want to see if we can pick up the owner id off the group, which we need for group 2 group ec2 authorizations.""" data = yield self.ec2.create_security_group( self.security_group, "test") self.assertEqual(data, True) info = yield self.ec2.describe_security_groups(self.security_group) self.assertEqual(len(info), 1) group = info.pop() self.assertTrue(group.owner_id) @inlineCallbacks def test_create_and_authorize_group(self): yield self.ec2.create_security_group(self.security_group, "test") info = yield self.ec2.describe_security_groups(self.security_group) group = info.pop() yield self.ec2.authorize_security_group( self.security_group, source_group_name = self.security_group, source_group_owner_id = group.owner_id) info = yield self.ec2.describe_security_groups(self.security_group) group = info.pop() self.assertEqual(group.name, self.security_group) class S3FilesTest(AWSFunctionalTest): def setUp(self): super(S3FilesTest, self).setUp() self.control_bucket = "juju-test-%s" % (random.random()) return self.s3.create_bucket(self.control_bucket) @inlineCallbacks def tearDown(self): listing = yield self.s3.get_bucket(self.control_bucket) for ob in listing.contents: yield self.s3.delete_object(self.control_bucket, ob.key) yield self.s3.delete_bucket(self.control_bucket) def test_put_object(self): """Verify input/outputs of putting an object in the bucket. 
The output is just an empty string on success.""" d = self.s3.put_object( self.control_bucket, "pirates/gold.txt", "blah blah") def verify_result(result): self.assertEqual(result, "") d.addCallback(verify_result) return d @inlineCallbacks def test_get_object(self): """Verify input/outputs of getting an object from the bucket.""" yield self.s3.put_object( self.control_bucket, "pirates/ship.txt", "argh argh") ob = yield self.s3.get_object( self.control_bucket, "pirates/ship.txt") self.assertEqual(ob, "argh argh") def test_get_object_nonexistant(self): """Verify output when an object does not exist.""" d = self.s3.get_object(self.control_bucket, "pirates/treasure.txt") def verify_result(result): self.assertTrue(isinstance(result, Failure)) result.trap(S3Error) d.addBoth(verify_result) return d juju-0.7.orig/juju/ftests/test_connection.py0000644000000000000000000000571012135220114017451 0ustar 00000000000000""" Functional tests for secure zookeeper connections using an ssh forwarded port. Requirements for functional test - sshd running on localhost - zookeeper running on localhost, listening on port 2181 - user can log into localhost via key authentication - ~/.ssh/id_dsa exists and is configured as an authorized key """ import os import pwd import zookeeper from juju.errors import NoConnection from juju.lib.testing import TestCase from juju.state.sshclient import SSHClient from juju.tests.common import get_test_zookeeper_address class ConnectionTest(TestCase): def setUp(self): super(ConnectionTest, self).setUp() self.username = pwd.getpwuid(os.getuid())[0] self.log = self.capture_logging("juju.state.sshforward") self.old_user_name = SSHClient.remote_user SSHClient.remote_user = self.username self.client = SSHClient() zookeeper.set_debug_level(0) def tearDown(self): super(ConnectionTest, self).tearDown() zookeeper.set_debug_level(zookeeper.LOG_LEVEL_DEBUG) SSHClient.remote_user = self.old_user_name def test_connect(self): """ Forwarding a port spawns an ssh process with port forwarding arguments. """ connect_deferred = self.client.connect( get_test_zookeeper_address(), timeout=20) def validate_connected(client): self.assertTrue(client.connected) client.close() connect_deferred.addCallback(validate_connected) return connect_deferred def test_invalid_host(self): """ if a connection can not be made before a timeout period, an exception is raised. with the sshclient layer, tihs test no longer returns a failure.. and its hard to cleanup the process tunnel.. """ SSHClient.remote_user = "rabbit" connect_deferred = self.client.connect( "foobar.example.com:2181", timeout=10) self.failUnlessFailure(connect_deferred, NoConnection) def validate_log(result): output = self.log.getvalue() self.assertTrue(output.strip().startswith( "Invalid host for SSH forwarding")) connect_deferred.addCallback(validate_log) return connect_deferred def test_invalid_user(self): """ if a connection can not be made before a timeout period, an exception is raised. with the sshclient layer, tihs test no longer returns a failure.. and its hard to cleanup the process tunnel.. 
""" SSHClient.remote_user = "rabbit" connect_deferred = self.client.connect( get_test_zookeeper_address(), timeout=10) self.failUnlessFailure(connect_deferred, NoConnection) def validate_log(result): output = self.log.getvalue() self.assertEqual(output.strip(), "Invalid SSH key") connect_deferred.addCallback(validate_log) return connect_deferred juju-0.7.orig/juju/ftests/test_ec2_provider.py0000644000000000000000000002123312135220114017673 0ustar 00000000000000""" Functional test for EC2 Provider. Requirements for functional test - valid amazon credentials present in the environment as AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID - an ssh key (id_dsa, id_rsa, identity) present in ~/.ssh - if the key has a password an ssh-agent must be running with SSH_AGENT_PID and SSH_AUTH_SOCK set in the environment. These tests may take several minutes for each test. """ from cStringIO import StringIO import os import pwd import sys import zookeeper from twisted.internet.defer import inlineCallbacks, Deferred, returnValue from juju.errors import FileNotFound, EnvironmentNotFound from juju.providers.ec2 import MachineProvider from juju.state.sshclient import SSHClient from juju.lib.testing import TestCase def wait_for_startup(client, instances, interval=2): """Poll EC2 waiting for instance to transition to running state.""" # XXX should we instead be waiting for /initialized? from twisted.internet import reactor on_started = Deferred() def poll_instance(): d = client.describe_instances(*[i.instance_id for i in instances]) d.addCallback(check_status) def check_status(instances): started = filter(lambda i: i.instance_state == "running", instances) if len(started) == len(instances): on_started.callback(instances) else: reactor.callLater(interval, poll_instance) reactor.callLater(interval, poll_instance) return on_started def get_juju_branch_url(): """ Inspect the current working tree, to determine the juju branch to utilize when running functional tests in the cloud. If the current directory is a branch, then use its push location. If its a checkout then use its bound location. Also verify the local tree has no uncommitted changes, and that all local commits have been pushed upstream. 
""" import juju from bzrlib import workingtree, branch, errors, transport try: tree, path = workingtree.WorkingTree.open_containing( os.path.abspath(os.path.dirname(juju.__file__))) except errors.NotBranchError: return "lp:juju" if tree.has_changes(): raise RuntimeError("Local branch has uncommitted changes") # only a checkout will have a bound location, typically trunk location = tree.branch.get_bound_location() if location: return location # else its a development branch location = tree.branch.get_push_location() assert location, "Could not determine juju location for ftests" # verify the branch is up to date pushed local_revno = tree.branch.revno() location = location.replace("lp:", "bzr+ssh://bazaar.launchpad.net/") t = transport.get_transport(location) try: remote_branch = branch.Branch.open_from_transport(t) except errors.NotBranchError: raise RuntimeError("Local branch not pushed") remote_revno = remote_branch.revno() if not local_revno <= remote_revno: raise RuntimeError("Local branch has unpushed changes") # the remote bzr invocation prefers lp: addresses location = location.replace("bzr+ssh://bazaar.launchpad.net/", "lp:") return str(location) class EC2ProviderFunctionalTest(TestCase): def setUp(self): super(EC2ProviderFunctionalTest, self).setUp() self.username = pwd.getpwuid(os.getuid())[0] self.log = self.capture_logging("juju") zookeeper.set_debug_level(0) juju_branch = "" # get_juju_branch_url() self.provider = MachineProvider("ec2-functional", {"control-bucket": "juju-test-%s" % (self.username), "admin-secret": "magic-beans", "juju-branch": juju_branch}) class EC2MachineTest(EC2ProviderFunctionalTest): def _filter_instances(self, instances): provider_instances = [] group_name = "juju-%s" % self.provider.environment_name for i in instances: if i.instance_state not in ("running", "pending"): continue if not group_name in i.reservation.groups: continue provider_instances.append(i) return provider_instances @inlineCallbacks def test_shutdown(self): """ Shutting down the provider, terminates all instances associated to the provider instance. """ running = yield self.provider.ec2.describe_instances() running_prior = set(self._filter_instances(running)) result = yield self.provider.shutdown() running_set = yield self.provider.ec2.describe_instances() running_now = set(self._filter_instances(running_set)) if result: shutdown = running_prior - running_now self.assertEqual(len(result), len(shutdown)) result_ids = [r.instance_id for r in result] for i in shutdown: self.failUnlessIn(i.instance_id, result_ids) @inlineCallbacks def test_bootstrap_and_connect(self): """ Launching a bootstrap instance, creates an ec2 instance with a zookeeper server running on it. 
This test may take up to 7m """ machines = yield self.provider.bootstrap() instances = yield wait_for_startup(self.provider.ec2, machines) test_complete = Deferred() def verify_running(): sys.stderr.write("running; ") @inlineCallbacks def validate_connected(client): self.assertTrue(client.connected) sys.stderr.write("connected.") exists_deferred, watch_deferred = client.exists_and_watch( "/charms") stat = yield exists_deferred if stat: test_complete.callback(None) returnValue(True) yield watch_deferred stat = yield client.exists("/charms") self.assertTrue(stat) test_complete.callback(None) def propogate_failure(failure): test_complete.errback(failure) return failure def close_client(result, client): client.close() server = "%s:2181" % instances[0].dns_name client = SSHClient() client_deferred = client.connect(server, timeout=300) client_deferred.addCallback(validate_connected) client_deferred.addErrback(propogate_failure) client_deferred.addBoth(close_client, client) yield verify_running() yield test_complete # set the timeout to something more reasonable for bootstraping test_bootstrap_and_connect.timeout = 300 @inlineCallbacks def test_provider_with_nonexistant_zk_instance(self): """ If the zookeeper instances as stored in s3 does not exist, then connect should return the appropriate error message. """ self.addCleanup(self.provider.save_state, {}) yield self.provider.save_state({"zookeeper-instances": [ "i-a189723", "i-a213213"]}) d = self.provider.connect() yield self.assertFailure(d, EnvironmentNotFound) class EC2StorageTest(EC2ProviderFunctionalTest): def setUp(self): super(EC2StorageTest, self).setUp() self.s3 = self.provider.s3 self.storage = self.provider.get_file_storage() self.control_bucket = self.provider.config.get("control-bucket") return self.s3.create_bucket(self.control_bucket) @inlineCallbacks def tearDown(self): listing = yield self.s3.get_bucket(self.control_bucket) for ob in listing.contents: yield self.s3.delete_object(self.control_bucket, ob.key) yield self.s3.delete_bucket(self.control_bucket) @inlineCallbacks def test_put_object(self): content = "snakes eat rubies" yield self.storage.put("files/reptile.txt", StringIO(content)) s3_content = yield self.s3.get_object( self.control_bucket, "files/reptile.txt") self.assertEqual(content, s3_content) @inlineCallbacks def test_get_object(self): content = "snakes eat rubies" yield self.storage.put("files/reptile.txt", StringIO(content)) file_obj = yield self.storage.get("files/reptile.txt") s3_content = file_obj.read() self.assertEqual(content, s3_content) def test_get_object_nonexistant(self): remote_path = "files/reptile.txt" d = self.storage.get(remote_path) self.failUnlessFailure(d, FileNotFound) def validate_error_message(result): self.assertEqual( result.path, "s3://%s/%s" % (self.control_bucket, remote_path)) d.addCallback(validate_error_message) return d juju-0.7.orig/juju/hooks/__init__.py0000644000000000000000000000000012135220114015610 0ustar 00000000000000juju-0.7.orig/juju/hooks/cli.py0000644000000000000000000002341312135220114014635 0ustar 00000000000000import argparse import base64 import copy import json import logging import os import sys from argparse import ArgumentTypeError from twisted.internet import defer from twisted.internet import protocol from juju.hooks.protocol import UnitAgentClient from juju.lib.format import get_charm_formatter_from_env _marker = object() class CommandLineClient(object): """Template for writing Command Line Clients. 
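A minimal, hypothetical subclass would look like::

    class EchoCli(CommandLineClient):
        keyvalue_pairs = False

        def customize_parser(self):
            self.parser.add_argument("message")

        def run(self):
            # The returned value is handed to render()/format_smart()
            # for output.
            return self.options.message
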
Used to implement the utility scripts available to hook authors. Provides a framework for utilities connected to the Unit Agent process via a UNIX Domain Socket. This provides facilities for standardized logging, error handling, output transformation and exit codes. There are a number of variables that can be set in a subclass to help configure its behavior. Instance Variables: `exit_code` -- Indicate an error to the caller. The default indicates no error. (default: 0) `keyvalue_pairs` -- Commands may process key-value pairs in the format 'alpha=a beta=b' as arguments. Setting this boolean to True enables the parsing of these options and supports the additional conventions described in the specifications/unit-agent-hooks document. (default: False) `require_cid` -- Does the command require the specification of a client_id. (default: True) """ default_mode = "wb" keyvalue_pairs = False exit_code = 0 require_cid = True manage_logging = True manage_connection = True def _setup_flags(self): parser = self.parser # Set up the default Arguments parser.add_argument("-o", "--output", type=argparse.FileType(self.default_mode), help="""Specify an output file""") parser.add_argument("-s", "--socket", help="Unit Agent communicates with " "tools over a socket. This value can be " "overridden here or read from the " "envionment variable JUJU_AGENT_SOCKET" ) parser.add_argument("--client-id", help="A token used to connect the client " "with an execution context and state " "cache. This value can be overridden " "here or read from the environment " "variable JUJU_CLIENT_ID" ) # output rendering parser.add_argument("--format", default="smart") # logging parser.add_argument("--log-file", metavar="FILE", default=sys.stderr, type=argparse.FileType('a'), help="Log output to file") parser.add_argument("--log-level", metavar="CRITICAL|DEBUG|INFO|ERROR|WARNING", help="Display messages starting at given level", type=parse_log_level, default=logging.WARNING) def customize_parser(self): """Hook for subclasses to add special handling after the basic parser and standard flags have been added. This hook is called at such a time that if positional args are defined these will be added before any key-value pair handling. """ pass def setup_parser(self): self.parser = argparse.ArgumentParser() self._setup_flags() self.customize_parser() if self.keyvalue_pairs: self.parser.add_argument("keyvalue_pairs", nargs="*") return self.parser def parse_args(self, arguments=None): """By default this processes command line arguments. However with arguments are passed they should be a list of arguments in the format normally provided by sys.argv. arguments: `arguments` -- optional list of arguments to parse in the sys.argv standard format. 
(default: None) """ options = self.parser.parse_args(arguments) self.options = options if self.manage_logging: self.setup_logging() exit = False if not exit and not options.socket: options.socket = os.environ.get("JUJU_AGENT_SOCKET") if not options.socket: exit = SystemExit("No JUJU_AGENT_SOCKET/" "-s option found") # use argparse std error code for this error exit.code = 2 if not exit and not options.client_id: options.client_id = os.environ.get("JUJU_CLIENT_ID") if not options.client_id and self.require_cid: exit = SystemExit("No JUJU_CLIENT_ID/" "--client_id option found") exit.code = 2 if exit: self.parser.print_usage(sys.stderr) print >>sys.stderr, (str(exit)) raise exit if self.keyvalue_pairs: self.parse_kvpairs(self.options.keyvalue_pairs) return options def setup_logging(self): logging.basicConfig( format="%(asctime)s %(levelname)s: %(message)s", level=self.options.log_level, stream=self.options.log_file) def parse_kvpairs(self, options): formatter = get_charm_formatter_from_env() data = formatter.parse_keyvalue_pairs(options) # cache self.options.keyvalue_pairs = data return data def _connect_to_agent(self): from twisted.internet import reactor def onConnectionMade(p): self.client = p return p d = protocol.ClientCreator( reactor, UnitAgentClient).connectUNIX(self.options.socket) d.addCallback(onConnectionMade) return d def __call__(self, arguments=None): from twisted.internet import reactor self.setup_parser() self.parse_args(arguments=arguments) if self.manage_connection: self._connect_to_agent().addCallback(self._run) else: reactor.callWhenRunning(self._run) reactor.run() sys.exit(self.exit_code) def _run(self, result=None): from twisted.internet import reactor d = defer.maybeDeferred(self.run) d.addCallbacks(self.render, self.render_error) d.addBoth(lambda x: reactor.stop()) return d def run(self): """Implemented by subclass. This method should implement any behavior specific to the command and return (or yield with inlineCallbacks) a value that will later be handed off to render for formatting and output. """ pass def render_error(self, result): tb = result.getTraceback(elideFrameworkCode=True) sys.stderr.write(tb) logging.error(tb) logging.error(str(result)) def render(self, result): options = self.options format = options.format if options.output: stream = options.output else: stream = sys.stdout formatter = getattr(self, "format_%s" % format, None) if formatter is not None: formatter(result, stream) else: print >>sys.stderr, "unknown output format: %s" % format if result: print >>stream, str(result) stream.flush() def format_json(self, result, stream): encoded = copy.copy(result) if isinstance(result, dict): for k, v in result.iteritems(): # Workaround the fact that JSON does not work with str # values that have high bytes and are not actually UTF-8 # encoded; workaround by firt testing whether it can be # decoded as UTF-8, and if not, wrapping as Base64 # encoded. 
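# For example, the single byte "\xff" fails UTF-8 decoding, so it
# would be sent as base64.b64encode("\xff") == "/w==" instead.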
if isinstance(v, str): try: v.decode("utf8") except UnicodeDecodeError: encoded[k] = base64.b64encode(v) json.dump(encoded, stream) def format_smart(self, result, stream): if result is not None: charm_formatter = get_charm_formatter_from_env() stream.write(charm_formatter.format_raw(result)) def parse_log_level(level): """Level name/level number => level number""" if isinstance(level, basestring): level = level.upper() if level.isdigit(): level = int(level) else: # converts the name INFO to level number level = logging.getLevelName(level) if not isinstance(level, int): logging.error("Invalid log level %s" % level) level = logging.INFO return level def parse_port_protocol(port_protocol_string): """Returns (`port`, `protocol`) by converting `port_protocol_string`. `port` is an integer for a valid port (1 through 65535). `protocol` is restricted to TCP and UDP. TCP is the default. Otherwise raises ArgumentTypeError(msg). """ split = port_protocol_string.split("/") if len(split) == 2: port_string, protocol = split elif len(split) == 1: port_string, protocol = split[0], "tcp" else: raise ArgumentTypeError( "Invalid format for port/protocol, got %r" % port_protocol_string) try: port = int(port_string) except ValueError: raise ArgumentTypeError( "Invalid port, must be an integer, got %r" % port_string) raise if port < 1 or port > 65535: raise ArgumentTypeError( "Invalid port, must be from 1 to 65535, got %r" % port) if protocol.lower() not in ("tcp", "udp"): raise ArgumentTypeError( "Invalid protocol, must be 'tcp' or 'udp', got %r" % protocol) return port, protocol.lower() juju-0.7.orig/juju/hooks/commands.py0000644000000000000000000002105312135220114015665 0ustar 00000000000000import logging import os import pipes import re import sys from twisted.internet.defer import inlineCallbacks, returnValue from juju.hooks.cli import ( CommandLineClient, parse_log_level, parse_port_protocol) from juju.hooks.protocol import MustSpecifyRelationName from juju.state.errors import InvalidRelationIdentity BAD_CHARS = re.compile("[\-\./:()<>|?*]|(\\\)") class RelationGetCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): remote_unit = os.environ.get("JUJU_REMOTE_UNIT") self.parser.add_argument( "-r", dest="relation_id", default="", metavar="RELATION ID") self.parser.add_argument("settings_name", default="", nargs="?") self.parser.add_argument("unit_name", default=remote_unit, nargs="?") @inlineCallbacks def run(self): if self.options.settings_name == "-": self.options.settings_name = "" if self.options.unit_name is None: print >>sys.stderr, "Unit name is not defined" return result = None try: result = yield self.client.relation_get(self.options.client_id, self.options.relation_id, self.options.unit_name, self.options.settings_name) except InvalidRelationIdentity, e: # This prevents the exception from getting wrapped by AMP print >>sys.stderr, e.relation_ident except Exception, e: print >>sys.stderr, str(e) returnValue(result) def format_shell(self, result, stream): options = self.options settings_name = options.settings_name if settings_name and settings_name != "-": # result should be a single value result = {settings_name.upper(): result} if result: errs = [] for k, v in sorted(os.environ.items()): if k.startswith("VAR_"): print >>stream, "%s=" % (k.upper(), ) errs.append(k) for k, v in sorted(result.items()): k = BAD_CHARS.sub("_", k.upper()) v = pipes.quote(v) print >>stream, "VAR_%s=%s" % (k.upper(), v) # Order of output within streams is assured, but we output # on (commonly) two 
streams here and the ordering of those # messages is significant to the user. Make a best # effort. However, this cannot be guaranteed when these # streams are collected by `HookProtocol`. stream.flush() if errs: print >>sys.stderr, "The following were omitted from " \ "the environment. VAR_ prefixed variables indicate a " \ "usage error." print >>sys.stderr, "".join(errs) def relation_get(): """Entry point for relation-get""" client = RelationGetCli() sys.exit(client()) class RelationSetCli(CommandLineClient): keyvalue_pairs = True def customize_parser(self): self.parser.add_argument( "-r", dest="relation_id", default="", metavar="RELATION ID") @inlineCallbacks def run(self): try: yield self.client.relation_set(self.options.client_id, self.options.relation_id, self.options.keyvalue_pairs) except InvalidRelationIdentity, e: # This prevents the exception from getting wrapped by AMP print >>sys.stderr, e.relation_ident except Exception, e: print >>sys.stderr, str(e) def relation_set(): """Entry point for relation-set.""" client = RelationSetCli() sys.exit(client()) class RelationIdsCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): relation_name = os.environ.get("JUJU_RELATION", "") self.parser.add_argument( "relation_name", metavar="RELATION NAME", nargs="?", default=relation_name, help=("Specify the relation name of the relation ids to list. " "Defaults to $JUJU_RELATION, if available.")) @inlineCallbacks def run(self): if not self.options.relation_name: raise MustSpecifyRelationName() result = yield self.client.relation_ids( self.options.client_id, self.options.relation_name) returnValue(result) def format_smart(self, result, stream): for ident in result: print >>stream, ident def relation_ids(): """Entry point for relation-ids.""" client = RelationIdsCli() sys.exit(client()) class ListCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument( "-r", dest="relation_id", default="", metavar="RELATION ID") @inlineCallbacks def run(self): result = None try: result = yield self.client.list_relations(self.options.client_id, self.options.relation_id) except InvalidRelationIdentity, e: # This prevents the exception from getting wrapped by AMP print >>sys.stderr, e.relation_ident except Exception, e: print >>sys.stderr, str(e) returnValue(result) def format_eval(self, result, stream): """ eval `juju-list` """ print >>stream, "export JUJU_MEMBERS=\"%s\"" % (" ".join(result)) def format_smart(self, result, stream): for member in result: print >>stream, member def relation_list(): """Entry point for relation-list.""" client = ListCli() sys.exit(client()) class LoggingCli(CommandLineClient): keyvalue_pairs = False require_cid = False def customize_parser(self): self.parser.add_argument("message", nargs="+") self.parser.add_argument("-l", metavar="CRITICAL|DEBUG|INFO|ERROR|WARNING", help="Send log message at the given level", type=parse_log_level, default=logging.INFO) def run(self, result=None): return self.client.log(self.options.l, self.options.message) def render(self, result): return None def log(): """Entry point for juju-log.""" client = LoggingCli() sys.exit(client()) class ConfigGetCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument("option_name", default="", nargs="?") @inlineCallbacks def run(self): # handle option_name being explicitly skipped on the cli result = yield self.client.config_get(self.options.client_id, self.options.option_name) returnValue(result) def config_get():
"""Entry point for config-get""" client = ConfigGetCli() sys.exit(client()) class OpenPortCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument( "port_protocol", metavar="PORT[/PROTOCOL]", help="The port to open. The protocol defaults to TCP.", type=parse_port_protocol) def run(self): return self.client.open_port( self.options.client_id, *self.options.port_protocol) def open_port(): """Entry point for open-port.""" client = OpenPortCli() sys.exit(client()) class ClosePortCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument( "port_protocol", metavar="PORT[/PROTOCOL]", help="The port to close. The protocol defaults to TCP.", type=parse_port_protocol) def run(self): return self.client.close_port( self.options.client_id, *self.options.port_protocol) def close_port(): """Entry point for close-port.""" client = ClosePortCli() sys.exit(client()) class UnitGetCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument("setting_name") @inlineCallbacks def run(self): result = yield self.client.get_unit_info(self.options.client_id, self.options.setting_name) returnValue(result["data"]) def unit_get(): """Entry point for config-get""" client = UnitGetCli() sys.exit(client()) juju-0.7.orig/juju/hooks/executor.py0000644000000000000000000002424212135220114015725 0ustar 00000000000000""" Hook Execution. """ import os import fnmatch import logging import tempfile from twisted.internet.defer import ( inlineCallbacks, DeferredQueue, Deferred, DeferredLock, returnValue, DeferredFilesystemLock) from twisted.internet.error import ProcessExitedAlready DEBUG_HOOK_TEMPLATE = r"""#!/bin/bash set -e export JUJU_DEBUG=$(mktemp -d) exec > $JUJU_DEBUG/debug.log >&1 # Save environment variables and export them for sourcing. FILTER='^\(LS_COLORS\|LESSOPEN\|LESSCLOSE\|PWD\)=' env | grep -v $FILTER > $JUJU_DEBUG/env.sh sed -i 's/^/export /' $JUJU_DEBUG/env.sh # Create an internal script which will load the hook environment. cat > $JUJU_DEBUG/hook.sh < $JUJU_DEBUG/hook.pid exec /bin/bash END chmod +x $JUJU_DEBUG/hook.sh # If the session already exists, the ssh command won the race, so just use it. # The beauty below is a workaround for a bug in tmux (1.5 in Oneiric) or # epoll that doesn't support /dev/null or whatever. Without it the # command hangs. tmux new-session -d -s $JUJU_UNIT_NAME 2>&1 | cat > /dev/null || true tmux new-window -t $JUJU_UNIT_NAME -n {hook_name} "$JUJU_DEBUG/hook.sh" # If we exit for whatever reason, kill the hook shell. exit_handler() { if [ -f $JUJU_DEBUG/hook.pid ]; then kill -9 $(cat $JUJU_DEBUG/hook.pid) || true fi } trap exit_handler EXIT # Wait for the hook shell to start, and then wait for it to exit. while [ ! -f $JUJU_DEBUG/hook.pid ]; do sleep 1 done HOOK_PID=$(cat $JUJU_DEBUG/hook.pid) while kill -0 "$HOOK_PID" 2> /dev/null; do sleep 1 done """ class HookExecutor(object): """Executes scheduled hooks. A typical unit agent is subscribed to multiple event streams across unit and relation lifecycles. All of which will attempt to execute hooks in response to events. In order to serialize hook execution and bring observability, a hook executor is utilized across the different components that want to execute hooks. 
""" STOP = object() LOCK_PATH = "/var/lib/juju/hook.lock" def __init__(self): self._running = False self._executions = DeferredQueue() self._observer = None self._log = logging.getLogger("hook.executor") self._run_lock = DeferredLock() # Serialized container hook execution self._fs_lock = DeferredFilesystemLock(self.LOCK_PATH) # The currently executing hook invoker. None if no hook is executing. self._invoker = None # The currently executing hook's context. None if no hook is executing. self._hook_context = None # The current names of hooks that should be debugged. self._debug_hook_names = None # The path to the last utilized tempfile debug hook. self._debug_hook_file_path = None @property def running(self): """Returns a boolean, denoting if the executor is running.""" return self._running @inlineCallbacks def start(self): """Start the hook executor. After the executor is started, it will continue to serially execute any queued hook executions. """ assert self._running is False, "Already running" self._running = True self._log.debug("started") while self._running: next = yield self._executions.get() # The stop logic here is to allow for two different # scenarios. One is if the executor is currently waiting on # the queue, putting a stop value there will, immediately # wake it up and cause it to stop. # The other distinction is more subtle, if we invoke # start/stop/start on the executioner, and it was # currently executing a slow hook, then when the # executioner finishes with the hook it may now be in the # running state, resulting in two executioners closures # executing hooks. We track stops to ensure that only one # executioner closure is running at a time. if next is self.STOP: continue yield self._run_lock.acquire() if not self._running: self._run_lock.release() continue yield self._fs_lock.deferUntilLocked() try: yield self._run_one(*next) finally: try: self._fs_lock.unlock() except ValueError: # Defensive.. If on unlock we're not the owner the impl # will raise an error, we don't care as long the sys # is not blocked by us, lock will sanitize pass self._run_lock.release() @inlineCallbacks def _run_one(self, invoker, path, exec_deferred): """Run a hook. """ hook_path = self.get_hook_path(path) if not os.path.exists(hook_path): self._log.info( "Hook does not exist, skipping %s", hook_path) exec_deferred.callback(False) if self._observer: self._observer(path) returnValue(None) self._log.debug("Running hook: %s", path) # Store for context for callbacks, execution is serialized. self._invoker = invoker self._hook_context = invoker.get_context() try: yield invoker(hook_path) except Exception, e: self._invoker = self._hook_context = None self._log.debug("Hook error: %s %s", path, e) exec_deferred.errback(e) else: self._invoker = self._hook_context = None self._log.debug("Hook complete: %s", path) exec_deferred.callback(True) if self._observer: self._observer(path) @inlineCallbacks def stop(self): """Stop hook executions. Returns a deferred that fires when the executor has stopped. """ assert self._running, "Already stopped" yield self._run_lock.acquire() self._running = False self._executions.put(self.STOP) self._run_lock.release() self._log.debug("stopped") @inlineCallbacks def run_priority_hook(self, invoker, hook_path): """Execute a hook while the executor is stopped. Executes a hook immediately, ignoring the existing queued hook executions, requires the hook executor to be stopped. 
""" yield self._run_lock.acquire() try: assert not self._running, "Executor must not be running" exec_deferred = Deferred() yield self._run_one(invoker, hook_path, exec_deferred) finally: self._run_lock.release() yield exec_deferred def set_observer(self, observer): """Set a callback hook execution observer. The callback receives a single parameter, the path to the hook, and is invoked after the hook has been executed. """ self._observer = observer def get_hook_context(self, client_id): """Retrieve the context of the currently executing hook. This serves as the integration point with the hook api server, which utilizes this function to retrieve a hook context for a given client. Since we're serializing execution its effectively a constant lookup to the currently executing hook's context. """ return self._hook_context def get_hook_path(self, hook_path): """Retrieve a hook path. We use this to enable debugging. :param hook_path: The requested hook path to execute. If the executor is in debug mode, a path to a debug hook is returned. """ hook_name = os.path.basename(hook_path) # Cleanup/Release any previous hook debug scripts. if self._debug_hook_file_path: os.unlink(self._debug_hook_file_path) self._debug_hook_file_path = None # Check if debug is active, if not use the requested hook. if not self._debug_hook_names: return hook_path # Check if its a hook we want to debug found = False for debug_name in self._debug_hook_names: if fnmatch.fnmatch(hook_name, debug_name): found = True if not found: return hook_path # Setup a debug hook script. self._debug_hook_file_path = self._write_debug_hook(hook_name) return self._debug_hook_file_path def _write_debug_hook(self, hook_name): debug_hook = DEBUG_HOOK_TEMPLATE.replace("{hook_name}", hook_name) debug_hook_file = tempfile.NamedTemporaryFile( suffix="-%s" % hook_name, delete=False) debug_hook_file.write(debug_hook) debug_hook_file.flush() # We have to close the hook file, else linux throws a Text # File busy on exec because the file is open for write. debug_hook_file.close() os.chmod(debug_hook_file.name, 0700) return debug_hook_file.name def get_invoker(self, client_id): """Retrieve the invoker of the currently executing hook. This method enables a lookup point for the hook API. """ return self._invoker def set_debug(self, hook_names): """Set some hooks to be debugged. Also used to clear debug. :param hook_names: A list of hook names to debug, None values means disable debugging, and end current debugging underway. """ if hook_names is not None and not isinstance(hook_names, list): raise AssertionError("Invalid hook names %r" % (hook_names)) # Terminate an existing debug session when the debug ends. if hook_names is None and self._invoker: try: self._invoker.send_signal("HUP") except (ProcessExitedAlready, ValueError): pass self._debug_hook_names = hook_names def __call__(self, invoker, hook_path): """Schedule a hook for execution. Returns a deferred that fires when the hook has been executed. 
""" exec_deferred = Deferred() self._executions.put( (invoker, hook_path, exec_deferred)) return exec_deferred juju-0.7.orig/juju/hooks/invoker.py0000644000000000000000000003512212135220114015543 0ustar 00000000000000import os import sys from twisted.internet import protocol from twisted.internet.defer import Deferred, inlineCallbacks, returnValue from twisted.python.failure import Failure from juju import errors from juju.state.errors import RelationIdentNotFound, InvalidRelationIdentity from juju.state.hook import RelationHookContext class HookProtocol(protocol.ProcessProtocol): """Protocol used to communicate between the unit agent and hook process. This class manages events associated with the hook process, including its exit and status of file descriptors. """ def __init__(self, hook_name, context, log=None): self._context = context self._hook_name = hook_name self._log = log # The process has exited. The value of this Deferred is # the exit code, and only if it is 0. Otherwise a # `CharmInvocationError` is communicated through this # Deferred. self.exited = Deferred() # The process has ended, that is, its file descriptors # are closed. Output can now be fully read. The deferred # semantics are the same as `exited` above. self.ended = Deferred() def outReceived(self, data): """Log `data` from stduout until the child process has ended.""" self._log.info(data) def errReceived(self, data): """Log `data` from stderr until the child process has ended.""" self._log.info(data) def _process_reason(self, reason, deferred): """Common code for working with both processEnded and processExited. The semantics of `exited` and `ended` are the same with respect to how they process the status code; the difference is when these events occur. For more, see :class:`Invoker`. """ exit_code = reason.value.exitCode if exit_code == 0: return deferred.callback(exit_code) elif exit_code is None and reason.value.signal: error = errors.CharmInvocationError( self._hook_name, exit_code, signal=reason.value.signal) else: error = errors.CharmInvocationError(self._hook_name, exit_code) deferred.errback(error) def processExited(self, reason): """Called when the process has exited.""" self._process_reason(reason, self.exited) def processEnded(self, reason): """Called when file descriptors for the process are closed.""" self._process_reason(reason, self.ended) class FormatSettingChanges(object): """Wrapper to delay executing __str_ of changes until written, if at all. :param list changes: Each change is a pair (`relation_ident`, `item`), where `item` may be an `AddedItem`, `DeletedItem`, or `ModifiedItem`. If `relation_ident` is None, this implies that it is a setting on the implied (or parent) context; it is sorted first and the relation_ident for the implied context is not logged. """ def __init__(self, changes): self.changes = changes def __str__(self): changes = sorted( self.changes, key=lambda (relation_ident, item): (relation_ident, item.key)) lines = [] for relation_ident, item in changes: if relation_ident is None: lines.append(" %s" % str(item)) else: lines.append(" %s on %r" % (str(item), relation_ident)) return "\n".join(lines) class Invoker(object): """Responsible for the execution and management of hook invocation. In a nutshell, *how* hooks are invoked, not *when* or *why*. Responsible for the following: * Manages socket connection with the unit agent. * Connects the child process stdout/stderr file descriptors to logging. * Handles the exit of the hook process, including reporting its exit code. 
* Cleans up resources of the hook process upon its exit. It's important to understand the difference between a process exiting and the process ending (using the terminology established by Twisted). Process exit is simple - this is the first event and occurs by the process returning its status code through the exit process. Normally process ending occurs very shortly thereafter, however, it may be briefly delayed because of pending writes to its file descriptors. In certain cases, however, hook scripts may invoke poorly written commands that fork child processes in the background that will wait around indefinitely, but do not close their file descriptors. In this case, it is the responsibility of the Invoker to wait briefly (for now hardcoded to 5 seconds), then reap such processes. """ def __init__(self, context, change, client_id, socket_path, unit_path, logger): """Takes the following arguments: `context`: an `juju.state.hook.HookContext` `change`: an `juju.state.hook.RelationChange` `client_id`: a string uniquely identifying a client session `socket_path`: the path to the UNIX Domain socket used by clients to communicate with the Unit Agent `logger`: instance of a `logging.Logger` object used to capture hook output """ self.environment = {} self._context = context self._relation_contexts = {} self._change = change self._client_id = client_id self._socket_path = socket_path self._unit_path = unit_path self._log = logger self._charm_format = None # The twisted.internet.process.Process instance. self._process = None # The hook executable path self._process_executable = None # Deferred tracking whether the process HookProtocol is ended self._ended = None # When set, a delayed call that ensures the process is # properly terminated with loseConnection self._reaper = None # Add the initial context to the relation contexts if it's in # fact such if isinstance(context, RelationHookContext): self._relation_contexts[context.relation_ident] = context @inlineCallbacks def start(self): """Cache relation hook contexts for all relation idents.""" # Get all relation idents (None means "all") relation_idents = set((yield self._context.get_relation_idents(None))) if isinstance(self._context, RelationHookContext): # Exclude the parent context for being looked up as a child relation_idents.discard(self._context.relation_ident) display_parent_relation_ident = " on %r" % \ self._context.relation_ident else: display_parent_relation_ident = "" for relation_ident in relation_idents: child = yield self._context.get_relation_hook_context( relation_ident) self._relation_contexts[relation_ident] = child self._log.debug("Cached relation hook contexts%s: %r" % ( display_parent_relation_ident, sorted(relation_idents))) service = yield self._context.get_local_service() charm = yield service.get_charm_state() self._charm_format = (yield charm.get_metadata()).format @property def charm_format(self): return self._charm_format @property def ended(self): return self._ended @property def unit_path(self): return self._unit_path def get_environment(self): """ Returns the environment used to run the hook as a dict. Defaults are provided based on information passed to __init__. By setting keys inside Invoker.environment you can override the defaults or provide additional variables. 
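For example (illustrative)::

    invoker.environment["JUJU_RELATION"] = "database"
    env = invoker.get_environment()  # now includes JUJU_RELATION
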
""" base = dict(JUJU_AGENT_SOCKET=self._socket_path, JUJU_CLIENT_ID=self._client_id, CHARM_DIR=os.path.join(self._unit_path, "charm"), _JUJU_CHARM_FORMAT=str(self.charm_format), JUJU_ENV_UUID=os.environ["JUJU_ENV_UUID"], JUJU_UNIT_NAME=os.environ["JUJU_UNIT_NAME"], DEBIAN_FRONTEND="noninteractive", APT_LISTCHANGES_FRONTEND="none", PATH=os.environ["PATH"], JUJU_PYTHONPATH=":".join(sys.path)) base.update(self.environment) return self.get_environment_from_change(base, self._change) def get_environment_from_change(self, env, change): """Supplement the default environment with dict with variables originating from the `change` argument to __init__. """ return env def get_context(self): """Returns the hook context for the invocation.""" return self._context def get_cached_relation_hook_context(self, relation_ident): """Returns cached hook context corresponding to `relation_ident`""" try: return self._relation_contexts[relation_ident] except KeyError: parts = relation_ident.split(":") if len(parts) != 2 or not parts[1].isdigit(): raise InvalidRelationIdentity(relation_ident) else: raise RelationIdentNotFound(relation_ident) @inlineCallbacks def get_relation_idents(self, relation_name): """Returns valid relation instances for the given name.""" idents = yield self._context.get_relation_idents(relation_name) returnValue( list(set(idents).intersection(set(self._relation_contexts)))) def validate_hook(self, hook_filename): """Verify that the hook_filename exists and is executable. """ if not os.path.exists(hook_filename): raise errors.FileNotFound(hook_filename) if not os.access(hook_filename, os.X_OK): raise errors.CharmError(hook_filename, "hook is not executable") def send_signal(self, signal_id): """Send a signal of the given signal_id. `signal_id`: limited value interpretation, numeric signals ids are used as given, some values for symbolic string interpretation are available see ``twisted.internet.process._BaseProcess.signalProcess`` for additional details. Raises a `ValueError` if the process doesn't exist or `ProcessExitedAlready` if the process has already ended. """ if not self._process: raise ValueError("No Process") return self._process.signalProcess(signal_id) def _ensure_process_termination(self, ignored): """Cancels any scheduled reaper and terminates hook process, if around. Canceling the reaper itself is necessary to ensure that deferreds like this are not left in the reactor. This would otherwise be the case for test that are awaiting the log, by using the `Invoker.end` deferred. """ if self._reaper: if not self._reaper.called: self._reaper.cancel() self._process.loseConnection() @inlineCallbacks def _cleanup_process(self, hook, result): """Performs process cleanup: * Flushes any changes (eg relation settings maded by the hook) * Ensures that the result will be the exit code of the process (if 0), or the `CharmInvocationError` from the underlying `HookProtocol`, with cleaned up traceback. * Also schedules a reaper to be called later that ensures process termination. """ message = result if isinstance(message, Failure): message = message.getTraceback(elideFrameworkCode=True) self._log.debug("hook %s exited, exit code %s." % ( os.path.basename(hook), message)) # Ensure that the process is terminated (via loseConnection) # no more than 5 seconds (arbitrary) after it exits, unless it # normally ends. If ended, the reaper is cancelled to ensure # it is not left in the reactor. 
# # The 5 seconds was chosen to make it vanishly small that # there would be any lost output (as might be *occasionally* # seen with a 50ms threshold in actual testing). from twisted.internet import reactor self._reaper = reactor.callLater(5, self._process.loseConnection) # Flush context changes back to zookeeper if hook was successful. if result == 0 and self._context: relation_setting_changes = [] for context in self._relation_contexts.itervalues(): changes = yield context.flush() if changes: for change in changes: if context is self._context: relation_setting_changes.append((None, change)) else: # Only log relation idents for relation settings # on child relation hook contexts relation_setting_changes.append( (context.relation_ident, change)) if relation_setting_changes: if hasattr(self._context, "relation_ident"): display_parent_relation_ident = " on %r" % \ self._context.relation_ident else: display_parent_relation_ident = "" self._log.debug( "Flushed values for hook %r%s\n%s", os.path.basename(hook), display_parent_relation_ident, FormatSettingChanges(relation_setting_changes)) returnValue(result) def __call__(self, hook): """Execute `hook` in a runtime context and returns status code. The `hook` parameter should be a complete path to the desired executable. The returned value is a `Deferred` that is called when the hook exits. """ # Sanity check the hook. self.validate_hook(hook) # Setup for actual invocation env = self.get_environment() hook_proto = HookProtocol(hook, self._context, self._log) exited = hook_proto.exited self._ended = ended = hook_proto.ended from twisted.internet import reactor self._process = reactor.spawnProcess( hook_proto, hook, [hook], env, os.path.join(self._unit_path, "charm")) # Manage cleanup after hook exits def cb_cleanup_process(result): return self._cleanup_process(hook, result) exited.addBoth(cb_cleanup_process) ended.addBoth(self._ensure_process_termination) return exited juju-0.7.orig/juju/hooks/protocol.py0000644000000000000000000004440412135220114015732 0ustar 00000000000000""" Protocol Twisted AMP protocol used between the UnitAgent (via the juju/hooks/invoker template) and client scripts invoked hooks on behalf of charm authors. Interactions with the server happen through an exchange of commands. Each interaction with the UnitAgent is coordinated through the use of a single command. These commands have there concrete implementation relative to server state in the UnitAgentServer class. The utility methods in UnitAgentClient provide a synchronous interface for scripts derived from juju.hooks.cli to expose to scripts. To extend the system with additional command the following pattern is used. - Author a new BaseCommand subclass outlining the arguments and returns the Command neeeds. - Implement a responder for that command in UnitAgentServer returning a dict with the response agreed upon by the new Command - Implement a client side callable in UnitAgentClient which handles any pre-wire data marshaling (with the goal of mapping to the Command objects contract) and return a result after waiting for any asynchronous actions to complete. UnitAgentClient and UnitAgentServer act as the client and server sides of an RPC interface. Due to this they have a number of arguments in common which are documented here. arguments: `client_id` -- Client specifier identifying a client to the server side thus connecting it with an juju.state.hook.HookContent (str) `unit_name` -- String of the name of the unit being queried or manipulated. 
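As a hypothetical illustration of that pattern, a new command would
begin with a BaseCommand subclass declaring its wire contract::

    class EchoCommand(BaseCommand):
        commandName = "echo"
        arguments = [("client_id", amp.String()),
                     ("message", amp.String())]
        response = [("message", amp.String())]
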
""" import logging from twisted.internet import defer from twisted.internet import protocol from twisted.protocols import amp from juju.errors import JujuError from juju.lib import serializer from juju.lib.format import get_charm_formatter, get_charm_formatter_from_env from juju.state.errors import ( InvalidRelationIdentity, RelationStateNotFound, UnitRelationStateNotFound, RelationIdentNotFound) from juju.state.hook import RelationHookContext class NoSuchUnit(JujuError): """ The requested Unit Name wasn't found """ # Amp Currently cannot construct the 3 required arguments for # UnitRelationStateNotFound. This captures the error message in # a way that can pass over the wire pass class NotRelationContext(JujuError): """Relation commands can only be used in relation hooks""" class NoSuchKey(JujuError): """ The requested key did not exist. """ class MustSpecifyRelationName(JujuError): """No relation name was specified.""" def __str__(self): return "Relation name must be specified" class BaseCommand(amp.Command): errors = { NoSuchUnit: "NoSuchUnit", NoSuchKey: "NoSuchKey", NotRelationContext: "NotRelationContext", UnitRelationStateNotFound: "UnitRelationStateNotFound", MustSpecifyRelationName: "MustSpecifyRelationName", InvalidRelationIdentity: "InvalidRelationIdentity", RelationStateNotFound: "RelationStateNotFound", RelationIdentNotFound: "RelationIdentNotFound" } # All the commands below this point should be documented in the # specification specifications/unit-agent-hooks class RelationGetCommand(BaseCommand): commandName = "relation_get" arguments = [("client_id", amp.String()), ("relation_id", amp.String()), ("unit_name", amp.String()), ("setting_name", amp.String())] response = [("data", amp.String()), ("charm_format", amp.Integer())] class RelationSetCommand(BaseCommand): commandName = "relation_set" arguments = [("client_id", amp.String()), ("relation_id", amp.String()), ("blob", amp.String())] response = [] class RelationIdsCommand(BaseCommand): commandName = "relation_ids" arguments = [("client_id", amp.String()), ("relation_name", amp.String())] response = [("ids", amp.String())] class ListRelationsCommand(BaseCommand): arguments = [("client_id", amp.String()), ("relation_id", amp.String())] # whitespace delimited string response = [("members", amp.String())] class LogCommand(BaseCommand): arguments = [("level", amp.Integer()), ("message", amp.String())] response = [] class ConfigGetCommand(BaseCommand): commandName = "config_get" arguments = [("client_id", amp.String()), ("option_name", amp.String())] response = [("data", amp.String())] class OpenPortCommand(BaseCommand): commandName = "open_port" arguments = [("client_id", amp.String()), ("port", amp.Integer()), ("proto", amp.String())] response = [] class ClosePortCommand(BaseCommand): commandName = "close_port" arguments = [("client_id", amp.String()), ("port", amp.Integer()), ("proto", amp.String())] response = [] class UnitGetCommand(BaseCommand): commandName = "get_unit_info" arguments = [("client_id", amp.String()), ("setting_name", amp.String())] response = [("data", amp.String())] def require_relation_context(context): """Is this a valid context for relation hook commands? A guard for relation methods ensuring they have the proper RelationHookContext. A NotRelationContext exception is raised when a non-RelationHookContext is provided. 
""" if not isinstance(context, RelationHookContext): raise NotRelationContext( "Calling relation related method without relation context: %s" % type(context)) class UnitAgentServer(amp.AMP, object): """ Protocol used by the UnitAgent to provide a server side to CLI tools """ def connectionMade(self): """Inform the factory a connection was made. """ super(UnitAgentServer, self).connectionMade() self.factory.connectionMade(self) @RelationGetCommand.responder @defer.inlineCallbacks def relation_get(self, client_id, relation_id, unit_name, setting_name): """Get settings from a state.hook.RelationHookContext :param settings_name: optional setting_name (str) indicating that the client requested a single value only. """ context = self.factory.get_context(client_id) invoker = self.factory.get_invoker(client_id) if relation_id: yield self.factory.log( logging.DEBUG, "Getting relation %s" % relation_id) context = invoker.get_cached_relation_hook_context(relation_id) require_relation_context(context) try: if setting_name: data = yield context.get_value(unit_name, setting_name) else: data = yield context.get(unit_name) except UnitRelationStateNotFound, e: raise NoSuchUnit(str(e)) formatter = get_charm_formatter(invoker.charm_format) defer.returnValue(dict( charm_format=invoker.charm_format, data=formatter.dump(data))) @RelationSetCommand.responder @defer.inlineCallbacks def relation_set(self, client_id, relation_id, blob): """Set values into state.hook.RelationHookContext. :param blob: a YAML or JSON dumped string of a dict that will contain the delta of settings to be applied to a unit_name. """ context = yield self.factory.get_context(client_id) invoker = self.factory.get_invoker(client_id) formatter = get_charm_formatter(invoker.charm_format) data = formatter.load(blob) if relation_id: yield self.factory.log( logging.DEBUG, "Setting relation %s" % relation_id) context = invoker.get_cached_relation_hook_context(relation_id) require_relation_context(context) for k, v in data.items(): if formatter.should_delete(v): yield context.delete_value(k) else: yield context.set_value(k, v) defer.returnValue({}) @ListRelationsCommand.responder @defer.inlineCallbacks def list_relations(self, client_id, relation_id): """Lists the members of a relation.""" context = yield self.factory.get_context(client_id) if relation_id: yield self.factory.log( logging.DEBUG, "Listing relation members for %s" % relation_id) invoker = yield self.factory.get_invoker(client_id) context = invoker.get_cached_relation_hook_context(relation_id) require_relation_context(context) members = yield context.get_members() defer.returnValue(dict(members=" ".join(members))) @RelationIdsCommand.responder @defer.inlineCallbacks def relation_ids(self, client_id, relation_name): """Get relation idents for this hook context. :client_id: hooks client id that is used to define a context for a consistent view of state. :param relation_name: The relation name to query relation ids for this context. If no such relation name is specified, raises `MustSpecifyRelationName`. """ if not relation_name: raise MustSpecifyRelationName() context = yield self.factory.get_context(client_id) ids = yield context.get_relation_idents(relation_name) defer.returnValue(dict(ids=" ".join(ids))) @LogCommand.responder @defer.inlineCallbacks def log(self, level, message): """Log a message from the hook with the UnitAgent. :param level: A python logging module log level integer indicating the level the message should be logged at. 
:param message: A string containing the message to be logged. """ yield self.factory.log(level, message) defer.returnValue({}) @ConfigGetCommand.responder @defer.inlineCallbacks def config_get(self, client_id, option_name): """Retrieve one or more configuration options for a service. Service is implied in the hooks context. :client_id: hooks client id, used to define a context for a consistent view of state, as in the relation_ commands. :param option_name: Optional name of an option to fetch from the list. """ context = self.factory.get_context(client_id) options = yield context.get_config() if option_name: options = options.get(option_name) else: options = dict(options) # NOTE: no need to consider charm format for blob here, this # blob has always been in YAML format defer.returnValue(dict(data=serializer.dump(options))) @OpenPortCommand.responder @defer.inlineCallbacks def open_port(self, client_id, port, proto): """Open `port` using `proto` for the service unit. The service unit is implied by the hook's context. `client_id` - hook's client id, used to define a context for a consistent view of state. `port` - port to be opened `proto` - protocol of the port to be opened """ context = self.factory.get_context(client_id) service_unit_state = yield context.get_local_unit_state() container = yield service_unit_state.get_container() if container: # also open the port on the container # we still do subordinate unit below to ease # status reporting yield container.open_port(port, proto) yield service_unit_state.open_port(port, proto) yield self.factory.log(logging.DEBUG, "opened %s/%s" % (port, proto)) defer.returnValue({}) @ClosePortCommand.responder @defer.inlineCallbacks def close_port(self, client_id, port, proto): """Close `port` using `proto` for the service unit. The service unit is implied by the hook's context. `client_id` - hook's client id, used to define a context for a consistent view of state. `port` - port to be closed `proto` - protocol of the port to be closed """ context = self.factory.get_context(client_id) service_unit_state = yield context.get_local_unit_state() container = yield service_unit_state.get_container() if container: # also close the port on the container # we still do subordinate unit below to ease # status reporting yield container.close_port(port, proto) yield service_unit_state.close_port(port, proto) yield self.factory.log(logging.DEBUG, "closed %s/%s" % (port, proto)) defer.returnValue({}) @UnitGetCommand.responder @defer.inlineCallbacks def get_unit_info(self, client_id, setting_name): """Retrieve a unit value with the given name. :param client_id: The hook's client id, used to define a context for a consitent view of state. :param setting_name: The name of the setting to be retrieved. """ context = self.factory.get_context(client_id) unit_state = yield context.get_local_unit_state() yield self.factory.log( logging.DEBUG, "Get unit setting: %r" % setting_name) if setting_name == "private-address": value = yield unit_state.get_private_address() elif setting_name == "public-address": value = yield unit_state.get_public_address() else: raise NoSuchKey("Unit has no setting: %r" % setting_name) value = value or "" defer.returnValue({"data": value}) class UnitAgentClient(amp.AMP, object): """ Helper used by the CLI tools to call the UnitAgentServer protocol run in the UnitAgent. 
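    A sketch of standalone usage (the socket path and client id here
    are illustrative; in practice juju.hooks.cli.CommandLineClient
    manages the connection using JUJU_AGENT_SOCKET and JUJU_CLIENT_ID
    from the hook environment):

        from twisted.internet import protocol, reactor
        from twisted.internet.defer import inlineCallbacks, returnValue

        @inlineCallbacks
        def get_password(socket_path, client_id):
            client = yield protocol.ClientCreator(
                reactor, UnitAgentClient).connectUNIX(socket_path)
            value = yield client.relation_get(
                client_id, "", "mysql/0", "password")
            returnValue(value)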
""" @defer.inlineCallbacks def relation_get(self, client_id, relation_id, unit_name, setting_name): """ See UnitAgentServer.relation_get """ if not setting_name: setting_name = "" result = yield self.callRemote(RelationGetCommand, client_id=client_id, relation_id=relation_id, unit_name=unit_name, setting_name=setting_name) formatter = get_charm_formatter(result["charm_format"]) defer.returnValue(formatter.load(result["data"])) @defer.inlineCallbacks def relation_set(self, client_id, relation_id, data): """Set relation settings for unit_name :param data: Python dict applied as a delta hook settings """ formatter = get_charm_formatter_from_env() blob = formatter.dump(data) yield self.callRemote(RelationSetCommand, client_id=client_id, relation_id=relation_id, blob=blob) defer.returnValue(None) @defer.inlineCallbacks def list_relations(self, client_id, relation_id): result = yield self.callRemote(ListRelationsCommand, client_id=client_id, relation_id=relation_id) members = result["members"].split() defer.returnValue(members) @defer.inlineCallbacks def relation_ids(self, client_id, relation_name): result = yield self.callRemote(RelationIdsCommand, client_id=client_id, relation_name=relation_name) ids = result["ids"].split() defer.returnValue(ids) @defer.inlineCallbacks def log(self, level, message): if isinstance(message, (list, tuple)): message = " ".join(message) result = yield self.callRemote(LogCommand, level=level, message=message) defer.returnValue(result) @defer.inlineCallbacks def config_get(self, client_id, option_name=None): """See UnitAgentServer.config_get.""" result = yield self.callRemote(ConfigGetCommand, client_id=client_id, option_name=option_name) # Unbundle and deserialize result = serializer.load(result["data"]) defer.returnValue(result) @defer.inlineCallbacks def open_port(self, client_id, port, proto): """Open `port` for `proto` for this unit identified by `client_id`.""" yield self.callRemote( OpenPortCommand, client_id=client_id, port=port, proto=proto) defer.returnValue(None) @defer.inlineCallbacks def close_port(self, client_id, port, proto): """Close `port` for `proto` for this unit identified by `client_id`.""" yield self.callRemote( ClosePortCommand, client_id=client_id, port=port, proto=proto) defer.returnValue(None) @defer.inlineCallbacks def get_unit_info(self, client_id, setting_name): result = yield self.callRemote( UnitGetCommand, client_id=client_id, setting_name=setting_name) defer.returnValue(result) class UnitSettingsFactory(protocol.ServerFactory, object): protocol = UnitAgentServer def __init__(self, context_provider, invoker_provider, logger=None): """ Factory to be used by the server for communications. :param context_provider: Callable(client_id) returning an juju.state.hook.RelationHookContext. A given `client_id` will map to a single HookContext. :param invoker_provider: Callable(client_id) returning a juju.hook.invoker.Invoker. A given `client_id` will map to a single invoker. :param log: When not None a python.logging.Logger object. The log is usually managed by the UnitAgent and is passed through the factory. 
""" self.context_provider = context_provider self.invoker_provider = invoker_provider self._logger = logger self.onMade = defer.Deferred() def get_context(self, client_id): return self.context_provider(client_id) def get_invoker(self, client_id): return self.invoker_provider(client_id) def log(self, level, message): if self._logger is not None: self._logger.log(level, message) def connectionMade(self, protocol): if self.onMade: self.onMade.callback(protocol) self.onMade = None juju-0.7.orig/juju/hooks/scheduler.py0000644000000000000000000003050712135220114016046 0ustar 00000000000000import logging import os from twisted.internet.defer import ( DeferredQueue, inlineCallbacks, succeed, Deferred, QueueUnderflow, QueueOverflow) from juju.lib import serializer from juju.state.hook import RelationHookContext, RelationChange ADDED = "joined" REMOVED = "departed" MODIFIED = "modified" log = logging.getLogger("hook.scheduler") def check_writeable(path): try: with open(path, "a"): pass except IOError: raise AssertionError("%s is not writable!" % path) class HookQueue(DeferredQueue): # Single consumer, multi producer LIFO, with Nones treated # as FIFO def __init__(self, modify_callback): self._modify_callback = modify_callback super(HookQueue, self).__init__(backlog=1) def put(self, change, offset=1): """ LIFO except for a None value which is FIFO Add an object to this queue. @raise QueueOverflow: Too many objects are in this queue. """ if self.waiting: if change is None: return self.waiting.pop(0).callback(change) # Because there's a waiter we know the offset is 0 self._queue_change(change, offset=0) if self.pending: self.waiting.pop(0).callback(self.pending[0]) elif self.size is None or len(self.pending) < self.size: # If the queue is currently processing, no need # to store the stop change, it will catch the stop # during the loop iter. if change is None: return self._queue_change(change, offset) else: raise QueueOverflow() def next_change(self): """Get the next change from the queue""" if self.pending: return succeed(self.pending[0]) elif self.backlog is None or len(self.waiting) < self.backlog: d = Deferred(canceller=self._cancelGet) self.waiting.append(d) return d else: raise QueueUnderflow() def finished_change(self): """The last change fetched has been processed.""" if self.pending: value = self.pending.pop(0) self.cb_modified() return value def cb_modified(self): """Callback invoked by queue when the state has been modified""" self._modify_callback() def _previous(self, unit_name, pending): """Find the most recent previous operation for a unit. :param pending: sequence of pending operations to consider. """ for p in reversed(pending): if p['unit_name'] == unit_name: return pending.index(p), p return None, None def _wipe_member(self, unit_name, pending): """Remove a given unit from membership in pending.""" for p in pending: if unit_name in p['members']: p['members'].remove(unit_name) def _queue_change(self, change, offset=1): """Queue up the node change for execution. The process of queuing the change will automatically merge with previously queued changes. :param change: The change to queue up. :param offset: Starting position of any queue merges. If the queue is currently being processed, we don't attempt to merge with the head of the queue as its currently being operated on. """ # Find the previous change if any. 
previous_idx, previous = self._previous( change['unit_name'], self.pending[offset:]) # No previous change, just add if previous_idx is None: self.pending.append(change) return self.cb_modified() # Reduce change_idx, change_type = self._reduce( (previous_idx, previous['change_type']), (-1, change['change_type'])) # Previous change, done. if previous_idx == change_idx: return # New change, remove previous elif change_type is not None: self.pending.pop(previous_idx) change['change_type'] = change_type self.pending.append(change) # Changes cancelled, remove previous, wipe membership elif change_type is None or change_idx != previous_idx: assert change['change_type'] == REMOVED self._wipe_member( change['unit_name'], self.pending[offset:]) self.pending.pop(previous_idx) # Notify changed self.cb_modified() def _reduce(self, previous, new): """Given two change operations for a node, reduce to one operation. We depend on zookeeper's total ordering behavior as we don't attempt to handle nonsensical operation sequences like removed followed by a modified, or modified followed by an add. """ previous_clock, previous_change = previous new_clock, new_change = new if previous_change == REMOVED and new_change == ADDED: return (new_clock, MODIFIED) elif previous_change == ADDED and new_change == MODIFIED: return (previous_clock, previous_change) elif previous_change == ADDED and new_change == REMOVED: return (None, None) elif previous_change == MODIFIED and new_change == REMOVED: return (new_clock, new_change) elif previous_change == MODIFIED and new_change == MODIFIED: return (previous_clock, previous_change) elif previous_change == REMOVED and new_change == MODIFIED: return (previous_clock, previous_change) class HookScheduler(object): def __init__(self, client, executor, unit_relation, relation_ident, unit_name, state_path): self._running = False self._state_path = state_path # The thing that will actually run the hook for us self._executor = executor # For hook context construction. self._client = client self._unit_relation = unit_relation self._relation_ident = relation_ident self._relation_name = relation_ident.split(":")[0] self._unit_name = unit_name self._current_clock = None if os.path.exists(self._state_path): self._load_state() else: self._create_state() def _create_state(self): # Current units (as far as the next hook should know) self._context_members = None # Current units and settings versions (as far as the scheduler knows) self._member_versions = {} # Run queue (clock) self._run_queue = HookQueue(self._save_state) def _load_state(self): with open(self._state_path) as f: state = serializer.load(f.read()) if not state: return self._create_state() self._context_members = set(state["context_members"]) self._member_versions = state["member_versions"] self._run_queue = HookQueue(self._save_state) self._run_queue.pending = state["change_queue"] def _save_state(self): state = serializer.dump({ "context_members": sorted(self._context_members), "member_versions": self._member_versions, "change_queue": self._run_queue.pending}) temp_path = self._state_path + "~" with open(temp_path, "w") as f: f.write(state) os.rename(temp_path, self._state_path) def _execute(self, change): """Execute a hook script for a change. 
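        The `change` argument is a dict as produced by `_get_change`,
        for example:

            {"unit_name": "mysql/0",
             "change_type": "joined",
             "members": ["mysql/0", "mysql/1"]}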
""" # Assemble the change and hook execution context rel_change = RelationChange( self._relation_ident, change['change_type'], change['unit_name']) context = RelationHookContext( self._client, self._unit_relation, self._relation_ident, change['members'], unit_name=self._unit_name) # Execute the change. return self._executor(context, rel_change) def _get_change(self, unit_name, change_type, members): """ Return a hook context, corresponding to the current state of the system. """ return dict(unit_name=unit_name, change_type=change_type, members=sorted(members)) @property def running(self): return self._running is True @inlineCallbacks def run(self): assert not self._running, "Scheduler is already running" check_writeable(self._state_path) self._running = True log.debug("start") while self._running: change = yield self._run_queue.next_change() if change is None: if not self._running: break continue log.debug( "executing hook for %s:%s", change['unit_name'], change['change_type']) # Execute the hook success = yield self._execute(change) # Queue up modified immediately after change. if change['change_type'] == ADDED: self._run_queue.put( self._get_change(change['unit_name'], MODIFIED, self._context_members)) if success: self._run_queue.finished_change() else: log.debug("hook error, stopping scheduler execution") self._running = False break log.info("stopped") def stop(self): """Stop the hook execution. Note this does not stop the scheduling, the relation watcher that feeds changes to the scheduler needs to be stopped to achieve that effect. """ log.debug("stopping") if not self._running: return self._running = False # Put a marker value onto the queue to designate, stop now. # This is in case we're waiting on the queue, when the stop # occurs. The queue treats this None specially as a transient # value for extant waiters wakeup. self._run_queue.put(None) def pop(self): """Pop the next event on the queue. The goal is that on a relation hook error we'll come back up and we have the option of retrying the failed hook OR to proceed to the next event. To proceed to the next event we pop the failed event off the queue. """ assert not self._running, "Scheduler must be stopped for pop()" return self._run_queue.finished_change() def cb_change_members(self, old_units, new_units): """Watch callback invoked when the relation membership changes. """ log.debug("members changed: old=%s, new=%s", old_units, new_units) if self._context_members is None: self._context_members = set(old_units) if set(self._member_versions) != set(old_units): # Can happen when we miss seeing some changes ie. disconnected. log.debug( "old does not match last recorded units: %s", sorted(self._member_versions)) added = set(new_units) - set(self._member_versions) removed = set(self._member_versions) - set(new_units) self._member_versions.update(dict((unit, 0) for unit in added)) for unit in removed: del self._member_versions[unit] for unit_name in sorted(added): self._context_members.add(unit_name) self._run_queue.put( self._get_change(unit_name, ADDED, self._context_members), int(self._running)) for unit_name in sorted(removed): self._context_members.remove(unit_name) self._run_queue.put( self._get_change(unit_name, REMOVED, self._context_members), int(self._running)) self._save_state() def cb_change_settings(self, unit_versions): """Watch callback invoked when related units change data. 
""" log.debug("settings changed: %s", unit_versions) for (unit_name, version) in unit_versions: if version > self._member_versions.get(unit_name, 0): self._member_versions[unit_name] = version self._run_queue.put( self._get_change( unit_name, MODIFIED, self._context_members), int(self._running)) self._save_state() juju-0.7.orig/juju/hooks/tests/0000755000000000000000000000000012135220114014653 5ustar 00000000000000juju-0.7.orig/juju/hooks/tests/__init__.py0000644000000000000000000000000012135220114016752 0ustar 00000000000000juju-0.7.orig/juju/hooks/tests/hooks/0000755000000000000000000000000012135220114015776 5ustar 00000000000000juju-0.7.orig/juju/hooks/tests/test_arguments.py0000644000000000000000000001245612135220114020301 0ustar 00000000000000import logging import os from juju.errors import JujuError from juju.hooks.cli import CommandLineClient from juju.lib.testing import TestCase class TestArguments(TestCase): """ Test verifying the standard argument parsing and handling used by cli hook tools functions properly. """ def setup_environment(self): self.change_environment( JUJU_AGENT_SOCKET="/tmp/juju_agent_socket", JUJU_CLIENT_ID="xyzzy") self.change_args("test-script") def test_usage(self): output = self.capture_stream("stdout") cli = CommandLineClient() cli.setup_parser() cli.parser.print_usage() # test for the existence of a std argument to # ensure function self.assertIn("-s SOCKET", output.getvalue()) def test_default_socket_argument(self): """ verify that the socket argument is accepted from a command line flag, or the environment or raises an error. """ self.setup_environment() os.environ.pop("JUJU_AGENT_SOCKET", None) cli = CommandLineClient() cli.setup_parser() options = cli.parse_args("-s /tmp/socket".split()) self.assertEquals(options.socket, "/tmp/socket") # now set the environment variable to a known state os.environ["JUJU_AGENT_SOCKET"] = "/tmp/socket2" options = cli.parse_args() self.assertEquals(options.socket, "/tmp/socket2") err = self.capture_stream("stderr") os.environ.pop("JUJU_AGENT_SOCKET", None) error = self.failUnlessRaises(SystemExit, cli.parse_args) self.assertEquals(str(error), "No JUJU_AGENT_SOCKET/-s option found") self.assertIn("No JUJU_AGENT_SOCKET/-s option found", err.getvalue()) def test_single_keyvalue(self): """ Verify that a single key/vaule setting can be properly read from the command line. 
""" self.setup_environment() cli = CommandLineClient() cli.keyvalue_pairs = True cli.setup_parser() options = cli.parse_args(["foo=bar"]) self.assertEqual(options.keyvalue_pairs["foo"], "bar") # need to verify this is akin to the sys.argv parsing that # will occur with single and double quoted strings around # foo's right hand side options = cli.parse_args(["foo=bar none"]) self.assertEqual(options.keyvalue_pairs["foo"], "bar none") def test_multiple_keyvalue(self): self.setup_environment() cli = CommandLineClient() cli.keyvalue_pairs = True cli.setup_parser() options = cli.parse_args(["foo=bar", "baz=whatever"]) self.assertIn(("foo", "bar"), options.keyvalue_pairs.items()) self.assertIn(("baz", "whatever"), options.keyvalue_pairs.items()) def test_without_keyvalue_flag(self): self.setup_environment() output = self.capture_stream("stderr") cli = CommandLineClient() cli.keyvalue_pairs = False cli.setup_parser() # exit with the proper error code and make sure a message # appears on stderr error = self.assertRaises(SystemExit, cli.parse_args, ["foo=bar"]) self.assertEqual(error.code, 2) self.assertIn("unrecognized arguments: foo=bar", output.getvalue()) def test_bad_keyvalue_pair(self): self.setup_environment() cli = CommandLineClient() cli.keyvalue_pairs = True cli.setup_parser() options = cli.parse_args(["foo=bar", "baz=whatever", "xxx=", "yyy=", "zzz=zzz"]) self.assertIn(("foo", "bar"), options.keyvalue_pairs.items()) self.assertIn(("baz", "whatever"), options.keyvalue_pairs.items()) def test_fileinput(self): self.setup_environment() filename = self.makeFile("""This is config""") # the @ sign maps to an argparse.File cli = CommandLineClient() cli.keyvalue_pairs = True cli.default_mode = "rb" cli.setup_parser() options = cli.parse_args(["foo=@%s" % filename]) contents = options.keyvalue_pairs["foo"] self.assertEquals("This is config", contents) def test_fileinput_missing_file(self): self.setup_environment() filename = "missing" # the @ sign maps to an argparse.File cli = CommandLineClient() cli.keyvalue_pairs = True cli.default_mode = "rb" cli.setup_parser() # files in read-mode must exist at the time of the parse self.assertRaises(JujuError, cli.parse_args, ["foo=@%s" % filename]) def test_fileoutput(self): self.setup_environment() filename = self.makeFile() cli = CommandLineClient() cli.setup_parser() options = cli.parse_args(["-o", filename]) # validate that the output file output = options.output self.assertInstance(output, file) self.assertEquals(output.mode, "wb") def test_logging(self): self.setup_environment() cli = CommandLineClient() cli.keyvalue_pairs = True cli.setup_parser() options = cli.parse_args(["foo=bar", "--log-level", "info"]) cli.setup_logging() self.assertEquals(options.log_level, logging.INFO) juju-0.7.orig/juju/hooks/tests/test_cli.py0000644000000000000000000003460512135220114017043 0ustar 00000000000000# -*- encoding: utf-8 -*- import json import logging import os import StringIO from argparse import ArgumentTypeError from contextlib import closing from twisted.internet.defer import inlineCallbacks, returnValue from juju.hooks.cli import ( CommandLineClient, parse_log_level, parse_port_protocol) from juju.lib.testing import TestCase class NoopCli(CommandLineClient): """ do nothing client used to test options """ manage_logging = True manage_connection = False def run(self): return self.options def format_special(self, result, stream): """ render will lookup this method with the correct format option and make the output special!! """ print >>stream, result + "!!" 
class ErrorCli(CommandLineClient): """ do nothing client used to test options """ manage_logging = True manage_connection = False def run(self): self.exit_code = 1 raise ValueError("Checking render error") class GetCli(CommandLineClient): keyvalue_pairs = False def customize_parser(self): self.parser.add_argument("unit_name") self.parser.add_argument("settings_name", nargs="*") @inlineCallbacks def run(self): result = yield self.client.get(self.options.client_id, self.options.unit_name, self.options.settings_name) returnValue(result) class SetCli(CommandLineClient): keyvalue_pairs = True def customize_parser(self): self.parser.add_argument("unit_name") @inlineCallbacks def run(self): result = yield self.client.set(self.options.client_id, self.options.unit_name, self.options.keyvalue_pairs) returnValue(result) class TestCli(TestCase): """ Verify the integration of the protocols with the cli tool helper. """ def tearDown(self): # remove the logging handlers we installed root = logging.getLogger() root.handlers = [] def setup_exit(self, code=0): mock_exit = self.mocker.replace("sys.exit") mock_exit(code) def setup_cli_reactor(self): """ When executing the cli via tests, we need to mock out any reactor start or shutdown. """ from twisted.internet import reactor mock_reactor = self.mocker.patch(reactor) mock_reactor.run() mock_reactor.stop() reactor.running = True def setup_environment(self): self.change_environment(JUJU_AGENT_SOCKET=self.makeFile(), JUJU_CLIENT_ID="client_id") self.change_args(__file__) def test_empty_invocation(self): self.setup_cli_reactor() self.setup_environment() self.setup_exit(0) cli = CommandLineClient() cli.manage_connection = False self.mocker.replay() cli() def test_cli_get(self): self.setup_environment() self.setup_cli_reactor() self.setup_exit(0) cli = GetCli() cli.manage_connection = False obj = self.mocker.patch(cli) obj.client.get("client_id", "test_unit", ["foobar"]) self.mocker.replay() cli("test_unit foobar".split()) def test_cli_get_without_settings_name(self): self.setup_cli_reactor() self.setup_environment() self.setup_exit(0) cli = GetCli() cli.manage_connection = False obj = self.mocker.patch(cli) obj.client.get("client_id", "test_unit", []) self.mocker.replay() cli("test_unit".split()) def test_cli_set(self): """ verify the SetCli works """ self.setup_environment() self.setup_cli_reactor() self.setup_exit(0) cli = SetCli() cli.manage_connection = False obj = self.mocker.patch(cli) obj.client.set("client_id", "test_unit", {"foo": "bar", "sheep": "lamb"}) self.mocker.replay() cli("test_unit foo=bar sheep=lamb".split()) def test_cli_set_fileinput(self): """ verify the SetCli works """ self.setup_environment() self.setup_cli_reactor() self.setup_exit(0) contents = "this is a test" filename = self.makeFile(contents) cli = SetCli() cli.manage_connection = False obj = self.mocker.patch(cli) obj.client.set("client_id", "test_unit", {"foo": "bar", "sheep": contents}) self.mocker.replay() # verify that the @notation read the file cmdline = "test_unit foo=bar sheep=@%s" % (filename) cli(cmdline.split()) def test_json_output(self): self.setup_environment() self.setup_cli_reactor() self.setup_exit(0) filename = self.makeFile() data = dict(a="b", c="d") cli = NoopCli() obj = self.mocker.patch(cli) obj.run() self.mocker.result(data) self.mocker.replay() cli(("--format json -o %s" % filename).split()) with open(filename, "r") as fp: result = fp.read() self.assertEquals(json.loads(result), data) def test_special_format(self): self.setup_environment() 
self.setup_cli_reactor() self.setup_exit(0) filename = self.makeFile() data = "Base Value" cli = NoopCli() obj = self.mocker.patch(cli) obj.run() self.mocker.result(data) self.mocker.replay() cli(("--format special -o %s" % filename).split()) with open(filename, "r") as fp: result = fp.read() self.assertEquals(result, data + "!!\n") def test_cli_no_socket(self): # don't set up the environment with a socket self.change_environment() self.change_args(__file__) cli = GetCli() cli.manage_connection = False cli.manage_logging = False self.mocker.replay() error_log = self.capture_stream("stderr") error = self.failUnlessRaises(SystemExit, cli, "test_unit foobar".split()) self.assertEquals(error.code, 2) self.assertIn("No JUJU_AGENT_SOCKET", error_log.getvalue()) def test_cli_no_client_id(self): # don't set up the environment with a socket self.setup_environment() del os.environ["JUJU_CLIENT_ID"] self.change_args(__file__) cli = GetCli() cli.manage_connection = False cli.manage_logging = False self.mocker.replay() error_log = self.capture_stream("stderr") error = self.failUnlessRaises(SystemExit, cli, "test_unit foobar".split()) self.assertEquals(error.code, 2) self.assertIn("No JUJU_CLIENT_ID", error_log.getvalue()) def test_log_level(self): self.setup_environment() self.change_args(__file__) cli = GetCli() cli.manage_connection = False self.mocker.replay() # bad log level log = self.capture_logging() cli.setup_parser() cli.parse_args("--log-level XYZZY test_unit".split()) self.assertIn("Invalid log level", log.getvalue()) # still get a default self.assertEqual(cli.options.log_level, logging.INFO) # good symbolic name cli.parse_args("--log-level CRITICAL test_unit".split()) self.assertEqual(cli.options.log_level, logging.CRITICAL) # made up numeric level cli.parse_args("--log-level 42 test_unit".split()) self.assertEqual(cli.options.log_level, 42) def test_log_format(self): self.setup_environment() self.change_args(__file__) cli = NoopCli() cli.setup_parser() cli.parse_args("--format smart".split()) self.assertEqual(cli.options.format, "smart") cli.parse_args("--format json".split()) self.assertEqual(cli.options.format, "json") out = self.capture_stream("stdout") err = self.capture_stream("stderr") self.setup_cli_reactor() self.setup_exit(0) self.mocker.replay() cli("--format missing".split()) self.assertIn("missing", err.getvalue()) self.assertIn("Namespace", out.getvalue()) def test_render_error(self): self.setup_environment() self.change_args(__file__) cli = ErrorCli() # bad log level err = self.capture_stream("stderr") self.setup_cli_reactor() self.setup_exit(1) self.mocker.replay() cli("") # make sure we got a traceback on stderr self.assertIn("Checking render error", err.getvalue()) def test_parse_log_level(self): self.assertEquals(parse_log_level("INFO"), logging.INFO) self.assertEquals(parse_log_level("ERROR"), logging.ERROR) self.assertEquals(parse_log_level(logging.INFO), logging.INFO) self.assertEquals(parse_log_level(logging.ERROR), logging.ERROR) def test_parse_port_protocol(self): self.assertEqual(parse_port_protocol("80"), (80, "tcp")) self.assertEqual(parse_port_protocol("443/tcp"), (443, "tcp")) self.assertEqual(parse_port_protocol("53/udp"), (53, "udp")) self.assertEqual(parse_port_protocol("443/TCP"), (443, "tcp")) self.assertEqual(parse_port_protocol("53/UDP"), (53, "udp")) error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "eighty") self.assertEqual( str(error), "Invalid port, must be an integer, got 'eighty'") error = self.assertRaises(ArgumentTypeError, 
parse_port_protocol, "fifty-three/udp") self.assertEqual( str(error), "Invalid port, must be an integer, got 'fifty-three'") error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "53/udp/") self.assertEqual( str(error), "Invalid format for port/protocol, got '53/udp/'") error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "53/udp/bad-format") self.assertEqual( str(error), "Invalid format for port/protocol, got '53/udp/bad-format'") error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "0") self.assertEqual( str(error), "Invalid port, must be from 1 to 65535, got 0") error = self.assertRaises( ArgumentTypeError, parse_port_protocol, "65536") self.assertEqual( str(error), "Invalid port, must be from 1 to 65535, got 65536") error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "53/not-a-valid-protocol") self.assertEqual( str(error), "Invalid protocol, must be 'tcp' or 'udp', " "got 'not-a-valid-protocol'") def assert_smart_output_v1(self, sample, formatted=object()): """Verifies output serialization""" # No roundtripping is verified because str(obj) is in general # not roundtrippable cli = CommandLineClient() with closing(StringIO.StringIO()) as output: cli.format_smart(sample, output) self.assertEqual(output.getvalue(), formatted) def assert_format_smart_v1(self): """Verifies legacy smart format v1 which uses Python str encoding""" self.assert_smart_output_v1(None, "") # No \n in output for None self.assert_smart_output_v1("", "\n") self.assert_smart_output_v1("A string", "A string\n") self.assert_smart_output_v1( "High bytes: \xca\xfe", "High bytes: \xca\xfe\n") self.assert_smart_output_v1(u"", "\n") self.assert_smart_output_v1( u"A unicode string (but really ascii)", "A unicode string (but really ascii)\n") # Maintain LP bug #901495, fixed in v2 format; this happens because # str(obj) is used e = self.assertRaises( UnicodeEncodeError, self.assert_smart_output_v1, u"中文") self.assertEqual( str(e), ("'ascii' codec can't encode characters in position 0-1: " "ordinal not in range(128)")) self.assert_smart_output_v1({}, "{}\n") self.assert_smart_output_v1( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com"}, "{u'public-address': u'ec2-1-2-3-4.compute-1.amazonaws.com'}\n") self.assert_smart_output_v1(False, "False\n") self.assert_smart_output_v1(True, "True\n") self.assert_smart_output_v1(0.0, "0.0\n") self.assert_smart_output_v1(3.14159, "3.14159\n") self.assert_smart_output_v1(6.02214178e23, "6.02214178e+23\n") self.assert_smart_output_v1(0, "0\n") self.assert_smart_output_v1(42, "42\n") def test_format_smart_v1_implied(self): """Smart format v1 is implied if _JUJU_CHARM_FORMAT is not defined""" # Double check env setup self.assertNotIn("_JUJU_CHARM_FORMAT", os.environ) self.assert_format_smart_v1() def test_format_smart_v1(self): """Verify legacy format v1 works""" self.change_environment(_JUJU_CHARM_FORMAT="1") self.assert_format_smart_v1() def assert_smart_output(self, sample, formatted): cli = CommandLineClient() with closing(StringIO.StringIO()) as output: cli.format_smart(sample, output) self.assertEqual(output.getvalue(), formatted) def test_format_smart_v2(self): """Verifies smart format v2 writes raw strings properly""" self.change_environment(_JUJU_CHARM_FORMAT="2") # For each case, verify actual output serialization along with # roundtripping through YAML self.assert_smart_output(None, "") # No newline in output for None self.assert_smart_output("", "") self.assert_smart_output("A string", "A string") self.assert_smart_output( 
"High bytes: \xCA\xFE", "High bytes: \xca\xfe") self.assert_smart_output("中文", "\xe4\xb8\xad\xe6\x96\x87") self.assert_smart_output( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com", u"foo": u"bar", u"configured": True}, ("configured: true\n" "foo: bar\n" "public-address: ec2-1-2-3-4.compute-1.amazonaws.com")) juju-0.7.orig/juju/hooks/tests/test_communications.py0000644000000000000000000003515312135220114021323 0ustar 00000000000000from StringIO import StringIO import logging from twisted.internet import protocol, defer, error from juju.errors import JujuError from juju.hooks.protocol import (UnitAgentClient, UnitAgentServer, UnitSettingsFactory, NoSuchUnit, NoSuchKey, MustSpecifyRelationName) from juju.lib.mocker import ANY from juju.lib.testing import TestCase from juju.lib.twistutils import gather_results from juju.state.errors import UnitRelationStateNotFound def _loseAndPass(err, proto): # be specific, pass on the error to the client. err.trap(error.ConnectionLost, error.ConnectionDone) del proto.connectionLost proto.connectionLost(err) class UnitAgentServerMock(UnitAgentServer): def _get_data(self): return self.factory.data def _set_data(self, dictlike): """ protected method used in testing to rewrite internal data state """ self.factory.data = dictlike data = property(_get_data, _set_data) def _set_members(self, members): # replace the content of the current list self.factory.members = members def _set_relation_idents(self, relation_idents): self.factory.relation_idents = relation_idents @property def config(self): return self.factory.config def config_set(self, dictlike): """Write service state directly. """ self.factory.config_set(dictlike) class MockServiceUnitState(object): def __init__(self): self.ports = set() self.config = {} def get_container(self): return None def open_port(self, port, proto): self.ports.add((port, proto)) def close_port(self, port, proto): self.ports.discard((port, proto)) def get_public_address(self): return self.config.get("public-address", "") def get_private_address(self): return self.config.get("private-address", "") MockServiceUnitState = MockServiceUnitState() class MockServiceState(object): def get_unit_state(self, unit_name): return MockServiceUnitState class MockInvoker(object): def __init__(self, charm_format): self.charm_format = charm_format class UnitSettingsFactoryLocal(UnitSettingsFactory): """ For testing a UnitSettingsFactory with local storage. 
Loosely mimics a HookContext """ protocol = UnitAgentServerMock def __init__(self): super(UnitSettingsFactoryLocal, self).__init__( self.context_provider, self.invoker) self.data = {} # relation data self.config = {} # service options self.members = [] self.relation_idents = [] self._agent_io = StringIO() self._invoker = MockInvoker(charm_format=1) # hook context and a logger to the settings factory logger = logging.getLogger("unit-settings-fact-test") handler = logging.StreamHandler(self._agent_io) handler.setFormatter(logging.Formatter("%(levelname)s %(message)s")) logger.addHandler(handler) base = super(UnitSettingsFactoryLocal, self) base.__init__(self.context_provider, self.invoker, logger) def context_provider(self, client_id): return self def invoker(self, client_id): return self._invoker def get_value(self, unit_name, setting_name): return self.data[unit_name][setting_name] def get(self, unit_name): # Currently this is cheating as the real impl can if unit_name not in self.data: # raise it with fake data raise UnitRelationStateNotFound("mysql/1", "server", unit_name) return self.data[unit_name] def get_members(self): return self.members def get_relation_idents(self, relation_name): return self.relation_idents def set_value(self, key, value): self.data.setdefault(self._unit_name, {})[key] = value def set(self, blob): self.data[self._unit_name] = blob def _set_unit_name(self, unit_name): self._unit_name = unit_name def config_set(self, data): """Directly update service options for testing.""" self.config.update(data) def get_config(self, option_name=None): d = self.config.copy() if option_name: d = d[option_name] return d def get_local_unit_state(self): return MockServiceUnitState class LiveFireBase(TestCase): """ Utility for connected reactor-using tests. """ def _listen_server(self, addr): from twisted.internet import reactor self.server_factory = UnitSettingsFactoryLocal() self.server_socket = reactor.listenUNIX(addr, self.server_factory) self.addCleanup(self.server_socket.stopListening) return self.server_socket def _connect_client(self, addr): from twisted.internet import reactor d = protocol.ClientCreator( reactor, self.client_protocol).connectUNIX(addr) return d def setUp(self): """ Create an amp server and connect a client to it. """ super(LiveFireBase, self).setUp() sock = self.makeFile() self._listen_server(sock) on_client_connect = self._connect_client(sock) def getProtocols(results): [(_, client), (_, server)] = results self.client = client self.server = server dl = defer.DeferredList([on_client_connect, self.server_factory.onMade]) return dl.addCallback(getProtocols) def tearDown(self): """ Cleanup client and server connections, and check the error got at C{connectionLost}. """ L = [] for conn in self.client, self.server: if conn.transport is not None: # depend on amp's function connection-dropping behavior d = defer.Deferred().addErrback(_loseAndPass, conn) conn.connectionLost = d.errback conn.transport.loseConnection() L.append(d) super(LiveFireBase, self).tearDown() return gather_results(L) class TestCommunications(LiveFireBase): """ Verify that client and server can communicate with the proper protocol. 
""" client_protocol = UnitAgentClient server_protocol = UnitAgentServer @defer.inlineCallbacks def setUp(self): yield super(TestCommunications, self).setUp() self.log = self.capture_logging( level=logging.DEBUG, formatter=logging.Formatter("%(levelname)s %(message)s")) @defer.inlineCallbacks def test_relation_get_command(self): # Allow our testing class to pass the usual guard require_test_context = self.mocker.replace( "juju.hooks.protocol.require_relation_context") require_test_context(ANY) self.mocker.result(True) self.mocker.count(3) self.mocker.replay() # provide fake data to the server so the client can test it for # verification self.server.data = dict(test_node=dict(a="b", foo="bar")) self.assertIn("test_node", self.server.factory.data) data = yield self.client.relation_get( "client_id", "", "test_node", "a") self.assertEquals(data, "b") data = yield self.client.relation_get( "client_id", "", "test_node", "foo") self.assertEquals(data, "bar") # A request for asks for all the settings data = yield self.client.relation_get( "client_id", "", "test_node", "") self.assertEquals(data["a"], "b") self.assertEquals(data["foo"], "bar") @defer.inlineCallbacks def test_get_no_such_unit(self): """ An attempt to retrieve a value for a nonexistant unit raises an appropriate error. """ # Allow our testing class to pass the usual guard require_test_context = self.mocker.replace( "juju.hooks.protocol.require_relation_context") require_test_context(ANY) self.mocker.result(True) self.mocker.replay() yield self.assertFailure( self.client.relation_get( "client_id", "", "missing_unit/99", ""), NoSuchUnit) @defer.inlineCallbacks def test_relation_with_nonrelation_context(self): """ Verify that using a non-relation context doesn't allow for the calling of relation commands and that an appropriate error is available. """ # Allow our testing class to pass the usual guard from juju.hooks.protocol import NotRelationContext failure = self.client.relation_get( "client_id", "", "missing_unit/99", "") yield self.assertFailure(failure, NotRelationContext) @defer.inlineCallbacks def test_relation_set_command(self): # Allow our testing class to pass the usual guard require_test_context = self.mocker.replace( "juju.hooks.protocol.require_relation_context") require_test_context(ANY) self.mocker.result(True) self.mocker.replay() self.assertEquals(self.server.data, {}) # for testing mock the context being stored in the factory self.server_factory._set_unit_name("test_node") result = yield self.client.relation_set( "client_id", "", dict(a="b", foo="bar")) # set returns nothing self.assertEqual(result, None) # verify the data exists in the server now self.assertTrue(self.server.data) self.assertEquals(self.server.data["test_node"]["a"], "b") self.assertEquals(self.server.data["test_node"]["foo"], "bar") def test_must_specify_relation_name(self): """Verify `MustSpecifyRelationName` exception`""" error = MustSpecifyRelationName() self.assertTrue(isinstance(error, JujuError)) self.assertEquals( str(error), "Relation name must be specified") @defer.inlineCallbacks def test_relation_ids(self): """Verify api support of relation_ids command""" # NOTE: this is the point where the externally visible usage # of "relation ids" (as seen in the relation-ids command) is # converted to "relation idents", hence the use of both # conventions here. (It has to be somewhere.) 
self.server.factory.relation_type = "server" self.server._set_relation_idents(["db:0", "db:1", "db:42"]) relation_idents = yield self.client.relation_ids("client_id", "db") self.assertEqual(relation_idents, ["db:0", "db:1", "db:42"]) # A relation name must be specified. e = yield self.assertFailure( self.client.relation_ids("client_id", ""), MustSpecifyRelationName) self.assertEqual(str(e), "Relation name must be specified") @defer.inlineCallbacks def test_list_relations(self): # Allow our testing class to pass the usual guard require_test_context = self.mocker.replace( "juju.hooks.protocol.require_relation_context") require_test_context(ANY) self.mocker.result(True) self.mocker.replay() self.server.factory.relation_type = "peer" self.server._set_members(["riak/1", "riak/2"]) members = yield self.client.list_relations("client_id", "") self.assertIn("riak/1", members) self.assertIn("riak/2", members) @defer.inlineCallbacks def test_log_command(self): # This is the default calling convention from clients yield self.client.log(logging.WARNING, ["This", "is", "a", "WARNING"]) yield self.client.log(logging.INFO, "This is INFO") yield self.client.log(logging.CRITICAL, ["This is CRITICAL"]) self.assertIn("WARNING This is a WARNING", self.log.getvalue()) self.assertIn("INFO This is INFO", self.log.getvalue()) self.assertIn("CRITICAL This is CRITICAL", self.log.getvalue()) @defer.inlineCallbacks def test_config_get_command(self): """Verify ConfigGetCommand. Test that the communication between the client and server side of the protocol is marshalling data as expected. Using mock data and services this exists only to test that self.client.config_get is returning expected data. """ self.server.config_set(dict(a="b", foo="bar")) data = yield self.client.config_get("client_id", "a") self.assertEquals(data, "b") data = yield self.client.config_get("client_id", "foo") self.assertEquals(data, "bar") # A request for asks for all the settings data = yield self.client.config_get("client_id", "") self.assertEquals(data["a"], "b") self.assertEquals(data["foo"], "bar") # test with valid option names data = yield self.client.config_get("client_id", "a") self.assertEquals(data, "b") data = yield self.client.config_get("client_id", "foo") self.assertEquals(data, "bar") # test with invalid option name data = yield self.client.config_get("client_id", "missing") self.assertEquals(data, None) @defer.inlineCallbacks def test_port_commands(self): mock_service_unit_state = MockServiceState().get_unit_state("mock/0") yield self.client.open_port("client-id", 80, "tcp") self.assertEqual(mock_service_unit_state.ports, set([(80, "tcp")])) yield self.client.open_port("client-id", 53, "udp") yield self.client.close_port("client-id", 80, "tcp") self.assertEqual(mock_service_unit_state.ports, set([(53, "udp")])) yield self.client.close_port("client-id", 53, "udp") self.assertEqual(mock_service_unit_state.ports, set()) self.assertIn( "DEBUG opened 80/tcp\n" "DEBUG opened 53/udp\n" "DEBUG closed 80/tcp\n" "DEBUG closed 53/udp\n", self.log.getvalue()) @defer.inlineCallbacks def test_unit_get_commands(self): mock_service_unit_state = MockServiceState().get_unit_state("mock/0") mock_service_unit_state.config["public-address"] = "foobar.example.com" value = yield self.client.get_unit_info("client-id", "public-address") self.assertEqual(value, {"data": "foobar.example.com"}) yield self.assertFailure( self.client.get_unit_info("client-id", "garbage"), NoSuchKey) # Shouldn't ever happen in practice (unit agent inits on startup) value = 
yield self.client.get_unit_info("client-id", "private-address") self.assertEqual(value, {"data": ""}) juju-0.7.orig/juju/hooks/tests/test_executor.py0000644000000000000000000003373112135220114020131 0ustar 00000000000000import logging import os import subprocess import sys from twisted.internet.defer import inlineCallbacks, Deferred from twisted.internet.error import ProcessExitedAlready import juju.hooks.executor from juju.hooks.executor import HookExecutor from juju.hooks.invoker import Invoker from juju.lib.testing import TestCase from juju.lib.twistutils import gather_results class HookExecutorTest(TestCase): def setUp(self): self.lock_path = os.path.join(self.makeDir(), "hook.lock") self.patch(HookExecutor, "LOCK_PATH", self.lock_path) self._executor = HookExecutor() self.output = self.capture_logging("hook.executor", logging.DEBUG) @inlineCallbacks def test_observer(self): """An observer can be registered against the executor to recieve callbacks when hooks are executed.""" results = [] d = Deferred() def observer(hook_path): results.append(hook_path) if len(results) == 3: d.callback(True) self._executor.set_observer(observer) self._executor.start() class _Invoker(object): def get_context(self): return None def __call__(self, hook_path): results.append(hook_path) hook_path = self.makeFile("hook content") yield self._executor(_Invoker(), hook_path) # Also observes non existant hooks yield self._executor(_Invoker(), self.makeFile()) self.assertEqual(len(results), 3) @inlineCallbacks def test_start_deferred_ends_on_stop(self): """The executor start method returns a deferred that fires when the executor has been stopped.""" stopped = [] def on_start_finish(result): self.assertTrue(stopped) d = self._executor.start() d.addCallback(on_start_finish) stopped.append(True) yield self._executor.stop() self._executor.debug = True yield d def test_start_start(self): """Attempting to start twice raises an exception.""" self._executor.start() return self.assertFailure(self._executor.start(), AssertionError) def test_stop_stop(self): """Attempt to stop twice raises an exception.""" self._executor.start() self._executor.stop() return self.assertFailure(self._executor.stop(), AssertionError) @inlineCallbacks def test_debug_hook(self): """A debug hook is executed if a debug hook name is found. """ self.output = self.capture_logging( "hook.executor", level=logging.DEBUG) results = [] class _Invoker(object): def get_context(self): return None def __call__(self, hook_path): results.append(hook_path) self._executor.set_debug(["*"]) self._executor.start() yield self._executor(_Invoker(), "abc") self.assertNotEqual(results, ["abc"]) self.assertIn("abc", self.output.getvalue()) def test_get_debug_hook_path_executable(self): """The debug hook path return from the executor should be executable. 
""" self.patch( juju.hooks.executor, "DEBUG_HOOK_TEMPLATE", "#!/bin/bash\n echo {hook_name}\n exit 0") self._executor.set_debug(["*"]) debug_hook = self._executor.get_hook_path("something/good") stdout = open(self.makeFile(), "w+") p = subprocess.Popen(debug_hook, stdout=stdout.fileno()) self.assertEqual(p.wait(), 0) stdout.seek(0) self.assertEqual(stdout.read(), "good\n") @inlineCallbacks def test_end_debug_with_exited_process(self): """Ending debug with a process that has already ended is a noop.""" results = [] class _Invoker(object): process_ended = Deferred() def get_context(self): return None def __call__(self, hook_path): results.append(hook_path) return self.process_ended def send_signal(self, signal_id): if results: results.append(1) raise ProcessExitedAlready() results.append(2) raise ValueError("No such process") self._executor.start() self._executor.set_debug(["abc"]) hook_done = self._executor(_Invoker(), "abc") self._executor.set_debug(None) _Invoker.process_ended.callback(True) yield hook_done self.assertEqual(len(results), 2) self.assertNotEqual(results[0], "abc") self.assertEqual(results[1], 1) @inlineCallbacks def test_end_debug_with_hook_not_started(self): results = [] class _Invoker(object): process_ended = Deferred() def get_context(self): return None def __call__(self, hook_path): results.append(hook_path) return self.process_ended def send_signal(self, signal_id): if len(results) == 1: results.append(1) raise ValueError() results.append(2) raise ProcessExitedAlready() self._executor.start() self._executor.set_debug(["abc"]) hook_done = self._executor(_Invoker(), "abc") self._executor.set_debug(None) _Invoker.process_ended.callback(True) yield hook_done self.assertEqual(len(results), 2) self.assertNotEqual(results[0], "abc") self.assertEqual(results[1], 1) @inlineCallbacks def test_end_debug_with_debug_running(self): """If a debug hook is running, it is signaled if the debug is disabled. """ self.patch( juju.hooks.executor, "DEBUG_HOOK_TEMPLATE", "\n".join(("#!/bin/bash", "exit_handler() {", " echo clean exit", " exit 0", "}", 'trap "exit_handler" HUP', "sleep 0.2", "exit 1"))) unit_dir = self.makeDir() charm_dir = os.path.join(unit_dir, "charm") self.makeDir(path=charm_dir) self._executor.set_debug(["*"]) log = logging.getLogger("invoker") # Populate environment variables for default invoker. self.change_environment( JUJU_UNIT_NAME="dummy/1", JUJU_ENV_UUID="snowflake", PATH="/bin/:/usr/bin") output = self.capture_logging("invoker", level=logging.DEBUG) invoker = Invoker( None, None, "constant", self.makeFile(), unit_dir, log) self._executor.start() hook_done = self._executor(invoker, "abc") # Give a moment for execution to start. yield self.sleep(0.1) self._executor.set_debug(None) yield hook_done self.assertIn("clean exit", output.getvalue()) def test_get_debug_hook_path(self): """A debug hook file path is returned if a debug hook name is found. """ # Default is to return the file path. file_path = self.makeFile() hook_name = os.path.basename(file_path) self.assertEquals(self._executor.get_hook_path(file_path), file_path) # Hook names can be specified as globs. self._executor.set_debug(["*"]) debug_hook_path = self._executor.get_hook_path(file_path) self.assertNotEquals(file_path, debug_hook_path) # The hook base name is suffixed onto the debug hook file self.assertIn(os.path.basename(file_path), os.path.basename(debug_hook_path)) # Verify the debug hook contents. 
    def test_get_debug_hook_path(self):
        """A debug hook file path is returned if a debug hook name is found.
        """
        # Default is to return the file path.
        file_path = self.makeFile()
        hook_name = os.path.basename(file_path)
        self.assertEquals(self._executor.get_hook_path(file_path), file_path)

        # Hook names can be specified as globs.
        self._executor.set_debug(["*"])
        debug_hook_path = self._executor.get_hook_path(file_path)
        self.assertNotEquals(file_path, debug_hook_path)

        # The hook base name is suffixed onto the debug hook file.
        self.assertIn(os.path.basename(file_path),
                      os.path.basename(debug_hook_path))

        # Verify the debug hook contents.
        debug_hook_file = open(debug_hook_path)
        debug_contents = debug_hook_file.read()
        debug_hook_file.close()
        self.assertIn("hook.sh", debug_contents)
        self.assertIn("-n %s" % hook_name, debug_contents)
        self.assertTrue(os.access(debug_hook_path, os.X_OK))

        # The hook debug can be set back to none.
        self._executor.set_debug(None)
        self.assertEquals(self._executor.get_hook_path(file_path), file_path)

        # The executor can debug only selected hooks.
        self._executor.set_debug(["abc"])
        self.assertEquals(self._executor.get_hook_path(file_path), file_path)

        # The debug hook file is removed on the next hook path access.
        self.assertFalse(os.path.exists(debug_hook_path))

    def test_hook_exception_propgates(self):
        """An error in a hook is propagated to the execution deferred."""

        class _Invoker:

            def get_context(self):
                return None

            def __call__(self, hook_path):
                raise AttributeError("Foo")

        hook_path = self.makeFile("never got here")
        self._executor.start()
        return self.assertFailure(
            self._executor(_Invoker(), hook_path), AttributeError)

    @inlineCallbacks
    def test_executor_running_property(self):
        self._executor.start()
        self.assertTrue(self._executor.running)
        yield self._executor.stop()
        self.assertFalse(self._executor.running)

    @inlineCallbacks
    def test_nonexistant_hook_skipped(self):
        """If a hook does not exist, a warning is logged and the hook is
        skipped.
        """

        class _Invoker:

            def get_context(self):
                return None

        self._executor.start()
        hook_path = self.makeFile()
        value = yield self._executor(_Invoker(), hook_path)
        self.assertEqual(value, False)
        self.assertIn("Hook does not exist, skipping %s" % hook_path,
                      self.output.getvalue())

    def test_start_stop_start(self):
        """The executor can be stopped and restarted."""
        results = []

        def invoke(hook_path):
            results.append(hook_path)

        self._executor(invoke, "1")
        start_complete = self._executor.start()
        self._executor.stop()
        yield start_complete
        self.assertEqual(len(results), 1)

        self._executor(invoke, "1")
        self._executor(invoke, "2")
        start_complete = self._executor.start()
        self._executor.stop()
        yield start_complete
        self.assertEqual(len(results), 3)

    @inlineCallbacks
    def test_run_priority_hook_while_already_running(self):
        """Attempting to run a priority hook while running is an error."""

        def invoke(hook_path):
            pass

        self._executor.start()
        error = yield self.assertFailure(
            self._executor.run_priority_hook(invoke, "foobar"),
            AssertionError)
        self.assertEquals(str(error), "Executor must not be running")
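    # The debug-hook behaviour exercised above (set_debug() plus
    # get_hook_path()) amounts to: match the hook's base name against the
    # configured globs and, on a hit, hand back a generated executable
    # script instead of the real hook. A minimal sketch of that resolution
    # follows, kept in a comment so it stands apart from the test suite;
    # the names and the template are illustrative assumptions, not the
    # real implementation:
    #
    #   import fnmatch, os, tempfile
    #
    #   DEBUG_TEMPLATE = "#!/bin/bash\necho debugging {hook_name}\n"
    #
    #   def resolve_hook_path(hook_path, debug_globs):
    #       hook_name = os.path.basename(hook_path)
    #       if not any(fnmatch.fnmatch(hook_name, g)
    #                  for g in debug_globs or ()):
    #           return hook_path
    #       fd, debug_path = tempfile.mkstemp(suffix="-" + hook_name)
    #       os.write(fd, DEBUG_TEMPLATE.format(hook_name=hook_name))
    #       os.close(fd)
    #       os.chmod(debug_path, 0o700)  # the tests assert the X bit is set
    #       return debug_path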
""" results = [] execs = [] hooks = [self.makeFile(str(i)) for i in range(5)] class _Invoker(object): def get_context(self): return None def __call__(self, hook_path): results.append(hook_path) invoke = _Invoker() for i in hooks: execs.append(self._executor(invoke, i)) priority_hook = self.makeFile(str("me first")) yield self._executor.run_priority_hook(invoke, priority_hook) self._executor.start() yield gather_results(execs) hooks.insert(0, priority_hook) self.assertEqual(results, hooks) def assert_lock_pid(self, pid=None): pid = pid or os.getpid() self.assertTrue(os.path.lexists(self.lock_path)) fpid = int(os.readlink(self.lock_path)) self.assertEqual(pid, fpid) @inlineCallbacks def test_fs_lock(self): """An FS Lock is acquired while hooks are executing.""" results = [] test = self class _Invoker(object): def get_context(self): return None def __call__(self, hook_path): test.assert_lock_pid() results.append(hook_path) invoker = _Invoker() d = self._executor.start() yield self._executor(invoker, self.makeFile("a")) yield self._executor(invoker, self.makeFile("b")) self._executor.stop() yield d self.assertEqual(len(results), 2) @inlineCallbacks def test_fs_lock_invalid_pid(self): """An invalid pid on the lock is handled.""" os.symlink(str(2 ** 31 - 1), self.lock_path) test = self class _Invoker(object): def get_context(self): return None def __call__(self, hook_path): test.assert_lock_pid() invoker = _Invoker() self._executor.start() # This would block forever if unsuccessful yield self._executor(invoker, self.makeFile("a")) def test_serialized_execution(self): """Hook execution is serialized via the HookExecution api. """ wait_callback = [Deferred() for i in range(5)] finish_callback = [Deferred() for i in range(5)] results = [] @inlineCallbacks def invoker(hook_path): results.append(hook_path) yield finish_callback[len(results) - 1] wait_callback[len(results) - 1].callback(True) start_complete = self._executor.start() for i in range(5): self._executor(invoker, "hook-%s" % i) self.assertEqual(len(results), 1) finish_callback[1].callback(True) self.assertEqual(len(results), 1) # Verify stop behavior stop_complete = yield self._executor.stop() # Finish the running execution. finish_callback[0].callback(True) # Verify we've stopped executing. yield stop_complete self.assertTrue(start_complete.called) self.assertEqual(len(results), 1) # Start the executioner again. 
juju-0.7.orig/juju/hooks/tests/test_invoker.py0000644000000000000000000025212412135220114017747 0ustar 00000000000000# -*- encoding: utf-8 -*-
from StringIO import StringIO
import base64
import json
import logging
import os
import stat
import sys

from twisted.internet import defer
from twisted.internet.process import Process

import juju

from juju import errors
from juju.control.tests.test_status import StatusTestBase
from juju.environment.tests.test_config import EnvironmentsConfigTestBase
from juju.lib.pick import pick_attr
from juju.hooks import invoker
from juju.hooks import commands
from juju.hooks.protocol import UnitSettingsFactory
from juju.lib import serializer
from juju.lib.mocker import MATCH
from juju.lib.twistutils import get_module_directory
from juju.state import hook
from juju.state.endpoint import RelationEndpoint
from juju.state.errors import RelationIdentNotFound
from juju.state.relation import RelationStateManager
from juju.state.tests.test_relation import RelationTestBase


class MockUnitAgent(object):
    """Pretends to implement the client state cache, and the UA hook socket.
    """

    def __init__(self, client, socket_path, charm_dir):
        self.client = client
        self.socket_path = socket_path
        self.charm_dir = charm_dir

        self._clients = {}  # client_id -> HookContext
        self._invokers = {}  # client_id -> Invoker

        self._agent_log = logging.getLogger("unit-agent")
        self._agent_io = StringIO()
        handler = logging.StreamHandler(self._agent_io)
        self._agent_log.addHandler(handler)

        self.server_listen()

    def make_context(self, relation_ident, change_type, unit_name,
                     unit_relation, client_id):
        """Create, record and return a HookContext object for a change."""
        change = hook.RelationChange(relation_ident, change_type, unit_name)
        context = hook.RelationHookContext(
            self.client, unit_relation, relation_ident,
            unit_name=unit_name)
        self._clients[client_id] = context
        return context, change

    def get_logger(self):
        """Build a logger to be used for a hook."""
        logger = logging.getLogger("hook")
        log_file = StringIO()
        handler = logging.StreamHandler(log_file)
        logger.addHandler(handler)
        return logger

    @defer.inlineCallbacks
    def get_invoker(self, relation_ident, change_type, unit_name,
                    unit_relation, client_id="client_id"):
        """Build an Invoker for the execution of a hook.

        `relation_ident`: relation identity of the relation the Invoker
            is for.
        `change_type`: the string name of the type of change the hook
            is invoked for.
        `unit_name`: the name of the local unit of the change.
        `unit_relation`: a UnitRelationState instance for the hook.
        `client_id`: unique client identifier.
        `service`: The local service of the executing hook.
        """
        context, change = self.make_context(
            relation_ident, change_type, unit_name, unit_relation,
            client_id)
        logger = self.get_logger()
        exe = invoker.Invoker(
            context, change, self.get_client_id(),
            self.socket_path, self.charm_dir, logger)
        yield exe.start()
        self._invokers[client_id] = exe
        defer.returnValue(exe)

    def get_client_id(self):
        # simulate associating a client_id with a client connection
        # for later context look up. In reality this would be a mapping.
        return "client_id"

    def get_context(self, client_id):
        return self._clients[client_id]

    def lookup_invoker(self, client_id):
        return self._invokers[client_id]

    def stop(self):
        """Stop the process invocation.

        Trigger any registered cleanup functions.
""" self.server_socket.stopListening() def server_listen(self): from twisted.internet import reactor # hook context and a logger to the settings factory logger = logging.getLogger("unit-agent") self.log_file = StringIO() handler = logging.StreamHandler(self.log_file) logger.addHandler(handler) self.server_factory = UnitSettingsFactory( self.get_context, self.lookup_invoker, logger) self.server_socket = reactor.listenUNIX( self.socket_path, self.server_factory) def capture_separate_log(name, level): """Support the separate capture of logging at different log levels. TestCase.capture_logging only allows one level to be captured at any given time. Given that the hook log captures both data AND traditional logging, it's useful to separate. """ logger = logging.getLogger(name) output = StringIO() handler = logging.StreamHandler(output) handler.setLevel(level) logger.addHandler(handler) return output def get_cli_environ_path(*search_path): """Construct a path environment variable. This path will contain the juju bin directory and any paths passed as *search_path. @param search_path: additional directories to put on PATH """ search_path = list(search_path) # Look for the top level juju bin directory and make sure # that is available for the client utilities. bin_path = os.path.normpath( os.path.join(get_module_directory(juju), "..", "bin")) search_path.append(bin_path) search_path.extend(os.environ.get("PATH", "").split(":")) return ":".join(search_path) class InvokerTestBase(EnvironmentsConfigTestBase): @defer.inlineCallbacks def setUp(self): yield super(InvokerTestBase, self).setUp() yield self.push_default_config() def update_invoker_env(self, local_unit, remote_unit): """Update os.env for a hook invocation. Update the invoker (and hence the hook) environment with the path to the juju cli utils, and the local unit name. """ test_hook_path = os.path.join( os.path.abspath( os.path.dirname(__file__)).replace("/_trial_temp", ""), "hooks") self.change_environment( PATH=get_cli_environ_path(test_hook_path, "/usr/bin", "/bin"), JUJU_ENV_UUID="snowflake", JUJU_UNIT_NAME=local_unit, JUJU_REMOTE_UNIT=remote_unit) def get_test_hook(self, hook): """Search for the test hook under the testing directory. Returns the full path name of the hook to be invoked from its basename. """ dirname = os.path.dirname(__file__) abspath = os.path.abspath(dirname) hook_file = os.path.join(abspath, "hooks", hook) if not os.path.exists(hook_file): # attempt to find it via sys_path for p in sys.path: hook_file = os.path.normpath( os.path.join(p, dirname, "hooks", hook)) if os.path.exists(hook_file): return hook_file raise IOError("%s doesn't exist" % hook_file) return hook_file def get_cli_hook(self, hook): bin_path = os.path.normpath( os.path.join(get_module_directory(juju), "..", "bin")) return os.path.join(bin_path, hook) def create_hook(self, hook, arguments): bin_path = self.get_cli_hook(hook) fn = self.makeFile("#!/bin/bash\n'%s' %s" % (bin_path, arguments)) # make the hook executable os.chmod(fn, stat.S_IEXEC | stat.S_IREAD) return fn def create_capturing_hook(self, hook, files=()): """Create a hook to enable capturing of results into files. This method helps test scenarios of bash scripts that depend on exact captures of stdout and stderr as well as set -eu (Juju hook commands do not return nonzero exit codes, except in the case of parse failures). bin-path (the path to Juju commands) is always defined, along with paths to temporary files, as keyed to the list of `files`. 
""" args = {"bin-path": os.path.normpath( os.path.join(get_module_directory(juju), "..", "bin"))} for f in files: args[f] = self.makeFile() hook_fn = self.makeFile(hook.format(**args)) os.chmod(hook_fn, stat.S_IEXEC | stat.S_IREAD) return hook_fn, args def assert_file(self, path, data): """Assert file reached by `path` contains exactly this `data`.""" with open(path) as f: self.assertEqual(f.read(), data) class TestCompleteInvoker(InvokerTestBase, StatusTestBase): @defer.inlineCallbacks def setUp(self): yield super(TestCompleteInvoker, self).setUp() self.update_invoker_env("mysql/0", "wordpress/0") self.socket_path = self.makeFile() unit_dir = self.makeDir() self.makeDir(path=os.path.join(unit_dir, "charm")) self.ua = MockUnitAgent( self.client, self.socket_path, unit_dir) @defer.inlineCallbacks def tearDown(self): self.ua.stop() yield super(TestCompleteInvoker, self).tearDown() @defer.inlineCallbacks def build_default_relationships(self): state = yield self.build_topology(skip_unit_agents=("*",)) myr = yield self.relation_state_manager.get_relations_for_service( state["services"]["mysql"]) self.mysql_relation = yield myr[0].add_unit_state( state["relations"]["mysql"][0]) wpr = yield self.relation_state_manager.get_relations_for_service( state["services"]["wordpress"]) wpr = [r for r in wpr if r.internal_relation_id == self.mysql_relation.internal_relation_id][0] self.wordpress_relation = yield wpr.add_unit_state( state["relations"]["wordpress"][0]) defer.returnValue(state) @defer.inlineCallbacks def test_get_from_different_unit(self): """Verify that relation-get works with a remote unit. This test will run the logic of relation-get and will ensure that, even though we're running the hook within the context of unit A, a hook can obtain the data from unit B using relation-get. To do this a more complete simulation of the runtime is needed than with the local test cases below. """ yield self.build_default_relationships() yield self.wordpress_relation.set_data({"hello": "world"}) hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") yield exe(self.create_hook( "relation-get", "--format=json - wordpress/0")) self.assertEqual({"hello": "world"}, json.loads(hook_log.getvalue())) @defer.inlineCallbacks def test_spawn_cli_get_hook_no_args(self): """Validate the get hook works with no (or all default) args. This should default to the remote unit. We do pass a format arg so we can marshall the data. 
""" yield self.build_default_relationships() yield self.wordpress_relation.set_data({"hello": "world"}) hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" result = yield exe(self.create_hook("relation-get", "--format=json")) self.assertEqual(result, 0) # verify that its the wordpress data self.assertEqual({"hello": "world"}, json.loads(hook_log.getvalue())) @defer.inlineCallbacks def test_spawn_cli_get_implied_unit(self): """Validate the get hook can transmit values to the hook.""" yield self.build_default_relationships() hook_log = self.capture_logging("hook") # Populate and verify some data we will # later extract with the hook expected = {"name": "rabbit", "forgotten": "lyrics", "nottobe": "requested"} yield self.wordpress_relation.set_data(expected) exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" # invoke relation-get and verify the result result = yield exe(self.create_hook("relation-get", "--format=json -")) self.assertEqual(result, 0) data = json.loads(hook_log.getvalue()) self.assertEqual(data["name"], "rabbit") self.assertEqual(data["forgotten"], "lyrics") @defer.inlineCallbacks def test_spawn_cli_get_format_shell(self): """Validate the get hook can transmit values to the hook.""" yield self.build_default_relationships() hook_log = self.capture_logging("hook") # Populate and verify some data we will # later extract with the hook expected = {"name": "rabbit", "forgotten": "lyrics"} yield self.wordpress_relation.set_data(expected) exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" # invoke relation-get and verify the result result = yield exe( self.create_hook("relation-get", "--format=shell -")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual('VAR_FORGOTTEN=lyrics\nVAR_NAME=rabbit\n\n', data) # and with a single value request hook_log.truncate(0) result = yield exe( self.create_hook("relation-get", "--format=shell name")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual('VAR_NAME=rabbit\n\n', data) @defer.inlineCallbacks def test_relation_get_format_shell_bad_vars(self): """If illegal values are make somehow available warn.""" yield self.build_default_relationships() hook_log = self.capture_logging("hook") # Populate and verify some data we will # later extract with the hook expected = {"BAR": "none", "funny-chars*": "should work"} yield self.wordpress_relation.set_data(expected) exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" exe.environment["VAR_FOO"] = "jungle" result = yield exe( self.create_hook("relation-get", "--format=shell -")) self.assertEqual(result, 0) yield exe.ended data = hook_log.getvalue() self.assertIn('VAR_BAR=none', data) # Verify that illegal shell variable names get converted # in an expected way self.assertIn("VAR_FUNNY_CHARS_='should work'", data) # Verify that it sets VAR_FOO to null because it shouldn't # exist in the environment self.assertIn("VAR_FOO=", data) self.assertIn("The following were omitted from", data) @defer.inlineCallbacks def test_hook_exec_in_charm_directory(self): """Hooks are executed in the charm directory.""" yield 
self.build_default_relationships() hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") self.assertTrue(os.path.isdir(exe.unit_path)) exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" # verify the hook's execution directory hook_path = self.makeFile("#!/bin/bash\necho $PWD") os.chmod(hook_path, stat.S_IEXEC | stat.S_IREAD) result = yield exe(hook_path) self.assertEqual(hook_log.getvalue().strip(), os.path.join(exe.unit_path, "charm")) self.assertEqual(result, 0) # Reset the output capture hook_log.seek(0) hook_log.truncate() # Verify the environment variable is set. hook_path = self.makeFile("#!/bin/bash\necho $CHARM_DIR") os.chmod(hook_path, stat.S_IEXEC | stat.S_IREAD) result = yield exe(hook_path) self.assertEqual(hook_log.getvalue().strip(), os.path.join(exe.unit_path, "charm")) @defer.inlineCallbacks def test_spawn_cli_config_get(self): """Validate that config-get returns expected values.""" yield self.build_default_relationships() hook_log = self.capture_logging("hook") # Populate and verify some data we will # later extract with the hook expected = {"name": "rabbit", "forgotten": "lyrics", "nottobe": "requested"} exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.mysql_relation, client_id="client_id") context = yield self.ua.get_context("client_id") config = yield context.get_config() config.update(expected) yield config.write() # invoke relation-get and verify the result result = yield exe(self.create_hook("config-get", "--format=json")) self.assertEqual(result, 0) data = json.loads(hook_log.getvalue()) self.assertEqual(data["name"], "rabbit") self.assertEqual(data["forgotten"], "lyrics") class RelationInvokerTestBase(InvokerTestBase, RelationTestBase): @defer.inlineCallbacks def setUp(self): yield super(RelationInvokerTestBase, self).setUp() yield self._default_relations() self.update_invoker_env("mysql/0", "wordpress/0") self.socket_path = self.makeFile() unit_dir = self.makeDir() self.makeDir(path=os.path.join(unit_dir, "charm")) self.ua = MockUnitAgent( self.client, self.socket_path, unit_dir) self.log = self.capture_logging( formatter=logging.Formatter( "%(name)s:%(levelname)s:: %(message)s"), level=logging.DEBUG) @defer.inlineCallbacks def tearDown(self): self.ua.stop() yield super(RelationInvokerTestBase, self).tearDown() @defer.inlineCallbacks def _default_relations(self): wordpress_ep = RelationEndpoint( "wordpress", "client-server", "app", "client") mysql_ep = RelationEndpoint( "mysql", "client-server", "db", "server") self.wordpress_states = yield self.\ add_relation_service_unit_from_endpoints(wordpress_ep, mysql_ep) self.mysql_states = yield self.add_opposite_service_unit( self.wordpress_states) self.relation = self.mysql_states["unit_relation"] class InvokerTest(RelationInvokerTestBase): @defer.inlineCallbacks def test_environment(self): """Test various way to manipulate the calling environment. 
""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) exe.environment.update(dict(FOO="bar")) env = exe.get_environment() # these come from the init argument self.assertEqual(env["JUJU_AGENT_SOCKET"], self.socket_path) self.assertEqual(env["JUJU_CLIENT_ID"], "client_id") # this comes from updating the Invoker.environment self.assertEqual(env["FOO"], "bar") # and this comes from the unit agent passing through its environment self.assertTrue(env["PATH"]) self.assertEqual(env["JUJU_UNIT_NAME"], "mysql/0") # Set for all hooks self.assertEqual(env["DEBIAN_FRONTEND"], "noninteractive") self.assertEqual(env["APT_LISTCHANGES_FRONTEND"], "none") # Specific to the charm that is running, in this case it's the # dummy charm (this is the default charm used when the # add_service method is use) self.assertEqual(env["_JUJU_CHARM_FORMAT"], "1") self.assertEqual(env["JUJU_ENV_UUID"], "snowflake") @defer.inlineCallbacks def test_missing_hook(self): exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) self.failUnlessRaises(errors.FileNotFound, exe, "hook-missing") @defer.inlineCallbacks def test_noexec_hook(self): exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) hook = self.get_test_hook("noexec-hook") error = self.failUnlessRaises(errors.CharmError, exe, hook) self.assertEqual(error.path, hook) self.assertEqual(error.message, "hook is not executable") @defer.inlineCallbacks def test_unhandled_signaled_on_hook(self): """A hook that executes as a result of an unhandled signal is an error. """ exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) hook_exec = exe(self.get_test_hook("sleep-hook")) # Send the process a signal to kill it exe._process.signalProcess("HUP") error = yield self.assertFailure( hook_exec, errors.CharmInvocationError) self.assertIn( "sleep-hook': signal 1.", str(error)) @defer.inlineCallbacks def test_spawn_success(self): """Validate hook with success from exit.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) result = yield exe(self.get_test_hook("success-hook")) self.assertEqual(result, 0) yield exe.ended self.assertIn("WIN", self.log.getvalue()) self.assertIn("exit code 0", self.log.getvalue()) @defer.inlineCallbacks def test_spawn_fail(self): """Validate hook with fail from exit.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) d = exe(self.get_test_hook("fail-hook")) result = yield self.assertFailure(d, errors.CharmInvocationError) self.assertEqual(result.exit_code, 1) # ERROR indicate the level name, we are checking that the # proper level was logged here yield exe.ended self.assertIn("INFO", self.log.getvalue()) # and the message self.assertIn("FAIL", self.log.getvalue()) self.assertIn("exit code 1", self.log.getvalue()) @defer.inlineCallbacks def test_hanging_hook(self): """Verify that a hook that's slow to end is terminated. Test this by having the hook fork a process that hangs around for a while, necessitating reaping. This happens because the child process does not close the parent's file descriptors (as expected with daemonization, for example). http://www.snailbook.com/faq/background-jobs.auto.html provides some insight into what can happen. """ from twisted.internet import reactor # Ordinarily the reaper for any such hanging hooks will run in # 5s, but we are impatient. Force it to end much sooner by # intercepting the reaper setup. 
mock_reactor = self.mocker.patch(reactor) # Although we can match precisely on the # Process.loseConnection, Mocker gets confused with also # trying to match the delay time, using something like # `MATCH(lambda x: isinstance(x, (int, float)))`. So instead # we hardcode it here as just 5. mock_reactor.callLater( 5, MATCH(lambda x: isinstance(x.im_self, Process))) def intercept_reaper_setup(delay, reaper): # Given this is an external process, let's sleep for a # short period of time return reactor.callLater(0.2, reaper) self.mocker.call(intercept_reaper_setup) self.mocker.replay() # The hook script will immediately exit with a status code of # 0, but it created a child process (via shell backgrounding) # that is running (and will sleep for >10s) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) result = yield exe(self.get_test_hook("hanging-hook")) self.assertEqual(result, 0) # Verify after waiting for the process to close (which means # the reaper ran!), we get output for the first phase of the # hanging hook, but not after its second, more extended sleep. yield exe.ended self.assertIn("Slept for 50ms", self.log.getvalue()) self.assertNotIn("Slept for 1s", self.log.getvalue()) # Lastly there's a nice long sleep that would occur after the # default timeout of this test. Successful completion of this # test without a timeout means this sleep was never executed. def test_path_setup(self): """Validate that the path allows finding the executable.""" from twisted.python.procutils import which exe = which("relation-get") self.assertTrue(exe) self.assertTrue(exe[0].endswith("relation-get")) @defer.inlineCallbacks def test_spawn_cli_get_hook(self): """Validate the get hook can transmit values to the hook""" hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate and verify some data we will # later extract with the hook context = self.ua.get_context("client_id") expected = {"a": "b", "c": "d", "private-address": "mysql-0.example.com"} yield context.set(expected) data = yield context.get("mysql/0") self.assertEqual(expected, data) # invoke the hook and process the results # verifying they are expected result = yield exe(self.create_hook("relation-get", "--format=json - mysql/0")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual(json.loads(data), expected) @defer.inlineCallbacks def test_spawn_cli_get_value_hook(self): """Validate the get hook can transmit values to the hook.""" hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate and verify some data we will # later extract with the hook context = self.ua.get_context("client_id") expected = {"name": "rabbit", "private-address": "mysql-0.example.com"} yield context.set(expected) data = yield context.get("mysql/0") self.assertEqual(expected, data) # invoke the hook and process the results # verifying they are expected result = yield exe(self.create_hook("relation-get", "--format=json name mysql/0")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual("rabbit", json.loads(data)) @defer.inlineCallbacks def test_spawn_cli_get_unit_private_address(self): """Private addresses can be retrieved.""" hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") result = yield exe(self.create_hook("unit-get", 
"private-address")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual("mysql-0.example.com", data.strip()) @defer.inlineCallbacks def test_spawn_cli_get_unit_unknown_public_address(self): """If for some hysterical raison, the public address hasn't been set. We shouldn't error. This should never happen, the unit agent is sets it on startup. """ hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") result = yield exe(self.create_hook("unit-get", "public-address")) self.assertEqual(result, 0) data = hook_log.getvalue() self.assertEqual("", data.strip()) def test_get_remote_unit_arg(self): """Simple test around remote arg parsing.""" self.change_environment(JUJU_UNIT_NAME="mysql/0", JUJU_CLIENT_ID="client_id", JUJU_AGENT_SOCKET=self.socket_path) client = commands.RelationGetCli() client.setup_parser() options = client.parse_args(["-", "mysql/1"]) self.assertEqual(options.unit_name, "mysql/1") @defer.inlineCallbacks def test_spawn_cli_set_hook(self): """Validate the set hook can set values in zookeeper.""" output = self.capture_logging("hook", level=logging.DEBUG) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Invoke the hook and process the results verifying they are expected hook = self.create_hook("relation-set", "a=b c=d") result = yield exe(hook) self.assertEqual(result, 0) # Verify the context was flushed to zk zk_data = yield self.relation.get_data() self.assertEqual( {"a": "b", "c": "d", "private-address": "mysql-0.example.com"}, serializer.load(zk_data)) yield exe.ended self.assertIn( "Flushed values for hook %r on 'database:42'\n" " Setting changed: u'a'=u'b' (was unset)\n" " Setting changed: u'c'=u'd' (was unset)" % ( os.path.basename(hook)), output.getvalue()) @defer.inlineCallbacks def test_spawn_cli_set_can_delete_and_modify(self): """Validate the set hook can delete values in zookeeper.""" output = self.capture_logging("hook", level=logging.DEBUG) hook_directory = self.makeDir() hook_file_path = self.makeFile( content=("#!/bin/bash\n" "relation-set existing= absent= new-value=2 " "changed=abc changed2=xyz\n" "exit 0\n"), basename=os.path.join(hook_directory, "set-delete-test")) os.chmod(hook_file_path, stat.S_IRWXU) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate with data that will be deleted context = self.ua.get_context("client_id") yield context.set( {u"existing": u"42", u"changed": u"a" * 101, u"changed2": u"a" * 100}) yield context.flush() # Invoke the hook and process the results verifying they are expected self.assertTrue(os.path.exists(hook_file_path)) result = yield exe(hook_file_path) self.assertEqual(result, 0) # Verify the context was flushed to zk zk_data = yield self.relation.get_data() self.assertEqual( {"new-value": "2", "changed": "abc", "changed2": "xyz", "private-address": "mysql-0.example.com"}, serializer.load(zk_data)) # Verify that unicode/strings longer than 100 characters in # representation (including quotes and the u marker) are cut # off; 100 is the default cutoff used in the change items # __str__ method yield exe.ended self.assertIn( "Flushed values for hook 'set-delete-test' on 'database:42'\n" " Setting changed: u'changed'=u'abc' (was u'%s)\n" " Setting changed: u'changed2'=u'xyz' (was u'%s)\n" " Setting deleted: u'existing' (was u'42')\n" " Setting changed: u'new-value'=u'2' (was unset)" % ( "a" * 98, "a" * 98), 
output.getvalue()) @defer.inlineCallbacks def test_spawn_cli_set_noop_only_logs_on_change(self): """Validate the set hook only logs flushes when there are changes.""" output = self.capture_logging("hook", level=logging.DEBUG) hook_directory = self.makeDir() hook_file_path = self.makeFile( content=("#!/bin/bash\n" "relation-set no-change=42 absent=\n" "exit 0\n"), basename=os.path.join(hook_directory, "set-does-nothing")) os.chmod(hook_file_path, stat.S_IRWXU) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate with data that will be *not* be modified context = self.ua.get_context("client_id") yield context.set({"no-change": "42", "untouched": "xyz"}) yield context.flush() # Invoke the hook and process the results verifying they are expected self.assertTrue(os.path.exists(hook_file_path)) result = yield exe(hook_file_path) self.assertEqual(result, 0) # Verify the context was flushed to zk zk_data = yield self.relation.get_data() self.assertEqual({"no-change": "42", "untouched": "xyz", "private-address": "mysql-0.example.com"}, serializer.load(zk_data)) self.assertNotIn( "Flushed values for hook 'set-does-nothing'", output.getvalue()) @defer.inlineCallbacks def test_logging(self): exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) # The echo hook will echo out the value # it will also echo to stderr the ERROR variable message = "This is only a test" error = "All is full of fail" default = "Default level" exe.environment["MESSAGE"] = message exe.environment["ERROR"] = error exe.environment["DEFAULT"] = default # of the MESSAGE variable result = yield exe(self.get_test_hook("echo-hook")) self.assertEqual(result, 0) yield exe.ended self.assertIn(message, self.log.getvalue()) # Logging used to log an empty response dict # assure this doesn't happpen [b=915506] self.assertNotIn("{}", self.log.getvalue()) # The 'error' was sent via juju-log # to the UA. 
Our test UA has a fake log stream # which we can check now output = self.ua.log_file.getvalue() self.assertIn("ERROR:: " + error, self.log.getvalue()) self.assertIn("INFO:: " + default, self.log.getvalue()) assert message not in output, """Log includes unintended messages""" @defer.inlineCallbacks def test_spawn_cli_list_hook_smart(self): """Validate the get hook can transmit values to the hook.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate and verify some data we will # later extract with the hook context = self.ua.get_context("client_id") # directly manipulate the context to the expected list of # members expected = ["alpha/0", "beta/0"] context._members = expected # invoke the hook and process the results # verifying they are expected exe.environment["FORMAT"] = "smart" result = yield exe(self.create_hook("relation-list", "--format=smart")) self.assertEqual(result, 0) yield exe.ended self.assertIn("alpha/0\nbeta/0\n", self.log.getvalue()) @defer.inlineCallbacks def test_spawn_cli_list_hook_eval(self): """Validate the get hook can transmit values to the hook.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate and verify some data we will # later extract with the hook context = self.ua.get_context("client_id") # directly manipulate the context to the expected list of # members expected = ["alpha/0", "beta/0"] context._members = expected # invoke the hook and process the results # verifying they are expected result = yield exe(self.create_hook("relation-list", "--format=eval")) self.assertEqual(result, 0) yield exe.ended self.assertIn("alpha/0 beta/0", self.log.getvalue()) @defer.inlineCallbacks def test_spawn_cli_list_hook_json(self): """Validate the get hook can transmit values to the hook.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Populate and verify some data we will # later extract with the hook context = self.ua.get_context("client_id") # directly manipulate the context to the expected list of # members expected = ["alpha/0", "beta/0"] context._members = expected # invoke the hook and process the results # verifying they are expected result = yield exe(self.create_hook("relation-list", "--format json")) self.assertEqual(result, 0) yield exe.ended self.assertIn('["alpha/0", "beta/0"]', self.log.getvalue()) class ChildRelationHookContextsTest(RelationInvokerTestBase): @defer.inlineCallbacks def add_a_blog(self, blog_name): blog_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.mysql_states, RelationEndpoint( blog_name, "client-server", "app", "client")))) yield blog_states['service_relations'][-1].add_unit_state( self.mysql_states['unit']) defer.returnValue(blog_states) @defer.inlineCallbacks def add_db_admin_tool(self, admin_name): """Add another relation, using a different relation name""" admin_ep = RelationEndpoint( admin_name, "client-server", "admin-app", "client") mysql_ep = RelationEndpoint( "mysql", "client-server", "db-admin", "server") yield self.add_relation_service_unit_from_endpoints( admin_ep, mysql_ep) @defer.inlineCallbacks def assert_zk_data(self, context, expected): internal_relation_id, _ = yield context.get_relation_id_and_scope( context.relation_ident) internal_unit_id = (yield context.get_local_unit_state()).internal_id path = yield context.get_settings_path(internal_unit_id) data, stat = yield 
self.client.get(path) self.assertEqual(serializer.load(data), expected) @defer.inlineCallbacks def test_implied_relation_hook_context(self): """Verify implied hook context is cached and can get relation ids.""" yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") implied = exe.get_context() self.assertEqual(implied.relation_ident, "db:0") # Verify that the same hook context for the implied relation # is returned if referenced by its relation id self.assertEqual( implied, self.ua.server_factory.get_invoker("client_id").\ get_cached_relation_hook_context("db:0")) self.assertEqual( set((yield implied.get_relation_idents("db"))), set(["db:0", "db:1"])) @defer.inlineCallbacks def test_get_child_relation_hook_context(self): """Verify retrieval of a child hook context and methods on it.""" yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") # Add another relation, verify it's not yet visible yield self.add_a_blog("wordpress3") db0 = exe.get_cached_relation_hook_context("db:0") db1 = exe.get_cached_relation_hook_context("db:1") self.assertEqual(db1.relation_ident, "db:1") self.assertEqual( set((yield db1.get_relation_idents("db"))), set(["db:0", "db:1"])) self.assertEqual( db1, exe.get_cached_relation_hook_context("db:1")) # Not yet visible relation self.assertRaises( RelationIdentNotFound, exe.get_cached_relation_hook_context, "db:2") # Nonexistent relation idents self.assertRaises( RelationIdentNotFound, exe.get_cached_relation_hook_context, "db:12345") # Reread parent and child contexts exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") db0 = yield exe.get_context() db1 = exe.get_cached_relation_hook_context("db:1") db2 = exe.get_cached_relation_hook_context("db:2") # Verify that any changes are written out; write directly here # using the relation contexts yield db0.set({u"a": u"42", u"b": u"xyz"}) yield db1.set({u"c": u"21"}) yield db2.set({u"d": u"99"}) # Then actually execute a successfully hook so flushes occur # on both parent and children result = yield exe(self.get_test_hook("success-hook")) self.assertEqual(result, 0) yield exe.ended # Verify that all contexts were flushed properly to ZK yield self.assert_zk_data(db0, { u"a": u"42", u"b": u"xyz", "private-address": "mysql-0.example.com"}) yield self.assert_zk_data(db1, { u"c": u"21", "private-address": "mysql-0.example.com"}) yield self.assert_zk_data(db2, { u"d": u"99", "private-address": "mysql-0.example.com"}) # Verify log is written in order of relations: parent first, # then by children self.assertLogLines( self.log.getvalue(), ["Cached relation hook contexts on 'db:0': ['db:1']", "Flushed values for hook 'success-hook' on 'db:0'", " Setting changed: u'a'=u'42' (was unset)", " Setting changed: u'b'=u'xyz' (was unset)", " Setting changed: u'c'=u'21' (was unset) on 'db:1'", " Setting changed: u'd'=u'99' (was unset) on 'db:2'"]) @defer.inlineCallbacks def test_get_child_relation_hook_context_while_removing_relation(self): """Verify retrieval of a child hook context and methods on it.""" wordpress2_states = yield self.add_a_blog("wordpress2") yield self.add_a_blog("wordpress3") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") relation_state_manager = RelationStateManager(self.client) yield relation_state_manager.remove_relation_state( wordpress2_states["relation"]) self.assertEqual( 
set((yield exe.get_context().get_relation_idents("db"))), set(["db:0", "db:1", "db:2"])) db0 = exe.get_cached_relation_hook_context("db:0") db1 = exe.get_cached_relation_hook_context("db:1") db2 = exe.get_cached_relation_hook_context("db:2") # Verify that any changes are written out; write directly here # using the relation contexts yield db0.set({u"a": u"42", u"b": u"xyz"}) yield db1.set({u"c": u"21"}) yield db2.set({u"d": u"99"}) # Then actually execute a successfully hook so flushes occur # on both parent and children result = yield exe(self.get_test_hook("success-hook")) self.assertEqual(result, 0) yield exe.ended # Verify that both contexts were flushed properly to ZK: even # if the db:1 relation is gone, we allow its relation settings # to be written out to ZK yield self.assert_zk_data(db0, { u"a": u"42", u"b": u"xyz", "private-address": "mysql-0.example.com"}) yield self.assert_zk_data(db1, { u"c": "21", "private-address": "mysql-0.example.com"}) yield self.assert_zk_data(db2, { u"d": "99", "private-address": "mysql-0.example.com"}) self.assertLogLines( self.log.getvalue(), ["Cached relation hook contexts on 'db:0': ['db:1', 'db:2']", "Flushed values for hook 'success-hook' on 'db:0'", " Setting changed: u'a'=u'42' (was unset)", " Setting changed: u'b'=u'xyz' (was unset)", " Setting changed: u'c'=u'21' (was unset) on 'db:1'", " Setting changed: u'd'=u'99' (was unset) on 'db:2'"]) # Reread parent and child contexts, verify db:1 relation has # disappeared from topology exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") self.assertEqual( set((yield exe.get_context().get_relation_idents("db"))), set(["db:0", "db:2"])) yield self.assertRaises( RelationIdentNotFound, exe.get_cached_relation_hook_context, "db:1") @defer.inlineCallbacks def test_relation_ids(self): """Verify `relation-ids` command returns ids separated by newlines.""" hook_log = self.capture_logging("hook") # Invoker will be in the context of the mysql/0 service unit exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") # Invoker has already been started, verify cache coherence by # adding another relation. 
        yield self.add_a_blog("wordpress2")
        # Then verify the hook lists the relation ids corresponding to
        # the relation name `db`
        hook = self.create_hook("relation-ids", "db")
        result = yield exe(hook)
        self.assertEqual(result, 0)
        yield exe.ended
        # Smart formatting outputs one id per line.
        self.assertEqual(
            hook_log.getvalue(),
            "db:0\n\n")
        # But newlines are just whitespace to the shell or to Python,
        # so they can be split accordingly, adhering to the letter of
        # the spec.
        self.assertEqual(
            hook_log.getvalue().split(),
            ["db:0"])

    @defer.inlineCallbacks
    def test_relation_ids_json_format(self):
        """Verify `relation-ids --format=json` command returns ids in json."""
        yield self.add_a_blog("wordpress2")
        yield self.add_db_admin_tool("admin")
        hook_log = self.capture_logging("hook")
        # Invoker will be in the context of the mysql/0 service unit
        exe = yield self.ua.get_invoker(
            "db:0", "add", "mysql/0", self.relation,
            client_id="client_id")
        # Then verify the hook lists the relation ids corresponding to
        # the relation name `db`
        hook = self.create_hook("relation-ids", "--format=json db")
        result = yield exe(hook)
        self.assertEqual(result, 0)
        yield exe.ended
        self.assertEqual(
            set(json.loads(hook_log.getvalue())),
            set(["db:0", "db:1"]))

    @defer.inlineCallbacks
    def test_relation_ids_no_relation_name(self):
        """Verify all relation ids are returned if the relation name is not
        specified."""
        yield self.add_a_blog("wordpress2")
        yield self.add_db_admin_tool("admin")
        # Invoker will be in the context of the mysql/0 service
        # unit. This test file's mock unit agent does not set the
        # additional environment variables for relation hooks that are
        # set by juju.unit.lifecycle.RelationInvoker; instead they have
        # to be set by individual tests.
        exe = yield self.ua.get_invoker(
            "db:0", "add", "mysql/0", self.relation,
            client_id="client_id")
        exe.environment["JUJU_RELATION"] = "db"
        # Then verify the hook lists the relation ids corresponding
        # to JUJU_RELATION (="db")
        hook_log = self.capture_logging("hook")
        hook = self.create_hook("relation-ids", "--format=json")
        result = yield exe(hook)
        self.assertEqual(result, 0)
        yield exe.ended
        self.assertEqual(
            set(json.loads(hook_log.getvalue())),
            set(["db:0", "db:1"]))

        # This time pretend this is a nonrelational hook
        # context. Ignore the relation stuff in the get_invoker
        # function below, really it is a nonrelational hook as far as
        # the hook commands are concerned:
        exe = yield self.ua.get_invoker(
            "db:0", "add", "mysql/0", self.relation,
            client_id="client_id")
        hook_log = self.capture_logging("hook")
        hook = self.create_hook("relation-ids", "--format=json")
        result = yield exe(hook)
        # Currently, exceptions of all hook commands are only logged;
        # the exit code is always set to 0.
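        # (Illustrative aside, not part of the original suite: the
        # behaviour relied on here -- hook-command failures are surfaced
        # on stderr and in the hook log while the command still exits 0,
        # parse failures excepted -- amounts to a wrapper of roughly this
        # shape; the helper name is an assumption, not juju's API.)
        #
        #   import sys
        #
        #   def run_hook_command(main):
        #       try:
        #           main()
        #       except Exception as e:
        #           sys.stderr.write("%s\n" % e)  # reported, not fatal
        #       sys.exit(0)  # exit 0 even though the command itself failed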
self.assertEqual(result, 0) yield exe.ended self.assertIn( ("juju.hooks.protocol.MustSpecifyRelationName: " "Relation name must be specified"), hook_log.getvalue()) @defer.inlineCallbacks def test_relation_ids_nonexistent_relation_name(self): """Verify an empty listing is returned if name does not exist""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-ids does-not-exist 2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stdout", "stderr"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_set_with_relation_id_option(self): """Verify `relation-set` works with -r option.""" # Invoker will be in the context of the db:0 relation yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") # But set from db:1 hook = self.create_hook("relation-set", "-r db:1 a=42 b=xyz") result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended db1 = exe.get_cached_relation_hook_context("db:1") yield self.assert_zk_data(db1, { "a": "42", "b": "xyz", "private-address": "mysql-0.example.com"}) self.assertLogLines( self.log.getvalue(), ["Cached relation hook contexts on 'db:0': ['db:1']", "Flushed values for hook %r on 'db:0'" % os.path.basename(hook), " Setting changed: u'a'=u'42' (was unset) on 'db:1'", " Setting changed: u'b'=u'xyz' (was unset) on 'db:1'"]) @defer.inlineCallbacks def test_relation_set_with_nonexistent_relation_id(self): """Verify error put on stderr when using nonexistent relation id.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-set -r db:12345 " "k1=v1 k2=v2 2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stdout", "stderr"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "Relation not found for - db:12345\n") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_set_with_invalid_relation_id(self): """Verify message is written to stderr for invalid formatted id.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-set -r invalid-id:forty-two " "k1=v1 k2=v2 2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stderr", "stdout"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "Not a valid relation id: 
'invalid-id:forty-two'\n") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_get_with_relation_id_option(self): """Verify `relation-get` works with -r option.""" yield self.add_a_blog("wordpress2") hook_log = self.capture_logging("hook") # Invoker will be in the context of the db:0 relation exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") # First set through the context db1 = exe.get_cached_relation_hook_context("db:1") yield db1.set({"name": "whiterabbit"}) # Then get from db:1 hook = self.create_hook("relation-get", "--format=json -r db:1 - mysql/0") result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assertEqual( json.loads(hook_log.getvalue()), {"private-address": "mysql-0.example.com", "name": "whiterabbit"}) @defer.inlineCallbacks def test_relation_get_with_nonexistent_relation_id(self): """Verify error put on stderr when using nonexistent relation id.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "settings=$({bin-path}/relation-get -r db:12345 - mysql/0 " "2> {stderr})\n" "echo -n $settings > {settings}\n", files=["settings", "stderr"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["settings"], "") self.assert_file(args["stderr"], "Relation not found for - db:12345\n") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_get_with_invalid_relation_id(self): """Verify message is written to stderr for invalid formatted id.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-get -r invalid-id:forty-two - " "2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stderr", "stdout"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stderr"], "Not a valid relation id: 'invalid-id:forty-two'\n") self.assert_file(args["stdout"], "") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_get_unset_remote_unit_in_env(self): """Verify error is reported if JUJU_REMOTE_UNIT is not defined.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") self.assertNotIn("JUJU_REMOTE_UNIT", exe.environment) hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-get -r db:0 - 2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stderr", "stdout"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "Unit name is not defined\n") self.assertEqual( 
hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_list_with_relation_id_option(self): """Verify `relation-list` works with -r option.""" yield self.add_a_blog("wordpress2") hook_log = self.capture_logging("hook") # Invoker will be in the context of the db:0 relation exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") # Then verify relation membership can be listed for db:1 hook = self.create_hook("relation-list", "--format=json -r db:1") result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assertEqual( json.loads(hook_log.getvalue()), ["wordpress2/0"]) @defer.inlineCallbacks def test_relation_list_with_nonexistent_relation_id(self): """Verify a nonexistent relation id writes message to stderr.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-list -r db:12345 2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stdout", "stderr"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "Relation not found for - db:12345\n") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) @defer.inlineCallbacks def test_relation_list_with_invalid_relation_id(self): """Verify message is written to stderr for invalid formatted id.""" hook_log = self.capture_logging("hook", level=logging.DEBUG) yield self.add_a_blog("wordpress2") exe = yield self.ua.get_invoker( "db:0", "add", "mysql/0", self.relation, client_id="client_id") exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0" hook, args = self.create_capturing_hook( "#!/bin/bash\n" "set -eu\n" "data=$({bin-path}/relation-list -r invalid-id:forty-two " "2> {stderr})\n" "echo -n $data > {stdout}\n", files=["stderr", "stdout"]) result = yield exe(hook) self.assertEqual(result, 0) yield exe.ended self.assert_file(args["stdout"], "") self.assert_file(args["stderr"], "Not a valid relation id: 'invalid-id:forty-two'\n") self.assertEqual( hook_log.getvalue(), "Cached relation hook contexts on 'db:0': ['db:1']\n" "hook {} exited, exit code 0.\n".format(os.path.basename(hook))) class PortCommandsTest(RelationInvokerTestBase): def test_path_setup(self): """Validate that the path allows finding the executable.""" from twisted.python.procutils import which open_port_exe = which("open-port") self.assertTrue(open_port_exe) self.assertTrue(open_port_exe[0].endswith("open-port")) close_port_exe = which("close-port") self.assertTrue(close_port_exe) self.assertTrue(close_port_exe[0].endswith("close-port")) @defer.inlineCallbacks def test_open_and_close_ports(self): """Verify that port hook commands run and changes are immediate.""" unit_state = self.mysql_states["unit"] self.assertEqual((yield unit_state.get_open_ports()), []) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) result = yield exe(self.create_hook("open-port", "80")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}]) result = yield exe(self.create_hook("open-port", "53/udp")) 
self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}]) result = yield exe(self.create_hook("open-port", "53/tcp")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}]) result = yield exe(self.create_hook("open-port", "443/tcp")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}, {"port": 443, "proto": "tcp"}]) result = yield exe(self.create_hook("close-port", "80/tcp")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}, {"port": 443, "proto": "tcp"}]) yield exe.ended self.assertLogLines( self.log.getvalue(), [ "opened 80/tcp", "opened 53/udp", "opened 443/tcp", "closed 80/tcp"]) @defer.inlineCallbacks def test_open_port_args(self): """Verify that open-port properly reports arg parse errors.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) result = yield self.assertFailure( exe(self.create_hook("open-port", "80/invalid-protocol")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "open-port: error: argument PORT[/PROTOCOL]: " "Invalid protocol, must be 'tcp' or 'udp', got 'invalid-protocol'", self.log.getvalue()) result = yield self.assertFailure( exe(self.create_hook("open-port", "0/tcp")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "open-port: error: argument PORT[/PROTOCOL]: " "Invalid port, must be from 1 to 65535, got 0", self.log.getvalue()) result = yield self.assertFailure( exe(self.create_hook("open-port", "80/udp/extra-info")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "open-port: error: argument PORT[/PROTOCOL]: " "Invalid format for port/protocol, got '80/udp/extra-info", self.log.getvalue()) @defer.inlineCallbacks def test_close_port_args(self): """Verify that close-port properly reports arg parse errors.""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) result = yield self.assertFailure( exe(self.create_hook("close-port", "80/invalid-protocol")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "close-port: error: argument PORT[/PROTOCOL]: " "Invalid protocol, must be 'tcp' or 'udp', got 'invalid-protocol'", self.log.getvalue()) result = yield self.assertFailure( exe(self.create_hook("close-port", "0/tcp")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "close-port: error: argument PORT[/PROTOCOL]: " "Invalid port, must be from 1 to 65535, got 0", self.log.getvalue()) result = yield self.assertFailure( exe(self.create_hook("close-port", "80/udp/extra-info")), errors.CharmInvocationError) self.assertEqual(result.exit_code, 2) yield exe.ended self.assertIn( "close-port: error: argument PORT[/PROTOCOL]: " "Invalid format for port/protocol, got '80/udp/extra-info", self.log.getvalue()) class SubordinateRelationInvokerTest(InvokerTestBase, RelationTestBase): @defer.inlineCallbacks def setUp(self): yield super(SubordinateRelationInvokerTest, self).setUp() self.log = self.capture_logging( formatter=logging.Formatter( 
"%(name)s:%(levelname)s:: %(message)s"), level=logging.DEBUG) self.update_invoker_env("mysql/0", "logging/0") self.socket_path = self.makeFile() unit_dir = self.makeDir() self.makeDir(path=os.path.join(unit_dir, "charm")) self.ua = MockUnitAgent( self.client, self.socket_path, unit_dir) yield self._build_relation() @defer.inlineCallbacks def tearDown(self): self.ua.stop() yield super(SubordinateRelationInvokerTest, self).tearDown() @defer.inlineCallbacks def _build_relation(self): mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") mysql, my_units = yield self.get_service_and_units_by_charm_name( "mysql", 2) self.assertFalse((yield mysql.is_subordinate())) log, log_units = yield self.get_service_and_units_by_charm_name( "logging") self.assertTrue((yield log.is_subordinate())) # add the relationship so we can create units with containers relation_state, service_states = (yield self.relation_manager.add_relation_state( mysql_ep, logging_ep)) log, log_units = yield self.get_service_and_units_by_charm_name( "logging", containers=my_units) self.assertTrue((yield log.is_subordinate())) for lu in log_units: self.assertTrue((yield lu.is_subordinate())) mu1, mu2 = my_units lu1, lu2 = log_units self.mysql_units = my_units self.log_units = log_units mystate = pick_attr(service_states, relation_role="server") logstate = pick_attr(service_states, relation_role="client") yield mystate.add_unit_state(mu1) self.relation = yield logstate.add_unit_state(lu1) # add the second container yield mystate.add_unit_state(mu2) self.relation2 = yield logstate.add_unit_state(lu2) @defer.inlineCallbacks def test_subordinate_invocation(self): exe = yield self.ua.get_invoker( "juju-info", "add", "mysql/0", self.relation) result = yield exe(self.create_hook("relation-list", "--format=smart")) self.assertEqual(result, 0) yield exe.ended # verify that we see the proper unit self.assertIn("mysql/0", self.log.getvalue()) # we don't see units in the other container self.assertNotIn("mysql/1", self.log.getvalue()) # reset the log and verify other container self.log.seek(0) exe = yield self.ua.get_invoker( "juju-info", "add", "mysql/1", self.relation2) result = yield exe(self.create_hook("relation-list", "--format=smart")) self.assertEqual(result, 0) # verify that we see the proper unit yield exe.ended self.assertIn("mysql/1", self.log.getvalue()) # we don't see units in the other container self.assertNotIn("mysql/0", self.log.getvalue()) @defer.inlineCallbacks def test_open_and_close_ports(self): """Verify that port hook commands run and changes are immediate.""" unit_state = self.log_units[0] container_state = self.mysql_units[0] self.assertEqual((yield unit_state.get_open_ports()), []) exe = yield self.ua.get_invoker( "database:42", "add", "logging/0", self.relation) result = yield exe(self.create_hook("open-port", "80")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}]) self.assertEqual( (yield container_state.get_open_ports()), [{"port": 80, "proto": "tcp"}]) result = yield exe(self.create_hook("open-port", "53/udp")) self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}]) self.assertEqual( (yield container_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}]) result = yield exe(self.create_hook("close-port", "80/tcp")) 
self.assertEqual(result, 0) self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 53, "proto": "udp"}]) self.assertEqual( (yield container_state.get_open_ports()), [{"port": 53, "proto": "udp"}]) yield exe.ended self.assertLogLines( self.log.getvalue(), [ "opened 80/tcp", "opened 53/udp", "closed 80/tcp"]) class TestCharmFormatV1(RelationInvokerTestBase): @defer.inlineCallbacks def _default_relations(self): """Intercept mysql/wordpress setup to ensure v1 charm format""" # The mysql charm in the test repository has no format defined yield self.add_service_from_charm("mysql", charm_name="mysql") yield super(TestCharmFormatV1, self)._default_relations() @defer.inlineCallbacks def test_environment(self): """Ensure that an implicit setting of format: 1 works properly""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) env = exe.get_environment() self.assertEqual(env["_JUJU_CHARM_FORMAT"], "1") @defer.inlineCallbacks def test_output(self): """Verify roundtripping""" hook_debug_log = capture_separate_log("hook", level=logging.DEBUG) hook_log = capture_separate_log("hook", level=logging.INFO) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") set_hook = self.create_hook( "relation-set", "b=true i=42 f=1.23 s=ascii u=中文") yield exe(set_hook) result = yield exe(self.create_hook("relation-get", "- mysql/0")) self.assertEqual(result, 0) # No guarantee on output ordering, so keep to this test stable, # first a little parsing work: output = hook_log.getvalue() self.assertEqual(output[0], "{") self.assertEqual(output[-3:-1], "}\n") self.assertEqual( set(output[1:-3].split(", ")), set(["u'b': u'true'", "u'f': u'1.23'", "u'i': u'42'", "u'private-address': u'mysql-0.example.com'", "u's': u'ascii'", "u'u': u'\\u4e2d\\u6587'"])) self.assertLogLines( hook_debug_log.getvalue(), ["Flushed values for hook %r on 'database:42'" % ( os.path.basename(set_hook),), " Setting changed: u'b'=u'true' (was unset)", " Setting changed: u'f'=u'1.23' (was unset)", " Setting changed: u'i'=u'42' (was unset)", " Setting changed: u's'=u'ascii' (was unset)", " Setting changed: u'u'=u'\\u4e2d\\u6587' (was unset)"]) # Unlike v2 formatting, this will only fail on output hook_log.truncate() data_file = self.makeFile("But when I do drink, I prefer \xCA\xFE") yield exe(self.create_hook( "relation-set", "d=@%s" % data_file)) result = yield exe(self.create_hook("relation-get", "d mysql/0")) self.assertEqual(result, 1) self.assertIn( "Error: \'utf8\' codec can\'t decode byte 0xca in position 30: " "invalid continuation byte", hook_log.getvalue()) class TestCharmFormatV2(RelationInvokerTestBase): @defer.inlineCallbacks def _default_relations(self): """Intercept mysql/wordpress setup to ensure v2 charm format""" # The mysql-format-v2 charm defines format:2 in its metadata yield self.add_service_from_charm( "mysql", charm_name="mysql-format-v2") yield super(TestCharmFormatV2, self)._default_relations() def make_zipped_file(self): data_file = self.makeFile() with open(data_file, "wb") as f: # gzipped of 'abc' - however gzip will also includes the # source file name, so easiest to keep it stable here as # standard data f.write("\x1f\x8b\x08\x08\xbb\x8bAP\x02\xfftmpr" "fyP0e\x00KLJ\x06\x00\xc2A$5\x03\x00\x00\x00") return data_file @defer.inlineCallbacks def test_environment(self): """Ensure that an explicit setting of format: 2 works properly""" exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation) env = exe.get_environment() 
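# The mysql-format-v2 charm declares format: 2 in its metadata, and
# the invoker is expected to export that choice via _JUJU_CHARM_FORMAT.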
self.assertEqual(env["_JUJU_CHARM_FORMAT"], "2") @defer.inlineCallbacks def test_smart_output(self): """Verify roundtripping""" hook_debug_log = capture_separate_log("hook", level=logging.DEBUG) hook_log = capture_separate_log("hook", level=logging.INFO) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Test the support of raw strings, both from a file and from # command line. Unicode can also be used - this is just # rendered as UTF-8 in the shell; the source here is also # UTF-8 - note it is not a Unicode string, it's a bytestring. data_file = self.make_zipped_file() set_hook = self.create_hook( "relation-set", "b=true f=1.23 i=42 s=ascii u=中文 d=@%s " "r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % ( data_file)) yield exe(set_hook) result = yield exe(self.create_hook("relation-get", "- mysql/0")) self.assertEqual(result, 0) # relation-get - uses YAML to dump keys. YAML guarantees that # the keys will be sorted lexicographically; note that we # output UTF-8 for Unicode when dumping YAML, so our source # text (with this test file in UTF-8 itself) matches the # output text, as seen in the characters for "zhongwen" # (Chinese language). self.assertEqual( hook_log.getvalue(), "b: 'true'\n" "d: !!binary |\n H4sICLuLQVAC/3RtcHJmeVAwZQBLTEoGAMJBJDUDAAAA\n" "f: '1.23'\n" "i: '42'\n" "private-address: mysql-0.example.com\n" "r: !!binary |\n QnV0IHdoZW4gSSBkbyBkcmluaywgSSBwcmVmZXIgyv4=\n" "s: ascii\n" "u: 中文\n") # Note: backslashes are necessarily doubled here; r"XYZ" # strings don't help with hexescapes self.assertLogLines( hook_debug_log.getvalue(), ["Flushed values for hook %r on 'database:42'" % ( os.path.basename(set_hook),), " Setting changed: 'b'='true' (was unset)", " Setting changed: 'd'='\\x1f\\x8b\\x08\\x08\\xbb\\x8bAP\\x02" "\\xfftmprfyP0e\\x00KLJ\\x06\\x00\\xc2A$5\\x03" "\\x00\\x00\\x00' (was unset)", " Setting changed: 'f'='1.23' (was unset)", " Setting changed: 'i'='42' (was unset)", " Setting changed: 'r'='But when I do drink, " "I prefer \\xca\\xfe' (was unset)", " Setting changed: 's'='ascii' (was unset)", " Setting changed: 'u'=u'\\u4e2d\\u6587' (was unset)" ]) @defer.inlineCallbacks def test_exact_roundtrip_binary_data(self): """Verify that binary data, including \x00, is roundtripped exactly""" hook_log = capture_separate_log("hook", level=logging.INFO) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") data_file = self.make_zipped_file() # relation-set can only read null bytes from a file; bash # would otherwise silently drop set_hook = self.create_hook("relation-set", "zipped=@%s" % ( data_file)) result = yield exe(set_hook) self.assertEqual(result, 0) # Abuse the create_hook method a little bit by adding a pipe get_hook = self.create_hook("relation-get", "zipped mysql/0 | zcat") result = yield exe(get_hook) self.assertEqual(result, 0) # Using the hook log for this verification does generate one # extra \n (seen elsewhere in our tests), but this is just # test noise: we are guaranteed roundtrip fidelity by using # the picky tool that is zcat - no extraneous data accepted. self.assertEqual(hook_log.getvalue(), "abc\n") @defer.inlineCallbacks def test_json_output(self): """Verify roundtripping""" hook_log = capture_separate_log("hook", level=logging.INFO) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") # Test the support of raw strings, both from a file and from # command line. 
In addition, test Unicode indirectly by using # UTF-8. Because the source of this file is marked as UTF-8, # we can embed such characters directly in bytestrings, not # just Unicode strings. This also works within the context of # the shell. raw = "But when I do drink, I prefer \xca\xfe" data_file = self.makeFile(raw) set_hook = self.create_hook( "relation-set", "b=true f=1.23 i=42 s=ascii u=中文 d=@%s " "r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % ( data_file,)) yield exe(set_hook) result = yield exe(self.create_hook( "relation-get", "--format=json - mysql/0")) self.assertEqual(result, 0) # YAML serialization internally has converted (transparently) # UTF-8 to Unicode, which can be rendered by JSON. However the # "cafe" bytestring is invalid JSON, so verify that it's been # Base64 encoded. encoded = base64.b64encode(raw) self.assertEqual( hook_log.getvalue(), '{"b": "true", ' '"d": "%s", ' '"f": "1.23", ' '"i": "42", ' '"private-address": "mysql-0.example.com", ' '"s": "ascii", ' '"r": "%s", ' '"u": "\\u4e2d\\u6587"}\n' % (encoded, encoded)) @defer.inlineCallbacks def common_relation_set(self): hook_log = capture_separate_log("hook", level=logging.INFO) exe = yield self.ua.get_invoker( "database:42", "add", "mysql/0", self.relation, client_id="client_id") raw = "But when I do drink, I prefer \xCA\xFE" data_file = self.makeFile(raw) set_hook = self.create_hook( "relation-set", "s='some text' u=中文 d=@%s " "r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % ( data_file)) result = yield exe(set_hook) self.assertEqual(result, 0) defer.returnValue((exe, hook_log)) @defer.inlineCallbacks def test_relation_get_ascii(self): """Verify that ascii data is roundtripped""" exe, hook_log = yield self.common_relation_set() result = yield exe(self.create_hook("relation-get", "s mysql/0")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "some text\n") @defer.inlineCallbacks def test_relation_get_raw(self): """Verify that raw data is roundtripped""" exe, hook_log = yield self.common_relation_set() result = yield exe(self.create_hook("relation-get", "r mysql/0")) self.assertEqual(result, 0) self.assertEqual( hook_log.getvalue(), "But when I do drink, I prefer \xca\xfe\n") @defer.inlineCallbacks def test_relation_get_unicode(self): """Verify Unicode is roundtripped (via UTF-8) through the shell""" exe, hook_log = yield self.common_relation_set() result = yield exe(self.create_hook("relation-get", "u mysql/0")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "中文\n") @defer.inlineCallbacks def setup_config(self): hook_log = self.capture_logging("hook") exe = yield self.ua.get_invoker( "db:42", "add", "mysql/0", self.relation, client_id="client_id") context = yield self.ua.get_context("client_id") config = yield context.get_config() with open(self.make_zipped_file(), "rb") as f: data = f.read() config.update({ "b": True, "f": 1.23, "i": 42, "s": "some text", # uses UTF-8 encoding in this test script "u": "中文", # use high byte and null byte characters "r": data }) yield config.write() defer.returnValue((exe, hook_log)) @defer.inlineCallbacks def test_config_get_boolean(self): """Validate that config-get returns lowercase names of booleans.""" exe, hook_log = yield self.setup_config() result = yield exe(self.create_hook("config-get", "b")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "true\n") @defer.inlineCallbacks def test_config_get_float(self): """Validate that config-get returns floats without quotes.""" exe, hook_log = 
yield self.setup_config() result = yield exe(self.create_hook("config-get", "f")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "1.23\n") @defer.inlineCallbacks def test_config_get_int(self): """Validate that config-get returns ints without quotes.""" exe, hook_log = yield self.setup_config() result = yield exe(self.create_hook("config-get", "i")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "42\n") @defer.inlineCallbacks def test_config_get_ascii(self): """Validate that config-get returns ascii strings.""" exe, hook_log = yield self.setup_config() result = yield exe(self.create_hook("config-get", "s")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "some text\n") @defer.inlineCallbacks def test_config_get_raw(self): """Validate config-get can work with high and null bytes.""" exe, hook_log = yield self.setup_config() result = yield exe(self.create_hook("config-get", "r | zcat")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "abc\n") @defer.inlineCallbacks def test_config_get_unicode(self): """Validate that config-get returns raw strings containing UTF-8.""" exe, hook_log = yield self.setup_config() result = yield exe(self.create_hook("config-get", "u")) self.assertEqual(result, 0) self.assertEqual(hook_log.getvalue(), "中文\n") juju-0.7.orig/juju/hooks/tests/test_scheduler.py0000644000000000000000000006673312135220114020261 0ustar 00000000000000import logging import os from twisted.internet.defer import ( inlineCallbacks, fail, succeed, Deferred, returnValue) from juju.lib import serializer from juju.hooks.scheduler import HookScheduler from juju.state.tests.test_service import ServiceStateManagerTestBase class SomeError(Exception): pass class HookSchedulerTest(ServiceStateManagerTestBase): @inlineCallbacks def setUp(self): yield super(HookSchedulerTest, self).setUp() self.client = self.get_zookeeper_client() self.unit_relation = self.mocker.mock() self.executions = [] self.service = yield self.add_service_from_charm("wordpress") self.state_file = self.makeFile() self.executor = self.collect_executor self._scheduler = None self.log_stream = self.capture_logging( "hook.scheduler", level=logging.DEBUG) @property def scheduler(self): # Create lazily, so we can create with a state file if we want to, # and swap out collect_executor when helpful to do so. if self._scheduler is None: self._scheduler = HookScheduler( self.client, self.executor, self.unit_relation, "", "wordpress/0", self.state_file) return self._scheduler def collect_executor(self, context, change): self.executions.append((context, change)) return True def write_single_unit_state(self): with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1"], "member_versions": {"u-1": 0}, "unit_ops": {}, "clock_units": {}, "change_queue": [], "clock_sequence": 1})) def test_add_expanded_modified(self): """An add event is expanded to a modified event. """ self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.run() self.assertEqual(len(self.executions), 2) self.assertTrue( self.executions[-1][1].change_type == 'modified') @inlineCallbacks def test_add_expanded_on_error(self): """If the hook exec for an add fails, its still expanded. 
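The failed joined hook must not swallow the expanded modified event;
both remain on the change_queue for a later run.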
""" results = [succeed(False), succeed(True)] collected = [] def executor(context, change): res = results[len(collected)] collected.append((context, change)) return res self.executor = executor self.scheduler.cb_change_members([], ["u-1"]) sched_done = self.scheduler.run() self.assertEqual(len(collected), 1) self.assertTrue( collected[0][1].change_type == 'joined') yield sched_done self.assertFalse(self.scheduler.running) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { "context_members": ['u-1'], "member_versions": {"u-1": 0}, "change_queue": [ {'change_type': 'joined', 'members': ['u-1'], 'unit_name': 'u-1'}, {'change_type': 'modified', 'members': ['u-1'], 'unit_name': 'u-1'}]}) @inlineCallbacks def test_add_and_immediate_remove_can_ellipse_change(self): """A concurrent depart of a unit during the join hook elides expansion. Ie. This is the one scenario where a change hook won't be executed after a successful join, because the remote side is already gone, its of little purpose to execute the modify additionally. """ results = [Deferred() for i in range(5)] collected = [] @inlineCallbacks def executor(context, change): res = yield results[len(collected)] collected.append((context, change)) returnValue(res) self.executor = executor self.scheduler.cb_change_members([], ["u-1"]) sched_done = self.scheduler.run() self.scheduler.cb_change_members(["u-1"], []) self.assertFalse(collected) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { "context_members": [], "member_versions": {}, "change_queue": [ {'change_type': 'joined', 'members': ['u-1'], 'unit_name': 'u-1'}, {'change_type': 'departed', 'members': [], 'unit_name': 'u-1'}, ]}) for i in results: i.callback(True) self.scheduler.stop() yield sched_done self.assertEqual( [(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \ for i in collected], [("u-1", "joined", ["u-1"]), ("u-1", "departed", [])]) @inlineCallbacks def test_hook_error_doesnt_stop_reduction_of_events_in_clock(self): """Reduction of events continues even in the face of a hook error. """ results = [succeed(False), succeed(True), succeed(True)] collected = [] def executor(context, change): res = results[len(collected)] collected.append((context, change)) return res self.executor = executor self.scheduler.cb_change_members([], ["u-1"]) sched_done = self.scheduler.run() self.assertEqual(len(collected), 1) self.assertTrue( collected[0][1].change_type == 'joined') yield sched_done self.assertFalse(self.scheduler.running) self.scheduler.cb_change_members(["u-1"], ["u-2"]) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { "context_members": ['u-2'], "member_versions": {"u-2": 0}, "change_queue": [ {'change_type': 'joined', 'members': ['u-1'], 'unit_name': 'u-1'}, {'change_type': 'joined', 'members': ['u-1', 'u-2'], 'unit_name': 'u-2'}, {'change_type': 'departed', 'members': ['u-2'], 'unit_name': 'u-1'}]}) @inlineCallbacks def test_depart_hook_error_membership_affects(self): """An error in a remove event should not distort the membership. Also verifies restart after error starts with error event. 
""" yield self.write_single_unit_state() results = [ succeed(True), succeed(False), succeed(True), succeed(True)] collected = [] def executor(context, change): res = results[len(collected)] collected.append((context, change)) return res self.executor = executor self.scheduler.cb_change_members(["u-1"], ["u-2"]) self.scheduler.run() self.assertEqual( [(i[1].unit_name, i[1].change_type) for i in collected], [("u-2", "joined"), ("u-1", "departed")]) self.assertFalse(self.scheduler.running) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { "context_members": ['u-2'], "member_versions": {"u-2": 0}, "change_queue": [ {'change_type': 'departed', 'members': ['u-2'], 'unit_name': 'u-1'}, {'change_type': 'modified', 'members': ['u-2'], 'unit_name': 'u-2'}]}) self.scheduler.run() self.assertEqual( [(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \ for i in collected], [("u-2", "joined", ["u-1", "u-2"]), ("u-1", "departed", ["u-2"]), ("u-1", "departed", ["u-2"]), ("u-2", "modified", ["u-2"])]) @inlineCallbacks def test_restart_after_error_and_pop_starts_with_next_event(self): """If a hook errors, the schedule is popped, the next hook is new. """ yield self.write_single_unit_state() results = [ succeed(True), succeed(False), succeed(True), succeed(True)] collected = [] def executor(context, change): res = results[len(collected)] collected.append((context, change)) return res self.executor = executor self.scheduler.cb_change_members(["u-1"], ["u-1", "u-2"]) self.scheduler.cb_change_settings((("u-1", 2),)) yield self.scheduler.run() self.assertFalse(self.scheduler.running) self.assertEqual( (yield self.scheduler.pop()), {'change_type': 'modified', 'unit_name': 'u-1', 'members': ['u-1', 'u-2']}) self.scheduler.run() self.assertEqual( [(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \ for i in collected], [("u-2", "joined", ["u-1", "u-2"]), ("u-1", "modified", ["u-1", "u-2"]), ("u-2", "modified", ["u-1", "u-2"])]) @inlineCallbacks def test_current_unit_op_changes_while_executing(self): """The current operation being executed changing during execution is ok """ yield self.write_single_unit_state() results = [Deferred() for i in range(5)] collected = [] @inlineCallbacks def executor(context, change): res = yield results[len(collected)] collected.append((context, change)) returnValue(res) self.executor = executor self.scheduler.cb_change_settings((("u-1", 2),)) sched_done = self.scheduler.run() self.scheduler.cb_change_members(["u-1"], []) self.assertFalse(collected) results[0].callback(True) self.assertEqual(collected[-1][1].change_type, "modified") results[1].callback(True) self.scheduler.stop() yield sched_done self.assertEqual(collected[-1][1].change_type, "departed") @inlineCallbacks def test_next_unit_op_changes_during_previous_hook_exec(self): results = [Deferred() for i in range(5)] collected = [] @inlineCallbacks def executor(context, change): res = yield results[len(collected)] collected.append((context, change)) returnValue(res) self.executor = executor self.scheduler.cb_change_members([], ["u-1", "u-2"]) sched_done = self.scheduler.run() self.scheduler.cb_change_members(["u-1", "u-2"], ["u-1"]) self.assertFalse(collected) for i in results: i.callback(True) self.scheduler.stop() yield sched_done self.assertEqual( [(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \ for i in collected], [("u-1", "joined", ["u-1"]), ("u-1", "modified", ["u-1"])]) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { 
"context_members": ['u-1'], "member_versions": {"u-1": 0}, "change_queue": []}) # Event reduction/coalescing cases def test_reduce_removed_added(self): """ A remove event for a node followed by an add event, results in a modify event. note this isn't possible, unit ids are unique. """ self.write_single_unit_state() self.scheduler.cb_change_members(["u-1"], []) self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.run() self.assertEqual(len(self.executions), 1) self.assertEqual(self.executions[0][1].change_type, "modified") output = ("members changed: old=['u-1'], new=[]", "members changed: old=[], new=['u-1']", "start", "executing hook for u-1:modified\n") self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) def test_reduce_modify_remove_add(self): """A modify, remove, add event for a node results in a modify. An extra validation of the previous test. """ self.write_single_unit_state() self.scheduler.cb_change_settings([("u-1", 1)]) self.scheduler.cb_change_members(["u-1"], []) self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.run() self.assertEqual(len(self.executions), 1) self.assertEqual(self.executions[0][1].change_type, "modified") def test_reduce_add_modify(self): """An add and modify event for a node are coalesced to an add.""" self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.cb_change_settings([("u-1", 1)]) self.scheduler.cb_change_settings([("u-1", 1)]) self.scheduler.run() self.assertEqual(len(self.executions), 2) self.assertEqual(self.executions[0][1].change_type, "joined") def test_reduce_add_remove(self): """an add followed by a removal results in a noop.""" self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.cb_change_members(["u-1"], []) self.scheduler.run() self.assertEqual(len(self.executions), 0) def test_reduce_modify_remove(self): """Modifying and then removing a node, results in just the removal.""" self.write_single_unit_state() self.scheduler.cb_change_settings([("u-1", 1)]) self.scheduler.cb_change_members(["u-1"], []) self.scheduler.run() self.assertEqual(len(self.executions), 1) self.assertEqual(self.executions[0][1].change_type, "departed") def test_reduce_modify_modify(self): """Multiple modifies get coalesced to a single modify.""" # simulate normal startup, the first notify will always be the existing # membership set. self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.run() self.scheduler.stop() self.assertEqual(len(self.executions), 2) # Now continue the modify/modify reduction. self.scheduler.cb_change_settings([("u-1", 1)]) self.scheduler.cb_change_settings([("u-1", 2)]) self.scheduler.cb_change_settings([("u-1", 3)]) self.scheduler.run() self.assertEqual(len(self.executions), 3) self.assertEqual(self.executions[1][1].change_type, "modified") # Other stuff. @inlineCallbacks def test_start_stop(self): self.assertFalse(self.scheduler.running) d = self.scheduler.run() self.assertTrue(self.scheduler.running) # starting multiple times results in an error self.assertFailure(self.scheduler.run(), AssertionError) self.scheduler.stop() self.assertFalse(self.scheduler.running) yield d # stopping multiple times is not an error yield self.scheduler.stop() self.assertFalse(self.scheduler.running) def test_start_stop_start(self): """Stop values should only be honored if the scheduler is stopped. 
""" waits = [Deferred(), succeed(True), succeed(True), succeed(True)] results = [] @inlineCallbacks def executor(context, change): res = yield waits[len(results)] results.append((context, change)) returnValue(res) scheduler = HookScheduler( self.client, executor, self.unit_relation, "", "wordpress/0", self.state_file) # Start the scheduler d = scheduler.run() # Now queue up some changes. scheduler.cb_change_members([], ["u-1"]) scheduler.cb_change_members(["u-1"], ["u-1", "u-2"]) # Stop the scheduler scheduler.stop() yield d self.assertFalse(scheduler.running) # Finish the hook execution waits[0].callback(True) d = scheduler.run() self.assertTrue(scheduler.running) # More changes scheduler.cb_change_settings([("u-1", 1)]) scheduler.cb_change_settings([("u-2", 1)]) # Scheduler should still be running. print self._debug_scheduler() print [(r[1].change_type, r[1].unit_name) for r in results] self.assertFalse(d.called) self.assertEqual(len(results), 4) @inlineCallbacks def test_run_requires_writable_state(self): # Induce lazy creation of scheduler, then break state file self.scheduler with open(self.state_file, "w"): pass os.chmod(self.state_file, 0) e = yield self.assertFailure(self.scheduler.run(), AssertionError) self.assertEquals(str(e), "%s is not writable!" % self.state_file) def test_empty_state(self): with open(self.state_file, "w") as f: f.write(serializer.dump({})) # Induce lazy creation to verify it can still survive self.scheduler @inlineCallbacks def test_membership_visibility_per_change(self): """Hooks are executed against changes, those changes are associated to a temporal timestamp, however the changes are scheduled for execution, and the state/time of the world may have advanced, to present a logically consistent view, we try to guarantee at a minimum, that hooks will always see the membership of a relation as it was at the time of their associated change. In conjunction with the event reduction, this keeps a consistent but up to date world view. """ self.scheduler.cb_change_members([], ["u-1", "u-2"]) self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) self.scheduler.cb_change_settings([("u-2", 1)]) self.scheduler.run() self.scheduler.stop() # two reduced events, resulting u-2, u-3 add, two expanded # u-2, u-3 modified #self.assertEqual(len(self.executions), 4) self.assertEqual( [(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \ for i in self.executions], [("u-2", "joined", ["u-2"]), ("u-3", "joined", ["u-2", "u-3"]), ("u-2", "modified", ["u-2", "u-3"]), ("u-3", "modified", ["u-2", "u-3"]), ]) # Now the first execution (u-2 add) should only see members # from the time of its change, not the current members. However # since u-1 has been subsequently removed, it no longer retains # an entry in the membership list. change_members = yield self.executions[0][0].get_members() self.assertEqual(change_members, ["u-2"]) self.scheduler.cb_change_settings([("u-2", 2)]) self.scheduler.cb_change_members(["u-2", "u-3"], ["u-2"]) self.scheduler.run() self.assertEqual(len(self.executions), 6) self.assertEqual(self.executions[4][1].change_type, "modified") # Verify modify events see the correct membership. change_members = yield self.executions[4][0].get_members() self.assertEqual(change_members, ["u-2", "u-3"]) @inlineCallbacks def test_membership_visibility_with_change(self): """We express a stronger guarantee of the above, namely that a hook wont see any 'active' members in a membership list, that it hasn't previously been given a notify of before. 
""" with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 0, "u-2": 0}, "change_queue": []})) self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3", "u-4"]) self.scheduler.cb_change_settings([("u-2", 1)]) self.scheduler.run() self.scheduler.stop() # add for u-3, u-4, mod for u3, u4, remove for u-1, modify for u-2 self.assertEqual(len(self.executions), 6) # Verify members for each change. self.assertEqual(self.executions[0][1].change_type, "joined") members = yield self.executions[0][0].get_members() self.assertEqual(members, ["u-1", "u-2", "u-3"]) self.assertEqual(self.executions[1][1].change_type, "joined") members = yield self.executions[1][0].get_members() self.assertEqual(members, ["u-1", "u-2", "u-3", "u-4"]) self.assertEqual(self.executions[2][1].change_type, "departed") members = yield self.executions[2][0].get_members() self.assertEqual(members, ["u-2", "u-3", "u-4"]) self.assertEqual(self.executions[3][1].change_type, "modified") members = yield self.executions[2][0].get_members() self.assertEqual(members, ["u-2", "u-3", "u-4"]) with open(self.state_file) as f: state = serializer.load(f.read()) self.assertEquals(state, { "change_queue": [], "context_members": ["u-2", "u-3", "u-4"], "member_versions": {"u-2": 1, "u-3": 0, "u-4": 0}}) @inlineCallbacks def test_state_is_loaded(self): with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1", "u-2", "u-3"], "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, "change_queue": [ {'unit_name': 'u-1', 'change_type': 'modified', 'members': ['u-1', 'u-2']}, {'unit_name': 'u-3', 'change_type': 'joined', 'members': ['u-1', 'u-2', 'u-3']}]})) d = self.scheduler.run() while len(self.executions) < 2: yield self.sleep(0.1) self.scheduler.stop() yield d self.assertEqual(self.executions[0][1].change_type, "modified") members = yield self.executions[0][0].get_members() self.assertEqual(members, ["u-1", "u-2"]) self.assertEqual(self.executions[1][1].change_type, "joined") members = yield self.executions[1][0].get_members() self.assertEqual(members, ["u-1", "u-2", "u-3"]) with open(self.state_file) as f: state = serializer.load(f.read()) self.assertEquals(state, { "context_members": ["u-1", "u-2", "u-3"], "member_versions": {"u-1": 5, "u-2": 2, "u-3": 0}, "change_queue": []}) def test_state_is_stored(self): with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 0, "u-2": 2}, "change_queue": []})) self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"]) self.scheduler.cb_change_settings([("u-2", 3)]) # Add a stop instruction to the queue, which should *not* be saved. 
self.scheduler.stop() with open(self.state_file) as f: state = serializer.load(f.read()) self.assertEquals(state, { "context_members": ["u-2", "u-3"], "member_versions": {"u-2": 3, "u-3": 0}, 'change_queue': [{'change_type': 'joined', 'members': ['u-1', 'u-2', 'u-3'], 'unit_name': 'u-3'}, {'change_type': 'departed', 'members': ['u-2', 'u-3'], 'unit_name': 'u-1'}, {'change_type': 'modified', 'members': ['u-2', 'u-3'], 'unit_name': 'u-2'}], }) @inlineCallbacks def test_state_stored_after_tick(self): def execute(context, change): self.execute_calls += 1 if self.execute_calls > 1: return fail(SomeError()) return succeed(True) self.execute_calls = 0 self.executor = execute with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, "change_queue": [ {"unit_name": "u-1", "change_type": "modified", "members": ["u-1", "u-2"]}, {"unit_name": "u-3", "change_type": "added", "members": ["u-1", "u-2", "u-3"]}]})) d = self.scheduler.run() while self.execute_calls < 2: yield self.poke_zk() yield self.assertFailure(d, SomeError) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), { "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, "change_queue": [ {"unit_name": "u-3", "change_type": "added", "members": ["u-1", "u-2", "u-3"]}]}) @inlineCallbacks def test_state_not_stored_mid_tick(self): def execute(context, change): self.execute_called = True return fail(SomeError()) self.execute_called = False self.executor = execute initial_state = { "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 1, "u-2": 0, "u-3": 0}, "change_queue": [ {"change_type": "modified", "unit_name": "u-1", "members":["u-1", "u-2"]}, {"change_type": "modified", "unit_name": "u-1", "members":["u-1", "u-2", "u-3"]}, ]} with open(self.state_file, "w") as f: f.write(serializer.dump(initial_state)) d = self.scheduler.run() while not self.execute_called: yield self.poke_zk() yield self.assertFailure(d, SomeError) with open(self.state_file) as f: self.assertEquals(serializer.load(f.read()), initial_state) def test_ignore_equal_settings_version(self): """ A modified event whose version is not greater than the latest known version for that unit will be ignored. """ self.write_single_unit_state() self.scheduler.cb_change_settings([("u-1", 0)]) self.scheduler.run() self.assertEquals(len(self.executions), 0) def test_settings_version_0_on_add(self): """ When a unit is added, we assume its settings version to be 0, and therefore modified events with version 0 will be ignored. """ self.scheduler.cb_change_members([], ["u-1"]) self.scheduler.run() self.assertEquals(len(self.executions), 2) self.scheduler.cb_change_settings([("u-1", 0)]) self.assertEquals(len(self.executions), 2) self.assertEqual(self.executions[0][1].change_type, "joined") def test_membership_timeslip(self): """ Adds and removes are calculated based on known membership state, NOT on old_units. 
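If the watch delivers a stale old value, the recorded context_members
are authoritative and the mismatch is merely logged.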
""" with open(self.state_file, "w") as f: f.write(serializer.dump({ "context_members": ["u-1", "u-2"], "member_versions": {"u-1": 0, "u-2": 0}, "change_queue": []})) self.scheduler.cb_change_members(["u-2"], ["u-3", "u-4"]) self.scheduler.run() output = ( "members changed: old=['u-2'], new=['u-3', 'u-4']", "old does not match last recorded units: ['u-1', 'u-2']", "start", "executing hook for u-3:joined", "executing hook for u-4:joined", "executing hook for u-1:departed", "executing hook for u-2:departed", "executing hook for u-3:modified", "executing hook for u-4:modified\n") self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) juju-0.7.orig/juju/hooks/tests/hooks/echo-hook0000755000000000000000000000010612135220114017575 0ustar 00000000000000#!/bin/bash echo $MESSAGE juju-log -l ERROR $ERROR juju-log $DEFAULT juju-0.7.orig/juju/hooks/tests/hooks/fail-hook0000755000000000000000000000004212135220114017571 0ustar 00000000000000#!/bin/sh echo "FAIL" >&2 exit 1 juju-0.7.orig/juju/hooks/tests/hooks/hanging-hook0000755000000000000000000000014612135220114020276 0ustar 00000000000000#!/bin/bash sleep 0.05 && echo "Slept for 50ms" && sleep 1 && echo "Slept for 1s" && sleep 1000000 & juju-0.7.orig/juju/hooks/tests/hooks/noexec-hook0000644000000000000000000000000012135220114020126 0ustar 00000000000000juju-0.7.orig/juju/hooks/tests/hooks/sleep-hook0000755000000000000000000000002512135220114017767 0ustar 00000000000000#!/bin/bash sleep 10juju-0.7.orig/juju/hooks/tests/hooks/success-hook0000755000000000000000000000004512135220114020331 0ustar 00000000000000#!/bin/sh echo "EPIC WIN" && exit 0 juju-0.7.orig/juju/lib/__init__.py0000644000000000000000000000000012135220114015233 0ustar 00000000000000juju-0.7.orig/juju/lib/cache.py0000644000000000000000000000061312135220114014551 0ustar 00000000000000import time class CachedValue(object): def __init__(self, max_cache, value=None): self._max_cache = max_cache self.set(value) def get(self): delta = time.time() - self._timestamp if delta > self._max_cache: return None return self._value def set(self, value): self._value = value self._timestamp = time.time() juju-0.7.orig/juju/lib/filehash.py0000644000000000000000000000103512135220114015270 0ustar 00000000000000 def compute_file_hash(hash_type, filename): """Simple helper to compute the digest of a file. @param hash_type: A class like hashlib.sha256. @param filename: File path to compute the digest from. """ hash = hash_type() with open(filename) as file: # Chunk the digest extraction to avoid loading large # files in memory unnecessarily. while True: chunk = file.read(8192) if not chunk: break hash.update(chunk) return hash.hexdigest() juju-0.7.orig/juju/lib/format.py0000644000000000000000000001063312135220114015001 0ustar 00000000000000"""Utility functions and constants to support uniform i/o formatting.""" import json import os import yaml from juju.errors import JujuError class BaseFormat(object): """Maintains parallel code paths for input and output formatting through the subclasses PythonFormat (Python str formatting with JSON encoding) and YAMLFormat. """ def parse_keyvalue_pairs(self, pairs): """Parses key value pairs, using `_parse_value` for specific format""" data = {} for kv in pairs: if "=" not in kv: raise JujuError( "Expected `option=value`. 
Found `%s`" % kv) k, v = kv.split("=", 1) if v.startswith("@"): # Handle file input, any parsing/sanitization is done next # with respect to charm format filename = v[1:] try: with open(filename, "r") as f: v = f.read() except IOError: raise JujuError( "No such file or directory: %s (argument:%s)" % ( filename, k)) except Exception, e: raise JujuError("Bad file %s" % e) data[k] = self._parse_value(k, v) return data def _parse_value(self, key, value): """Interprets value as a str""" return value def should_delete(self, value): """Whether `value` implies corresponding key should be deleted""" return not value.strip() class PythonFormat(BaseFormat): """Supports backwards compatibility through str and JSON encoding.""" charm_format = 1 def format(self, data): """Formats `data` using Python str encoding""" return str(data) def format_raw(self, data): """Add extra \n seen in Python format, so not truly raw""" return self.format(data) + "\n" # For the old format: 1, using JSON serialization introduces some # subtle issues around Unicode conversion that then later results # in bugginess. For compatibility reasons, we keep these old bugs # around, by dumping and loading into JSON at appropriate points. def dump(self, data): """Dumps using JSON serialization""" return json.dumps(data) def load(self, data): """Loads data, but also converts str to Unicode""" return json.loads(data) class YAMLFormat(BaseFormat): """New format that uses YAML, so no unexpected encoding issues""" charm_format = 2 def format(self, data): """Formats `data` in Juju's preferred YAML format""" # Return value such that it roundtrips; this allows us to # report back the boolean false instead of the Python # output format, False if data is None: return "" serialized = yaml.dump( data, indent=4, default_flow_style=False, width=80, allow_unicode=True, Dumper=yaml.CSafeDumper) if serialized.endswith("\n...\n"): # Remove explicit doc end sentinel, still valid yaml serialized = serialized[0:-5] # Also remove any extra \n, will still be valid yaml return serialized.rstrip("\n") def format_raw(self, data): """Formats `data` as a raw string if str, otherwise as YAML""" if isinstance(data, str): return data else: return self.format(data) # Use the same format for dump dump = format def load(self, data): """Loads data safely, ensuring no Python specific type info leaks""" return yaml.load(data, Loader=yaml.CSafeLoader) def is_valid_charm_format(charm_format): """True if `charm_format` is a valid format""" return charm_format in (PythonFormat.charm_format, YAMLFormat.charm_format) def get_charm_formatter(charm_format): """Map `charm_format` to the implementing strategy for that format""" if charm_format == PythonFormat.charm_format: return PythonFormat() elif charm_format == YAMLFormat.charm_format: return YAMLFormat() else: raise JujuError( "Expected charm format to be either 1 or 2, got %s" % ( charm_format,)) def get_charm_formatter_from_env(): """Return the formatter specified by $_JUJU_CHARM_FORMAT""" return get_charm_formatter(int( os.environ.get("_JUJU_CHARM_FORMAT", "1"))) juju-0.7.orig/juju/lib/loader.py0000644000000000000000000000216012135220114014753 0ustar 00000000000000 _marker = object() def _get_module_function(specification): # converts foo.bar.baz to ['foo.bar', 'baz'] try: data = specification.rsplit('.', 1) except (ValueError, AttributeError): data = [] if len(data) != 2: raise ValueError("Invalid import specification: %r" % ( specification)) return data def load_symbol(specification): """load a symbol from a dot 
delimited path in the import namespace. returns (module, symbol) or raises ImportError """ module_path, symbol_name = _get_module_function(specification) module = __import__(module_path, fromlist=module_path.split()) symbol = getattr(module, symbol_name, _marker) return (module, symbol) def get_callable(specification): """ Convert a string version of a function name to the callable object. If no callable can be found an ImportError is raised. """ module, callback = load_symbol(specification) if callback is _marker or not callable(callback): raise ImportError("No callback found for %s" % ( specification)) return callback juju-0.7.orig/juju/lib/lxc/0000755000000000000000000000000012135220114013722 5ustar 00000000000000juju-0.7.orig/juju/lib/mocker.py0000644000000000000000000023245412135220114015000 0ustar 00000000000000""" Mocker Graceful platform for test doubles in Python: mocks, stubs, fakes, and dummies. Copyright (c) 2007-2010, Gustavo Niemeyer All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import __builtin__ import tempfile import unittest import inspect import shutil import types import sys import os import gc if sys.version_info < (2, 4): from sets import Set as set # pragma: nocover __all__ = ["Mocker", "Expect", "expect", "IS", "CONTAINS", "IN", "MATCH", "ANY", "ARGS", "KWARGS", "MockerTestCase"] __author__ = "Gustavo Niemeyer " __license__ = "BSD" __version__ = "1.0" ERROR_PREFIX = "[Mocker] " # -------------------------------------------------------------------- # Exceptions class MatchError(AssertionError): """Raised when an unknown expression is seen in playback mode.""" # -------------------------------------------------------------------- # Helper for chained-style calling. class expect(object): """This is a simple helper that allows a different call-style. With this class one can comfortably do chaining of calls to the mocker object responsible by the object being handled. 
For instance:: expect(obj.attr).result(3).count(1, 2) Is the same as:: obj.attr mocker.result(3) mocker.count(1, 2) """ __mocker__ = None def __init__(self, mock, attr=None): self._mock = mock self._attr = attr def __getattr__(self, attr): return self.__class__(self._mock, attr) def __call__(self, *args, **kwargs): mocker = self.__mocker__ if not mocker: mocker = self._mock.__mocker__ getattr(mocker, self._attr)(*args, **kwargs) return self def Expect(mocker): """Create an expect() "function" using the given Mocker instance. This helper allows defining an expect() "function" which works even in trickier cases such as: expect = Expect(mymocker) expect(iter(mock)).generate([1, 2, 3]) """ return type("Expect", (expect,), {"__mocker__": mocker}) # -------------------------------------------------------------------- # Extensions to Python's unittest. class MockerTestCase(unittest.TestCase): """unittest.TestCase subclass with Mocker support. @ivar mocker: The mocker instance. This is a convenience only. Mocker may easily be used with the standard C{unittest.TestCase} class if wanted. Test methods have a Mocker instance available on C{self.mocker}. At the end of each test method, expectations of the mocker will be verified, and any requested changes made to the environment will be restored. In addition to the integration with Mocker, this class provides a few additional helper methods. """ def __init__(self, methodName="runTest"): # So here is the trick: we take the real test method, wrap it on # a function that do the job we have to do, and insert it in the # *instance* dictionary, so that getattr() will return our # replacement rather than the class method. test_method = getattr(self, methodName, None) if test_method is not None: def test_method_wrapper(): try: result = test_method() except: raise else: if (self.mocker.is_recording() and self.mocker.get_events()): raise RuntimeError("Mocker must be put in replay " "mode with self.mocker.replay()") if (hasattr(result, "addCallback") and hasattr(result, "addErrback")): def verify(result): self.mocker.verify() return result result.addCallback(verify) else: self.mocker.verify() self.mocker.restore() return result # Copy all attributes from the original method.. for attr in dir(test_method): # .. unless they're present in our wrapper already. if not hasattr(test_method_wrapper, attr) or attr == "__doc__": setattr(test_method_wrapper, attr, getattr(test_method, attr)) setattr(self, methodName, test_method_wrapper) # We could overload run() normally, but other well-known testing # frameworks do it as well, and some of them won't call the super, # which might mean that cleanup wouldn't happen. With that in mind, # we make integration easier by using the following trick. run_method = self.run def run_wrapper(*args, **kwargs): try: return run_method(*args, **kwargs) finally: self.__cleanup() self.run = run_wrapper self.mocker = Mocker() self.expect = Expect(self.mocker) self.__cleanup_funcs = [] self.__cleanup_paths = [] super(MockerTestCase, self).__init__(methodName) def __call__(self, *args, **kwargs): # This is necessary for Python 2.3 only, because it didn't use run(), # which is supported above. 
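# __cleanup() removes any makeFile()/makeDir() temporaries, resets
# the mocker, and runs the callbacks registered with addCleanup().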
try: super(MockerTestCase, self).__call__(*args, **kwargs) finally: if sys.version_info < (2, 4): self.__cleanup() def __cleanup(self): for path in self.__cleanup_paths: if os.path.isfile(path): os.unlink(path) elif os.path.isdir(path): shutil.rmtree(path) self.mocker.reset() for func, args, kwargs in self.__cleanup_funcs: func(*args, **kwargs) def addCleanup(self, func, *args, **kwargs): self.__cleanup_funcs.append((func, args, kwargs)) def makeFile(self, content=None, suffix="", prefix="tmp", basename=None, dirname=None, path=None): """Create a temporary file and return the path to it. @param content: Initial content for the file. @param suffix: Suffix to be given to the file's basename. @param prefix: Prefix to be given to the file's basename. @param basename: Full basename for the file. @param dirname: Put file inside this directory. The file is removed after the test runs. """ if path is not None: self.__cleanup_paths.append(path) elif basename is not None: if dirname is None: dirname = tempfile.mkdtemp() self.__cleanup_paths.append(dirname) path = os.path.join(dirname, basename) else: fd, path = tempfile.mkstemp(suffix, prefix, dirname) self.__cleanup_paths.append(path) os.close(fd) if content is None: os.unlink(path) if content is not None: file = open(path, "w") file.write(content) file.close() return path def makeDir(self, suffix="", prefix="tmp", dirname=None, path=None): """Create a temporary directory and return the path to it. @param suffix: Suffix to be given to the file's basename. @param prefix: Prefix to be given to the file's basename. @param dirname: Put directory inside this parent directory. The directory is removed after the test runs. """ if path is not None: os.makedirs(path) else: path = tempfile.mkdtemp(suffix, prefix, dirname) self.__cleanup_paths.append(path) return path def failUnlessIs(self, first, second, msg=None): """Assert that C{first} is the same object as C{second}.""" if first is not second: raise self.failureException(msg or "%r is not %r" % (first, second)) def failIfIs(self, first, second, msg=None): """Assert that C{first} is not the same object as C{second}.""" if first is second: raise self.failureException(msg or "%r is %r" % (first, second)) def failUnlessIn(self, first, second, msg=None): """Assert that C{first} is contained in C{second}.""" if first not in second: raise self.failureException(msg or "%r not in %r" % (first, second)) def failUnlessStartsWith(self, first, second, msg=None): """Assert that C{first} starts with C{second}.""" if first[:len(second)] != second: raise self.failureException(msg or "%r doesn't start with %r" % (first, second)) def failIfStartsWith(self, first, second, msg=None): """Assert that C{first} doesn't start with C{second}.""" if first[:len(second)] == second: raise self.failureException(msg or "%r starts with %r" % (first, second)) def failUnlessEndsWith(self, first, second, msg=None): """Assert that C{first} starts with C{second}.""" if first[len(first)-len(second):] != second: raise self.failureException(msg or "%r doesn't end with %r" % (first, second)) def failIfEndsWith(self, first, second, msg=None): """Assert that C{first} doesn't start with C{second}.""" if first[len(first)-len(second):] == second: raise self.failureException(msg or "%r ends with %r" % (first, second)) def failIfIn(self, first, second, msg=None): """Assert that C{first} is not contained in C{second}.""" if first in second: raise self.failureException(msg or "%r in %r" % (first, second)) def failUnlessApproximates(self, first, second, 
tolerance, msg=None): """Assert that C{first} is near C{second} by at most C{tolerance}.""" if abs(first - second) > tolerance: raise self.failureException(msg or "abs(%r - %r) > %r" % (first, second, tolerance)) def failIfApproximates(self, first, second, tolerance, msg=None): """Assert that C{first} is far from C{second} by at least C{tolerance}. """ if abs(first - second) <= tolerance: raise self.failureException(msg or "abs(%r - %r) <= %r" % (first, second, tolerance)) def failUnlessMethodsMatch(self, first, second): """Assert that public methods in C{first} are present in C{second}. This method asserts that all public methods found in C{first} are also present in C{second} and accept the same arguments. C{first} may have its own private methods, though, and may not have all methods found in C{second}. Note that if a private method in C{first} matches the name of one in C{second}, their specification is still compared. This is useful to verify if a fake or stub class have the same API as the real class being simulated. """ first_methods = dict(inspect.getmembers(first, inspect.ismethod)) second_methods = dict(inspect.getmembers(second, inspect.ismethod)) for name, first_method in first_methods.iteritems(): first_argspec = inspect.getargspec(first_method) first_formatted = inspect.formatargspec(*first_argspec) second_method = second_methods.get(name) if second_method is None: if name[:1] == "_": continue # First may have its own private methods. raise self.failureException("%s.%s%s not present in %s" % (first.__name__, name, first_formatted, second.__name__)) second_argspec = inspect.getargspec(second_method) if first_argspec != second_argspec: second_formatted = inspect.formatargspec(*second_argspec) raise self.failureException("%s.%s%s != %s.%s%s" % (first.__name__, name, first_formatted, second.__name__, name, second_formatted)) def failUnlessRaises(self, excClass, callableObj, *args, **kwargs): """ Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. It returns the exception instance if it matches the given exception class. """ try: result = callableObj(*args, **kwargs) except excClass, e: return e else: excName = excClass if hasattr(excClass, "__name__"): excName = excClass.__name__ raise self.failureException( "%s not raised (%r returned)" % (excName, result)) assertIs = failUnlessIs assertIsNot = failIfIs assertIn = failUnlessIn assertNotIn = failIfIn assertStartsWith = failUnlessStartsWith assertNotStartsWith = failIfStartsWith assertEndsWith = failUnlessEndsWith assertNotEndsWith = failIfEndsWith assertApproximates = failUnlessApproximates assertNotApproximates = failIfApproximates assertMethodsMatch = failUnlessMethodsMatch assertRaises = failUnlessRaises # The following are missing in Python < 2.4. assertTrue = unittest.TestCase.failUnless assertFalse = unittest.TestCase.failIf # The following is provided for compatibility with Twisted's trial. assertIdentical = assertIs assertNotIdentical = assertIsNot failUnlessIdentical = failUnlessIs failIfIdentical = failIfIs # -------------------------------------------------------------------- # Mocker. 
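# A minimal sketch (not from the original source) of how the
# classinstancemethod descriptor below behaves: the wrapped function
# always receives the class as its first argument and the instance
# (or None, when looked up on the class itself) as its second:
#
#     class Example(object):
#         @classinstancemethod
#         def show(cls, obj, extra):
#             return cls.__name__, obj, extra
#
#     Example().show(1)   # -> ('Example', <Example instance>, 1)
#     Example.show(1)     # -> ('Example', None, 1)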
class classinstancemethod(object): def __init__(self, method): self.method = method def __get__(self, obj, cls=None): def bound_method(*args, **kwargs): return self.method(cls, obj, *args, **kwargs) return bound_method class MockerBase(object): """Controller of mock objects. A mocker instance is used to command recording and replay of expectations on any number of mock objects. Expectations should be expressed for the mock object while in record mode (the initial one) by using the mock object itself, and using the mocker (and/or C{expect()} as a helper) to define additional behavior for each event. For instance:: mock = mocker.mock() mock.hello() mocker.result("Hi!") mocker.replay() assert mock.hello() == "Hi!" mocker.restore() mocker.verify() In this short excerpt a mock object is being created, then an expectation of a call to the C{hello()} method was recorded, and when called the method should return the value C{"Hi!"}. Then, the mocker is put in replay mode, and the expectation is satisfied by calling the C{hello()} method, which indeed returns "Hi!". Finally, a call to the L{restore()} method is performed to undo any needed changes made in the environment, and the L{verify()} method is called to ensure that all defined expectations were met. The same logic can be expressed more elegantly using the C{with mocker:} statement, as follows:: mock = mocker.mock() mock.hello() mocker.result("Hi!") with mocker: assert mock.hello() == "Hi!" Also, the MockerTestCase class, which integrates the mocker into a unittest.TestCase subclass, may be used to reduce the overhead of controlling the mocker. A test could be written as follows:: class SampleTest(MockerTestCase): def test_hello(self): mock = self.mocker.mock() mock.hello() self.mocker.result("Hi!") self.mocker.replay() self.assertEquals(mock.hello(), "Hi!") """ _recorders = [] # For convenience only. on = expect class __metaclass__(type): def __init__(self, name, bases, dict): # Make independent lists on each subclass, inheriting from parent. self._recorders = list(getattr(self, "_recorders", ())) def __init__(self): self._recorders = self._recorders[:] self._events = [] self._recording = True self._ordering = False self._last_orderer = None def is_recording(self): """Return True if in recording mode, False if in replay mode. Recording is the initial state. """ return self._recording def replay(self): """Change to replay mode, where recorded events are reproduced. If already in replay mode, the mocker will be restored, with all expectations reset, and then put again in replay mode. An alternative and more comfortable way to replay changes is using the 'with' statement, as follows:: mocker = Mocker() <record events> with mocker: <reproduce events> The 'with' statement will automatically put mocker in replay mode, and will also verify if all events were correctly reproduced at the end (using L{verify()}), and also restore any changes done in the environment (with L{restore()}). Also check the MockerTestCase class, which integrates the unittest.TestCase class with mocker. """ if not self._recording: for event in self._events: event.restore() else: self._recording = False for event in self._events: event.replay() def restore(self): """Restore changes in the environment, and return to recording mode. This should always be called after the test is complete (succeeding or not). There are ways to call this method automatically on completion (e.g. using a C{with mocker:} statement, or using the L{MockerTestCase} class).
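        A minimal sketch of the manual form (C{<...>} marks elided test
        code)::

            mocker = Mocker()
            try:
                <record expectations, replay, exercise the code under test>
            finally:
                mocker.restore()
            mocker.verify()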
""" if not self._recording: self._recording = True for event in self._events: event.restore() def reset(self): """Reset the mocker state. This will restore environment changes, if currently in replay mode, and then remove all events previously recorded. """ if not self._recording: self.restore() self.unorder() del self._events[:] def get_events(self): """Return all recorded events.""" return self._events[:] def add_event(self, event): """Add an event. This method is used internally by the implementation, and shouldn't be needed on normal mocker usage. """ self._events.append(event) if self._ordering: orderer = event.add_task(Orderer(event.path)) if self._last_orderer: orderer.add_dependency(self._last_orderer) self._last_orderer = orderer return event def verify(self): """Check if all expectations were met, and raise AssertionError if not. The exception message will include a nice description of which expectations were not met, and why. """ errors = [] for event in self._events: try: event.verify() except AssertionError, e: error = str(e) if not error: raise RuntimeError("Empty error message from %r" % event) errors.append(error) if errors: message = [ERROR_PREFIX + "Unmet expectations:", ""] for error in errors: lines = error.splitlines() message.append("=> " + lines.pop(0)) message.extend([" " + line for line in lines]) message.append("") raise AssertionError(os.linesep.join(message)) def mock(self, spec_and_type=None, spec=None, type=None, name=None, count=True): """Return a new mock object. @param spec_and_type: Handy positional argument which sets both spec and type. @param spec: Method calls will be checked for correctness against the given class. @param type: If set, the Mock's __class__ attribute will return the given type. This will make C{isinstance()} calls on the object work. @param name: Name for the mock object, used in the representation of expressions. The name is rarely needed, as it's usually guessed correctly from the variable name used. @param count: If set to false, expressions may be executed any number of times, unless an expectation is explicitly set using the L{count()} method. By default, expressions are expected once. """ if spec_and_type is not None: spec = type = spec_and_type return Mock(self, spec=spec, type=type, name=name, count=count) def proxy(self, object, spec=True, type=True, name=None, count=True, passthrough=True): """Return a new mock object which proxies to the given object. Proxies are useful when only part of the behavior of an object is to be mocked. Unknown expressions may be passed through to the real implementation implicitly (if the C{passthrough} argument is True), or explicitly (using the L{passthrough()} method on the event). @param object: Real object to be proxied, and replaced by the mock on replay mode. It may also be an "import path", such as C{"time.time"}, in which case the object will be the C{time} function from the C{time} module. @param spec: Method calls will be checked for correctness against the given object, which may be a class or an instance where attributes will be looked up. Defaults to the the C{object} parameter. May be set to None explicitly, in which case spec checking is disabled. Checks may also be disabled explicitly on a per-event basis with the L{nospec()} method. @param type: If set, the Mock's __class__ attribute will return the given type. This will make C{isinstance()} calls on the object work. Defaults to the type of the C{object} parameter. May be set to None explicitly. 
@param name: Name for the mock object, used in the representation of expressions. The name is rarely needed, as it's usually guessed correctly from the variable name used. @param count: If set to false, expressions may be executed any number of times, unless an expectation is explicitly set using the L{count()} method. By default, expressions are expected once. @param passthrough: If set to False, passthrough of actions on the proxy to the real object will only happen when explicitly requested via the L{passthrough()} method. """ if isinstance(object, basestring): if name is None: name = object import_stack = object.split(".") attr_stack = [] while import_stack: module_path = ".".join(import_stack) try: __import__(module_path) except ImportError: attr_stack.insert(0, import_stack.pop()) if not import_stack: raise continue else: object = sys.modules[module_path] for attr in attr_stack: object = getattr(object, attr) break if isinstance(object, types.UnboundMethodType): object = object.im_func if spec is True: spec = object if type is True: type = __builtin__.type(object) return Mock(self, spec=spec, type=type, object=object, name=name, count=count, passthrough=passthrough) def replace(self, object, spec=True, type=True, name=None, count=True, passthrough=True): """Create a proxy, and replace the original object with the mock. On replay, the original object will be replaced by the returned proxy in all dictionaries found in the running interpreter via the garbage collecting system. This should cover module namespaces, class namespaces, instance namespaces, and so on. @param object: Real object to be proxied, and replaced by the mock on replay mode. It may also be an "import path", such as C{"time.time"}, in which case the object will be the C{time} function from the C{time} module. @param spec: Method calls will be checked for correctness against the given object, which may be a class or an instance where attributes will be looked up. Defaults to the C{object} parameter. May be set to None explicitly, in which case spec checking is disabled. Checks may also be disabled explicitly on a per-event basis with the L{nospec()} method. @param type: If set, the Mock's __class__ attribute will return the given type. This will make C{isinstance()} calls on the object work. Defaults to the type of the C{object} parameter. May be set to None explicitly. @param name: Name for the mock object, used in the representation of expressions. The name is rarely needed, as it's usually guessed correctly from the variable name used. @param passthrough: If set to False, passthrough of actions on the proxy to the real object will only happen when explicitly requested via the L{passthrough()} method. """ mock = self.proxy(object, spec, type, name, count, passthrough) event = self._get_replay_restore_event() event.add_task(ProxyReplacer(mock)) return mock def patch(self, object, spec=True): """Patch an existing object to reproduce recorded events. @param object: Class or instance to be patched. @param spec: Method calls will be checked for correctness against the given object, which may be a class or an instance where attributes will be looked up. Defaults to the C{object} parameter. May be set to None explicitly, in which case spec checking is disabled. Checks may also be disabled explicitly on a per-event basis with the L{nospec()} method. The result of this method is still a mock object, which can be used like any other mock object to record events.
The difference is that when the mocker is put on replay mode, the *real* object will be modified to behave according to recorded expectations. Patching works in individual instances, and also in classes. When an instance is patched, recorded events will only be considered on this specific instance, and other instances should behave normally. When a class is patched, the reproduction of events will be considered on any instance of this class once created (collectively). Observe that, unlike with proxies which catch only events done through the mock object, *all* accesses to recorded expectations will be considered; even these coming from the object itself (e.g. C{self.hello()} is considered if this method was patched). While this is a very powerful feature, and many times the reason to use patches in the first place, it's important to keep this behavior in mind. Patching of the original object only takes place when the mocker is put on replay mode, and the patched object will be restored to its original state once the L{restore()} method is called (explicitly, or implicitly with alternative conventions, such as a C{with mocker:} block, or a MockerTestCase class). """ if spec is True: spec = object patcher = Patcher() event = self._get_replay_restore_event() event.add_task(patcher) mock = Mock(self, object=object, patcher=patcher, passthrough=True, spec=spec) patcher.patch_attr(object, '__mocker_mock__', mock) return mock def act(self, path): """This is called by mock objects whenever something happens to them. This method is part of the implementation between the mocker and mock objects. """ if self._recording: event = self.add_event(Event(path)) for recorder in self._recorders: recorder(self, event) return Mock(self, path) else: # First run events that may run, then run unsatisfied events, then # ones not previously run. We put the index in the ordering tuple # instead of the actual event because we want a stable sort # (ordering between 2 events is undefined). events = self._events order = [(events[i].satisfied()*2 + events[i].has_run(), i) for i in range(len(events))] order.sort() postponed = None for weight, i in order: event = events[i] if event.matches(path): if event.may_run(path): return event.run(path) elif postponed is None: postponed = event if postponed is not None: return postponed.run(path) raise MatchError(ERROR_PREFIX + "Unexpected expression: %s" % path) def get_recorders(cls, self): """Return recorders associated with this mocker class or instance. This method may be called on mocker instances and also on mocker classes. See the L{add_recorder()} method for more information. """ return (self or cls)._recorders[:] get_recorders = classinstancemethod(get_recorders) def add_recorder(cls, self, recorder): """Add a recorder to this mocker class or instance. @param recorder: Callable accepting C{(mocker, event)} as parameters. This is part of the implementation of mocker. All registered recorders are called for translating events that happen during recording into expectations to be met once the state is switched to replay mode. This method may be called on mocker instances and also on mocker classes. When called on a class, the recorder will be used by all instances, and also inherited on subclassing. When called on instances, the recorder is added only to the given instance. 
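        For instance, a purely illustrative recorder that prints every
        recorded expression could be hooked up as::

            def print_recorder(mocker, event):
                print "Recorded: %s" % event.path

            Mocker.add_recorder(print_recorder)   # used by all Mocker classes
            mocker.add_recorder(print_recorder)   # used by this instance only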
""" (self or cls)._recorders.append(recorder) return recorder add_recorder = classinstancemethod(add_recorder) def remove_recorder(cls, self, recorder): """Remove the given recorder from this mocker class or instance. This method may be called on mocker classes and also on mocker instances. See the L{add_recorder()} method for more information. """ (self or cls)._recorders.remove(recorder) remove_recorder = classinstancemethod(remove_recorder) def result(self, value): """Make the last recorded event return the given value on replay. @param value: Object to be returned when the event is replayed. """ self.call(lambda *args, **kwargs: value) def generate(self, sequence): """Last recorded event will return a generator with the given sequence. @param sequence: Sequence of values to be generated. """ def generate(*args, **kwargs): for value in sequence: yield value self.call(generate) def throw(self, exception): """Make the last recorded event raise the given exception on replay. @param exception: Class or instance of exception to be raised. """ def raise_exception(*args, **kwargs): raise exception self.call(raise_exception) def call(self, func, with_object=False): """Make the last recorded event cause the given function to be called. @param func: Function to be called. The result of the function will be used as the event result. """ event = self._events[-1] if with_object and event.path.root_object is None: raise TypeError("Mock object isn't a proxy") event.add_task(FunctionRunner(func, with_root_object=with_object)) def count(self, min, max=False): """Last recorded event must be replayed between min and max times. @param min: Minimum number of times that the event must happen. @param max: Maximum number of times that the event must happen. If not given, it defaults to the same value of the C{min} parameter. If set to None, there is no upper limit, and the expectation is met as long as it happens at least C{min} times. """ event = self._events[-1] for task in event.get_tasks(): if isinstance(task, RunCounter): event.remove_task(task) event.add_task(RunCounter(min, max)) def is_ordering(self): """Return true if all events are being ordered. See the L{order()} method. """ return self._ordering def unorder(self): """Disable the ordered mode. See the L{order()} method for more information. """ self._ordering = False self._last_orderer = None def order(self, *path_holders): """Create an expectation of order between two or more events. @param path_holders: Objects returned as the result of recorded events. By default, mocker won't force events to happen precisely in the order they were recorded. Calling this method will change this behavior so that events will only match if reproduced in the correct order. There are two ways in which this method may be used. Which one is used in a given occasion depends only on convenience. If no arguments are passed, the mocker will be put in a mode where all the recorded events following the method call will only be met if they happen in order. When that's used, the mocker may be put back in unordered mode by calling the L{unorder()} method, or by using a 'with' block, like so:: with mocker.ordered(): In this case, only expressions in will be ordered, and the mocker will be back in unordered mode after the 'with' block. The second way to use it is by specifying precisely which events should be ordered. 
As an example:: mock = mocker.mock() expr1 = mock.hello() expr2 = mock.world expr3 = mock.x.y.z mocker.order(expr1, expr2, expr3) This method of ordering only works when the expression returns another object. Also check the L{after()} and L{before()} methods, which are alternative ways to perform this. """ if not path_holders: self._ordering = True return OrderedContext(self) last_orderer = None for path_holder in path_holders: if type(path_holder) is Path: path = path_holder else: path = path_holder.__mocker_path__ for event in self._events: if event.path is path: for task in event.get_tasks(): if isinstance(task, Orderer): orderer = task break else: orderer = Orderer(path) event.add_task(orderer) if last_orderer: orderer.add_dependency(last_orderer) last_orderer = orderer break def after(self, *path_holders): """Last recorded event must happen after events referred to. @param path_holders: Objects returned as the result of recorded events which should happen before the last recorded event As an example, the idiom:: expect(mock.x).after(mock.y, mock.z) is an alternative way to say:: expr_x = mock.x expr_y = mock.y expr_z = mock.z mocker.order(expr_y, expr_x) mocker.order(expr_z, expr_x) See L{order()} for more information. """ last_path = self._events[-1].path for path_holder in path_holders: self.order(path_holder, last_path) def before(self, *path_holders): """Last recorded event must happen before events referred to. @param path_holders: Objects returned as the result of recorded events which should happen after the last recorded event As an example, the idiom:: expect(mock.x).before(mock.y, mock.z) is an alternative way to say:: expr_x = mock.x expr_y = mock.y expr_z = mock.z mocker.order(expr_x, expr_y) mocker.order(expr_x, expr_z) See L{order()} for more information. """ last_path = self._events[-1].path for path_holder in path_holders: self.order(last_path, path_holder) def nospec(self): """Don't check method specification of real object on last event. By default, when using a mock created as the result of a call to L{proxy()}, L{replace()}, and C{patch()}, or when passing the spec attribute to the L{mock()} method, method calls on the given object are checked for correctness against the specification of the real object (or the explicitly provided spec). This method will disable that check specifically for the last recorded event. """ event = self._events[-1] for task in event.get_tasks(): if isinstance(task, SpecChecker): event.remove_task(task) def passthrough(self, result_callback=None): """Make the last recorded event run on the real object once seen. @param result_callback: If given, this function will be called with the result of the *real* method call as the only argument. This can only be used on proxies, as returned by the L{proxy()} and L{replace()} methods, or on mocks representing patched objects, as returned by the L{patch()} method. """ event = self._events[-1] if event.path.root_object is None: raise TypeError("Mock object isn't a proxy") event.add_task(PathExecuter(result_callback)) def __enter__(self): """Enter in a 'with' context. This will run replay().""" self.replay() return self def __exit__(self, type, value, traceback): """Exit from a 'with' context. This will run restore() at all times, but will only run verify() if the 'with' block itself hasn't raised an exception. Exceptions in that block are never swallowed. 
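        Roughly speaking (an illustrative expansion, not the literal
        implementation), the block::

            with mocker:
                <replayed code>

        behaves like::

            mocker.replay()
            try:
                <replayed code>
            finally:
                mocker.restore()
            mocker.verify()   # reached only if <replayed code> didn't raise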
""" self.restore() if type is None: self.verify() return False def _get_replay_restore_event(self): """Return unique L{ReplayRestoreEvent}, creating if needed. Some tasks only want to replay/restore. When that's the case, they shouldn't act on other events during replay. Also, they can all be put in a single event when that's the case. Thus, we add a single L{ReplayRestoreEvent} as the first element of the list. """ if not self._events or type(self._events[0]) != ReplayRestoreEvent: self._events.insert(0, ReplayRestoreEvent()) return self._events[0] class OrderedContext(object): def __init__(self, mocker): self._mocker = mocker def __enter__(self): return None def __exit__(self, type, value, traceback): self._mocker.unorder() class Mocker(MockerBase): __doc__ = MockerBase.__doc__ # Decorator to add recorders on the standard Mocker class. recorder = Mocker.add_recorder # -------------------------------------------------------------------- # Mock object. class Mock(object): def __init__(self, mocker, path=None, name=None, spec=None, type=None, object=None, passthrough=False, patcher=None, count=True): self.__mocker__ = mocker self.__mocker_path__ = path or Path(self, object) self.__mocker_name__ = name self.__mocker_spec__ = spec self.__mocker_object__ = object self.__mocker_passthrough__ = passthrough self.__mocker_patcher__ = patcher self.__mocker_replace__ = False self.__mocker_type__ = type self.__mocker_count__ = count def __mocker_act__(self, kind, args=(), kwargs={}, object=None): if self.__mocker_name__ is None: self.__mocker_name__ = find_object_name(self, 2) action = Action(kind, args, kwargs, self.__mocker_path__) path = self.__mocker_path__ + action if object is not None: path.root_object = object try: return self.__mocker__.act(path) except MatchError, exception: root_mock = path.root_mock if (path.root_object is not None and root_mock.__mocker_passthrough__): return path.execute(path.root_object) # Reinstantiate to show raise statement on traceback, and # also to make the traceback shown shorter. raise MatchError(str(exception)) except AssertionError, e: lines = str(e).splitlines() message = [ERROR_PREFIX + "Unmet expectation:", ""] message.append("=> " + lines.pop(0)) message.extend([" " + line for line in lines]) message.append("") raise AssertionError(os.linesep.join(message)) def __getattribute__(self, name): if name.startswith("__mocker_"): return super(Mock, self).__getattribute__(name) if name == "__class__": if self.__mocker__.is_recording() or self.__mocker_type__ is None: return type(self) return self.__mocker_type__ if name == "__length_hint__": # This is used by Python 2.6+ to optimize the allocation # of arrays in certain cases. Pretend it doesn't exist. 
raise AttributeError("No __length_hint__ here!") return self.__mocker_act__("getattr", (name,)) def __setattr__(self, name, value): if name.startswith("__mocker_"): return super(Mock, self).__setattr__(name, value) return self.__mocker_act__("setattr", (name, value)) def __delattr__(self, name): return self.__mocker_act__("delattr", (name,)) def __call__(self, *args, **kwargs): return self.__mocker_act__("call", args, kwargs) def __contains__(self, value): return self.__mocker_act__("contains", (value,)) def __getitem__(self, key): return self.__mocker_act__("getitem", (key,)) def __setitem__(self, key, value): return self.__mocker_act__("setitem", (key, value)) def __delitem__(self, key): return self.__mocker_act__("delitem", (key,)) def __len__(self): # MatchError is turned into an AttributeError so that list() and # friends act properly when trying to get length hints on # something that doesn't offer them. try: result = self.__mocker_act__("len") except MatchError, e: raise AttributeError(str(e)) if type(result) is Mock: return 0 return result def __nonzero__(self): try: result = self.__mocker_act__("nonzero") except MatchError, e: return True if type(result) is Mock: return True return result def __iter__(self): # XXX On py3k, when next() becomes __next__(), we'll be able # to return the mock itself because it will be considered # an iterator (we'll be mocking __next__ as well, which we # can't now). result = self.__mocker_act__("iter") if type(result) is Mock: return iter([]) return result # When adding a new action kind here, also add support for it on # Action.execute() and Path.__str__(). def find_object_name(obj, depth=0): """Try to detect how the object is named in a previous scope.""" try: frame = sys._getframe(depth+1) except: return None for name, frame_obj in frame.f_locals.iteritems(): if frame_obj is obj: return name self = frame.f_locals.get("self") if self is not None: try: items = list(self.__dict__.iteritems()) except: pass else: for name, self_obj in items: if self_obj is obj: return name return None # -------------------------------------------------------------------- # Action and path. class Action(object): def __init__(self, kind, args, kwargs, path=None): self.kind = kind self.args = args self.kwargs = kwargs self.path = path self._execute_cache = {} def __repr__(self): if self.path is None: return "Action(%r, %r, %r)" % (self.kind, self.args, self.kwargs) return "Action(%r, %r, %r, %r)" % \ (self.kind, self.args, self.kwargs, self.path) def __eq__(self, other): return (self.kind == other.kind and self.args == other.args and self.kwargs == other.kwargs) def __ne__(self, other): return not self.__eq__(other) def matches(self, other): return (self.kind == other.kind and match_params(self.args, self.kwargs, other.args, other.kwargs)) def execute(self, object): # This caching scheme may fail if the object gets deallocated before # the action, as the id might get reused. It's somewhat easy to fix # that with a weakref callback. For our uses, though, the object # should never get deallocated before the action itself, so we'll # just keep it simple.
if id(object) in self._execute_cache: return self._execute_cache[id(object)] execute = getattr(object, "__mocker_execute__", None) if execute is not None: result = execute(self, object) else: kind = self.kind if kind == "getattr": result = getattr(object, self.args[0]) elif kind == "setattr": result = setattr(object, self.args[0], self.args[1]) elif kind == "delattr": result = delattr(object, self.args[0]) elif kind == "call": result = object(*self.args, **self.kwargs) elif kind == "contains": result = self.args[0] in object elif kind == "getitem": result = object[self.args[0]] elif kind == "setitem": result = object[self.args[0]] = self.args[1] elif kind == "delitem": del object[self.args[0]] result = None elif kind == "len": result = len(object) elif kind == "nonzero": result = bool(object) elif kind == "iter": result = iter(object) else: raise RuntimeError("Don't know how to execute %r kind." % kind) self._execute_cache[id(object)] = result return result class Path(object): def __init__(self, root_mock, root_object=None, actions=()): self.root_mock = root_mock self.root_object = root_object self.actions = tuple(actions) self.__mocker_replace__ = False def parent_path(self): if not self.actions: return None return self.actions[-1].path parent_path = property(parent_path) def __add__(self, action): """Return a new path which includes the given action at the end.""" return self.__class__(self.root_mock, self.root_object, self.actions + (action,)) def __eq__(self, other): """Verify if the two paths are equal. Two paths are equal if they refer to the same mock object, and have the actions with equal kind, args and kwargs. """ if (self.root_mock is not other.root_mock or self.root_object is not other.root_object or len(self.actions) != len(other.actions)): return False for action, other_action in zip(self.actions, other.actions): if action != other_action: return False return True def matches(self, other): """Verify if the two paths are equivalent. Two paths are equal if they refer to the same mock object, and have the same actions performed on them. """ if (self.root_mock is not other.root_mock or len(self.actions) != len(other.actions)): return False for action, other_action in zip(self.actions, other.actions): if not action.matches(other_action): return False return True def execute(self, object): """Execute all actions sequentially on object, and return result. 
""" for action in self.actions: object = action.execute(object) return object def __str__(self): """Transform the path into a nice string such as obj.x.y('z').""" result = self.root_mock.__mocker_name__ or "" for action in self.actions: if action.kind == "getattr": result = "%s.%s" % (result, action.args[0]) elif action.kind == "setattr": result = "%s.%s = %r" % (result, action.args[0], action.args[1]) elif action.kind == "delattr": result = "del %s.%s" % (result, action.args[0]) elif action.kind == "call": args = [repr(x) for x in action.args] items = list(action.kwargs.iteritems()) items.sort() for pair in items: args.append("%s=%r" % pair) result = "%s(%s)" % (result, ", ".join(args)) elif action.kind == "contains": result = "%r in %s" % (action.args[0], result) elif action.kind == "getitem": result = "%s[%r]" % (result, action.args[0]) elif action.kind == "setitem": result = "%s[%r] = %r" % (result, action.args[0], action.args[1]) elif action.kind == "delitem": result = "del %s[%r]" % (result, action.args[0]) elif action.kind == "len": result = "len(%s)" % result elif action.kind == "nonzero": result = "bool(%s)" % result elif action.kind == "iter": result = "iter(%s)" % result else: raise RuntimeError("Don't know how to format kind %r" % action.kind) return result class SpecialArgument(object): """Base for special arguments for matching parameters.""" def __init__(self, object=None): self.object = object def __repr__(self): if self.object is None: return self.__class__.__name__ else: return "%s(%r)" % (self.__class__.__name__, self.object) def matches(self, other): return True def __eq__(self, other): return type(other) == type(self) and self.object == other.object class ANY(SpecialArgument): """Matches any single argument.""" ANY = ANY() class ARGS(SpecialArgument): """Matches zero or more positional arguments.""" ARGS = ARGS() class KWARGS(SpecialArgument): """Matches zero or more keyword arguments.""" KWARGS = KWARGS() class IS(SpecialArgument): def matches(self, other): return self.object is other def __eq__(self, other): return type(other) == type(self) and self.object is other.object class CONTAINS(SpecialArgument): def matches(self, other): try: other.__contains__ except AttributeError: try: iter(other) except TypeError: # If an object can't be iterated, and has no __contains__ # hook, it'd blow up on the test below. We test this in # advance to prevent catching more errors than we really # want. return False return self.object in other class IN(SpecialArgument): def matches(self, other): return other in self.object class MATCH(SpecialArgument): def matches(self, other): return bool(self.object(other)) def __eq__(self, other): return type(other) == type(self) and self.object is other.object def match_params(args1, kwargs1, args2, kwargs2): """Match the two sets of parameters, considering special parameters.""" has_args = ARGS in args1 has_kwargs = KWARGS in args1 if has_kwargs: args1 = [arg1 for arg1 in args1 if arg1 is not KWARGS] elif len(kwargs1) != len(kwargs2): return False if not has_args and len(args1) != len(args2): return False # Either we have the same number of kwargs, or unknown keywords are # accepted (KWARGS was used), so check just the ones in kwargs1. for key, arg1 in kwargs1.iteritems(): if key not in kwargs2: return False arg2 = kwargs2[key] if isinstance(arg1, SpecialArgument): if not arg1.matches(arg2): return False elif arg1 != arg2: return False # Keywords match. Now either we have the same number of # arguments, or ARGS was used. 
If ARGS wasn't used, arguments # must match one-on-one necessarily. if not has_args: for arg1, arg2 in zip(args1, args2): if isinstance(arg1, SpecialArgument): if not arg1.matches(arg2): return False elif arg1 != arg2: return False return True # Easy choice. Keywords are matching, and anything on args is accepted. if (ARGS,) == args1: return True # We have something different there. If we don't have positional # arguments on the original call, it can't match. if not args2: # Unless we have just several ARGS (which is bizarre, but..). for arg1 in args1: if arg1 is not ARGS: return False return True # Ok, all bets are lost. We have to actually do the more expensive # matching. This is an algorithm based on the idea of the Levenshtein # Distance between two strings, but heavily hacked for this purpose. args2l = len(args2) if args1[0] is ARGS: args1 = args1[1:] array = [0]*args2l else: array = [1]*args2l for i in range(len(args1)): last = array[0] if args1[i] is ARGS: for j in range(1, args2l): last, array[j] = array[j], min(array[j-1], array[j], last) else: array[0] = i or int(args1[i] != args2[0]) for j in range(1, args2l): last, array[j] = array[j], last or int(args1[i] != args2[j]) if 0 not in array: return False if array[-1] != 0: return False return True # -------------------------------------------------------------------- # Event and task base. class Event(object): """Aggregation of tasks that keep track of a recorded action. An event represents something that may or may not happen while the mocked environment is running, such as an attribute access, or a method call. The event is composed of several tasks that are orchestrated together to create a composed meaning for the event, including for which actions it should be run, what happens when it runs, and what the expectations are about the actions run. """ def __init__(self, path=None): self.path = path self._tasks = [] self._has_run = False def add_task(self, task): """Add a new task to this event.""" self._tasks.append(task) return task def remove_task(self, task): self._tasks.remove(task) def get_tasks(self): return self._tasks[:] def matches(self, path): """Return true if *all* tasks match the given path.""" for task in self._tasks: if not task.matches(path): return False return bool(self._tasks) def has_run(self): return self._has_run def may_run(self, path): """Return false if any task would certainly raise an error if run. This will call the C{may_run()} method on each task and return false if any of them returns false. """ for task in self._tasks: if not task.may_run(path): return False return True def run(self, path): """Run all tasks with the given action. @param path: The path of the expression run. Running an event means running all of its tasks individually and in order. An event should only ever be run if all of its tasks claim to match the given action. The result of this method will be the last result of a task which isn't None, or None if they're all None.
""" self._has_run = True result = None errors = [] for task in self._tasks: try: task_result = task.run(path) except AssertionError, e: error = str(e) if not error: raise RuntimeError("Empty error message from %r" % task) errors.append(error) else: if task_result is not None: result = task_result if errors: message = [str(self.path)] if str(path) != message[0]: message.append("- Run: %s" % path) for error in errors: lines = error.splitlines() message.append("- " + lines.pop(0)) message.extend([" " + line for line in lines]) raise AssertionError(os.linesep.join(message)) return result def satisfied(self): """Return true if all tasks are satisfied. Being satisfied means that there are no unmet expectations. """ for task in self._tasks: try: task.verify() except AssertionError: return False return True def verify(self): """Run verify on all tasks. The verify method is supposed to raise an AssertionError if the task has unmet expectations, with a one-line explanation about why this item is unmet. This method should be safe to be called multiple times without side effects. """ errors = [] for task in self._tasks: try: task.verify() except AssertionError, e: error = str(e) if not error: raise RuntimeError("Empty error message from %r" % task) errors.append(error) if errors: message = [str(self.path)] for error in errors: lines = error.splitlines() message.append("- " + lines.pop(0)) message.extend([" " + line for line in lines]) raise AssertionError(os.linesep.join(message)) def replay(self): """Put all tasks in replay mode.""" self._has_run = False for task in self._tasks: task.replay() def restore(self): """Restore the state of all tasks.""" for task in self._tasks: task.restore() class ReplayRestoreEvent(Event): """Helper event for tasks which need replay/restore but shouldn't match.""" def matches(self, path): return False class Task(object): """Element used to track one specific aspect on an event. A task is responsible for adding any kind of logic to an event. Examples of that are counting the number of times the event was made, verifying parameters if any, and so on. """ def matches(self, path): """Return true if the task is supposed to be run for the given path. """ return True def may_run(self, path): """Return false if running this task would certainly raise an error.""" return True def run(self, path): """Perform the task item, considering that the given action happened. """ def verify(self): """Raise AssertionError if expectations for this item are unmet. The verify method is supposed to raise an AssertionError if the task has unmet expectations, with a one-line explanation about why this item is unmet. This method should be safe to be called multiple times without side effects. """ def replay(self): """Put the task in replay mode. Any expectations of the task should be reset. """ def restore(self): """Restore any environmental changes made by the task. Verify should continue to work after this is called. """ # -------------------------------------------------------------------- # Task implementations. 
class OnRestoreCaller(Task): """Call a given callback when restoring.""" def __init__(self, callback): self._callback = callback def restore(self): self._callback() class PathMatcher(Task): """Match the action path against a given path.""" def __init__(self, path): self.path = path def matches(self, path): return self.path.matches(path) def path_matcher_recorder(mocker, event): event.add_task(PathMatcher(event.path)) Mocker.add_recorder(path_matcher_recorder) class RunCounter(Task): """Task which verifies if the number of runs is within given boundaries. """ def __init__(self, min, max=False): self.min = min if max is None: self.max = sys.maxint elif max is False: self.max = min else: self.max = max self._runs = 0 def replay(self): self._runs = 0 def may_run(self, path): return self._runs < self.max def run(self, path): self._runs += 1 if self._runs > self.max: self.verify() def verify(self): if not self.min <= self._runs <= self.max: if self._runs < self.min: raise AssertionError("Performed fewer times than expected.") raise AssertionError("Performed more times than expected.") class ImplicitRunCounter(RunCounter): """RunCounter inserted by default on any event. This is a way to differentiate explicitly added counters and implicit ones. """ def run_counter_recorder(mocker, event): """By default, any event is expected to run exactly once, unless counting is disabled for the mock.""" if event.path.root_mock.__mocker_count__: event.add_task(ImplicitRunCounter(1)) Mocker.add_recorder(run_counter_recorder) def run_counter_removal_recorder(mocker, event): """ Events created by getattr actions which lead to other events may be repeated any number of times. For that, we remove implicit run counters of any getattr actions leading to the current one. """ parent_path = event.path.parent_path for event in mocker.get_events()[::-1]: if (event.path is parent_path and event.path.actions[-1].kind == "getattr"): for task in event.get_tasks(): if type(task) is ImplicitRunCounter: event.remove_task(task) Mocker.add_recorder(run_counter_removal_recorder) class MockReturner(Task): """Return a mock based on the action path.""" def __init__(self, mocker): self.mocker = mocker def run(self, path): return Mock(self.mocker, path) def mock_returner_recorder(mocker, event): """Events that lead to other events must return mock objects.""" parent_path = event.path.parent_path for event in mocker.get_events(): if event.path is parent_path: for task in event.get_tasks(): if isinstance(task, MockReturner): break else: event.add_task(MockReturner(mocker)) break Mocker.add_recorder(mock_returner_recorder) class FunctionRunner(Task): """Task that runs a function every time the event runs. Arguments of the last action in the path are passed to the function, and the function result is also returned. """ def __init__(self, func, with_root_object=False): self._func = func self._with_root_object = with_root_object def run(self, path): action = path.actions[-1] if self._with_root_object: return self._func(path.root_object, *action.args, **action.kwargs) else: return self._func(*action.args, **action.kwargs) class PathExecuter(Task): """Task that executes a path in the real object, and returns the result.""" def __init__(self, result_callback=None): self._result_callback = result_callback def get_result_callback(self): return self._result_callback def run(self, path): result = path.execute(path.root_object) if self._result_callback is not None: self._result_callback(result) return result class Orderer(Task): """Task to establish an order relation between two events.
An orderer task will only match once all its dependencies have been run. """ def __init__(self, path): self.path = path self._run = False self._dependencies = [] def replay(self): self._run = False def has_run(self): return self._run def may_run(self, path): for dependency in self._dependencies: if not dependency.has_run(): return False return True def run(self, path): for dependency in self._dependencies: if not dependency.has_run(): raise AssertionError("Should be after: %s" % dependency.path) self._run = True def add_dependency(self, orderer): self._dependencies.append(orderer) def get_dependencies(self): return self._dependencies class SpecChecker(Task): """Task to check if arguments of the last action conform to a real method. """ def __init__(self, method): self._method = method self._unsupported = False if method: try: self._args, self._varargs, self._varkwargs, self._defaults = \ inspect.getargspec(method) except TypeError: self._unsupported = True else: if self._defaults is None: self._defaults = () if type(method) is type(self.run): self._args = self._args[1:] def get_method(self): return self._method def _raise(self, message): spec = inspect.formatargspec(self._args, self._varargs, self._varkwargs, self._defaults) raise AssertionError("Specification is %s%s: %s" % (self._method.__name__, spec, message)) def verify(self): if not self._method: raise AssertionError("Method not found in real specification") def may_run(self, path): try: self.run(path) except AssertionError: return False return True def run(self, path): if not self._method: raise AssertionError("Method not found in real specification") if self._unsupported: return # Can't check it. Happens with builtin functions. :-( action = path.actions[-1] obtained_len = len(action.args) obtained_kwargs = action.kwargs.copy() nodefaults_len = len(self._args) - len(self._defaults) for i, name in enumerate(self._args): if i < obtained_len and name in action.kwargs: self._raise("%r provided twice" % name) if (i >= obtained_len and i < nodefaults_len and name not in action.kwargs): self._raise("%r not provided" % name) obtained_kwargs.pop(name, None) if obtained_len > len(self._args) and not self._varargs: self._raise("too many args provided") if obtained_kwargs and not self._varkwargs: self._raise("unknown kwargs: %s" % ", ".join(obtained_kwargs)) def spec_checker_recorder(mocker, event): spec = event.path.root_mock.__mocker_spec__ if spec: actions = event.path.actions if len(actions) == 1: if actions[0].kind == "call": method = getattr(spec, "__call__", None) event.add_task(SpecChecker(method)) elif len(actions) == 2: if actions[0].kind == "getattr" and actions[1].kind == "call": method = getattr(spec, actions[0].args[0], None) event.add_task(SpecChecker(method)) Mocker.add_recorder(spec_checker_recorder) class ProxyReplacer(Task): """Task which installs and deinstalls proxy mocks. This task will replace a real object by a mock in all dictionaries found in the running interpreter via the garbage collecting system. 
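    An illustrative sketch of the effect at the API level (the actual
    swapping is done by C{global_replace()} below)::

        mock = mocker.replace("time.time")
        mock()
        mocker.result(42.0)
        with mocker:
            import time
            assert time.time() == 42.0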
""" def __init__(self, mock): self.mock = mock self.__mocker_replace__ = False def replay(self): global_replace(self.mock.__mocker_object__, self.mock) def restore(self): global_replace(self.mock, self.mock.__mocker_object__) def global_replace(remove, install): """Replace object 'remove' with object 'install' on all dictionaries.""" for referrer in gc.get_referrers(remove): if (type(referrer) is dict and referrer.get("__mocker_replace__", True)): for key, value in list(referrer.iteritems()): if value is remove: referrer[key] = install class Undefined(object): def __repr__(self): return "Undefined" Undefined = Undefined() class Patcher(Task): def __init__(self): super(Patcher, self).__init__() self._monitored = {} # {kind: {id(object): object}} self._patched = {} def is_monitoring(self, obj, kind): monitored = self._monitored.get(kind) if monitored: if id(obj) in monitored: return True cls = type(obj) if issubclass(cls, type): cls = obj bases = set([id(base) for base in cls.__mro__]) bases.intersection_update(monitored) return bool(bases) return False def monitor(self, obj, kind): if kind not in self._monitored: self._monitored[kind] = {} self._monitored[kind][id(obj)] = obj def patch_attr(self, obj, attr, value): original = obj.__dict__.get(attr, Undefined) self._patched[id(obj), attr] = obj, attr, original setattr(obj, attr, value) def get_unpatched_attr(self, obj, attr): cls = type(obj) if issubclass(cls, type): cls = obj result = Undefined for mro_cls in cls.__mro__: key = (id(mro_cls), attr) if key in self._patched: result = self._patched[key][2] if result is not Undefined: break elif attr in mro_cls.__dict__: result = mro_cls.__dict__.get(attr, Undefined) break if isinstance(result, object) and hasattr(type(result), "__get__"): if cls is obj: obj = None return result.__get__(obj, cls) return result def _get_kind_attr(self, kind): if kind == "getattr": return "__getattribute__" return "__%s__" % kind def replay(self): for kind in self._monitored: attr = self._get_kind_attr(kind) seen = set() for obj in self._monitored[kind].itervalues(): cls = type(obj) if issubclass(cls, type): cls = obj if cls not in seen: seen.add(cls) unpatched = getattr(cls, attr, Undefined) self.patch_attr(cls, attr, PatchedMethod(kind, unpatched, self.is_monitoring)) self.patch_attr(cls, "__mocker_execute__", self.execute) def restore(self): for obj, attr, original in self._patched.itervalues(): if original is Undefined: delattr(obj, attr) else: setattr(obj, attr, original) self._patched.clear() def execute(self, action, object): attr = self._get_kind_attr(action.kind) unpatched = self.get_unpatched_attr(object, attr) try: return unpatched(*action.args, **action.kwargs) except AttributeError: type, value, traceback = sys.exc_info() if action.kind == "getattr": # The normal behavior of Python is to try __getattribute__, # and if it raises AttributeError, try __getattr__. We've # tried the unpatched __getattribute__ above, and we'll now # try __getattr__. 
try: __getattr__ = unpatched("__getattr__") except AttributeError: pass else: return __getattr__(*action.args, **action.kwargs) raise type, value, traceback class PatchedMethod(object): def __init__(self, kind, unpatched, is_monitoring): self._kind = kind self._unpatched = unpatched self._is_monitoring = is_monitoring def __get__(self, obj, cls=None): object = obj or cls if not self._is_monitoring(object, self._kind): return self._unpatched.__get__(obj, cls) def method(*args, **kwargs): if self._kind == "getattr" and args[0].startswith("__mocker_"): return self._unpatched.__get__(obj, cls)(args[0]) mock = object.__mocker_mock__ return mock.__mocker_act__(self._kind, args, kwargs, object) return method def __call__(self, obj, *args, **kwargs): # At least with __getattribute__, Python seems to use *both* the # descriptor API and also call the class attribute directly. It # looks like an interpreter bug, or at least an undocumented # inconsistency. return self.__get__(obj)(*args, **kwargs) def patcher_recorder(mocker, event): mock = event.path.root_mock if mock.__mocker_patcher__ and len(event.path.actions) == 1: patcher = mock.__mocker_patcher__ patcher.monitor(mock.__mocker_object__, event.path.actions[0].kind) Mocker.add_recorder(patcher_recorder) juju-0.7.orig/juju/lib/pick.py0000644000000000000000000000240612135220114014436 0ustar 00000000000000import itertools _marker = object() def pick_all_key(iterable, **kwargs): """Return all elements having the key/value pairs listed in kwargs.""" def filtermethod(element): for k, v in kwargs.iteritems(): if element[k] != v: return False return True return itertools.ifilter(filtermethod, iterable) def pick_key(iterable, **kwargs): """Return the first element of iterable with all key/value pairs. If no matching element is found None is returned. """ try: return pick_all_key(iterable, **kwargs).next() except StopIteration: return None def pick_all_attr(iterable, **kwargs): """Return all elements having the attribute/value pairs listed in kwargs.""" def filtermethod(element): for k, v in kwargs.iteritems(): el = getattr(element, k, _marker) if el is _marker or el != v: return False return True return itertools.ifilter(filtermethod, iterable) def pick_attr(iterable, **kwargs): """Return the first element of iterable with all attribute/value pairs. If no matching element is found None is returned. """ try: return pick_all_attr(iterable, **kwargs).next() except StopIteration: return None juju-0.7.orig/juju/lib/port.py0000644000000000000000000000053612135220114014476 0ustar 00000000000000import socket def get_open_port(host=""): """Get an open port on the machine.
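    Note that the socket used to discover the port is closed before the port
    number is returned, so another process could in principle grab the port
    before the caller binds it; for tests this small race is normally
    acceptable. For example::

        port = get_open_port()
        server_sock = socket.socket()
        server_sock.bind(("", port))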
""" temp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) temp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) temp_sock.bind((host, 0)) port = temp_sock.getsockname()[1] temp_sock.close() del temp_sock return port juju-0.7.orig/juju/lib/schema.py0000644000000000000000000002327112135220114014753 0ustar 00000000000000"""A schema system for validation of dict-based values.""" import re class SchemaError(Exception): """Raised when invalid input is received.""" def __init__(self, path, message): self.path = path self.message = message info = "%s: %s" % ("".join(self.path), self.message) super(Exception, self).__init__(info) class SchemaExpectationError(SchemaError): """Raised when an expected value is not found.""" def __init__(self, path, expected, got): self.expected = expected self.got = got message = "expected %s, got %s" % (expected, got) super(SchemaExpectationError, self).__init__(path, message) class Constant(object): """Something that must be equal to a constant value.""" def __init__(self, value): self.value = value def coerce(self, value, path): if value != self.value: raise SchemaExpectationError(path, repr(self.value), repr(value)) return value class Any(object): """Anything at all.""" def coerce(self, value, path): return value class OneOf(object): """Must necessarily match one of the given schemas.""" def __init__(self, *schemas): """ @param schemas: Any number of other schema objects. """ self.schemas = schemas def coerce(self, value, path): """ The result of the first schema which doesn't raise L{SchemaError} from its C{coerce} method will be returned. """ best_error = None for i, schema in enumerate(self.schemas): try: return schema.coerce(value, path) except SchemaError, be: if not best_error or len(be.path) > len(best_error.path): best_error = be raise best_error class Bool(object): """Something that must be a C{bool}.""" def coerce(self, value, path): if not isinstance(value, bool): raise SchemaExpectationError(path, "bool", repr(value)) return value class Int(object): """Something that must be an C{int} or C{long}.""" def coerce(self, value, path): if not isinstance(value, (int, long)): raise SchemaExpectationError(path, "int", repr(value)) return value class Float(object): """Something that must be an C{int}, C{long}, or C{float}.""" def coerce(self, value, path): if not isinstance(value, (int, long, float)): raise SchemaExpectationError(path, "number", repr(value)) return value class String(object): """Something that must be a C{str}.""" def coerce(self, value, path): if not isinstance(value, str): raise SchemaExpectationError(path, "string", repr(value)) return value class Unicode(object): """Something that must be a C{unicode}.""" def coerce(self, value, path): if not isinstance(value, unicode): raise SchemaExpectationError(path, "unicode", repr(value)) return value class Regex(object): """Something that must be a valid Python regular expression.""" def coerce(self, value, path): try: regex = re.compile(value) except re.error: raise SchemaExpectationError(path, "regex", repr(value)) return regex class UnicodeOrString(object): """Something that must be a C{unicode} or {str}. If the value is a C{str}, it will automatically be decoded. """ def __init__(self, encoding): """ @param encoding: The encoding to automatically decode C{str}s with. 
""" self.encoding = encoding def coerce(self, value, path): if isinstance(value, str): try: value = value.decode(self.encoding) except UnicodeDecodeError: raise SchemaExpectationError( path, "unicode or %s string" % self.encoding, repr(value)) elif not isinstance(value, unicode): raise SchemaExpectationError( path, "unicode or %s string" % self.encoding, repr(value)) return value class List(object): """Something which must be a C{list}.""" def __init__(self, schema): """ @param schema: The schema that all values of the list must match. """ self.schema = schema def coerce(self, value, path): if not isinstance(value, list): raise SchemaExpectationError(path, "list", repr(value)) new_list = list(value) path.extend(["[", "?", "]"]) try: for i, subvalue in enumerate(value): path[-2] = str(i) new_list[i] = self.schema.coerce(subvalue, path) finally: del path[-3:] return new_list class Tuple(object): """Something which must be a fixed-length tuple.""" def __init__(self, *schema): """ @param schema: A sequence of schemas, which will be applied to each value in the tuple respectively. """ self.schema = schema def coerce(self, value, path): if not isinstance(value, tuple): raise SchemaExpectationError(path, "tuple", repr(value)) if len(value) != len(self.schema): raise SchemaExpectationError( path, "tuple with %d elements" % len(self.schema), repr(value)) new_value = [] path.extend(["[", "?", "]"]) try: for i, (schema, value) in enumerate(zip(self.schema, value)): path[-2] = str(i) new_value.append(schema.coerce(value, path)) finally: del path[-3:] return tuple(new_value) class Dict(object): """Something which must be a C{dict} with arbitrary keys.""" def __init__(self, key_schema, value_schema): """ @param key_schema: The schema that keys must match. @param value_schema: The schema that values must match. """ self.key_schema = key_schema self.value_schema = value_schema def coerce(self, value, path): if not isinstance(value, dict): raise SchemaExpectationError(path, "dict", repr(value)) new_dict = {} key_path = path if not path: value_path = ["?"] else: value_path = path + [".", "?"] for key, subvalue in value.items(): new_key = self.key_schema.coerce(key, key_path) try: value_path[-1] = str(key) except ValueError: value_path[-1] = repr(key) new_subvalue = self.value_schema.coerce(subvalue, value_path) new_dict[new_key] = new_subvalue return new_dict class KeyDict(object): """Something which must be a C{dict} with defined keys. The keys must be constant and the values must match a per-key schema. """ def __init__(self, schema, optional=None): """ @param schema: A dict mapping keys to schemas that the values of those keys must match. """ self.optional = set(optional or ()) self.schema = schema def coerce(self, value, path): new_dict = {} if not isinstance(value, dict): raise SchemaExpectationError(path, "dict", repr(value)) path = path[:] if path: path.append(".") path.append("?") for k, v in value.iteritems(): if k in self.schema: try: path[-1] = str(k) except ValueError: path[-1] = repr(k) new_dict[k] = self.schema[k].coerce(v, path) else: # Just preserve entries which are not in the schema. # This is less likely to eat important values due to # different app versions being used, for instance. new_dict[k] = v for k in self.schema: if k not in value and k not in self.optional: path[-1] = k raise SchemaError(path, "required value not found") # No need to restore path. It was copied. 
return new_dict class SelectDict(object): """Something that must be a C{dict} whose schema depends on some value.""" def __init__(self, key, schemas): """ @param key: a key we expect to be in each of the possible schemas, which we use to select which schema to coerce to. @param schemas: a dictionary mapping values for C{key} to schemas, to which the eventual value should be coerced. """ self.key = key self.schemas = schemas def coerce(self, value, path): if self.key not in value: raise SchemaError( path + ['.', self.key], "required value not found") selected = value[self.key] return self.schemas[selected].coerce(value, path) class OAuthString(String): """A L{String} containing OAuth information, colon-separated. The string should contain three parts:: consumer-key:resource-key:resource-secret Each part is stripped of leading and trailing whitespace. @return: A 3-tuple of C{consumer-key}, C{resource-key}, C{resource-secret}. """ def coerce(self, value, path): value = super(OAuthString, self).coerce(value, path) parts = tuple(part.strip() for part in value.split(":")) if len(parts) != 3: raise SchemaError( path, "does not contain three colon-separated parts") if "" in parts: raise SchemaError( path, "one or more parts are empty") return parts juju-0.7.orig/juju/lib/serializer.py0000644000000000000000000000073712135220114015666 0ustar 00000000000000from yaml import CSafeLoader, CSafeDumper, Mark from yaml import dump as _dump from yaml import load as _load def dump(value): return _dump(value, Dumper=CSafeDumper) yaml_dump = dump def load(value): return _load(value, Loader=CSafeLoader) yaml_load = load def yaml_mark_with_path(path, mark): # yaml c ext, cant be modded, convert to capture path return Mark( path, mark.index, mark.line, mark.column, mark.buffer, mark.pointer) juju-0.7.orig/juju/lib/service.py0000644000000000000000000001105212135220114015145 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from twisted.internet.threads import deferToThread from juju.errors import ServiceError import os import subprocess def _check_call(args, env=None, output_path=None): if not output_path: output_path = os.devnull with open(output_path, "a") as f: return subprocess.check_call( args, stdout=f.fileno(), stderr=f.fileno(), env=env) def _cat(filename, use_sudo=False): args = ("cat", filename) if use_sudo and not os.access(filename, os.R_OK): args = ("sudo",) + args p = subprocess.Popen( args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) stdout_data, _ = p.communicate() r = p.returncode return (r, stdout_data) class TwistedDaemonService(object): """Manage the starting and stopping of an Agent. This manager tracks the agent via its --pidfile. The pidfile argument specifies the location of the pid file that is used to track this service. 
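A minimal usage sketch (the agent name, pidfile and command below
    are illustrative only)::

      >> service = TwistedDaemonService(
      ..     "juju-machine-agent", "/tmp/machine-agent.pid")
      >> service.set_command(["/usr/bin/twistd", "-y", "agent.tac"])
      >> deferred = service.start()

    When managing a daemon, set_command() appends a --pidfile argument
    automatically if the command does not already carry one.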
""" def __init__(self, name, pidfile, use_sudo=False): self._name = name self._use_sudo = use_sudo self._description = None self._environ = None self._command = None self._daemon = True self._output_path = None self._pid_path = pidfile self._pid = None @property def output_path(self): if self._output_path is not None: return self._output_path return "/tmp/%s.output" % self._name @output_path.setter def output_path(self, path): self._output_path = path def set_description(self, description): self._description = description def set_daemon(self, value): self._daemon = bool(value) def set_environ(self, environ): for k, v in environ.items(): environ[k] = str(v) self._environ = environ def set_command(self, command): if self._daemon: if "--pidfile" not in command: command += ["--pidfile", self._pid_path] else: # pid file is in command (consume it for get_pid) idx = command.index("--pidfile") self._pid_path = command[idx+1] self._command = command @inlineCallbacks def _trash_output(self): if os.path.exists(self.output_path): # Just using os.unlink will fail when we're running TEST_SUDO # tests which hit this code path (because root will own # self.output_path) yield self._call("rm", "-f", self.output_path) if os.path.exists(self._pid_path): yield self._call("rm", "-f", self._pid_path) def _call(self, *args, **kwargs): if self._use_sudo: if self._environ: _args = ["%s=%s" % (k, v) for k, v in self._environ.items()] else: _args = [] _args.insert(0, "sudo") _args.extend(args) args = _args return deferToThread(_check_call, args, env=self._environ, output_path=self.output_path) def install(self): if self._command is None: raise ServiceError("Cannot manage agent: %s no command set" % ( self._name)) @inlineCallbacks def start(self): if (yield self.is_running()): raise ServiceError( "%s already running: pid (%s)" % ( self._name, self.get_pid())) if not self.is_installed(): yield self.install() yield self._trash_output() yield self._call(*self._command, output_path=self.output_path) @inlineCallbacks def destroy(self): if (yield self.is_running()): yield self._call("kill", self.get_pid()) yield self._trash_output() def get_pid(self): if self._pid != None: return self._pid if not os.path.exists(self._pid_path): return None r, data = _cat(self._pid_path, use_sudo=self._use_sudo) if r != 0: return None # verify that pid is a number but leave # it as a string suitable for passing to kill if not data.strip().isdigit(): return None pid = data.strip() self._pid = pid return self._pid def is_running(self): pid = self.get_pid() if not pid: return False proc_file = "/proc/%s" % pid if not os.path.exists(proc_file): return False return True def is_installed(self): return False juju-0.7.orig/juju/lib/statemachine.py0000644000000000000000000003331112135220114016154 0ustar 00000000000000"""A simple state machine for twisted applications. Responsibilities are divided between three classes. A workflow class, composed of transitions, and responsible for verifying the transitions available from each state. The transitions define their endpoints, and optionally a transition action, and an error transition. When the transition is executed to move a context between two endpoint states, the transition action is invoked. If it fails with a TransitionError, the error transition is fired. If it succeeds, it can return a dictionary of values. These values are stored. The workflow state class, forms the basis for interacting with the workflow system. It bridges an arbitrary domain objectcontext, with its associated workflow. 
A workflow state is used to fire transitions, to store and load state, and as a location for defining any relevant transition actions. """ import logging from twisted.internet.defer import DeferredLock, inlineCallbacks, returnValue class WorkflowError(Exception): pass class InvalidStateError(WorkflowError): pass class InvalidTransitionError(WorkflowError): pass class TransitionError(WorkflowError): pass log = logging.getLogger("statemachine") def class_name(instance): return instance.__class__.__name__.lower() class _ExitCaller(object): def __init__(self, func): self._func = func def __enter__(self): pass def __exit__(self, *exc_info): self._func() class WorkflowState(object): _workflow = None def __init__(self, workflow=None): if workflow: self._workflow = workflow self._observer = None self._lock = DeferredLock() @inlineCallbacks def lock(self): yield self._lock.acquire() returnValue(_ExitCaller(self._lock.release)) def _assert_locked(self): """Should be called at the start of any method which changes state. This is a frankly pitiful hack that should (handwave handwave) help people to use this correctly; it doesn't stop anyone from calling write methods on this object while someone *else* holds a lock, but hopefully it will help us catch these situations when unit testing. This method only exists as a place to put this documentation. """ assert self._lock.locked @inlineCallbacks def get_available_transitions(self): """Return a list of valid transitions from the current state. """ state_id = yield self.get_state() returnValue(self._workflow.get_transitions(state_id)) @inlineCallbacks def fire_transition_alias(self, transition_alias): """Fire a transition with the matching alias. A transition from the current state with the given alias will be located. The purpose of an alias is to allow groups of transitions, each from a different state, to be invoked unambiguously by a caller, for example:: >> state.fire_transition_alias("upgrade") >> state.fire_transition_alias("settings-changed") >> state.fire_transition_alias("error") All will invoke the appropriate transition from their state without the caller having to do state inspection or transition id mangling. Ambiguous (multiple) or no matching transitions cause an InvalidTransitionError to be raised. """ self._assert_locked() found = [] for t in (yield self.get_available_transitions()): if transition_alias == t.alias: found.append(t) if len(found) > 1: current_state = yield self.get_state() raise InvalidTransitionError( "Multiple transitions for alias:%s state:%s transitions:%s" % ( transition_alias, current_state, found)) if len(found) == 0: current_state = yield self.get_state() raise InvalidTransitionError( "No transition found for alias:%s state:%s" % ( transition_alias, current_state)) value = yield self.fire_transition(found[0].transition_id) returnValue(value) @inlineCallbacks def transition_state(self, state_id): """Attempt a transition to the given state. Will look for a transition to the given state from the current state, and execute it if one exists. Returns a boolean value based on whether the state was achieved.
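For example (assuming a "started" state is reachable from the
        current state)::

          >> success = yield state.transition_state("started")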
""" self._assert_locked() # verify it's a valid state id if not self._workflow.has_state(state_id): raise InvalidStateError(state_id) transitions = yield self.get_available_transitions() for transition in transitions: if transition.destination == state_id: break else: returnValue(False) log.debug("%s: transition state (%s -> %s)", class_name(self), transition.source, transition.destination) result = yield self.fire_transition(transition.transition_id) returnValue(result) @inlineCallbacks def fire_transition(self, transition_id, **state_variables): """Fire a transition with given id. Invokes any transition actions, saves state and state variables, and error transitions as needed. """ self._assert_locked() # Verify and retrieve the transition. available = yield self.get_available_transitions() available_ids = [t.transition_id for t in available] if not transition_id in available_ids: current_state = yield self.get_state() raise InvalidTransitionError( "%r not a valid transition for state %s" % ( transition_id, current_state)) yield self.set_inflight(transition_id) transition = self._workflow.get_transition(transition_id) log.debug("%s: transition %s (%s -> %s) %r", class_name(self), transition_id, transition.source, transition.destination, state_variables) # Execute any per transition action. state_variables = state_variables action_id = "do_%s" % transition_id action = getattr(self, action_id, None) if callable(action): try: log.debug("%s: execute action %s", class_name(self), action.__name__) variables = yield action() if isinstance(variables, dict): state_variables.update(variables) except TransitionError, e: # If an error happens during the transition, allow for # executing an error transition. if transition.error_transition_id: log.debug("%s: executing error transition %s, %s", class_name(self), transition.error_transition_id, e) yield self.fire_transition( transition.error_transition_id) else: yield self.set_inflight(None) log.debug("%s: transition %s failed %s", class_name(self), transition_id, e) # Bail, and note the error as a return value. returnValue(False) # Set the state with state variables (and implicitly clear inflight) yield self.set_state(transition.destination, **state_variables) log.debug("%s: transition complete %s (state %s) %r", class_name(self), transition_id, transition.destination, state_variables) yield self._fire_automatic_transitions() returnValue(True) @inlineCallbacks def _fire_automatic_transitions(self): self._assert_locked() available = yield self.get_available_transitions() for t in available: if t.automatic: yield self.fire_transition(t.transition_id) return @inlineCallbacks def get_state(self): """Get the current workflow state. """ state_dict = yield self._load() if not state_dict: returnValue(None) returnValue(state_dict["state"]) @inlineCallbacks def get_state_variables(self): """Retrieve a dictionary of variables associated to the current state. """ state_dict = yield self._load() if not state_dict: returnValue({}) returnValue(state_dict["state_variables"]) def set_observer(self, observer): """Set a callback, that will be notified on state changes. The caller will receive the new state and the new state variables as dictionary via positional args. ie.:: def callback(new_state, state_variables): print new_state, state_variables """ self._observer = observer @inlineCallbacks def set_state(self, state, **variables): """Set the current workflow state, optionally setting state variables. 
""" self._assert_locked() yield self._store(dict(state=state, state_variables=variables)) if self._observer: self._observer(state, variables) @inlineCallbacks def set_inflight(self, transition_id): """Record intent to perform a transition, or completion of same. Ideally, this would not be exposed to the public, but it's necessary for writing sane tests. """ self._assert_locked() state = yield self._load() or {} state.setdefault("state", None) state.setdefault("state_variables", {}) if transition_id is not None: state["transition_id"] = transition_id else: state.pop("transition_id", None) yield self._store(state) @inlineCallbacks def get_inflight(self): """Get the id of the transition that is currently executing. (Or which was abandoned due to unexpected process death.) """ state = yield self._load() or {} returnValue(state.get("transition_id")) @inlineCallbacks def synchronize(self): """Rerun inflight transition, if any, and any default transitions.""" self._assert_locked() # First of all, complete any abandoned transition. transition_id = yield self.get_inflight() if transition_id is not None: yield self.fire_transition(transition_id) else: yield self._fire_automatic_transitions() def _load(self): """ Load the state and variables from persistent storage. """ pass def _store(self, state_dict): """ Store the state and variables to persistent storage. """ pass class Workflow(object): def __init__(self, *transitions): self.initialize(transitions) def initialize(self, transitions): """Initialize the internal data structures with the given transitions. """ self._sources = {} self._transitions = {} for t in transitions: self._sources.setdefault(t.source, []).append(t.transition_id) self._sources.setdefault(t.destination, []) self._transitions[t.transition_id] = t def get_transitions(self, source_id): """Retrieve transition ids valid from the srource id state. """ if not source_id in self._sources: raise InvalidStateError(source_id) transitions = self._sources[source_id] return [self._transitions[t] for t in transitions] def get_transition(self, transition_id): """Retrieve a transition object by id. """ return self._transitions[transition_id] def has_state(self, state_id): return state_id in self._sources class Transition(object): """A transition encapsulates an edge in the statemachine graph. :attr:`transition_id` The identity of the transition. :attr:`label` A human readable label of the transition's purpose. :attr:`source` The origin/source state of the transition. :attr:`destination` The target/destination state of the transition. :attr:`action_id` The name of the action method to use for this transition. :attr:`error_transition_id`: A transition to fire if the action fails. :attr:`automatic`: If true, always try to fire this transition whenever in `source` state. :attr:`alias` See :meth:`WorkflowState.fire_transition_alias` """ def __init__(self, transition_id, label, source, destination, error_transition_id=None, automatic=False, alias=None): self._transition_id = transition_id self._label = label self._source = source self._destination = destination self._error_transition_id = error_transition_id self._automatic = automatic self._alias = alias @property def transition_id(self): """The id of this transition. """ return self._transition_id @property def label(self): return self._label @property def destination(self): """The destination state id of this transition. """ return self._destination @property def source(self): """The origin state id of this transition. 
""" return self._source @property def alias(self): return self._alias @property def error_transition_id(self): """The id of a transition to fire upon an error of this transition. """ return self._error_transition_id @property def automatic(self): """Should this transition always fire whenever possible? """ return self._automatic juju-0.7.orig/juju/lib/testing.py0000644000000000000000000001517512135220114015174 0ustar 00000000000000import itertools import logging import os import yaml import StringIO import sys from twisted.internet.defer import Deferred, inlineCallbacks, returnValue from twisted.internet import reactor from twisted.trial.unittest import TestCase as TrialTestCase from txzookeeper import ZookeeperClient from txzookeeper.managed import ManagedClient from juju.lib.mocker import MockerTestCase from juju.tests.common import get_test_zookeeper_address class TestCase(TrialTestCase, MockerTestCase): """ Base class for all juju tests. """ # Default timeout for any test timeout = 5 # Default value for zookeeper test client client = None def capture_stream(self, stream_name): original = getattr(sys, stream_name) new = StringIO.StringIO() @self.addCleanup def reset_stream(): setattr(sys, stream_name, original) setattr(sys, stream_name, new) return new def capture_logging(self, name="", level=logging.INFO, log_file=None, formatter=None): if log_file is None: log_file = StringIO.StringIO() log_handler = logging.StreamHandler(log_file) if formatter: log_handler.setFormatter(formatter) logger = logging.getLogger(name) logger.addHandler(log_handler) old_logger_level = logger.level logger.setLevel(level) @self.addCleanup def reset_logging(): logger.removeHandler(log_handler) logger.setLevel(old_logger_level) return log_file _missing_attr = object() def patch(self, object, attr, value): """Replace an object's attribute, and restore original value later. Returns the original value of the attribute if any or None. """ original_value = getattr(object, attr, self._missing_attr) @self.addCleanup def restore_original(): if original_value is self._missing_attr: try: delattr(object, attr) except AttributeError: pass else: setattr(object, attr, original_value) setattr(object, attr, value) if original_value is self._missing_attr: return None return original_value def change_args(self, *args): """Change the cli args to the specified, with restoration later.""" original_args = sys.argv sys.argv = list(args) @self.addCleanup def restore(): sys.argv = original_args def change_environment(self, **kw): """Reset the environment to kwargs. The tests runtime environment will be initialized with only those values passed as kwargs. The original state of the environment will be restored after the tests complete. 
""" # preserve key elements needed for testing for env in ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "EC2_PRIVATE_KEY", "EC2_CERT", "HOME", "ZOOKEEPER_ADDRESS"]: if env not in kw: kw[env] = os.environ.get(env, "") original_environ = dict(os.environ) @self.addCleanup def cleanup_env(): os.environ.clear() os.environ.update(original_environ) os.environ.clear() os.environ.update(kw) def assertInstance(self, instance, type): self.assertTrue(isinstance(instance, type)) def assertLogLines(self, observed, expected): """Asserts that the lines of `expected` exist in order in the log.""" logged = observed.split("\n") it = iter(expected) for line in logged: it, peekat = itertools.tee(it) peeked = next(peekat) if peeked in line: next(it) # then consume this line and move on self.assertFalse( expected, "Did not see all expected lines in log, in order: %s, %s" % ( observed, expected)) def sleep(self, delay): """Non-blocking sleep.""" deferred = Deferred() reactor.callLater(delay, deferred.callback, None) return deferred @inlineCallbacks def poke_zk(self): """Create a roundtrip communication to zookeeper. An alternative to sleeping in many cases when waiting for a zookeeper watch or interaction to trigger a callback. """ if self.client is None: raise ValueError("No Zookeeper client to utilize") yield self.client.exists("/zookeeper") returnValue(True) def get_zookeeper_client(self): client = ManagedClient( get_test_zookeeper_address(), session_timeout=1000) return client @inlineCallbacks def dump_data(self, path="/"): client = self.client output = {} @inlineCallbacks def export_tree(path, indent): d = {} data, stat = yield client.get(path) name = path.rsplit('/', 1)[1] d['contents'] = _decode_fmt(data, yaml.load) children = yield client.get_children(path) for name in children: if path == "/" and name == "zookeeper": continue cd = yield export_tree(path + '/' + name, indent) d[name] = cd returnValue(d) output[path.rsplit('/', 1)[1]] = yield export_tree(path, '') returnValue(output) @inlineCallbacks def assertTree(self, path, data): data = yield self.dump_data(path) self.assertEqual(data, data) @inlineCallbacks def dump_tree(self, path="/", format=yaml.load): client = self.client output = [] out = output.append @inlineCallbacks def export_tree(path, indent): data, stat = yield client.get(path) name = path.rsplit("/", 1)[1] properties = _decode_fmt(data, format) out(indent + "/" + name) indent += " " for i in sorted(properties.iteritems()): out(indent + "%s = %r" % i) children = yield client.get_children(path) for name in sorted(children): if path == "/" and name == "zookeeper": continue yield export_tree(path + "/" + name, indent) yield export_tree(path, "") returnValue("\n".join(output) + "\n") def _decode_fmt(s, decoder): s = s.strip() if not s: data = {} try: data = decoder(s) except: data = dict(string_value=s) return data juju-0.7.orig/juju/lib/tests/0000755000000000000000000000000012135220114014276 5ustar 00000000000000juju-0.7.orig/juju/lib/twistutils.py0000644000000000000000000000405112135220114015741 0ustar 00000000000000import inspect import os from twisted.internet import reactor from twisted.internet.defer import ( Deferred, maybeDeferred, succeed, DeferredList) from twisted.python.util import mergeFunctionMetadata def concurrent_execution_guard(attribute): """Sets attribute to True/False during execution of the decorated method. Used to ensure non concurrent execution of the decorated function via an instance attribute. *The underlying function must return a defered*. 
""" def guard(f): def guard_execute(self, *args, **kw): value = getattr(self, attribute, None) if value: return succeed(False) else: setattr(self, attribute, True) d = maybeDeferred(f, self, *args, **kw) def post_execute(result): setattr(self, attribute, False) return result d.addBoth(post_execute) return d return mergeFunctionMetadata(f, guard_execute) return guard def gather_results(deferreds, consume_errors=True): d = DeferredList(deferreds, fireOnOneErrback=1, consumeErrors=consume_errors) d.addCallback(lambda r: [x[1] for x in r]) d.addErrback(lambda f: f.value.subFailure) return d def get_module_directory(module): """Determine the directory of a module. Trial rearranges the working directory such that the module paths are relative to a modified current working directory, which results in failing tests when run under coverage, we manually remove the trial locations to ensure correct directories are utilized. """ return os.path.abspath(os.path.dirname(inspect.getabsfile(module)).replace( "/_trial_temp", "")) def sleep(delay): """Non-blocking sleep. :param int delay: time in seconds to sleep. :return: a Deferred that fires after the desired delay. :rtype: :class:`twisted.internet.defer.Deferred` """ deferred = Deferred() reactor.callLater(delay, deferred.callback, None) return deferred juju-0.7.orig/juju/lib/under.py0000644000000000000000000000044212135220114014623 0ustar 00000000000000import string _SAFE_CHARS = set(string.ascii_letters + string.digits + ".-") _CHAR_MAP = {} for i in range(256): c = chr(i) _CHAR_MAP[c] = c if c in _SAFE_CHARS else "_%02x_" % i _quote_char = _CHAR_MAP.__getitem__ def quote(unsafe): return "".join(map(_quote_char, unsafe)) juju-0.7.orig/juju/lib/upstart.py0000644000000000000000000001144012135220114015210 0ustar 00000000000000import os import subprocess from tempfile import NamedTemporaryFile from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.threads import deferToThread from twisted.internet.utils import getProcessOutput from juju.errors import ServiceError from juju.lib.twistutils import sleep _CONF_TEMPLATE = """\ description "%s" author "Juju Team " start on runlevel [2345] stop on runlevel [!2345] respawn %s exec %s >> %s 2>&1 """ def _silent_check_call(args): with open(os.devnull, "w") as f: return subprocess.check_call( args, stdout=f.fileno(), stderr=f.fileno()) class UpstartService(object): # on class for ease of testing init_dir = "/etc/init" def __init__(self, name, init_dir=None, use_sudo=False): self._name = name if init_dir is not None: self.init_dir = init_dir self._use_sudo = use_sudo self._output_path = None self._description = None self._environ = {} self._command = None @property def _conf_path(self): return os.path.join( self.init_dir, "%s.conf" % self._name) @property def output_path(self): if self._output_path is not None: return self._output_path return "/tmp/%s.output" % self._name def set_description(self, description): self._description = description def set_environ(self, environ): self._environ = environ def set_command(self, command): self._command = command def set_output_path(self, path): self._output_path = path @inlineCallbacks def _trash_output(self): if os.path.exists(self.output_path): # Just using os.unlink will fail when we're running TEST_SUDO tests # which hit this code path (because root will own self.output_path) yield self._call("rm", self.output_path) def _render(self): if self._description is None: raise ServiceError("Cannot render .conf: no description set") if self._command 
is None: raise ServiceError("Cannot render .conf: no command set") return _CONF_TEMPLATE % ( self._description, "\n".join('env %s="%s"' % kv for kv in sorted(self._environ.items())), self._command, self.output_path) def _call(self, *args): if self._use_sudo: args = ("sudo",) + args return deferToThread(_silent_check_call, args) def get_cloud_init_commands(self): return ["cat >> %s < "%(pid_path)s" """ log4j_properties = """ # DEFAULT: console appender only log4j.rootLogger=INFO, ROLLINGFILE log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender log4j.appender.ROLLINGFILE.Threshold=DEBUG log4j.appender.ROLLINGFILE.File=/dev/null """ zookeeper_conf_template = """ tickTime=2000 dataDir=%s clientPort=%s maxClientCnxns=500 minSessionTimeout=%d maxSessionTimeout=%d """ def check_zookeeper(): """Check for package installation of zookeeper.""" return os.path.exists("/usr/share/java/zookeeper.jar") class Zookeeper(object): def __init__(self, run_dir, port=None, host=None, zk_location="system", user=None, group=None, use_deferred=True, min_session_timeout=CLIENT_SESSION_TIMEOUT, max_session_timeout=MAX_SESSION_TIMEOUT, fsync=True): """ :param run_dir: Directory to store all zookeeper instance related data. :param port: The port zookeeper should run on. :param zk_location: Directory to find zk jars or dev checkout, defaults to using 'system' indicating a package installation. :param use_deferred: For usage in a twisted application, this will cause subprocess calls to be executed in a separate thread. Specifying either of the following parameters, requires the process using the library to be running as root. :param user: The user name under which to run zookeeper as. 
:param group: The group under which to run zookeeper under """ self._run_dir = run_dir self._host = host self._port = port self._user = user self._group = group self._zk_location = zk_location self._min_session_time = min_session_timeout self._max_session_time = max_session_timeout self._use_deferred = use_deferred self.fsync = fsync def start(self): assert self._port is not None if self._use_deferred: return deferToThread(self._start) return self._start() def stop(self): if self._use_deferred: return deferToThread(self._stop) return self._stop() @property def address(self): host = self._host or "localhost" return "%s:%s" % (host, self._port) @property def running(self): pid_path = os.path.join(self._run_dir, "zk.pid") try: with open(pid_path) as pid_file: pid = int(pid_file.read().strip()) except IOError: return False try: os.kill(pid, 0) except OSError, e: if e.errno == errno.ESRCH: # No such process return False raise return True package_environment_file = "/etc/zookeeper/conf/environment" def get_class_path(self): """Get the java class path as a list of paths """ class_path = None # Get class path for a package installation of zookeeper (default) if self._zk_location == "system": with open(self.package_environment_file) as package_environment: lines = package_environment.readlines() for l in lines: if not l.strip(): continue parts = l.split("=") if parts[0] == "CLASSPATH": value = parts[1] # On a system package, CLASSPATH comes in the form # "$ZOOCFGDIR:dir1:dir2:dir2"\n # First we strip off the leading and trailing " class_path = [p for p in value[1:-2].split(":")] # Next remove $ZOOCFGDIR, replaced by our run_dir class_path.pop(0) break elif self._zk_location and os.path.exists(self._zk_location): # Two additional possibilities, as seen in zkEnv.sh: # Either release binaries or a locally built version. # TODO: Needs to be verified against zookeeper - 3.4 builds software_dir = self._zk_location build_dir = os.path.join(software_dir, "build") if os.path.exists(build_dir): software_dir = build_dir class_path = glob.glob( os.path.join(software_dir, "zookeeper-*.jar")) class_path.extend( glob.glob(os.path.join(software_dir, "lib/*.jar"))) if not class_path: raise RuntimeError( "Zookeeper libraries not found in location %s." % ( self._zk_location)) # Add the managed dir containing log4j properties, which are retrieved # along classpath. 
class_path.insert(0, self._run_dir) return class_path def get_zookeeper_variables(self): """ Returns a dictionary containing variables for config templates """ d = {} class_path = self.get_class_path() d["class_path"] = ":".join(class_path).replace('"', '') d["log_config_path"] = os.path.join(self._run_dir, "log4j.properties") d["config_path"] = os.path.join(self._run_dir, "zoo.cfg") d["log_dir"] = os.path.join(self._run_dir, "log") d["pid_path"] = os.path.join(self._run_dir, "zk.pid") d["data_dir"] = os.path.join(self._run_dir, "data") return d def _setup_data_dir(self, zk_vars): uid = self._user and pwd.getpwnam(self._user).pw_uid or None gid = self._group and grp.getgrnam(self._group).gr_gid or None if uid is not None or gid is not None: change_daemon_user = True else: change_daemon_user = False if not os.path.exists(self._run_dir): os.makedirs(self._run_dir) if change_daemon_user: os.chown(self._run_dir, uid, gid) if not os.path.exists(zk_vars["log_dir"]): os.makedirs(zk_vars["log_dir"]) if change_daemon_user: os.chown(zk_vars["log_dir"], uid, gid) if not os.path.exists(zk_vars["data_dir"]): os.makedirs(zk_vars["data_dir"]) if change_daemon_user: os.chown(zk_vars["data_dir"], uid, gid) with open(zk_vars["log_config_path"], "w") as config: config.write(log4j_properties) with open(zk_vars["config_path"], "w") as config: conf = zookeeper_conf_template % ( zk_vars["data_dir"], self._port, self._min_session_time, self._max_session_time) if not self.fsync: conf += "\nforceSync=no\n" config.write(conf) if self._host: config.write("\nclientPortAddress=%s" % self._host) def _start(self): zk_vars = self.get_zookeeper_variables() self._setup_data_dir(zk_vars) zookeeper_script = zookeeper_script_template % zk_vars fh = tempfile.NamedTemporaryFile(delete=False) fh.write(zookeeper_script) os.chmod(fh.name, 0700) # Can't execute open file on linux fh.close() # Start zookeeper subprocess.check_call([fh.name], shell=True) # Ensure the tempfile is removed. os.remove(fh.name) def _stop(self): pid_file_path = os.path.join(self._run_dir, "zk.pid") try: with open(pid_file_path) as pid_file: zookeeper_pid = int(pid_file.read().strip()) except IOError: # No pid, move on return kill_grace_start = time.time() while True: try: os.kill(zookeeper_pid, 0) except OSError, e: if e.errno == errno.ESRCH: # No such process, already dead. break raise if time.time() - kill_grace_start > 5: # Hard kill if we've been trying this for a few seconds os.kill(zookeeper_pid, signal.SIGKILL) break else: # Graceful kill up to 5s os.kill(zookeeper_pid, signal.SIGTERM) # Give a moment for shutdown time.sleep(0.5) shutil.rmtree(self._run_dir) juju-0.7.orig/juju/lib/zklog.py0000644000000000000000000001433612135220114014643 0ustar 00000000000000""" Logging implementation which utilizes zookeeper for logs. """ import json import sys from logging import Handler, NOTSET, Formatter from twisted.internet.defer import inlineCallbacks, returnValue import zookeeper _error_formatter = Formatter() class ZookeeperHandler(Handler, object): """A logging.Handler implementation that stores records in Zookeeper. Intended use is a lightweight, low-volume distributed logging mechanism. Records are stored as json strings in sequence nodes. """ def __init__(self, client, context_name, level=NOTSET, log_path="/logs"): """Initialize a Zookeeper Log Handler. :param client: A connected zookeeper client. The client is managed independently. If the client is shutdown, the handler will no longer emit messages. 
:param context_name: An additional string value denoting the log record context. This value will be injected into all emitted records. :param level: As per the logging.Handler api, denotes a minimum level that log records must exceed to be emitted from this Handler. :param log_path: The location within zookeeper of the log records. """ self._client = client self._context_name = context_name self._log_container_path, self._log_path = self._format_log_path( log_path) super(ZookeeperHandler, self).__init__(level) @property def log_container_path(self): return self._log_container_path @property def log_path(self): return self._log_path def emit(self, record): """Emit a log record to zookeeper, enriched with context. This method returns a deferred, which the default logging implementation will ignore. """ if not self._client.connected: return json_record = self._format_json(record) return self._client.create( self._log_path, json_record, flags=zookeeper.SEQUENCE).addErrback(self._on_error) @inlineCallbacks def open(self): """Ensure the zookeeper logging location exists. This extends the :class:`logging.Handler` api to include an explicit open method. In the standard lib Handler implementation the handlers typically open their associated resources in __init__ but due to the asynchronous nature of zookeeper interaction, this usage is not appropriate. """ try: yield self._client.create(self._log_container_path) except zookeeper.NodeExistsException: pass def _on_error(self, failure): failure.printTraceback(sys.stderr) def _format_json(self, record): """Format a record into a serialized json dictionary """ data = dict(record.__dict__) data["context"] = self._context_name if record.exc_info: data["message"] = ( record.getMessage() + "\n" + _error_formatter.formatException( record.exc_info)) data["exc_info"] = None else: data["message"] = record.getMessage() data["msg"] = data["message"] data["args"] = () return json.dumps(data) def _format_log_path(self, log_path): """Determine the log container path, and log path for records. """ parts = filter(None, log_path.split("/")) if len(parts) == 1: container_path = "/" + "/".join(parts) return (container_path, container_path + "/log-") elif len(parts) == 2: container_path = "/" + "/".join(parts[:-1]) return (container_path, container_path + "/" + parts[-1]) else: raise ValueError("invalid log path %r" % log_path) class LogIterator(object): """An iterator over zookeeper stored log entries. Provides for reading log entries stored in zookeeper, with a persistent position marker that is updated every seen_block_size reads.
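For example (a sketch; client is assumed to be a connected
    zookeeper client)::

      >> iterator = LogIterator(client, replay=True)
      >> entry = yield iterator.next()
      >> print entry["message"]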
""" def __init__( self, client, replay=False, log_container="/logs", seen_block_size=10): self._client = client self._container_path = log_container self._container_exists = None self._seen_block_size = seen_block_size self._replay = replay self._log_index = None self._last_seen_index = 0 @inlineCallbacks def next(self): if self._container_exists is None: self._container_exists = yield self._wait_for_container() if self._log_index is None: self._last_seen_index = yield self._get_last_seen() if not self._replay: self._log_index = self._last_seen_index else: self._log_index = 0 log_entry_path = "/logs/log-%010d" % self._log_index try: data, stat = yield self._client.get(log_entry_path) except zookeeper.NoNodeException: exists_d, watch_d = self._client.exists_and_watch( log_entry_path) exists = yield exists_d if not exists: yield watch_d entry = yield self.next() returnValue(entry) else: self._log_index += 1 if self._replay and self._log_index > self._last_seen_index: self._replay = False if self._log_index % self._seen_block_size == 0: yield self._update_last_seen() returnValue(json.loads(data)) @inlineCallbacks def _wait_for_container(self): exists_d, watch_d = self._client.exists_and_watch(self._container_path) exists = yield exists_d if not exists: yield watch_d returnValue(True) def _update_last_seen(self): if self._replay: return data = {"next-log-index": self._log_index} return self._client.set(self._container_path, json.dumps(data)) @inlineCallbacks def _get_last_seen(self): content, stat = yield self._client.get(self._container_path) log_index = 0 if content: data = json.loads(content) if isinstance(data, dict): log_index = data.get("next-log-index", 0) returnValue(log_index) juju-0.7.orig/juju/lib/lxc/__init__.py0000644000000000000000000002247112135220114016041 0ustar 00000000000000import os import pipes import subprocess import sys import tempfile from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.threads import deferToThread from juju.errors import JujuError DATA_PATH = os.path.abspath( os.path.join(os.path.dirname(__file__), "data")) CONTAINER_OPTIONS_DOC = """ The following options are expected. JUJU_CONTAINER_NAME: Applied as the hostname of the machine. JUJU_ORIGIN: Where to obtain the containers version of juju from. (ppa, distro or branch). When 'branch' JUJU_SOURCE should be set to the location of a bzr(1) accessible branch. JUJU_PUBLIC_KEY: An SSH public key used by the ubuntu account for interaction with the container. 
""" DEVTMPFS_LINE = """devtmpfs dev devtmpfs mode=0755,nosuid 0 0""" # Used to specify the name of the default LXC template used # for container creation DEFAULT_TEMPLATE = "ubuntu-cloud" class LXCError(JujuError): """Indicates a low level error with an LXC container""" def _cmd(args): p = subprocess.Popen( args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) stdout_data, _ = p.communicate() r = p.returncode if r != 0: # read the stdout/err streams and show the user print >>sys.stderr, stdout_data raise LXCError(stdout_data) return (r, stdout_data) # Wrapped lxc cli primitives def _lxc_create(container_name, template, release, cloud_init_file=None, auth_key=None, release_stream=None): # the -- argument indicates the last parameters are passed # to the template and not handled by lxc-create args = ["sudo", "lxc-create", "-n", container_name, "-t", template, "--", "--debug", # Debug erors / set -x "--hostid", container_name, "-r", release] if cloud_init_file: args.extend(("--userdata", cloud_init_file)) if auth_key: args.extend(("-S", auth_key)) if release_stream: args.extend(("-s", release_stream)) return _cmd(args) def _lxc_start(container_name, debug_log=None, console_log=None): args = ["sudo", "lxc-start", "--daemon", "-n", container_name] if console_log: args.extend(["-c", console_log]) if debug_log: args.extend(["-l", "DEBUG", "-o", debug_log]) return _cmd(args) def _lxc_stop(container_name): _cmd(["sudo", "lxc-stop", "-n", container_name]) def _lxc_destroy(container_name): return _cmd(["sudo", "lxc-destroy", "-n", container_name]) def _lxc_ls(): _, output = _cmd(["lxc-ls"]) output = output.replace("\n", " ") return set([c for c in output.split(" ") if c]) def _lxc_wait(container_name, state="RUNNING"): """Wait for container to be in a given state RUNNING|STOPPED.""" def wait(container_name): rc, _ = _cmd(["sudo", "lxc-wait", "-n", container_name, "-s", state]) return rc == 0 return deferToThread(wait, container_name) def _lxc_clone(existing_container_name, new_container_name): return _cmd(["sudo", "lxc-clone", "-o", existing_container_name, "-n", new_container_name]) def _customize_container(customize_script, container_root): if not os.path.isdir(container_root): raise LXCError("Expect container root directory: %s" % container_root) # write the container scripts into the container fd, in_path = tempfile.mkstemp(prefix=os.path.basename(customize_script), dir=os.path.join(container_root, "tmp")) os.write(fd, open(customize_script, "r").read()) os.close(fd) os.chmod(in_path, 0755) args = ["sudo", "chroot", container_root, os.path.join("/tmp", os.path.basename(in_path))] return _cmd(args) def validate_path(pathname): if not os.access(pathname, os.R_OK): raise LXCError("Invalid or unreadable file: %s" % pathname) @inlineCallbacks def get_containers(prefix): """Return a dictionary of containers key names to runtime boolean value. :param prefix: Optionally specify a prefix that the container should match any returned containers. """ _, output = yield deferToThread(_cmd, ["lxc-ls"]) containers = {} for i in filter(None, output.split("\n")): if i in containers: containers[i] = True else: containers[i] = False if prefix: remove = [k for k in containers.keys() if not k.startswith(prefix)] map(containers.pop, remove) returnValue(containers) def ensure_devtmpfs_fstab(container_home): """ Workaround for bug in older LXC - We need to force mounting devtmpfs if it is not already in the rootfs, before starting. 
""" rootfs = os.path.join(container_home, 'rootfs') devpts = os.path.join(rootfs, 'dev', 'pts') if not os.path.exists(devpts): fstab_path = os.path.join(container_home, 'fstab') if os.path.exists(fstab_path): with open(fstab_path) as fstab: for line in fstab: if line.startswith('devtmpfs'): # Line already there, we are done return mode = 'a' else: mode = 'w' with open(fstab_path, mode) as fstab: print >>fstab, DEVTMPFS_LINE class LXCContainer(object): def __init__(self, container_name, series, cloud_init=None, debug_log=None, console_log=None, release_stream="released"): """Create an LXCContainer :param container_name: should be unique within the system :param series: distro release series (oneiric, precise, etc) :param cloud_init: full string of cloud-init userdata See :data CONFIG_OPTIONS_DOC: explain how these values map into the container in more detail. """ self.container_name = container_name self.debug_log = debug_log self.console_log = console_log self.cloud_init = cloud_init self.series = series self.release_stream = release_stream @property def container_home(self): return "/var/lib/lxc/%s" % self.container_name @property def rootfs(self): return "%s/rootfs/" % self.container_home def _p(self, path): if path[0] == "/": path = path[1:] return os.path.join(self.rootfs, path) def is_constructed(self): """Does the lxc image exist """ return os.path.exists(self.rootfs) @inlineCallbacks def is_running(self): """Is the lxc image running.""" state = yield get_containers(None) returnValue(state.get(self.container_name)) def execute(self, args): if not isinstance(args, (list, tuple)): args = [args, ] args = ["sudo", "chroot", self.rootfs] + args return _cmd(args) def _create_wait(self): """Create the container synchronously.""" if self.is_constructed(): return with tempfile.NamedTemporaryFile() as fh: if self.cloud_init: fh.write(self.cloud_init.render()) cloud_init_file = fh.name else: cloud_init_file = None fh.flush() _lxc_create(self.container_name, template=DEFAULT_TEMPLATE, cloud_init_file=cloud_init_file, release=self.series) ensure_devtmpfs_fstab(self.container_home) @inlineCallbacks def create(self): # open the template file and create a new temp processed yield deferToThread(self._create_wait) def _clone_wait(self, container_name): """Return a cloned LXCContainer with a the new container name. This method is synchronous and will provision the new image blocking till done. 
""" if not self.is_constructed(): raise LXCError("Attempted to clone container " "that hasn't been had create() called") container = LXCContainer(container_name, series=self.series, cloud_init=self.cloud_init, debug_log=self.debug_log, console_log=self.console_log, release_stream=self.release_stream) if not container.is_constructed(): _lxc_clone(self.container_name, container_name) return container def clone(self, container_name): return deferToThread(self._clone_wait, container_name) @inlineCallbacks def run(self): if not self.is_constructed(): raise LXCError("Attempting to run a container that " "hasn't been created or cloned.") yield deferToThread( _lxc_start, self.container_name, debug_log=self.debug_log, console_log=self.console_log) yield _lxc_wait(self.container_name, "RUNNING") @inlineCallbacks def stop(self): yield deferToThread(_lxc_stop, self.container_name) yield _lxc_wait(self.container_name, "STOPPED") @inlineCallbacks def destroy(self): yield self.stop() yield deferToThread(_lxc_destroy, self.container_name) juju-0.7.orig/juju/lib/lxc/data/0000755000000000000000000000000012135220114014633 5ustar 00000000000000juju-0.7.orig/juju/lib/lxc/tests/0000755000000000000000000000000012135220114015064 5ustar 00000000000000juju-0.7.orig/juju/lib/lxc/tests/__init__.py0000644000000000000000000000000012135220114017163 0ustar 00000000000000juju-0.7.orig/juju/lib/lxc/tests/data/0000755000000000000000000000000012135220114015775 5ustar 00000000000000juju-0.7.orig/juju/lib/lxc/tests/test_lxc.py0000644000000000000000000002331212135220114017264 0ustar 00000000000000import os import inspect from twisted.internet.defer import inlineCallbacks from twisted.internet.threads import deferToThread from juju.lib.lxc import (_lxc_start, _lxc_stop, _lxc_create, _lxc_wait, _lxc_ls, _lxc_destroy, LXCContainer, get_containers, LXCError, DEFAULT_TEMPLATE, ensure_devtmpfs_fstab, tests) from juju.lib.testing import TestCase from juju.providers.common.cloudinit import CloudInit from juju.machine import ProviderMachine def skip_sudo_tests(): if os.environ.get("TEST_SUDO"): # Get user's password *now*, if needed, not mid-run os.system("sudo false") return False return "TEST_SUDO=1 to include tests which use sudo (including lxc tests)" def uses_sudo(f): f.skip = skip_sudo_tests() return f DATA_DIR = os.path.join(os.path.dirname(inspect.getabsfile(tests)), "data") DEFAULT_SERIES = "precise" DEFAULT_CONTAINER = "lxc_test" @uses_sudo class LXCTest(TestCase): timeout = 240 def setUp(self): self.cloud_init_file = self.mktemp() with open(self.cloud_init_file, 'w') as cloud_init: cloud_init.write("# cloud-init\n") def tearDown(self): self.clean_container(DEFAULT_CONTAINER) def clean_container(self, container_name): if os.path.exists("/var/lib/lxc/%s" % container_name): _lxc_stop(container_name) _lxc_destroy(container_name) def test_lxc_create(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) _lxc_create(DEFAULT_CONTAINER, DEFAULT_TEMPLATE, DEFAULT_SERIES, self.cloud_init_file) # verify we can find the container output = _lxc_ls() self.assertIn(DEFAULT_CONTAINER, output) # remove and verify the container was removed _lxc_destroy(DEFAULT_CONTAINER) output = _lxc_ls() self.assertNotIn(DEFAULT_CONTAINER, output) def test_lxc_start(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) _lxc_create(DEFAULT_CONTAINER, DEFAULT_TEMPLATE, DEFAULT_SERIES, self.cloud_init_file) _lxc_start(DEFAULT_CONTAINER) _lxc_stop(DEFAULT_CONTAINER) @inlineCallbacks def test_lxc_deferred(self): 
self.addCleanup(self.clean_container, DEFAULT_CONTAINER) yield deferToThread( _lxc_create, DEFAULT_CONTAINER, DEFAULT_TEMPLATE, DEFAULT_SERIES, self.cloud_init_file) yield deferToThread(_lxc_start, DEFAULT_CONTAINER) @inlineCallbacks def test_lxc_container(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) c = LXCContainer(DEFAULT_CONTAINER, "precise") running = yield c.is_running() self.assertFalse(running) self.assertFalse(c.is_constructed()) # verify we can't run a non-constructed container failure = c.run() yield self.assertFailure(failure, LXCError) yield c.create() self.assertFalse(running) self.assertTrue(c.is_constructed()) yield c.run() running = yield c.is_running() self.assertTrue(running) self.assertTrue(c.is_constructed()) output = _lxc_ls() self.assertIn(DEFAULT_CONTAINER, output) # Verify we have a path into the container self.assertTrue(os.path.exists(c.rootfs)) self.assertTrue(c.is_constructed()) self.verify_container(c, series="precise") # Verify that we are in containers containers = yield get_containers(None) self.assertEqual(containers[DEFAULT_CONTAINER], True) # tear it down yield c.destroy() running = yield c.is_running() self.assertFalse(running) containers = yield get_containers(None) self.assertNotIn(DEFAULT_CONTAINER, containers) # and its gone output = _lxc_ls() self.assertNotIn(DEFAULT_CONTAINER, output) @inlineCallbacks def test_lxc_wait(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) _lxc_create(DEFAULT_CONTAINER, DEFAULT_TEMPLATE, DEFAULT_SERIES, self.cloud_init_file) _lxc_start(DEFAULT_CONTAINER) def waitForState(result): self.assertEqual(result, True) d = _lxc_wait(DEFAULT_CONTAINER, "RUNNING") d.addCallback(waitForState) yield d _lxc_stop(DEFAULT_CONTAINER) yield _lxc_wait(DEFAULT_CONTAINER, "STOPPED") _lxc_destroy(DEFAULT_CONTAINER) @inlineCallbacks def test_container_clone(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) self.addCleanup(self.clean_container, DEFAULT_CONTAINER + "_child") cloud_init = CloudInit() cloud_init.add_ssh_key('dsa...') cloud_init.set_provider_type('local') cloud_init.set_zookeeper_machines([ProviderMachine('localhost','localhost','localhost')]) master_container = LXCContainer(DEFAULT_CONTAINER, series="precise", cloud_init=cloud_init) # verify that we cannot clone an unconstructed container failure = master_container.clone("test_lxc_fail") yield self.assertFailure(failure, LXCError) yield master_container.create() # Clone a child container from the template child_name = DEFAULT_CONTAINER + "_child" c = yield master_container.clone(child_name) self.assertEqual(c.container_name, child_name) running = yield c.is_running() self.assertFalse(running) yield c.run() running = yield c.is_running() self.assertTrue(running) output = _lxc_ls() self.assertIn(DEFAULT_CONTAINER, output) self.verify_container(c, series="precise", cloud_init=cloud_init.render()) # verify that we are in containers containers = yield get_containers(None) self.assertEqual(containers[child_name], True) # tear it down yield c.destroy() running = yield c.is_running() self.assertFalse(running) containers = yield get_containers(None) self.assertNotIn(child_name, containers) # and its gone output = _lxc_ls() self.assertNotIn(child_name, output) yield master_container.destroy() def test_create_wait(self): self.addCleanup(self.clean_container, DEFAULT_CONTAINER) cinit = CloudInit() cinit.add_ssh_key('dsa...') cinit.set_provider_type('local') cinit.set_zookeeper_machines([ProviderMachine('localhost','localhost','localhost')]) c = 
LXCContainer(DEFAULT_CONTAINER, series="precise", cloud_init=cinit) c._create_wait() self.verify_container(c, "precise", cinit.render()) def verify_container(self, c, series, cloud_init=None): """Verify properties of an LXCContainer""" def p(path): return os.path.join(c.rootfs, path) def sudo_get(path): # super get path (superuser priv) rc, output = c.execute(["cat", path]) return output def run(cmd): try: rc, output = c.execute(cmd) except LXCError: rc = 1 return rc # basic path checks self.assertTrue(os.path.exists(p('var/lib/cloud/seed/nocloud-net'))) # ubuntu user self.assertEqual(run(["id", "ubuntu"]), 0) # Verify the container release series. with open(os.path.join(c.rootfs, "etc", "lsb-release")) as fh: lsb_info = fh.read() self.assertIn(series, lsb_info) class LXCUtilTest(TestCase): def test_ensure_devtmpfs_fstab(self): without_dev_pts = self.makeDir() os.mkdir(os.path.join(without_dev_pts, 'rootfs')) ensure_devtmpfs_fstab(without_dev_pts) fstab_path = os.path.join(without_dev_pts, 'fstab') if os.path.exists(fstab_path): with open(fstab_path) as fstab: found = False for line in fstab: if line.startswith('devtmpfs'): found = True break self.assertTrue(found) else: self.fail('fstab missing') def test_ensure_devtmpfs_fstab_pts_exists(self): with_dev_pts = self.makeDir() os.mkdir(os.path.join(with_dev_pts, 'rootfs')) os.mkdir(os.path.join(with_dev_pts, 'rootfs', 'dev')) os.mkdir(os.path.join(with_dev_pts, 'rootfs', 'dev', 'pts')) ensure_devtmpfs_fstab(with_dev_pts) fstab_path = os.path.join(with_dev_pts, 'fstab') if os.path.exists(fstab_path): self.fail('fstab created but pts already there') def test_ensure_devtmpfs_fstab_line_added(self): self._test_devtmpfs_line('sample_fstab') def test_ensure_devtmpfs_fstab_line_onlyone(self): self._test_devtmpfs_line('sample_fstab_withdevtmpfs') def _test_devtmpfs_line(self, datafile): without_dev_pts = self.makeDir() os.mkdir(os.path.join(without_dev_pts, 'rootfs')) fstab_path = os.path.join(without_dev_pts, 'fstab') with open(fstab_path, 'w') as fstab: source = os.path.join(DATA_DIR, datafile) with open(source) as src: fstab.write(src.read()) ensure_devtmpfs_fstab(without_dev_pts) if os.path.exists(fstab_path): with open(fstab_path) as fstab: count = 0 for line in fstab: if line.startswith('devtmpfs'): count += 1 self.assertEquals(count, 1) else: self.fail('fstab missing') juju-0.7.orig/juju/lib/lxc/tests/data/sample_fstab0000644000000000000000000000015612135220114020362 0ustar 00000000000000proc proc proc nodev,noexec,nosuid 0 0 sysfs sys sysfs defaults 0 0 juju-0.7.orig/juju/lib/lxc/tests/data/sample_fstab_withdevtmpfs0000644000000000000000000000024112135220114023161 0ustar 00000000000000proc proc proc nodev,noexec,nosuid 0 0 sysfs sys sysfs defaults 0 0 devtmpfs dev devtmpfs defaults 0 0 juju-0.7.orig/juju/lib/tests/__init__.py0000644000000000000000000000000012135220114016375 0ustar 00000000000000juju-0.7.orig/juju/lib/tests/data/0000755000000000000000000000000012135220114015207 5ustar 00000000000000juju-0.7.orig/juju/lib/tests/test_cache.py0000644000000000000000000000071012135220114016750 0ustar 00000000000000import time from juju.lib.testing import TestCase from juju.lib.cache import CachedValue class CachedValueTest(TestCase): def test_cache_value(self): cache = CachedValue(10) cache.set(10) self.assertEqual(cache.get(), 10) n = time.time() + (3600 * 24) now = self.mocker.replace(time.time) now() self.mocker.result(n) self.mocker.replay() self.assertEqual(cache.get(), None) 
juju-0.7.orig/juju/lib/tests/test_filehash.py0000644000000000000000000000061212135220114017471 0ustar 00000000000000import hashlib from juju.lib.testing import TestCase from juju.lib.filehash import compute_file_hash class FileHashTest(TestCase): def test_compute_file_hash(self): for type in (hashlib.sha256, hashlib.md5): filename = self.makeFile("content") digest = compute_file_hash(type, filename) self.assertEquals(digest, type("content").hexdigest()) juju-0.7.orig/juju/lib/tests/test_format.py0000644000000000000000000003125312135220114017203 0ustar 00000000000000# -*- encoding: utf-8 -*- import json import yaml from juju.errors import JujuError from juju.lib.testing import TestCase from juju.lib.format import ( PythonFormat, YAMLFormat, is_valid_charm_format, get_charm_formatter, get_charm_formatter_from_env) class TestFormatLookup(TestCase): def test_is_valid_charm_format(self): """Verify currently valid charm formats""" self.assertFalse(is_valid_charm_format(0)) self.assertTrue(is_valid_charm_format(1)) self.assertTrue(is_valid_charm_format(2)) self.assertFalse(is_valid_charm_format(3)) def test_get_charm_formatter(self): self.assertInstance(get_charm_formatter(1), PythonFormat) self.assertInstance(get_charm_formatter(2), YAMLFormat) e = self.assertRaises(JujuError, get_charm_formatter, 0) self.assertEqual( str(e), "Expected charm format to be either 1 or 2, got 0") def test_get_charm_formatter_from_env(self): """Verifies _JUJU_CHARM_FORMAT can be mapped to valid formatters""" self.change_environment(_JUJU_CHARM_FORMAT="0") e = self.assertRaises(JujuError, get_charm_formatter_from_env) self.assertEqual( str(e), "Expected charm format to be either 1 or 2, got 0") self.change_environment(_JUJU_CHARM_FORMAT="1") self.assertInstance(get_charm_formatter_from_env(), PythonFormat) self.change_environment(_JUJU_CHARM_FORMAT="2") self.assertInstance(get_charm_formatter_from_env(), YAMLFormat) self.change_environment(_JUJU_CHARM_FORMAT="3") e = self.assertRaises(JujuError, get_charm_formatter_from_env) self.assertEqual( str(e), "Expected charm format to be either 1 or 2, got 3") class TestPythonFormat(TestCase): def assert_format(self, data, expected): """Verify str output serialization; no roundtripping is supported.""" formatted = PythonFormat().format(data) self.assertEqual(formatted, expected) def test_format(self): """Verifies Python str formatting of data""" self.assert_format(None, "None") self.assert_format("", "") self.assert_format("A string", "A string") self.assert_format( "High bytes: \xCA\xFE", "High bytes: \xca\xfe") self.assert_format(u"", "") self.assert_format( u"A unicode string (but really ascii)", "A unicode string (but really ascii)") e = self.assertRaises(UnicodeEncodeError, PythonFormat().format, u"中文") self.assertEqual( str(e), ("'ascii' codec can't encode characters in position 0-1: " "ordinal not in range(128)")) self.assert_format({}, "{}") self.assert_format( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com", u"foo": u"bar", u"configured": True}, ("{u'public-address': u'ec2-1-2-3-4.compute-1.amazonaws.com', " "u'foo': u'bar', u'configured': True}")) self.assert_format(False, "False") self.assert_format(True, "True") self.assert_format(0.0, "0.0") self.assert_format(3.14159, "3.14159") self.assert_format(6.02214178e23, "6.02214178e+23") self.assert_format(0, "0") self.assert_format(42, "42") def test_parse_keyvalue_pairs(self): """Verify reads in values as strings""" sample = self.makeFile("INPUT DATA") # test various styles of options being read options = 
["alpha=beta", "content=@%s" % sample] formatter = PythonFormat() data = formatter.parse_keyvalue_pairs(options) self.assertEquals(data["alpha"], "beta") self.assertEquals(data["content"], "INPUT DATA") # and check an error condition options = ["content=@missing"] error = self.assertRaises( JujuError, formatter.parse_keyvalue_pairs, options) self.assertEquals( str(error), "No such file or directory: missing (argument:content)") # and check when fed non-kvpairs the error makes sense options = ["foobar"] error = self.assertRaises( JujuError, formatter.parse_keyvalue_pairs, options) self.assertEquals( str(error), "Expected `option=value`. Found `foobar`") def assert_dump_load(self, data, expected): """Asserts expected formatting and roundtrip through dump/load""" formatter = PythonFormat() dumped = formatter.dump({"data": data}) loaded = formatter.load(dumped)["data"] self.assertEqual(dumped, expected) self.assertEqual(dumped, json.dumps({"data": data})) self.assertEqual(data, loaded) if isinstance(data, str): # Verify promotion of str to unicode self.assertInstance(loaded, unicode) def test_dump_load(self): """Verify JSON roundtrip semantics""" self.assert_dump_load(None, '{"data": null}') self.assert_dump_load("", '{"data": ""}') self.assert_dump_load("A string", '{"data": "A string"}') e = self.assertRaises( UnicodeDecodeError, PythonFormat().dump, "High bytes: \xCA\xFE") self.assertEqual( str(e), "'utf8' codec can't decode byte 0xca in position 12: " "invalid continuation byte") self.assert_dump_load(u"", '{"data": ""}') self.assert_dump_load( u"A unicode string (but really ascii)", '{"data": "A unicode string (but really ascii)"}') e = self.assertRaises(UnicodeEncodeError, PythonFormat().format, u"中文") self.assertEqual( str(e), ("'ascii' codec can't encode characters in position 0-1: " "ordinal not in range(128)")) self.assert_dump_load({}, '{"data": {}}') self.assert_dump_load( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com", u"foo": u"bar", u"configured": True}, ('{"data": {"public-address": ' '"ec2-1-2-3-4.compute-1.amazonaws.com", "foo": "bar", ' '"configured": true}}')) self.assert_dump_load(False, '{"data": false}') self.assert_dump_load(True, '{"data": true}') self.assert_dump_load(0.0, '{"data": 0.0}') self.assert_dump_load(3.14159, '{"data": 3.14159}') self.assert_dump_load(6.02214178e23, '{"data": 6.02214178e+23}') self.assert_dump_load(0, '{"data": 0}') self.assert_dump_load(42, '{"data": 42}') def test_should_delete(self): """Verify empty or whitespace only strings indicate deletion""" formatter = PythonFormat() self.assertFalse(formatter.should_delete("0")) self.assertFalse(formatter.should_delete("something")) self.assertTrue(formatter.should_delete("")) self.assertTrue(formatter.should_delete(" ")) # Verify that format: 1 can only work with str values e = self.assertRaises(AttributeError, formatter.should_delete, 42) self.assertEqual(str(e), "'int' object has no attribute 'strip'") e = self.assertRaises(AttributeError, formatter.should_delete, None) self.assertEqual(str(e), "'NoneType' object has no attribute 'strip'") class TestYAMLFormat(TestCase): def assert_format(self, data, expected): """Verify actual output serialization and roundtripping through YAML""" formatted = YAMLFormat().format(data) self.assertEqual(formatted, expected) self.assertEqual(data, yaml.safe_load(formatted)) def test_format(self): """Verifies standard formatting of data into valid YAML""" self.assert_format(None, "") self.assert_format("", "''") self.assert_format("A string", "A 
string") # Note: YAML uses b64 encoding for byte strings tagged by !!binary self.assert_format( "High bytes: \xCA\xFE", "!!binary |\n SGlnaCBieXRlczogyv4=") self.assert_format(u"", "''") self.assert_format( u"A unicode string (but really ascii)", "A unicode string (but really ascii)") # Any non-ascii Unicode will use UTF-8 encoding self.assert_format(u"中文", "\xe4\xb8\xad\xe6\x96\x87") self.assert_format({}, "{}") self.assert_format( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com", u"foo": u"bar", u"configured": True}, ("configured: true\n" "foo: bar\n" "public-address: ec2-1-2-3-4.compute-1.amazonaws.com")) self.assert_format(False, "false") self.assert_format(True, "true") self.assert_format(0.0, "0.0") self.assert_format(3.14159, "3.14159") self.assert_format(6.02214178e23, "6.02214178e+23") self.assert_format(0, "0") self.assert_format(42, "42") def assert_parse(self, data): """Verify input parses as expected, including from a data file""" formatter = YAMLFormat() formatted = formatter.format_raw(data) data_file = self.makeFile(formatted) kvs = ["formatted=%s" % formatted, "file=@%s" % data_file] parsed = formatter.parse_keyvalue_pairs(kvs) self.assertEqual(parsed["formatted"], data) self.assertEqual(parsed["file"], data) def test_parse_keyvalue_pairs(self): """Verify key value pairs parse for a wide range of YAML inputs.""" formatter = YAMLFormat() self.assert_parse("") self.assert_parse("A string") self.assert_parse("High bytes: \xCA\xFE") # Raises an error if no such file e = self.assertRaises( JujuError, formatter.parse_keyvalue_pairs, ["content=@missing"]) self.assertEquals( str(e), "No such file or directory: missing (argument:content)") # Raises an error if not of the form K=V or K= e = self.assertRaises( JujuError, formatter.parse_keyvalue_pairs, ["foobar"]) self.assertEquals( str(e), "Expected `option=value`. 
Found `foobar`") def assert_dump_load(self, data, expected): """Asserts expected formatting and roundtrip through dump/load""" formatter = YAMLFormat() dumped = formatter.dump({"data": data}) loaded = formatter.load(dumped)["data"] self.assertEqual(dumped, expected) self.assertEqual(data, loaded) if isinstance(data, str): # Verify that no promotion to unicode occurs for str values self.assertInstance(loaded, str) def test_dump_load(self): """Verify JSON roundtrip semantics""" self.assert_dump_load(None, "data: null") self.assert_dump_load("", "data: ''") self.assert_dump_load("A string", "data: A string") self.assert_dump_load("High bytes: \xCA\xFE", "data: !!binary |\n SGlnaCBieXRlczogyv4=") self.assert_dump_load(u"", "data: ''") self.assert_dump_load( u"A unicode string (but really ascii)", "data: A unicode string (but really ascii)") self.assert_dump_load(u"中文", "data: \xe4\xb8\xad\xe6\x96\x87") self.assert_dump_load({}, "data: {}") self.assert_dump_load( {u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com", u"foo": u"bar", u"configured": True}, ("data:\n" " configured: true\n" " foo: bar\n" " public-address: ec2-1-2-3-4.compute-1.amazonaws.com")) self.assert_dump_load(False, "data: false") self.assert_dump_load(True, "data: true") self.assert_dump_load(0.0, "data: 0.0") self.assert_dump_load(3.14159, "data: 3.14159") self.assert_dump_load(6.02214178e23, "data: 6.02214178e+23") self.assert_dump_load(0, "data: 0") self.assert_dump_load(42, "data: 42") def test_should_delete(self): """Verify empty or whitespace only strings indicate deletion""" formatter = PythonFormat() self.assertFalse(formatter.should_delete("0")) self.assertFalse(formatter.should_delete("something")) self.assertTrue(formatter.should_delete("")) self.assertTrue(formatter.should_delete(" ")) # Verify that format: 1 can only work with str values e = self.assertRaises(AttributeError, formatter.should_delete, 42) self.assertEqual(str(e), "'int' object has no attribute 'strip'") e = self.assertRaises(AttributeError, formatter.should_delete, None) self.assertEqual(str(e), "'NoneType' object has no attribute 'strip'") juju-0.7.orig/juju/lib/tests/test_loader.py0000644000000000000000000000231412135220114017155 0ustar 00000000000000from juju.lib.testing import TestCase from juju.lib.loader import get_callable class LoaderTest(TestCase): def test_loader_no_module(self): self.failUnlessRaises(ImportError, get_callable, "missing.module") def test_loader_bad_specification(self): for el in ['', None, 123]: e = self.failUnlessRaises(ValueError, get_callable, el) self.failUnlessEqual(str(e), "Invalid import specification: %r" % el) def test_loader_non_callable(self): # os.sep is a valid import but isn't a callable self.failUnlessRaises(ImportError, get_callable, "os.sep") def test_loader_valid_specification(self): split = get_callable("os.path.split") assert callable(split) assert split("foo/bar") == ("foo", "bar") def test_loader_retrieve_provider(self): providers = ["juju.providers.dummy.MachineProvider", "juju.providers.ec2.MachineProvider"] for provider_name in providers: provider = get_callable(provider_name) self.assertEqual(provider.__name__, "MachineProvider") self.assertTrue(callable(provider)) juju-0.7.orig/juju/lib/tests/test_pick.py0000644000000000000000000000425612135220114016644 0ustar 00000000000000 from juju.lib.testing import TestCase from juju.lib.pick import pick_key, pick_all_key, pick_attr, pick_all_attr class adict(dict): def __getattr__(self, key): return self[key] def __setattr__(self, key, value): 
self[key] = value class PickTest(TestCase): def setUp(self): self.sample_key_data = [{"role": "client", "name": "db"}, {"role": "server", "name": "website"}, {"role": "client", "name": "cache"}] self.sample_attr_data = [adict({"role": "client", "name": "db"}), adict({"role": "server", "name": "website"}), adict({"role": "client", "name": "cache"})] def test_pick_all_key(self): self.assertEqual(list(pick_all_key(self.sample_key_data, role="client")), [{"role": "client", "name": "db"}, {"role": "client", "name": "cache"}]) def test_pick_key(self): sd = self.sample_key_data self.assertEqual(pick_key(sd, role="client"), {"role": "client", "name": "db"}) self.assertEqual(pick_key(sd, role="server"), {"role": "server", "name": "website"}) self.assertEqual(pick_key(sd, role="client", name="db"), {"role": "client", "name": "db"}) self.assertEqual(pick_key(sd, role="client", name="foo"), None) def test_pick_all_attr(self): self.assertEqual(list(pick_all_attr(self.sample_attr_data, role="client")), [{"role": "client", "name": "db"}, {"role": "client", "name": "cache"}]) def test_pick_attr(self): sd = self.sample_attr_data self.assertEqual(pick_attr(sd, role="client"), {"role": "client", "name": "db"}) self.assertEqual(pick_attr(sd, role="server"), {"role": "server", "name": "website"}) self.assertEqual(pick_attr(sd, role="client", name="db"), {"role": "client", "name": "db"}) self.assertEqual(pick_attr(sd, role="client", name="foo"), None) juju-0.7.orig/juju/lib/tests/test_port.py0000644000000000000000000000154112135220114016674 0ustar 00000000000000import socket from unittest import TestCase from juju.lib.port import get_open_port class OpenPortTest(TestCase): def test_get_open_port(self): port = get_open_port() self.assertTrue(isinstance(port, int)) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) # would raise an error if we got it wrong. sock.bind(("127.0.0.1", port)) sock.listen(1) sock.close() del sock def test_get_open_port_with_host(self): port = get_open_port("localhost") self.assertTrue(isinstance(port, int)) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) # would raise an error if we got it wrong. sock.bind(("127.0.0.1", port)) sock.listen(1) sock.close() del sock juju-0.7.orig/juju/lib/tests/test_schema.py0000644000000000000000000004340312135220114017153 0ustar 00000000000000import re from juju.lib.testing import TestCase from juju.lib.schema import ( SchemaError, SchemaExpectationError, Constant, Bool, Int, Float, String, Unicode, UnicodeOrString, List, KeyDict, SelectDict, Dict, Tuple, OneOf, Any, Regex, OAuthString) PATH = [""] class DummySchema(object): def coerce(self, value, path): return "hello!" 
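Every schema primitive exercised below implements the same protocol: coerce(value, path) returns the value (possibly converted) or raises SchemaError, with path accumulating the dotted location reported in error messages. A brief usage sketch composed from the classes imported above, with the expected results inferred from the assertions in these tests:

# Validate a config-like dict; "debug" is optional.
config_schema = KeyDict({"port": Int(),
                         "debug": OneOf(Constant(None), Bool())},
                        optional=["debug"])

config_schema.coerce({"port": 8080}, [])
# => {"port": 8080}

try:
    config_schema.coerce({"port": "8080"}, [])
except SchemaError, e:
    message = str(e)  # "port: expected int, got '8080'"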
class SchemaErrorsTest(TestCase): def test_invalid_schema(self): error = SchemaError(["a", ".", "b"], "error message") self.assertEquals(error.path, ["a", ".", "b"]) self.assertEquals(error.message, "error message") self.assertEquals(str(error), "a.b: error message") def test_invalid_schema_expectation(self): error = SchemaExpectationError(["a", ".", "b"], "", "") self.assertEquals(error.path, ["a", ".", "b"]) self.assertEquals(error.expected, "") self.assertEquals(error.got, "") self.assertEquals(error.message, "expected , got ") self.assertEquals(str(error), "a.b: expected , got ") class ConstantTest(TestCase): def test_constant(self): self.assertEquals(Constant("hello").coerce("hello", PATH), "hello") def test_constant_arbitrary(self): obj = object() self.assertEquals(Constant(obj).coerce(obj, PATH), obj) def test_constant_bad(self): error = self.assertRaises(SchemaError, Constant("foo").coerce, "bar", PATH) self.assertEquals(str(error), ": expected 'foo', got 'bar'") class AnyTest(TestCase): def test_any(self): obj = object() self.assertEquals(Any().coerce(obj, object()), obj) class OneOfTest(TestCase): def test_one_of(self): schema = OneOf(Constant(None), Unicode()) self.assertEquals(schema.coerce(None, PATH), None) self.assertEquals(schema.coerce(u"foo", PATH), u"foo") def test_one_of_bad(self): schema = OneOf(Constant(None), Unicode()) error = self.assertRaises(SchemaError, schema.coerce, "", PATH) # When no values are supported, raise the first error. self.assertEquals(str(error), ": expected None, got ''") class BoolTest(TestCase): def test_bool(self): self.assertEquals(Bool().coerce(True, PATH), True) self.assertEquals(Bool().coerce(False, PATH), False) def test_bool_bad(self): error = self.assertRaises(SchemaError, Bool().coerce, 1, PATH) self.assertEquals(str(error), ": expected bool, got 1") class IntTest(TestCase): def test_int(self): self.assertEquals(Int().coerce(3, PATH), 3) def test_int_accepts_long(self): self.assertEquals(Int().coerce(3L, PATH), 3) def test_int_bad_str(self): error = self.assertRaises(SchemaError, Int().coerce, "3", PATH) self.assertEquals(str(error), ": expected int, got '3'") def test_int_bad_float(self): error = self.assertRaises(SchemaError, Int().coerce, 3.0, PATH) self.assertEquals(str(error), ": expected int, got 3.0") class FloatTest(TestCase): def test_float(self): self.assertEquals(Float().coerce(3.3, PATH), 3.3) def test_float_accepts_int(self): self.assertEquals(Float().coerce(3, PATH), 3.0) def test_float_accepts_long(self): self.assertEquals(Float().coerce(3L, PATH), 3.0) def test_float_bad_str(self): error = self.assertRaises(SchemaError, Float().coerce, "3.0", PATH) self.assertEquals(str(error), ": expected number, got '3.0'") class StringTest(TestCase): def test_string(self): self.assertEquals(String().coerce("foo", PATH), "foo") def test_string_bad_unicode(self): error = self.assertRaises(SchemaError, String().coerce, u"foo", PATH) self.assertEquals(str(error), ": expected string, got u'foo'") def test_string_bad_int(self): error = self.assertRaises(SchemaError, String().coerce, 1, PATH) self.assertEquals(str(error), ": expected string, got 1") class UnicodeTest(TestCase): def test_unicode(self): self.assertEquals(Unicode().coerce(u"foo", PATH), u"foo") def test_unicode_bad_str(self): error = self.assertRaises(SchemaError, Unicode().coerce, "foo", PATH) self.assertEquals(str(error), ": expected unicode, got 'foo'") class UnicodeOrStringTest(TestCase): def test_unicode_or_str(self): schema = UnicodeOrString("utf-8") 
self.assertEquals(schema.coerce(u"foo", PATH), u"foo") def test_unicode_or_str_accepts_str(self): self.assertEquals(UnicodeOrString("utf-8").coerce("foo", PATH), u"foo") def test_unicode_or_str_bad(self): error = self.assertRaises(SchemaError, UnicodeOrString("utf-8").coerce, 32, PATH) self.assertEquals(str(error), ": expected unicode or utf-8 string, got 32") def test_unicode_or_str_decodes(self): """UnicodeOrString should decode plain strings.""" a = u"\N{HIRAGANA LETTER A}" self.assertEquals( UnicodeOrString("utf-8").coerce(a.encode("utf-8"), PATH), a) letter = u"\N{LATIN SMALL LETTER A WITH GRAVE}" self.assertEquals( UnicodeOrString("latin-1").coerce(letter.encode("latin-1"), PATH), letter) def test_unicode_or_str_bad_encoding(self): """Decoding errors should be converted to a SchemaError.""" error = self.assertRaises( SchemaError, UnicodeOrString("utf-8").coerce, "\xff", PATH) self.assertEquals(str(error), r": expected unicode or utf-8 string, " r"got '\xff'") class RegexTest(TestCase): def test_regex(self): exp = "\w+" pat = re.compile(exp) self.assertEquals(Regex().coerce(exp, PATH), pat) def test_regex_bad_regex(self): exp = "([a-" error = self.assertRaises( SchemaError, Regex().coerce, exp, PATH) self.assertEquals(str(error), ": expected regex, got '([a-'") class ListTest(TestCase): def test_list(self): schema = List(Int()) self.assertEquals(schema.coerce([1], PATH), [1]) def test_list_bad(self): error = self.assertRaises(SchemaError, List(Int()).coerce, 32, PATH) self.assertEquals(str(error), ": expected list, got 32") def test_list_inner_schema_coerces(self): self.assertEquals(List(DummySchema()).coerce([3], PATH), ["hello!"]) def test_list_bad_inner_schema_at_0(self): error = self.assertRaises(SchemaError, List(Int()).coerce, ["hello"], PATH) self.assertEquals(str(error), "[0]: expected int, got 'hello'") def test_list_bad_inner_schema_at_1(self): error = self.assertRaises(SchemaError, List(Int()).coerce, [1, "hello"], PATH) self.assertEquals(str(error), "[1]: expected int, got 'hello'") def test_list_multiple_items(self): a = u"\N{HIRAGANA LETTER A}" schema = List(UnicodeOrString("utf-8")) self.assertEquals(schema.coerce([a, a.encode("utf-8")], PATH), [a, a]) class TupleTest(TestCase): def test_tuple(self): self.assertEquals(Tuple(Int()).coerce((1,), PATH), (1,)) def test_tuple_coerces(self): self.assertEquals( Tuple(Int(), DummySchema()).coerce((23, object()), PATH), (23, "hello!")) def test_tuple_bad(self): error = self.assertRaises(SchemaError, Tuple().coerce, "hi", PATH) self.assertEquals(str(error), ": expected tuple, got 'hi'") def test_tuple_inner_schema_bad_at_0(self): error = self.assertRaises(SchemaError, Tuple(Int()).coerce, ("hi",), PATH) self.assertEquals(str(error), "[0]: expected int, got 'hi'") def test_tuple_inner_schema_bad_at_1(self): error = self.assertRaises(SchemaError, Tuple(Int(), Int()).coerce, (1, "hi"), PATH) self.assertEquals(str(error), "[1]: expected int, got 'hi'") def test_tuple_must_have_all_items(self): error = self.assertRaises(SchemaError, Tuple(Int(), Int()).coerce, (1,), PATH) self.assertEquals(str(error), ": expected tuple with 2 elements, got (1,)") def test_tuple_must_have_no_more_items(self): error = self.assertRaises(SchemaError, Tuple(Int()).coerce, (1, 2), PATH) self.assertEquals(str(error), ": expected tuple with 1 elements, got (1, 2)") class DictTest(TestCase): def test_dict(self): self.assertEquals(Dict(Int(), String()).coerce({32: "hello."}, PATH), {32: "hello."}) def test_dict_coerces(self): self.assertEquals( 
Dict(DummySchema(), DummySchema()).coerce({32: object()}, PATH), {"hello!": "hello!"}) def test_dict_bad(self): error = self.assertRaises(SchemaError, Dict(Int(), Int()).coerce, "hi", PATH) self.assertEquals(str(error), ": expected dict, got 'hi'") def test_dict_bad_key(self): error = self.assertRaises(SchemaError, Dict(Int(), Int()).coerce, {"hi": 32}, PATH) self.assertEquals(str(error), ": expected int, got 'hi'") def test_dict_bad_value(self): error = self.assertRaises(SchemaError, Dict(Int(), Int()).coerce, {32: "hi"}, PATH) self.assertEquals(str(error), ".32: expected int, got 'hi'") def test_dict_bad_value_unstringifiable(self): """ If the path can't be stringified, it's repr()ed. """ a = u"\N{HIRAGANA LETTER A}" schema = Dict(Unicode(), Int()) error = self.assertRaises(SchemaError, schema.coerce, {a: "hi"}, PATH) self.assertEquals(str(error), ".u'\\u3042': expected int, got 'hi'") def test_dict_path_without_dots(self): """ The first path entry shouldn't have a dot as prefix. """ schema = Dict(Int(), Dict(Int(), Int())) error = self.assertRaises(SchemaError, schema.coerce, {1: {2: "hi"}}, []) self.assertEquals(str(error), "1.2: expected int, got 'hi'") class KeyDictTest(TestCase): def test_key_dict(self): self.assertEquals(KeyDict({"foo": Int()}).coerce({"foo": 1}, PATH), {"foo": 1}) def test_key_dict_coerces(self): self.assertEquals( KeyDict({"foo": DummySchema()}).coerce({"foo": 3}, PATH), {"foo": "hello!"}) def test_key_dict_bad(self): error = self.assertRaises(SchemaError, KeyDict({}).coerce, "1", PATH) self.assertEquals(str(error), ": expected dict, got '1'") def test_key_dict_bad_value(self): schema = KeyDict({"foo": Int()}) error = self.assertRaises(SchemaError, schema.coerce, {"foo": "hi"}, PATH) self.assertEquals(str(error), ".foo: expected int, got 'hi'") def test_key_dict_bad_value_unstringifiable(self): """ If the path can't be stringified, it's repr()ed. """ a = u"\N{HIRAGANA LETTER A}" error = self.assertRaises(SchemaError, KeyDict({a: Int()}).coerce, {a: "hi"}, PATH) self.assertEquals(str(error), ".u'\\u3042': expected int, got 'hi'") def test_key_dict_unknown_key(self): """ Unknown key/value pairs processed by a KeyDict are left untouched. This is an attempt at not eating values by mistake due to something like different application versions operating on the same data. """ schema = KeyDict({"foo": Int()}) self.assertEquals(schema.coerce({"foo": 1, "bar": "hi"}, PATH), {"foo": 1, "bar": "hi"}) def test_key_dict_multiple_items(self): schema = KeyDict({"one": Int(), "two": List(Float())}) input = {"one": 32, "two": [1.5, 2.3]} self.assertEquals(schema.coerce(input, PATH), {"one": 32, "two": [1.5, 2.3]}) def test_key_dict_arbitrary_keys(self): """ KeyDict doesn't actually need to have strings as keys, just any object which hashes the same. """ key = object() self.assertEquals(KeyDict({key: Int()}).coerce({key: 32}, PATH), {key: 32}) def test_key_dict_must_have_all_keys(self): """ dicts which are applied to a KeyDict must have all the keys specified in the KeyDict. """ schema = KeyDict({"foo": Int()}) error = self.assertRaises(SchemaError, schema.coerce, {}, PATH) self.assertEquals(str(error), ".foo: required value not found") def test_key_dict_optional_keys(self): """KeyDict allows certain keys to be optional. """ schema = KeyDict({"foo": Int(), "bar": Int()}, optional=["bar"]) self.assertEquals(schema.coerce({"foo": 32}, PATH), {"foo": 32}) def test_key_dict_pass_optional_key(self): """Regression test. It should be possible to pass an optional key. 
""" schema = KeyDict({"foo": Int()}, optional=["foo"]) self.assertEquals(schema.coerce({"foo": 32}, PATH), {"foo": 32}) def test_key_dict_path_without_dots(self): """ The first path entry shouldn't have a dot as prefix. """ schema = KeyDict({1: KeyDict({2: Int()})}) error = self.assertRaises(SchemaError, schema.coerce, {1: {2: "hi"}}, []) self.assertEquals(str(error), "1.2: expected int, got 'hi'") class SelectDictTest(TestCase): def test_select_dict(self): schema = SelectDict("name", {"foo": KeyDict({"value": Int()}), "bar": KeyDict({"value": String()})}) self.assertEquals(schema.coerce({"name": "foo", "value": 1}, PATH), {"name": "foo", "value": 1}) self.assertEquals(schema.coerce({"name": "bar", "value": "one"}, PATH), {"name": "bar", "value": "one"}) def test_select_dict_errors(self): schema = SelectDict("name", {"foo": KeyDict({"foo_": Int()}), "bar": KeyDict({"bar_": Int()})}) error = self.assertRaises(SchemaError, schema.coerce, {"name": "foo"}, PATH) self.assertEquals(str(error), ".foo_: required value not found") error = self.assertRaises(SchemaError, schema.coerce, {"name": "bar"}, PATH) self.assertEquals(str(error), ".bar_: required value not found") class BestErrorTest(TestCase): def test_best_error(self): """ OneOf attempts to select a relevant error to report to user when no branch of its schema can be satisitifed. """ schema = OneOf(Unicode(), KeyDict({"flavor": String()}) ) # an error related to flavor is more specific and useful # than one related to the 1st branch error = self.assertRaises(SchemaExpectationError, schema.coerce, {"flavor": None}, PATH) # the correct error is returned self.assertEquals(str(error), ".flavor: expected string, got None") # here the error related to Unicode is better error = self.assertRaises(SchemaExpectationError, schema.coerce, "a string", PATH) self.assertEquals(str(error), ": expected unicode, got 'a string'") # and the success case still functions self.assertEqual(schema.coerce(u"some unicode", PATH), u"some unicode") class OAuthStringTest(TestCase): def test_oauth_string(self): self.assertEquals( ("a", "b", "c"), OAuthString().coerce("a:b:c", PATH)) def test_oauth_string_with_whitespace(self): # Leading and trailing whitespace is stripped; interior whitespace is # not modified. self.assertEquals( ("a", "b c", "d"), OAuthString().coerce(" a : b c :\n d \t", PATH)) def test_too_few_parts(self): # The string must contain three parts, colon-separated. error = self.assertRaises( SchemaError, OAuthString().coerce, "foo", PATH) expected = ": does not contain three colon-separated parts" self.assertEquals(expected, str(error)) def test_too_many_parts(self): # The string must contain three parts, colon-separated. error = self.assertRaises( SchemaError, OAuthString().coerce, "a:b:c:d", PATH) expected = ": does not contain three colon-separated parts" self.assertEquals(expected, str(error)) def test_empty_part(self): # All three parts of the string must contain some text. error = self.assertRaises( SchemaError, OAuthString().coerce, "a:b:", PATH) expected = ": one or more parts are empty" self.assertEquals(expected, str(error)) def test_whitespace_part(self): # A part containing only whitespace is treated as empty. 
error = self.assertRaises( SchemaError, OAuthString().coerce, "a: :c", PATH) expected = ": one or more parts are empty" self.assertEquals(expected, str(error)) juju-0.7.orig/juju/lib/tests/test_service.py0000644000000000000000000000713512135220114017355 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.lib.testing import TestCase from juju.lib.mocker import MATCH, KWARGS from juju.lib.service import TwistedDaemonService from juju.lib.lxc.tests.test_lxc import uses_sudo import os class TwistedDaemonServiceTest(TestCase): @inlineCallbacks def setUp(self): yield super(TwistedDaemonServiceTest, self).setUp() self.setup_service() def setup_service(self): service = TwistedDaemonService("juju-machine-agent", "/tmp/.juju-test.pid", use_sudo=False) service.set_description("Juju machine agent") service.set_environ({"JUJU_MACHINE_ID": 0}) service.set_command(["/bin/true", ]) self.service = service if os.path.exists("/tmp/.juju-test.pid"): os.remove("/tmp/.juju-test.pid") return service def setup_mock(self): self.check_call = self.mocker.replace("subprocess.check_call") def mock_call(self, args): def exe_match(cmd): cmd = " ".join(str(s) for s in cmd) return cmd.startswith(args) self.check_call(MATCH(exe_match), KWARGS) self.mocker.result(0) @inlineCallbacks def test_simple_service_start(self): self.setup_mock() self.mock_call("/bin/true") self.mocker.replay() yield self.service.start() def test_set_output_path(self): # defaults work self.assertEqual(self.service.output_path, "/tmp/juju-machine-agent.output") # override works self.service.output_path = "/tmp/valid.log" self.assertEqual(self.service.output_path, "/tmp/valid.log") @inlineCallbacks def test_simple_service_start_destroy(self): self.setup_mock() mock_service = self.mocker.patch(self.service) get_pid = mock_service.get_pid get_pid() self.mocker.result(1337) is_running = mock_service.is_running is_running() self.mocker.result(False) is_running() self.mocker.result(True) self.mock_call(("/bin/true", )) self.mock_call(("kill", "1337")) self.mocker.replay() yield self.service.start() yield self.service.destroy() @inlineCallbacks def test_webservice_start(self): # test using a real twisted service (with --pidfile) # arg ordering matters here so we set pidfile manually self.service.set_command([ "env", "twistd", "--pidfile", "/tmp/.juju-test.pid", "--logfile", "/tmp/.juju-test.log", "web", "--port", "9871", "--path", "/lib", ]) yield self.service.start() yield self.sleep(0.5) self.assertTrue(self.service.get_pid()) self.assertTrue(self.service.is_running()) self.assertTrue(os.path.exists("/tmp/.juju-test.pid")) yield self.service.destroy() yield self.sleep(0.1) self.assertFalse(os.path.exists("/tmp/.juju-test.pid")) @uses_sudo @inlineCallbacks def test_sudo_env_vars(self): self.service.set_environ( {"JUJU_MACHINE_ID": 0, "PYTHONPATH": "foo2"}) self.service.set_daemon(False) self.service.set_command(["/usr/bin/env"]) self.service.output_path = self.makeFile() yield self.service.start() with open(self.service.output_path) as fh: contents = fh.read() self.assertIn("PYTHONPATH=foo2", contents) self.assertIn("JUJU_MACHINE_ID=0", contents) juju-0.7.orig/juju/lib/tests/test_statemachine.py0000644000000000000000000004533612135220114020367 0ustar 00000000000000import logging from twisted.internet.defer import succeed, fail, inlineCallbacks, Deferred from juju.lib.testing import TestCase from juju.lib.statemachine import ( Workflow, Transition, WorkflowState, InvalidStateError, InvalidTransitionError, TransitionError) class 
TestWorkflowState(WorkflowState): _workflow_state = None # required workflow state implementations def _store(self, state_dict): self._workflow_state = state_dict return succeed(True) def _load(self): return self._workflow_state class AttributeWorkflowState(TestWorkflowState): # transition handlers. def do_jump_puddle(self): self._jumped = True def do_error_transition(self): self._error_handler_invoked = True return dict(error=True) def do_transition_variables(self): return dict(hello="world") def do_error_unknown(self): raise AttributeError("unknown") def do_error_deferred(self): return fail(AttributeError("unknown")) def do_raises_transition_error(self): raise TransitionError("eek") class StateMachineTest(TestCase): def setUp(self): super(StateMachineTest, self).setUp() self.log_stream = self.capture_logging( "statemachine", level=logging.DEBUG) def test_transition_constructor(self): t = Transition("id", "label", "source_state", "destination_state") self.assertEqual(t.transition_id, "id") self.assertEqual(t.label, "label") self.assertEqual(t.source, "source_state") self.assertEqual(t.destination, "destination_state") def test_workflow_get_transitions(self): workflow = Workflow( Transition("init_workflow", "", None, "initialized"), Transition("start", "", "initialized", "started")) self.assertRaises(InvalidStateError, workflow.get_transitions, "magic") self.assertEqual( workflow.get_transitions("initialized"), [workflow.get_transition("start")]) def test_workflow_get_transition(self): transition = Transition("init_workflow", "", None, "initialized") workflow = Workflow( Transition("init_workflow", "", None, "initialized"), Transition("start", "", "initialized", "started")) self.assertRaises(KeyError, workflow.get_transition, "rabid") self.assertEqual( workflow.get_transition("init_workflow").transition_id, transition.transition_id) @inlineCallbacks def test_state_get_available_transitions(self): workflow = Workflow( Transition("init_workflow", "", None, "initialized"), Transition("start", "", "initialized", "started")) workflow_state = AttributeWorkflowState(workflow) transitions = yield workflow_state.get_available_transitions() yield self.assertEqual( transitions, [workflow.get_transition("init_workflow")]) @inlineCallbacks def test_fire_transition_alias_multiple(self): workflow = Workflow( Transition("init", "", None, "initialized", alias="init"), Transition("init_start", "", None, "started", alias="init")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): yield self.assertFailure( workflow_state.fire_transition_alias("init"), InvalidTransitionError) @inlineCallbacks def test_fire_transition_alias_none(self): workflow = Workflow( Transition("init_workflow", "", None, "initialized"), Transition("start", "", "initialized", "started")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): yield self.assertFailure( workflow_state.fire_transition_alias("dog"), InvalidTransitionError) @inlineCallbacks def test_fire_transition_alias(self): workflow = Workflow( Transition("init_magic", "", None, "initialized", alias="init")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): value = yield workflow_state.fire_transition_alias("init") self.assertEqual(value, True) @inlineCallbacks def test_state_get_set(self): workflow = Workflow( Transition("init_workflow", "", None, "initialized"), Transition("start", "", "initialized", "started")) workflow_state = AttributeWorkflowState(workflow) 
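Taken together, the helpers above capture the statemachine pattern these tests exercise: a Workflow is a static set of Transition objects, a WorkflowState subclass contributes persistence via _store/_load plus optional do_<transition_id> action handlers, and transitions are fired while holding the state lock. A condensed standalone sketch of that usage (the same API as above; names and values are illustrative only):

from twisted.internet.defer import inlineCallbacks, succeed

from juju.lib.statemachine import Transition, Workflow, WorkflowState


class MemoryWorkflowState(WorkflowState):
    # Minimal persistence, mirroring TestWorkflowState above.
    _state = None

    def _store(self, state_dict):
        self._state = state_dict
        return succeed(True)

    def _load(self):
        return self._state

    def do_start(self):
        # Action handler; the returned dict becomes state variables.
        return dict(pid=1234)


@inlineCallbacks
def demo():
    workflow = Workflow(
        Transition("init", "", None, "initialized"),
        Transition("start", "", "initialized", "started"))
    state = MemoryWorkflowState(workflow)
    with (yield state.lock()):
        yield state.fire_transition("init")
        yield state.fire_transition("start")
    # get_state() now yields "started"; get_state_variables() {"pid": 1234}.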
        current_state = yield workflow_state.get_state()
        self.assertEqual(current_state, None)
        current_vars = yield workflow_state.get_state_variables()
        self.assertEqual(current_vars, {})
        with (yield workflow_state.lock()):
            yield workflow_state.set_state("started")
        current_state = yield workflow_state.get_state()
        self.assertEqual(current_state, "started")
        current_vars = yield workflow_state.get_state_variables()
        self.assertEqual(current_vars, {})

    @inlineCallbacks
    def test_state_fire_transition(self):
        workflow = Workflow(
            Transition("init_workflow", "", None, "initialized"),
            Transition("start", "", "initialized", "started"))
        workflow_state = AttributeWorkflowState(workflow)
        with (yield workflow_state.lock()):
            yield workflow_state.fire_transition("init_workflow")
            current_state = yield workflow_state.get_state()
            self.assertEqual(current_state, "initialized")
            yield workflow_state.fire_transition("start")
            current_state = yield workflow_state.get_state()
            self.assertEqual(current_state, "started")
            yield self.assertFailure(workflow_state.fire_transition("stop"),
                                     InvalidTransitionError)
        name = "attributeworkflowstate"
        output = (
            "%s: transition init_workflow (None -> initialized) {}",
            "%s: transition complete init_workflow (state initialized) {}",
            "%s: transition start (initialized -> started) {}",
            "%s: transition complete start (state started) {}\n")
        self.assertEqual(self.log_stream.getvalue(),
                         "\n".join([line % name for line in output]))

    @inlineCallbacks
    def test_state_transition_callback(self):
        """If the workflow state defines an action callback for a
        transition, it is invoked when the transition is fired.
        """
        workflow = Workflow(
            Transition("jump_puddle", "", None, "dry"))
        workflow_state = AttributeWorkflowState(workflow)
        with (yield workflow_state.lock()):
            yield workflow_state.fire_transition("jump_puddle")
        current_state = yield workflow_state.get_state()
        self.assertEqual(current_state, "dry")
        self.assertEqual(
            getattr(workflow_state, "_jumped", None), True)

    @inlineCallbacks
    def test_transition_action_workflow_error(self):
        """If a transition action callback raises a TransitionError, the
        transition does not complete, and the state remains the same.
        The fire_transition method in this case returns False.
        """
        workflow = Workflow(
            Transition("raises_transition_error", "", None, "next-state"))
        workflow_state = AttributeWorkflowState(workflow)
        with (yield workflow_state.lock()):
            result = yield workflow_state.fire_transition(
                "raises_transition_error")
            self.assertEqual(result, False)
        current_state = yield workflow_state.get_state()
        self.assertEqual(current_state, None)
        name = "attributeworkflowstate"
        output = (
            "%s: transition raises_transition_error (None -> next-state) {}",
            "%s: execute action do_raises_transition_error",
            "%s: transition raises_transition_error failed eek\n")
        self.assertEqual(self.log_stream.getvalue(),
                         "\n".join([line % name for line in output]))

    @inlineCallbacks
    def test_transition_action_unknown_error(self):
        """If an unknown error is raised by a transition action, it is
        raised from the fire transition method.
        """
        workflow = Workflow(
            Transition("error_unknown", "", None, "next-state"))
        workflow_state = AttributeWorkflowState(workflow)
        with (yield workflow_state.lock()):
            yield self.assertFailure(
                workflow_state.fire_transition("error_unknown"),
                AttributeError)

    @inlineCallbacks
    def test_transition_resets_state_variables(self):
        """State variables are only stored while the associated state
        is current.
""" workflow = Workflow( Transition("transition_variables", "", None, "next-state"), Transition("some_transition", "", "next-state", "final-state")) workflow_state = AttributeWorkflowState(workflow) state_variables = yield workflow_state.get_state_variables() self.assertEqual(state_variables, {}) with (yield workflow_state.lock()): yield workflow_state.fire_transition("transition_variables") current_state = yield workflow_state.get_state() self.assertEqual(current_state, "next-state") state_variables = yield workflow_state.get_state_variables() self.assertEqual(state_variables, {"hello": "world"}) yield workflow_state.fire_transition("some_transition") current_state = yield workflow_state.get_state() self.assertEqual(current_state, "final-state") state_variables = yield workflow_state.get_state_variables() self.assertEqual(state_variables, {}) @inlineCallbacks def test_transition_success_transition(self): """If a transition specifies a success transition, and its action handler completes successfully, the success transistion and associated action handler are executed. """ workflow = Workflow( Transition("initialized", "", None, "next"), Transition("markup", "", "next", "final", automatic=True), ) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): yield workflow_state.fire_transition("initialized") self.assertEqual((yield workflow_state.get_state()), "final") @inlineCallbacks def test_transition_error_transition(self): """If a transition specifies an error transition, and its action handler raises a transition error, the error transition and associated hooks are executed. """ workflow = Workflow( Transition("raises_transition_error", "", None, "next-state", error_transition_id="error_transition"), Transition("error_transition", "", None, "error-state")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): yield workflow_state.fire_transition("raises_transition_error") current_state = yield workflow_state.get_state() self.assertEqual(current_state, "error-state") state_variables = yield workflow_state.get_state_variables() self.assertEqual(state_variables, {"error": True}) @inlineCallbacks def test_state_machine_observer(self): """A state machine observer can be registered for tests visibility of async state transitions.""" results = [] def observer(state, variables): results.append((state, variables)) workflow = Workflow( Transition("begin", "", None, "next-state"), Transition("continue", "", "next-state", "final-state")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): workflow_state.set_observer(observer) yield workflow_state.fire_transition("begin") yield workflow_state.fire_transition("continue") self.assertEqual(results, [("next-state", {}), ("final-state", {})]) @inlineCallbacks def test_state_variables_via_transition(self): """Per state variables can be passed into the transition. 
""" workflow = Workflow( Transition("begin", "", None, "next-state"), Transition("continue", "", "next-state", "final-state")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): yield workflow_state.fire_transition( "begin", rabbit="moon", hello=True) current_state = yield workflow_state.get_state() self.assertEqual(current_state, "next-state") variables = yield workflow_state.get_state_variables() self.assertEqual({"rabbit": "moon", "hello": True}, variables) yield workflow_state.fire_transition("continue") current_state = yield workflow_state.get_state() self.assertEqual(current_state, "final-state") variables = yield workflow_state.get_state_variables() self.assertEqual({}, variables) @inlineCallbacks def test_transition_state(self): """Transitions can be specified by the desired state. """ workflow = Workflow( Transition("begin", "", None, "trail"), Transition("to_cabin", "", "trail", "cabin"), Transition("to_house", "", "trail", "house")) workflow_state = AttributeWorkflowState(workflow) with (yield workflow_state.lock()): result = yield workflow_state.transition_state("trail") self.assertEqual(result, True) current_state = yield workflow_state.get_state() self.assertEqual(current_state, "trail") result = yield workflow_state.transition_state("cabin") self.assertEqual(result, True) result = yield workflow_state.transition_state("house") self.assertEqual(result, False) self.assertFailure(workflow_state.transition_state("unknown"), InvalidStateError) @inlineCallbacks def test_load_bad_state(self): class BadLoadWorkflowState(WorkflowState): def _load(self): return succeed({"some": "other-data"}) workflow = BadLoadWorkflowState(Workflow()) yield self.assertFailure(workflow.get_state(), KeyError) yield self.assertFailure(workflow.get_state_variables(), KeyError) SyncWorkflow = Workflow( Transition("init", "", None, "inited", error_transition_id="error_init"), Transition("error_init", "", None, "borken"), Transition("start", "", "inited", "started", automatic=True), # Disjoint states for testing default transition synchronize. 
Transition("predefault", "", "default_init", "default_start"), Transition("default", "", "default_start", "default_end", automatic=True), ) class SyncWorkflowState(TestWorkflowState): _workflow = SyncWorkflow def __init__(self): super(SyncWorkflowState, self).__init__() self.started = { "init": Deferred(), "error_init": Deferred(), "start": Deferred()} self.blockers = { "init": Deferred(), "error_init": Deferred(), "start": Deferred()} def do(self, transition): self.started[transition].callback(None) return self.blockers[transition] def do_init(self): return self.do("init") def do_error_init(self): return self.do("error_init") def do_start(self): return self.do("start") class StateMachineSynchronizeTest(TestCase): @inlineCallbacks def setUp(self): yield super(StateMachineSynchronizeTest, self).setUp() self.workflow = SyncWorkflowState() @inlineCallbacks def assert_state(self, state, inflight): self.assertEquals((yield self.workflow.get_state()), state) self.assertEquals((yield self.workflow.get_inflight()), inflight) @inlineCallbacks def test_plain_synchronize(self): """synchronize does nothing when no inflight transitions or applicable default transitions""" yield self.assert_state(None, None) with (yield self.workflow.lock()): yield self.workflow.synchronize() yield self.assert_state(None, None) @inlineCallbacks def test_synchronize_default_transition(self): """synchronize runs default transitions after inflight recovery""" with (yield self.workflow.lock()): yield self.workflow.set_state("default_init") yield self.workflow.set_inflight("predefault") yield self.workflow.synchronize() yield self.assert_state("default_end", None) @inlineCallbacks def test_synchronize_inflight_success(self): """synchronize will complete an unfinished transition and run the success transition where warranted""" with (yield self.workflow.lock()): yield self.workflow.set_inflight("init") d = self.workflow.synchronize() yield self.workflow.started["init"] yield self.assert_state(None, "init") self.workflow.blockers["init"].callback(None) yield self.workflow.started["start"] yield self.assert_state("inited", "start") self.workflow.blockers["start"].callback(None) yield d yield self.assert_state("started", None) @inlineCallbacks def test_synchronize_inflight_error(self): """synchronize will complete an unfinished transition and run the error transition where warranted""" with (yield self.workflow.lock()): yield self.workflow.set_inflight("init") d = self.workflow.synchronize() yield self.workflow.started["init"] yield self.assert_state(None, "init") self.workflow.blockers["init"].errback(TransitionError()) yield self.workflow.started["error_init"] yield self.assert_state(None, "error_init") self.workflow.blockers["error_init"].callback(None) yield d yield self.assert_state("borken", None) @inlineCallbacks def test_error_without_transition_clears_inflight(self): """when a transition fails, it should no longer be marked inflight""" with (yield self.workflow.lock()): yield self.workflow.set_state("inited") d = self.workflow.fire_transition("start") yield self.workflow.started["start"] yield self.assert_state("inited", "start") self.workflow.blockers["start"].errback(TransitionError()) yield d yield self.assert_state("inited", None) juju-0.7.orig/juju/lib/tests/test_twistutils.py0000644000000000000000000001036012135220114020142 0ustar 00000000000000import os import time from twisted.internet.defer import ( succeed, fail, Deferred, DeferredList, inlineCallbacks, returnValue) from twisted.internet import reactor import 
juju

from juju.lib.testing import TestCase
from juju.lib.twistutils import (
    concurrent_execution_guard, gather_results, get_module_directory, sleep)


class Bar(object):

    def __init__(self):
        self._count = 0

    @concurrent_execution_guard("guard")
    def my_function(self, a, b=0):
        """zebra"""
        return succeed(a / b)

    @concurrent_execution_guard("other_guard")
    def other_function(self, a):
        return fail(OSError("Bad"))

    @concurrent_execution_guard("increment_guard")
    def slow_increment(self, delay=0.1):
        deferred = Deferred()

        def _increment():
            self._count += 1
            return deferred.callback(self._count)

        reactor.callLater(delay, _increment)
        return deferred

    @concurrent_execution_guard("inline_guard")
    @inlineCallbacks
    def inline_increment(self):
        result = yield self.slow_increment()
        returnValue(result * 100)


class ExecutionGuardTest(TestCase):

    def test_guarded_function_metadata(self):
        self.assertEqual(Bar().my_function.__name__, "my_function")
        self.assertEqual(Bar().my_function.__doc__, "zebra")

    def test_guarded_function_failure(self):
        foo = Bar()
        return self.assertFailure(foo.other_function("1"), OSError)

    def test_guarded_function_sync_exception(self):
        foo = Bar()
        try:
            result = foo.my_function(1)
        except:
            self.fail("Should not raise exception")
        self.assertFailure(result, ZeroDivisionError)
        self.assertFailure(foo.my_function(1), ZeroDivisionError)
        self.assertFalse(foo.guard)

    def test_guard_multiple_execution(self):
        foo = Bar()
        d1 = foo.slow_increment()
        d2 = foo.slow_increment()

        def validate_results(results):
            success, value = results[0]
            self.assertTrue(success)
            self.assertEqual(value, 1)
            success, value = results[1]
            self.assertTrue(success)
            self.assertEqual(value, False)
            return foo.slow_increment()

        def validate_value(results):
            # If the guard had not prevented execution, the value
            # would be 3.
self.assertEqual(results, 2) dlist = DeferredList([d1, d2]) dlist.addCallback(validate_results) dlist.addCallback(validate_value) return dlist def test_guard_w_inline_callbacks(self): foo = Bar() def validate_result(result): self.assertEqual(result, 100) d = foo.inline_increment() d.addCallback(validate_result) return d class GatherResultsTest(TestCase): def test_empty(self): d = gather_results([]) def check_result(result): self.assertEqual(result, []) d.addCallback(check_result) return d def test_success(self): d1 = succeed(1) d2 = succeed(2) d = gather_results([d1, d2]) def check_result(result): self.assertEqual(result, [1, 2]) d.addCallback(check_result) return d def test_failure_consuming_errors(self): d1 = succeed(1) d2 = fail(AssertionError("Expected failure")) d = gather_results([d1, d2]) self.assertFailure(d, AssertionError) return d def test_failure_without_consuming_errors(self): d1 = succeed(1) d2 = fail(AssertionError("Expected failure")) d = gather_results([d1, d2], consume_errors=False) self.assertFailure(d, AssertionError) self.assertFailure(d2, AssertionError) return d class ModuleDirectoryTest(TestCase): def test_get_module_directory(self): directory = get_module_directory(juju) self.assertIn("juju", directory) self.assertNotIn("_trial_temp", directory) self.assertTrue(os.path.isdir(directory)) class SleepTest(TestCase): @inlineCallbacks def test_sleep(self): """Directly test deferred sleep.""" start = time.time() yield sleep(0.1) self.assertGreaterEqual(time.time() - start, 0.1) juju-0.7.orig/juju/lib/tests/test_under.py0000644000000000000000000000156012135220114017026 0ustar 00000000000000import string from juju.lib.testing import TestCase from juju.lib.under import quote class UnderTest(TestCase): def test_unmodified(self): s = string.ascii_letters + string.digits + "-." 
q = quote(s) self.assertEquals(quote(s), s) self.assertTrue(isinstance(q, str)) def test_quote(self): s = "hello_there/how'are~you-today.sir" q = quote(s) self.assertEquals(q, "hello_5f_there_2f_how_27_are_7e_you-today.sir") self.assertTrue(isinstance(q, str)) def test_coincidentally_unicode(self): s = u"hello_there/how'are~you-today.sir" q = quote(s) self.assertEquals(q, "hello_5f_there_2f_how_27_are_7e_you-today.sir") self.assertTrue(isinstance(q, str)) def test_necessarily_unicode(self): s = u"hello\u1234there" self.assertRaises(KeyError, quote, s) juju-0.7.orig/juju/lib/tests/test_upstart.py0000644000000000000000000002754012135220114017421 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks, succeed from juju.errors import ServiceError from juju.lib.mocker import ANY, KWARGS from juju.lib.testing import TestCase from juju.lib.upstart import UpstartService DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") class UpstartServiceTest(TestCase): @inlineCallbacks def setUp(self): yield super(UpstartServiceTest, self).setUp() self.init_dir = self.makeDir() self.conf = os.path.join(self.init_dir, "some-name.conf") self.output = "/tmp/some-name.output" self.patch(UpstartService, "init_dir", self.init_dir) self.service = UpstartService("some-name") def setup_service(self): self.service.set_description("a wretched hive of scum and villainy") self.service.set_command("/bin/imagination-failure --no-ideas") self.service.set_environ({"LIGHTSABER": "civilised weapon"}) def setup_mock(self): self.check_call = self.mocker.replace("subprocess.check_call") self.getProcessOutput = self.mocker.replace( "twisted.internet.utils.getProcessOutput") def mock_status(self, result): self.getProcessOutput("/sbin/status", ["some-name"]) self.mocker.result(result) def mock_call(self, args, output=None): self.check_call(args, KWARGS) if output is None: self.mocker.result(0) else: def write(ANY, **_): with open(self.output, "w") as f: f.write(output) self.mocker.call(write) def mock_start(self, output=None): self.mock_call(("/sbin/start", "some-name"), output) def mock_stop(self): self.mock_call(("/sbin/stop", "some-name")) def mock_check_success(self): for _ in range(5): self.mock_status(succeed("blah start/running blah 12345")) def mock_check_unstable(self): for _ in range(4): self.mock_status(succeed("blah start/running blah 12345")) self.mock_status(succeed("blah start/running blah 12346")) def mock_check_not_running(self): self.mock_status(succeed("blah")) def write_dummy_conf(self): with open(self.conf, "w") as f: f.write("dummy") def assert_dummy_conf(self): with open(self.conf) as f: self.assertEquals(f.read(), "dummy") def assert_no_conf(self): self.assertFalse(os.path.exists(self.conf)) def assert_conf(self, name="test_standard_install"): with open(os.path.join(DATA_DIR, name)) as expected: with open(self.conf) as actual: self.assertEquals(actual.read(), expected.read()) def test_is_installed(self): """Check is_installed depends on conf file existence""" self.assertFalse(self.service.is_installed()) self.write_dummy_conf() self.assertTrue(self.service.is_installed()) def test_init_dir(self): """ Check is_installed still works when init_dir specified explicitly """ self.patch(UpstartService, "init_dir", "/BAD/PATH") self.service = UpstartService("some-name", init_dir=self.init_dir) self.setup_service() self.assertFalse(self.service.is_installed()) self.write_dummy_conf() self.assertTrue(self.service.is_installed()) @inlineCallbacks def 
test_is_running(self): """ Check is_running interprets status output (when service is installed) """ self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_status(succeed("blah blob/gibbering blah")) self.mock_status(succeed("blah start/running blah 12345")) self.mocker.replay() # Won't hit status; conf is not installed self.assertFalse((yield self.service.is_running())) self.write_dummy_conf() # These 3 calls correspond to the first 3 mock_status calls above self.assertFalse((yield self.service.is_running())) self.assertFalse((yield self.service.is_running())) self.assertTrue((yield self.service.is_running())) @inlineCallbacks def test_is_stable_yes(self): self.setup_mock() self.mock_check_success() self.mocker.replay() self.write_dummy_conf() self.assertTrue((yield self.service.is_stable())) @inlineCallbacks def test_is_stable_no(self): self.setup_mock() self.mock_check_unstable() self.mocker.replay() self.write_dummy_conf() self.assertFalse((yield self.service.is_stable())) @inlineCallbacks def test_is_stable_not_running(self): self.setup_mock() self.mock_check_not_running() self.mocker.replay() self.write_dummy_conf() self.assertFalse((yield self.service.is_stable())) @inlineCallbacks def test_is_stable_not_even_installed(self): self.assertFalse((yield self.service.is_stable())) @inlineCallbacks def test_get_pid(self): """ Check get_pid interprets status output (when service is installed) """ self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_status(succeed("blah blob/gibbering blah")) self.mock_status(succeed("blah start/running blah 12345")) self.mocker.replay() # Won't hit status; conf is not installed self.assertEquals((yield self.service.get_pid()), None) self.write_dummy_conf() # These 3 calls correspond to the first 3 mock_status calls above self.assertEquals((yield self.service.get_pid()), None) self.assertEquals((yield self.service.get_pid()), None) self.assertEquals((yield self.service.get_pid()), 12345) @inlineCallbacks def test_basic_install(self): """Check a simple UpstartService writes expected conf file""" e = yield self.assertFailure(self.service.install(), ServiceError) self.assertEquals(str(e), "Cannot render .conf: no description set") self.service.set_description("uninteresting service") e = yield self.assertFailure(self.service.install(), ServiceError) self.assertEquals(str(e), "Cannot render .conf: no command set") self.service.set_command("/bin/false") yield self.service.install() self.assert_conf("test_basic_install") @inlineCallbacks def test_less_basic_install(self): """Check conf for a different UpstartService (which sets an env var)""" self.service.set_description("pew pew pew blam") self.service.set_command("/bin/deathstar --ignore-ewoks endor") self.service.set_environ({"FOO": "bar baz qux", "PEW": "pew"}) self.service.set_output_path("/somewhere/else") yield self.service.install() self.assert_conf("test_less_basic_install") def test_install_via_script(self): """Check that the output-as-script form does the right thing""" self.setup_service() install, start = self.service.get_cloud_init_commands() os.system(install) self.assert_conf() self.assertEquals(start, "/sbin/start some-name") @inlineCallbacks def test_start_not_installed(self): """Check that .start() also installs if necessary""" self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_start() self.mock_check_success() self.mocker.replay() self.setup_service() yield self.service.start() self.assert_conf() @inlineCallbacks 
def test_start_not_started_stable(self): """Check that .start() starts if stopped, and checks for stable pid""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_start("ignored") self.mock_check_success() self.mocker.replay() self.setup_service() yield self.service.start() self.assert_dummy_conf() @inlineCallbacks def test_start_not_started_unstable(self): """Check that .start() starts if stopped, and raises on unstable pid""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_start("kangaroo") self.mock_check_unstable() self.mocker.replay() self.setup_service() e = yield self.assertFailure(self.service.start(), ServiceError) self.assertEquals( str(e), "Failed to start job some-name; got output:\nkangaroo") self.assert_dummy_conf() @inlineCallbacks def test_start_not_started_failure(self): """Check that .start() starts if stopped, and raises on no pid""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mock_start() self.mock_check_not_running() self.mocker.replay() self.setup_service() e = yield self.assertFailure(self.service.start(), ServiceError) self.assertEquals( str(e), "Failed to start job some-name; no output detected") self.assert_dummy_conf() @inlineCallbacks def test_start_started(self): """Check that .start() does nothing if already running""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah start/running blah 12345")) self.mocker.replay() self.setup_service() yield self.service.start() self.assert_dummy_conf() @inlineCallbacks def test_destroy_not_installed(self): """Check .destroy() does nothing if not installed""" yield self.service.destroy() self.assert_no_conf() @inlineCallbacks def test_destroy_not_started(self): """Check .destroy just deletes conf if not running""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah stop/waiting blah")) self.mocker.replay() yield self.service.destroy() self.assert_no_conf() @inlineCallbacks def test_destroy_started(self): """Check .destroy() stops running service and deletes conf file""" self.write_dummy_conf() self.setup_mock() self.mock_status(succeed("blah start/running blah 54321")) self.mock_stop() self.mocker.replay() yield self.service.destroy() self.assert_no_conf() @inlineCallbacks def test_use_sudo(self): """Check that expected commands are generated when use_sudo is set""" self.setup_mock() self.service = UpstartService("some-name", use_sudo=True) self.setup_service() with open(self.output, "w") as f: f.write("clear this file out...") def verify_cp(args, **kwargs): sudo, cp, src, dst = args self.assertEquals(sudo, "sudo") self.assertEquals(cp, "cp") with open(os.path.join(DATA_DIR, "test_standard_install")) as exp: with open(src) as actual: self.assertEquals(actual.read(), exp.read()) self.assertEquals(dst, self.conf) self.write_dummy_conf() self.check_call(ANY, KWARGS) self.mocker.call(verify_cp) self.mock_call(("sudo", "rm", self.output)) self.mock_call(("sudo", "chmod", "644", self.conf)) self.mock_status(succeed("blah stop/waiting blah")) self.mock_call(("sudo", "/sbin/start", "some-name")) # 5 for initial stability check; 1 for final do-we-need-to-stop check for _ in range(6): self.mock_status(succeed("blah start/running blah 12345")) self.mock_call(("sudo", "/sbin/stop", "some-name")) self.mock_call(("sudo", "rm", self.conf)) self.mock_call(("sudo", "rm", self.output)) self.mocker.replay() yield self.service.start() yield 
self.service.destroy() juju-0.7.orig/juju/lib/tests/test_zk.py0000644000000000000000000001061112135220114016332 0ustar 00000000000000import os import zookeeper from twisted.internet.defer import inlineCallbacks from txzookeeper import ZookeeperClient from juju.lib.testing import TestCase from juju.lib.zk import Zookeeper, check_zookeeper from juju.lib.port import get_open_port sample_package_environment_conf = """\ NAME=zookeeper ZOOCFGDIR=/etc/$NAME/conf # TODO this is really ugly # How to find out, which jars are needed? # seems, that log4j requires the log4j.properties file to be in the classpath CLASSPATH="$ZOOCFGDIR:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/zookeeper.jar" ZOOCFG="$ZOOCFGDIR/zoo.cfg" ZOO_LOG_DIR=/var/log/$NAME USER=$NAME GROUP=$NAME PIDDIR=/var/run/$NAME PIDFILE=$PIDDIR/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME JAVA=/usr/bin/java ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain" ZOO_LOG4J_PROP="INFO,ROLLINGFILE" JMXLOCALONLY=false JAVA_OPTS="" """ class LocalManagedZookeeperTestCase(TestCase): def test_get_class_path_from_build(self): data_dir = self.makeDir() software_dir = self.makeDir() os.mkdir(os.path.join(software_dir, "build")) lib_dir = os.path.join(software_dir, "build", "lib") os.mkdir(lib_dir) self.makeFile("", path=os.path.join( software_dir, "build", "zookeeper-3.4.0.jar")) for p in ["jline-0.9.94.jar", "netty-3.2.2.Final.jar", "log4j-1.2.15.jar", "slf4j-log4j12-1.6.1.jar", "slf4j-api-1.6.1.jar"]: self.makeFile("", path=os.path.join(lib_dir, p)) instance = Zookeeper(data_dir, 12345, zk_location=software_dir) class_path = instance.get_class_path() self.assertEqual(class_path.index(data_dir), 0) self.assertIn( os.path.join(lib_dir, "log4j-1.2.15.jar"), class_path) self.assertIn( os.path.join(software_dir, "build", "zookeeper-3.4.0.jar"), class_path) def test_get_class_path_from_package_static(self): data_dir = self.makeDir() instance = Zookeeper(data_dir, 12345) instance.package_class_path_file = sample_package_environment_conf class_path = instance.get_class_path() self.assertEqual(class_path.index(data_dir), 0) self.assertIn("/usr/share/java/jline.jar", class_path) self.assertIn("/usr/share/java/log4j-1.2.jar", class_path) self.assertIn("/usr/share/java/zookeeper.jar", class_path) def test_get_class_path_from_package(self): data_dir = self.makeDir() instance = Zookeeper(data_dir, 12345) class_path = instance.get_class_path() self.assertEqual(class_path.index(data_dir), 0) self.assertIn("/usr/share/java/jline.jar", class_path) self.assertIn("/usr/share/java/log4j-1.2.jar", class_path) self.assertIn("/usr/share/java/zookeeper.jar", class_path) def test_get_zookeeper_variables(self): """All data files are contained in the specified directory. 
""" data_dir = self.makeDir() instance = Zookeeper(data_dir, 12345) variables = instance.get_zookeeper_variables() variables.pop("class_path") for v in variables.values(): self.assertTrue(v.startswith(data_dir)) @inlineCallbacks def test_managed_zookeeper(self): zookeeper.set_debug_level(0) port = get_open_port() # Start zookeeper data_dir = self.makeDir() instance = Zookeeper( data_dir, port, min_session_timeout=80000, max_session_timeout=100000) yield instance.start() self.assertTrue(instance.running) # Connect a client client = ZookeeperClient("127.0.1.1:%d" % port) yield client.connect() stat = yield client.exists("/") self.assertEqual(client.session_timeout, 80000) yield client.close() self.assertTrue(stat) # Stop Instance yield instance.stop() self.assertFalse(instance.running) self.assertFalse(os.path.exists(data_dir)) if not check_zookeeper(): notinst = "Zookeeper not installed in the system" test_managed_zookeeper.skip = notinst test_get_class_path_from_package_static.skip = notinst test_get_zookeeper_variables.skip = notinst test_get_class_path_from_package.skip = notinst juju-0.7.orig/juju/lib/tests/test_zklog.py0000644000000000000000000002070412135220114017040 0ustar 00000000000000import json import logging import time import zookeeper from twisted.internet.defer import inlineCallbacks, returnValue, fail from txzookeeper.tests.utils import deleteTree from juju.lib.mocker import MATCH from juju.lib.testing import TestCase from juju.lib.zklog import ZookeeperHandler, LogIterator class LogTestBase(TestCase): @inlineCallbacks def setUp(self): yield super(LogTestBase, self).setUp() zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() @inlineCallbacks def tearDown(self): yield super(LogTestBase, self).tearDown() if self.client.connected: self.client.close() client = yield self.get_zookeeper_client().connect() deleteTree(handle=client.handle) yield client.close() @inlineCallbacks def get_configured_log( self, channel="test-zk-log", context_name="unit:mysql/0"): """Get a log channel configured with a zk handler. 
""" log = logging.getLogger(channel) log.setLevel(logging.DEBUG) handler = ZookeeperHandler(self.client, context_name) yield handler.open() log.addHandler(handler) returnValue(log) @inlineCallbacks def poke_zk(self): """Do a roundtrip to zookeeper.""" yield self.client.exists("/zookeeper") class ZookeeperLogTest(LogTestBase): def test_log_container_path(self): handler = ZookeeperHandler(self.client, "unit:mysql/0") self.assertEqual(handler.log_container_path, "/logs") def test_invalid_log_path(self): """The handler raises ValueError on invalid log paths.""" self.assertRaises( ValueError, ZookeeperHandler, self.client, "something", log_path="//") self.assertRaises( ValueError, ZookeeperHandler, self.client, "something", log_path="/abc/def/gef") def test_log_path_specs_path(self): """The handler can specify either the container or full log path.""" # Verify default log path location handler = ZookeeperHandler(self.client, "something") self.assertEqual(handler.log_path, "/logs/log-") # Specify a log path handler = ZookeeperHandler( self.client, "something", log_path="/logs/msg-") self.assertEqual(handler.log_path, "/logs/msg-") @inlineCallbacks def test_open_with_existing_container_path(self): yield self.client.create("/logs") handler = ZookeeperHandler(self.client, "unit:mysql/0") # should not raise exception yield handler.open() @inlineCallbacks def test_open_with_nonexisting_container_path(self): handler = ZookeeperHandler(self.client, "unit:mysql/0") # should not raise exception yield handler.open() exists = yield self.client.exists("/logs") self.assertTrue(exists) @inlineCallbacks def test_log_after_client_close_does_not_emit(self): """After a client is closed no log messages are sent.""" log = yield self.get_configured_log() self.client.close() log.info("bad stuff") yield self.client.connect() entries = yield self.client.get_children("/logs") self.assertFalse(entries) @inlineCallbacks def test_log_message(self): """Log messages include standard log record information.""" log = yield self.get_configured_log() log.info("hello world") # Retrieve the log record content, stat = yield self.client.get("/logs/log-%010d" % 0) # Verify the stored record now = time.time() data = json.loads(content) self.assertEqual(data["msg"], "hello world") self.assertEqual(data["context"], "unit:mysql/0") self.assertEqual(data["levelname"], "INFO") self.assertEqual(data["funcName"], "test_log_message") self.assertTrue(data["created"] + 5 > now) @inlineCallbacks def test_log_error(self): """Exceptions, and errors are formatted w/ tracebacks.""" log = yield self.get_configured_log() try: 1 / 0 except Exception: log.exception("something bad") # Retrieve the log record content, stat = yield self.client.get("/logs/log-%010d" % 0) data = json.loads(content) self.assertIn("something bad", data["msg"]) self.assertIn("Traceback", data["message"]) self.assertIn("ZeroDivisionError", data["message"]) self.assertEqual(data["context"], "unit:mysql/0") self.assertEqual(data["levelname"], "ERROR") self.assertEqual(data["funcName"], "test_log_error") @inlineCallbacks def test_handler_error(self): """An error in the handler gets reported to stderr.""" self.error_stream = self.capture_stream("stderr") mock_client = self.mocker.patch(self.client) mock_client.create("/logs/log-", MATCH(lambda x: isinstance(x, str)), flags=zookeeper.SEQUENCE) self.mocker.result(fail(zookeeper.NoNodeException())) self.mocker.replay() log = yield self.get_configured_log() log.info("something interesting") # assert the log entry doesn't exist exists = 
yield self.client.exists("/logs/log-%010d" % 0) self.assertFalse(exists) # assert the error made it to stderr self.assertIn("NoNodeException", self.error_stream.getvalue()) @inlineCallbacks def test_log_object(self): """Log message interpolation using objects is supported.""" class Foobar(object): def __init__(self, v): self._v = v def __str__(self): return str("Foobar:%s" % self._v) log = yield self.get_configured_log() log.info("world of %s", Foobar("cloud")) content, stat = yield self.client.get("/logs/log-%010d" % 0) record = json.loads(content) self.assertEqual( record["message"], "world of Foobar:cloud") self.assertEqual( record["args"], []) class LogIteratorTest(LogTestBase): @inlineCallbacks def setUp(self): yield super(LogIteratorTest, self).setUp() self.log = yield self.get_configured_log() self.iter = LogIterator(self.client) @inlineCallbacks def test_get_next_log(self): self.log.info("hello world") yield self.poke_zk() entry = yield self.iter.next() self.assertEqual(entry["levelname"], "INFO") @inlineCallbacks def test_flush_log_last_seen(self): for i in ("a", "b", "c"): self.log.info(i) iter = LogIterator(self.client, seen_block_size=1) yield iter.next() yield iter.next() # Now if we pick up again, we should continue with the last message iter = LogIterator(self.client, seen_block_size=1) entry = yield iter.next() data, stat = yield self.client.get("/logs") self.assertTrue(data) self.assertEqual(json.loads(data), {"next-log-index": 3}) self.assertEqual(entry["msg"], "c") self.assertEqual(stat["version"], 3) @inlineCallbacks def test_iter_sans_container(self): iter = LogIterator(self.client) entry_d = iter.next() # make sure it doesn't blow up yield self.poke_zk() self.log.info("apple") entry = yield entry_d self.assertEqual(entry["msg"], "apple") @inlineCallbacks def test_replay_log(self): for i in ("a", "b", "c", "d", "e", "f", "z"): self.log.info(i) yield self.client.set("/logs", json.dumps({"next-log-index": 3})) iter = LogIterator(self.client, replay=True, seen_block_size=1) entry = yield iter.next() self.assertEqual(entry["msg"], "a") entry = yield iter.next() self.assertEqual(entry["msg"], "b") # make sure we haven't updated the last seen index. data, stat = yield self.client.get("/logs") self.assertEqual(json.loads(data), {"next-log-index": 3}) # now if we advance past the last seen index, we'll start # updating the counter. for i in range(4): entry = yield iter.next() self.assertEqual(entry["msg"], "f") # make sure we updated the last seen index. 
data, stat = yield self.client.get("/logs") self.assertEqual(json.loads(data), {"next-log-index": 6}) juju-0.7.orig/juju/lib/tests/data/test_basic_install0000644000000000000000000000027312135220114021002 0ustar 00000000000000description "uninteresting service" author "Juju Team " start on runlevel [2345] stop on runlevel [!2345] respawn exec /bin/false >> /tmp/some-name.output 2>&1 juju-0.7.orig/juju/lib/tests/data/test_less_basic_install0000644000000000000000000000035412135220114022030 0ustar 00000000000000description "pew pew pew blam" author "Juju Team " start on runlevel [2345] stop on runlevel [!2345] respawn env FOO="bar baz qux" env PEW="pew" exec /bin/deathstar --ignore-ewoks endor >> /somewhere/else 2>&1 juju-0.7.orig/juju/lib/tests/data/test_standard_install0000644000000000000000000000040412135220114021515 0ustar 00000000000000description "a wretched hive of scum and villainy" author "Juju Team " start on runlevel [2345] stop on runlevel [!2345] respawn env LIGHTSABER="civilised weapon" exec /bin/imagination-failure --no-ideas >> /tmp/some-name.output 2>&1 juju-0.7.orig/juju/machine/__init__.py0000644000000000000000000000133212135220114016102 0ustar 00000000000000class ProviderMachine(object): """ Representative of a machine resource created by a :class:`MachineProvider`. The object is typically annotated by the machine provider, such that the provider can perform subsequent actions upon it, using the additional metadata for identification, without leaking these details to consumers of the :class:`MachineProvider` api. """ def __init__(self, instance_id, dns_name=None, private_dns_name=None, state="unknown"): self.instance_id = instance_id # ideally this would be ip_address, but txaws doesn't expose it. self.dns_name = dns_name self.private_dns_name = private_dns_name self.state = state juju-0.7.orig/juju/machine/constraints.py0000644000000000000000000002771612135220114016730 0ustar 00000000000000import operator from UserDict import DictMixin from juju.errors import ConstraintError, UnknownConstraintError def _dont_convert(s): return s class _ConstraintType(object): """Defines a constraint. :param str name: The constraint's name :param default: The default value as a str, or None to indicate "unset" :param converter: Function to convert str value to "real" value (and thereby implicitly validate it; should raise ValueError) :param comparer: Function used to determine whether one constraint satisfies another :param bool visible: If False, indicates a computed constraint which should not be settable by a user. Merely creating a Constraint does not activate it; you also need to register it with a specific ConstraintSet. """ def __init__(self, name, default, converter, comparer, visible): self.name = name self.default = default self._converter = converter self._comparer = comparer self.visible = visible def convert(self, s): """Convert a string representation of a constraint into a useful form. """ if s is None: return try: return self._converter(s) except ValueError as e: raise ConstraintError( "Bad %r constraint %r: %s" % (self.name, s, e)) def can_satisfy(self, candidate, benchmark): """Check whether candidate can satisfy benchmark""" return self._comparer(candidate, benchmark) class ConstraintSet(object): """A ConstraintSet represents all constraints applicable to a provider. 
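A sketch of typical use, based on the registration and parsing calls
exercised in this package's tests (the provider name and constraint
strings are illustrative):

    cs = ConstraintSet("dummy")
    cs.register_generics([])    # registers arch, cpu and mem
    constraints = cs.parse(["cpu=2", "mem=2G"])
    constraints = constraints.with_series("precise")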
Individual providers can construct ConstraintSets which will be used to construct Constraints objects directly relevant to that provider.""" def __init__(self, provider_type): self._provider_type = provider_type self._registry = {} self._conflicts = {} # These constraints must always be available (but are not user-visible # or -settable). self.register("ubuntu-series", visible=False) self.register("provider-type", visible=False) def register(self, name, default=None, converter=_dont_convert, comparer=operator.eq, visible=True): """Register a constraint to be handled by this ConstraintSet. :param str name: The constraint's name :param default: The default value as a str, or None to indicate "unset" :param converter: Function to convert str value to "real" value (and thereby implicitly validate it; should raise ValueError) :param comparer: Function used to determine whether one constraint satisfies another :param bool visible: If False, indicates a computed constraint which should not be settable by a user. """ self._registry[name] = _ConstraintType( name, default, converter, comparer, visible) self._conflicts[name] = set() def register_conflicts(self, reds, blues): """Set cross-constraint override behaviour. :param reds: list of constraint names which affect all constraints specified in `blues` :param blues: list of constraint names which affect all constraints specified in `reds` When two constraints conflict: * It is an error to set both constraints in the same Constraints. * When a Constraints overrides another which specifies a conflicting constraint, the value in the overridden Constraints is cleared. """ for red in reds: self._conflicts[red].update(blues) for blue in blues: self._conflicts[blue].update(reds) def register_generics(self, instance_type_names): """Register a common set of constraints. This always includes arch, cpu, and mem; and will include instance-type if instance_type_names is not empty. This is because we believe instance-type to be a broadly applicable concept, even though the only provider that registers names here (and hence accepts the constraint) is currently EC2. """ self.register("arch", default="amd64", converter=_convert_arch) self.register( "cpu", default="1", converter=_convert_cpu, comparer=operator.ge) self.register( "mem", default="512M", converter=_convert_mem, comparer=operator.ge) if instance_type_names: def convert(instance_type_name): if instance_type_name in instance_type_names: return instance_type_name raise ValueError("unknown instance type") self.register("instance-type", converter=convert) self.register_conflicts(["cpu", "mem"], ["instance-type"]) def names(self): """Get the names of all registered constraints.""" return self._registry.keys() def get(self, name): """Get the (internal) _ConstraintType object corresponding to `name`. Returns None if no _ConstraintType has been registered under that name. 
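For example (a sketch, assuming the generic constraints have been
registered on this set):

    cs.get("mem").convert("512M")   # -> 512.0 (megabytes)
    cs.get("nonesuch")              # -> None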
""" return self._registry.get(name) def parse(self, strs): """Create a Constraints from strings (as used on the command line)""" data = {"provider-type": self._provider_type} for s in strs: try: name, value = s.split("=", 1) constraint = self.get(name) if constraint is None: raise UnknownConstraintError(name) if value == "any": value = None if value == "": value = constraint.default constraint.convert(value) except ValueError as e: raise ConstraintError( "Could not interpret %r constraint: %s" % (s, e)) if not constraint.visible: raise ConstraintError( "Cannot set computed constraint: %r" % name) data[name] = value conflicts = set() for name in sorted(data): if data[name] is None: continue for conflict in sorted(self._conflicts[name]): if conflict in data: raise ConstraintError( "Ambiguous constraints: %r overlaps with %r" % (name, conflict)) conflicts.add(conflict) data.update(dict((conflict, None) for conflict in conflicts)) return Constraints(self, data) def load(self, data): """Convert a data dict to a Constraints""" for k, v in data.items(): constraint = self.get(k) if constraint is not None: # Include all of data; validate those parts we know how to. constraint.convert(v) return Constraints(self, data) class Constraints(object, DictMixin): """A Constraints object encapsulates a set of machine constraints. Constraints instances should not be constructed directly; please use ConstraintSet's parse and load methods instead. They implement a dict interface, which exposes all constraints for the appropriate provider, and is the expected mode of usage for clients not concerned with the construction or comparison of Constraints objects. A Constraints object only ever contains a single "layer" of data, but can be combined with other Constraints objects in such a way as to produce a single object following the rules laid down in internals/placement-spec. Constraints objects can be compared, in a limited sense, by using the `can_satisfy` method. """ def __init__(self, available, data): self._available = available self._data = data def keys(self): """DictMixin""" return self._available.names() def __getitem__(self, name): """DictMixin""" if name not in self.keys(): raise KeyError(name) constraint = self._available.get(name) raw_value = self.data.get(name, constraint.default) return constraint.convert(raw_value) def with_series(self, series): """Return a Constraints with the "ubuntu-series" set to `series`""" data = dict(self._data) data["ubuntu-series"] = series return self._available.load(data) @property def complete(self): """Have provider-type and ubuntu-series both been set?""" return None not in ( self.get("provider-type"), self.get("ubuntu-series")) @property def data(self): """Return a dict suitable for serialisation and reconstruction. Note that data contains (1) the specified value for every constraint that has been explicitly set, and (2) a None value for every constraint which conflicts with one that has been set. Therefore, by updating one Constraints's data with another's, any setting thus masked on the lower level will be preserved as None; consequently, Constraints~s can be collapsed onto one another without losing any information that is not overridden (whether implicitly or explicitly) by the overriding Constraints. """ return dict(self._data) def update(self, other): """Overwrite `self`'s data from `other`.""" self._data.update(other.data) def can_satisfy(self, other): """Can a machine with constraints `self` be used for a unit with constraints `other`? 
ie :: if machine_constraints.can_satisfy(unit_constraints): # place unit on machine """ if not (self.complete and other.complete): # Incomplete constraints cannot satisfy or be satisfied; we should # only ever hit this branch if we're running new code (that knows # about constraints) against an old deployment (which will contain # at least *some* services/machines which don't have constraints). return False for (name, unit_value) in other.items(): if unit_value is None: # The unit doesn't care; any machine value will be fine. continue machine_value = self[name] if machine_value is None: # The unit *does* care, and the machine value isn't # specified, so we can't guarantee a match. If we were # to update machine constraints after provisioning (ie # when we knew the values of the constraints left # unspecified) we'd hit this branch less often. We # may also need to do something clever here to get # sensible machine reuse on ec2 -- in what # circumstances, if ever, is it OK to place a unit # specced for one instance-type on a machine of # another type? Does it matter if either or both were # derived from generic constraints? What about cost? return False constraint = self._available.get(name) if not constraint.can_satisfy(machine_value, unit_value): # The machine's value is definitely not ok for the unit. return False return True #============================================================================== # Generic constraint information (used by multiple providers). _VALID_ARCHS = ("i386", "amd64", "arm", "arm64") _MEGABYTES = 1 _GIGABYTES = _MEGABYTES * 1024 _TERABYTES = _GIGABYTES * 1024 _MEM_SUFFIXES = {"M": _MEGABYTES, "G": _GIGABYTES, "T": _TERABYTES} def _convert_arch(s): if s in _VALID_ARCHS: return s raise ValueError("unknown architecture") def _convert_cpu(s): value = float(s) if value >= 0: return value raise ValueError("must be non-negative") def _convert_mem(s): if s[-1] in _MEM_SUFFIXES: value = float(s[:-1]) * _MEM_SUFFIXES[s[-1]] else: value = float(s) if value >= 0: return value raise ValueError("must be non-negative") juju-0.7.orig/juju/machine/errors.py0000644000000000000000000000020512135220114015655 0ustar 00000000000000 from juju.errors import JujuError class UnitDeploymentError(JujuError): """An error occurred while deploying a service unit""" juju-0.7.orig/juju/machine/tests/0000755000000000000000000000000012135220114015134 5ustar 00000000000000juju-0.7.orig/juju/machine/unit.py0000644000000000000000000002604112135220114015326 0ustar 00000000000000import os import shutil import logging import juju from twisted.internet.defer import inlineCallbacks, returnValue from juju.charm.bundle import CharmBundle from juju.errors import ServiceError from juju.lib.lxc import LXCContainer, get_containers from juju.lib.twistutils import get_module_directory from juju.lib.upstart import UpstartService from juju.providers.common.cloudinit import CloudInit from .errors import UnitDeploymentError log = logging.getLogger("unit.deploy") def get_deploy_factory(provider_type): if provider_type == "local": return UnitContainerDeployment elif provider_type == "subordinate": return SubordinateContainerDeployment return UnitMachineDeployment def _get_environment(unit_name, juju_home, machine_id, zookeeper_hosts, env_id): """ Return environment dictionary for unit.
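For example (illustrative values only):

    env = _get_environment(
        "mysql/0", "/var/lib/juju", 0, "localhost:2181", "some-env-uuid")
    env["JUJU_UNIT_NAME"]    # -> "mysql/0"
    env["JUJU_MACHINE_ID"]   # -> "0"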
""" environ = dict() environ["JUJU_MACHINE_ID"] = str(machine_id) environ["JUJU_UNIT_NAME"] = unit_name environ["JUJU_HOME"] = juju_home environ["JUJU_ZOOKEEPER"] = zookeeper_hosts environ["JUJU_ENV_UUID"] = env_id environ["PYTHONPATH"] = ":".join( filter(None, [ os.path.dirname(get_module_directory(juju)), os.environ.get("PYTHONPATH")])) return environ class UnitMachineDeployment(object): """ Deploy a unit directly onto a machine. A service unit deployed directly to a machine has full access to the machine resources. Units deployed in such a manner have no isolation from other units on the machine, and may leave artifacts on the machine even upon service destruction. """ unit_agent_module = "juju.agents.unit" def __init__(self, unit_name, juju_home): assert ".." not in unit_name, "Invalid Unit Name" self.unit_name = unit_name self.juju_home = juju_home self.unit_path_name = unit_name.replace("/", "-") self.directory = os.path.join( self.juju_home, "units", self.unit_path_name) self.service = UpstartService( # NOTE: we need use_sudo to work correctly during tests that # launch actual processes (rather than just mocking/trusting). "juju-%s" % self.unit_path_name, use_sudo=True) @inlineCallbacks def start(self, env_id, machine_id, zookeeper_hosts, bundle): """Start a service unit agent.""" self.unpack_charm(bundle) self.service.set_description( "Juju unit agent for %s" % self.unit_name) self.service.set_environ(_get_environment( self.unit_name, self.juju_home, machine_id, zookeeper_hosts, env_id)) self.service.set_command(" ".join(( "/usr/bin/python", "-m", self.unit_agent_module, "--nodaemon", "--logfile", os.path.join(self.directory, "charm.log"), "--session-file", "/var/run/juju/unit-%s-agent.zksession" % self.unit_path_name))) try: yield self.service.start() except ServiceError as e: raise UnitDeploymentError(str(e)) @inlineCallbacks def destroy(self): """Forcibly terminate a service unit agent, and clean disk state. This will destroy/unmount any state on disk. """ yield self.service.destroy() if os.path.exists(self.directory): shutil.rmtree(self.directory) def get_pid(self): """Get the service unit's process id.""" return self.service.get_pid() def is_running(self): """Is the service unit running.""" return self.service.is_running() def unpack_charm(self, charm): """Unpack a charm to the service units directory.""" if not isinstance(charm, CharmBundle): raise UnitDeploymentError( "Invalid charm for deployment: %s" % charm.path) charm.extract_to(os.path.join(self.directory, "charm")) class SubordinateContainerDeployment(UnitMachineDeployment): """Deploy a subordinate unit. Assumes the basic runtime has been built for/by the principal service or its machine agent. """ def __init__(self, unit_name, juju_home): assert ".." not in unit_name, "Invalid Unit Name" self.unit_name = unit_name self.juju_home = juju_home self.unit_path_name = unit_name.replace("/", "-") self.directory = os.path.join( self.juju_home, "units", self.unit_path_name) self.service = UpstartService( # NOTE: we need use_sudo to work correctly during tests that # launch actual processes (rather than just mocking/trusting). "juju-%s" % self.unit_path_name, use_sudo=True) class UnitContainerDeployment(object): """Deploy a service unit in a container. Units deployed in a container have strong isolation between others units deployed in a container on the same machine. From the perspective of the service unit, the container deployment should be indistinguishable from a machine deployment. 
Note, strong isolation properties are still fairly trivial to escape for a user with a root account within the container. This is an ongoing development topic for LXC. """ def __init__(self, unit_name, juju_home): self.unit_name = unit_name self.juju_home = juju_home self.unit_path_name = unit_name.replace("/", "-") self._juju_origin = os.environ.get("JUJU_ORIGIN") self._juju_series = os.environ.get("JUJU_SERIES") assert self._juju_series is not None, "Required juju series not found" self._unit_namespace = os.environ.get("JUJU_UNIT_NS") assert self._unit_namespace is not None, "Required unit ns not found" self.container_name = "%s-%s" % ( self._unit_namespace, self.unit_path_name) self.container = LXCContainer(self.container_name, None, None, None) self.directory = None self._container_juju_home = '/var/lib/juju' def setup_directories(self): # Create state directories for unit in the container base = self.directory dirs = ((base, "var", "lib", "juju", "units", self.unit_path_name), (base, "var", "lib", "juju", "state"), (base, "var", "log", "juju"), (self.juju_home, "units", self.unit_path_name)) for parts in dirs: dir_ = os.path.join(*parts) if not os.path.exists(dir_): os.makedirs(dir_) def _get_cloud_init(self, zookeepers): cloud_init = CloudInit() # remove any quoting around the key authorized_keys = os.environ.get("JUJU_PUBLIC_KEY", "") authorized_keys = authorized_keys.strip("'\"") cloud_init.add_ssh_key(authorized_keys) zks = [] for zk in zookeepers.split(','): if ':' in zk: (zk, port) = zk.split(':') else: port = 2181 zks.append((zk, port)) cloud_init.set_zookeeper_hosts(zks) # XXX Very hard to access the provider's notion of network # or even env configs, so just assume the first ZK is running # the apt-cacher-ng since this is meant for local provider. cloud_init.set_apt_proxy('http://%s:3142' % zks[0][0]) if self._juju_origin: if self._juju_origin.startswith("lp:"): cloud_init.set_juju_source(branch=self._juju_origin) elif self._juju_origin == "ppa": cloud_init.set_juju_source(ppa=True) elif self._juju_origin == "proposed": cloud_init.set_juju_source(proposed=True) else: # Ignore other values, just use the distro for sanity cloud_init.set_juju_source(distro=True) cloud_init.set_unit_name(self.unit_name) cloud_init.set_juju_home(self._container_juju_home) return cloud_init @inlineCallbacks def _get_container(self, machine_id, cloud_init): container = LXCContainer( self.container_name, cloud_init=cloud_init, series=self._juju_series) if not container.is_constructed(): log.info( "Creating container %s...", self.unit_path_name) yield container.create() log.info("Created container %s", self.container_name) directory = container.rootfs returnValue((container, directory)) @inlineCallbacks def start(self, env_id, machine_id, zookeeper_hosts, bundle): """Start the unit. Creates and starts an lxc container for the unit. """ # Build a template container that can be cloned in deploy # we leave the loosely initialized self.container in place for # the class as that's all we need for methods other than start.
cloud_init = self._get_cloud_init(zookeeper_hosts) self.container, self.directory = yield self._get_container( machine_id, cloud_init) # Create state directories for unit in the container self.setup_directories() # Extract the charm bundle charm_path = os.path.join( self.directory, "var", "lib", "juju", "units", self.unit_path_name, "charm") bundle.extract_to(charm_path) log.debug("Charm extracted into container") # Create symlinks on the host for easier access to the unit log files unit_log_path_host = os.path.join( self.juju_home, "units", self.unit_path_name, "unit.log") if not os.path.lexists(unit_log_path_host): os.symlink( os.path.join(self.directory, "var", "log", "juju", "unit-%s.log" % self.unit_path_name), unit_log_path_host) unit_output_path_host = os.path.join( self.juju_home, "units", self.unit_path_name, "output.log") if not os.path.lexists(unit_output_path_host): os.symlink( os.path.join(self.directory, "var", "log", "juju", "unit-%s-output.log" % self.unit_path_name), unit_output_path_host) # Debug log for the container container_log_path = os.path.join( self.juju_home, "units", self.unit_path_name, "container.log") self.container.debug_log = container_log_path log.debug("Starting container...") yield self.container.run() log.info("Started container for %s", self.unit_name) @inlineCallbacks def destroy(self): """Destroy the unit container.""" log.debug("Destroying container...") yield self.container.destroy() log.info("Destroyed container for %s", self.unit_name) @inlineCallbacks def is_running(self): """Is the unit container running?""" # TODO: container running may not imply agent running. # query zookeeper for the unit agent presence node? if not self.container: returnValue(False) container_map = yield get_containers( prefix=self.container.container_name) returnValue(container_map.get(self.container.container_name, False)) juju-0.7.orig/juju/machine/tests/__init__.py0000644000000000000000000000000212135220114017235 0ustar 00000000000000# juju-0.7.orig/juju/machine/tests/data/0000755000000000000000000000000012135220114016045 5ustar 00000000000000juju-0.7.orig/juju/machine/tests/test_constraints.py0000644000000000000000000003543712135220114021130 0ustar 00000000000000import operator from juju.errors import ConstraintError from juju.lib.testing import TestCase from juju.machine.constraints import Constraints, ConstraintSet # These objects exist for the convenience of other test files dummy_cs = ConstraintSet("dummy") dummy_cs.register_generics([]) dummy_constraints = dummy_cs.parse([]) series_constraints = dummy_constraints.with_series("series") generic_defaults = { "arch": "amd64", "cpu": 1, "mem": 512, "ubuntu-series": None, "provider-type": None} dummy_defaults = dict(generic_defaults, **{ "provider-type": "dummy"}) ec2_defaults = dict(generic_defaults, **{ "provider-type": "ec2", "ec2-zone": None, "instance-type": None}) orchestra_defaults = { "provider-type": "orchestra", "ubuntu-series": None, "orchestra-classes": None} all_providers = ["dummy", "ec2", "orchestra"] def _raiser(exc_type): def raise_(s): raise exc_type(s) return raise_ class ConstraintsTestCase(TestCase): def assert_error(self, message, *raises_args): e = self.assertRaises(ConstraintError, *raises_args) self.assertEquals(str(e), message) def assert_roundtrip_equal(self, cs, constraints, expected): self.assertEquals(dict(constraints), expected) self.assertEquals(constraints, expected) self.assertEquals(dict(cs.load(constraints.data)), expected) self.assertEquals(cs.load(constraints.data), expected) class 
ConstraintsTest(ConstraintsTestCase): def test_equality(self): self.assert_roundtrip_equal( dummy_cs, dummy_constraints, dummy_defaults) def test_complete(self): incomplete_constraints = dummy_cs.parse([]) complete_constraints = incomplete_constraints.with_series("wandering") self.assertTrue(complete_constraints.complete) def assert_invalid(self, message, *constraint_strs): self.assert_error( message, dummy_cs.parse, constraint_strs) def test_invalid_input(self): """Reject nonsense constraints""" self.assert_invalid( "Could not interpret 'BLAH' constraint: need more than 1 value to " "unpack", "BLAH") self.assert_invalid( "Unknown constraint: 'foo'", "foo=", "bar=") def test_invalid_constraints(self): """Reject nonsensical constraint values""" self.assert_invalid( "Bad 'arch' constraint 'leg': unknown architecture", "arch=leg") self.assert_invalid( "Bad 'cpu' constraint '-1': must be non-negative", "cpu=-1") self.assert_invalid( "Bad 'cpu' constraint 'fish': could not convert string to float: " "fish", "cpu=fish") self.assert_invalid( "Bad 'mem' constraint '-1': must be non-negative", "mem=-1") self.assert_invalid( "Bad 'mem' constraint '4P': invalid literal for float(): 4P", "mem=4P") def test_hidden_constraints(self): """Reject attempts to explicitly specify computed constraints""" self.assert_invalid( "Cannot set computed constraint: 'ubuntu-series'", "ubuntu-series=cheesy") self.assert_invalid( "Cannot set computed constraint: 'provider-type'", "provider-type=dummy") class ConstraintsUpdateTest(ConstraintsTestCase): def assert_constraints(self, strss, expected): constraints = dummy_cs.parse(strss[0]) for strs in strss[1:]: constraints.update(dummy_cs.parse(strs)) expected = dict(dummy_defaults, **expected) self.assert_roundtrip_equal(dummy_cs, constraints, expected) def test_constraints(self): """Sane constraints dicts are generated for unknown environments""" self.assert_constraints([[]], {}) self.assert_constraints([["cpu=", "mem="]], {}) self.assert_constraints([["arch=arm"]], {"arch": "arm"}) self.assert_constraints([["arch=arm64"]], {"arch": "arm64"}) self.assert_constraints([["cpu=0.1"]], {"cpu": 0.1}) self.assert_constraints([["mem=128"]], {"mem": 128}) self.assert_constraints([["cpu=0"]], {"cpu": 0}) self.assert_constraints([["mem=0"]], {"mem": 0}) self.assert_constraints( [["arch=amd64", "cpu=6", "mem=1.5G"]], {"arch": "amd64", "cpu": 6, "mem": 1536}) def test_overwriting_basic(self): """Later values shadow earlier values""" self.assert_constraints( [["cpu=4", "mem=512"], ["arch=i386", "mem=1G"]], {"arch": "i386", "cpu": 4, "mem": 1024}) def test_reset(self): """Empty string resets to juju default""" self.assert_constraints( [["arch=arm", "cpu=4", "mem=1024"], ["arch=", "cpu=", "mem="]], {"arch": "amd64", "cpu": 1, "mem": 512}) self.assert_constraints( [["arch=", "cpu=", "mem="], ["arch=arm", "cpu=4", "mem=1024"]], {"arch": "arm", "cpu": 4, "mem": 1024}) class ConstraintsFulfilmentTest(ConstraintsTestCase): def assert_match(self, c1, c2, expected): self.assertEquals(c1.can_satisfy(c2), expected) self.assertEquals(c2.can_satisfy(c1), expected) def test_fulfil_completeness(self): """ can_satisfy needs to be called on and with complete Constraints~s to have any chance of working. 
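For instance (a sketch, consistent with test_complete above):

    dummy_cs.parse([]).complete                   # False: no series yet
    dummy_cs.parse([]).with_series("x").complete  # True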
""" good = Constraints( dummy_cs, {"provider-type": "dummy", "ubuntu-series": "x"}) self.assert_match(good, good, True) bad = [Constraints(dummy_cs, {}), Constraints(dummy_cs, {"provider-type": "dummy"}), Constraints(dummy_cs, {"ubuntu-series": "x"})] for i, bad1 in enumerate(bad): self.assert_match(bad1, good, False) for bad2 in bad[i:]: self.assert_match(bad1, bad2, False) def test_fulfil_matches(self): other_cs = ConstraintSet("other") other_cs.register_generics([]) other_constraints = other_cs.parse([]) instances = ( dummy_constraints.with_series("x"), dummy_constraints.with_series("y"), other_constraints.with_series("x"), other_constraints.with_series("y")) for i, c1 in enumerate(instances): self.assert_match(c1, c1, True) for c2 in instances[i + 1:]: self.assert_match(c1, c2, False) def assert_can_satisfy( self, machine_strs, unit_strs, expected): machine = dummy_cs.parse(machine_strs) machine = machine.with_series("shiny") unit = dummy_cs.parse(unit_strs) unit = unit.with_series("shiny") self.assertEquals(machine.can_satisfy(unit), expected) def test_can_satisfy(self): self.assert_can_satisfy([], [], True) self.assert_can_satisfy(["arch=arm"], [], False) self.assert_can_satisfy(["arch=amd64"], [], True) self.assert_can_satisfy([], ["arch=arm"], False) self.assert_can_satisfy(["arch=i386"], ["arch=arm"], False) self.assert_can_satisfy(["arch=arm"], ["arch=amd64"], False) self.assert_can_satisfy(["arch=amd64"], ["arch=amd64"], True) self.assert_can_satisfy(["arch=i386"], ["arch=any"], True) self.assert_can_satisfy(["arch=arm"], ["arch=any"], True) self.assert_can_satisfy(["arch=amd64"], ["arch=any"], True) self.assert_can_satisfy(["arch=any"], ["arch=any"], True) self.assert_can_satisfy(["arch=any"], ["arch=i386"], False) self.assert_can_satisfy(["arch=any"], ["arch=amd64"], False) self.assert_can_satisfy(["arch=any"], ["arch=arm"], False) self.assert_can_satisfy(["cpu=64"], [], True) self.assert_can_satisfy([], ["cpu=64"], False) self.assert_can_satisfy(["cpu=64"], ["cpu=32"], True) self.assert_can_satisfy(["cpu=32"], ["cpu=64"], False) self.assert_can_satisfy(["cpu=64"], ["cpu=64"], True) self.assert_can_satisfy(["cpu=0.01"], ["cpu=any"], True) self.assert_can_satisfy(["cpu=9999"], ["cpu=any"], True) self.assert_can_satisfy(["cpu=any"], ["cpu=any"], True) self.assert_can_satisfy(["cpu=any"], ["cpu=0.01"], False) self.assert_can_satisfy(["cpu=any"], ["cpu=9999"], False) self.assert_can_satisfy(["mem=8G"], [], True) self.assert_can_satisfy([], ["mem=8G"], False) self.assert_can_satisfy(["mem=8G"], ["mem=4G"], True) self.assert_can_satisfy(["mem=4G"], ["mem=8G"], False) self.assert_can_satisfy(["mem=8G"], ["mem=8G"], True) self.assert_can_satisfy(["mem=2M"], ["mem=any"], True) self.assert_can_satisfy(["mem=256T"], ["mem=any"], True) self.assert_can_satisfy(["mem=any"], ["mem=any"], True) self.assert_can_satisfy(["mem=any"], ["mem=2M"], False) self.assert_can_satisfy(["mem=any"], ["mem=256T"], False) class ConstraintSetTest(TestCase): def test_unregistered_name(self): cs = ConstraintSet("provider") cs.register("bar") e = self.assertRaises(ConstraintError, cs.parse, ["bar=2", "baz=3"]) self.assertEquals(str(e), "Unknown constraint: 'baz'") def test_register_invisible(self): cs = ConstraintSet("provider") cs.register("foo", visible=False) e = self.assertRaises(ConstraintError, cs.parse, ["foo=bar"]) self.assertEquals(str(e), "Cannot set computed constraint: 'foo'") def test_register_comparer(self): cs = ConstraintSet("provider") cs.register("foo", comparer=operator.ne) c1 = 
cs.parse(["foo=bar"]).with_series("series") c2 = cs.parse(["foo=bar"]).with_series("series") self.assertFalse(c1.can_satisfy(c2)) self.assertFalse(c2.can_satisfy(c1)) c3 = cs.parse(["foo=baz"]).with_series("series") self.assertTrue(c1.can_satisfy(c3)) self.assertTrue(c3.can_satisfy(c1)) def test_register_default_and_converter(self): cs = ConstraintSet("provider") cs.register("foo", default="star", converter=lambda s: "death-" + s) c1 = cs.parse([]) self.assertEquals(c1["foo"], "death-star") c1 = cs.parse(["foo=clock"]) self.assertEquals(c1["foo"], "death-clock") def test_convert_wraps_ValueError(self): cs = ConstraintSet("provider") cs.register("foo", converter=_raiser(ValueError)) cs.register("bar", converter=_raiser(KeyError)) self.assertRaises(ConstraintError, cs.parse, ["foo=1"]) self.assertRaises(KeyError, cs.parse, ["bar=1"]) def test_register_conflicts(self): cs = ConstraintSet("provider") cs.register("foo") cs.register("bar") cs.register("baz") cs.register("qux") cs.parse(["foo=1", "bar=2", "baz=3", "qux=4"]) def assert_ambiguous(strs): e = self.assertRaises(ConstraintError, cs.parse, strs) self.assertTrue(str(e).startswith("Ambiguous constraints")) cs.register_conflicts(["foo"], ["bar", "baz", "qux"]) assert_ambiguous(["foo=1", "bar=2"]) assert_ambiguous(["foo=1", "baz=3"]) assert_ambiguous(["foo=1", "qux=4"]) cs.parse(["foo=1"]) cs.parse(["bar=2", "baz=3", "qux=4"]) cs.register_conflicts(["bar", "baz"], ["qux"]) assert_ambiguous(["bar=2", "qux=4"]) assert_ambiguous(["baz=3", "qux=4"]) cs.parse(["foo=1"]) cs.parse(["bar=2", "baz=3"]) cs.parse(["qux=4"]) def test_register_generics_no_instance_types(self): cs = ConstraintSet("provider") cs.register_generics([]) c1 = cs.parse([]) self.assertEquals(c1["arch"], "amd64") self.assertEquals(c1["cpu"], 1.0) self.assertEquals(c1["mem"], 512.0) self.assertFalse("instance-type" in c1) c2 = cs.parse(["arch=any", "cpu=0", "mem=8G"]) self.assertEquals(c2["arch"], None) self.assertEquals(c2["cpu"], 0.0) self.assertEquals(c2["mem"], 8192.0) self.assertFalse("instance-type" in c2) def test_register_generics_with_instance_types(self): cs = ConstraintSet("provider") cs.register_generics(["a1.big", "c7.peculiar"]) c1 = cs.parse([]) self.assertEquals(c1["arch"], "amd64") self.assertEquals(c1["cpu"], 1.0) self.assertEquals(c1["mem"], 512.0) self.assertEquals(c1["instance-type"], None) c2 = cs.parse(["arch=any", "cpu=0", "mem=8G"]) self.assertEquals(c2["arch"], None) self.assertEquals(c2["cpu"], 0.0) self.assertEquals(c2["mem"], 8192.0) self.assertEquals(c2["instance-type"], None) c3 = cs.parse(["instance-type=c7.peculiar", "arch=i386"]) self.assertEquals(c3["arch"], "i386") self.assertEquals(c3["cpu"], None) self.assertEquals(c3["mem"], None) self.assertEquals(c3["instance-type"], "c7.peculiar") def assert_ambiguous(strs): e = self.assertRaises(ConstraintError, cs.parse, strs) self.assertTrue(str(e).startswith("Ambiguous constraints")) assert_ambiguous(["cpu=1", "instance-type=c7.peculiar"]) assert_ambiguous(["mem=1024", "instance-type=c7.peculiar"]) c4 = cs.parse([]) c4.update(c2) self.assertEquals(c4["arch"], None) self.assertEquals(c4["cpu"], 0.0) self.assertEquals(c4["mem"], 8192.0) self.assertEquals(c4["instance-type"], None) c5 = cs.parse(["instance-type=a1.big"]) c5.update(cs.parse(["arch=i386"])) self.assertEquals(c5["arch"], "i386") self.assertEquals(c5["cpu"], None) self.assertEquals(c5["mem"], None) self.assertEquals(c5["instance-type"], "a1.big") c6 = cs.parse(["instance-type=a1.big"]) c6.update(cs.parse(["cpu=20"])) 
self.assertEquals(c6["arch"], "amd64") self.assertEquals(c6["cpu"], 20.0) self.assertEquals(c6["mem"], None) self.assertEquals(c6["instance-type"], None) c7 = cs.parse(["instance-type="]) self.assertEquals(c7["arch"], "amd64") self.assertEquals(c7["cpu"], 1.0) self.assertEquals(c7["mem"], 512.0) self.assertEquals(c7["instance-type"], None) c8 = cs.parse(["instance-type=any"]) self.assertEquals(c8["arch"], "amd64") self.assertEquals(c8["cpu"], 1.0) self.assertEquals(c8["mem"], 512.0) self.assertEquals(c8["instance-type"], None) def test_load_validates_known(self): cs = ConstraintSet("provider") cs.register("foo", converter=_raiser(ValueError)) e = self.assertRaises(ConstraintError, cs.load, {"foo": "bar"}) self.assertEquals(str(e), "Bad 'foo' constraint 'bar': bar") def test_load_preserves_unknown(self): cs = ConstraintSet("provider") constraints = cs.load({"foo": "bar"}) self.assertNotIn("foo", constraints) self.assertEquals(constraints.data, {"foo": "bar"}) juju-0.7.orig/juju/machine/tests/test_machine.py0000644000000000000000000000137012135220114020152 0ustar 00000000000000from juju.lib.testing import TestCase from juju.machine import ProviderMachine class ProviderMachineTest(TestCase): def test_minimal(self): machine = ProviderMachine("i-abc") self.assertEqual(machine.instance_id, "i-abc") self.assertEqual(machine.dns_name, None) self.assertEqual(machine.private_dns_name, None) self.assertEqual(machine.state, "unknown") def test_all_attrs(self): machine = ProviderMachine( "i-abc", "xe.example.com", "foo.local", "borken") self.assertEqual(machine.instance_id, "i-abc") self.assertEqual(machine.dns_name, "xe.example.com") self.assertEqual(machine.private_dns_name, "foo.local") self.assertEqual(machine.state, "borken") juju-0.7.orig/juju/machine/tests/test_unit_deployment.py0000644000000000000000000003657312135220114022002 0ustar 00000000000000""" Service Unit Deployment unit tests """ import logging import os import subprocess import inspect from twisted.internet.defer import inlineCallbacks, succeed from juju.charm import get_charm_from_path from juju.charm.tests.test_repository import RepositoryTestBase from juju.lib import serializer from juju.lib.lxc import LXCContainer from juju.lib.lxc.tests.test_lxc import uses_sudo from juju.lib.mocker import ANY, KWARGS, MATCH from juju.lib.upstart import UpstartService from juju.machine.unit import UnitMachineDeployment, UnitContainerDeployment from juju.machine.errors import UnitDeploymentError from juju.machine import tests from juju.tests.common import get_test_zookeeper_address from juju.providers.common.cloudinit import CloudInit DATA_DIR = os.path.join(os.path.dirname(inspect.getabsfile(tests)), "data") class UnitMachineDeploymentTest(RepositoryTestBase): def setUp(self): super(UnitMachineDeploymentTest, self).setUp() self.charm = get_charm_from_path(self.sample_dir1) self.bundle = self.charm.as_bundle() self.juju_directory = self.makeDir() self.units_directory = os.path.join(self.juju_directory, "units") os.mkdir(self.units_directory) self.unit_name = "wordpress/0" self.rootfs = self.makeDir() self.init_dir = os.path.join(self.rootfs, "etc", "init") os.makedirs(self.init_dir) self.real_init_dir = self.patch( UpstartService, "init_dir", self.init_dir) self.deployment = UnitMachineDeployment( self.unit_name, self.juju_directory) self.assertEqual( self.deployment.unit_agent_module, "juju.agents.unit") self.deployment.unit_agent_module = "juju.agents.dummy" def setup_mock(self): self.check_call = self.mocker.replace("subprocess.check_call") 
self.getProcessOutput = self.mocker.replace( "twisted.internet.utils.getProcessOutput") def mock_is_running(self, running): self.getProcessOutput("/sbin/status", ["juju-wordpress-0"]) if running: self.mocker.result(succeed( "juju-wordpress-0 start/running, process 12345")) else: self.mocker.result(succeed("juju-wordpress-0 stop/waiting")) def _without_sudo(self, args, **_): self.assertEquals(args[0], "sudo") return subprocess.call(args[1:]) def mock_install(self): self.check_call(ANY, KWARGS) # cp to init dir self.mocker.call(self._without_sudo) self.check_call(ANY, KWARGS) # chmod 644 self.mocker.call(self._without_sudo) def mock_start(self): self.check_call(("sudo", "/sbin/start", "juju-wordpress-0"), KWARGS) self.mocker.result(0) for _ in range(5): self.mock_is_running(True) def mock_destroy(self): self.check_call(("sudo", "/sbin/stop", "juju-wordpress-0"), KWARGS) self.mocker.result(0) self.check_call(ANY, KWARGS) # rm from init dir self.mocker.call(self._without_sudo) def assert_pid_running(self, pid, expect): self.assertEquals(os.path.exists("/proc/%s" % pid), expect) def test_unit_name_with_path_manipulation_raises_assertion(self): self.assertRaises( AssertionError, UnitMachineDeployment, "../../etc/password/zebra/0", self.units_directory) def test_unit_directory(self): self.assertEqual( self.deployment.directory, os.path.join(self.units_directory, self.unit_name.replace("/", "-"))) def test_service_unit_start(self): """ Starting a service unit will result in a unit workspace being created if it does not exist and a running service unit agent. """ self.setup_mock() self.mock_install() self.mock_is_running(False) self.mock_start() self.mocker.replay() d = self.deployment.start( "snowflake", "123", get_test_zookeeper_address(), self.bundle) def verify_upstart(_): conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") with open(conf_path) as f: lines = f.readlines() env = [] for line in lines: if line.startswith("env "): env.append(line[4:-1].split("=", 1)) if line.startswith("exec "): exec_ = line[5:-1] env = dict((k, v.strip('"')) for (k, v) in env) env.pop("PYTHONPATH") self.assertEquals(env, { "JUJU_HOME": self.juju_directory, "JUJU_UNIT_NAME": self.unit_name, "JUJU_ZOOKEEPER": get_test_zookeeper_address(), "JUJU_ENV_UUID": "snowflake", "JUJU_MACHINE_ID": "123"}) log_file = os.path.join( self.deployment.directory, "charm.log") command = " ".join([ "/usr/bin/python", "-m", "juju.agents.dummy", "--nodaemon", "--logfile", log_file, "--session-file", "/var/run/juju/unit-wordpress-0-agent.zksession", ">> /tmp/juju-wordpress-0.output 2>&1"]) self.assertEquals(exec_, command) d.addCallback(verify_upstart) return d @inlineCallbacks def test_service_unit_destroy(self): """ Forcibly stop a unit, destroy any directories associated with it on the machine, and kill the unit agent process. """ self.setup_mock() self.mock_install() self.mock_is_running(False) self.mock_start() self.mock_is_running(True) self.mock_destroy() self.mocker.replay() yield self.deployment.start( "snowflake", "0", get_test_zookeeper_address(), self.bundle) yield self.deployment.destroy() self.assertFalse(os.path.exists(self.deployment.directory)) conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") self.assertFalse(os.path.exists(conf_path)) @inlineCallbacks def test_service_unit_destroy_undeployed(self): """ If the unit has not been deployed, nothing happens.
""" yield self.deployment.destroy() self.assertFalse(os.path.exists(self.deployment.directory)) @inlineCallbacks def test_service_unit_destroy_not_running(self): """ If the unit is not running, then destroy will just remove its directory. """ self.setup_mock() self.mock_install() self.mock_is_running(False) self.mock_start() self.mock_is_running(False) self.check_call(ANY, KWARGS) # rm from init dir self.mocker.call(self._without_sudo) self.mocker.replay() yield self.deployment.start( "snowflake", "0", get_test_zookeeper_address(), self.bundle) yield self.deployment.destroy() self.assertFalse(os.path.exists(self.deployment.directory)) conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") self.assertFalse(os.path.exists(conf_path)) def test_unpack_charm(self): """ The deployment unpacks a charm bundle into the unit workspace. """ self.deployment.unpack_charm(self.bundle) unit_path = os.path.join( self.units_directory, self.unit_name.replace("/", "-")) self.assertTrue(os.path.exists(unit_path)) charm_path = os.path.join(unit_path, "charm") self.assertTrue(os.path.exists(charm_path)) charm = get_charm_from_path(charm_path) self.assertEqual( charm.get_revision(), self.charm.get_revision()) def test_unpack_charm_exception_invalid_charm(self): """ If the charm bundle is corrupted or invalid a deployment specific error is raised. """ error = self.assertRaises( UnitDeploymentError, self.deployment.unpack_charm, self.charm) self.assertEquals( str(error), "Invalid charm for deployment: %s" % self.charm.path) @inlineCallbacks def test_is_running_not_installed(self): """ If there is no conf file the service unit is not running. """ self.assertEqual((yield self.deployment.is_running()), False) @inlineCallbacks def test_is_running_not_running(self): """ If the conf file exists, but job not running, unit not running """ conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") with open(conf_path, "w") as f: f.write("blah") self.setup_mock() self.mock_is_running(False) self.mocker.replay() self.assertEqual((yield self.deployment.is_running()), False) @inlineCallbacks def test_is_running_success(self): """ Check running job. 
""" conf_path = os.path.join(self.init_dir, "juju-wordpress-0.conf") with open(conf_path, "w") as f: f.write("blah") self.setup_mock() self.mock_is_running(True) self.mocker.replay() self.assertEqual((yield self.deployment.is_running()), True) @uses_sudo @inlineCallbacks def test_run_actual_process(self): # "unpatch" to use real /etc/init self.patch(UpstartService, "init_dir", self.real_init_dir) yield self.deployment.start( "snowflake", "0", get_test_zookeeper_address(), self.bundle) old_pid = yield self.deployment.get_pid() self.assert_pid_running(old_pid, True) # Give the job a chance to fall over and be restarted (if the # pid doesn't change, that hasn't hapened) yield self.sleep(0.1) self.assertEquals((yield self.deployment.get_pid()), old_pid) self.assert_pid_running(old_pid, True) # Kick the job over ourselves; check it comes back os.system("sudo kill -9 %s" % old_pid) yield self.sleep(0.1) self.assert_pid_running(old_pid, False) new_pid = yield self.deployment.get_pid() self.assertNotEquals(new_pid, old_pid) self.assert_pid_running(new_pid, True) yield self.deployment.destroy() self.assertEquals((yield self.deployment.get_pid()), None) self.assert_pid_running(new_pid, False) @uses_sudo @inlineCallbacks def test_fail_to_run_actual_process(self): self.deployment.unit_agent_module = "haha.disregard.that" self.patch(UpstartService, "init_dir", self.real_init_dir) d = self.deployment.start( "snowflake", "0", get_test_zookeeper_address(), self.bundle) e = yield self.assertFailure(d, UnitDeploymentError) self.assertTrue(str(e).startswith( "Failed to start job juju-wordpress-0; got output:\n")) self.assertIn("No module named haha", str(e)) yield self.deployment.destroy() class UnitContainerDeploymentTest(RepositoryTestBase): unit_name = "riak/0" @inlineCallbacks def setUp(self): yield super(UnitContainerDeploymentTest, self).setUp() self.juju_home = self.makeDir() # Setup unit namespace environ = dict(os.environ) environ["JUJU_UNIT_NS"] = "ns1" environ["JUJU_SERIES"] = "precise" self.change_environment(**environ) self.unit_deploy = UnitContainerDeployment( self.unit_name, self.juju_home) self.charm = get_charm_from_path(self.sample_dir1) self.bundle = self.charm.as_bundle() self.output = self.capture_logging("unit.deploy", level=logging.DEBUG) def get_normalized(self, output): # strip python path for comparison return "\n".join(filter(None, output.split("\n"))[:-2]) def test_get_container_name(self): self.assertEqual( "ns1-riak-0", self.unit_deploy.container_name) @inlineCallbacks def test_destroy(self): mock_container = self.mocker.patch(self.unit_deploy.container) mock_container.destroy() self.mocker.replay() yield self.unit_deploy.destroy() output = self.output.getvalue() self.assertIn("Destroying container...", output) self.assertIn("Destroyed container for riak/0", output) def test_origin_usage(self): """The machine agent is started with a origin environment variable """ environ = dict(os.environ) environ["JUJU_ORIGIN"] = "lp:~juju/foobar" self.change_environment(**environ) unit_deploy = UnitContainerDeployment( self.unit_name, self.juju_home) cloud_init = unit_deploy._get_cloud_init('') self.assertEquals(cloud_init._origin_url, "lp:~juju/foobar") @inlineCallbacks def test_start(self): container = LXCContainer(self.unit_name, None, "precise", None) rootfs = self.makeDir() env = dict(os.environ) env["JUJU_PUBLIC_KEY"] = "dsa ..." 
self.change_environment(**env) mock_deploy = self.mocker.patch(self.unit_deploy) # this minimally validates that we are also called with the # expect public key def is_cloudinit(obj): return isinstance(obj, CloudInit) mock_deploy._get_container("0", MATCH(is_cloudinit)) self.mocker.result((container, rootfs)) mock_container = self.mocker.patch(container) mock_container.run() self.mocker.replay() self.unit_deploy.directory = rootfs os.makedirs(os.path.join(rootfs, "etc", "init")) yield self.unit_deploy.start( "snowflake", "0", "127.0.1.1:2181", self.bundle) # Verify the symlinks exist self.assertTrue(os.path.lexists(os.path.join( self.unit_deploy.juju_home, "units", self.unit_deploy.unit_path_name, "unit.log"))) self.assertTrue(os.path.lexists(os.path.join( self.unit_deploy.juju_home, "units", self.unit_deploy.unit_path_name, "output.log"))) # Verify the charm is on disk. self.assertTrue(os.path.exists(os.path.join( self.unit_deploy.directory, "var", "lib", "juju", "units", self.unit_deploy.unit_path_name, "charm", "metadata.yaml"))) # Verify the directory structure in the unit. self.assertTrue(os.path.exists(os.path.join( self.unit_deploy.directory, "var", "lib", "juju", "state"))) self.assertTrue(os.path.exists(os.path.join( self.unit_deploy.directory, "var", "log", "juju"))) # Verify log output output = self.output.getvalue() self.assertIn("Charm extracted into container", output) self.assertIn("Started container for %s" % self.unit_deploy.unit_name, output) @inlineCallbacks def test_get_container(self): rootfs = self.makeDir() cloud_init = self.unit_deploy._get_cloud_init(zookeepers='localhost') cloud_init.set_environment_id('snowflake') expected = serializer.load( open(os.path.join(DATA_DIR, 'test_get_container'))) rendered = serializer.load(cloud_init.render()) self.assertEquals(rendered, expected) mock_container = self.mocker.patch(LXCContainer) mock_container.is_constructed() self.mocker.result(False) mock_container.create() self.mocker.result(mock_container) self.mocker.replay() container, rootfs = yield self.unit_deploy._get_container( "0", cloud_init) juju-0.7.orig/juju/machine/tests/data/test_get_container0000644000000000000000000000244012135220114021650 0ustar 00000000000000#cloud-config apt_sources: - {source: 'ppa:juju/pkgs'} apt_update: true apt_upgrade: true apt_proxy: 'http://localhost:3142' machine-data: {juju-provider-type: null, juju-zookeeper-hosts: 'localhost:2181', machine-id: null} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, 'cd /usr/lib/juju && sudo /usr/bin/bzr co --lightweight lp:juju juju', cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-riak-0.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_ENV_UUID="snowflake" env JUJU_HOME="/var/lib/juju" env JUJU_MACHINE_ID="None" env JUJU_UNIT_NAME="riak/0" env JUJU_ZOOKEEPER="localhost:2181" exec /usr/bin/python -m juju.agents.unit --nodaemon --logfile /var/log/juju/unit-riak-0.log --session-file /var/run/juju/unit-riak-0-agent.zksession >> /var/log/juju/unit-riak-0-output.log 2>&1 EOF ', /sbin/start juju-riak-0] ssh_authorized_keys: [''] juju-0.7.orig/juju/providers/__init__.py0000644000000000000000000000010612135220114016511 0ustar 00000000000000"""Contains code specific to the various machine provider backends""" 
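The test_get_container fixture above is serializer output from CloudInit.render() for a containerized unit. A minimal sketch of driving the same API directly, using the setters defined in juju/providers/common/cloudinit.py below (illustrative only: the values are placeholders and much simpler than what UnitContainerDeployment._get_cloud_init configures, so this will not reproduce the fixture exactly):

from juju.providers.common.cloudinit import CloudInit

cloud_init = CloudInit()
cloud_init.add_ssh_key("ssh-rsa AAAA...")       # render() requires at least one key
cloud_init.set_machine_id("0")
cloud_init.set_zookeeper_hosts(["localhost"])   # or set_zookeeper_machines(machines)
print cloud_init.render()                       # emits a "#cloud-config\n..." YAML document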
juju-0.7.orig/juju/providers/common/0000755000000000000000000000000012135220114015673 5ustar 00000000000000juju-0.7.orig/juju/providers/dummy.py0000644000000000000000000001621612135220114016116 0ustar 00000000000000import logging import os import tempfile from twisted.internet.defer import inlineCallbacks, returnValue, succeed, fail #from txzookeeper import ZookeeperClient from txzookeeper.managed import ManagedClient from juju.errors import ( EnvironmentNotFound, MachinesNotFound, ProviderError) from juju.machine import ProviderMachine from juju.machine.constraints import ConstraintSet from juju.state.placement import UNASSIGNED_POLICY from juju.providers.common.files import FileStorage log = logging.getLogger("juju.providers") class DummyMachine(ProviderMachine): """Provider machine implementation specific to the dummy provider.""" def __init__(self, *args, **kw): super(DummyMachine, self).__init__(*args, **kw) self._opened_ports = set() class MachineProvider(object): def __init__(self, environment_name, config): self.environment_name = environment_name self.config = config self._machines = [] self._state = None self._storage = None def get_legacy_config_keys(self): return set(("some-legacy-key",)) & set(self.config) def get_placement_policy(self): """Get the unit placement policy for the provider.""" return self.config.get("placement", UNASSIGNED_POLICY) def get_constraint_set(self): cs = ConstraintSet(self.provider_type) cs.register_generics([]) return succeed(cs) @property def provider_type(self): return "dummy" def connect(self, share=False): """Connect to the zookeeper running in the machine provider. @param share: Requests sharing of the connection with other clients attempting to connect to the same provider, if that's feasible. Unused for the dummy provider. """ return ManagedClient( os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181"), session_timeout=1000).connect() def get_machines(self, instance_ids=()): """List all the machines running in the provider.""" if not instance_ids: return succeed(self._machines[:]) machines_by_id = dict(((m.instance_id, m) for m in self._machines)) machines = [] missing_instance_ids = [] for instance_id in instance_ids: if instance_id in machines_by_id: machines.append(machines_by_id[instance_id]) else: missing_instance_ids.append(instance_id) if missing_instance_ids: return fail(MachinesNotFound(missing_instance_ids)) return succeed(machines) def start_machine(self, machine_data, master=False): """Start a machine in the provider.""" if "machine-id" not in machine_data: return fail(ProviderError( "Machine state `machine-id` required in machine_data")) dns_name = machine_data.get("dns-name") machine = DummyMachine(len(self._machines), dns_name) self._machines.append(machine) return succeed([machine]) def get_machine(self, instance_id): """Retrieve a machine by provider machine id. """ for machine in self._machines: if instance_id == machine.instance_id: return succeed(machine) return fail(MachinesNotFound([instance_id])) def bootstrap(self, constraints): """ Bootstrap juju on the machine provider. """ if self._machines: return succeed(self._machines[:1]) return self.start_machine({"machine-id": 0}) @inlineCallbacks def shutdown_machines(self, machines): """ Terminate any machine resources associated with the provider.
""" instance_ids = [m.instance_id for m in machines] machines = yield self.get_machines(instance_ids) for machine in machines: self._machines.remove(machine) returnValue(machines) def shutdown_machine(self, machine): """Terminate the given machine""" if not isinstance(machine, DummyMachine): return fail(ProviderError("Invalid machine for provider")) for m in self._machines: if m.instance_id == machine.instance_id: self._machines.remove(m) return m return fail(ProviderError("Machine not found %r" % machine)) @inlineCallbacks def destroy_environment(self): yield self.save_state({}) machines = yield self.get_machines() machines = yield self.shutdown_machines(machines) returnValue(machines) def save_state(self, state): """Save the state to the provider.""" self._state = state return succeed(None) def load_state(self): """Load the state from the provider.""" if self._state: state = self._state else: state = {} return succeed(state) def get_file_storage(self): """Retrieve the C{FileStorage} provider abstracion.""" if self._storage: return self._storage storage_path = self.config.get("storage-directory") if storage_path is None: storage_path = tempfile.mkdtemp() self._storage = FileStorage(storage_path) return self._storage def get_serialization_data(self): config = self.config.copy() # Introduce an additional variable to simulate actual # providers which may serialize additional values # from the environment or other external sources. config["dynamicduck"] = "magic" return config def open_port(self, machine, machine_id, port, protocol="tcp"): """Dummy equivalent of ec2-authorize-group""" if not isinstance(machine, DummyMachine): return fail(ProviderError("Invalid machine for provider")) machine._opened_ports.add((port, protocol)) log.debug("Opened %s/%s on provider machine %r", port, protocol, machine.instance_id) return succeed(None) def close_port(self, machine, machine_id, port, protocol="tcp"): """Dummy equivalent of ec2-revoke-group""" if not isinstance(machine, DummyMachine): return fail(ProviderError("Invalid machine for provider")) try: machine._opened_ports.remove((port, protocol)) log.debug("Closed %s/%s on provider machine %r", port, protocol, machine.instance_id) except KeyError: pass return succeed(None) def get_opened_ports(self, machine, machine_id): """Dummy equivalent of ec2-describe-group This returns the current exposed ports in the environment for this machine. This directly goes against the provider. For EC2, this would be eventually consistent. 
""" if not isinstance(machine, DummyMachine): return fail(ProviderError("Invalid machine for provider")) return succeed(machine._opened_ports) def get_zookeeper_machines(self): if self._machines: return succeed(self._machines[:1]) return fail(EnvironmentNotFound("not bootstrapped")) juju-0.7.orig/juju/providers/ec2/0000755000000000000000000000000012135220114015054 5ustar 00000000000000juju-0.7.orig/juju/providers/local/0000755000000000000000000000000012135220114015475 5ustar 00000000000000juju-0.7.orig/juju/providers/maas/0000755000000000000000000000000012135220114015324 5ustar 00000000000000juju-0.7.orig/juju/providers/openstack/0000755000000000000000000000000012135220114016372 5ustar 00000000000000juju-0.7.orig/juju/providers/openstack_s3/0000755000000000000000000000000012135220114016777 5ustar 00000000000000juju-0.7.orig/juju/providers/tests/0000755000000000000000000000000012135220114015545 5ustar 00000000000000juju-0.7.orig/juju/providers/common/__init__.py0000644000000000000000000000007512135220114020006 0ustar 00000000000000"""Contains code common to more than one machine provider""" juju-0.7.orig/juju/providers/common/base.py0000644000000000000000000001727112135220114017167 0ustar 00000000000000import copy from operator import itemgetter from twisted.internet.defer import inlineCallbacks, returnValue, succeed from juju.environment.errors import EnvironmentsConfigError from juju.machine.constraints import ConstraintSet from juju.state.placement import UNASSIGNED_POLICY from .bootstrap import Bootstrap from .connect import ZookeeperConnect from .findzookeepers import find_zookeepers from .state import SaveState, LoadState from .utils import get_user_authorized_keys class MachineProviderBase(object): """Base class supplying common functionality for MachineProviders. To write a working subclass, you will need to override the following methods: * :meth:`get_file_storage` * :meth:`start_machine` * :meth:`get_machines` * :meth:`shutdown_machines` * :meth:`open_port` * :meth:`close_port` * :meth:`get_opened_ports` You may want to override the following methods, but you should be careful to call :class:`MachineProviderBase`'s implementation (or be very sure you don't need to): * :meth:`__init__` * :meth:`get_serialization_data` * :meth:`get_legacy_config_keys` * :meth:`get_placement_policy` * :meth:`get_constraint_set` You probably shouldn't override anything else. """ def __init__(self, environment_name, config): if ("authorized-keys-path" in config and "authorized-keys" in config): raise EnvironmentsConfigError( "Environment config cannot define both authorized-keys " "and authorized-keys-path. Pick one!") self.environment_name = environment_name self.config = config def get_constraint_set(self): """Return the set of constraints that are valid for this provider.""" return succeed(ConstraintSet(self.provider_type)) def get_legacy_config_keys(self): """Return any deprecated config keys that are set.""" return set() & set(self.config) def get_placement_policy(self): """Get the unit placement policy for the provider.""" return self.config.get("placement", UNASSIGNED_POLICY) def get_serialization_data(self): """Get provider configuration suitable for serialization.""" data = copy.deepcopy(self.config) data["authorized-keys"] = get_user_authorized_keys(data) # Not relevant, on a remote system. 
data.pop("authorized-keys-path", None) return data #================================================================ # Subclasses need to implement their own versions of everything # in the following block def get_file_storage(self): """Retrieve the provider FileStorage abstraction.""" raise NotImplementedError() def start_machine(self, machine_data, master=False): """Start a machine in the provider. :param dict machine_data: desired characteristics of the new machine; it must include a "machine-id" key, and may include a "constraints" key to specify the underlying OS and hardware (where available). :param bool master: if True, machine will initialize the juju admin and run a provisioning agent, in addition to running a machine agent. """ raise NotImplementedError() def get_machines(self, instance_ids=()): """List machines running in the provider. :param list instance_ids: ids of instances you want to get. Leave empty to list every :class:`juju.machine.ProviderMachine` owned by this provider. :return: a list of :class:`juju.machine.ProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ raise NotImplementedError() def shutdown_machines(self, machines): """Terminate machines associated with this provider. :param machines: machines to shut down :type machines: list of :class:`juju.machine.ProviderMachine` :return: list of terminated :class:`juju.machine.ProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` """ raise NotImplementedError() def open_port(self, machine, machine_id, port, protocol="tcp"): """Authorizes `port` using `protocol` for `machine`.""" raise NotImplementedError() def close_port(self, machine, machine_id, port, protocol="tcp"): """Revokes `port` using `protocol` for `machine`.""" raise NotImplementedError() def get_opened_ports(self, machine, machine_id): """Returns a set of open (port, protocol) pairs for `machine`.""" raise NotImplementedError() #================================================================ # Subclasses will not generally need to override the methods in # this block def get_zookeeper_machines(self): """Find running zookeeper instances. :return: all running or starting machines configured to run zookeeper, as a list of :class:`juju.machine.ProviderMachine` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :class:`juju.errors.EnvironmentNotFound` """ return find_zookeepers(self) def connect(self, share=False): """Attempt to connect to a running zookeeper node. :param bool share: where feasible, attempt to share a connection with other clients :return: an open :class:`txzookeeper.client.ZookeeperClient` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.EnvironmentNotFound` when no zookeepers exist """ return ZookeeperConnect(self).run(share=share) def bootstrap(self, constraints): """Bootstrap an juju server in the provider.""" return Bootstrap(self, constraints).run() def get_machine(self, instance_id): """Retrieve a provider machine by instance id. :param str instance_id: :attr:`instance_id` of the :class:`juju.machine.ProviderMachine` you want. :return: the requested :class:`juju.machine.ProviderMachine` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ d = self.get_machines([instance_id]) d.addCallback(itemgetter(0)) return d def shutdown_machine(self, machine): """Terminate one machine associated with this provider. 
:param machine: :class:`juju.machine.ProviderMachine` to shut down. :return: the terminated :class:`juju.machine.ProviderMachine`. :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ d = self.shutdown_machines([machine]) d.addCallback(itemgetter(0)) return d @inlineCallbacks def destroy_environment(self): """Clear juju state and terminate all associated machines :rtype: :class:`twisted.internet.defer.Deferred` """ try: yield self.save_state({}) finally: live_machines = yield self.get_machines() killed_machines = yield self.shutdown_machines(live_machines) returnValue(killed_machines) def save_state(self, state): """Save state to the provider. :param dict state: state to persist :rtype: :class:`twisted.internet.defer.Deferred` """ return SaveState(self).run(state) def load_state(self): """Load state from the provider. :return: the current state dict, or False :rtype: :class:`twisted.internet.defer.Deferred` """ return LoadState(self).run() juju-0.7.orig/juju/providers/common/bootstrap.py0000644000000000000000000000366012135220114020267 0ustar 00000000000000from cStringIO import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from juju.errors import EnvironmentNotFound, ProviderError from .utils import log _VERIFY_PATH = "bootstrap-verify" class Bootstrap(object): """Generic bootstrap operation class.""" def __init__(self, provider, constraints): self._provider = provider self._constraints = constraints def run(self): """Get an existing zookeeper, or launch a new one. :return: a single-element list containing an appropriate :class:`juju.machine.ProviderMachine` instance, set up to run zookeeper and a provisioning agent. :rtype: :class:`twisted.internet.defer.Deferred` """ machines = self._provider.get_zookeeper_machines() machines.addCallback(self._on_machines_found) machines.addErrback(self._on_no_machines_found) return machines def _on_machines_found(self, machines): log.info("juju environment previously bootstrapped.") return machines def _on_no_machines_found(self, failure): failure.trap(EnvironmentNotFound) d = self._verify_file_storage() d.addErrback(self._cannot_write) d.addCallback(self._launch_machine) return d def _verify_file_storage(self): log.debug("Verifying writable storage") storage = self._provider.get_file_storage() return storage.put(_VERIFY_PATH, StringIO("storage is writable")) def _cannot_write(self, failure): raise ProviderError( "Bootstrap aborted because file storage is not writable: %s" % str(failure.value)) @inlineCallbacks def _launch_machine(self, unused): log.debug("Launching juju bootstrap instance.") machines = yield self._provider.start_machine( {"machine-id": "0", "constraints": self._constraints}, master=True) returnValue(machines) juju-0.7.orig/juju/providers/common/cloudinit.py0000644000000000000000000003400412135220114020240 0ustar 00000000000000from base64 import b64encode from subprocess import Popen, PIPE from juju.errors import CloudInitError from juju.lib.upstart import UpstartService from juju.lib import serializer from juju.lib.zk import ( CLIENT_SESSION_TIMEOUT, TICK_TIME, MAX_SESSION_TIMEOUT) from juju.providers.common.utils import format_cloud_init from juju.state.auth import make_identity import juju import os DISTRO = "distro" PPA = "ppa" BRANCH = "branch" PROPOSED = "proposed" def _branch_install_scripts(branch): return [ "sudo apt-get install -y python-txzookeeper", "sudo mkdir -p /usr/lib/juju", "cd /usr/lib/juju && sudo /usr/bin/bzr co --lightweight %s juju" % branch, 
"cd /usr/lib/juju/juju && sudo python setup.py develop"] def _install_scripts(origin, origin_url): scripts = [] if origin == BRANCH: scripts.extend(_branch_install_scripts(origin_url)) scripts.extend([ "sudo mkdir -p /var/lib/juju", "sudo mkdir -p /var/log/juju"]) return scripts def _zookeeper_scripts(instance_id, secret, constraints, provider_type): return [ 'sed -i -e s/tickTime=2000/tickTime=%d/g /etc/zookeeper/conf/zoo.cfg' % ( TICK_TIME), 'echo "minSessionTimeout=%d" >> /etc/zookeeper/conf/zoo.cfg' % ( CLIENT_SESSION_TIMEOUT), 'echo "maxSessionTimeout=%d" >> /etc/zookeeper/conf/zoo.cfg' % ( MAX_SESSION_TIMEOUT), "juju-admin initialize" " --instance-id=%s" " --admin-identity=%s" " --constraints-data=%s" " --provider-type=%s" % (instance_id, make_identity("admin:%s" % secret), b64encode(serializer.dump(constraints.data)), provider_type)] def _machine_scripts(machine_id, zookeeper_hosts): service = UpstartService("juju-machine-agent") service.set_description("Juju machine agent") service.set_environ( {"JUJU_MACHINE_ID": machine_id, "JUJU_ZOOKEEPER": zookeeper_hosts}) service.set_command( "python -m juju.agents.machine --nodaemon " "--logfile /var/log/juju/machine-agent.log " "--session-file /var/run/juju/machine-agent.zksession") return service.get_cloud_init_commands() def _unit_scripts(machine_id, unit_name, zookeeper_hosts, juju_home, env_id): unit_path_name = unit_name.replace('/', '-') service_name = "juju-%s" % unit_path_name service = UpstartService(service_name) service.set_description( "Juju unit agent for %s" % unit_name) service.set_environ( {"JUJU_MACHINE_ID": str(machine_id), "JUJU_UNIT_NAME": unit_name, "JUJU_HOME": juju_home, "JUJU_ENV_UUID": env_id, "JUJU_ZOOKEEPER": zookeeper_hosts}) service.set_output_path( "/var/log/juju/unit-%s-output.log" % unit_path_name) service.set_command(" ".join( ["/usr/bin/python", "-m", "juju.agents.unit", "--nodaemon", "--logfile", "/var/log/juju/unit-%s.log" % unit_path_name, "--session-file", "/var/run/juju/unit-%s-agent.zksession" % unit_path_name])) return service.get_cloud_init_commands() def _provision_scripts(zookeeper_hosts): service = UpstartService("juju-provision-agent") service.set_description("Juju provisioning agent") env = {"JUJU_ZOOKEEPER": zookeeper_hosts} if os.environ.get("JUJU_TESTING", "") == "fast": env['JUJU_TESTING'] = os.environ['JUJU_TESTING'] service.set_environ(env) service.set_command( "python -m juju.agents.provision --nodaemon " "--logfile /var/log/juju/provision-agent.log " "--session-file /var/run/juju/provision-agent.zksession") return service.get_cloud_init_commands() def _line_generator(data): for line in data.splitlines(): stripped = line.lstrip() if stripped: yield (len(line) - len(stripped), stripped) def parse_juju_origin(data): next = _line_generator(data).next try: _, line = next() if line == "N: Unable to locate package juju": return BRANCH, "lp:juju" if line != "juju:": raise StopIteration # Find installed version. while True: _, line = next() if line.startswith("Installed:"): version = line[10:].strip() if version == "(none)": return BRANCH, "lp:juju" break # Find version table. while True: _, line = next() if line.startswith("Version table:"): break # Find installed version within the table. 
while True: _, line = next() if line.startswith("*** %s " % version): break # See if one of the sources is the PPA first_indent, line = next() while True: if "http://ppa.launchpad.net/juju/pkgs/" in line: return PPA, None indent, line = next() if indent != first_indent: break # Going into a different version except StopIteration: pass return DISTRO, None def get_default_origin(): """Select the best fit for running juju on cloudinit. Used if not otherwise specified by juju-origin. """ if not juju.__file__.startswith("/usr/"): return BRANCH, "lp:juju" try: popen = Popen(["apt-cache", "policy", "juju"], stdout=PIPE) data = popen.communicate()[0] except OSError: data = "" return parse_juju_origin(data) class CloudInit(object): """Encapsulates juju-specific machine initialisation. For more information on the mechanism used, see :func:`juju.providers.common.utils.format_cloud_init`. """ def __init__(self): self._machine_id = None self._instance_id = None self._provider_type = None self._ssh_keys = [] self._provision = False self._zookeeper = False self._zookeeper_hosts = [] self._zookeeper_secret = None self._constraints = None self._origin, self._origin_url = get_default_origin() self._unit_name = None self._juju_home = None self._apt_proxy = None self._env_id = None def add_ssh_key(self, key): """Add an SSH public key. :param key: an SSH key to allow to connect to the machine You have to set at least one SSH key. """ self._ssh_keys.append(key) def enable_bootstrap(self): """Make machine run a zookeeper and a provisioning agent.""" self._zookeeper = True self._provision = True def set_juju_source( self, branch=None, ppa=False, distro=False, proposed=False): """Set the version of juju the machine should run. :param branch: location from which to check out juju; for example, "lp:~someone/juju/some-feature-branch". :type branch: str or None :param bool ppa: if True, get the latest version of juju from its Private Package Archive. :param bool distro: if True, get the default juju version from the OS distribution. :param bool proposed: if True, get the proposed juju version from the OS distribution. :raises: :exc:`juju.errors.CloudInitError` if more or fewer than one option is specified. Note that you don't need to call this method; the juju source defaults to what is returned by `get_default_origin`. """ if len(filter(None, (branch, ppa, distro, proposed))) != 1: raise CloudInitError("Please specify one source") if branch: self._origin = BRANCH self._origin_url = branch elif ppa: self._origin = PPA self._origin_url = None elif distro: self._origin = DISTRO self._origin_url = None elif proposed: self._origin = PROPOSED self._origin_url = None def set_environment_id(self, id): """Specify the environment id. """ self._env_id = id def set_machine_id(self, id): """Specify the juju machine ID. :param str id: the desired ID. You have to set the machine ID. """ self._machine_id = id def set_unit_name(self, name): """Specify the juju unit name. :param str name: the desired unit name. This is optional; if present, a unit agent will be installed. """ self._unit_name = name def set_juju_home(self, juju_home): """Specify the juju unit home dir. :param str juju_home: the desired unit home dir. This is required if a unit name is set; it tells the unit agent where its home directory is. """ self._juju_home = juju_home def set_instance_id_accessor(self, expr): """Specify the provider-specific instance ID.
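For example, a provider can pass a shell snippet that is evaluated on the machine itself; on EC2 something like $(curl http://169.254.169.254/latest/meta-data/instance-id) would do (illustrative; the bootstrap tests in this tree simply pass the literal string "token").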
:param str expr: a snippet of shell script that will, when executed on the machine, evaluate to the machine's instance ID. You have to set the instance ID. """ self._instance_id = expr def set_provider_type(self, type_): """Specify the provider type for this machine. :param str type_: the provider type. You have to set the provider type. """ self._provider_type = type_ def set_zookeeper_machines(self, machines): """Specify master :class:`juju.machine.ProviderMachine` instances. :param machines: machines running zookeeper which already exist. :type machines: list of :class:`juju.machine.ProviderMachine` You don't have to set this, so long as the machine you are starting is itself a zookeeper instance. """ self._zookeeper_hosts = [m.private_dns_name for m in machines] def set_zookeeper_hosts(self, hosts): """Specify master zookeeper hosts by address. :param hosts: hosts running zookeeper which already exist. :type hosts: list of str This is an alternative to set_zookeeper_machines. """ self._zookeeper_hosts = hosts def set_zookeeper_secret(self, secret): """Specify the admin password for zookeeper. You only need to set this if this machine will be a zookeeper instance. """ self._zookeeper_secret = secret def set_apt_proxy(self, proxy): """Specify an apt proxy to configure. :param proxy: proxy to set for APT using Acquire:HTTP :type proxy: str """ self._apt_proxy = proxy def set_constraints(self, constraints): """Specify the initial machine's constraints. You only need to set this if this machine will be a zookeeper instance. """ self._constraints = constraints def render(self): """Get content for a cloud-init file with appropriate specifications. :rtype: str :raises: :exc:`juju.errors.CloudInitError` if there isn't enough information to create a useful cloud-init.
""" self._validate() return format_cloud_init( self._ssh_keys, packages=self._collect_packages(), repositories=self._collect_repositories(), scripts=self._collect_scripts(), data=self._collect_machine_data(), apt_proxy=self._apt_proxy) def _validate(self): missing = [] def require(attr, action): if not getattr(self, attr): missing.append(action) require("_ssh_keys", "add_ssh_key") if self._zookeeper: require("_provider_type", "set_provider_type") require("_instance_id", "set_instance_id_accessor") require("_zookeeper_secret", "set_zookeeper_secret") require("_constraints", "set_constraints") else: require("_zookeeper_hosts", "set_zookeeper_machines") if missing: raise CloudInitError("Incomplete cloud-init: you need to call %s" % ", ".join(missing)) def _join_zookeeper_hosts(self): all_hosts = self._zookeeper_hosts[:] if self._zookeeper: all_hosts.append("localhost") zks = [] for host in all_hosts: if isinstance(host, tuple) and len(host) == 2: zks.append("%s:%s" % host) else: zks.append("%s:2181" % host) return ",".join(zks) def _collect_packages(self): packages = [ "bzr", "byobu", "tmux", "python-setuptools", "python-twisted", "python-txaws", "python-zookeeper"] if self._zookeeper: packages.extend([ "default-jre-headless", "zookeeper", "zookeeperd"]) if self._origin in (DISTRO, PPA, PROPOSED): packages.append("juju") return packages def _collect_repositories(self): if self._origin == PROPOSED: return ["deb $MIRROR $RELEASE-proposed main universe"] if self._origin != DISTRO: return ["ppa:juju/pkgs"] return [] def _collect_scripts(self): scripts = _install_scripts(self._origin, self._origin_url) if self._zookeeper: scripts.extend(_zookeeper_scripts( self._instance_id, self._zookeeper_secret, self._constraints, self._provider_type)) if self._machine_id: scripts.extend(_machine_scripts( self._machine_id, self._join_zookeeper_hosts())) if self._unit_name: scripts.extend(_unit_scripts( self._machine_id, self._unit_name, self._join_zookeeper_hosts(), self._juju_home, self._env_id)) if self._provision: scripts.extend(_provision_scripts(self._join_zookeeper_hosts())) return scripts def _collect_machine_data(self): return { "machine-id": self._machine_id, "juju-provider-type": self._provider_type, "juju-zookeeper-hosts": self._join_zookeeper_hosts()} juju-0.7.orig/juju/providers/common/connect.py0000644000000000000000000000634412135220114017705 0ustar 00000000000000import random from twisted.internet.defer import inlineCallbacks, returnValue from txzookeeper.client import ConnectionTimeoutException from juju.errors import EnvironmentNotFound, EnvironmentPending, NoConnection from juju.lib.twistutils import sleep from juju.lib.zk import CLIENT_SESSION_TIMEOUT from juju.state.sshclient import SSHClient from .utils import log class ZookeeperConnect(object): def __init__(self, provider): self._provider = provider @inlineCallbacks def run(self, share=False): """Attempt to connect to a running zookeeper node, retrying as needed. :param bool share: where feasible, attempt to share a connection with other clients. :return: an open :class:`txzookeeper.client.ZookeeperClient` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.EnvironmentNotFound` when no zookeepers exist Internally this method catches all :exc:`juju.errors.EnvironmentPending`, since this exception explicitly means that a retry is feasible. TODO consider supporting a timeout for this method, instead of any such timeouts being done externally. 
""" log.info("Connecting to environment...") while True: try: client = yield self._internal_connect(share) log.info("Connected to environment.") returnValue(client) except EnvironmentPending as e: log.debug("Retrying connection: %s", e) except EnvironmentNotFound: # Expected if not bootstrapped, simply raise up raise except Exception as e: # Otherwise this is unexpected, log with some details log.exception("Cannot connect to environment: %s", e) raise @inlineCallbacks def _internal_connect(self, share): """Attempt connection to one of the ZK nodes.""" candidates = yield self._provider.get_zookeeper_machines() assigned = [machine for machine in candidates if machine.dns_name] if not assigned: yield sleep(1) # Momentarily backoff raise EnvironmentPending("No machines have assigned addresses") chosen = random.choice(assigned) log.debug("Connecting to environment using %s...", chosen.dns_name) try: client = yield SSHClient( session_timeout=CLIENT_SESSION_TIMEOUT).connect( chosen.dns_name + ":2181", timeout=30, share=share) except (NoConnection, ConnectionTimeoutException) as e: raise EnvironmentPending( "Cannot connect to environment using %s " "(perhaps still initializing): %s" % ( chosen.dns_name, str(e))) yield self.wait_for_initialization(client) returnValue(client) @inlineCallbacks def wait_for_initialization(self, client): exists_d, watch_d = client.exists_and_watch("/initialized") exists = yield exists_d if not exists: log.debug("Environment still initializing. Will wait.") yield watch_d else: log.debug("Environment is initialized.") juju-0.7.orig/juju/providers/common/files.py0000644000000000000000000000233712135220114017354 0ustar 00000000000000""" Directory based file storage (for local and dummy). """ import os from twisted.internet.defer import fail, succeed from juju.errors import FileNotFound class FileStorage(object): def __init__(self, path): self._path = path def get(self, name): file_path = os.path.join( self._path, *filter(None, name.split("/"))) if os.path.exists(file_path): return succeed(open(file_path)) return fail(FileNotFound(file_path)) def put(self, remote_path, file_object): store_path = os.path.join( self._path, *filter(None, remote_path.split("/"))) store_path = os.path.abspath(store_path) if not store_path.startswith(self._path): return fail(AssertionError("Invalid Remote Path %s" % remote_path)) parent_store_path = os.path.dirname(store_path) if not os.path.exists(parent_store_path): os.makedirs(parent_store_path) with open(store_path, "wb") as f: f.write(file_object.read()) return succeed(True) def get_url(self, name): file_path = os.path.abspath(os.path.join( self._path, *filter(None, name.split("/")))) return "file://%s" % file_path juju-0.7.orig/juju/providers/common/findzookeepers.py0000644000000000000000000000234412135220114021277 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue from juju.errors import EnvironmentNotFound, MachinesNotFound def _require(x): if not x: raise EnvironmentNotFound("is the environment bootstrapped?") @inlineCallbacks def find_zookeepers(provider): """Find running zookeeper instances. 
:param provider: the MachineProvider in charge of the juju :return: all running or starting machines configured to run zookeeper, as a list of :class:`juju.machine.ProviderMachine` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :class:`juju.errors.EnvironmentNotFound` """ state = yield provider.load_state() _require(state) instance_ids = state.get("zookeeper-instances") _require(instance_ids) machines = [] missing_instance_ids = [] for instance_id in instance_ids: try: machine = yield provider.get_machine(instance_id) machines.append(machine) except MachinesNotFound as e: missing_instance_ids.extend(e.instance_ids) if machines: returnValue(machines) raise EnvironmentNotFound("machines are not running (%s)" % ", ".join(map(str, missing_instance_ids))) juju-0.7.orig/juju/providers/common/instance_type.py0000644000000000000000000000332712135220114021117 0ustar 00000000000000import operator from juju.errors import ProviderError from collections import namedtuple # Just a generic base instance type, with required attrs. InstanceType = namedtuple("InstanceType", "arch cpu mem") class TypeSolver(object): def __init__(self, instance_types): self._instance_types = instance_types def run(self, constraints): instance_type = constraints["instance-type"] if instance_type is not None: if not instance_type in self._instance_types: raise ProviderError( "Invalid instance type %s" % instance_type) return instance_type return self._solve(constraints) def _solve(self, constraints): possible_types = list(self._instance_types) for f in self._get_filters(constraints): possible_types = filter(f, possible_types) if not possible_types: raise ProviderError( "No instance type satisfies %s" % dict(constraints)) return sorted( possible_types, key=lambda x: (self._instance_types[x].cpu, self._instance_types[x].mem))[0] def _get_filters(self, constraints): filters = [ self._filter_func("cpu", constraints, operator.ge), self._filter_func("mem", constraints, operator.ge), self._filter_func("arch", constraints, operator.eq) ] return filters def _filter_func(self, name, constraints, op): desired = constraints[name] if not desired: return lambda _: True def f(type_): value = getattr(self._instance_types[type_], name) return op(value, desired) return f juju-0.7.orig/juju/providers/common/launch.py0000644000000000000000000001116012135220114017516 0ustar 00000000000000from twisted.internet.defer import fail, inlineCallbacks, returnValue from juju.errors import ProviderError from .cloudinit import CloudInit from .utils import get_user_authorized_keys class LaunchMachine(object): """Abstract class with generic instance-launching logic. To create your own subclass, you will certainly need to override :meth:`start_machine`, which will very probably want to use the incomplete :class:`juju.providers.common.cloudinit.CloudInit` returned by :meth:`_create_cloud_init`. :param provider: the `MachineProvider` that will administer the machine :param bool master: if True, the machine will run a zookeeper and a provisioning agent, in addition to the machine agent that every machine runs automatically. :param dict constraints: specifies the underlying OS and hardware (where available) .. automethod:: _create_cloud_init """ def __init__(self, provider, constraints, master=False): self._provider = provider self._constraints = constraints self._master = master @classmethod def launch(cls, provider, machine_data, master): """Create and run a machine launch operation. 
Exists for the convenience of the `MachineProvider` implementations which actually use the "constraints" key in machine_data, which would otherwise duplicate code. """ if "machine-id" not in machine_data: return fail(ProviderError( "Cannot launch a machine without specifying a machine-id")) if "constraints" not in machine_data: return fail(ProviderError( "Cannot launch a machine without specifying constraints")) launcher = cls(provider, machine_data["constraints"], master) return launcher.run(machine_data["machine-id"]) @inlineCallbacks def run(self, machine_id): """Launch an instance node within the machine provider environment. :param str machine_id: the juju machine ID to assign """ # XXX at some point, we'll want to start multiple zookeepers # that know about each other; for now, this'll do if self._master: zookeepers = [] else: zookeepers = yield self._provider.get_zookeeper_machines() machines = yield self.start_machine(machine_id, zookeepers) if self._master: yield self._on_new_zookeepers(machines) returnValue(machines) def start_machine(self, machine_id, zookeepers): """Actually launch a machine for the appropriate provider. :param str machine_id: the juju machine ID to assign :param zookeepers: the machines currently running zookeeper, to which the new machine will need to connect :type zookeepers: list of :class:`juju.machine.ProviderMachine` :return: a single-entry list containing a provider-specific :class:`juju.machine.ProviderMachine` representing the newly- launched machine :rtype: :class:`twisted.internet.defer.Deferred` """ raise NotImplementedError() def _create_cloud_init(self, machine_id, zookeepers): """Construct a provider-independent but incomplete :class:`CloudInit`. :return: a :class:`juju.providers.common.cloudinit.CloudInit`; it will not be ready to render, but will be configured to the greatest extent possible with the available information. :rtype: :class:`juju.providers.common.cloudinit.CloudInit` """ config = self._provider.config cloud_init = CloudInit() cloud_init.add_ssh_key(get_user_authorized_keys(config)) cloud_init.set_machine_id(machine_id) cloud_init.set_zookeeper_machines(zookeepers) origin = config.get("juju-origin") if origin: if origin.startswith("lp:"): cloud_init.set_juju_source(branch=origin) elif origin == "ppa": cloud_init.set_juju_source(ppa=True) elif origin == "proposed": cloud_init.set_juju_source(proposed=True) else: # Ignore other values, just use the distro for sanity cloud_init.set_juju_source(distro=True) if self._master: cloud_init.enable_bootstrap() cloud_init.set_zookeeper_secret(config["admin-secret"]) cloud_init.set_constraints(self._constraints) return cloud_init def _on_new_zookeepers(self, machines): instance_ids = [m.instance_id for m in machines] return self._provider.save_state({"zookeeper-instances": instance_ids}) juju-0.7.orig/juju/providers/common/state.py0000644000000000000000000000264312135220114017372 0ustar 00000000000000from cStringIO import StringIO from juju.lib import serializer from juju.errors import FileNotFound _STATE_FILE = "provider-state" class LoadState(object): """Generic state-loading operation. Note that most juju state should be stored in zookeeper nodes; this is only for state which must be knowable without access to a zookeeper. """ def __init__(self, provider): self._provider = provider def run(self): """Actually load the state.
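The state is read from the "provider-state" file in the provider's file storage and deserialized with juju.lib.serializer; for example (a sketch):

    state = yield LoadState(provider).run()
    if not state:
        ...  # no saved state; e.g. the environment is not bootstrapped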
:rtype: dict or False """ storage = self._provider.get_file_storage() d = storage.get(_STATE_FILE) d.addCallback(self._deserialize) d.addErrback(self._no_data) return d def _deserialize(self, data): return serializer.load(data.read()) or False def _no_data(self, failure): failure.trap(FileNotFound) return False class SaveState(object): """Generic state-saving operation. Note that most juju state should be stored in zookeeper nodes; this is only for state which must be knowable without access to a zookeeper. """ def __init__(self, provider): self._provider = provider def run(self, state): """Actually save new state. :param dict state: state to save. """ storage = self._provider.get_file_storage() data = serializer.dump(state) return storage.put(_STATE_FILE, StringIO(data)) juju-0.7.orig/juju/providers/common/tests/0000755000000000000000000000000012135220114017035 5ustar 00000000000000juju-0.7.orig/juju/providers/common/utils.py0000644000000000000000000001160312135220114017406 0ustar 00000000000000import logging import os from twisted.python.failure import Failure from juju.lib import serializer from juju.errors import JujuError, ProviderInteractionError log = logging.getLogger("juju.common") def convert_unknown_error(failure): """Convert any non-juju errors to a provider interaction error. Supports both usage from within an except clause, and as an errback handler ie. both the following forms are supported. ... try: something() except Exception, e: convert_unknown_errors(e) ... d.addErrback(convert_unknown_errors) """ if isinstance(failure, Failure): error = failure.value else: error = failure if not isinstance(error, JujuError): message = ("Unexpected %s interacting with provider: %s" % (type(error).__name__, str(error))) error = ProviderInteractionError(message) if isinstance(failure, Failure): return Failure(error) raise error # XXX There's some inconsistency in the handling of authorized_keys # here. While it's fine and correct for this function to read the # list of keys in authorized_keys format (text with key-per-line), # cloud-init itself expects a _list_ of keys, and what we end up # doing is passing the whole blob of data as a single key. This # should be fixed to *return* a list of keys by splitting the # lines in the data obtained, and then the call site can add each # individual key to CloudInit. # def get_user_authorized_keys(config): """Locate a public key for the user. If neither "authorized-keys" nor "authorized-keys-path" is present in config, will look in the user's .ssh directory. The name of this method is "authorized_keys", plural, because it returns the *text* (not list) to be inserted into the ~/.ssh/authorized_keys file. Multiple keys may be returned, in the same format expected by that file. :return: an SSH public key :rtype: str :raises: :exc:`LookupError` if an SSH public key is not found. 
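For example (a sketch): get_user_authorized_keys({"authorized-keys": "ssh-rsa AAAA..."}) returns the configured text verbatim, while an empty config falls back to ~/.ssh/id_dsa.pub, id_rsa.pub or identity.pub.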
""" key_names = ["id_dsa.pub", "id_rsa.pub", "identity.pub"] if config.get("authorized-keys"): return config["authorized-keys"] if config.get("authorized-keys-path"): key_names[:] = [config.get("authorized-keys-path")] for key_path in key_names: path_candidate = os.path.expanduser(key_path) if not os.path.exists(path_candidate): path_candidate = "~/.ssh/%s" % key_path path_candidate = os.path.expanduser(path_candidate) if not os.path.exists(path_candidate): continue return open(path_candidate).read() raise LookupError("SSH authorized/public key not found.") def format_cloud_init( authorized_keys, packages=(), repositories=None, scripts=None, data=None, apt_proxy=None): """Format a user-data cloud-init file. This will enable package installation, and ssh access, and script execution on launch, and passing data values to an instance. Its important to note that no sensistive data (credentials) should be conveyed via cloud-config, as the values are accessible from the any process by default on the instance. Further documentation on the capabilities of cloud-init https://help.ubuntu.com/community/CloudInit :param authorized_keys: The authorized SSH key to be used when populating the newly launched machines. :type authorized_keys: list of strings :param packages: The packages to be installed on a machine. :type packages: list of strings :param repositories: Debian repostiories to be used as apt sources on the machine. 'ppa:' syntax can be used. :type repositories: list of strings :param scripts: Scripts to be executed (in order) on machine start. :type scripts: list of strings :param dict data: Optional additional data to be passed via cloud config. It will be accessible via the key 'machine-data' from the yaml data structure. :param str apt_proxy: Optional proxy to pass through to cloud-config """ cloud_config = { "apt_update": True, "apt_upgrade": True, "ssh_authorized_keys": authorized_keys, "packages": [], "output": {"all": "| tee -a /var/log/cloud-init-output.log"}} if os.environ.get("JUJU_TESTING") == "fast": cloud_config['apt_update'] = False cloud_config['apt_upgrade'] = False if data: cloud_config["machine-data"] = data if packages: cloud_config["packages"].extend(packages) if repositories: sources = [dict(source=r) for r in repositories] cloud_config["apt_sources"] = sources if apt_proxy: cloud_config["apt_proxy"] = apt_proxy if scripts: cloud_config["runcmd"] = scripts output = serializer.dump(cloud_config) output = "#cloud-config\n%s" % (output) return output juju-0.7.orig/juju/providers/common/tests/__init__.py0000644000000000000000000000000012135220114021134 0ustar 00000000000000juju-0.7.orig/juju/providers/common/tests/data/0000755000000000000000000000000012135220114017746 5ustar 00000000000000juju-0.7.orig/juju/providers/common/tests/test_base.py0000644000000000000000000001221712135220114021363 0ustar 00000000000000from twisted.internet.defer import fail, succeed from juju.environment.errors import EnvironmentsConfigError from juju.lib.testing import TestCase from juju.machine import ProviderMachine from juju.providers.common.base import MachineProviderBase from juju.state.placement import UNASSIGNED_POLICY class SomeError(Exception): pass class DummyLaunchMachine(object): def __init__(self, master=False): self._master = master def run(self, machine_data): return succeed([ProviderMachine(machine_data["machine-id"])]) class DummyProvider(MachineProviderBase): def __init__(self, config=None): super(DummyProvider, self).__init__("venus", config or {}) class 
MachineProviderBaseTest(TestCase): def test_init(self): provider = DummyProvider({"some": "config"}) self.assertEquals(provider.environment_name, "venus") self.assertEquals(provider.config, {"some": "config"}) def test_bad_config(self): try: DummyProvider({"authorized-keys": "foo", "authorized-keys-path": "bar"}) except EnvironmentsConfigError as error: expect = ("Environment config cannot define both authorized-keys " "and authorized-keys-path. Pick one!") self.assertEquals(str(error), expect) else: self.fail("Failed to detect bad config") def test_get_serialization_data(self): keys_path = self.makeFile("some-key") provider = DummyProvider({"foo": {"bar": "baz"}, "authorized-keys-path": keys_path}) data = provider.get_serialization_data() self.assertEquals(data, {"foo": {"bar": "baz"}, "authorized-keys": "some-key"}) data["foo"]["bar"] = "qux" self.assertEquals(provider.config, {"foo": {"bar": "baz"}, "authorized-keys-path": keys_path}) def test_get_provider_placement(self): provider = DummyProvider() self.assertEqual( provider.get_placement_policy(), UNASSIGNED_POLICY) provider = DummyProvider({"placement": "local"}) self.assertEqual( provider.get_placement_policy(), "local") def test_get_legacy_config_keys(self): provider = DummyProvider() self.assertEqual(provider.get_legacy_config_keys(), set()) def test_get_machine_error(self): provider = DummyProvider() provider.get_machines = self.mocker.mock() provider.get_machines(["piffle"]) self.mocker.result(fail(SomeError())) self.mocker.replay() d = provider.get_machine("piffle") self.assertFailure(d, SomeError) return d def test_get_machine_success(self): provider = DummyProvider() provider.get_machines = self.mocker.mock() provider.get_machines(["piffle"]) machine = object() self.mocker.result(succeed([machine])) self.mocker.replay() d = provider.get_machine("piffle") def verify(result): self.assertEquals(result, machine) d.addCallback(verify) return d def test_shutdown_machine_error(self): provider = DummyProvider() provider.shutdown_machines = self.mocker.mock() machine = object() provider.shutdown_machines([machine]) self.mocker.result(fail(SomeError())) self.mocker.replay() d = provider.shutdown_machine(machine) self.assertFailure(d, SomeError) return d def test_shutdown_machine_success(self): provider = DummyProvider() provider.shutdown_machines = self.mocker.mock() machine = object() provider.shutdown_machines([machine]) probably_the_same_machine = object() self.mocker.result(succeed([probably_the_same_machine])) self.mocker.replay() d = provider.shutdown_machine(machine) def verify(result): self.assertEquals(result, probably_the_same_machine) d.addCallback(verify) return d def test_destroy_environment_error(self): provider = DummyProvider() provider.get_machines = self.mocker.mock() provider.get_machines() machines = [object(), object()] self.mocker.result(succeed(machines)) provider.shutdown_machines = self.mocker.mock() provider.shutdown_machines(machines) self.mocker.result(fail(SomeError())) self.mocker.replay() d = provider.destroy_environment() self.assertFailure(d, SomeError) return d def test_destroy_environment_success(self): provider = DummyProvider() provider.get_machines = self.mocker.mock() provider.get_machines() machines = [object(), object()] self.mocker.result(succeed(machines)) provider.shutdown_machines = self.mocker.mock() provider.shutdown_machines(machines) self.mocker.result(succeed(machines)) self.mocker.replay() d = provider.destroy_environment() def verify(result): self.assertEquals(result, machines) 
d.addCallback(verify) return d juju-0.7.orig/juju/providers/common/tests/test_bootstrap.py0000644000000000000000000000746212135220114022474 0ustar 00000000000000import logging import tempfile from twisted.internet.defer import fail, succeed, inlineCallbacks from juju.errors import EnvironmentNotFound, ProviderError from juju.lib.testing import TestCase from juju.machine.tests.test_constraints import dummy_cs from juju.providers.common.base import MachineProviderBase from juju.providers.dummy import DummyMachine, FileStorage class SomeError(Exception): pass class WorkingFileStorage(FileStorage): def __init__(self): super(WorkingFileStorage, self).__init__(tempfile.mkdtemp()) class UnwritableFileStorage(object): def put(self, name, f): return fail(Exception("oh noes")) class DummyProvider(MachineProviderBase): provider_type = "dummy" config = {"default-series": "splendid"} def __init__(self, test, file_storage, zookeeper): self._test = test self._file_storage = file_storage self._zookeeper = zookeeper def get_file_storage(self): return self._file_storage def get_zookeeper_machines(self): if isinstance(self._zookeeper, Exception): return fail(self._zookeeper) if self._zookeeper: return succeed([self._zookeeper]) return fail(EnvironmentNotFound()) def start_machine(self, machine_data, master=False): self._test.assertTrue(master) self._test.assertEquals(machine_data["machine-id"], "0") constraints = machine_data["constraints"] self._test.assertEquals(constraints["provider-type"], "dummy") self._test.assertEquals(constraints["ubuntu-series"], "splendid") self._test.assertEquals(constraints["arch"], "arm") return [DummyMachine("i-keepzoos")] class BootstrapTest(TestCase): @inlineCallbacks def setUp(self): yield super(BootstrapTest, self).setUp() self.constraints = dummy_cs.parse(["arch=arm"]).with_series("splendid") def test_unknown_error(self): provider = DummyProvider(self, None, SomeError()) d = provider.bootstrap(self.constraints) self.assertFailure(d, SomeError) return d def test_zookeeper_exists(self): log = self.capture_logging("juju.common", level=logging.DEBUG) provider = DummyProvider( self, WorkingFileStorage(), DummyMachine("i-alreadykeepzoos")) d = provider.bootstrap(self.constraints) def verify(machines): (machine,) = machines self.assertTrue(isinstance(machine, DummyMachine)) self.assertEquals(machine.instance_id, "i-alreadykeepzoos") log_text = log.getvalue() self.assertIn( "juju environment previously bootstrapped", log_text) self.assertNotIn("Launching", log_text) d.addCallback(verify) return d def test_bad_storage(self): provider = DummyProvider(self, UnwritableFileStorage(), None) d = provider.bootstrap(self.constraints) self.assertFailure(d, ProviderError) def verify(error): self.assertEquals( str(error), "Bootstrap aborted because file storage is not writable: " "oh noes") d.addCallback(verify) return d def test_create_zookeeper(self): log = self.capture_logging("juju.common", level=logging.DEBUG) provider = DummyProvider(self, WorkingFileStorage(), None) d = provider.bootstrap(self.constraints) def verify(machines): (machine,) = machines self.assertTrue(isinstance(machine, DummyMachine)) self.assertEquals(machine.instance_id, "i-keepzoos") log_text = log.getvalue() self.assertIn("Launching juju bootstrap instance", log_text) self.assertNotIn("previously bootstrapped", log_text) d.addCallback(verify) return d juju-0.7.orig/juju/providers/common/tests/test_cloudinit.py0000644000000000000000000003250612135220114022446 0ustar 00000000000000import os import stat from juju.errors 
import CloudInitError from juju.lib import serializer from juju.lib.testing import TestCase from juju.machine.tests.test_constraints import dummy_cs from juju.providers.common.cloudinit import ( CloudInit, parse_juju_origin, get_default_origin) from juju.providers.dummy import DummyMachine from juju.machine import ProviderMachine import juju DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") class CloudInitTest(TestCase): def construct_normal(self): cloud_init = CloudInit() cloud_init.add_ssh_key("chubb") cloud_init.set_machine_id("passport") cloud_init.set_provider_type("dummy") cloud_init.set_zookeeper_machines([ DummyMachine("blah", "blah", "cotswold"), DummyMachine("blah", "blah", "longleat")]) return cloud_init def construct_bootstrap(self, with_zookeepers=False): cloud_init = CloudInit() cloud_init.enable_bootstrap() cloud_init.add_ssh_key("chubb") cloud_init.set_machine_id("passport") cloud_init.set_provider_type("dummy") cloud_init.set_instance_id_accessor("token") cloud_init.set_zookeeper_secret("seekrit") cloud_init.set_constraints( dummy_cs.parse(["cpu=20"]).with_series("astonishing")) cloud_init.set_juju_source(distro=True) if with_zookeepers: cloud_init.set_zookeeper_machines([ DummyMachine("blah", "blah", "cotswold"), DummyMachine("blah", "blah", "longleat")]) return cloud_init def assert_render(self, cloud_init, name, update=False): with open(os.path.join(DATA_DIR, name)) as f: expected = serializer.load(f.read()) rendered = cloud_init.render() if update: with open(os.path.join(DATA_DIR, name), 'w') as fh: fh.write(rendered) self.assertTrue(rendered.startswith("#cloud-config")) self.assertEquals(serializer.load(rendered), expected) def test_render_validate_normal(self): cloud_init = CloudInit() error = self.assertRaises(CloudInitError, cloud_init.render) self.assertEquals( str(error), "Incomplete cloud-init: you need to call add_ssh_key, " "set_zookeeper_machines") def test_render_validate_bootstrap(self): cloud_init = CloudInit() cloud_init.enable_bootstrap() error = self.assertRaises(CloudInitError, cloud_init.render) self.assertEquals( str(error), "Incomplete cloud-init: you need to call add_ssh_key, " "set_provider_type, set_instance_id_accessor, " "set_zookeeper_secret, set_constraints") def test_source_validate(self): bad_choices = ( (None, False, False), (None, True, True), ("lp:blah", True, True), ("lp:blah", False, True), ("lp:blah", True, False)) cloud_init = CloudInit() for choice in bad_choices: error = self.assertRaises( CloudInitError, cloud_init.set_juju_source, *choice) self.assertEquals(str(error), "Please specify one source") def test_render_normal(self): path = os.environ.get("PATH", "") alt_apt_cache_path = self.makeDir() filename = os.path.join(alt_apt_cache_path, "apt-cache") with open(filename, "w") as f: f.write( "#!/bin/bash\n" "cat < /tmp/out"] repositories = ["ppa:juju/pkgs"] output = format_cloud_init( ["zebra"], packages=packages, scripts=scripts, repositories=repositories, data={"magic": [1, 2, 3]}) lines = output.split("\n") self.assertEqual(lines.pop(0), "#cloud-config") config = serializer.yaml_load("\n".join(lines)) self.assertEqual(config["ssh_authorized_keys"], ["zebra"]) self.assertTrue(config["apt_update"]) self.assertTrue(config["apt_upgrade"]) formatted_repos = [dict(source=r) for r in repositories] self.assertEqual(config["apt_sources"], formatted_repos) self.assertEqual(config["runcmd"], scripts) self.assertEqual(config["machine-data"]["magic"], [1, 2, 3]) def test_format_cloud_init_when_testing(self): 
"""When in testing mode for speed of startup disable update/upgrade. """ self.change_environment(JUJU_TESTING="fast") packages = ["python-lxml"] scripts = ["wget http://lwn.net > /tmp/out"] repositories = ["ppa:juju/pkgs"] output = format_cloud_init( ["zebra"], packages=packages, scripts=scripts, repositories=repositories, data={"magic": [1, 2, 3]}) lines = output.split("\n") self.assertEqual(lines.pop(0), "#cloud-config") config = serializer.yaml_load("\n".join(lines)) self.assertFalse(config["apt_update"]) self.assertFalse(config["apt_upgrade"]) self.change_environment(JUJU_TESTING="yes") output = format_cloud_init( ["zebra"], packages=packages, scripts=scripts, repositories=repositories, data={"magic": [1, 2, 3]}) lines = output.split("\n") config = serializer.yaml_load("\n".join(lines)) self.assertTrue(config["apt_update"]) self.assertTrue(config["apt_upgrade"]) juju-0.7.orig/juju/providers/common/tests/data/cloud_init_bootstrap0000644000000000000000000000360312135220114024121 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'localhost:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, default-jre-headless, zookeeper, zookeeperd, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, sed -i -e s/tickTime=2000/tickTime=15000/g /etc/zookeeper/conf/zoo.cfg, echo "minSessionTimeout=30000" >> /etc/zookeeper/conf/zoo.cfg, echo "maxSessionTimeout=60000" >> /etc/zookeeper/conf/zoo.cfg, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --constraints-data=e2NwdTogJzIwJywgcHJvdmlkZXItdHlwZTogZHVtbXksIHVidW50dS1zZXJpZXM6IGFzdG9uaXNoaW5nfQo= --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output 2>&1 EOF ', /sbin/start juju-provision-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_bootstrap_testing0000644000000000000000000000364212135220114025661 0ustar 00000000000000#cloud-config apt_update: false apt_upgrade: false machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'localhost:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, default-jre-headless, zookeeper, zookeeperd, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, sed -i -e s/tickTime=2000/tickTime=15000/g /etc/zookeeper/conf/zoo.cfg, echo "minSessionTimeout=30000" >> /etc/zookeeper/conf/zoo.cfg, echo "maxSessionTimeout=60000" >> /etc/zookeeper/conf/zoo.cfg, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= 
--constraints-data=e2NwdTogJzIwJywgcHJvdmlkZXItdHlwZTogZHVtbXksIHVidW50dS1zZXJpZXM6IGFzdG9uaXNoaW5nfQo= --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_TESTING="fast" env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output 2>&1 EOF ', /sbin/start juju-provision-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_bootstrap_zookeepers0000644000000000000000000000372712135220114026376 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181,localhost:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, default-jre-headless, zookeeper, zookeeperd, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, sed -i -e s/tickTime=2000/tickTime=15000/g /etc/zookeeper/conf/zoo.cfg, echo "minSessionTimeout=30000" >> /etc/zookeeper/conf/zoo.cfg, echo "maxSessionTimeout=60000" >> /etc/zookeeper/conf/zoo.cfg, 'juju-admin initialize --instance-id=token --admin-identity=admin:19vlzY4Vc3q4Ew5OsCwKYqrq1HI= --constraints-data=e2NwdTogJzIwJywgcHJvdmlkZXItdHlwZTogZHVtbXksIHVidW50dS1zZXJpZXM6IGFzdG9uaXNoaW5nfQo= --provider-type=dummy', 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181,localhost:2181" exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output 2>&1 EOF ', /sbin/start juju-provision-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_branch0000644000000000000000000000230612135220114023340 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, 'cd /usr/lib/juju && sudo /usr/bin/bzr co --lightweight lp:blah/juju/blah-blah juju', cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, 
sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_branch_trunk0000644000000000000000000000226712135220114024571 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, 'cd /usr/lib/juju && sudo /usr/bin/bzr co --lightweight lp:juju juju', cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_distro0000644000000000000000000000171312135220114023410 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_no_machine_id0000644000000000000000000000066212135220114024662 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: } output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: ['sudo mkdir -p /var/lib/juju', 'sudo mkdir -p /var/log/juju'] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_normal0000644000000000000000000000116112135220114023371 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, 
byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'JUJU_MACHINE_ID=passport JUJU_ZOOKEEPER=cotswold:2181,longleat:2181 python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid'] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_ppa0000644000000000000000000000176412135220114022672 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_ppa_apt_proxy0000644000000000000000000000202212135220114024763 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_proxy: 'superproxy:37337' apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/common/tests/data/cloud_init_proposed0000644000000000000000000000202012135220114023727 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'deb $MIRROR $RELEASE-proposed main universe'} machine-data: {juju-provider-type: dummy, juju-zookeeper-hosts: 'cotswold:2181,longleat:2181', machine-id: passport} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="passport" env JUJU_ZOOKEEPER="cotswold:2181,longleat:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [chubb] juju-0.7.orig/juju/providers/ec2/__init__.py0000644000000000000000000002152012135220114017165 0ustar 00000000000000import 
os import re from twisted.internet.defer import inlineCallbacks, returnValue from txaws.ec2.exception import EC2Error from txaws.service import AWSServiceRegion from juju.errors import ( MachinesNotFound, ProviderError, ProviderInteractionError, SSLVerificationUnsupported) from juju.providers.common.base import MachineProviderBase from .files import FileStorage from .launch import EC2LaunchMachine from .machine import EC2ProviderMachine, machine_from_instance from .securitygroup import ( open_provider_port, close_provider_port, get_provider_opened_ports, destroy_environment_security_group) from .utils import ( convert_zone, get_region_uri, DEFAULT_REGION, INSTANCE_TYPES, log, ssl) class MachineProvider(MachineProviderBase): """MachineProvider for use in an EC2/S3 environment""" def __init__(self, environment_name, config): super(MachineProvider, self).__init__(environment_name, config) if not config.get("ec2-uri"): ec2_uri = get_region_uri(config.get("region", DEFAULT_REGION)) else: ec2_uri = config.get("ec2-uri") self._service = AWSServiceRegion( access_key=config.get("access-key", ""), secret_key=config.get("secret-key", ""), ec2_uri=ec2_uri, s3_uri=config.get("s3-uri", "")) ssl_verify = self.config.get("ssl-hostname-verification", False) if ssl_verify: if ssl is None: raise SSLVerificationUnsupported() self._service.ec2_endpoint.ssl_hostname_verification = True self._service.s3_endpoint.ssl_hostname_verification = True else: log.warn( "ssl-hostname-verification is disabled for this environment") for endpoint, endpoint_type in [(self._service.ec2_endpoint, "EC2"), (self._service.s3_endpoint, "S3")]: if endpoint.scheme != "https": log.warn("%s API calls not using secure transport" % ( endpoint_type,)) elif not ssl_verify: log.warn("%s API calls encrypted but not authenticated" % ( endpoint_type,)) if not ssl_verify: log.warn( "Ubuntu Cloud Image lookups encrypted but not authenticated") self.s3 = self._service.get_s3_client() self.ec2 = self._service.get_ec2_client() @property def provider_type(self): return "ec2" @property def using_amazon(self): return "ec2-uri" not in self.config @inlineCallbacks def get_constraint_set(self): """Return the set of constraints that are valid for this provider.""" cs = yield super(MachineProvider, self).get_constraint_set() if 1: # These keys still need to be valid (instance-type and ec2-zone) #if self.using_amazon: # Expose EC2 instance types/zones on AWS itself, not private clouds. cs.register_generics(INSTANCE_TYPES.keys()) cs.register("ec2-zone", converter=convert_zone) returnValue(cs) def get_legacy_config_keys(self): """Return any deprecated config keys that are set""" legacy = super(MachineProvider, self).get_legacy_config_keys() if self.using_amazon: # In the absence of a generic instance-type/image-id mechanism, # these keys remain valid on private clouds. amazon_legacy = set(("default-image-id", "default-instance-type")) legacy.update(amazon_legacy.intersection(self.config)) return legacy def get_serialization_data(self): """Get provider configuration suitable for serialization. Also extracts credential information from the environment.
""" data = super(MachineProvider, self).get_serialization_data() data.setdefault("access-key", os.environ.get("AWS_ACCESS_KEY_ID")) data.setdefault("secret-key", os.environ.get("AWS_SECRET_ACCESS_KEY")) return data def get_file_storage(self): """Retrieve an S3-backed :class:`FileStorage`.""" return FileStorage(self.s3, self.config["control-bucket"]) def start_machine(self, machine_data, master=False): """Start an EC2 machine. :param dict machine_data: desired characteristics of the new machine; it must include a "machine-id" key, and may include a "constraints" key to specify the underlying OS and hardware. :param bool master: if True, machine will initialize the juju admin and run a provisioning agent, in addition to running a machine agent. """ return EC2LaunchMachine.launch(self, machine_data, master) @inlineCallbacks def get_machines(self, instance_ids=()): """List machines running in the provider. :param list instance_ids: ids of instances you want to get. Leave empty to list every :class:`juju.providers.ec2.machine.EC2ProviderMachine` owned by this provider. :return: a list of :class:`juju.providers.ec2.machine.EC2ProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ group_name = "juju-%s" % self.environment_name try: instances = yield self.ec2.describe_instances(*instance_ids) except EC2Error as error: code = error.get_error_codes() message = error.get_error_messages() if code == "InvalidInstanceID.NotFound": message = error.get_error_messages() raise MachinesNotFound( re.findall(r"\bi-[0-9a-f]{3,15}\b", message)) raise ProviderInteractionError( "Unexpected EC2Error getting machines %s: %s" % (", ".join(instance_ids), message)) machines = [] for instance in instances: if instance.instance_state not in ("running", "pending"): continue if group_name not in instance.reservation.groups: continue machines.append(machine_from_instance(instance)) if instance_ids: # We were asked for a specific list of machines, and if we can't # completely fulfil that request we should blow up. found_instance_ids = set(m.instance_id for m in machines) missing = set(instance_ids) - found_instance_ids if missing: raise MachinesNotFound(missing) returnValue(machines) @inlineCallbacks def destroy_environment(self): """Terminate all associated machines and security groups. The super defintion of this method terminates each machine in the environment; this needs to be augmented here by also removing the security group for the environment. :rtype: :class:`twisted.internet.defer.Deferred` """ try: killed_machines = yield super(MachineProvider, self).\ destroy_environment() returnValue(killed_machines) finally: yield destroy_environment_security_group(self) @inlineCallbacks def shutdown_machines(self, machines): """Terminate machines associated with this provider. 
:param machines: machines to shut down :type machines: list of :class:`juju.providers.ec2.machine.EC2ProviderMachine` :return: list of terminated :class:`juju.providers.ec2.machine.EC2ProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` """ if not machines: returnValue([]) for machine in machines: if not isinstance(machine, EC2ProviderMachine): raise ProviderError("Can only shut down EC2ProviderMachines; " "got a %r" % type(machine)) ids = [m.instance_id for m in machines] killable_machines = yield self.get_machines(ids) if not killable_machines: returnValue([]) # Nothing to do killable_ids = [m.instance_id for m in killable_machines] yield self.ec2.terminate_instances(*killable_ids) returnValue(killable_machines) def open_port(self, machine, machine_id, port, protocol="tcp"): """Authorizes `port` using `protocol` on EC2 for `machine`.""" return open_provider_port(self, machine, machine_id, port, protocol) def close_port(self, machine, machine_id, port, protocol="tcp"): """Revokes `port` using `protocol` on EC2 for `machine`.""" return close_provider_port(self, machine, machine_id, port, protocol) def get_opened_ports(self, machine, machine_id): """Returns a set of open (port, proto) pairs for `machine`.""" return get_provider_opened_ports(self, machine, machine_id) juju-0.7.orig/juju/providers/ec2/files.py0000644000000000000000000000734112135220114016535 0ustar 00000000000000""" Ec2 Provider File Storage on S3 """ from base64 import b64encode import hmac import sha import urllib import time from cStringIO import StringIO from twisted.internet.defer import fail from twisted.web.error import Error from OpenSSL.SSL import Error as SSLError from txaws.s3.client import URLContext from juju.errors import FileNotFound, SSLVerificationError _FILENOTFOUND_CODES = ("NoSuchKey", "NoSuchBucket") def _safe_string(s): if isinstance(s, unicode): s = s.encode('utf8') return s class FileStorage(object): """S3-backed :class:`FileStorage` abstraction""" def __init__(self, s3, bucket): self._s3 = s3 self._bucket = bucket def get_url(self, name): """Return a URL that can be used to access a stored file. S3 time authenticated URL reference: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html :param unicode name: the S3 key for which to provide a URL :return: a signed URL, expiring 10 years from now :rtype: str """ # URLs are good for 10 years. expires = int(time.time()) + 365 * 24 * 3600 * 10 name = _safe_string(name) path = "%s/%s" % (self._bucket, urllib.quote(name)) signed = hmac.new(self._s3.creds.secret_key, digestmod=sha) signed.update("GET\n\n\n%s\n/%s" % (expires, path)) signature = urllib.quote_plus(b64encode(signed.digest()).strip()) url_context = URLContext( self._s3.endpoint, urllib.quote(self._bucket), urllib.quote(name)) url = url_context.get_url() url += "?Signature=%s&Expires=%s&AWSAccessKeyId=%s" % ( signature, expires, self._s3.creds.access_key) return url def get(self, name): """Get a file object from S3. :param unicode name: S3 key for the desired file :return: an open file object :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.FileNotFound` if the file doesn't exist """ # for now we do the simplest thing and just fetch the file # in a single call, s3 limits this to some mb, as we grow # to have charms that might grow to this size, we can # revisit fetching in batches (head request for size) and # streaming to disk. 
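# For illustration only -- a minimal standalone sketch (not part of the
# original module) of the signing scheme get_url() uses above: HMAC-SHA1
# over the canonical request string, base64-encoded, then URL-quoted.
# The key, expiry and path values are the ones exercised by test_get_url
# in juju/providers/ec2/tests/test_files.py later in this archive:
#
#     import hmac
#     import sha
#     import urllib
#     from base64 import b64encode
#
#     expires = 1628829969  # frozen time 1313469969 + 10 years
#     path = "moon/pirates/content.txt"  # bucket + "/" + quoted key
#     signed = hmac.new("3e5a7c653f59", digestmod=sha)  # the secret-key
#     signed.update("GET\n\n\n%s\n/%s" % (expires, path))
#     signature = urllib.quote_plus(b64encode(signed.digest()).strip())
#     # per that test: signature == "8A%2BF4sk48OmJ8xfPoOY7U0%2FacvM%3D"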
d = self._s3.get_object(self._bucket, _safe_string(name)) def on_content_retrieved(content): return StringIO(content) d.addCallback(on_content_retrieved) def on_no_file(failure): # Trap file not found errors and wrap them in an application error. failure.trap(Error, SSLError) if type(failure.value) == SSLError: return fail(SSLVerificationError(failure.value)) if str(failure.value.status) != "404": return failure return fail(FileNotFound("s3://%s/%s" % (self._bucket, name))) d.addErrback(on_no_file) return d def put(self, remote_path, file_object): """Upload a file to S3. :param unicode remote_path: key on which to store the content :param file_object: open file object containing the content :rtype: :class:`twisted.internet.defer.Deferred` """ content = file_object.read() path = _safe_string(remote_path) def put(ignored): return self._s3.put_object(self._bucket, path, content) def create_retry(failure): failure.trap(Error) if str(failure.value.status) != "404": return failure d = self._s3.create_bucket(self._bucket) d.addCallback(put) return d d = put(None) d.addErrback(create_retry) return d juju-0.7.orig/juju/providers/ec2/launch.py0000644000000000000000000001042212135220114016677 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue from juju.providers.common.launch import LaunchMachine from .machine import machine_from_instance from .utils import get_machine_spec, log, DEFAULT_REGION class EC2LaunchMachine(LaunchMachine): """Amazon EC2 operation for launching an instance""" @inlineCallbacks def start_machine(self, machine_id, zookeepers): """Actually launch an instance on EC2. :param str machine_id: the juju machine ID to assign :param zookeepers: the machines currently running zookeeper, to which the new machine will need to connect :type zookeepers: list of :class:`juju.providers.ec2.machine.EC2ProviderMachine` :return: a single-entry list containing a :class:`juju.providers.ec2.machine.EC2ProviderMachine` representing the newly-launched machine :rtype: :class:`twisted.internet.defer.Deferred` """ cloud_init = self._create_cloud_init(machine_id, zookeepers) cloud_init.set_provider_type("ec2") cloud_init.set_instance_id_accessor( "$(curl http://169.254.169.254/1.0/meta-data/instance-id)") user_data = cloud_init.render() availability_zone = self._constraints["ec2-zone"] if availability_zone is not None: region = self._provider.config.get("region", DEFAULT_REGION) availability_zone = region + availability_zone spec = yield get_machine_spec(self._provider.config, self._constraints) security_groups = yield self._ensure_groups(machine_id) log.debug("Launching with machine spec %s", spec) instances = yield self._provider.ec2.run_instances( min_count=1, max_count=1, image_id=spec.image_id, instance_type=spec.instance_type, security_groups=security_groups, availability_zone=availability_zone, user_data=user_data) returnValue([machine_from_instance(i) for i in instances]) @inlineCallbacks def _ensure_groups(self, machine_id): """Ensure the juju group is in the machine launch groups. Machines launched by juju are tagged with a group so they can be distinguished from other machines that might be running on an EC2 account. This group can be specified explicitly or implicitly defined by the environment name. In addition, a specific machine security group is created for each machine, so that its firewall rules can be configured per machine.
:param machine_id: The juju machine ID of the new machine """ juju_group = "juju-%s" % self._provider.environment_name juju_machine_group = "juju-%s-%s" % ( self._provider.environment_name, machine_id) security_groups = yield self._provider.ec2.describe_security_groups() group_ids = [group.name for group in security_groups] # Create the provider group if it doesn't exist. if not juju_group in group_ids: log.debug("Creating juju provider group %s", juju_group) yield self._provider.ec2.create_security_group( juju_group, "juju group for %s" % self._provider.environment_name) # Authorize SSH. yield self._provider.ec2.authorize_security_group( juju_group, ip_protocol="tcp", from_port="22", to_port="22", cidr_ip="0.0.0.0/0") # We need to describe the group to pickup the owner_id for auth. groups_info = yield self._provider.ec2.describe_security_groups( juju_group) # Authorize Internal ZK Traffic yield self._provider.ec2.authorize_security_group( juju_group, source_group_name=juju_group, source_group_owner_id=groups_info.pop().owner_id) # Create the machine-specific group if it does not already exist if not juju_machine_group in group_ids: yield self._provider.ec2.create_security_group( juju_machine_group, "juju group for %s machine %s" % ( self._provider.environment_name, machine_id)) returnValue([juju_group, juju_machine_group]) juju-0.7.orig/juju/providers/ec2/machine.py0000644000000000000000000000111412135220114017031 0ustar 00000000000000from juju.machine import ProviderMachine class EC2ProviderMachine(ProviderMachine): """EC2-specific ProviderMachine implementation. Not really interesting right now, except for "tagging" purposes. """ def machine_from_instance(instance): """Create an :class:`EC2ProviderMachine` from a txaws :class:`Instance` :param instance: the EC2 Instance :return: a matching :class:`EC2ProviderMachine` """ return EC2ProviderMachine( instance.instance_id, instance.dns_name, instance.private_dns_name, instance.instance_state) juju-0.7.orig/juju/providers/ec2/securitygroup.py0000644000000000000000000001241012135220114020350 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue from txaws.ec2.exception import EC2Error from juju.errors import ProviderInteractionError from .utils import log def _get_juju_security_group(provider): """Get EC2 security group name for environment of `provider`.""" return "juju-%s" % provider.environment_name def _get_machine_group_name(provider, machine_id): """Get EC2 security group name associated just with `machine_id`.""" return "juju-%s-%s" % (provider.environment_name, machine_id) # TODO These security group functions do not handle the eventual # consistency seen with EC2. A future branch will add support for # retry so that using code doesn't have to be aware of this issue. # # In addition, the functions work with respect to the machine id, # since they manipulate a security group permanently associated with # the EC2 provided machine, and the machine must be launched into this # security group. This security group, per the above # `_get_machine_group_name`, embeds the machine id, eg # juju-moon-42. Ideally, this would not be the case.
See the # comments associated with the merge proposal of # https://code.launchpad.net/~jimbaker/juju/expose-provider-ec2/ @inlineCallbacks def open_provider_port(provider, machine, machine_id, port, protocol): """Authorize `port`/`proto` for the machine security group.""" try: yield provider.ec2.authorize_security_group( _get_machine_group_name(provider, machine_id), ip_protocol=protocol, from_port=str(port), to_port=str(port), cidr_ip="0.0.0.0/0") log.debug("Opened %s/%s on provider machine %r", port, protocol, machine.instance_id) except EC2Error, e: raise ProviderInteractionError( "Unexpected EC2Error opening %s/%s on machine %s: %s" % (port, protocol, machine.instance_id, e.get_error_messages())) @inlineCallbacks def close_provider_port(provider, machine, machine_id, port, protocol): """Revoke `port`/`proto` for the machine security group.""" try: yield provider.ec2.revoke_security_group( _get_machine_group_name(provider, machine_id), ip_protocol=protocol, from_port=str(port), to_port=str(port), cidr_ip="0.0.0.0/0") log.debug("Closed %s/%s on provider machine %r", port, protocol, machine.instance_id) except EC2Error, e: raise ProviderInteractionError( "Unexpected EC2Error closing %s/%s on machine %s: %s" % (port, protocol, machine.instance_id, e.get_error_messages())) @inlineCallbacks def get_provider_opened_ports(provider, machine, machine_id): """Gets the opened ports for `machine`. Retrieves the IP permissions associated with the machine security group, then parses them to return a set of (port, proto) pairs. """ try: security_groups = yield provider.ec2.describe_security_groups( _get_machine_group_name(provider, machine_id)) except EC2Error, e: raise ProviderInteractionError( "Unexpected EC2Error getting open ports on machine %s: %s" % (machine.instance_id, e.get_error_messages())) opened_ports = set() # made up of (port, protocol) pairs for ip_permission in security_groups[0].allowed_ips: if ip_permission.cidr_ip != "0.0.0.0/0": continue from_port = int(ip_permission.from_port) to_port = int(ip_permission.to_port) if from_port == to_port: # Only return ports that are individually opened. 
We # ignore multi-port ranges, since they are set outside of # juju (at this time at least) opened_ports.add((from_port, ip_permission.ip_protocol)) returnValue(opened_ports) def _get_machine_security_group_from_instance(provider, instance): """Parses the `reservation` of `instance` to get assoc machine group.""" juju_security_group = _get_juju_security_group(provider) for group in instance.reservation.groups: if group != juju_security_group: return group # Ignore if no such group exists; this allows some limited # backwards compatibility with old setups without machine # security group log.info("Ignoring missing machine security group for instance %r", instance.instance_id) return None @inlineCallbacks def _delete_security_group(provider, group): """Wrap EC2 delete_security_group.""" try: yield provider.ec2.delete_security_group(group) log.debug("Deleted security group %r", group) except EC2Error, e: raise ProviderInteractionError( "EC2 error when attempting to delete group %s: %s" % (group, e)) @inlineCallbacks def destroy_environment_security_group(provider): """Delete the security group for the environment of `provider`""" group = _get_juju_security_group(provider) try: yield provider.ec2.delete_security_group(group) log.debug("Deleted environment security group %r", group) returnValue(True) except EC2Error, e: # Ignore, since this is only attempting to cleanup log.debug( "Ignoring EC2 error when attempting to delete group %s: %s" % ( group, e)) returnValue(False) juju-0.7.orig/juju/providers/ec2/tests/0000755000000000000000000000000012135220114016216 5ustar 00000000000000juju-0.7.orig/juju/providers/ec2/utils.py0000644000000000000000000001320212135220114016564 0ustar 00000000000000from collections import namedtuple import csv import logging import operator import os from string import ascii_lowercase import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.client import getPage from twisted.web.error import Error from juju.errors import ProviderError # We don't actually know what's available in any given region _PLAUSIBLE_ZONES = ascii_lowercase # "cost" is measured in $/h in us-east-1 # "hvm" is True if an HVM image is required _InstanceType = namedtuple("_InstanceType", "arch cpu mem cost hvm") # some instance types can be started as i386 or amd64 _EITHER_ARCH = "?"
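# A worked reading of one row of the table below, for illustration:
# INSTANCE_TYPES["m1.small"] == _InstanceType("?", 1, 1740, 0.08, False),
# i.e. a type that runs either i386 or amd64 ("?" is _EITHER_ARCH), has
# 1 cpu unit and 1740 MB of memory, costs $0.08/h in us-east-1, and does
# not require an HVM image.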
INSTANCE_TYPES = { # t1.micro cpu is "up to 2", but in practice "very little" "t1.micro": _InstanceType(_EITHER_ARCH, 0.1, 613, 0.02, False), "m1.small": _InstanceType(_EITHER_ARCH, 1, 1740, 0.08, False), "m1.medium": _InstanceType(_EITHER_ARCH, 2, 3840, 0.16, False), "m1.large": _InstanceType("amd64", 4, 7680, 0.32, False), "m1.xlarge": _InstanceType("amd64", 8, 15360, 0.64, False), "m2.xlarge": _InstanceType("amd64", 6.5, 17510, 0.45, False), "m2.2xlarge": _InstanceType("amd64", 13, 35020, 0.9, False), "m2.4xlarge": _InstanceType("amd64", 26, 70040, 1.8, False), "c1.medium": _InstanceType(_EITHER_ARCH, 5, 1740, 0.165, False), "c1.xlarge": _InstanceType("amd64", 20, 7168, 0.66, False), "cc1.4xlarge": _InstanceType("amd64", 33.5, 23552, 1.3, True), "cc2.8xlarge": _InstanceType("amd64", 88, 61952, 2.4, True), # also has fancy GPUs we can't currently describe "cg1.4xlarge": _InstanceType("amd64", 33.5, 22528, 2.1, True)} DEFAULT_REGION = "us-east-1" log = logging.getLogger("juju.ec2") try: from txaws.client import ssl from txaws.client.ssl import VerifyingContextFactory except ImportError: ssl = None VerifyingContextFactory = None _CURRENT_IMAGE_HOST = 'cloud-images.ubuntu.com' _CURRENT_IMAGE_URI_TEMPLATE = ( "https://%s/query/%s/server/released.current.txt") _STREAM_IMAGE_URI_TEMPLATE = ( "https://%s/query/%s/server/daily.current.txt") MachineSpec = namedtuple("MachineSpec", "instance_type image_id") def convert_zone(s): s = s.lower() if len(s) == 1: if s in _PLAUSIBLE_ZONES: return s raise ValueError("expected single ascii letter") def get_region_uri(region): """Get the URL endpoint for the region.""" return "https://ec2.%s.amazonaws.com" % region def get_current_ami(series, arch, region, hvm, ssl_verify=False): required_kind = "hvm" if hvm else "paravirtual" def handle_404(failure): failure.trap(Error) if failure.value.status == "404": raise LookupError((series, arch, region)) return failure def extract_ami(current_data): data_stream = StringIO.StringIO(current_data) for tokens in csv.reader(data_stream, "excel-tab"): if tokens[4] != "ebs": continue if len(tokens) > 10: if tokens[10] != required_kind: continue elif hvm: raise LookupError("HVM images not available for %s" % series) if tokens[5] == arch and tokens[6] == region: return tokens[7] raise LookupError((series, arch, region)) if bool(os.environ.get("JUJU_TESTING")): uri = _STREAM_IMAGE_URI_TEMPLATE % (_CURRENT_IMAGE_HOST, series) else: uri = _CURRENT_IMAGE_URI_TEMPLATE % (_CURRENT_IMAGE_HOST, series) if ssl and ssl_verify: contextFactory = VerifyingContextFactory(_CURRENT_IMAGE_HOST) else: contextFactory = None d = getPage(uri, contextFactory=contextFactory) d.addErrback(handle_404) d.addCallback(extract_ami) return d def _arch_match(actual, desired): if actual == desired: return True if actual == _EITHER_ARCH: # Several instance types can be started in 32-bit mode if desired. 
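# A quick truth table for this helper, for illustration:
#   _arch_match("amd64", "amd64") -> True   (exact match)
#   _arch_match("?", "i386")      -> True   (either-arch type, 32-bit)
#   _arch_match("?", "amd64")     -> True   (either-arch type, 64-bit)
#   _arch_match("amd64", "i386")  -> False  (64-bit-only type)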
if desired in ("i386", "amd64"): return True return False def _filter_func(name, constraints, op): desired = constraints[name] if not desired: return lambda _: True def f(type_): value = getattr(INSTANCE_TYPES[type_], name) return op(value, desired) return f def _get_filters(constraints): return ( _filter_func("arch", constraints, _arch_match), _filter_func("cpu", constraints, operator.ge), _filter_func("mem", constraints, operator.ge)) def _cost(name): return INSTANCE_TYPES[name].cost def get_instance_type(config, constraints): instance_type = config.get("default-instance-type") if instance_type is not None: return instance_type instance_type = constraints["instance-type"] if instance_type is not None: return instance_type possible_types = list(INSTANCE_TYPES) for f in _get_filters(constraints): possible_types = filter(f, possible_types) if not possible_types: raise ProviderError( "No instance type satisfies %s" % dict(constraints)) return sorted(possible_types, key=_cost)[0] @inlineCallbacks def get_machine_spec(config, constraints): instance_type = get_instance_type(config, constraints) image_id = config.get("default-image-id") if image_id is None: series = constraints["ubuntu-series"] or config["default-series"] arch = constraints["arch"] or "amd64" region = config.get("region", DEFAULT_REGION) hvm = INSTANCE_TYPES[instance_type].hvm ssl_verify = config.get("ssl-hostname-verification", False) image_id = yield get_current_ami(series, arch, region, hvm, ssl_verify) returnValue(MachineSpec(instance_type, image_id)) juju-0.7.orig/juju/providers/ec2/tests/__init__.py0000644000000000000000000000000212135220114020317 0ustar 00000000000000# juju-0.7.orig/juju/providers/ec2/tests/common.py0000644000000000000000000001667512135220114020077 0ustar 00000000000000 from twisted.internet.defer import fail, succeed, inlineCallbacks, returnValue from txaws.s3.client import S3Client from txaws.s3.exception import S3Error from txaws.ec2.client import EC2Client from txaws.ec2.exception import EC2Error from txaws.ec2.model import Instance, Reservation, SecurityGroup from juju.lib import serializer from juju.lib.mocker import KWARGS, MATCH from juju.providers.ec2 import MachineProvider from juju.providers.ec2.machine import EC2ProviderMachine MATCH_GROUP = MATCH(lambda x: x.startswith("juju-moon")) _constraints_provider = MachineProvider( "", {"access-key": "fog", "secret-key": "snow"}) @inlineCallbacks def get_constraints(strs, series="splendid"): cs = yield _constraints_provider.get_constraint_set() returnValue(cs.parse(strs).with_series(series)) class EC2TestMixin(object): env_name = "moon" service_factory_kwargs = None def get_config(self): return {"type": "ec2", "juju-origin": "distro", "admin-secret": "magic-beans", "access-key": "0f62e973d5f8", "secret-key": "3e5a7c653f59", "control-bucket": self.env_name, "default-series": "splendid"} def get_provider(self): """Return the ec2 machine provider. This should only be invoked after mocker is in replay mode so the AWS service class will be appropriately replaced by the mock. 
""" return MachineProvider(self.env_name, self.get_config()) def get_instance(self, instance_id, state="running", machine_id=42, **kwargs): groups = kwargs.pop("groups", ["juju-%s" % self.env_name, "juju-%s-%s" % (self.env_name, machine_id)]) reservation = Reservation("x", "y", groups=groups) return Instance(instance_id, state, reservation=reservation, **kwargs) def assert_machine(self, machine, instance_id, dns_name): self.assertTrue(isinstance(machine, EC2ProviderMachine)) self.assertEquals(machine.instance_id, instance_id) self.assertEquals(machine.dns_name, dns_name) def get_ec2_error(self, entity_id, format="The instance ID %r does not exist", code=503): """Make a representative EC2Error for `entity_id`, eg AWS instance_id. This error is paired with `get_wrapped_ec2_text` below. The default format represents a fairly common error seen in working with EC2. There are others.""" message = format % entity_id return EC2Error( "1%s" % message, code) def setUp(self): # mock out the aws services service_factory = self.mocker.replace( "txaws.service.AWSServiceRegion") self._service = service_factory(KWARGS) def store_factory_kwargs(**kwargs): self.service_factory_kwargs = kwargs self.mocker.call(store_factory_kwargs) self.s3 = self.mocker.mock(S3Client) self._service.get_s3_client() self.mocker.result(self.s3) self.ec2 = self.mocker.mock(EC2Client) self._service.get_ec2_client() self.mocker.result(self.ec2) class EC2MachineLaunchMixin(object): def _mock_launch_utils(self, ami_name="ami-default", get_ami_args=()): get_public_key = self.mocker.replace( "juju.providers.common.utils.get_user_authorized_keys") def match_config(arg): return isinstance(arg, dict) get_public_key(MATCH(match_config)) self.mocker.result("zebra") get_ami_args = get_ami_args or ( "splendid", "amd64", "us-east-1", False, False) get_ami = self.mocker.replace( "juju.providers.ec2.utils.get_current_ami") get_ami(*get_ami_args) self.mocker.result(succeed(ami_name)) def _mock_create_group(self): group_name = "juju-%s" % self.env_name self.ec2.create_security_group( group_name, "juju group for %s" % self.env_name) self.mocker.result(succeed(True)) self.ec2.authorize_security_group( group_name, ip_protocol="tcp", from_port="22", to_port="22", cidr_ip="0.0.0.0/0") self.mocker.result(succeed([self.env_name])) self.ec2.describe_security_groups(group_name) self.mocker.result(succeed( [SecurityGroup(group_name, "", owner_id="123")])) self.ec2.authorize_security_group( group_name, source_group_name=group_name, source_group_owner_id="123") self.mocker.result(succeed(True)) def _mock_create_machine_group(self, machine_id): machine_group_name = "juju-%s-%s" % (self.env_name, machine_id) self.ec2.create_security_group( machine_group_name, "juju group for %s machine %s" % ( self.env_name, machine_id)) self.mocker.result(succeed(True)) def _mock_get_zookeeper_hosts(self, hosts=None): """ Try to encapsulate a variety of behaviors here.. if hosts is None, a default host is used. if hosts is False, no s3 state is returned if hosts are passed as a list of instances, they are returned. """ if hosts is None: hosts = [self.get_instance( "i-es-zoo", private_dns_name="es.example.internal")] self.s3.get_object(self.env_name, "provider-state") if hosts is False: error = S3Error("", 404) error.errors = [{"Code": "NoSuchKey"}] self.mocker.result(fail(error)) return state = serializer.dump({ "zookeeper-instances": [i.instance_id for i in hosts]}) self.mocker.result(succeed(state)) if hosts: # connect grabs the first host of a set. 
self.ec2.describe_instances(hosts[0].instance_id) self.mocker.result(succeed([hosts[0]])) class MockInstanceState(object): """Mock the result of ec2_describe_instances when called successively. Each call of :method:`get_round` returns a list of mock `Instance` objects, using the state for that round. Instance IDs not used in the round (and passed in from ec2_describe_instances) are automatically skipped.""" def __init__(self, tester, instance_ids, machine_ids, states): self.tester = tester self.instance_ids = instance_ids self.machine_ids = machine_ids self.states = states self.round = 0 def get_round(self, *current_instance_ids): result = [] for instance_id, machine_id, state in zip( self.instance_ids, self.machine_ids, self.states[self.round]): if instance_id not in current_instance_ids: # Ignore instance_ids that are no longer being # described, because they have since moved into a # terminated state continue result.append(self.tester.get_instance(instance_id, machine_id=machine_id, state=state)) self.round += 1 return succeed(result) class Observed(object): """Minimal wrapper just to ensure :method:`add` returns a `Deferred`.""" def __init__(self): self.items = set() def add(self, item): self.items.add(item) return succeed(True) juju-0.7.orig/juju/providers/ec2/tests/data/0000755000000000000000000000000012135220114017127 5ustar 00000000000000juju-0.7.orig/juju/providers/ec2/tests/test_bootstrap.py0000644000000000000000000001301412135220114021643 0ustar 00000000000000import logging import os from twisted.internet.defer import succeed, inlineCallbacks from txaws.ec2.model import SecurityGroup from juju.lib import serializer from juju.lib.mocker import MATCH from juju.lib.testing import TestCase from juju.providers.ec2.machine import EC2ProviderMachine from .common import EC2TestMixin, EC2MachineLaunchMixin, get_constraints DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") class EC2BootstrapTest(EC2TestMixin, EC2MachineLaunchMixin, TestCase): def _mock_verify(self): self.s3.put_object( self.env_name, "bootstrap-verify", "storage is writable") self.mocker.result(succeed(True)) def _mock_save(self): """Mock saving bootstrap instances to S3.""" def match_string(data): return isinstance(data, str) self.s3.put_object( self.env_name, "provider-state", MATCH(match_string)) self.mocker.result(succeed(True)) def _mock_launch(self, update=False): """Mock launching a bootstrap machine on ec2.""" def verify_user_data(data): expect_path = os.path.join(DATA_DIR, "bootstrap_cloud_init") with open(expect_path) as f: expect_cloud_init = serializer.load(f.read()) if update: with open(expect_path, 'w') as f: f.write(data) self.assertEquals(serializer.load(data), expect_cloud_init) return True self.ec2.run_instances( image_id="ami-default", instance_type="m1.small", max_count=1, min_count=1, security_groups=["juju-moon", "juju-moon-0"], availability_zone=None, user_data=MATCH(verify_user_data)) @inlineCallbacks def test_launch_bootstrap(self): """The provider bootstrap can launch a bootstrap/zookeeper machine.""" log = self.capture_logging("juju.common", level=logging.DEBUG) self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed("")) self._mock_verify() self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group(0) self._mock_launch_utils() self._mock_launch() self.mocker.result(succeed([])) self._mock_save() self.mocker.replay() provider = self.get_provider() constraints = yield 
get_constraints(["instance-type=m1.small"]) yield provider.bootstrap(constraints) log_text = log.getvalue() self.assertIn("Launching juju bootstrap instance", log_text) self.assertNotIn("previously bootstrapped", log_text) @inlineCallbacks def test_launch_bootstrap_existing_provider_group(self): """ When launching a bootstrap instance the provider will use an existing provider instance group. """ self.capture_logging("juju.ec2") self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed("")) self._mock_verify() self.ec2.describe_security_groups() self.mocker.result(succeed([ SecurityGroup("juju-%s" % self.env_name, "")])) self._mock_create_machine_group(0) self._mock_launch_utils() self._mock_launch() self.mocker.result(succeed([])) self._mock_save() self.mocker.replay() provider = self.get_provider() constraints = yield get_constraints(["instance-type=m1.small"]) yield provider.bootstrap(constraints) @inlineCallbacks def test_run_with_loaded_state(self): """ If the provider bootstrap is run when there is already a running bootstrap instance, it will just return the existing machine. """ state = serializer.dump({"zookeeper-instances": ["i-foobar"]}) self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed(state)) self.ec2.describe_instances("i-foobar") self.mocker.result(succeed([self.get_instance("i-foobar")])) self.mocker.replay() log = self.capture_logging("juju.common") provider = self.get_provider() constraints = yield get_constraints(["instance-type=m1.small"]) machines = yield provider.bootstrap(constraints) (machine,) = machines self.assertTrue(isinstance(machine, EC2ProviderMachine)) self.assertEqual(machine.instance_id, "i-foobar") self.assertEquals( log.getvalue(), "juju environment previously bootstrapped.\n") @inlineCallbacks def test_run_with_launch(self): """ The provider bootstrap will launch an instance when run if there is no existing instance. """ self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed("")) self._mock_verify() self.ec2.describe_security_groups() self.mocker.result(succeed([ SecurityGroup("juju-%s" % self.env_name, "")])) self._mock_create_machine_group(0) self._mock_launch_utils() self._mock_launch() self.mocker.result(succeed([self.get_instance("i-foobar")])) self._mock_save() self.mocker.replay() provider = self.get_provider() constraints = yield get_constraints(["instance-type=m1.small"]) machines = yield provider.bootstrap(constraints) (machine,) = machines self.assert_machine(machine, "i-foobar", "") juju-0.7.orig/juju/providers/ec2/tests/test_files.py0000644000000000000000000001757312135220114020746 0ustar 00000000000000from cStringIO import StringIO import time from twisted.internet.defer import succeed, fail from twisted.web.error import Error from txaws.s3.exception import S3Error from OpenSSL.SSL import Error as SSLError from juju.lib.mocker import MATCH from juju.lib.testing import TestCase from juju.errors import FileNotFound, SSLVerificationError from juju.providers.ec2.tests.common import EC2TestMixin class FileStorageTestCase(EC2TestMixin, TestCase): def get_storage(self): provider = self.get_provider() storage = provider.get_file_storage() return storage def test_put_file(self): """ A file can be put in the storage. 
""" content = "blah blah" control_bucket = self.get_config()["control-bucket"] self.s3.put_object( control_bucket, "pirates/content.txt", content) self.mocker.result(succeed("")) self.mocker.replay() storage = self.get_storage() d = storage.put("pirates/content.txt", StringIO(content)) def validate_result(result): self.assertIdentical(result, "") d.addCallback(validate_result) return d def test_put_file_unicode(self): """ A file can be put in the storage with a unicode key, will be implicitly converted to a string. The reason for this conversion is that the txaws will raise an exception on unicode strings passed as keys. """ content = "blah blah" control_bucket = self.get_config()["control-bucket"] self.s3.put_object( control_bucket, "\xe2\x99\xa3\xe2\x99\xa6\xe2\x99\xa5\xe2\x99\xa0.txt", content) self.mocker.result(succeed("")) self.mocker.replay() storage = self.get_storage() d = storage.put(u"\u2663\u2666\u2665\u2660.txt", StringIO(content)) def validate_result(result): self.assertIdentical(result, "") d.addCallback(validate_result) return d def test_put_file_no_bucket(self): """The buket will be created if it doesn't exist yet""" content = "blah blah" control_bucket = self.get_config()["control-bucket"] self.s3.put_object( control_bucket, "pirates/content.txt", content) error = Error("404", "Not Found") self.mocker.result(fail(error)) self.s3.create_bucket(control_bucket) self.mocker.result(succeed(None)) self.s3.put_object( control_bucket, "pirates/content.txt", content) self.mocker.result(succeed("")) self.mocker.replay() storage = self.get_storage() d = storage.put(u"pirates/content.txt", StringIO(content)) def validate_result(result): self.assertIdentical(result, "") d.addCallback(validate_result) return d def verify_strange_put_error(self, error): """Weird errors? 
don't even try""" content = "blah blah" control_bucket = self.get_config()["control-bucket"] self.s3.put_object( control_bucket, "pirates/content.txt", content) self.mocker.result(fail(error)) self.mocker.replay() storage = self.get_storage() d = storage.put(u"pirates/content.txt", StringIO(content)) self.assertFailure(d, type(error)) return d def test_put_file_unknown_error(self): return self.verify_strange_put_error(Exception("cellosticks")) def test_get_url(self): """A url can be generated for any stored file.""" self.mocker.reset() # Freeze time for the hmac comparison self.patch(time, "time", lambda: 1313469969.311376) storage = self.get_storage() url = storage.get_url("pirates/content.txt") self.assertTrue(url.startswith( "https://s3.amazonaws.com/moon/pirates/content.txt?")) params = url[url.index("?") + 1:].split("&") self.assertEqual( sorted(params), ["AWSAccessKeyId=0f62e973d5f8", "Expires=1628829969", "Signature=8A%2BF4sk48OmJ8xfPoOY7U0%2FacvM%3D"]) def test_get_url_unicode(self): """A url can be generated for *any* stored file.""" self.mocker.reset() # Freeze time for the hmac comparison self.patch(time, "time", lambda: 1315469969.311376) storage = self.get_storage() url = storage.get_url(u"\u2663\u2666\u2665\u2660.txt") self.assertTrue(url.startswith( "https://s3.amazonaws.com/moon/" "%E2%99%A3%E2%99%A6%E2%99%A5%E2%99%A0.txt")) params = url[url.index("?") + 1:].split("&") self.assertEqual( sorted(params), ["AWSAccessKeyId=0f62e973d5f8", "Expires=1630829969", "Signature=bbmdpkLqmrY4ebc2eoCJgt95ojg%3D"]) def test_get_file(self): """Retrieving a file from storage returns a temporary file.""" content = "blah blah" control_bucket = self.get_config()["control-bucket"] self.s3.get_object( control_bucket, "pirates/content.txt") self.mocker.result(succeed(content)) self.mocker.replay() storage = self.get_storage() d = storage.get("pirates/content.txt") def validate_result(result): self.assertEqual(result.read(), content) d.addCallback(validate_result) return d def test_get_file_unicode(self): """Retrieving a file with a unicode object, will refetch with a utf8 interpretation.""" content = "blah blah" control_bucket = self.get_config()["control-bucket"] def match_string(s): self.assertEqual( s, "\xe2\x99\xa3\xe2\x99\xa6\xe2\x99\xa5\xe2\x99\xa0.txt") self.assertFalse(isinstance(s, unicode)) return True self.s3.get_object(control_bucket, MATCH(match_string)) self.mocker.result(succeed(content)) self.mocker.replay() storage = self.get_storage() d = storage.get(u"\u2663\u2666\u2665\u2660.txt") def validate_result(result): self.assertEqual(result.read(), content) d.addCallback(validate_result) return d def test_get_file_nonexistant(self): """Retrieving a nonexistant file raises a file not found error.""" control_bucket = self.get_config()["control-bucket"] file_name = "pirates/ship.txt" error = Error("404", "Not Found") self.s3.get_object(control_bucket, file_name) self.mocker.result(fail(error)) self.mocker.replay() storage = self.get_storage() d = storage.get(file_name) self.failUnlessFailure(d, FileNotFound) def validate_error_message(result): self.assertEqual( result.path, "s3://%s/%s" % (control_bucket, file_name)) d.addCallback(validate_error_message) return d def test_get_file_failed_ssl_verification(self): """SSL error is handled to be clear rather than backtrace""" control_bucket = self.get_config()["control-bucket"] file_name = "pirates/ship.txt" error = SSLError() self.s3.get_object(control_bucket, file_name) self.mocker.result(fail(error)) self.mocker.replay() storage = 
self.get_storage() d = storage.get(file_name) self.failUnlessFailure(d, SSLVerificationError) return d def test_get_file_error(self): """ An unexpected error from s3 on file retrieval is exposed via the api. """ control_bucket = self.get_config()["control-bucket"] file_name = "pirates/ship.txt" self.s3.get_object( control_bucket, file_name) self.mocker.result(fail(S3Error("", 503))) self.mocker.replay() storage = self.get_storage() d = storage.get(file_name) self.failUnlessFailure(d, S3Error) return d juju-0.7.orig/juju/providers/ec2/tests/test_findzookeeper.py0000644000000000000000000001200612135220114022472 0ustar 00000000000000from twisted.internet.defer import fail, succeed from txaws.ec2.exception import EC2Error from txaws.s3.exception import S3Error from juju.errors import EnvironmentNotFound from juju.lib.serializer import dump from juju.lib.testing import TestCase from juju.providers.ec2.machine import EC2ProviderMachine from juju.providers.ec2.tests.common import EC2TestMixin def _invalid_id_error(): e = EC2Error("", 400) e.errors = [{"Code": "InvalidInstanceID.NotFound", "Message": "blah i-abef014589 blah"}] return e class EC2FindZookeepersTest(EC2TestMixin, TestCase): def mock_load_state(self, result): self.s3.get_object(self.env_name, "provider-state") self.mocker.result(result) def assert_no_environment(self): provider = self.get_provider() d = provider.get_zookeeper_machines() self.failUnlessFailure(d, EnvironmentNotFound) return d def verify_no_environment(self, load_result): self.mock_load_state(load_result) self.mocker.replay() return self.assert_no_environment() def test_no_state(self): """ When loading saved state from S3, the provider method gracefully handles the scenario where there is no saved state. """ error = S3Error("", 404) error.errors = [{"Code": "NoSuchKey"}] return self.verify_no_environment(fail(error)) def test_empty_state(self): """ When loading saved state from S3, the provider method gracefully handles the scenario where there is no saved zookeeper state. """ return self.verify_no_environment(succeed(dump([]))) def test_no_hosts(self): """ If the saved state from s3 exists, but has no zookeeper hosts, the provider method correctly detects this and raises EnvironmentNotFound. """ return self.verify_no_environment(succeed(dump({"abc": 123}))) def test_machines_not_running(self): """ If the saved state exists but only contains zookeeper hosts that are not actually running, the provider method detects this and raises EnvironmentNotFound. 
""" self.mock_load_state(succeed(dump({"zookeeper-instances": ["i-x"]}))) self.ec2.describe_instances("i-x") self.mocker.result(succeed([])) self.mocker.replay() return self.assert_no_environment() def check_good_instance_state(self, state): self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed(dump( {"zookeeper-instances": ["i-foobar"]}))) self.ec2.describe_instances("i-foobar") self.mocker.result(succeed([self.get_instance("i-foobar", state)])) self.mocker.replay() provider = self.get_provider() d = provider.get_zookeeper_machines() def verify(machines): (machine,) = machines self.assertEquals(machine.instance_id, "i-foobar") self.assertTrue(isinstance(machine, EC2ProviderMachine)) d.addCallback(verify) return d def test_pending_ok(self): return self.check_good_instance_state("pending") def test_running_ok(self): return self.check_good_instance_state("running") def test_eventual_success(self): """ When the S3 state contains valid zookeeper hosts, return a one-element list containing the first one encountered. """ self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed(dump( {"zookeeper-instances": ["i-abef014589", "i-amnotyours", "i-amdead", "i-amok", "i-amtoo"]}))) # Zk instances are checked individually to handle invalid ids correctly self.ec2.describe_instances("i-abef014589") self.mocker.result(fail(_invalid_id_error())) self.ec2.describe_instances("i-amnotyours") self.mocker.result(succeed([ self.get_instance("i-amnotyours", groups=["bad"])])) self.ec2.describe_instances("i-amdead") self.mocker.result(succeed([ self.get_instance("i-amnotyours", "terminated")])) self.ec2.describe_instances("i-amok") self.mocker.result(succeed([self.get_instance("i-amok", "pending")])) self.ec2.describe_instances("i-amtoo") self.mocker.result(succeed([self.get_instance("i-amtoo", "running")])) self.mocker.replay() provider = self.get_provider() d = provider.get_zookeeper_machines() def verify_machines(machines): (foobaz, foobop) = machines self.assertEquals(foobaz.instance_id, "i-amok") self.assertTrue(isinstance(foobaz, EC2ProviderMachine)) self.assertEquals(foobop.instance_id, "i-amtoo") self.assertTrue(isinstance(foobop, EC2ProviderMachine)) d.addCallback(verify_machines) return d juju-0.7.orig/juju/providers/ec2/tests/test_getmachines.py0000644000000000000000000001376412135220114022131 0ustar 00000000000000from twisted.internet.defer import fail, succeed from txaws.ec2.exception import EC2Error from juju.errors import MachinesNotFound, ProviderInteractionError from juju.lib.testing import TestCase from .common import EC2TestMixin class SomeError(Exception): pass class GetMachinesTest(EC2TestMixin, TestCase): def assert_not_found(self, d, instance_ids): self.assertFailure(d, MachinesNotFound) def verify(error): self.assertEquals(error.instance_ids, instance_ids) d.addCallback(verify) return d def test_get_all_filters(self): """ The machine iteration api of the provider should list all running machines associated to the provider. 
""" self.ec2.describe_instances() self.mocker.result(succeed([ self.get_instance("i-amrunning", dns_name="x1.example.com"), self.get_instance("i-amdead", "terminated"), self.get_instance("i-amalien", groups=["other"]), self.get_instance("i-ampending", "pending")])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines() def verify(result): (running, pending,) = result self.assert_machine(running, "i-amrunning", "x1.example.com") self.assert_machine(pending, "i-ampending", "") d.addCallback(verify) return d def test_get_all_no_results(self): self.ec2.describe_instances() self.mocker.result(succeed([])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines() def verify(result): self.assertEquals(result, []) d.addCallback(verify) return d def test_get_some_bad_state(self): self.ec2.describe_instances("i-amfine", "i-amdead") self.mocker.result(succeed([ self.get_instance("i-amfine", dns_name="x1.example.com"), self.get_instance("i-amdead", "terminated")])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-amfine", "i-amdead"]) return self.assert_not_found(d, ["i-amdead"]) def test_get_some_bad_group(self): self.ec2.describe_instances("i-amfine", "i-amalien") self.mocker.result(succeed([ self.get_instance("i-amfine", dns_name="x1.example.com"), self.get_instance("i-amalien", groups=["random"])])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-amfine", "i-amalien"]) return self.assert_not_found(d, ["i-amalien"]) def test_get_some_too_few(self): self.ec2.describe_instances("i-amfine", "i-ammissing") self.mocker.result(succeed([self.get_instance("i-amfine")])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-amfine", "i-ammissing"]) return self.assert_not_found(d, ["i-ammissing"]) def test_get_some_success(self): self.ec2.describe_instances("i-amrunning", "i-ampending") self.mocker.result(succeed([ self.get_instance("i-amrunning", dns_name="x1.example.com"), self.get_instance("i-ampending", "pending")])) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-amrunning", "i-ampending"]) def verify(result): (running, pending,) = result self.assert_machine(running, "i-amrunning", "x1.example.com") self.assert_machine(pending, "i-ampending", "") d.addCallback(verify) return d def test_describe_known_failure(self): self.ec2.describe_instances("i-acf059", "i-amfine", "i-920fda") error = EC2Error("", 400) error.errors = [{ "Code": "InvalidInstanceID.NotFound", "Message": "blah i-acf059, i-920fda blah"}] self.mocker.result(fail(error)) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-acf059", "i-amfine", "i-920fda"]) return self.assert_not_found(d, ["i-acf059", "i-920fda"]) def test_describe_unknown_failure(self): self.ec2.describe_instances("i-brokeit", "i-msorry") self.mocker.result(fail( self.get_ec2_error("splat! kerpow!", "unhelpful noises (%r)"))) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-brokeit", "i-msorry"]) self.assertFailure(d, ProviderInteractionError) def verify(error): self.assertEquals( str(error), "Unexpected EC2Error getting machines i-brokeit, i-msorry: " "unhelpful noises ('splat! 
kerpow!')") d.addCallback(verify) return d def test_describe_error(self): self.ec2.describe_instances("i-amdeadly") self.mocker.result(fail(SomeError())) self.mocker.replay() provider = self.get_provider() d = provider.get_machines(["i-amdeadly"]) self.assertFailure(d, SomeError) return d def test_get_one_error(self): self.ec2.describe_instances("i-amfatal") self.mocker.result(fail(SomeError())) self.mocker.replay() provider = self.get_provider() d = provider.get_machine("i-amfatal") self.assertFailure(d, SomeError) return d def test_get_one_not_found(self): self.ec2.describe_instances("i-amgone") self.mocker.result(succeed([])) self.mocker.replay() provider = self.get_provider() d = provider.get_machine("i-amgone") return self.assert_not_found(d, ["i-amgone"]) def test_get_one(self): self.ec2.describe_instances("i-amgood") self.mocker.result(succeed([self.get_instance("i-amgood")])) self.mocker.replay() provider = self.get_provider() d = provider.get_machine("i-amgood") d.addCallback(self.assert_machine, "i-amgood", "") return d juju-0.7.orig/juju/providers/ec2/tests/test_launch.py0000644000000000000000000002772312135220114021114 0ustar 00000000000000import os from twisted.internet.defer import inlineCallbacks, succeed from txaws.ec2.model import Instance, SecurityGroup from juju.errors import EnvironmentNotFound, ProviderError from juju.providers.ec2.machine import EC2ProviderMachine from juju.lib import serializer from juju.lib.testing import TestCase from juju.lib.mocker import MATCH from .common import EC2TestMixin, EC2MachineLaunchMixin, get_constraints DATA_DIR = os.path.join(os.path.abspath(os.path.dirname(__file__)), "data") class EC2MachineLaunchTest(EC2TestMixin, EC2MachineLaunchMixin, TestCase): @inlineCallbacks def setUp(self): yield super(EC2MachineLaunchTest, self).setUp() self.constraints = yield get_constraints([]) self.gen_constraints = yield get_constraints(["cpu=20", "mem=7168"]) self.gen_constraints = self.gen_constraints.with_series("dribbly") self.ec2_constraints = yield get_constraints( ["instance-type=cc2.8xlarge", "ec2-zone=b"]) self.ec2_constraints = self.ec2_constraints.with_series("vast") def _mock_launch(self, instance, expect_ami="ami-default", expect_instance_type="m1.small", expect_availability_zone=None, cloud_init="launch_cloud_init"): def verify_user_data(data): expect_path = os.path.join(DATA_DIR, cloud_init) with open(expect_path) as f: expect_cloud_init = serializer.load(f.read()) self.assertEquals(serializer.load(data), expect_cloud_init) return True self.ec2.run_instances( image_id=expect_ami, instance_type=expect_instance_type, max_count=1, min_count=1, security_groups=["juju-moon", "juju-moon-1"], availability_zone=expect_availability_zone, user_data=MATCH(verify_user_data)) self.mocker.result(succeed([instance])) def test_bad_data(self): self.mocker.replay() d = self.get_provider().start_machine({}) self.assertFailure(d, ProviderError) def verify(error): self.assertEquals( str(error), "Cannot launch a machine without specifying a machine-id") d.addCallback(verify) return d def test_provider_launch(self): """ The provider can be used to launch a machine with a minimal set of required packages, repositories, and security groups. 
""" self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch(self.get_instance("i-foobar")) self.mocker.replay() def verify_result(result): (machine,) = result self.assert_machine(machine, "i-foobar", "") provider = self.get_provider() d = provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) d.addCallback(verify_result) return d @inlineCallbacks def test_provider_launch_requires_constraints(self): self.mocker.replay() provider = self.get_provider() d = provider.start_machine({"machine-id": "1"}) e = yield self.assertFailure(d, ProviderError) self.assertEquals( str(e), "Cannot launch a machine without specifying constraints") @inlineCallbacks def test_provider_launch_using_branch(self): """Can use a juju branch to launch a machine""" self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch( self.get_instance("i-foobar"), cloud_init="launch_cloud_init_branch") self.mocker.replay() provider = self.get_provider() provider.config["juju-origin"] = "lp:~wizard/juju-juicebar" machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assert_machine(machines[0], "i-foobar", "") @inlineCallbacks def test_provider_launch_using_ppa(self): """Can use the juju ppa to launch a machine""" self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch( self.get_instance("i-foobar"), cloud_init="launch_cloud_init_ppa") self.mocker.replay() provider = self.get_provider() provider.config["juju-origin"] = "ppa" machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assert_machine(machines[0], "i-foobar", "") @inlineCallbacks def test_provider_launch_using_explicit_distro(self): """Can set juju-origin explicitly to `distro`""" self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch( self.get_instance("i-foobar"), cloud_init="launch_cloud_init") self.mocker.replay() provider = self.get_provider() provider.config["juju-origin"] = "distro" machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assert_machine(machines[0], "i-foobar", "") @inlineCallbacks def test_provider_launch_existing_security_group(self): """Verify that the launch works if the env security group exists""" instance = Instance("i-foobar", "running", dns_name="x1.example.com") security_group = SecurityGroup("juju-moon", "some description") self.ec2.describe_security_groups() self.mocker.result(succeed([security_group])) self._mock_create_machine_group("1") self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch(instance) self.mocker.replay() provider = self.get_provider() machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assertEqual(len(machines), 1) self.assertTrue(isinstance(machines[0], EC2ProviderMachine)) self.assertEqual(machines[0].instance_id, instance.instance_id) @inlineCallbacks def 
test_provider_launch_existing_machine_security_group(self): """Verify that the launch works if the machine security group exists""" instance = Instance("i-foobar", "running", dns_name="x1.example.com") machine_group = SecurityGroup( "juju-moon-1", "some description") self.ec2.describe_security_groups() self.mocker.result(succeed([machine_group])) self._mock_create_group() self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch(instance) self.mocker.replay() provider = self.get_provider() machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assertEqual(len(machines), 1) self.assertTrue(isinstance(machines[0], EC2ProviderMachine)) self.assertEqual(machines[0].instance_id, instance.instance_id) @inlineCallbacks def test_provider_launch_existing_machine_security_group_is_active(self): """Verify launch proceeds even if the machine group is still active. This condition occurs when there is a corresponding machine in that security group, generally because it is still shutting down.""" instance = Instance("i-foobar", "running", dns_name="x1.example.com") machine_group = SecurityGroup( "juju-moon-1", "some description") self.ec2.describe_security_groups() self.mocker.result(succeed([machine_group])) self._mock_create_group() self._mock_launch_utils() self._mock_get_zookeeper_hosts() self._mock_launch(instance) self.mocker.replay() provider = self.get_provider() machines = yield provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assertEqual(len(machines), 1) self.assertTrue(isinstance(machines[0], EC2ProviderMachine)) self.assertEqual(machines[0].instance_id, instance.instance_id) def test_launch_with_no_juju_s3_state(self): """ Attempting to launch without any juju saved state means we can't provide a way for a launched instance to connect to the zookeeper shared state. An EnvironmentNotFound error is raised instead of allowing this. """ self._mock_get_zookeeper_hosts(False) self.mocker.replay() provider = self.get_provider() d = provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assertFailure(d, EnvironmentNotFound) return d def test_launch_with_no_juju_zookeeper_hosts(self): """ Attempting to launch without any juju zookeeper hosts means we can't provide a way for a launched instance to connect to the zookeeper shared state. An EnvironmentNotFound error is raised instead of allowing this. 
""" self._mock_get_zookeeper_hosts([]) self.mocker.replay() provider = self.get_provider() d = provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) self.assertFailure(d, EnvironmentNotFound) return d def test_launch_options_from_config_region(self): self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils( ami_name="ami-regional", get_ami_args=( "splendid", "amd64", "somewhere-else-1", False, False)) self._mock_get_zookeeper_hosts() self._mock_launch(self.get_instance("i-foobar"), "ami-regional") self.mocker.replay() provider = self.get_provider() provider.config["region"] = "somewhere-else-1" return provider.start_machine({ "machine-id": "1", "constraints": self.constraints}) def test_launch_options_ec2_constraints(self): self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils( ami_name="ami-fancy-cluster", get_ami_args=("vast", "amd64", "us-east-1", True, False)) self._mock_get_zookeeper_hosts() self._mock_launch( self.get_instance("i-foobar"), "ami-fancy-cluster", expect_instance_type="cc2.8xlarge", expect_availability_zone="us-east-1b") self.mocker.replay() provider = self.get_provider() return provider.start_machine({ "machine-id": "1", "constraints": self.ec2_constraints}) def test_launch_options_generic_constraints(self): self.ec2.describe_security_groups() self.mocker.result(succeed([])) self._mock_create_group() self._mock_create_machine_group("1") self._mock_launch_utils( get_ami_args=("dribbly", "amd64", "us-east-1", False, False)) self._mock_get_zookeeper_hosts() self._mock_launch( self.get_instance("i-foobar"), "ami-default", expect_instance_type="c1.xlarge") self.mocker.replay() provider = self.get_provider() return provider.start_machine({ "machine-id": "1", "constraints": self.gen_constraints}) juju-0.7.orig/juju/providers/ec2/tests/test_machine.py0000644000000000000000000000134412135220114021235 0ustar 00000000000000from txaws.ec2.model import Instance from juju.lib.testing import TestCase from juju.providers.ec2.machine import ( EC2ProviderMachine, machine_from_instance) class EC2ProviderMachineTest(TestCase): def test_machine_from_instance(self): instance = Instance( "i-foobar", "oscillating", dns_name="public", private_dns_name="private") machine = machine_from_instance(instance) self.assertTrue(isinstance(machine, EC2ProviderMachine)) self.assertEquals(machine.instance_id, "i-foobar") self.assertEquals(machine.dns_name, "public") self.assertEquals(machine.private_dns_name, "private") self.assertEquals(machine.state, "oscillating") juju-0.7.orig/juju/providers/ec2/tests/test_provider.py0000644000000000000000000002640212135220114021465 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju import errors from juju.environment.errors import EnvironmentsConfigError from juju.errors import ConstraintError from juju.lib.testing import TestCase from juju.providers import ec2 as _mod_provider from juju.providers.ec2 import MachineProvider from juju.providers.ec2.files import FileStorage import logging from .common import EC2TestMixin class ProviderTestCase(EC2TestMixin, TestCase): def setUp(self): super(ProviderTestCase, self).setUp() self.mocker.replay() def test_default_service_factory_construction(self): """ Ensure that the AWSServiceRegion gets called by the MachineProvider with the right arguments. 
This explores the mocking which is already happening within EC2TestMixin. """ expected_kwargs = {"access_key": "foo", "secret_key": "bar", "ec2_uri": "https://ec2.us-east-1.amazonaws.com", "s3_uri": ""} MachineProvider(self.env_name, {"access-key": "foo", "secret-key": "bar"}) self.assertEquals(self.service_factory_kwargs, expected_kwargs) def test_service_factory_construction(self): """ Ensure that the AWSServiceRegion gets called by the MachineProvider with the right arguments when they are present in the configuration. This explores the mocking which is already happening within EC2TestMixin. """ config = {"access-key": "secret-123", "secret-key": "secret-abc", "ec2-uri": "the-ec2-uri", "s3-uri": "the-ec2-uri"} expected_kwargs = {} for key, value in config.iteritems(): expected_kwargs[key.replace("-", "_")] = value MachineProvider(self.env_name, config) self.assertEquals(self.service_factory_kwargs, expected_kwargs) def test_service_factory_construction_region_provides_ec2_uri(self): """ The EC2 service URI can be dereferenced by region name alone. This explores the mocking which is already happening within EC2TestMixin. """ config = {"access-key": "secret-123", "secret-key": "secret-abc", "s3-uri": "the-ec2-uri", "region": "eu-west-1"} expected_kwargs = {} for key, value in config.iteritems(): expected_kwargs[key.replace("-", "_")] = value del expected_kwargs["region"] expected_kwargs["ec2_uri"] = "https://ec2.eu-west-1.amazonaws.com" MachineProvider(self.env_name, config) self.assertEquals(self.service_factory_kwargs, expected_kwargs) def test_provider_attributes(self): """ The provider environment name and config should be available as attributes on the provider. """ provider = self.get_provider() self.assertEqual(provider.environment_name, self.env_name) self.assertEqual(provider.config.get("type"), "ec2") self.assertEqual(provider.provider_type, "ec2") def test_get_file_storage(self): """The file storage is accessible via the machine provider.""" provider = self.get_provider() storage = provider.get_file_storage() self.assertTrue(isinstance(storage, FileStorage)) def test_config_serialization(self): """ The provider configuration can be serialized to yaml. """ keys_path = self.makeFile("my-keys") config = {"access-key": "secret-123", "secret-key": "secret-abc", "authorized-keys-path": keys_path} expected_serialization = config.copy() expected_serialization.pop("authorized-keys-path") expected_serialization["authorized-keys"] = "my-keys" provider = MachineProvider(self.env_name, config) serialized = provider.get_serialization_data() self.assertEqual(serialized, expected_serialization) def test_config_environment_extraction(self): """ The provider serialization loads keys as needed from the environment. Variables from the configuration take precedence over those from the environment when serializing. 
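A sketch of the fallback order this test establishes (resolve_credentials is a hypothetical helper; the AWS_* variable names are the ones the test itself sets):

    import os

    def resolve_credentials(config, environ=None):
        environ = os.environ if environ is None else environ
        # Explicit configuration wins; the standard AWS environment
        # variables fill any gaps when serializing.
        access = config.get("access-key") or environ.get("AWS_ACCESS_KEY_ID")
        secret = config.get("secret-key") or environ.get("AWS_SECRET_ACCESS_KEY")
        return access, secret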
""" config = {"access-key": "secret-12345", "secret-key": "secret-abc", "authorized-keys": "0123456789abcdef"} environ = { "AWS_SECRET_ACCESS_KEY": "secret-abc", "AWS_ACCESS_KEY_ID": "secret-123"} self.change_environment(**environ) provider = MachineProvider( self.env_name, {"access-key": "secret-12345", "authorized-keys": "0123456789abcdef"}) serialized = provider.get_serialization_data() self.assertEqual(config, serialized) def test_ssl_hostname_verification_config(self): """ Tests that SSL hostname verification is enabled in txaws when the config setting is set to true """ self.patch(_mod_provider, "ssl", object()) config = {"access-key": "secret-12345", "secret-key": "secret-abc", "ssl-hostname-verification": True} provider = MachineProvider(self.env_name, config) self.assertTrue( provider._service.ec2_endpoint.ssl_hostname_verification) self.assertTrue( provider._service.s3_endpoint.ssl_hostname_verification) def test_ssl_hostname_verification_unsupported(self): """Error is raised if verification is enabled but unsupported""" self.patch(_mod_provider, "ssl", None) self.mocker.reset() config = {"access-key": "secret-12345", "secret-key": "secret-abc", "ssl-hostname-verification": True} self.assertRaises(errors.SSLVerificationUnsupported, MachineProvider, self.env_name, config) def test_warn_on_no_ssl_hostname_verification(self): """ We should warn the user whenever they are not using hostname verification. """ ssl = self.patch(_mod_provider, "ssl", object()) config = {"access-key": "secret-12345", "secret-key": "secret-abc", "ssl-hostname-verification": False} output = self.capture_logging("juju.ec2", level=logging.WARN) provider = MachineProvider(self.env_name, config) self.assertIn('EC2 API calls encrypted but not authenticated', output.getvalue()) self.assertIn('S3 API calls encrypted but not authenticated', output.getvalue()) self.assertIn( 'Ubuntu Cloud Image lookups encrypted but not authenticated', output.getvalue()) self.assertIn('ssl-hostname-verification is disabled', output.getvalue()) if ssl: self.assertFalse( provider._service.ec2_endpoint.ssl_hostname_verification) self.assertFalse( provider._service.s3_endpoint.ssl_hostname_verification) def test_get_legacy_config_keys(self): provider = MachineProvider(self.env_name, { "access-key": "foo", "secret-key": "bar", # Note: these keys *will* at some stage be considered legacy keys; # they're included here to make sure the tests are updated when we # make that change. "default-series": "foo", "placement": "bar"}) self.assertEquals(provider.get_legacy_config_keys(), set()) # These keys are not valid on Amazon EC2... provider.config.update({ "default-instance-type": "baz", "default-image-id": "qux"}) self.assertEquals(provider.get_legacy_config_keys(), set(( "default-instance-type", "default-image-id"))) # ...but they still are when using private clouds. 
provider.config.update({"ec2-uri": "anything"}) self.assertEquals(provider.get_legacy_config_keys(), set()) class ProviderConstraintsTestCase(TestCase): def constraint_set(self): provider = MachineProvider( "some-ec2-env", {"access-key": "f", "secret-key": "x"}) return provider.get_constraint_set() @inlineCallbacks def assert_invalid(self, msg, *strs): cs = yield self.constraint_set() e = self.assertRaises(ConstraintError, cs.parse, strs) self.assertEquals(str(e), msg) @inlineCallbacks def test_constraints(self): cs = yield self.constraint_set() self.assertEquals(cs.parse([]), { "provider-type": "ec2", "ubuntu-series": None, "instance-type": None, "ec2-zone": None, "arch": "amd64", "cpu": 1.0, "mem": 512.0}) self.assertEquals(cs.parse(["ec2-zone=X", "instance-type=m1.small"]), { "provider-type": "ec2", "ubuntu-series": None, "instance-type": "m1.small", "ec2-zone": "x", "arch": "amd64", "cpu": None, "mem": None}) yield self.assert_invalid( "Bad 'ec2-zone' constraint '7': expected single ascii letter", "ec2-zone=7") yield self.assert_invalid( "Bad 'ec2-zone' constraint 'blob': expected single ascii letter", "ec2-zone=blob") yield self.assert_invalid( "Bad 'instance-type' constraint 'qq1.moar': unknown instance type", "instance-type=qq1.moar") yield self.assert_invalid( "Ambiguous constraints: 'cpu' overlaps with 'instance-type'", "instance-type=m1.small", "cpu=1") yield self.assert_invalid( "Ambiguous constraints: 'instance-type' overlaps with 'mem'", "instance-type=m1.small", "mem=2G") @inlineCallbacks def test_satisfy_zone_constraint(self): cs = yield self.constraint_set() a = cs.parse(["ec2-zone=a"]).with_series("series") b = cs.parse(["ec2-zone=b"]).with_series("series") self.assertTrue(a.can_satisfy(a)) self.assertTrue(b.can_satisfy(b)) self.assertFalse(a.can_satisfy(b)) self.assertFalse(b.can_satisfy(a)) @inlineCallbacks def xtest_non_amazon_constraints(self): # Disabled because the ec2 provider requires these keys (instance-type # and ec2-zone) provider = MachineProvider("some-non-ec2-env", { "ec2-uri": "blah", "secret-key": "foobar", "access-key": "bar"}) cs = yield provider.get_constraint_set() self.assertEquals(cs.parse([]), { "provider-type": "ec2", "ubuntu-series": None}) class FailCreateTest(TestCase): def test_conflicting_authorized_keys_options(self): """ We can't handle two different authorized keys options, so deny constructing an environment that way. """ config = {} config["authorized-keys"] = "File content" config["authorized-keys-path"] = "File path" error = self.assertRaises(EnvironmentsConfigError, MachineProvider, "some-env-name", config) self.assertEquals( str(error), "Environment config cannot define both authorized-keys and " "authorized-keys-path. 
Pick one!") juju-0.7.orig/juju/providers/ec2/tests/test_securitygroup.py0000644000000000000000000001535412135220114022563 0ustar 00000000000000import logging from twisted.internet.defer import fail, succeed, inlineCallbacks from txaws.ec2.model import IPPermission, SecurityGroup from juju.errors import ProviderInteractionError from juju.lib.testing import TestCase from juju.machine import ProviderMachine from juju.providers.ec2.securitygroup import ( open_provider_port, close_provider_port, get_provider_opened_ports, destroy_environment_security_group) from juju.providers.ec2.tests.common import EC2TestMixin class EC2PortMgmtTest(EC2TestMixin, TestCase): @inlineCallbacks def test_open_provider_port(self): """Verify open port op will use the correct EC2 API.""" log = self.capture_logging("juju.ec2", level=logging.DEBUG) machine = ProviderMachine("i-foobar", "x1.example.com") self.ec2.authorize_security_group( "juju-moon-machine-1", ip_protocol="tcp", from_port="80", to_port="80", cidr_ip="0.0.0.0/0") self.mocker.result(succeed(True)) self.mocker.replay() provider = self.get_provider() yield open_provider_port(provider, machine, "machine-1", 80, "tcp") self.assertIn( "Opened 80/tcp on provider machine 'i-foobar'", log.getvalue()) @inlineCallbacks def test_close_provider_port(self): """Verify close port op will use the correct EC2 API.""" log = self.capture_logging("juju.ec2", level=logging.DEBUG) machine = ProviderMachine("i-foobar", "x1.example.com") self.ec2.revoke_security_group( "juju-moon-machine-1", ip_protocol="tcp", from_port="80", to_port="80", cidr_ip="0.0.0.0/0") self.mocker.result(succeed(True)) self.mocker.replay() provider = self.get_provider() yield close_provider_port(provider, machine, "machine-1", 80, "tcp") self.assertIn( "Closed 80/tcp on provider machine 'i-foobar'", log.getvalue()) @inlineCallbacks def test_get_provider_opened_ports(self): """Verify correct parse of IP perms from describe_security_group.""" self.ec2.describe_security_groups("juju-moon-machine-1") self.mocker.result(succeed([ SecurityGroup( "juju-%s-machine-1" % self.env_name, "a security group name", ips=[ IPPermission("udp", "53", "53", "0.0.0.0/0"), IPPermission("tcp", "80", "80", "0.0.0.0/0"), # The range 8080-8082 will be ignored IPPermission("tcp", "8080", "8082", "0.0.0.0/0"), # Ignore permissions that are not 0.0.0.0/0 IPPermission("tcp", "443", "443", "10.1.2.3") ])])) self.mocker.replay() provider = self.get_provider() machine = ProviderMachine( "i-foobar", "x1.example.com") opened_ports = yield get_provider_opened_ports( provider, machine, "machine-1") self.assertEqual(opened_ports, set([(53, "udp"), (80, "tcp")])) @inlineCallbacks def test_open_provider_port_unknown_instance(self): """Verify open port op will use the correct EC2 API.""" machine = ProviderMachine("i-foobar", "x1.example.com") self.ec2.authorize_security_group( "juju-moon-machine-1", ip_protocol="tcp", from_port="80", to_port="80", cidr_ip="0.0.0.0/0") self.mocker.result(fail(self.get_ec2_error("i-foobar"))) self.mocker.replay() provider = self.get_provider() ex = yield self.assertFailure( open_provider_port(provider, machine, "machine-1", 80, "tcp"), ProviderInteractionError) self.assertEqual( str(ex), "Unexpected EC2Error opening 80/tcp on machine i-foobar: " "The instance ID 'i-foobar' does not exist") @inlineCallbacks def test_close_provider_port_unknown_instance(self): """Verify open port op will use the correct EC2 API.""" machine = ProviderMachine("i-foobar", "x1.example.com") self.ec2.revoke_security_group( 
"juju-moon-machine-1", ip_protocol="tcp", from_port="80", to_port="80", cidr_ip="0.0.0.0/0") self.mocker.result(fail(self.get_ec2_error("i-foobar"))) self.mocker.replay() provider = self.get_provider() ex = yield self.assertFailure( close_provider_port(provider, machine, "machine-1", 80, "tcp"), ProviderInteractionError) self.assertEqual( str(ex), "Unexpected EC2Error closing 80/tcp on machine i-foobar: " "The instance ID 'i-foobar' does not exist") @inlineCallbacks def test_get_provider_opened_ports_unknown_instance(self): """Verify open port op will use the correct EC2 API.""" self.ec2.describe_security_groups("juju-moon-machine-1") self.mocker.result(fail(self.get_ec2_error("i-foobar"))) self.mocker.replay() provider = self.get_provider() machine = ProviderMachine("i-foobar", "x1.example.com") ex = yield self.assertFailure( get_provider_opened_ports(provider, machine, "machine-1"), ProviderInteractionError) self.assertEqual( str(ex), "Unexpected EC2Error getting open ports on machine i-foobar: " "The instance ID 'i-foobar' does not exist") class EC2RemoveGroupsTest(EC2TestMixin, TestCase): @inlineCallbacks def test_destroy_environment_security_group(self): """Verify the deletion of the security group for the environment""" self.ec2.delete_security_group("juju-moon") self.mocker.result(succeed(True)) self.mocker.replay() provider = self.get_provider() destroyed = yield destroy_environment_security_group(provider) self.assertTrue(destroyed) @inlineCallbacks def test_destroy_environment_security_group_missing(self): """Verify ignores errors in deleting the env security group""" log = self.capture_logging(level=logging.DEBUG) self.ec2.delete_security_group("juju-moon") self.mocker.result(fail( self.get_ec2_error( "juju-moon", format="The security group %r does not exist" ))) self.mocker.replay() provider = self.get_provider() destroyed = yield destroy_environment_security_group(provider) self.assertFalse(destroyed) self.assertIn( "Ignoring EC2 error when attempting to delete group " "juju-moon: Error Message: The security group " "'juju-moon' does not exist", log.getvalue()) juju-0.7.orig/juju/providers/ec2/tests/test_shutdown.py0000644000000000000000000002107412135220114021506 0ustar 00000000000000from twisted.internet.defer import succeed, fail, inlineCallbacks from juju.lib.testing import TestCase from juju.providers.ec2.tests.common import ( EC2TestMixin, MATCH_GROUP, Observed, MockInstanceState) from juju.machine import ProviderMachine from juju.errors import MachinesNotFound, ProviderError from juju.providers.ec2.machine import EC2ProviderMachine class SomeError(Exception): pass class EC2ShutdownMachineTest(EC2TestMixin, TestCase): @inlineCallbacks def test_shutdown_machine(self): system_state = MockInstanceState( self, ["i-foobar"], [0], [["running"]]) self.ec2.describe_instances("i-foobar") self.mocker.call(system_state.get_round) self.mocker.count(1) self.ec2.terminate_instances("i-foobar") self.mocker.result(succeed([("i-foobar", "running", "shutting-down")])) self.mocker.replay() machine = EC2ProviderMachine("i-foobar") provider = self.get_provider() machine = yield provider.shutdown_machine(machine) self.assertTrue(isinstance(machine, EC2ProviderMachine)) self.assertEquals(machine.instance_id, "i-foobar") def test_shutdown_machine_invalid_group(self): """ Attempting to shutdown a machine that does not belong to this provider instance raises an exception. 
""" instance = self.get_instance("i-foobar", groups=["whatever"]) self.ec2.describe_instances("i-foobar") self.mocker.result(succeed([instance])) self.mocker.replay() machine = EC2ProviderMachine("i-foobar") provider = self.get_provider() d = provider.shutdown_machine(machine) self.failUnlessFailure(d, MachinesNotFound) def verify(error): self.assertEquals(str(error), "Cannot find machine: i-foobar") d.addCallback(verify) return d def test_shutdown_machine_invalid_machine(self): """ Attempting to shutdown a machine that from a different provider type will raise a `ProviderError`. """ self.mocker.replay() machine = ProviderMachine("i-foobar") provider = self.get_provider() d = provider.shutdown_machine(machine) self.assertFailure(d, ProviderError) def check_error(error): self.assertEquals(str(error), "Can only shut down EC2ProviderMachines; got a " "") d.addCallback(check_error) return d @inlineCallbacks def test_shutdown_machines_none(self): self.mocker.replay() provider = self.get_provider() result = yield provider.shutdown_machines([]) self.assertEquals(result, []) @inlineCallbacks def test_shutdown_machines_some_invalid(self): self.ec2.describe_instances("i-amkillable", "i-amalien", "i-amdead") self.mocker.result(succeed([ self.get_instance("i-amkillable"), self.get_instance("i-amalien", groups=["other"]), self.get_instance("i-amdead", "shutting-down")])) self.mocker.replay() provider = self.get_provider() ex = yield self.assertFailure( provider.shutdown_machines([ EC2ProviderMachine("i-amkillable"), EC2ProviderMachine("i-amalien"), EC2ProviderMachine("i-amdead")]), MachinesNotFound) self.assertEquals(str(ex), "Cannot find machines: i-amalien, i-amdead") @inlineCallbacks def test_shutdown_machines_some_success(self): """Verify that shutting down some machines works. In particular, the environment as a whole is not removed because there's still the environment security group left.""" system_state = MockInstanceState( self, ["i-amkillable", "i-amkillabletoo"], [0, 2], [["running", "running"]]) self.ec2.describe_instances("i-amkillable", "i-amkillabletoo") self.mocker.call(system_state.get_round) self.mocker.count(1) self.ec2.terminate_instances("i-amkillable", "i-amkillabletoo") self.mocker.result(succeed([ ("i-amkillable", "running", "shutting-down"), ("i-amkillabletoo", "running", "shutting-down")])) self.mocker.replay() provider = self.get_provider() machine_1, machine_2 = yield provider.shutdown_machines([ EC2ProviderMachine("i-amkillable"), EC2ProviderMachine("i-amkillabletoo")]) self.assertTrue(isinstance(machine_1, EC2ProviderMachine)) self.assertEquals(machine_1.instance_id, "i-amkillable") self.assertTrue(isinstance(machine_2, EC2ProviderMachine)) self.assertEquals(machine_2.instance_id, "i-amkillabletoo") class EC2DestroyTest(EC2TestMixin, TestCase): @inlineCallbacks def test_destroy_environment(self): """ The destroy_environment operation terminates all running and pending instances associated to the `MachineProvider` instance. 
""" self.s3.put_object("moon", "provider-state", "{}\n") self.mocker.result(succeed(None)) self.ec2.describe_instances() instances = [ self.get_instance("i-canbekilled", machine_id=0), self.get_instance("i-amdead", machine_id=1, state="terminated"), self.get_instance("i-dontbelong", groups=["unknown"]), self.get_instance( "i-canbekilledtoo", machine_id=2, state="pending")] self.mocker.result(succeed(instances)) self.ec2.describe_instances("i-canbekilled", "i-canbekilledtoo") self.mocker.result(succeed([ self.get_instance("i-canbekilled"), self.get_instance("i-canbekilledtoo", state="pending")])) self.ec2.terminate_instances("i-canbekilled", "i-canbekilledtoo") self.mocker.result(succeed([ ("i-canbekilled", "running", "shutting-down"), ("i-canbekilledtoo", "pending", "shutting-down")])) self.ec2.delete_security_group(MATCH_GROUP) deleted_groups = Observed() self.mocker.call(deleted_groups.add) self.mocker.count(1) self.mocker.replay() provider = self.get_provider() machine_1, machine_2 = yield provider.destroy_environment() self.assertTrue(isinstance(machine_1, EC2ProviderMachine)) self.assertEquals(machine_1.instance_id, "i-canbekilled") self.assertTrue(isinstance(machine_2, EC2ProviderMachine)) self.assertEquals(machine_2.instance_id, "i-canbekilledtoo") self.assertEquals( deleted_groups.items, set(["juju-moon"])) @inlineCallbacks def test_s3_failure(self): """Failing to store empty state should not stop us killing machines""" self.s3.put_object("moon", "provider-state", "{}\n") self.mocker.result(fail(SomeError())) self.ec2.describe_instances() self.mocker.result(succeed([self.get_instance("i-canbekilled")])) system_state = MockInstanceState( self, ["i-canbekilled"], [0], [["running"]]) self.ec2.describe_instances("i-canbekilled") self.mocker.call(system_state.get_round) self.mocker.count(1) self.ec2.terminate_instances("i-canbekilled") self.mocker.result(succeed([ ("i-canbekilled", "running", "shutting-down")])) self.ec2.delete_security_group("juju-moon") self.mocker.result(succeed(True)) self.mocker.replay() provider = self.get_provider() machine, = yield provider.destroy_environment() self.assertTrue(isinstance(machine, EC2ProviderMachine)) self.assertEquals(machine.instance_id, "i-canbekilled") @inlineCallbacks def test_shutdown_no_instances(self): """ If there are no instances to shutdown, running the destroy_environment operation does nothing. """ self.s3.put_object("moon", "provider-state", "{}\n") self.mocker.result(succeed(None)) self.ec2.describe_instances() self.mocker.result(succeed([])) self.ec2.delete_security_group("juju-moon") self.mocker.result(fail( self.get_ec2_error( "juju-moon", format="The security group %r does not exist" ))) self.mocker.replay() provider = self.get_provider() result = yield provider.destroy_environment() self.assertEquals(result, []) juju-0.7.orig/juju/providers/ec2/tests/test_state.py0000644000000000000000000001015012135220114020744 0ustar 00000000000000from twisted.internet.defer import succeed, fail from txaws.s3.exception import S3Error from juju.lib import serializer from juju.lib.testing import TestCase from juju.providers.ec2.tests.common import EC2TestMixin class EC2StateTest(TestCase, EC2TestMixin): def setUp(self): EC2TestMixin.setUp(self) super(EC2StateTest, self).setUp() def test_save(self): """ When passed some juju ec2 machine instances and asked to save, the machine, it will serialize the data to an s3 bucket. 
""" instances = [self.get_instance("i-foobar", dns_name="x1.example.com")] state = serializer.dump( {"zookeeper-instances": [[i.instance_id, i.dns_name] for i in instances]}) self.s3.put_object( self.env_name, "provider-state", state), self.mocker.result(succeed(state)) self.mocker.replay() provider = self.get_provider() d = provider.save_state( {"zookeeper-instances": [[i.instance_id, i.dns_name] for i in instances]}) def assert_state(saved_state): self.assertEqual(saved_state, state) d.addCallback(assert_state) return d def test_save_non_existant_bucket(self): """ When saving instance information to S3 the EC2 provider will create a namespaced bucket specific to the provider instance, if it does not already exist. """ instances = [self.get_instance("i-foobar", dns_name="x1.example.com")] state = serializer.dump( {"zookeeper-instances": [[i.instance_id, i.dns_name] for i in instances]}) self.s3.put_object( self.env_name, "provider-state", state), error = S3Error("", 404) error.errors = [{"Code": "NoSuchBucket"}] self.mocker.result(fail(error)) self.s3.create_bucket(self.env_name) self.mocker.result(succeed({})) self.s3.put_object( self.env_name, "provider-state", state), self.mocker.result(succeed(state)) self.mocker.replay() provider = self.get_provider() d = provider.save_state( {"zookeeper-instances": [[i.instance_id, i.dns_name] for i in instances]}) def assert_state(saved_state): self.assertEqual(saved_state, state) d.addCallback(assert_state) return d def test_load(self): """ The provider bootstrap will load and deserialize any saved state from s3. """ self.s3.get_object(self.env_name, "provider-state") self.mocker.result( succeed(serializer.dump({"zookeeper-instances": []}))) self.mocker.replay() provider = self.get_provider() d = provider.load_state() def assert_load_value(value): self.assertEqual(value, {"zookeeper-instances": []}) d.addCallback(assert_load_value) return d def test_load_nonexistant_bucket(self): """ When loading saved state from s3, the system returns False if the s3 control bucket does not exist. """ self.s3.get_object(self.env_name, "provider-state") error = S3Error("", 404) error.errors = [{"Code": "NoSuchBucket"}] self.mocker.result(fail(error)) self.mocker.replay() provider = self.get_provider() d = provider.load_state() def assert_load_value(value): self.assertIdentical(value, False) d.addCallback(assert_load_value) return d def test_load_nonexistant(self): """ When loading saved state from S3, the provider bootstrap gracefully handles the scenario where there is no saved state. 
""" self.s3.get_object(self.env_name, "provider-state") self.mocker.result(succeed(serializer.dump([]))) self.mocker.replay() provider = self.get_provider() d = provider.load_state() def assert_load_value(value): self.assertIdentical(value, False) d.addCallback(assert_load_value) return d juju-0.7.orig/juju/providers/ec2/tests/test_utils.py0000644000000000000000000002640112135220114020772 0ustar 00000000000000import inspect import os from twisted.internet.defer import fail, succeed, inlineCallbacks from twisted.web.error import Error from juju.errors import ProviderError from juju.lib.testing import TestCase from juju.lib.mocker import MATCH from juju.providers import ec2 from juju.providers.ec2.utils import ( get_current_ami, get_instance_type, get_machine_spec) from .common import get_constraints from juju.providers.ec2.utils import VerifyingContextFactory IMAGE_HOST = "cloud-images.ubuntu.com" IMAGE_URI_TEMPLATE = "https://%s/query/%%s/server/released.current.txt" % ( IMAGE_HOST) STREAM_URI_TEMPLATE = "https://%s/query/%%s/server/daily.current.txt" % ( IMAGE_HOST) IMAGE_DATA_DIR = os.path.join( os.path.dirname(inspect.getabsfile(ec2)), "tests", "data") class GetCurrentAmiTest(TestCase): def test_bad_url(self): """ If the requested page doesn't exist at all, a LookupError is raised """ page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "nutty", contextFactory=None) self.mocker.result(fail(Error("404"))) self.mocker.replay() d = get_current_ami("nutty", "i386", "us-east-1", False, False) self.failUnlessFailure(d, LookupError) return d def test_umatched_ami(self): """ If an ami is not found that matches the specifications, then a LookupError is raised. """ page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", contextFactory=None) self.mocker.result(succeed("")) self.mocker.replay() d = get_current_ami("lucid", "i386", "us-east-1", False, False) self.failUnlessFailure(d, LookupError) return d def test_current_ami(self): """The current server machine image can be retrieved.""" page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", contextFactory=None) self.mocker.result(succeed( open(os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "i386", "us-east-1", False, False) d.addCallback(self.assertEquals, "ami-714ba518") return d def test_stream_api_when_testing(self): """Retrieve the daily release image when in testing mode.""" self.change_environment(JUJU_TESTING="YES") page = self.mocker.replace("twisted.web.client.getPage") page(STREAM_URI_TEMPLATE % "lucid", contextFactory=None) self.mocker.result(succeed( open(os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "i386", "us-east-1", False, False) d.addCallback(self.assertEquals, "ami-714ba518") return d def test_current_ami_by_arch(self): """The current server machine image can be retrieved by arch.""" page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", contextFactory=None) self.mocker.result( succeed(open( os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "amd64", "us-east-1", False, False) d.addCallback(self.assertEquals, "ami-4b4ba522") return d def test_current_ami_by_region(self): """The current server machine image can be retrieved by region.""" page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", 
contextFactory=None) self.mocker.result( succeed(open( os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "i386", "us-west-1", False, False) d.addCallback(self.assertEquals, "ami-cb97c68e") return d def test_current_ami_with_virtualisation_info(self): """The current server machine image can be retrieved with virtualisation info.""" page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "natty", contextFactory=None) self.mocker.result( succeed(open( os.path.join(IMAGE_DATA_DIR, "natty.txt")).read())) self.mocker.replay() d = get_current_ami("natty", "amd64", "us-east-1", True, False) d.addCallback(self.assertEquals, "ami-1cad5275") return d def test_hvm_request_on_old_series(self): page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", contextFactory=None) self.mocker.result( succeed(open( os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "amd64", "us-east-1", True, False) self.failUnlessFailure(d, LookupError) return d def test_current_ami_ssl_verify(self): """ Test that the appropriate SSL verifying context factory is passed. """ def match_context(value): if VerifyingContextFactory is None: # We're running against an older twisted version without # certificate verification. return value is None return isinstance(value, VerifyingContextFactory) page = self.mocker.replace("twisted.web.client.getPage") page(IMAGE_URI_TEMPLATE % "lucid", contextFactory=MATCH(match_context)) self.mocker.result(succeed( open(os.path.join(IMAGE_DATA_DIR, "lucid.txt")).read())) self.mocker.replay() d = get_current_ami("lucid", "i386", "us-east-1", False, True) def verify_result(result): self.assertEqual(result, "ami-714ba518") d.addCallback(verify_result) return d class GetImageIdTest(TestCase): @inlineCallbacks def assert_image_id(self, config, constraints, series, arch, region, hvm, instance_type, ssl_verify=False): get_current_ami_m = self.mocker.replace(get_current_ami) get_current_ami_m(series, arch, region, hvm, ssl_verify) self.mocker.result(succeed("ami-giggle")) self.mocker.replay() spec = yield get_machine_spec(config, constraints) self.assertEquals(spec.image_id, "ami-giggle") self.assertEquals(spec.instance_type, instance_type) @inlineCallbacks def test_ssl_hostname_verification(self): constraints = yield get_constraints(["arch=i386"]) yield self.assert_image_id( {}, constraints, "splendid", "i386", "us-east-1", False, "m1.small", ssl_verify=True) @inlineCallbacks def test_empty_config(self): constraints = yield get_constraints(["arch=i386"]) yield self.assert_image_id( {}, constraints, "splendid", "i386", "us-east-1", False, "m1.small") @inlineCallbacks def test_series_from_config(self): config = {"default-series": "puissant"} constraints = yield get_constraints(["arch=i386"], None) yield self.assert_image_id( config, constraints, "puissant", "i386", "us-east-1", False, "m1.small") @inlineCallbacks def test_arch_from_instance_constraint(self): constraints = yield get_constraints([ "instance-type=m1.large", "arch=any"]) yield self.assert_image_id( {}, constraints, "splendid", "amd64", "us-east-1", False, "m1.large") @inlineCallbacks def test_arch_from_nothing(self): constraints = yield get_constraints([ "arch=any", "cpu=any", "mem=any", "instance-type=any"]) yield self.assert_image_id( {}, constraints, "splendid", "amd64", "us-east-1", False, "t1.micro") @inlineCallbacks def test_hvm_from_cluster_instance(self): constraints = yield 
get_constraints(["instance-type=cc2.8xlarge"]) yield self.assert_image_id( {}, constraints, "splendid", "amd64", "us-east-1", True, "cc2.8xlarge") @inlineCallbacks def test_region_from_config(self): config = {"region": "sa-east-1"} constraints = yield get_constraints(["arch=amd64"], "desperate") yield self.assert_image_id( config, constraints, "desperate", "amd64", "sa-east-1", False, "m1.small") @inlineCallbacks def test_config_override_ami_only(self): self.mocker.replace(get_current_ami) self.mocker.replay() constraints = yield get_constraints(["instance-type=t1.micro"]) spec = yield get_machine_spec( {"default-image-id": "ami-blobble"}, constraints) self.assertEquals(spec.image_id, "ami-blobble") self.assertEquals(spec.instance_type, "t1.micro") @inlineCallbacks def test_config_override_ami_and_instance(self): self.mocker.replace(get_current_ami) self.mocker.replay() constraints = yield get_constraints(["instance-type=t1.micro"]) spec = yield get_machine_spec( {"default-image-id": "ami-blobble", "default-instance-type": "q1.arbitrary"}, constraints) self.assertEquals(spec.image_id, "ami-blobble") self.assertEquals(spec.instance_type, "q1.arbitrary") class GetInstanceTypeTest(TestCase): @inlineCallbacks def assert_instance_type(self, strs, expected): constraints = yield get_constraints(strs) instance_type = get_instance_type({}, constraints) self.assertEquals(instance_type, expected) @inlineCallbacks def assert_no_instance_type(self, strs): constraints = yield get_constraints(strs) e = self.assertRaises( ProviderError, get_instance_type, {}, constraints) self.assertIn("No instance type satisfies", str(e)) @inlineCallbacks def test_basic(self): yield self.assert_instance_type([], "m1.small") yield self.assert_instance_type( ["instance-type=cg1.4xlarge"], "cg1.4xlarge") @inlineCallbacks def test_picks_cheapest(self): yield self.assert_instance_type(["mem=0", "cpu=0"], "t1.micro") yield self.assert_instance_type( ["arch=i386", "mem=0", "cpu=0"], "t1.micro") yield self.assert_instance_type( ["arch=amd64", "mem=0", "cpu=0"], "t1.micro") yield self.assert_instance_type(["arch=i386", "cpu=1"], "m1.small") yield self.assert_instance_type(["arch=amd64", "cpu=1"], "m1.small") yield self.assert_instance_type(["cpu=5"], "c1.medium") yield self.assert_instance_type(["cpu=5", "mem=2G"], "m2.xlarge") yield self.assert_instance_type(["cpu=50"], "cc2.8xlarge") @inlineCallbacks def test_unsatisfiable(self): yield self.assert_no_instance_type(["cpu=99"]) yield self.assert_no_instance_type(["mem=100G"]) yield self.assert_no_instance_type(["arch=i386", "mem=5G"]) yield self.assert_no_instance_type(["arch=arm"]) @inlineCallbacks def test_config_override(self): constraints = yield get_constraints([]) instance_type = get_instance_type( {"default-instance-type": "whatever-they-typed"}, constraints) self.assertEquals(instance_type, "whatever-they-typed") juju-0.7.orig/juju/providers/ec2/tests/data/bootstrap_cloud_init0000644000000000000000000000372012135220114023302 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'localhost:2181', machine-id: '0'} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, default-jre-headless, zookeeper, zookeeperd, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, sed -i -e s/tickTime=2000/tickTime=15000/g /etc/zookeeper/conf/zoo.cfg, echo "minSessionTimeout=30000" >> 
/etc/zookeeper/conf/zoo.cfg, echo "maxSessionTimeout=60000" >> /etc/zookeeper/conf/zoo.cfg, 'juju-admin initialize --instance-id=$(curl http://169.254.169.254/1.0/meta-data/instance-id) --admin-identity=admin:JbJ6sDGV37EHzbG9FPvttk64cmg= --constraints-data=e2NwdTogbnVsbCwgaW5zdGFuY2UtdHlwZTogbTEuc21hbGwsIG1lbTogbnVsbCwgcHJvdmlkZXItdHlwZTogZWMyLCB1YnVudHUtc2VyaWVzOiBzcGxlbmRpZH0K --provider-type=ec2', 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="0" env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent, 'cat >> /etc/init/juju-provision-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_ZOOKEEPER="localhost:2181" exec python -m juju.agents.provision --nodaemon --logfile /var/log/juju/provision-agent.log --session-file /var/run/juju/provision-agent.zksession >> /tmp/juju-provision-agent.output 2>&1 EOF ', /sbin/start juju-provision-agent] ssh_authorized_keys: [zebra] juju-0.7.orig/juju/providers/ec2/tests/data/launch_cloud_init0000644000000000000000000000166512135220114022545 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', machine-id: '1'} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="1" env JUJU_ZOOKEEPER="es.example.internal:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [zebra] juju-0.7.orig/juju/providers/ec2/tests/data/launch_cloud_init_branch0000644000000000000000000000226412135220114024056 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: {juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', machine-id: '1'} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper] runcmd: [sudo apt-get install -y python-txzookeeper, sudo mkdir -p /usr/lib/juju, 'cd /usr/lib/juju && sudo /usr/bin/bzr co --lightweight lp:~wizard/juju-juicebar juju', cd /usr/lib/juju/juju && sudo python setup.py develop, sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="1" env JUJU_ZOOKEEPER="es.example.internal:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [zebra] juju-0.7.orig/juju/providers/ec2/tests/data/launch_cloud_init_ppa0000644000000000000000000000173612135220114023404 0ustar 00000000000000#cloud-config apt_update: true apt_upgrade: true apt_sources: - {source: 'ppa:juju/pkgs'} machine-data: 
{juju-provider-type: ec2, juju-zookeeper-hosts: 'es.example.internal:2181', machine-id: '1'} output: {all: '| tee -a /var/log/cloud-init-output.log'} packages: [bzr, byobu, tmux, python-setuptools, python-twisted, python-txaws, python-zookeeper, juju] runcmd: [sudo mkdir -p /var/lib/juju, sudo mkdir -p /var/log/juju, 'cat >> /etc/init/juju-machine-agent.conf <" start on runlevel [2345] stop on runlevel [!2345] respawn env JUJU_MACHINE_ID="1" env JUJU_ZOOKEEPER="es.example.internal:2181" exec python -m juju.agents.machine --nodaemon --logfile /var/log/juju/machine-agent.log --session-file /var/run/juju/machine-agent.zksession >> /tmp/juju-machine-agent.output 2>&1 EOF ', /sbin/start juju-machine-agent] ssh_authorized_keys: [zebra] juju-0.7.orig/juju/providers/ec2/tests/data/lucid.txt0000644000000000000000000000252412135220114020773 0ustar 00000000000000lucid server release 20100427.1 ebs amd64 ap-southeast-1 ami-77f28d25 aki-a9f38cfb lucid server release 20100427.1 ebs i386 ap-southeast-1 ami-4df28d1f aki-bdf38cef lucid server release 20100427.1 instance-store amd64 ap-southeast-1 ami-57f28d05 aki-a9f38cfb lucid server release 20100427.1 instance-store i386 ap-southeast-1 ami-a5f38cf7 aki-bdf38cef lucid server release 20100427.1 ebs amd64 eu-west-1 ami-ab4d67df aki-cb4d67bf lucid server release 20100427.1 ebs i386 eu-west-1 ami-a94d67dd aki-c34d67b7 lucid server release 20100427.1 instance-store amd64 eu-west-1 ami-a54d67d1 aki-cb4d67bf lucid server release 20100427.1 instance-store i386 eu-west-1 ami-cf4d67bb aki-c34d67b7 lucid server release 20100427.1 ebs amd64 us-east-1 ami-4b4ba522 aki-0b4aa462 lucid server release 20100427.1 ebs i386 us-east-1 ami-714ba518 aki-754aa41c lucid server release 20100427.1 instance-store amd64 us-east-1 ami-fd4aa494 aki-0b4aa462 lucid server release 20100427.1 instance-store i386 us-east-1 ami-2d4aa444 aki-754aa41c lucid server release 20100427.1 ebs amd64 us-west-1 ami-d197c694 aki-c397c686 lucid server release 20100427.1 ebs i386 us-west-1 ami-cb97c68e aki-3197c674 lucid server release 20100427.1 instance-store amd64 us-west-1 ami-c997c68c aki-c397c686 lucid server release 20100427.1 instance-store i386 us-west-1 ami-c597c680 aki-3197c674 juju-0.7.orig/juju/providers/ec2/tests/data/natty.txt0000644000000000000000000000370512135220114021034 0ustar 00000000000000natty server release 20110426 ebs amd64 ap-northeast-1 ami-dab812db aki-d409a2d5 paravirtual natty server release 20110426 ebs i386 ap-northeast-1 ami-d8b812d9 aki-d209a2d3 paravirtual natty server release 20110426 instance-store amd64 ap-northeast-1 ami-a4b812a5 aki-d409a2d5 paravirtual natty server release 20110426 instance-store i386 ap-northeast-1 ami-46b81247 aki-d209a2d3 paravirtual natty server release 20110426 ebs amd64 ap-southeast-1 ami-60582132 aki-11d5aa43 paravirtual natty server release 20110426 ebs i386 ap-southeast-1 ami-62582130 aki-13d5aa41 paravirtual natty server release 20110426 instance-store amd64 ap-southeast-1 ami-aa5920f8 aki-11d5aa43 paravirtual natty server release 20110426 instance-store i386 ap-southeast-1 ami-c0592092 aki-13d5aa41 paravirtual natty server release 20110426 ebs amd64 eu-west-1 ami-379ea943 aki-4feec43b paravirtual natty server release 20110426 ebs i386 eu-west-1 ami-359ea941 aki-4deec439 paravirtual natty server release 20110426 instance-store amd64 eu-west-1 ami-619ea915 aki-4feec43b paravirtual natty server release 20110426 instance-store i386 eu-west-1 ami-1b9fa86f aki-4deec439 paravirtual natty server release 20110426 ebs amd64 us-east-1 ami-1cad5275 
hvm natty server release 20110426 ebs amd64 us-east-1 ami-1aad5273 aki-427d952b paravirtual natty server release 20110426 ebs i386 us-east-1 ami-06ad526f aki-407d9529 paravirtual natty server release 20110426 instance-store amd64 us-east-1 ami-68ad5201 aki-427d952b paravirtual natty server release 20110426 instance-store i386 us-east-1 ami-e2af508b aki-407d9529 paravirtual natty server release 20110426 ebs amd64 us-west-1 ami-136f3c56 aki-9ba0f1de paravirtual natty server release 20110426 ebs i386 us-west-1 ami-116f3c54 aki-99a0f1dc paravirtual natty server release 20110426 instance-store amd64 us-west-1 ami-0b6f3c4e aki-9ba0f1de paravirtual natty server release 20110426 instance-store i386 us-west-1 ami-596f3c1c aki-99a0f1dc paravirtual juju-0.7.orig/juju/providers/local/__init__.py0000644000000000000000000002341112135220114017607 0ustar 00000000000000import os import logging from twisted.internet.defer import succeed, fail, inlineCallbacks, returnValue from txzookeeper import ZookeeperClient from juju.errors import ProviderError, EnvironmentNotFound from juju.lib.lxc import LXCContainer, get_containers from juju.lib.zk import Zookeeper from juju.lib.port import get_open_port from juju.providers.common.base import MachineProviderBase from juju.providers.common.connect import ZookeeperConnect from juju.providers.common.utils import get_user_authorized_keys from juju.providers.local.agent import ManagedMachineAgent from juju.providers.local.files import StorageServer, LocalStorage from juju.providers.local.machine import LocalMachine from juju.providers.local.network import Network from juju.providers.local.pkg import check_packages from juju.state.auth import make_identity from juju.state.initialize import StateHierarchy from juju.state.placement import LOCAL_POLICY log = logging.getLogger("juju.local-dev") REQUIRED_PACKAGES = ["zookeeper", "lxc", "apt-cacher-ng"] class MachineProvider(MachineProviderBase): """LXC/Ubuntu local provider. Only the host machine is utilized. """ def __init__(self, environment_name, config): super(MachineProvider, self).__init__(environment_name, config) self._qualified_name = self._get_qualified_name() self._directory = os.path.join( self.config["data-dir"], self._qualified_name) def get_placement_policy(self): """Local dev supports only one unit placement policy.""" if self.config.get("placement", LOCAL_POLICY) != LOCAL_POLICY: raise ProviderError( "Unsupported placement policy for local provider") return LOCAL_POLICY @property def provider_type(self): return "local" def _get_storage_server(self, ip='127.0.0.1'): return StorageServer( self._qualified_name, storage_dir=os.path.join(self._directory, "files"), host=ip, port=get_open_port(ip), logfile=os.path.join(self._directory, "storage-server.log")) @inlineCallbacks def bootstrap(self, constraints): """Bootstrap a local development environment. """ # Validate `data-dir` or raise/warn self._validate_data_dir() # Check for existing environment state = yield self.load_state() if state is not False: raise ProviderError("Environment already bootstrapped") # Check for required packages log.info("Checking for required packages...") missing = check_packages(*REQUIRED_PACKAGES) if missing: raise ProviderError("Missing packages %s" % ( ", ".join(sorted(list(missing))))) # Store user credentials from the running user try: public_key = get_user_authorized_keys(self.config) public_key = public_key.strip() except LookupError, e: raise ProviderError(str(e)) # Start networking, and get an open port. 
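# The bridge address reported by lxc below is also used as the listen
# address for both zookeeper and the provider storage server.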
log.info("Starting networking...") net = Network() # Start is a noop if its already started, which it is by default, # since lxc has it started by default yield net.start() net_attributes = yield net.get_attributes() port = get_open_port(net_attributes["ip"]) # Get/create directory for zookeeper and files zookeeper_dir = os.path.join(self._directory, "zookeeper") if not os.path.exists(zookeeper_dir): os.makedirs(zookeeper_dir) # Start ZooKeeper log.info("Starting ZooKeeper...") # Run zookeeper as the current user, unless we're being run as root # in which case run zookeeper as the 'zookeeper' user. zookeeper_user = None if os.geteuid() == 0: zookeeper_user = "zookeeper" zookeeper = Zookeeper(zookeeper_dir, port=port, host=net_attributes["ip"], user=zookeeper_user, group=zookeeper_user) yield zookeeper.start() # Starting provider storage server log.info("Starting storage server...") storage_server = self._get_storage_server(net_attributes["ip"]) yield storage_server.start() # Save the zookeeper start to provider storage. yield self.save_state({"zookeeper-instances": ["local"], "zookeeper-address": zookeeper.address}) # Initialize the zookeeper state log.debug("Initializing state...") admin_identity = make_identity( "admin:%s" % self.config["admin-secret"]) client = ZookeeperClient(zookeeper.address) yield client.connect() hierarchy = StateHierarchy( client, admin_identity, "local", constraints.data, "local") yield hierarchy.initialize() yield client.close() # Startup the machine agent log_file = os.path.join(self._directory, "machine-agent.log") juju_origin = self.config.get("juju-origin") agent = ManagedMachineAgent(self._qualified_name, zookeeper_hosts=zookeeper.address, machine_id="0", juju_directory=self._directory, log_file=log_file, juju_origin=juju_origin, public_key=public_key, juju_series=self.config["default-series"]) log.info( "Starting machine agent (origin: %s)... ", agent.juju_origin) yield agent.start() log.info("Environment bootstrapped") def _validate_data_dir(self): data_dir = self.config["data-dir"] if not os.path.exists(data_dir): raise ProviderError("`data-dir` does not exist") if not os.access(data_dir, os.W_OK): raise ProviderError("`data-dir` not writable") @inlineCallbacks def destroy_environment(self): """Shutdown the machine environment. """ # Stop all the containers log.info("Destroying unit containers...") yield self._destroy_containers() # Stop the machine agent log.debug("Stopping machine agent...") agent = ManagedMachineAgent(self._qualified_name, juju_directory=self._directory) yield agent.stop() # Stop the storage server log.debug("Stopping storage server...") storage_server = self._get_storage_server() yield storage_server.stop() # Stop zookeeper log.debug("Stopping zookeeper...") zookeeper_dir = os.path.join(self._directory, "zookeeper") zookeeper = Zookeeper(zookeeper_dir, None) yield zookeeper.stop() # Clean out local state yield self.save_state(False) # Don't stop the network since we're using the default from lxc log.debug("Environment destroyed.") @inlineCallbacks def _destroy_containers(self): container_map = yield get_containers(self._qualified_name) for container_name in container_map: container = LXCContainer(container_name, None, None, None) if container_map[container.container_name]: yield container.stop() yield container.destroy() @inlineCallbacks def connect(self, share=False): """Connect to juju's zookeeper. 
""" state = yield self.load_state() if not state: raise EnvironmentNotFound() client = yield ZookeeperClient(state["zookeeper-address"]).connect() yield ZookeeperConnect(self).wait_for_initialization(client) returnValue(client) def get_file_storage(self): """Retrieve the provider C{FileStorage} abstraction. """ storage_path = self.config.get( "storage-dir", os.path.join(self._directory, "files")) if not os.path.exists(storage_path): try: os.makedirs(storage_path) except OSError: raise ProviderError( "Unable to create file storage for environment") return LocalStorage(storage_path) def start_machine(self, machine_data, master=False): """Start a machine in the provider. @param machine_data: a dictionary of data to pass along to the newly launched machine. @param master: if True, machine will initialize the juju admin and run a provisioning agent. """ return fail(ProviderError("Only local machine available")) def shutdown_machine(self, machine_id): return fail(ProviderError( "Not enabled for local dev, use remove-unit")) def get_machines(self, instance_ids=()): """List machines running in the provider. @param instance_ids: ids of instances you want to get. Leave blank to list all machines owned by juju. @return: a list of LocalMachine, always contains one item. @raise: MachinesNotFound """ if instance_ids and instance_ids != ["local"]: raise ProviderError("Only local machine available") return succeed([LocalMachine()]) def _get_qualified_name(self): """Get a qualified environment name for use by local dev resources """ # Ensure we namespace resources by user. user = os.environ.get("USER") # We need sudo access for lxc (till user namespaces), use the actual # user. if user == "root": sudo_user = os.environ.get("SUDO_USER") if sudo_user: user = sudo_user return "%s-%s" % (user, self.environment_name) juju-0.7.orig/juju/providers/local/agent.py0000644000000000000000000000607112135220114017151 0ustar 00000000000000import sys import os import tempfile from juju.lib.service import TwistedDaemonService from juju.providers.common.cloudinit import get_default_origin, BRANCH class ManagedMachineAgent(object): agent_module = "juju.agents.machine" def __init__( self, juju_unit_namespace, zookeeper_hosts=None, machine_id="0", log_file=None, juju_directory="/var/lib/juju", public_key=None, juju_origin=None, juju_series=None): """ :param juju_series: The release series to use (maverick, natty, etc). :param machine_id: machine id for the local machine. :param zookeeper_hosts: Zookeeper hosts to connect. :param log_file: A file to use for the agent logs. :param juju_directory: The directory to use for all state and logs. :param juju_unit_namespace: The machine agent will create units with a known a prefix to allow for multiple users and multiple environments to create containers. The namespace should be unique per user and per environment. :param public_key: An SSH public key (string) that will be used in the container for access. 
""" self._juju_origin = juju_origin if self._juju_origin is None: origin, source = get_default_origin() if origin == BRANCH: origin = source self._juju_origin = origin env = {"JUJU_MACHINE_ID": machine_id, "JUJU_ZOOKEEPER": zookeeper_hosts, "JUJU_HOME": juju_directory, "JUJU_ORIGIN": self._juju_origin, "JUJU_UNIT_NS": juju_unit_namespace, "JUJU_SERIES": juju_series, "PATH": os.environ['PATH'], "PYTHONPATH": ":".join(sys.path)} if public_key: env["JUJU_PUBLIC_KEY"] = public_key pidfile = os.path.join(juju_directory, "machine-agent.pid") self._service = TwistedDaemonService( "juju-%s-machine-agent" % juju_unit_namespace, pidfile, use_sudo=True) self._service.set_description( "Juju machine agent for %s" % juju_unit_namespace) self._service.set_environ(env) self._service.output_path = log_file self._service_args = [ "/usr/bin/python", "-m", self.agent_module, "--logfile", log_file, "--zookeeper-servers", zookeeper_hosts, "--juju-directory", juju_directory, "--machine-id", machine_id, "--session-file", "/var/run/juju/%s-machine-agent.zksession" % juju_unit_namespace] @property def juju_origin(self): return self._juju_origin def start(self): """Start the machine agent.""" self._service.set_command(self._service_args) return self._service.start() def stop(self): """Stop the machine agent.""" return self._service.destroy() def is_running(self): """Boolean value, true if the machine agent is running.""" return self._service.is_running() juju-0.7.orig/juju/providers/local/files.py0000644000000000000000000000765012135220114017161 0ustar 00000000000000import os from StringIO import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.error import ConnectionRefusedError from twisted.web.client import getPage from juju.errors import ProviderError, FileNotFound from juju.lib import serializer from juju.lib.service import TwistedDaemonService from juju.providers.common.files import FileStorage SERVER_URL_KEY = "local-storage-url" class StorageServer(object): def __init__(self, juju_unit_namespace, storage_dir=None, host=None, port=None, logfile=None): """Management facade for a web server on top of the provider storage. :param juju_unit_namespace: For disambiguation. :param host: Host interface to bind to. :param port: Port to bind to. :param logfile: Path to store log output. """ if storage_dir: storage_dir = os.path.abspath(storage_dir) self._storage_dir = storage_dir self._host = host self._port = port self._logfile = logfile if storage_dir: self._pidfile = os.path.abspath( os.path.join(storage_dir, '..', 'storage-server.pid')) else: self._pidfile = os.path.join('/tmp', 'storage-server.pid') self._service = TwistedDaemonService( "juju-%s-file-storage" % juju_unit_namespace, pidfile=self._pidfile, use_sudo=False) self._service.set_description( "Juju file storage for %s" % juju_unit_namespace) self._service_args = [ "twistd", "--pidfile", self._pidfile, "--logfile", logfile, "-d", self._storage_dir, "web", "--port", "tcp:%s:interface=%s" % (self._port, self._host), "--path", self._storage_dir] @inlineCallbacks def is_serving(self): try: storage = LocalStorage(self._storage_dir) yield getPage((yield storage.get_url(SERVER_URL_KEY))) returnValue(True) except ConnectionRefusedError: returnValue(False) @inlineCallbacks def start(self): """Start the storage server. Also stores the storage server url directly into provider storage. 
""" assert self._storage_dir, "no storage_dir set" assert self._host, "no host set" assert self._port, "no port set" assert None not in self._service_args, "unset params" assert os.path.exists(self._storage_dir), "Invalid storage directory" try: with open(self._logfile, "a"): pass except IOError: raise AssertionError("logfile not writable by this user") storage = LocalStorage(self._storage_dir) yield storage.put( SERVER_URL_KEY, StringIO(serializer.dump( {"storage-url": "http://%s:%s/" % (self._host, self._port)}))) self._service.set_command(self._service_args) yield self._service.start() def get_pid(self): return self._service.get_pid() def stop(self): """Stop the storage server.""" return self._service.destroy() class LocalStorage(FileStorage): @inlineCallbacks def get_url(self, key): """Get a network url to a local provider storage. The command line tools directly utilize the disk backed storage. The agents will use the read only web interface provided by the StorageServer to download resources, as in the local provider scenario they won't always have direct disk access. """ try: storage_data = (yield self.get(SERVER_URL_KEY)).read() except FileNotFound: storage_data = "" if not storage_data or not "storage-url" in storage_data: raise ProviderError("Storage not initialized") url = serializer.load(storage_data)["storage-url"] returnValue(url + key) juju-0.7.orig/juju/providers/local/machine.py0000644000000000000000000000045312135220114017455 0ustar 00000000000000from juju.machine import ProviderMachine class LocalMachine(ProviderMachine): """Represents host machine, when doing local development. """ def __init__(self): super(LocalMachine, self).__init__("local", "localhost", "localhost") self.state = "running" # a tautology juju-0.7.orig/juju/providers/local/network.py0000644000000000000000000000551712135220114017550 0ustar 00000000000000 import os import subprocess import logging from juju.errors import ProviderInteractionError from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.threads import deferToThread DEFAULT_LXC_ADDR = '10.0.3.1' DEFAULT_LXC_BRIDGE = 'lxcbr0' log = logging.getLogger("juju.local-dev") _SHELL_GET_LXC_DEFAULTS=""". /etc/default/lxc; export LXC_BRIDGE; export LXC_ADDR; env|grep ^LXC_""" class Network(object): """ Setup a bridge network with forwarding and dnsmasq for the environment. Utilizes lxc's networking subsystem to actualize configuration. """ def start(self): """Start the network. """ return deferToThread(start_network) def stop(self): """Stop the network. """ return deferToThread(stop_network) @inlineCallbacks def is_running(self): """Returns True if the network is currently active, False otherwise. """ (goal, state) = yield deferToThread(_status_lxc_net) returnValue(bool(state == 'running')) def get_attributes(self): """Return attributes of the network as a dictionary. 
The starting ip address and bridge name are returned. """ return deferToThread(get_network_attributes) def _status_lxc_net(): try: status_lxc_net = subprocess.check_output(['status','lxc-net']) except subprocess.CalledProcessError: raise ProviderInteractionError( 'Problem checking status of lxc-net upstart job.') try: (_, state) = status_lxc_net.split(' ') (goal, state) = state.split('/') except ValueError: raise ProviderInteractionError( 'status lxc-net returned unexpected output (%s)' % status_lxc_net) return (goal, state) def _set_lxc_net_state(desired_state): (goal, state) = _status_lxc_net() if goal != desired_state: try: subprocess.check_call([desired_state,'lxc-net']) except subprocess.CalledProcessError, e: log.warn('Problem %sing lxc-net' % desired_state) log.warn(e) def start_network(): _set_lxc_net_state('start') def get_network_attributes(): env = {} if os.path.exists('/etc/default/lxc'): try: output = subprocess.check_output( ["sh", "-c", _SHELL_GET_LXC_DEFAULTS]) for l in output.split("\n"): if '=' in l: (name, value) = l.split('=', 1) env[name] = value except subprocess.CalledProcessError, e: log.warn('Problem reading values from /etc/default/lxc') log.warn(e) attrs = {} attrs["ip"] = env.get('LXC_ADDR', DEFAULT_LXC_ADDR) attrs["bridge"] = env.get('LXC_BRIDGE', DEFAULT_LXC_BRIDGE) return attrs def stop_network(): _set_lxc_net_state('stop') juju-0.7.orig/juju/providers/local/pkg.py0000644000000000000000000000065612135220114016637 0ustar 00000000000000import apt def check_packages(*packages): """Given a list of packages, return the packages which are not installed. """ cache = apt.Cache() missing = set() for pkg_name in packages: try: pkg = cache[pkg_name] except KeyError: missing.add(pkg_name) continue if pkg.is_installed: continue missing.add(pkg_name) return missing juju-0.7.orig/juju/providers/local/tests/0000755000000000000000000000000012135220114016637 5ustar 00000000000000juju-0.7.orig/juju/providers/local/tests/__init__.py0000644000000000000000000000000212135220114020740 0ustar 00000000000000# juju-0.7.orig/juju/providers/local/tests/test_agent.py0000644000000000000000000000632312135220114021352 0ustar 00000000000000import os import sys import tempfile import subprocess from twisted.internet.defer import inlineCallbacks from juju.lib.lxc.tests.test_lxc import uses_sudo from juju.lib.testing import TestCase from juju.tests.common import get_test_zookeeper_address from juju.providers.local.agent import ManagedMachineAgent class ManagedAgentTest(TestCase): @inlineCallbacks def test_managed_agent_config(self): subprocess_calls = [] def intercept_args(args, **kwargs): self.assertEquals(args[0], "sudo") if args[2] == "rm": return 0 subprocess_calls.append(args) return 0 self.patch(subprocess, "check_call", intercept_args) juju_directory = self.makeDir() log_file = self.makeFile() agent = ManagedMachineAgent( "ns1", get_test_zookeeper_address(), juju_series="precise", juju_directory=juju_directory, log_file=log_file, juju_origin="lp:juju/trunk") try: os.remove(agent._service.output_path) except OSError: pass # just make sure it's not there, so the .start() # doesn't insert a spurious rm yield agent.start() start = subprocess_calls[0] self.assertEquals(start, [ "sudo", "JUJU_MACHINE_ID=0", "JUJU_ORIGIN=lp:juju/trunk", "JUJU_ZOOKEEPER=localhost:28181", "PYTHONPATH=%s" % ':'.join(sys.path), "JUJU_HOME=%s" % juju_directory, "PATH=%s" % os.environ['PATH'], "JUJU_UNIT_NS=ns1", "JUJU_SERIES=precise", "/usr/bin/python", "-m", "juju.agents.machine", "--logfile",
log_file, "--zookeeper-servers", get_test_zookeeper_address(), "--juju-directory", juju_directory, "--machine-id", "0", "--session-file", "/var/run/juju/ns1-machine-agent.zksession", "--pidfile", agent._service._pid_path]) @uses_sudo @inlineCallbacks def test_managed_agent_root(self): juju_directory = self.makeDir() log_file = tempfile.mktemp() # The pid file and log file get written as root def cleanup_root_file(cleanup_file): subprocess.check_call( ["sudo", "rm", "-f", cleanup_file], stderr=subprocess.STDOUT) self.addCleanup(cleanup_root_file, log_file) agent = ManagedMachineAgent( "test-ns", machine_id="0", log_file=log_file, juju_series="precise", zookeeper_hosts=get_test_zookeeper_address(), juju_directory=juju_directory) agent.agent_module = "juju.agents.dummy" self.assertFalse((yield agent.is_running())) yield agent.start() # Give a moment for the process to start and write its config yield self.sleep(0.1) self.assertTrue((yield agent.is_running())) # running start again is fine, detects the process is running yield agent.start() yield agent.stop() self.assertFalse((yield agent.is_running())) # running stop again is fine, detects the process is stopped. yield agent.stop() juju-0.7.orig/juju/providers/local/tests/test_container.py0000644000000000000000000000223212135220114022231 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks from juju.lib.testing import TestCase from juju.lib.lxc import get_containers from juju.lib import lxc lxc_output_sample = "\ calendarserver\ncaprica\ngemini\nreconnoiter\nvirgo\ncalendarserver\n" class GetContainersTest(TestCase): @inlineCallbacks def test_get_containers(self): lxc_ls_mock = self.mocker.mock() self.patch(lxc, "_cmd", lxc_ls_mock) lxc_ls_mock(["lxc-ls"]) self.mocker.result((0, lxc_output_sample)) self.mocker.replay() container_map = yield get_containers(None) self.assertEqual( dict(caprica=False, gemini=False, reconnoiter=False, virgo=False, calendarserver=True), container_map) @inlineCallbacks def test_get_containers_with_prefix(self): lxc_ls_mock = self.mocker.mock() self.patch(lxc, "_cmd", lxc_ls_mock) lxc_ls_mock(["lxc-ls"]) self.mocker.result((0, lxc_output_sample)) self.mocker.replay() container_map = yield get_containers("ca") self.assertEqual( dict(calendarserver=True, caprica=False), container_map) juju-0.7.orig/juju/providers/local/tests/test_files.py0000644000000000000000000001164612135220114021362 0ustar 00000000000000import os import signal from StringIO import StringIO from twisted.internet.defer import inlineCallbacks, succeed from twisted.web.client import getPage from juju.errors import ProviderError, ServiceError from juju.lib import serializer from juju.lib.lxc.tests.test_lxc import uses_sudo from juju.lib.testing import TestCase from juju.lib.service import TwistedDaemonService from juju.lib.mocker import ANY from juju.lib.port import get_open_port from juju.providers.local.files import ( LocalStorage, StorageServer, SERVER_URL_KEY) class WebFileStorageTest(TestCase): @inlineCallbacks def setUp(self): yield super(WebFileStorageTest, self).setUp() self._storage_path = self.makeDir() self._logfile = self.makeFile() self._storage = LocalStorage(self._storage_path) self._port = get_open_port() self._server = StorageServer( "ns1", self._storage_path, "localhost", self._port, self._logfile) @inlineCallbacks def wait_for_server(self, server): while not (yield server.is_serving()): yield self.sleep(0.1) def test_start_missing_args(self): server = StorageServer("ns1", self._storage_path) return 
self.assertFailure(server.start(), AssertionError) def test_start_invalid_directory(self): os.rmdir(self._storage_path) return self.assertFailure(self._server.start(), AssertionError) @inlineCallbacks def test_storage_start(self): lstorage = self.mocker.patch(LocalStorage) lstorage.put(ANY, ANY) self.mocker.result(succeed(True)) twisted = self.mocker.patch(TwistedDaemonService) twisted.start() self.mocker.result(succeed(True)) self.mocker.replay() yield self._server.start() @uses_sudo @inlineCallbacks def test_start_stop(self): yield self._storage.put("abc", StringIO("hello world")) yield self._server.start() # Starting multiple times is fine. yield self._server.start() storage_url = yield self._storage.get_url("abc") # It might not have started actually accepting connections yet... yield self.wait_for_server(self._server) self.assertEqual((yield getPage(storage_url)), "hello world") # Check that it can be killed by the current user (ie, is not running # as root) and still comes back up old_pid = yield self._server.get_pid() os.kill(old_pid, signal.SIGKILL) new_pid = yield self._server.get_pid() self.assertNotEquals(old_pid, new_pid) # Give it a moment to actually start serving again yield self.wait_for_server(self._server) self.assertEqual((yield getPage(storage_url)), "hello world") yield self._server.stop() # Stopping multiple times is fine too. yield self._server.stop() @uses_sudo @inlineCallbacks def test_namespacing(self): alt_storage_path = self.makeDir() alt_storage = LocalStorage(alt_storage_path) yield alt_storage.put("some-path", StringIO("alternative")) yield self._storage.put("some-path", StringIO("original")) alt_server = StorageServer( "ns2", alt_storage_path, "localhost", get_open_port(), self.makeFile()) yield alt_server.start() yield self._server.start() yield self.wait_for_server(alt_server) yield self.wait_for_server(self._server) alt_contents = yield getPage( (yield alt_storage.get_url("some-path"))) self.assertEquals(alt_contents, "alternative") orig_contents = yield getPage( (yield self._storage.get_url("some-path"))) self.assertEquals(orig_contents, "original") yield alt_server.stop() yield self._server.stop() @uses_sudo @inlineCallbacks def test_capture_errors(self): self._port = get_open_port() self._server = StorageServer( "borken", self._storage_path, "lol borken", self._port, self._logfile) d = self._server.start() e = yield self.assertFailure(d, ServiceError) self.assertTrue(str(e).startswith( "Failed to start job juju-borken-file-storage; got output:\n")) self.assertIn("Wrong number of arguments", str(e)) yield self._server.stop() class FileStorageTest(TestCase): def setUp(self): self._storage = LocalStorage(self.makeDir()) @inlineCallbacks def test_get_url(self): yield self.assertFailure(self._storage.get_url("abc"), ProviderError) self._storage.put(SERVER_URL_KEY, StringIO("abc")) yield self.assertFailure(self._storage.get_url("abc"), ProviderError) self._storage.put( SERVER_URL_KEY, StringIO( serializer.dump({"storage-url": "http://localhost/"}))) self.assertEqual((yield self._storage.get_url("abc")), "http://localhost/abc") juju-0.7.orig/juju/providers/local/tests/test_machine.py0000644000000000000000000000057112135220114021657 0ustar 00000000000000from juju.providers.local.machine import LocalMachine from juju.lib.testing import TestCase class LocalMachineTest(TestCase): def test_machine_attributes(self): machine = LocalMachine() self.assertEqual(machine.instance_id, "local") self.assertEqual(machine.dns_name, "localhost") 
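# The local provider runs everything on the host machine, so the
# public and private addresses both collapse to localhost.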
self.assertEqual(machine.private_dns_name, "localhost") juju-0.7.orig/juju/providers/local/tests/test_network.py0000644000000000000000000001017012135220114021740 0ustar 00000000000000import os import subprocess from twisted.internet.defer import inlineCallbacks from juju.providers.local.network import Network, _SHELL_GET_LXC_DEFAULTS from juju.lib.testing import TestCase from juju.lib.mocker import ANY from juju.errors import ProviderInteractionError default_lxc_output = """\ LXC_BRIDGE=fakebridge0 LXC_ADDR=10.90.30.1 """ class NetworkTestCase(TestCase): def _mock_default_lxc_exists(self): path_exists = self.mocker.mock() self.patch(os.path, 'exists', path_exists) path_exists('/etc/default/lxc') self.mocker.result(True) @inlineCallbacks def test_get_network_attributes(self): self._mock_default_lxc_exists() read_default_lxc = self.mocker.mock() self.patch(subprocess, 'check_output', read_default_lxc) read_default_lxc(['sh', '-c', _SHELL_GET_LXC_DEFAULTS]) self.mocker.result(default_lxc_output) self.mocker.replay() network = Network() attributes = yield network.get_attributes() self.assertEqual( dict(ip="10.90.30.1", bridge="fakebridge0"), attributes) def _mock_status(self, expected_status): status_output_mock = self.mocker.mock() self.patch(subprocess, 'check_output', status_output_mock) status_output_mock(ANY) self.mocker.result(expected_status) def _mock_command(self, command): command_mock = self.mocker.mock() self.patch(subprocess, 'check_call', command_mock) command_mock([command, 'lxc-net']) return command_mock @inlineCallbacks def test_start_started_network(self): self._mock_status('lxc-net start/running') self.mocker.replay() network = Network() yield network.start() @inlineCallbacks def test_start_stopped_network(self): self._mock_status('lxc-net stop/waiting') self._mock_command('start') self.mocker.replay() network = Network() yield network.start() @inlineCallbacks def test_stop_started_network(self): self._mock_status('lxc-net start/running') self._mock_command('stop') self.mocker.replay() network = Network() yield network.stop() @inlineCallbacks def test_stop_stopped_network(self): self._mock_status('lxc-net stop/waiting') self.mocker.replay() network = Network() yield network.stop() @inlineCallbacks def test_is_running(self): self._mock_status('lxc-net start/running') self.mocker.replay() network = Network() running = yield network.is_running() self.assertTrue(running) @inlineCallbacks def test_bad_status(self): self._mock_status('nospaces') self.mocker.replay() network = Network() yield self.assertFailure(network.is_running(), ProviderInteractionError) def _mock_subprocess_error(self, command, args): status_mock = self.mocker.mock() self.patch(subprocess, command, status_mock) status_mock(args) self.mocker.throw(subprocess.CalledProcessError(13, ' '.join(args))) self.mocker.replay() @inlineCallbacks def test_upstart_status_error(self): self._mock_subprocess_error('check_output', ['status','lxc-net']) network = Network() yield self.assertFailure(network.is_running(), ProviderInteractionError) @inlineCallbacks def test_upstart_start_stop_error(self): self._mock_status('lxc-net stop/waiting') self._mock_subprocess_error('check_call', ['start','lxc-net']) log = self.capture_logging('juju.local-dev') network = Network() yield network.start() self.assertIn('Problem starting lxc-net', log.getvalue()) @inlineCallbacks def test_get_network_attributes_sh_error(self): self._mock_default_lxc_exists() self._mock_subprocess_error('check_output', ['sh', '-c', _SHELL_GET_LXC_DEFAULTS]) log =
self.capture_logging('juju.local-dev') network = Network() yield network.get_attributes() self.assertIn('Problem reading values from /etc/default/lxc', log.getvalue()) juju-0.7.orig/juju/providers/local/tests/test_pkg.py0000644000000000000000000000151412135220114021032 0ustar 00000000000000import apt from juju.lib.testing import TestCase from juju.providers.local.pkg import check_packages class PackageInstallTest(TestCase): def test_package_reports_missing(self): pkg_name = "tryton-modules-sale-opportunity" missing_pkg = self.mocker.mock() mock_cache = self.mocker.patch(apt.Cache) mock_cache[pkg_name] self.mocker.result(missing_pkg) missing_pkg.is_installed self.mocker.result(False) self.mocker.replay() self.assertEqual( check_packages(pkg_name), set([pkg_name])) def test_package_reports_installed(self): self.assertEqual( check_packages("python-apt"), set()) def test_package_handles_unknown(self): self.assertEqual( check_packages("global-gook"), set(["global-gook"])) juju-0.7.orig/juju/providers/local/tests/test_provider.py0000644000000000000000000001734412135220114022113 0ustar 00000000000000import logging import os import pwd from StringIO import StringIO import zookeeper from twisted.internet.defer import succeed, inlineCallbacks from txzookeeper.tests.utils import deleteTree from juju.errors import ProviderError, EnvironmentNotFound from juju.lib.lxc import LXCContainer from juju.lib.zk import Zookeeper from juju.machine.constraints import ConstraintSet from juju.providers.local import MachineProvider from juju.providers import local from juju.providers.local.agent import ManagedMachineAgent from juju.providers.local.files import StorageServer from juju.providers.local.network import Network from juju.lib import lxc as lxc_lib from juju.lib.testing import TestCase from juju.tests.common import get_test_zookeeper_address class LocalProviderTest(TestCase): @inlineCallbacks def setUp(self): self.constraints = ConstraintSet("local").parse([]).with_series("foo") self.provider = MachineProvider( "local-test", { "admin-secret": "admin:abc", "data-dir": self.makeDir(), "authorized-keys": "fooabc123", "default-series": "oneiric"}) self.output = self.capture_logging( "juju.local-dev", level=logging.DEBUG) zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() def tearDown(self): deleteTree("/", self.client.handle) self.client.close() @property def qualified_name(self): user_name = pwd.getpwuid(os.getuid()).pw_name return "%s-%s" % (user_name, self.provider.environment_name) def test_get_placement_policy(self): """Lxc provider only supports local placement.""" self.assertEqual(self.provider.get_placement_policy(), "local") provider = MachineProvider( "test", {"placement": "unassigned", "data-dir": self.makeDir()}) self.assertRaises( ProviderError, provider.get_placement_policy) def assertDir(self, *path_parts): path = os.path.join(*path_parts) self.assertTrue(os.path.isdir(path)) def bootstrap_mock(self): self.patch(local, "REQUIRED_PACKAGES", []) mock_network = self.mocker.patch(Network) mock_network.start() self.mocker.result(succeed(True)) mock_network.get_attributes() self.mocker.result({"ip": "127.0.0.1"}) mock_storage = self.mocker.patch(StorageServer) mock_storage.start() self.mocker.result(succeed(True)) mock_zookeeper = self.mocker.patch(Zookeeper) mock_zookeeper.start() self.mocker.result(succeed(True)) mock_zookeeper.address self.mocker.result(get_test_zookeeper_address()) self.mocker.count(3) mock_agent = self.mocker.patch(ManagedMachineAgent) 
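# Only the expectations are recorded here; each test is responsible
# for calling self.mocker.replay() before exercising the provider.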
mock_agent.start() self.mocker.result(succeed(True)) def test_provider_type(self): self.assertEqual(self.provider.provider_type, "local") @inlineCallbacks def test_bootstrap(self): self.bootstrap_mock() self.mocker.replay() yield self.provider.bootstrap(self.constraints) children = yield self.client.get_children("/") self.assertEqual( sorted(['services', 'settings', 'charms', 'relations', 'zookeeper', 'initialized', 'topology', 'machines', 'units', 'constraints']), sorted(children)) output = self.output.getvalue() self.assertIn("Starting networking...", output) self.assertIn("Starting ZooKeeper...", output) self.assertIn("Initializing state...", output) self.assertIn("Starting storage server", output) self.assertIn("Starting machine agent", output) self.assertIn("Environment bootstrapped", output) self.assertDir( self.provider.config["data-dir"], self.qualified_name, "files") self.assertDir( self.provider.config["data-dir"], self.qualified_name, "zookeeper") self.assertEqual((yield self.provider.load_state()), {"zookeeper-address": get_test_zookeeper_address(), "zookeeper-instances": ["local"]}) @inlineCallbacks def test_bootstrap_bad_data_dir(self): data_dir = self.makeDir() # make it read only os.chmod(data_dir, 0444) self.provider = MachineProvider( "local-test", { "admin-secret": "admin:abc", "data-dir": data_dir, "authorized-keys": "fooabc123", "default-series": "oneiric"}) self.mocker.replay() error = yield self.assertFailure( self.provider.bootstrap(self.constraints), ProviderError) self.assertIn("`data-dir` not writable", str(error)) @inlineCallbacks def test_bootstrap_previously_bootstrapped(self): """Any local state is a sign that we had a previous bootstrap. """ yield self.provider.save_state({"xyz": 1}) error = yield self.assertFailure( self.provider.bootstrap(self.constraints), ProviderError) self.assertEqual(str(error), "Environment already bootstrapped") @inlineCallbacks def test_destroy_environment(self): """Destroying a local environment kills units, zk, and machine agent. """ user_name = pwd.getpwuid(os.getuid()).pw_name # Mock container destruction, including stopping running containers.
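# Names that appear twice in the lxc-ls output are running containers
# and must be stopped before they can be destroyed.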
lxc_ls_mock = self.mocker.mock() self.patch(lxc_lib, "_cmd", lxc_ls_mock) lxc_ls_mock(["lxc-ls"]) self.mocker.result( (0, "%(name)s-local-test-unit-1\n%(name)s-local-test-unit-2\n" "%(name)s-local-test-unit-3\n%(name)s-local-test-unit-1\n" "%(name)s-local-test-unit-2\n" % dict(name=user_name))) mock_container = self.mocker.patch(LXCContainer) mock_container.stop() self.mocker.count(2) mock_container.destroy() self.mocker.count(3) mock_agent = self.mocker.patch(ManagedMachineAgent) mock_agent.stop() mock_server = self.mocker.patch(StorageServer) mock_server.stop() mock_zookeeper = self.mocker.patch(Zookeeper) mock_zookeeper.stop() yield self.provider.save_state({"i-exist": 1}) self.mocker.replay() yield self.provider.destroy_environment() output = self.output.getvalue() self.assertIn("Environment destroyed", output) self.assertEqual((yield self.provider.load_state()), False) @inlineCallbacks def test_connect(self): self.bootstrap_mock() self.mocker.replay() yield self.provider.bootstrap(self.constraints) client = yield self.provider.connect() self.assertTrue((yield client.exists("/initialized"))) def test_connect_sans_environment(self): return self.assertFailure(self.provider.connect(), EnvironmentNotFound) def test_shutdown_machine(self): return self.assertFailure( self.provider.shutdown_machine("local"), ProviderError) def test_start_machine(self): return self.assertFailure( self.provider.start_machine({}), ProviderError) @inlineCallbacks def test_get_machines(self): machines = yield self.provider.get_machines() self.assertEqual(len(machines), 1) self.assertEqual(machines[0].instance_id, "local") self.assertEqual(machines[0].dns_name, "localhost") @inlineCallbacks def test_get_file_storage(self): storage = self.provider.get_file_storage() content = StringIO("put") yield storage.put("abc", content) data_dir = self.provider.config["data-dir"] self.assertTrue("abc" in os.listdir( os.path.join(data_dir, self.qualified_name, "files"))) juju-0.7.orig/juju/providers/maas/__init__.py0000644000000000000000000000044712135220114017442 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). __all__ = ['MAASMachine', 'MachineProvider'] from juju.providers.maas.machine import MAASMachine from juju.providers.maas.provider import MachineProvider juju-0.7.orig/juju/providers/maas/auth.py0000644000000000000000000000544412135220114016646 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). 
"""OAuth functions for authorisation against a maas server.""" import oauth.oauth as oauth from twisted.internet import reactor from twisted.web.client import HTTPClientFactory from urlparse import urlparse DEFAULT_FACTORY = HTTPClientFactory DEFAULT_CONNECT = reactor.connectTCP def _ascii_url(url): """Ensure that the given URL is ASCII, encoding if necessary.""" if isinstance(url, unicode): urlparts = urlparse(url) urlparts = urlparts._replace( netloc=urlparts.netloc.encode("idna")) url = urlparts.geturl() return url.encode("ascii") class MAASOAuthConnection(object): """Helper class to provide an OAuth auth'd connection to MAAS.""" factory = staticmethod(DEFAULT_FACTORY) connect = staticmethod(DEFAULT_CONNECT) def __init__(self, oauth_info): consumer_key, resource_token, resource_secret = oauth_info resource_tok_string = "oauth_token_secret=%s&oauth_token=%s" % ( resource_secret, resource_token) self.resource_token = oauth.OAuthToken.from_string(resource_tok_string) self.consumer_token = oauth.OAuthConsumer(consumer_key, "") def oauth_sign_request(self, url, headers): """Sign a request. @param url: The URL to which the request is to be sent. @param headers: The headers in the request. """ oauth_request = oauth.OAuthRequest.from_consumer_and_token( self.consumer_token, token=self.resource_token, http_url=url) oauth_request.sign_request( oauth.OAuthSignatureMethod_PLAINTEXT(), self.consumer_token, self.resource_token) headers.update(oauth_request.to_header()) def dispatch_query(self, request_url, method="GET", data=None, headers=None): """Dispatch an OAuth-signed request to L{request_url}. @param request_url: The URL to which the request is to be sent. @param method: The HTTP method, e.g. C{GET}, C{POST}, etc. @param data: The data to send, if any. @type data: A byte string. @param headers: Headers to including in the request. """ if headers is None: headers = {} self.oauth_sign_request(request_url, headers) self.client = self.factory( url=_ascii_url(request_url), method=method, headers=headers, postdata=data) urlparts = urlparse(request_url) if urlparts.port is not None: port = urlparts.port elif urlparts.scheme == "https": port = 443 else: port = 80 self.connect(urlparts.hostname, port, self.client) return self.client.deferred juju-0.7.orig/juju/providers/maas/files.py0000644000000000000000000001104112135220114016775 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). 
"""File Storage API client for MAAS.""" from cStringIO import StringIO import httplib import mimetypes import random import string from twisted.web.error import Error import urllib from urlparse import urljoin from juju.errors import ( FileNotFound, ProviderError, ) from juju.providers.common.utils import convert_unknown_error from juju.providers.maas.auth import MAASOAuthConnection def _convert_error(failure, method, url, errors): if failure.check(Error): status = failure.value.status error = errors.get(int(status)) if error: raise error raise ProviderError( "Unexpected HTTP %s trying to %s %s" % (status, method, url)) return convert_unknown_error(failure) def _random_string(length): return ''.join(random.choice(string.letters) for ii in range(length + 1)) def _get_content_type(filename): return mimetypes.guess_type(filename)[0] or 'application/octet-stream' def _encode_field(field_name, data, boundary): return ('--' + boundary, 'Content-Disposition: form-data; name="%s"' % field_name, '', str(data)) def _encode_file(name, fileObj, boundary): return ('--' + boundary, 'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, name), 'Content-Type: %s' % _get_content_type(name), '', fileObj.read()) def encode_multipart_data(data, files): """Create a MIME multipart payload from L{data} and L{files}. @param data: A mapping of names (ASCII strings) to data (byte string). @param files: A mapping of names (ASCII strings) to file objects ready to be read. @return: A 2-tuple of C{(body, headers)}, where C{body} is a a byte string and C{headers} is a dict of headers to add to the enclosing request in which this payload will travel. """ boundary = _random_string(30) lines = [] for name in data: lines.extend(_encode_field(name, data[name], boundary)) for name in files: lines.extend(_encode_file(name, files[name], boundary)) lines.extend(('--%s--' % boundary, '')) body = '\r\n'.join(lines) headers = {'content-type': 'multipart/form-data; boundary=' + boundary, 'content-length': str(len(body))} return body, headers class MAASFileStorage(MAASOAuthConnection): """A file storage abstraction for MAAS.""" def __init__(self, config): server_url = config["maas-server"] if not server_url.endswith('/'): server_url += "/" self._base_url = urljoin(server_url, "api/1.0/files/") self._auth = config["maas-oauth"] super(MAASFileStorage, self).__init__(self._auth) def get_url(self, name): """Return a URL that can be used to access a stored file. :param unicode name: the file path for which to provide a URL :return: a URL :rtype: str """ params = {"op": "get", "filename": name} param_string = urllib.urlencode(params) return self._base_url + "?" + param_string def get(self, name): """Get a file object from the MAAS server. :param unicode name: path to the desired file :return: an open file object :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.FileNotFound` if the file doesn't exist """ url = self.get_url(name) d = self.dispatch_query(url) d.addCallback(StringIO) error_tab = {httplib.NOT_FOUND: FileNotFound(url)} d.addErrback(_convert_error, "GET", url, error_tab) return d def put(self, name, file_object): """Upload a file to MAAS. 
:param unicode name: name with which to store the content :param file_object: open file object containing the content :rtype: :class:`twisted.internet.defer.Deferred` """ url = self._base_url params = {"op": "add", "filename": name} files = {"file": file_object} body, headers = encode_multipart_data(params, files) d = self.dispatch_query( url, method="POST", headers=headers, data=body) d.addCallback(lambda _: True) error_tab = { httplib.UNAUTHORIZED: ProviderError( "The supplied storage credentials were not " "accepted by the server")} d.addErrback(_convert_error, "PUT", url, error_tab) return d juju-0.7.orig/juju/providers/maas/launch.py0000644000000000000000000000446612135220114017160 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Machine Launcher for MAAS.""" import logging import sys from twisted.internet.defer import ( inlineCallbacks, returnValue, ) from juju.providers.common.launch import LaunchMachine from juju.providers.maas.machine import MAASMachine log = logging.getLogger("juju.maas") class MAASLaunchMachine(LaunchMachine): """MAAS operation for launching an instance.""" @inlineCallbacks def start_machine(self, machine_id, zookeepers): """Start an instance with MAAS. :param str machine_id: the juju machine ID to assign :param zookeepers: the machines currently running zookeeper, to which the new machine will need to connect :type zookeepers: list of :class:`juju.providers.maas.provider.MAASMachine` :return: a single-entry list containing a :class:`juju.providers.maas.provider.MAASMachine` representing the newly-launched machine :rtype: :class:`twisted.internet.defer.Deferred` """ maas_client = self._provider.maas_client series = self._constraints["ubuntu-series"] instance_data = yield maas_client.acquire_node(self._constraints) instance_uri = instance_data["resource_uri"] # If anything goes wrong after the acquire and before the launch # actually happens, we attempt to release the node. try: cloud_init = self._create_cloud_init(machine_id, zookeepers) cloud_init.set_provider_type("maas") cloud_init.set_instance_id_accessor(instance_uri) node_data = yield maas_client.start_node( instance_uri, series, cloud_init.render()) machine = MAASMachine.from_dict(node_data) except Exception: log.exception( "Failed to launch machine %s; attempting to release.", instance_uri) exc_info = sys.exc_info() yield maas_client.release_node(instance_uri) # Use three-expression form to ensure that the error with its # traceback is correctly propagated. raise exc_info[0], exc_info[1], exc_info[2] else: returnValue([machine]) juju-0.7.orig/juju/providers/maas/maas.py0000644000000000000000000001533312135220114016624 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE).
"""MAAS API client for Juju""" from base64 import b64encode import json import logging import re from twisted.internet.defer import inlineCallbacks, returnValue from twisted.python.failure import Failure from twisted.web.error import Error from urllib import urlencode from urlparse import urljoin from juju.errors import ProviderError from juju.providers.common.utils import convert_unknown_error from juju.providers.maas.auth import MAASOAuthConnection from juju.providers.maas.files import encode_multipart_data log = logging.getLogger("juju.maas") CONSUMER_SECRET = "" _re_resource_uri = re.compile( '/api/(?P[^/]+)/nodes/(?P[^/]+)/?') def extract_system_id(resource_uri): """Extract a system ID from a resource URI. This is fairly unforgiving; an exception is raised if the URI given does not look like a MAAS node resource URI. :param resource_uri: A URI that corresponds to a MAAS node resource. :raises: :exc:`juju.errors.ProviderError` when `resource_uri` does not resemble a MAAS node resource URI. """ match = _re_resource_uri.search(resource_uri) if match is None: raise ProviderError( "%r does not resemble a MAAS resource URI." % (resource_uri,)) else: return match.group("system_id") def _int_str(value, k, c): """Convert value to integer then string""" return str(int(value)) def _str(value, k, c): return str(value) def _raw(value, k, c): return c.data.get(k) class MAASClient(MAASOAuthConnection): _handled_constraints = ( ("maas-name", "name", _str), ("maas-tags", "tags", _raw), ("arch", "arch", _str), ("cpu", "cpu_count", _int_str), ("mem", "mem", _int_str), ) def __init__(self, config): """Initialise an API client for MAAS. :param config: a dict of configuration values; must contain 'maas-server', 'maas-oauth', 'admin-secret' """ self.url = config["maas-server"] if not self.url.endswith('/'): self.url += "/" self.oauth_info = config["maas-oauth"] self.admin_secret = config["admin-secret"] super(MAASClient, self).__init__(self.oauth_info) def _process_error(self, failure): if isinstance(failure, Failure): error = failure.value else: error = failure # Catch twisted.web.error.Error here as we need to present the # error text that it comes with. if isinstance(error, Error): raise ProviderError(error.response) return convert_unknown_error(failure) def get(self, path, params): """Dispatch a C{GET} call to a MAAS server. :param uri: The MAAS path for the endpoint to call. :param params: A C{dict} of parameters - or sequence of 2-tuples - to encode into the request. :return: A Deferred which fires with the result of the call. """ url = "%s?%s" % (urljoin(self.url, path), urlencode(params)) d = self.dispatch_query(url) d.addCallback(json.loads) d.addErrback(self._process_error) return d def post(self, path, params): """Dispatch a C{POST} call to a MAAS server. :param uri: The MAAS path for the endpoint to call. :param params: A C{dict} of parameters to encode into the request. :return: A Deferred which fires with the result of the call. """ url = urljoin(self.url, path) body, headers = encode_multipart_data(params, {}) d = self.dispatch_query(url, "POST", headers=headers, data=body) d.addCallback(json.loads) d.addErrback(self._process_error) return d def get_nodes(self, resource_uris=None): """Ask MAAS to return a list of all the nodes it knows about. :param resource_uris: The MAAS URIs for the nodes you want to get. :return: A Deferred whose value is the list of nodes. 
""" params = [("op", "list_allocated")] if resource_uris is not None: params.extend( ("id", extract_system_id(resource_uri)) for resource_uri in resource_uris) return self.get("api/1.0/nodes/", params) def acquire_node(self, constraints=None): """Ask MAAS to assign a node to us. :return: A Deferred whose value is the resource URI to the node that was acquired. """ params = {"op": "acquire"} if constraints is not None: for key_from, key_to, translate in self._handled_constraints: value = constraints.get(key_from, None) if value is not None: params[key_to] = translate(value, key_from, constraints) return self.post("api/1.0/nodes/", params) def start_node(self, resource_uri, ubuntu_series, user_data): """Ask MAAS to start a node. :param resource_uri: The MAAS URI for the node you want to start. :param user_data: Any blob of data to be passed to MAAS. Must be possible to encode as base64. :return: A Deferred whose value is the resource data for the node as returned by get_nodes(). """ assert isinstance(user_data, str), ( "User data must be a byte string.") params = {"op": "start", "distro_series": ubuntu_series, "user_data": b64encode(user_data)} return self.post(resource_uri, params) def stop_node(self, resource_uri): """Ask maas to shut down a node. :param resource_uri: The MAAS URI for the node you want to stop. :return: A Deferred whose value is the resource data for the node as returned by get_nodes(). """ params = {"op": "stop"} return self.post(resource_uri, params) def release_node(self, resource_uri): """Ask MAAS to release a node from our ownership. :param resource_uri: The URI in MAAS for the node you want to release. :return: A Deferred which fires with the resource data for the node just released. """ params = {"op": "release"} return self.post(resource_uri, params) @inlineCallbacks def list_tags(self): """Ask MAAS to return a list of all the tags defined. :return: A Deferred whose value is the list of tags. """ params = {"op": "list"} try: value = yield self.get("api/1.0/tags/", params) except ProviderError as e: log.error("Listing valid maas-tags failed: %s", e) value = [] returnValue(value) juju-0.7.orig/juju/providers/maas/machine.py0000644000000000000000000000140112135220114017276 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Juju machine provider for MAAS.""" from juju.machine import ProviderMachine class MAASMachine(ProviderMachine): """MAAS-specific provider machine implementation.""" @classmethod def from_dict(cls, d): """Convert a `dict` into a :class:`MAASMachine`. :param dict d: a dict as returned (in a list) by :meth:`juju.providers.maas.maas.MAASClient.start_node` :rtype: :class:`MAASMachine` """ resource_uri, hostname = d["resource_uri"], d["hostname"] return cls( instance_id=resource_uri, dns_name=hostname, private_dns_name=hostname) juju-0.7.orig/juju/providers/maas/provider.py0000644000000000000000000001612412135220114017534 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). 
"""Juju provider to connect to a MAAS server.""" import logging from twisted.internet.defer import inlineCallbacks, returnValue, succeed from juju.errors import MachinesNotFound, ProviderError from juju.providers.common.base import MachineProviderBase from juju.providers.maas.files import MAASFileStorage from juju.providers.maas.launch import MAASLaunchMachine from juju.providers.maas.maas import MAASClient from juju.providers.maas.machine import MAASMachine log = logging.getLogger("juju.maas") class _TagHandler(object): """Parser and validator for tags constraint expressions Tag names are extracted and checked against the list of known tags reported by the api. Currently tag constraints consist of just comma and/or whitespace tag names, all of are required for a match. Extending this to support full boolean expressions would be possible, and some forward compatibility is attempted. """ def __init__(self, tags_info): self.tag_names = [tag['name'] for tag in tags_info] compare = staticmethod(set.issuperset) def convert(self, tag_expression): """Extract set of names in tag_expression checking they all exist""" tags = set() stripped_expression = tag_expression for c in (",", "&", "|", "!"): stripped_expression = stripped_expression.replace(c, " ") for word in stripped_expression.strip().split(): tag = word.lower() if tag not in self.tag_names: raise ValueError("tag %r does not exist" % (tag,)) tags.add(tag) return tags class MachineProvider(MachineProviderBase): """MachineProvider for use in a MAAS environment""" def __init__(self, environment_name, config): super(MachineProvider, self).__init__(environment_name, config) self.maas_client = MAASClient(config) self._storage = MAASFileStorage(config) @property def provider_type(self): return "maas" def get_file_storage(self): """Return a WebDAV-backed FileStorage abstraction.""" return self._storage @inlineCallbacks def get_constraint_set(self): """Return the set of constraints that are valid for this provider.""" cs = yield super(MachineProvider, self).get_constraint_set() cs.register("ubuntu-series", visible=False) cs.register("maas-name") cs.register_generics([]) # MaaS client errors on the provisioning agent are not reported by the # juju cli so try validating tags when the constraint is created. # Because new tags may be registered at any point, caching the list of # tags is not safe and they must be refetched every time. tags_info = yield self.maas_client.list_tags() handler = _TagHandler(tags_info) cs.register("maas-tags", converter=handler.convert, comparer=handler.compare) returnValue(cs) def get_serialization_data(self): """Get provider configuration suitable for serialization. We're overriding the base method so that we can deal with the maas-oauth data in a special way because when the environment file is parsed it writes the data in the config object as a list of its token parts. """ data = super(MachineProvider, self).get_serialization_data() data["maas-oauth"] = ":".join(self.config["maas-oauth"]) return data def start_machine(self, machine_data, master=False): """Start a MAAS machine. :param dict machine_data: desired characteristics of the new machine; it must include a "machine-id" key, and may include a "constraints" key (which is currently ignored by this provider). :param bool master: if True, machine will initialize the juju admin and run a provisioning agent, in addition to running a machine agent. 
""" return MAASLaunchMachine.launch(self, machine_data, master) @inlineCallbacks def get_machines(self, instance_ids=()): """List machines running in the provider. :param list instance_ids: ids of instances you want to get. Leave empty to list every :class:`juju.providers.maas.MAASMachine` owned by this provider. :return: a list of :class:`juju.providers.maas.MAASMachine` :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ instances = yield self.maas_client.get_nodes(instance_ids) machines = [MAASMachine.from_dict(i) for i in instances] if instance_ids: instance_ids_expected = set(instance_ids) instance_ids_returned = set( machine.instance_id for machine in machines) instance_ids_missing = ( instance_ids_expected - instance_ids_returned) instance_ids_unexpected = ( instance_ids_returned - instance_ids_expected) if instance_ids_missing: raise MachinesNotFound(sorted(instance_ids_missing)) if instance_ids_unexpected: raise ProviderError( "Machines not requested returned: %s" % ( ", ".join(sorted(instance_ids_unexpected)))) returnValue(machines) @inlineCallbacks def shutdown_machines(self, machines): """Terminate machines associated with this provider. :param machines: machines to shut down :type machines: list of :class:`juju.providers.maas.MAASMachine` :return: list of terminated :class:`juju.providers.maas.MAASMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` """ if not machines: returnValue([]) for machine in machines: if not isinstance(machine, MAASMachine): raise ProviderError( "Can only shut down MAASMachines; " "got a %r" % type(machine)) ids = [m.instance_id for m in machines] killable_machines = yield self.get_machines(ids) for machine in killable_machines: yield self.maas_client.stop_node(machine.instance_id) yield self.maas_client.release_node(machine.instance_id) returnValue(killable_machines) def open_port(self, machine, machine_id, port, protocol="tcp"): """Authorizes `port` using `protocol` for `machine`.""" log.warn("Firewalling is not yet implemented") return succeed(None) def close_port(self, machine, machine_id, port, protocol="tcp"): """Revokes `port` using `protocol` for `machine`.""" log.warn("Firewalling is not yet implemented") return succeed(None) def get_opened_ports(self, machine, machine_id): """Returns a set of open (port, protocol) pairs for `machine`.""" log.warn("Firewalling is not yet implemented") return succeed(set()) juju-0.7.orig/juju/providers/maas/tests/0000755000000000000000000000000012135220114016466 5ustar 00000000000000juju-0.7.orig/juju/providers/maas/tests/__init__.py0000644000000000000000000000000012135220114020565 0ustar 00000000000000juju-0.7.orig/juju/providers/maas/tests/test_auth.py0000644000000000000000000000417012135220114021042 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Test cases for juju.providers.maas.auth""" from juju.providers.maas.auth import _ascii_url from juju.providers.maas.tests.testing import TestCase class TestAsciiUrl(TestCase): """Tests for L{_ascii_url}.""" def assertEncode(self, expected, url): """Assert that L{_ascii_url} encodes L{url} correctly. The encoded URL must be a byte string (i.e. str in Python 2.x or bytes in Python 3.x) and be equal to L{expected}. 
""" url_encoded = _ascii_url(url) self.assertIsInstance(url_encoded, str) self.assertEqual(expected, url_encoded) def test_already_ascii_str(self): # A URL passed as a byte string with only ASCII characters is returned # unaltered. url = "http://www.example.com/some/where" self.assertEncode(url, url) def test_already_ascii_unicode_str(self): # A URL passed as a unicode string with only ASCII characters is # returned as a byte string. self.assertEncode( "http://www.example.com/some/where", u"http://www.example.com/some/where") def test_non_ascii_str(self): # An exception is raised if the URL is byte string containing # non-ASCII characters. url = "http://fran\xe7aise.example.com/some/where" self.assertRaises(UnicodeDecodeError, _ascii_url, url) def test_non_ascii_unicode_hostname(self): # A URL passed as a unicode string with non-ASCII characters in the # hostname part is returned with the hostname IDNA encoded. url = u"http://fran\xe7aise.example.com/some/where" url_expected = "http://xn--franaise-v0a.example.com/some/where" self.assertEncode(url_expected, url) def test_non_ascii_unicode_path(self): # An exception is raised if the URL is a unicode string with non-ASCII # characters in parts outside of the hostname. url = u"http://example.com/fran\xe7aise" self.assertRaises(UnicodeEncodeError, _ascii_url, url) juju-0.7.orig/juju/providers/maas/tests/test_files.py0000644000000000000000000001234212135220114021203 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Tests for juju.providers.maas.files""" import httplib from io import BytesIO import re from textwrap import dedent from twisted.internet import defer from twisted.web import error from urlparse import urlparse from juju.errors import ( FileNotFound, ProviderError, ProviderInteractionError) from juju.providers.maas.files import encode_multipart_data, MAASFileStorage from juju.providers.maas.tests.testing import CONFIG, TestCase class FakeFileStorage(object): """A fake http client to MAAS so MAASFileStorage tests can operate.""" def __init__(self, url, method='GET', postdata=None, headers=None): # Store passed data for later inspection. self.headers = headers self.url = url self.data = postdata self.action = method func = getattr(self, method.lower()) self.deferred = func() def get(self): return defer.succeed("blah") def post(self): return defer.succeed("blah") class FakeFileStorageReturning404(FakeFileStorage): def get(self): self.status = str(httplib.NOT_FOUND) return defer.fail(error.Error(self.status, "this is a 404", "")) class FakeFileStorageReturningUnexpectedError(FakeFileStorage): def get(self): return defer.fail(ZeroDivisionError("numpty")) class FakeFileStorageWithErrorOnAddingFile(FakeFileStorage): def post(self): self.status = str(httplib.UNAUTHORIZED) return defer.fail(error.Error(self.status, "this is a 401", "")) class TestMAASFileAPIFunctions(TestCase): def test_encode_multipart_data(self): # The encode_multipart_data() function should take a list of # parameters and files and encode them into a MIME # multipart/form-data suitable for posting to the MAAS server. 
params = {"op": "add", "filename": "foo"} fileObj = BytesIO(b"random data") files = {"file": fileObj} body, headers = encode_multipart_data(params, files) expected_body_regex = b"""\ --(?P.+) Content-Disposition: form-data; name="filename" foo --(?P=boundary) Content-Disposition: form-data; name="op" add --(?P=boundary) Content-Disposition: form-data; name="file"; filename="file" Content-Type: application/octet-stream random data --(?P=boundary)-- """ expected_body_regex = dedent(expected_body_regex) expected_body_regex = "\r\n".join(expected_body_regex.splitlines()) expected_body = re.compile(expected_body_regex, re.MULTILINE) self.assertRegexpMatches(body, expected_body) boundary = expected_body.match(body).group("boundary") expected_headers = { "content-length": "365", "content-type": "multipart/form-data; boundary=%s" % boundary} self.assertEqual(expected_headers, headers) class TestMAASFileStorage(TestCase): def test_get_url(self): # get_url should return the base URL plus the op params for a # file name. storage = MAASFileStorage(CONFIG) url = storage.get_url("foofile") urlparts = urlparse(url) self.assertEqual("/maas/api/1.0/files/", urlparts.path) self.assertEqual("filename=foofile&op=get", urlparts.query) def test_get_succeeds(self): self.setup_connection(MAASFileStorage, FakeFileStorage) storage = MAASFileStorage(CONFIG) d = storage.get("foo") def check(value): # The underlying code returns a StringIO but because # implementations of StringIO and cStringIO are completely # different the only reasonable thing to do here is to # check to see if the returned object has a "read" method. attr = getattr(value, "read") self.assertIsNot(None, attr) self.assertTrue(value.read) d.addCallback(check) d.addErrback(self.fail) return d def test_get_with_bad_filename(self): self.setup_connection(MAASFileStorage, FakeFileStorageReturning404) storage = MAASFileStorage(CONFIG) d = storage.get("foo") return self.assertFailure(d, FileNotFound) def test_get_with_unexpected_response(self): self.setup_connection( MAASFileStorage, FakeFileStorageReturningUnexpectedError) storage = MAASFileStorage(CONFIG) d = storage.get("foo") return self.assertFailure(d, ProviderInteractionError) def test_put_succeeds(self): self.setup_connection(MAASFileStorage, FakeFileStorage) storage = MAASFileStorage(CONFIG) fileObj = BytesIO("some data") d = storage.put("foo", fileObj) d.addCallback(self.assertTrue) d.addErrback(self.fail) return d def test_put_with_error_returned(self): self.setup_connection( MAASFileStorage, FakeFileStorageWithErrorOnAddingFile) storage = MAASFileStorage(CONFIG) fileObj = BytesIO("some data") d = storage.put("foo", fileObj) return self.assertFailure(d, ProviderError) juju-0.7.orig/juju/providers/maas/tests/test_launch.py0000644000000000000000000001470712135220114021362 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). 
"""Tests for juju.providers.maas.launch""" import json from StringIO import StringIO from twisted.internet import defer from twisted.internet.defer import inlineCallbacks, succeed from juju.errors import ProviderError from juju.lib.mocker import ANY from juju.machine import ProviderMachine from juju.providers.common.state import _STATE_FILE from juju.providers.maas import MAASMachine, MachineProvider from juju.providers.maas.launch import MAASLaunchMachine from juju.providers.maas.maas import MAASClient from juju.providers.maas.tests.testing import ( CONFIG, FakeMAASHTTPConnection, NODE_JSON, TestCase, FakeMAASHTTPConnectionWithNoAvailableNodes) LOG_NAME = "juju.maas" class FakeStorage(object): fake_state = """ zookeeper-instances: [%s] """ % NODE_JSON[0]['resource_uri'] def get(self, name): if name == _STATE_FILE: return defer.succeed(StringIO(self.fake_state)) class LaunchMachineTest(TestCase): def _get_provider(self, client_factory=None): self.setup_connection(MAASClient, client_factory) provider = MachineProvider("mymaas", CONFIG) provider._storage = FakeStorage() return provider def test_no_machine_id(self): provider = self._get_provider() d = provider.start_machine({}) self.assertFailure(d, ProviderError) def verify(error): self.assertEqual( str(error), "Cannot launch a machine without specifying a machine-id") d.addCallback(verify) return d def test_no_constraints(self): provider = self._get_provider() d = provider.start_machine({"machine-id": 99}) self.assertFailure(d, ProviderError) def verify(error): self.assertEqual( str(error), "Cannot launch a machine without specifying constraints") d.addCallback(verify) return d def test_no_available_machines(self): # Requesting a startup of an already-acquired machine should # result in a Fault being returned. provider = self._get_provider( FakeMAASHTTPConnectionWithNoAvailableNodes) machine_data = { "machine-id": "foo", "constraints": {"ubuntu-series": "splendid"}} d = provider.start_machine(machine_data) # These arbitrary fake failure values come from # FakeMAASHTTPConnectionWithNoAvailableNodes.acquire_node() def check_failure_values(failure): text = failure.message self.assertEqual("No matching node is available.", text) return self.assertFailure(d, ProviderError).addBoth(check_failure_values) @inlineCallbacks def test_actually_launch(self): # Grab a node from the pre-prepared testing data and some of its # data. target_node = NODE_JSON[0] machine_id = target_node['resource_uri'] dns_name = target_node['hostname'] # Try to start up that node using the fake MAAS. provider = self._get_provider(FakeMAASHTTPConnection) machine_data = { "machine-id": "foo", "constraints": {"ubuntu-series": "splendid"}} machine_list = yield provider.start_machine(machine_data) # Test that it returns a list containing a single MAASMachine # with the right properties. [machine] = machine_list self.assertIsInstance(machine, MAASMachine) expected = [machine_id, dns_name] actual = [machine.instance_id, machine.dns_name] self.assertEqual( actual, expected, "MAASMachine values of instance_id / dns_name don't match expected" "values") @inlineCallbacks def test_launch_with_constraints(self): # Try to launch a particular machine by its name using the # "maas-name" constraint. # Pick the "moon" node out of the test data. 
target_node = NODE_JSON[1] self.assertEqual("moon", target_node["hostname"]) machine_id = "foo" test = self class FakeWithTags(FakeMAASHTTPConnection): def acquire_node(self): test.assertIn("clawed|furry", self.data) return super(FakeWithTags, self).acquire_node() def list_tags(self): return succeed(json.dumps([ {'name': 'furry', 'definition': 'fuzzy', 'comment': ''}, {'name': 'clawed', 'definition': 'curvy', 'comment': ''} ])) provider = self._get_provider(FakeWithTags) cs = yield provider.get_constraint_set() constraints = cs.parse(["maas-name=moon", "maas-tags=clawed|furry"]) constraints = constraints.with_series('splendid') machine_data = {"machine-id": machine_id, "constraints": constraints} machine_list = yield provider.start_machine(machine_data) # Check that "moon" was started. [machine] = machine_list self.assertIsInstance(machine, MAASMachine) self.assertEqual(machine.dns_name, "moon") def test_failed_launch(self): """If an acquired node fails to start, it is released.""" # Throw away logs. self.capture_logging(LOG_NAME) mocker = self.mocker mock_client = mocker.mock() mock_provider = mocker.mock() mock_provider.maas_client mocker.result(mock_client) # These are used for generating cloud-init data. mock_provider.get_zookeeper_machines() mocker.result([ProviderMachine("zk.example.com")]) mock_provider.config mocker.result({"authorized-keys": "key comment"}) # The following operations happen in sequence. mocker.order() # First, the node is acquired. mock_client.acquire_node({"ubuntu-series": "precise"}) mocker.result({"resource_uri": "/node/123"}) # Second, the node is started. We simulate a failure at this stage. mock_client.start_node("/node/123", "precise", ANY) mocker.throw(ZeroDivisionError) # Last, the node is released. mock_client.release_node("/node/123") mocker.result({"resource_uri": "/node/123"}) mocker.replay() return self.assertFailure( MAASLaunchMachine(mock_provider, {"ubuntu-series": "precise"}).run("fred"), ZeroDivisionError) juju-0.7.orig/juju/providers/maas/tests/test_maas.py0000644000000000000000000003651112135220114021026 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Test cases for juju.providers.maas.maas""" import json from textwrap import dedent from urlparse import urlparse from twisted.internet.defer import inlineCallbacks, fail, succeed from twisted.web.error import Error from juju.errors import ProviderError from juju.providers.maas import MachineProvider from juju.providers.maas.maas import extract_system_id, MAASClient from juju.providers.maas.tests.testing import ( CONFIG, FakeMAASHTTPConnection, FakeMAASHTTPConnectionWithNoAvailableNodes, NODE_JSON, TestCase) class FakeMAASHTTPConnectionWithNoTags(FakeMAASHTTPConnection): """Fake client that raises Not Found on tag listing""" def list_tags(self): return fail(Error(404, "Not Found", "Tags? What tags?")) class TestFunctions(TestCase): def assertExtractSystemID(self, system_id, resource_uri): """ Assert that a system ID extracted from `resource_uri` matches `system_id`. """ self.assertEqual(system_id, extract_system_id(resource_uri)) def test_extract_system_id(self): """ The system ID is extracted from URIs resembling resource URIs. 
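        For example, "/api/1.0/nodes/fred/" and "/api/2.3/nodes/fred"
        both yield "fred".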
""" self.assertExtractSystemID("fred", "/api/1.0/nodes/fred/") self.assertExtractSystemID("fred", "/api/2.3/nodes/fred") self.assertExtractSystemID("fred", "/api/3.x/nodes/fred/mac") def test_extract_system_id_leading_path_elements(self): """ There can be path elements prior to the resource URI part; these are discarded. """ self.assertExtractSystemID("fred", "alice/api/1.0/nodes/fred/") self.assertExtractSystemID("fred", "bob/api/2.3/nodes/fred") self.assertExtractSystemID("fred", "/carol/api/3.x/nodes/fred/mac") def test_extract_system_id_not_a_resource_uri(self): """ `ProviderError` is raised if the argument does not resemble a resource URI. """ error = self.assertRaises( ProviderError, extract_system_id, "/hoopy/frood") self.assertEqual( "'/hoopy/frood' does not resemble a MAAS resource URI.", str(error)) def test_extract_system_id_empty_uri(self): """ `ProviderError` is raised if the URI is empty. """ error = self.assertRaises(ProviderError, extract_system_id, "") self.assertEqual( "'' does not resemble a MAAS resource URI.", str(error)) def test_extract_system_id_with_NODE_JSON(self): """ The system IDs in the sample data are extracted from the sample data's resource URIs. """ for node in NODE_JSON: self.assertExtractSystemID( node["system_id"], node["resource_uri"]) class TestMAASConnection(TestCase): def test_init_config(self): """ `MAASClient` gets its configuration from the passed config object. It ensures that the server URL has a trailing slash; this is important when constructing URLs using resource URIs. """ client = MAASClient(CONFIG) expected = [ CONFIG["maas-server"] + "/", CONFIG["maas-oauth"], CONFIG["admin-secret"]] actual = [client.url, client.oauth_info, client.admin_secret] self.assertEqual(expected, actual) def test_init_config_leaves_trailing_slash_on_url(self): """ When the MAAS server is configured with a trailing slash in the URL it is left alone (and not doubled up). 
""" config = CONFIG config["maas-server"] = "http://maas.example.com/maas/" client = MAASClient(CONFIG) self.assertEqual("http://maas.example.com/maas/", client.url) def test_oauth_sign_request(self): client = MAASClient(CONFIG) headers = {} client.oauth_sign_request( CONFIG["maas-server"], headers) auth = headers['Authorization'] match_regex = ( 'OAuth realm="", oauth_nonce="[^"]+", ' 'oauth_timestamp="[^"]+", oauth_consumer_key="maas", ' 'oauth_signature_method="PLAINTEXT", ' 'oauth_version="1.0", oauth_token="DEADBEEF1234", ' 'oauth_signature="[^"]+"') self.assertRegexpMatches(auth, match_regex) class MAASClientThatReturnsDispatchURL(MAASClient): def dispatch_query(self, request_url, *args, **kwargs): return succeed(json.dumps(request_url)) class TestMAASClientBase: def get_client(self): """Return a MAASClient with a FakeMAASHTTPConnection factory.""" log = self.setup_connection(MAASClient, FakeMAASHTTPConnection) return MAASClient(CONFIG), log class TestMAASClientWithTwisted(TestCase, TestMAASClientBase): def assertPathEqual(self, expected_path, observed_url): """Assert path of `observed_url` is equal to `expected_path`.""" observed_path = urlparse(observed_url).path self.assertEqual(expected_path, observed_path) def test_get_respects_relative_paths(self): """`MAASClient.get()` respects a relative path.""" client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual("/maas/fred", client.get("fred", ()).result) def test_get_respects_absolute_paths(self): """`MAASClient.get()` respects an absolute path.""" client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual("/fred", client.get("/fred", ()).result) def test_post_respects_relative_paths(self): """`MAASClient.post()` respects a relative path.""" client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual("/maas/fred", client.post("fred", ()).result) def test_post_respects_absolute_paths(self): """`MAASClient.post()` respects an absolute path.""" client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual("/fred", client.post("/fred", ()).result) def test_get_nodes_uses_relative_path(self): client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual( "/maas/api/1.0/nodes/", client.get_nodes().result) def test_acquire_node_uses_relative_path(self): client = MAASClientThatReturnsDispatchURL(CONFIG) self.assertPathEqual( "/maas/api/1.0/nodes/", client.acquire_node().result) @inlineCallbacks def test_get_nodes_returns_decoded_json(self): client, log = self.get_client() result = yield client.get_nodes() self.assertEqual(NODE_JSON, result) @inlineCallbacks def test_get_nodes_takes_resource_uris(self): """ System IDs are extracted from resource URIs passed into get_nodes(), where possible. Non resource URIs are passed through. 
""" client, log = self.get_client() resource_uris = ["/api/42/nodes/Ford", "/api/42/nodes/Prefect"] yield client.get_nodes(resource_uris) factory_call = next( call for call in log if call.called == "factory") self.assertEqual("GET", factory_call.kwargs["method"]) self.assertEndsWith( factory_call.kwargs["url"], '?op=list_allocated&id=Ford&id=Prefect') @inlineCallbacks def test_get_nodes_connects_with_oauth_credentials(self): client, log = self.get_client() yield client.get_nodes() [factory_call] = [ record for record in log if record.called == "factory"] self.assertIn("Authorization", factory_call.result.headers) @inlineCallbacks def test_acquire_node(self): client, log = self.get_client() maas_node_data = yield client.acquire_node() # Test that the returned data is a dict containing the node # data. self.assertIsInstance(maas_node_data, dict) self.assertIn("resource_uri", maas_node_data) @inlineCallbacks def test_acquire_node_connects_with_oauth_credentials(self): client, log = self.get_client() yield client.acquire_node() [factory_call] = [ record for record in log if record.called == "factory"] self.assertIn("Authorization", factory_call.result.headers) @inlineCallbacks def test_acquire_node_raises_correct_exception_when_CONFLICT_happens(self): log = self.setup_connection( MAASClient, FakeMAASHTTPConnectionWithNoAvailableNodes) client = MAASClient(CONFIG) e = yield self.assertFailure(client.acquire_node(), ProviderError) self.assertEqual("No matching node is available.", str(e)) @inlineCallbacks def test_list_tags_unsupported(self): """When tags are unspupported just report no valid tags""" log = self.setup_connection( MAASClient, FakeMAASHTTPConnectionWithNoTags) client = MAASClient(CONFIG) log = self.capture_logging() result = yield client.list_tags() self.assertEqual([], result) self.assertRegexpMatches(log.getvalue(), "(?m)^Listing valid maas-tags failed: ") @inlineCallbacks def test_start_node(self): resource_uri = NODE_JSON[0]["resource_uri"] series = "splendid" data = "This is test data." client, log = self.get_client() returned_data = yield client.start_node(resource_uri, series, data) self.assertEqual(returned_data, NODE_JSON[0]) # Also make sure that the connection was passed the user_data in # the POST data. expected_text = dedent( """ Content-Disposition: form-data; name="user_data" VGhpcyBpcyB0ZXN0IGRhdGEu """) expected_text = "\r\n".join(expected_text.splitlines()) [factory_call] = [ record for record in log if record.called == "factory"] self.assertIn(expected_text, factory_call.result.data) @inlineCallbacks def test_start_node_connects_with_oauth_credentials(self): client, log = self.get_client() yield client.start_node("foo", "splendid", "bar") [factory_call] = [ record for record in log if record.called == "factory"] self.assertIn("Authorization", factory_call.result.headers) @inlineCallbacks def test_stop_node(self): # stop_node should power down the node and return its json data. resource_uri = NODE_JSON[0]["resource_uri"] client, log = self.get_client() returned_data = yield client.stop_node(resource_uri) self.assertEqual(returned_data, NODE_JSON[0]) @inlineCallbacks def test_stop_node_connects_with_oauth_credentials(self): client, log = self.get_client() yield client.stop_node("foo") [factory_call] = [ record for record in log if record.called == "factory"] self.assertIn("Authorization", factory_call.result.headers) @inlineCallbacks def test_release_node(self): """C{release_node} asks MAAS to release the node back to the pool. The node's new state is returned. 
""" resource_uri = NODE_JSON[0]["resource_uri"] client, log = self.get_client() returned_data = yield client.release_node(resource_uri) self.assertEqual(returned_data, NODE_JSON[0]) @inlineCallbacks def test_default_port_http(self): # If the port is not specified in the maas-server config url # then it should default to 80 for http. client, log = self.get_client() yield client.stop_node("foo") [connect_call] = [ record for record in log if record.called == "connect"] addr, port, connection = connect_call.args self.assertEqual(80, port) self.assertEqual(addr, "example.com") self.assertTrue(isinstance(connection, FakeMAASHTTPConnection)) @inlineCallbacks def test_default_port_https(self): # If the port is not specified in the maas-server config url # then it should default to 443 for https. log = self.setup_connection(MAASClient, FakeMAASHTTPConnection) https_config = CONFIG.copy() https_config["maas-server"] = "https://example.com/maas" client = MAASClient(https_config) yield client.stop_node("foo") [connect_call] = [ record for record in log if record.called == "connect"] addr, port, connection = connect_call.args self.assertEqual(443, port) self.assertEqual(addr, "example.com") self.assertTrue(isinstance(connection, FakeMAASHTTPConnection)) class TestConstraints(TestCase, TestMAASClientBase): class fake_post: def __call__(self, uri, params): self.params_used = params def set_up_client_with_fake(self): fake = self.fake_post() client, log = self.get_client() self.patch(client, 'post', fake) return client def test_acquire_node_handles_name_constraint(self): # Ensure that the name constraint is passed through to the post # method. client = self.set_up_client_with_fake() constraints = {"maas-name": "gargleblaster"} client.acquire_node(constraints) name = client.post.params_used.get("name") self.assertEqual("gargleblaster", name) def test_acquire_node_ignores_unknown_constraints(self): # If an unknown constraint is passed it should be ignored. 
client = self.set_up_client_with_fake() constraints = {"maas-name": "zaphod", "guinness": "widget"} client.acquire_node(constraints) guinness = client.post.params_used.get("guinness") self.assertIs(None, guinness) name = client.post.params_used.get("name") self.assertEqual("zaphod", name) def test_acquire_node_handles_arch_constraint(self): client = self.set_up_client_with_fake() constraints = {"arch": "i386"} client.acquire_node(constraints) arch = client.post.params_used.get("arch") self.assertEqual("i386", arch) def test_acquire_node_handles_cpu_constraint(self): client = self.set_up_client_with_fake() constraints = {"cpu": 2.0} client.acquire_node(constraints) cpu_count = client.post.params_used.get("cpu_count") self.assertEqual("2", cpu_count) def test_acquire_node_handles_mem_constraint(self): client = self.set_up_client_with_fake() constraints = {"mem": 2048.0} client.acquire_node(constraints) mem = client.post.params_used.get("mem") self.assertEqual("2048", mem) @inlineCallbacks def test_acquire_node_handles_arbitrary_tag_query(self): mock_client = self.mocker.patch(MAASClient(CONFIG)) mock_client.list_tags() self.mocker.result(succeed([ {'name': 'red', 'definition': '', 'comment': ''}, {'name': 'white', 'definition': '', 'comment': ''}, {'name': 'blue', 'definition': '', 'comment': ''}])) self.mocker.replay() provider = MachineProvider("mymaas", CONFIG) provider.maas_client = mock_client cs = yield provider.get_constraint_set() constraints = cs.parse(["maas-tags=red&!white|blue"]) constraints = constraints.with_series('splendid') client = self.set_up_client_with_fake() client.acquire_node(constraints) tags = client.post.params_used.get("tags") self.assertEqual("red&!white|blue", tags) juju-0.7.orig/juju/providers/maas/tests/test_machine.py0000644000000000000000000000163712135220114021512 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Tests for juju.providers.maas.machine.""" from juju.providers.maas.machine import MAASMachine from juju.providers.maas.tests.testing import TestCase class MAASMachineTest(TestCase): "Tests for `MAASMachine`.""" def test_from_dict(self): """ Given a dict of node data from the MAAS API, `from_dict()` returns a `MAASMachine` instance representing that machine. """ data = { "resource_uri": "/an/example/uri", "hostname": "machine.example.com", } machine = MAASMachine.from_dict(data) self.assertEqual("/an/example/uri", machine.instance_id) self.assertEqual("machine.example.com", machine.dns_name) self.assertEqual("machine.example.com", machine.private_dns_name) juju-0.7.orig/juju/providers/maas/tests/test_provider.py0000644000000000000000000001757512135220114021750 0ustar 00000000000000# Copyright 2012 Canonical Ltd. This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). 
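# --- Self-contained sketch of the maas-tags handling tested below; it
# re-implements the convert/compare logic of provider.py's _TagHandler with
# an illustrative tag list.
_example_known_tags = ["furry", "clawed"]

def _example_convert(expression):
    # Split on the reserved separator characters, lowercase, and check
    # every word against the tags reported by the MAAS API.
    stripped = expression
    for c in (",", "&", "|", "!"):
        stripped = stripped.replace(c, " ")
    tags = set()
    for word in stripped.strip().split():
        tag = word.lower()
        if tag not in _example_known_tags:
            raise ValueError("tag %r does not exist" % (tag,))
        tags.add(tag)
    return tags

assert _example_convert("clawed, furry") == set(["clawed", "furry"])
assert _example_convert("clawed|furry") == set(["clawed", "furry"])
# Comparison uses set.issuperset, so a machine carrying extra tags still
# satisfies the constraint.
assert set.issuperset(set(["clawed", "furry"]), _example_convert("furry"))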
"""Test cases for juju.providers.maas.provider""" from twisted.internet.defer import inlineCallbacks, succeed from juju.errors import ConstraintError, MachinesNotFound, ProviderError from juju.lib.mocker import ANY from juju.lib import serializer from juju.machine.constraints import Constraints from juju.providers.maas import MachineProvider from juju.providers.maas.maas import MAASClient from juju.providers.maas.tests.testing import ( CONFIG, FakeMAASHTTPConnection, NODE_JSON, TestCase) LOG_NAME = "juju.maas" class TestMAASProvider(TestCase): def test_create(self): name = "mymaas" provider = MachineProvider(name, CONFIG) self.assertEqual(provider.environment_name, name) self.assertEqual(provider.config, CONFIG) def test_config_serialization(self): """ The environment config should be serializable so that the zookeeper agent can read it back on the master node. """ keys_path = self.makeFile("my-keys") config = { "admin-secret": "foo", "maas-server": "http://localhost:8000", "maas-oauth": ["HskWvqQmpEwNpkQLnd", "wxHZ99gBwAucKZbwUD", "323ybLTwTcENZsuDGNV6KaGkp99DjWcy"], "authorized-keys-path": keys_path} expected = { "admin-secret": "foo", "maas-server": "http://localhost:8000", "maas-oauth": "HskWvqQmpEwNpkQLnd:wxHZ99gBwAucKZbwUD:" "323ybLTwTcENZsuDGNV6KaGkp99DjWcy", "authorized-keys": "my-keys"} provider = MachineProvider("maas", config) serialized = provider.get_serialization_data() self.assertEqual(serialized, expected) @inlineCallbacks def test_open_port(self): log = self.capture_logging(LOG_NAME) yield MachineProvider("blah", CONFIG).open_port(None, None, None) self.assertIn( "Firewalling is not yet implemented", log.getvalue()) @inlineCallbacks def test_close_port(self): log = self.capture_logging(LOG_NAME) yield MachineProvider("blah", CONFIG).close_port(None, None, None) self.assertIn( "Firewalling is not yet implemented", log.getvalue()) @inlineCallbacks def test_get_opened_ports(self): log = self.capture_logging(LOG_NAME) ports = yield MachineProvider( "blah", CONFIG).get_opened_ports(None, None) self.assertEquals(ports, set()) self.assertIn( "Firewalling is not yet implemented", log.getvalue()) @inlineCallbacks def test_get_machines(self): self.setup_connection(MAASClient, FakeMAASHTTPConnection) provider = MachineProvider("blah", CONFIG) machines = yield provider.get_machines() self.assertNotEqual([], machines) def test_get_machines_too_few(self): """ :exc:`juju.errors.MachinesNotFound` is raised when a requested machine is not found by `get_machines`. The error contains a list of instance IDs that were missing. """ self.setup_connection(MAASClient, FakeMAASHTTPConnection) provider = MachineProvider("blah", CONFIG) instance_id = "/api/123/nodes/fred" d = provider.get_machines([instance_id]) d = self.assertFailure(d, MachinesNotFound) d.addCallback( lambda error: self.assertEqual( [instance_id], error.instance_ids)) return d def test_get_machines_too_many(self): """ :exc:`juju.errors.ProviderError` is raised when a machine not requested is returned by `get_machines`. 
""" self.setup_connection(MAASClient, FakeMAASHTTPConnection) provider = MachineProvider("blah", CONFIG) instance_id = NODE_JSON[1]["resource_uri"] d = provider.get_machines([instance_id]) d = self.assertFailure(d, ProviderError) d.addCallback( lambda error: self.assertEqual( "Cannot find machine: %s" % instance_id, str(error))) return d @inlineCallbacks def test_shutdown_machines(self): self.setup_connection(MAASClient, FakeMAASHTTPConnection) client = MAASClient(CONFIG) # Patch the client to demonstrate that the node is stopped and # released when shutting down. mocker = self.mocker mock_client = mocker.patch(client) mocker.order() mock_client.stop_node(ANY) mocker.result(None) mock_client.release_node(ANY) mocker.result(None) mocker.replay() provider = MachineProvider("blah", CONFIG) provider.maas_client = mock_client machines = yield provider.get_machines() machine_to_shutdown = machines[0] machines_terminated = ( yield provider.shutdown_machines([machine_to_shutdown])) self.assertEqual( [machines[0].instance_id], [machine.instance_id for machine in machines_terminated]) @inlineCallbacks def test_constraints(self): self.setup_connection(MAASClient, FakeMAASHTTPConnection) provider = MachineProvider("blah", CONFIG) cs = yield provider.get_constraint_set() self.assertEquals(cs.parse([]), { "provider-type": "maas", "ubuntu-series": None, "arch": "amd64", "cpu": 1, "mem": 512, "maas-tags": None, "maas-name": None}) self.assertEquals(cs.parse(["maas-name=totoro"]), { "provider-type": "maas", "ubuntu-series": None, "arch": "amd64", "cpu": 1, "mem": 512, "maas-tags": None, "maas-name": "totoro"}) bill = cs.parse(["maas-name=bill"]).with_series("precise") ben = cs.parse(["maas-name=ben"]).with_series("precise") nil = cs.parse([]).with_series("precise") self.assertTrue(bill.can_satisfy(bill)) self.assertTrue(bill.can_satisfy(nil)) self.assertTrue(ben.can_satisfy(ben)) self.assertTrue(ben.can_satisfy(nil)) self.assertTrue(nil.can_satisfy(nil)) self.assertFalse(nil.can_satisfy(bill)) self.assertFalse(nil.can_satisfy(ben)) self.assertFalse(ben.can_satisfy(bill)) self.assertFalse(bill.can_satisfy(ben)) @inlineCallbacks def test_constraints_on_tags(self): mock_client = self.mocker.patch(MAASClient(CONFIG)) mock_client.list_tags() self.mocker.result(succeed([ {'name': "furry", 'definition': "HAS fur", 'comment': ""}, {'name': "clawed", 'definition': "HAS claws", 'comment': ""}, ])) self.mocker.replay() provider = MachineProvider("maasiv", CONFIG) provider.maas_client = mock_client cs = yield provider.get_constraint_set() bear = cs.parse(["maas-tags=clawed, furry"]) # Incomplete constraints (no series) can't satisify self.assertFalse(bear.can_satisfy(bear)) self.assertEqual(set(["clawed", "furry"]), bear["maas-tags"]) err = self.assertRaises( ConstraintError, cs.parse, ["maas-tags=furry, bouncy"]) self.assertEqual( "Bad 'maas-tags' constraint 'furry, bouncy': " "tag 'bouncy' does not exist", str(err)) bear = bear.with_series("precise") # Ensure we can roundtrip through serialization. raw = serializer.dump(bear.data) grizzly = cs.load(serializer.load(raw)) self.assertTrue(grizzly.can_satisfy(bear)) rodent = cs.parse( ["maas-tags=clawed"]).with_series("raring") self.assertFalse(rodent.can_satisfy(grizzly)) self.assertTrue(grizzly.can_satisfy( rodent.with_series("precise"))) juju-0.7.orig/juju/providers/maas/tests/testing.py0000644000000000000000000001360212135220114020517 0ustar 00000000000000# Copyright 2012 Canonical Ltd. 
This software is licensed under the # GNU Affero General Public License version 3 (see the file LICENSE). """Helpers for testing juju.providers.maas.""" from collections import namedtuple import json from twisted.internet import defer from twisted.web.error import Error from juju.lib import testing from juju.providers.maas.auth import DEFAULT_FACTORY, MAASOAuthConnection LogRecord = namedtuple( "LogRecord", ("called", "args", "kwargs", "result")) class TestCase(testing.TestCase): def setup_connection(self, cls, factory=DEFAULT_FACTORY): """Temporarily override client and connection factories in `cls`. This is intended for use with `MAASOAuthConnection` and subclasses. The connection factory is always set to a function that logs the call but always returns `None`. This prevents tests from making external connections. Returns a list, to which events will be logged. See `LogRecord` for the form these logs take. """ assert issubclass(cls, MAASOAuthConnection), ( "setup_connection() is only suitable for use " "with MAASOAuthConnection and its subclasses.") log = [] def factory_logger(*args, **kwargs): instance = factory(*args, **kwargs) record = LogRecord("factory", args, kwargs, instance) log.append(record) return instance def connect_logger(*args, **kwargs): record = LogRecord("connect", args, kwargs, None) log.append(record) return None self.patch(cls, "factory", staticmethod(factory_logger)) self.patch(cls, "connect", staticmethod(connect_logger)) return log CONFIG = { "maas-server": "http://example.com/maas", "maas-oauth": ("maas", "DEADBEEF1234", "BEEFDEAD4321"), "admin-secret": "whatever", "authorized-keys": "ssh-rsa DEADBEEF987654321"} # All resource URIs include the maas-server path in CONFIG (above) because # that's what the MAAS server will return. NODE_JSON = [ {"macaddress_set": [ {"resource_uri": ("/maas/api/1.0/" "nodes/node-2666dd64-4671-11e1-93b8-00225f89f211" "/macs/08:34:2a:b5:8a:45/"), "mac_address": "08:34:2a:b5:8a:45"}, {"resource_uri": ("/maas/api/1.0" "/nodes/node-2666dd64-4671-11e1-93b8-00225f89f211" "/macs/dd:67:33:33:1a:bb/"), "mac_address": "dd:67:33:33:1a:bb"}], "hostname": "sun", "system_id": "node-2666dd64-4671-11e1-93b8-00225f89f211", "resource_uri": "/maas/api/1.0/nodes/node-2666dd64-4671-11e1-93b8-00225f89f211/"}, {"macaddress_set": [ {"resource_uri": ("/maas/api/1.0" "/nodes/node-29d7ad70-4671-11e1-93b8-00225f89f211" "/macs/08:05:44:c7:bb:45/"), "mac_address": "08:05:44:c7:bb:45"}], "hostname": "moon", "system_id": "node-29d7ad70-4671-11e1-93b8-00225f89f211", "resource_uri": "/maas/api/1.0/nodes/node-29d7ad70-4671-11e1-93b8-00225f89f211/"}] class FakeMAASHTTPConnection(object): """A L{MAASHTTPConnection} that fakes all connections. Its responses are based on the contents of L{NODE_JSON}. See L{MAASHTTPConnection} for more information. """ def __init__(self, url, method='GET', postdata=None, headers=None): # Store passed data for later inspection. self.headers = headers self.url = url self.data = postdata self.action = method func = getattr(self, method.lower()) self.deferred = func() def get(self): # List all nodes. if self.url.endswith("/nodes/?op=list_allocated"): return self.list_nodes() # List some nodes. elif "nodes/?op=list_allocated&id=" in self.url: return self.list_some_nodes() # List tags. elif self.url.endswith("/tags/?op=list"): return self.list_tags() # Not recognized. 
else: raise AssertionError("Unknown API method called") def list_nodes(self): return defer.succeed(json.dumps(NODE_JSON)) def list_some_nodes(self): # TODO: Ignores the URL and returns the first node in the test data. return defer.succeed(json.dumps([NODE_JSON[0]])) def post(self): # Power up a node. if "start" in self.data: return self.start_node() # Acquire a node. elif "acquire" in self.data: return self.acquire_node() # Stop a node. elif "stop" in self.data: return self.stop_node() elif "release" in self.data: return self.release_node() # Not recognized. else: raise AssertionError("Unknown API method called") def start_node(self): # Poor man's node selection. if NODE_JSON[1]["system_id"] in self.url: return defer.succeed(json.dumps(NODE_JSON[1])) return defer.succeed(json.dumps(NODE_JSON[0])) def acquire_node(self): # Implement a poor man's name constraints. if "moon" in self.data: return defer.succeed(json.dumps(NODE_JSON[1])) elif "sun" in self.data: return defer.succeed(json.dumps(NODE_JSON[0])) return defer.succeed(json.dumps(NODE_JSON[0])) def stop_node(self): return defer.succeed(json.dumps(NODE_JSON[0])) def release_node(self): return defer.succeed(json.dumps(NODE_JSON[0])) def list_tags(self): return defer.succeed("[]") class FakeMAASHTTPConnectionWithNoAvailableNodes(FakeMAASHTTPConnection): """Special version of L{FakeMAASHTTPConnection} that fakes that no nodes are available.""" def acquire_node(self): return defer.fail(Error(409, "CONFLICT", "No matching node is available.")) juju-0.7.orig/juju/providers/openstack/__init__.py0000644000000000000000000000020512135220114020500 0ustar 00000000000000"""Support for using OpenStack as a cloud provider for juju""" __all__ = ['MachineProvider'] from .provider import MachineProvider juju-0.7.orig/juju/providers/openstack/_ssl.py0000644000000000000000000000140512135220114017704 0ustar 00000000000000"""Exposes the txaws certificate validation mechanism for general twisted use Older versions of of txaws do not include the relevent code in which case an ImportError will be raised when trying to import this module. """ from OpenSSL.SSL import Error as SSLError from txaws.client.ssl import VerifyingContextFactory class WebVerifyingContextFactory(object): """Compatibility wrapper bridging twisted http client and ssl interfaces This differs from the original implementation in txaws.client.base by creating a VerifyingContextFactory using the hostname passed, rather than ignoring the value and requiring the hostname on construction. """ def getContext(self, hostname, port): return VerifyingContextFactory(hostname).getContext() juju-0.7.orig/juju/providers/openstack/client.py0000644000000000000000000004731412135220114020233 0ustar 00000000000000"""Client for talking to OpenStack APIs using twisted This is not a complete implemention of all interfaces, just what juju needs. There is a fair bit of code cleanup and feature implementation to do here still. * Must check https certificates, can use code in txaws to do this. * Must support user/password authentication with keystone as well as keypair. * Want a ProviderInteractionError subclass that can include the extra details returned in json form when something goes wrong and is raised by clients. * Request flow and json handling in general needs polish. * Need to prevent concurrent authentication attempts. * Need to limit concurrent http api requests to 4 or something reasonable, can use DeferredSemaphore for this. * Should really have authentication retry logic in case the token expires. 
* Would be nice to use Agent keep alive support that twisted 12.1.0 added. """ import base64 import json import logging import operator import urllib import twisted from twisted.internet.defer import ( Deferred, inlineCallbacks, returnValue, succeed) from twisted.internet.protocol import Protocol from twisted.internet.interfaces import IProducer from twisted.internet import reactor from twisted.web.client import Agent # Older twisted versions don't expose _newclient exceptions via client module try: from twisted.web.client import ResponseDone, ResponseFailed except ImportError: from twisted.web._newclient import ResponseDone, ResponseFailed from twisted.web.http_headers import Headers from zope.interface import implements import juju from juju import errors try: from ._ssl import SSLError, WebVerifyingContextFactory except ImportError: WebVerifyingContextFactory = None log = logging.getLogger("juju.openstack") _USER_AGENT = "juju/%s twisted/%s" % (juju.__version__, twisted.__version__) class BytestringProducer(object): """Wrap basic bytestring as a needlessly fancy twisted producer.""" implements(IProducer) def __init__(self, bytestring): self.content = bytestring self.length = len(bytestring) def pauseProducing(self): """Nothing to do if production is paused""" def startProducing(self, consumer): """Write entire contents when production starts""" consumer.write(self.content) return succeed(None) def stopProducing(self): """Nothing to do when production halts""" class ResponseReader(Protocol): """Protocol object suitable for use with Response.deliverBody The 'onConnectionLost' deferred will be called back once the connection is shut down with all the bytes from the body collected at that point. """ def __init__(self): self.onConnectionLost = Deferred() def connectionMade(self): self.data = [] def dataReceived(self, data): self.data.append(data) def connectionLost(self, reason): """Called on connection shut down Here 'reason' can be one of ResponseDone, PotentialDataLost, or ResponseFailed, but currently there is no fancy handling of these. """ self.onConnectionLost.callback("".join(self.data)) def _translate_response_failed(failure): """Turn internal twisted client failures into juju exceptions""" txerr = failure.value if isinstance(txerr, ResponseFailed): for reason in txerr.reasons: err = reason.value if isinstance(err, SSLError): raise errors.SSLVerificationError(err) return failure @inlineCallbacks def request(method, url, extra_headers=(), body=None, check_certs=False): headers = Headers({ # GZ 2012-07-03: Previously passed Accept: application/json header # here, but not always the right thing. Bad for swift? 
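        # Note that twisted's Headers maps each header name to a *list* of
        # raw values, hence the single-element list here and in the
        # setRawHeaders() calls below.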
"User-Agent": [_USER_AGENT], }) for header, value in extra_headers: headers.setRawHeaders(header, [value]) if body is not None: if isinstance(body, dict): content_type = "application/json" body = json.dumps(body) elif isinstance(body, str): content_type = "application/octet-stream" headers.setRawHeaders("Content-Type", [content_type]) body = BytestringProducer(body) kwargs = {} if check_certs: kwargs['contextFactory'] = WebVerifyingContextFactory() agent = Agent(reactor, **kwargs) response = yield agent.request(method, url, headers, body).addErrback( _translate_response_failed) if response.length == 0: returnValue((response, "")) reader = ResponseReader() response.deliverBody(reader) body = yield reader.onConnectionLost returnValue((response, body)) class _OpenStackClient(object): def __init__(self, credentials, check_certs): self.credentials = credentials if check_certs and WebVerifyingContextFactory is None: raise errors.SSLVerificationUnsupported() self.check_certs = check_certs log.debug("openstack: using auth-mode %r with %s", credentials.mode, credentials.url) if credentials.mode == "keypair": self.authenticate = self.authenticate_v2_keypair elif credentials.mode == "legacy": self.authenticate = self.authenticate_v1 elif credentials.mode == "rax": self.authenticate = self.authenticate_rax_auth else: self.authenticate = self.authenticate_v2_userpass self.token = None def _make_url(self, service, parts): """Form full url from path components to service endpoint url""" # GZ 2012-07-03: Need to ensure either services is populated or catch # error here and propogate as one useful for users. endpoint = self.services[service] if not endpoint[-1] == "/": endpoint += "/" if isinstance(parts, str): return endpoint + parts quoted_parts = [] for part in parts: if not isinstance(part, str): part = urllib.quote(unicode(part).encode("utf-8"), "/~") quoted_parts.append(part) url = endpoint + "/".join(quoted_parts) log.debug('access %s @ %s', service, url) return url @inlineCallbacks def authenticate_v1(self): deferred = request( "GET", self.credentials.url, extra_headers=[ ("X-Auth-User", self.credentials.username), ("X-Auth-Key", self.credentials.access_key), ], check_certs=self.check_certs, ) response, body = yield deferred if response.code != 204: raise errors.ProviderInteractionError("Failed to authenticate") # TODO: check response has right headers [nova_url] = response.headers.getRawHeaders("X-Server-Management-Url") if self.check_certs: self._warn_if_endpoint_insecure("compute", nova_url) self.nova_url = nova_url self.services = {"compute": nova_url} # No swift_url set as that is not supported [self.token] = response.headers.getRawHeaders("X-Auth-Token") def authenticate_v2_keypair(self): deferred = request( "POST", self.credentials.url + "tokens", body={"auth": { "apiAccessKeyCredentials": { "accessKey": self.credentials.access_key, "secretKey": self.credentials.secret_key, }, "tenantName": self.credentials.project_name, }}, check_certs=self.check_certs, ) return deferred.addCallback(self._handle_v2_auth) def authenticate_v2_userpass(self): deferred = request( "POST", self.credentials.url + "tokens", body={"auth": { "passwordCredentials": { "username": self.credentials.username, "password": self.credentials.password, }, "tenantName": self.credentials.project_name, }}, check_certs=self.check_certs, ) return deferred.addCallback(self._handle_v2_auth) def authenticate_rax_auth(self): # openstack is not a product, but a kit for making snowflakes. 
deferred = request( "POST", self.credentials.url + "tokens", body={"auth": { "RAX-KSKEY:apiKeyCredentials": { "username": self.credentials.username, "apiKey": self.credentials.password, "tenantName": self.credentials.project_name}}}, check_certs=self.check_certs, ) return deferred.addCallback(self._handle_v2_auth) def _handle_v2_auth(self, result): access_details = self._json(result, 200, 'access') token_details = access_details["token"] # Decoded json uses unicode for all string values, but that can upset # twisted when serialising headers later. Really should encode at that # point, but as keystone should only give ascii tokens a cast will do. self.token = token_details["id"].encode("ascii") # TODO: care about token_details["expires"] # Don't need to we're not preserving tokens. services = [] log.debug("openstack: authenticated til %r", token_details['expires']) region = self.credentials.region # HP cloud uses both az-1.region-a.geo-1 and region-a.geo-1 forms, not # clear what should be in config or what the correct logic is. if region is not None: base_region = region.split('.', 1)[-1] # GZ: 2012-07-03: Should split extraction of endpoints, add logging, # and make more robust. for catalog in access_details["serviceCatalog"]: for endpoint in catalog["endpoints"]: if region is not None and region != endpoint["region"]: if base_region != endpoint["region"]: continue services.append((catalog["type"], str(endpoint["publicURL"]))) break if not services: raise errors.ProviderInteractionError("No suitable endpoints") self.services = dict(services) if self.check_certs: for service in ("compute", "object-store"): if service in self.services: self._warn_if_endpoint_insecure(service, self.services[service]) def _warn_if_endpoint_insecure(self, service_type, url): # XXX: Should only warn per host ideally, otherwise is just annoying if not url.startswith("https:"): log.warn("OpenStack %s service not using secure transport" % service_type) def is_authenticated(self): return self.token is not None @inlineCallbacks def authed_request(self, method, url, headers=None, body=None): log.debug("openstack: %s %r", method, url) request_headers = [("X-Auth-Token", self.token)] if headers: request_headers += headers response, body = yield request(method, url, request_headers, body, self.check_certs) log.debug("openstack: %d %r", response.code, body) # OpenStack returns 401 when using an expired token; simply # retry after reauthenticating if response.code == 401: self.token = None raise errors.ProviderInteractionError( "Need to reauthenticate by retrying") returnValue((response, body)) def _empty(self, result, code): response, body = result if response.code != code: # XXX: This is a deeply unhelpful error, need context from request raise errors.ProviderInteractionError("Unexpected %d: %r" % ( response.code, body)) def _json(self, result, code, root=None): response, body = result if response.code != code: raise errors.ProviderInteractionError("Unexpected %d: %r" % ( response.code, body)) type_headers = response.headers.getRawHeaders("Content-Type") found = False for h in type_headers: if 'application/json' in h: found = True if not found: raise errors.ProviderInteractionError( "Expected json response got %s" % type_headers) data = json.loads(body) if root is not None: return data[root] return data class _NovaClient(object): def __init__(self, client): self._client = client @inlineCallbacks def request(self, method, parts, headers=None, body=None): if not self._client.is_authenticated(): yield 
self._client.authenticate() url = self._client._make_url("compute", parts) result = yield self._client.authed_request(method, url, headers, body) returnValue(result) def delete(self, parts, code=202): deferred = self.request("DELETE", parts) return deferred.addCallback(self._client._empty, code) def get(self, parts, root, code=200): deferred = self.request("GET", parts) return deferred.addCallback(self._client._json, code, root) def post(self, parts, jsonobj, root, code=200): deferred = self.request("POST", parts, None, jsonobj) # XXX return deferred.addCallback(self._client._json, code, root) def post_no_data(self, parts, root, code=200): deferred = self.request("POST", parts, None, "") # XXX return deferred.addCallback(self._client._json, code, root) def post_no_result(self, parts, jsonobj, code=202): deferred = self.request("POST", parts, None, jsonobj) # XXX return deferred.addCallback(self._client._empty, code) def list_flavors(self): return self.get("flavors", "flavors") def list_flavor_details(self): return self.get(["flavors", "detail"], "flavors") def get_server(self, server_id): return self.get(["servers", server_id], "server") def list_servers(self): return self.get(["servers"], "servers") def list_servers_detail(self): return self.get(["servers", "detail"], "servers") def delete_server(self, server_id): return self.delete(["servers", server_id], code=204) def run_server(self, image_id, flavor_id, name, security_group_names=None, user_data=None, scheduler_hints=None): server = { 'name': name, 'flavorRef': flavor_id, 'imageRef': image_id, } post_dict = {"server": server} if user_data is not None: server["user_data"] = base64.b64encode(user_data) if security_group_names is not None: server["security_groups"] = [{'name': n} for n in security_group_names] if scheduler_hints is not None: post_dict["OS-SCH-HNT:scheduler_hints"] = scheduler_hints return self.post(["servers"], post_dict, root="server", code=202) def get_server_security_groups(self, server_id): d = self.get( ["servers", server_id, "os-security-groups"], root="security_groups") # 2012-07-12: kt Workaround lack of this api in HP cloud def _get_group_fallback(f): log.debug("Falling back to older/diablo sec groups api") return self.get_server(server_id).addCallback( operator.itemgetter("security_groups")) d.addErrback(_get_group_fallback) return d def get_security_group_details(self, group_id): return self.get(["os-security-groups", group_id], "security_group") def list_security_groups(self): return self.get(["os-security-groups"], "security_groups") def create_security_group(self, name, description): return self.post("os-security-groups", { 'security_group': { 'name': name, 'description': description, } }, root="security_group") def delete_security_group(self, group_id): return self.delete(["os-security-groups", group_id]) def add_security_group_rule(self, parent_group_id, **kwargs): rule = {'parent_group_id': parent_group_id} using_group = "group_id" in kwargs if using_group: rule['group_id'] = kwargs['group_id'] elif "cidr" in kwargs: rule['cidr'] = kwargs['cidr'] if not using_group or "ip_protocol" in kwargs: rule['ip_protocol'] = kwargs['ip_protocol'] rule['from_port'] = kwargs['from_port'] rule['to_port'] = kwargs['to_port'] return self.post("os-security-group-rules", {'security_group_rule': rule}, root="security_group_rule") def delete_security_group_rule(self, rule_id): return self.delete(["os-security-group-rules", rule_id]) def add_server_security_group(self, server_id, group_name): return 
self.post_no_result(["servers", server_id, "action"], { "addSecurityGroup": { "name": group_name, }}) def remove_server_security_group(self, server_id, group_name): return self.post_no_result(["servers", server_id, "action"], { "removeSecurityGroup": { "name": group_name, }}) def list_floating_ips(self): return self.get(["os-floating-ips"], "floating_ips") def get_floating_ip(self, ip_id): return self.get(["os-floating-ips", ip_id], "floating_ip") def allocate_floating_ip(self): return self.post_no_data(["os-floating-ips"], "floating_ip") def delete_floating_ip(self, ip_id): return self.delete(["os-floating-ips", ip_id]) def add_floating_ip(self, server_id, addr): return self.post_no_result(["servers", server_id, "action"], { 'addFloatingIp': { 'address': addr, }}) def remove_floating_ip(self, server_id, addr): return self.post_no_result(["servers", server_id, "action"], { 'removeFloatingIp': { 'address': addr, }}) class _SwiftClient(object): def __init__(self, client): self._client = client @inlineCallbacks def request(self, method, parts, headers=None, body=None): if not self._client.is_authenticated(): yield self._client.authenticate() url = self._client._make_url("object-store", parts) result = yield self._client.authed_request(method, url, headers, body) returnValue(result) def public_object_url(self, container, object_name): if not self._client.is_authenticated(): raise ValueError("Need to have authenticated to get object url") return self._client._make_url("object-store", [container, object_name]) def put_container(self, container_name): # Juju expects there to be a (semi) public url for some objects. This # could probably be more restrictive or placed in a separate container # with some refactoring, but for now just make everything public. read_acl_header = ("X-Container-Read", ".r:*") return self.request("PUT", [container_name], [read_acl_header], "") def delete_container(self, container_name): return self.request("DELETE", [container_name]) def head_object(self, container, object_name): return self.request("HEAD", [container, object_name]) def get_object(self, container, object_name): return self.request("GET", [container, object_name]) def delete_object(self, container, object_name): return self.request("DELETE", [container, object_name]) def put_object(self, container, object_name, bytestring): return self.request("PUT", [container, object_name], None, bytestring) juju-0.7.orig/juju/providers/openstack/credentials.py0000644000000000000000000001003012135220114021243 0ustar 00000000000000"""Handling of the credentials needed to authenticate with the OpenStack api Supports the several different sets of credentials that different auth modes need: * 'legacy' is built into nova and deprecated in favour of using keystone * 'keypair' works with the HP public cloud implementation of keystone * 'userpass' is the way keystone seems to want to do authentication generally """ import os class OpenStackCredentials(object): """Encapsulation of credentials used to authenticate with OpenStack""" _config_vars = { 'auth-url': ("OS_AUTH_URL", "NOVA_URL"), 'username': ("OS_USERNAME", "NOVA_USERNAME"), 'password': ("OS_PASSWORD", "NOVA_PASSWORD"), # HP exposes both a numeric id and a name for tenants, passed back # as tenantId and tenantName. Use the name only for simplicity.
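        # (Illustrative aside, not part of the original mapping: as _get()
        # below shows, an explicit config value always wins, and otherwise
        # each key falls back to its environment variables in the order
        # listed. So with OS_TENANT_NAME=demo set and no 'project-name' in
        # config, project_name resolves to "demo".)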
'project-name': ("OS_TENANT_NAME", "NOVA_PROJECT_NAME", "NOVA_PROJECT_ID"), 'region': ("OS_REGION_NAME", "NOVA_REGION_NAME", "NOVA_REGION"), # The key variables don't seem to have modern OS_ prefixed aliases 'access-key': ("NOVA_API_KEY",), 'secret-key': ("EC2_SECRET_KEY", "AWS_SECRET_ACCESS_KEY"), # A usable mode can normally be guessed, but may be configured 'auth-mode': (), } # Really, legacy auth could pass in the project id and keystone doesn't # require it, but this is what the client expects for now. _modes = { 'userpass': ('username', 'password', 'project-name'), 'rax': ('username', 'password', 'project-name'), 'keypair': ('access-key', 'secret-key', 'project-name'), 'legacy': ('username', 'access-key'), } _version_to_mode = { "v2.0": 'userpass', "v1.1": 'legacy', "v1.0": 'legacy', } def __init__(self, creds_dict): url = creds_dict.get("auth-url") if not url: raise ValueError("Missing config 'auth-url' for OpenStack api") mode = creds_dict.get("auth-mode") if mode is None: mode = self._guess_auth_mode(url) elif mode not in self._modes: # The juju.environment.config layer should raise a pretty error raise ValueError("Unknown 'auth-mode' value %r" % (mode,)) missing_keys = [key for key in self._modes[mode] if not creds_dict.get(key)] if missing_keys: raise ValueError("Missing config %s required for %s auth" % ( ", ".join(map(repr, missing_keys)), mode)) self.url = url self.mode = mode for key in self._config_vars: if key not in ("auth-url", "auth-mode"): setattr(self, key.replace("-", "_"), creds_dict.get(key)) @classmethod def _guess_auth_mode(cls, url): """Pick a mode based on the version at the end of `url` given""" final_part = url.rstrip("/").rsplit("/", 1)[-1] try: return cls._version_to_mode[final_part] except KeyError: raise ValueError( "Missing config 'auth-mode' as unknown version" " in 'auth-url' given: " + url) @classmethod def _get(cls, config, key): """Retrieve `key` from `config` if present or in matching envvars""" val = config.get(key) if val is None: for env_key in cls._config_vars[key]: val = os.environ.get(env_key) if val: return val return val @classmethod def from_environment(cls, config): """Create credentials from `config` falling back to environment""" return cls(dict((k, cls._get(config, k)) for k in cls._config_vars)) def set_config_defaults(self, data): """Populate `data` with these credentials where not already set""" for key in self._config_vars: if key not in data: val = getattr(self, key.replace("auth-", "").replace("-", "_")) if val is not None: data[key] = val juju-0.7.orig/juju/providers/openstack/files.py0000644000000000000000000000637612135220114020052 0ustar 00000000000000"""OpenStack provider file storage on Swift Basically a limited wrapper around the underlying api calls, with a few added quirks. There's some specific handling for 404 responses: GET raises FileNotFound, and PUT attempts to create the container then retries. Expects file-like objects for data. This isn't terribly useful as it doesn't fit well with the twisted model for chunking http data and most objects are small anyway. The main complication is the get_url method, which requires the generation of a link that can be used by any http client to fetch a particular object without authentication.
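As an illustration (hypothetical names; the real container comes from the
'control-bucket' config value), such a link is simply the object-store
endpoint with the container and object name appended:

    storage = FileStorage(swift, "my-control-bucket")
    url = storage.get_url("juju_master_id")
    # -> "<object-store endpoint>/my-control-bucket/juju_master_id"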
Generating such a link is not possible in the general case in swift, however there are a few ways around the problem: * The stub nova-objectstore service doesn't check access anyway, see lp:947374 * The tempurl swift middleware if enabled can do this with some advance setup * A container can have a more permissive ACL applying to all objects within. All of these require touching the network to at least get the swift endpoint from the identity service, which is problematic as get_url doesn't return a deferred. In practice it's only used after putting an object however, so just raising if the client has not yet been authenticated is good enough. """ from cStringIO import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from juju import errors class FileStorage(object): """Swift-backed :class:`FileStorage` abstraction""" def __init__(self, swift, container): self._swift = swift self._container = container def get_url(self, name): return self._swift.public_object_url(self._container, name) @inlineCallbacks def get(self, name): """Get a file object from Swift. :param unicode name: Swift object key for the desired file :return: an open file object :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.FileNotFound` if the file doesn't exist """ response, body = yield self._swift.get_object(self._container, name) if response.code == 404: raise errors.FileNotFound(name) if response.code != 200: raise errors.ProviderInteractionError( "Couldn't fetch object %r %r" % (response.code, body)) returnValue(StringIO(body)) @inlineCallbacks def put(self, remote_path, file_object): """Upload a file to Swift. :param unicode remote_path: key on which to store the content :param file_object: open file object containing the content :rtype: :class:`twisted.internet.defer.Deferred` """ data = file_object.read() response, body = yield self._swift.put_object(self._container, remote_path, data) if response.code == 404: response, body = yield self._swift.put_container(self._container) if response.code != 201: raise errors.ProviderInteractionError( "Couldn't create container %r" % (self._container,)) response, body = yield self._swift.put_object(self._container, remote_path, data) if response.code != 201: raise errors.ProviderInteractionError( "Couldn't create object %r %r" % (response.code, remote_path)) juju-0.7.orig/juju/providers/openstack/launch.py0000644000000000000000000001513012135220114020216 0ustar 00000000000000"""Helpers for creating servers catered to Juju needs with OpenStack Specific notes: * Expects a public address for each machine, as that's what EC2 promises, would be good to weaken this requirement. * Creates a per-machine security group in case need to poke ports open later, as EC2 doesn't support changing groups later, but OpenStack does. * Needs to tell cloud-init how to get the server id; in essex the metadata service gives only the i-08x style id, so cheat and use filestorage. * Config must specify an image id, as there's no standard way of looking up from distro series across clouds yet in essex. * Would be really nice to put the service name in the server name, but it's not passed down into LaunchMachine currently. There are some race issues with the current setup: * Storing of server id needs to complete before cloud-init does the lookup, extremely unlikely (two successive api calls vs instance boot and running). * For environments configured with floating-ips,
a floating ip may be assigned to another server before the current one finishes launching and can use the available ip itself. """ from cStringIO import StringIO from twisted.internet.defer import inlineCallbacks, returnValue from juju.errors import ProviderError, ProviderInteractionError from juju.lib import twistutils from juju.providers.common.launch import LaunchMachine from juju.providers.common.instance_type import TypeSolver, InstanceType from .machine import machine_from_instance, get_server_status from .client import log class NovaLaunchMachine(LaunchMachine): """OpenStack Nova operation for creating a server""" _DELAY_FOR_ADDRESSES = 5 # seconds @inlineCallbacks def start_machine(self, machine_id, zookeepers): """Actually launch an instance on Nova. :param str machine_id: the juju machine ID to assign :param zookeepers: the machines currently running zookeeper, to which the new machine will need to connect :type zookeepers: list of :class:`juju.providers.openstack.machine.NovaProviderMachine` :return: a single-entry list containing a :class:`juju.providers.openstack.machine.NovaProviderMachine` representing the newly-launched machine :rtype: :class:`twisted.internet.defer.Deferred` """ cloud_init = self._create_cloud_init(machine_id, zookeepers) cloud_init.set_provider_type(self._provider.provider_type) filestorage = self._provider.get_file_storage() # Only the master is required to get its own instance id like this. if self._master: id_name = "juju_master_id" # If Swift does not have a valid certificate, by default curl will # print a complaint to stderr and nothing to stdout. This makes # cloud-init think the instance id is an empty string. Work around # by allowing insecure connections if https certs are unchecked. if self._provider._check_certs: curl = "curl" else: curl = "curl -k" cloud_init.set_instance_id_accessor("$(%s %s)" % ( curl, filestorage.get_url(id_name),)) user_data = cloud_init.render() # For openstack deployments, really need image id configured as there # are no standards to provide a fallback value. image_id = self._provider.config.get("default-image-id") if image_id is None: raise ProviderError("Need to specify a default-image-id") security_groups = ( yield self._provider.port_manager.ensure_groups(machine_id)) # Find an appropriate instance type for the given constraints. Warn # if the deprecated default-instance-type option is being used. flavor_name = self._provider.config.get("default-instance-type") if flavor_name is not None: log.warning( "default-instance-type is deprecated, use cli --constraints") flavors = yield self._provider.nova.list_flavor_details() flavor_id = _solve_flavor(self._constraints, flavor_name, flavors) hints = self._constraints["os-scheduler-hints"] server = yield self._provider.nova.run_server( name="juju %s instance %s" % (self._provider.environment_name, machine_id,), image_id=image_id, flavor_id=flavor_id, security_group_names=security_groups, user_data=user_data, scheduler_hints=hints, ) if self._master: yield filestorage.put(id_name, StringIO(str(server['id']))) # For private clouds allow an option of attaching public # floating ips to all the machines. None of the extant public # clouds need this. if self._provider.config.get('use-floating-ip'): # Not possible to attach a floating ip to a newly booted # server, must wait for networking to be ready when some # kind of address exists.
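            # (Annotation: the loop below polls get_server, sleeping
            # _DELAY_FOR_ADDRESSES seconds between attempts, until any
            # address is reported or the server leaves "pending"; as the
            # module docstring notes, another server may claim the free
            # floating ip during this wait -- a known race in this setup.)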
while not server.get('addresses'): status = get_server_status(server) if status != "pending": raise ProviderInteractionError( "Server out of pending status " "without addresses set: %r" % server) # Bad, bad place to be doing a wait loop directly yield twistutils.sleep(self._DELAY_FOR_ADDRESSES) log.debug("Waited for %d seconds for networking on server %r", self._DELAY_FOR_ADDRESSES, server['id']) server = yield self._provider.nova.get_server(server['id']) yield _assign_floating_ip(self._provider, server['id']) returnValue([machine_from_instance(server)]) def _solve_flavor(constraints, flavor_name, flavors): flavor_map, flavor_types = {}, {} for f in flavors: flavor_map[f['name']] = f['id'] # Arch needs to be part of nova details. flavor_types[f['name']] = InstanceType( 'amd64', f['vcpus'], f['ram']) solver = TypeSolver(flavor_types) flavor_name = solver.run(constraints) if flavor_name not in flavor_map: raise ProviderError("Unknown instance type given: %r" % (flavor_name,)) return flavor_map[flavor_name] @inlineCallbacks def _assign_floating_ip(provider, server_id): floating_ips = yield provider.nova.list_floating_ips() for floating_ip in floating_ips: if floating_ip['instance_id'] is None: break else: floating_ip = yield provider.nova.allocate_floating_ip() yield provider.nova.add_floating_ip(server_id, floating_ip['ip']) juju-0.7.orig/juju/providers/openstack/machine.py0000644000000000000000000000427712135220114020360 0ustar 00000000000000"""Helpers for mapping Nova api results to the juju machine abstraction""" from juju.machine import ProviderMachine _SERVER_STATE_MAP = { None: 'pending', 'ACTIVE': 'running', 'BUILD': 'pending', 'BUILDING': 'pending', 'REBUILDING': 'pending', 'DELETED': 'terminated', 'STOPPED': 'stopped', } class NovaProviderMachine(ProviderMachine): """Nova-specific ProviderMachine implementation""" def get_server_status(server): status = server.get('status') if status is not None and "(" in status: status = status.split("(", 1)[0] return _SERVER_STATE_MAP.get(status, 'unknown') def get_server_addresses(server): private_addr = public_addr = None addresses = server.get("addresses") if addresses is not None: # Issue with some setups, have custom network only, use as private network = () for name in sorted(addresses): if name not in ("private", "public"): network = addresses[name] if network: break network = addresses.get("private", network) for address in network: if address.get("version", 0) == 4: private_addr = address['addr'] break # Issue with HP cloud, public address is second in private network network = addresses.get("public", network[1:]) for address in network: if address.get("version", 0) == 4: public_addr = address['addr'] return private_addr, public_addr def machine_from_instance(server): """Create a :class:`NovaProviderMachine` from a server details dict :param server: a dictionary of server info as given by the Nova api :return: a matching :class:`NovaProviderMachine` """ private_addr, public_addr = get_server_addresses(server) # Juju assumes it always needs a public address and loops waiting for one. # In fact a private address is generally fine provided it can be sshed to.
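    # (Illustrative example of the fallback below: a server reporting only
    # {"private": [{"version": 4, "addr": "10.0.0.2"}]} comes back from
    # get_server_addresses as ("10.0.0.2", None), and the private address
    # is then reused as the public one.)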
if public_addr is None and private_addr is not None: public_addr = private_addr return NovaProviderMachine( server['id'], public_addr, private_addr, get_server_status(server)) juju-0.7.orig/juju/providers/openstack/ports.py0000644000000000000000000001770712135220114020127 0ustar 00000000000000"""Manage port access to machines using Nova security group extension The mechanism is based on the existing scheme used by the EC2 provider. Each machine is launched with two security groups, a juju group that is shared across all machines and allows access to 22/tcp for ssh, and a machine group just for that server so ports can be opened and closed on an individual level. There is some mismatch between the port hole poking and security group models: * A new security group is created for every machine * Rules are not shared between service units but set up again each launch * Support for port ranges is not exposed The Nova security group module follows the EC2 example quite closely, but as of Essex it's still under contrib and has a number of quirks: * To run a server with, or add or remove groups from a server, 'name' is used * To get details, delete, or add or remove rules from a group, 'id' is needed The only way of getting 'id' if 'name' is known is by listing all groups then looking at the details of the one with the matching name. """ from twisted.internet.defer import inlineCallbacks, returnValue from juju import errors from .client import log class NovaPortManager(object): """Mapping of port-based juju interface to Nova security group actions There is the potential to record some state on the instance to reduce api round-trips when, for instance, launching multiple machines at once, but for now every operation queries the api directly. """ def __init__(self, nova, environment_name): self.nova = nova self.tag = environment_name def _juju_group_name(self): return "juju-%s" % (self.tag,) def _machine_group_name(self, machine_id): return "juju-%s-%s" % (self.tag, machine_id) @inlineCallbacks def _get_machine_group(self, machine, machine_id): """Get details of the machine specific security group As only the name of the group can be derived, this means listing every security group for that server and seeing which has a matching name.
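        The group 'id' needed for rule operations is then read from the
        matching entry, falling back to get_security_group_details on
        deployments (diablo/HP cloud) whose listing omits the 'rules' key.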
""" group_name = self._machine_group_name(machine_id) server_id = machine.instance_id groups = yield self.nova.get_server_security_groups(server_id) found = False for group in groups: if group['name'] == group_name: found = group break # 2012/12/19: kt diablo/hpcloud compatibility if found and not 'rules' in group: group = yield self.nova.get_security_group_details(group['id']) found = True if found: returnValue(group) raise errors.ProviderInteractionError( "Missing security group %r for machine %r" % (group_name, server_id)) @inlineCallbacks def open_port(self, machine, machine_id, port, protocol="tcp"): """Allow access to a port for the given machine only""" group = yield self._get_machine_group(machine, machine_id) yield self.nova.add_security_group_rule( group['id'], ip_protocol=protocol, from_port=port, to_port=port) log.debug("Opened %s/%s on machine %r", port, protocol, machine.instance_id) @inlineCallbacks def close_port(self, machine, machine_id, port, protocol="tcp"): """Revoke access to a port for the given machine only""" group = yield self._get_machine_group(machine, machine_id) for rule in group["rules"]: if (port == rule["from_port"] == rule["to_port"] and rule["ip_protocol"] == protocol): yield self.nova.delete_security_group_rule(rule["id"]) log.debug("Closed %s/%s on machine %r", port, protocol, machine.instance_id) return raise errors.ProviderInteractionError( "Couldn't close unopened %s/%s on machine %r", port, protocol, machine.instance_id) @inlineCallbacks def get_opened_ports(self, machine, machine_id): """Get a set of opened port/protocol pairs for a machine""" group = yield self._get_machine_group(machine, machine_id) opened_ports = set() for rule in group.get("rules", []): if not rule.get("group"): protocol = rule["ip_protocol"] from_port = rule["from_port"] to_port = rule["to_port"] if from_port == to_port: opened_ports.add((from_port, protocol)) returnValue(opened_ports) @inlineCallbacks def ensure_groups(self, machine_id): """Get names of the security groups for a machine, creating if needed If the juju group already exists, it is assumed to be correctly set up. If the machine group already exists, it is deleted then recreated. 
""" security_groups = yield self.nova.list_security_groups() groups_by_name = dict((sg['name'], sg['id']) for sg in security_groups) juju_group = self._juju_group_name() if not juju_group in groups_by_name: log.debug("Creating juju security group %s", juju_group) sg = yield self.nova.create_security_group( juju_group, "juju group for %s" % (self.tag,)) # Add external ssh access yield self.nova.add_security_group_rule( sg['id'], ip_protocol="tcp", from_port=22, to_port=22) # Add internal group access yield self.nova.add_security_group_rule( parent_group_id=sg['id'], group_id=sg['id'], ip_protocol="tcp", from_port=1, to_port=65535) yield self.nova.add_security_group_rule( parent_group_id=sg['id'], group_id=sg['id'], ip_protocol="udp", from_port=1, to_port=65535) machine_group = self._machine_group_name(machine_id) if machine_group in groups_by_name: yield self.nova.delete_security_group( groups_by_name[machine_group]) log.debug("Creating machine security group %s", machine_group) yield self.nova.create_security_group( machine_group, "juju group for %s machine %s" % (self.tag, machine_id)) returnValue([juju_group, machine_group]) @inlineCallbacks def get_machine_groups(self, machine, with_juju_group=False): try: ret = yield self.get_machine_groups_pure(machine, with_juju_group) except errors.ProviderInteractionError, e: # XXX: Need to wire up treatment of 500s properly in client if getattr(e, "kind", None) == "computeError": try: yield self.nova.get_server(machine.instance_id) except errors.ProviderInteractionError, e: pass # just rebinding e if True or getattr(e, "kind", None) == "itemNotFound": returnValue(None) raise returnValue(ret) @inlineCallbacks def get_machine_groups_pure(self, machine, with_juju_group=False): server_id = machine.instance_id groups = yield self.nova.get_server_security_groups(server_id) juju_group = self._juju_group_name() groups_by_name = dict( (g['name'], g['id']) for g in groups if g['name'].startswith(juju_group)) if juju_group not in groups_by_name: # Not a juju machine, shouldn't touch returnValue(None) if not with_juju_group: groups_by_name.pop(juju_group) # else assumption: only one remaining group, is the machine group returnValue(groups_by_name) @inlineCallbacks def delete_juju_group(self): security_groups = yield self.nova.list_security_groups() juju_group = self._juju_group_name() for group in security_groups: if group['name'] == juju_group: break else: log.debug("Can't delete missing juju group") return yield self.nova.delete_security_group(group['id']) juju-0.7.orig/juju/providers/openstack/provider.py0000644000000000000000000002273312135220114020605 0ustar 00000000000000"""Provider interface implementation for OpenStack backend Much of the logic is implemented in sibling modules, but the overall model is exposed here. Still in need of work here: * Implement constraints using the Nova flavors api. This will always mean an api call rather than hard coding values as is done with EC2. Things like memory and cpu count are broadly equivalent, but there's no guarentee what details are exposed and ranking by price will generally not be an option. """ import logging import json from twisted.internet.defer import inlineCallbacks, returnValue from juju import errors from juju.lib.twistutils import gather_results from juju.lib.cache import CachedValue from juju.providers.common.base import MachineProviderBase from .client import _OpenStackClient, _NovaClient, _SwiftClient from . 
import credentials from .files import FileStorage from .launch import NovaLaunchMachine from .machine import ( NovaProviderMachine, get_server_status, machine_from_instance ) from .ports import NovaPortManager log = logging.getLogger("juju.openstack") def _convert_scheduler_hints(string): """Check constraint value suitable for Nova SchedulerHints extension""" obj = json.loads(string) if not isinstance(obj, dict): raise ValueError("Need json object of key/value strings") # GZ 2012-10-26: Does nova have other restrictions on what it will accept? return obj class MachineProvider(MachineProviderBase): """MachineProvider for use in an OpenStack environment""" Credentials = credentials.OpenStackCredentials def __init__(self, environment_name, config): super(MachineProvider, self).__init__(environment_name, config) self.credentials = self.Credentials.from_environment(config) self._check_certs = self.config.get("ssl-hostname-verification", True) if not self._check_certs: log.warn("Verification of HTTPS certificates is disabled for this" " environment.\nSet 'ssl-hostname-verification' to ensure" " secure communication.") elif not self.credentials.url.startswith("https:"): log.warn("OpenStack identity service not using secure transport") client = _OpenStackClient(self.credentials, self._check_certs) self.nova = _NovaClient(client) self.swift = _SwiftClient(client) self.port_manager = NovaPortManager(self.nova, environment_name) # constraints are good for several hrs self._cached_constraint = CachedValue(3600 * 12) @property def provider_type(self): return "openstack" def get_serialization_data(self): """Get provider configuration suitable for serialization. Also fills in credential information that may have earlier been extracted from the environment. """ data = super(MachineProvider, self).get_serialization_data() self.credentials.set_config_defaults(data) return data def get_file_storage(self): """Retrieve a Swift-backed :class:`FileStorage`.""" return FileStorage(self.swift, self.config["control-bucket"]) @inlineCallbacks def get_constraint_set(self): """Get the provider specific machine constraints. """ # Use cached value if available. cs = self._cached_constraint.get() if cs is not None: returnValue(cs) cs = yield super(MachineProvider, self).get_constraint_set() # Pseudo-constraint that does not affect flavor selected but is passed # through to server creation for influencing the scheduler, for # instance in the placement of the server. # Perhaps only register this constraint if the deployment advertises # the SchedulerHints extension as available? cs.register("os-scheduler-hints", converter=_convert_scheduler_hints) # Fetch provider defined instance types (just names) flavors = yield self.nova.list_flavors() flavor_names = [f['name'] for f in flavors] cs.register_generics(flavor_names) self._cached_constraint.set(cs) returnValue(cs) def start_machine(self, machine_data, master=False): """Start an OpenStack machine. :param dict machine_data: desired characteristics of the new machine; it must include a "machine-id" key, and may include a "constraints" key to specify the underlying OS and hardware. :param bool master: if True, machine will initialize the juju admin and run a provisioning agent, in addition to running a machine agent. """ return NovaLaunchMachine.launch(self, machine_data, master) @inlineCallbacks def get_machines(self, instance_ids=()): """List machines running in the provider. :param list instance_ids: ids of instances you want to get. 
Leave empty to list every :class:`juju.providers.openstack.machine.NovaProviderMachine` owned by this provider. :return: a list of :class:`juju.providers.openstack.machine.NovaProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` :raises: :exc:`juju.errors.MachinesNotFound` """ if len(instance_ids) == 1: try: instances = [(yield self.nova.get_server(instance_ids[0]))] except errors.ProviderInteractionError, e: # XXX: Need to wire up treatment of 404s properly in client if True or getattr(e, "kind", None) == "itemNotFound": raise errors.MachinesNotFound(set(instance_ids)) raise instance_ids = frozenset(instance_ids) else: instances = yield self.nova.list_servers_detail() if instance_ids: instance_ids = frozenset(instance_ids) instances = [instance for instance in instances if instance['id'] in instance_ids] # Only want to deal with servers that were created by juju, checking # the name begins with the prefix launch uses is good enough. name_prefix = "juju %s instance " % (self.environment_name,) machines = [] for instance in instances: if (instance['name'].startswith(name_prefix) and get_server_status(instance) in ("running", "pending")): machines.append(machine_from_instance(instance)) if instance_ids: # We were asked for a specific list of machines, and if we can't # completely fulfil that request we should blow up. missing = instance_ids.difference(m.instance_id for m in machines) if missing: raise errors.MachinesNotFound(missing) returnValue(machines) @inlineCallbacks def _delete_machine(self, machine, full=False): server_id = machine.instance_id server = yield self.nova.get_server(server_id) if not server['name'].startswith( "juju %s instance" % self.environment_name): raise errors.MachinesNotFound(set([machine.instance_id])) yield self.nova.delete_server(server_id) returnValue(machine) def shutdown_machine(self, machine): if not isinstance(machine, NovaProviderMachine): raise errors.ProviderError( "Need a NovaProviderMachine to shutdown, not: %r" % (machine,)) # EC2 provider re-gets the machine to see if it's still in existence # and can be shutdown, instead just handle an error? 404-ish? return self._delete_machine(machine) @inlineCallbacks def destroy_environment(self): """Terminate all associated machines and security groups. The super definition of this method terminates each machine in the environment; this needs to be augmented here by also removing the security group for the environment. :rtype: :class:`twisted.internet.defer.Deferred` """ machines = yield self.get_machines() deleted_machines = yield gather_results( [self._delete_machine(m, True) for m in machines]) yield self.save_state({}) returnValue(deleted_machines) def shutdown_machines(self, machines): """Terminate machines associated with this provider.
:param machines: machines to shut down :type machines: list of :class:`juju.providers.openstack.machine.NovaProviderMachine` :return: list of terminated :class:`juju.providers.openstack.machine.NovaProviderMachine` instances :rtype: :class:`twisted.internet.defer.Deferred` """ # XXX: need to actually handle errors as non-terminated machines # and not include them in the resulting list return gather_results( [self.shutdown_machine(m) for m in machines], consume_errors=True) def open_port(self, machine, machine_id, port, protocol="tcp"): """Authorizes `port` using `protocol` for `machine` via Nova.""" return self.port_manager.open_port(machine, machine_id, port, protocol) def close_port(self, machine, machine_id, port, protocol="tcp"): """Revokes `port` using `protocol` for `machine` via Nova.""" return self.port_manager.close_port( machine, machine_id, port, protocol) def get_opened_ports(self, machine, machine_id): """Returns a set of open (port, proto) pairs for `machine`.""" return self.port_manager.get_opened_ports(machine, machine_id) juju-0.7.orig/juju/providers/openstack/tests/0000755000000000000000000000000012135220114017534 5ustar 00000000000000juju-0.7.orig/juju/providers/openstack/tests/__init__.py0000644000000000000000000002005012135220114021642 0ustar 00000000000000"""Infrastructure shared by all tests for OpenStack provider""" import json from twisted.internet import ( defer, ) from twisted.web import ( http, http_headers, ) from juju import errors from juju.lib import ( mocker, ) from juju.providers.openstack import ( client, files, machine, ports, provider, ) from juju.machine.constraints import ConstraintSet class FakeResponse(object): def __init__(self, code): if code not in http.RESPONSES: raise ValueError("Unknown http code: %r" % (code,)) self.code = code self.headers = http_headers.Headers( {"Content-Type": ["application/json"]}) class ConstraintSupportMixin(object): provider_type = "openstack" default_flavors = [ {'id': 1, 'name': "standard.xsmall", 'vcpus': 1, 'ram': 1024}, {'id': 2, 'name': "standard.medium", 'vcpus': 2, 'ram': 4096}, ] def setup_constraints(self): self.constraint_set = ConstraintSet(self.provider_type) self.constraint_set.register("os-scheduler-hints", converter=json.loads) self.constraint_set.register_generics( [f['name'] for f in self.default_flavors]) class OpenStackTestMixin(ConstraintSupportMixin): """Helpers for test classes that want to exercise the OpenStack APIs This goes all the way down to the http layer, which is really a little too low for most of what the test cases want to assert. With some cleanups to the client code, tests could instead mock out client methods.
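    A typical test therefore queues expectations and then replays, e.g.
    (hypothetical server id):

        self.expect_nova_get("servers/1000",
            response={"server": {"id": "1000"}})
        self.mocker.replay()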
""" environment_name = "testing" api_url = "https://test.invalid/v2.0/" def setUp(self): self._mock_request = self.mocker.replace( "juju.providers.openstack.client.request", passthrough=False) self.mocker.order() self.nova_url = self.api_url + "nova" self.swift_url = self.api_url + "swift" self._mock_request("POST", self.api_url + "tokens", body=mocker.ANY, check_certs=True) self.mocker.result(defer.succeed((FakeResponse(200), json.dumps({"access": { "token": {"id": "tok", "expires": "2030-01-01T00:00:00"}, "serviceCatalog": [ { "type": "compute", "endpoints": [{"publicURL": self.nova_url}] }, { "type": "object-store", "endpoints": [{"publicURL": self.swift_url}] } ] }})))) self.setup_constraints() # Clear the environment so provider won't pick up config from envvars self.change_environment() def get_config(self): return { "type": "openstack", "admin-secret": "password", "access-key": "90abcdef", "secret-key": "xxxxxxxx", "auth-mode": "keypair", "authorized-keys": "abc", "auth-url": self.api_url, "project-name": "test_project", "control-bucket": self.environment_name, "default-image-id": 42, "use-floating-ip": True, "ssl-hostname-verification": True, } def get_provider(self): """Return the openstack machine provider. This should only be invoked after mocker is in replay mode. """ return provider.MachineProvider(self.environment_name, self.get_config()) def make_server(self, server_id, name=None, status="ACTIVE"): if name is None: # Would be machine id rather than server id really but will do. name = "juju testing instance " + str(server_id) return { "id": server_id, "name": name, "status": status, "addresses": {}, } def assert_not_found(self, deferred, server_ids): self.assertFailure(deferred, errors.MachinesNotFound) return deferred.addCallback(self._check_not_found, server_ids) def _check_not_found(self, error, server_ids): self.assertEqual(error.instance_ids, server_ids) def _mock_nova(self, method, path, body): self._mock_request(method, "%s/%s" % (self.nova_url, path), [("X-Auth-Token", "tok")], body, True) def _mock_swift(self, method, path, body, extra_headers=None): headers = [("X-Auth-Token", "tok")] if extra_headers is not None: headers += extra_headers url = "%s/%s" % (self.swift_url, path) self._mock_request(method, url, headers, body, True) def expect_nova_get(self, path, code=200, response=""): self._mock_nova("GET", path, None) if not isinstance(response, str): response = json.dumps(response) self.mocker.result(defer.succeed((FakeResponse(code), response))) def expect_nova_post(self, path, body, code=200, response=""): self._mock_nova("POST", path, body) if not isinstance(response, str): response = json.dumps(response) self.mocker.result(defer.succeed((FakeResponse(code), response))) def expect_nova_delete(self, path, code=202, response=""): self._mock_nova("DELETE", path, None) self.mocker.result(defer.succeed((FakeResponse(code), response))) def expect_swift_get(self, path, code=200, response=""): self._mock_swift("GET", path, None) self.mocker.result(defer.succeed((FakeResponse(code), response))) def expect_swift_put(self, path, body, code=201, response=""): self._mock_swift("PUT", path, body) self.mocker.result(defer.succeed((FakeResponse(code), response))) def expect_swift_put_container(self, container, code=201, response=""): self._mock_swift("PUT", container, "", [('X-Container-Read', '.r:*')]) self.mocker.result(defer.succeed((FakeResponse(code), response))) class MockedProvider(ConstraintSupportMixin): provider_type = "openstack" environment_name = "testing" api_url 
= "https://testing.invalid/2.0/" default_config = { "type": "openstack", "admin-secret": "asecret", "authorized-keys": "abc", "username": "auser", "password": "xxxxxxxx", "auth-url": api_url, "project-name": "aproject", "control-bucket": environment_name, "default-image-id": 42, "ssl-hostname-verification": True, } def __init__(self, mocker, config=None): if config is None: config = dict(self.default_config) self.config = config self._check_certs = config.get("ssl-hostname-verification", True) self.nova = mocker.proxy(client._NovaClient(None), passthrough=False) self.swift = mocker.proxy(client._SwiftClient(None), passthrough=False) self.port_manager = mocker.proxy( ports.NovaPortManager(self.nova, self.environment_name)) self.provider_actions = mocker.mock() self.mocker = mocker self.setup_constraints() def get_file_storage(self): return files.FileStorage(self.swift, self.config['control-bucket']) def __getattr__(self, attr): return getattr(self.provider_actions, attr) def expect_swift_public_object_url(self, object_name): container_name = self.config['control-bucket'] event = self.swift.public_object_url(container_name, object_name) self.mocker.result("%sswift/%s" % (self.api_url, object_name)) return event def expect_swift_put(self, object_name, body, code=201, response=""): container_name = self.config['control-bucket'] event = self.swift.put_object(container_name, object_name, body) self.mocker.result(defer.succeed((FakeResponse(code), response))) return event def expect_zookeeper_machines(self, iid, hostname="undefined.invalid"): event = self.provider_actions.get_zookeeper_machines() self.mocker.result(defer.succeed([ machine.NovaProviderMachine(iid, hostname, hostname) ])) return event juju-0.7.orig/juju/providers/openstack/tests/test_bootstrap.py0000644000000000000000000002071012135220114023162 0ustar 00000000000000"""Tests for bootstrapping juju on openstack Bootstrap touches a lot of the other parts of the provider, including machine launching, security groups and so on. Testing this in an end-to-end fashion as is done currently duplicates many checks from other more focussed tests. 
""" import logging from juju.lib import mocker, testing, serializer from juju.providers.openstack.machine import NovaProviderMachine from juju.providers.openstack.tests import OpenStackTestMixin class OpenStackBootstrapTest(OpenStackTestMixin, testing.TestCase): def expect_verify(self): self.expect_swift_put("testing/bootstrap-verify", "storage is writable") def expect_provider_state_fresh(self): self.expect_swift_get("testing/provider-state") self.expect_verify() def expect_create_group(self): self.expect_nova_post("os-security-groups", {'security_group': { 'name': 'juju-testing', 'description': 'juju group for testing', }}, response={'security_group': { 'id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': 1, 'ip_protocol': "tcp", 'from_port': 22, 'to_port': 22, }}, response={'security_group_rule': { 'id': 144, 'parent_group_id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': 1, 'group_id': 1, 'ip_protocol': "tcp", 'from_port': 1, 'to_port': 65535, }}, response={'security_group_rule': { 'id': 145, 'parent_group_id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': 1, 'group_id': 1, 'ip_protocol': "udp", 'from_port': 1, 'to_port': 65535, }}, response={'security_group_rule': { 'id': 146, 'parent_group_id': 1, }}) def expect_create_machine_group(self, machine_id): machine = str(machine_id) self.expect_nova_post("os-security-groups", {'security_group': { 'name': 'juju-testing-' + machine, 'description': 'juju group for testing machine ' + machine, }}, response={'security_group': { 'id': 2, }}) def _match_server(self, data): userdata = data['server'].pop('user_data').decode("base64") self.assertEqual("#cloud-config", userdata.split("\n", 1)[0]) # TODO: assertions on cloud-init content # cloud_init = yaml.load(userdata) self.assertEqual({'server': { 'flavorRef': 1, 'imageRef': 42, 'name': 'juju testing instance 0', 'security_groups': [ {'name': 'juju-testing'}, {'name': 'juju-testing-0'}, ], }}, data) return True def expect_launch(self): self.expect_nova_get("flavors/detail", response={'flavors': self.default_flavors}) self.expect_nova_post("servers", mocker.MATCH(self._match_server), code=202, response={'server': { 'id': '1000', 'status': "PENDING", 'addresses': {'private': [{'version': 4, 'addr': "4.4.4.4"}]}, }}) self.expect_swift_put("testing/juju_master_id", "1000") self.expect_nova_get("os-floating-ips", response={'floating_ips': [ {'id': 80, 'instance_id': None, 'ip': "8.8.8.8"} ]}) self.expect_nova_post("servers/1000/action", {"addFloatingIp": {"address": "8.8.8.8"}}, code=202) self.expect_swift_put("testing/provider-state", serializer.dump({'zookeeper-instances': ['1000']})) def _check_machine(self, machine_list): [machine] = machine_list self.assertTrue(isinstance(machine, NovaProviderMachine)) self.assertEqual(machine.instance_id, '1000') def bootstrap(self): provider = self.get_provider() return provider.bootstrap( self.constraint_set.load({'instance-type': None})) def test_bootstrap_clean(self): """Bootstrap from a clean slate makes groups and zookeeper instance""" self.expect_provider_state_fresh() self.expect_nova_get("os-security-groups", response={'security_groups': []}) self.expect_create_group() self.expect_create_machine_group(0) self.expect_launch() self.mocker.replay() log = self.capture_logging("juju.common", level=logging.DEBUG) deferred = self.bootstrap() deferred.addCallback(self._check_machine) def 
check_log(_): log_text = log.getvalue() self.assertIn("Launching juju bootstrap instance", log_text) self.assertNotIn("previously bootstrapped", log_text) return deferred.addCallback(check_log) def test_bootstrap_existing_group(self): """Bootstrap reuses an existing provider security group""" self.expect_provider_state_fresh() self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testing", 'id': 1}, ]}) self.expect_create_machine_group(0) self.expect_launch() self.mocker.replay() return self.bootstrap().addCallback(self._check_machine) def test_bootstrap_existing_machine_group(self): """Bootstrap deletes and remakes an existing machine security group""" self.expect_provider_state_fresh() self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testing-0", 'id': 3}, ]}) self.expect_create_group() self.expect_nova_delete("os-security-groups/3") self.expect_create_machine_group(0) self.expect_launch() self.mocker.replay() return self.bootstrap().addCallback(self._check_machine) def test_existing_machine(self): """A preexisting zookeeper instance is returned if present""" self.expect_swift_get("testing/provider-state", response=serializer.dump({'zookeeper-instances': ['1000']})) self.expect_nova_get("servers/1000", response={'server': { 'id': '1000', 'name': 'juju testing instance 0', 'state': "RUNNING" }}) self.mocker.replay() log = self.capture_logging("juju.common") self.capture_logging("juju.openstack") # Drop to avoid stderr kipple deferred = self.bootstrap() deferred.addCallback(self._check_machine) def check_log(_): self.assertEqual("juju environment previously bootstrapped.\n", log.getvalue()) return deferred.addCallback(check_log) def test_existing_machine_missing(self): """Bootstrap overwrites existing zookeeper state if the instance is missing""" self.expect_swift_get("testing/provider-state", response=serializer.dump({'zookeeper-instances': [3000]})) self.expect_nova_get("servers/3000", code=404, response={'itemNotFound': {'message': "The resource could not be found.", 'code': 404} }) self.expect_verify() self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testing", 'id': 1}, {'name': "juju-testing-0", 'id': 3}, ]}) self.expect_nova_delete("os-security-groups/3") self.expect_create_machine_group(0) self.expect_launch() self.mocker.replay() log = self.capture_logging("juju.common", level=logging.DEBUG) self.capture_logging("juju.openstack") # Drop to avoid stderr kipple deferred = self.bootstrap() deferred.addCallback(self._check_machine) def check_log(_): log_text = log.getvalue() self.assertIn("Launching juju bootstrap instance", log_text) self.assertNotIn("previously bootstrapped", log_text) return deferred.addCallback(check_log) juju-0.7.orig/juju/providers/openstack/tests/test_client.py0000644000000000000000000003336512135220114022431 0ustar 00000000000000"""Tests for OpenStack API twisted client""" import json from twisted.internet import defer, reactor from twisted.python.failure import Failure from twisted.web import http_headers from juju import errors from juju.lib import mocker, testing from juju.providers.openstack import client, credentials class StubClient(client._OpenStackClient): def __init__(self): self.url = "http://testing.invalid" make_url = client._OpenStackClient._make_url class TestMakeUrl(testing.TestCase): def setUp(self): self.client = StubClient() self.client.services = { "compute": self.client.url + "/nova", "object-store": self.client.url + "/swift", } def
test_list_str(self): self.assertEqual("http://testing.invalid/nova/servers", self.client.make_url("compute", ["servers"])) self.assertEqual("http://testing.invalid/swift/container/object", self.client.make_url("object-store", ["container", "object"])) def test_list_int(self): self.assertEqual("http://testing.invalid/nova/servers/1000", self.client.make_url("compute", ["servers", 1000])) self.assertEqual("http://testing.invalid/nova/servers/1000/detail", self.client.make_url("compute", ["servers", 1000, "detail"])) def test_list_unicode(self): url = self.client.make_url("object-store", ["container", u"\xa7"]) self.assertIsInstance(url, str) self.assertEqual("http://testing.invalid/swift/container/%C2%A7", url) def test_str(self): self.assertEqual("http://testing.invalid/nova/servers", self.client.make_url("compute", "servers")) self.assertEqual("http://testing.invalid/swift/container/object", self.client.make_url("object-store", "container/object")) def test_trailing_slash(self): self.client.services["object-store"] += "/" self.assertEqual("http://testing.invalid/nova/container", self.client.make_url("compute", "container")) self.assertEqual("http://testing.invalid/nova/container/object", self.client.make_url("compute", ["container", "object"])) class FakeResponse(object): """Bare minimum needed to look like a twisted http response""" def __init__(self, code, headers, body=None): self.code = code self.headers = headers if body is None: self.length = 0 else: self.length = len(body) self.body = body def deliverBody(self, reader): reader.connectionMade() reader.dataReceived(self.body) reader.connectionLost(client.ResponseDone()) class ClientTests(testing.TestCase): """Testing of low level client behaviour Rough temporary tests until client rearrangements make this easier. 
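    For orientation: legacy auth as mocked here is a single GET to the auth
    url with X-Auth-User/X-Auth-Key headers answered by a 204 carrying
    X-Auth-Token and X-Server-Management-Url headers, while userpass auth
    POSTs a json body to <auth-url>/tokens and reads the endpoints from the
    serviceCatalog in the response.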
""" class Checker(object): """Standin for cert checker that exists regardless of txaws""" class SSLError(Exception): """Standin for ssl exception that exists regardless of OpenSSL""" def setUp(self): super(ClientTests, self).setUp() self.patch(client, "SSLError", self.SSLError) self.patch(client, "WebVerifyingContextFactory", self.Checker) self.mock_agent = self.mocker.replace("twisted.web.client.Agent", passthrough=False) self.mocker.order() def get_credentials(self, config): return credentials.OpenStackCredentials(config) def is_checker(self, obj): return isinstance(obj, self.Checker) def make_client_legacy(self): config = { "auth-url": "https://testing.invalid", "auth-mode": "legacy", "username": "user", "access-key": "key", } osc = client._OpenStackClient(self.get_credentials(config), True) self.mock_agent(reactor, contextFactory=mocker.MATCH(self.is_checker)) self.mocker.result(self.mock_agent) return config, osc def make_client_userpass(self): config = { "auth-url": "https://testing.invalid/v2.0/", "username": "user", "password": "pass", "project-name": "project", } osc = client._OpenStackClient(self.get_credentials(config), True) self.mock_agent(reactor, contextFactory=mocker.MATCH(self.is_checker)) self.mocker.result(self.mock_agent) return config, osc @defer.inlineCallbacks def test_auth_legacy(self): config, osc = self.make_client_legacy() # TODO: check headers for correct auth values self.mock_agent.request("GET", config["auth-url"], mocker.ANY, None) response = FakeResponse(204, http_headers.Headers({ "X-Server-Management-Url": ["http://testing.invalid/compute"], "X-Auth-Token": ["tok"], })) self.mocker.result(defer.succeed(response)) self.mocker.replay() log = self.capture_logging() yield osc.authenticate() self.assertIsInstance(osc.token, str) self.assertEqual("tok", osc.token) self.assertEqual("http://testing.invalid/compute/path", osc._make_url("compute", ["path"])) self.assertIn("compute service not using secure", log.getvalue()) @defer.inlineCallbacks def test_auth_userpass(self): config, osc = self.make_client_userpass() self.mock_agent.request("POST", "https://testing.invalid/v2.0/tokens", mocker.ANY, mocker.ANY) response = FakeResponse(200, http_headers.Headers({ "Content-Type": ["application/json"], }), json.dumps({'access': { 'token': {'id': "tok", 'expires': "shortly"}, 'serviceCatalog': [ { 'type': "compute", 'endpoints': [ {'publicURL': "http://testing.invalid/compute"}, ], }, { 'type': "object-store", 'endpoints': [ {'publicURL': "http://testing.invalid/objstore"}, ], }, ], }})) self.mocker.result(defer.succeed(response)) self.mocker.replay() log = self.capture_logging() yield osc.authenticate() self.assertIsInstance(osc.token, str) self.assertEqual("tok", osc.token) self.assertEqual("http://testing.invalid/compute/path", osc._make_url("compute", ["path"])) self.assertEqual("http://testing.invalid/objstore/path", osc._make_url("object-store", ["path"])) self.assertIn("compute service not using secure", log.getvalue()) self.assertIn("object-store service not using secure", log.getvalue()) def test_cert_failure(self): config, osc = self.make_client_legacy() self.mock_agent.request("GET", config["auth-url"], mocker.ANY, None) response = FakeResponse(204, http_headers.Headers({ "X-Server-Management-Url": ["http://testing.invalid/compute"], "X-Auth-Token": ["tok"], })) self.mocker.result(defer.fail(client.ResponseFailed([ Failure(self.SSLError()), ]))) self.mocker.replay() deferred = osc.authenticate() return self.assertFailure(deferred, errors.SSLVerificationError) 
def is_server_with_hints(self, producer): obj = json.loads(producer.content) self.assertEqual({"server": { "imageRef": "an-image", "flavorRef": "a-flavor", "name": "Test", }, "OS-SCH-HNT:scheduler_hints": {"hint-key": "hint-val"}, }, obj) return True @defer.inlineCallbacks def test_run_server_passes_hints(self): config, osc = self.make_client_legacy() self.mock_agent.request("POST", "https://testing.invalid/compute/servers", mocker.ANY, mocker.MATCH(self.is_server_with_hints)) response = FakeResponse(202, http_headers.Headers({ "Content-Type": ["application/json"], }), json.dumps({'server': { }})) self.mocker.result(defer.succeed(response)) self.mocker.replay() # Fake having already authenticated by setting token and services osc.token = "tok" osc.services = {"compute": "https://testing.invalid/compute"} novac = client._NovaClient(osc) result = yield novac.run_server("an-image", "a-flavor", "Test", scheduler_hints={"hint-key": "hint-val"}) class TestReauthentication(testing.TestCase): @defer.inlineCallbacks def test_detect_token_expiration(self): """Verify expired tokens reset is_authenticated to False on client""" # Note that the swift client uses the same code path as the # nova client in terms of token expiration management mock_request = self.mocker.mock() self.patch(client, "request", mock_request) # 1. First authenticate mock_request("GET", "https://example.com", check_certs=True, extra_headers=[("X-Auth-User", "my-user"), ("X-Auth-Key", "my-key")]) self.mocker.result(defer.succeed( (FakeResponse(204, http_headers.Headers({ "X-Server-Management-Url": ["https://example.com/compute"], "X-Auth-Token": ["first-token"], })), None))) # 2. Succeed on the first pass through mock_request("GET", "https://example.com/compute/os-floating-ips", [("X-Auth-Token", "first-token")], None, True) self.mocker.result(defer.succeed( (FakeResponse(200, http_headers.Headers({ "Content-Type": ["application/json"] })), json.dumps({"floating_ips": ["ip-1", "ip-2", "ip-3"]})))) # 3. Then fail upon the next request mock_request("GET", "https://example.com/compute/os-floating-ips", [("X-Auth-Token", "first-token")], None, True) self.mocker.result(defer.succeed( (FakeResponse(401, http_headers.Headers()), None))) # 4. This forces the client to authenticate again mock_request("GET", "https://example.com", check_certs=True, extra_headers=[("X-Auth-User", "my-user"), ("X-Auth-Key", "my-key")]) self.mocker.result(defer.succeed( (FakeResponse(204, http_headers.Headers({ "X-Server-Management-Url": ["https://example.com/compute"], "X-Auth-Token": ["second-token"], })), None))) # 5. 
And finally it gets the desired result upon the last request mock_request("GET", "https://example.com/compute/os-floating-ips", [("X-Auth-Token", "second-token")], None, True) body = json.dumps({"floating_ips": ["another-ip-1", "another-ip-3"]}) response = FakeResponse( 200, http_headers.Headers({ "Content-Type": ["application/json"] })) self.mocker.result(defer.succeed((response, body))) self.mocker.replay() # Exercise mocks and verify assertions osc = client._OpenStackClient( credentials.OpenStackCredentials({ "auth-url": "https://example.com", "auth-mode": "legacy", "username": "my-user", "access-key": "my-key", }), True) nova_client = client._NovaClient(osc) ips = yield nova_client.list_floating_ips() self.assertTrue(osc.is_authenticated()) self.assertEqual("first-token", osc.token) self.assertEqual(ips, ["ip-1", "ip-2", "ip-3"]) e = yield self.assertFailure(nova_client.list_floating_ips(), errors.ProviderInteractionError) self.assertEqual(str(e), "Need to reauthenticate by retrying") self.assertFalse(osc.is_authenticated()) # Now retry: the token will change and the mock is also # returning new data ips = yield nova_client.list_floating_ips() self.assertTrue(osc.is_authenticated()) self.assertEqual("second-token", osc.token) self.assertEqual(ips, ["another-ip-1", "another-ip-3"]) class TestPlan(testing.TestCase): """Ideas for tests needed""" # auth request without auth # get bytes, content-length 0, return "" # get bytes, not ResponseDone, raise wrapped in ProviderError (with any bytes?) # get bytes, type json, return bytes # get json, content length 0, raise ProviderError # get json, bad header (several forms), raise ProviderError (with bytes) # get json, not ResponseDone, raise wrapped in ProviderError # (with any bytes?) # get json, undecodable, raise wrapped in ProviderError with bytes # get json, mismatching root, raise ProviderError with bytes or json? # wrong code, no json header, raise ProviderError with bytes # wrong code, not ResponseDone, raise ProviderError from code # with any bytes # wrong code, undecodable, raise ProviderError from code with bytes # wrong code, has mystery root, raise ProviderError from code with bytes or json? 
# wrong code, has good root, no message # wrong code, has good root, no code # wrong code, has good root, differing code # wrong code, has good root, message, and matching code juju-0.7.orig/juju/providers/openstack/tests/test_credentials.py0000644000000000000000000001742012135220114023446 0ustar 00000000000000"""Tests for handling of OpenStack credentials in config and environment""" from juju.lib import ( testing, ) from juju.providers.openstack import ( credentials, ) class OpenStackCredentialsTests(testing.TestCase): def test_required_url(self): e = self.assertRaises(ValueError, credentials.OpenStackCredentials, {}) self.assertIn("Missing config 'auth-url'", str(e)) def test_required_mode_if_unguessable(self): e = self.assertRaises(ValueError, credentials.OpenStackCredentials, { 'auth-url': "http://example.com", }) self.assertIn("Missing config 'auth-mode'", str(e)) def test_legacy(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com", 'auth-mode': "legacy", 'username': "luser", 'access-key': "laccess", }) self.assertEqual(creds.url, "http://example.com") self.assertEqual(creds.mode, "legacy") self.assertEqual(creds.username, "luser") self.assertEqual(creds.access_key, "laccess") def test_legacy_required_username(self): e = self.assertRaises(ValueError, credentials.OpenStackCredentials, { 'auth-url': "http://example.com", 'auth-mode': "legacy", 'access-key': "laccess", }) self.assertIn("Missing config 'username'", str(e)) def test_legacy_required_access_key(self): e = self.assertRaises(ValueError, credentials.OpenStackCredentials, { 'auth-url': "http://example.com", 'auth-mode': "legacy", 'username': "luser", }) self.assertIn("Missing config 'access-key'", str(e)) # v1.0 auth is gone from the upstream codebase so maybe remove support def test_legacy_guess_v1_0(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com/v1.0/", 'username': "luser", 'access-key': "laccess", }) self.assertEqual(creds.mode, "legacy") def test_legacy_guess_v1_1(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com/v1.1/", 'username': "luser", 'access-key': "laccess", }) self.assertEqual(creds.mode, "legacy") def test_userpass(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com", 'auth-mode': "userpass", 'username': "uuser", 'password': "upass", 'project-name': "uproject", }) self.assertEqual(creds.url, "http://example.com") self.assertEqual(creds.mode, "userpass") self.assertEqual(creds.username, "uuser") self.assertEqual(creds.password, "upass") self.assertEqual(creds.project_name, "uproject") def test_userpass_guess_v2_0_no_slash(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com/v2.0", 'username': "uuser", 'password': "upass", 'project-name': "uproject", }) self.assertEqual(creds.mode, "userpass") def test_userpass_guess_v2_0_slash(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com/v2.0/", 'username': "uuser", 'password': "upass", 'project-name': "uproject", }) self.assertEqual(creds.mode, "userpass") def test_keypair(self): creds = credentials.OpenStackCredentials({ 'auth-url': "http://example.com", 'auth-mode': "keypair", 'access-key': "kaccess", 'secret-key': "ksecret", 'project-name': "kproject", }) self.assertEqual(creds.url, "http://example.com") self.assertEqual(creds.mode, "keypair") self.assertEqual(creds.access_key, "kaccess") self.assertEqual(creds.secret_key, "ksecret") self.assertEqual(creds.project_name, "kproject") class 
FromEnvironmentTests(testing.TestCase): def test_required_url(self): self.change_environment() e = self.assertRaises(ValueError, credentials.OpenStackCredentials.from_environment, {}) self.assertIn("Missing config 'auth-url'", str(e)) def test_required_mode_if_unguessable(self): self.change_environment(**{"NOVA_URL": "http://example.com"}) e = self.assertRaises(ValueError, credentials.OpenStackCredentials.from_environment, {}) self.assertIn("Missing config 'auth-mode'", str(e)) def test_legacy(self): self.change_environment(**{ "NOVA_URL": "http://example.com/v1.1/", "NOVA_USERNAME": "euser", "NOVA_API_KEY": "ekey", }) creds = credentials.OpenStackCredentials.from_environment({}) self.assertEqual(creds.mode, "legacy") self.assertEqual(creds.username, "euser") self.assertEqual(creds.access_key, "ekey") def test_keypair(self): self.change_environment(**{ "NOVA_URL": "http://example.com/v2.0/", "NOVA_API_KEY": "eaccess", "NOVA_PROJECT_NAME": "eproject", "NOVA_PROJECT_ID": "349212", "NOVA_REGION_NAME": "eregion", "EC2_SECRET_KEY": "esecret", }) creds = credentials.OpenStackCredentials.from_environment({ 'auth-mode': "keypair", }) self.assertEqual(creds.access_key, "eaccess") self.assertEqual(creds.secret_key, "esecret") self.assertEqual(creds.project_name, "eproject") self.assertEqual(creds.region, "eregion") def test_userpass(self): self.change_environment(**{ "OS_AUTH_URL": "http://example.com/v2.0/", "OS_USERNAME": "euser", "OS_PASSWORD": "epass", "OS_TENANT_NAME": "eproject", "OS_REGION_NAME": "eregion", }) creds = credentials.OpenStackCredentials.from_environment({}) self.assertEqual(creds.mode, "userpass") self.assertEqual(creds.username, "euser") self.assertEqual(creds.password, "epass") self.assertEqual(creds.project_name, "eproject") self.assertEqual(creds.region, "eregion") def test_prefer_os_auth_url(self): self.change_environment(**{ "NOVA_URL": "http://example.com/v1.1/", "NOVA_API_KEY": "eaccess", "OS_AUTH_URL": "http://example.com/v2.0/", "OS_USERNAME": "euser", "OS_PASSWORD": "epass", "OS_TENANT_NAME": "eproject", }) creds = credentials.OpenStackCredentials.from_environment({}) self.assertEqual(creds.url, "http://example.com/v2.0/") self.assertEqual(creds.mode, "userpass") class SetConfigDefaultsTests(testing.TestCase): def test_set_all(self): config = { 'auth-url': "http://example.com", 'auth-mode': "legacy", 'username': "luser", 'access-key': "laccess", } creds = credentials.OpenStackCredentials(config) new_config = {} creds.set_config_defaults(new_config) self.assertEqual(config, new_config) def test_set_only_missing(self): config = { 'auth-url': "http://example.com/v1.1/", 'username': "luser", 'access-key': "laccess", } creds = credentials.OpenStackCredentials(config) new_config = { 'username': "nuser", } creds.set_config_defaults(new_config) self.assertEqual({ 'auth-url': "http://example.com/v1.1/", 'auth-mode': "legacy", 'username': "nuser", 'access-key': "laccess", }, new_config) juju-0.7.orig/juju/providers/openstack/tests/test_files.py0000644000000000000000000000746712135220114022265 0ustar 00000000000000"""Tests for file storage backend based on Swift""" from cStringIO import StringIO from twisted.internet.defer import inlineCallbacks, fail from juju import errors from juju.lib import testing from juju.providers.openstack.tests import OpenStackTestMixin class FileStorageTestCase(OpenStackTestMixin, testing.TestCase): def get_storage(self): provider = self.get_provider() storage = provider.get_file_storage() return storage def test_put_file(self): """A file can be put 
in the storage""" content = "some text" self.expect_swift_put("testing/object", content) self.mocker.replay() return self.get_storage().put("object", StringIO(content)) def test_put_file_unicode(self): """A file with a unicode name is put in UTF-8 url encoded form""" content = "some text" self.expect_swift_put("testing/%C2%A7", content) self.mocker.replay() return self.get_storage().put(u"\xa7", StringIO(content)) def test_put_file_create_container(self): """The container will be created if it doesn't exist yet""" content = "some text" self.expect_swift_put("testing/object", content, code=404) self.expect_swift_put_container("testing") self.expect_swift_put("testing/object", content) self.mocker.replay() return self.get_storage().put("object", StringIO(content)) def test_put_file_unknown_error(self): """Unexpected errors from client propagate""" content = "some text" self._mock_swift("PUT", "testing/object", content) self.mocker.result(fail(ValueError("Something unexpected"))) self.mocker.replay() deferred = self.get_storage().put("object", StringIO(content)) return self.assertFailure(deferred, ValueError) @inlineCallbacks def test_get_url(self): """A url can be generated for any stored file.""" self.mocker.replay() storage = self.get_storage() yield storage._swift._client.authenticate() url = storage.get_url("object") self.assertEqual(self.swift_url + "/testing/object", url) @inlineCallbacks def test_get_url_unicode(self): """A url can be generated for *any* stored file.""" self.mocker.replay() storage = self.get_storage() yield storage._swift._client.authenticate() url = storage.get_url(u"\xa7") self.assertEqual(self.swift_url + "/testing/%C2%A7", url) @inlineCallbacks def test_get_file(self): """Retrieving a file returns a file-like object with the content""" content = "some text" self.expect_swift_get("testing/object", response=content) self.mocker.replay() result = yield self.get_storage().get("object") self.assertEqual(result.read(), content) @inlineCallbacks def test_get_file_unicode(self): """Retrieving a file with a unicode key uses a UTF-8 url""" content = "some text" self.expect_swift_get(u"testing/%C2%A7", response=content) self.mocker.replay() result = yield self.get_storage().get(u"\xa7") self.assertEqual(result.read(), content) def test_get_file_nonexistent(self): """Retrieving a nonexistent file raises a file not found error.""" self.expect_swift_get(u"testing/missing", code=404) self.mocker.replay() deferred = self.get_storage().get("missing") return self.assertFailure(deferred, errors.FileNotFound) def test_get_file_error(self): """An error from the client is attributed to the provider""" self.expect_swift_get(u"testing/unavailable", code=500) self.mocker.replay() deferred = self.get_storage().get("unavailable") return self.assertFailure(deferred, errors.ProviderInteractionError) juju-0.7.orig/juju/providers/openstack/tests/test_getmachines.py0000644000000000000000000001114512135220114023436 0ustar 00000000000000"""Tests for OpenStack provider method for listing live juju machines""" from juju.lib import testing from juju.providers.openstack.machine import NovaProviderMachine from juju.providers.openstack.tests import OpenStackTestMixin class GetMachinesTests(OpenStackTestMixin, testing.TestCase): def check_machines(self, machines, expected): machine_details = [] for m in machines: self.assertTrue(isinstance(m, NovaProviderMachine)) machine_details.append((m.instance_id, m.state)) machine_details.sort() expected.sort() self.assertEqual(expected, machine_details)
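# A hedged sketch, not juju's actual implementation: the expectations in
# GetMachinesTests (and in GetServerStatusTest in test_machine.py below)
# suggest get_machines() maps Nova server statuses onto EC2-style states
# roughly as follows, dropping DELETED servers entirely. The helper name
# and the "terminated" label are illustrative assumptions only.
def _sketch_server_state(server):
    """Illustrative mapping from a Nova server dict to an EC2-style state."""
    status = server.get("status", "BUILD")
    if status.startswith("BUILD"):
        return "pending"  # covers BUILD(scheduling), BUILD(spawning), BUILDING
    if status == "ACTIVE":
        return "running"
    if status == "DELETED":
        return "terminated"  # get_machines() excludes such servers
    return "unknown"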
def test_all_none(self): self.expect_nova_get("servers/detail", response={"servers": []}) self.mocker.replay() return self.get_provider().get_machines().addCallback( self.check_machines, []) def test_all_single(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1000), ]}) self.mocker.replay() return self.get_provider().get_machines().addCallback( self.check_machines, [(1000, 'running')]) def test_all_multiple(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001), self.make_server(1002, status="BUILDING"), ]}) self.mocker.replay() return self.get_provider().get_machines().addCallback( self.check_machines, [(1001, 'running'), (1002, 'pending')]) def test_all_some_dead(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001, status="BUILDING"), self.make_server(1002, status="DELETED"), ]}) self.mocker.replay() return self.get_provider().get_machines().addCallback( self.check_machines, [(1001, 'pending')]) def test_all_some_other(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001, name="nova started server"), self.make_server(1002), ]}) self.mocker.replay() return self.get_provider().get_machines().addCallback( self.check_machines, [(1002, 'running')]) def test_two_none(self): self.expect_nova_get("servers/detail", response={"servers": []}) self.mocker.replay() deferred = self.get_provider().get_machines([1001, 1002]) return self.assert_not_found(deferred, [1001, 1002]) def test_two_some_dead(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001, status="BUILDING"), self.make_server(1002, status="DELETED"), ]}) self.mocker.replay() deferred = self.get_provider().get_machines([1001, 1002]) return self.assert_not_found(deferred, [1002]) def test_two_some_other(self): self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001, name="nova started server"), self.make_server(1002), ]}) self.mocker.replay() deferred = self.get_provider().get_machines([1001, 1002]) return self.assert_not_found(deferred, [1001]) def test_one_running(self): self.expect_nova_get("servers/1000", response={"server": self.make_server(1000)}) self.mocker.replay() deferred = self.get_provider().get_machines([1000]) return deferred.addCallback(self.check_machines, [(1000, 'running')]) def test_one_dead(self): self.expect_nova_get("servers/1000", response={"server": self.make_server(1000, status="DELETED")}) self.mocker.replay() deferred = self.get_provider().get_machines([1000]) return self.assert_not_found(deferred, [1000]) def test_one_other(self): self.expect_nova_get("servers/1000", response={"server": self.make_server(1000, name="testing")}) self.mocker.replay() deferred = self.get_provider().get_machines([1000]) return self.assert_not_found(deferred, [1000]) def test_one_missing(self): self.expect_nova_get("servers/1000", code=404, response={"itemNotFound": {"message": "The resource could not be found.", "code": 404}}) self.mocker.replay() deferred = self.get_provider().get_machines([1000]) return self.assert_not_found(deferred, [1000]) # XXX: Need to do error handling properly in client still test_one_missing.skip = True # XXX: EC2 also has some tests for wrapping unexpected errors from backend juju-0.7.orig/juju/providers/openstack/tests/test_launch.py0000644000000000000000000002152312135220114022422 0ustar 00000000000000"""Tests for launching a new server customised for juju using Nova""" from 
twisted.internet.defer import succeed from juju import errors from juju.lib import serializer from juju.lib.mocker import MATCH from juju.lib.testing import TestCase from juju.providers.openstack.tests import MockedProvider from juju.providers.openstack.launch import NovaLaunchMachine class MockedLaunchProvider(MockedProvider): def launch(self, machine_id, master=False, constraints=None): if constraints is None: constraints = self.constraint_set.load({'instance-type': None}) else: constraints = self.constraint_set.parse(constraints) details = { 'machine-id': machine_id, 'constraints': constraints } return NovaLaunchMachine.launch(self, details, master) def expect_launch_setup(self, machine_id): self.port_manager.ensure_groups(machine_id) self.mocker.result(succeed(["juju-x", "juju-y"])) self.nova.list_flavor_details() self.mocker.result(succeed(self.default_flavors)) def expect_run_server(self, machine_id, cc_match, response, flavor_id=1, hints=None): self.nova.run_server( name="juju testing instance " + machine_id, image_id=42, flavor_id=flavor_id, security_group_names=["juju-x", "juju-y"], user_data=MATCH(cc_match), scheduler_hints=hints, ) self.mocker.result(succeed(response)) def expect_available_floating_ip(self, server_id): self.nova.list_floating_ips() self.mocker.result(succeed([ {'instance_id': None, 'ip': "198.162.1.0"}, ])) self.nova.add_floating_ip(server_id, "198.162.1.0") self.mocker.result(succeed(None)) class _CloudConfigMatcher(object): """Hack around testcase and provider separation to check cloud-config Really provider launch tests have no business caring about the specifics of the cloud-config format, but the contents vary depending on launch parameters. """ def __init__(self, testcase, machine_id, provider, is_master): self._case = testcase self.machine_id = machine_id self.provider = provider self.is_master = is_master def match(self, user_data): """Check contents of user_data but always match if assertions pass""" self._case.assertEqual("#cloud-config", user_data.split("\n", 1)[0]) cc = serializer.load(user_data) if self.is_master: zookeeper_hosts = "localhost:2181" else: zookeeper_hosts = "master.invalid:2181" self._case.assertEqual({ 'juju-provider-type': self.provider.provider_type, 'juju-zookeeper-hosts': zookeeper_hosts, 'machine-id': self.machine_id}, cc['machine-data']) self._case.assertEqual([self.provider.config["authorized-keys"]], cc["ssh_authorized_keys"]) if self.is_master: self._match_master_runcmd(cc["runcmd"]) return True def _match_master_runcmd(self, runcmd): id_url = "".join([self.provider.api_url, "swift", "/juju_master_id"]) if self.provider.config["ssl-hostname-verification"]: instance_arg = "$(curl %s)" % id_url else: instance_arg = "$(curl -k %s)" % id_url for cmd in runcmd: if cmd.startswith("juju-admin initialize "): self._case.assertIn(" --provider-type=openstack", cmd) self._case.assertIn(" --instance-id=" + instance_arg, cmd) break else: self._case.fail("Missing juju-admin initialize: " + repr(runcmd)) class NovaLaunchMachineTests(TestCase): def test_launch_requires_default_image_id(self): config = dict(MockedLaunchProvider.default_config) del config['default-image-id'] provider = MockedLaunchProvider(self.mocker, config) provider.expect_zookeeper_machines(1000) self.mocker.replay() deferred = provider.launch("1") return self.assertFailure(deferred, errors.ProviderError) def get_cc_matcher(self, machine_id, provider, is_master=False): return _CloudConfigMatcher(self, machine_id, provider, is_master).match def _check_log(self, log, 
pattern): self.assertRegexpMatches(log.getvalue(), pattern) def capture_and_check_log(self, pattern="^$", logname="juju.openstack"): log = self.capture_logging(logname) self.addCleanup(self._check_log, log, pattern) def test_start_machine_with_constraints(self): provider = MockedLaunchProvider(self.mocker) provider.expect_zookeeper_machines(1000, "master.invalid") provider.expect_launch_setup("1") provider.expect_run_server( "1", self.get_cc_matcher("1", provider), response={ 'id': 1001, 'addresses': {'public': []}, }, flavor_id=2) self.mocker.replay() self.capture_and_check_log() return provider.launch("1", constraints=["cpu=2", "mem=3G"]) def test_start_machine_with_scheduler_hints(self): provider = MockedLaunchProvider(self.mocker) provider.expect_zookeeper_machines(1000, "master.invalid") provider.expect_launch_setup("1") provider.expect_run_server( "1", self.get_cc_matcher("1", provider), response={ 'id': 1001, 'addresses': {'public': []}, }, hints={"hint-key": "hint-value"}) self.mocker.replay() self.capture_and_check_log() return provider.launch("1", constraints=[ "os-scheduler-hints={\"hint-key\": \"hint-value\"}"]) def test_start_machine_with_default_instance_type(self): config = dict(MockedLaunchProvider.default_config) config['default-instance-type'] = "m1.sample" provider = MockedLaunchProvider(self.mocker, config) provider.expect_zookeeper_machines(1000, "master.invalid") provider.expect_launch_setup("1") provider.expect_run_server("1", self.get_cc_matcher("1", provider), response={ 'id': 1001, 'addresses': {'public': []}, }) self.mocker.replay() self.capture_and_check_log( "^default-instance-type is deprecated.*constraints") return provider.launch("1") def test_start_machine(self): provider = MockedLaunchProvider(self.mocker) provider.expect_zookeeper_machines(1000, "master.invalid") provider.expect_launch_setup("1") provider.expect_run_server("1", self.get_cc_matcher("1", provider), response={ 'id': 1001, 'addresses': {'public': []}, }) self.mocker.replay() self.capture_and_check_log() return provider.launch("1") def test_start_machine_delay(self): provider = MockedLaunchProvider(self.mocker) provider.config["use-floating-ip"] = True provider.expect_zookeeper_machines(1000, "master.invalid") provider.expect_launch_setup("1") provider.expect_run_server("1", self.get_cc_matcher("1", provider), response={ 'id': 1001, }) provider.nova.get_server(1001) self.mocker.result(succeed({ 'id': 1001, 'addresses': {'public': []}, })) provider.expect_available_floating_ip(1001) self.mocker.result(succeed(None)) self.mocker.replay() self.capture_and_check_log() self.patch(NovaLaunchMachine, "_DELAY_FOR_ADDRESSES", 0) return provider.launch("1") def _start_master_machine_test(self, provider): provider.expect_swift_public_object_url("juju_master_id") provider.expect_launch_setup("0") provider.expect_run_server("0", self.get_cc_matcher("0", provider, is_master=True), response={ 'id': 1000, 'addresses': {'public': []}, }) provider.expect_swift_put("juju_master_id", "1000") provider.provider_actions.save_state({'zookeeper-instances': [1000]}) self.mocker.result(succeed(None)) self.mocker.replay() self.capture_and_check_log() return provider.launch("0", master=True) def test_start_machine_master(self): provider = MockedLaunchProvider(self.mocker) return self._start_master_machine_test(provider) def test_start_machine_master_no_certs(self): config = dict(MockedLaunchProvider.default_config) config['ssl-hostname-verification'] = False provider = MockedLaunchProvider(self.mocker, config) return 
self._start_master_machine_test(provider) juju-0.7.orig/juju/providers/openstack/tests/test_machine.py0000644000000000000000000000727512135220114022564 0ustar 00000000000000"""Tests for server wrapper and helper functions""" from juju.providers.openstack.machine import ( get_server_addresses, get_server_status, ) from juju.lib.testing import TestCase class GetServerStatusTest(TestCase): """Tests for mapping of Nova status names to EC2 style names""" def test_build_scheduling(self): self.assertEqual("pending", get_server_status({u'status': u'BUILD(scheduling)'})) def test_build_spawning(self): self.assertEqual("pending", get_server_status({u'status': u'BUILD(spawning)'})) def test_active(self): self.assertEqual("running", get_server_status({u'status': u'ACTIVE'})) def test_no_status(self): self.assertEqual("pending", get_server_status({})) def test_mystery_status(self): self.assertEqual("unknown", get_server_status({u'status': u'NEVER_BEFORE_SEEN_MYSTERY'})) class GetServerAddressesTest(TestCase): """Tests for deriving a public and private address from Nova dict""" def test_missing(self): self.assertEqual((None, None), get_server_addresses({})) def test_empty(self): self.assertEqual((None, None), get_server_addresses({u'addresses': {}})) def test_private_only(self): self.assertEqual(("127.0.0.4", None), get_server_addresses({u'addresses': { "private": [{"addr": "127.0.0.4", "version": 4}], }})) def test_private_plus(self): self.assertEqual(("127.0.0.4", "8.8.4.4"), get_server_addresses({u'addresses': { "private": [ {"addr": "127.0.0.4", "version": 4}, {"addr": "8.8.4.4", "version": 4}, ], }})) def test_public_only(self): self.assertEqual((None, "8.8.8.8"), get_server_addresses({u'addresses': { "public": [{"addr": "8.8.8.8", "version": 4}], }})) def test_public_and_private(self): self.assertEqual(("127.0.0.4", "8.8.8.8"), get_server_addresses({u'addresses': { "public": [{"addr": "8.8.8.8", "version": 4}], "private": [{"addr": "127.0.0.4", "version": 4}], }})) def test_public_and_private_plus(self): self.assertEqual(("127.0.0.4", "8.8.8.8"), get_server_addresses({u'addresses': { "public": [{"addr": "8.8.8.8", "version": 4}], "private": [ {"addr": "127.0.0.4", "version": 4}, {"addr": "8.8.4.4", "version": 4}, ], }})) def test_custom_only(self): self.assertEqual(("127.0.0.2", None), get_server_addresses({u'addresses': { "special": [{"addr": "127.0.0.2", "version": 4}], }})) def test_custom_plus(self): self.assertEqual(("127.0.0.2", "8.8.4.4"), get_server_addresses({u'addresses': { "special": [ {"addr": "127.0.0.2", "version": 4}, {"addr": "8.8.4.4", "version": 4}, ], }})) def test_custom_and_private(self): self.assertEqual(("127.0.0.4", None), get_server_addresses({u'addresses': { "special": [{"addr": "127.0.0.2", "version": 4}], "private": [{"addr": "127.0.0.4", "version": 4}], }})) def test_custom_and_public(self): self.assertEqual(("127.0.0.2", "8.8.8.8"), get_server_addresses({u'addresses': { "special": [{"addr": "127.0.0.2", "version": 4}], "public": [{"addr": "8.8.8.8", "version": 4}], }})) juju-0.7.orig/juju/providers/openstack/tests/test_ports.py0000644000000000000000000004403312135220114022320 0ustar 00000000000000"""Tests for emulating port management with security groups""" import logging from juju import errors from juju.lib.testing import TestCase from juju.providers.openstack.machine import NovaProviderMachine from juju.providers.openstack.ports import NovaPortManager from juju.providers.openstack.tests import OpenStackTestMixin class ProviderPortMgmtTests(OpenStackTestMixin, 
TestCase): """Tests for provider exposed port management methods""" def expect_create_rule(self, group_id, proto, port): self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': group_id, 'ip_protocol': proto, 'from_port': port, 'to_port': port, }}, response={'security_group_rule': { 'id': 144, 'parent_group_id': group_id, }}) def expect_existing_rule(self, rule_id, proto, port): self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [ {'name': "juju-testing-1", 'id': 1, 'rules': [{ 'id': rule_id, 'parent_group_id': 1, 'ip_protocol': proto, 'from_port': port, 'to_port': port, }] }, ]}) def test_open_port(self): """Opening a port adds the rule to the appropriate security group""" self.expect_nova_get( "servers/1000/os-security-groups", response={'security_groups': [ {'name': "juju-testing-1", 'id': 1, 'rules': []}]}) self.expect_create_rule(1, "tcp", 80) self.mocker.replay() log = self.capture_logging("juju.openstack", level=logging.DEBUG) machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().open_port(machine, "1", 80) def _check_log(_): self.assertIn("Opened 80/tcp on machine '1000'", log.getvalue()) return deferred.addCallback(_check_log) def test_diablo_hpcloud_compatibility(self): """Verify compatibility workarounds for hpcloud/diablo.""" self.expect_nova_get( "servers/1000/os-security-groups", response={'security_groups': [ {'name': "juju-testing-1", 'id': 1}]}) self.expect_nova_get( "os-security-groups/1", response={'security_group': { 'name': "juju-testing-1", 'id': 1, 'rules': [{ 'id': 1, 'parent_group_id': 1, 'ip_protocol': 'tcp', 'from_port': 80, 'to_port': 80}]}}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().get_opened_ports(machine, "1") return deferred.addCallback(self.assertEqual, set([(80, "tcp")])) def test_open_port_missing_group(self): """Missing security group raises an error on opening port""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': []}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().open_port(machine, "1", 80) return self.assertFailure(deferred, errors.ProviderInteractionError) def test_close_port(self): """Closing a port removes the matching rule from the security group""" self.expect_existing_rule(12, "tcp", 80) self.expect_nova_delete("os-security-group-rules/12") self.mocker.replay() log = self.capture_logging("juju.openstack", level=logging.DEBUG) machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().close_port(machine, "1", 80) def _check_log(_): self.assertIn("Closed 80/tcp on machine '1000'", log.getvalue()) return deferred.addCallback(_check_log) def test_close_port_missing_group(self): """Missing security group raises an error on closing port""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': []}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().close_port(machine, "1", 80) return self.assertFailure(deferred, errors.ProviderInteractionError) def test_close_port_missing_rule(self): """Missing security group rule raises an error on closing port""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [{ 'name': "juju-testing-1", 'id': 1, "rules": [], }]}) self.mocker.replay()
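# The machine's security group exists but holds no rules, so close_port
# below finds nothing to delete and should fail with
# ProviderInteractionError.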
machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().close_port(machine, "1", 80) return self.assertFailure(deferred, errors.ProviderInteractionError) def test_close_port_mismatching_rule(self): """Rule with different port raises an error on closing port""" self.expect_existing_rule(12, "tcp", 8080) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().close_port(machine, "1", 80) return self.assertFailure(deferred, errors.ProviderInteractionError) def test_get_opened_ports_none(self): """No opened ports are listed when there are no rules""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [{ 'name': "juju-testing-1", 'id': 1, "rules": [], }]}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().get_opened_ports(machine, "1") return deferred.addCallback(self.assertEqual, set()) def test_get_opened_ports_one(self): """Opened port is listed when there is a matching rule""" self.expect_existing_rule(12, "tcp", 80) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().get_opened_ports(machine, "1") return deferred.addCallback(self.assertEqual, set([(80, "tcp")])) def test_get_opened_ports_group_ignored(self): """Opened ports exclude rules delegating to other security groups""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [{ 'name': "juju-testing-1", 'id': 1, "rules": [{ 'id': 12, 'parent_group_id': 1, 'ip_protocol': None, 'from_port': None, 'to_port': None, 'group': {'name': "juju-testing"}, }], }]}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().get_opened_ports(machine, "1") return deferred.addCallback(self.assertEqual, set()) def test_get_opened_ports_multiport_ignored(self): """Opened ports exclude rules spanning multiple ports""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [{ 'name': "juju-testing-1", 'id': 1, "rules": [{ 'id': 12, 'parent_group_id': 1, 'ip_protocol': "tcp", 'from_port': 8080, 'to_port': 8081, }], }]}) self.mocker.replay() machine = NovaProviderMachine('1000', "server1000.testing.invalid") deferred = self.get_provider().get_opened_ports(machine, "1") return deferred.addCallback(self.assertEqual, set()) class PortManagerTestMixin(OpenStackTestMixin): def get_port_manager(self): provider = self.get_provider() return NovaPortManager(provider.nova, provider.environment_name) class EnsureGroupsTests(PortManagerTestMixin, TestCase): """Tests for ensure_groups method used when launching machines""" def expect_create_juju_group(self): self.expect_nova_post("os-security-groups", {'security_group': { 'name': 'juju-testing', 'description': 'juju group for testing', }}, response={'security_group': { 'id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': 1, 'ip_protocol': "tcp", 'from_port': 22, 'to_port': 22, }}, response={'security_group_rule': { 'id': 144, 'parent_group_id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 'parent_group_id': 1, 'group_id': 1, 'ip_protocol': "tcp", 'from_port': 1, 'to_port': 65535, }}, response={'security_group_rule': { 'id': 145, 'parent_group_id': 1, }}) self.expect_nova_post("os-security-group-rules", {'security_group_rule': { 
'parent_group_id': 1, 'group_id': 1, 'ip_protocol': "udp", 'from_port': 1, 'to_port': 65535, }}, response={'security_group_rule': { 'id': 146, 'parent_group_id': 1, }}) def expect_create_machine_group(self, machine_id): machine = str(machine_id) self.expect_nova_post("os-security-groups", {'security_group': { 'name': 'juju-testing-' + machine, 'description': 'juju group for testing machine ' + machine, }}, response={'security_group': { 'id': 2, }}) def check_group_names(self, result, machine_id): self.assertEqual(["juju-testing", "juju-testing-" + str(machine_id)], result) def test_none_existing(self): """When no groups exist juju and machine security groups are created""" self.expect_nova_get("os-security-groups", response={'security_groups': []}) self.expect_create_juju_group() self.expect_create_machine_group(0) self.mocker.replay() deferred = self.get_port_manager().ensure_groups(0) return deferred.addCallback(self.check_group_names, 0) def test_other_existing(self): """Existing groups in a different environment are not affected""" self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testingish", 'id': 7}, {'name': "juju-testingish-0", 'id': 8}, ]}) self.expect_create_juju_group() self.expect_create_machine_group(0) self.mocker.replay() deferred = self.get_port_manager().ensure_groups(0) return deferred.addCallback(self.check_group_names, 0) def test_existing_juju_group(self): """An existing juju security group is reused""" self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testing", 'id': 1}, ]}) self.expect_create_machine_group(0) self.mocker.replay() deferred = self.get_port_manager().ensure_groups(0) return deferred.addCallback(self.check_group_names, 0) def test_existing_machine_group(self): """An existing machine security group is deleted and remade""" self.expect_nova_get("os-security-groups", response={'security_groups': [ {'name': "juju-testing-6", 'id': 3}, ]}) self.expect_create_juju_group() self.expect_nova_delete("os-security-groups/3") self.expect_create_machine_group(6) self.mocker.replay() deferred = self.get_port_manager().ensure_groups(6) return deferred.addCallback(self.check_group_names, 6) class GetMachineGroupsTests(PortManagerTestMixin, TestCase): """Tests for get_machine_groups method needed for machine shutdown""" def test_normal(self): """A standard juju machine returns the machine group name and id""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [ {'id': 7, 'name': "juju-testing"}, {'id': 8, 'name': "juju-testing-0"}, ]}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, {"juju-testing-0": 8}) def test_normal_include_juju(self): """If param with_juju_group=True the juju group is also returned""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [ {'id': 7, 'name': "juju-testing"}, {'id': 8, 'name': "juju-testing-0"}, ]}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine, True) return deferred.addCallback(self.assertEqual, {"juju-testing": 7, "juju-testing-0": 8}) def test_extra_group(self): """Additional groups not in the juju namespace are ignored""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [ {'id': 1, 'name': "default"}, {'id': 7, 'name': "juju-testing"}, {'id': 8, 'name': 
"juju-testing-0"}, ]}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, {"juju-testing-0": 8}) def test_other_group(self): """A server not managed by juju returns nothing""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': [ {'id': 1, 'name': "default"}, ]}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, None) def test_missing_groups(self): """A server with no groups returns nothing""" self.expect_nova_get("servers/1000/os-security-groups", response={'security_groups': []}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, None) def test_error_missing_server(self): """A server that doesn't exist or has been deleted returns nothing""" self.expect_nova_get("servers/1000/os-security-groups", code=404, response={"itemNotFound": { "message": "Instance 1000 could not be found.", "code": 404, }}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, None) # XXX: Broken by workaround for HP not supporting this api test_error_missing_server.skip = True def test_error_missing_page(self): """Unexpected errors from the client are propogated""" self.expect_nova_get("servers/1000/os-security-groups", code=404, response="404 Not Found\n\n" "The resource could not be found.\n\n ") self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return self.assertFailure(deferred, errors.ProviderInteractionError) # XXX: Need implemention of fancy error to exception mapping test_error_missing_page.skip = True def test_error_missing_server_fault(self): "A bogus compute fault due to lp:1010486 returns nothing""" self.expect_nova_get("servers/1000/os-security-groups", code=500, response={"computeFault": { "message": "The server has either erred or is incapable of" " performing the requested operation.", "code": 500, }}) self.expect_nova_get("servers/1000", code=404, response={"itemNotFound": { "message": "The resource could not be found.", "code": 404, }}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return deferred.addCallback(self.assertEqual, None) # XXX: Need implemention of fancy error to exception mapping test_error_missing_server_fault.skip = True def test_error_really_fault(self): """A real compute fault is propogated""" self.expect_nova_get("servers/1000/os-security-groups", code=500, response={"computeFault": { "message": "The server has either erred or is incapable of" " performing the requested operation.", "code": 500, }}) self.expect_nova_get("servers/1000", response={"server": {"id": 1000}}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_port_manager().get_machine_groups(machine) return self.assertFailure(deferred, errors.ProviderInteractionError) # XXX: Need implemention of fancy error to exception mapping test_error_really_fault.skip = True juju-0.7.orig/juju/providers/openstack/tests/test_provider.py0000644000000000000000000002052712135220114023005 0ustar 00000000000000"""Testing for the OpenStack provider interface""" from 
twisted.internet.defer import inlineCallbacks from juju import errors from juju.lib.testing import TestCase from juju.environment.errors import EnvironmentsConfigError from juju.machine.constraints import ConstraintSet from juju.providers.openstack import client as _mod_client from juju.providers.openstack.tests import OpenStackTestMixin class ProviderTestMixin(object): from juju.providers.openstack.files import FileStorage as FileStorageClass from juju.providers.openstack.provider import ( MachineProvider as ProviderClass) environment_name = "testing" test_environ = { "NOVA_URL": "https://environ.invalid", "NOVA_API_KEY": "env-key", "EC2_SECRET_KEY": "env-xxxx", "NOVA_PROJECT_ID": "env-project", } def get_config(self): return { "type": "openstack", "auth-mode": "keypair", "access-key": "key", "secret-key": "xxxxxxxx", "auth-url": "https://testing.invalid", "project-name": "project", "control-bucket": self.environment_name, } def get_provider(self, config=None): if config is None: config = self.get_config() return self.ProviderClass(self.environment_name, config) def get_client(self, provider): client = provider.nova._client self.assertIs(client, provider.swift._client) return client class ProviderTests(ProviderTestMixin, TestCase): def test_empty_config_raises(self): """Passing no config raises an exception about lacking credentials""" self.change_environment() # XXX: Should this raise EnvironmentsConfigError instead? self.assertRaises(ValueError, self.get_provider, {}) def test_client_params(self): """Config details get passed through to OpenStack client correctly""" provider = self.get_provider() creds = provider.credentials self.assertEquals("key", creds.access_key) self.assertEquals("xxxxxxxx", creds.secret_key) self.assertEquals("https://testing.invalid", creds.url) self.assertEquals("project", creds.project_name) self.assertIs(creds, self.get_client(provider).credentials) def test_provider_attributes(self): """ The provider environment name and config should be available as parameters in the provider. """ provider = self.get_provider() self.assertEqual(provider.environment_name, self.environment_name) self.assertEqual(provider.config.get("type"), "openstack") self.assertEqual(provider.provider_type, "openstack") def test_get_file_storage(self): """The file storage is accessible via the machine provider.""" provider = self.get_provider() storage = provider.get_file_storage() self.assertTrue(isinstance(storage, self.FileStorageClass)) def test_config_serialization(self): """ The provider configuration can be serialized to yaml. """ self.change_environment() config = self.get_config() expected = config.copy() config["authorized-keys-path"] = self.makeFile("key contents") expected["authorized-keys"] = "key contents" provider = self.get_provider(config) self.assertEqual(expected, provider.get_serialization_data()) def test_config_environment_extraction(self): """ The provider serialization loads keys as needed from the environment. Variables from the configuration take precedence over those from the environment, when serializing. 
""" self.change_environment(**self.test_environ) provider = self.get_provider({ "auth-mode": "keypair", "project-name": "other-project", "authorized-keys": "key-data", }) serialized = provider.get_serialization_data() expected = { "auth-mode": "keypair", "access-key": "env-key", "secret-key": "env-xxxx", "auth-url": "https://environ.invalid", "project-name": "other-project", "authorized-keys": "key-data", } self.assertEqual(expected, serialized) def test_conflicting_authorized_keys_options(self): """ We can't handle two different authorized keys options, so deny constructing an environment that way. """ config = self.get_config() config["authorized-keys"] = "File content" config["authorized-keys-path"] = "File path" error = self.assertRaises(EnvironmentsConfigError, self.get_provider, config) self.assertEquals( str(error), "Environment config cannot define both authorized-keys and " "authorized-keys-path. Pick one!") class CheckCertsTests(ProviderTestMixin, TestCase): def run_test(self, config_changes, txaws_support=True): log = self.capture_logging("juju") if txaws_support: obj = object() else: obj = None self.patch(_mod_client, "WebVerifyingContextFactory", obj) config = self.get_config() config.update(config_changes) provider = self.get_provider(config) return provider, log def test_default_true(self): provider, log = self.run_test({}) self.assertNotIn("ssl-hostname-verification", provider.config) self.assertEquals(True, provider._check_certs) self.assertEquals(True, self.get_client(provider).check_certs) self.assertEqual("", log.getvalue()) def test_false(self): provider, log = self.run_test({"ssl-hostname-verification": False}) self.assertEquals(False, provider._check_certs) self.assertEquals(False, self.get_client(provider).check_certs) self.assertIn("Set 'ssl-hostname-verification'", log.getvalue()) def test_true(self): provider, log = self.run_test({"ssl-hostname-verification": True}) self.assertEquals(True, provider._check_certs) self.assertEquals(True, self.get_client(provider).check_certs) self.assertEqual("", log.getvalue()) def test_http_auth_url(self): provider, log = self.run_test({ "auth-url": "http://testing.invalid", "ssl-hostname-verification": True, }) self.assertEquals(True, self.get_client(provider).check_certs) self.assertIn("identity service not using secure", log.getvalue()) def test_no_txaws_support(self): self.assertRaises(errors.SSLVerificationUnsupported, self.run_test, {"ssl-hostname-verification": True}, txaws_support=False) class GetConstraintSetTests(OpenStackTestMixin, TestCase): @inlineCallbacks def test_get_constraints(self): self.expect_nova_get("flavors", response={'flavors': self.default_flavors}) self.mocker.replay() provider = self.get_provider() cs = yield provider.get_constraint_set() self.assertIsInstance(cs, ConstraintSet) cs2 = yield provider.get_constraint_set() self.assertIsInstance(cs2, ConstraintSet) self.assertEqual(cs, cs2) def create_constraint_set(self): self.expect_nova_get("flavors", response={'flavors': self.default_flavors}) self.mocker.replay() provider = self.get_provider() return provider.get_constraint_set() @inlineCallbacks def test_parse_scheduler_hints_one(self): cs = yield self.create_constraint_set() c = cs.parse(["os-scheduler-hints={\"hint-key\": \"hint-val\"}"]) self.assertEqual({"hint-key": "hint-val"}, c["os-scheduler-hints"]) @inlineCallbacks def test_parse_scheduler_hints_bad_value(self): cs = yield self.create_constraint_set() err = self.assertRaises(errors.ConstraintError, cs.parse, ["os-scheduler-hints=notjson"]) 
self.assertRegexpMatches(str(err), "Bad 'os-scheduler-hints' constraint 'notjson': .*") @inlineCallbacks def test_parse_scheduler_hints_bad_array(self): cs = yield self.create_constraint_set() err = self.assertRaises(errors.ConstraintError, cs.parse, ["os-scheduler-hints=[]"]) self.assertRegexpMatches(str(err), "Bad 'os-scheduler-hints' constraint '\\[\\]': .*") juju-0.7.orig/juju/providers/openstack/tests/test_shutdown.py0000644000000000000000000001242212135220114023021 0ustar 00000000000000"""Tests for terminating machines and cleaning up the environment""" from juju import errors from juju.lib import testing from juju.machine import ProviderMachine from juju.providers.openstack.machine import NovaProviderMachine from juju.providers.openstack.tests import OpenStackTestMixin class ShutdownMachineTests(OpenStackTestMixin, testing.TestCase): def test_shutdown_single(self): self.expect_nova_get("servers/1000", response={"server": { 'name': "juju testing instance 0", }}) self.expect_nova_delete("servers/1000", code=204) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_provider().shutdown_machine(machine) return deferred.addCallback(self.assertIs, machine) def test_shutdown_single_other(self): self.expect_nova_get("servers/1000", response={"server": { 'name': "some other instance", }}) self.mocker.replay() machine = NovaProviderMachine(1000) deferred = self.get_provider().shutdown_machine(machine) return self.assert_not_found(deferred, [1000]) def test_shutdown_single_wrong_machine(self): self.mocker.reset() machine = ProviderMachine("i-000003E8") e = self.assertRaises(errors.ProviderError, self.get_provider().shutdown_machine, machine) self.assertIn("Need a NovaProviderMachine to shutdown", str(e)) def test_shutdown_multi_none(self): self.mocker.reset() deferred = self.get_provider().shutdown_machines([]) return deferred.addCallback(self.assertEqual, []) def test_shutdown_multi_some_invalid(self): """No machines are shut down if some are invalid""" self.mocker.unorder() self.expect_nova_get("servers/1001", response={"server": { 'name': "juju testing instance 1", }}) self.expect_nova_get("servers/1002", response={"server": { 'name': "some other instance", }}) self.mocker.replay() machines = [NovaProviderMachine(1001), NovaProviderMachine(1002)] deferred = self.get_provider().shutdown_machines(machines) return self.assert_not_found(deferred, [1002]) # XXX: dumb requirement to keep all running if some invalid, drop this test_shutdown_multi_some_invalid.skip = True # GZ 2012-06-11: Corner case difference, EC2 rechecks machine statuses and # group membership on shutdown. 
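# A hedged sketch (an assumption drawn from the expectations above, not the
# provider's actual code): shutdown appears to re-fetch each server and only
# delete ones whose name marks them as launched by juju for this
# environment. The helper name is illustrative.
def _sketch_may_shutdown(server, environment_name="testing"):
    """Return True when a server's name ties it to this juju environment."""
    return server.get("name", "").startswith(
        "juju %s instance " % environment_name)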
def test_shutdown_multi_success(self): """Machines are shut down and groups except juju group are deleted""" self.mocker.unorder() self.expect_nova_get("servers/1001", response={"server": { 'name': "juju testing instance 1", }}) self.expect_nova_get("servers/1002", response={"server": { 'name': "juju testing instance 2", }}) self.expect_nova_delete("servers/1001", code=204) self.expect_nova_delete("servers/1002", code=204) self.mocker.replay() machines = [NovaProviderMachine(1001), NovaProviderMachine(1002)] deferred = self.get_provider().shutdown_machines(machines) return deferred class DestroyEnvironmentTests(OpenStackTestMixin, testing.TestCase): def check_machine_ids(self, machines, server_ids): self.assertEqual(set(m.instance_id for m in machines), set(server_ids)) def test_destroy_environment(self): self.mocker.unorder() self.expect_swift_put("testing/provider-state", "{}\n") self.expect_nova_get("servers/detail", response={"servers": [ self.make_server(1001), self.make_server(1002), ]}) self.expect_nova_get("servers/1001", response={"server": { 'name': "juju testing instance 1", }}) self.expect_nova_get("servers/1002", response={"server": { 'name': "juju testing instance 2", }}) self.expect_nova_delete("servers/1001", code=204) self.expect_nova_delete("servers/1002", code=204) self.mocker.replay() deferred = self.get_provider().destroy_environment() return deferred.addCallback(self.check_machine_ids, [1001, 1002]) def test_s3_failure(self): self.mocker.unorder() self.expect_swift_put("testing/provider-state", "{}\n", code=500, response="Server unavailable or something") # XXX: normal server shutdown should be expected here self.mocker.replay() deferred = self.get_provider().destroy_environment() return deferred.addCallback(self.assertEqual, []) # XXX: Need to bolster swift robustness in response to api errors test_s3_failure.skip = True # GZ 2012-06-15: Always tries removing juju group unlike EC2 currently def test_shutdown_no_instances(self): """With no instances no shutdowns are attempted""" self.mocker.unorder() self.expect_swift_put("testing/provider-state", "{}\n") self.expect_nova_get("servers/detail", response={"servers": []}) self.mocker.replay() deferred = self.get_provider().destroy_environment() return deferred.addCallback(self.assertEqual, []) juju-0.7.orig/juju/providers/openstack/tests/test_state.py0000644000000000000000000000444312135220114022272 0ustar 00000000000000"""Tests for the common state interface over the Openstack provider The testcases here largely duplicate those in openstack.tests.test_files as state handling is a pretty thin layer over the file storage. 
""" from juju.lib import serializer from juju.lib.testing import TestCase from juju.providers.openstack.tests import OpenStackTestMixin class OpenStackStateTest(OpenStackTestMixin, TestCase): def test_save(self): """Saving a dict puts yaml serialized bytes in provider-state""" state = {"zookeeper-instances": [ [1000, "x1.example.com"], ]} self.expect_swift_put("testing/provider-state", serializer.dump(state)) self.mocker.replay() return self.get_provider().save_state(state) def test_save_missing_container(self): """Saving will create the container when it does not exist already""" state = {"zookeeper-instances": [ [1000, "x1.example.com"], ]} state_bytes = serializer.dump(state) self.expect_swift_put("testing/provider-state", state_bytes, code=404) self.expect_swift_put_container("testing") self.expect_swift_put("testing/provider-state", state_bytes) self.mocker.replay() return self.get_provider().save_state(state) def test_load(self): """Loading deserializes yaml from provider-state to a python dict""" state = {"zookeeper-instances": []} self.expect_swift_get("testing/provider-state", response=serializer.dump(state)) self.mocker.replay() deferred = self.get_provider().load_state() return deferred.addCallback(self.assertEqual, state) def test_load_missing(self): """Loading returns False if provider-state does not exist""" self.expect_swift_get("testing/provider-state", code=404, response={}) self.mocker.replay() deferred = self.get_provider().load_state() return deferred.addCallback(self.assertIs, False) def test_load_no_content(self): """Loading returns False if provider-state is empty""" self.expect_swift_get("testing/provider-state", response="") self.mocker.replay() deferred = self.get_provider().load_state() return deferred.addCallback(self.assertIs, False) juju-0.7.orig/juju/providers/openstack_s3/__init__.py0000644000000000000000000000364412135220114021117 0ustar 00000000000000"""Provider interface implementation for Openstack with S3 storage""" import os import logging from txaws.service import AWSServiceRegion from juju.providers.openstack import provider as os_provider from juju.providers.openstack import credentials from juju.providers.ec2 import files as s3_files log = logging.getLogger("juju.openstack_s3") class HybridCredentials(credentials.OpenStackCredentials): """Encapsulation of credentials with S3 required values included""" _config_vars = { 'combined-key': ("EC2_ACCESS_KEY", "AWS_ACCESS_KEY_ID"), 's3-uri': ("S3_URL",), } _config_vars.update(credentials.OpenStackCredentials._config_vars) class MachineProvider(os_provider.MachineProvider): """MachineProvider for use in Openstack environment but with S3 API""" Credentials = HybridCredentials def __init__(self, environment_name, config): super(MachineProvider, self).__init__(environment_name, config) del self.swift # If access or secret keys are still blank, inside txaws environment # a ValueError will be raised after rechecking the environment. 
self._aws_service = AWSServiceRegion( access_key=self.credentials.combined_key, secret_key=self.credentials.secret_key, ec2_uri="", # The EC2 client will not be used s3_uri=self.credentials.s3_uri) if self._check_certs: s3_endpoint = self._aws_service.s3_endpoint s3_endpoint.ssl_hostname_verification = True if s3_endpoint.scheme != "https": log.warn("S3 API calls not using secure transport") self.s3 = self._aws_service.get_s3_client() @property def provider_type(self): return "openstack_s3" def get_file_storage(self): """Retrieve a S3 API compatible backend FileStorage class""" return s3_files.FileStorage(self.s3, self.config["control-bucket"]) juju-0.7.orig/juju/providers/openstack_s3/tests/0000755000000000000000000000000012135220114020141 5ustar 00000000000000juju-0.7.orig/juju/providers/openstack_s3/tests/__init__.py0000644000000000000000000000000012135220114022240 0ustar 00000000000000juju-0.7.orig/juju/providers/openstack_s3/tests/test_provider.py0000644000000000000000000000633412135220114023412 0ustar 00000000000000"""Testing for the OpenStack provider interface""" from juju.providers.openstack.tests import test_provider class ProviderTestMixin(test_provider.ProviderTestMixin): from juju.providers.ec2.files import FileStorage as FileStorageClass from juju.providers.openstack_s3 import MachineProvider as ProviderClass test_environ = { "NOVA_URL": "https://environ.invalid", "NOVA_API_KEY": "env-key", "EC2_ACCESS_KEY": "env-key:env-project", "EC2_SECRET_KEY": "env-xxxx", "NOVA_PROJECT_ID": "env-project", "S3_URL": "https://environ.invalid:3333", } def get_config(self): config = super(ProviderTestMixin, self).get_config() config.update({ "type": "openstack_s3", "combined-key": "key:project", "s3-uri": "https://testing.invalid:3333", }) return config def get_client(self, provider): return provider.nova._client class ProviderTests(test_provider.ProviderTests, ProviderTestMixin): def test_s3_params(self): """Config details get passed through to txaws S3 client correctly""" s3 = self.get_provider().s3 self.assertEquals("https://testing.invalid:3333/", s3.endpoint.get_uri()) self.assertEquals("key:project", s3.creds.access_key) self.assertEquals("xxxxxxxx", s3.creds.secret_key) def test_provider_attributes(self): """ The provider environment name and config should be available as parameters in the provider. """ provider = self.get_provider() self.assertEqual(provider.environment_name, self.environment_name) self.assertEqual(provider.config.get("type"), "openstack_s3") self.assertEqual(provider.provider_type, "openstack_s3") def test_config_environment_extraction(self): """ The provider serialization loads keys as needed from the environment. Variables from the configuration take precedence over those from the environment, when serializing. 
""" self.change_environment(**self.test_environ) provider = self.get_provider({ "auth-mode": "keypair", "project-name": "other-project", "authorized-keys": "key-data", }) serialized = provider.get_serialization_data() expected = { "auth-mode": "keypair", "access-key": "env-key", "secret-key": "env-xxxx", "auth-url": "https://environ.invalid", "project-name": "other-project", "authorized-keys": "key-data", "combined-key": "env-key:env-project", "s3-uri": "https://environ.invalid:3333", } self.assertEqual(expected, serialized) class CheckCertsTests(test_provider.CheckCertsTests, ProviderTestMixin): def test_http_s3_url(self): provider, log = self.run_test({ "s3-uri": "http://testing.invalid:3333", "ssl-hostname-verification": True, }) self.assertTrue(provider.s3.endpoint.ssl_hostname_verification) self.assertEquals(True, self.get_client(provider).check_certs) self.assertIn("S3 API calls not using secure", log.getvalue()) juju-0.7.orig/juju/providers/tests/__init__.py0000644000000000000000000000000212135220114017646 0ustar 00000000000000# juju-0.7.orig/juju/providers/tests/test_dummy.py0000644000000000000000000001426512135220114020321 0ustar 00000000000000from cStringIO import StringIO import zookeeper from twisted.internet.defer import inlineCallbacks from juju.errors import ProviderError from juju.machine import ProviderMachine from juju.providers.dummy import MachineProvider, DummyMachine from juju.state.placement import UNASSIGNED_POLICY from juju.lib.testing import TestCase class DummyProviderTest(TestCase): def setUp(self): super(DummyProviderTest, self).setUp() self.provider = MachineProvider("foo", {"peter": "rabbit"}) zookeeper.set_debug_level(0) def test_environment_name(self): self.assertEqual(self.provider.environment_name, "foo") def test_provider_type(self): self.assertEqual(self.provider.provider_type, "dummy") def test_get_placement_policy(self): self.assertEqual( self.provider.get_placement_policy(), UNASSIGNED_POLICY) self.provider = MachineProvider("foo", {"placement": "local"}) self.assertEqual( self.provider.get_placement_policy(), "local") @inlineCallbacks def test_bootstrap(self): machines = yield self.provider.bootstrap(None) self.assertTrue(machines) for m in machines: self.assertTrue(isinstance(m, ProviderMachine)) @inlineCallbacks def test_start_machine(self): machines = yield self.provider.start_machine({"machine-id": 0}) self.assertTrue(machines) for m in machines: self.assertTrue(isinstance(m, ProviderMachine)) @inlineCallbacks def test_start_machine_with_dns_name(self): # This ability is for testing purposes. 
machines = yield self.provider.start_machine( {"machine-id": 0, "dns-name": "xe.example.com"}) self.assertEqual(len(machines), 1) machine = machines.pop() self.assertTrue(isinstance(machine, ProviderMachine)) self.assertTrue(machine.dns_name, "xe.example.com") @inlineCallbacks def test_get_machine(self): machines = yield self.provider.start_machine( {"machine-id": 0}) machine = machines.pop() result = yield self.provider.get_machine(machine.instance_id) self.assertTrue(isinstance(result, ProviderMachine)) self.assertEqual(machine.instance_id, result.instance_id) @inlineCallbacks def test_start_machine_accepts_machine_data(self): machines = yield self.provider.start_machine( {"machine-id": 0, "a": 1}) self.assertEqual(len(machines), 1) self.assertTrue(isinstance(machines[0], ProviderMachine)) def test_start_machine_requires_machine_id(self): d = self.provider.start_machine({"a": 1}) return self.assertFailure(d, ProviderError) @inlineCallbacks def test_destroy_environment(self): result = yield self.provider.destroy_environment() self.assertEqual(result, []) @inlineCallbacks def test_destroy_environment_returns_machines(self): yield self.provider.bootstrap(None) result = yield self.provider.destroy_environment() self.assertEqual(len(result), 1) self.assertTrue(isinstance(result[0], ProviderMachine)) @inlineCallbacks def test_connect(self): client = yield self.provider.connect() self.assertTrue(client.connected) @inlineCallbacks def test_connect_with_sharing(self): # Ensure the sharing option is simply ignored for dummy. client = yield self.provider.connect(share=True) self.assertTrue(client.connected) @inlineCallbacks def test_get_machines(self): machines = yield self.provider.get_machines() self.assertEqual(machines, []) @inlineCallbacks def test_shutdown_machine(self): result = yield self.provider.bootstrap(None) machine = result[0] machines = yield self.provider.get_machines() self.assertTrue(machines) yield self.provider.shutdown_machine(machine) machines = yield self.provider.get_machines() self.assertFalse(machines) def test_shutdown_rejects_invalid_machine(self): machine = ProviderMachine("a-value") d = self.provider.shutdown_machine(machine) self.assertFailure(d, ProviderError) return d def test_shutdown_rejects_unknown_machine(self): machine = DummyMachine(1) d = self.provider.shutdown_machine(machine) self.assertFailure(d, ProviderError) return d @inlineCallbacks def test_save_state(self): yield self.provider.save_state(dict(a=1)) state = yield self.provider.load_state() self.assertTrue("a" in state) self.assertEqual(state["a"], 1) @inlineCallbacks def test_load_state(self): state = yield self.provider.load_state() self.assertEqual(state, {}) def test_get_serialization_data(self): data = self.provider.get_serialization_data() self.assertEqual( data, {"peter": "rabbit", "dynamicduck": "magic"}) @inlineCallbacks def test_port_exposing(self): """Verifies dummy provider properly works with ports.""" machines = yield self.provider.start_machine({"machine-id": 0}) machine = machines[0] yield self.provider.open_port(machine, 0, 25, "tcp") yield self.provider.open_port(machine, 0, 80) yield self.provider.open_port(machine, 0, 53, "udp") yield self.provider.open_port(machine, 0, 443, "tcp") yield self.provider.close_port(machine, 0, 25) yield self.provider.close_port(machine, 0, 25) # ignored exposed_ports = yield self.provider.get_opened_ports(machine, 0) self.assertEqual(exposed_ports, set([(53, 'udp'), (80, 'tcp'), (443, 'tcp')])) @inlineCallbacks def 
test_file_storage_returns_same_storage(self): """Multiple invocations of MachineProvider.get_file_storage use the same path. """ file_obj = StringIO("rabbits") storage = self.provider.get_file_storage() yield storage.put("/magic/beans.txt", file_obj) storage2 = self.provider.get_file_storage() fh = yield storage2.get("/magic/beans.txt") self.assertEqual(fh.read(), "rabbits") juju-0.7.orig/juju/state/__init__.py0000644000000000000000000000000212135220114015607 0ustar 00000000000000# juju-0.7.orig/juju/state/agent.py0000644000000000000000000000255312135220114015163 0ustar 00000000000000import zookeeper class AgentStateMixin(object): """A mixin for state objects that will have agents processes. Provides for the observation and connection of agent processes. Subclasses must implement M{_get_agent_path}. """ def has_agent(self): """Does this domain object have an agent connected. Return boolean deferred informing whether an agent is connected. """ d = self._client.exists(self._get_agent_path()) d.addCallback(lambda result: bool(result)) return d def _get_agent_path(self): raise NotImplementedError def watch_agent(self): """Observe changes to an agent's presence. Return two boolean deferreds informing whether an agent is connected, and whether a change happened. Both presence and content changes are encapsulated in the second deferred, callers interested in only presence need to perform event filtering as needed. """ exists_d, watch_d = self._client.exists_and_watch( self._get_agent_path()) exists_d.addCallback(lambda result: bool(result)) return exists_d, watch_d def connect_agent(self): """Inform juju that this associated agent is alive. """ return self._client.create( self._get_agent_path(), flags=zookeeper.EPHEMERAL) juju-0.7.orig/juju/state/auth.py0000644000000000000000000000325412135220114015025 0ustar 00000000000000import hashlib import base64 import zookeeper def make_identity(credentials): """ Given a principal credentials in the form of principal_id:password, transform it into an identity of the form principal_id:hash that can be used for an access control list entry. """ if not ":" in credentials: raise SyntaxError( "Credentials in wrong format, should be principal_id:password") user, password = credentials.split(":", 1) identity = "%s:%s" % ( user, base64.b64encode(hashlib.new("sha1", credentials).digest())) return identity PERM_MAP = { "read": zookeeper.PERM_READ, "write": zookeeper.PERM_WRITE, "delete": zookeeper.PERM_DELETE, "create": zookeeper.PERM_CREATE, "admin": zookeeper.PERM_ADMIN, "all": zookeeper.PERM_ALL} VALID_SCHEMES = ["digest", "world"] def make_ace(identity, scheme="digest", **permissions): """ Given a user identity, and boolean keyword arguments corresponding to permissions construct an access control entry (ACE). 
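For example (an illustrative sketch only, using the helpers defined in this module), an ACE granting read and write access to a digest identity could be built as:

        identity = make_identity("admin:secret")
        ace = make_ace(identity, read=True, write=True)

Note that each permission keyword must be passed as a literal True; permissions cannot be disabled through this interface.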
""" assert scheme in VALID_SCHEMES ace_permissions = 0 for name in permissions: if name not in PERM_MAP: raise SyntaxError("Invalid permission keyword %r" % name) if not isinstance(permissions[name], bool) or not permissions[name]: raise SyntaxError( "Permissions can only be enabled via ACL - %s" % name) ace_permissions = ace_permissions | PERM_MAP[name] if not ace_permissions: raise SyntaxError("No permissions specified") access_control_entry = { "id": identity, "scheme": scheme, "perms": ace_permissions} return access_control_entry juju-0.7.orig/juju/state/base.py0000644000000000000000000001310712135220114014774 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks, returnValue from zookeeper import NoNodeException from txzookeeper.utils import retry_change from juju.state.errors import StopWatcher from juju.state.topology import InternalTopology log = logging.getLogger("juju.state") class StateBase(object): """Base class for state handling subclasses. At the moment, this class provides a useful constructor, and a couple of methods to deal with reading and changing the /topology node in a sensible way. """ def __init__(self, client): """Constructor. @param client: ZookeeperClient instance. """ self._client = client self._old_topology = None @inlineCallbacks def _read_topology(self): """Read the /topology node and return an InternalTopology object. This object should be used with read-only semantics. For changing the topology, check out the _retry_topology_change() method. Note that this method name is underlined to mean "protected", not "private", since the only purpose of this method is to be used by subclasses. """ topology = InternalTopology() try: content, stat = yield self._client.get("/topology") topology.parse(content) except NoNodeException: pass returnValue(topology) def _retry_topology_change(self, change_topology_function): """Change the current /topology node in a reliable way. @param change_topology_function: A function/method which accepts a InternalTopology instance as an argument. This function can read and modify the topology instance, and after it returns (or after the returned deferred fires) the modified topology will be persisted into the /topology node. Note that this function must have no side-effects, since it may be called multiple times depending on conflict situations. Note that this method name is underlined to mean "protected", not "private", since the only purpose of this method is to be used by subclasses. """ @inlineCallbacks def change_content_function(content, stat): topology = InternalTopology() if content: topology.parse(content) yield change_topology_function(topology) returnValue(topology.dump()) return retry_change(self._client, "/topology", change_content_function) @inlineCallbacks def _watch_topology(self, watch_topology_function): """Changes in the /topology node will fire the given callback. @param watch_topology_function: A function/method which accepts two InternalTopology parameters: the old topology, and the new one. The old topology will be None the first time this function is called. Note that there are no guarantees that this function will be called once for *every* change in the topology, which means that multiple modifications may be observed as a single call. This method currently sets a pretty much perpetual watch (errors will make it bail out). In order to cleanly stop the watcher, a StopWatch exception can be raised by the callback. 
Note that this method name is underlined to mean "protected", not "private", since the only purpose of this method is to be used by subclasses. """ # Need to guard on the client being connected in the case # 1) a watch is waiting to run (in the reactor); # 2) and the connection is closed. # Because _watch_topology always chains to __topology_changed, # the other guarding seen with `StopWatcher` is done there. if not self._client.connected: return exists, watch = self._client.exists_and_watch("/topology") stat = yield exists if stat is not None: yield self.__topology_changed(None, watch_topology_function) else: watch.addCallback(self.__topology_changed, watch_topology_function) @inlineCallbacks def __topology_changed(self, ignored, watch_topology_function): """Internal callback used by _watch_topology().""" # Need to guard on the client being connected in the case # 1) a watch is waiting to run (in the reactor); # 2) and the connection is closed. # # It remains the responsibility of `watch_topology_function` to # raise `StopWatcher`, per the doc of `_watch_topology`. if not self._client.connected: return try: get, watch = self._client.get_and_watch("/topology") content, stat = yield get except NoNodeException: # WTF? The node went away! This is an unexpected bug # which we try to hide from the callback to simplify # things. We'll set the watch back, and once the new # content comes up, we'll present the delta as usual. log.warning("The /topology node went missing!") self._watch_topology(watch_topology_function) else: new_topology = InternalTopology() new_topology.parse(content) try: yield watch_topology_function(self._old_topology, new_topology) except StopWatcher: return self._old_topology = new_topology watch.addCallback(self.__topology_changed, watch_topology_function) juju-0.7.orig/juju/state/charm.py0000644000000000000000000000662212135220114015160 0ustar 00000000000000 from twisted.internet.defer import ( inlineCallbacks, returnValue, succeed) from zookeeper import NoNodeException from juju.charm.config import ConfigOptions from juju.charm.metadata import MetaData from juju.charm.url import CharmURL from juju.lib import under, serializer from juju.state.base import StateBase from juju.state.errors import CharmStateNotFound def _charm_path(charm_id): return "/charms/%s" % under.quote(charm_id) class CharmStateManager(StateBase): """Manages the state of charms in an environment.""" @inlineCallbacks def add_charm_state(self, charm_id, charm, url): """Register metadata about the provided Charm. :param str charm_id: The key under which to store the Charm. :param charm: The Charm itself. :param url: The provider storage url for the Charm. """ charm_data = { "config": charm.config.get_serialization_data(), "metadata": charm.metadata.get_serialization_data(), "sha256": charm.get_sha256(), "url": url } # XXX In the future we'll have to think about charm # replacements here. For now this will do, and will # explode reliably in case of conflicts.
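# (No create flags are passed below, so registering a charm id that already exists will errback with a ZooKeeper node-exists error rather than silently overwriting the existing node.)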
yield self._client.create( _charm_path(charm_id), serializer.dump(charm_data)) charm_state = CharmState(self._client, charm_id, charm_data) returnValue(charm_state) @inlineCallbacks def get_charm_state(self, charm_id): """Retrieve a CharmState for the given charm id.""" try: content, stat = yield self._client.get(_charm_path(charm_id)) except NoNodeException: raise CharmStateNotFound(charm_id) charm_data = serializer.load(content) charm_state = CharmState(self._client, charm_id, charm_data) returnValue(charm_state) class CharmState(object): """State of a charm registered in an environment.""" def __init__(self, client, charm_id, charm_data): self._client = client self._charm_url = CharmURL.parse(charm_id) self._charm_url.assert_revision() self._metadata = MetaData() self._metadata.parse_serialization_data(charm_data["metadata"]) self._config = ConfigOptions() self._config.parse(charm_data["config"]) # Just a health check: assert self._metadata.name == self.name self._sha256 = charm_data["sha256"] self._bundle_url = charm_data.get("url") @property def name(self): """The charm name.""" return self._charm_url.name @property def revision(self): """The monotonically increasing charm revision number. """ return self._charm_url.revision @property def bundle_url(self): """The url to the charm bundle in the provider storage.""" return self._bundle_url @property def id(self): """The charm id.""" return str(self._charm_url) def get_metadata(self): """Return deferred MetaData.""" return succeed(self._metadata) def get_config(self): """Return deferred ConfigOptions.""" return succeed(self._config) def get_sha256(self): """Return deferred sha256 for the charm.""" return succeed(self._sha256) def is_subordinate(self): """Is this a subordinate charm.""" return self._metadata.is_subordinate juju-0.7.orig/juju/state/endpoint.py0000644000000000000000000000364512135220114015710 0ustar 00000000000000""" This module supports a user-level view of the topology, instead of the low-level perspective of service states and relation states. """ from collections import namedtuple class RelationEndpoint( # This idiom allows for the efficient and simple construction of # value types based on tuples; notably __eq__ and __hash__ are # correctly defined for value types. In addition, the storage # overhead is that of an ordinary tuple (although not so important # for this usage). See urllib/parse.py in Python 3.x for other # examples. namedtuple( "RelationEndpoint", ("service_name", "relation_type", "relation_name", "relation_role", "relation_scope"))): __slots__ = () def __new__(cls, service_name, relation_type, relation_name, relation_role, relation_scope="global"): return super(cls, RelationEndpoint).__new__( cls, service_name, relation_type, relation_name, relation_role, relation_scope) def may_relate_to(self, other): """Test whether the `other` endpoint may be used in a common relation. RelationEndpoints may be related if they share the same relation_type (which is called an "interface" in charms) and one is a ``provides`` and the other is a ``requires``; or if both endpoints have a relation_role of ``peers``. Raises a `TypeError` if `other` is not a `RelationEndpoint`.
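For example (an illustrative sketch; note that the topology stores the roles as ``server``/``client``/``peer``), two endpoints sharing the ``mysql`` interface may relate:

            provides = RelationEndpoint("mysql", "mysql", "db", "server")
            requires = RelationEndpoint("wordpress", "mysql", "db", "client")
            assert provides.may_relate_to(requires)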
""" if not isinstance(other, RelationEndpoint): raise TypeError("Not a RelationEndpoint", other) return (self.relation_type == other.relation_type and ((self.relation_role == "server" and other.relation_role == "client") or (self.relation_role == "client" and other.relation_role == "server") or (self.relation_role == "peer" and other.relation_role == "peer"))) juju-0.7.orig/juju/state/environment.py0000644000000000000000000001605212135220114016430 0ustar 00000000000000import zookeeper from twisted.internet.defer import inlineCallbacks, returnValue from txzookeeper.utils import retry_change from juju.environment.config import EnvironmentsConfig from juju.lib import serializer from juju.state.errors import EnvironmentStateNotFound from juju.state.base import StateBase SETTINGS_PATH = "/settings" class EnvironmentStateManager(StateBase): def _force(self, path, content): return retry_change(self._client, path, lambda *_: content) def set_config_state(self, config, environment_name): serialized_env = config.serialize(environment_name) return self._force("/environment", serialized_env) @inlineCallbacks def get_config(self): try: content, stat = yield self._client.get("/environment") except zookeeper.NoNodeException: raise EnvironmentStateNotFound() config = EnvironmentsConfig() config.parse(content) returnValue(config) @inlineCallbacks def get_in_legacy_environment(self): """Return True if bootstrapped with pre-constraints code.""" stat = yield self._client.exists("/constraints") returnValue(not stat) def set_constraints(self, constraints): """Set the machine constraints for the whole environment. These may be overridden piecemeal, or entirely, by service constraints. """ data = constraints.data data.pop("ubuntu-series", None) return self._force("/constraints", serializer.dump(data)) @inlineCallbacks def get_constraint_set(self): """Get the ConstraintSet generated by this environment's provider. This is needed to generate any Constraints object that will apply in this environment; but it shouldn't be needed by any processes other than the CLI and the provisioning agent. """ config = yield self.get_config() provider = config.get_default().get_machine_provider() constraint_set = yield provider.get_constraint_set() returnValue(constraint_set) @inlineCallbacks def get_constraints(self): """Get the environment's machine constraints.""" constraint_set = yield self.get_constraint_set() try: content, stat = yield self._client.get("/constraints") data = serializer.load(content) except zookeeper.NoNodeException: data = {} returnValue(constraint_set.load(data)) # TODO The environment waiting/watching logic in the # provisioning agent should be moved here (#640726). class GlobalSettingsStateManager(StateBase): """State for the the environment's runtime characterstics. This can't be stored directly in the environment, as that has access restrictions. The runtime state can be accessed from all connected juju clients. """ def set_provider_type(self, provider_type): return self._set_value("provider-type", provider_type, once=True) def get_provider_type(self): return self._get_value("provider-type") def is_debug_log_enabled(self): """Find out if the debug log is enabled. Returns a boolean. """ return self._get_value("debug-log", False) def set_debug_log(self, enabled): """Enable/Disable the debug log. :param enabled: Boolean denoting whether the log should be enabled. 
""" return self._set_value("debug-log", bool(enabled)) def set_environment_id(self, uid): return self._set_value("env-id", uid, once=True) def get_environment_id(self): return self._get_value("env-id") @inlineCallbacks def _get_value(self, key, default=None): try: content, stat = yield self._client.get(SETTINGS_PATH) except zookeeper.NoNodeException: returnValue(default) data = serializer.load(content) returnValue(data.get(key, default)) def _set_value(self, key, value, once=False): def set_value(old_content, stat): if not old_content: data = {} else: data = serializer.load(old_content) if once and key in data: raise ValueError("%s can only be set once" % key) data[key] = value return serializer.dump(data) return retry_change(self._client, SETTINGS_PATH, set_value) def watch_settings_changes(self, callback, error_callback=None): """Register a callback to invoked when the runtime changes. This watch primarily serves to get a persistent watch of the existance and modifications to the global settings. The callback will be invoked the first time as soon as the settings are present. If the settings are already present, it will be invoked immediately. For initial presence the callback value will be the boolean value True. An error callback will be invoked if the callback raised an exception. The watcher will be stopped, and the error consumed by the error callback. """ assert callable(callback), "Invalid callback" watcher = _RuntimeWatcher(self._client, callback, error_callback) return watcher.start() class _RuntimeWatcher(object): def __init__(self, client, callback, error_callback=None): self._client = client self._callback = callback self._watching = False self._error_callback = error_callback @property def is_running(self): return self._watching @inlineCallbacks def start(self): """Start watching the settings. The callback will receive notification of changes in addition to an initial presence message. No state is conveyed via the watch api only notifications. """ assert not self._watching, "Already Watching" self._watching = True # This logic will break if the node is removed, and so will # the function below, but the internal logic never removes # it, so we do not handle this case. exists_d, watch_d = self._client.exists_and_watch(SETTINGS_PATH) exists = yield exists_d if exists: yield self._on_settings_changed() else: watch_d.addCallback(self._on_settings_changed) returnValue(self) def stop(self): """Stop the environment watcher, no more callbacks will be invoked.""" self._watching = False @inlineCallbacks def _on_settings_changed(self, change_event=True): """Setup a perpetual watch till the watcher is stopped. """ # Ensure the watch is active, and the client is connected. if not self._watching or not self._client.connected: returnValue(False) exists_d, watch_d = self._client.exists_and_watch(SETTINGS_PATH) try: yield self._callback(change_event) except Exception, e: self._watching = False if self._error_callback: self._error_callback(e) return watch_d.addCallback(self._on_settings_changed) juju-0.7.orig/juju/state/errors.py0000644000000000000000000003034612135220114015402 0ustar 00000000000000from juju.errors import JujuError class StateError(JujuError): """Base class for state-related errors.""" class StateChanged(StateError): """Service state was modified while operation was in progress. This is generally a situation which should be avoided via locks, ignored if it was an automated procedure, or reported back to the user when operating interactively. 
""" def __str__(self): return "State changed while operation was in progress" class StopWatcher(JujuError): """Exception value to denote watching should stop. """ class StateNotFound(StateError): """State not found. Expecting a Zookeeper node with serialised state but it could not be found at a given path. """ def __init__(self, path): self.path = path def __str__(self): return "State for %s not found." % self.path class PrincipalNotFound(StateError): def __init__(self, principal_name): self.principal_name = principal_name def __str__(self): return "Principal %r not found" % self.principal_name class CharmStateNotFound(StateError): """Charm state was not found.""" def __init__(self, charm_id): self.charm_id = charm_id def __str__(self): return "Charm %r was not found" % self.charm_id class ServiceStateNotFound(StateError): """Service state was not found.""" def __init__(self, service_name): self.service_name = service_name def __str__(self): return "Service %r was not found" % self.service_name class ServiceStateNameInUse(StateError): """Service name is already in use.""" def __init__(self, service_name): self.service_name = service_name def __str__(self): return "Service name %r is already in use" % self.service_name class ServiceUnitStateNotFound(StateError): """Service unit state was not found.""" def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r was not found" % self.unit_name class BadServiceStateName(StateError): """Service name was misused when another service name was expected. This may happen, for instance, because unit names embed service names in them (e.g. wordpress/2), so there's a chance to misuse such a unit name in an incorrect location. """ def __init__(self, expected_name, obtained_name): self.expected_name = expected_name self.obtained_name = obtained_name def __str__(self): return "Expected service name %r but got %r" % \ (self.expected_name, self.obtained_name) class MachineStateNotFound(StateError): """Machine state was not found.""" def __init__(self, machine_id): self.machine_id = machine_id def __str__(self): return "Machine %r was not found" % self.machine_id class MachineStateInUse(StateError): """Machine state in use.""" def __init__(self, machine_id): self.machine_id = machine_id def __str__(self): return "Resources are currently assigned to machine %r" % \ self.machine_id class NoUnusedMachines(StateError): """No unused machines are available for assignment.""" def __str__(self): return "No unused machines are available for assignment" class IllegalSubordinateMachineAssignment(StateError): def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return ("Unable to assign subordinate %s to machine." % ( self.unit_name)) class ServiceUnitStateMachineAlreadyAssigned(StateError): def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r is already assigned to a machine" % \ self.unit_name class ServiceUnitStateMachineNotAssigned(StateError): def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r is not assigned to a machine" % \ self.unit_name class ServiceUnitDebugAlreadyEnabled(StateError): """The unit already is in debug mode. """ def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r is already in debug mode." % \ self.unit_name class ServiceUnitUpgradeAlreadyEnabled(StateError): """The unit has already been marked for upgrade. 
""" def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r is already marked for upgrade." % ( self.unit_name) class ServiceUnitResolvedAlreadyEnabled(StateError): """The unit has already been marked resolved. """ def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r is already marked as resolved." % ( self.unit_name) class ServiceUnitRelationResolvedAlreadyEnabled(StateError): """The relation has already been marked resolved. """ def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return "Service unit %r already has relations marked as resolved." % ( self.unit_name) class RelationAlreadyExists(StateError): def __init__(self, endpoints): self.endpoints = endpoints def __str__(self): services = [endpoint.service_name for endpoint in self.endpoints] if len(services) > 1: return "Relation %s already exists between %s" % ( self.endpoints[0].relation_type, " and ".join(services)) else: return "Relation %s already exists for %s" % ( self.endpoints[0].relation_type, services[0]) class RelationStateNotFound(StateError): def __str__(self): return "Relation not found" class RelationIdentNotFound(StateError): def __init__(self, relation_ident): # Play nice with amp oddities, where we get our own string repr if "-" in relation_ident: relation_ident = relation_ident.split("-")[-1].strip() self.relation_ident = relation_ident def __str__(self): return "Relation not found for - %s" % (self.relation_ident) class UnitRelationStateAlreadyAssigned(StateError): """The unit already exists in the relation.""" def __init__(self, relation_id, relation_name, unit_name): self.relation_id = relation_id self.relation_name = relation_name self.unit_name = unit_name def __str__(self): return "The relation %r already contains a unit for %r" % ( self.relation_name, self.unit_name) class UnitRelationStateNotFound(StateError): """The unit does not exist in the relation.""" def __init__(self, relation_id, relation_name, unit_name): self.relation_id = relation_id self.relation_name = relation_name self.unit_name = unit_name def __str__(self): return "The relation %r has no unit state for %r" % ( self.relation_name, self.unit_name) class PrincipalServiceUnitRequired(StateError): """Non-principal service unit was used as a container.""" def __init__(self, service_name, value): self.service_name = service_name self.value = value def __str__(self): return ("Expected principal service unit as container " "for %s instance, got %r" % ( self.service_name, self.value)) class UnitMissingContainer(StateError): """The subordinate unit was added without a container.""" def __init__(self, unit_name): self.unit_name = unit_name def __str__(self): return ("The unit %s expected a principal container " "but none was assigned." 
% ( self.unit_name)) class SubordinateUsedAsContainer(StateError): """The subordinate unit was used to contain another unit.""" def __init__(self, service_name, other_unit): self.service_name = service_name self.other_unit = other_unit def __str__(self): return "Attempted to assign unit of %s to subordinate %s" % ( self.service_name, self.other_unit) class NotSubordinateCharm(StateError): """Cannot add `subordinate: false` to a container.""" def __init__(self, service_name, other_unit): self.service_name = service_name self.other_unit = other_unit def __str__(self): return "%s cannot be used as subordinate to %s" % ( self.service_name, self.other_unit) class UnsupportedSubordinateServiceRemoval(StateError): """ It is currently unsupported to remove subordinate services once a container relation has been established. This will change in a future release; however, removal of subordinate services is complicated by stop hooks not being called properly at this time. """ def __init__(self, subordinate_service_name, principal_service_name): self.subordinate_service_name = subordinate_service_name self.principal_service_name = principal_service_name def __str__(self): return ("Unsupported attempt to destroy subordinate " "service '%s' while principal service '%s' " "is related." % ( self.subordinate_service_name, self.principal_service_name)) class EnvironmentStateNotFound(StateError): """Environment state was not found.""" def __str__(self): return "Environment state was not found" class UnknownRelationRole(StateError): """An unknown relation role was specified.""" def __init__(self, relation_id, relation_role, service_name): self.relation_id = relation_id self.relation_role = relation_role self.service_name = service_name def __str__(self): return "Unknown relation role %r for service %r" % ( self.relation_role, self.service_name) class BadDescriptor(ValueError, JujuError): """Descriptor is not valid. A descriptor must be of the form <service name>[:<relation name>]. Currently the only restriction on these names is that they not embed colons, but we may wish to impose other restrictions.
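For example, "wordpress:db" names the "db" relation of the "wordpress" service, while a bare "wordpress" descriptor leaves the relation name to be inferred.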
""" def __init__(self, descriptor): self.descriptor = descriptor def __str__(self): return "Bad descriptor: %r" % (self.descriptor,) class DuplicateEndpoints(StateError): """Endpoints cannot be duplicate.""" def __init__(self, *endpoints): self.endpoints = endpoints def __str__(self): return "Duplicate endpoints: %r" % (self.endpoints,) class IncompatibleEndpoints(StateError): """Endpoints are incompatible.""" def __init__(self, *endpoints): self.endpoints = endpoints def __str__(self): return "Incompatible endpoints: %r" % (self.endpoints,) class NoMatchingEndpoints(StateError): """Endpoints do not match in a relation.""" def __str__(self): return "No matching endpoints" class AmbiguousRelation(StateError): """Endpoints have more than one possible shared relation.""" def __init__(self, requested_pair, endpoint_pairs): self.requested_pair = requested_pair self.endpoint_pairs = endpoint_pairs def __str__(self): relations = [] for (end1, end2) in self.endpoint_pairs: relations.append(" '%s:%s %s:%s' (%s %s / %s %s)" % ( end1.service_name, end1.relation_name, end2.service_name, end2.relation_name, end1.relation_type, end1.relation_role, end2.relation_type, end2.relation_role)) requested = "%s %s" % self.requested_pair return "Ambiguous relation %r; could refer to:\n%s" % ( requested, "\n".join(sorted(relations))) class RelationBrokenContextError(StateError): """An inappropriate operation was attempted in a relation-broken hook""" class InvalidRelationIdentity(ValueError, JujuError): """Relation identity is not valid. A relation identity must be of the form :. """ def __init__(self, relation_ident): self.relation_ident = relation_ident def __str__(self): return "Not a valid relation id: %r" % (self.relation_ident,) juju-0.7.orig/juju/state/firewall.py0000644000000000000000000003262412135220114015674 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks from juju.errors import MachinesNotFound from juju.state.errors import ( ServiceStateNotFound, ServiceUnitStateNotFound, StateChanged, StopWatcher) from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager log = logging.getLogger("juju.state.expose") NotExposed = object() class FirewallManager(object): """Manages the opening and closing of ports in the firewall. """ def __init__(self, client, is_running, provider): """Initialize a Firewall Manager. :param client: A connected zookeeper client. :param is_running: A function (usually a bound method) that returns whether the associated agent is still running or not. :param provider: A machine provider, used for making the actual changes in the environment to firewall settings. """ self.machine_state_manager = MachineStateManager(client) self.service_state_manager = ServiceStateManager(client) self.is_running = is_running self.provider = provider # Track all currently watched machines, using machine ID. self._watched_machines = set() # Map service name to either NotExposed or set of exposed unit names. # If a service name is present in the dictionary, it means its # respective expose node is being watched. self._watched_services = {} # Machines to retry open_close_ports because of earlier errors self._retry_machines_on_port_error = set() # Registration of observers for corresponding actions self._open_close_ports_observers = set() self._open_close_ports_on_machine_observers = set() @inlineCallbacks def process_machine(self, machine_state): """Ensures watch is setup per machine and performs any necessary retry. 
:param machine_state: The machine state of the machine to be checked. The watch that is established, via :method:`juju.state.machine.MachineState.watch_assigned_units`, handles the scenario where a service or service unit is removed from the topology. Because the service unit is no longer in the topology, the corresponding watch terminates and is unable to `open_close_ports` in response to the change. However, the specific machine watch will be called in this case, and that suffices to determine that its port policy should be checked. In addition, this method can rely on the fact that the provisioning agent periodically rechecks machines so as to support retries of security group operations that failed for that provider. This method is called by the corresponding :method:`juju.agents.provision.ProvisioningAgent.process_machine` in the provisioning agent. """ if machine_state.id in self._retry_machines_on_port_error: self._retry_machines_on_port_error.remove(machine_state.id) try: yield self.open_close_ports_on_machine(machine_state.id) except StopWatcher: # open_close_ports_on_machine can also be called from # a watch, so simply ignore this since it's just used # to shutdown a watch in the case of agent shutdown pass def cb_watch_assigned_units(old_units, new_units): """Watch assigned units for changes possibly require port mgmt. """ log.debug("Assigned units for machine %r: old=%r, new=%r", machine_state.id, old_units, new_units) return self.open_close_ports_on_machine(machine_state.id) if machine_state.id not in self._watched_machines: self._watched_machines.add(machine_state.id) yield machine_state.watch_assigned_units(cb_watch_assigned_units) @inlineCallbacks def watch_service_changes(self, old_services, new_services): """Manage watching service exposed status. This method is called upon every change to the set of services currently deployed. All services are then watched for changes to their exposed flag setting. :param old_services: the set of services before this change. :param new_services: the current set of services. """ removed_services = old_services - new_services for service_name in removed_services: self._watched_services.pop(service_name, None) for service_name in new_services: yield self._setup_new_service_watch(service_name) @inlineCallbacks def _setup_new_service_watch(self, service_name): """Sets up the watching of the exposed flag for a new service. If `service_name` is not watched (as known by `self._watched_services`), adds the watch and a corresponding entry in self._watched_services. (This dict is necessary because there is currently no way to introspect a service for whether it is watched or not.) 
""" if service_name in self._watched_services: return # already watched self._watched_services[service_name] = NotExposed try: service_state = yield self.service_state_manager.get_service_state( service_name) except ServiceStateNotFound: log.debug("Cannot setup watch, since service %r no longer exists", service_name) self._watched_services.pop(service_name, None) return @inlineCallbacks def cb_watch_service_exposed_flag(exposed): if not self.is_running(): raise StopWatcher() if exposed: log.debug("Service %r is exposed", service_name) else: log.debug("Service %r is unexposed", service_name) try: unit_states = yield service_state.get_all_unit_states() except StateChanged: log.debug("Stopping watch on %r, no longer in topology", service_name) raise StopWatcher() for unit_state in unit_states: yield self.open_close_ports(unit_state) if not exposed: log.debug("Service %r is unexposed", service_name) self._watched_services[service_name] = NotExposed else: log.debug("Service %r is exposed", service_name) self._watched_services[service_name] = set() yield self._setup_service_unit_watch(service_state) yield service_state.watch_exposed_flag(cb_watch_service_exposed_flag) log.debug("Started watch of %r on changes to being exposed", service_name) @inlineCallbacks def _setup_service_unit_watch(self, service_state): """Setup watches on service units of newly exposed `service_name`.""" @inlineCallbacks def cb_check_service_units(old_service_units, new_service_units): watched_units = self._watched_services.get( service_state.service_name, NotExposed) if not self.is_running() or watched_units is NotExposed: raise StopWatcher() removed_service_units = old_service_units - new_service_units for unit_name in removed_service_units: watched_units.discard(unit_name) if not self.is_running(): raise StopWatcher() try: unit_state = yield service_state.get_unit_state(unit_name) except (ServiceUnitStateNotFound, StateChanged): log.debug("Not setting up watch on %r, not in topology", unit_name) continue yield self.open_close_ports(unit_state) for unit_name in new_service_units: if unit_name not in watched_units: watched_units.add(unit_name) yield self._setup_watch_ports(service_state, unit_name) yield service_state.watch_service_unit_states(cb_check_service_units) log.debug("Started watch of service units for exposed service %r", service_state.service_name) @inlineCallbacks def _setup_watch_ports(self, service_state, unit_name): """Setup the watching of ports for `unit_name`.""" try: unit_state = yield service_state.get_unit_state(unit_name) except (ServiceUnitStateNotFound, StateChanged): log.debug("Cannot setup watch on %r (no longer exists), ignoring", unit_name) return @inlineCallbacks def cb_watch_ports(value): """Permanently watch ports until service is no longer exposed.""" watched_units = self._watched_services.get( service_state.service_name, NotExposed) if (not self.is_running() or watched_units is NotExposed or unit_name not in watched_units): log.debug("Stopping ports watch for %r", unit_name) raise StopWatcher() yield self.open_close_ports(unit_state) yield unit_state.watch_ports(cb_watch_ports) log.debug("Started watch of %r on changes to open ports", unit_name) def add_open_close_ports_observer(self, observer): """Set `observer` for calls to `open_close_ports`. :param observer: The callback is called with the corresponding :class:`juju.state.service.UnitState`. 
""" self._open_close_ports_observers.add(observer) @inlineCallbacks def open_close_ports(self, unit_state): """Called upon changes that *may* open/close ports for a service unit. """ if not self.is_running(): raise StopWatcher() try: try: machine_id = yield unit_state.get_assigned_machine_id() except StateChanged: log.debug("Stopping watch, machine %r no longer in topology", unit_state.unit_name) raise StopWatcher() if machine_id is not None: yield self.open_close_ports_on_machine(machine_id) finally: # Ensure that the observations runs after the # corresponding action completes. In particular, tests # that use observation depend on this ordering to ensure # that the action has in fact happened before they can # proceed. observers = list(self._open_close_ports_observers) for observer in observers: yield observer(unit_state) def add_open_close_ports_on_machine_observer(self, observer): """Add `observer` for calls to `open_close_ports`. :param observer: A callback receives the machine id for each call. """ self._open_close_ports_on_machine_observers.add(observer) @inlineCallbacks def open_close_ports_on_machine(self, machine_id): """Called upon changes that *may* open/close ports for a machine. :param machine_id: The machine ID of the machine that needs to be checked. This machine supports multiple service units being assigned to a machine; all service units are checked each time this is called to determine the active set of ports to be opened. """ if not self.is_running(): raise StopWatcher() try: machine_state = yield self.machine_state_manager.get_machine_state( machine_id) instance_id = yield machine_state.get_instance_id() machine = yield self.provider.get_machine(instance_id) unit_states = yield machine_state.get_all_service_unit_states() policy_ports = set() for unit_state in unit_states: service_state = yield self.service_state_manager.\ get_service_state(unit_state.service_name) exposed = yield service_state.get_exposed_flag() if exposed: ports = yield unit_state.get_open_ports() for port in ports: policy_ports.add( (port["port"], port["proto"])) current_ports = yield self.provider.get_opened_ports( machine, machine_id) to_open = policy_ports - current_ports to_close = current_ports - policy_ports for port, proto in to_open: yield self.provider.open_port(machine, machine_id, port, proto) for port, proto in to_close: yield self.provider.close_port( machine, machine_id, port, proto) except MachinesNotFound: log.info("No provisioned machine for machine %r", machine_id) except Exception: log.exception("Got exception in opening/closing ports, will retry") self._retry_machines_on_port_error.add(machine_id) finally: # Ensure that the observation runs after the corresponding # action completes. In particular, tests that use # observation depend on this ordering to ensure that this # action has happened before they can proceed. 
observers = list(self._open_close_ports_on_machine_observers) for observer in observers: yield observer(machine_id) juju-0.7.orig/juju/state/hook.py0000644000000000000000000003734312135220114015032 0ustar 00000000000000import logging from collections import namedtuple from twisted.internet.defer import inlineCallbacks, returnValue, succeed, fail from juju.state.base import StateBase from juju.state.errors import ( UnitRelationStateNotFound, StateNotFound, RelationBrokenContextError, RelationStateNotFound, InvalidRelationIdentity, StateChanged) from juju.state.relation import ServiceRelationState, UnitRelationState from juju.state.service import ServiceStateManager, parse_service_name from juju.state.utils import YAMLState log = logging.getLogger("juju.state.hook") class RelationChange( namedtuple( "RelationChange", "relation_ident change_type unit_name")): __slots__ = () @property def relation_name(self): return self.relation_ident.split(":")[0] class HookContext(StateBase): """Context for hooks which don't depend on relation state. """ def __init__(self, client, unit_name, topology=None): super(HookContext, self).__init__(client) self._unit_name = unit_name self._service = None # A cache of retrieved nodes. self._node_cache = {} # A cache of node names to node ids. self._name_cache = {} # Service options self._config_options = None # Cached topology self._topology = topology @inlineCallbacks def _resolve_id(self, unit_id): """Resolve a unit id to a unit name.""" if self._topology is None: self._topology = yield self._read_topology() unit_name = self._topology.get_service_unit_name_from_id(unit_id) returnValue(unit_name) @inlineCallbacks def _resolve_name(self, unit_name): """Resolve a unit name to a unit id with caching.""" if unit_name in self._name_cache: returnValue(self._name_cache[unit_name]) if self._topology is None: self._topology = yield self._read_topology() unit_id = self._topology.get_service_unit_id_from_name(unit_name) self._name_cache[unit_name] = unit_id returnValue(unit_id) @inlineCallbacks def get_local_unit_state(self): """Return ServiceUnitState for the local service unit.""" service_state_manager = ServiceStateManager(self._client) unit_state = yield service_state_manager.get_unit_state( self._unit_name) returnValue(unit_state) @inlineCallbacks def get_local_service(self): """Return ServiceState for the local service.""" if self._service is None: service_state_manager = ServiceStateManager(self._client) self._service = yield( service_state_manager.get_service_state( parse_service_name(self._unit_name))) returnValue(self._service) @inlineCallbacks def get_config(self): """Gather the configuration options. Returns YAMLState for the service options of the current hook and caches them internally.This state object's `write` method must be called to publish changes to Zookeeper. `flush` will do this automatically. 
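Illustrative usage (a sketch, assuming `context` is a HookContext for the local unit):

            config = yield context.get_config()
            config["admin-email"] = "admin@example.com"
            yield context.flush()  # publishes the change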
""" if not self._config_options: service = yield self.get_local_service() self._config_options = yield service.get_config() returnValue(self._config_options) @inlineCallbacks def get_relations(self): """Get the relations associated to the local service.""" relations = [] if self._topology is None: self._topology = yield self._read_topology() service = yield self.get_local_service() internal_service_id = service.internal_id service_unit_state = yield self.get_local_unit_state() for info in self._topology.get_relations_for_service( internal_service_id): service_info = info["service"] relation = ServiceRelationState( self._client, internal_service_id, info["relation_id"], info["scope"], **service_info) # Verify that it has unit relation state defined in ZK try: yield relation.get_unit_state(service_unit_state) relations.append(relation) except UnitRelationStateNotFound: log.debug("Ignoring partially constructed relation: %s", relation.relation_ident) returnValue(relations) @inlineCallbacks def get_relation_idents(self, relation_name): """Return the relation idents for `relation_name`""" relations = yield self.get_relations() returnValue(sorted([r.relation_ident for r in relations if not relation_name or r.relation_name == relation_name])) @inlineCallbacks def get_relation_id_and_scope(self, relation_ident): """Return the (internal) relation id for `relation_ident`.""" parts = relation_ident.split(":") if len(parts) != 2 or not parts[1].isdigit(): raise InvalidRelationIdentity(relation_ident) relation_name, normalized_id = parts relation_id = "%s-%s" % ("relation", normalized_id.zfill(10)) # Double check the internal relation id by looking it up relations = yield self.get_relations() for r in relations: if (r.relation_name == relation_name and \ r.internal_relation_id == relation_id): returnValue((relation_id, r.relation_scope)) else: raise RelationStateNotFound() @inlineCallbacks def get_relation_hook_context(self, relation_ident): """Return a child hook context for `relation_ident`""" service = yield self.get_local_service() unit = yield self.get_local_unit_state() relation_id, relation_scope = yield self.get_relation_id_and_scope( relation_ident) unit_relation = UnitRelationState( self._client, service.internal_id, unit.internal_id, relation_id, relation_scope) # Ensure that the topology is shared so there's a consistent view # between the parent and any children. returnValue(RelationHookContext( self._client, unit_relation, relation_ident, unit_name=self._unit_name, topology=self._topology)) @inlineCallbacks def flush(self): """Flush pending state.""" config = yield self.get_config() yield config.write() class RelationHookContext(HookContext): """A hook execution data cache and write buffer of relation settings. Performs caching of any relation settings examined by the hook. Also buffers all writes till the flush method is invoked. """ def __init__(self, client, unit_relation, relation_ident, members=None, unit_name=None, topology=None): """ @param unit_relation: The unit relation state associated to the hook. @param change: A C{RelationChange} instance. """ # Zookeeper client. super(RelationHookContext, self).__init__( client, unit_name=unit_name, topology=topology) self._unit_relation = unit_relation # A cache of related units in the relation. 
self._members = members # The relation ident of this context self._relation_ident = relation_ident # Whether settings have been modified (set/delete) for this context self._needs_flushing = False # A cache of the relation scope path self._settings_scope_path = None def get_settings_path(self, unit_id): if self._unit_relation.relation_scope == "global": return "/relations/%s/settings/%s" % ( self._unit_relation.internal_relation_id, unit_id) if self._settings_scope_path: return "%s/%s" % (self._settings_scope_path, unit_id) def process(unit_settings_path): self._settings_scope_path = "/".join( unit_settings_path.split("/")[:-1]) return "%s/%s" % (self._settings_scope_path, unit_id) d = self._unit_relation.get_settings_path() return d.addCallback(process) @property def relation_ident(self): """Returns the relation ident corresponding to this context.""" return self._relation_ident @property def relation_name(self): """Returns the relation name corresponding to this context.""" return self._relation_ident.split(":")[0] @inlineCallbacks def get_members(self): """Gets the related unit members of the relation with caching.""" if self._members is not None: returnValue(self._members) try: container = yield self._unit_relation.get_related_unit_container() except StateChanged: # The unit relation has vanished, so there are no members. returnValue([]) unit_ids = yield self._client.get_children(container) if self._unit_relation.internal_unit_id in unit_ids: unit_ids.remove(self._unit_relation.internal_unit_id) members = [] for unit_id in unit_ids: unit_name = yield self._resolve_id(unit_id) members.append(unit_name) self._members = members returnValue(members) @inlineCallbacks def _setup_relation_state(self, unit_name=None): """For a given unit name make sure we have YAMLState.""" if unit_name is None: unit_name = yield self._resolve_id( self._unit_relation.internal_unit_id) if unit_name in self._node_cache: returnValue(self._node_cache[unit_name]) unit_id = yield self._resolve_name(unit_name) path = yield self.get_settings_path(unit_id) # verify the unit relation path exists relation_data = YAMLState(self._client, path) try: yield relation_data.read(required=True) except StateNotFound: raise UnitRelationStateNotFound( self._unit_relation.internal_relation_id, self.relation_name, unit_name) # cache the value self._node_cache[unit_name] = relation_data returnValue(relation_data) @inlineCallbacks def get(self, unit_name): """Get the relation settings for a unit. Returns the settings as a dictionary. """ relation_data = yield self._setup_relation_state(unit_name) returnValue(dict(relation_data)) @inlineCallbacks def get_value(self, unit_name, key): """Get a relation setting value for a unit.""" settings = yield self.get(unit_name) if not settings: returnValue("") returnValue(settings.get(key, "")) @inlineCallbacks def set(self, data): """Set the relation settings for a unit. @param data: A dictionary containing the settings. **Warning**, this method will replace existing values for the unit relation with those from the ``data`` dictionary. 
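Illustrative usage (a sketch):

            yield context.set({"host": "10.0.0.1", "port": "5432"})
            yield context.flush()  # merges and writes the settings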
""" if not isinstance(data, dict): raise TypeError("A dictionary is required.") self._needs_flushing = True state = yield self._setup_relation_state() state.update(data) @inlineCallbacks def set_value(self, key, value): """Set a relation value for a unit.""" self._needs_flushing = True state = yield self._setup_relation_state() state[key] = value @inlineCallbacks def delete_value(self, key): """Delete a relation value for a unit.""" self._needs_flushing = True state = yield self._setup_relation_state() try: del state[key] except KeyError: # deleting a non-existent key is a no-op pass def has_read(self, unit_name): """Has the context been used to access the settings of the unit. """ return unit_name in self._node_cache @inlineCallbacks def flush(self): """Flush all writes to the unit settings. A flush will attempt to intelligently merge values modified on the context to the current state of the underlying settings node. It supports externally modified or deleted values that are unchanged on the context, to be preserved. The change items to the relation YAMLState is returned (this could also be done with config settings, but given their usage model, doesn't seem to be worth logging). """ relation_setting_changes = [] if self._needs_flushing: rel_state = yield self._setup_relation_state() relation_setting_changes = yield rel_state.write() yield super(RelationHookContext, self).flush() returnValue(relation_setting_changes) class DepartedRelationHookContext(HookContext): """A hook execution context suitable for running a relation-broken hook. This context exposes the same interface as RelationHookContext, but: * relation settings cannot be changed * no remote units are reported to exist * remote unit settings are not accessible """ def __init__(self, client, unit_name, unit_id, relation_name, relation_id): super(DepartedRelationHookContext, self).__init__(client, unit_name) self._relation_name = relation_name self._relation_id = relation_id self._settings_path = None # Cache of relation settings for the local unit self._relation_cache = None def get_members(self): return succeed([]) @property def relation_ident(self): """Returns the external relation id corresponding to this context.""" return ServiceRelationState.get_relation_ident( self._relation_name, self._relation_id) @inlineCallbacks def get_settings_path(self): if self._settings_path: returnValue(self._settings_path) unit_id = yield self._resolve_name(self._unit_name) topology = yield self._read_topology() container = topology.get_service_unit_container(unit_id) container_info = "" if container: container_info = "%s/" % container[-1] self._settings_path = "/relations/%s/%ssettings/%s" % ( self._relation_id, container_info, unit_id) returnValue(self._settings_path) @inlineCallbacks def get(self, unit_name): # Only this unit's settings should be accessible. 
if unit_name not in (None, self._unit_name): raise RelationBrokenContextError( "Cannot access other units in broken relation") settings_path = yield self.get_settings_path() if self._relation_cache is None: relation_data = YAMLState(self._client, settings_path) try: yield relation_data.read(required=True) self._relation_cache = dict(relation_data) except StateNotFound: self._relation_cache = {} returnValue(self._relation_cache) @inlineCallbacks def get_value(self, unit_name, key): settings = yield self.get(unit_name) returnValue(settings.get(key, "")) def set(self, data): return fail(RelationBrokenContextError( "Cannot change settings in broken relation")) def set_value(self, key, value): return fail(RelationBrokenContextError( "Cannot change settings in broken relation")) def delete_value(self, key): return fail(RelationBrokenContextError( "Cannot change settings in broken relation")) def has_read(self, unit_name): """Has the context been used to access the settings of the unit. """ if unit_name in (None, self._unit_name): return self._relation_cache is not None return False juju-0.7.orig/juju/state/initialize.py0000644000000000000000000000570112135220114016224 0ustar 00000000000000import logging import uuid from twisted.internet.defer import inlineCallbacks from txzookeeper.client import ZOO_OPEN_ACL_UNSAFE from juju.machine.constraints import Constraints, ConstraintSet from .auth import make_ace from .environment import EnvironmentStateManager, GlobalSettingsStateManager from .machine import MachineStateManager log = logging.getLogger("juju.state.init") class StateHierarchy(object): """ An initializer for the juju zookeeper hierarchy. """ def __init__(self, client, admin_identity, instance_id, constraints_data, provider_type): """ :param client: A zookeeper client. :param admin_identity: A zookeeper auth identity for the admin. :param instance_id: The bootstrap node machine id. :param constraints_data: A Constraints object's data dictionary with which to set up the first machine state, and to store in the environment. :param provider_type: The type of the environment machine provider. """ self.client = client self.admin_identity = admin_identity if instance_id.isdigit(): instance_id = int(instance_id) self.instance_id = instance_id self.constraints_data = constraints_data self.provider_type = provider_type @inlineCallbacks def initialize(self): log.info("Initializing zookeeper hierarchy") acls = [make_ace(self.admin_identity, all=True), # XXX till we have roles throughout ZOO_OPEN_ACL_UNSAFE] yield self.client.create("/charms", acls=acls) yield self.client.create("/services", acls=acls) yield self.client.create("/machines", acls=acls) yield self.client.create("/units", acls=acls) yield self.client.create("/relations", acls=acls) # In this very specific case, it's OK to create a Constraints object # with a non-provider-specific ConstraintSet, because *all* we need it # for is its data dict. In *any* other circumstances, this would be Bad # and Wrong. constraints = Constraints(ConstraintSet(None), self.constraints_data) # Poke constraints data into a machine state to represent this machine. manager = MachineStateManager(self.client) machine_state = yield manager.add_machine_state(constraints) yield machine_state.set_instance_id(self.instance_id) # Set up environment constraints similarly. esm = EnvironmentStateManager(self.client) yield esm.set_constraints(constraints) # Setup default global settings information.
settings = GlobalSettingsStateManager(self.client) yield settings.set_provider_type(self.provider_type) yield settings.set_environment_id(uuid.uuid4().get_hex()) # This must come last, since clients will wait on it. yield self.client.create("/initialized", acls=acls) # DON'T WRITE ANYTHING HERE. See line above. juju-0.7.orig/juju/state/machine.py0000644000000000000000000002556512135220114015501 0ustar 00000000000000import zookeeper from twisted.internet.defer import inlineCallbacks, returnValue from juju.errors import ConstraintError from juju.lib import serializer from juju.state.agent import AgentStateMixin from juju.state.environment import EnvironmentStateManager from juju.state.errors import MachineStateNotFound, MachineStateInUse from juju.state.base import StateBase from juju.state.utils import remove_tree, YAMLStateNodeMixin class MachineStateManager(StateBase): """Manages the state of machines in an environment.""" @inlineCallbacks def add_machine_state(self, constraints): """Create a new machine state. @return: MachineState for the created machine. """ if not constraints.complete: raise ConstraintError( "Unprovisionable machine: incomplete constraints") machine_data = {"constraints": constraints.data} path = yield self._client.create( "/machines/machine-", serializer.dump(machine_data), flags=zookeeper.SEQUENCE) _, internal_id = path.rsplit("/", 1) def add_machine(topology): topology.add_machine(internal_id) yield self._retry_topology_change(add_machine) returnValue(MachineState(self._client, internal_id)) @inlineCallbacks def remove_machine_state(self, machine_id): """Remove machine state identified by `machine_id` if present. Returns True if machine state was actually removed. """ internal_id = "machine-%010d" % machine_id must_delete = [False] def remove_machine(topology): # Removing a non-existing machine again won't fail, since # the end intention is preserved. This makes dealing # with concurrency easier. if topology.has_machine(internal_id): if topology.machine_has_units(internal_id): raise MachineStateInUse(machine_id) topology.remove_machine(internal_id) must_delete[0] = True else: must_delete[0] = False yield self._retry_topology_change(remove_machine) if must_delete[0]: # If the process is interrupted here, this node will stay # around, but it's not a big deal since it's not being # referenced by the topology anymore. yield remove_tree(self._client, "/machines/%s" % (internal_id,)) returnValue(must_delete[0]) @inlineCallbacks def get_machine_state(self, machine_id): """Return deferred machine state with the given id. @return MachineState with the given id. @raise MachineStateNotFound if the id is not found. """ if isinstance(machine_id, str) and machine_id.isdigit(): machine_id = int(machine_id) if isinstance(machine_id, int): internal_id = "machine-%010d" % machine_id else: raise MachineStateNotFound(machine_id) topology = yield self._read_topology() if not topology.has_machine(internal_id): raise MachineStateNotFound(machine_id) machine_state = MachineState(self._client, internal_id) returnValue(machine_state) @inlineCallbacks def get_all_machine_states(self): """Get information on all machines. @return: list of MachineState instances. 
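Illustrative usage (a sketch):

            machines = yield manager.get_all_machine_states()
            ids = [m.id for m in machines]  # public integer ids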
""" topology = yield self._read_topology() machines = [] for machine_id in topology.get_machines(): # topology yields internal ids -> map to public machine_state = MachineState(self._client, machine_id) machines.append(machine_state) returnValue(machines) def watch_machine_states(self, callback): """Observe changes in the known machines through the watch function. @param callback: A function/method which accepts two sets of machine ids: the old machines, and the new ones. The old machines set variable will be None the first time this function is called. Note that there are no guarantees that this function will be called once for *every* change in the topology, which means that multiple modifications may be observed as a single call. This method currently sets a perpetual watch (errors will make it bail out). To stop the watch cleanly raise an juju.state.errors.StopWatch exception. """ def watch_topology(old_topology, new_topology): if old_topology is None: old_machines = None else: old_machines = set(_public_machine_id(x) for x in old_topology.get_machines()) new_machines = set(_public_machine_id(x) for x in new_topology.get_machines()) if old_machines != new_machines: return callback(old_machines, new_machines) return self._watch_topology(watch_topology) class MachineState(StateBase, AgentStateMixin, YAMLStateNodeMixin): def __init__(self, client, internal_id): super(MachineState, self).__init__(client) self._internal_id = internal_id def __hash__(self): return hash(self.id) def __eq__(self, other): if not isinstance(other, MachineState): return False return self.id == other.id def __str__(self): return "" % self._internal_id @property def id(self): """High-level id built using the sequence as an int.""" return _public_machine_id(self._internal_id) @property def internal_id(self): """Machine's internal id, of the form machine-NNNNNNNNNN.""" return self._internal_id @property def _zk_path(self): """Return the path within zookeeper. This attribute should not be used outside of the .state package or for debugging. """ return "/machines/" + self.internal_id def _get_agent_path(self): """Get the zookeeper path for the machine agent.""" return "%s/agent" % self._zk_path def _node_missing(self): raise MachineStateNotFound(self.id) def set_instance_id(self, instance_id): """Set the provider-specific machine id in this machine state.""" return self._set_node_value("provider-machine-id", instance_id) def get_instance_id(self): """Retrieve the provider-specific machine id for this machine.""" return self._get_node_value("provider-machine-id") @inlineCallbacks def get_constraints(self): """Get the machine's hardware constraints""" # Note: machine constraints should not be settable; they're a snapshot # of the constraints of the unit state for which they were created. (It # makes no sense to arbitrarily declare that an m1.small is now a # cc2.8xlarge, anyway.) esm = EnvironmentStateManager(self._client) constraint_set = yield esm.get_constraint_set() data = yield self._get_node_value("constraints", {}) returnValue(constraint_set.load(data)) def watch_assigned_units(self, callback): """Observe changes in service units assigned to this machine. @param callback: A function/method which accepts two sets of unit names: the old assigned units, and the new ones. The old units set variable will be None the first time this function is called, and the new one will be None if the machine itself is ever deleted. 
        Note that there are no guarantees that this function will be
        called once for *every* change in the topology, which means
        that multiple modifications may be observed as a single call.

        This method currently sets a perpetual watch (errors will make
        it bail out). To stop the watch cleanly raise a
        juju.state.errors.StopWatch exception.
        """
        return self._watch_topology(
            _WatchAssignedUnits(self._internal_id, callback))

    @inlineCallbacks
    def get_all_service_unit_states(self):
        # avoid circular imports by deferring the import until now
        from juju.state.service import ServiceUnitState
        topology = yield self._read_topology()
        service_unit_states = []
        for internal_service_unit_id in topology.get_service_units_in_machine(
                self.internal_id):
            internal_service_id = topology.get_service_unit_service(
                internal_service_unit_id)
            service_name = topology.get_service_name(internal_service_id)
            unit_sequence = topology.get_service_unit_sequence(
                internal_service_id, internal_service_unit_id)
            service_unit_state = ServiceUnitState(
                self._client, internal_service_id, service_name,
                unit_sequence, internal_service_unit_id)
            service_unit_states.append(service_unit_state)
        returnValue(service_unit_states)


class _WatchAssignedUnits(object):
    """Helper to implement MachineState.watch_assigned_units(). See above."""

    def __init__(self, internal_id, callback):
        self._internal_id = internal_id
        self._callback = callback
        self._old_units = None

    def __call__(self, old_topology, new_topology):
        if new_topology.has_machine(self._internal_id):
            unit_ids = new_topology.get_service_units_in_machine(
                self._internal_id)
            # Translate the internal ids to nice unit names.
            new_units = self._get_unit_names(new_topology, unit_ids)
        else:
            # Machine state is gone, so no units there of course. This can
            # only be visible in practice if the change happens fast
            # enough for the client to see the unassignment and removal as
            # a single change, since the topology enforces
            # unassignment-before-removal.
            new_units = set()
        if (new_units or self._old_units) and new_units != self._old_units:
            maybe_deferred = self._callback(self._old_units, new_units)
            self._old_units = new_units
            # The callback can return a deferred, to postpone its
            # execution. As a side effect, this watch won't fire again
            # until the returned deferred has fired.
            return maybe_deferred

    def _get_unit_names(self, topology, internal_ids):
        """Translate internal ids to nice unit names."""
        unit_names = set()
        for internal_id in internal_ids:
            service_id = topology.get_service_unit_service(internal_id)
            unit_names.add(
                topology.get_service_unit_name(service_id, internal_id))
        return unit_names


def _public_machine_id(internal_id):
    """Convert an internal_id to an external one.

    That's an implementation detail, and shouldn't be used elsewhere.
    """
    _, sequence = internal_id.rsplit("-", 1)
    return int(sequence)
juju-0.7.orig/juju/state/placement.py0000644000000000000000000000560112135220114016032 0ustar 00000000000000
"""Various unit placement strategies for use in deploy and add-unit.

The API used by the `place_unit` method is:

    machine_state = yield placement_strategy(zk_client,
                                             machine_state_manager,
                                             unit_state)

The placement strategy is passed the machine manager for the
deployment and the unit_state it is attempting to place. According to
its policy it should yield back the machine_state for where it placed
the unit.
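
For illustration, deploying a unit with the default policy looks
roughly like this (a sketch; ``client`` and ``unit_state`` are assumed
to come from the caller):

    machine_state = yield place_unit(
        client, UNASSIGNED_POLICY, unit_state)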
""" from twisted.internet.defer import inlineCallbacks, returnValue from juju.state.errors import NoUnusedMachines from juju.errors import InvalidPlacementPolicy, JujuError from juju.state.machine import MachineStateManager LOCAL_POLICY = "local" UNASSIGNED_POLICY = "unassigned" @inlineCallbacks def _local_placement(client, machine_state_manager, unit_state): """Assigns unit to the machine/0, aka the bootstrap node. Primary use is intended for local development. """ machine = yield machine_state_manager.get_machine_state(0) yield unit_state.assign_to_machine(machine) returnValue(machine) @inlineCallbacks def _unassigned_placement(client, machine_state_manager, unit_state): """Assigns unit on a machine, without any units, which satisfies unit_state's constraints. If no such machine is found, a new machine is created, and the unit assigned to it. """ try: machine = yield unit_state.assign_to_unused_machine() except NoUnusedMachines: constraints = yield unit_state.get_constraints() machine = yield machine_state_manager.add_machine_state(constraints) yield unit_state.assign_to_machine(machine) returnValue(machine) _PLACEMENT_LOOKUP = { LOCAL_POLICY: _local_placement, UNASSIGNED_POLICY: _unassigned_placement, } def pick_policy(preference, provider): policies = provider.get_placement_policies() if not preference: return policies[0] if preference not in policies: raise InvalidPlacementPolicy( preference, provider.provider_type, policies) return preference def place_unit(client, policy_name, unit_state): """Return machine state of unit_states assignment. :param client: A connected zookeeper client. :param policy_name: The name of the unit placement policy. :param unit_state: The unit to be assigned. :param provider_type: The type of the machine environment provider. """ machine_state_manager = MachineStateManager(client) # default policy handling if policy_name is None: placement = _unassigned_placement else: placement = _PLACEMENT_LOOKUP.get(policy_name) if placement is None: # We should never get here, pick policy should always pick valid. raise JujuError("Invalid policy:%r for provider" % policy_name) return placement(client, machine_state_manager, unit_state) juju-0.7.orig/juju/state/relation.py0000644000000000000000000010447312135220114015706 0ustar 00000000000000import logging from os.path import basename, dirname import zookeeper from twisted.internet.defer import ( inlineCallbacks, returnValue, maybeDeferred, Deferred) from txzookeeper.utils import retry_change from juju.lib import serializer from juju.state.base import StateBase from juju.state.errors import ( DuplicateEndpoints, IncompatibleEndpoints, RelationAlreadyExists, RelationStateNotFound, StateChanged, UnitRelationStateNotFound, UnknownRelationRole) class RelationStateManager(StateBase): """Manages the state of relations in an environment.""" @inlineCallbacks def add_relation_state(self, *endpoints): """Add new relation state with the common relation type of `endpoints`. There must be one or two endpoints specified, with the same `relation_type`. Their corresponding services will be assigned atomically. """ # the TODOs in the following comments in this function are for # type checking to be implemented ASAP. However, this will # require some nontrivial test modification, so it's best to # be done in a future branch. This is because these tests use # such invalid role names as ``role``, ``dev``, and ``prod``; # or they add non-peer relations of only one endpoint. 
        if len(endpoints) == 1:
            # TODO verify that the endpoint is a peer endpoint only
            pass
        elif len(endpoints) == 2:
            if endpoints[0] == endpoints[1]:
                raise DuplicateEndpoints(endpoints)
            # TODO verify that the relation roles are client or server
            # only
            if (endpoints[0].relation_role in ("client", "server") or
                endpoints[1].relation_role in ("client", "server")):
                if not endpoints[0].may_relate_to(endpoints[1]):
                    raise IncompatibleEndpoints(endpoints)
        else:
            raise TypeError("Requires 1 or 2 endpoints, %d given" %
                            len(endpoints))

        # First check so as to prevent unnecessary garbage in ZK in
        # case this relation has been previously added.
        topology = yield self._read_topology()
        if topology.has_relation_between_endpoints(endpoints):
            raise RelationAlreadyExists(endpoints)

        scope = "global"
        for endpoint in endpoints:
            if endpoint.relation_scope == "container":
                scope = "container"
                break

        relation_type = endpoints[0].relation_type
        relation_id = yield self._add_relation_state(relation_type, scope)
        services = []
        for endpoint in endpoints:
            service_id = topology.find_service_with_name(
                endpoint.service_name)
            yield self._add_service_relation_state(
                relation_id, service_id, endpoint, scope)
            services.append(ServiceRelationState(self._client,
                                                 service_id,
                                                 relation_id,
                                                 scope,
                                                 endpoint.relation_role,
                                                 endpoint.relation_name))

        def add_relation(topology):
            if topology.has_relation_between_endpoints(endpoints):
                raise RelationAlreadyExists(endpoints)
            topology.add_relation(relation_id, relation_type,
                                  relation_scope=scope)
            for service_relation in services:
                if not topology.has_service(
                        service_relation.internal_service_id):
                    # One of the service endpoints has gone away.
                    raise StateChanged()
                topology.assign_service_to_relation(
                    relation_id,
                    service_relation.internal_service_id,
                    service_relation.relation_name,
                    service_relation.relation_role)
        yield self._retry_topology_change(add_relation)
        returnValue((RelationState(self._client, relation_id), services))

    @inlineCallbacks
    def _add_relation_state(self, relation_type, relation_scope):
        path = yield self._client.create(
            "/relations/relation-", flags=zookeeper.SEQUENCE)
        internal_id = basename(path)
        # Create the settings container, for individual unit settings.
        # Creation is per container for container scoped relations and
        # occurs elsewhere.
        if relation_scope == "global":
            yield self._client.create(path + "/settings")
        returnValue(internal_id)

    @inlineCallbacks
    def _add_service_relation_state(
            self, relation_id, service_id, endpoint, relation_scope):
        """Add a service relation state."""
        # While the full path is
        # /relations/relation_id/optional_container_id/relation_role/...
        # how far down the path we can create at this point depends on
        # what type of relation we have.
        if relation_scope == "global":
            # Add service container in relation.
            path = "/relations/%s/%s" % (relation_id, endpoint.relation_role)
        else:
            path = "/relations/%s" % (relation_id,)
        try:
            yield self._client.create(path)
        except zookeeper.NodeExistsException:
            pass

    @inlineCallbacks
    def remove_relation_state(self, relation_state):
        """Remove the relation of the given id.

        :param relation_state: Either a relation state or a service
            relation state.

        The relation is removed from the topology, however its
        container node is not removed, as associated units will still
        be processing its removal.
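
        For example (a sketch; ``relation_state`` would come from
        add_relation_state or get_relation_state):

            manager = RelationStateManager(client)
            yield manager.remove_relation_state(relation_state)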
""" if isinstance(relation_state, RelationState): relation_id = relation_state.internal_id elif isinstance(relation_state, ServiceRelationState): relation_id = relation_state.internal_relation_id def remove_relation(topology): if not topology.has_relation(relation_id): raise StateChanged() topology.remove_relation(relation_id) yield self._retry_topology_change(remove_relation) @inlineCallbacks def get_relations_for_service(self, service_state): """Get the relations associated to the service. """ relations = [] internal_service_id = service_state.internal_id topology = yield self._read_topology() for info in topology.get_relations_for_service(internal_service_id): service_info = info["service"] relations.append( ServiceRelationState( self._client, internal_service_id, info["relation_id"], info["scope"], **service_info)) returnValue(relations) @inlineCallbacks def get_relation_state(self, *endpoints): """Return `relation_state` connecting the endpoints. Raises `RelationStateNotFound if no such relation exists. """ topology = yield self._read_topology() internal_id = topology.get_relation_between_endpoints(endpoints) if internal_id is None: raise RelationStateNotFound() returnValue(RelationState(self._client, internal_id)) class RelationState(StateBase): """Represents a connection between one or more services. The relation state is representative of the entire connection and its endpoints, while the :class ServiceRelationState: is representative of one of the service endpoints. """ def __init__(self, client, internal_relation_id): super(RelationState, self).__init__(client) self._internal_id = internal_relation_id @property def internal_id(self): return self._internal_id class ServiceRelationState(StateBase): """A state representative of a relation between one or more services.""" def __init__(self, client, service_id, relation_id, relation_scope, role, name): """`client`: ZooKeeper client `service_id`: service internal id `relation_id`: relation internal id `relation_scope`: the active scope of the service relation (global or container) `role`: the role of of the service in the relation `name`: the name of the service in the relation """ super(ServiceRelationState, self).__init__(client) self._service_id = service_id self._relation_id = relation_id self._relation_scope = relation_scope self._role = role self._name = name def __repr__(self): return "" % ( self.relation_name, self.relation_role, self.internal_relation_id, self.relation_scope) @property def internal_relation_id(self): return self._relation_id @staticmethod def get_relation_ident(relation_name, internal_relation_id): """Returns :""" # NOTE: ideally this function would not be directly exposed, # however, UnitRelationLifecycle currently uses internal # relation ids, and needs to reconstruct the workflow, which # requires an external relation id, from a file when a # relation has departed. 
        # Normalize internal ids like 'relation-0000000042' to '42'.
        digits = internal_relation_id.split('-')[1]
        normalized = digits.lstrip("0")
        if not normalized:
            normalized = '0'
        return "%s:%s" % (relation_name, normalized)

    @property
    def relation_ident(self):
        """Returns the relation ident, of the form <name>:<normalized id>."""
        return self.get_relation_ident(
            self.relation_name, self.internal_relation_id)

    @property
    def internal_service_id(self):
        return self._service_id

    @property
    def relation_name(self):
        """The service's name for the relation."""
        return self._name

    @property
    def relation_role(self):
        """The service's role within the relation."""
        return self._role

    @property
    def relation_scope(self):
        """The scope of the relationship."""
        return self._relation_scope

    def _get_scope_path(self, unit_state, container):
        relation_scope = self._relation_scope
        if relation_scope == "container":
            if container is None:
                relation_scope_value = unit_state.internal_id
            else:
                relation_scope_value = container.internal_id
        else:
            relation_scope_value = ""
        scope_path = "/relations/%s/%s" % (
            self._relation_id, relation_scope_value)
        return scope_path.rstrip("/")

    @inlineCallbacks
    def _create_scope_container(self, scope_path, unit_name):
        # This is a container scoped relation and we must add
        # node_data as was done in _add_relation_state.
        # Before we can do that, however, we need to extract the
        # proper relation information from the topology. This
        # includes the relation name and the relation_role.
        from .service import parse_service_name
        topology = yield self._read_topology()
        service_id = topology.find_service_with_name(
            parse_service_name(unit_name))
        interface, relation_data = topology.get_relation_service(
            self._relation_id, service_id)
        try:
            yield self._client.create(scope_path)
        except zookeeper.NodeExistsException:
            pass
        node_data = serializer.dump(relation_data)
        role_path = "%s/%s" % (scope_path, relation_data["role"])
        yield retry_change(
            self._client, role_path, lambda c, s: node_data)
        yield retry_change(
            self._client, "%s/settings" % scope_path, lambda c, s: "")

    @inlineCallbacks
    def add_unit_state(self, unit_state):
        """Add a unit to the service relation.

        This api is intended for use by the unit agent, as it also
        creates an ephemeral presence node, denoting the active
        existence of the unit in the relation.

        Returns a unit relation state.
        """
        container = yield unit_state.get_container()
        scope_path = self._get_scope_path(unit_state, container)
        settings_path = "%s/settings/%s" % (
            scope_path, unit_state.internal_id)

        # Pre-populate the relation node with the node's private address.
        if self._relation_scope == "container":
            # Create the service relation node data in the proper scope.
            yield self._create_scope_container(
                scope_path, unit_state.unit_name)

        if container:
            private_address = yield container.get_private_address()
        else:
            private_address = yield unit_state.get_private_address()

        def update_address(content, stat):
            unit_map = None
            if content:
                unit_map = serializer.load(content)
            if not unit_map:
                unit_map = {}
            unit_map["private-address"] = private_address
            return serializer.dump(unit_map)
        yield retry_change(self._client, settings_path, update_address)

        # Update the unit name -> id mapping on the relation node.
        def update_unit_mapping(content, stat):
            if content:
                unit_map = serializer.load(content)
            else:
                unit_map = {}
            # If it's already present, we're done; just return the
            # existing content, to avoid unstable yaml dict
            # serialization.
            if unit_state.internal_id in unit_map:
                return content
            unit_map[unit_state.internal_id] = unit_state.unit_name
            return serializer.dump(unit_map)
        yield retry_change(self._client,
                           "/relations/%s" % self._relation_id,
                           update_unit_mapping)

        # Create the presence node.
        role_path = scope_path + "/" + self._role
        alive_path = role_path + "/" + unit_state.internal_id
        try:
            # Create the role node.
            yield self._client.create(role_path)
        except zookeeper.NodeExistsException:
            pass
        try:
            yield self._client.create(alive_path, flags=zookeeper.EPHEMERAL)
        except zookeeper.NodeExistsException:
            # Concurrent creation is okay, end state is the same.
            pass

        returnValue(
            UnitRelationState(
                self._client, self._service_id, unit_state.internal_id,
                self._relation_id, self._relation_scope))

    @inlineCallbacks
    def get_unit_state(self, unit_state):
        """Given a service unit state, return its unit relation state."""
        if self._relation_scope == "global":
            alive_path = "/relations/%s/%s/%s" % (
                self._relation_id, self._role, unit_state.internal_id)
        else:
            container = yield unit_state.get_container()
            scope_path = self._get_scope_path(unit_state, container)
            alive_path = "%s/%s/%s" % (
                scope_path, self._role, unit_state.internal_id)
        stat = yield self._client.exists(alive_path)
        if not stat:
            raise UnitRelationStateNotFound(
                self._relation_id, self._name, unit_state.unit_name)
        returnValue(
            UnitRelationState(
                self._client, self._service_id, unit_state.internal_id,
                self._relation_id, self._relation_scope))

    @inlineCallbacks
    def get_service_states(self):
        """Get all the services associated with this relation.

        @return: list of ServiceState instances associated with this
            relation.
        """
        from juju.state.service import ServiceStateManager, ServiceState
        service_manager = ServiceStateManager(self._client)
        services = []
        topology = yield service_manager._read_topology()
        for service_id in topology.get_relation_services(
                self.internal_relation_id):
            service_name = topology.get_service_name(service_id)
            service = ServiceState(self._client, service_id, service_name)
            services.append(service)
        returnValue(services)


class UnitRelationState(StateBase):
    """A service unit's relation state."""

    def __init__(
            self, client, service_id, unit_id, relation_id, relation_scope):
        super(UnitRelationState, self).__init__(client)
        self._service_id = service_id
        self._unit_id = unit_id
        self._relation_id = relation_id
        self._relation_scope = relation_scope
        # cached value
        self._cached_relation_role = None

    @property
    def internal_service_id(self):
        return self._service_id

    @property
    def internal_unit_id(self):
        return self._unit_id

    @property
    def internal_relation_id(self):
        return self._relation_id

    @property
    def relation_scope(self):
        return self._relation_scope

    @inlineCallbacks
    def set_data(self, data):
        """Set the relation local configuration data for a unit.

        This call overwrites any data currently in the node with the
        dictionary supplied as `data`.
        """
        path = yield self.get_settings_path()
        # encode as a YAML string
        data = serializer.dump(data)
        yield retry_change(
            self._client, path, lambda content, stat: data)

    @inlineCallbacks
    def get_data(self):
        """Get the relation local configuration data for a unit.
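
        The data is returned as the raw serialized string stored in
        zookeeper. For example (a sketch; ``unit_relation`` is assumed
        to have been obtained via add_unit_state or get_unit_state):

            content = yield unit_relation.get_data()
            settings = serializer.load(content) if content else {}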
""" path = yield self.get_settings_path() data, stat = yield self._client.get(path) returnValue(data) @inlineCallbacks def get_settings_path(self): parts = ["/relations", self.internal_relation_id, (yield self._get_container_id()), "settings", self.internal_unit_id] returnValue("/".join(filter(None, parts))) @inlineCallbacks def get_relation_role(self): """Return the service's role within the relation. """ if self._cached_relation_role is not None: returnValue(self._cached_relation_role) # Perhaps this information could be passed directly into the # unit relation state constructor, its already got 4 params though. topology = yield self._read_topology() if not (topology.has_relation(self._relation_id) and \ topology.has_service(self._service_id)): raise StateChanged() relation_type, info = topology.get_relation_service(self._relation_id, self._service_id) relation_role = info["role"] self._cached_relation_role = relation_role returnValue(relation_role) @inlineCallbacks def _get_container_id(self): topology = yield self._read_topology() container_id = topology.get_service_unit_container(self._unit_id) if container_id is None and self._relation_scope == "global": return elif container_id is None: returnValue(self._unit_id) # Container id comes back as as svc-id, rel-id, unit-id returnValue(container_id[-1]) @inlineCallbacks def get_related_unit_container(self): """Return the path to the relation role container of the related units. If container_unit_state is provided it will be used to compute the path into a single container's scope for this relationship. """ relation_role = yield self.get_relation_role() endpoint_container = ["/relations", self._relation_id] if self._relation_scope == "container": container_id = yield self._get_container_id() endpoint_container.append(container_id) if relation_role == "server": endpoint_container.append("client") elif relation_role == "client": endpoint_container.append("server") elif relation_role == "peer": endpoint_container.append("peer") else: topology = yield self._read_topology() service_name = topology.get_service_name(self._service_id) raise UnknownRelationRole( self._relation_id, relation_role, service_name) returnValue("/".join(endpoint_container)) @inlineCallbacks def watch_related_units(self, cb_members, cb_settings): """Register a callback to be invoked when related units change. @param: callback a function that gets invoked when related units of the appropriate role are added, removed, or change their settings. The callback should expect three keyword arguments, old_units, new_units, and modified. If there is a membership change, the old_units and new_units parameters will be passed containing the related unit membership (as a list) before and after the change. If a related unit changes, the modified parameter will be passed with a list of changed units. The callback will be invoked in parallel for different changes to different nodes. However it will be invoked serially for changes to a single node. This method returns a watcher instance, that exposes an api for starting and stopping the watch and the callback invocation. See C{RelationUnitWatcherBase} for additional details. """ relation_role = yield self.get_relation_role() endpoint_container = yield self.get_related_unit_container() # Determine the watcher implementation. 
if relation_role in ["server", "client"]: watcher_factory = ClientServerUnitWatcher elif relation_role == "peer": watcher_factory = PeerUnitWatcher watcher = watcher_factory( self._client, self, endpoint_container, cb_members, cb_settings, self._relation_scope) returnValue(watcher) class RelationUnitWatcherBase(StateBase): """Unit relation observation of other units. When a service unit is participating in a relation, it needs to watch other units within the relation to observe their setting and membership changes in order to invoke its own charm hooks. This base class provides for most of the behavior of watching other units within a relation. Various subclasses provide for concrete implementations of this logic based on the relation role and thereby the units within the relation that need watching. The two focus points of watching relations, deal with watching the presence nodes of other units within the relation, and watching their respective settings nodes. Which units in particular are watched are determined by the relation role of the service as per the charm specification. The watcher will concurrently execute the callback in parallel for changes to different nodes. However for changes to a single node the callback will be executed serially. """ def __init__(self, client, watcher_unit, unit_container_path, cb_members, cb_settings, relation_scope=None): super(RelationUnitWatcherBase, self).__init__(client) self._units = [] self._watcher_unit = watcher_unit self._container_path = unit_container_path self._cb_members = cb_members self._cb_settings = cb_settings self._stopped = None self._unit_name_map = None self._relation_scope = relation_scope self._scope_path = dirname(unit_container_path) self._log = logging.getLogger("unit.relation.watch") @property def running(self): return self._stopped is False def _watch_container(self, watch_established_callback=None): """Watch the service role container, for related units. """ child_d, watch_d = self._client.get_children_and_watch( self._container_path) # After we've established a container watch we should # invoke the watch established callback, if any. if watch_established_callback is not None: child_d.addCallback(watch_established_callback) # Setup child watches, and invoke user callbacks for membership. child_d.addCallback(self._cb_container_children) # After processing children, setup the container watch callback. child_d.addCallback(lambda result: watch_d.addCallback( self._cb_container_child_change)) # Handle container nonexistant errors child_d.addErrback(self._eb_no_container, watch_established_callback) return child_d def _eb_no_container(self, failure, watch_established_callback=None): """Handle the case where the service-role container does not exist. We establish an existance watch with a callback to start the unit watching. """ failure.trap(zookeeper.NoNodeException) # Establish an exists watch on the container. exists_d, watch_d = self._client.exists_and_watch(self._container_path) # After the container watch is established, invoke any est. callback if watch_established_callback: exists_d.addCallback(watch_established_callback) # Set a callback, to watch the container when its created. watch_d.addCallback(self._cb_container_created) # Check if the container has been created prior to exists call. exists_d.addCallback(self._cb_container_exists) # return an empty set of children (no container yet) return [] def _cb_container_exists(self, stat): """If the container exists, start watching it. 
        This is used as a callback from the no container error
        handler, as it establishes a watch on the container, to verify
        that the container does not already exist.
        """
        if stat:
            return self._watch_container()

    def _cb_container_created(self, event):
        """Once the service role container is created, establish
        watches for it.
        """
        if event.type_name == "created":
            return self._watch_container()

    @inlineCallbacks
    def _resolve_unit_names(self, *unit_ids):
        """Resolve the names of units given their ids.

        Takes multiple lists of unit ids as parameters, and returns
        corresponding lists of unit names as results.
        """
        if not self._unit_name_map:
            relation_path = dirname(self._container_path)
            if self._relation_scope == "container":
                relation_path = dirname(relation_path)
            content, stat = yield self._client.get(relation_path)
            self._unit_name_map = serializer.load(content)
        results = []
        for unit_id_list in unit_ids:
            names = []
            for unit_id in unit_id_list:
                names.append(self._unit_name_map[unit_id])
            results.append(names)
        returnValue(results)

    def _cb_container_children(self, children):
        """Process children of the service role container.

        Establishes watches on the settings of units in a relation
        role container.

        @param children: A list of unit ids within the relation role
            container.
        """
        # Filter the units we're interested in.
        children = self._filter_units(children)

        # If there is no delta from the last known state, we're done.
        if self._units == children:
            return

        # Determine if we have any new nodes.
        added = set(children) - set(self._units)
        if added:
            # If we do have new units, invalidate the unit name cache.
            self._unit_name_map = None
            # Setup watches on new children so we catch all changes but
            # don't attach handlers till after the container callback
            # is complete. This way we ensure we get membership changes
            # before modification changes.
            settings_watches = self._watch_settings_for_units(added)
        else:
            settings_watches = []

        # Resolve unit ids to names.
        callback_d = self._resolve_unit_names(self._units, children)

        # Update the list of known children.
        self._units = children

        # Invoke callback.
        callback_d.addCallback(
            lambda (old_units, new_units): maybeDeferred(
                self._cb_members, sorted(old_units), sorted(new_units)))

        # Attach initial notifiers and change handlers to new nodes.
        if settings_watches:

            def watch_unit_settings(_, watch_d):
                watch_d.addCallback(self._cb_unit_change)

            def track_all_unit_settings(_):
                for (unit_id, exists_d, watch_d) in settings_watches:
                    exists_d.addCallback(
                        self._notify_unit_settings_version, unit_id)
                    exists_d.addCallback(watch_unit_settings, watch_d)

            callback_d.addCallback(track_all_unit_settings)
        return callback_d

    @inlineCallbacks
    def _notify_unit_settings_version(self, stat, unit_id):
        if not stat:
            return
        ((unit_name,),) = yield self._resolve_unit_names([unit_id])
        node_info = (unit_name, stat["version"])
        yield self._cb_settings((node_info,))

    def _watch_settings_for_units(self, added):
        """Setup watches on new unit relation setting nodes."""
        settings_watches = []
        # Watch new settings nodes for changes.
        for unit_id in added:
            settings_path = "%s/settings/%s" % (
                self._scope_path, unit_id)
            # Since we have a concurrent execution model, unit tests
            # will error out since this callback might still be
            # utilizing the zookeeper api after the client is
            # closed. Verify the connection is open before we invoke
            # zk apis.
            if not self._client.connected:
                return
            # We always notify a modification, even on new nodes, so that
            # the callback can know the versions of the settings nodes and
            # take appropriate action.
            # (Generally the appropriate action will be "do nothing", but
            # it allows the unit agent's HookSchedulers to detect settings
            # changes that occurred while the unit was not running.)
            exists_d, watch_d = self._client.exists_and_watch(settings_path)
            settings_watches.append((unit_id, exists_d, watch_d))
        return settings_watches

    def _cb_container_child_change(self, event):
        """Processes container child events.

        These changes correspond to the addition and removal of unit
        relation presence nodes within the relation.
        """
        self._log.debug("relation membership change")

        # If the watcher has been stopped, don't observe child changes.
        if self._stopped:
            return

        # Re-establish the child watch on presence nodes and fetch
        # children.
        children_d, watch_d = self._client.get_children_and_watch(
            self._container_path)

        # Callback to set watches on children and notify membership changes.
        children_d.addCallback(self._cb_container_children)

        # After processing children, setup the container watch callback.
        children_d.addCallback(lambda result: watch_d.addCallback(
            self._cb_container_child_change))
        return children_d

    def _cb_unit_change(self, event):
        """Process a unit relation settings node change."""
        self._log.debug("relation watcher settings change %s", event)
        unit_id = basename(event.path)

        # Don't process deleted units or if we've been stopped.
        if self._stopped or unit_id not in self._units:
            return

        exists_d, watch_d = self._client.exists_and_watch(event.path)
        exists_d.addCallback(self._notify_unit_settings_version, unit_id)

        # Re-establish the child watch callback after the user callback
        # completes.
        exists_d.addCallback(
            lambda result: watch_d.addCallback(self._cb_unit_change))

    def _filter_units(self, units):
        """A utility method to filter the unit relations based on the
        relation type.
        """
        return units

    def stop(self):
        """Stop watch processing and callback invocation.

        After this method is invoked, no additional watches will be
        established, any existing watches will be ignored, and the
        user callback will not be invoked.

        Start can be called after stop; however, any modifications of
        existing nodes will not be detected, and only membership
        changes from the stopped period will be sent after restarting.
        """
        self._stopped = True
        self._log.debug("relation watcher stop")

    def start(self):
        """Start watching membership and settings changes of relation units.

        Returns a deferred that fires when the related unit container
        has a child watch established, or a watch has been created on
        the container's existence.

        Individual watches on the children will not yet have been
        established, but that property is O(n) in the size of the
        container and requires as many communication roundtrips. So
        the watch started callback is a more limited guarantee that at
        least the container watch (children, or exists if the
        container does not already exist) has been established.
        """
        assert self._stopped or self._stopped is None, "Already started"
        self._stopped = False
        watcher_started = Deferred()

        def on_container_watched(result):
            self._log.debug("relation watcher start")
            watcher_started.callback(True)
            return result

        self._watch_container(on_container_watched)
        return watcher_started


class ClientServerUnitWatcher(RelationUnitWatcherBase):
    pass


class PeerUnitWatcher(RelationUnitWatcherBase):

    def _filter_units(self, units):
        """Units in the peer relation type ignore themselves.
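
        For example (illustrative), a watcher for the unit with
        internal id "unit-0000000001" given ["unit-0000000001",
        "unit-0000000002"] keeps only ["unit-0000000002"].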
""" return [unit_id for unit_id in units \ if unit_id != self._watcher_unit.internal_unit_id] juju-0.7.orig/juju/state/security.py0000644000000000000000000003304212135220114015731 0ustar 00000000000000import base64 import random import string from zookeeper import ( BadArgumentsException, BadVersionException, NoNodeException, NodeExistsException, SEQUENCE) from twisted.internet.defer import inlineCallbacks, returnValue from txzookeeper.client import ZOO_OPEN_ACL_UNSAFE, ZookeeperClient from juju.lib import serializer from juju.state.auth import make_identity, make_ace from juju.state.errors import ( StateNotFound, PrincipalNotFound) from juju.state.utils import YAMLState ZOO_OPEN_AUTH_ACL_UNSAFE = make_ace("auth", "world", all=True) class Principal(object): """An juju/zookeeper principal. """ def __init__(self, name, password): self._name = name self._password = password @property def name(self): """A principal has a login name.""" return self._name def get_token(self): """A principal identity token can be retrieved. An identity token is used to construct ACLs. """ return make_identity("%s:%s" % (self._name, self._password)) def attach(self, connection): """A principal can be attached to a connection.""" return connection.add_auth( "digest", "%s:%s" % (self._name, self._password)) class GroupPrincipal(object): """A group principal is a principal that can have multiple members. Group principals are persistent with their credentials stored in zk. Membership of the group allows a group member to retrieve and utilize these credentials. """ def __init__(self, client, path): self._client = client self._path = path self._name = None self._password = None @inlineCallbacks def initialize(self): """Initialize the group. The group must always be initialized before attribute use. Groups are persistent principals. If a group is being used as a principal, it must first be loladed via initialize before any access to principal attributes or principal methods. """ try: data, stat = yield self._client.get(self._path) except NoNodeException: raise StateNotFound("Group does not exist at %r" % self._path) credentials = serializer.load(data) self._name = credentials["name"] self._password = credentials["password"] def _check(self): if self._name is None: raise RuntimeError("initialize() must be called before usage") @property def name(self): """A principal has a login name.""" self._check() return self._name def get_token(self): """A principal identity token can be retrieved. An identity token is used to construct ACLs. """ self._check() return make_identity("%s:%s" % (self._name, self._password)) @inlineCallbacks def attach(self, connection): self._check() yield connection.add_auth( "digest", "%s:%s" % (self._name, self._password)) @inlineCallbacks def create(self, name, password): """Create the group with the given name and password.""" try: yield self._client.create( self._path, serializer.dump(dict(name=name, password=password))) except NodeExistsException: raise RuntimeError("Group already exists at %r" % self._path) self._name = name self._password = password @inlineCallbacks def add_member(self, principal): """Add a principal as a member of the group. A member of the group can use the group as an additional principal attached to the connection. 
""" self._check() acl, stat = yield self._client.get_acl(self._path) token = principal.get_token() for ace in acl: if ace["id"] == token: return acl.append(make_ace(token, read=True)) yield self._client.set_acl(self._path, acl) @inlineCallbacks def remove_member(self, name): """Remove a principal from a group by principal name""" self._check() acl, stat = yield self._client.get_acl(self._path) found = False for ace in acl: if ace["id"].split(":")[0] == name: acl.remove(ace) found = True break if not found: return yield self._client.set_acl(self._path, acl) class OTPPrincipal(object): """One Time Password (OTP) Principal. Its common for juju to need to pass credentials for newly created principals over unsecure channels to external processes. In order to mitigate the risks of interception of these credentials, a one time password is used that enables the retrieval of the intended principal credential by the external process. """ # Additional OTP node ACL entry, see :method set_otp_additional_ace: _extra_otp_ace = None def __init__(self, client, path="/otp/otp-"): self._client = client self._name = None self._password = None self._otp_name = None self._otp = None self._path = path def _generate_string(self, size=16): return "".join(random.sample(string.letters, size)) def _check(self): if not self._name: raise RuntimeError("OTPPrincipal must be created before use") @property def name(self): """A principal has a login name.""" self._check() return self._name def get_token(self): """A principal identity token can be retrieved. An identity token is used to construct ACLs. """ self._check() return make_identity("%s:%s" % (self._name, self._password)) @inlineCallbacks def attach(self, connection): raise NotImplemented("OTP Principals shouldn't attach") @classmethod @inlineCallbacks def consume(cls, client, otp_data): """Consume an OTP serialization to retrieve the actual credentials. returns a username, password tuple, and destroys the OTP node. """ # Decode the data to get the path and otp credentials path, credentials = base64.b64decode(otp_data).split(":", 1) yield client.add_auth("digest", credentials) # Load the otp principal data data, stat = yield client.get(path) principal_data = serializer.load(data) # Consume the otp node yield client.delete(path) returnValue((principal_data["name"], principal_data["password"])) @inlineCallbacks def create(self, name, password=None, otp_name=None, otp=None): """Create an OTP for a principal. """ if self._name: raise ValueError("OTPPrincipal has already been created.") self._name = name self._password = password or self._generate_string() self._otp_name = otp_name or self._generate_string() self._otp = otp or self._generate_string() acl = [make_ace( make_identity("%s:%s" % (self._otp_name, self._otp)), read=True)] # Optional additional ACL entry for unit test teardown. if self._extra_otp_ace: acl.append(self._extra_otp_ace) self._path = yield self._client.create( self._path, serializer.dump(dict(name=name, password=password)), acls=acl, flags=SEQUENCE) returnValue(self) def serialize(self): """Return a serialization of the OTP path and credentials. This can be sent to an external process such that they can access the OTP identity. """ return base64.b64encode( "%s:%s:%s" % (self._path, self._otp_name, self._otp)) @classmethod def set_additional_otp_ace(cls, ace): """This method sets an additional ACl entry to be added to OTP nodes. This method is meant for testing only, to ease construction of unit test tear downs when OTP nodes are created. 
""" cls._extra_otp_ace = ace class TokenDatabase(object): """A hash map of principal names to their identity tokens. Identity tokens are used to construct node ACLs. """ def __init__(self, client, path="/auth-tokens"): self._state = YAMLState(client, path) @inlineCallbacks def add(self, principal): """Add a principal to the token database. """ yield self._state.read() self._state[principal.name] = principal.get_token() yield self._state.write() @inlineCallbacks def get(self, name): """Return the identity token for a principal name. """ yield self._state.read() try: returnValue(self._state[name]) except KeyError: raise PrincipalNotFound(name) @inlineCallbacks def remove(self, name): """Remove a principal by name from the token database. """ yield self._state.read() if name in self._state: del self._state[name] yield self._state.write() class SecurityPolicy(object): """The security policy generates ACLs for new nodes based on their path. """ def __init__(self, client, token_db, rules=(), owner=None): self._client = client self._rules = list(rules) self._token_db = token_db self._owner = None def set_owner(self, principal): """If an owner is set all nodes ACLs will grant access to the owner. """ assert not self._owner, "Owner already assigned" self._owner = principal def add_rule(self, rule): """Add a security rule to the policy. A rule is a callable object accepting the policy and the path as arguments. The rule should return a list of ACL entries that apply to the node at the given path. Rules may return deferred values. """ self._rules.append(rule) @inlineCallbacks def __call__(self, path): """Given a node path, determine the ACL. """ acl_entries = [] for rule in self._rules: entries = yield rule(self, path) if entries: acl_entries.extend(entries) # XXX/TODO - Remove post security-integration # Allow incremental integration if not acl_entries: acl_entries.append(ZOO_OPEN_AUTH_ACL_UNSAFE) # Give cli admin access by default admin_token = yield self._token_db.get("admin") acl_entries.append(make_ace(admin_token, all=True)) # Give owner access by default if self._owner: acl_entries.append( make_ace(self._owner.get_token(), all=True)) returnValue(acl_entries) class SecurityPolicyConnection(ZookeeperClient): """A ZooKeeper Connection that delegates default ACLs to a security policy. """ _policy = None def set_security_policy(self, policy): self._policy = policy @inlineCallbacks def create(self, path, data="", acls=[ZOO_OPEN_ACL_UNSAFE], flags=0): """Creates a zookeeper node at the given path, with the given data. The secure connection mixin, defers ACL values to a security policy set on the connection if any. """ if self._policy and acls == [ZOO_OPEN_ACL_UNSAFE]: acls = yield self._policy(path) result = yield super(SecurityPolicyConnection, self).create( path, data, acls=acls, flags=flags) returnValue(result) class ACL(object): """A ZooKeeper Node ACL. Allows for permission grants and removals to principals by name. 
""" def __init__(self, client, path): self._client = client self._path = path self._token_db = TokenDatabase(client) @inlineCallbacks def grant(self, principal_name, **perms): """Grant permissions on node to the given principal name.""" token = yield self._token_db.get(principal_name) ace = make_ace(token, **perms) def add(acl): index = self._principal_index(acl, principal_name) if index is not None: acl_ace = acl[index] acl_ace["perms"] = ace["perms"] | acl_ace["perms"] return acl acl.append(ace) return acl yield self._update_acl(add) @inlineCallbacks def prohibit(self, principal_name): """Remove all grant for the given principal name.""" def remove(acl): index = self._principal_index(acl, principal_name) if index is None: # We got to the same end goal. return acl acl.pop(index) return acl yield self._update_acl(remove) @inlineCallbacks def _update_acl(self, change_func): """Update an ACL using the given change function to get a new acl. Goal is to be tolerant of non-conflicting concurrent updates. """ while True: try: acl, stat = yield self._client.get_acl(self._path) except (BadArgumentsException, NoNodeException): raise StateNotFound(self._path) acl = change_func(acl) try: yield self._client.set_acl( self._path, acl, version=stat["aversion"]) break except BadVersionException: pass def _principal_index(self, acl, principal_name): """Determine the index into the ACL of a given principal ACE.""" for index in range(len(acl)): ace = acl[index] if ace["id"].split(":", 1)[0] == principal_name: return index return None juju-0.7.orig/juju/state/service.py0000644000000000000000000016567212135220114015541 0ustar 00000000000000import zookeeper from twisted.internet.defer import ( inlineCallbacks, returnValue, maybeDeferred) from txzookeeper.utils import retry_change from juju.charm.url import CharmURL from juju.lib import serializer from juju.state.agent import AgentStateMixin from juju.state.base import log, StateBase from juju.state.endpoint import RelationEndpoint from juju.state.environment import EnvironmentStateManager from juju.state.errors import ( StateChanged, ServiceStateNotFound, ServiceUnitStateNotFound, ServiceUnitStateMachineAlreadyAssigned, ServiceStateNameInUse, BadDescriptor, BadServiceStateName, NoUnusedMachines, ServiceUnitDebugAlreadyEnabled, ServiceUnitResolvedAlreadyEnabled, ServiceUnitRelationResolvedAlreadyEnabled, StopWatcher, IllegalSubordinateMachineAssignment, PrincipalServiceUnitRequired, NotSubordinateCharm, ServiceUnitUpgradeAlreadyEnabled) from juju.state.charm import CharmStateManager from juju.state.relation import ServiceRelationState, RelationStateManager from juju.state.machine import _public_machine_id, MachineState from juju.state.topology import InternalTopologyError from juju.state.utils import ( remove_tree, dict_merge, YAMLState, YAMLStateNodeMixin) RETRY_HOOKS = 1000 NO_HOOKS = 1001 def _series_constraints(base_constraints, charm_id): series = CharmURL.parse(charm_id).collection.series return base_constraints.with_series(series) class ServiceStateManager(StateBase): """Manages the state of services in an environment.""" @inlineCallbacks def add_service_state(self, service_name, charm_state, constraints): """Create a new service with the given name. @param service_name: Unique name of the service created. @param charm_state: CharmState for the service. @param constraints: Constraints needed to deploy the service. @return: ServiceState for the created service. 
""" charm_id = charm_state.id constraints = _series_constraints(constraints, charm_id) service_details = { "charm": charm_id, "constraints": constraints.data} # charm metadata is always decoded into unicode, ensure any # serialized state references strings to avoid tying to py runtime. node_data = serializer.dump(service_details) path = yield self._client.create("/services/service-", node_data, flags=zookeeper.SEQUENCE) internal_id = path.rsplit("/", 1)[1] # create a child node for configuration options yield self._client.create( "%s/config" % path, serializer.dump({})) def add_service(topology): if topology.find_service_with_name(service_name): raise ServiceStateNameInUse(service_name) topology.add_service(internal_id, service_name) yield self._retry_topology_change(add_service) returnValue(ServiceState(self._client, internal_id, service_name)) @inlineCallbacks def remove_service_state(self, service_state): """Remove the service's state. This will destroy any existing units, and break any existing relations of the service. """ # Remove relations first, to prevent spurious hook execution. relation_manager = RelationStateManager(self._client) relations = yield relation_manager.get_relations_for_service( service_state) for relation_state in relations: yield relation_manager.remove_relation_state(relation_state) # Remove the units unit_names = yield service_state.get_unit_names() for unit_name in unit_names: unit_state = yield service_state.get_unit_state(unit_name) yield service_state.remove_unit_state(unit_state) # Remove the service from the topology. def remove_service(topology): if not topology.has_service(service_state.internal_id): raise StateChanged() topology.remove_service(service_state.internal_id) yield self._retry_topology_change(remove_service) # Remove any remaining state yield remove_tree( self._client, "/services/%s" % service_state.internal_id) @inlineCallbacks def get_service_state(self, service_name): """Return a service state with the given name. @return ServiceState with the given name. @raise ServiceStateNotFound if the unit id is not found. """ topology = yield self._read_topology() internal_id = topology.find_service_with_name(service_name) if internal_id is None: raise ServiceStateNotFound(service_name) returnValue(ServiceState(self._client, internal_id, service_name)) @inlineCallbacks def get_unit_state(self, unit_name): """Returns the unit state with the given name. A convience api to retrieve a unit in one api call. May raise exceptions regarding the nonexistance of either the service or unit. """ if not "/" in unit_name: raise ServiceUnitStateNotFound(unit_name) service_name, _ = unit_name.split("/") service_state = yield self.get_service_state(service_name) unit_state = yield service_state.get_unit_state(unit_name) returnValue(unit_state) @inlineCallbacks def get_all_service_states(self): """Get all the deployed services in the environment. @return: list of ServiceState instances. """ topology = yield self._read_topology() services = [] for service_id in topology.get_services(): service_name = topology.get_service_name(service_id) service = ServiceState(self._client, service_id, service_name) services.append(service) returnValue(services) @inlineCallbacks def get_relation_endpoints(self, descriptor): """Get all relation endpoints for `descriptor`. A `descriptor` is of the form ``[:]``. Returns the following: - Returns a list of matching endpoints, drawn from the peers, provides, and requires interfaces. 
            An empty list is returned if there are no endpoints
            matching the `descriptor`. This list is sorted such that
            implicit relations appear last.

          - Raises a `BadDescriptor` exception if `descriptor` cannot
            be parsed.
        """
        tokens = descriptor.split(":")
        if len(tokens) == 1 and bool(tokens[0]):
            query_service_name, query_relation_name = descriptor, None
        elif len(tokens) == 2 and bool(tokens[0]) and bool(tokens[1]):
            query_service_name, query_relation_name = tokens
        else:
            raise BadDescriptor(descriptor)
        service_state = yield self.get_service_state(
            query_service_name)
        charm_state = yield service_state.get_charm_state()
        charm_metadata = yield charm_state.get_metadata()
        endpoints = set()
        relation_role_map = {
            "peer": "peers", "client": "requires", "server": "provides"}
        for relation_role in ("peer", "client", "server"):
            relations = getattr(
                charm_metadata, relation_role_map[relation_role])
            if relations:
                for relation_name, spec in relations.iteritems():
                    if (query_relation_name is None or
                            query_relation_name == relation_name):
                        endpoints.add(RelationEndpoint(
                            service_name=query_service_name,
                            relation_type=spec["interface"],
                            relation_name=relation_name,
                            relation_role=relation_role,
                            relation_scope=spec["scope"]))
        # Add in implicit relations
        endpoints.add(
            RelationEndpoint(
                query_service_name, "juju-info", "juju-info", "server"))

        # When offering matches implicit relations should be
        # considered last. This cmpfunc pushes implicit methods to
        # the end of the list of possible endpoints.
        def low_priority_implicit_cmp(rel1, rel2):
            if rel1.relation_name.startswith("juju-"):
                return 1
            if rel2.relation_name.startswith("juju-"):
                return -1
            return cmp(rel1.relation_name, rel2.relation_name)

        endpoints = sorted(endpoints, low_priority_implicit_cmp)
        returnValue(endpoints)

    @inlineCallbacks
    def join_descriptors(self, descriptor1, descriptor2):
        """Return a list of pairs of RelationEndpoints joining descriptors."""
        result = []
        relations_1 = yield self.get_relation_endpoints(descriptor1)
        relations_2 = yield self.get_relation_endpoints(descriptor2)

        for relation1 in relations_1:
            for relation2 in relations_2:
                if relation1.may_relate_to(relation2):
                    result.append((relation1, relation2))
        returnValue(result)

    def watch_service_states(self, callback):
        """Observe changes in the known services via `callback`.

        `callback(old_service_names, new_service_names)`: function called
        upon a change to the service topology. `old_service_names` and
        `new_service_names` are both sets, possibly empty.

        Note that there are no guarantees that this function will be
        called once for *every* change in the topology, which means
        that multiple modifications may be observed as a single call.

        This method currently sets a pretty much perpetual watch (errors
        will make it bail out). In the future, the return value of the
        watch function may be used to define whether to continue
        watching or to stop.
        """
        def watch_topology(old_topology, new_topology):
            def get_service_names(topology):
                service_names = set()
                if topology is None:
                    return service_names
                for service_id in topology.get_services():
                    service_names.add(topology.get_service_name(service_id))
                return service_names

            old_services = get_service_names(old_topology)
            new_services = get_service_names(new_topology)
            if old_services != new_services:
                return callback(old_services, new_services)
        return self._watch_topology(watch_topology)


class ServiceState(StateBase, YAMLStateNodeMixin):
    """State of a service registered in an environment.
    Each service is composed of units, and each unit represents an
    actual deployment of software to satisfy the needs defined in
    this service state.
    """

    def __init__(self, client, internal_id, service_name):
        super(ServiceState, self).__init__(client)
        self._internal_id = internal_id
        self._service_name = service_name

    def __hash__(self):
        return hash(self.internal_id)

    def __eq__(self, other):
        if not isinstance(other, ServiceState):
            return False
        return self.internal_id == other.internal_id

    def __repr__(self):
        return "<%s %s>" % (
            self.__class__.__name__, self.internal_id)

    @property
    def service_name(self):
        """Name of the service represented by this state.
        """
        return self._service_name

    @property
    def internal_id(self):
        return self._internal_id

    @property
    def _zk_path(self):
        """Return the path within zookeeper.

        This attribute should not be used outside of the .state
        package or for debugging.
        """
        return "/services/" + self._internal_id

    @property
    def _config_path(self):
        return "%s/config" % self._zk_path

    @property
    def _exposed_path(self):
        """Path of the ZK node whose existence indicates the service
        is exposed."""
        return "%s/exposed" % self._zk_path

    def _node_missing(self):
        raise ServiceStateNotFound(self._service_name)

    def get_charm_id(self):
        """Return the charm id this service is supposed to use.
        """
        return self._get_node_value("charm")

    @inlineCallbacks
    def set_charm_id(self, charm_id):
        """Set the charm id this service is supposed to use.
        """
        # Verify it's a valid charm id.
        CharmURL.parse(charm_id).assert_revision()
        yield self._set_node_value("charm", charm_id)

    @inlineCallbacks
    def get_charm_state(self):
        """Return the CharmState for this service."""
        charm_id = yield self.get_charm_id()
        formula_state_manager = CharmStateManager(self._client)
        charm = yield formula_state_manager.get_charm_state(charm_id)
        returnValue(charm)

    @inlineCallbacks
    def is_subordinate(self):
        charm_state = yield self.get_charm_state()
        returnValue(charm_state.is_subordinate())

    @inlineCallbacks
    def set_constraints(self, constraints):
        """Set hardware requirements for any new machines running this
        service.

        :param constraints: a Constraints instance describing the
            service-level machine constraints. Constraints are settable
            individually by level, and dynamically combined with the
            current environment constraints when it's time to add a
            unit.
        """
        charm_id = yield self.get_charm_id()
        constraints = _series_constraints(constraints, charm_id)
        yield self._set_node_value("constraints", constraints.data)

    @inlineCallbacks
    def get_constraints(self):
        """Get combined environment- and service-level machine constraints.

        :return: a Constraints instance

        This combined Constraints is used both to set the unit
        constraints at add-unit time (so the unit gets a snapshot of
        the constraints in play at creation time) and to display to
        the user in get-constraints (so the user gets to see what
        constraints will actually be used to deploy a unit, and what
        constraints are currently in play and hence need to be set
        when changing service constraints).
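        Illustrative sketch (not part of the original source; assumes a
        connected zookeeper `client` and a deployed "wordpress" service):

            manager = ServiceStateManager(client)
            service = yield manager.get_service_state("wordpress")
            constraints = yield service.get_constraints()
            # service-level settings layered over environment defaults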
""" esm = EnvironmentStateManager(self._client) constraints = yield esm.get_constraints() constraint_set = yield esm.get_constraint_set() service_data = yield self._get_node_value("constraints", {}) constraints.update(constraint_set.load(service_data)) returnValue(constraints) @inlineCallbacks def _validate_principal_container(self, principal_service_unit): if not isinstance(principal_service_unit, ServiceUnitState): raise PrincipalServiceUnitRequired( self.service_name, principal_service_unit) principal_service = yield principal_service_unit.get_service_state() if (yield principal_service.is_subordinate()): raise PrincipalServiceUnitRequired( self.service_name, principal_service_unit.unit_name) if not (yield self.is_subordinate()): raise NotSubordinateCharm(self.service_name, principal_service.unit_name) @inlineCallbacks def add_unit_state(self, container=None): """Add a new service unit to this state. When provided container should be the service unit state of the container for the unit state being added. @return: ServiceUnitState for the created unit. The new unit's constraints are a frozen copy of the service's actualised constraints at the time of unit addition (so, they don't change in response to changes in the service or environment constraints; if they did it would lead to extreme confusion (ie, after a notable change to env/service, users would potentially see units apparently deployed on wildly inappropriate machines)). """ constraints = yield self.get_constraints() charm_id = yield self.get_charm_id() unit_data = {"charm": charm_id, "constraints": constraints.data} path = yield self._client.create( "/units/unit-", serializer.dump(unit_data), flags=zookeeper.SEQUENCE) internal_unit_id = path.rsplit("/", 1)[1] sequence = [None] if container is not None: yield self._validate_principal_container(container) container_id = container.internal_id else: container_id = None def add_unit(topology): if not topology.has_service(self._internal_id): raise StateChanged() sequence[0] = topology.add_service_unit(self._internal_id, internal_unit_id, container_id) yield self._retry_topology_change(add_unit) returnValue(ServiceUnitState(self._client, self._internal_id, self._service_name, sequence[0], internal_unit_id)) @inlineCallbacks def get_unit_names(self): topology = yield self._read_topology() if not topology.has_service(self._internal_id): raise StateChanged() unit_ids = topology.get_service_units(self._internal_id) unit_names = [] for unit_id in unit_ids: unit_names.append( topology.get_service_unit_name(self._internal_id, unit_id)) returnValue(unit_names) @inlineCallbacks def remove_unit_state(self, service_unit): """Destroy a unit state. """ # Unassign from machine if currently assigned. yield service_unit.unassign_from_machine() # Remove from topology def remove_unit(topology): if not topology.has_service(self._internal_id) or \ not topology.has_service_unit( self._internal_id, service_unit.internal_id): raise StateChanged() topology.remove_service_unit( self._internal_id, service_unit.internal_id) yield self._retry_topology_change(remove_unit) # Remove any local settings. yield remove_tree( self._client, "/units/%s" % service_unit.internal_id) @inlineCallbacks def get_all_unit_states(self): """Get all the service unit states associated with this service. @return: list of ServiceUnitState instances. 
""" topology = yield self._read_topology() if not topology.has_service(self._internal_id): raise StateChanged() units = [] for unit_id in topology.get_service_units(self._internal_id): unit_name = topology.get_service_unit_name(self._internal_id, unit_id) service_name, sequence = _parse_unit_name(unit_name) internal_unit_id = \ topology.find_service_unit_with_sequence(self._internal_id, sequence) unit = ServiceUnitState(self._client, self._internal_id, self._service_name, sequence, internal_unit_id) units.append(unit) returnValue(units) @inlineCallbacks def get_unit_state(self, unit_name): """Return service unit state with the given unit name. @return: ServiceUnitState with the given name. @raise ServiceUnitStateNotFound if the unit name is not found. """ assert "/" in unit_name, "Bad unit name: %s" % (unit_name,) service_name, sequence = _parse_unit_name(unit_name) if service_name != self._service_name: raise BadServiceStateName(self._service_name, service_name) topology = yield self._read_topology() if not topology.has_service(self._internal_id): raise StateChanged() internal_unit_id = \ topology.find_service_unit_with_sequence(self._internal_id, sequence) if internal_unit_id is None: raise ServiceUnitStateNotFound(unit_name) returnValue(ServiceUnitState(self._client, self._internal_id, self._service_name, sequence, internal_unit_id)) def watch_relation_states(self, callback): """Observe changes in the assigned relations for the service. @param callback: A function/method which accepts two sequences of C{ServiceRelationState} instances, representing the old relations and new relations. The old relations variable will be 'None' the first time the function is called. Note there are no guarantees that this function will be called once for *every* change in the topology, which means that multiple modifications may be observed as a single call. This method currently sets a pretty much perpetual watch (errors will make it bail out). In order to cleanly stop the watcher, a StopWatch exception can be raised by the callback. """ def watch_topology(old_topology, new_topology): if old_topology is None: old_relations = None else: old_relations = old_topology.get_relations_for_service( self._internal_id) new_relations = new_topology.get_relations_for_service( self._internal_id) if old_relations != new_relations: if old_relations: old_relations = _to_service_relation_state( self._client, self._internal_id, old_relations) new_relations = _to_service_relation_state( self._client, self._internal_id, new_relations) return callback(old_relations, new_relations) return self._watch_topology(watch_topology) @inlineCallbacks def watch_config_state(self, callback): """Observe changes to config state for a service. @param callback: A function/method which accepts the YAMLState node of the changed service. No effort is made to present deltas to the change function. Note there are no guarantees that this function will be called once for *every* change in the topology, which means that multiple modifications may be observed as a single call. This method currently sets a pretty much perpetual watch (errors will make it bail out). In order to cleanly stop the watcher, a StopWatch exception can be raised by the callback. 
""" @inlineCallbacks def watcher(change_event): if self._client.connected: exists_d, watch_d = self._client.exists_and_watch( self._config_path) yield callback(change_event) watch_d.addCallback(watcher) exists_d, watch_d = self._client.exists_and_watch(self._config_path) exists = yield exists_d # Setup the watch deferred callback after the user defined callback # has returned successfully from the existence invocation. callback_d = maybeDeferred(callback, bool(exists)) callback_d.addCallback( lambda x: watch_d.addCallback(watcher) and x) # Wait on the first callback, reflecting present state, not a zk watch yield callback_d def watch_service_unit_states(self, callback): """Observe changes in service unit membership for this service. `callback(old_service_unit_names, new_service_unit_names)`: function called upon a change to the service topology. Both parameters to the callback are sets, possibly empty. Note that there are no guarantees that this function will be called once for *every* change in the topology, which means that multiple modifications may be observed as a single call. This method currently sets a pretty much perpetual watch (errors will make it bail out). In the future, the return value of the watch function may be used to define whether to continue watching or to stop. """ def watch_topology(old_topology, new_topology): def get_service_unit_names(topology): if topology is None: return set() if not topology.has_service(self._internal_id): # The watch is now running, but by the time we # read the topology node from ZK, the topology has # changed with the service being removed. Since # there are no service units for this service, # simply bail out. return set() service_unit_names = set() for unit_id in topology.get_service_units(self._internal_id): service_unit_names.add(topology.get_service_unit_name( self._internal_id, unit_id)) return service_unit_names old_service_units = get_service_unit_names(old_topology) new_service_units = get_service_unit_names(new_topology) if old_service_units != new_service_units: return callback(old_service_units, new_service_units) return self._watch_topology(watch_topology) @inlineCallbacks def set_exposed_flag(self): """Inform the service that it has been exposed Typically set by juju expose """ try: yield self._client.create(self._exposed_path) except zookeeper.NodeExistsException: # We get to the same end state pass @inlineCallbacks def get_exposed_flag(self): """Returns a boolean denoting if the exposed flag is set. """ stat = yield self._client.exists(self._exposed_path) returnValue(bool(stat)) @inlineCallbacks def clear_exposed_flag(self): """Clear the exposed flag. Typically cleared by juju unexpose """ try: yield self._client.delete(self._exposed_path) except zookeeper.NoNodeException: # We get to the same end state. pass def watch_exposed_flag(self, callback): """Set `callback` called on changes to this service's exposed flag. `callback` - The callback receives a single parameter, the current boolean value of the exposed flag (True if present in ZK, False otherwise). Only changes will be observed, and they respect ZK watch semantics in terms of ordering and reliability. Consequently, client of this watch do not need to retrieve the exposed flag setting in this callback (no surprises). It is the responsibility of `callback` to ensure that it shuts down the watch with `StopWatcher` before application teardown. For example, this can be done by having the callback depend on the application state in some way. 
        The watch is permanent until `callback` raises a `StopWatcher`
        exception.
        """
        @inlineCallbacks
        def manage_callback(*ignored):
            # Need to guard on the client being connected in the case:
            # 1) a watch is waiting to run (in the reactor);
            # 2) and the connection is closed.
            #
            # It remains the responsibility of `callback` to raise
            # `StopWatcher`, per above.
            if not self._client.connected:
                returnValue(None)
            exists_d, watch_d = self._client.exists_and_watch(
                self._exposed_path)
            stat = yield exists_d
            exists = bool(stat)
            try:
                yield callback(exists)
            except StopWatcher:
                returnValue(None)
            watch_d.addCallback(manage_callback)

        return manage_callback()

    @inlineCallbacks
    def get_config(self):
        """Return the service's current options as a YAMLState object.

        When returned, this object will already have had its `read`
        method invoked and is ready for use. The state object can then
        have its `write` method invoked to publish the state to
        Zookeeper.
        """
        charm_state = yield self.get_charm_state()
        config = yield charm_state.get_config()
        defaults = config.get_defaults()
        config_node = ConfigState(
            self._client,
            "/services/%s/config" % self._internal_id,
            defaults)
        yield config_node.read()
        returnValue(config_node)


class ConfigState(YAMLState):

    def __init__(self, client, path, defaults):
        super(ConfigState, self).__init__(client, path)
        self._defaults = defaults

    def __getitem__(self, key):
        self._check()
        if key in self._cache:
            return self._cache[key]
        return self._defaults[key]

    def keys(self):
        return list(set(self._cache).union(set(self._defaults)))

    def __delitem__(self, key):
        raise NotImplementedError("Not defined")


def _to_service_relation_state(client, service_id, assigned_relations):
    """Helper method to construct a list of service relation states.

    @param client: Zookeeper client
    @param service_id: Id of the service
    @param assigned_relations: sequence of relation_id, relation_type
        and the service relation specific information (role and name).
    """
    service_relations = []
    for relation in assigned_relations:
        relation_info = relation["service"]
        service_relations.append(
            ServiceRelationState(client,
                                 service_id,
                                 relation["relation_id"],
                                 relation["scope"],
                                 **relation_info))
    return service_relations


class ServiceUnitState(StateBase, AgentStateMixin, YAMLStateNodeMixin):
    """State of a service unit registered in an environment.

    Each service is composed of units, and each unit represents an
    actual deployment of software to satisfy the needs defined in
    this service state.
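    Illustrative sketch (not part of the original source; assumes a
    connected zookeeper `client`):

        manager = ServiceStateManager(client)
        unit = yield manager.get_unit_state("wordpress/0")
        machine_id = yield unit.get_assigned_machine_id()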
""" def __init__(self, client, internal_service_id, service_name, unit_sequence, internal_id): self._client = client self._internal_service_id = internal_service_id self._service_name = service_name self._unit_sequence = unit_sequence self._internal_id = internal_id def __hash__(self): return hash(self.unit_name) def __eq__(self, other): if not isinstance(other, ServiceUnitState): return False return self.unit_name == other.unit_name def __repr__(self): return "<%s %s>" % (self.__class__.__name__, self.unit_name) @property def service_name(self): """Service name for the service from this unit.""" return self._service_name @property def internal_id(self): """Unit's internal id, of the form unit-NNNNNNNNNN.""" return self._internal_id @property def unit_name(self): """Get a nice user-oriented identifier for this unit.""" return "%s/%d" % (self._service_name, self._unit_sequence) @property def _zk_path(self): return "/units/%s" % self._internal_id @property def _ports_path(self): """The path for the open ports for this service unit.""" return "%s/ports" % self._zk_path def _get_agent_path(self): """Get the zookeeper path for the service unit agent.""" return "%s/agent" % self._zk_path def _check_valid_in(self, topology): ok = topology.has_service(self._internal_service_id) ok = ok and topology.has_service_unit( self._internal_service_id, self._internal_id) if not ok: raise StateChanged() def _node_missing(self): raise ServiceUnitStateNotFound(self._service_name) return "/units/%s/agent" % self._internal_id @inlineCallbacks def is_subordinate(self): service_state = yield self.get_service_state() returnValue((yield service_state.is_subordinate())) @inlineCallbacks def get_container(self): """Return the ServiceUnitState of the container this unit executes in. By default units are their own container, however subordinate services will return the unit state of their principal service. """ topology = yield self._read_topology() try: container_info = topology.get_service_unit_container( self.internal_id) except InternalTopologyError: returnValue(None) if container_info is None: returnValue(None) service_id, service_name, sequence, container_id = container_info returnValue(ServiceUnitState(self._client, service_id, service_name, sequence, container_id)) def get_service_state(self): """Return the service state for this unit.""" return ServiceState(self._client, self._internal_service_id, self._service_name) def get_public_address(self): """Get the public address of the unit. If the unit is unassigned, or its unit agent hasn't started this value maybe None. """ return self._get_node_value("public-address") def set_public_address(self, public_address): """A unit's public address can be utilized to access the service. The service must have been exposed for the service to be reachable outside of the environment. """ return self._set_node_value("public-address", public_address) def get_private_address(self): """Get the private address of the unit. If the unit is unassigned, or its unit agent hasn't started this value maybe None. """ return self._get_node_value("private-address") def set_private_address(self, private_address): """A unit's address private to the environment. Other service will see and utilize this address for relations. 
""" return self._set_node_value("private-address", private_address) def get_charm_id(self): """The id of the charm currently deployed on the unit""" return self._get_node_value("charm") @inlineCallbacks def set_charm_id(self, charm_id): """Set the charm identifier that the unit is currently running.""" CharmURL.parse(charm_id).assert_revision() yield self._set_node_value("charm", charm_id) @inlineCallbacks def get_constraints(self): """The complete constraints for this unit (as set at creation time). Unit constraints are not settable; they're a snapshot of environment/ service state at unit addition time, so that get-constraints returns sane results when asked about a unit when considered relative to the constraints of the machine it's deployed on (rather than showing combined service/env constraints for the parent, which could be wildly different, and would certainly lead to confusion and bug reports). """ esm = EnvironmentStateManager(self._client) constraint_set = yield esm.get_constraint_set() data = yield self._get_node_value("constraints", {}) returnValue(constraint_set.load(data)) @inlineCallbacks def get_assigned_machine_id(self): """Get the assigned machine id or None if the unit is not assigned. """ topology = yield self._read_topology() self._check_valid_in(topology) machine_id = topology.get_service_unit_machine( self._internal_service_id, self._internal_id) if machine_id is None: container = yield self.get_container() if container: machine_id = yield container.get_assigned_machine_id() returnValue(machine_id) if machine_id is not None: machine_id = _public_machine_id(machine_id) returnValue(machine_id) @inlineCallbacks def assign_to_machine(self, machine_state): """Assign this service unit to the given machine. """ constraints = yield self.get_constraints() machine_constraints = yield machine_state.get_constraints() if not machine_constraints.can_satisfy(constraints): log.warning( "Unit %s assigned to machine %s with incompatible constraints", self.unit_name, machine_state.id) def assign_unit(topology): self._check_valid_in(topology) machine_id = topology.get_service_unit_machine( self._internal_service_id, self._internal_id) container = topology.get_service_unit_container(self.internal_id) if container is not None: raise IllegalSubordinateMachineAssignment(self.unit_name) if machine_id is None: topology.assign_service_unit_to_machine( self._internal_service_id, self._internal_id, machine_state.internal_id) elif machine_id == machine_state.internal_id: # It's a NOOP. To avoid dealing with concurrency issues # here, we can let it go through. pass else: raise ServiceUnitStateMachineAlreadyAssigned(self.unit_name) yield self._retry_topology_change(assign_unit) @inlineCallbacks def assign_to_unused_machine(self): """Assign this service unit to an unused machine (if available). It will not attempt to reuse machine 0, since this is currently special. Raises `NoUnusedMachines` if there are no available machines for reuse. Usually this then should result in using code to subsequently attempt to create a new machine in the environment, then assign directly to it with `assign_to_machine`. """ # used to provide a writable result for the callback scope_escaper = [None] unit_constraints = yield self.get_constraints() @inlineCallbacks def assign_unused_unit(topology): self._check_valid_in(topology) # XXX We cannot reuse the "root" machine (used by the # provisioning agent), but the topology metadata does not # properly reflect its allocation. 
            # In the future, once it is managed like any other service,
            # this special case can be removed.
            root_machine_id = "machine-%010d" % 0
            for internal_id in topology.get_machines():
                if internal_id == root_machine_id:
                    continue
                if topology.machine_has_units(internal_id):
                    continue
                machine_state = MachineState(self._client, internal_id)
                machine_constraints = yield \
                    machine_state.get_constraints()
                if machine_constraints.can_satisfy(unit_constraints):
                    break
            else:
                raise NoUnusedMachines()

            topology.assign_service_unit_to_machine(
                self._internal_service_id, self._internal_id,
                machine_state.internal_id)
            scope_escaper[0] = machine_state

        yield self._retry_topology_change(assign_unused_unit)
        returnValue(scope_escaper[0])

    def unassign_from_machine(self):
        """Unassign this service unit from whatever machine it's
        assigned to.
        """

        def unassign_unit(topology):
            self._check_valid_in(topology)
            # If for whatever reason it's not already assigned to a
            # machine, ignore it and move forward so that we don't
            # have to deal with conflicts.
            machine_id = topology.get_service_unit_machine(
                self._internal_service_id, self._internal_id)
            if machine_id is not None:
                topology.unassign_service_unit_from_machine(
                    self._internal_service_id, self._internal_id)

        return self._retry_topology_change(unassign_unit)

    @property
    def _hook_debug_path(self):
        return "%s/debug" % self._zk_path

    @inlineCallbacks
    def enable_hook_debug(self, hook_names):
        """Enable hook debugging.

        :param hook_names: The names of the hooks to debug. The special
            value ``*`` will enable debugging on all hooks.

        Returns True if debug was successfully enabled.

        Enabling hook debugging triggers the creation of an ephemeral
        node used to notify unit agents of the debug behavior they
        should enable. Upon close of the zookeeper client used to
        enable this debug, this value will be cleared.
        """
        if not isinstance(hook_names, (list, tuple)):
            raise AssertionError("Hook names must be a list: got %r"
                                 % hook_names)

        if "*" in hook_names and len(hook_names) > 1:
            msg = "Ambiguous to debug all hooks and named hooks %r" % (
                hook_names,)
            raise ValueError(msg)

        try:
            yield self._client.create(
                self._hook_debug_path,
                serializer.dump({"debug_hooks": hook_names}),
                flags=zookeeper.EPHEMERAL)
        except zookeeper.NodeExistsException:
            raise ServiceUnitDebugAlreadyEnabled(self.unit_name)
        returnValue(True)

    @inlineCallbacks
    def clear_hook_debug(self):
        """Clear any debug hook settings.

        When a single hook is being debugged this method is used by
        agents to clear the debug settings after they have been
        processed.
        """
        try:
            yield self._client.delete(self._hook_debug_path)
        except zookeeper.NoNodeException:
            # We get to the same end state.
            pass
        returnValue(True)

    @inlineCallbacks
    def get_hook_debug(self):
        """Retrieve the current value, if any, of the hook debug setting.

        If no setting is found, None is returned.
        """
        try:
            content, stat = yield self._client.get(self._hook_debug_path)
        except zookeeper.NoNodeException:
            # We get to the same end state.
            returnValue(None)
        returnValue(serializer.load(content))

    @inlineCallbacks
    def watch_hook_debug(self, callback, permanent=True):
        """Set a callback to be invoked when the debug state changes.

        :param callback: The callback receives a single parameter, the
            change event. The watcher always receives an initial
            invocation with a boolean value denoting the existence of
            the debug setting. Subsequent invocations will be with
            change events.

        :param permanent: Determines if the watch automatically resets.
            It's important that clients do not rely on the event as
            reflective of the current state. It is only a notification
            that some change happened; watch users should fetch the
            current value if they need it.
        """
        @inlineCallbacks
        def watcher(change_event):
            if permanent and self._client.connected:
                exists_d, watch_d = self._client.exists_and_watch(
                    self._hook_debug_path)

            yield callback(change_event)

            if permanent:
                watch_d.addCallback(watcher)

        exists_d, watch_d = self._client.exists_and_watch(
            self._hook_debug_path)

        exists = yield exists_d

        # Setup the watch deferred callback after the user defined
        # callback has returned successfully from the existence
        # invocation.
        callback_d = maybeDeferred(callback, bool(exists))
        callback_d.addCallback(
            lambda x: watch_d.addCallback(watcher) and x)

        # Wait on the first callback, reflecting present state, not a
        # zk watch.
        yield callback_d

    @property
    def _upgrade_flag_path(self):
        return "%s/upgrade" % self._zk_path

    @inlineCallbacks
    def set_upgrade_flag(self, force=False):
        """Inform the unit it should perform an upgrade.
        """
        assert isinstance(force, bool), "Invalid force upgrade flag"

        def update(content, stat):
            if not content:
                flags = dict(force=force)
                return serializer.dump(flags)

            flags = serializer.load(content)
            if not isinstance(flags, dict):
                flags = dict(force=force)
                return serializer.dump(flags)

            if flags['force'] != force:
                raise ServiceUnitUpgradeAlreadyEnabled(self.unit_name)

            return content

        yield retry_change(self._client, self._upgrade_flag_path, update)

    @inlineCallbacks
    def get_upgrade_flag(self):
        """Return a dictionary containing the upgrade flag, or False.
        """
        try:
            content, stat = yield self._client.get(self._upgrade_flag_path)
            if not content:
                returnValue(False)
        except zookeeper.NoNodeException:
            returnValue(False)
        returnValue(serializer.load(content))

    @inlineCallbacks
    def clear_upgrade_flag(self):
        """Clear the upgrade flag.

        Typically done by the unit agent before beginning the upgrade.
        """
        try:
            yield self._client.delete(self._upgrade_flag_path)
        except zookeeper.NoNodeException:
            # We get to the same end state.
            pass

    @inlineCallbacks
    def watch_upgrade_flag(self, callback, permanent=True):
        """Set a callback to be invoked when an upgrade is requested.

        :param callback: The callback receives a single parameter, the
            change event. The watcher always receives an initial
            invocation with a boolean value denoting the existence of
            the upgrade setting. Subsequent invocations will be with
            change events.

        :param permanent: Determines if the watch automatically resets.

            It's important that clients do not rely on the event as
            reflective of the current state. It is only a notification
            that some change happened; the callback should fetch the
            current value via the API, if needed.
        """
        @inlineCallbacks
        def watcher(change_event):
            if permanent and self._client.connected:
                exists_d, watch_d = self._client.exists_and_watch(
                    self._upgrade_flag_path)

            yield callback(change_event)

            if permanent:
                watch_d.addCallback(watcher)

        exists_d, watch_d = self._client.exists_and_watch(
            self._upgrade_flag_path)

        exists = yield exists_d

        # Setup the watch deferred callback after the user defined
        # callback has returned successfully from the existence
        # invocation.
        callback_d = maybeDeferred(callback, bool(exists))
        callback_d.addCallback(
            lambda x: watch_d.addCallback(watcher) and x)

        # Wait on the first callback, reflecting present state, not a
        # zk watch.
        yield callback_d

    @property
    def _unit_resolved_path(self):
        return "%s/resolved" % self._zk_path

    @inlineCallbacks
    def set_resolved(self, retry):
        """Mark the unit as in need of being resolved.

        :param retry: A boolean denoting if hooks should fire as a
            result of the retry.

        The resolved setting is set by the command line to inform
        a unit to attempt a retry transition from an error state.
        """
        if retry not in (RETRY_HOOKS, NO_HOOKS):
            raise ValueError("invalid retry value %r" % retry)

        try:
            yield self._client.create(
                self._unit_resolved_path,
                serializer.dump({"retry": retry}))
        except zookeeper.NodeExistsException:
            raise ServiceUnitResolvedAlreadyEnabled(self.unit_name)

    @inlineCallbacks
    def get_resolved(self):
        """Get the value of the resolved setting, if any.

        The resolved setting is retrieved by the unit agent and, if
        found, instructs it to attempt a retry transition from an
        error state.
        """
        try:
            content, stat = yield self._client.get(self._unit_resolved_path)
        except zookeeper.NoNodeException:
            # Return a default value.
            returnValue(None)
        returnValue(serializer.load(content))

    @inlineCallbacks
    def clear_resolved(self):
        """Remove any resolved setting on the unit."""
        try:
            yield self._client.delete(self._unit_resolved_path)
        except zookeeper.NoNodeException:
            # We get to the same end state.
            pass

    @inlineCallbacks
    def watch_resolved(self, callback):
        """Set a callback to be invoked when a unit is marked resolved.

        :param callback: The callback receives a single parameter, the
            change event. The watcher always receives an initial
            invocation with a boolean value denoting the existence of
            the resolved setting. Subsequent invocations will be with
            change events.
        """
        @inlineCallbacks
        def watcher(change_event):
            if not self._client.connected:
                returnValue(None)
            exists_d, watch_d = self._client.exists_and_watch(
                self._unit_resolved_path)

            try:
                yield callback(change_event)
            except StopWatcher:
                returnValue(None)
            watch_d.addCallback(watcher)

        exists_d, watch_d = self._client.exists_and_watch(
            self._unit_resolved_path)

        exists = yield exists_d

        # Setup the watch deferred callback after the user defined
        # callback has returned successfully from the existence
        # invocation.
        callback_d = maybeDeferred(callback, bool(exists))
        callback_d.addCallback(
            lambda x: watch_d.addCallback(watcher) and x)
        callback_d.addErrback(
            lambda failure: failure.trap(StopWatcher))

        # Wait on the first callback, reflecting present state, not a
        # zk watch.
        yield callback_d

    @property
    def _relation_resolved_path(self):
        return "%s/relation-resolved" % self._zk_path

    @inlineCallbacks
    def set_relation_resolved(self, relation_map):
        """Mark a unit's relations as being resolved.

        The unit agent will watch this setting and unblock the unit,
        via manipulation of the unit workflow and lifecycle.

        :param relation_map: A map of internal relation ids to retry
            hook values, either juju.state.service.NO_HOOKS or
            RETRY_HOOKS.

        TODO: The api currently takes internal relation ids; this
        should be cleaned up with a refactor to state request protocol
        objects. Only public names should be exposed beyond the state
        api.

        There's an ongoing discussion on whether this needs to support
        retries. Currently it doesn't; without retry support, the arg
        to this method could just be a list of relations.
        Supporting retries would mean capturing enough information to
        retry the hook, and has reconciliation issues with respect to
        what's current at the time of re-execution. The existing hook
        scheduler automatically performs merges of redundant events.
        The retry could execute a relation change hook for a remote
        unit that has already departed at the time of re-execution
        (and for which we have a pending hook execution), which would
        be inconsistent with respect to what would be exposed via the
        hook cli api. With support for on-disk persistence and
        recovery, some of this temporal synchronization would already
        be in place.
        """
        if not isinstance(relation_map, dict):
            raise ValueError(
                "Relation map must be a dictionary %r" % relation_map)

        if [v for v in relation_map.values() if v not in (
                RETRY_HOOKS, NO_HOOKS)]:
            raise ValueError("Invalid setting for retry hook")

        def update_relation_resolved(content, stat):
            if not content:
                return serializer.dump(relation_map)

            content = serializer.dump(
                dict_merge(serializer.load(content), relation_map))
            return content

        try:
            yield retry_change(
                self._client,
                self._relation_resolved_path,
                update_relation_resolved)
        except StateChanged:
            raise ServiceUnitRelationResolvedAlreadyEnabled(self.unit_name)
        returnValue(True)

    @inlineCallbacks
    def get_relation_resolved(self):
        """Retrieve any resolved flags set for this unit's relations.
        """
        try:
            content, stat = yield self._client.get(
                self._relation_resolved_path)
        except zookeeper.NoNodeException:
            returnValue(None)
        returnValue(serializer.load(content))

    @inlineCallbacks
    def clear_relation_resolved(self):
        """Clear the relation resolved setting.
        """
        try:
            yield self._client.delete(self._relation_resolved_path)
        except zookeeper.NoNodeException:
            # We get to the same end state.
            pass

    @inlineCallbacks
    def watch_relation_resolved(self, callback):
        """Set a callback to be invoked when a unit's relations are
        resolved.

        :param callback: The callback receives a single parameter, the
            change event. The watcher always receives an initial
            invocation with a boolean value denoting the existence of
            the resolved setting. Subsequent invocations will be with
            change events.
        """
        @inlineCallbacks
        def watcher(change_event):
            if not self._client.connected:
                returnValue(None)
            exists_d, watch_d = self._client.exists_and_watch(
                self._relation_resolved_path)

            try:
                yield callback(change_event)
            except StopWatcher:
                returnValue(None)
            watch_d.addCallback(watcher)

        exists_d, watch_d = self._client.exists_and_watch(
            self._relation_resolved_path)

        exists = yield exists_d

        # Setup the watch deferred callback after the user defined
        # callback has returned successfully from the existence
        # invocation.
        callback_d = maybeDeferred(callback, bool(exists))
        callback_d.addCallback(
            lambda x: watch_d.addCallback(watcher) and x)
        callback_d.addErrback(
            lambda failure: failure.trap(StopWatcher))

        # Wait on the first callback, reflecting present state, not a
        # zk watch.
        yield callback_d

    @inlineCallbacks
    def open_port(self, port, proto):
        """Set policy that `port` (using `proto`) should be opened.

        This only takes effect when the service itself is exposed.
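        Illustrative sketch (not part of the original source):

            yield unit.open_port(80, "tcp")
            ports = yield unit.get_open_ports()
            # ports -> [{"port": 80, "proto": "tcp"}]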
""" def zk_open_port(content, stat): if content is None: data = {} else: data = serializer.load(content) if data is None: data = {} open_ports = data.setdefault("open", []) port_proto = dict(port=port, proto=proto) if port_proto not in open_ports: open_ports.append(port_proto) return serializer.dump(data) yield retry_change(self._client, self._ports_path, zk_open_port) @inlineCallbacks def close_port(self, port, proto): """Sets policy that `port` (using `proto`) should be closed. This only takes effect when the service itself is exposed; otherwise all ports are closed regardless. """ def zk_close_port(content, stat): if content is None: data = {} else: data = serializer.load(content) if data is None: data = {} open_ports = data.setdefault("open", []) port_proto = dict(port=port, proto=proto) if port_proto in open_ports: open_ports.remove(port_proto) return serializer.dump(data) yield retry_change( self._client, self._ports_path, zk_close_port) @inlineCallbacks def get_open_ports(self): """Gets the open ports for this service unit, or an empty list. The retrieved format is [{"port": PORT, "proto": PROTO}, ...] Any open ports are only opened if the service itself is exposed. """ try: content, stat = yield self._client.get(self._ports_path) except zookeeper.NoNodeException: returnValue([]) data = serializer.load(content) if data is None: returnValue(()) returnValue(data.get("open", ())) @inlineCallbacks def watch_ports(self, callback): """Set `callback` to be invoked when a unit's ports are changed. `callback` - receives a single parameter, the change event. The watcher always receives an initial boolean value invocation denoting the existence of the open ports node. Subsequent invocations will be with change events. """ @inlineCallbacks def watcher(change_event): if not self._client.connected: returnValue(None) exists_d, watch_d = self._client.exists_and_watch( self._ports_path) try: yield callback(change_event) except StopWatcher: returnValue(None) watch_d.addCallback(watcher) exists_d, watch_d = self._client.exists_and_watch( self._ports_path) exists = yield exists_d # Setup the watch deferred callback after the user defined callback # has returned successfully from the existence invocation. callback_d = maybeDeferred(callback, bool(exists)) callback_d.addCallback( lambda x: watch_d.addCallback(watcher) and x) callback_d.addErrback( lambda failure: failure.trap(StopWatcher)) # Wait on the first callback, reflecting present state, not a zk watch yield callback_d def _parse_unit_name(unit_name): """Parse a unit's name into the service name and its sequence. Expecting a unit_name in the common format 'wordpress/0' this method will return ('wordpress', 0). @return: a tuple containing the service name(str) and the sequence number(int). 
""" service_name, sequence = unit_name.rsplit("/", 1) sequence = int(sequence) return service_name, sequence def parse_service_name(unit_name): """Return the service name from a given unit name.""" try: return _parse_unit_name(unit_name)[0] except (AttributeError, ValueError): raise ValueError("Not a proper unit name: %r" % (unit_name,)) juju-0.7.orig/juju/state/sshclient.py0000644000000000000000000001147512135220114016064 0ustar 00000000000000import re import socket import time from twisted.internet.defer import Deferred, inlineCallbacks, returnValue from txzookeeper.client import ConnectionTimeoutException from juju.errors import NoConnection, InvalidHost, InvalidUser from juju.lib.port import get_open_port from juju.state.security import SecurityPolicyConnection from juju.state.sshforward import forward_port, ClientTunnelProtocol from .utils import PortWatcher SERVER_RE = re.compile("^(\S+):(\d+)$") class SSHClient(SecurityPolicyConnection): """ A ZookeeperClient which will internally handle an SSH tunnel to connect to the remote host. """ remote_user = "ubuntu" _process = None @inlineCallbacks def _internal_connect(self, server, timeout, share=False): """Connect to the remote host provided via an ssh port forward. An SSH process is fired with port forwarding established on localhost 22181, which the zookeeper client connects to. :param server: Remote host to connect to, specified as hostname:port :type string :param timeout: An timeout interval in seconds. :type float Returns a connected client or error. """ hostname, port = self._parse_servers(server or self._servers) start_time = time.time() # Determine which port we'll be using. local_port = get_open_port() port_watcher = PortWatcher("localhost", local_port, timeout) tunnel_error = Deferred() # On a tunnel error, stop the port watch early and bail with error. tunnel_error.addErrback(port_watcher.stop) # If a tunnel error happens now or later, close the connection. tunnel_error.addErrback(lambda x: self.close()) # Setup tunnel via an ssh process for port forwarding. protocol = ClientTunnelProtocol(self, tunnel_error) self._process = forward_port( self.remote_user, local_port, hostname, int(port), process_protocol=protocol, share=share) # Wait for the tunneled port to open. try: yield port_watcher.async_wait() except socket.error: self.close() # Stop the tunnel process. raise ConnectionTimeoutException("could not connect") else: # If we stopped because of a tunnel error, raise it. 
if protocol.error: yield tunnel_error # Check timeout new_timeout = timeout - (time.time() - start_time) if new_timeout <= 0: self.close() raise ConnectionTimeoutException( "could not connect before timeout") # Connect the client try: yield super(SSHClient, self).connect( "localhost:%d" % local_port, new_timeout) except: self.close() # Stop the tunnel raise returnValue(self) def _parse_servers(self, servers): """Extract a server host and port.""" match = SERVER_RE.match(servers) hostname, port = match.groups() return hostname, port @inlineCallbacks def connect(self, server=None, timeout=60, share=False): """Probe ZK is accessible via ssh tunnel, return client on success.""" until = time.time() + timeout num_retries = 0 while time.time() < until: num_retries += 1 try: yield self._internal_connect( server, timeout=until - time.time(), share=share) except ConnectionTimeoutException: # Reraises implicitly, but with the number of retries # (see the outside of this loop); this circumstance # would occur if the port watcher timed out before we # got anything from the tunnel break except InvalidHost: # No point in retrying if the host itself is invalid self.close() raise except InvalidUser: # Or if the user doesn't have a login self.close() raise except NoConnection: # Otherwise retry after ssh tunnel forwarding failures self.close() else: returnValue(self) self.close() # we raise ConnectionTimeoutException (rather than one of our own, with # the same meaning) to maintain ZookeeperClient interface raise ConnectionTimeoutException( "could not connect before timeout after %d retries" % num_retries) def close(self): """Close the zookeeper connection, and the associated ssh tunnel.""" super(SSHClient, self).close() if self._process is not None: self._process.signalProcess("TERM") self._process.loseConnection() self._process = None juju-0.7.orig/juju/state/sshforward.py0000644000000000000000000001140012135220114016236 0ustar 00000000000000""" An SSH forwarding connection using ssh as a spawned process from the twisted reactor. """ import os import logging from twisted.internet.protocol import ProcessProtocol from juju.errors import ( FileNotFound, NoConnection, InvalidHost, InvalidUser) log = logging.getLogger("juju.state.sshforward") def _verify_ports(*ports): for port in ports: if not isinstance(port, int): if not port.isdigit(): raise SyntaxError("Port must be integer, got %s." % (port)) class TunnelProtocol(ProcessProtocol): def errReceived(self, data): """ Bespoke stderr interpretation to determine error and connection states. """ log.error("SSH tunnel error %s", data) class ClientTunnelProtocol(ProcessProtocol): def __init__(self, client, error_deferred): self._client = client self._deferred = error_deferred @property def error(self): return self._deferred is None def errReceived(self, data): """Bespoke stderr interpretation to determine error/connection states. On errors, invokes the client.close method. """ # Even with a null host file, and ignoring strict host checking # we'll end up with this output, suppress it as its effectively # normal for our usage... 
if data.startswith("Warning: Permanently added"): return message = ex = None if data.startswith("ssh: Could not resolve hostname"): message = "Invalid host for SSH forwarding: %s" % data ex = InvalidHost(message) elif data.startswith("Permission denied"): message = "Invalid SSH key" ex = InvalidUser(message) elif "Connection refused" in data: ex = NoConnection("Connection refused") else: # Handle any other error message = "SSH forwarding error: %s" % data ex = NoConnection(message) if self._deferred: # The provider will retry repeatedly till connections work # only log uncommon errors. if message: log.error(message) self._deferred.errback(ex) self._deferred = None self._client.close() return True def forward_port(remote_user, local_port, remote_host, remote_port, private_key_path=None, process_protocol=None, share=False): """ Fork an ssh process to enable port forwarding from the given local host port to the remote host, remote port. @param local_port: The local port that should be bound for the forward. @type int @param remote_user: The user for login into the remote host. @type str @param remote_host: The name or ip address of the remote host. @type str @param remote_port: The remote port that is forwarded to. @type int @param private_key: The identity file that the private key should be read from. If none is specified. @param process_protocl: The process interaction protocol @type C{ProcessProtocol} """ _verify_ports(local_port, remote_port) info = {"remote_user": remote_user, "local_port": local_port, "remote_host": remote_host, "remote_port": remote_port} from twisted.internet import reactor args = [ "ssh", "-T", "-o", "PasswordAuthentication no", "-Llocalhost:%(local_port)s:localhost:%(remote_port)s" % info, "%(remote_user)s@%(remote_host)s" % info] args[2:2] = prepare_ssh_sharing(auto_master=share) if private_key_path: private_key_path = os.path.expandvars(os.path.expanduser( private_key_path)) if not os.path.exists(private_key_path): raise FileNotFound("Private key file not found: %r." % \ private_key_path) args[2:2] = ["-i", private_key_path] log.debug("Using private key from %s." % private_key_path) # use the existing process environment to utilize an ssh agent if present. log.debug("Spawning SSH process with %s." % ( " ".join("%s=\"%s\"" % pair for pair in info.items()))) if process_protocol is None: process_protocol = TunnelProtocol() return reactor.spawnProcess( process_protocol, "/usr/bin/ssh", args, env=os.environ) def prepare_ssh_sharing(auto_master=False): path = os.path.expanduser("~/.juju/ssh/master-%r@%h:%p") pathdir = os.path.dirname(path) if auto_master and not os.path.isdir(pathdir): os.makedirs(pathdir) if auto_master: master = "auto" else: master = "no" args = [ "-o", "ControlPath " + path, "-o", "ControlMaster " + master, ] return args juju-0.7.orig/juju/state/tests/0000755000000000000000000000000012135220114014650 5ustar 00000000000000juju-0.7.orig/juju/state/topology.py0000644000000000000000000005531012135220114015740 0ustar 00000000000000 from juju.errors import IncompatibleVersion from juju.lib import serializer # The protocol version, which is stored in the /topology node under # the "version" key. The protocol version should *only* be updated # when we know that a version is in fact actually incompatible. VERSION = 2 class InternalTopologyError(Exception): """Inconsistent action attempted. This is mostly for testing and debugging, since it should never happen in practice. 
""" class InternalTopology(object): """Helper to deal with the high-level topology map stored in ZK. This must not be used outside of juju.state. To work with the topology itself, check out the "machine" and "service"modules. The internal topology implementation is based on the use of single node to function as a logical map of some entities within the zookeeper hierarchy. Being a single node means that it may be changed atomically, and thus the network of services have a central consistency point which may be used to develop further algorithms on top of. Without it, incomplete creation of various multi-node objects would have to be considered. This topology contains details such as service names, service unit sequence numbers on a per service basis, mapping of service_id to service names (service names are inteded as display names, and hence subject to change), and mapping of service units to machines. The internal state is maintained in a dictionary, but its structure and storage format should not be depended upon. """ _nil_dict = {} def __init__(self): self._state = {"version": VERSION} def reset(self): """Put the InternalTopology back in its initial state. """ self._state = {"version": VERSION} def dump(self): """Return string containing the state of this topology. This string may be provided to the :method:`parse` to reestablish the same topology state back. """ return serializer.dump(self._state) def parse(self, data): """Parse the dumped data provided and restore internal state. The provided data must necessarily have been retrieved by calling the :method:`dump`. """ parsed = serializer.load(data) self._state = parsed version = self.get_version() if version != VERSION: raise IncompatibleVersion(version, VERSION) def get_version(self): return self._state.get("version", 0) def add_machine(self, machine_id): """Add the given machine_id to the topology state. """ machines = self._state.setdefault("machines", {}) if machine_id in machines: raise InternalTopologyError( "Attempted to add duplicated " "machine (%s)" % machine_id) machines[machine_id] = {} def has_machine(self, machine_id): """Return True if machine_id was registered in the topology. """ return machine_id in self._state.get( "machines", self._nil_dict) def get_machines(self): """Return list of machine ids registered in this topology. """ return sorted(self._state.get("machines", self._nil_dict).keys()) def machine_has_units(self, machine_id): """Return True if machine has any assigned units.""" self._assert_machine(machine_id) services = self._state.get("services", self._nil_dict) for service in services.itervalues(): for unit in service["units"].itervalues(): if unit.get("machine") == machine_id: return True return False def remove_machine(self, machine_id): """Remove machine_id from this topology. """ self._assert_machine(machine_id) if self.machine_has_units(machine_id): raise InternalTopologyError( "Can't remove machine %r while units are assigned" % machine_id) # It's fine, so remove it. del self._state["machines"][machine_id] def add_service(self, service_id, service_name): """Add service_id to this topology. 
""" services = self._state.setdefault("services", {}) if service_id in services: raise InternalTopologyError( "Attempted to add duplicated service: %s" % service_id) for some_service_id in services: if services[some_service_id].get("name") == service_name: raise InternalTopologyError( "Service name %r already in use" % service_name) services[service_id] = {"name": service_name, "units": {}} unit_sequence = self._state.setdefault("unit-sequence", {}) if not service_name in unit_sequence: unit_sequence[service_name] = 0 def has_service(self, service_id): """Return True if service_id was previously added. """ return service_id in self._state.get( "services", self._nil_dict) def get_service_name(self, service_id): """Return service name for the given service id. """ self._assert_service(service_id) return self._state["services"][service_id]["name"] def find_service_with_name(self, service_name): """Return service_id for the named service, or None.""" services = self._state.get("services", ()) for service_id in services: if services[service_id].get("name") == service_name: return service_id return None def get_services(self): """Return list of previously added service ids. """ return self._state.get("services", {}).keys() def remove_service(self, service_id): """Remove service_id from this topology. """ self._assert_service(service_id) relations = self.get_relations_for_service(service_id) if relations: raise InternalTopologyError( "Service %r is associated to relations %s" % ( service_id, relations)) del self._state["services"][service_id] def add_service_unit(self, service_id, unit_id, container_id=None): """Register unit_id under service_id in this topology state. The new unit id registered will get a sequence number assigned to it. The sequence number increases monotonically for each service, and is helpful to provide nice unit names for users. :param container_id: optional unit_id of the principal service unit to which the new unit is subordinate. Defaults to None. If a `container` scoped relationship to the service of the principal node doesn't exist InternalTopologyError is raised. :return: The sequence number assigned to the unit_id. """ self._assert_service(service_id) services = self._state["services"] for some_service_id in services: if unit_id in services[some_service_id]["units"]: raise InternalTopologyError( "Unit %s already in service: %s" % (unit_id, some_service_id)) if container_id is not None: principal_unit_service = self.get_service_unit_service( container_id) relations = self.get_relations_for_service(service_id) found_container_relation = False for relation in relations: relation_id = relation["relation_id"] if (self.relation_has_service( relation_id, principal_unit_service) and self.get_relation_scope(relation_id) == "container"): found_container_relation = True break if not found_container_relation: raise InternalTopologyError( "Attempted to add subordinate unit " "without container relation") service = services[service_id] services[service_id]["units"][unit_id] = unit = {} unit["sequence"] = self._state[ \ "unit-sequence"][service["name"]] unit["container"] = container_id self._state["unit-sequence"][service["name"]] += 1 return unit["sequence"] def has_service_unit(self, service_id, unit_id): """Return True if unit_id was exists under service_id. 
""" self._assert_service(service_id) service = self._state["services"][service_id] return unit_id in service.get("units", self._nil_dict) def get_service_units(self, service_id): """Return list of unit_id registered under service_id. """ self._assert_service(service_id) service = self._state["services"].get(service_id, self._nil_dict) return service.get("units", self._nil_dict).keys() def get_service_unit_service(self, unit_id): """Given a unit id, return its corresponding service id.""" services = self._state.get("services", self._nil_dict) for service_id, service in services.iteritems(): if unit_id in service["units"]: return service_id raise InternalTopologyError("Service unit ID %s not " "found" % unit_id) def get_service_unit_name(self, service_id, unit_id): """Return the user-oriented name for the given unit.""" self._assert_service_unit(service_id, unit_id) service = self._state["services"][service_id] service_name = service["name"] unit_sequence = service["units"][unit_id]["sequence"] return "%s/%s" % (service_name, unit_sequence) def get_service_unit_id_from_name(self, unit_name): """Return the service unit id from the unit name.""" service_name, unit_sequence_id = unit_name.split("/") service_id = self.find_service_with_name(service_name) unit_id = self.find_service_unit_with_sequence( service_id, int(unit_sequence_id)) return unit_id def get_service_unit_name_from_id(self, unit_id): """Retrieve the user-oriented name from the given unit. A simple convenience accessor. """ service_id = self.get_service_unit_service(unit_id) return self.get_service_unit_name(service_id, unit_id) def get_service_unit_principal(self, unit_id): services = self._state.get("services", self._nil_dict) for service_id, service in services.iteritems(): if unit_id in service["units"]: unit_info = service["units"][unit_id] return unit_info.get("container") raise InternalTopologyError("Service unit ID %s not " "found" % unit_id) def get_service_unit_container(self, unit_id): """Return information about the container of a unit. If the unit_id has a container this method returns (service_id, service_name, sequence, container_id). Otherwise it returns None. """ container_id = self.get_service_unit_principal(unit_id) if container_id is None: return None service_id = self.get_service_unit_service(container_id) container_unit_name = self.get_service_unit_name(service_id, container_id) service_name, sequence = container_unit_name.rsplit("/", 1) sequence = int(sequence) return (service_id, service_name, sequence, container_id) def remove_service_unit(self, service_id, unit_id): """Remove unit_id from under service_id in the topology state. """ self._assert_service_unit(service_id, unit_id) del self._state["services"][service_id]["units"][unit_id] def find_service_unit_with_sequence(self, service_id, sequence): """Return unit_id with the given sequence under service_id. @return: unit_id with the given sequence, or None if not found. """ self._assert_service(service_id) units = self._state["services"][service_id]["units"] for unit_id in units: if units[unit_id]["sequence"] == sequence: return unit_id return None def get_service_unit_sequence(self, service_id, unit_id): """Return the sequence number for the given service unit. """ self._assert_service_unit(service_id, unit_id) unit = self._state["services"][service_id]["units"][unit_id] return unit["sequence"] def assign_service_unit_to_machine(self, service_id, unit_id, machine_id): """Assign the given unit_id to the provided machine_id. 
        The unit_id must exist and be in an unassigned state for this
        to work.
        """
        self._assert_service_unit(service_id, unit_id)
        self._assert_machine(machine_id)
        unit = self._state["services"][service_id]["units"][unit_id]
        if "machine" in unit:
            raise InternalTopologyError(
                "Service unit %s in service %s already "
                "assigned to a machine." % (unit_id, service_id))
        unit["machine"] = machine_id

    def get_service_unit_machine(self, service_id, unit_id):
        """Return the machine_id the unit_id is assigned to, or None.
        """
        self._assert_service_unit(service_id, unit_id)
        unit = self._state["services"][service_id]["units"][unit_id]
        if "machine" not in unit:
            return None
        return unit["machine"]

    def unassign_service_unit_from_machine(self, service_id, unit_id):
        """Unassign the given unit_id from its current machine.

        The unit_id must necessarily be assigned to a machine for
        this to work.
        """
        self._assert_service_unit(service_id, unit_id)
        unit = self._state["services"][service_id]["units"][unit_id]
        if "machine" not in unit:
            raise InternalTopologyError(
                "Service unit %s in service %s is not "
                "assigned to a machine." % (unit_id, service_id))
        del unit["machine"]

    def get_service_units_in_machine(self, machine_id):
        """Return a list of the unit ids assigned to machine_id."""
        self._assert_machine(machine_id)
        units = []
        services = self._state.get("services", self._nil_dict)
        for service_id, service in services.iteritems():
            for unit_id, unit in service["units"].iteritems():
                if unit.get("machine") == machine_id:
                    units.append(unit_id)
        return units

    def add_relation(self, relation_id, relation_type,
                     relation_scope="global"):
        """Add a relation with the given id and of the given type.
        """
        relations = self._state.setdefault("relations", {})
        if relation_id in relations:
            raise InternalTopologyError(
                "Relation id %r already in use" % relation_id)
        relations[relation_id] = dict(interface=relation_type,
                                      scope=relation_scope,
                                      services=dict())

    def has_relation(self, relation_id):
        """Return True if a relation with relation_id exists.
        """
        return relation_id in self._state.get(
            "relations", self._nil_dict)

    def get_relations(self):
        """Return a list of the relation_ids in the topology.
        """
        return self._state.get("relations", self._nil_dict).keys()

    def get_relation_services(self, relation_id):
        """Get all the services associated with the relation.
        """
        self._assert_relation(relation_id)
        relation_data = self._state["relations"][relation_id]
        return relation_data["services"]

    def get_relation_type(self, relation_id):
        """Get the type of a relation (its interface name)."""
        self._assert_relation(relation_id)
        relation_data = self._state["relations"][relation_id]
        return relation_data["interface"]

    def get_relation_scope(self, relation_id):
        """Get the scope of a relation."""
        self._assert_relation(relation_id)
        relation_data = self._state["relations"][relation_id]
        return relation_data["scope"]

    def relation_has_service(self, relation_id, service_id):
        """Return True if `service_id` is assigned to `relation_id`."""
        relations = self._state.get("relations", self._nil_dict)
        relation_data = relations.get(relation_id, self._nil_dict)
        services = relation_data.get("services", self._nil_dict)
        return service_id in services

    def remove_relation(self, relation_id):
        """Remove the relation with the given id from the topology.
        """
        self._assert_relation(relation_id)
        del self._state["relations"][relation_id]

    def assign_service_to_relation(self, relation_id, service_id, name,
                                   role):
        """Associate a service with a relation.

        @param role: The relation role of the service.
        @param name: The relation name from the service.
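
        A sketch (illustrative ids; the relation and service must
        already exist):

            topology.add_relation("relation-0", "mysql")
            topology.assign_service_to_relation(
                "relation-0", "service-0", "db", "client")
            topology.relation_has_service(
                "relation-0", "service-0")  # -> True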
""" self._assert_service(service_id) self._assert_relation(relation_id) relation_data = self._state["relations"][relation_id] services = relation_data["services"] for sid in services: if sid == service_id: raise InternalTopologyError( "Service %r is already assigned " "to relation %r" % (service_id, relation_id)) service_info = services[sid] if service_info["role"] == role: raise InternalTopologyError( ("Another service %r is already providing %r " "role in relation") % (sid, service_info["role"])) services[service_id] = {"role": role, "name": name} def unassign_service_from_relation(self, relation_id, service_id): """Disassociate service to relation. """ self._assert_service(service_id) self._assert_relation(relation_id) relation_data = self._state["relations"][relation_id] services = relation_data["services"] if not service_id in services: raise InternalTopologyError( "Service %r is not assigned to relation %r" % ( service_id, relation_id)) del services[service_id] def get_relation_service(self, relation_id, service_id): """Retrieve the service settings for a relation.""" self._assert_service(service_id) self._assert_relation(relation_id) relation_data = self._state.get("relations").get(relation_id) if not service_id in relation_data.get( "services", self._nil_dict): raise InternalTopologyError( "Service %r not assigned to relation %r" % ( service_id, relation_id)) return (relation_data["interface"], relation_data["services"][service_id]) def get_relations_for_service(self, service_id): """Given a service id retrieve its relations.""" self._assert_service(service_id) relations = [] relations_dict = self._state.get("relations", self._nil_dict) for relation_id, relation_data in relations_dict.items(): services = relation_data.get("services") if services and service_id in services: relations.append(dict( relation_id=relation_id, interface=relation_data["interface"], scope=relation_data["scope"], service=services[service_id])) return relations def _assert_relation(self, relation_id): if relation_id not in self._state.get( "relations", self._nil_dict): raise InternalTopologyError( "Relation not found: %s" % relation_id) def _assert_machine(self, machine_id): if machine_id not in self._state.get( "machines", self._nil_dict): raise InternalTopologyError( "Machine not found: %s" % machine_id) def _assert_service(self, service_id): if service_id not in self._state.get( "services", self._nil_dict): raise InternalTopologyError( "Service not found: %s" % service_id) def _assert_service_unit(self, service_id, unit_id): self._assert_service(service_id) service = self._state["services"][service_id] if unit_id not in service.get("units", self._nil_dict): raise InternalTopologyError( "Service unit %s not found in service %s" % ( unit_id, service_id)) def has_relation_between_endpoints(self, endpoints): """Check if relation exists between `endpoints`. The relation, with a ``relation type`` common to the endpoints, must exist between all endpoints (presumably one for peer, two for client-server). 
The topology for the relations looks like the following in YAML:: relations: relation-0000000000: - mysql - global - service-0000000000: {name: db, role: client} service-0000000001: {name: server, role: server} """ service_ids = dict((e, self.find_service_with_name( e.service_name)) for e in endpoints) relations = self._state.get("relations", self._nil_dict) for relation_data in relations.itervalues(): scope = relation_data["scope"] services = relation_data["services"] for endpoint in endpoints: service = services.get(service_ids[endpoint]) if (not service or service["name"] != endpoint.relation_name or scope not in (endpoint.relation_scope, "container")): break else: return True return False def get_relation_between_endpoints(self, endpoints): """Return relation id existing between `endpoints` or None""" service_ids = dict((e, self.find_service_with_name( e.service_name)) for e in endpoints) relations = self._state.get("relations", self._nil_dict) for relation_id, relation_data in relations.iteritems(): interface = relation_data["interface"] services = relations[relation_id]["services"] if interface != endpoints[0].relation_type: continue for endpoint in endpoints: service = services.get(service_ids[endpoint]) if not service or service["name"] != endpoint.relation_name: break else: return relation_id return None juju-0.7.orig/juju/state/utils.py0000644000000000000000000002075112135220114015225 0ustar 00000000000000from collections import namedtuple from UserDict import DictMixin import socket import errno import time from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.threads import deferToThread from txzookeeper.utils import retry_change import zookeeper from juju.lib import serializer from juju.state.errors import StateChanged from juju.state.errors import StateNotFound class PortWatcher(object): def __init__(self, host, port, timeout, listen=False): """Watches a `port` on `host` until available. Used with `sync_wait` or `async_wait` methods. Times out after `timeout` seconds. Normally the watcher is used to determine when a port starts listening for client use, but the parameter `listen` may be used to wait for when the port can be used by a server (because of previous usage without properly closing). """ self._host = host self._port = port self._timeout = timeout self._stop = False self._listen = listen def stop(self, result=None): """Interrupt port watching in its loop.""" self._stop = True return result def sync_wait(self): """Waits until the port is available, or `socket.error`.""" until = time.time() + self._timeout while time.time() < until and not self._stop: sock = socket.socket() sock.settimeout(1) try: if self._listen: sock.bind((self._host, self._port)) else: sock.connect((self._host, self._port)) except socket.timeout: time.sleep(0.5) except socket.error, e: if e.args[0] not in ( errno.EWOULDBLOCK, errno.ECONNREFUSED, errno.EADDRINUSE): raise else: # Sleep, otherwise this code will create useless sockets time.sleep(0.5) else: sock.close() return True if self._stop: return raise socket.timeout("could not connect before timeout") def async_wait(self, *ignored): """Returns a deferred that is called back on port available. 
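
        A usage sketch (the host, port and callback are illustrative):

            watcher = PortWatcher("localhost", 2181, 30)
            d = watcher.async_wait()
            d.addCallback(on_port_available)  # hypothetical callback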
An exception is returned through the deferred errback if the port wait timed out, or another problem occurs.""" return deferToThread(self.sync_wait) @inlineCallbacks def remove_tree(client, path): children = yield client.get_children(path) for child in children: yield remove_tree(client, "%s/%s" % (path, child)) yield client.delete(path) def dict_merge(d1, d2): """Return a union of dicts if they have no conflicting values. Else raise a StateChanged error. """ must_match = set(d1).intersection(d2) for k in must_match: if not d1[k] == d2[k]: raise StateChanged() d = {} d.update(d1) d.update(d2) return d class DeletedItem(namedtuple("DeletedItem", "key old")): """Represents deleted items when :class:`YAMLState` writes.""" def __str__(self): return "Setting deleted: %r (was %.100r)" % (self.key, self.old) class ModifiedItem(namedtuple("ModifiedItem", "key old new")): """Represents modified items when :class:`YAMLState` writes.""" def __str__(self): return "Setting changed: %r=%.100r (was %.100r)" % \ (self.key, self.new, self.old) class AddedItem(namedtuple("AddedItem", "key new")): """Represents added items when :class:`YAMLState` writes.""" def __str__(self): return "Setting changed: %r=%.100r (was unset)" % \ (self.key, self.new) class YAMLState(DictMixin, object): """Provides a dict like interface around a Zookeeper node containing serialised YAML data. The dict provided represents the local view of all node data. `write` writes this information into the Zookeeper node, using a retry until success and merges against any existing keys in ZK. YAMLState(client, path) `client`: a Zookeeper client `path`: the path of the Zookeeper node to manage The state of this object always represents the product of the pristine settings (from Zookeeper) and the pending writes. All mutation to the dict expects the use of inlineCallbacks and a yield. This includes set and update. """ # By always updating 'self' on mutation we don't need to do any # special handling on data access (gets). def __init__(self, client, path): self._client = client self._path = path self._pristine_cache = None self._cache = {} @inlineCallbacks def read(self, required=False): """Read Zookeeper state. Read in the current Zookeeper state for this node. This operation should be called prior to other interactions with this object. `required`: boolean indicating if the node existence should be required at read time. Normally write will create the node if the path is possible. This allows for simplified catching of errors. """ self._pristine_cache = {} self._cache = {} try: data, stat = yield self._client.get(self._path) data = serializer.load(data) if data: self._pristine_cache = data self._cache = data.copy() except zookeeper.NoNodeException: if required: raise StateNotFound(self._path) def _check(self): """Verify that sync was called for operations which expect it.""" if self._pristine_cache is None: raise ValueError( "You must call .read() on %s instance before use." % ( self.__class__.__name__,)) ## DictMixin Interface def keys(self): return self._cache.keys() def __getitem__(self, key): self._check() return self._cache[key] def __setitem__(self, key, value): self._check() self._cache[key] = value def __delitem__(self, key): self._check() del self._cache[key] @inlineCallbacks def write(self): """Write object state to Zookeeper. This will write the current state of the object to Zookeeper, taking the final merged state as the new one, and resetting any write buffers. 
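
        A sketch of the read/modify/write cycle (assuming a connected
        `client` and the illustrative path "/example"):

            state = YAMLState(client, "/example")
            yield state.read()
            state["alpha"] = 1
            changes = yield state.write()
            # changes -> [AddedItem(key='alpha', new=1)]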
""" self._check() cache = self._cache pristine_cache = self._pristine_cache self._pristine_cache = cache.copy() # Used by `apply_changes` function to return the changes to # this scope. changes = [] def apply_changes(content, stat): """Apply the local state to the Zookeeper node state.""" del changes[:] current = serializer.load(content) if content else {} missing = object() for key in set(pristine_cache).union(cache): old_value = pristine_cache.get(key, missing) new_value = cache.get(key, missing) if old_value != new_value: if new_value != missing: current[key] = new_value if old_value != missing: changes.append( ModifiedItem(key, old_value, new_value)) else: changes.append(AddedItem(key, new_value)) elif key in current: del current[key] changes.append(DeletedItem(key, old_value)) return serializer.dump(current) # Apply the change till it takes. yield retry_change(self._client, self._path, apply_changes) returnValue(changes) class YAMLStateNodeMixin(object): """Enables simpler setters/getters. Mixee requires ._zk_path and ._client attributes, and a ._node_missing method. """ @inlineCallbacks def _get_node_value(self, key, default=None): node_data = YAMLState(self._client, self._zk_path) try: yield node_data.read(required=True) except StateNotFound: self._node_missing() returnValue(node_data.get(key, default)) @inlineCallbacks def _set_node_value(self, key, value): node_data = YAMLState(self._client, self._zk_path) try: yield node_data.read(required=True) except StateNotFound: self._node_missing() node_data[key] = value yield node_data.write() juju-0.7.orig/juju/state/tests/__init__.py0000644000000000000000000000000212135220114016751 0ustar 00000000000000# juju-0.7.orig/juju/state/tests/common.py0000644000000000000000000000356312135220114016521 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, returnValue import zookeeper from txzookeeper.tests.utils import deleteTree from juju.charm.directory import CharmDirectory from juju.charm.tests.test_directory import sample_directory from juju.environment.tests.test_config import EnvironmentsConfigTestBase from juju.state.topology import InternalTopology class StateTestBase(EnvironmentsConfigTestBase): @inlineCallbacks def setUp(self): yield super(StateTestBase, self).setUp() zookeeper.set_debug_level(0) self.charm = CharmDirectory(sample_directory) self.client = self.get_zookeeper_client() yield self.client.connect() yield self.client.create("/charms") yield self.client.create("/machines") yield self.client.create("/services") yield self.client.create("/units") yield self.client.create("/relations") @inlineCallbacks def tearDown(self): # Close and reopen connection, so that watches set during # testing are not affected by the cleaning up. 
        self.client.close()
        client = self.get_zookeeper_client()
        yield client.connect()
        deleteTree(handle=client.handle)
        client.close()
        yield super(StateTestBase, self).tearDown()

    @inlineCallbacks
    def get_topology(self):
        """Read /topology and return an InternalTopology instance with it."""
        content, stat = yield self.client.get("/topology")
        topology = InternalTopology()
        topology.parse(content)
        returnValue(topology)

    @inlineCallbacks
    def set_topology(self, topology):
        """Dump the given InternalTopology into /topology."""
        content = topology.dump()
        try:
            yield self.client.set("/topology", content)
        except zookeeper.NoNodeException:
            yield self.client.create("/topology", content)
juju-0.7.orig/juju/state/tests/test_agent.py0000644000000000000000000000524412135220114017364 0ustar 00000000000000import zookeeper

from twisted.internet.defer import inlineCallbacks

from txzookeeper.tests.utils import deleteTree
from juju.lib.testing import TestCase
from juju.state.base import StateBase
from juju.state.agent import AgentStateMixin


class DomainObject(StateBase, AgentStateMixin):

    def _get_agent_path(self):
        return "/agent"


class AgentDomainTest(TestCase):

    @inlineCallbacks
    def setUp(self):
        zookeeper.set_debug_level(0)
        yield super(AgentDomainTest, self).setUp()
        self.client = self.get_zookeeper_client()
        yield self.client.connect()

    @inlineCallbacks
    def tearDown(self):
        yield self.client.close()
        client = self.get_zookeeper_client()
        yield client.connect()
        deleteTree("/", client.handle)
        yield client.close()

    @inlineCallbacks
    def test_has_agent(self):
        domain = DomainObject(self.client)
        exists = yield domain.has_agent()
        self.assertIs(exists, False)
        yield domain.connect_agent()
        exists = yield domain.has_agent()
        self.assertIs(exists, True)

    @inlineCallbacks
    def test_watch_agent(self):
        domain = DomainObject(self.client)
        results = []

        def on_change(event):
            results.append(event)

        exists_d, watch_d = domain.watch_agent()
        exists = yield exists_d
        self.assertIs(exists, False)
        watch_d.addCallback(on_change)
        self.assertFalse(results)

        # Connect the agent, and ensure it's observed.
        yield domain.connect_agent()
        yield self.sleep(0.1)
        self.assertEqual(len(results), 1)

        # Re-establish the watch and manually delete.
exists_d, watch_d = domain.watch_agent() exists = yield exists_d self.assertIs(exists, True) watch_d.addCallback(on_change) yield self.client.delete("/agent") self.assertEqual(results[0].type_name, "created") self.assertEqual(results[1].type_name, "deleted") @inlineCallbacks def test_connect_agent(self): client = self.get_zookeeper_client() yield client.connect() exists_d, watch_d = self.client.exists_and_watch("/agent") exists = yield exists_d self.assertFalse(exists) domain = DomainObject(client) yield domain.connect_agent() event = yield watch_d self.assertEqual(event.type_name, "created") exists_d, watch_d = self.client.exists_and_watch("/agent") self.assertTrue((yield exists_d)) # Force the connection of the domain object to disappear yield client.close() event = yield watch_d self.assertEqual(event.type_name, "deleted") juju-0.7.orig/juju/state/tests/test_auth.py0000644000000000000000000000566612135220114017237 0ustar 00000000000000import hashlib import base64 import zookeeper from juju.lib.testing import TestCase from juju.state.auth import make_identity, make_ace class AuthTestCase(TestCase): def test_make_identity(self): username = "admin" password = "pass" credentials = "%s:%s" % (username, password) identity = "%s:%s" %( username, base64.b64encode(hashlib.new("sha1", credentials).digest())) self.assertEqual(identity, make_identity(credentials)) def test_make_identity_with_colon_in_password(self): username = "admin" password = ":pass:" credentials = "%s:%s" % (username, password) identity = "%s:%s" %( username, base64.b64encode(hashlib.new("sha1", credentials).digest())) self.assertEqual(identity, make_identity(credentials)) def test_make_identity_invalid_credential(self): credentials = "abc" self.assertRaises(SyntaxError, make_identity, credentials) def test_make_ace(self): identity = "admin:moss" ace = make_ace(identity, write=True, create=True) self.assertEqual(ace["id"], identity) self.assertEqual(ace["scheme"], "digest") self.assertEqual( ace["perms"], zookeeper.PERM_WRITE|zookeeper.PERM_CREATE) def test_make_ace_with_unknown_perm(self): identity = "admin:moss" self.assertRaises( SyntaxError, make_ace, identity, read=True, extra=True) def test_make_ace_no_perms(self): identity = "admin:moss" self.assertRaises(SyntaxError, make_ace, identity) def test_world_scheme(self): identity = "anyone" result = make_ace(identity, scheme="world", all=True) self.assertEqual(result, {"perms": zookeeper.PERM_ALL, "scheme": "world", "id": "anyone"}) def test_unknown_scheme_raises_assertion(self): identity = "admin:moss" self.assertRaises(AssertionError, make_ace, identity, scheme="mickey") def test_make_ace_with_false_raises(self): """Permissions can only be enabled via ACL, other usage raises.""" identity = "admin:max" try: make_ace(identity, write=False, create=True) except SyntaxError, e: self.assertEqual( e.args[0], "Permissions can only be enabled via ACL - %s" % "write") else: self.fail("Should have raised exception.") def test_make_ace_with_nonbool_raises(self): """Permissions can only be enabled via ACL, other usage raises.""" identity = "admin:max" try: make_ace(identity, write=None, create=True) except SyntaxError, e: self.assertEqual( e.args[0], "Permissions can only be enabled via ACL - %s" % "write") else: self.fail("Should have raised exception.") juju-0.7.orig/juju/state/tests/test_base.py0000644000000000000000000003570712135220114017207 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, Deferred from juju.state.tests.common import StateTestBase from 
juju.state.base import StateBase from juju.state.errors import StopWatcher from juju.state.topology import InternalTopology class StateBaseTest(StateTestBase): """ Test the StateBase class, which provides helpers to other state management classes. Note that some underscored methods in this class are like this with the intention of implementing "protected" semantics, not "private". """ @inlineCallbacks def setUp(self): yield super(StateBaseTest, self).setUp() self.base = StateBase(self.client) def parse_topology(self, content): topology = InternalTopology() topology.parse(content) return topology @inlineCallbacks def test_read_empty_topology(self): """ When the state is empty (no /topology) file, reading the topology should return an empty one. """ topology = yield self.base._read_topology() empty_topology = InternalTopology() self.assertEquals(topology.dump(), empty_topology.dump()) @inlineCallbacks def test_non_empty_topology(self): """ When there's something in /topology already, it should of course be read and parsed when we read the topology. """ test_topology = InternalTopology() test_topology.add_machine("m-0") topology_dump = test_topology.dump() yield self.client.create("/topology", topology_dump) topology = yield self.base._read_topology() self.assertEquals(topology.dump(), topology_dump) @inlineCallbacks def test_change_empty_topology(self): """ Attempting to change a non-existing topology should create it with the initial content in place. """ def change_topology(topology): topology.add_machine("m-0") yield self.base._retry_topology_change(change_topology) content, stat = yield self.client.get("/topology") topology = self.parse_topology(content) self.assertTrue(topology.has_machine("m-0")) @inlineCallbacks def test_change_non_empty_topology(self): """ Attempting to change a pre-existing topology should modify it accordingly. """ test_topology = InternalTopology() test_topology.add_machine("m-0") topology_dump = test_topology.dump() yield self.client.create("/topology", topology_dump) def change_topology(topology): topology.add_machine("m-1") yield self.base._retry_topology_change(change_topology) content, stat = yield self.client.get("/topology") topology = self.parse_topology(content) self.assertTrue(topology.has_machine("m-0")) self.assertTrue(topology.has_machine("m-1")) @inlineCallbacks def test_watch_topology_when_being_created(self): """ It should be possible to start watching the topology even before it is created. In this case, the callback will be made when it's actually introduced. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_topology(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback[len(calls)-1].callback(True) # Start watching. self.base._watch_topology(watch_topology) # Callback is still untouched. self.assertEquals(calls, []) # Create the topology, and wait for callback. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) yield wait_callback[0] # The first callback must have been fired, and it must have None # as the first argument because that's the first topology seen. self.assertEquals(len(calls), 1) old_topology, new_topology = calls[0] self.assertEquals(old_topology, None) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), False) # Change the topology again. 
topology.add_machine("m-1") yield self.set_topology(topology) yield wait_callback[1] # Now the watch callback must have been fired with two # different topologies. The old one, and the new one. self.assertEquals(len(calls), 2) old_topology, new_topology = calls[1] self.assertEquals(old_topology.has_machine("m-0"), True) self.assertEquals(old_topology.has_machine("m-1"), False) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), True) @inlineCallbacks def test_watch_topology_when_it_already_exists(self): """ It should also be possible to start watching the topology when the topology already exists, of course. In this case, the callback should fire immediately, and should have an old_topology of None so that the callback has a chance to understand that it's the first time a topology is being processed, even if it already existed before. """ # Create the topology ahead of time. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) wait_callback = [Deferred() for i in range(10)] calls = [] def watch_topology(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback[len(calls)-1].callback(True) # Start watching, and wait on callback immediately. self.base._watch_topology(watch_topology) yield wait_callback[0] # The first callback must have been fired, and it must have None # as the first argument because that's the first topology seen. self.assertEquals(len(calls), 1) old_topology, new_topology = calls[0] self.assertEquals(old_topology, None) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), False) # Change the topology again. topology.add_machine("m-1") yield self.set_topology(topology) yield wait_callback[1] # Give a chance for something bad to happen. yield self.poke_zk() # Now the watch callback must have been fired with two # different topologies. The old one, and the new one. self.assertEquals(len(calls), 2) old_topology, new_topology = calls[1] self.assertEquals(old_topology.has_machine("m-0"), True) self.assertEquals(old_topology.has_machine("m-1"), False) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), True) @inlineCallbacks def test_watch_topology_when_it_goes_missing(self): """ We consider a deleted /topology to be an error, and will not warn the callback about it. Instead, we'll wait until a new topology is brought up, and then will fire the callback with the full delta. """ # Create the topology ahead of time. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) wait_callback = [Deferred() for i in range(10)] calls = [] def watch_topology(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback[len(calls)-1].callback(True) # Start watching, and wait on callback immediately. self.base._watch_topology(watch_topology) # Ignore the first callback with the initial state, as # this is already tested above. yield wait_callback[0] log = self.capture_logging() # Kill the /topology node entirely. yield self.client.delete("/topology") # Create a new topology from the ground up. topology.add_machine("m-1") yield self.set_topology(topology) yield wait_callback[1] # The issue should have been logged. self.assertIn("The /topology node went missing!", log.getvalue()) # Check that we've only perceived the delta. 
self.assertEquals(len(calls), 2) old_topology, new_topology = calls[1] self.assertEquals(old_topology.has_machine("m-0"), True) self.assertEquals(old_topology.has_machine("m-1"), False) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), True) @inlineCallbacks def test_watch_topology_may_defer(self): """ The watch topology may return a deferred so that it performs some of its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. """ wait_callback = [Deferred() for i in range(10)] finish_callback = [Deferred() for i in range(10)] calls = [] def watch_topology(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback[len(calls)-1].callback(True) return finish_callback[len(calls)-1] # Start watching. self.base._watch_topology(watch_topology) # Create the topology. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) # Hold off until callback is started. yield wait_callback[0] # Change the topology again. topology.add_machine("m-1") yield self.set_topology(topology) # Give a chance for something bad to happen. yield self.poke_zk() # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. self.assertEquals(len(calls), 2) old_topology, new_topology = calls[1] self.assertEquals(old_topology.has_machine("m-0"), True) self.assertEquals(old_topology.has_machine("m-1"), False) self.assertEquals(new_topology.has_machine("m-0"), True) self.assertEquals(new_topology.has_machine("m-1"), True) @inlineCallbacks def test_stop_watch(self): """ A watch that fires a `StopWatcher` exception will end the watch.""" wait_callback = [Deferred() for i in range(5)] calls = [] def watcher(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback[len(calls)-1].callback(True) if len(calls) == 2: raise StopWatcher() # Start watching. self.base._watch_topology(watcher) # Create the topology. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) # Hold off until callback is started. yield wait_callback[0] # Change the topology again. topology.add_machine("m-1") yield self.set_topology(topology) yield wait_callback[1] self.assertEqual(len(calls), 2) # Change the topology again, we shouldn't see this. topology.add_machine("m-2") yield self.set_topology(topology) # Give a chance for something bad to happen. yield self.poke_zk() # Ensure we still have a single call. self.assertEquals(len(calls), 2) @inlineCallbacks def test_watch_stops_on_closed_connection(self): """Verify watches stops when the connection is closed.""" # Use a separate client connection for watching so it can be # disconnected. watch_client = self.get_zookeeper_client() yield watch_client.connect() watch_base = StateBase(watch_client) wait_callback = Deferred() finish_callback = Deferred() calls = [] def watcher(old_topology, new_topology): calls.append((old_topology, new_topology)) wait_callback.callback(True) return finish_callback # Start watching. yield watch_base._watch_topology(watcher) # Create the topology. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) # Hold off until callback is started. yield wait_callback # Change the topology. 
topology.add_machine("m-1") yield self.set_topology(topology) # Ensure that the watch has been called just once so far # (although still pending due to the finish_callback). self.assertEquals(len(calls), 1) # Now disconnect the client. watch_client.close() self.assertFalse(watch_client.connected) self.assertTrue(self.client.connected) # Change the topology again. topology.add_machine("m-2") yield self.set_topology(topology) # Allow the first call to be completed, starting a process of # watching for the next change. At this point, the watch will # encounter that the client is disconnected. finish_callback.callback(True) # Give a chance for something bad to happen. yield self.poke_zk() # Ensure the watch was still not called. self.assertEquals(len(calls), 1) @inlineCallbacks def test_watch_stops_on_early_closed_connection(self): """Verify watches stops when the connection is closed early. _watch_topology chains from an exists_and_watch to a get_and_watch. This test ensures that this chaining will fail gracefully if the connection is closed before this chaining can occur. """ # Use a separate client connection for watching so it can be # disconnected. watch_client = self.get_zookeeper_client() yield watch_client.connect() watch_base = StateBase(watch_client) calls = [] @inlineCallbacks def watcher(old_topology, new_topology): calls.append((old_topology, new_topology)) # Create the topology. topology = InternalTopology() topology.add_machine("m-0") yield self.set_topology(topology) # Now disconnect the client. watch_client.close() self.assertFalse(watch_client.connected) self.assertTrue(self.client.connected) # Start watching. yield watch_base._watch_topology(watcher) # Change the topology, this will trigger the watch. topology.add_machine("m-1") yield self.set_topology(topology) # Give a chance for something bad to happen. yield self.poke_zk() # Ensure the watcher was never called, because its client was # disconnected. self.assertEquals(len(calls), 0) juju-0.7.orig/juju/state/tests/test_charm.py0000644000000000000000000001552212135220114017360 0ustar 00000000000000import os import shutil from twisted.internet.defer import inlineCallbacks from juju.charm.directory import CharmDirectory from juju.charm.tests import local_charm_id from juju.charm.tests.test_directory import sample_directory from juju.charm.tests.test_repository import unbundled_repository from juju.lib import serializer from juju.state.charm import CharmStateManager from juju.state.errors import CharmStateNotFound from juju.state.tests.common import StateTestBase class CharmStateManagerTest(StateTestBase): @inlineCallbacks def setUp(self): yield super(CharmStateManagerTest, self).setUp() self.charm_state_manager = CharmStateManager(self.client) self.charm_id = local_charm_id(self.charm) self.unbundled_repo_path = self.makeDir() os.rmdir(self.unbundled_repo_path) shutil.copytree(unbundled_repository, self.unbundled_repo_path) @inlineCallbacks def test_add_charm(self): """ Adding a Charm into a CharmStateManager should register the charm within the Zookeeper state, according to the specification. 
""" charm_state = yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "http://example.com/abc") self.assertEquals(charm_state.id, "local:series/dummy-1") children = yield self.client.get_children("/charms") self.assertEquals(children, ["local_3a_series_2f_dummy-1"]) content, stat = yield self.client.get( "/charms/local_3a_series_2f_dummy-1") charm_data = serializer.load(content) self.assertEquals(charm_data, { "metadata": self.charm.metadata.get_serialization_data(), "config": self.charm.config.get_serialization_data(), "sha256": self.charm.get_sha256(), "url": "http://example.com/abc" }) @inlineCallbacks def test_get_charm(self): """ A CharmState should be available if one get()s a charm that was previously added into the manager. """ yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") self.assertEquals(charm_state.id, "local:series/dummy-1") @inlineCallbacks def test_charm_state_attributes(self): """ Verify that the basic (invariant) attributes of the CharmState are correctly in place. """ yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "http://example.com/abc") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") self.assertEquals(charm_state.name, "dummy") self.assertEquals(charm_state.revision, 1) self.assertEquals(charm_state.id, "local:series/dummy-1") self.assertEquals(charm_state.bundle_url, "http://example.com/abc") @inlineCallbacks def test_is_subordinate(self): """ Verify is_subordinate for traditional and subordinate charms """ yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") self.assertEquals(charm_state.is_subordinate(), False) sub_charm = CharmDirectory( os.path.join(self.unbundled_repo_path, "series", "logging")) self.charm_state_manager.add_charm_state("local:series/logging-1", sub_charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/logging-1") self.assertEquals(charm_state.is_subordinate(), True) @inlineCallbacks def test_charm_state_metadata(self): """ Check that the charm metadata was correctly saved and loaded. 
""" yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") metadata = yield charm_state.get_metadata() self.assertEquals(metadata.name, "dummy") self.assertFalse(metadata.is_subordinate) self.assertFalse(charm_state.is_subordinate()) @inlineCallbacks def test_charm_state_is_subordinate(self): log_dir = os.path.join(os.path.dirname(sample_directory), "logging") charm = CharmDirectory(log_dir) yield self.charm_state_manager.add_charm_state( "local:series/logging-1", charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/logging-1") self.assertTrue(charm_state.is_subordinate) @inlineCallbacks def test_charm_state_config_options(self): """Verify ConfigOptions present and correct.""" from juju.charm.tests.test_config import sample_yaml_data yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") config = yield charm_state.get_config() self.assertEquals(config.get_serialization_data(), sample_yaml_data) @inlineCallbacks def test_get_non_existing_charm_prior_to_initialization(self): """ Getting a charm before the charms node was even initialized should raise an error about the charm not being present. """ try: yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") except CharmStateNotFound, e: self.assertEquals(e.charm_id, "local:series/dummy-1") else: self.fail("Error not raised.") @inlineCallbacks def test_get_non_existing_charm(self): """ Trying to retrieve a charm from the state when it was never added should raise an error. """ yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") try: yield self.charm_state_manager.get_charm_state( "local:anotherseries/dummy-1") except CharmStateNotFound, e: self.assertEquals(e.charm_id, "local:anotherseries/dummy-1") else: self.fail("Error not raised.") @inlineCallbacks def test_get_sha256(self): """ We should be able to retrieve the sha256 of a stored charm. 
""" yield self.charm_state_manager.add_charm_state( self.charm_id, self.charm, "") charm_state = yield self.charm_state_manager.get_charm_state( "local:series/dummy-1") sha256 = yield charm_state.get_sha256() self.assertEquals(sha256, self.charm.get_sha256()) juju-0.7.orig/juju/state/tests/test_endpoint.py0000644000000000000000000000267512135220114020113 0ustar 00000000000000from juju.lib.testing import TestCase from juju.state.endpoint import RelationEndpoint class RelationEndpointTest(TestCase): def test_may_relate_to(self): # TODO: Needs a doc string mysql_ep = RelationEndpoint("mysqldb", "mysql", "db", "server") blog_ep = RelationEndpoint("blog", "mysql", "mysql", "client") pg_ep = RelationEndpoint("postgres", "postgres", "db", "server") self.assertRaises(TypeError, mysql_ep.may_relate_to, 42) # should relate, along with symmetric case self.assert_(mysql_ep.may_relate_to(blog_ep)) self.assert_(blog_ep.may_relate_to(mysql_ep)) # no common relation_type self.assertFalse(blog_ep.may_relate_to(pg_ep)) self.assertFalse(pg_ep.may_relate_to(blog_ep)) # antireflexive om relation_role - # must be consumer AND provider or vice versa self.assertFalse(blog_ep.may_relate_to( RelationEndpoint("foo", "mysql", "db", "client"))) self.assertFalse(mysql_ep.may_relate_to( RelationEndpoint("foo", "mysql", "db", "server"))) # irreflexive for server/client self.assertFalse(mysql_ep.may_relate_to(mysql_ep)) self.assertFalse(blog_ep.may_relate_to(blog_ep)) self.assertFalse(pg_ep.may_relate_to(pg_ep)) # but reflexive for peer riak_ep = RelationEndpoint("riak", "riak", "riak", "peer") self.assert_(riak_ep.may_relate_to(riak_ep)) juju-0.7.orig/juju/state/tests/test_environment.py0000644000000000000000000002310712135220114020630 0ustar 00000000000000from twisted.internet.defer import inlineCallbacks, Deferred from juju.environment.tests.test_config import ( EnvironmentsConfigTestBase, SAMPLE_ENV) from juju.machine.tests.test_constraints import dummy_cs from juju.lib import serializer from juju.state.errors import EnvironmentStateNotFound from juju.state.environment import ( EnvironmentStateManager, GlobalSettingsStateManager, SETTINGS_PATH) from juju.state.tests.common import StateTestBase # Coverage dislikes dynamic imports, convert to static from juju.providers import dummy class EnvironmentStateManagerTest(StateTestBase, EnvironmentsConfigTestBase): @inlineCallbacks def setUp(self): yield super(EnvironmentStateManagerTest, self).setUp() self.environment_state_manager = EnvironmentStateManager(self.client) self.write_config(SAMPLE_ENV) self.config.load() @inlineCallbacks def tearDown(self): yield super(EnvironmentStateManagerTest, self).tearDown() @inlineCallbacks def test_set_config_state(self): """ The simplest thing the manager can do is serialize a given environment and save it in zookeeper. """ manager = self.environment_state_manager yield manager.set_config_state(self.config, "myfirstenv") serialized = self.config.serialize("myfirstenv") content, stat = yield self.client.get("/environment") self.assertEquals(serializer.load(content), serializer.load(serialized)) @inlineCallbacks def test_set_config_state_replaces_environment(self): """ Setting the environment should also work with an existing environment. 
""" yield self.client.create("/environment", "Replace me!") manager = self.environment_state_manager yield manager.set_config_state(self.config, "myfirstenv") serialized = self.config.serialize("myfirstenv") content, stat = yield self.client.get("/environment") self.assertEquals(serializer.load(content), serializer.load(serialized)) @inlineCallbacks def test_get_config(self): """ We can also retrieve a loaded config from the environment. """ manager = self.environment_state_manager yield manager.set_config_state(self.config, "myfirstenv") config = yield manager.get_config() serialized1 = self.config.serialize("myfirstenv") serialized2 = config.serialize("myfirstenv") self.assertEquals(serializer.load(serialized1), serializer.load(serialized2)) def test_get_config_when_missing(self): """ get_config should blow up politely if the environment config is missing. """ d = self.environment_state_manager.get_config() return self.assertFailure(d, EnvironmentStateNotFound) @inlineCallbacks def test_get_in_legacy_environment_no(self): yield self.push_default_config() esm = self.environment_state_manager legacy = yield esm.get_in_legacy_environment() self.assertEquals(legacy, False) @inlineCallbacks def test_get_in_legacy_environment_yes(self): yield self.push_default_config() self.client.delete("/constraints") esm = self.environment_state_manager legacy = yield esm.get_in_legacy_environment() self.assertEquals(legacy, True) def test_get_constraint_set_no_env(self): d = self.environment_state_manager.get_constraint_set() return self.assertFailure(d, EnvironmentStateNotFound) @inlineCallbacks def test_get_constraint_set(self): yield self.push_default_config() cs = yield self.environment_state_manager.get_constraint_set() constraints = cs.parse(["arch=any", "cpu=10"]) self.assertEquals(constraints, { "ubuntu-series": None, "provider-type": "dummy", "arch": None, "cpu": 10.0, "mem": 512.0}) def test_get_constraints_no_env(self): d = self.environment_state_manager.get_constraints() return self.assertFailure(d, EnvironmentStateNotFound) @inlineCallbacks def test_get_constraints_env_with_no_node(self): yield self.push_default_config() self.client.delete("/constraints") constraints = yield self.environment_state_manager.get_constraints() self.assertEquals(constraints.data, {}) @inlineCallbacks def test_set_constraints(self): yield self.push_default_config() constraints = dummy_cs.parse(["cpu=any", "mem=32T"]) yield self.environment_state_manager.set_constraints(constraints) roundtrip = yield self.environment_state_manager.get_constraints() self.assertEquals(roundtrip, constraints) class GlobalSettingsTest(StateTestBase): @inlineCallbacks def setUp(self): yield super(GlobalSettingsTest, self).setUp() self.manager = GlobalSettingsStateManager(self.client) @inlineCallbacks def test_get_set_provider_type(self): """Debug logging is off by default.""" self.assertEqual((yield self.manager.get_provider_type()), None) yield self.manager.set_provider_type("ec2") yield self.assertFailure( self.manager.set_provider_type("abc"), ValueError) self.assertEqual((yield self.manager.get_provider_type()), "ec2") content, stat = yield self.client.get("/settings") self.assertEqual(serializer.load(content), {"provider-type": "ec2"}) @inlineCallbacks def test_set_get_environment(self): self.assertEqual((yield self.manager.get_environment_id()), None) yield self.manager.set_environment_id('snowflake') yield self.assertFailure( self.manager.set_environment_id('snowflake'), ValueError) self.assertEqual( (yield 
            self.manager.get_environment_id()), "snowflake")

    @inlineCallbacks
    def test_get_debug_log_enabled_no_settings_default(self):
        """Debug logging is off by default."""
        value = yield self.manager.is_debug_log_enabled()
        self.assertFalse(value)

    @inlineCallbacks
    def test_set_debug_log(self):
        """Debug logging can be (dis)enabled via the runtime manager."""
        yield self.manager.set_debug_log(True)
        value = yield self.manager.is_debug_log_enabled()
        self.assertTrue(value)

        yield self.manager.set_debug_log(False)
        value = yield self.manager.is_debug_log_enabled()
        self.assertFalse(value)

    @inlineCallbacks
    def test_watcher(self):
        """Use the watch facility of the settings manager to observe
        changes.
        """
        results = []
        callbacks = [Deferred() for i in range(5)]

        def watch(content):
            results.append(content)
            callbacks[len(results) - 1].callback(content)

        yield self.manager.set_debug_log(True)
        yield self.manager.watch_settings_changes(watch)
        self.assertTrue(results)
        yield self.manager.set_debug_log(False)
        yield self.manager.set_debug_log(True)
        yield callbacks[2]

        self.assertEqual(len(results), 3)
        self.assertEqual(
            map(lambda x: isinstance(x, bool) and x or x.type_name,
                results),
            [True, "changed", "changed"])
        data, stat = yield self.client.get(SETTINGS_PATH)
        self.assertEqual(
            (yield self.manager.is_debug_log_enabled()), True)

    @inlineCallbacks
    def test_watcher_start_stop(self):
        """Settings watcher observes changes until stopped.

        Additionally, watching can be enabled on a settings node that
        doesn't exist yet.

        XXX For reasons unknown this fails under coverage outside of
        the test, at least for me (k.), but not for others.
        """
        results = []
        callbacks = [Deferred() for i in range(5)]

        def watch(content):
            results.append(content)
            callbacks[len(results) - 1].callback(content)

        watcher = yield self.manager.watch_settings_changes(watch)
        yield self.client.create(SETTINGS_PATH, "x")
        value = yield callbacks[0]
        self.assertEqual(value.type_name, "created")

        data = dict(x=1, y=2, z=3, moose=u"moon")
        yield self.client.set(
            SETTINGS_PATH, serializer.dump(data))
        value = yield callbacks[1]
        self.assertEqual(value.type_name, "changed")

        watcher.stop()

        yield self.client.set(SETTINGS_PATH, "z")
        # Give a chance for things to go bad.
        yield self.sleep(0.1)
        self.assertFalse(callbacks[2].called)

    @inlineCallbacks
    def test_watcher_stops_on_callback_exception(self):
        """If a callback raises an exception, the watcher is stopped."""
        results = []
        callbacks = [Deferred(), Deferred()]

        def watch(content):
            results.append(content)
            callbacks[len(results) - 1].callback(content)
            raise AttributeError("foobar")

        def on_error(error):
            results.append(True)

        yield self.client.create(SETTINGS_PATH, "z")
        watcher = yield self.manager.watch_settings_changes(
            watch, on_error)
        yield callbacks[0]

        # The callback error should have disconnected the system.
        yield self.client.set(SETTINGS_PATH, "x")

        # Give a chance for things to go bad.
        yield self.sleep(0.1)

        # Verify nothing did go bad.
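        # The watcher should have stopped itself when the callback
        # raised, so the later set() never reaches the callback.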
self.assertFalse(watcher.is_running) self.assertFalse(callbacks[1].called) self.assertIdentical(results[1], True) juju-0.7.orig/juju/state/tests/test_errors.py0000644000000000000000000002731312135220114017603 0ustar 00000000000000from textwrap import dedent from juju.lib.testing import TestCase from juju.state.endpoint import RelationEndpoint from juju.state.errors import ( JujuError, StateError, StateChanged, CharmStateNotFound, ServiceStateNotFound, ServiceUnitStateNotFound, MachineStateNotFound, MachineStateInUse, NoUnusedMachines, ServiceUnitStateMachineAlreadyAssigned, ServiceStateNameInUse, BadServiceStateName, EnvironmentStateNotFound, RelationAlreadyExists, RelationStateNotFound, UnitRelationStateNotFound, UnitRelationStateAlreadyAssigned, UnknownRelationRole, BadDescriptor, DuplicateEndpoints, IncompatibleEndpoints, NoMatchingEndpoints, AmbiguousRelation, ServiceUnitStateMachineNotAssigned, ServiceUnitDebugAlreadyEnabled, ServiceUnitResolvedAlreadyEnabled, ServiceUnitUpgradeAlreadyEnabled, ServiceUnitRelationResolvedAlreadyEnabled, PrincipalNotFound, RelationBrokenContextError, PrincipalServiceUnitRequired, NotSubordinateCharm, UnitMissingContainer, SubordinateUsedAsContainer, InvalidRelationIdentity, UnsupportedSubordinateServiceRemoval, IllegalSubordinateMachineAssignment) class StateErrorsTest(TestCase): def assertIsStateError(self, error): self.assertTrue(isinstance(error, StateError)) self.assertTrue(isinstance(error, JujuError)) def test_state_changed(self): error = StateChanged() self.assertIsStateError(error) self.assertEquals(str(error), "State changed while operation was in progress") def test_principal_not_found(self): error = PrincipalNotFound("joe") self.assertIsStateError(error) self.assertEquals(str(error), "Principal 'joe' not found") def test_charm_not_found(self): error = CharmStateNotFound("namespace:name-123") self.assertIsStateError(error) self.assertEquals(error.charm_id, "namespace:name-123") self.assertEquals(str(error), "Charm 'namespace:name-123' was not found") def test_service_not_found(self): error = ServiceStateNotFound("wordpress") self.assertIsStateError(error) self.assertEquals(error.service_name, "wordpress") self.assertEquals(str(error), "Service 'wordpress' was not found") def test_service_unit_not_found(self): error = ServiceUnitStateNotFound("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals(str(error), "Service unit 'wordpress/0' was not found") def test_machine_not_found(self): error = MachineStateNotFound(0) self.assertIsStateError(error) self.assertEquals(error.machine_id, 0) self.assertEquals(str(error), "Machine 0 was not found") def test_machine_in_use(self): error = MachineStateInUse(0) self.assertIsStateError(error) self.assertEquals(error.machine_id, 0) self.assertEquals( str(error), "Resources are currently assigned to machine 0") def test_no_unused_machines(self): error = NoUnusedMachines() self.assertIsStateError(error) self.assertEquals( str(error), "No unused machines are available for assignment") def test_machine_already_assigned(self): error = ServiceUnitStateMachineAlreadyAssigned("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals(str(error), "Service unit 'wordpress/0' is already assigned " "to a machine") def test_unit_machine_not_assigned(self): error = ServiceUnitStateMachineNotAssigned("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") 
self.assertEquals(str(error), "Service unit 'wordpress/0' is not assigned " "to a machine") def test_unit_already_in_debug_mode(self): error = ServiceUnitDebugAlreadyEnabled("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals( str(error), "Service unit 'wordpress/0' is already in debug mode.") def test_unit_already_marked_for_upgrade(self): error = ServiceUnitUpgradeAlreadyEnabled("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals( str(error), "Service unit 'wordpress/0' is already marked for upgrade.") def test_unit_already_in_resolved_mode(self): error = ServiceUnitResolvedAlreadyEnabled("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals( str(error), "Service unit 'wordpress/0' is already marked as resolved.") def test_unit_already_in_relation_resolved_mode(self): error = ServiceUnitRelationResolvedAlreadyEnabled("wordpress/0") self.assertIsStateError(error) self.assertEquals(error.unit_name, "wordpress/0") self.assertEquals( str(error), "Service unit %r already has relations marked as resolved." % ( "wordpress/0")) def test_service_name_in_use(self): error = ServiceStateNameInUse("wordpress") self.assertIsStateError(error) self.assertEquals(error.service_name, "wordpress") self.assertEquals(str(error), "Service name 'wordpress' is already in use") def test_bad_service_name(self): error = BadServiceStateName("wordpress", "mysql") self.assertIsStateError(error) self.assertEquals(error.expected_name, "wordpress") self.assertEquals(error.obtained_name, "mysql") self.assertEquals(str(error), "Expected service name 'wordpress' but got 'mysql'") def test_environment_not_found(self): error = EnvironmentStateNotFound() self.assertIsStateError(error) self.assertEquals(str(error), "Environment state was not found") def test_relation_already_exists(self): error = RelationAlreadyExists( (RelationEndpoint("wordpress", "mysql", "mysql", "client"), RelationEndpoint("mysql", "mysql", "db", "server"))) self.assertIsStateError(error) self.assertEqual( str(error), "Relation mysql already exists between wordpress and mysql") def test_relation_state_not_found(self): error = RelationStateNotFound() self.assertIsStateError(error) self.assertEqual(str(error), "Relation not found") def test_unit_relation_state_not_found(self): error = UnitRelationStateNotFound( "rel-1", "rel-client", "mysql/0") self.assertIsStateError(error) msg = "The relation 'rel-client' has no unit state for 'mysql/0'" self.assertEquals(str(error), msg) def test_unit_relation_state_exists(self): error = UnitRelationStateAlreadyAssigned( "rel-id", "rel-client", "mysql/0") self.assertIsStateError(error) msg = "The relation 'rel-client' already contains a unit for 'mysql/0'" self.assertEquals(str(error), msg) def test_unknown_relation_role(self): error = UnknownRelationRole("rel-id", "server2", "service-name") self.assertIsStateError(error) msg = "Unknown relation role 'server2' for service 'service-name'" self.assertEquals(str(error), msg) def test_bad_descriptor(self): error = BadDescriptor("a:b:c") self.assertTrue(isinstance(error, JujuError)) msg = "Bad descriptor: 'a:b:c'" self.assertEquals(str(error), msg) def test_duplicate_endpoints(self): riak_ep = RelationEndpoint("riak", "riak", "ring", "peer") error = DuplicateEndpoints(riak_ep, riak_ep) self.assertIsStateError(error) self.assertTrue("riak" in str(error)) def test_incompatible_endpoints(self): 
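        """Endpoints that share no common relation type are rejected."""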
error = IncompatibleEndpoints( RelationEndpoint("mysql", "mysql", "db", "server"), RelationEndpoint("riak", "riak", "ring", "peer")) self.assertIsStateError(error) self.assertTrue("mysql" in str(error)) self.assertTrue("riak" in str(error)) def test_no_matching_endpoints(self): error = NoMatchingEndpoints() self.assertIsStateError(error) self.assertEqual("No matching endpoints", str(error)) def test_ambiguous_relation(self): def endpoints(*pairs): return [( RelationEndpoint(*pair[0].split()), RelationEndpoint(*pair[1].split())) for pair in pairs] error = AmbiguousRelation( ("myblog", "mydb"), endpoints( ("myblog mysql db client", "mydb mysql db-admin server"), ("myblog mysql db client", "mydb mysql db server"))) self.assertIsStateError(error) self.assertEquals( str(error), dedent("""\ Ambiguous relation 'myblog mydb'; could refer to: 'myblog:db mydb:db' (mysql client / mysql server) 'myblog:db mydb:db-admin' (mysql client / mysql server)""")) def test_relation_broken_context(self): error = RelationBrokenContextError("+++ OUT OF CHEESE ERROR +++") self.assertIsStateError(error) self.assertEquals(str(error), "+++ OUT OF CHEESE ERROR +++") def test_unit_missing_container(self): error = UnitMissingContainer("blubber/0") self.assertIsStateError(error) self.assertEquals(str(error), "The unit blubber/0 expected a principal " "container but none was assigned.") def test_principal_service_unit_required(self): error = PrincipalServiceUnitRequired("lard", 1) self.assertIsStateError(error) self.assertEquals(str(error), "Expected principal service unit as container for " "lard instance, got 1") def test_subordinate_used_as_container(self): error = SubordinateUsedAsContainer("lard", "blubber/0") self.assertIsStateError(error) self.assertEquals(str(error), "Attempted to assign unit of lard " "to subordinate blubber/0") def test_not_subordinate_charm(self): error = NotSubordinateCharm("lard", "blubber/0") self.assertIsStateError(error) self.assertEquals(str(error), "lard cannot be used as subordinate to blubber/0") def test_unsupported_subordinate_service_removal(self): error = UnsupportedSubordinateServiceRemoval("lard", "blubber") self.assertIsStateError(error) self.assertEquals(str(error), "Unsupported attempt to destroy subordinate " "service 'lard' while principal service " "'blubber' is related.") def test_invalid_relation_ident(self): error = InvalidRelationIdentity("invalid-id$forty-two") self.assertTrue(isinstance(error, JujuError)) self.assertTrue(isinstance(error, ValueError)) self.assertEquals( str(error), "Not a valid relation id: 'invalid-id$forty-two'") def test_illegal_subordinate_machine_assignment(self): error = IllegalSubordinateMachineAssignment("blubber/1") self.assertTrue(isinstance(error, JujuError)) self.assertEquals( str(error), "Unable to assign subordinate blubber/1 to machine.") juju-0.7.orig/juju/state/tests/test_firewall.py0000644000000000000000000007157512135220114020105 0ustar 00000000000000import logging from twisted.internet.defer import ( Deferred, inlineCallbacks, fail, returnValue, succeed) from juju.errors import ProviderInteractionError from juju.lib.mocker import MATCH from juju.machine.tests.test_constraints import series_constraints from juju.providers.dummy import DummyMachine, MachineProvider from juju.state.errors import StopWatcher from juju.state.firewall import FirewallManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager from juju.state.tests.test_service import ServiceStateManagerTestBase 
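
# A mocker matcher accepting any DummyMachine instance, so expectations
# on provider calls need not name a specific machine object.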
MATCH_MACHINE = MATCH(lambda x: isinstance(x, DummyMachine)) class FirewallTestBase(ServiceStateManagerTestBase): @inlineCallbacks def setUp(self): yield super(FirewallTestBase, self).setUp() self._running = True self.environment = self.config.get_default() self.provider = self.environment.get_machine_provider() self.firewall_manager = FirewallManager( self.client, self.is_running, self.provider) self.service_state_manager = ServiceStateManager(self.client) self.output = self.capture_logging(level=logging.DEBUG) # The following methods are used to provide the scaffolding given # by the provisioning agent, which normally runs the FirewallManager def is_running(self): return self._running def start(self): self.service_state_manager.watch_service_states( self.firewall_manager.watch_service_changes) def stop(self): self._running = False @inlineCallbacks def provide_machine(self, machine_state): machines = yield self.provider.start_machine( {"machine-id": machine_state.id}) instance_id = machines[0].instance_id yield machine_state.set_instance_id(instance_id) def wait_on_expected_units(self, expected): """Returns deferred for waiting on `expected` unit names. These unit names may require the firewall to have ports opened and/or closed. """ condition_met = Deferred() seen = set() def observer(unit_state): unit_name = unit_state.unit_name seen.add(unit_name) if seen >= expected: # Call the callback just once, since it is possible # for this condition to be satisfied multiple times in # running tests because of background activity if not condition_met.called: condition_met.callback(True) return succeed(True) self.firewall_manager.add_open_close_ports_observer(observer) return condition_met def wait_on_expected_machines(self, expected): """Returns deferred for waiting on `expected` machine IDs. These machines may require the firewall to have ports opened and/or closed. """ condition_met = Deferred() seen = set() def observer(machine_id): seen.add(machine_id) if seen >= expected: # Call the callback just once, since it is possible # for this condition to be satisfied multiple times in # running tests because of background activity if not condition_met.called: condition_met.callback(True) return succeed(True) self.firewall_manager.add_open_close_ports_on_machine_observer( observer) return condition_met class FirewallServiceTest(FirewallTestBase): @inlineCallbacks def test_service_exposed_flag_changes(self): """Verify that a service unit is checked whenever a change occurs such that ports may need to be opened and/or closed for the machine corresponding to a given service unit.
""" self.start() expected_units = self.wait_on_expected_units( set(["wordpress/0"])) wordpress = yield self.add_service("wordpress") yield wordpress.add_unit_state() yield wordpress.set_exposed_flag() self.assertTrue((yield expected_units)) # Then clear the flag, see that it triggers on the expected units expected_units = self.wait_on_expected_units( set(["wordpress/0"])) yield wordpress.clear_exposed_flag() self.assertTrue((yield expected_units)) # Re-expose wordpress: set the flag again, verify that it # triggers on the expected units expected_units = self.wait_on_expected_units( set(["wordpress/0"])) yield wordpress.set_exposed_flag() self.assertTrue((yield expected_units)) self.stop() @inlineCallbacks def test_add_remove_service_units_for_exposed_service(self): """Verify that adding/removing service units for an exposed service triggers the appropriate firewall management of opening/closing ports on the machines for the corresponding service units. """ self.start() wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() # Adding service units to this exposed service will trigger expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1"])) wordpress_0 = yield wordpress.add_unit_state() yield wordpress.add_unit_state() self.assertTrue((yield expected_units)) # Removing service units will also trigger expected_units = self.wait_on_expected_units( set(["wordpress/2"])) yield wordpress.remove_unit_state(wordpress_0) yield wordpress.add_unit_state() self.assertTrue((yield expected_units)) self.stop() @inlineCallbacks def test_open_close_ports(self): """Verify that opening/closing ports triggers the appropriate firewall management for the corresponding service units. """ self.start() expected_units = self.wait_on_expected_units( set(["wordpress/0"])) wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() wordpress_1 = yield wordpress.add_unit_state() yield wordpress.add_unit_state() yield wordpress_0.open_port(443, "tcp") yield wordpress_0.open_port(80, "tcp") yield wordpress_0.close_port(443, "tcp") self.assertTrue((yield expected_units)) expected_units = self.wait_on_expected_units( set(["wordpress/1", "wordpress/3"])) wordpress_3 = yield wordpress.add_unit_state() yield wordpress_1.open_port(53, "udp") yield wordpress_3.open_port(80, "tcp") self.assertTrue((yield expected_units)) expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1", "wordpress/3"])) yield wordpress.clear_exposed_flag() self.assertTrue((yield expected_units)) self.stop() @inlineCallbacks def test_remove_service_state(self): """Verify that firewall mgmt for corresponding service units is triggered upon the service's removal. 
""" self.start() expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1"])) wordpress = yield self.add_service("wordpress") yield wordpress.add_unit_state() yield wordpress.add_unit_state() yield wordpress.set_exposed_flag() self.assertTrue((yield expected_units)) # Do not clear the exposed flag prior to removal, triggering # should still occur as expected yield self.service_state_manager.remove_service_state(wordpress) self.stop() @inlineCallbacks def test_port_mgmt_for_unexposed_service_is_a_nop(self): """Verify that activity on an unexposed service does NOT trigger firewall mgmt for the corresponding service unit.""" self.start() expected_units = self.wait_on_expected_units( set(["not-called"])) wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.open_port(53, "tcp") # The observer should not be called in this case self.assertFalse(expected_units.called) self.stop() @inlineCallbacks def test_provisioning_agent_restart(self): """Verify that firewall management is correct if the agent restarts. In particular, this test verifies that all state relevant for firewall management is stored in ZK and not in the agent itself. """ # Store into ZK relevant state, this might have been observed # in a scenario in which the agent has previously been # running. wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() wordpress_1 = yield wordpress.add_unit_state() yield wordpress_1.open_port(443, "tcp") yield wordpress_1.open_port(80, "tcp") yield wordpress.set_exposed_flag() # Now simulate agent start self.start() # Verify the expected service units are observed as needing # firewall mgmt expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1"])) yield wordpress_0.open_port(53, "udp") yield wordpress_1.close_port(443, "tcp") self.assertTrue((yield expected_units)) # Also verify that opening/closing ports work as expected expected_units = self.wait_on_expected_units( set(["wordpress/1"])) yield wordpress_1.close_port(80, "tcp") expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1"])) yield wordpress.clear_exposed_flag() self.assertTrue((yield expected_units)) self.stop() class FirewallMachineTest(FirewallTestBase): def add_machine_state(self): manager = MachineStateManager(self.client) return manager.add_machine_state(series_constraints) @inlineCallbacks def get_provider_ports(self, machine): instance_id = yield machine.get_instance_id() machine_provider = yield self.provider.get_machine(instance_id) provider_ports = yield self.provider.get_opened_ports( machine_provider, machine.id) returnValue(provider_ports) def test_open_close_ports_on_machine(self): """Verify opening/closing ports on a machine works properly. 
In particular this is done without watch support.""" machine = yield self.add_machine_state() # Provision the machine so provider-side ports can be inspected below yield self.provide_machine(machine) yield self.firewall_manager.process_machine(machine) # Expose a service wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.open_port(80, "tcp") yield wordpress_0.open_port(443, "tcp") yield wordpress_0.assign_to_machine(machine) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp"), (443, "tcp")])) self.assertIn("Opened 80/tcp on provider machine 0", self.output.getvalue()) self.assertIn("Opened 443/tcp on provider machine 0", self.output.getvalue()) # Now change port setup yield wordpress_0.open_port(8080, "tcp") yield wordpress_0.close_port(443, "tcp") yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp"), (8080, "tcp")])) self.assertIn("Opened 8080/tcp on provider machine 0", self.output.getvalue()) self.assertIn("Closed 443/tcp on provider machine 0", self.output.getvalue()) @inlineCallbacks def test_open_close_ports_on_unassigned_machine(self): """Verify corner case that nothing happens on an unassigned machine.""" machine = yield self.add_machine_state() yield self.provide_machine(machine) yield self.firewall_manager.process_machine(machine) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set()) @inlineCallbacks def test_open_close_ports_on_machine_unexposed_service(self): """Verify opening/closing ports on a machine works properly. In particular this is done without watch support.""" machine = yield self.add_machine_state() yield self.provide_machine(machine) wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() # Port activity, but service is not exposed yield wordpress_0.open_port(80, "tcp") yield wordpress_0.open_port(443, "tcp") yield wordpress_0.assign_to_machine(machine) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set()) # Now expose it yield wordpress.set_exposed_flag() yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp"), (443, "tcp")])) @inlineCallbacks def test_open_close_ports_on_machine_not_yet_provided(self): """Verify that opening/closing ports will eventually succeed once a machine is provided.
""" machine = yield self.add_machine_state() wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.open_port(80, "tcp") yield wordpress_0.open_port(443, "tcp") yield wordpress_0.assign_to_machine(machine) # First attempt to open ports quietly fails (except for # logging) because the machine has not yet been provisioned yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertIn("No provisioned machine for machine 0", self.output.getvalue()) yield self.provide_machine(machine) # Machine is now provisioned (normally visible in the # provisioning agent through periodic rescan and corresponding # watches) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp"), (443, "tcp")])) @inlineCallbacks def test_immediately_close_previously_opened_ports(self): """Verify machine with already open ports is closed upon provisioning. This can happen because of the security group associated with the machine is reused. """ # Get a machine from the provider, ensure that it already has # open ports by manually opening the port. This could be done # by reuse, or an administrator hand tweaking the security # group in EC2 itself, if using the EC2 provider machine = yield self.add_machine_state() provider_machine, = yield self.provider.start_machine( {"machine-id": machine.id}) yield machine.set_instance_id(provider_machine.instance_id) yield self.provider.open_port(provider_machine, machine.id, 53, "tcp") self.assertEqual((yield self.get_provider_ports(machine)), set([(53, "tcp")])) # Assign wordpress/0 to this machine and apply firewall policy wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.open_port(80, "tcp") yield wordpress_0.open_port(443, "tcp") yield wordpress_0.assign_to_machine(machine) yield self.firewall_manager.process_machine(machine) # Verify the initial manual open of 53/tcp, followed by the # close due to the firewall processing self.assertLogLines(self.output.getvalue(), ["Opened 53/tcp on provider machine 0", "Closed 53/tcp on provider machine 0"]) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp"), (443, "tcp")])) @inlineCallbacks def test_open_close_ports_in_stopped_agent_stops_watch(self): """Verify code called by watches properly stops when agent stops.""" self.stop() yield self.assertFailure( self.firewall_manager.open_close_ports_on_machine(0), StopWatcher) @inlineCallbacks def test_watches_trigger_port_mgmt(self): """Verify that watches properly trigger firewall management for the corresponding service units on the corresponding machines. 
""" self.start() # Immediately expose drupal = yield self.add_service("drupal") wordpress = yield self.add_service("wordpress") yield drupal.set_exposed_flag() yield wordpress.set_exposed_flag() # Then add these units drupal_0 = yield drupal.add_unit_state() wordpress_0 = yield wordpress.add_unit_state() wordpress_1 = yield wordpress.add_unit_state() wordpress_2 = yield wordpress.add_unit_state() # Assign some machines; in particular verify that multiple # service units on one machine works properly with opening # firewall machine_0 = yield self.add_machine_state() machine_1 = yield self.add_machine_state() machine_2 = yield self.add_machine_state() yield self.provide_machine(machine_0) yield self.provide_machine(machine_1) yield self.provide_machine(machine_2) yield drupal_0.assign_to_machine(machine_0) yield wordpress_0.assign_to_machine(machine_0) yield wordpress_1.assign_to_machine(machine_1) yield wordpress_2.assign_to_machine(machine_2) # Simulate service units opening ports expected_machines = self.wait_on_expected_machines(set([0, 1])) expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1", "drupal/0"])) yield drupal_0.open_port(8080, "tcp") yield drupal_0.open_port(443, "tcp") yield wordpress_0.open_port(80, "tcp") yield wordpress_1.open_port(80, "tcp") self.assertTrue((yield expected_units)) self.assertTrue((yield expected_machines)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(80, "tcp"), (443, "tcp"), (8080, "tcp")])) self.assertEqual((yield self.get_provider_ports(machine_1)), set([(80, "tcp")])) # Simulate service units close port expected_machines = self.wait_on_expected_machines(set([1, 2])) yield wordpress_1.close_port(80, "tcp") yield wordpress_2.open_port(80, "tcp") self.assertTrue((yield expected_machines)) self.assertEqual((yield self.get_provider_ports(machine_1)), set()) # Simulate service units open port expected_machines = self.wait_on_expected_machines(set([0])) yield wordpress_0.open_port(53, "udp") self.assertTrue((yield expected_machines)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(53, "udp"), (80, "tcp"), (443, "tcp"), (8080, "tcp")])) self.stop() @inlineCallbacks def test_late_expose_properly_triggers(self): """Verify that an expose flag properly cascades the corresponding watches to perform the desired firewall mgmt. 
""" self.start() drupal = yield self.add_service("drupal") wordpress = yield self.add_service("wordpress") # Then add these units drupal_0 = yield drupal.add_unit_state() wordpress_0 = yield wordpress.add_unit_state() wordpress_1 = yield wordpress.add_unit_state() machine_0 = yield self.add_machine_state() machine_1 = yield self.add_machine_state() yield self.provide_machine(machine_0) yield self.provide_machine(machine_1) yield drupal_0.assign_to_machine(machine_0) yield wordpress_0.assign_to_machine(machine_0) yield wordpress_1.assign_to_machine(machine_1) # Simulate service units opening ports expected_machines = self.wait_on_expected_machines(set([0, 1])) expected_units = self.wait_on_expected_units( set(["wordpress/0", "wordpress/1"])) yield drupal_0.open_port(8080, "tcp") yield drupal_0.open_port(443, "tcp") yield wordpress_0.open_port(80, "tcp") yield wordpress_1.open_port(80, "tcp") yield wordpress.set_exposed_flag() self.assertTrue((yield expected_units)) self.assertTrue((yield expected_machines)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(80, "tcp")])) self.assertEqual((yield self.get_provider_ports(machine_1)), set([(80, "tcp")])) # Expose drupal service, verify ports are opened on provider expected_machines = self.wait_on_expected_machines(set([0])) expected_units = self.wait_on_expected_units(set(["drupal/0"])) yield drupal.set_exposed_flag() self.assertTrue((yield expected_machines)) self.assertTrue((yield expected_units)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(80, "tcp"), (443, "tcp"), (8080, "tcp")])) # Unexpose drupal service, verify only wordpress ports are now opened expected_machines = self.wait_on_expected_machines(set([0])) expected_units = self.wait_on_expected_units(set(["drupal/0"])) yield drupal.clear_exposed_flag() self.assertTrue((yield expected_machines)) self.assertTrue((yield expected_units)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(80, "tcp")])) # Re-expose drupal service, verify ports are once again opened expected_machines = self.wait_on_expected_machines(set([0])) expected_units = self.wait_on_expected_units(set(["drupal/0"])) yield drupal.set_exposed_flag() self.assertTrue((yield expected_machines)) self.assertTrue((yield expected_units)) self.assertEqual((yield self.get_provider_ports(machine_0)), set([(80, "tcp"), (443, "tcp"), (8080, "tcp")])) self.stop() @inlineCallbacks def test_open_close_ports_on_machine_will_retry(self): """Verify port mgmt for a machine will retry if there's a failure.""" mock_provider = self.mocker.patch(MachineProvider) mock_provider.open_port(MATCH_MACHINE, 0, 80, "tcp") self.mocker.result(fail( TypeError("'NoneType' object is not iterable"))) mock_provider.open_port(MATCH_MACHINE, 0, 80, "tcp") self.mocker.result(fail( ProviderInteractionError("Some sort of EC2 problem"))) mock_provider.open_port(MATCH_MACHINE, 0, 80, "tcp") self.mocker.passthrough() self.mocker.replay() machine = yield self.add_machine_state() yield self.provide_machine(machine) # Expose a service and attempt to open/close ports. The first # attempt will see the simulated failure. 
wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.assign_to_machine(machine) yield self.firewall_manager.process_machine(machine) yield wordpress_0.open_port(80, "tcp") yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set()) self.assertIn( "Got exception in opening/closing ports, will retry", self.output.getvalue()) self.assertIn("TypeError: 'NoneType' object is not iterable", self.output.getvalue()) # Retries will now happen in the periodic recheck. First one # still fails due to simulated error. yield self.firewall_manager.process_machine(machine) self.assertEqual((yield self.get_provider_ports(machine)), set()) self.assertIn("ProviderInteractionError: Some sort of EC2 problem", self.output.getvalue()) # Third time is the charm in the mock setup, the recheck succeeds yield self.firewall_manager.process_machine(machine) self.assertEqual((yield self.get_provider_ports(machine)), set([(80, "tcp")])) self.assertIn("Opened 80/tcp on provider machine 0", self.output.getvalue()) @inlineCallbacks def test_process_machine_ignores_stop_watcher(self): """Verify that `process_machine` catches `StopWatcher`. `process_machine` calls `open_close_ports_on_machine`, which, as verified in an earlier test, raises a `StopWatcher` exception to shut down watches that use it in the event of agent shutdown. Verify this dual usage does not cause issues while the agent is being stopped. """ mock_provider = self.mocker.patch(MachineProvider) mock_provider.open_port(MATCH_MACHINE, 0, 80, "tcp") self.mocker.result(fail( TypeError("'NoneType' object is not iterable"))) self.mocker.replay() machine = yield self.add_machine_state() yield self.provide_machine(machine) # Expose a service and attempt to open/close ports. The first # attempt will see the simulated failure. wordpress = yield self.add_service("wordpress") yield wordpress.set_exposed_flag() wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.assign_to_machine(machine) yield self.firewall_manager.process_machine(machine) yield wordpress_0.open_port(80, "tcp") yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual((yield self.get_provider_ports(machine)), set()) self.assertIn( "Got exception in opening/closing ports, will retry", self.output.getvalue()) self.assertIn("TypeError: 'NoneType' object is not iterable", self.output.getvalue()) # Stop the provisioning agent self.stop() # But retries can potentially still happen anyway, just # make certain nothing bad happens. yield self.firewall_manager.process_machine(machine) class Observer(object): def __init__(self, calls, name): self.calls = calls self.name = name def __call__(self, obj): self.calls[0].add((self.name, obj)) class FirewallObserversTest(FirewallTestBase): @inlineCallbacks def test_observe_open_close_ports(self): """Verify one or more observers can be established on action.""" wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() yield wordpress.set_exposed_flag() # Add one observer, verify it gets called calls = [set()] self.firewall_manager.add_open_close_ports_observer( Observer(calls, "a")) yield self.firewall_manager.open_close_ports(wordpress_0) self.assertEqual(calls[0], set([("a", wordpress_0)])) # Reset records of calls, and then add a second observer. # Verify both get called.
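# (Observers are additive: "a" stays registered alongside "b", so a
# subsequent open_close_ports call should notify both.)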
calls[0] = set() self.firewall_manager.add_open_close_ports_observer( Observer(calls, "b")) yield self.firewall_manager.open_close_ports(wordpress_0) self.assertEqual( calls[0], set([("a", wordpress_0), ("b", wordpress_0)])) @inlineCallbacks def test_observe_open_close_ports_on_machine(self): """Verify one or more observers can be established on action.""" machine = yield self.add_machine_state() # Add one observer, verify it gets called calls = [set()] self.firewall_manager.add_open_close_ports_on_machine_observer( Observer(calls, "a")) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual(calls[0], set([("a", machine.id)])) # Reset records of calls, and then add a second observer. # Verify both get called. calls[0] = set() self.firewall_manager.add_open_close_ports_on_machine_observer( Observer(calls, "b")) yield self.firewall_manager.open_close_ports_on_machine(machine.id) self.assertEqual( calls[0], set([("a", machine.id), ("b", machine.id)])) juju-0.7.orig/juju/state/tests/test_hook.py0000644000000000000000000011355712135220114017235 0ustar 00000000000000import logging from twisted.internet.defer import inlineCallbacks, returnValue from juju.lib.pick import pick_attr from juju.lib import serializer from juju.lib.testing import TestCase from juju.state.endpoint import RelationEndpoint from juju.state.hook import ( DepartedRelationHookContext, HookContext, RelationChange, RelationHookContext) from juju.state.errors import ( UnitRelationStateNotFound, RelationBrokenContextError, RelationStateNotFound, InvalidRelationIdentity) from juju.state.tests.test_relation import RelationTestBase from juju.state.utils import AddedItem, DeletedItem, ModifiedItem class RelationChangeTest(TestCase): def test_change_properties(self): change = RelationChange("db:42", "membership", "mysql/0") self.assertEqual(change.relation_name, "db") self.assertEqual(change.relation_ident, "db:42") self.assertEqual(change.change_type, "membership") self.assertEqual(change.unit_name, "mysql/0") class CommonHookContextTestsMixin(object): @inlineCallbacks def test_config_get(self): """Verify we can get config settings. This is a simple test that basic I/O works through the context. """ config = yield self.service.get_config() config.update({"hello": "world"}) yield config.write() data = yield self.context.get_config() self.assertEqual(data, {"hello": "world", "title": "My Title", "username": "admin001"}) # Verify that context.flush triggers writes as well data["goodbye"] = "goodnight" yield self.context.flush() # get a new yamlstate from the service itself config = yield self.service.get_config() self.assertEqual(config["goodbye"], "goodnight") @inlineCallbacks def test_config_get_cache(self): """Verify we can get config settings. This is a simple test that basic I/O works through the context. 
""" config = yield self.service.get_config() config.update({"hello": "world"}) yield config.write() data = yield self.context.get_config() self.assertEqual(data, {"hello": "world", "title": "My Title", "username": "admin001"}) d2 = yield self.context.get_config() self.assertIs(data, d2) @inlineCallbacks def test_hook_knows_service(self): """Verify that hooks can get their local service.""" service = yield self.context.get_local_service() self.assertEqual(service.service_name, self.service.service_name) @inlineCallbacks def test_hook_knows_unit_state(self): """Verify that hook has access to its local unit state.""" unit = yield self.context.get_local_unit_state() self.assertEqual(unit.unit_name, self.unit.unit_name) class HookContextTestBase(RelationTestBase): @inlineCallbacks def setUp(self): yield super(HookContextTestBase, self).setUp() wordpress_ep = RelationEndpoint( "wordpress", "client-server", "database", "client") mysql_ep = RelationEndpoint( "mysql", "client-server", "db", "server") self.wordpress_states = yield self.\ add_relation_service_unit_from_endpoints( wordpress_ep, mysql_ep) self.mysql_states = yield self.add_opposite_service_unit( self.wordpress_states) self.relation = self.mysql_states["relation"] @inlineCallbacks def add_another_blog(self, blog_name): blog_ep = RelationEndpoint(blog_name, "client-server", "app", "client") # Fully construct states for the relation connecting to this additional blog other_mysql_states = yield self.add_relation_service_unit_to_another_endpoint( self.mysql_states, blog_ep) # Then complete in the opposite direction blog_states = yield self.add_opposite_service_unit(other_mysql_states) yield blog_states['service_relations'][-1].add_unit_state( self.mysql_states['unit']) returnValue(blog_states) @inlineCallbacks def add_db_admin_tool(self, admin_name): """Add another relation, using a different relation name""" admin_ep = RelationEndpoint( admin_name, "client-server", "admin-app", "client") mysql_admin_ep = RelationEndpoint( "mysql", "client-server", "db-admin", "server") mysql_admin_states = yield self.reuse_service_unit_in_new_relation( self.mysql_states, mysql_admin_ep, admin_ep) admin_states = yield self.add_opposite_service_unit(mysql_admin_states) returnValue(admin_states) @inlineCallbacks def reuse_service_unit_in_new_relation(self, reused_states, *endpoints): """Reuse an existing service unit as part of a new relation""" service_state = reused_states["service"] unit_state = reused_states["unit"] # 1. Setup all service states service_states = [service_state] for endpoint in endpoints[1:]: service_state = yield self.add_service(endpoint.service_name) service_states.append(service_state) # 2. And join together in a relation relation_state, service_relation_states = \ yield self.relation_manager.add_relation_state( *endpoints) # 3. 
Add a service unit to only the first endpoint - we need # to test what happens when service units are added to the # other service state (if any), so do so separately relation_unit_state = yield service_relation_states[0].add_unit_state( unit_state) returnValue({ "endpoints": list(endpoints), "service": service_states[0], "services": service_states, "unit": unit_state, "relation": relation_state, "service_relation": service_relation_states[0], "unit_relation": relation_unit_state, "service_relations": service_relation_states}) class HookContextTest(HookContextTestBase, CommonHookContextTestsMixin): @inlineCallbacks def setUp(self): yield super(HookContextTest, self).setUp() self.service = self.wordpress_states["service"] self.unit = self.wordpress_states["unit"] self.context = HookContext( self.client, unit_name=self.unit.unit_name) def get_context(self, unit_name): return HookContext(self.client, unit_name=unit_name) @inlineCallbacks def test_get_relation_idents(self): """Verify relation idents can be queried on non-relation contexts.""" yield self.add_another_blog("wordpress2") context = self.get_context("mysql/0") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) self.assertEqual( set((yield context.get_relation_idents("not-a-relation"))), set()) # Add some more relations, verify nothing changes from this # context's perspective yield self.add_another_blog("wordpress3") yield self.add_db_admin_tool("admin") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) self.assertEqual( set((yield context.get_relation_idents(None))), set(["db:0", "db:1"])) # Create a new context to see this change new_context = self.get_context("mysql/0") self.assertEqual( set((yield new_context.get_relation_idents("db"))), set(["db:0", "db:1", "db:2"])) self.assertEqual( set((yield new_context.get_relation_idents(None))), set(["db:0", "db:1", "db:2", "db-admin:3"])) @inlineCallbacks def test_get_relation_idents_partial_updates_to_zk(self): """Verify relation idents do not reflect partial updates to ZK.""" log = self.capture_logging(level=logging.DEBUG) # 1. Partial update - no corresponding service relation unit yield self.add_relation_service_unit_to_another_endpoint( self.mysql_states, RelationEndpoint( "wordpress2", "client-server", "app", "client")) # 2. Do a complete update of adding another relation and # corresponding units yield self.add_another_blog("wordpress3") context = self.get_context("mysql/0") # 3. Observe only the relation ids for wordpress, wordpress3 self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:2"])) self.assertIn("Ignoring partially constructed relation: db:1", log.getvalue()) # 4. Finally, relation ids for a nonexistent relation are # still not seen, or cause an error. 
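# (i.e. the lookup quietly degrades to an empty set rather than raising.)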
self.assertEqual( set((yield context.get_relation_idents("not-a-relation"))), set()) class RelationHookContextTest(HookContextTestBase, CommonHookContextTestsMixin): @inlineCallbacks def setUp(self): yield super(RelationHookContextTest, self).setUp() self.service = self.wordpress_states["service"] self.unit = self.wordpress_states["unit"] self.reset_context() def reset_context(self): self.context = self.get_context( self.wordpress_states, "modified", "mysql/0") def get_context(self, states, change_type, unit_name): return RelationHookContext( self.client, states["unit_relation"], states["service_relation"].relation_ident, unit_name=states["unit"].unit_name) @inlineCallbacks def test_get(self): """Settings from a related unit can be retrieved as a blob.""" yield self.mysql_states["unit_relation"].set_data({"hello": "world"}) # use mocker to verify we only access the node once. mock_client = self.mocker.patch(self.client) mock_client.get(self.get_unit_settings_path(self.mysql_states)) self.mocker.passthrough() self.mocker.replay() data = yield self.context.get("mysql/0") self.assertEqual(data, {"hello": "world"}) @inlineCallbacks def test_get_uses_copied_dict(self): """If we retrieve the settings with a get, modifying those values does not modify the underlying write buffer. They must be explicitly set. """ yield self.context.set_value("hello", u"world") data = yield self.context.get("wordpress/0") self.assertEqual( data, {"hello": "world", "private-address": "wordpress-0.example.com"}) del data["hello"] current_data = yield self.context.get("wordpress/0") self.assertNotEqual(current_data, data) self.client.set(self.get_unit_settings_path(self.mysql_states), serializer.dump({"hello": "world"})) data = yield self.context.get("mysql/0") data["abc"] = 1 data = yield self.context.get("mysql/0") del data["hello"] current_data = yield self.context.get("mysql/0") self.assertEqual(current_data, {"hello": "world"}) @inlineCallbacks def test_get_value(self): """Settings from a related unit can be retrieved by name.""" # Getting a value from an existing empty unit returns empty # strings for all keys. port = yield self.context.get_value("mysql/0", "port") self.assertEqual(port, "") # Write some data to retrieve and refetch the context yield self.mysql_states["unit_relation"].set_data({ "host": "xe.example.com", "port": 2222}) self.reset_context() # use mocker to verify we only access the node once. mock_client = self.mocker.patch(self.client) mock_client.get(self.get_unit_settings_path(self.mysql_states)) self.mocker.passthrough() self.mocker.replay() port = yield self.context.get_value("mysql/0", "port") self.assertEqual(port, 2222) host = yield self.context.get_value("mysql/0", "host") self.assertEqual(host, "xe.example.com") magic = yield self.context.get_value("mysql/0", "unknown") self.assertEqual(magic, "") # fetching a value from a non-existent unit raises an error. yield self.assertFailure( self.context.get_value("mysql/5", "zebra"), UnitRelationStateNotFound) @inlineCallbacks def test_get_self_value(self): """Settings from the unit associated with the context can be retrieved. This also holds true for values locally modified on the context.
""" data = yield self.context.get_value("wordpress/0", "magic") self.assertEqual(data, "") yield self.context.set_value("magic", "room") data = yield self.context.get_value("wordpress/0", "magic") self.assertEqual(data, "room") @inlineCallbacks def test_set(self): """The unit relation settings can be done as a blob.""" yield self.assertFailure(self.context.set("abc"), TypeError) data = {"abc": 12, "bar": "21"} yield self.context.set(data) changes = yield self.context.flush() content, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) data["private-address"] = "wordpress-0.example.com" self.assertEqual(serializer.load(content), data) self.assertEqual(changes, [AddedItem("abc", 12), AddedItem("bar", "21")]) @inlineCallbacks def test_set_value(self): """Values can be set by name, and are written at flush time.""" yield self.context.set_value("zebra", 12) yield self.context.set_value("donkey", u"abc") data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertEqual(serializer.load(data), {"private-address": "wordpress-0.example.com"}) changes = yield self.context.flush() data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertEqual( serializer.load(data), {"zebra": 12, "donkey": "abc", "private-address": "wordpress-0.example.com"}) self.assertEqual( changes, [AddedItem("donkey", u"abc"), AddedItem("zebra", 12)]) @inlineCallbacks def test_delete_value(self): """A value can be deleted via key. """ yield self.client.set( self.get_unit_settings_path( self.wordpress_states), serializer.dump({"key": "secret"})) yield self.context.delete_value("key") changes = yield self.context.flush() data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertNotIn("key", serializer.load(data)) self.assertEqual(changes, [DeletedItem("key", "secret")]) @inlineCallbacks def test_delete_nonexistent_value(self): """Deleting a non existent key is a no-op. """ yield self.client.set( self.get_unit_settings_path(self.wordpress_states), serializer.dump({"lantern": "green"})) yield self.context.delete_value("key") changes = yield self.context.flush() data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertEqual(serializer.load(data), {"lantern": "green"}) self.assertEqual(changes, []) @inlineCallbacks def test_empty_flush_maintains_value(self): """Flushing a context which has no writes is a noop.""" yield self.client.set( self.get_unit_settings_path(self.wordpress_states), serializer.dump({"key": "secret"})) changes = yield self.context.flush() data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertEqual(serializer.load(data), {"key": "secret"}) self.assertEqual(changes, []) @inlineCallbacks def test_flush_merges_setting_values(self): """When flushing a context we merge the changes with the current value of the node. The goal is to allow external processes to modify, delete, and add new values and allow those changes to persist to the final state, IFF the context has not also modified that key. If the context has modified the key, then context change takes precendence over the external change. 
""" data = {"key": "secret", "seed": "21", "castle": "keep", "tower": "moat", "db": "wordpress", "host": "xe1.example.com"} yield self.client.set( self.get_unit_settings_path(self.wordpress_states), serializer.dump(data)) # On the context: # - add a new key # - modify an existing key # - delete an old key/value yield self.context.set_value("home", "good") yield self.context.set_value("db", 21) yield self.context.delete_value("seed") # Also test conflict on delete, modify, and add yield self.context.delete_value("castle") yield self.context.set_value("tower", "rock") yield self.context.set_value("zoo", "keeper") # Outside of the context: # - add a new key/value. # - modify an existing value # - delete a key data["port"] = 22 data["host"] = "xe2.example.com" del data["key"] # also test conflict on delete, modify, and add del data["castle"] data["zoo"] = "mammal" data["tower"] = "london" yield self.client.set( self.get_unit_settings_path(self.wordpress_states), serializer.dump(data)) changes = yield self.context.flush() data, stat = yield self.client.get( self.get_unit_settings_path(self.wordpress_states)) self.assertEqual( serializer.load(data), {"port": 22, "host": "xe2.example.com", "db": 21, "home": "good", "tower": "rock", "zoo": "keeper"}) self.assertEqual( changes, [ModifiedItem("db", "wordpress", 21), AddedItem("zoo", "keeper"), DeletedItem("seed", "21"), AddedItem("home", "good"), ModifiedItem("tower", "moat", "rock")]) @inlineCallbacks def test_set_value_existing_setting(self): """We can set a value even if we have existing saved settings.""" yield self.client.set( self.get_unit_settings_path(self.wordpress_states), serializer.dump({"key": "secret"})) yield self.context.set_value("magic", "room") value = yield self.context.get_value("wordpress/0", "key") self.assertEqual(value, "secret") value = yield self.context.get_value("wordpress/0", "magic") self.assertEqual(value, "room") @inlineCallbacks def test_get_members(self): """The related units of a relation can be retrieved.""" members = yield self.context.get_members() self.assertEqual(members, ["mysql/0"]) # Add a new member and refetch yield self.add_related_service_unit(self.mysql_states) members2 = yield self.context.get_members() # There should be no change in the retrieved members. self.assertEqual(members, members2) @inlineCallbacks def test_get_members_peer(self): """When retrieving members from a peer relation, the unit associated to the context is not included in the set. """ riak1_states = yield self.add_relation_service_unit( "riak", "riak", "peer", "peer") riak2_states = yield self.add_related_service_unit( riak1_states) context = self.get_context(riak1_states, "modified", "riak/1") members = yield context.get_members() self.assertEqual(members, [riak2_states["unit"].unit_name]) @inlineCallbacks def test_tracking_read_nodes(self): """The context tracks which nodes it has read, this is used by external components to determine if the context has read a value that may have subsequently been modified, and act accordingly. """ # notify the context of a change self.assertFalse(self.context.has_read("mysql/0")) # read the node data yield self.context.get("mysql/0") # Now verify we've read it self.assertTrue(self.context.has_read("mysql/0")) # And only it. 
self.assertFalse(self.context.has_read("mysql/1")) @inlineCallbacks def test_invalid_get_relation_id_and_scope(self): """Verify `InvalidRelationIdentity` is raised for invalid idents""" yield self.assertFailure( self.context.get_relation_id_and_scope("not-a-relation:99"), RelationStateNotFound) e = yield self.assertFailure( self.context.get_relation_id_and_scope("invalid-id:forty-two"), InvalidRelationIdentity) self.assertEqual( str(e), "Not a valid relation id: 'invalid-id:forty-two'") yield self.assertFailure( self.context.get_relation_id_and_scope("invalid-id*42"), InvalidRelationIdentity) yield self.assertFailure( self.context.get_relation_id_and_scope("invalid-id:42:extra"), InvalidRelationIdentity) yield self.assertFailure( self.context.get_relation_id_and_scope("unknown-name:0"), RelationStateNotFound) @inlineCallbacks def test_get_relation_id_and_scope(self): """Verify relation id and scope is returned for relation idents""" # The mysql service has relations with two wordpress services, # verify that the corresponding relation hook context can see # both relations yield self.add_another_blog("wordpress2") mysql_context = self.get_context( self.mysql_states, "modified", "mysql/0") self.assertEqual( (yield mysql_context.get_relation_id_and_scope("db:0")), ("relation-0000000000", "global")) self.assertEqual( (yield mysql_context.get_relation_id_and_scope("db:1")), ("relation-0000000001", "global")) # Need to use the correct relation name in the relation id yield self.assertFailure( mysql_context.get_relation_id_and_scope("database:0"), RelationStateNotFound) yield self.assertFailure( mysql_context.get_relation_id_and_scope("database:1"), RelationStateNotFound) # The first wordpress service can only see the relation it # has with mysql in its hook context wordpress1_context = self.get_context( self.wordpress_states, "modified", "wordpress/0") self.assertEqual( (yield wordpress1_context.get_relation_id_and_scope("database:0")), ("relation-0000000000", "global")) yield self.assertFailure( wordpress1_context.get_relation_id_and_scope("database:1"), RelationStateNotFound) yield self.assertFailure( wordpress1_context.get_relation_id_and_scope("db:0"), RelationStateNotFound) @inlineCallbacks def test_get_relation_hook_context(self): """Verify usage of child hook contexts""" yield self.add_another_blog("wordpress2") context = self.get_context(self.mysql_states, "modified", "mysql/0") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) db0 = yield context.get_relation_hook_context("db:0") self.assertEqual(db0.relation_ident, context.relation_ident) self.assertEqual((yield db0.get_members()), ["wordpress/0"]) # But unlike through the Invoker, no caching so these contexts # will not be identical self.assertNotEqual(db0, context) db1 = yield context.get_relation_hook_context("db:1") self.assertEqual(db1.relation_ident, "db:1") self.assertEqual((yield db1.get_members()), ["wordpress2/0"]) # Add some more relations, verify nothing changes from this # context's perspective yield self.add_another_blog("wordpress3") yield self.assertFailure( context.get_relation_hook_context("db:2"), RelationStateNotFound) # Next, create a new context to see this change in child contexts new_context = self.get_context( self.mysql_states, "modified", "mysql/0") db2 = yield new_context.get_relation_hook_context("db:2") self.assertEqual(db2.relation_ident, "db:2") self.assertEqual((yield db2.get_members()), ["wordpress3/0"]) @inlineCallbacks def 
test_get_relation_hook_context_while_removing_relation(self): """Verify usage of child hook contexts once a relation is removed""" yield self.add_another_blog("wordpress2") context = self.get_context(self.mysql_states, "modified", "mysql/0") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) # Remove the first relation (db:0). Verify it's still cached # from this context yield self.relation_manager.remove_relation_state( self.wordpress_states["relation"]) self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) db0 = yield context.get_relation_hook_context("db:0") db1 = yield context.get_relation_hook_context("db:1") self.assertEqual(db0.relation_ident, "db:0") self.assertEqual(db1.relation_ident, "db:1") # Unit membership changes (with the removed db:0 relation), # however, are visible since get_members does another topology # read (albeit subsequently cached) self.assertEqual((yield db0.get_members()), []) self.assertEqual((yield db1.get_members()), ["wordpress2/0"]) # Create a new context and verify db:0 is now gone new_context = self.get_context( self.mysql_states, "modified", "mysql/0") self.assertEqual( set((yield new_context.get_relation_idents("db"))), set(["db:1"])) yield self.assertFailure( new_context.get_relation_hook_context("db:0"), RelationStateNotFound) db1 = yield new_context.get_relation_hook_context("db:1") self.assertEqual(db1.relation_ident, "db:1") self.assertEqual((yield db1.get_members()), ["wordpress2/0"]) @inlineCallbacks def test_get_relation_idents(self): """Verify relation idents can be queried on relation hook contexts.""" yield self.add_another_blog("wordpress2") context = self.get_context(self.mysql_states, "modified", "mysql/0") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) self.assertEqual( set((yield context.get_relation_idents("not-a-relation"))), set()) # Add some more relations, verify nothing changes from this # context's perspective yield self.add_another_blog("wordpress3") yield self.add_db_admin_tool("admin") self.assertEqual( set((yield context.get_relation_idents("db"))), set(["db:0", "db:1"])) self.assertEqual( set((yield context.get_relation_idents(None))), set(["db:0", "db:1"])) # Create a new context to see this change new_context = self.get_context( self.mysql_states, "modified", "mysql/0") self.assertEqual( set((yield new_context.get_relation_idents("db"))), set(["db:0", "db:1", "db:2"])) self.assertEqual( set((yield new_context.get_relation_idents(None))), set(["db:0", "db:1", "db:2", "db-admin:3"])) class DepartedRelationHookContextTest( HookContextTestBase, CommonHookContextTestsMixin): @inlineCallbacks def setUp(self): yield super(DepartedRelationHookContextTest, self).setUp() self.service = self.wordpress_states["service"] self.unit = self.wordpress_states["unit"] relation = self.wordpress_states["service_relation"] self.context = DepartedRelationHookContext( self.client, self.unit.unit_name, self.unit.internal_id, relation.relation_name, relation.internal_relation_id) @inlineCallbacks def test_get_members(self): """Related units cannot be retrieved.""" members = yield self.context.get_members() self.assertEqual(members, []) # Add a new member and refetch yield self.add_related_service_unit(self.mysql_states) members2 = yield self.context.get_members() # There should be no change in the retrieved members. 
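# (A departed relation context deliberately reports an empty membership;
# the related units can no longer be accessed at this point, as the
# broken-relation tests below verify.)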
self.assertEqual(members2, []) @inlineCallbacks def test_get_self(self): """Own settings can be retrieved.""" self.client.set(self.get_unit_settings_path(self.wordpress_states), serializer.dump({"hello": "world"})) data = yield self.context.get(None) self.assertEquals(data, {"hello": "world"}) @inlineCallbacks def test_get_self_by_name(self): """Own settings can be retrieved by name.""" self.client.set(self.get_unit_settings_path(self.wordpress_states), serializer.dump({"hello": "world"})) data = yield self.context.get("wordpress/0") self.assertEquals(data, {"hello": "world"}) @inlineCallbacks def test_get_other(self): """Other unit settings cannot be retrieved.""" e = yield self.assertFailure( self.context.get("mysql/0"), RelationBrokenContextError) self.assertEquals( str(e), "Cannot access other units in broken relation") @inlineCallbacks def test_get_value_self(self): """Own settings can be retrieved.""" self.client.set(self.get_unit_settings_path(self.wordpress_states), serializer.dump({"hello": "world"})) self.assertEquals( (yield self.context.get_value("wordpress/0", "hello")), "world") self.assertEquals( (yield self.context.get_value("wordpress/0", "goodbye")), "") @inlineCallbacks def test_get_value_other(self): """Other unit settings cannot be retrieved.""" e = yield self.assertFailure( self.context.get_value("mysql/0", "anything"), RelationBrokenContextError) self.assertEquals( str(e), "Cannot access other units in broken relation") @inlineCallbacks def test_set(self): """Own settings cannot be changed.""" e = yield self.assertFailure( self.context.set({"anything": "anything"}), RelationBrokenContextError) self.assertEquals( str(e), "Cannot change settings in broken relation") @inlineCallbacks def test_set_value(self): """Own settings cannot be changed.""" e = yield self.assertFailure( self.context.set_value("anything", "anything"), RelationBrokenContextError) self.assertEquals( str(e), "Cannot change settings in broken relation") @inlineCallbacks def test_delete_value(self): """Own settings cannot be changed.""" e = yield self.assertFailure( self.context.delete_value("anything"), RelationBrokenContextError) self.assertEquals( str(e), "Cannot change settings in broken relation") @inlineCallbacks def test_has_read(self): """We can tell whether settings have been read""" self.assertFalse(self.context.has_read("wordpress/0")) self.assertFalse(self.context.has_read("mysql/0")) yield self.context.get(None) self.assertTrue(self.context.has_read("wordpress/0")) self.assertFalse(self.context.has_read("mysql/0")) yield self.assertFailure( self.context.get_value("mysql/0", "anything"), RelationBrokenContextError) self.assertTrue(self.context.has_read("wordpress/0")) self.assertFalse(self.context.has_read("mysql/0")) @inlineCallbacks def test_relation_ident(self): """Verify relation ident and enumerate other relations for context.""" self.assertEqual(self.context.relation_ident, "database:0") self.assertEqual((yield self.context.get_relation_idents(None)), ["database:0"]) class SubordinateRelationHookContextTest(HookContextTestBase): @inlineCallbacks def setUp(self): yield super(SubordinateRelationHookContextTest, self).setUp() relation = yield self._build_relation() self.context = DepartedRelationHookContext( self.client, self.unit.unit_name, self.unit.internal_id, relation.relation_name, relation.internal_relation_id) @inlineCallbacks def _build_relation(self): mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", 
"juju-info", "juju-info", "client", "container") mysql, my_units = yield self.get_service_and_units_by_charm_name( "mysql", 1) self.assertFalse((yield mysql.is_subordinate())) log, log_units = yield self.get_service_and_units_by_charm_name( "logging") self.assertTrue((yield log.is_subordinate())) # add the relationship so we can create units with containers relation_state, service_states = (yield self.relation_manager.add_relation_state( mysql_ep, logging_ep)) log, log_units = yield self.get_service_and_units_by_charm_name( "logging", containers=my_units) self.assertTrue((yield log.is_subordinate())) for lu in log_units: self.assertTrue((yield lu.is_subordinate())) self.mu1 = my_units[0] self.lu1 = log_units[0] self.mystate = pick_attr(service_states, relation_role="server") self.logstate = pick_attr(service_states, relation_role="client") yield self.mystate.add_unit_state(self.mu1) self.log_unit_relation = yield self.logstate.add_unit_state(self.lu1) self.my_unit_relation = yield self.mystate.add_unit_state(self.mu1) self.unit = self.lu1 self.container_unit = self.mu1 self.service = log self.relation_state = relation_state returnValue(self.logstate) def get_unit_settings_path(self, relation, unit, container=None): container_info = "" if container: container_info = "%s/" % container.internal_id unit_relation_path = "/relations/%s/%ssettings/%s" % ( relation.internal_id, container_info, unit.internal_id) return unit_relation_path @inlineCallbacks def test_get_departed_members(self): """Related units cannot be retrieved.""" members = yield self.context.get_members() self.assertEqual(members, []) # Add a new member and refetch yield self.add_related_service_unit(self.mysql_states) members2 = yield self.context.get_members() # There should be no change in the retrieved members. 
self.assertEqual(members2, []) @inlineCallbacks def test_get_self(self): """Own settings can be retrieved.""" settings_path = self.get_unit_settings_path( self.relation_state, self.unit, self.container_unit) self.client.set(settings_path, serializer.dump({"hello": "world"})) data = yield self.context.get(None) self.assertEquals(data, {"hello": "world"}) def get_context(self, service_relation, unit_relation, unit, change_type, unit_name): change = RelationChange( service_relation.relation_ident, change_type, unit_name) return RelationHookContext( self.client, unit_relation, change, unit_name=unit.unit_name) @inlineCallbacks def test_get_members_subordinate_context(self): context = self.get_context(self.logstate, self.log_unit_relation, self.lu1, "joined", "logging/0") self.assertEquals((yield context.get_members()), ["mysql/1"]) context = self.get_context(self.mystate, self.my_unit_relation, self.mu1, "joined", "mysql/1") self.assertEquals((yield context.get_members()), ["logging/0"]) @inlineCallbacks def test_get_settings_path(self): @inlineCallbacks def verify_settings_path(service_relation, unit_relation, unit, expected): sp = yield unit_relation.get_settings_path() self.assertEquals(sp, expected) context = self.get_context(service_relation, unit_relation, unit, "joined", "") sp2 = yield context.get_settings_path(unit.internal_id) self.assertEquals(sp2, expected) yield verify_settings_path(self.mystate, self.my_unit_relation, self.mu1, "/relations/relation-0000000001/" "unit-0000000002/settings/unit-0000000002") yield verify_settings_path(self.logstate, self.log_unit_relation, self.lu1, "/relations/relation-0000000001/" "unit-0000000002/settings/unit-0000000003") juju-0.7.orig/juju/state/tests/test_initialize.py0000644000000000000000000000700412135220114020423 0ustar 00000000000000import zookeeper from twisted.internet.defer import inlineCallbacks from txzookeeper.tests.utils import deleteTree from juju.environment.tests.test_config import EnvironmentsConfigTestBase from juju.state.auth import make_identity from juju.state.environment import ( GlobalSettingsStateManager, EnvironmentStateManager) from juju.state.initialize import StateHierarchy from juju.state.machine import MachineStateManager class LayoutTest(EnvironmentsConfigTestBase): @inlineCallbacks def setUp(self): yield super(LayoutTest, self).setUp() self.log = self.capture_logging("juju.state.init") zookeeper.set_debug_level(0) self.client = self.get_zookeeper_client() self.identity = make_identity("admin:genie") constraints_data = { "arch": "arm", "cpu": None, "ubuntu-series": "cranky", "provider-type": "dummy"} self.layout = StateHierarchy( self.client, self.identity, "i-abcdef", constraints_data, "dummy") yield self.client.connect() def tearDown(self): deleteTree(handle=self.client.handle) self.client.close() @inlineCallbacks def assert_existence_and_acl(self, path): exists = yield self.client.exists(path) self.assertTrue(exists) acls, stat = yield self.client.get_acl(path) found_admin_acl = False for acl in acls: if acl["id"] == self.identity \ and acl["perms"] == zookeeper.PERM_ALL: found_admin_acl = True break self.assertTrue(found_admin_acl) @inlineCallbacks def test_initialize(self): yield self.layout.initialize() yield self.assert_existence_and_acl("/charms") yield self.assert_existence_and_acl("/services") yield self.assert_existence_and_acl("/units") yield self.assert_existence_and_acl("/machines") yield self.assert_existence_and_acl("/relations") yield self.assert_existence_and_acl("/initialized") # To check that 
the constraints landed correctly, we need the # environment config to have been sent, or we won't be able to # get a provider to help us construct the appropriate objects. yield self.push_default_config(with_constraints=False) esm = EnvironmentStateManager(self.client) env_constraints = yield esm.get_constraints() self.assertEquals(env_constraints, { "provider-type": "dummy", "ubuntu-series": None, "arch": "arm", "cpu": None, "mem": 512}) machine_state_manager = MachineStateManager(self.client) machine_state = yield machine_state_manager.get_machine_state(0) machine_constraints = yield machine_state.get_constraints() self.assertTrue(machine_constraints.complete) self.assertEquals(machine_constraints, { "provider-type": "dummy", "ubuntu-series": "cranky", "arch": "arm", "cpu": None, "mem": 512}) instance_id = yield machine_state.get_instance_id() self.assertEqual(instance_id, "i-abcdef") settings_manager = GlobalSettingsStateManager(self.client) env_id = yield settings_manager.get_environment_id() self.assertEqual(len(env_id), 32) self.assertEqual((yield settings_manager.get_provider_type()), "dummy") self.assertEqual( self.log.getvalue().strip(), "Initializing zookeeper hierarchy") juju-0.7.orig/juju/state/tests/test_machine.py0000644000000000000000000007276712135220114017710 0ustar 00000000000000import functools from twisted.internet.defer import inlineCallbacks, Deferred, returnValue from juju.charm.tests import local_charm_id from juju.errors import ConstraintError from juju.lib import serializer from juju.machine.tests.test_constraints import ( dummy_constraints, series_constraints) from juju.state.charm import CharmStateManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager from juju.state.errors import MachineStateNotFound, MachineStateInUse from juju.state.utils import YAMLState from juju.state.tests.common import StateTestBase class MachineStateManagerTest(StateTestBase): @inlineCallbacks def setUp(self): yield super(MachineStateManagerTest, self).setUp() yield self.push_default_config() self.charm_state_manager = CharmStateManager(self.client) self.machine_state_manager = MachineStateManager(self.client) self.service_state_manager = ServiceStateManager(self.client) self.charm_state = yield self.charm_state_manager.add_charm_state( local_charm_id(self.charm), self.charm, "") def add_machine_state(self, constraints=None): return self.machine_state_manager.add_machine_state( constraints or series_constraints) @inlineCallbacks def add_service(self, service_name): service_state = yield self.service_state_manager.add_service_state( service_name, self.charm_state, dummy_constraints) returnValue(service_state) @inlineCallbacks def test_add_machine(self): """ Adding a machine state should register it in zookeeper. 
""" machine_state1 = yield self.add_machine_state() machine_state2 = yield self.add_machine_state() self.assertEquals(machine_state1.id, 0) self.assertEquals(machine_state1.internal_id, "machine-0000000000") constraints1 = yield machine_state1.get_constraints() self.assertEquals(constraints1, series_constraints) self.assertEquals(machine_state2.id, 1) self.assertEquals(machine_state2.internal_id, "machine-0000000001") constraints2 = yield machine_state2.get_constraints() self.assertEquals(constraints2, series_constraints) children = yield self.client.get_children("/machines") self.assertEquals(sorted(children), ["machine-0000000000", "machine-0000000001"]) topology = yield self.get_topology() self.assertTrue(topology.has_machine("machine-0000000000")) self.assertTrue(topology.has_machine("machine-0000000001")) @inlineCallbacks def test_incomplete_constraints(self): e = yield self.assertFailure( self.add_machine_state(dummy_constraints), ConstraintError) self.assertEquals( str(e), "Unprovisionable machine: incomplete constraints") @inlineCallbacks def test_missing_constraints(self): """ensure compatibility with nodes written for previous versions""" yield self.add_machine_state() machine = yield self.machine_state_manager.get_machine_state(0) path = "/machines/" + machine.internal_id node = YAMLState(self.client, path) yield node.read() del node["constraints"] yield node.write() constraints = yield machine.get_constraints() self.assertEquals(constraints.data, {}) @inlineCallbacks def test_machine_str_representation(self): """The str(machine) value includes the machine id. """ machine_state1 = yield self.add_machine_state() self.assertEqual( str(machine_state1), "" % (0)) @inlineCallbacks def test_remove_machine(self): """ Adding a machine state should register it in zookeeper. """ machine_state1 = yield self.add_machine_state() yield self.add_machine_state() removed = yield self.machine_state_manager.remove_machine_state( machine_state1.id) self.assertTrue(removed) children = yield self.client.get_children("/machines") self.assertEquals(sorted(children), ["machine-0000000001"]) topology = yield self.get_topology() self.assertFalse(topology.has_machine("machine-0000000000")) self.assertTrue(topology.has_machine("machine-0000000001")) # Removing a non-existing machine again won't fail, since the end # intention is preserved. This makes dealing with concurrency easier. # However, False will be returned in this case. removed = yield self.machine_state_manager.remove_machine_state( machine_state1.id) self.assertFalse(removed) @inlineCallbacks def test_remove_machine_with_agent(self): """Removing a machine with a connected machine agent should succeed. The removal signals intent to remove a working machine (with an agent) with the provisioning agent to remove it subsequently. """ # Add two machines. machine_state1 = yield self.add_machine_state() yield self.add_machine_state() # Connect an agent yield machine_state1.connect_agent() # Remove a machine removed = yield self.machine_state_manager.remove_machine_state( machine_state1.id) self.assertTrue(removed) # Verify the second one is still present children = yield self.client.get_children("/machines") self.assertEquals(sorted(children), ["machine-0000000001"]) # Verify the topology state. 
topology = yield self.get_topology() self.assertFalse(topology.has_machine("machine-0000000000")) self.assertTrue(topology.has_machine("machine-0000000001")) @inlineCallbacks def test_get_machine_and_check_attributes(self): """ Getting a machine state should be possible using both the user-oriented id and the internal id. """ yield self.add_machine_state() yield self.add_machine_state() machine_state = yield self.machine_state_manager.get_machine_state(0) self.assertEquals(machine_state.id, 0) machine_state = yield self.machine_state_manager.get_machine_state("0") self.assertEquals(machine_state.id, 0) yield self.assertFailure( self.machine_state_manager.get_machine_state("a"), MachineStateNotFound) @inlineCallbacks def test_get_machine_not_found(self): """ Getting a machine state which is not available should errback a meaningful error. """ # No state whatsoever. try: yield self.machine_state_manager.get_machine_state(0) except MachineStateNotFound, e: self.assertEquals(e.machine_id, 0) else: self.fail("Error not raised") # Some state. yield self.add_machine_state() try: yield self.machine_state_manager.get_machine_state(1) except MachineStateNotFound, e: self.assertEquals(e.machine_id, 1) else: self.fail("Error not raised") @inlineCallbacks def test_get_all_machine_states(self): machines = yield self.machine_state_manager.get_all_machine_states() self.assertFalse(machines) yield self.add_machine_state() machines = yield self.machine_state_manager.get_all_machine_states() self.assertEquals(len(machines), 1) yield self.add_machine_state() machines = yield self.machine_state_manager.get_all_machine_states() self.assertEquals(len(machines), 2) @inlineCallbacks def test_set_functions(self): m1 = yield self.add_machine_state() m2 = yield self.add_machine_state() m3 = yield self.machine_state_manager.get_machine_state(0) m4 = yield self.machine_state_manager.get_machine_state(1) self.assertEquals(hash(m1), hash(m3)) self.assertEquals(hash(m2), hash(m4)) self.assertEquals(m1, m3) self.assertEquals(m2, m4) self.assertNotEqual(m1, object()) self.assertNotEqual(m1, m2) @inlineCallbacks def test_set_and_get_instance_id(self): """ Each provider must have its own notion of an id for machines it offers. The MachineState enables keeping track of that for reference, so we must be able to get and set with simple accessor methods. """ machine_state0 = yield self.add_machine_state() yield machine_state0.set_instance_id("custom-id") machine_state1 = yield self.machine_state_manager.get_machine_state( machine_state0.id) instance_id = yield machine_state1.get_instance_id() self.assertEquals(instance_id, "custom-id") content, stat = yield self.client.get("/machines/machine-0000000000") self.assertEquals( serializer.load(content)["provider-machine-id"], "custom-id") @inlineCallbacks def test_set_instance_id_preserves_existing_data(self): """ If there's more data in the machine node, it will be preserved. """ machine_state = yield self.add_machine_state() yield self.client.set("/machines/machine-0000000000", serializer.dump({"foo": "bar"})) yield machine_state.set_instance_id("custom-id") content, stat = yield self.client.get("/machines/machine-0000000000") self.assertEquals(serializer.load(content), {"provider-machine-id": "custom-id", "foo": "bar"}) @inlineCallbacks def test_set_instance_id_if_machine_state_is_removed(self): """ The set_instance_id method shouldn't attempt to recreate the zk node in case it gets removed. Instead, it should raise a MachineStateNotFound exception. 
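A hedged caller-side sketch of this contract (an editorial addition, not original text; the instance id value is hypothetical): inside an @inlineCallbacks generator the failure surfaces as an ordinary exception:

    try:
        yield machine_state.set_instance_id("i-abc123")
    except MachineStateNotFound:
        pass  # removed underneath us; do not recreate the node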
""" machine_state = yield self.add_machine_state() removed = yield self.machine_state_manager.remove_machine_state( machine_state.id) self.assertTrue(removed) d = machine_state.set_instance_id(123) yield self.assertFailure(d, MachineStateNotFound) exists = yield self.client.exists("/machines/machine-0000000000") self.assertFalse(exists) @inlineCallbacks def test_get_unset_instance_id(self): """ When the instance_id is still unset, None is returned. """ machine_state = yield self.add_machine_state() instance_id = yield machine_state.get_instance_id() self.assertEquals(instance_id, None) @inlineCallbacks def test_get_instance_id_when_machine_is_removed(self): """ When the machine doesn't exist, raise MachineStateNotFound. """ machine_state = yield self.add_machine_state() removed = yield self.machine_state_manager.remove_machine_state( machine_state.id) self.assertTrue(removed) d = machine_state.get_instance_id() yield self.assertFailure(d, MachineStateNotFound) @inlineCallbacks def test_get_unset_instance_id_when_node_has_data(self): """ When the instance_id is still unset, None is returned, *even* if the node has some other data in it. """ machine_state = yield self.add_machine_state() yield self.client.set("/machines/machine-0000000000", serializer.dump({"foo": "bar"})) instance_id = yield machine_state.get_instance_id() self.assertEquals(instance_id, None) @inlineCallbacks def test_machine_agent(self): """A machine state has an associated machine agent. """ machine_state = yield self.add_machine_state() exists_d, watch_d = machine_state.watch_agent() exists = yield exists_d self.assertFalse(exists) yield machine_state.connect_agent() event = yield watch_d self.assertEqual(event.type_name, "created") self.assertEqual(event.path, "/machines/%s/agent" % machine_state.internal_id) @inlineCallbacks def test_watch_machines_initial_callback(self): """Watch machine processes initial state before returning. Note the callback is only executed if there is some meaningful state change. """ results = [] def callback(*args): results.append(True) yield self.add_machine_state() yield self.machine_state_manager.watch_machine_states(callback) self.assertTrue(results) @inlineCallbacks def test_watch_machines_when_being_created(self): """ It should be possible to start watching machines even before they are created. In this case, the callback will be made when it's actually introduced. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_machines(old_machines, new_machines): calls.append((old_machines, new_machines)) wait_callback[len(calls) - 1].callback(True) # Start watching. self.machine_state_manager.watch_machine_states(watch_machines) # Callback is still untouched. self.assertEquals(calls, []) # Add a machine, and wait for callback. yield self.add_machine_state() yield wait_callback[0] # The first callback must have been fired, and it must have None # as the first argument because that's the first machine seen. self.assertEquals(len(calls), 1) old_machines, new_machines = calls[0] self.assertEquals(old_machines, None) self.assertEquals(new_machines, set([0])) # Add a machine again. yield self.add_machine_state() yield wait_callback[1] # Now the watch callback must have been fired with two # different machine sets. The old one, and the new one. 
self.assertEquals(len(calls), 2) old_machines, new_machines = calls[1] self.assertEquals(old_machines, set([0])) self.assertEquals(new_machines, set([0, 1])) @inlineCallbacks def test_watch_machines_may_defer(self): """ The watch machines callback may return a deferred so that it performs some of its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. """ wait_callback = [Deferred() for i in range(10)] finish_callback = [Deferred() for i in range(10)] calls = [] def watch_machines(old_machines, new_machines): calls.append((old_machines, new_machines)) wait_callback[len(calls) - 1].callback(True) return finish_callback[len(calls) - 1] # Start watching. self.machine_state_manager.watch_machine_states(watch_machines) # Create the machine. yield self.add_machine_state() # Hold off until callback is started. yield wait_callback[0] # Add another machine. yield self.add_machine_state() # Give a chance for something bad to happen. yield self.sleep(0.3) # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. self.assertEquals(len(calls), 2) old_machines, new_machines = calls[1] self.assertEquals(old_machines, set([0])) self.assertEquals(new_machines, set([0, 1])) @inlineCallbacks def test_watch_machines_with_changing_topology(self): """ If the topology changes in an unrelated way, the machines watch callback should not be called with two equal arguments. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_machines(old_machines, new_machines): calls.append((old_machines, new_machines)) wait_callback[len(calls) - 1].callback(True) # Start watching. self.machine_state_manager.watch_machine_states(watch_machines) # Callback is still untouched. self.assertEquals(calls, []) # Add a machine, and wait for callback. yield self.add_machine_state() yield wait_callback[0] # Give some time to prevent changes from grouping. yield self.sleep(0.1) # Now change the topology in an unrelated way. yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) # Give some time to prevent changes from grouping. yield self.sleep(0.1) # Add a machine again. yield self.add_machine_state() yield wait_callback[1] # Finally, give a chance for the third call to happen. yield self.sleep(0.3) # But it *shouldn't* have happened. self.assertEquals(len(calls), 2) @inlineCallbacks def test_watch_service_units_processes_current_state(self): """ The watch creation method only returns after processing initial state. Note, the callback is only invoked if there is a state change that needs processing. """ machine_state0 = yield self.add_machine_state() service_state0 = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state0 = yield service_state0.add_unit_state() yield unit_state0.assign_to_machine(machine_state0) results = [] def callback(*args): results.append(True) yield machine_state0.watch_assigned_units(callback) self.assertTrue(results) @inlineCallbacks def test_watch_service_units_in_machine(self): """ We can also watch for service units which are assigned or unassigned from a specific machine. This enables the service agent to keep an eye on things it's supposed to deploy/undeploy. 
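A hedged sketch of the agent-side pattern this enables (editorial addition; `to_deploy` and `to_undeploy` are hypothetical names), using the same callback signature this test exercises:

    def watch_units(machine_id, old_units, new_units):
        previous = old_units or set()
        to_deploy = new_units - previous    # units to bring up here
        to_undeploy = previous - new_units  # units to tear down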
""" wait_callback = [Deferred() for i in range(30)] calls = [] def watch_units(machine_id, old_units, new_units): calls.append((machine_id, old_units, new_units)) wait_callback[len(calls) - 1].callback(True) watch_units0 = functools.partial(watch_units, 0) watch_units1 = functools.partial(watch_units, 1) # Create a couple of machines. machine_state0 = yield self.add_machine_state() machine_state1 = yield self.add_machine_state() # Set their watches. machine_state0.watch_assigned_units(watch_units0) machine_state1.watch_assigned_units(watch_units1) # Create some services and units. service_state0 = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state1 = yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) unit_state0 = yield service_state0.add_unit_state() unit_state1 = yield service_state0.add_unit_state() unit_state2 = yield service_state1.add_unit_state() unit_state3 = yield service_state1.add_unit_state() # With all this setup in place, no unit was actually assigned to a # machine, so no callbacks should have happened yet. self.assertEquals(calls, []) # So assign a unit, and wait for the first callback. yield unit_state0.assign_to_machine(machine_state0) yield wait_callback[0] self.assertEquals(calls, [(0, None, set(["wordpress/0"]))]) # Try it again with a different unit, same service and machine. yield unit_state1.assign_to_machine(machine_state0) yield wait_callback[1] self.assertEquals(len(calls), 2, calls) self.assertEquals(calls[1], (0, set(["wordpress/0"]), set(["wordpress/0", "wordpress/1"]))) # Try it with a different service now, same machine. yield unit_state2.assign_to_machine(machine_state0) yield wait_callback[2] self.assertEquals(len(calls), 3, calls) self.assertEquals(calls[2], (0, set(["wordpress/0", "wordpress/1"]), set(["wordpress/0", "wordpress/1", "mysql/0"]))) # Try it with a different machine altogether. yield unit_state3.assign_to_machine(machine_state1) yield wait_callback[3] self.assertEquals(len(calls), 4, calls) self.assertEquals(calls[3], (1, None, set(["mysql/1"]))) # Now let's unassign a unit from a machine. yield unit_state1.unassign_from_machine() yield wait_callback[4] self.assertEquals(len(calls), 5, calls) self.assertEquals(calls[4], (0, set(["wordpress/0", "wordpress/1", "mysql/0"]), set(["wordpress/0", "mysql/0"]))) # And finally, let's *delete* a machine. To do that, though, we must # first unassign the unit from it. This will trigger the callback # already, with an empty set of units. yield unit_state3.unassign_from_machine() yield wait_callback[5] self.assertEquals(len(calls), 6, calls) self.assertEquals(calls[5], (1, set(["mysql/1"]), set())) # Now we can remove the machine itself, but that won't cause any # callbacks, since there were no units already. removed = yield self.machine_state_manager.remove_machine_state( machine_state1.id) self.assertTrue(removed) self.assertEquals(len(calls), 6, calls) test_watch_service_units_in_machine.timeout = 10 @inlineCallbacks def test_watch_units_may_defer(self): """ The watch units callback may return a deferred so that it performs some of its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. 
""" wait_callback = [Deferred() for i in range(10)] finish_callback = [Deferred() for i in range(10)] calls = [] def watch_units(old_units, new_units): calls.append((old_units, new_units)) wait_callback[len(calls) - 1].callback(True) return finish_callback[len(calls) - 1] # Create the basic state and set the watch. machine_state = yield self.add_machine_state() machine_state.watch_assigned_units(watch_units) service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state0 = yield service_state.add_unit_state() unit_state1 = yield service_state.add_unit_state() # Shouldn't have any callbacks yet. self.assertEquals(calls, []) # Assign first unit. yield unit_state0.assign_to_machine(machine_state) # Hold off until callback is started. yield wait_callback[0] # Assign another unit. yield unit_state1.assign_to_machine(machine_state) # Give a chance for something bad to happen. yield self.sleep(0.3) # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. self.assertEquals(len(calls), 2) @inlineCallbacks def test_watch_unassignment_and_removal_at_once(self): """ If units get quickly unassigned and the machine is removed, the change may be observed as a single modification in state, and the watch has to pretend that it saw the unassigned units rather than blowing up with the missing machine. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_units(old_units, new_units): calls.append((old_units, new_units)) wait_callback[len(calls) - 1].callback(True) # Create the state ahead of time. machine_state = yield self.add_machine_state() service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() # Add the watch. machine_state.watch_assigned_units(watch_units) # Assign unit. yield unit_state.assign_to_machine(machine_state) # Wait for the callback, and discard it. yield wait_callback[0] self.assertEquals(len(calls), 1) # Grab the topology to ensure that unassignment and # machine removal are perceived at once. topology = yield self.get_topology() topology.unassign_service_unit_from_machine( service_state.internal_id, unit_state.internal_id) topology.remove_machine(machine_state.internal_id) yield self.set_topology(topology) # Hold off until callback is started. yield wait_callback[1] # Ensure we have a single call. 
self.assertEquals(len(calls), 2, calls) self.assertEquals(calls[1], (set(['wordpress/0']), set())) @inlineCallbacks def test_machine_cannot_be_removed_if_assigned(self): """Verify that a machine cannot be removed before being unassigned""" machine_state = yield self.add_machine_state() service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() yield unit_state.assign_to_machine(machine_state) ex = yield self.assertFailure( self.machine_state_manager.remove_machine_state(machine_state.id), MachineStateInUse) self.assertEqual(ex.machine_id, 0) yield unit_state.unassign_from_machine() topology = yield self.get_topology() self.assertTrue(topology.has_machine("machine-0000000000")) removed = yield self.machine_state_manager.remove_machine_state( machine_state.id) self.assertTrue(removed) topology = yield self.get_topology() self.assertFalse(topology.has_machine("machine-0000000000")) # can do this multiple times removed = yield self.machine_state_manager.remove_machine_state( machine_state.id) self.assertFalse(removed) @inlineCallbacks def test_get_all_service_unit_states(self): """Verify retrieval of service unit states related to machine state.""" # check with one service unit state for a machine, as # currently supported by provisioning machine_state = yield self.add_machine_state() wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.assign_to_machine(machine_state) unit_states = yield machine_state.get_all_service_unit_states() self.assertEqual(len(unit_states), 1) self.assertEqual(unit_states[0].unit_name, "wordpress/0") # check against multiple service units mysql = yield self.add_service("mysql") mysql_0 = yield mysql.add_unit_state() yield mysql_0.assign_to_machine(machine_state) wordpress_1 = yield wordpress.add_unit_state() yield wordpress_1.assign_to_machine(machine_state) unit_states = yield machine_state.get_all_service_unit_states() self.assertEqual(len(unit_states), 3) self.assertEqual( set(unit_state.unit_name for unit_state in unit_states), set(["wordpress/0", "wordpress/1", "mysql/0"])) # then check after unassigning the service unit from the machine yield wordpress_0.unassign_from_machine() unit_states = yield machine_state.get_all_service_unit_states() self.assertEqual(len(unit_states), 2) self.assertEqual( set(unit_state.unit_name for unit_state in unit_states), set(["wordpress/1", "mysql/0"])) @inlineCallbacks def test_get_all_service_unit_states_chaining(self): """Verify going from one service unit state, to machine, and back.""" # to be extra cautious, create an extra machine to avoid # possibly having machine id = 0 be in some way spurious yield self.add_machine_state() # create one machine state, assigned to mysql/0 and wordpress/0 machine_state = yield self.add_machine_state() wordpress = yield self.add_service("wordpress") wordpress_0 = yield wordpress.add_unit_state() yield wordpress_0.assign_to_machine(machine_state) mysql = yield self.add_service("mysql") mysql_0 = yield mysql.add_unit_state() yield mysql_0.assign_to_machine(machine_state) # verify we get back an equivalent machine state machine_id = yield wordpress_0.get_assigned_machine_id() self.assertEqual(machine_state.id, machine_id) # and verify we have the right service unit states, including # the one we started with (wordpress/0) unit_states = yield machine_state.get_all_service_unit_states() self.assertEqual(len(unit_states), 2)
self.assertEqual( set(unit_state.unit_name for unit_state in unit_states), set(["mysql/0", "wordpress/0"])) juju-0.7.orig/juju/state/tests/test_placement.py0000644000000000000000000000667512135220114020247 0ustar 00000000000000 from twisted.internet.defer import inlineCallbacks from juju.errors import InvalidPlacementPolicy from juju.machine.tests.test_constraints import ( dummy_constraints, series_constraints) from juju.state.placement import place_unit, pick_policy from juju.state.tests.test_service import ServiceStateManagerTestBase class TestPlacement(ServiceStateManagerTestBase): @inlineCallbacks def setUp(self): yield super(TestPlacement, self).setUp() self.service = yield self.add_service_from_charm("mysql") self.unit_state = yield self.service.add_unit_state() def test_pick_policy(self): mock_provider = self.mocker.mock() mock_provider.get_placement_policies() self.mocker.result(["unassigned", "local", "new"]) self.mocker.count(3) mock_provider.provider_type self.mocker.result("dummy") self.mocker.replay() # No selection gets first listed provider policy self.assertEqual( pick_policy(None, mock_provider), "unassigned") # If the user selection doesn't match we get an error self.assertRaises( InvalidPlacementPolicy, pick_policy, "smart", mock_provider) # The user choice is respected if its available self.assertEqual( pick_policy("new", mock_provider), "new") @inlineCallbacks def test_unassign_placement(self): # Never picked; bad constraints yield self.machine_state_manager.add_machine_state( dummy_constraints.with_series("different-series")) # Would be picked if not hosting another unit machine1 = yield self.machine_state_manager.add_machine_state( series_constraints) yield self.unit_state.assign_to_machine(machine1) # Actually will be picked machine2 = yield self.machine_state_manager.add_machine_state( series_constraints) # This will have the default constraints already set, and will # therefore accept machine2 unit2 = yield self.service.add_unit_state() ms2 = yield place_unit(self.client, "unassigned", unit2) self.assertEqual(ms2.id, machine2.id) # ...and placing a new unit creates a new machine state, with correct # constraints (for the PA to use while actually provisioning) unit3 = yield self.service.add_unit_state() ms3 = yield place_unit(self.client, "unassigned", unit3) self.assertEqual(ms3.id, machine2.id + 1) constraints = yield ms3.get_constraints() self.assertEqual(constraints, series_constraints) @inlineCallbacks def test_local_placement(self): ms0 = yield self.machine_state_manager.add_machine_state( series_constraints) self.assertEqual(ms0.id, 0) # These shouldn't be used with local (but should be available # to prove a different policy is at work) yield self.machine_state_manager.add_machine_state( series_constraints) yield self.machine_state_manager.add_machine_state( series_constraints) unit2 = yield self.service.add_unit_state() ms1 = yield place_unit(self.client, "local", self.unit_state) ms2 = yield place_unit(self.client, "local", unit2) # Everything should end up on machine 0 with local placement # even though other machines are available self.assertEqual(ms0.id, ms1.id) self.assertEqual(ms0.id, ms2.id) juju-0.7.orig/juju/state/tests/test_relation.py0000644000000000000000000023303612135220114020105 0ustar 00000000000000import logging import os import time import zookeeper from twisted.internet.defer import ( inlineCallbacks, returnValue, Deferred, fail, succeed) from juju.charm.directory import CharmDirectory from juju.charm.tests import local_charm_id from 
juju.lib import serializer from juju.machine.tests.test_constraints import dummy_constraints from juju.charm.tests.test_metadata import test_repository_path from juju.charm.tests.test_repository import unbundled_repository from juju.lib.pick import pick_attr from juju.state.charm import CharmStateManager from juju.state.endpoint import RelationEndpoint from juju.state.errors import ( DuplicateEndpoints, IncompatibleEndpoints, RelationAlreadyExists, RelationStateNotFound, StateChanged, UnitRelationStateNotFound, UnknownRelationRole, ServiceStateNameInUse, ServiceStateNotFound, CharmStateNotFound) from juju.state.relation import ( RelationStateManager, ServiceRelationState, UnitRelationState) from juju.state.service import ServiceStateManager from juju.state.tests.common import StateTestBase class RelationTestBase(StateTestBase): @inlineCallbacks def setUp(self): yield super(RelationTestBase, self).setUp() yield self.push_default_config() self.relation_manager = RelationStateManager(self.client) self.charm_manager = CharmStateManager(self.client) self.service_manager = ServiceStateManager(self.client) self.charm_state = None @inlineCallbacks def add_service(self, name): if not self.charm_state: self.charm_state = yield self.charm_manager.add_charm_state( local_charm_id(self.charm), self.charm, "") try: service_state = yield self.service_manager.add_service_state( name, self.charm_state, dummy_constraints) except ServiceStateNameInUse: service_state = yield self.service_manager.get_service_state(name) returnValue(service_state) @inlineCallbacks def add_service_from_charm( self, service_name, charm_id=None, constraints=None, charm_dir=None, charm_name=None): """Add a service from a charm. """ if not charm_id and charm_dir is None: charm_name = charm_name or service_name charm_dir = CharmDirectory(os.path.join( test_repository_path, "series", charm_name)) if charm_id is None: charm_state = yield self.charm_manager.add_charm_state( local_charm_id(charm_dir), charm_dir, "") else: charm_state = yield self.charm_manager.get_charm_state( charm_id) service_state = yield self.service_manager.add_service_state( service_name, charm_state, constraints or dummy_constraints) returnValue(service_state) @inlineCallbacks def add_relation(self, relation_type, *services): """Support older tests that don't use `RelationEndpoint`s""" endpoints = [] for service_meta in services: service_state, relation_name, relation_role = service_meta endpoints.append(RelationEndpoint( service_state.service_name, relation_type, relation_name, relation_role)) relation_state = yield self.relation_manager.add_relation_state( *endpoints) returnValue(relation_state[0]) @inlineCallbacks def add_relation_service_unit_from_endpoints(self, *endpoints): """Build the relation and add one service unit to the first endpoint. This method is used to migrate older tests that would create the relation, assign one service, add a service unit, AND then assign a service. However, service assignment is now done all at once with the relation creation. Because we are interested in testing what happens with the changes to the service units, such tests remain valid. Returns a dict to collect together the various state objects being created. This is created from the perspective of the first endpoint, but the states of all of the endpoints are also captured, so it can be worked with from the opposite endpoint, as seen in :func:`add_opposite_service_unit`. """ # 1. 
Setup all service states service_states = [] for endpoint in endpoints: service_state = yield self.add_service(endpoint.service_name) service_states.append(service_state) # 2. And join together in a relation relation_state, service_relation_states = \ yield self.relation_manager.add_relation_state( *endpoints) # 3. Add a service unit to only the first endpoint - we need # to test what happens when service units are added to the # other service state (if any), so do so separately unit_state = yield service_states[0].add_unit_state() yield unit_state.set_private_address("%s.example.com" % ( unit_state.unit_name.replace("/", "-"))) relation_unit_state = yield service_relation_states[0].add_unit_state( unit_state) returnValue({ "endpoints": list(endpoints), "service": service_states[0], "services": service_states, "unit": unit_state, "relation": relation_state, "service_relation": service_relation_states[0], "unit_relation": relation_unit_state, "service_relations": service_relation_states}) @inlineCallbacks def add_opposite_service_unit(self, other_states): """Given `other_states`, add a service unit to the opposite endpoint. Like :func:`add_relation_service_unit_from_endpoints`, this is used to support older tests. Although it's slightly awkward to use because of the attempt to be backwards compatible, it does enable the testing of a typical case: we are now bringing online a service unit on the opposite side of a relation endpoint pairing. TODO: there's probably a better name for this method. """ assert len(other_states["services"]) == 2 unit_state = yield other_states["services"][1].add_unit_state() yield unit_state.set_private_address("%s.example.com" % ( unit_state.unit_name.replace("/", "-"))) relation_unit_state = yield other_states["service_relations"][1].\ add_unit_state(unit_state) def rotate(X): rotated = X[1:] rotated.append(X[0]) return rotated returnValue({ "endpoints": rotate(other_states["endpoints"]), "service": other_states["services"][1], "services": rotate(other_states["services"]), "unit": unit_state, "relation": other_states["relation"], "service_relation": other_states["service_relations"][1], "unit_relation": relation_unit_state, "service_relations": rotate(other_states["service_relations"])}) @inlineCallbacks def add_relation_service_unit_to_another_endpoint(self, states, endpoint): """Add a relation to `endpoint` from the first endpoint in `states`. This enables a scenario of creating two services and a relation by calling :func:`add_relation_service_unit_from_endpoints`, then adding one more endpoint to the first one. Like the other functions in this series, this is here to work with tests that use the now-deleted assign service functionality.
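A hedged usage sketch (editorial addition; `states` comes from :func:`add_relation_service_unit_from_endpoints` and `another_endpoint` stands for a hypothetical, compatible RelationEndpoint):

    new_states = yield self.add_relation_service_unit_to_another_endpoint(
        states, another_endpoint)
    unit_relation = new_states["unit_relation"]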
""" new_states = states.copy() new_states["services"][1] = (yield self.add_service( endpoint.service_name)) new_states["endpoints"][1] = endpoint relation_state, service_relation_states = \ yield self.relation_manager.add_relation_state( *new_states["endpoints"]) new_states["relation"] = relation_state new_states["service_relations"] = service_relation_states new_states["service_relation"] = service_relation_states[0] new_states["unit"] = yield new_states["services"][0].add_unit_state() new_states["unit_relation"] = yield service_relation_states[0].\ add_unit_state(new_states["unit"]) returnValue(new_states) @inlineCallbacks def add_relation_service_unit(self, relation_type, service_name, relation_name="name", relation_role="role", relation_state=None, client=None): """ Create a relation, service, and service unit, with the service assigned to the relation. Optionally utilize existing relation state if passed in. If client is the service relation and the service relation state will utilize that as their zookeeper client. """ # Add the service, relation, unit states service_state = yield self.add_service(service_name) relation_state = yield self.add_relation( relation_type, (service_state, relation_name, relation_role)) unit_state = yield service_state.add_unit_state() # Get the service relation. relations = yield self.relation_manager.get_relations_for_service( service_state) for service_relation in relations: if (service_relation.internal_relation_id == relation_state.internal_id): break # Utilize a separate client if requested. if client: service_relation = ServiceRelationState( client, service_relation.internal_service_id, service_relation.internal_relation_id, service_relation.relation_role, service_relation.relation_name) # Create the relation unit state relation_unit_state = yield service_relation.add_unit_state( unit_state) returnValue({ "service": service_state, "unit": unit_state, "relation": relation_state, "service_relation": service_relation, "unit_relation": relation_unit_state}) @inlineCallbacks def add_related_service_unit(self, state_dict): """ Add a new service unit of the given service within the relation. """ unit_state = yield state_dict["service"].add_unit_state() unit_relation = yield state_dict["service_relation"].add_unit_state( unit_state) new_state_dict = dict(state_dict) new_state_dict["unit"] = unit_state new_state_dict["unit_relation"] = unit_relation returnValue(new_state_dict) def get_unit_settings_path(self, state_dict): unit_relation_path = "/relations/%s/settings/%s" % ( state_dict["relation"].internal_id, state_dict["unit"].internal_id) return unit_relation_path @inlineCallbacks def get_local_charm(self, charm_name): charm_dir = CharmDirectory( os.path.join(unbundled_repository, "series", charm_name)) try: charm_state = yield self.charm_manager.get_charm_state( local_charm_id(charm_dir)) except CharmStateNotFound: charm_state = yield self.charm_manager.add_charm_state( local_charm_id(charm_dir), charm_dir, "") returnValue(charm_state) @inlineCallbacks def get_subordinate_charm(self): """Return charm state for a subordinate charm. Many tests rely on adding relationships to a proper subordinate. This return the charm state of a testing subordinate charm. 
""" sub_charm = yield self.get_local_charm("logging") returnValue(sub_charm) @inlineCallbacks def get_service_and_units_by_charm(self, charm_state, units=None, containers=None, service_name=None): """Return [service_state, [o..n units]] `units (int)` is provided that many units will be added `containers` if provided it should be a list of unit states for containers that should be associated with each new unit (in order). This option implies units == len(containers) (though this is checked). `service_name` optional name to use for service, defaults to charm name. """ if not service_name: service_name = charm_state.name try: service_state = yield self.service_manager.get_service_state( service_name) except ServiceStateNotFound: service_state = yield self.service_manager.add_service_state( service_name, charm_state, dummy_constraints) if containers: if units is None: units = len(containers) elif len(containers) != units: raise ValueError( "Containers and number of expected units mismatch") unit_states = [] if units: for i in range(units): container = None if containers: container = containers[i] unit = yield service_state.add_unit_state( container=container) unit_states.append(unit) returnValue([service_state, unit_states]) @inlineCallbacks def get_service_and_units_by_charm_name(self, charm_name, units=None, containers=None, service_name=None): charm_state = yield self.get_local_charm(charm_name) returnValue(( yield self.get_service_and_units_by_charm( charm_state, units=units, containers=containers, service_name=service_name))) class RelationStateManagerTest(RelationTestBase): @inlineCallbacks def test_add_relation_state(self): """Adding relation will create a relation node and update topology.""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") yield self.add_service("mysql") relation_state = (yield self.relation_manager.add_relation_state( mysql_ep))[0] topology = yield self.get_topology() self.assertTrue(topology.has_relation(relation_state.internal_id)) exists = yield self.client.exists( "/relations/%s" % relation_state.internal_id) self.assertTrue(exists) exists = yield self.client.get( "/relations/%s/settings" % relation_state.internal_id) self.assertTrue(exists) @inlineCallbacks def test_add_relation_state_to_missing_service(self): """Test adding a relation to a nonexistent service""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") yield self.add_service("wordpress") # but didn't create the service for wordpress yield self.assertFailure( self.relation_manager.add_relation_state( mysql_ep, blog_ep), StateChanged) @inlineCallbacks def test_add_relation_state_bad_relation_role(self): """Test adding a relation with a bad role when is one is well defined (client or server)""" blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") bad_mysql_ep = RelationEndpoint( "mysql", "mysql", "db", "bad-server-role") bad_blog_ep = RelationEndpoint( "wordpress", "mysql", "mysql", "bad-client-role") yield self.add_service("mysql") yield self.add_service("wordpress") yield self.assertFailure( self.relation_manager.add_relation_state( bad_mysql_ep, blog_ep), IncompatibleEndpoints) yield self.assertFailure( self.relation_manager.add_relation_state( bad_blog_ep, mysql_ep), IncompatibleEndpoints) # TODO in future branch referenced in relation, also test # bad_blog_ep *and* bad_mysql_ep @inlineCallbacks def 
test_add_binary_relation_state_twice(self): """Test adding the same relation twice""" blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") yield self.add_service("mysql") yield self.add_service("wordpress") yield self.relation_manager.add_relation_state(mysql_ep, blog_ep) e = yield self.assertFailure( self.relation_manager.add_relation_state(blog_ep, mysql_ep), RelationAlreadyExists) self.assertEqual( str(e), "Relation mysql already exists between wordpress and mysql") e = yield self.assertFailure( self.relation_manager.add_relation_state(mysql_ep, blog_ep), RelationAlreadyExists) self.assertEqual( str(e), "Relation mysql already exists between mysql and wordpress") @inlineCallbacks def test_add_peer_relation_state_twice(self): """Test adding the same relation twice""" riak_ep = RelationEndpoint("riak", "riak", "ring", "peer") yield self.add_service("riak") yield self.relation_manager.add_relation_state(riak_ep) e = yield self.assertFailure( self.relation_manager.add_relation_state(riak_ep), RelationAlreadyExists) self.assertEqual(str(e), "Relation riak already exists for riak") @inlineCallbacks def test_add_relation_state_no_endpoints(self): """Test adding a relation with no endpoints (no longer allowed)""" yield self.assertFailure( self.relation_manager.add_relation_state(), TypeError) @inlineCallbacks def test_add_relation_state_relation_type_unshared(self): """Test adding a relation with endpoints not sharing a relation type""" pg_ep = RelationEndpoint("pg", "postgres", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") yield self.assertFailure( self.relation_manager.add_relation_state(pg_ep, blog_ep), IncompatibleEndpoints) @inlineCallbacks def test_add_relation_state_too_many_endpoints(self): """Test adding a relation between too many endpoints (> 2)""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") yield self.add_service("mysql") yield self.add_service("wordpress") yield self.assertFailure( self.relation_manager.add_relation_state( mysql_ep, blog_ep, mysql_ep), TypeError) @inlineCallbacks def test_add_relation_state_duplicate_peer_endpoints(self): """Test adding a relation between duplicate peer endpoints""" riak_ep = RelationEndpoint("riak", "riak", "ring", "peer") yield self.add_service("riak") yield self.assertFailure( self.relation_manager.add_relation_state(riak_ep, riak_ep), DuplicateEndpoints) @inlineCallbacks def test_add_relation_state_endpoints_duplicate_role(self): """Test adding a relation with services overlapped by duplicate role""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") drizzle_ep = RelationEndpoint("drizzle", "mysql", "db", "server") yield self.add_service("mysql") yield self.add_service("drizzle") yield self.assertFailure( self.relation_manager.add_relation_state(mysql_ep, drizzle_ep), IncompatibleEndpoints) @inlineCallbacks def test_add_relation_state_scope_container_relation(self): """Verify that container scope is applied to relation. Even in the case where only one endpoint is marked as scope:container. 
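A hedged restatement (editorial addition): container scope is effectively contagious. With `mysql_ep` declared global and `logging_ep` declared container (as set up just below), both returned service relation states report the container scope:

    _, service_states = yield self.relation_manager.add_relation_state(
        mysql_ep, logging_ep)
    # every state in service_states has relation_scope == "container"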
""" mysql_ep = RelationEndpoint( "mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint( "logging", "juju-info", "juju-info", "client", "container") yield self.add_service("mysql") yield self.add_service("logging") relation_state, service_states = \ yield self.relation_manager.add_relation_state(mysql_ep, logging_ep) # verify that the relation state instances are both marked # with the live scope (container). This happens even though # the provides side of the relation is global for service_state in service_states: self.assertEqual(service_state.relation_scope, "container") @inlineCallbacks def test_add_relation_state_scope_container_relation_2_containers(self): mysql_ep = RelationEndpoint( "mysql", "juju-info", "juju-info", "server", "container") logging_ep = RelationEndpoint( "logging", "juju-info", "juju-info", "client", "container") yield self.add_service("mysql") yield self.add_service("logging") relation_state, service_states = \ yield self.relation_manager.add_relation_state(mysql_ep, logging_ep) for service_state in service_states: self.assertEqual(service_state.relation_scope, "container") @inlineCallbacks def test_add_relation_state_scope_container_scoped_principal(self): mysql_ep = RelationEndpoint( "mysql", "juju-info", "juju-info", "server", "container") logging_ep = RelationEndpoint( "logging", "juju-info", "juju-info", "client", "global") yield self.add_service("mysql") yield self.add_service("logging") relation_state, service_states = \ yield self.relation_manager.add_relation_state(mysql_ep, logging_ep) for service_state in service_states: self.assertEqual(service_state.relation_scope, "container") @inlineCallbacks def test_remove_relation_state(self): """Removing a relation will remove it from the topology.""" # Simulate add and remove. varnish_ep = RelationEndpoint("varnish", "webcache", "cache", "server") yield self.add_service("varnish") relation_state = (yield self.relation_manager.add_relation_state( varnish_ep))[0] topology = yield self.get_topology() self.assertTrue(topology.has_relation(relation_state.internal_id)) yield self.relation_manager.remove_relation_state(relation_state) # Verify removal. topology = yield self.get_topology() self.assertFalse(topology.has_relation(relation_state.internal_id)) @inlineCallbacks def test_remove_relation_state_with_service_state(self): """A relation can be removed using a ServiceRelationState argument.""" # Simulate add and remove. varnish_endpoint = RelationEndpoint( "varnish", "webcache", "cache", "server") yield self.add_service("varnish") relation_state, _ = yield self.relation_manager.add_relation_state( varnish_endpoint) service_relation = _.pop() topology = yield self.get_topology() self.assertTrue(topology.has_relation(relation_state.internal_id)) yield self.relation_manager.remove_relation_state(service_relation) # Verify removal. topology = yield self.get_topology() self.assertFalse(topology.has_relation(relation_state.internal_id)) @inlineCallbacks def test_remove_relation_with_changing_state(self): # Simulate add and remove. 
varnish_ep = RelationEndpoint("varnish", "webcache", "cache", "server") yield self.add_service("varnish") relation_state = (yield self.relation_manager.add_relation_state( varnish_ep))[0] topology = yield self.get_topology() self.assertTrue(topology.has_relation(relation_state.internal_id)) yield self.relation_manager.remove_relation_state(relation_state) topology = yield self.get_topology() self.assertFalse(topology.has_relation(relation_state.internal_id)) # try to remove again, should get state change error. yield self.assertFailure( self.relation_manager.remove_relation_state(relation_state), StateChanged) @inlineCallbacks def test_get_relations_for_service(self): # Create some services and relations service1 = yield self.add_service("database") service2 = yield self.add_service("application") service3 = yield self.add_service("cache") relation1 = yield self.add_relation( "database", (service1, "client", "server"), (service2, "db", "client")) relation2 = yield self.add_relation( "cache", (service3, "app", "server"), (service2, "cache", "client")) relations = yield self.relation_manager.get_relations_for_service( service2) rel_ids = [r.internal_relation_id for r in relations] self.assertEqual(sorted(rel_ids), [relation1.internal_id, relation2.internal_id]) relations = yield self.relation_manager.get_relations_for_service( service1) rel_ids = [r.internal_relation_id for r in relations] self.assertEqual(sorted(rel_ids), [relation1.internal_id]) @inlineCallbacks def test_get_relations_for_service_with_none(self): service1 = yield self.add_service("database") relations = yield self.relation_manager.get_relations_for_service( service1) self.assertFalse(relations) @inlineCallbacks def assertGetEqualRelationState(self, relation_state, *endpoints): get_relation_state = yield self.relation_manager.get_relation_state( *endpoints) self.assertEqual( relation_state.internal_id, get_relation_state.internal_id) @inlineCallbacks def test_get_relation_state(self): """Test that relation state can be retrieved from pair of endpoints.""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") blog_mysql_ep = RelationEndpoint( "wordpress", "mysql", "mysql", "client") blog_varnish_ep = RelationEndpoint( "wordpress", "varnish", "webcache", "client") varnish_ep = RelationEndpoint("varnish", "varnish", "cache", "server") yield self.add_service("mysql") yield self.add_service("wordpress") yield self.add_service("varnish") blog_mysql = (yield self.relation_manager.add_relation_state( blog_mysql_ep, mysql_ep))[0] blog_varnish = (yield self.relation_manager.add_relation_state( blog_varnish_ep, varnish_ep))[0] yield self.assertGetEqualRelationState( blog_mysql, blog_mysql_ep, mysql_ep) yield self.assertGetEqualRelationState( blog_varnish, varnish_ep, blog_varnish_ep) @inlineCallbacks def test_get_relation_state_missing_relation(self): """Test that `RelationStateNotFound` is raised if no relation exists""" mysql_ep = RelationEndpoint("mysql", "mysql", "db", "server") blog_mysql_ep = RelationEndpoint( "wordpress", "mysql", "mysql", "client") blog_varnish_ep = RelationEndpoint( "wordpress", "varnish", "webcache", "client") varnish_ep = RelationEndpoint("varnish", "varnish", "cache", "server") yield self.add_service("mysql") yield self.add_service("wordpress") yield self.add_service("varnish") blog_varnish = (yield self.relation_manager.add_relation_state( blog_varnish_ep, varnish_ep))[0] yield self.assertFailure( self.relation_manager.get_relation_state(mysql_ep, blog_mysql_ep), RelationStateNotFound) yield 
self.assertGetEqualRelationState( blog_varnish, varnish_ep, blog_varnish_ep) class ServiceRelationStateTest(RelationTestBase): @inlineCallbacks def setUp(self): yield super(ServiceRelationStateTest, self).setUp() self.service_state1 = yield self.add_service("wordpress-prod") self.service_state2 = yield self.add_service("wordpress-dev") self.relation_state = yield self.add_relation( "riak", (self.service_state1, "dev-connect", "prod"), (self.service_state2, "prod-connect", "dev")) relations = yield self.relation_manager.get_relations_for_service( self.service_state1) self.service1_relation = relations.pop() def get_presence_path( self, relation_state, relation_role, unit_state, container=None): # container = container.internal_id if container else None presence_path = "/".join(filter(None, [ "/relations", relation_state.internal_id, container, relation_role, unit_state.internal_id])) return presence_path def test_property_internal_service_id(self): self.assertEqual(self.service1_relation.internal_service_id, self.service_state1.internal_id) def test_property_internal_relation_id(self): self.assertEqual(self.service1_relation.internal_relation_id, self.relation_state.internal_id) def test_property_relation_role(self): self.assertEqual(self.service1_relation.relation_role, "prod") def test_repr(self): id = "relation-0000000000" self.assertEqual( repr(self.service1_relation), "<ServiceRelationState %s>" % id) def test_property_relation_name(self): """ The service's name for the relation is accessible from the service relation state. """ self.assertEqual(self.service1_relation.relation_name, "dev-connect") @inlineCallbacks def assert_relation_idents(self, service, expected): relations = yield self.relation_manager.get_relations_for_service( service) self.assertEqual(set((r.relation_ident for r in relations)), set(expected)) @inlineCallbacks def test_property_relation_id(self): """Verify normalization of relation id and correct selection of relation name. """ yield self.assert_relation_idents( self.service_state1, ["dev-connect:0"]) yield self.assert_relation_idents( self.service_state2, ["prod-connect:0"]) # Setup another group of services and establish relations to # verify working with non-zero id and with multiple consumers mysql = yield self.add_service("mysql") blog1 = yield self.add_service("blog1") blog2 = yield self.add_service("blog2") yield self.add_relation( "mysql", (mysql, "database", "server"), (blog1, "db", "client")) yield self.add_relation( "mysql", (mysql, "database", "server"), (blog2, "db", "client")) yield self.assert_relation_idents(blog1, ["db:1"]) yield self.assert_relation_idents(blog2, ["db:2"]) yield self.assert_relation_idents(mysql, ["database:1", "database:2"]) @inlineCallbacks def test_add_unit_state(self): """The service state is used to create units in the relation.""" unit_state = yield self.service_state1.add_unit_state() # set some watches to verify the order things are created.
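# [Editorial note, added] The creation order asserted below is settings
# first, then presence: the persistent settings node exists before the
# ephemeral presence node is created, so a watcher reacting to the presence
# "created" event can immediately and safely read the unit's settings.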
state_created = Deferred() creation_order = [] presence_path = self.get_presence_path( self.relation_state, "prod", unit_state) def append_event(name, event): creation_order.append((name, event)) if len(creation_order) == 2: state_created.callback(creation_order) self.client.exists_and_watch(presence_path)[1].addCallback( lambda result: append_event("presence", result)) settings_path = "/relations/%s/settings/%s" % ( self.relation_state.internal_id, unit_state.internal_id) self.client.exists_and_watch(settings_path)[1].addCallback( lambda result: append_event("settings", result)) yield unit_state.set_private_address("foobar.local") # add the unit agent yield self.service1_relation.add_unit_state(unit_state) # wait for the watches yield state_created # Verify order of creation, settings first, then presence. self.assertEqual(creation_order[0][0], "settings") self.assertEqual(creation_order[0][1].type_name, "created") self.assertEqual(creation_order[1][0], "presence") self.assertEqual(creation_order[1][1].type_name, "created") # Verify the unit mapping unit_map_data, stat = yield self.client.get("/relations/%s" % ( self.relation_state.internal_id)) unit_map = serializer.load(unit_map_data) self.assertEqual( unit_map, {unit_state.internal_id: unit_state.unit_name}) content, stat = yield self.client.get(settings_path) self.assertEqual( serializer.load(content), {"private-address": "foobar.local"}) @inlineCallbacks def test_add_unit_state_scope_container_relation(self): """Verify that container scope is applied to relation. Even in the case where only one endpoint is marked as scope:container. """ mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") mysql = yield self.add_service("mysql") log_charm = yield self.get_subordinate_charm() log = yield self.service_manager.add_service_state("logging", log_charm, dummy_constraints) self.assertTrue(log.is_subordinate()) relation_state, service_states = (yield self.relation_manager.add_relation_state( mysql_ep, logging_ep)) mu1 = yield mysql.add_unit_state() lu1 = yield log.add_unit_state(mu1) mystate = pick_attr(service_states, relation_role="server") logstate = pick_attr(service_states, relation_role="client") lurs1 = yield logstate.add_unit_state(lu1) yield lurs1.set_data(dict(magic="horse")) murs1 = yield mystate.add_unit_state(mu1) yield murs1.set_data(dict(magic="unicorn")) # verify that the relation state instances are both marked # with the live scope (container). This happens even though # the provides side of the relation is global for service_state in service_states: self.assertEqual(service_state.relation_scope, "container") yield self.assertTree("/relations/%s/%s" % ( murs1.internal_relation_id, murs1.internal_unit_id), {"unit-0000000000": { "client": {"contents": {"name": "juju-info", "role": "client"}, "unit-0000000002": {"contents": {}}}, "contents": {}, "server": {"contents": {"name": "juju-info", "role": "server"}, "unit-0000000000": {"contents": {}}}, "settings": {"contents": {}, "unit-0000000000": { "contents": {"private-address": None}}, "unit-0000000002": { "contents": {"private-address": None}}}}}) topology = yield self.get_topology() self.assertTrue( topology.has_relation_between_endpoints([mysql_ep, logging_ep])) @inlineCallbacks def test_presence_node_is_ephemeral(self): """ A unit relation state is composed of two nodes, an ephemeral presence node, and a persistent settings node. 
Verify that the presence node is ephemeral. """ unit_state = yield self.service_state1.add_unit_state() # manually construct a unit relation state using a separate # connection. client2 = self.get_zookeeper_client() yield client2.connect() service_relation = ServiceRelationState( client2, self.service_state1.internal_id, self.relation_state.internal_id, "global", "prod", "name") yield service_relation.add_unit_state(unit_state) presence_path = self.get_presence_path( self.relation_state, "prod", unit_state) exists_d, watch_d = self.client.exists_and_watch(presence_path) exists = yield exists_d self.assertTrue(exists) yield client2.close() event = yield watch_d self.assertEquals(event.type_name, "deleted") @inlineCallbacks def test_add_unit_state_with_preexisting_presence(self): """ If a unit relation presence node exists, attempting to add it will return a unit relation state. """ unit_state = yield self.service_state1.add_unit_state() presence_path = self.get_presence_path( self.relation_state, "prod", unit_state) yield self.client.create(presence_path) # Adding it again is fine. unit_relation = yield self.service1_relation.add_unit_state(unit_state) self.assertEqual(unit_relation.internal_unit_id, unit_state.internal_id) @inlineCallbacks def test_add_unit_state_with_preexisting_settings(self): """A unit coming back up retains its existing settings. With the exception of the unit address, which is always kept current on subsequent joinings. """ unit_state = yield self.service_state1.add_unit_state() settings_path = "/relations/%s/settings/%s" % ( self.relation_state.internal_id, unit_state.internal_id) data = {"hello": "world", "private-address": "foobar.local"} yield self.client.create(settings_path, serializer.dump(data)) yield unit_state.set_private_address("northwest.local") yield self.service1_relation.add_unit_state(unit_state) node_data, stat = yield self.client.get(settings_path) # The unit address has been updated to the current value data["private-address"] = "northwest.local" self.assertEqual(node_data, serializer.dump(data)) data, stat = yield self.client.get( "/relations/%s" % self.relation_state.internal_id) unit_map = serializer.load(data) self.assertEqual(unit_map, {unit_state.internal_id: unit_state.unit_name}) @inlineCallbacks def test_get_unit_state(self): unit_state = yield self.service_state1.add_unit_state() unit_relation_state = yield self.service1_relation.add_unit_state( unit_state) self.assertTrue(isinstance(unit_relation_state, UnitRelationState)) unit_relation_state2 = yield self.service1_relation.get_unit_state( unit_state) self.assertEqual( (unit_relation_state.internal_unit_id, unit_relation_state.internal_service_id, unit_relation_state.internal_relation_id), (unit_relation_state2.internal_unit_id, unit_relation_state2.internal_service_id, unit_relation_state2.internal_relation_id)) @inlineCallbacks def test_get_unit_state_nonexistant(self): unit_state = yield self.service_state1.add_unit_state() yield self.assertFailure( self.service1_relation.get_unit_state(unit_state), UnitRelationStateNotFound) @inlineCallbacks def test_get_all_service_states(self): services = yield self.service1_relation.get_service_states() self.assertEqual(set(services), set((self.service_state1, self.service_state2))) def test_get_relation_ident(self): self.assertEqual( ServiceRelationState.get_relation_ident( "db", "relation-0000000042"), "db:42") self.assertEqual( ServiceRelationState.get_relation_ident( "db", "relation-0000000000"), "db:0") class WatchChecker(object): """Helper class to
simplify UnitRelationState.watch_related_units tests. Records change notification callbacks, in order, and lets you verify that expected callbacks have occurred. Optionally allows for blocking of callback completion, to allow you to test ordering and coalescing of changes (specify `block_cbs=True`; call `unblock_cb(index)` at any time to allow the `index`th callback to return immediately (and thereby free up dependent callbacks)). Only works for tests in which `max_cb_count` >= the actual number of callbacks made. """ def __init__(self, test, block_cbs=False, max_cb_count=10): self.test = test self.results = [] self.sentinels = [Deferred() for i in range(max_cb_count)] self.blockers = [ Deferred() if block_cbs else True for i in range(max_cb_count)] def _cb_change_members(self, old, new): self.results.append((old, new)) return self._synchronize() def _cb_change_settings(self, modified): (change,) = modified self.results.append(change) return self._synchronize() @inlineCallbacks def _synchronize(self): index = len(self.results) - 1 self.sentinels[index].callback(True) yield self.blockers[index] def watch(self, unit_relation): """Start watching `unit_relation` for related unit changes""" return unit_relation.watch_related_units( self._cb_change_members, self._cb_change_settings) def assert_cb_count(self, count): """Assert that `count` callbacks have been triggered. Includes those callbacks which have started but not yet completed due to blocking""" self.test.assertEquals(len(self.results), count) def wait_for_cb(self, index): """Wait until `index` + 1 callbacks have been triggered.""" return self.sentinels[index] def unblock_cb(self, index): """Allow a blocked callback to complete. Only valid if class was constructed with `block_cbs=True`, and if the `index`th callback has not already been unblocked; perfectly valid to unblock a callback before it's made.""" self.blockers[index].callback(True) @inlineCallbacks def assert_members_cb(self, index, old, new): """Check that the `index`th callback was a member change notification. Note: `old` and `new` are ordered lists of unit relations; the actual callbacks fire with ordered lists of names. """ yield self.wait_for_cb(index) for actual, expected in zip(self.results[index], (old, new)): for actual_name, expected_unit in zip(actual, expected): self.test.assertEquals(actual_name, expected_unit.unit_name) @inlineCallbacks def assert_settings_cb(self, index, unit, version): """Check that the `index`th callback was a version change notification. 
Note: `unit` is a unit relation; callback takes a list of (name, version) pairs""" yield self.wait_for_cb(index) change = self.results[index] self.test.assertEquals(change, (unit.unit_name, version)) class UnitRelationStateTest(RelationTestBase): @inlineCallbacks def test_properties(self): states = yield self.add_relation_service_unit("webcache", "varnish") unit_relation = states["unit_relation"] self.assertEqual( unit_relation.internal_service_id, states["service"].internal_id) self.assertEqual( unit_relation.internal_relation_id, states["relation"].internal_id) self.assertEqual( unit_relation.internal_unit_id, states["unit"].internal_id) @inlineCallbacks def test_get_data(self): states = yield self.add_relation_service_unit("webcache", "varnish") unit_relation = states["unit_relation"] data = yield unit_relation.get_data() self.assertEqual(serializer.load(data), {"private-address": None}) unit_relation_path = self.get_unit_settings_path(states) self.client.set( unit_relation_path, serializer.dump(dict(hello="world"))) data = yield unit_relation.get_data() self.assertEqual(data, serializer.dump(dict(hello="world"))) @inlineCallbacks def test_set_data(self): states = yield self.add_relation_service_unit("webcache", "varnish") unit_relation = states["unit_relation"] unit_relation_path = self.get_unit_settings_path(states) yield unit_relation.set_data(dict(hello="world")) data, stat = yield self.client.get(unit_relation_path) self.assertEqual(data, serializer.dump(dict(hello="world"))) @inlineCallbacks def test_get_relation_role(self): """Retrieve the service's relation role. """ states = yield self.add_relation_service_unit( "webcache", "varnish", "name", "server") role = yield states["unit_relation"].get_relation_role() self.assertEqual("server", role) @inlineCallbacks def test_get_relation_role_on_removed_relation(self): """Verify `StateChanged` raised if relation is removed.""" states = yield self.add_relation_service_unit( "webcache", "varnish", "name", "server") yield self.relation_manager.remove_relation_state(states["relation"]) yield self.assertFailure( states["unit_relation"].get_relation_role(), StateChanged) @inlineCallbacks def test_get_relation_role_on_removed_service(self): """Verify `StateChanged` raised if service is removed.""" states = yield self.add_relation_service_unit( "webcache", "varnish", "name", "server") yield self.service_manager.remove_service_state(states["service"]) yield self.assertFailure( states["unit_relation"].get_relation_role(), StateChanged) @inlineCallbacks def test_get_related_unit_container(self): """Retrieve the container path of the related units.""" states = yield self.add_relation_service_unit( "webcache", "varnish", "name", "server") container_path = "/relations/%s/%s" % ( states["relation"].internal_id, "client") path = yield states["unit_relation"].get_related_unit_container() self.assertEqual(path, container_path) states = yield self.add_relation_service_unit( "riak", "riak", "name", "peer") container_path = "/relations/%s/%s" % ( states["relation"].internal_id, "peer") path = yield states["unit_relation"].get_related_unit_container() self.assertEqual(path, container_path) states = yield self.add_relation_service_unit( "wordpress", "wordpress", "name", "client") container_path = "/relations/%s/%s" % ( states["relation"].internal_id, "server") path = yield states["unit_relation"].get_related_unit_container() self.assertEqual(path, container_path) @inlineCallbacks def test_watch_start_existing_service(self): """Invoking watcher.start returns a 
        deferred that only fires after the watch on the container is in place.
        In the case of an existing service, this is after a child watch is
        established.
        """
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        yield self.add_opposite_service_unit(wordpress_states)
        checker = WatchChecker(self)

        mock_client = self.mocker.patch(self.client)
        mock_client.get_children_and_watch("/relations/%s/server" % (
            wordpress_states["relation"].internal_id))

        def invoked(*args, **kw):
            # sleep to make sure that things haven't fired till the watch is
            # in place.
            time.sleep(0.1)
            checker.assert_cb_count(0)

        self.mocker.call(invoked)
        self.mocker.passthrough()
        self.mocker.replay()

        watcher = yield checker.watch(wordpress_states["unit_relation"])
        self.assertFalse(watcher.running)
        yield watcher.start()
        self.assertTrue(watcher.running)
        yield checker.wait_for_cb(0)

    @inlineCallbacks
    def test_watch_start_new_service(self):
        """Invoking watcher.start returns a deferred that only fires after
        the watch on the container is in place. In the case of a new service,
        this is after an existence watch is established on the container.
        """
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep)
        checker = WatchChecker(self)

        mock_client = self.mocker.patch(self.client)
        mock_client.exists_and_watch("/relations/%s/server" % (
            wordpress_states["relation"].internal_id))

        def invoked(*args, **kw):
            # sleep to make sure that things haven't fired till the watch is
            # in place.
            time.sleep(0.1)
            checker.assert_cb_count(0)

        self.mocker.call(invoked)
        self.mocker.passthrough()
        self.mocker.replay()

        watcher = yield checker.watch(wordpress_states["unit_relation"])
        yield watcher.start()

    @inlineCallbacks
    def test_watch_client_server_with_new_service(self):
        """We simulate a scenario where the client units appear first within
        the relation, and start to monitor the server service as it joins
        the relation, adds a unit, and modifies the unit.
        """
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        checker = WatchChecker(self)
        watcher = yield checker.watch(wordpress_states["unit_relation"])
        yield watcher.start()

        # adding another unit of wordpress does not cause any changes
        service1_relation = wordpress_states["service_relation"]
        service1_unit2 = yield wordpress_states["service"].add_unit_state()
        yield service1_relation.add_unit_state(service1_unit2)

        # give a chance for accidental watch firing.
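        # Note: poke_zk, provided by the shared test base, appears to perform
        # a no-op round trip to ZooKeeper so that any pending watch events are
        # delivered before the callback count is asserted below.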
        yield self.poke_zk()
        checker.assert_cb_count(0)

        # add the server service and a unit of that
        mysql_states = yield self.add_opposite_service_unit(wordpress_states)
        mysql_unit = mysql_states["unit"]
        topology = yield self.get_topology()

        # assert the relation is established correctly
        services = topology.get_relation_services(
            wordpress_states["relation"].internal_id)
        self.assertEqual(len(services), 2)

        # wait for initial callbacks
        yield checker.assert_members_cb(0, [], [mysql_unit])
        yield checker.assert_settings_cb(1, mysql_unit, 0)

        # modify unit, wait for callback
        yield mysql_states["unit_relation"].set_data(dict(hello="world"))
        yield checker.assert_settings_cb(2, mysql_unit, 1)

        # add another unit, wait for callback
        mysql_unit2 = yield mysql_states["service"].add_unit_state()
        yield mysql_states["service_relation"].add_unit_state(mysql_unit2)
        yield checker.assert_members_cb(
            3, [mysql_unit], [mysql_unit, mysql_unit2])

    @inlineCallbacks
    def test_watch_client_server_with_existing_service(self):
        """We simulate a scenario where the client and server are both in
        place before the client begins observing. The server subsequently
        modifies its settings, and removes its unit from the relation.
        """
        # add the client service and a unit of that
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)
        mysql_unit = mysql_states["unit"]

        checker = WatchChecker(self)
        watcher = yield checker.watch(wordpress_states["unit_relation"])
        yield watcher.start()
        yield checker.assert_members_cb(0, [], [mysql_unit])
        yield checker.assert_settings_cb(1, mysql_unit, 0)
        yield mysql_states["unit_relation"].set_data(dict(hello="world"))
        yield checker.assert_settings_cb(2, mysql_unit, 1)

        # directly delete the presence node to trigger a deletion notification
        self.client.delete("/relations/%s/server/%s" % (
            mysql_states["relation"].internal_id,
            mysql_states["unit"].internal_id))

        # verify the deletion result.
        yield checker.assert_members_cb(3, [mysql_unit], [])

    @inlineCallbacks
    def test_watch_server_client_with_new_service(self):
        """We simulate a server watching a client.
        """
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        # add the server service and a unit of that
        mysql_states = yield self.add_relation_service_unit_from_endpoints(
            mysql_ep, wordpress_ep)
        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        yield self.poke_zk()
        checker.assert_cb_count(0)

        # add the client service and a unit of that
        wordpress_states = yield self.add_opposite_service_unit(
            mysql_states)
        wordpress_unit = wordpress_states["unit"]
        yield checker.assert_members_cb(0, [], [wordpress_unit])
        yield checker.assert_settings_cb(1, wordpress_unit, 0)

    @inlineCallbacks
    def test_watch_server_client_with_new_subordinate_service(self):
        """We simulate a server watching a client.
""" mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") mysql, my_units = yield self.get_service_and_units_by_charm_name( "mysql", 2) self.assertFalse((yield mysql.is_subordinate())) log, log_units = yield self.get_service_and_units_by_charm_name( "logging") self.assertTrue((yield log.is_subordinate())) # add the relationship so we can create units with containers relation_state, service_states = (yield self.relation_manager.add_relation_state( mysql_ep, logging_ep)) log, log_units = yield self.get_service_and_units_by_charm_name( "logging", containers=my_units) self.assertTrue((yield log.is_subordinate())) for lu in log_units: self.assertTrue((yield lu.is_subordinate())) mu1, mu2 = my_units lu1, lu2 = log_units mystate = pick_attr(service_states, relation_role="server") logstate = pick_attr(service_states, relation_role="client") murs1 = yield mystate.add_unit_state(mu1) lurs1 = yield logstate.add_unit_state(lu1) # add the second container murs2 = yield mystate.add_unit_state(mu2) lurs2 = yield logstate.add_unit_state(lu2) @inlineCallbacks def verify_watch(urs, expected): checker = WatchChecker(self) watcher = yield checker.watch(urs) self.assertFalse(watcher.running) yield watcher.start() self.assertTrue(watcher.running) # Here we show the watchers bound to a given unit # only see the contained unit yield checker.assert_members_cb(0, [], expected) yield checker.assert_settings_cb(1, expected[0], 0) yield verify_watch(murs1, [lu1]) yield verify_watch(murs2, [lu2]) yield verify_watch(lurs1, [mu1]) yield verify_watch(lurs2, [mu2]) @inlineCallbacks def test_watch_peer(self): """Peer relations always watch the peer container. """ # add the peer relation and two unit of the service. riak_ep = RelationEndpoint("riak", "peer", "riak-db", "peer") riak_states = yield self.add_relation_service_unit_from_endpoints( riak_ep) riak2_unit = yield riak_states["service"].add_unit_state() yield riak_states["service_relation"].add_unit_state(riak2_unit) checker = WatchChecker(self) watcher = yield checker.watch(riak_states["unit_relation"]) yield watcher.start() # wait for initial callbacks yield checker.assert_members_cb(0, [], [riak2_unit]) yield checker.assert_settings_cb(1, riak2_unit, 0) # verify modifying self does not cause a notification. yield riak_states["unit_relation"].set_data(dict(hello="world")) yield self.poke_zk() checker.assert_cb_count(2) # add another unit riak3_unit = yield riak_states["service"].add_unit_state() riak3_relation = yield riak_states["service_relation"].add_unit_state( riak3_unit) yield checker.assert_members_cb( 2, [riak2_unit], [riak2_unit, riak3_unit]) yield checker.assert_settings_cb(3, riak3_unit, 0) # remove one (no api atm, so directly to trigger notification) yield self.client.delete( "/relations/%s/peer/%s" % (riak_states["relation"].internal_id, riak2_unit.internal_id)) yield checker.assert_members_cb( 4, [riak2_unit, riak3_unit], [riak3_unit]) # modify one. yield riak3_relation.set_data(dict(later="eventually")) yield checker.assert_settings_cb(5, riak3_unit, 1) @inlineCallbacks def test_watch_role_container_created_concurrently(self): """If the relation role container that the unit is observing is created concurrent to the unit observatiohn starting, the created container is detected correctly and the observation works immediately. """ # Add the relation, services, and related units. 
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        wordpress_unit = wordpress_states["unit"]
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)
        container_path = "/relations/%s/client" % (
            mysql_states["relation"].internal_id)

        patch_client = self.mocker.patch(self.client)
        # via mocker play a scenario where the container doesn't exist
        # but it's created while the watcher is starting the observation.
        patch_client.get_children_and_watch(container_path)
        self.mocker.result((fail(zookeeper.NoNodeException()), Deferred()))
        patch_client.exists_and_watch(container_path)
        self.mocker.result((succeed({"version": 1}), Deferred()))
        patch_client.get_children_and_watch(container_path)
        self.mocker.passthrough()
        self.mocker.replay()

        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        yield checker.assert_members_cb(0, [], [wordpress_unit])

    @inlineCallbacks
    def test_watch_deleted_modify_notifications(self):
        """Verify modified notifications are only sent for existing nodes.

        Verify that modifying a deleted unit relation's settings doesn't
        cause a notification.
        """
        # Add the relation, services, and related units.
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        wordpress_unit = wordpress_states["unit"]
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)

        # Start watching
        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        yield checker.assert_members_cb(0, [], [wordpress_unit])
        yield checker.assert_settings_cb(1, wordpress_unit, 0)

        # Delete the presence path
        presence_path = "/relations/%s/client/%s" % (
            wordpress_states["relation"].internal_id,
            wordpress_states["unit"].internal_id)
        yield self.client.delete(presence_path)
        yield checker.assert_members_cb(2, [wordpress_unit], [])

        # Modify the settings path
        settings_path = self.get_unit_settings_path(wordpress_states)
        yield self.client.set(settings_path, "some random string")

        # Give a moment to ensure we don't see any new callbacks
        yield self.poke_zk()
        checker.assert_cb_count(3)

    @inlineCallbacks
    def test_watch_with_settings_deleted(self):
        """If a unit relation's settings are deleted, there are no callbacks.

        The agent's presence node is the sole determiner of availability;
        if, through some unforeseen mechanism, the settings are deleted
        while the unit is being observed, the watcher will ignore the
        deletion.
        """
        # Add the relation, services, and related units.
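        # Presumably, because availability is keyed solely on the ephemeral
        # presence node, the watcher is expected to swallow the deletion
        # event for the settings node rather than surface it as a callback.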
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        wordpress_unit = wordpress_states["unit"]
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)

        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        yield checker.assert_members_cb(0, [], [wordpress_unit])
        yield checker.assert_settings_cb(1, wordpress_unit, 0)

        # Delete the settings path
        settings_path = "/relations/%s/settings/%s" % (
            wordpress_states["relation"].internal_id,
            wordpress_states["unit"].internal_id)
        yield self.client.delete(settings_path)

        # Verify no callbacks
        yield self.poke_zk()
        checker.assert_cb_count(2)

        # Recreate the settings path; this should trigger a callback.
        # Note that this is not likely to happen in reality, and if it does
        # we're in trouble, because the settings node version will be reset
        # to 0, and HookScheduler depends on that value continuing to increase
        # so it can determine whether changes happened while it was inactive.
        yield self.client.create(settings_path, "abc")
        yield checker.assert_settings_cb(2, wordpress_unit, 0)

        # And modify it.
        yield self.client.set(settings_path, "123")
        yield checker.assert_settings_cb(3, wordpress_unit, 1)

    @inlineCallbacks
    def test_watch_start_stop_start_with_existing_service(self):
        """Unit relation watching can be stopped, and restarted.

        Upon restarting a watch, only membership changes that occurred
        while the watch was stopped are notified; any settings changes
        to individual nodes are not captured.

        This capability mostly exists to enable agents to stop watching
        relations no longer assigned to their service in a single api call,
        and without additional callbacks.
        """
        # Add the relation, services, and related units.
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        wordpress_unit = wordpress_states["unit"]
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)

        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        self.assertTrue(watcher.running)
        yield checker.assert_members_cb(0, [], [wordpress_unit])
        yield checker.assert_settings_cb(1, wordpress_unit, 0)

        # Stop watching
        watcher.stop()
        self.assertFalse(watcher.running)

        # Add a new unit
        wordpress2_states = yield self.add_related_service_unit(
            wordpress_states)
        wordpress2_unit = wordpress2_states["unit"]

        # Modify a unit (this change will not be detected, ever)
        yield wordpress_states["unit_relation"].set_data(dict(hello="world"))

        # Verify no callbacks
        yield self.poke_zk()
        checker.assert_cb_count(2)

        # Start watching again; watch for addition
        yield watcher.start()
        self.assertTrue(watcher.running)
        yield checker.assert_members_cb(
            2, [wordpress_unit], [wordpress_unit, wordpress2_unit])
        yield checker.assert_settings_cb(3, wordpress2_unit, 0)

    @inlineCallbacks
    def test_watch_start_stop_start_with_new_service(self):
        """Unit relation watching can be stopped, and restarted.

        Upon restarting a watch, only membership changes that occurred
        while the watch was stopped are notified; any settings changes
        to individual nodes are not captured.
        This capability mostly exists to enable agents to stop watching
        relations no longer assigned to their service in a single api call,
        and without additional callbacks.
        """
        # Add the relation, services, and related units.
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        mysql_states = yield self.add_relation_service_unit_from_endpoints(
            mysql_ep, wordpress_ep)

        checker = WatchChecker(self)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        self.assertTrue(watcher.running)
        watcher.stop()
        self.assertFalse(watcher.running)

        # Add the new service and 2 units
        wordpress_states = yield self.add_opposite_service_unit(
            mysql_states)
        wordpress2_states = yield self.add_related_service_unit(
            wordpress_states)
        wordpress_unit = wordpress_states["unit"]
        wordpress2_unit = wordpress2_states["unit"]

        # Modify a unit
        yield wordpress_states["unit_relation"].set_data(dict(hello="world"))

        # Verify no callbacks
        yield self.poke_zk()
        checker.assert_cb_count(0)

        # Start watching
        yield watcher.start()
        self.assertTrue(watcher.running)
        yield checker.assert_members_cb(
            0, [], [wordpress_unit, wordpress2_unit])
        # We expect a settings callback for each new unit...
        yield checker.wait_for_cb(2)
        # ...but only one
        yield self.poke_zk()
        checker.assert_cb_count(3)

    @inlineCallbacks
    def test_watch_user_callback_invocation_delays_node_watch(self):
        """
        We defer on user callbacks; this enforces an invariant whereby we
        won't receive additional notifications for the same node while
        processing a user callback for that node. We will still receive
        the first modification.
        """
        output = self.capture_logging("unit.relation.watch", logging.DEBUG)

        # Add the relation, services, and related units.
        riak_states = yield self.add_relation_service_unit(
            "riak", "riak", "kvstore", "peer")
        checker = WatchChecker(self, block_cbs=True)
        watcher = yield checker.watch(riak_states["unit_relation"])
        yield watcher.start()

        # Create a new unit and add it to the relation.
        riak_unit2 = yield riak_states["service"].add_unit_state()
        riak_unit2_rel = yield riak_states["service_relation"].add_unit_state(
            riak_unit2)

        # Wait for it
        yield checker.assert_members_cb(0, [], [riak_unit2])

        # We are also expecting a notification for the initial settings version
        # ...but we won't get that until the first callback is done
        yield self.poke_zk()
        checker.assert_cb_count(1)

        # While we wait for this, someone modifies the settings
        yield riak_unit2_rel.set_data(dict(hello="world"))

        # Hey, the add callback finished!
        checker.unblock_cb(0)

        # OK, now we expect to see the initial setting version callback
        yield checker.assert_settings_cb(1, riak_unit2, 0)

        # ...but that is also taking a long time, so we shouldn't expect to see
        # the callback for the explicit modification yet...
        yield self.poke_zk()
        checker.assert_cb_count(2)

        # ...or, in fact, for this other modification that just happened, which
        # will be collapsed into the other one...
        yield riak_unit2_rel.set_data(dict(hello="world 2"))
        yield self.poke_zk()
        checker.assert_cb_count(2)

        # ...so, we should have 1 callback in progress, and only 1 pending
        # notification. OK, finish the callback...
        checker.unblock_cb(1)

        # ...and wait for the change notification.
        yield checker.assert_settings_cb(2, riak_unit2, 2)

        # Finish the callback and verify no other invocations.
        checker.unblock_cb(2)
        yield self.poke_zk()
        checker.assert_cb_count(3)

        # Modify the node again; we should see this change immediately.
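        # The expected version below is 3: ZooKeeper increments a node's data
        # version on every set, and the settings node has been set three times
        # since its creation (hello=world, hello=world 2, hello=goodbye), the
        # first two of which were coalesced into a single callback above.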
        yield riak_unit2_rel.set_data(dict(hello="goodbye"))
        yield checker.assert_settings_cb(3, riak_unit2, 3)

        # And again, finish the callback and verify no other invocations.
        checker.unblock_cb(3)
        yield self.poke_zk()
        checker.assert_cb_count(4)

        node_path = "/relations/relation-0000000000/settings/unit-0000000001"
        expected_output = (
            "relation watcher start",
            "relation membership change",
            "relation watcher settings change %s" % (
                "" % node_path),
            "relation watcher settings change %s" % (
                "\n" % node_path)
            )
        self.assertEqual(output.getvalue(), "\n".join(expected_output))

    @inlineCallbacks
    def test_watch_user_callback_invocation_delays_child_watch(self):
        """We defer on user callbacks to ensure that we don't trigger a
        callback on the same node twice in parallel. In the case of the
        container, this means we'll be processing at most one membership
        notification at a time.
        """
        # Add the relation, services, and related units.
        riak_states = yield self.add_relation_service_unit(
            "riak", "riak", "kvstore", "peer")
        checker = WatchChecker(self, block_cbs=True)
        watcher = yield checker.watch(riak_states["unit_relation"])
        yield watcher.start()

        # Create a new unit and add it to the relation.
        riak_unit2 = yield riak_states["service"].add_unit_state()
        yield riak_states["service_relation"].add_unit_state(
            riak_unit2)

        # Wait for it
        yield checker.assert_members_cb(0, [], [riak_unit2])

        # Now add a new unit: we won't see it immediately, since the callback
        # is still executing, but we will have a container change pending
        riak_unit3 = yield riak_states["service"].add_unit_state()
        yield riak_states["service_relation"].add_unit_state(
            riak_unit3)

        # Finish the first callback; immediately hit the settings version
        # callback, which will also take a while
        checker.unblock_cb(0)
        yield checker.assert_settings_cb(1, riak_unit2, 0)

        # Adding another unit; will be rolled into the container change we're
        # already expecting from before
        riak_unit4 = yield riak_states["service"].add_unit_state()
        yield riak_states["service_relation"].add_unit_state(
            riak_unit4)

        # Now release the container callback, and verify the callback
        # for both the new nodes.
        checker.unblock_cb(1)
        yield checker.assert_members_cb(
            2, [riak_unit2], [riak_unit2, riak_unit3, riak_unit4])

    @inlineCallbacks
    def test_watch_concurrent_callback_execution(self):
        """Unit relation watching invokes callbacks concurrently,
        IFF they are not synchronous and not on the same node.
        """
        # Add the relation, services, and related units.
        wordpress_ep = RelationEndpoint(
            "wordpress", "client-server", "", "client")
        mysql_ep = RelationEndpoint(
            "mysql", "client-server", "", "server")
        wordpress_states = yield self.add_relation_service_unit_from_endpoints(
            wordpress_ep, mysql_ep)
        wordpress_unit = wordpress_states["unit"]
        mysql_states = yield self.add_opposite_service_unit(
            wordpress_states)

        checker = WatchChecker(self, block_cbs=True)
        # To verify parallel execution, this checker will make us wait for
        # some of the callbacks (2 and 5), but leave the rest unimpeded.
        for i in (0, 1, 3, 4, 6):
            checker.unblock_cb(i)
        watcher = yield checker.watch(mysql_states["unit_relation"])
        yield watcher.start()
        yield checker.wait_for_cb(0)
        yield checker.wait_for_cb(1)

        # Modify a unit (blocking callback)
        yield wordpress_states["unit_relation"].set_data(dict(hello="world"))
        yield checker.wait_for_cb(2)

        # Modify the unit again (will wait for previous)
        yield wordpress_states["unit_relation"].set_data(dict(hello="world 2"))

        # Add a unit.
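        # With callbacks 0, 1, 3, 4 and 6 pre-unblocked, this addition should
        # surface as callback 3 (membership: wordpress2 joins) and callback 4
        # (its initial settings version), while the still-blocked callbacks 2
        # and 5 demonstrate that unrelated nodes proceed in parallel; see the
        # assertions below.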
        wordpress2_states = yield self.add_related_service_unit(
            wordpress_states)
        wordpress2_unit = wordpress2_states["unit"]
        yield checker.wait_for_cb(3)
        yield checker.wait_for_cb(4)

        # Delete a unit (blocking callback)
        presence_path = "/relations/%s/client/%s" % (
            wordpress_states["relation"].internal_id,
            wordpress_states["unit"].internal_id)
        yield self.client.delete(presence_path)
        yield checker.wait_for_cb(5)

        # ...and delete the other unit (also blocked)
        presence_path = "/relations/%s/client/%s" % (
            wordpress2_states["relation"].internal_id,
            wordpress2_states["unit"].internal_id)
        yield self.client.delete(presence_path)

        # Verify that all unblocked callbacks have started correctly
        yield checker.assert_members_cb(0, [], [wordpress_unit])
        yield checker.assert_settings_cb(1, wordpress_unit, 0)
        yield checker.assert_settings_cb(2, wordpress_unit, 1)
        yield checker.assert_members_cb(
            3, [wordpress_unit], [wordpress_unit, wordpress2_unit])
        yield checker.assert_settings_cb(4, wordpress2_unit, 0)
        yield checker.assert_members_cb(
            5, [wordpress_unit, wordpress2_unit], [wordpress2_unit])
        checker.assert_cb_count(6)

        # OK, fine, but cbs 2 and 5 are still blocking.
        # Whoops, looks like the unit got deleted before we could notify the
        # second settings change. Check nothing happens:
        checker.unblock_cb(2)
        yield self.poke_zk()
        checker.assert_cb_count(6)

        # Now complete processing of the first delete, and check that we then
        # *do* get notified of the second delete:
        checker.unblock_cb(5)
        yield checker.assert_members_cb(6, [wordpress2_unit], [])

        # ...and finally double-check no further callbacks:
        yield self.poke_zk()
        checker.assert_cb_count(7)

    @inlineCallbacks
    def test_watch_unknown_relation_role_error(self):
        """
        Attempting to watch a unit within an unknown relation role
        raises an error.
""" wordpress_states = yield self.add_relation_service_unit( "client-server", "wordpress", "", "zebra") def not_called(*kw): self.fail("Should not be called") yield self.failUnlessFailure( wordpress_states["unit_relation"].watch_related_units( not_called, not_called), UnknownRelationRole) juju-0.7.orig/juju/state/tests/test_security.py0000644000000000000000000004532312135220114020137 0ustar 00000000000000import base64 import zookeeper from twisted.internet.defer import inlineCallbacks, succeed from juju.state.auth import make_identity, make_ace from juju.state.errors import StateNotFound, PrincipalNotFound from juju.state.security import ( ACL, Principal, GroupPrincipal, OTPPrincipal, TokenDatabase, SecurityPolicy, SecurityPolicyConnection) from juju.lib import serializer from juju.lib.testing import TestCase from juju.tests.common import get_test_zookeeper_address from txzookeeper.client import ZOO_OPEN_ACL_UNSAFE from txzookeeper.tests.utils import deleteTree class PrincipalTests(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() def tearDown(self): deleteTree(handle=self.client.handle) self.client.close() def test_name(self): """Principals have names.""" principal = Principal("foobar", "secret") self.assertEqual(principal.name, "foobar") def test_get_token(self): """An identity token can be gotten from a Principal.""" principal = Principal("foobar", "secret") self.assertEqual(principal.get_token(), make_identity("foobar:secret")) @inlineCallbacks def test_activate(self): """A principal can be used with a client connection.""" client = yield self.get_zookeeper_client().connect() self.addCleanup(lambda: client.close()) admin_credentials = "admin:admin" test_credentials = "test:test" yield self.client.add_auth("digest", admin_credentials) acl = [make_ace(make_identity(admin_credentials), all=True), make_ace(make_identity( test_credentials), read=True, create=True)] yield client.create("/acl-test", "content", acls=acl) # Verify the acl is active yield self.assertFailure( client.get("/acl-test"), zookeeper.NoAuthException) # Attach the principal to the connection principal = Principal("test", "test") yield principal.attach(client) content, stat = yield client.get("/acl-test") self.assertEqual(content, "content") class GroupPrincipalTests(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() def tearDown(self): deleteTree(handle=self.client.handle) self.client.close() def test_uninitialized_usage(self): """Attempting to access the name before initialized raises an error""" principal = GroupPrincipal(self.client, "/group-a") try: principal.name except RuntimeError: pass else: self.fail("Uninitialized usage should raise error") @inlineCallbacks def test_create(self): """An identity token can be gotten from a Principal.""" principal = GroupPrincipal(self.client, "/group-a") yield principal.create("group/a", "zebra") self.assertEqual(principal.name, "group/a") yield self.assertFailure( principal.create("group/a", "zebra"), RuntimeError) @inlineCallbacks def test_initialize(self): principal = GroupPrincipal(self.client, "/group-a") yield principal.create("group/a", "zebra") principal = GroupPrincipal(self.client, "/group-a") yield principal.initialize() self.assertEqual(principal.name, "group/a") principal = GroupPrincipal(self.client, "/group-b") yield self.assertFailure(principal.initialize(), StateNotFound) @inlineCallbacks def 
test_get_token(self):
        """An identity token can be gotten from a Principal."""
        principal = GroupPrincipal(self.client, "/group-a")
        yield principal.create("foobar", "secret")
        self.assertEqual(principal.get_token(), make_identity("foobar:secret"))

    @inlineCallbacks
    def test_add_member(self):
        group = GroupPrincipal(self.client, "/group-a")
        yield group.create("group/a", "zebra")
        principal = Principal("aladdin", "genie")
        yield group.add_member(principal)
        acl, stat = yield self.client.get_acl("/group-a")
        self.assertEqual(
            acl[1:], [make_ace(principal.get_token(), read=True)])
        # Adding a member again is fine
        yield group.add_member(principal)

    @inlineCallbacks
    def test_remove_member(self):
        group = GroupPrincipal(self.client, "/group-a")
        yield group.create("group/a", "zebra")
        principal = Principal("aladdin", "genie")
        # Removing a member that doesn't exist is a no-op
        yield group.remove_member(principal)
        yield group.add_member(principal)
        yield group.remove_member(principal.name)
        acl, stat = yield self.client.get_acl("/group-a")
        self.assertEqual(acl[1:], [])

    @inlineCallbacks
    def test_activate(self):
        """A principal can be used with a client connection."""
        client = yield self.get_zookeeper_client().connect()
        self.addCleanup(lambda: client.close())
        admin_credentials = "admin:admin"
        test_credentials = "test:test"
        yield self.client.add_auth("digest", admin_credentials)

        acl = [make_ace(make_identity(admin_credentials), all=True),
               make_ace(make_identity(
                   test_credentials), read=True, create=True)]
        yield client.create("/acl-test", "content", acls=acl)

        # Verify the acl is active
        yield self.assertFailure(
            client.get("/acl-test"), zookeeper.NoAuthException)

        # Attach the principal to the connection
        group = GroupPrincipal(self.client, "/group-b")
        yield group.create("test", "test")
        yield group.attach(client)
        content, stat = yield client.get("/acl-test")
        self.assertEqual(content, "content")


class OTPPrincipalTests(TestCase):

    @inlineCallbacks
    def setUp(self):
        zookeeper.set_debug_level(0)
        self.client = yield self.get_zookeeper_client().connect()
        self.client.create("/otp")

    def tearDown(self):
        deleteTree(handle=self.client.handle)
        self.client.close()

    def set_otp_test_ace(self, test_ace=ZOO_OPEN_ACL_UNSAFE):
        """Set an additional OTP ACL entry for test cleanup."""
        OTPPrincipal.set_additional_otp_ace(test_ace)
        self.addCleanup(lambda: OTPPrincipal.set_additional_otp_ace(None))

    def test_using_uncreated_raises(self):
        """Use of an uncreated OTP principal raises an error."""
        principal = OTPPrincipal(self.client)
        try:
            principal.name
        except RuntimeError:
            pass
        else:
            self.fail("Use of an uncreated OTP principal should raise error.")

    @inlineCallbacks
    def test_get_token(self):
        """An identity token can be gotten from an OTPPrincipal.

        The token returned is that of the stored credentials, not
        the serialized one time password principal.
""" self.set_otp_test_ace() principal = OTPPrincipal(self.client) yield principal.create("foobar", "secret") self.assertEqual(principal.get_token(), make_identity("foobar:secret")) self.assertEqual(principal.name, "foobar") @inlineCallbacks def test_create(self): """A principal can be used with a client connection.""" self.set_otp_test_ace() principal = OTPPrincipal(self.client) yield principal.create("foobar", "secret") children = yield self.client.get_children("/otp") self.assertEqual(len(children), 1) otp_path = "/otp/%s" % (children.pop()) data, stat = yield self.client.get(otp_path) credentials = serializer.load(data) self.assertEqual(credentials["name"], "foobar") self.assertEqual(credentials["password"], "secret") acl, stat = yield self.client.get_acl(otp_path) self.assertEqual(len(acl), 2) @inlineCallbacks def test_serialize(self): """The principal can be serialized to just the OTP data.""" self.set_otp_test_ace() principal = OTPPrincipal(self.client) yield principal.create("foobar", "secret") otp_data = principal.serialize() path, user, password = base64.b64decode(otp_data).split(":") acl, stat = yield self.client.get_acl(path) self.assertEqual(principal.get_token(), make_identity("foobar:secret")) self.assertEqual(principal.name, "foobar") @inlineCallbacks def test_consume(self): """The OTP serialization can be used to retrievethe actual credentials. """ principal = OTPPrincipal(self.client) yield principal.create("foobar", "secret") otp_data = principal.serialize() path, _ = base64.b64decode(otp_data).split(":", 1) acl, stat = yield self.client.get_acl(path) # Verify that the OTP data is secure yield self.assertFailure( self.client.get(path), zookeeper.NoAuthException) name, password = yield OTPPrincipal.consume(self.client, otp_data) self.assertEqual(name, "foobar") self.assertEqual(password, "secret") children = yield self.client.get_children("/otp") self.assertFalse(children) class TokenDatabaseTest(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() self.db = TokenDatabase(self.client, "/token-test") def tearDown(self): deleteTree(handle=self.client.handle) self.client.close() @inlineCallbacks def test_add(self): principal = Principal("zebra", "zoo") yield self.db.add(principal) content, stat = yield self.client.get("/token-test") data = serializer.load(content) self.assertEqual(data, {"zebra": principal.get_token()}) @inlineCallbacks def test_remove(self): principal = Principal("zebra", "zoo") yield self.db.add(principal) yield self.db.remove(principal) content, stat = yield self.client.get("/token-test") data = serializer.load(content) self.assertEqual(data, {"zebra": principal.get_token()}) @inlineCallbacks def test_get(self): principal = Principal("zebra", "zoo") yield self.db.add(principal) token = yield self.db.get(principal.name) self.assertEqual(token, principal.get_token()) @inlineCallbacks def test_get_nonexistant(self): principal = Principal("zebra", "zoo") error = yield self.assertFailure(self.db.get(principal.name), PrincipalNotFound) self.assertEquals(str(error), "Principal 'zebra' not found") class PolicyTest(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = yield self.get_zookeeper_client().connect() self.tokens = TokenDatabase(self.client) yield self.tokens.add(Principal("admin", "admin")) self.policy = SecurityPolicy(self.client, self.tokens) def tearDown(self): deleteTree(handle=self.client.handle) self.client.close() @inlineCallbacks def 
    def test_default_no_owner_no_rules_gives_admin_access(self):
        """By default the policy sets up global access for the CLI admins.
        """
        acl = yield self.policy("/random")
        self.assertIn(
            make_ace(Principal("admin", "admin").get_token(), all=True), acl)

    @inlineCallbacks
    def test_default_no_rules_gives_global_authenticated_access(self):
        """If no rules match, the default acl gives authenticated users
        access.

        XXX/TODO: This is intended as a temporary crutch for integration
        of the security machinery, not a long term solution.
        """
        acl = yield self.policy("/random")
        self.assertIn(make_ace("auth", "world", all=True), acl)

    @inlineCallbacks
    def test_rule_match_suppress_open_access(self):
        """If a rule returns an acl, then no default access is given."""
        principal = Principal("foobar", "foobar")
        self.policy.add_rule(lambda policy, path: [
            make_ace(principal.get_token(), all=True)])
        acl = yield self.policy("/random")

        # Check for matched rule ACL
        self.assertIn(make_ace(principal.get_token(), all=True), acl)
        # Verify no default access
        self.assertNotIn(make_ace("auth", "world", all=True), acl)

    @inlineCallbacks
    def test_rule_that_returns_deferred(self):
        """A rule may do additional lookups, resulting in deferred values.
        """
        principal = Principal("foobar", "foobar")
        self.policy.add_rule(lambda policy, path: succeed([
            make_ace(principal.get_token(), all=True)]))
        acl = yield self.policy("/random")

        # Check for matched rule ACL
        self.assertIn(make_ace(principal.get_token(), all=True), acl)
        # Verify no default access
        self.assertNotIn(make_ace("auth", "world", all=True), acl)

    @inlineCallbacks
    def test_owner_ace(self):
        """If an owner is set, all node ACLs will have an owner ACE.
        """
        owner = Principal("john", "doe")
        self.policy.set_owner(owner)
        acl = yield self.policy("/random")
        self.assertIn(make_ace(owner.get_token(), all=True), acl)


class SecureConnectionTest(TestCase):

    @inlineCallbacks
    def setUp(self):
        zookeeper.set_debug_level(0)
        self.client = yield SecurityPolicyConnection(
            get_test_zookeeper_address()).connect()

        admin = Principal("admin", "admin")
        self.token_db = TokenDatabase(self.client)
        yield self.token_db.add(admin)
        self.policy = SecurityPolicy(self.client, self.token_db, owner=admin)
        attach_defer = admin.attach(self.client)
        # Trick to speed up the auth response processing (fixed in ZK trunk)
        self.client.exists("/")
        yield attach_defer

    def tearDown(self):
        deleteTree(handle=self.client.handle)
        self.client.close()

    @inlineCallbacks
    def test_create_without_policy(self):
        """If no policy is set, the connection behaves normally."""
        def rule(policy, path):
            return [make_ace(Principal("magic", "not").get_token(), all=True)]
        self.policy.add_rule(rule)
        yield self.client.create("/xyz")
        acl, stat = yield self.client.get_acl("/xyz")
        self.assertEqual(acl, [ZOO_OPEN_ACL_UNSAFE])

    @inlineCallbacks
    def test_create_with_policy(self):
        """If a policy is set, ACLs are determined by the policy."""
        def rule(policy, path):
            return [make_ace(Principal("magic", "not").get_token(), all=True)]
        self.policy.add_rule(rule)
        self.client.set_security_policy(self.policy)
        yield self.client.create("/xyz")
        acl, stat = yield self.client.get_acl("/xyz")
        self.assertEqual(
            acl,
            [make_ace(Principal("magic", "not").get_token(), all=True),
             make_ace(Principal("admin", "admin").get_token(), all=True)])


class ACLTest(TestCase):

    @inlineCallbacks
    def setUp(self):
        zookeeper.set_debug_level(0)
        self.client = yield self.get_zookeeper_client().connect()
        self.tokens = TokenDatabase(self.client)
        self.admin = Principal("admin", "admin")
        yield self.tokens.add(self.admin)
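        # As in SecureConnectionTest.setUp above, attaching the admin
        # principal and issuing a dummy exists("/") call apparently speeds up
        # processing of the auth response (noted there as fixed in ZK trunk).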
        self.policy = SecurityPolicy(self.client, self.tokens)
        attach_deferred = self.admin.attach(self.client)
        self.client.exists("/")
        yield attach_deferred

    def tearDown(self):
        deleteTree(handle=self.client.handle)
        self.client.close()

    @inlineCallbacks
    def test_acl_on_non_existant_node(self):
        acl = ACL(self.client, "abc")
        yield self.assertFailure(acl.grant("admin", all=True), StateNotFound)

    @inlineCallbacks
    def test_acl_without_admin(self):
        """A client needs an attached principal with the admin perm to set
        the acl.
        """
        client = yield self.get_zookeeper_client().connect()
        principal = Principal("zebra", "stripes")
        yield self.tokens.add(principal)
        attach_deferred = principal.attach(client)
        yield self.client.create(
            "/abc", acls=[make_ace(self.admin.get_token(), all=True)])
        yield attach_deferred

        acl = ACL(client, "/abc")
        yield self.assertFailure(
            acl.grant("zebra", all=True), zookeeper.NoAuthException)

    @inlineCallbacks
    def test_grant(self):
        path = yield self.client.create("/abc")
        acl = ACL(self.client, path)
        yield acl.grant("admin", all=True)
        node_acl, stat = yield self.client.get_acl(path)
        self.assertEqual(
            node_acl,
            [ZOO_OPEN_ACL_UNSAFE, make_ace(self.admin.get_token(), all=True)])

    @inlineCallbacks
    def test_grant_additive(self):
        path = yield self.client.create("/abc")
        acl = ACL(self.client, "/abc")
        yield acl.grant("admin", read=True)
        yield acl.grant("admin", write=True)
        test_ace = make_ace(":", read=True, write=True)
        node_acl, stat = yield self.client.get_acl(path)
        self.assertEqual(node_acl[-1]["perms"], test_ace["perms"])

    @inlineCallbacks
    def test_grant_not_in_token_database(self):
        path = yield self.client.create("/abc")
        acl = ACL(self.client, path)
        yield self.assertFailure(acl.grant("zebra"), PrincipalNotFound)

    @inlineCallbacks
    def test_prohibit(self):
        principal = Principal("zebra", "stripes")
        yield self.tokens.add(principal)
        path = yield self.client.create("/abc", acls=[
            make_ace(self.admin.get_token(), all=True),
            make_ace(principal.get_token(), write=True)])
        acl = ACL(self.client, path)
        yield acl.prohibit("zebra")
        acl, stat = yield self.client.get_acl(path)
        self.assertEqual(
            acl, [make_ace(self.admin.get_token(), all=True)])

    @inlineCallbacks
    def test_prohibit_non_existant_node(self):
        acl = ACL(self.client, "/abc")
        yield self.assertFailure(
            acl.prohibit("zebra"), StateNotFound)

    @inlineCallbacks
    def test_prohibit_not_in_acl(self):
        principal = Principal("zebra", "stripes")
        yield self.tokens.add(principal)
        path = yield self.client.create("/abc", acls=[
            make_ace(self.admin.get_token(), all=True)])
        acl = ACL(self.client, path)
        # We get to the same end state so it's fine.
        yield acl.prohibit("zebra")
        acl, stat = yield self.client.get_acl(path)
        self.assertEqual(
            acl, [make_ace(self.admin.get_token(), all=True)])
juju-0.7.orig/juju/state/tests/test_service.py0000644000000000000000000034365012135220114017734 0ustar 00000000000000import os
import shutil

import zookeeper

from twisted.internet.defer import inlineCallbacks, Deferred, returnValue

from juju.charm.errors import CharmURLError
from juju.charm.directory import CharmDirectory
from juju.charm.tests import local_charm_id
from juju.charm.tests.test_metadata import test_repository_path
from juju.lib import serializer
from juju.machine.tests.test_constraints import (
    dummy_constraints, dummy_cs, series_constraints)
from juju.lib.pick import pick_attr
from juju.state.charm import CharmStateManager
from juju.charm.tests.test_repository import unbundled_repository
from juju.state.endpoint import RelationEndpoint
from juju.state.service import (
    ServiceStateManager, NO_HOOKS, RETRY_HOOKS, parse_service_name)
from juju.state.machine import MachineStateManager
from juju.state.relation import RelationStateManager
from juju.state.utils import YAMLState

from juju.state.errors import (
    StateChanged, ServiceStateNotFound, ServiceUnitStateNotFound,
    ServiceUnitStateMachineAlreadyAssigned, ServiceStateNameInUse,
    BadDescriptor, BadServiceStateName, ServiceUnitDebugAlreadyEnabled,
    MachineStateNotFound, NoUnusedMachines, ServiceUnitResolvedAlreadyEnabled,
    ServiceUnitRelationResolvedAlreadyEnabled, StopWatcher,
    ServiceUnitUpgradeAlreadyEnabled)
from juju.state.tests.common import StateTestBase


class ServiceStateManagerTestBase(StateTestBase):

    @inlineCallbacks
    def setUp(self):
        yield super(ServiceStateManagerTestBase, self).setUp()
        yield self.push_default_config()
        self.charm_state_manager = CharmStateManager(self.client)
        self.service_state_manager = ServiceStateManager(self.client)
        self.machine_state_manager = MachineStateManager(self.client)
        self.charm_state = yield self.charm_state_manager.add_charm_state(
            local_charm_id(self.charm), self.charm, "")
        self.relation_state_manager = RelationStateManager(self.client)

    @inlineCallbacks
    def add_service(self, name):
        service_state = yield self.service_state_manager.add_service_state(
            name, self.charm_state, dummy_constraints)
        returnValue(service_state)

    @inlineCallbacks
    def add_service_from_charm(
            self, service_name, charm_id=None, constraints=None,
            charm_dir=None, charm_name=None):
        """Add a service from a charm.
        """
        if not charm_id and charm_dir is None:
            charm_name = charm_name or service_name
            charm_dir = CharmDirectory(os.path.join(
                test_repository_path, "series", charm_name))
        if charm_id is None:
            charm_state = yield self.charm_state_manager.add_charm_state(
                local_charm_id(charm_dir), charm_dir, "")
        else:
            charm_state = yield self.charm_state_manager.get_charm_state(
                charm_id)
        service_state = yield self.service_state_manager.add_service_state(
            service_name, charm_state, constraints or dummy_constraints)
        returnValue(service_state)

    @inlineCallbacks
    def get_subordinate_charm(self):
        """Return charm state for a subordinate charm.

        Many tests rely on adding relationships to a proper subordinate.
        This returns the charm state of a testing subordinate charm.
""" unbundled_repo_path = self.makeDir() os.rmdir(unbundled_repo_path) shutil.copytree(unbundled_repository, unbundled_repo_path) sub_charm = CharmDirectory( os.path.join(unbundled_repo_path, "series", "logging")) self.charm_state_manager.add_charm_state("local:series/logging-1", sub_charm, "") logging_charm_state = yield self.charm_state_manager.get_charm_state( "local:series/logging-1") returnValue(logging_charm_state) @inlineCallbacks def add_relation(self, relation_type, relation_scope, *services): endpoints = [] for service_meta in services: service_state, relation_name, relation_role = service_meta endpoints.append(RelationEndpoint( service_state.service_name, relation_type, relation_name, relation_role, relation_scope)) relation_state = yield self.relation_state_manager.add_relation_state( *endpoints) returnValue(relation_state[0]) @inlineCallbacks def remove_service(self, internal_service_id): topology = yield self.get_topology() topology.remove_service(internal_service_id) yield self.set_topology(topology) @inlineCallbacks def remove_service_unit(self, internal_service_id, internal_unit_id): topology = yield self.get_topology() topology.remove_service_unit(internal_service_id, internal_unit_id) yield self.set_topology(topology) def add_machine_state(self, constraints=None): return self.machine_state_manager.add_machine_state( constraints or series_constraints) @inlineCallbacks def assert_machine_states(self, present, absent): """Assert that machine IDs are either `present` or `absent` in topo.""" for machine_id in present: self.assertTrue( (yield self.machine_state_manager.get_machine_state( machine_id))) for machine_id in absent: ex = yield self.assertFailure( self.machine_state_manager.get_machine_state(machine_id), MachineStateNotFound) self.assertEqual(ex.machine_id, machine_id) @inlineCallbacks def assert_machine_assignments(self, service_name, machine_ids): """Assert that `service_name` is deployed on `machine_ids`.""" topology = yield self.get_topology() service_id = topology.find_service_with_name(service_name) assigned_machine_ids = [ topology.get_service_unit_machine(service_id, unit_id) for unit_id in topology.get_service_units(service_id)] internal_machine_ids = [] for machine_id in machine_ids: if machine_id is None: # corresponds to get_service_unit_machine API in this case internal_machine_ids.append(None) else: internal_machine_ids.append("machine-%010d" % machine_id) self.assertEqual( set(assigned_machine_ids), set(internal_machine_ids)) @inlineCallbacks def get_unit_state(self): """Simple test helper to get a unit state.""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() returnValue(unit_state) class ServiceStateManagerTest(ServiceStateManagerTestBase): @inlineCallbacks def test_add_service(self): """ Adding a service state should register it in zookeeper, including the requested charm id. 
""" yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) children = yield self.client.get_children("/services") self.assertEquals(sorted(children), [ "service-0000000000", "service-0000000001"]) content, stat = yield self.client.get("/services/service-0000000000") details = serializer.load(content) self.assertTrue(details) self.assertEquals(details.get("charm"), "local:series/dummy-1") self.assertFalse(isinstance(details.get("charm"), unicode)) self.assertEquals(details.get("constraints"), series_constraints.data) topology = yield self.get_topology() self.assertEquals(topology.find_service_with_name("wordpress"), "service-0000000000") self.assertEquals(topology.find_service_with_name("mysql"), "service-0000000001") @inlineCallbacks def test_add_service_with_duplicated_name(self): """ If a service is added with a duplicated name, a meaningful error should be raised. """ yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) try: yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) except ServiceStateNameInUse, e: self.assertEquals(e.service_name, "wordpress") else: self.fail("Error not raised") @inlineCallbacks def test_get_service_and_check_attributes(self): """ Getting a service state should be possible, and the service state identification should be available through its attributes. """ yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") self.assertEquals(service_state.service_name, "wordpress") self.assertEquals(service_state.internal_id, "service-0000000000") @inlineCallbacks def test_get_service_not_found(self): """ Getting a service state which is not available should errback a meaningful error. """ try: yield self.service_state_manager.get_service_state("wordpress") except ServiceStateNotFound, e: self.assertEquals(e.service_name, "wordpress") else: self.fail("Error not raised") def test_get_unit_state(self): """A unit state can be retrieved by name from the service manager.""" self.assertFailure(self.service_state_manager.get_unit_state( "wordpress/1"), ServiceStateNotFound) self.assertFailure(self.service_state_manager.get_unit_state( "wordpress1"), ServiceUnitStateNotFound) wordpress_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) self.assertFailure(self.service_state_manager.get_unit_state( "wordpress/1"), ServiceUnitStateNotFound) wordpress_unit = wordpress_state.add_unit_state() unit_state = yield self.service_state_manager.get_unit_state( "wordpress/1") self.assertEqual(unit_state.internal_id, wordpress_unit.internal_id) @inlineCallbacks def test_get_service_charm_id(self): """ The service state should make its respective charm id available. """ yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") charm_id = yield service_state.get_charm_id() self.assertEquals(charm_id, "local:series/dummy-1") @inlineCallbacks def test_set_service_charm_id(self): """ The service state should allow its charm id to be set. 
""" yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") yield service_state.set_charm_id("local:series/dummy-2") charm_id = yield service_state.get_charm_id() self.assertEquals(charm_id, "local:series/dummy-2") @inlineCallbacks def test_get_service_constraints(self): """The service state should make constraints available""" initial_constraints = dummy_cs.parse(["cpu=256", "arch=amd64"]) yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, initial_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") constraints = yield service_state.get_constraints() self.assertEquals( constraints, initial_constraints.with_series("series")) @inlineCallbacks def test_get_service_constraints_inherits(self): """The service constraints should be combined with the environment's""" yield self.push_env_constraints("arch=arm", "cpu=32") service_constraints = dummy_cs.parse(["cpu=256"]) yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, service_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") constraints = yield service_state.get_constraints() expected_base = dummy_cs.parse(["arch=arm", "cpu=256"]) self.assertEquals(constraints, expected_base.with_series("series")) @inlineCallbacks def test_get_missing_service_constraints(self): """ Nodes created before the constraints mechanism was added should have empty constraints. """ yield self.client.delete("/constraints") yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") path = "/services/" + service_state.internal_id node = YAMLState(self.client, path) yield node.read() del node["constraints"] yield node.write() constraints = yield service_state.get_constraints() self.assertEquals(constraints.data, {}) @inlineCallbacks def test_get_missing_unit_constraints(self): """ Nodes created before the constraints mechanism was added should have empty constraints. """ yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") unit_state = yield service_state.add_unit_state() path = "/units/" + unit_state.internal_id node = YAMLState(self.client, path) yield node.read() del node["constraints"] yield node.write() constraints = yield unit_state.get_constraints() self.assertEquals(constraints.data, {}) @inlineCallbacks def test_set_service_constraints(self): """The service state should make constraints available for change""" initial_constraints = dummy_cs.parse(["cpu=256", "arch=amd64"]) yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, initial_constraints) service_state = yield self.service_state_manager.get_service_state( "wordpress") new_constraints = dummy_cs.parse(["mem=2G", "arch=arm"]) yield service_state.set_constraints(new_constraints) retrieved_constraints = yield service_state.get_constraints() self.assertEquals( retrieved_constraints, new_constraints.with_series("series")) @inlineCallbacks def test_remove_service_state(self): """ A service state can be removed along with its relations, units, and zookeeper state. 
""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) relation_state = yield self.add_relation( "rel-type2", "global", [service_state, "app", "server"]) unit_state = yield service_state.add_unit_state() machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield unit_state.assign_to_machine(machine_state) yield self.service_state_manager.remove_service_state(service_state) topology = yield self.get_topology() self.assertFalse(topology.has_relation(relation_state.internal_id)) self.assertFalse(topology.has_service(service_state.internal_id)) self.assertFalse(topology.get_service_units_in_machine( machine_state.internal_id)) exists = yield self.client.exists( "/services/%s" % service_state.internal_id) self.assertFalse(exists) @inlineCallbacks def test_add_service_unit_and_check_attributes(self): """ A service state should enable adding a new service unit under it, and again the unit should offer attributes allowing its identification. """ service_state0 = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state1 = yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) unit_state0 = yield service_state0.add_unit_state() unit_state1 = yield service_state1.add_unit_state() unit_state2 = yield service_state0.add_unit_state() unit_state3 = yield service_state1.add_unit_state() children = yield self.client.get_children("/units") self.assertEquals(sorted(children), [ "unit-0000000000", "unit-0000000001", "unit-0000000002", "unit-0000000003"]) self.assertEquals(unit_state0.service_name, "wordpress") self.assertEquals(unit_state0.internal_id, "unit-0000000000") self.assertEquals(unit_state0.unit_name, "wordpress/0") self.assertEquals(unit_state1.service_name, "mysql") self.assertEquals(unit_state1.internal_id, "unit-0000000001") self.assertEquals(unit_state1.unit_name, "mysql/0") self.assertEquals(unit_state2.service_name, "wordpress") self.assertEquals(unit_state2.internal_id, "unit-0000000002") self.assertEquals(unit_state2.unit_name, "wordpress/1") self.assertEquals(unit_state3.service_name, "mysql") self.assertEquals(unit_state3.internal_id, "unit-0000000003") self.assertEquals(unit_state3.unit_name, "mysql/1") topology = yield self.get_topology() self.assertTrue( topology.has_service_unit("service-0000000000", "unit-0000000000")) self.assertTrue( topology.has_service_unit("service-0000000001", "unit-0000000001")) self.assertTrue( topology.has_service_unit("service-0000000000", "unit-0000000002")) self.assertTrue( topology.has_service_unit("service-0000000001", "unit-0000000003")) def get_presence_path( self, relation_state, relation_role, unit_state, container=None): container = container.internal_id if container else None presence_path = "/".join(filter(None, [ "/relations", relation_state.internal_id, container, relation_role, unit_state.internal_id])) return presence_path @inlineCallbacks def test_add_service_unit_with_container(self): """ Validate adding units with containers specified and recovering that. 
""" mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") logging_charm_state = yield self.get_subordinate_charm() self.assertTrue(logging_charm_state.is_subordinate()) log_state = yield self.service_state_manager.add_service_state( "logging", logging_charm_state, dummy_constraints) mysql_state = yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) relation_state, service_states = (yield self.relation_state_manager.add_relation_state( mysql_ep, logging_ep)) unit_state1 = yield mysql_state.add_unit_state() unit_state0 = yield log_state.add_unit_state(container=unit_state1) unit_state3 = yield mysql_state.add_unit_state() unit_state2 = yield log_state.add_unit_state(container=unit_state3) self.assertEquals((yield unit_state1.get_container()), None) self.assertEquals((yield unit_state0.get_container()), unit_state1) self.assertEquals((yield unit_state2.get_container()), unit_state3) self.assertEquals((yield unit_state3.get_container()), None) for unit_state in (unit_state1, unit_state3): yield unit_state.set_private_address( "%s.example.com" % ( unit_state.unit_name.replace("/", "-"))) # construct the proper relation state mystate = pick_attr(service_states, relation_role="server") logstate = pick_attr(service_states, relation_role="client") yield logstate.add_unit_state(unit_state0) yield logstate.add_unit_state(unit_state2) yield mystate.add_unit_state(unit_state1) yield mystate.add_unit_state(unit_state3) @inlineCallbacks def verify_container(relation_state, service_relation_state, unit_state, container): presence_path = self.get_presence_path( relation_state, service_relation_state.relation_role, unit_state, container) content, stat = yield self.client.get(presence_path) self.assertTrue(stat) self.assertEqual(content, '') # verify the node data on the relation role nodes role_path = os.path.dirname(presence_path) # role path content, stat = yield self.client.get(role_path) self.assertTrue(stat) node_info = serializer.load(content) self.assertEqual( node_info["name"], service_relation_state.relation_name) self.assertEqual( node_info["role"], service_relation_state.relation_role) settings_path = os.path.dirname( os.path.dirname(presence_path)) + "/settings/" + \ unit_state.internal_id content, stat = yield self.client.get(settings_path) self.assertTrue(stat) settings_info = serializer.load(content) # Verify that private address was set # we verify the content elsewhere self.assertTrue(settings_info["private-address"]) # verify all the units are constructed as expected # first the client roles with another container yield verify_container(relation_state, logstate, unit_state0, unit_state1) yield verify_container(relation_state, logstate, unit_state2, unit_state3) # and now the principals (which are their own relation containers) yield verify_container(relation_state, logstate, unit_state0, unit_state1) yield verify_container(relation_state, mystate, unit_state1, unit_state1) @inlineCallbacks def test_get_container_no_principal(self): """Get container should handle no principal.""" mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") logging_charm_state = yield self.get_subordinate_charm() self.assertTrue(logging_charm_state.is_subordinate()) log_state = yield self.service_state_manager.add_service_state( 
"logging", logging_charm_state, dummy_constraints) mysql_state = yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) relation_state, service_states = (yield self.relation_state_manager.add_relation_state( mysql_ep, logging_ep)) unit_state1 = yield mysql_state.add_unit_state() unit_state0 = yield log_state.add_unit_state(container=unit_state1) unit_state3 = yield mysql_state.add_unit_state() unit_state2 = yield log_state.add_unit_state(container=unit_state3) self.assertEquals((yield unit_state1.get_container()), None) self.assertEquals((yield unit_state0.get_container()), unit_state1) self.assertEquals((yield unit_state2.get_container()), unit_state3) self.assertEquals((yield unit_state3.get_container()), None) # now remove a principal node and test again yield mysql_state.remove_unit_state(unit_state1) container = yield unit_state0.get_container() self.assertEquals(container, None) # the other pair is still fine self.assertEquals((yield unit_state2.get_container()), unit_state3) self.assertEquals((yield unit_state3.get_container()), None) @inlineCallbacks def test_add_service_unit_with_changing_state(self): """ When adding a service unit, there's a chance that the service will go away mid-way through. Rather than blowing up randomly, a nice error should be raised. """ service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) yield self.remove_service(service_state.internal_id) d = service_state.add_unit_state() yield self.assertFailure(d, StateChanged) @inlineCallbacks def test_get_unit_names(self): """A service's units names are retrievable.""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) expected_names = [] for i in range(3): unit_state = yield service_state.add_unit_state() expected_names.append(unit_state.unit_name) unit_names = yield service_state.get_unit_names() self.assertEqual(unit_names, expected_names) @inlineCallbacks def test_remove_service_unit(self): """Removing a service unit removes all state associated. """ service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() # Assign to a machine machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield unit_state.assign_to_machine(machine_state) # Connect a unit agent yield unit_state.connect_agent() # Now try and destroy it. yield service_state.remove_unit_state(unit_state) # Verify destruction. 
        topology = yield self.get_topology()
        self.assertTrue(topology.has_service(service_state.internal_id))
        self.assertFalse(topology.has_service_unit(
            service_state.internal_id, unit_state.internal_id))

        exists = yield self.client.exists(
            "/units/%s" % unit_state.internal_id)
        self.assertFalse(exists)

    @inlineCallbacks
    def test_remove_service_unit_nonexistent(self):
        """Removing a nonexistent service unit is fine."""
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()
        yield service_state.remove_unit_state(unit_state)
        yield service_state.remove_unit_state(unit_state)

    @inlineCallbacks
    def test_get_all_service_states(self):
        services = yield self.service_state_manager.get_all_service_states()
        self.assertFalse(services)

        yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        services = yield self.service_state_manager.get_all_service_states()
        self.assertEquals(len(services), 1)

        yield self.service_state_manager.add_service_state(
            "mysql", self.charm_state, dummy_constraints)
        services = yield self.service_state_manager.get_all_service_states()
        self.assertEquals(len(services), 2)

    @inlineCallbacks
    def test_get_service_unit(self):
        """
        Getting back service units should be possible using the
        user-oriented id.
        """
        service_state0 = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        service_state1 = yield self.service_state_manager.add_service_state(
            "mysql", self.charm_state, dummy_constraints)

        yield service_state0.add_unit_state()
        yield service_state1.add_unit_state()
        yield service_state0.add_unit_state()
        yield service_state1.add_unit_state()

        unit_state0 = yield service_state0.get_unit_state("wordpress/0")
        unit_state1 = yield service_state1.get_unit_state("mysql/0")
        unit_state2 = yield service_state0.get_unit_state("wordpress/1")
        unit_state3 = yield service_state1.get_unit_state("mysql/1")

        self.assertEquals(unit_state0.internal_id, "unit-0000000000")
        self.assertEquals(unit_state1.internal_id, "unit-0000000001")
        self.assertEquals(unit_state2.internal_id, "unit-0000000002")
        self.assertEquals(unit_state3.internal_id, "unit-0000000003")

        self.assertEquals(unit_state0.unit_name, "wordpress/0")
        self.assertEquals(unit_state1.unit_name, "mysql/0")
        self.assertEquals(unit_state2.unit_name, "wordpress/1")
        self.assertEquals(unit_state3.unit_name, "mysql/1")

    @inlineCallbacks
    def test_get_all_unit_states(self):
        service_state0 = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        service_state1 = yield self.service_state_manager.add_service_state(
            "mysql", self.charm_state, dummy_constraints)

        yield service_state0.add_unit_state()
        yield service_state1.add_unit_state()
        yield service_state0.add_unit_state()
        yield service_state1.add_unit_state()

        unit_state0 = yield service_state0.get_unit_state("wordpress/0")
        unit_state1 = yield service_state1.get_unit_state("mysql/0")
        unit_state2 = yield service_state0.get_unit_state("wordpress/1")
        unit_state3 = yield service_state1.get_unit_state("mysql/1")

        wordpress_units = yield service_state0.get_all_unit_states()
        self.assertEquals(
            set(wordpress_units), set((unit_state0, unit_state2)))

        mysql_units = yield service_state1.get_all_unit_states()
        self.assertEquals(set(mysql_units), set((unit_state1, unit_state3)))
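    # Illustrative sketch (not exercised by the suite): the
    # user-oriented unit id used above is "<service name>/<sequence>",
    # while the internal id is a zero-padded "unit-NNNNNNNNNN" node
    # name.  Splitting the public name is enough to route a lookup to
    # the owning service; the helper name here is hypothetical.
    def _sketch_parse_unit_name(self, unit_name):
        assert "/" in unit_name, "not a unit name: %r" % unit_name
        service_name, sequence = unit_name.split("/")
        return service_name, int(sequence)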
    @inlineCallbacks
    def test_get_all_unit_states_with_changing_state(self):
        """
        When getting the service unit states, there's a chance that the
        service will go away mid-way through. Rather than blowing up
        randomly, a nice error should be raised.
        """
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        yield service_state.add_unit_state()
        unit_state = (yield service_state.get_all_unit_states())[0]
        self.assertEqual(unit_state.unit_name, "wordpress/0")

        yield self.remove_service(service_state.internal_id)
        yield self.assertFailure(
            service_state.get_all_unit_states(), StateChanged)

    @inlineCallbacks
    def test_set_functions(self):
        wordpress = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        mysql = yield self.service_state_manager.add_service_state(
            "mysql", self.charm_state, dummy_constraints)

        s1 = yield self.service_state_manager.get_service_state(
            "wordpress")
        s2 = yield self.service_state_manager.get_service_state(
            "mysql")
        self.assertEquals(hash(s1), hash(wordpress))
        self.assertEquals(hash(s2), hash(mysql))
        self.assertNotEqual(s1, object())
        self.assertNotEqual(s1, s2)
        self.assertEquals(s1, wordpress)
        self.assertEquals(s2, mysql)

        us0 = yield wordpress.add_unit_state()
        us1 = yield wordpress.add_unit_state()

        unit_state0 = yield wordpress.get_unit_state("wordpress/0")
        unit_state1 = yield wordpress.get_unit_state("wordpress/1")

        self.assertEquals(us0, unit_state0)
        self.assertEquals(us1, unit_state1)
        self.assertEquals(hash(us1), hash(unit_state1))
        self.assertNotEqual(us0, object())
        self.assertNotEqual(us0, us1)

    @inlineCallbacks
    def test_get_service_unit_not_found(self):
        """
        Attempting to retrieve a non-existent service unit should
        result in an errback.
        """
        service_state0 = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        service_state1 = yield self.service_state_manager.add_service_state(
            "mysql", self.charm_state, dummy_constraints)

        # Add some state in a different service to make it a little
        # bit more prone to breaking in case of errors.
        yield service_state1.add_unit_state()

        try:
            yield service_state0.get_unit_state("wordpress/0")
        except ServiceUnitStateNotFound, e:
            self.assertEquals(e.unit_name, "wordpress/0")
        else:
            self.fail("Error not raised")

    @inlineCallbacks
    def test_get_set_public_address(self):
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()
        self.assertEqual((yield unit_state.get_public_address()), None)
        yield unit_state.set_public_address("example.foobar.com")
        self.assertEqual(
            (yield unit_state.get_public_address()), "example.foobar.com")

    @inlineCallbacks
    def test_get_set_private_address(self):
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()
        self.assertEqual((yield unit_state.get_private_address()), None)
        yield unit_state.set_private_address("example.local")
        self.assertEqual(
            (yield unit_state.get_private_address()), "example.local")
""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) yield self.remove_service(service_state.internal_id) d = service_state.get_unit_state("wordpress/0") yield self.assertFailure(d, StateChanged) @inlineCallbacks def test_get_service_unit_with_bad_service_name(self): """ Service unit names contain a service name embedded into them. The service name requested when calling get_unit_state() must match that of the object being used. """ service_state0 = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) service_state1 = yield self.service_state_manager.add_service_state( "mysql", self.charm_state, dummy_constraints) # Add some state in a different service to make it a little # bit more prone to breaking in case of errors. yield service_state1.add_unit_state() try: yield service_state0.get_unit_state("mysql/0") except BadServiceStateName, e: self.assertEquals(e.expected_name, "wordpress") self.assertEquals(e.obtained_name, "mysql") else: self.fail("Error not raised") @inlineCallbacks def test_assign_unit_to_machine(self): service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield unit_state.assign_to_machine(machine_state) topology = yield self.get_topology() self.assertEquals( topology.get_service_unit_machine(service_state.internal_id, unit_state.internal_id), machine_state.internal_id) @inlineCallbacks def test_assign_unit_to_machine_with_changing_state(self): service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield self.remove_service_unit(service_state.internal_id, unit_state.internal_id) d = unit_state.assign_to_machine(machine_state) yield self.assertFailure(d, StateChanged) yield self.remove_service(service_state.internal_id) d = unit_state.assign_to_machine(machine_state) yield self.assertFailure(d, StateChanged) @inlineCallbacks def test_unassign_unit_from_machine(self): service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield unit_state.assign_to_machine(machine_state) yield unit_state.unassign_from_machine() topology = yield self.get_topology() self.assertEquals(topology.get_service_unit_machine( service_state.internal_id, unit_state.internal_id), None) @inlineCallbacks def test_get_set_clear_resolved(self): """The a unit can be set to resolved to mark a future transition, with an optional retry flag.""" unit_state = yield self.get_unit_state() self.assertIdentical((yield unit_state.get_resolved()), None) yield unit_state.set_resolved(NO_HOOKS) yield self.assertFailure( unit_state.set_resolved(NO_HOOKS), ServiceUnitResolvedAlreadyEnabled) yield self.assertEqual((yield unit_state.get_resolved()), {"retry": NO_HOOKS}) yield unit_state.clear_resolved() self.assertIdentical((yield unit_state.get_resolved()), None) yield unit_state.clear_resolved() yield self.assertFailure(unit_state.set_resolved(None), ValueError) @inlineCallbacks def test_watch_resolved(self): """A 
    @inlineCallbacks
    def test_watch_resolved_processes_current_state(self):
        """The watch method processes the current state before returning."""
        unit_state = yield self.get_unit_state()

        results = []

        @inlineCallbacks
        def callback(value):
            results.append((yield unit_state.get_resolved()))

        yield unit_state.watch_resolved(callback)
        self.assertTrue(results)

    @inlineCallbacks
    def test_stop_watch_resolved(self):
        """A unit resolved watch can be instituted on a permanent basis.
        However, the callback can raise StopWatcher at any time to stop
        the watch.
        """
        unit_state = yield self.get_unit_state()

        results = []

        def callback(value):
            results.append(value)
            if len(results) == 1:
                raise StopWatcher()
            if len(results) == 3:
                raise StopWatcher()

        unit_state.watch_resolved(callback)
        yield unit_state.set_resolved(RETRY_HOOKS)
        yield unit_state.clear_resolved()
        yield self.poke_zk()

        unit_state.watch_resolved(callback)
        yield unit_state.set_resolved(NO_HOOKS)
        yield unit_state.clear_resolved()
        yield self.poke_zk()

        self.assertEqual(len(results), 3)
        self.assertIdentical(results.pop(0), False)
        self.assertIdentical(results.pop(0), False)
        self.assertEqual(results.pop(0).type_name, "created")

        self.assertEqual(
            (yield unit_state.get_resolved()), None)

    @inlineCallbacks
    def test_get_set_clear_relation_resolved(self):
        """A unit's relations can be set to resolved to mark a future
        transition, with an optional retry flag."""
        unit_state = yield self.get_unit_state()

        self.assertIdentical((yield unit_state.get_relation_resolved()), None)
        yield unit_state.set_relation_resolved({"0": RETRY_HOOKS})

        # Trying to set a conflicting value raises an error.
        yield self.assertFailure(
            unit_state.set_relation_resolved({"0": NO_HOOKS}),
            ServiceUnitRelationResolvedAlreadyEnabled)

        # Doing the same thing again is fine.
        yield unit_state.set_relation_resolved({"0": RETRY_HOOKS})

        # It's fine to put in new values.
        yield unit_state.set_relation_resolved({"21": RETRY_HOOKS})
        self.assertEqual(
            (yield unit_state.get_relation_resolved()),
            {"0": RETRY_HOOKS, "21": RETRY_HOOKS})

        yield unit_state.clear_relation_resolved()
        self.assertIdentical((yield unit_state.get_relation_resolved()), None)
        yield unit_state.clear_relation_resolved()

        yield self.assertFailure(
            unit_state.set_relation_resolved(True), ValueError)
        yield self.assertFailure(
            unit_state.set_relation_resolved(None), ValueError)

    @inlineCallbacks
    def test_watch_relation_resolved(self):
        """A unit relation resolved watch can be instituted on a
        permanent basis."""
        unit_state = yield self.get_unit_state()

        results = []

        def callback(value):
            results.append(value)

        unit_state.watch_relation_resolved(callback)
        yield unit_state.set_relation_resolved({"0": RETRY_HOOKS})
        yield unit_state.clear_relation_resolved()
        yield unit_state.set_relation_resolved({"0": NO_HOOKS})
        yield self.poke_zk()

        self.assertEqual(len(results), 4)
        self.assertIdentical(results.pop(0), False)
        self.assertEqual(results.pop(0).type_name, "created")
        self.assertEqual(results.pop(0).type_name, "deleted")
        self.assertEqual(results.pop(0).type_name, "created")

        self.assertEqual(
            (yield unit_state.get_relation_resolved()), {"0": NO_HOOKS})
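    # Illustrative sketch (not exercised by the suite): a permanent
    # watch keeps re-registering itself after every event, so the only
    # way for a callback to end it is to raise StopWatcher.  A typical
    # shape is a callback that stops once the state it was waiting for
    # appears; the helper name here is hypothetical.
    def _sketch_stop_after_first_created(self):
        seen = []

        def callback(change):
            seen.append(change)
            # The first invocation receives a bool (initial existence);
            # later ones receive change events carrying a type_name.
            if getattr(change, "type_name", None) == "created":
                raise StopWatcher()

        return callback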
"created") self.assertEqual(results.pop(0).type_name, "deleted") self.assertEqual(results.pop(0).type_name, "created") self.assertEqual( (yield unit_state.get_relation_resolved()), {"0": NO_HOOKS}) @inlineCallbacks def test_watch_relation_resolved_processes_current_state(self): """The watch method returns only after processing the current state.""" unit_state = yield self.get_unit_state() results = [] @inlineCallbacks def callback(value): results.append((yield unit_state.get_relation_resolved())) yield unit_state.watch_relation_resolved(callback) self.assertTrue(results) @inlineCallbacks def test_stop_watch_relation_resolved(self): """A unit resolved watch can be instituted on a permanent basis.""" unit_state = yield self.get_unit_state() results = [] def callback(value): results.append(value) if len(results) == 1: raise StopWatcher() if len(results) == 3: raise StopWatcher() unit_state.watch_relation_resolved(callback) yield unit_state.set_relation_resolved({"0": RETRY_HOOKS}) yield unit_state.clear_relation_resolved() yield self.poke_zk() self.assertEqual(len(results), 1) unit_state.watch_relation_resolved(callback) yield unit_state.set_relation_resolved({"0": RETRY_HOOKS}) yield unit_state.clear_relation_resolved() yield self.poke_zk() self.assertEqual(len(results), 3) self.assertIdentical(results.pop(0), False) self.assertIdentical(results.pop(0), False) self.assertEqual(results.pop(0).type_name, "created") self.assertEqual( (yield unit_state.get_relation_resolved()), None) @inlineCallbacks def test_watch_resolved_slow_callback(self): """A slow watch callback is still invoked serially.""" unit_state = yield self.get_unit_state() callbacks = [Deferred() for i in range(5)] results = [] contents = [] @inlineCallbacks def watch(value): results.append(value) yield callbacks[len(results) - 1] contents.append((yield unit_state.get_resolved())) callbacks[0].callback(True) yield unit_state.watch_resolved(watch) # These get collapsed into a single event yield unit_state.set_resolved(RETRY_HOOKS) yield unit_state.clear_resolved() yield self.poke_zk() # Verify the callback hasn't completed self.assertEqual(len(results), 2) self.assertEqual(len(contents), 1) # Let it finish callbacks[1].callback(True) yield self.poke_zk() # Verify result counts self.assertEqual(len(results), 3) self.assertEqual(len(contents), 2) # Verify result values. Even though we have created event, the # setting retrieved shows the hook is not enabled. self.assertEqual(results[-1].type_name, "deleted") self.assertEqual(contents[-1], None) yield unit_state.set_resolved(NO_HOOKS) callbacks[2].callback(True) yield self.poke_zk() self.assertEqual(len(results), 4) self.assertEqual(contents[-1], {"retry": NO_HOOKS}) # Clear out any pending activity. 
    @inlineCallbacks
    def test_watch_relation_resolved_slow_callback(self):
        """A slow watch callback is still invoked serially."""
        unit_state = yield self.get_unit_state()

        callbacks = [Deferred() for i in range(5)]
        results = []
        contents = []

        @inlineCallbacks
        def watch(value):
            results.append(value)
            yield callbacks[len(results) - 1]
            contents.append((yield unit_state.get_relation_resolved()))

        callbacks[0].callback(True)
        yield unit_state.watch_relation_resolved(watch)

        # These get collapsed into a single event.
        yield unit_state.set_relation_resolved({"0": RETRY_HOOKS})
        yield unit_state.clear_relation_resolved()
        yield self.poke_zk()

        # Verify that the callback hasn't completed.
        self.assertEqual(len(results), 2)
        self.assertEqual(len(contents), 1)

        # Let it finish.
        callbacks[1].callback(True)
        yield self.poke_zk()

        # Verify the result counts.
        self.assertEqual(len(results), 3)
        self.assertEqual(len(contents), 2)

        # Verify the result values. Even though we have a created
        # event, the setting retrieved shows the hook is not enabled.
        self.assertEqual(results[-1].type_name, "deleted")
        self.assertEqual(contents[-1], None)

        yield unit_state.set_relation_resolved({"0": RETRY_HOOKS})
        callbacks[2].callback(True)
        yield self.poke_zk()

        self.assertEqual(len(results), 4)
        self.assertEqual(contents[-1], {"0": RETRY_HOOKS})

        # Clear out any pending activity.
        yield self.poke_zk()
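    # Illustrative sketch (not exercised by the suite): the
    # serialization the slow-callback tests above rely on can be built
    # by chaining each notification onto the Deferred of the previous
    # callback, so a slow consumer never sees overlapping invocations.
    # The helper name is hypothetical.
    def _sketch_serialized_notifier(self, callback):
        from twisted.internet.defer import maybeDeferred, succeed
        pending = [succeed(None)]

        def notify(event):
            # Queue this event behind whatever is still running.
            pending[0] = pending[0].addCallback(
                lambda _: maybeDeferred(callback, event))
            return pending[0]

        return notify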
    @inlineCallbacks
    def test_set_and_clear_upgrade_flag(self):
        """An upgrade flag can be set on a unit."""
        # Defaults to False.
        unit_state = yield self.get_unit_state()
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, False)

        # Can be set.
        yield unit_state.set_upgrade_flag()
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, {"force": False})

        # Attempting to set multiple times is an error if the values
        # differ.
        yield self.assertFailure(
            unit_state.set_upgrade_flag(force=True),
            ServiceUnitUpgradeAlreadyEnabled)
        self.assertEqual(upgrade_flag, {"force": False})

        # Can be cleared.
        yield unit_state.clear_upgrade_flag()
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, False)

        # Can be cleared multiple times.
        yield unit_state.clear_upgrade_flag()
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, False)

        # An empty node being present is not problematic.
        yield self.client.create(unit_state._upgrade_flag_path, "")
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, False)

        yield unit_state.set_upgrade_flag(force=True)
        upgrade_flag = yield unit_state.get_upgrade_flag()
        self.assertEqual(upgrade_flag, {"force": True})

    @inlineCallbacks
    def test_watch_upgrade_flag_once(self):
        """An upgrade watch can be set to be notified of presence and
        changes."""
        unit_state = yield self.get_unit_state()
        yield unit_state.set_upgrade_flag()

        results = []

        def callback(value):
            results.append(value)

        unit_state.watch_upgrade_flag(callback, permanent=False)
        yield unit_state.clear_upgrade_flag()
        yield unit_state.set_upgrade_flag(force=True)
        yield self.sleep(0.1)
        yield self.poke_zk()

        self.assertEqual(len(results), 2)
        self.assertIdentical(results.pop(0), True)
        self.assertIdentical(results.pop().type_name, "deleted")

        self.assertEqual(
            (yield unit_state.get_upgrade_flag()),
            {"force": True})

    @inlineCallbacks
    def test_watch_upgrade_processes_current_state(self):
        unit_state = yield self.get_unit_state()

        results = []

        @inlineCallbacks
        def callback(value):
            results.append((yield unit_state.get_upgrade_flag()))

        yield unit_state.watch_upgrade_flag(callback)
        self.assertTrue(results)

    @inlineCallbacks
    def test_watch_upgrade_flag_permanent(self):
        """An upgrade watch can be instituted on a permanent basis."""
        unit_state = yield self.get_unit_state()

        results = []

        def callback(value):
            results.append(value)

        yield unit_state.watch_upgrade_flag(callback)
        self.assertTrue(results)

        yield unit_state.set_upgrade_flag()
        yield unit_state.clear_upgrade_flag()
        yield unit_state.set_upgrade_flag()
        yield self.poke_zk()

        self.assertEqual(len(results), 4)
        self.assertIdentical(results.pop(0), False)
        self.assertIdentical(results.pop(0).type_name, "created")
        self.assertIdentical(results.pop(0).type_name, "deleted")
        self.assertIdentical(results.pop(0).type_name, "created")

        self.assertEqual(
            (yield unit_state.get_upgrade_flag()),
            {"force": False})

    @inlineCallbacks
    def test_watch_upgrade_flag_waits_on_slow_callbacks(self):
        """A slow watch callback is still invoked serially."""
        unit_state = yield self.get_unit_state()

        callbacks = [Deferred() for i in range(5)]
        results = []
        contents = []

        @inlineCallbacks
        def watch(value):
            results.append(value)
            yield callbacks[len(results) - 1]
            contents.append((yield unit_state.get_upgrade_flag()))

        callbacks[0].callback(True)
        yield unit_state.watch_upgrade_flag(watch)

        # These get collapsed into a single event.
        yield unit_state.set_upgrade_flag()
        yield unit_state.clear_upgrade_flag()
        yield self.poke_zk()

        # Verify that the callback hasn't completed.
        self.assertEqual(len(results), 2)
        self.assertEqual(len(contents), 1)

        # Let it finish.
        callbacks[1].callback(True)
        yield self.poke_zk()

        # Verify the result counts.
        self.assertEqual(len(results), 3)
        self.assertEqual(len(contents), 2)

        # Verify the result values. Even though we have a created
        # event, the setting retrieved shows the hook is not enabled.
        self.assertEqual(results[-1].type_name, "deleted")
        self.assertEqual(contents[-1], False)

        yield unit_state.set_upgrade_flag()
        yield self.poke_zk()

        # Verify that the callback hasn't completed.
        self.assertEqual(len(contents), 2)

        callbacks[2].callback(True)
        yield self.poke_zk()

        # Verify the values.
        self.assertEqual(len(contents), 3)
        self.assertEqual(results[-1].type_name, "created")
        self.assertEqual(contents[-1], {"force": False})

        # Clear out any pending activity.
        yield self.poke_zk()

    @inlineCallbacks
    def test_enable_debug_hook(self):
        """Unit hook debugging can be enabled on the unit state."""
        unit_state = yield self.get_unit_state()
        enabled = yield unit_state.enable_hook_debug(["*"])
        self.assertIdentical(enabled, True)

        content, stat = yield self.client.get(
            "/units/%s/debug" % unit_state.internal_id)
        data = serializer.load(content)
        self.assertEqual(data, {"debug_hooks": ["*"]})

    @inlineCallbacks
    def test_enable_debug_multiple_named_hooks(self):
        """Unit hook debugging can be enabled for multiple hooks."""
        unit_state = yield self.get_unit_state()
        enabled = yield unit_state.enable_hook_debug(
            ["db-relation-broken", "db-relation-changed"])
        self.assertIdentical(enabled, True)

        content, stat = yield self.client.get(
            "/units/%s/debug" % unit_state.internal_id)
        data = serializer.load(content)
        self.assertEqual(
            data, {"debug_hooks": ["db-relation-broken",
                                   "db-relation-changed"]})

    @inlineCallbacks
    def test_enable_debug_all_and_named_is_error(self):
        """Unit hook debugging can be enabled for multiple hooks, but
        only if they are all named hooks."""
        unit_state = yield self.get_unit_state()
        error = yield self.assertFailure(
            unit_state.enable_hook_debug(["*", "db-relation-changed"]),
            ValueError)
        self.assertEquals(
            str(error),
            "Ambigious to debug all hooks and named hooks "
            "['*', 'db-relation-changed']")

    @inlineCallbacks
    def test_enable_debug_requires_sequence(self):
        """The enable hook debug only accepts a sequence of names."""
        unit_state = yield self.get_unit_state()
        error = yield self.assertFailure(
            unit_state.enable_hook_debug(None), AssertionError)
        self.assertEquals(str(error), "Hook names must be a list: got None")

    @inlineCallbacks
    def test_enable_named_debug_hook(self):
        """Unit hook debugging can be enabled for a named hook."""
        unit_state = yield self.get_unit_state()
        enabled = yield unit_state.enable_hook_debug(
            ["db-relation-changed"])
        self.assertIdentical(enabled, True)

        content, stat = yield self.client.get(
            "/units/%s/debug" % unit_state.internal_id)
        data = serializer.load(content)
        self.assertEqual(data, {"debug_hooks": ["db-relation-changed"]})

    @inlineCallbacks
    def test_enable_debug_hook_pre_existing(self):
        """Attempting to enable debug on a unit state already being
        debugged raises an exception.
        """
        unit_state = yield self.get_unit_state()
        yield unit_state.enable_hook_debug(["*"])
        error = yield self.assertFailure(
            unit_state.enable_hook_debug(["*"]),
            ServiceUnitDebugAlreadyEnabled)
        self.assertEquals(
            str(error),
            "Service unit 'wordpress/0' is already in debug mode.")
""" unit_state = yield self.get_unit_state() yield unit_state.enable_hook_debug(["*"]) exists = yield self.client.exists( "/units/%s/debug" % unit_state.internal_id) self.assertTrue(exists) yield self.client.close() self.client = self.get_zookeeper_client() yield self.client.connect() exists = yield self.client.exists( "/units/%s/debug" % unit_state.internal_id) self.assertFalse(exists) @inlineCallbacks def test_watch_debug_hook_once(self): """A watch can be set to notified of presence and changes.""" unit_state = yield self.get_unit_state() yield unit_state.enable_hook_debug(["*"]) results = [] def callback(value): results.append(value) yield unit_state.watch_hook_debug(callback, permanent=False) yield unit_state.clear_hook_debug() yield unit_state.enable_hook_debug(["*"]) yield self.poke_zk() self.assertEqual(len(results), 2) self.assertIdentical(results.pop(0), True) self.assertIdentical(results.pop().type_name, "deleted") self.assertEqual( (yield unit_state.get_hook_debug()), {"debug_hooks": ["*"]}) @inlineCallbacks def test_watch_debug_hook_processes_current_state(self): """A hook debug watch can be instituted on a permanent basis.""" unit_state = yield self.get_unit_state() results = [] @inlineCallbacks def callback(value): results.append((yield unit_state.get_hook_debug())) yield unit_state.watch_hook_debug(callback) self.assertTrue(results) @inlineCallbacks def test_watch_debug_hook_permanent(self): """A hook debug watch can be instituted on a permanent basis.""" unit_state = yield self.get_unit_state() results = [] def callback(value): results.append(value) yield unit_state.watch_hook_debug(callback) yield unit_state.enable_hook_debug(["*"]) yield unit_state.clear_hook_debug() yield unit_state.enable_hook_debug(["*"]) yield self.poke_zk() self.assertEqual(len(results), 4) self.assertIdentical(results.pop(0), False) self.assertIdentical(results.pop(0).type_name, "created") self.assertIdentical(results.pop(0).type_name, "deleted") self.assertIdentical(results.pop(0).type_name, "created") self.assertEqual( (yield unit_state.get_hook_debug()), {"debug_hooks": ["*"]}) @inlineCallbacks def test_watch_debug_hook_waits_on_slow_callbacks(self): """A slow watch callback is still invoked serially.""" unit_state = yield self.get_unit_state() callbacks = [Deferred() for i in range(5)] results = [] contents = [] @inlineCallbacks def watch(value): results.append(value) yield callbacks[len(results) - 1] contents.append((yield unit_state.get_hook_debug())) callbacks[0].callback(True) # Finish the current state processing yield unit_state.watch_hook_debug(watch) # These get collapsed into a single event yield unit_state.enable_hook_debug(["*"]) yield unit_state.clear_hook_debug() yield self.poke_zk() # Verify the callback hasn't completed self.assertEqual(len(results), 2) self.assertEqual(len(contents), 1) # Let it finish callbacks[1].callback(True) yield self.poke_zk() # Verify result counts self.assertEqual(len(results), 3) self.assertEqual(len(contents), 2) # Verify result values. Even though we have created event, the # setting retrieved shows the hook is not enabled. self.assertEqual(results[-1].type_name, "deleted") self.assertEqual(contents[-1], None) yield unit_state.enable_hook_debug(["*"]) yield self.poke_zk() # Verify the callback hasn't completed self.assertEqual(len(contents), 2) callbacks[2].callback(True) yield self.poke_zk() # Verify values. 
        self.assertEqual(len(contents), 3)
        self.assertEqual(results[-1].type_name, "created")
        self.assertEqual(contents[-1], {"debug_hooks": ["*"]})

        # Clear out any pending activity.
        yield self.poke_zk()

    @inlineCallbacks
    def test_service_unit_agent(self):
        """A service unit state has an associated unit agent."""
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()

        exists_d, watch_d = unit_state.watch_agent()
        exists = yield exists_d
        self.assertFalse(exists)
        yield unit_state.connect_agent()
        event = yield watch_d
        self.assertEqual(event.type_name, "created")
        self.assertEqual(event.path,
                         "/units/%s/agent" % unit_state.internal_id)

    @inlineCallbacks
    def test_get_charm_id(self):
        """A service unit knows its charm id."""
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()
        unit_charm = yield unit_state.get_charm_id()
        service_charm = yield service_state.get_charm_id()
        self.assertTrue(
            (self.charm_state.id == unit_charm == service_charm))

    @inlineCallbacks
    def test_set_charm_id(self):
        """A service unit charm can be set and is validated when set."""
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()
        yield self.assertFailure(
            unit_state.set_charm_id("abc"), CharmURLError)
        yield self.assertFailure(
            unit_state.set_charm_id("abc:foobar-a"), CharmURLError)
        yield self.assertFailure(
            unit_state.set_charm_id(None), CharmURLError)
        charm_id = "local:series/name-1"
        yield unit_state.set_charm_id(charm_id)
        value = yield unit_state.get_charm_id()
        self.assertEqual(charm_id, value)

    @inlineCallbacks
    def test_add_unit_state_combines_constraints(self):
        """Constraints are inherited both from juju defaults and the
        service."""
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state,
            dummy_cs.parse(["arch=arm", "mem=1G"]))
        unit_state = yield service_state.add_unit_state()
        constraints = yield unit_state.get_constraints()
        expected = {
            "arch": "arm", "cpu": 1, "mem": 1024,
            "provider-type": "dummy", "ubuntu-series": "series"}
        self.assertEquals(constraints, expected)

    @inlineCallbacks
    def test_unassign_unit_from_machine_without_being_assigned(self):
        """
        When unassigning a machine from a unit, it is possible that
        the machine has not been previously assigned, or that it was
        assigned but the state changed beneath us.  In either case,
        the end state is the intended state, so we simply move forward
        without any errors here, to avoid having to handle the extra
        complexity of dealing with the concurrency problems.
        """
        service_state = yield self.service_state_manager.add_service_state(
            "wordpress", self.charm_state, dummy_constraints)
        unit_state = yield service_state.add_unit_state()

        yield unit_state.unassign_from_machine()

        topology = yield self.get_topology()
        self.assertEquals(topology.get_service_unit_machine(
            service_state.internal_id, unit_state.internal_id), None)

        machine_id = yield unit_state.get_assigned_machine_id()
        self.assertEqual(machine_id, None)
""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() machine_state0 = yield self.machine_state_manager.add_machine_state( series_constraints) machine_state1 = yield self.machine_state_manager.add_machine_state( series_constraints) yield unit_state.assign_to_machine(machine_state0) # Assigning again to the same machine is a NOOP, so nothing # terrible should happen if we let it go through. yield unit_state.assign_to_machine(machine_state0) try: yield unit_state.assign_to_machine(machine_state1) except ServiceUnitStateMachineAlreadyAssigned, e: self.assertEquals(e.unit_name, "wordpress/0") else: self.fail("Error not raised") machine_id = yield unit_state.get_assigned_machine_id() self.assertEqual(machine_id, 0) @inlineCallbacks def test_unassign_unit_from_machine_with_changing_state(self): service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) unit_state = yield service_state.add_unit_state() yield self.remove_service_unit(service_state.internal_id, unit_state.internal_id) d = unit_state.unassign_from_machine() yield self.assertFailure(d, StateChanged) d = unit_state.get_assigned_machine_id() yield self.assertFailure(d, StateChanged) yield self.remove_service(service_state.internal_id) d = unit_state.unassign_from_machine() yield self.assertFailure(d, StateChanged) d = unit_state.get_assigned_machine_id() yield self.assertFailure(d, StateChanged) @inlineCallbacks def test_assign_unit_to_unused_machine(self): """Verify that unused machines can be assigned to when their machine constraints match the service unit's.""" yield self.machine_state_manager.add_machine_state( series_constraints) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield \ self.machine_state_manager.add_machine_state(series_constraints) yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield self.service_state_manager.remove_service_state( mysql_service_state) wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield wordpress_unit_state.assign_to_unused_machine() self.assertEqual( (yield self.get_topology()).get_machines(), ["machine-0000000000", "machine-0000000001"]) yield self.assert_machine_assignments("wordpress", [1]) @inlineCallbacks def test_assign_unit_to_unused_machine_bad_constraints(self): """Verify that unused machines do not get allocated service units with non-matching constraints.""" yield self.machine_state_manager.add_machine_state( series_constraints) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield \ self.machine_state_manager.add_machine_state( series_constraints) yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield self.service_state_manager.remove_service_state( mysql_service_state) other_constraints = dummy_cs.parse(["arch=arm"]) wordpress_service_state = yield self.add_service_from_charm( "wordpress", constraints=other_constraints.with_series("series")) wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield self.assertFailure( wordpress_unit_state.assign_to_unused_machine(), NoUnusedMachines) self.assertEqual( (yield self.get_topology()).get_machines(), ["machine-0000000000", 
"machine-0000000001"]) yield self.assert_machine_assignments("wordpress", [None]) @inlineCallbacks def test_assign_unit_to_unused_machine_with_changing_state_service(self): """Verify `StateChanged` raised if service is manipulated during reuse. """ yield self.machine_state_manager.add_machine_state( series_constraints) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield self.service_state_manager.remove_service_state( mysql_service_state) wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield self.remove_service(wordpress_service_state.internal_id) yield self.assertFailure( wordpress_unit_state.assign_to_unused_machine(), StateChanged) @inlineCallbacks def test_assign_unit_to_unused_machine_with_changing_state_service_unit(self): "Verify `StateChanged` raised if unit is manipulated during reuse." yield self.machine_state_manager.add_machine_state( series_constraints) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield self.service_state_manager.remove_service_state( mysql_service_state) wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield self.remove_service_unit( wordpress_service_state.internal_id, wordpress_unit_state.internal_id) yield self.assertFailure( wordpress_unit_state.assign_to_unused_machine(), StateChanged) @inlineCallbacks def test_assign_unit_to_unused_machine_only_machine_zero(self): """Verify when the only available machine is machine 0""" yield self.machine_state_manager.add_machine_state( series_constraints) wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield self.assertFailure( wordpress_unit_state.assign_to_unused_machine(), NoUnusedMachines) @inlineCallbacks def test_assign_unit_to_unused_machine_none_available(self): """Verify when there are no unused machines""" yield self.machine_state_manager.add_machine_state( series_constraints) mysql_service_state = yield self.add_service_from_charm("mysql") mysql_unit_state = yield mysql_service_state.add_unit_state() mysql_machine_state = yield self.machine_state_manager.add_machine_state( series_constraints) yield mysql_unit_state.assign_to_machine(mysql_machine_state) yield self.assert_machine_assignments("mysql", [1]) wordpress_service_state = yield self.add_service_from_charm( "wordpress") wordpress_unit_state = yield wordpress_service_state.add_unit_state() yield self.assertFailure( wordpress_unit_state.assign_to_unused_machine(), NoUnusedMachines) @inlineCallbacks def test_watch_relations_processes_current_state(self): """ The watch method returns only after processing initial state. Note the callback is only invoked if there are changes requiring processing. 
""" service_state = yield self.add_service("wordpress") yield self.add_relation( "rel-type", "global", [service_state, "name", "role"]) results = [] def callback(*args): results.append(True) yield service_state.watch_relation_states(callback) self.assertTrue(results) @inlineCallbacks def test_watch_relations_when_being_created(self): """ We can watch relations before we have any. """ service_state = yield self.add_service("wordpress") wait_callback = [Deferred() for i in range(5)] calls = [] def watch_relations(old_relations, new_relations): calls.append((old_relations, new_relations)) wait_callback[len(calls) - 1].callback(True) # Start watching service_state.watch_relation_states(watch_relations) # Callback is still untouched self.assertEquals(calls, []) # add a service relation and wait for the callback relation_state = yield self.add_relation( "rel-type", "global", [service_state, "name", "role"]) yield wait_callback[1] # verify the result self.assertEquals(len(calls), 2) old_relations, new_relations = calls[1] self.assertFalse(old_relations) self.assertEquals(new_relations[0].relation_role, "role") # add a new relation with the service assigned to it. relation_state2 = yield self.add_relation( "rel-type2", "global", [service_state, "app", "server"]) yield wait_callback[2] self.assertEquals(len(calls), 3) old_relations, new_relations = calls[2] self.assertEquals([r.internal_relation_id for r in old_relations], [relation_state.internal_id]) self.assertEquals([r.internal_relation_id for r in new_relations], [relation_state.internal_id, relation_state2.internal_id]) @inlineCallbacks def test_watch_relations_may_defer(self): """ The watch relations callback may return a deferred so that it performs some its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. """ wait_callback = [Deferred() for i in range(5)] finish_callback = [Deferred() for i in range(5)] calls = [] def watch_relations(old_relations, new_relations): calls.append((old_relations, new_relations)) wait_callback[len(calls) - 1].callback(True) return finish_callback[len(calls) - 1] service_state = yield self.add_service("s-1") service_state.watch_relation_states(watch_relations) # Shouldn't have any callbacks yet. self.assertEquals(calls, []) # Assign to a relation. yield self.add_relation("rel-type", "global", (service_state, "name", "role")) # Hold off until callback is started. yield wait_callback[0] # Assign to another relation. yield self.add_relation("rel-type", "global", (service_state, "name2", "role")) # Give a chance for something bad to happen. yield self.sleep(0.3) # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. 
    @inlineCallbacks
    def test_get_relation_endpoints_service_name(self):
        """Test getting endpoints with the descriptor ``<service name>``."""
        yield self.add_service_from_charm("wordpress")
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "wordpress")),
            [RelationEndpoint("wordpress", "varnish", "cache", "client"),
             RelationEndpoint("wordpress", "mysql", "db", "client"),
             RelationEndpoint("wordpress", "http", "url", "server"),
             RelationEndpoint("wordpress", "juju-info", "juju-info",
                              "server")])
        yield self.add_service_from_charm("riak")
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "riak")),
            [RelationEndpoint("riak", "http", "admin", "server"),
             RelationEndpoint("riak", "http", "endpoint", "server"),
             RelationEndpoint("riak", "riak", "ring", "peer"),
             RelationEndpoint("riak", "juju-info", "juju-info", "server")])

    @inlineCallbacks
    def test_get_relation_endpoints_service_name_relation_name(self):
        """Test getting endpoints with the descriptor
        ``<service name>:<relation name>``."""
        yield self.add_service_from_charm("wordpress")
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "wordpress:url")),
            [RelationEndpoint("wordpress", "http", "url", "server"),
             RelationEndpoint("wordpress", "juju-info", "juju-info",
                              "server")])
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "wordpress:db")),
            [RelationEndpoint("wordpress", "mysql", "db", "client"),
             RelationEndpoint("wordpress", "juju-info", "juju-info",
                              "server")])
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "wordpress:cache")),
            [RelationEndpoint("wordpress", "varnish", "cache", "client"),
             RelationEndpoint("wordpress", "juju-info", "juju-info",
                              "server")])
        yield self.add_service_from_charm("riak")
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "riak:ring")),
            [RelationEndpoint("riak", "riak", "ring", "peer"),
             RelationEndpoint("riak", "juju-info", "juju-info", "server")])

    @inlineCallbacks
    def test_descriptor_for_services_without_charms(self):
        """Test with services that have no corresponding charms defined."""
        yield self.add_service("nocharm")
        # Verify that we get the implicit interface.
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "nocharm")),
            [RelationEndpoint("nocharm", "juju-info", "juju-info",
                              "server")])
        self.assertEqual(
            (yield self.service_state_manager.get_relation_endpoints(
                "nocharm:nonsense")),
            [RelationEndpoint("nocharm", "juju-info", "juju-info",
                              "server")])

    @inlineCallbacks
    def test_descriptor_for_missing_service(self):
        """Test with a service that is not in the topology."""
        yield self.assertFailure(
            self.service_state_manager.get_relation_endpoints("notadded"),
            ServiceStateNotFound)

    @inlineCallbacks
    def test_bad_descriptors(self):
        """Test that the descriptors meet the minimum naming standards."""
        yield self.assertFailure(
            self.service_state_manager.get_relation_endpoints("a:b:c"),
            BadDescriptor)
        yield self.assertFailure(
            self.service_state_manager.get_relation_endpoints(""),
            BadDescriptor)

    @inlineCallbacks
    def test_join_descriptors_service_name(self):
        """Test joining descriptors of the form ``<service name>``."""
        yield self.add_service_from_charm("wordpress")
        yield self.add_service_from_charm("mysql")
        self.assertEqual(
            (yield self.service_state_manager.join_descriptors(
                "wordpress", "mysql")),
            [(RelationEndpoint("wordpress", "mysql", "db", "client"),
              RelationEndpoint("mysql", "mysql", "server", "server"))])
        # Symmetric; note the pair has rotated.
        self.assertEqual(
            (yield self.service_state_manager.join_descriptors(
                "mysql", "wordpress")),
            [(RelationEndpoint("mysql", "mysql", "server", "server"),
              RelationEndpoint("wordpress", "mysql", "db", "client"))])
        yield self.add_service_from_charm("varnish")
        self.assertEqual(
            (yield self.service_state_manager.join_descriptors(
                "wordpress", "varnish")),
            [(RelationEndpoint("wordpress", "varnish", "cache", "client"),
              RelationEndpoint("varnish", "varnish", "webcache", "server"))])
"mysql", "wordpress")), [(RelationEndpoint("mysql", "mysql", "server", "server"), RelationEndpoint("wordpress", "mysql", "db", "client"))]) yield self.add_service_from_charm("varnish") self.assertEqual( (yield self.service_state_manager.join_descriptors( "wordpress", "varnish")), [(RelationEndpoint("wordpress", "varnish", "cache", "client"), RelationEndpoint("varnish", "varnish", "webcache", "server"))]) @inlineCallbacks def test_join_descriptors_service_name_relation_name(self): """Test joining descriptors ````""" yield self.add_service_from_charm("wordpress") yield self.add_service_from_charm("mysql") self.assertEqual( (yield self.service_state_manager.join_descriptors( "wordpress:db", "mysql")), [(RelationEndpoint("wordpress", "mysql", "db", "client"), RelationEndpoint("mysql", "mysql", "server", "server"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "mysql:server", "wordpress")), [(RelationEndpoint("mysql", "mysql", "server", "server"), RelationEndpoint("wordpress", "mysql", "db", "client"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "mysql:server", "wordpress:db")), [(RelationEndpoint("mysql", "mysql", "server", "server"), RelationEndpoint("wordpress", "mysql", "db", "client"))]) yield self.add_service_from_charm("varnish") self.assertEqual( (yield self.service_state_manager.join_descriptors( "wordpress:cache", "varnish")), [(RelationEndpoint("wordpress", "varnish", "cache", "client"), RelationEndpoint("varnish", "varnish", "webcache", "server"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "wordpress:cache", "varnish:webcache")), [(RelationEndpoint("wordpress", "varnish", "cache", "client"), RelationEndpoint("varnish", "varnish", "webcache", "server"))]) @inlineCallbacks def test_join_peer_descriptors(self): """Test joining of peer relation descriptors""" yield self.add_service_from_charm("riak") self.assertEqual( (yield self.service_state_manager.join_descriptors( "riak", "riak")), [(RelationEndpoint("riak", "riak", "ring", "peer"), RelationEndpoint("riak", "riak", "ring", "peer"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "riak:ring", "riak")), [(RelationEndpoint("riak", "riak", "ring", "peer"), RelationEndpoint("riak", "riak", "ring", "peer"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "riak:ring", "riak:ring")), [(RelationEndpoint("riak", "riak", "ring", "peer"), RelationEndpoint("riak", "riak", "ring", "peer"))]) self.assertEqual( (yield self.service_state_manager.join_descriptors( "riak:no-ring", "riak:ring")), []) @inlineCallbacks def test_join_descriptors_no_common_relation(self): """Test joining of descriptors that do not share a relation""" yield self.add_service_from_charm("mysql") yield self.add_service_from_charm("riak") yield self.add_service_from_charm("wordpress") yield self.add_service_from_charm("varnish") self.assertEqual((yield self.service_state_manager.join_descriptors( "mysql", "riak")), []) self.assertEqual((yield self.service_state_manager.join_descriptors( "mysql:server", "riak:ring")), []) self.assertEqual((yield self.service_state_manager.join_descriptors( "varnish", "mysql")), []) self.assertEqual((yield self.service_state_manager.join_descriptors( "riak:ring", "riak:admin")), []) self.assertEqual((yield self.service_state_manager.join_descriptors( "riak", "wordpress")), []) @inlineCallbacks def test_join_descriptors_no_service_state(self): """Test joining of nonexistent services""" yield 
self.add_service_from_charm("wordpress") yield self.assertFailure(self.service_state_manager.join_descriptors( "wordpress", "nosuch"), ServiceStateNotFound) yield self.assertFailure(self.service_state_manager.join_descriptors( "notyet", "nosuch"), ServiceStateNotFound) @inlineCallbacks def test_watch_services_initial_callback(self): """A services watch processes the initial state before returning. Note the callback is only executed if there is some meaningful state change. """ results = [] def callback(*args): results.append(True) yield self.service_state_manager.watch_service_states(callback) yield self.add_service("wordpress") yield self.poke_zk() self.assertTrue(results) @inlineCallbacks def test_watch_services_when_being_created(self): """ It should be possible to start watching services even before they are created. In this case, the callback will be made when it's actually introduced. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_services(old_services, new_services): calls.append((old_services, new_services)) wait_callback[len(calls) - 1].callback(True) # Start watching. self.service_state_manager.watch_service_states(watch_services) # Callback is still untouched. self.assertEquals(calls, []) # Add a service, and wait for callback. yield self.add_service("wordpress") yield wait_callback[0] # The first callback must have been fired, and its first argument # must be the empty set, since no services had been seen yet. self.assertEquals(len(calls), 1) old_services, new_services = calls[0] self.assertEquals(old_services, set()) self.assertEquals(new_services, set(["wordpress"])) # Add a service again. yield self.add_service("mysql") yield wait_callback[1] # Now the watch callback must have been fired with two # different service sets. The old one, and the new one. self.assertEquals(len(calls), 2) old_services, new_services = calls[1] self.assertEquals(old_services, set(["wordpress"])) self.assertEquals(new_services, set(["mysql", "wordpress"])) @inlineCallbacks def test_watch_services_may_defer(self): """ The watch services callback may return a deferred so that it performs some of its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. """ wait_callback = [Deferred() for i in range(10)] finish_callback = [Deferred() for i in range(10)] calls = [] def watch_services(old_services, new_services): calls.append((old_services, new_services)) wait_callback[len(calls) - 1].callback(True) return finish_callback[len(calls) - 1] # Start watching. self.service_state_manager.watch_service_states(watch_services) # Create the service. yield self.add_service("wordpress") # Hold off until callback is started. yield wait_callback[0] # Add another service. yield self.add_service("mysql") # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. self.assertEquals(len(calls), 2) old_services, new_services = calls[1] self.assertEquals(old_services, set(["wordpress"])) self.assertEquals(new_services, set(["mysql", "wordpress"])) @inlineCallbacks def test_watch_services_with_changing_topology(self): """ If the topology changes in an unrelated way, the services watch callback should not be called with two equal arguments. 
""" wait_callback = [Deferred() for i in range(10)] calls = [] def watch_services(old_services, new_services): calls.append((old_services, new_services)) wait_callback[len(calls) - 1].callback(True) # Start watching. self.service_state_manager.watch_service_states(watch_services) # Callback is still untouched. self.assertEquals(calls, []) # Add a service, and wait for callback. yield self.add_service("wordpress") yield wait_callback[0] # Now change the topology in an unrelated way. yield self.machine_state_manager.add_machine_state( series_constraints) # Add a service again. yield self.add_service("mysql") yield wait_callback[1] # The unrelated machine change must not have produced an extra call. self.assertEquals(len(calls), 2) @inlineCallbacks def test_watch_service_units_initial_callback(self): """A service unit watch processes the initial state before returning. Note the callback is only executed if there is some meaningful state change. """ results = [] def callback(*args): results.append(True) service_state = yield self.add_service("wordpress") yield service_state.watch_service_unit_states(callback) yield service_state.add_unit_state() yield self.poke_zk() self.assertTrue(results) @inlineCallbacks def test_watch_service_units_when_being_created(self): """ It should be possible to start watching service units even before they are created. In this case, the callback will be made when it's actually introduced. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_service_units(old_service_units, new_service_units): calls.append((old_service_units, new_service_units)) wait_callback[len(calls) - 1].callback(True) # Start watching. service_state = yield self.add_service("wordpress") service_state.watch_service_unit_states(watch_service_units) # Callback is still untouched. self.assertEquals(calls, []) # Add a service unit, and wait for callback. yield service_state.add_unit_state() yield wait_callback[0] # The first callback must have been fired, and its first argument # must be the empty set, since no units had been seen yet. self.assertEquals(len(calls), 1) old_service_units, new_service_units = calls[0] self.assertEquals(old_service_units, set()) self.assertEquals(new_service_units, set(["wordpress/0"])) # Add another service unit. yield service_state.add_unit_state() yield wait_callback[1] # Now the watch callback must have been fired with two # different service unit sets. The old one, and the new one. self.assertEquals(len(calls), 2) old_service_units, new_service_units = calls[1] self.assertEquals(old_service_units, set(["wordpress/0"])) self.assertEquals(new_service_units, set(["wordpress/0", "wordpress/1"])) @inlineCallbacks def test_watch_service_units_may_defer(self): """ The watch service units callback may return a deferred so that it performs some of its logic asynchronously. In this case, it must not be called a second time before its postponed logic is finished completely. """ wait_callback = [Deferred() for i in range(10)] finish_callback = [Deferred() for i in range(10)] calls = [] def watch_service_units(old_service_units, new_service_units): calls.append((old_service_units, new_service_units)) wait_callback[len(calls) - 1].callback(True) return finish_callback[len(calls) - 1] # Start watching. service_state = yield self.add_service("wordpress") service_state.watch_service_unit_states(watch_service_units) # Create the service unit. yield service_state.add_unit_state() # Hold off until callback is started. yield wait_callback[0] # Add another service unit. 
yield service_state.add_unit_state() # Ensure we still have a single call. self.assertEquals(len(calls), 1) # Allow the first call to be completed, and wait on the # next one. finish_callback[0].callback(None) yield wait_callback[1] finish_callback[1].callback(None) # We should have the second change now. self.assertEquals(len(calls), 2) old_service_units, new_service_units = calls[1] self.assertEquals(old_service_units, set(["wordpress/0"])) self.assertEquals( new_service_units, set(["wordpress/0", "wordpress/1"])) @inlineCallbacks def test_watch_service_units_with_changing_topology(self): """ If the topology changes in an unrelated way, the service units watch callback should not be called with two equal arguments. """ wait_callback = [Deferred() for i in range(10)] calls = [] def watch_service_units(old_service_units, new_service_units): calls.append((old_service_units, new_service_units)) wait_callback[len(calls) - 1].callback(True) # Start watching. service_state = yield self.add_service("wordpress") service_state.watch_service_unit_states(watch_service_units) # Callback is still untouched. self.assertEquals(calls, []) # Add a service unit, and wait for callback. yield service_state.add_unit_state() yield wait_callback[0] # Now change the topology in an unrelated way. yield self.machine_state_manager.add_machine_state( series_constraints) # Add another service unit. yield service_state.add_unit_state() yield wait_callback[1] # The unrelated machine change must not have produced an extra call. self.assertEquals(len(calls), 2) @inlineCallbacks def test_service_config_get_set(self): """Validate that we can set and get service config options.""" wordpress = yield self.add_service_from_charm("wordpress") # attempt to get the initialized service state config = yield wordpress.get_config() # the initial state holds the charm defaults self.assertEqual(config, {"blog-title": "My Title"}) # behaves as a normal dict self.assertRaises(KeyError, config.__getitem__, "missing") # various ways to set state config.update(dict(alpha="beta", one="two")) config["another"] = "value" # write the values yield config.write() # we should be able to read the config and see the same values # (in this case it would be the cached object) config2 = yield wordpress.get_config() self.assertEqual(config2, {"alpha": "beta", "one": "two", "another": "value", "blog-title": "My Title"}) # now set a non-string value and recover it config2["number"] = 1 config2["one"] = None yield config2.write() yield config.read() self.assertEquals(config["number"], 1) self.assertEquals(config["one"], None) @inlineCallbacks def test_service_config_get_returns_new(self): """Validate that get_config returns a fresh, independent state each call.""" wordpress = yield self.add_service_from_charm("wordpress") # attempt to get the initialized service state config = yield wordpress.get_config() config.update({"foo": "bar"}) # Defaults come through self.assertEqual(config, {"foo": "bar", "blog-title": "My Title"}) config2 = yield wordpress.get_config() self.assertEqual(config2, {"blog-title": "My Title"}) yield config.write() self.assertEqual(config, {"foo": "bar", "blog-title": "My Title"}) # config2 is a separate YAML state and still holds only the charm defaults. self.assertEqual(config2, {"blog-title": "My Title"}) yield config2.read() self.assertEqual(config2, {"foo": "bar", "blog-title": "My Title"}) # The default was never written to storage. 
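# Read the node directly from ZooKeeper to confirm that only the explicitly written key was persisted.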
data, stat = yield self.client.get( "/services/%s/config" % wordpress.internal_id) self.assertEqual(serializer.load(data), {"foo": "bar"}) @inlineCallbacks def test_get_charm_state(self): wordpress = yield self.add_service_from_charm("wordpress") charm = yield wordpress.get_charm_state() self.assertEqual(charm.name, "wordpress") metadata = yield charm.get_metadata() self.assertEqual(metadata.summary, "Blog engine") class ExposedFlagTest(ServiceStateManagerTestBase): @inlineCallbacks def test_set_and_clear_exposed_flag(self): """An exposed flag can be set on a service.""" # Defaults to false service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) exposed_flag = yield service_state.get_exposed_flag() self.assertEqual(exposed_flag, False) # Can be set yield service_state.set_exposed_flag() exposed_flag = yield service_state.get_exposed_flag() self.assertEqual(exposed_flag, True) # Can be set multiple times yield service_state.set_exposed_flag() exposed_flag = yield service_state.get_exposed_flag() self.assertEqual(exposed_flag, True) # Can be cleared yield service_state.clear_exposed_flag() exposed_flag = yield service_state.get_exposed_flag() self.assertEqual(exposed_flag, False) # Can be cleared multiple times yield service_state.clear_exposed_flag() exposed_flag = yield service_state.get_exposed_flag() self.assertEqual(exposed_flag, False) @inlineCallbacks def test_watch_exposed_flag(self): """An exposed-flag watch is set up on a permanent basis.""" service_state = yield self.add_service("wordpress") results = [] def callback(value): results.append(value) yield service_state.set_exposed_flag() # verify that the current state is processed before returning yield service_state.watch_exposed_flag(callback) yield service_state.clear_exposed_flag() yield service_state.set_exposed_flag() yield service_state.clear_exposed_flag() yield service_state.clear_exposed_flag() # should be ignored yield service_state.set_exposed_flag() self.assertEqual((yield service_state.get_exposed_flag()), True) self.assertEqual(results, [True, False, True, False, True]) @inlineCallbacks def test_stop_watch_exposed_flag(self): """The watch is set up on a permanent basis, but can be stopped. The callback can raise StopWatcher at any time to stop the watch. 
""" service_state = yield self.add_service("wordpress") results = [] def callback(value): results.append(value) if len(results) == 2: raise StopWatcher() if len(results) == 4: raise StopWatcher() yield service_state.watch_exposed_flag(callback) yield service_state.set_exposed_flag() # two sets in a row do not retrigger callback since no change # in flag status yield service_state.set_exposed_flag() yield service_state.clear_exposed_flag() # no callback now, because StopWatcher was just raised yield service_state.set_exposed_flag() # then setup watch again yield service_state.watch_exposed_flag(callback) yield service_state.clear_exposed_flag() # no callbacks for these two lines, because StopWatcher was # already raised yield service_state.set_exposed_flag() yield service_state.clear_exposed_flag() self.assertEqual( (yield service_state.get_exposed_flag()), False) self.assertEqual(results, [False, True, True, False]) @inlineCallbacks def test_watch_exposed_flag_waits_on_slow_callbacks(self): """Verify that a slow watch callback is still invoked serially.""" service_state = yield self.add_service("wordpress") callbacks = [Deferred() for i in range(3)] before = [] # values seen before callback in `cb_watch` after = [] # and after @inlineCallbacks def cb_watch(value): before.append(value) yield callbacks[len(before) - 1] after.append((yield service_state.get_exposed_flag())) yield service_state.set_exposed_flag() # Need to let first callback be completed, otherwise will wait # forever in watch_exposed_flag. This is because `cb_watch` is # initially called in the setup of the watch callbacks[0].callback(True) yield service_state.watch_exposed_flag(cb_watch) self.assertEqual(before, [True]) self.assertEqual(after, [True]) # Go through the watch again, verifying that it is waiting on # `callbacks[1]` yield service_state.clear_exposed_flag() yield self.poke_zk() self.assertEqual(before, [True, False]) self.assertEqual(after, [True]) # Now let `cb_watch` finish callbacks[1].callback(True) yield self.poke_zk() # Go through another watch cycle yield service_state.set_exposed_flag() yield self.poke_zk() # Verify results, still haven't advanced through `callbacks[2]` self.assertEqual(before, [True, False, True]) self.assertEqual(after, [True, False]) # Now let it go through, verifying that `before` hasn't # changed, but `after` has now updated callbacks[2].callback(True) yield self.poke_zk() self.assertEqual(before, [True, False, True]) self.assertEqual(after, [True, False, True]) class PortsTest(ServiceStateManagerTestBase): @inlineCallbacks def test_watch_config_options(self): """Verify callback trigger on config options modification""" service_state = yield self.service_state_manager.add_service_state( "wordpress", self.charm_state, dummy_constraints) results = [] def callback(value): results.append(value) yield service_state.watch_config_state(callback) config = yield service_state.get_config() config["alpha"] = "beta" yield config.write() yield self.poke_zk() self.assertIdentical(results.pop(0), True) self.assertIdentical(results.pop(0).type_name, "changed") # and changing it again should trigger the callback again config["gamma"] = "delta" yield config.write() yield self.poke_zk() self.assertEqual(len(results), 1) self.assertIdentical(results.pop(0).type_name, "changed") @inlineCallbacks def test_get_open_ports(self): """Verify introspection and that the ports changes are immediate.""" service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() # verify 
no open ports before activity self.assertEqual((yield unit_state.get_open_ports()), []) # then open_port, close_port yield unit_state.open_port(80, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}]) yield unit_state.open_port(53, "udp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}]) yield unit_state.open_port(53, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}]) yield unit_state.open_port(443, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}, {"port": 443, "proto": "tcp"}]) yield unit_state.close_port(80, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 53, "proto": "udp"}, {"port": 53, "proto": "tcp"}, {"port": 443, "proto": "tcp"}]) @inlineCallbacks def test_close_open_port(self): """Verify closing an unopened port, then actually opening it, works.""" service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() yield unit_state.close_port(80, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), []) yield unit_state.open_port(80, "tcp") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 80, "proto": "tcp"}]) @inlineCallbacks def test_open_ports_znode_representation(self): """Verify the specific representation of open ports in ZK.""" service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() ports_path = "/units/unit-0000000000/ports" # verify no node exists before activity self.assertFailure( self.client.get(ports_path), zookeeper.NoNodeException) # verify representation format after open_port, close_port yield unit_state.open_port(80, "tcp") content, stat = yield self.client.get(ports_path) self.assertEquals( serializer.load(content), {"open": [{"port": 80, "proto": "tcp"}]}) yield unit_state.open_port(53, "udp") content, stat = yield self.client.get(ports_path) self.assertEquals( serializer.load(content), {"open": [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}]}) yield unit_state.open_port(443, "tcp") content, stat = yield self.client.get(ports_path) self.assertEquals( serializer.load(content), {"open": [{"port": 80, "proto": "tcp"}, {"port": 53, "proto": "udp"}, {"port": 443, "proto": "tcp"}]}) yield unit_state.close_port(80, "tcp") content, stat = yield self.client.get(ports_path) self.assertEquals( serializer.load(content), {"open": [{"port": 53, "proto": "udp"}, {"port": 443, "proto": "tcp"}]}) @inlineCallbacks def test_watch_ports(self): """An open ports watch notifies of presence and changes.""" service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() yield unit_state.open_port(80, "tcp") yield unit_state.open_port(53, "udp") yield unit_state.open_port(443, "tcp") results = [] def callback(value): results.append(value) # set up a permanent watch unit_state.watch_ports(callback) # do two actions yield unit_state.close_port(80, "tcp") yield unit_state.open_port(22, "tcp") # and see the initial setup callback plus one changed event per action yield self.poke_zk() self.assertEqual(len(results), 3) self.assertEqual(results.pop(0), True) self.assertEqual(results.pop().type_name, "changed") self.assertEqual(results.pop().type_name, "changed") self.assertEqual( (yield 
unit_state.get_open_ports()), [{"port": 53, "proto": "udp"}, {"port": 443, "proto": "tcp"}, {"port": 22, "proto": "tcp"}]) @inlineCallbacks def test_stop_watch_ports(self): """A ports watch is set up on a permanent basis. However, the callback can raise StopWatcher at any time to stop the watch. """ service_state = yield self.add_service("wordpress") unit_state = yield service_state.add_unit_state() yield unit_state.open_port(80, "tcp") results = [] def callback(value): results.append(value) if len(results) == 1: raise StopWatcher() if len(results) == 4: raise StopWatcher() unit_state.watch_ports(callback) yield unit_state.close_port(80, "tcp") yield self.poke_zk() self.assertEqual(len(results), 1) self.assertEqual(results[0], True) unit_state.watch_ports(callback) yield unit_state.open_port(53, "udp") yield self.poke_zk() yield self.sleep(0.1) self.assertEqual(len(results), 3) self.assertEqual(results[0], True) self.assertEqual(results[1], True) self.assertEqual(results[2].type_name, "changed") self.assertEqual( (yield unit_state.get_open_ports()), [{"port": 53, "proto": "udp"}]) @inlineCallbacks def test_watch_ports_slow_callbacks(self): """A slow watch callback is still invoked serially.""" unit_state = yield self.get_unit_state() callbacks = [Deferred() for i in range(5)] results = [] contents = [] @inlineCallbacks def watch(value): results.append(value) yield callbacks[len(results) - 1] contents.append((yield unit_state.get_open_ports())) callbacks[0].callback(True) yield unit_state.watch_ports(watch) # These get collapsed into a single event yield unit_state.open_port(80, "tcp") yield unit_state.open_port(53, "udp") yield unit_state.open_port(443, "tcp") yield unit_state.close_port(80, "tcp") yield self.poke_zk() # Verify the callback hasn't completed self.assertEqual(len(results), 2) self.assertEqual(len(contents), 1) # Let it finish callbacks[1].callback(True) yield self.poke_zk() # The second callback has completed, but the third is still pending self.assertEqual(len(contents), 2) callbacks[2].callback(True) yield self.poke_zk() # Verify values. 
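# All three queued notifications have now been processed; the final snapshot should show only 53/udp and 443/tcp open.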
self.assertEqual(len(contents), 3) self.assertEqual(results[-1].type_name, "changed") self.assertEqual(contents[-1], [{"port": 53, "proto": "udp"}, {"port": 443, "proto": "tcp"}]) yield self.poke_zk() def test_parse_service_name(self): self.assertEqual(parse_service_name("wordpress/0"), "wordpress") self.assertEqual(parse_service_name("myblog/1"), "myblog") self.assertRaises(ValueError, parse_service_name, "invalid") self.assertRaises(ValueError, parse_service_name, None) juju-0.7.orig/juju/state/tests/test_sshclient.py0000644000000000000000000002624312135220114020264 0ustar 00000000000000import collections import socket import time from twisted.internet.process import Process from twisted.internet.protocol import ProcessProtocol from twisted.internet.defer import succeed, fail, Deferred, inlineCallbacks from txzookeeper import ZookeeperClient from txzookeeper.client import ConnectionTimeoutException from juju.errors import NoConnection, InvalidHost, InvalidUser from juju.lib.mocker import MATCH, match_params from juju.lib.testing import TestCase from juju.state.sshclient import SSHClient MATCH_PROTOCOL = MATCH(lambda x: isinstance(x, ProcessProtocol)) MATCH_TIMEOUT = MATCH(lambda x: isinstance(x, (int, float))) MATCH_FUNC = MATCH(lambda x: callable(x)) MATCH_DEFERRED = MATCH(lambda x: isinstance(x, Deferred)) MATCH_PORT = MATCH(lambda x: isinstance(x, int)) def _match_localhost_port(value): if ":" not in value: return False host, port = value.split(":") if not port.isdigit(): return False return True MATCH_LOCALHOST_PORT = MATCH(_match_localhost_port) class TestDeferredSequence(object): """Helper class for testing sequence of Deferred values w/ mocker.""" def __init__(self, sequence, *args, **kwargs): self.sequence = collections.deque(sequence) self.args = args self.kwargs = kwargs def __call__(self, func): def f(*args, **kwargs): if not match_params(self.args, self.kwargs, args, kwargs): raise AssertionError( "Unmet expectation %r %r %r %r", func, self.args, self.kwargs, args, kwargs) obj = self.sequence.popleft() if isinstance(obj, BaseException): return fail(obj) else: return succeed(obj) return f class SSHClientTest(TestCase): def setUp(self): self.sshclient = SSHClient() self.log = self.capture_logging("juju.state.sshforward") def test_is_zookeeper_client(self): self.assertTrue(isinstance(self.sshclient, ZookeeperClient)) def test_close_while_not_connected_does_nothing(self): # Hah! Nothing! But it doesn't blow up either. self.sshclient.close() def test_internal_connect_behavior(self): """Verify the order of operations for sshclient._internal_connect.""" zkconnect = self.mocker.replace(ZookeeperClient.connect) zkclose = self.mocker.replace(ZookeeperClient.close) forward = self.mocker.replace("juju.state.sshforward.forward_port") thread = self.mocker.replace("twisted.internet.threads.deferToThread") process = self.mocker.mock(Process) with self.mocker.order(): # First, get the forwarding going, targeting the remote # address provided. forward("ubuntu", MATCH_PORT, "remote", 2181, process_protocol=MATCH_PROTOCOL, share=False) self.mocker.result(process) # Next ensure the port check succeeds thread(MATCH_FUNC) self.mocker.result(succeed(True)) # Then, connect to localhost, through the set-up proxy. zkconnect(MATCH_LOCALHOST_PORT, MATCH_TIMEOUT) # Just a marker to ensure the following happens as a # side effect of actually closing the SSHClient. process.pre_close_marker zkclose() process.signalProcess("TERM") process.loseConnection() # There we go! 
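# With the ordered expectations recorded, replay and drive a connect() followed by close() to verify the sequencing.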
self.mocker.replay() self.sshclient.connect( "remote:2181", timeout=123) # Trick to ensure process.close() didn't happen # before this point. This only works because we're # asking mocker to order events here. process.pre_close_marker self.sshclient.close() def test_new_timeout_after_port_probe(self): forward = self.mocker.replace("juju.state.sshforward.forward_port") thread = self.mocker.replace("twisted.internet.threads.deferToThread") original_time = time.time times = [220, 200, 200] def get_time(): if times: return times.pop() return original_time() self.patch(time, "time", get_time) protocol = self.mocker.mock() forward("ubuntu", MATCH_PORT, "remote", 2181, process_protocol=MATCH_PROTOCOL, share=False) self.mocker.result(protocol) thread(MATCH_FUNC) self.mocker.result(succeed(True)) protocol.signalProcess("TERM") protocol.loseConnection() self.mocker.replay() d = self.sshclient.connect("remote:2181", timeout=20) self.failUnlessFailure(d, ConnectionTimeoutException) return d def test_tunnel_port_open_error(self): """Errors when probing the port are reported on the connect deferred. Port probing errors are converted to ConnectionTimeoutException. """ forward = self.mocker.replace("juju.state.sshforward.forward_port") thread = self.mocker.replace("twisted.internet.threads.deferToThread") protocol = self.mocker.mock() forward("ubuntu", MATCH_PORT, "remote", 2181, process_protocol=MATCH_PROTOCOL, share=False) self.mocker.result(protocol) thread(MATCH_FUNC) self.mocker.result(fail(socket.error("a",))) protocol.signalProcess("TERM") protocol.loseConnection() self.mocker.replay() d = self.sshclient.connect("remote:2181", timeout=20) self.failUnlessFailure(d, ConnectionTimeoutException) return d def test_tunnel_client_error(self): """A zkclient connect error is reported on the sshclient deferred. Client connection errors are propagated as-is. """ forward = self.mocker.replace("juju.state.sshforward.forward_port") thread = self.mocker.replace("twisted.internet.threads.deferToThread") protocol = self.mocker.mock() forward("ubuntu", MATCH_PORT, "remote", 2181, process_protocol=MATCH_PROTOCOL, share=False) self.mocker.result(protocol) thread(MATCH_FUNC) def wait_result(func): return succeed(True) self.mocker.call(wait_result) zkconnect = self.mocker.replace(ZookeeperClient.connect) zkconnect(MATCH_LOCALHOST_PORT, MATCH_TIMEOUT) self.mocker.result(fail(OSError())) protocol.signalProcess("TERM") protocol.loseConnection() self.mocker.replay() d = self.sshclient.connect( "remote:2181", timeout=20) self.failUnlessFailure(d, OSError) return d def test_share_connection(self): """Connection sharing requests are passed to forward_port(). 
""" forward = self.mocker.replace("juju.state.sshforward.forward_port") thread = self.mocker.replace("twisted.internet.threads.deferToThread") protocol = self.mocker.mock() forward("ubuntu", MATCH_PORT, "remote", 2181, process_protocol=MATCH_PROTOCOL, share=True) self.mocker.result(protocol) thread(MATCH_FUNC) def wait_result(func): return succeed(True) self.mocker.call(wait_result) zkconnect = self.mocker.replace(ZookeeperClient.connect) zkconnect(MATCH_LOCALHOST_PORT, MATCH_TIMEOUT) self.mocker.result(True) protocol.signalProcess("TERM") protocol.loseConnection() self.mocker.replay() yield self.sshclient.connect("remote:2181", timeout=20, share=True) @inlineCallbacks def test_connect(self): """Test normal connection w/o retry loop.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect( "remote:2181", timeout=MATCH_TIMEOUT, share=False) self.mocker.result(succeed(True)) self.mocker.replay() yield self.sshclient.connect("remote:2181", timeout=123) @inlineCallbacks def test_connect_no_connection(self): """Test sequence of NoConnection failures, followed by success.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect self.mocker.call(TestDeferredSequence( [NoConnection(), NoConnection(), True], "remote:2181", timeout=MATCH_TIMEOUT, share=False)) self.mocker.count(3, 3) self.mocker.replay() yield self.sshclient.connect("remote:2181", timeout=123) @inlineCallbacks def test_connect_invalid_host(self): """Test connect to invalid host will raise exception asap.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect self.mocker.call(TestDeferredSequence( [NoConnection(), InvalidHost(), succeed(True)], "remote:2181", timeout=MATCH_TIMEOUT, share=False)) self.mocker.count(2, 2) self.mocker.replay() yield self.assertFailure( self.sshclient.connect("remote:2181", timeout=123), InvalidHost) @inlineCallbacks def test_connect_invalid_user(self): """Test connect with invalid user will raise exception asap.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect self.mocker.call(TestDeferredSequence( [NoConnection(), InvalidUser(), succeed(True)], "remote:2181", timeout=MATCH_TIMEOUT, share=False)) self.mocker.count(2, 2) self.mocker.replay() yield self.assertFailure( self.sshclient.connect("remote:2181", timeout=123), InvalidUser) @inlineCallbacks def test_connect_timeout(self): """Test that retry fails after timeout in retry loop.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect self.mocker.call(TestDeferredSequence( [NoConnection(), NoConnection(), True], "remote:2181", timeout=MATCH_TIMEOUT, share=False)) self.mocker.count(2, 2) original_time = time.time times = [220, 215, 210, 205, 200] def get_time(): if times: return times.pop() return original_time() self.patch(time, "time", get_time) self.mocker.replay() ex = yield self.assertFailure( self.sshclient.connect("remote:2181", timeout=123), ConnectionTimeoutException) self.assertEqual( str(ex), "could not connect before timeout after 2 retries") @inlineCallbacks def test_connect_tunnel_portwatcher_timeout(self): """Test that retry fails after timeout seen in tunnel portwatcher.""" mock_client = self.mocker.patch(self.sshclient) mock_client._internal_connect self.mocker.call(TestDeferredSequence( [NoConnection(), NoConnection(), ConnectionTimeoutException(), True], "remote:2181", timeout=MATCH_TIMEOUT, share=False)) self.mocker.count(3, 3) self.mocker.replay() ex = yield self.assertFailure( 
self.sshclient.connect("remote:2181", timeout=123), ConnectionTimeoutException) self.assertEqual( str(ex), "could not connect before timeout after 3 retries") juju-0.7.orig/juju/state/tests/test_sshforward.py0000644000000000000000000002163612135220114020453 0ustar 00000000000000import os import logging from twisted.internet.defer import Deferred from juju.lib.testing import TestCase from juju.lib.mocker import ARGS, KWARGS from juju.errors import FileNotFound, NoConnection from juju.state.sshforward import ( forward_port, TunnelProtocol, ClientTunnelProtocol, prepare_ssh_sharing) class ConnectionTest(TestCase): def setUp(self): super(ConnectionTest, self).setUp() self.home = self.makeDir() self.change_environment(HOME=self.home) def test_invalid_forward_args(self): self.assertRaises( SyntaxError, forward_port, "ubuntu", "10000", "localhost", "1000a") def test_ssh_spawn(self): """ Forwarding a port spawns an ssh process with port forwarding arguments. """ from twisted.internet import reactor mock_reactor = self.mocker.patch(reactor) mock_reactor.spawnProcess(ARGS, KWARGS) saved = [] self.mocker.call(lambda *args, **kwargs: saved.append((args, kwargs))) self.mocker.result(None) self.mocker.replay() result = forward_port("ubuntu", 8888, "remote_host", 9999) self.assertEquals(result, None) self.assertTrue(saved) args, kwargs = saved[0] self.assertIsInstance(args[0], TunnelProtocol) self.assertEquals(args[1], "/usr/bin/ssh") self.assertEquals(args[2], [ "ssh", "-T", "-o", "ControlPath " + self.home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "-o", "PasswordAuthentication no", "-Llocalhost:8888:localhost:9999", "ubuntu@remote_host"]) self.assertEquals(kwargs, {"env": os.environ}) def test_ssh_spawn_sharing(self): """ When sharing is enabled, ssh will be set up so that it becomes the master if there's no other master alive yet. """ from twisted.internet import reactor mock_reactor = self.mocker.patch(reactor) mock_reactor.spawnProcess(ARGS, KWARGS) saved = [] self.mocker.call(lambda *args, **kwargs: saved.append((args, kwargs))) self.mocker.result(None) self.mocker.replay() result = forward_port("ubuntu", 8888, "remote_host", 9999, share=True) self.assertEquals(result, None) self.assertTrue(saved) args, kwargs = saved[0] self.assertIsInstance(args[0], TunnelProtocol) self.assertEquals(args[1], "/usr/bin/ssh") self.assertEquals(args[2], [ "ssh", "-T", "-o", "ControlPath " + self.home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster auto", "-o", "PasswordAuthentication no", "-Llocalhost:8888:localhost:9999", "ubuntu@remote_host"]) self.assertEquals(kwargs, {"env": os.environ}) def test_forward_with_invalid_key(self): """ Using an invalid private key with connect raises a FileNotFound exception. """ self.assertRaises( FileNotFound, forward_port, "ubuntu", 2222, "remote", 2181, private_key_path="xyz-123_dsa") def test_connect_with_key(self): """ A private key path can optionally be specified as an argument to connect in which case it's passed to the ssh command line. 
""" from twisted.internet import reactor file_path = self.makeFile("content") mock_reactor = self.mocker.patch(reactor) mock_reactor.spawnProcess(ARGS, KWARGS) saved = [] self.mocker.call(lambda *args, **kwargs: saved.append((args, kwargs))) self.mocker.result("the-process") self.mocker.replay() result = forward_port("ubuntu", 22181, "remote_host", 2181, private_key_path=file_path) self.assertEquals(result, "the-process") args, kwargs = saved[0] self.assertIsInstance(args[0], TunnelProtocol) self.assertEquals(args[1], "/usr/bin/ssh") self.assertEquals(args[2], [ "ssh", "-T", "-i", file_path, "-o", "ControlPath " + self.home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", "-o", "PasswordAuthentication no", "-Llocalhost:22181:localhost:2181", "ubuntu@remote_host"]) self.assertEquals(kwargs, {"env": os.environ}) def test_prepare_ssh_sharing_not_master(self): args = prepare_ssh_sharing() self.assertEquals(args, [ "-o", "ControlPath " + self.home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster no", ]) self.assertFalse(os.path.exists(self.home + "/.juju/ssh")) def test_prepare_ssh_sharing_auto_master(self): args = prepare_ssh_sharing(auto_master=True) self.assertEquals(args, [ "-o", "ControlPath " + self.home + "/.juju/ssh/master-%r@%h:%p", "-o", "ControlMaster auto", ]) self.assertTrue(os.path.isdir(self.home + "/.juju/ssh")) def test_tunnel_protocol_logs_errors(self): """ When using ssh to set up the tunnel, we log all stderr output, and it will get logged at the error level to the connection log handler. """ log = self.capture_logging(level=logging.ERROR) protocol = TunnelProtocol() protocol.errReceived("test") self.assertEqual(log.getvalue(), "SSH tunnel error test\n") def test_tunnel_protocol_ignores_warnings(self): """ When using ssh to set up the tunnel, we typically receive an unknown host key warning message on stderr; the tunnel protocol will filter these messages. """ log = self.capture_logging(level=logging.ERROR) protocol = ClientTunnelProtocol(None, None) protocol.errReceived("Warning: Permanently added") self.assertEqual(log.getvalue(), "") def test_tunnel_protocol_closes_on_error(self): """ When an unrecognized error arrives on stderr while setting up the tunnel, the protocol closes the client and fails the tunnel deferred with NoConnection. 
""" log = self.capture_logging(level=logging.ERROR) client = self.mocker.mock() tunnel_deferred = Deferred() protocol = ClientTunnelProtocol(client, tunnel_deferred) client.close() self.mocker.replay() protocol.errReceived("badness") def verify_failure(error): self.assertEqual(error.args[0], "SSH forwarding error: badness") self.assertEqual(log.getvalue().strip(), "SSH forwarding error: badness") self.assertFailure(tunnel_deferred, NoConnection) tunnel_deferred.addCallback(verify_failure) return tunnel_deferred def test_tunnel_protocol_notes_invalid_host(self): log = self.capture_logging(level=logging.ERROR) client = self.mocker.mock() client.close() self.mocker.replay() tunnel_deferred = Deferred() protocol = ClientTunnelProtocol(client, tunnel_deferred) message = "ssh: Could not resolve hostname magicbean" protocol.errReceived(message) expected_message = "Invalid host for SSH forwarding: %s" % message def verify_failure(error): self.assertEqual(error.args[0], expected_message) self.assertEqual(log.getvalue().strip(), expected_message) self.assertFailure(tunnel_deferred, NoConnection) tunnel_deferred.addCallback(verify_failure) return tunnel_deferred def test_tunnel_protocol_notes_invalid_key(self): log = self.capture_logging(level=logging.ERROR) client = self.mocker.mock() client.close() self.mocker.replay() tunnel_deferred = Deferred() protocol = ClientTunnelProtocol(client, tunnel_deferred) message = "Permission denied" protocol.errReceived(message) expected_message = "Invalid SSH key" def verify_failure(error): self.assertEqual(error.args[0], expected_message) self.assertEqual(log.getvalue().strip(), expected_message) self.failUnlessFailure(tunnel_deferred, NoConnection) tunnel_deferred.addCallback(verify_failure) return tunnel_deferred def test_tunnel_protocol_notes_connection_refused(self): client = self.mocker.mock() client.close() self.mocker.replay() tunnel_deferred = Deferred() protocol = ClientTunnelProtocol(client, tunnel_deferred) message = "blah blah blah Connection refused blah blah" protocol.errReceived(message) expected_message = "Connection refused" def verify_failure(error): self.assertEqual(error.args[0], expected_message) self.failUnlessFailure(tunnel_deferred, NoConnection) tunnel_deferred.addCallback(verify_failure) return tunnel_deferred juju-0.7.orig/juju/state/tests/test_topology.py0000644000000000000000000015221012135220114020136 0ustar 00000000000000from juju.errors import IncompatibleVersion from juju.lib import serializer from juju.lib.testing import TestCase from juju.state.endpoint import RelationEndpoint from juju.state.topology import ( InternalTopology, InternalTopologyError, VERSION) class InternalTopologyMapTest(TestCase): def setUp(self): self.topology = InternalTopology() def test_add_machine(self): """ The topology map is stored as YAML at the moment, so it should be able to read it. """ self.topology.add_machine("m-0") self.topology.add_machine("m-1") self.assertEquals(sorted(self.topology.get_machines()), ["m-0", "m-1"]) def test_add_duplicated_machine(self): """ Adding a machine which is already registered should fail. """ self.topology.add_machine("m-0") self.assertRaises(InternalTopologyError, self.topology.add_machine, "m-0") def test_has_machine(self): """ Testing if a machine is registered should be possible. 
""" self.assertFalse(self.topology.has_machine("m-0")) self.topology.add_machine("m-0") self.assertTrue(self.topology.has_machine("m-0")) self.assertFalse(self.topology.has_machine("m-1")) def test_get_machines(self): """ get_machines() must return a list of machine ids previously registered. """ self.assertEquals(self.topology.get_machines(), []) self.topology.add_machine("m-0") self.topology.add_machine("m-1") self.assertEquals(sorted(self.topology.get_machines()), ["m-0", "m-1"]) def test_remove_machine(self): """ Removing machines should take them off the state. """ self.topology.add_machine("m-0") self.topology.add_machine("m-1") # Add a non-assigned unit, to test that the logic of # checking for assigned units validates this. self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.remove_machine("m-0") self.assertFalse(self.topology.has_machine("m-0")) self.assertTrue(self.topology.has_machine("m-1")) def test_remove_non_existent_machine(self): """ Removing non-existing machines should raise an error. """ self.assertRaises(InternalTopologyError, self.topology.remove_machine, "m-0") def test_remove_machine_with_assigned_units(self): """ A machine can't be removed when it has assigned units. """ self.topology.add_machine("m-0") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.assign_service_unit_to_machine("s-0", "u-1", "m-0") self.assertRaises(InternalTopologyError, self.topology.remove_machine, "m-0") def test_machine_has_units(self): """Test various ways a machine might or might not be assigned.""" self.topology.add_machine("m-0") self.topology.add_machine("m-1") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.assign_service_unit_to_machine("s-0", "u-1", "m-0") self.assertTrue(self.topology.machine_has_units("m-0")) self.assertFalse(self.topology.machine_has_units("m-1")) self.assertRaises( InternalTopologyError, self.topology.machine_has_units, "m-nonesuch") def test_add_service(self): """ The topology map is stored as YAML at the moment, so it should be able to read it. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(sorted(self.topology.get_services()), ["s-0", "s-1"]) def test_add_duplicated_service(self): """ Adding a service which is already registered should fail. """ self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.add_service, "s-0", "wp") def test_add_services_with_duplicated_names(self): """ Adding a service which is already registered should fail. """ self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.add_service, "s-1", "wordpress") def test_has_service(self): """ Testing if a service is registered should be possible. """ self.assertFalse(self.topology.has_service("s-0")) self.topology.add_service("s-0", "wordpress") self.assertTrue(self.topology.has_service("s-0")) self.assertFalse(self.topology.has_service("s-1")) def test_find_service_with_name(self): """ find_service_with_name() must return the service_id for the service with the given name, or None if no service is found with that name. 
""" self.assertEquals( self.topology.find_service_with_name("wordpress"), None) self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals( self.topology.find_service_with_name("wordpress"), "s-0") self.assertEquals( self.topology.find_service_with_name("mysql"), "s-1") def test_get_service_name(self): """ get_service_name() should return the service name for the given service_id. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals( self.topology.get_service_name("s-0"), "wordpress") self.assertEquals( self.topology.get_service_name("s-1"), "mysql") def test_get_service_name_with_non_existing_service(self): """ get_service_name() should raise an error if the service does not exist. """ # Without any state: self.assertRaises(InternalTopologyError, self.topology.get_service_name, "s-0") self.topology.add_service("s-0", "wordpress") # With some state: self.assertRaises(InternalTopologyError, self.topology.get_service_name, "s-1") def test_get_services(self): """ Retrieving a list of available services must be possible. """ self.assertEquals(self.topology.get_services(), []) self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(sorted(self.topology.get_services()), ["s-0", "s-1"]) def test_remove_service(self): """ Removing a service should work properly, so that the service isn't available anymore after it happens (duh!). """ self.topology.add_service("m-0", "wordpress") self.topology.add_service("m-1", "mysql") self.topology.remove_service("m-0") self.assertFalse(self.topology.has_service("m-0")) self.assertTrue(self.topology.has_service("m-1")) def test_remove_principal_service(self): """Verify that removing a principal service behaves correctly. This will have to change as the implementation of remove is still pending. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) # This fails w/o a container relation in place err = self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-07", container_id="u-05") self.assertEquals(str(err), "Attempted to add subordinate unit without " "container relation") # now add the relationship and try again self.topology.add_relation("r-1", "client-server", "container") self.topology.assign_service_to_relation( "r-1", "s-0", "juju-info", "server") self.topology.assign_service_to_relation( "r-1", "s-1", "juju-info", "client") err = self.assertRaises(InternalTopologyError, self.topology.remove_service, "s-0") self.assertIn("Service 's-0' is associated to relations", str(err)) def test_remove_subordinate_service(self): """Verify that removing a principal service behaves correctly. This will have to change as the implementation of remove is still pending. 
""" self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) # This fails w/o a container relation in place err = self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-07", container_id="u-05") self.assertEquals(str(err), "Attempted to add subordinate unit without " "container relation") # now add the relationship and try again self.topology.add_relation("r-1", "client-server", "container") self.topology.assign_service_to_relation( "r-1", "s-0", "juju-info", "server") self.topology.assign_service_to_relation( "r-1", "s-1", "juju-info", "client") err = self.assertRaises(InternalTopologyError, self.topology.remove_service, "s-1") self.assertIn("Service 's-1' is associated to relations", str(err)) def test_remove_non_existent_service(self): """ Attempting to remove a non-existing service should be an error. """ self.assertRaises(InternalTopologyError, self.topology.remove_service, "m-0") def test_add_service_unit(self): """ add_service_unit() should register a new service unit for a given service, and should return a sequence number for the unit. The sequence number increases monotonically for each service, and is helpful to provide nice unit names. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) self.assertEquals(self.topology.add_service_unit("s-1", "u-07"), 0) self.assertEquals(sorted(self.topology.get_service_units("s-0")), ["u-05", "u-12"]) self.assertEquals(self.topology.get_service_units("s-1"), ["u-07"]) def test_add_service_unit_with_container(self): """ validates that add_service_unit() properly handles its conatiner_id argument. This test checks both the case where a container relation does and does not exist. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) # This fails w/o a container relation in place err = self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-07", container_id="u-05") self.assertEquals(str(err), "Attempted to add subordinate unit without " "container relation") # now add the relationship and try again self.topology.add_relation("r-1", "client-server", "container") self.topology.assign_service_to_relation("r-1", "s-0", "juju-info", "server") self.topology.assign_service_to_relation("r-1", "s-1", "juju-info", "client") self.assertEquals(self.topology.add_service_unit("s-1", "u-07", container_id="u-05"), 0) self.assertEquals(sorted(self.topology.get_service_units("s-0")), ["u-05", "u-12"]) self.assertEquals(self.topology.get_service_units("s-1"), ["u-07"]) self.assertEquals(self.topology.get_service_unit_principal("u-07"), "u-05") self.assertEquals(self.topology.get_service_unit_principal("u-12"), None) self.assertEquals(self.topology.get_service_unit_principal("u-05"), None) self.assertEquals(self.topology.get_service_unit_container("u-07"), ("s-0", "wordpress", 0, "u-05")) self.assertEquals(self.topology.get_service_unit_container("u-05"), None) def test_global_unique_service_names(self): """Service unit names are unique. 
Even if the underlying service is destroyed and a new service with the same name is created, we'll never get a duplicate service unit name. """ self.topology.add_service("s-0", "wordpress") sequence = self.topology.add_service_unit("s-0", "u-0") self.assertEqual(sequence, 0) sequence = self.topology.add_service_unit("s-0", "u-1") self.assertEqual(sequence, 1) self.topology.remove_service("s-0") self.topology.add_service("s-0", "wordpress") sequence = self.topology.add_service_unit("s-0", "u-1") self.assertEqual(sequence, 2) self.assertEqual( self.topology.get_service_unit_name("s-0", "u-1"), "wordpress/2") def test_add_duplicated_service_unit(self): """ Adding the same unit to the same service must not be possible. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-0", "u-0") def test_add_service_unit_to_non_existing_service(self): """ Adding a service unit requires the service to have been previously created. """ self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-0", "u-0") def test_add_service_unit_to_different_service(self): """ Adding the same unit to two different services must not be possible. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.topology.add_service_unit("s-0", "u-0") self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-0") def test_get_service_units(self): """ Getting units registered from a service should return a list of these. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.get_service_units("s-0"), []) self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.add_service_unit("s-1", "u-2") self.assertEquals(sorted(self.topology.get_service_units("s-0")), ["u-0", "u-1"]) self.assertEquals(sorted(self.topology.get_service_units("s-1")), ["u-2"]) def test_get_service_units_with_non_existing_service(self): """ Getting service units from a non-existing service should raise an error. """ self.assertRaises(InternalTopologyError, self.topology.get_service_units, "s-0") def test_has_service_units(self): """ Testing if a service unit exists in a service should be possible. """ self.topology.add_service("s-0", "wordpress") self.assertFalse(self.topology.has_service_unit("s-0", "u-0")) self.topology.add_service_unit("s-0", "u-0") self.assertTrue(self.topology.has_service_unit("s-0", "u-0")) self.assertFalse(self.topology.has_service_unit("s-0", "u-1")) def test_has_service_units_with_non_existing_service(self): """ Testing if a service unit exists should only work if a sensible service was provided. """ self.assertRaises(InternalTopologyError, self.topology.has_service_unit, "s-1", "u-0") def test_get_service_unit_service(self): """ The reverse operation is also feasible: given a service unit, return the service id for the service containing the unit. 
""" self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.add_service_unit("s-1", "u-2") self.assertEquals(self.topology.get_service_unit_service("u-0"), "s-0") self.assertEquals(self.topology.get_service_unit_service("u-1"), "s-0") self.assertEquals(self.topology.get_service_unit_service("u-2"), "s-1") def test_get_unit_service_with_non_existing_unit(self): """ If the unit provided to get_service_unit_service() doesn't exist, raise an error. """ # Without any services. self.assertRaises(InternalTopologyError, self.topology.get_service_unit_service, "u-1") # With a service without units. self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.get_service_unit_service, "u-1") # With a service with a different unit. self.topology.add_service_unit("s-0", "u-0") self.assertRaises(InternalTopologyError, self.topology.get_service_unit_service, "u-1") def test_get_service_unit_name(self): """ Service units are named with the service name and the sequence number joined by a slash, such as wordpress/3. This makes it convenient to use from higher layers. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.add_service_unit("s-1", "u-2") self.assertEquals(self.topology.get_service_unit_name("s-0", "u-0"), "wordpress/0") self.assertEquals(self.topology.get_service_unit_name("s-0", "u-1"), "wordpress/1") self.assertEquals(self.topology.get_service_unit_name("s-1", "u-2"), "mysql/0") def test_get_service_unit_name_from_id(self): """ Service units are named with the service name and the sequence number joined by a slash, such as wordpress/3. This makes it convenient to use from higher layers. Those layers ocassionally need to resolve the id to a name. This is mostly a simple convience wrapper around get_service_unit_name """ self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.assertEqual( self.topology.get_service_unit_name_from_id("u-0"), "wordpress/0") self.assertEqual( self.topology.get_service_unit_name_from_id("u-1"), "wordpress/1") self.assertRaises(InternalTopologyError, self.topology.get_service_unit_name_from_id, "u-2") def test_get_unit_service_id_from_name(self): """Retrieve the unit id from the user oriented unit name.""" self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.assertEqual( "u-0", self.topology.get_service_unit_id_from_name("wordpress/0")) self.assertEqual( "u-1", self.topology.get_service_unit_id_from_name("wordpress/1")) def test_get_unit_service_with_non_existing_service_or_unit(self): """ If the unit provided to get_service_unit_service() doesn't exist, raise an error. """ # Without any services. self.assertRaises(InternalTopologyError, self.topology.get_service_unit_name, "s-0", "u-1") # With a service without units. self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.get_service_unit_name, "s-0", "u-1") # With a service with a different unit. 
self.topology.add_service_unit("s-0", "u-0") self.assertRaises(InternalTopologyError, self.topology.get_service_unit_name, "s-0", "u-1") def test_remove_service_unit(self): """ It should be possible to remove a service unit from an existing service. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "m-0") self.topology.add_service_unit("s-0", "m-1") self.topology.remove_service_unit("s-0", "m-0") self.assertFalse(self.topology.has_service_unit("s-0", "m-0")) self.assertTrue(self.topology.has_service_unit("s-0", "m-1")) def test_remove_principal_service_unit(self): """Verify that removing a principal service unit behaves correctly. This will have to change as the implementation of remove is still pending. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) # This fails w/o a container relation in place err = self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-07", container_id="u-05") self.assertEquals(str(err), "Attempted to add subordinate unit without " "container relation") # now add the relationship and try again self.topology.add_relation("r-1", "client-server", "container") self.topology.assign_service_to_relation( "r-1", "s-0", "juju-info", "server") self.topology.assign_service_to_relation( "r-1", "s-1", "juju-info", "client") self.assertEquals(self.topology.add_service_unit( "s-1", "u-07", container_id="u-05"), 0) self.topology.remove_service_unit("s-0", "u-05") self.assertFalse(self.topology.has_service_unit("s-0", "u-05")) self.assertTrue(self.topology.has_service_unit("s-0", "u-12")) def test_remove_subordinate_service_unit(self): """Verify that removing a subordinate service unit behaves correctly. This will have to change as the implementation of remove is still pending. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) # This fails w/o a container relation in place err = self.assertRaises(InternalTopologyError, self.topology.add_service_unit, "s-1", "u-07", container_id="u-05") self.assertEquals(str(err), "Attempted to add subordinate unit without " "container relation") # now add the relationship and try again self.topology.add_relation("r-1", "client-server", "container") self.topology.assign_service_to_relation( "r-1", "s-0", "juju-info", "server") self.topology.assign_service_to_relation( "r-1", "s-1", "juju-info", "client") self.assertEquals(self.topology.add_service_unit( "s-1", "u-07", container_id="u-05"), 0) self.topology.remove_service_unit("s-1", "u-07") self.assertTrue(self.topology.has_service_unit("s-0", "u-05")) self.assertTrue(self.topology.has_service_unit("s-0", "u-12")) # The subordinate unit can be removed self.assertFalse(self.topology.has_service_unit("s-1", "u-07")) def test_remove_non_existent_service_unit(self): """ Attempting to remove a non-existing service unit or a unit in a non-existing service should raise a local error. 
""" self.assertRaises(InternalTopologyError, self.topology.remove_service_unit, "s-0", "m-0") self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.remove_service_unit, "s-0", "m-0") def test_service_unit_sequencing(self): """ Even if service units are unregistered, the sequence number should not be reused. """ self.topology.add_service("s-0", "wordpress") self.assertEquals(self.topology.add_service_unit("s-0", "u-05"), 0) self.assertEquals(self.topology.add_service_unit("s-0", "u-12"), 1) self.topology.remove_service_unit("s-0", "u-05") self.topology.remove_service_unit("s-0", "u-12") self.assertEquals(self.topology.add_service_unit("s-0", "u-14"), 2) self.assertEquals(self.topology.add_service_unit("s-0", "u-17"), 3) self.assertEquals( self.topology.get_service_unit_sequence("s-0", "u-14"), 2) self.assertEquals( self.topology.get_service_unit_sequence("s-0", "u-17"), 3) self.assertRaises( InternalTopologyError, self.topology.get_service_unit_sequence, "s-0", "u-05") def test_find_service_unit_with_sequence(self): """ Given a service name and a sequence number, the function find_service_unit_with_sequence() should return the unit_id, or None if the sequence number is not found. """ self.topology.add_service("s-1", "mysql") self.topology.add_service_unit("s-1", "u-05") self.topology.add_service_unit("s-1", "u-12") self.assertEquals( self.topology.find_service_unit_with_sequence("s-1", 0), "u-05") self.assertEquals( self.topology.find_service_unit_with_sequence("s-1", 1), "u-12") self.assertEquals( self.topology.find_service_unit_with_sequence("s-1", 2), None) def test_find_service_unit_with_sequence_using_non_existing_service(self): """ If the service_id provided to find_service_unit_with_sequence does not exist, an error should be raised. """ self.assertRaises( InternalTopologyError, self.topology.find_service_unit_with_sequence, "s-0", 0) def test_assign_service_unit_to_machine(self): """ Assigning a service unit to a machine should work. """ self.topology.add_machine("m-0") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.assign_service_unit_to_machine("s-0", "u-0", "m-0") machine_id = self.topology.get_service_unit_machine("s-0", "u-0") self.assertEquals(machine_id, "m-0") def test_assign_service_unit_machine_with_non_existing_service(self): """ If the service_id provided when assigning a unit to a machine doesn't exist, an error must be raised. """ self.topology.add_machine("m-0") self.assertRaises(InternalTopologyError, self.topology.assign_service_unit_to_machine, "s-0", "u-0", "m-0") def test_assign_service_unit_machine_with_non_existing_service_unit(self): """ If the unit_id provided when assigning a unit to a machine doesn't exist, an error must be raised. """ self.topology.add_machine("m-0") self.topology.add_service("s-0", "wordpress") self.assertRaises(InternalTopologyError, self.topology.assign_service_unit_to_machine, "s-0", "u-0", "m-0") def test_assign_service_unit_machine_with_non_existing_machine(self): """ If the machine_id provided when assigning a unit to a machine doesn't exist, an error must be raised. 
""" self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.assertRaises(InternalTopologyError, self.topology.assign_service_unit_to_machine, "s-0", "u-0", "m-0") def test_assign_service_unit_machine_twice(self): """ If the service unit was previously assigned to a machine_id, attempting to assign it again should raise an error, even if the machine_id is exactly the same. """ self.topology.add_machine("m-0") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.assign_service_unit_to_machine("s-0", "u-0", "m-0") self.assertRaises(InternalTopologyError, self.topology.assign_service_unit_to_machine, "s-0", "u-0", "m-0") def test_get_service_unit_machine(self): """ get_service_unit_machine() should return the current machine the unit is assigned to, or None if it wasn't yet assigned to any machine. """ self.topology.add_machine("m-0") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.assertEquals( self.topology.get_service_unit_machine("s-0", "u-0"), None) self.topology.assign_service_unit_to_machine("s-0", "u-0", "m-0") self.assertEquals( self.topology.get_service_unit_machine("s-0", "u-0"), "m-0") def test_get_service_unit_machine_with_non_existing_service(self): """ If the service_id provided when attempting to retrieve a service unit's machine does not exist, an error must be raised. """ self.assertRaises( InternalTopologyError, self.topology.get_service_unit_machine, "s-0", "u-0") def test_get_service_unit_machine_with_non_existing_service_unit(self): """ If the unit_id provided when attempting to retrieve a service unit's machine does not exist, an error must be raised. """ # Without any units: self.topology.add_service("s-0", "wordpress") self.assertRaises( InternalTopologyError, self.topology.get_service_unit_machine, "s-0", "u-0") # With a different unit in place: self.topology.add_service_unit("s-0", "u-0") self.assertRaises( InternalTopologyError, self.topology.get_service_unit_machine, "s-0", "u-1") def test_unassign_service_unit_from_machine(self): """ It should be possible to unassign a service unit from a machine, as long as it has been previously assigned to some machine. """ self.topology.add_machine("m-0") self.topology.add_machine("m-1") self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.assign_service_unit_to_machine("s-0", "u-0", "m-0") self.topology.assign_service_unit_to_machine("s-0", "u-1", "m-1") self.topology.unassign_service_unit_from_machine("s-0", "u-0") self.assertEquals( self.topology.get_service_unit_machine("s-0", "u-0"), None) self.assertEquals( self.topology.get_service_unit_machine("s-0", "u-1"), "m-1") def test_unassign_service_unit_from_machine_when_not_assigned(self): """ Can't unassign a unit from a machine if it wasn't previously assigned. """ self.topology.add_service("s-0", "wordpress") self.topology.add_service_unit("s-0", "u-0") self.assertRaises( InternalTopologyError, self.topology.unassign_service_unit_from_machine, "s-0", "u-0") def test_unassign_service_unit_with_non_existing_service(self): """ If the service_id used when attempting to unassign the service unit from a machine does not exist, an error must be raised. 
""" self.assertRaises( InternalTopologyError, self.topology.unassign_service_unit_from_machine, "s-0", "u-0") def test_unassign_service_unit_with_non_existing_unit(self): """ If the unit_id used when attempting to unassign the service unit from a machine does not exist, an error must be raised. """ # Without any units: self.topology.add_service("s-0", "wordpress") self.assertRaises( InternalTopologyError, self.topology.unassign_service_unit_from_machine, "s-0", "u-0") # Without a different unit in place: self.topology.add_service_unit("s-0", "u-0") self.assertRaises( InternalTopologyError, self.topology.unassign_service_unit_from_machine, "s-0", "u-1") def test_get_service_units_in_machine(self): """ We must be able to get all service units in a given machine as well. """ self.topology.add_machine("m-0") self.topology.add_machine("m-1") # Shouldn't break before services are added. self.assertEquals(self.topology.get_service_units_in_machine("m-0"), []) self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysql") # Shouldn't break before units are added either. self.assertEquals(self.topology.get_service_units_in_machine("m-0"), []) self.topology.add_service_unit("s-0", "u-0") self.topology.add_service_unit("s-0", "u-1") self.topology.add_service_unit("s-1", "u-2") self.topology.add_service_unit("s-1", "u-3") # Shouldn't break with units which aren't assigned. self.topology.add_service_unit("s-1", "u-4") self.topology.assign_service_unit_to_machine("s-0", "u-0", "m-0") self.topology.assign_service_unit_to_machine("s-0", "u-1", "m-1") self.topology.assign_service_unit_to_machine("s-1", "u-2", "m-1") self.topology.assign_service_unit_to_machine("s-1", "u-3", "m-0") unit_ids0 = self.topology.get_service_units_in_machine("m-0") unit_ids1 = self.topology.get_service_units_in_machine("m-1") self.assertEquals(sorted(unit_ids0), ["u-0", "u-3"]) self.assertEquals(sorted(unit_ids1), ["u-1", "u-2"]) def test_get_service_units_in_machine_with_non_existing_machine(self): """ If the machine passed to get_service_units_in_machine() doesn't exist, it should bail out gracefully. """ # Shouldn't break before services are added. self.assertRaises(InternalTopologyError, self.topology.get_service_units_in_machine, "m-0") def test_dump_and_parse(self): """ dump() and parse() are opposite operations which enable the state of a topology to be persisted as a string, and then loaded back. 
""" empty_data = self.topology.dump() self.assertEquals(serializer.load(empty_data), {"version": VERSION}) self.topology.add_machine("m-0") machine_data = self.topology.dump() self.topology.parse(empty_data) self.assertFalse(self.topology.has_machine("m-0")) self.topology.parse(machine_data) self.assertTrue(self.topology.has_machine("m-0")) def test_incompatible_version(self): """Verify `IncompatibleVersion` raised if using old topology.""" empty_data = self.topology.dump() self.assertEquals(serializer.load(empty_data), {"version": VERSION}) self.topology.add_machine("m-0") machine_data = self.topology.dump() self.topology.parse(machine_data) self.assertTrue(self.topology.has_machine("m-0")) # Pretend to bump the versioning by one actual_version = VERSION import juju self.patch(juju.state.topology, "VERSION", actual_version + 1) # With this change to juju.state.topology.VERSION, verify # topology ops will now raise an incompatibility exception ex = self.assertRaises(IncompatibleVersion, self.topology.parse, machine_data) self.assertEqual( str(ex), "Incompatible juju protocol versions (found %d, want %d)" % ( actual_version, juju.state.topology.VERSION)) def test_reset(self): """ Resetting a topology should put it back in the state it was initialized with. """ empty_data = self.topology.dump() self.topology.add_machine("m-0") self.topology.reset() self.assertEquals(self.topology.dump(), empty_data) self.assertEquals(self.topology._state["version"], VERSION) def test_has_relation(self): """Testing if a relation exists should be possible. """ self.topology.add_service("s-0", "wordpress") self.assertFalse(self.topology.has_relation("r-1")) self.topology.add_relation("r-1", "type") self.assertTrue(self.topology.has_relation("r-1")) def test_add_relation(self): """Add a relation between the given service ids. """ self.assertFalse(self.topology.has_relation("r-1")) # Verify add relation works correctly. self.topology.add_relation("r-1", "type") self.assertTrue(self.topology.has_relation("r-1")) # Attempting to add again raises an exception self.assertRaises( InternalTopologyError, self.topology.add_relation, "r-1", "type") def test_assign_service_to_relation(self): """A service can be associated to a relation. """ # Both service and relation must be valid. self.assertRaises(InternalTopologyError, self.topology.assign_service_to_relation, "r-1", "s-0", "name", "role") self.topology.add_relation("r-1", "type") self.assertRaises(InternalTopologyError, self.topology.assign_service_to_relation, "r-1", "s-0", "name", "role") self.topology.add_service("s-0", "wordpress") # The relation can be assigned. self.assertFalse(self.topology.relation_has_service("r-1", "s-0")) self.topology.assign_service_to_relation("r-1", "s-0", "name", "role") self.assertEqual(self.topology.get_relations_for_service("s-0"), [{"interface": "type", "relation_id": "r-1", "scope": "global", "service": {"name": "name", "role": "role"}}] ) # Adding it again raises an error, even with a different name/role self.assertRaises(InternalTopologyError, self.topology.assign_service_to_relation, "r-1", "s-0", "name2", "role2") # Another service can't provide the same role within a relation. self.topology.add_service("s-1", "database") self.assertRaises(InternalTopologyError, self.topology.assign_service_to_relation, "r-1", "s-1", "name", "role") def test_unassign_service_from_relation(self): """A service can be disassociated from a relation. """ # Both service and relation must be valid. 
        self.assertRaises(InternalTopologyError,
                          self.topology.unassign_service_from_relation,
                          "r-1", "s-0")
        self.topology.add_relation("r-1", "type")
        self.assertRaises(InternalTopologyError,
                          self.topology.unassign_service_from_relation,
                          "r-1", "s-0")
        self.topology.add_service("s-0", "wordpress")
        # If the service is not assigned to the relation, raises an error.
        self.assertRaises(InternalTopologyError,
                          self.topology.unassign_service_from_relation,
                          "r-1", "s-0")
        self.topology.assign_service_to_relation("r-1", "s-0", "name", "role")
        self.assertEqual(self.topology.get_relations_for_service("s-0"),
                         [{"interface": "type",
                           "relation_id": "r-1",
                           "scope": "global",
                           "service": {"name": "name", "role": "role"}}])
        self.topology.unassign_service_from_relation("r-1", "s-0")
        self.assertFalse(self.topology.get_relations_for_service("s-0"))

    def test_relation_has_service(self):
        """We can test to see if a service is associated to a relation."""
        self.assertFalse(self.topology.relation_has_service("r-1", "s-0"))
        self.topology.add_relation("r-1", "type")
        self.topology.add_service("s-0", "wordpress")
        self.topology.assign_service_to_relation("r-1", "s-0", "name", "role")
        self.assertTrue(self.topology.relation_has_service("r-1", "s-0"))

    def test_get_relation_service(self):
        """We can fetch the setting of a service within a relation."""
        # Invalid relations cause an exception.
        self.assertRaises(InternalTopologyError,
                          self.topology.get_relation_service, "r-1", "s-0")
        self.topology.add_relation("r-1", "rel-type")
        # Invalid services cause an exception.
        self.assertRaises(InternalTopologyError,
                          self.topology.get_relation_service, "r-1", "s-0")
        self.topology.add_service("s-0", "wordpress")
        # Fetching info for services not assigned to a relation causes
        # an error.
        self.assertRaises(InternalTopologyError,
                          self.topology.get_relation_service, "r-1", "s-0")
        self.topology.assign_service_to_relation("r-1", "s-0", "name", "role")
        relation_type, info = self.topology.get_relation_service("r-1", "s-0")
        self.assertEqual(info["name"], "name")
        self.assertEqual(info["role"], "role")
        self.assertEqual(relation_type, "rel-type")

    def test_get_relation_type(self):
        """The type of a relation can be introspected."""
        self.assertRaises(InternalTopologyError,
                          self.topology.get_relation_type, "r-1")
        self.topology.add_relation("r-1", "rel-type")
        self.assertEqual(self.topology.get_relation_type("r-1"), "rel-type")

    def test_get_relations(self):
        names = self.topology.get_relations()
        self.assertEqual(names, [])
        self.topology.add_relation("r-1", "type")
        names = self.topology.get_relations()
        self.assertEqual(names, ["r-1"])
        self.topology.add_relation("r-2", "type")
        names = self.topology.get_relations()
        self.assertEqual(set(names), set(["r-1", "r-2"]))

    def test_get_services_for_relations(self):
        """The services for a given relation can be retrieved."""
        self.topology.add_relation("r-1", "type")
        self.topology.add_service("s-0", "wordpress")
        self.topology.add_service("s-1", "database")
        self.topology.assign_service_to_relation("r-1", "s-0", "name", "role")
        self.topology.assign_service_to_relation("r-1", "s-1", "name", "role2")
        self.assertEqual(
            self.topology.get_relation_services("r-1"),
            {"s-1": {"role": "role2", "name": "name"},
             "s-0": {"role": "role", "name": "name"}})

    def test_get_relations_for_service(self):
        """The relations for a given service can be retrieved.
        """
        # Getting relations for an unknown service raises an
        # InternalTopologyError.
        self.assertRaises(InternalTopologyError,
                          self.topology.get_relations_for_service, "s-0")
        # A new service has no relations.
self.topology.add_service("s-0", "wordpress") self.assertFalse(self.topology.get_relations_for_service("s-0")) # Add a relation and fetch it. self.topology.add_relation("r-1", "type") self.topology.assign_service_to_relation("r-1", "s-0", "name", "role") self.topology.add_relation("r-2", "type") self.topology.assign_service_to_relation("r-2", "s-0", "name", "role") self.assertEqual( sorted(self.topology.get_relations_for_service("s-0")), [{"interface": "type", "relation_id": "r-1", "scope": "global", "service": {"name": "name", "role": "role"}}, {"interface": "type", "relation_id": "r-2", "scope": "global", "service": {"name": "name", "role": "role"}}]) self.topology.unassign_service_from_relation("r-2", "s-0") def test_remove_relation(self): """A relation can be removed. """ # Attempting to remove unknown relation raises a topologyerror self.assertRaises(InternalTopologyError, self.topology.remove_relation, "r-1") # Adding a relation with associated service, and remove it. self.topology.add_service("s-0", "wordpress") self.topology.add_relation("r-1", "type") self.topology.assign_service_to_relation("r-1", "s-0", "name", "role") self.assertTrue(self.topology.has_relation("r-1")) self.topology.remove_relation("r-1") self.assertFalse(self.topology.has_relation("r-1")) def test_remove_service_with_relations(self): """ Attempting to remove a service that's assigned to relations raises an InternalTopologyError. """ self.topology.add_service("s-0", "wordpress") self.topology.add_relation("r-1", "type") self.topology.assign_service_to_relation("r-1", "s-0", "name", "role") self.assertRaises(InternalTopologyError, self.topology.remove_service, "s-0") def test_has_relation_between_dyadic_endpoints(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-0", "mysql", "client") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertTrue(self.topology.has_relation_between_endpoints([ mysql_ep, blog_ep])) self.assertTrue(self.topology.has_relation_between_endpoints([ blog_ep, mysql_ep])) def test_has_relation_between_dyadic_endpoints_missing_assignment(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertFalse(self.topology.has_relation_between_endpoints([ mysql_ep, blog_ep])) self.assertFalse(self.topology.has_relation_between_endpoints([ blog_ep, mysql_ep])) def test_has_relation_between_dyadic_endpoints_wrong_relation_name(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "wrong-name", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-0", "mysql", "client") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertFalse(self.topology.has_relation_between_endpoints([ mysql_ep, blog_ep])) self.assertFalse(self.topology.has_relation_between_endpoints([ blog_ep, mysql_ep])) def 
test_has_relation_between_monadic_endpoints(self): riak_ep = RelationEndpoint("riak", "riak", "riak", "peer") self.topology.add_service("s-0", "riak") self.topology.add_relation("r-0", "riak") self.topology.assign_service_to_relation("r-0", "s-0", "riak", "peer") self.assertTrue(self.topology.has_relation_between_endpoints( [riak_ep])) def test_get_relation_between_dyadic_endpoints(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-0", "mysql", "client") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertEqual(self.topology.get_relation_between_endpoints([ mysql_ep, blog_ep]), "r-0") self.assertEqual(self.topology.get_relation_between_endpoints([ blog_ep, mysql_ep]), "r-0") def test_get_relation_between_dyadic_endpoints_missing_assignment(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "db", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertEqual(self.topology.get_relation_between_endpoints([ mysql_ep, blog_ep]), None) self.assertEqual(self.topology.get_relation_between_endpoints([ blog_ep, mysql_ep]), None) def test_get_relation_between_dyadic_endpoints_wrong_relation_name(self): mysql_ep = RelationEndpoint("mysqldb", "mysql", "wrong-name", "server") blog_ep = RelationEndpoint("wordpress", "mysql", "mysql", "client") self.topology.add_service("s-0", "wordpress") self.topology.add_service("s-1", "mysqldb") self.topology.add_relation("r-0", "mysql") self.topology.assign_service_to_relation( "r-0", "s-0", "mysql", "client") self.topology.assign_service_to_relation( "r-0", "s-1", "db", "server") self.assertEqual(self.topology.get_relation_between_endpoints([ mysql_ep, blog_ep]), None) self.assertEqual(self.topology.get_relation_between_endpoints([ blog_ep, mysql_ep]), None) def test_get_relation_between_monadic_endpoints(self): riak_ep = RelationEndpoint("riak", "riak", "riak", "peer") self.topology.add_service("s-0", "riak") self.topology.add_relation("r-0", "riak") self.topology.assign_service_to_relation( "r-0", "s-0", "riak", "peer") self.assertEqual(self.topology.get_relation_between_endpoints( [riak_ep]), "r-0") juju-0.7.orig/juju/state/tests/test_utils.py0000644000000000000000000005535112135220114017432 0ustar 00000000000000import errno import socket import time import zookeeper from Queue import Queue from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks from twisted.internet.threads import deferToThread from juju.lib import serializer from juju.lib.testing import TestCase from juju.state.base import StateBase from juju.state.errors import StateChanged, StateNotFound from juju.state.utils import ( PortWatcher, remove_tree, dict_merge, YAMLState, YAMLStateNodeMixin, AddedItem, ModifiedItem, DeletedItem) class PortWait(TestCase): def bind_port(self, delay, port=0): """Create a fake server for testing. Binds socket to `port` for `delay` seconds, then listens. If `port` is 0, then it is assigned by the system. 
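
        Typical use in a test (a sketch, mirroring the tests below):

            server_deferred, port_deferred = self.bind_port(0.2)
            port = yield port_deferred    # system-assigned port number
            # ... exercise PortWatcher against 127.0.0.1:port ...
            yield server_deferred         # wait for the socket to close
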
Returns a pair of deferreds: completed, port The `completed` deferred is called back when the socket is closed. The port deferred has the port number, which is useful if a system-assigned port is requested. """ # used to communicate the port number port_queue = Queue() def bind_port_sync(): sock = socket.socket() sock.bind(("127.0.0.1", port)) sock.listen(1) port_queue.put(sock.getsockname()[1]) time.sleep(delay) sock.close() return deferToThread(bind_port_sync), deferToThread(port_queue.get) @inlineCallbacks def test_existing_connect(self): """Test watch fires asap if a real server is then available.""" server_deferred, port_deferred = self.bind_port(0.2) port = yield port_deferred yield self.sleep(0.1) yield PortWatcher("127.0.0.1", port, 1).async_wait() yield server_deferred @inlineCallbacks def test_connect(self): """Test watch fires soon after a real server becomes available. 0.5 second for the approximation is based on the polling interval of PortWatcher.""" now = time.time() reactor.callLater(0.7, self.bind_port, 1, 22181) yield PortWatcher("127.0.0.1", 22181, 1.5).async_wait() self.failUnlessApproximates(time.time() - now, 0.7, 0.5) def test_wait_until_timeout_raises_timeout(self): """If the timeout is exceeded, a socket.timeout error is raised.""" self.assertRaises( socket.timeout, PortWatcher("localhost", 800, 0).sync_wait) def test_wait_stops_when_watcher_stopped(self): """ If the watcher is stopped, no more attempts are made to attempt connect to the socket. """ watcher = PortWatcher("127.0.0.1", 800, 30) mock_socket = self.mocker.patch(socket.socket) sleep = self.mocker.replace("time.sleep") mock_socket.connect(("127.0.0.1", 800)) self.mocker.throw(socket.error(errno.ECONNREFUSED)) sleep(0.5) self.mocker.call(watcher.stop) self.mocker.replay() self.assertEquals(watcher.sync_wait(), None) def test_unknown_socket_error_raises(self): """Unknown socket errors are thrown to the caller.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.connect(("127.0.0.1", 465)) self.mocker.throw(socket.error(2000)) self.mocker.replay() watcher = PortWatcher("127.0.0.1", 465, 5) self.assertRaises(socket.error, watcher.sync_wait) def test_unknown_error_raises(self): """Unknown errors are thrown to the caller.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.connect(("127.0.0.1", 465)) self.mocker.throw(SyntaxError()) self.mocker.replay() watcher = PortWatcher("127.0.0.1", 465, 5) self.assertRaises(SyntaxError, watcher.sync_wait) def test_on_connect_returns(self): """On a successful connect, the function returns None.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.connect(("127.0.0.1", 800)) self.mocker.result(True) mock_socket.close() self.mocker.replay() watcher = PortWatcher("127.0.0.1", 800, 30) self.assertEqual(watcher.sync_wait(), True) @inlineCallbacks def test_listen(self): """Test with a real socket server watching for port availability.""" bind_port_time = 0.7 now = time.time() server_deferred, port_deferred = self.bind_port(bind_port_time) port = yield port_deferred yield PortWatcher("127.0.0.1", port, 10, True).async_wait() self.failUnlessApproximates(time.time() - now, bind_port_time, 0.5) yield server_deferred @inlineCallbacks def test_listen_nothing_there(self): """Test with a real socket server watching for port availability. 
This should result in the watch returning almost immediately, since nothing is (or should be - dangers of real testing) holding this port.""" now = time.time() yield PortWatcher("127.0.0.1", 22181, 10, True).async_wait() self.failUnlessApproximates(time.time() - now, 0, 0.1) def test_listen_wait_stops_when_watcher_stopped(self): """ If the watcher is stopped, no more attempts are made to attempt to listen to the socket. """ watcher = PortWatcher("127.0.0.1", 800, 30, True) mock_socket = self.mocker.patch(socket.socket) sleep = self.mocker.replace("time.sleep") mock_socket.bind(("127.0.0.1", 800)) self.mocker.throw(socket.error(errno.ECONNREFUSED)) sleep(0.5) self.mocker.call(watcher.stop) self.mocker.replay() self.assertEquals(watcher.sync_wait(), None) def test_listen_unknown_socket_error_raises(self): """Unknown socket errors are thrown to the caller.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.bind(("127.0.0.1", 465)) self.mocker.throw(socket.error(2000)) self.mocker.replay() watcher = PortWatcher("127.0.0.1", 465, 5, True) self.assertRaises(socket.error, watcher.sync_wait) def test_listen_unknown_error_raises(self): """Unknown errors are thrown to the caller.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.bind(("127.0.0.1", 465)) self.mocker.throw(SyntaxError()) self.mocker.replay() watcher = PortWatcher("127.0.0.1", 465, 5, True) self.assertRaises(SyntaxError, watcher.sync_wait) def test_listen_on_connect_returns(self): """On a successful connect, the function returns None.""" mock_socket = self.mocker.patch(socket.socket) mock_socket.bind(("127.0.0.1", 800)) self.mocker.result(True) mock_socket.close() self.mocker.replay() watcher = PortWatcher("127.0.0.1", 800, 30, True) self.assertEqual(watcher.sync_wait(), True) class RemoveTreeTest(TestCase): @inlineCallbacks def setUp(self): yield super(RemoveTreeTest, self).setUp() zookeeper.set_debug_level(0) self.client = self.get_zookeeper_client() yield self.client.connect() @inlineCallbacks def test_remove_tree(self): yield self.client.create("/zoo") yield self.client.create("/zoo/mammals") yield self.client.create("/zoo/mammals/elephant") yield self.client.create("/zoo/reptiles") yield self.client.create("/zoo/reptiles/snake") yield remove_tree(self.client, "/zoo") children = yield self.client.get_children("/") self.assertNotIn("zoo", children) class DictMergeTest(TestCase): def test_merge_no_match(self): self.assertEqual( dict_merge(dict(a=1), dict(b=2)), dict(a=1, b=2)) def test_merge_matching_keys_same_value(self): self.assertEqual( dict_merge(dict(a=1, b=2), dict(b=2, c=1)), dict(a=1, b=2, c=1)) def test_merge_conflict(self): self.assertRaises( StateChanged, dict_merge, dict(a=1, b=3), dict(b=2, c=1)) class ChangeItemsTest(TestCase): """Tests the formatting of change items. Note that values always are stored in Unicode in the current scheme, but this testing ensures that other types can also be used if/when desired. 
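
    For example, the string forms asserted below look like:

        str(AddedItem("new-int", 42))
        # -> "Setting changed: 'new-int'=42 (was unset)"
        str(ModifiedItem("changed-str", "x", "y"))
        # -> "Setting changed: 'changed-str'='y' (was 'x')"
        str(DeletedItem("my-int", 42))
        # -> "Setting deleted: 'my-int' (was 42)"
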
""" def test_deleted_item(self): self.assertEqual(str(DeletedItem("my-int", 42)), "Setting deleted: 'my-int' (was 42)") self.assertEqual(str(DeletedItem("my-uni", u"a")), "Setting deleted: 'my-uni' (was u'a')") self.assertEqual(str(DeletedItem("my-str", "x")), "Setting deleted: 'my-str' (was 'x')") self.assertEqual( str(DeletedItem("my-long-uni", u"a" * 101)), "Setting deleted: 'my-long-uni' (was u'%s)" % ( "a" * 98,)) def test_modified_item(self): self.assertEqual(str(ModifiedItem("formerly-int", 42, u"x")), "Setting changed: 'formerly-int'=u'x' (was 42)") self.assertEqual(str(ModifiedItem("formerly-uni", u"a", "b")), "Setting changed: 'formerly-uni'='b' (was u'a')") self.assertEqual(str(ModifiedItem("changed-str", "x", "y")), "Setting changed: 'changed-str'='y' (was 'x')") self.assertEqual( str(ModifiedItem("my-long-uni", u"a" * 101, u"x" * 200)), "Setting changed: 'my-long-uni'=u'%s (was u'%s)" % ( "x" * 98, "a" * 98,)) def test_added_item(self): self.assertEqual(str(AddedItem("new-int", 42)), "Setting changed: 'new-int'=42 (was unset)") self.assertEqual(str(AddedItem("new-uni", u"a")), "Setting changed: 'new-uni'=u'a' (was unset)") self.assertEqual(str(AddedItem("new-str", "x")), "Setting changed: 'new-str'='x' (was unset)") self.assertEqual( str(AddedItem("my-long-uni", u"a" * 101)), "Setting changed: 'my-long-uni'=u'%s (was unset)" % ( "a" * 98,)) class YAMLStateTest(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = self.get_zookeeper_client() yield self.client.connect() self.path = "/zoo" @inlineCallbacks def tearDown(self): exists = yield self.client.exists(self.path) if exists: yield remove_tree(self.client, self.path) @inlineCallbacks def test_get_empty(self): """Verify getting an empty node works as expected.""" path = yield self.client.create(self.path) node = YAMLState(self.client, path) self.assertEqual(node, {}) @inlineCallbacks def test_access_wo_create(self): """Verify accessing data for a non-existant node works as expected.""" node = YAMLState(self.client, self.path) yield node.read() self.assertEqual(node, {}) def test_set_wo_read(self): """Verify that not calling read before mutation raises.""" node = YAMLState(self.client, self.path) self.assertRaises(ValueError, node.__setitem__, "alpha", "beta") self.assertRaises(ValueError, node.update, {"alpha": "beta"}) @inlineCallbacks def test_set_wo_write(self): """Check that get resolves from the internal write buffer. set/get pairs w/o write should present a view of the state reflecting local change. Verify that w/o write local data appears on subsequent calls but that zk state hasn't been changed. """ path = yield self.client.create(self.path) node = YAMLState(self.client, path) yield node.read() options = dict(alpha="beta", one=1) node.update(options) self.assertEqual(node, options) zk_data, stat = yield self.client.get(self.path) # the node isn't created yet in zk self.assertEqual(zk_data, "") @inlineCallbacks def test_set_w_write(self): """Verify that write updates the local and zk state. When write is called we expect that zk state reflects this. We also expect calls to get to expect the reflected state. 
""" node = YAMLState(self.client, self.path) yield node.read() options = dict(alpha="beta", one=1) node.update(options) changes = yield node.write() self.assertEqual( set(changes), set([AddedItem(key='alpha', new='beta'), AddedItem(key='one', new=1)])) # a local get should reflect proper data self.assertEqual(node, options) # and a direct look at zk should work as well zk_data, stat = yield self.client.get(self.path) zk_data = serializer.load(zk_data) self.assertEqual(zk_data, options) @inlineCallbacks def test_conflict_on_set(self): """Version conflict error tests. Test that two YAMLState objects writing to the same path can and will throw version errors when elements become out of read. """ node = YAMLState(self.client, self.path) node2 = YAMLState(self.client, self.path) yield node.read() yield node2.read() options = dict(alpha="beta", one=1) node.update(options) yield node.write() node2.update(options) changes = yield node2.write() self.assertEqual( set(changes), set([AddedItem("alpha", "beta"), AddedItem("one", 1)])) # first read node2 self.assertEqual(node, options) # write on node 1 options2 = dict(alpha="gamma", one="two") node.update(options2) changes = yield node.write() self.assertEqual( set(changes), set([ModifiedItem("alpha", "beta", "gamma"), ModifiedItem("one", 1, "two")])) # verify that node 1 reports as expected self.assertEqual(node, options2) # verify that node2 has the older data still self.assertEqual(node2, options) # now issue a set/write from node2 # this will merge the data deleting 'one' # and updating other values options3 = dict(alpha="cappa", new="next") node2.update(options3) del node2["one"] expected = dict(alpha="cappa", new="next") changes = yield node2.write() self.assertEqual( set(changes), set([DeletedItem("one", 1), ModifiedItem("alpha", "beta", "cappa"), AddedItem("new", "next")])) self.assertEqual(expected, node2) # but node still reflects the old data self.assertEqual(node, options2) @inlineCallbacks def test_setitem(self): node = YAMLState(self.client, self.path) yield node.read() options = dict(alpha="beta", one=1) node["alpha"] = "beta" node["one"] = 1 changes = yield node.write() self.assertEqual( set(changes), set([AddedItem("alpha", "beta"), AddedItem("one", 1)])) # a local get should reflect proper data self.assertEqual(node, options) # and a direct look at zk should work as well zk_data, stat = yield self.client.get(self.path) zk_data = serializer.load(zk_data) self.assertEqual(zk_data, options) @inlineCallbacks def test_multiple_reads(self): """Calling read resets state to ZK after multiple round-trips.""" node = YAMLState(self.client, self.path) yield node.read() node.update({"alpha": "beta", "foo": "bar"}) self.assertEqual(node["alpha"], "beta") self.assertEqual(node["foo"], "bar") yield node.read() # A read resets the data to the empty state self.assertEqual(node, {}) node.update({"alpha": "beta", "foo": "bar"}) changes = yield node.write() self.assertEqual( set(changes), set([AddedItem("alpha", "beta"), AddedItem("foo", "bar")])) # A write retains the newly set values self.assertEqual(node["alpha"], "beta") self.assertEqual(node["foo"], "bar") # now get another state instance and change zk state node2 = YAMLState(self.client, self.path) yield node2.read() node2.update({"foo": "different"}) changes = yield node2.write() self.assertEqual( changes, [ModifiedItem("foo", "bar", "different")]) # This should pull in the new state (and still have the merged old. 
yield node.read() self.assertEqual(node["alpha"], "beta") self.assertEqual(node["foo"], "different") def test_dictmixin_usage(self): """Verify that the majority of dict operation function.""" node = YAMLState(self.client, self.path) yield node.read() node.update({"alpha": "beta", "foo": "bar"}) self.assertEqual(node, {"alpha": "beta", "foo": "bar"}) result = node.pop("foo") self.assertEqual(result, "bar") self.assertEqual(node, {"alpha": "beta"}) node["delta"] = "gamma" self.assertEqual(set(node.keys()), set(("alpha", "delta"))) result = list(node.iteritems()) self.assertIn(("alpha", "beta"), result) self.assertIn(("delta", "gamma"), result) @inlineCallbacks def test_del_empties_state(self): d = YAMLState(self.client, self.path) yield d.read() d["a"] = "foo" changes = yield d.write() self.assertEqual(changes, [AddedItem("a", "foo")]) del d["a"] changes = yield d.write() self.assertEqual(changes, [DeletedItem("a", "foo")]) self.assertEqual(d, {}) @inlineCallbacks def test_read_resync(self): d1 = YAMLState(self.client, self.path) yield d1.read() d1["a"] = "foo" changes = yield d1.write() self.assertEqual(changes, [AddedItem("a", "foo")]) d2 = YAMLState(self.client, self.path) yield d2.read() del d2["a"] changes = yield d2.write() self.assertEqual(changes, [DeletedItem("a", "foo")]) d2["a"] = "bar" changes = yield d2.write() self.assertEqual(changes, [AddedItem("a", "bar")]) zk_data, stat = yield self.client.get(self.path) yield d1.read() # d1 should pick up the new value (from d2) on a read zk_data, stat = yield self.client.get(self.path) self.assertEqual(d1["a"], "bar") @inlineCallbacks def test_multiple_writes(self): d1 = YAMLState(self.client, self.path) yield d1.read() d1.update(dict(foo="bar", this="that")) changes = yield d1.write() self.assertEqual( set(changes), set([AddedItem("foo", "bar"), AddedItem("this", "that")])) del d1["this"] d1["another"] = "value" changes = yield d1.write() self.assertEqual( set(changes), set([DeletedItem("this", "that"), AddedItem("another", "value")])) expected = {"foo": "bar", "another": "value"} self.assertEqual(d1, expected) changes = yield d1.write() self.assertEqual(changes, []) self.assertEqual(d1, expected) yield d1.read() self.assertEqual(d1, expected) # This shouldn't write any changes changes = yield d1.write() self.assertEqual(changes, []) self.assertEqual(d1, expected) @inlineCallbacks def test_write_twice(self): d1 = YAMLState(self.client, self.path) yield d1.read() d1["a"] = "foo" changes = yield d1.write() self.assertEqual(changes, [AddedItem("a", "foo")]) d2 = YAMLState(self.client, self.path) yield d2.read() d2["a"] = "bar" changes = yield d2.write() self.assertEqual(changes, [ModifiedItem("a", "foo", "bar")]) # Shouldn't write again. Changes were already # flushed and acted upon by other parties. 
changes = yield d1.write() self.assertEqual(changes, []) yield d1.read() self.assertEquals(d1, d2) @inlineCallbacks def test_read_requires_node(self): """Validate that read raises when required=True.""" d1 = YAMLState(self.client, self.path) yield self.assertFailure(d1.read(True), StateNotFound) class SomeError(Exception): pass class TestMixee(StateBase, YAMLStateNodeMixin): def __init__(self, client, path): self._client = client self._zk_path = path def _node_missing(self): raise SomeError() def get(self): return self._get_node_value("key") def set(self, value): return self._set_node_value("key", value) class YAMLStateNodeMixinTest(TestCase): @inlineCallbacks def setUp(self): zookeeper.set_debug_level(0) self.client = self.get_zookeeper_client() yield self.client.connect() self.path = "/path" @inlineCallbacks def tearDown(self): exists = yield self.client.exists(self.path) if exists: yield remove_tree(self.client, self.path) @inlineCallbacks def test_error_when_no_node(self): mixee = TestMixee(self.client, self.path) yield self.assertFailure(mixee.get(), SomeError) exists = yield self.client.exists(self.path) self.assertFalse(exists) yield self.assertFailure(mixee.set("something"), SomeError) exists = yield self.client.exists(self.path) self.assertFalse(exists) @inlineCallbacks def test_get_empty(self): yield self.client.create(self.path) mixee = TestMixee(self.client, self.path) self.assertEquals((yield mixee.get()), None) @inlineCallbacks def test_get_missing(self): yield self.client.create(self.path, serializer.dump({"foo": "bar"})) mixee = TestMixee(self.client, self.path) self.assertEquals((yield mixee.get()), None) @inlineCallbacks def test_get_exists(self): yield self.client.create( self.path, serializer.dump({"key": "butterfly"})) mixee = TestMixee(self.client, self.path) self.assertEquals((yield mixee.get()), "butterfly") @inlineCallbacks def test_set_empty(self): yield self.client.create(self.path) mixee = TestMixee(self.client, self.path) yield mixee.set("caterpillar") content, _ = yield self.client.get(self.path) self.assertEquals(serializer.load(content), {"key": "caterpillar"}) @inlineCallbacks def test_set_safely(self): yield self.client.create(self.path, serializer.dump({"foo": "bar"})) mixee = TestMixee(self.client, self.path) yield mixee.set("cocoon") content, _ = yield self.client.get(self.path) self.assertEquals( serializer.load(content), {"foo": "bar", "key": "cocoon"}) juju-0.7.orig/juju/tests/__init__.py0000644000000000000000000000000012135220114015627 0ustar 00000000000000juju-0.7.orig/juju/tests/common.py0000644000000000000000000000305412135220114015374 0ustar 00000000000000import logging import os import tempfile from juju.lib.zk import Zookeeper from contextlib import contextmanager __all__ = ("ManagedZooKeeper", "zookeeper_test_context", "get_zookeeper_test_address") log = logging.getLogger("juju.tests.common") """Global to manage the ZK test address - only for testing of course!""" _zookeeper_address = "127.0.0.1:2181" def get_test_zookeeper_address(): """Get the current test ZK address, such as '127.0.0.1:2181'""" return _zookeeper_address @contextmanager def zookeeper_test_context(install_path, port=28181, fsync=False): """Manage context to run/stop a ZooKeeper for testing and related vars. @param install_path: The path to the install for ZK. 
Bare word "system" causes special behavior to use system conf for ZK @param port: The port to run the managed ZK instance """ global _zookeeper_address saved_zookeeper_address = _zookeeper_address saved_env = os.environ.get("ZOOKEEPER_ADDRESS") test_zookeeper = Zookeeper( tempfile.mkdtemp(), port, zk_location=install_path, use_deferred=False, fsync=fsync) test_zookeeper.start() os.environ["ZOOKEEPER_ADDRESS"] = test_zookeeper.address _zookeeper_address = test_zookeeper.address try: yield test_zookeeper finally: test_zookeeper.stop() _zookeeper_address = saved_zookeeper_address if saved_env: os.environ["ZOOKEEPER_ADDRESS"] = saved_env else: del os.environ["ZOOKEEPER_ADDRESS"] juju-0.7.orig/juju/tests/test_errors.py0000644000000000000000000001436312135220114016464 0ustar 00000000000000from juju.errors import ( JujuError, FileNotFound, FileAlreadyExists, CharmError, CharmInvocationError, CharmUpgradeError, NoConnection, InvalidHost, InvalidUser, ProviderError, CloudInitError, ProviderInteractionError, CannotTerminateMachine, MachinesNotFound, EnvironmentPending, EnvironmentNotFound, IncompatibleVersion, InvalidPlacementPolicy, ServiceError, ConstraintError, UnknownConstraintError, SSLVerificationError, SSLVerificationUnsupported) from juju.lib.testing import TestCase class ErrorsTest(TestCase): def assertIsJujuError(self, error): self.assertTrue(isinstance(error, JujuError), "%s is not a subclass of JujuError" % error.__class__.__name__) def test_IncompatibleVersion(self): error = IncompatibleVersion(123, 42) self.assertEqual( str(error), "Incompatible juju protocol versions (found 123, want 42)") self.assertIsJujuError(error) def test_FileNotFound(self): error = FileNotFound("/path") self.assertEquals(str(error), "File was not found: '/path'") self.assertIsJujuError(error) def test_FileAlreadyExists(self): error = FileAlreadyExists("/path") self.assertEquals(str(error), "File already exists, won't overwrite: '/path'") self.assertIsJujuError(error) def test_NoConnection(self): error = NoConnection("unable to connect") self.assertIsJujuError(error) def test_InvalidHost(self): error = InvalidHost("Invalid host for SSH forwarding") self.assertTrue(isinstance(error, NoConnection)) self.assertEquals( str(error), "Invalid host for SSH forwarding") def test_InvalidUser(self): error = InvalidUser("Invalid SSH key") self.assertTrue(isinstance(error, NoConnection)) self.assertEquals( str(error), "Invalid SSH key") def test_ConstraintError(self): error = ConstraintError("bork bork bork") self.assertIsJujuError(error) self.assertEquals(str(error), "bork bork bork") def test_UnknownConstraintError(self): error = UnknownConstraintError("meatball") self.assertTrue(isinstance(error, ConstraintError)) self.assertEquals(str(error), "Unknown constraint: 'meatball'") def test_ProviderError(self): error = ProviderError("Invalid credentials") self.assertIsJujuError(error) self.assertEquals(str(error), "Invalid credentials") def test_CloudInitError(self): error = CloudInitError("BORKEN") self.assertIsJujuError(error) self.assertEquals(str(error), "BORKEN") def test_ProviderInteractionError(self): error = ProviderInteractionError("Bad Stuff") self.assertIsJujuError(error) self.assertEquals(str(error), "Bad Stuff") def test_CannotTerminateMachine(self): error = CannotTerminateMachine(0, "environment would be destroyed") self.assertIsJujuError(error) self.assertEquals( str(error), "Cannot terminate machine 0: environment would be destroyed") def test_MachinesNotFoundSingular(self): error = 
MachinesNotFound(("i-sublimed",)) self.assertIsJujuError(error) self.assertEquals(error.instance_ids, ["i-sublimed"]) self.assertEquals(str(error), "Cannot find machine: i-sublimed") def test_MachinesNotFoundPlural(self): error = MachinesNotFound(("i-disappeared", "i-exploded")) self.assertIsJujuError(error) self.assertEquals(error.instance_ids, ["i-disappeared", "i-exploded"]) self.assertEquals(str(error), "Cannot find machines: i-disappeared, i-exploded") def test_EnvironmentNotFoundWithInfo(self): error = EnvironmentNotFound("problem") self.assertIsJujuError(error) self.assertEquals(str(error), "juju environment not found: problem") def test_EnvironmentNotFoundNoInfo(self): error = EnvironmentNotFound() self.assertIsJujuError(error) self.assertEquals(str(error), "juju environment not found: no details " "available") def test_EnvironmentPendingWithInfo(self): error = EnvironmentPending("problem") self.assertIsJujuError(error) self.assertEquals(str(error), "problem") def test_InvalidPlacementPolicy(self): error = InvalidPlacementPolicy("x", "foobar", ["a", "b", "c"]) self.assertIsJujuError(error) self.assertEquals( str(error), ("Unsupported placement policy: 'x' for provider: 'foobar', " "supported policies a, b, c")) def test_ServiceError(self): error = ServiceError("blah") self.assertEquals(str(error), "blah") self.assertIsJujuError(error) def test_CharmError(self): error = CharmError("/foo/bar", "blah blah") self.assertIsJujuError(error) self.assertEquals(str(error), "Error processing '/foo/bar': blah blah") def test_CharmInvocationError(self): error = CharmInvocationError("/foo/bar", 1) self.assertIsJujuError(error) self.assertEquals( str(error), "Error processing '/foo/bar': exit code 1.") def test_CharmInvocationError_with_signal(self): error = CharmInvocationError("/foo/bar", None, 13) self.assertIsJujuError(error) self.assertEquals( str(error), "Error processing '/foo/bar': signal 13.") def test_CharmUpgradeError(self): error = CharmUpgradeError("blah blah") self.assertIsJujuError(error) self.assertEquals(str(error), "Cannot upgrade charm: blah blah") def test_SSLVerificationError(self): orig_error = Exception() error = SSLVerificationError(orig_error) self.assertIsJujuError(error) self.assertIs(orig_error, error.ssl_error) self.assertIn("Bad HTTPS certificate", str(error)) def test_SSLVerificationUnsupported(self): error = SSLVerificationUnsupported() self.assertIsJujuError(error) self.assertIn("HTTPS certificates cannot be verified", str(error)) juju-0.7.orig/juju/unit/__init__.py0000644000000000000000000000000212135220114015446 0ustar 00000000000000# juju-0.7.orig/juju/unit/address.py0000644000000000000000000000704212135220114015347 0ustar 00000000000000"""Service units have both a public and private address. 
""" import subprocess from twisted.internet.defer import inlineCallbacks, returnValue, succeed from twisted.internet.threads import deferToThread from twisted.web import client from juju.errors import JujuError from juju.state.environment import GlobalSettingsStateManager @inlineCallbacks def get_unit_address(client): settings = GlobalSettingsStateManager(client) provider_type = yield settings.get_provider_type() if provider_type == "ec2": returnValue(EC2UnitAddress()) if provider_type in ("openstack", "openstack_s3"): returnValue(OpenStackUnitAddress()) elif provider_type == "local": returnValue(LocalUnitAddress()) elif provider_type == "orchestra": returnValue(OrchestraUnitAddress()) elif provider_type == "dummy": returnValue(DummyUnitAddress()) elif provider_type == "maas": returnValue(MAASUnitAddress()) raise JujuError( "Unknown provider type: %r, unit addresses unknown." % provider_type) class UnitAddress(object): def get_private_address(self): raise NotImplementedError(self.get_private_address) def get_public_address(self): raise NotImplementedError(self.get_public_address) class DummyUnitAddress(UnitAddress): def get_private_address(self): return succeed("localhost") def get_public_address(self): return succeed("localhost") class EC2UnitAddress(UnitAddress): @inlineCallbacks def get_private_address(self): content = yield client.getPage( "http://169.254.169.254/latest/meta-data/local-hostname") returnValue(content.strip()) @inlineCallbacks def get_public_address(self): content = yield client.getPage( "http://169.254.169.254/latest/meta-data/public-hostname") returnValue(content.strip()) class OpenStackUnitAddress(UnitAddress): """Address determination of a service unit on an OpenStack server Unlike EC2 there are no promises that an instance will have a resolvable hostname, or for that matter a public ip address. 
""" def _get_metadata_string(self, key): return client.getPage("http://169.254.169.254/%s/meta-data/%s" % ("2009-04-04", key)) def get_private_address(self): return self._get_metadata_string("local-ipv4") @inlineCallbacks def get_public_address(self): address = yield self._get_metadata_string("public-ipv4") if not address: address = yield self.get_private_address() returnValue(address) class LocalUnitAddress(UnitAddress): def get_private_address(self): return deferToThread(self._get_address) def get_public_address(self): return deferToThread(self._get_address) def _get_address(self): output = subprocess.check_output(["hostname", "-I"]) return output.strip().split()[0] class OrchestraUnitAddress(UnitAddress): def get_private_address(self): return deferToThread(self._get_address) def get_public_address(self): return deferToThread(self._get_address) def _get_address(self): output = subprocess.check_output(["hostname", "-f"]) return output.strip() class MAASUnitAddress(UnitAddress): def get_private_address(self): return deferToThread(self._get_address) def get_public_address(self): return deferToThread(self._get_address) def _get_address(self): output = subprocess.check_output(["hostname", "-f"]) return output.strip() juju-0.7.orig/juju/unit/charm.py0000644000000000000000000000253612135220114015017 0ustar 00000000000000 import os import shutil from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.client import downloadPage from twisted.web.error import Error from juju.errors import FileNotFound from juju.charm.bundle import CharmBundle from juju.lib import under from juju.state.charm import CharmStateManager @inlineCallbacks def download_charm(client, charm_id, charms_directory): """Retrieve a charm from the provider storage to the local machine. """ charm_state_manager = CharmStateManager(client) charm_state = yield charm_state_manager.get_charm_state(charm_id) # Calculate local charm path checksum = yield charm_state.get_sha256() charm_key = under.quote("%s:%s" % (charm_state.id, checksum)) local_charm_path = os.path.join( charms_directory, charm_key) # Retrieve charm from provider storage link if charm_state.bundle_url.startswith("file://"): file_path = charm_state.bundle_url[len("file://"):] if not os.path.exists(file_path): raise FileNotFound(charm_state.bundle_url) shutil.copyfileobj(open(file_path), open(local_charm_path, "w")) else: try: yield downloadPage(charm_state.bundle_url, local_charm_path) except Error: raise FileNotFound(charm_state.bundle_url) returnValue(CharmBundle(local_charm_path)) juju-0.7.orig/juju/unit/deploy.py0000644000000000000000000000761112135220114015220 0ustar 00000000000000import logging import os from twisted.internet.defer import inlineCallbacks from juju.machine.unit import get_deploy_factory from juju.state.charm import CharmStateManager from juju.state.environment import GlobalSettingsStateManager from juju.state.service import ServiceStateManager from juju.unit.charm import download_charm log = logging.getLogger("unit.deploy") class UnitDeployer(object): """Manages service unit deployment for an agent. """ def __init__(self, client, machine_id, juju_directory): """Initialize a Unit Deployer. :param client: A connected zookeeper client. :param str machine_id: the ID of the machine the agent is being run on. :param str juju_directory: the directory the agent is running in. 
""" self.client = client self.machine_id = machine_id self.juju_directory = juju_directory self.service_state_manager = ServiceStateManager(self.client) self.charm_state_manager = CharmStateManager(self.client) self.env_id = None @property def charms_directory(self): return os.path.join(self.juju_directory, "charms") @inlineCallbacks def start(self, provider_type=None): """Starts the unit deployer.""" # Find out what provided the machine, and how to deploy units. settings = GlobalSettingsStateManager(self.client) if provider_type is None: provider_type = yield settings.get_provider_type() self.deploy_factory = get_deploy_factory(provider_type) self.env_id = yield settings.get_environment_id() if not os.path.exists(self.charms_directory): os.makedirs(self.charms_directory) def download_charm(self, charm_state): """Retrieve a charm from the provider storage to the local machine. :param charm_state: Charm to be downloaded """ log.debug("Downloading charm %s to %s", charm_state.id, self.charms_directory) return download_charm( self.client, charm_state.id, self.charms_directory) @inlineCallbacks def start_service_unit(self, service_unit_name): """Start a service unit on the machine. Downloads the charm, and extract it to the service unit directory, and launch the service unit agent within the unit directory. :param str service_unit_name: Service unit name to be started """ # Retrieve the charm state to get at the charm. unit_state = yield self.service_state_manager.get_unit_state( service_unit_name) charm_id = yield unit_state.get_charm_id() charm_state = yield self.charm_state_manager.get_charm_state( charm_id) # Download the charm. bundle = yield self.download_charm(charm_state) # Use deployment to setup the workspace and start the unit agent. deployment = self.deploy_factory( service_unit_name, self.juju_directory) log.debug("Using %r for %s in %s", deployment, service_unit_name, self.juju_directory) running = yield deployment.is_running() if not running: log.debug("Starting service unit %s...", service_unit_name) yield deployment.start( self.env_id, self.machine_id, self.client.servers, bundle) log.info("Started service unit %s", service_unit_name) @inlineCallbacks def kill_service_unit(self, service_unit_name): """Stop service unit and destroy disk state, ala SIGKILL or lxc-destroy :param str service_unit_name: Service unit name to be killed """ deployment = self.deploy_factory( service_unit_name, self.juju_directory) log.info("Stopping service unit %s...", service_unit_name) yield deployment.destroy() log.info("Stopped service unit %s", service_unit_name) juju-0.7.orig/juju/unit/lifecycle.py0000644000000000000000000007521712135220114015672 0ustar 00000000000000import os import logging import shutil import tempfile from twisted.internet.defer import ( inlineCallbacks, DeferredLock, DeferredList, returnValue) from juju.errors import CharmUpgradeError from juju.hooks.invoker import Invoker from juju.hooks.scheduler import HookScheduler from juju.lib import serializer from juju.state.hook import ( DepartedRelationHookContext, HookContext, RelationChange) from juju.state.errors import StopWatcher, UnitRelationStateNotFound from juju.state.relation import ( RelationStateManager, UnitRelationState) from juju.unit.charm import download_charm from juju.unit.deploy import UnitDeployer from juju.unit.workflow import RelationWorkflowState HOOK_SOCKET_FILE = ".juju.hookcli.sock" hook_log = logging.getLogger("hook.output") # This is used as `client_id` when constructing Invokers _EVIL_CONSTANT = 
"constant" class _CharmUpgradeOperation(object): """Helper class dealing only with the bare mechanics of upgrading""" def __init__(self, client, service, unit, unit_dir): self._client = client self._service = service self._unit = unit self._old_id = None self._new_id = None self._download_dir = tempfile.mkdtemp(prefix="tmp-charm-upgrade") self._bundle = None self._charm_dir = os.path.join(unit_dir, "charm") self._log = logging.getLogger("charm.upgrade") @inlineCallbacks def prepare(self): self._log.debug("Checking for newer charm...") try: self._new_id = yield self._service.get_charm_id() self._old_id = yield self._unit.get_charm_id() if self._new_id != self._old_id: self._log.debug("Downloading %s...", self._new_id) self._bundle = yield download_charm( self._client, self._new_id, self._download_dir) else: self._log.debug("Latest charm is already present.") except Exception as e: self._log.exception("Charm upgrade preparation failed.") raise CharmUpgradeError(str(e)) @property def ready(self): return self._bundle is not None @inlineCallbacks def run(self): assert self.ready self._log.debug( "Replacing charm %s with %s.", self._old_id, self._new_id) try: # TODO this will leave droppings from the old charm; but we can't # delete the whole charm dir and replace it, because some charms # store state within their directories. See lp:791035 self._bundle.extract_to(self._charm_dir) self._log.debug( "Charm has been upgraded to %s.", self._new_id) yield self._unit.set_charm_id(self._new_id) self._log.debug("Upgrade recorded.") except Exception as e: self._log.exception("Charm upgrade failed.") raise CharmUpgradeError(str(e)) def cleanup(self): if os.path.exists(self._download_dir): shutil.rmtree(self._download_dir) class UnitLifecycle(object): """Manager for a unit lifecycle. Primarily used by the workflow interaction, to modify unit behavior according to the current unit workflow state and transitions. See docs/source/internals/unit-workflow-lifecycle.rst for a brief discussion of some of the more interesting implementation decisions. """ def __init__(self, client, unit, service, unit_dir, state_dir, executor): self._client = client self._unit = unit self._service = service self._executor = executor self._unit_dir = unit_dir self._state_dir = state_dir self._relations = None self._running = False self._watching_relation_memberships = False self._watching_relation_resolved = False self._run_lock = DeferredLock() self._log = logging.getLogger("unit.lifecycle") @property def running(self): return self._running def get_relation_workflow(self, relation_id): """Accessor to a unit relation workflow, by relation id. Primarily intended for and used by unit tests. Raises a KeyError if the relation workflow does not exist. """ return self._relations[relation_id] @inlineCallbacks def install(self, fire_hooks=True): """Invoke the unit's install hook. """ if fire_hooks: yield self._execute_hook("install") @inlineCallbacks def start(self, fire_hooks=True, start_relations=True): """Invoke the start hook, and setup relation watching. :param fire_hooks: False to skip running config-change and start hooks. Will not affect any relation hooks that happen to be fired as a consequence of starting up. :param start_relations: True to transition all "down" relation workflows to "up". 
""" self._log.debug("pre-start acquire, running:%s", self._running) yield self._run_lock.acquire() self._log.debug("start running, unit lifecycle") watches = [] try: if fire_hooks: yield self._execute_hook("config-changed") yield self._execute_hook("start") if self._relations is None: yield self._load_relations() if start_relations: # We actually want to transition from "down" to "up" where # applicable (ie a stopped unit is starting up again) for workflow in self._relations.values(): with (yield workflow.lock()): state = yield workflow.get_state() if state == "down": yield workflow.transition_state("up") # Establish a watch on the existing relations. if not self._watching_relation_memberships: self._log.debug("starting service relation watch") watches.append(self._service.watch_relation_states( self._on_service_relation_changes)) self._watching_relation_memberships = True # Establish a watch for resolved relations if not self._watching_relation_resolved: self._log.debug("starting unit relation resolved watch") watches.append(self._unit.watch_relation_resolved( self._on_relation_resolved_changes)) self._watching_relation_resolved = True # Set current status self._running = True finally: self._run_lock.release() # Give up the run lock before waiting on initial watch invocations. results = yield DeferredList(watches, consumeErrors=True) # If there's an error reraise the first one found. errors = [e[1] for e in results if not e[0]] if errors: returnValue(errors[0]) self._log.debug("started unit lifecycle") @inlineCallbacks def stop(self, fire_hooks=True, stop_relations=True): """Stop the unit, executes the stop hook, and stops relation watching. :param fire_hooks: False to skip running stop hooks. :param stop_relations: True to transition all "up" relation workflows to "down"; when False, simply shut down relation lifecycles (in preparation for process shutdown, for example). """ self._log.debug("pre-stop acquire, running:%s", self._running) yield self._run_lock.acquire() try: # Verify state assert self._running, "Already Stopped" if stop_relations: # We actually want to transition relation states # (probably because the unit workflow state is stopped/error) for workflow in self._relations.values(): with (yield workflow.lock()): yield workflow.transition_state("down") else: # We just want to stop the relations from acting # (probably because the process is going down) self._log.debug("stopping relation lifecycles") for workflow in self._relations.values(): yield workflow.lifecycle.stop() if fire_hooks: yield self._execute_hook("stop") # Set current status self._running = False finally: self._run_lock.release() self._log.debug("stopped unit lifecycle") @inlineCallbacks def configure(self, fire_hooks=True): """Inform the unit that its service config has changed. """ if not fire_hooks: returnValue(None) yield self._run_lock.acquire() try: # Verify State assert self._running, "Needs to be running." # Execute hook yield self._execute_hook("config-changed") finally: self._run_lock.release() self._log.debug("configured unit") @inlineCallbacks def upgrade_charm(self, fire_hooks=True, force=False): """Upgrade the charm and invoke the upgrade-charm hook if requested. :param fire_hooks: if False, *and* the actual upgrade operation is not necessary, skip the upgrade-charm hook. When the actual charm has changed during this invocation, this flag is ignored: hooks will always be fired. :param force: Boolean, if true then we're merely putting the charm into place on disk, not executing charm hooks. 
""" msg = "Upgrading charm" if force: msg += " - forced" self._log.debug(msg) upgrade = _CharmUpgradeOperation( self._client, self._service, self._unit, self._unit_dir) yield self._run_lock.acquire() try: yield upgrade.prepare() # Executor may already be stopped if we're retrying. if self._executor.running: self._log.debug("Pausing normal hook execution") yield self._executor.stop() if upgrade.ready: yield upgrade.run() fire_hooks = True if fire_hooks and not force: yield self._execute_hook("upgrade-charm", now=True) # Always restart executor on success; charm upgrade operations and # errors are the only reasons for the executor to be stopped. self._log.debug("Resuming normal hook execution.") self._executor.start() finally: self._run_lock.release() upgrade.cleanup() @inlineCallbacks def _on_relation_resolved_changes(self, event): """Callback for unit relation resolved watching. The callback is invoked whenever the relation resolved settings change. """ self._log.debug("relation resolved changed") # Acquire the run lock, and process the changes. yield self._run_lock.acquire() try: # If the unit lifecycle isn't running we shouldn't process # any relation resolutions. if not self._running: self._log.debug("stop watch relation resolved changes") self._watching_relation_resolved = False raise StopWatcher() self._log.info("processing relation resolved changed") if self._client.connected: yield self._process_relation_resolved_changes() finally: yield self._run_lock.release() @inlineCallbacks def _process_relation_resolved_changes(self): """Invoke retry transitions on relations if their not running. """ relation_resolved = yield self._unit.get_relation_resolved() if relation_resolved is None: returnValue(None) else: yield self._unit.clear_relation_resolved() keys = set(relation_resolved).intersection(self._relations) for internal_rel_id in keys: workflow = self._relations[internal_rel_id] with (yield workflow.lock()): state = yield workflow.get_state() if state != "up": yield workflow.transition_state("up") @inlineCallbacks def _on_service_relation_changes(self, old_relations, new_relations): """Callback for service relation watching. The callback is used to manage the unit relation lifecycle in accordance with the current relations of the service. @param old_relations: Previous service relations for a service. On the initial execution, this value is None. @param new_relations: Current service relations for a service. """ self._log.debug( "services changed old:%s new:%s", old_relations, new_relations) # Acquire the run lock, and process the changes. yield self._run_lock.acquire() try: # If the lifecycle is not running, then stop the watcher if not self._running: self._log.debug("stop service-rel watcher, discarding changes") self._watching_relation_memberships = False raise StopWatcher() self._log.debug("processing relations changed") yield self._process_service_changes(old_relations, new_relations) finally: self._run_lock.release() def _sort_relations(self, rel_ids, rels, invert=False): """ Sort a set of relations. We process peer relations first when adding, and last when removing, else deferring to creation order. 
""" rel_ids = list(rel_ids) def _sort(x, y): xr, yr = rels[x].relation_role, rels[y].relation_role if xr == yr: return cmp(x, y) elif xr == "peer": return -1 elif yr == "peer": return 1 return cmp(x, y) rel_ids.sort(_sort) if invert: return list(reversed(rel_ids)) return rel_ids @inlineCallbacks def _process_service_changes(self, old_relations, new_relations): """Add and remove unit lifecycles per the service relations Determine. """ # Calculate delta between zookeeper state and our stored state. new_relations = dict( (service_relation.internal_relation_id, service_relation) for service_relation in new_relations) if old_relations: old_relations = dict( (service_relation.internal_relation_id, service_relation) for service_relation in old_relations) added = self._sort_relations( set(new_relations.keys()) - set(self._relations.keys()), new_relations) removed = self._sort_relations( set(self._relations.keys()) - set(new_relations.keys()), self._relations, invert=True) # Could this service be a principal container? is_principal = not (yield self._service.is_subordinate()) # Once we know a relation is departed, *immediately* stop running # its hooks. We can't really handle the case in which a hook is # *already* running, but we can at least make sure it doesn't run # any *more* hooks (which could have been queued in the past, but # not yet executed).# This isn't *currently* an exceptionally big # deal, because: # # (1) The ZK state won't actually be deleted, so an inappropriate # hook will still run happily. # (2) Even if the state is deleted, and the hook errors out, the # only actual consequence is that we'll eventually run the # error_depart transition rather than depart or down_depart. # # However, (1) will certainly change in the future, and (2) is not # necessarily a watertight guarantee. for relation_id in removed: yield self._relations[relation_id].lifecycle.stop() # Actually depart old relations. for relation_id in removed: workflow = self._relations.pop(relation_id) with (yield workflow.lock()): yield workflow.transition_state("departed") self._store_relations() # Process new relations for relation_id in added: service_relation = new_relations[relation_id] yield self._add_relation(service_relation) if (is_principal and service_relation.relation_scope == "container"): yield self._add_subordinate_unit(service_relation) yield self._store_relations() @inlineCallbacks def _add_relation(self, service_relation): try: unit_relation = yield service_relation.get_unit_state( self._unit) except UnitRelationStateNotFound: # This unit has not yet been assigned a unit relation state, # Go ahead and add one. 
unit_relation = yield service_relation.add_unit_state( self._unit) lifecycle = UnitRelationLifecycle( self._client, self._unit.unit_name, unit_relation, service_relation.relation_ident, self._unit_dir, self._state_dir, self._executor) workflow = RelationWorkflowState( self._client, unit_relation, service_relation.relation_name, lifecycle, self._state_dir) self._relations[service_relation.internal_relation_id] = workflow with (yield workflow.lock()): yield workflow.synchronize() @inlineCallbacks def _do_unit_deploy(self, unit_name, machine_id, charm_dir): # this method exists to aid testing rather than being # inlined unit_deployer = UnitDeployer(self._client, machine_id, charm_dir) yield unit_deployer.start("subordinate") yield unit_deployer.start_service_unit(unit_name) @inlineCallbacks def _add_subordinate_unit(self, service_relation): """Deploy a subordinate unit for service_relation remote endpoint.""" # Figure out the remote service state service_states = yield service_relation.get_service_states() subordinate_service = [s for s in service_states if s.service_name != self._unit.service_name][0] # add a unit state to service (using self._unit as the # principal container) subordinate_unit = yield subordinate_service.add_unit_state( container=self._unit) machine_id = yield self._unit.get_assigned_machine_id() subordinate_unit_dir = os.path.dirname(self._unit_dir) charm_dir = os.path.join(subordinate_unit_dir, subordinate_unit.unit_name.replace( "/", "-")) state_dir = os.path.join(charm_dir, "state") if not os.path.exists(state_dir): os.makedirs(state_dir) self._log.debug("deploying %s as subordinate of %s", subordinate_unit.unit_name, self._unit.unit_name) # with the relation in place and the units added to the # container we can start the unit agent yield self._do_unit_deploy(subordinate_unit.unit_name, machine_id, charm_dir) @property def _known_relations_path(self): return os.path.join( self._state_dir, "%s.lifecycle.relations" % self._unit.internal_id) def _store_relations(self): """Store *just* enough information to recreate RelationWorkflowStates. Note that we don't need to store the actual states -- if we can reconstruct the RWS, it will be responsible for finding its own state -- but we *do* need to store the fact of their existence, so that we can still depart broken relations even if they break while we're not running. """ state_dict = {} for relation_wf in self._relations.itervalues(): state_dict.update(relation_wf.get_relation_info()) state = serializer.dump(state_dict) temp_path = self._known_relations_path + "~" with open(temp_path, "w") as f: f.write(state) os.rename(temp_path, self._known_relations_path) @inlineCallbacks def _load_relations(self): """Recreate workflows for any relation we had previously stored. All relations (including those already departed) are stored in ._relations (and will be added or departed as usual); but only relations *not* already departed will be synchronized, to avoid errors caused by trying to access ZK state that may not exist any more.
""" self._relations = {} if not os.path.exists(self._known_relations_path): return rsm = RelationStateManager(self._client) relations = yield rsm.get_relations_for_service(self._service) relations_by_id = dict((r.internal_relation_id, r) for r in relations) with open(self._known_relations_path) as f: known_relations = serializer.load(f.read()) for relation_id, relation_info in known_relations.items(): if relation_id in relations_by_id: # The service relation's still around: set up workflow as usual yield self._add_relation(relations_by_id[relation_id]) else: # The relation has departed. Create an *un*synchronized # workflow and place it in relations for detection and # removal (with hook-firing) in _process_service_changes. workflow = self._reconstruct_workflow( relation_id, relation_info["relation_name"], relation_info["relation_scope"]) self._relations[relation_id] = workflow def _reconstruct_workflow(self, relation_id, relation_ident, relation_scope): """Create a RelationWorkflowState which may refer to outdated state. This means that *if* this service has already departed the relevant relation, it is not safe to synchronize the resultant workflow, because its lifecycle may attempt to watch state that doesn't exist. Since synchronization is a one-time occurrence, and this method has only one client, this shouldn't be too hard to keep track of. """ unit_relation = UnitRelationState( self._client, self._service.internal_id, self._unit.internal_id, relation_id, relation_scope) lifecycle = UnitRelationLifecycle( self._client, self._unit.unit_name, unit_relation, relation_ident, self._unit_dir, self._state_dir, self._executor) relation_name = relation_ident.split(":")[0] return RelationWorkflowState( self._client, unit_relation, relation_name, lifecycle, self._state_dir) @inlineCallbacks def _execute_hook(self, hook_name, now=False): """Execute the hook with the given name. For priority hooks, the hook is scheduled and then the executioner started, before wait on the result. """ hook_path = os.path.join(self._unit_dir, "charm", "hooks", hook_name) socket_path = os.path.join(self._unit_dir, HOOK_SOCKET_FILE) invoker = Invoker( HookContext(self._client, self._unit.unit_name), None, _EVIL_CONSTANT, socket_path, self._unit_dir, hook_log) yield invoker.start() if now: yield self._executor.run_priority_hook(invoker, hook_path) else: yield self._executor(invoker, hook_path) class RelationInvoker(Invoker): """A relation hook invoker, that populates the environment. """ def get_environment_from_change(self, env, change): """Populate environment with relation change information.""" env["JUJU_RELATION"] = change.relation_name env["JUJU_RELATION_ID"] = change.relation_ident env["JUJU_REMOTE_UNIT"] = change.unit_name return env class UnitRelationLifecycle(object): """Unit Relation Lifcycle management. Provides for watching related units in a relation, and executing hooks in response to changes. The lifecycle is driven by the workflow. The Unit relation lifecycle glues together a number of components. It controls a watcher that recieves watch events from zookeeper, and it controls a hook scheduler which gets fed those events. When the scheduler wants to execute a hook, the executor is called with the hook path and the hook invoker. **Relation hook invocation do not maintain global order or determinism across relations**. They only maintain ordering and determinism within a relation. A shared scheduler across relations would be needed to maintain such behavior. 
See docs/source/internals/unit-workflow-lifecycle.rst for a brief discussion of some of the more interesting implementation decisions. """ def __init__(self, client, unit_name, unit_relation, relation_ident, unit_dir, state_dir, executor): self._client = client self._unit_dir = unit_dir self._relation_ident = relation_ident self._relation_name = relation_ident.split(":")[0] self._unit_relation = unit_relation self._unit_name = unit_name self._executor = executor self._run_lock = DeferredLock() self._log = logging.getLogger("unit.relation.lifecycle") self._error_handler = None schedule_path = os.path.join( state_dir, "%s.schedule" % unit_relation.internal_relation_id) self._scheduler = HookScheduler( client, self._execute_change_hook, self._unit_relation, self._relation_ident, unit_name, schedule_path) self._watcher = None @property def watching(self): """Are we queuing up hook executions in response to state changes?""" return self._watcher and self._watcher.running @property def executing(self): """Are we currently dequeuing and executing any queued hooks?""" return self._scheduler.running def set_hook_error_handler(self, handler): """Set an error handler to be invoked if a hook errors. The handler should accept two parameters, the RelationChange that triggered the hook, and the exception instance. """ self._error_handler = handler @inlineCallbacks def start(self, start_watches=True, start_scheduler=True): """Start watching related units and executing change hooks. :param bool start_watches: True to start relation watches :param bool start_scheduler: True to run the scheduler and actually react to any changes delivered by the watcher """ yield self._run_lock.acquire() try: # Start the hook execution scheduler. if start_scheduler and not self.executing: self._scheduler.run() # Create a watcher if we don't have one yet. if self._watcher is None: self._watcher = yield self._unit_relation.watch_related_units( self._scheduler.cb_change_members, self._scheduler.cb_change_settings) # And start the watcher. if start_watches and not self.watching: yield self._watcher.start() finally: self._run_lock.release() self._log.debug( "started relation:%s lifecycle", self._relation_name) @inlineCallbacks def stop(self, stop_watches=True): """Stop executing relation change hooks; maybe stop watching changes. :param bool stop_watches: True to stop watches as well as scheduler (which will prevent changes from being detected and queued, as well as stopping them being executed). """ yield self._run_lock.acquire() try: if stop_watches and self.watching: self._watcher.stop() if self._scheduler.running: self._scheduler.stop() finally: yield self._run_lock.release() self._log.debug("stopped relation:%s lifecycle", self._relation_name) @inlineCallbacks def depart(self): """Inform the charm that the service has departed the relation. 
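Concretely, this runs the <relation name>-relation-broken hook in a
departed relation hook context.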
""" self._log.debug("depart relation lifecycle") unit_id = self._unit_relation.internal_unit_id context = DepartedRelationHookContext( self._client, self._unit_name, unit_id, self._relation_name, self._unit_relation.internal_relation_id) change = RelationChange(self._relation_ident, "departed", "") invoker = self._get_invoker(context, change) hook_name = "%s-relation-broken" % self._relation_name yield self._execute_hook(invoker, hook_name, change) def _get_invoker(self, context, change): socket_path = os.path.join(self._unit_dir, HOOK_SOCKET_FILE) return RelationInvoker( context, change, "constant", socket_path, self._unit_dir, hook_log) def _execute_change_hook(self, context, change): """Invoked by the contained HookScheduler, to execute a hook. We utilize the HookExecutor to execute the hook, if an error occurs, it will be reraised, unless an error handler is specified see ``set_hook_error_handler``. """ if change.change_type == "departed": hook_name = "%s-relation-departed" % self._relation_name elif change.change_type == "joined": hook_name = "%s-relation-joined" % self._relation_name else: hook_name = "%s-relation-changed" % self._relation_name invoker = self._get_invoker(context, change) return self._execute_hook(invoker, hook_name, change) @inlineCallbacks def _execute_hook(self, invoker, hook_name, change): yield invoker.start() hook_path = os.path.join( self._unit_dir, "charm", "hooks", hook_name) yield self._run_lock.acquire() self._log.debug("Executing hook %s", hook_name) try: yield self._executor(invoker, hook_path) except Exception, e: # We can't hold the run lock when we invoke the error # handler, or we get a deadlock if the handler # manipulates the lifecycle. yield self._run_lock.release() self._log.warn("Error in %s hook: %s", hook_name, e) if not self._error_handler: raise self._log.info( "Invoked error handler for %s hook", hook_name) yield self._error_handler(change, e) returnValue(False) else: yield self._run_lock.release() returnValue(True) juju-0.7.orig/juju/unit/tests/0000755000000000000000000000000012135220114014507 5ustar 00000000000000juju-0.7.orig/juju/unit/workflow.py0000644000000000000000000005221312135220114015574 0ustar 00000000000000import csv import os import logging from zookeeper import NoNodeException from twisted.internet.defer import inlineCallbacks, returnValue from txzookeeper.utils import retry_change from juju.errors import CharmError, FileNotFound from juju.lib import serializer from juju.lib.statemachine import ( WorkflowState, Workflow, Transition, TransitionError) UnitWorkflow = Workflow( # Install transitions Transition("install", "Install", None, "installed", error_transition_id="error_install", automatic=True), Transition("error_install", "Install error", None, "install_error"), Transition("retry_install", "Retry install", "install_error", "installed", alias="retry"), Transition("retry_install_hook", "Retry install with hook", "install_error", "installed", alias="retry_hook"), # Start transitions Transition("start", "Start", "installed", "started", error_transition_id="error_start", automatic=True), Transition("error_start", "Start error", "installed", "start_error"), Transition("retry_start", "Retry start", "start_error", "started", alias="retry"), Transition("retry_start_hook", "Retry start with hook", "start_error", "started", alias="retry_hook"), # Stop transitions Transition("stop", "Stop", "started", "stopped", error_transition_id="error_stop"), Transition("error_stop", "Stop error", "started", "stop_error"), 
Transition("retry_stop", "Retry stop", "stop_error", "stopped", alias="retry"), Transition("retry_stop_hook", "Retry stop with hook", "stop_error", "stopped", alias="retry_hook"), # Upgrade transitions Transition( "upgrade_charm", "Upgrade", "started", "started", error_transition_id="upgrade_charm_error"), Transition( "upgrade_charm_error", "Upgrade error", "started", "charm_upgrade_error"), Transition( "retry_upgrade_charm_error", "Upgrade error", "charm_upgrade_error", "charm_upgrade_error"), Transition( "retry_upgrade_charm", "Retry upgrade", "charm_upgrade_error", "started", alias="retry", error_transition_id="retry_upgrade_charm_error"), Transition( "retry_upgrade_charm_hook", "Retry upgrade with hook", "charm_upgrade_error", "started", alias="retry_hook", error_transition_id="retry_upgrade_charm_error"), # Configuration Transitions Transition( "configure", "Configure", "started", "started", error_transition_id="error_configure"), Transition( "error_configure", "On configure error", "started", "configure_error"), Transition( "error_retry_configure", "On retry configure error", "configure_error", "configure_error"), Transition( "retry_configure", "Retry configure", "configure_error", "started", alias="retry", error_transition_id="error_retry_configure"), Transition( "retry_configure_hook", "Retry configure with hooks", "configure_error", "started", alias="retry_hook", error_transition_id="error_retry_configure") ) # Unit relation error states # # There's been some discussion, if we should have per change type # error states here, corresponding to the different changes that the # relation-changed hook is invoked for. The important aspects to # capture are both observability of error type locally and globally # (zk), and per error type and instance recovery of the same. To # provide for this functionality without additional states, the error # information (change type, and error message) are captured in state # variables which are locally and globally observable. Future # extension of the restart transition action, will allow for # customized recovery based on the change type state # variable. Effectively this differs from the unit definition, in that # it collapses three possible error states, into a behavior off # switch. A separate state will be needed to denote departing. # Process recovery using on disk workflow state # # Another interesting issue, process recovery using the on disk state, # is complicated by consistency to the the in memory state, which # won't be directly recoverable anymore without some state specific # semantics to recovering from on disk state, ie a restarted unit # agent, with a relation in an error state would require special # semantics around loading from disk to ensure that the in-memory # process state (watching and scheduling but not executing) matches # the recovery transition actions (which just restart hook execution, # but assume the watch continues).. 
this functionality was added to better # allow for the behavior that, while down due to a hook error, the # relation continues to schedule pending hooks. RelationWorkflow = Workflow( Transition("start", "Start", None, "up", automatic=True), Transition("stop", "Stop", "up", "down"), Transition("restart", "Restart", "down", "up", alias="retry"), Transition("error", "Relation hook error", "up", "error"), Transition("reset", "Recover from hook error", "error", "up"), Transition("depart", "Relation broken", "up", "departed"), Transition("down_depart", "Relation broken", "down", "departed"), Transition("error_depart", "Relation broken", "error", "departed"), ) @inlineCallbacks def is_unit_running(client, unit): """Is the service unit in a running state. Returns a boolean which is true if the unit is running, and the unit workflow state in a two element tuple. """ workflow_state = yield WorkflowStateClient(client, unit).get_state() if not workflow_state: returnValue((False, None)) running = workflow_state == "started" returnValue((running, workflow_state)) @inlineCallbacks def is_relation_running(client, relation): """Is the unit relation in a running state. Returns a boolean which is true if the relation is running, and the unit relation workflow state in a two element tuple. """ workflow_state = yield WorkflowStateClient(client, relation).get_state() if not workflow_state: returnValue((False, None)) running = workflow_state == "up" returnValue((running, workflow_state)) def zk_workflow_identity(domain_state): """Return workflow storage path and key for zookeeper. Returns the path to the zk workflow state node, and this domain object's key into the workflow data. """ from juju.state.service import ServiceUnitState from juju.state.relation import UnitRelationState if isinstance(domain_state, ServiceUnitState): return ( "/units/%s" % domain_state.internal_id, domain_state.unit_name) elif isinstance(domain_state, UnitRelationState): return ( "/units/%s" % domain_state.internal_unit_id, domain_state.internal_relation_id) else: raise ValueError("Unknown domain object %r" % domain_state) def fs_workflow_paths(state_directory, domain_state): """Returns the file paths where state should be stored. Return value is a two element tuple (state_file, history_file). """ from juju.state.service import ServiceUnitState from juju.state.relation import UnitRelationState if isinstance(domain_state, ServiceUnitState): return ( "%s/%s-%s" % ( state_directory, domain_state.unit_name.replace("/", "-"), "state.txt"), "%s/%s-%s" % ( state_directory, domain_state.unit_name.replace("/", "-"), "history.txt")) elif isinstance(domain_state, UnitRelationState): return ( "%s/%s-%s-%s" % ( state_directory, domain_state.internal_unit_id, domain_state.internal_relation_id, "state.txt"), "%s/%s-%s-%s" % ( state_directory, domain_state.internal_unit_id, domain_state.internal_relation_id, "history.txt")) else: raise ValueError("Unknown domain object %r" % domain_state) class ZookeeperWorkflowState(WorkflowState): """Workflow state persisted in zookeeper.
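State lives on the domain object's own zookeeper node, inside a
"workflow_state" map keyed per zk_workflow_identity; the value is
itself a serialized state dictionary. For a unit the node content
looks roughly like this (an illustrative sketch, assuming the
default serializer)::

    workflow_state:
        wordpress/0: '{state: started, state_variables: {}}'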
""" def __init__(self, client, domain_state): self._client = client self._state = domain_state self.zk_state_path, self.zk_state_id = zk_workflow_identity( domain_state) super(ZookeeperWorkflowState, self).__init__() @inlineCallbacks def _store(self, state_dict): """Store the workflow state dictionary in zookeeper.""" state_serialized = serializer.dump(state_dict) def update_state(content, stat): unit_data = serializer.load(content) if not unit_data: unit_data = {} persistent_workflow = unit_data.setdefault("workflow_state", {}) persistent_workflow[self.zk_state_id] = state_serialized return serializer.dump(unit_data) yield retry_change(self._client, self.zk_state_path, update_state) yield super(ZookeeperWorkflowState, self)._store( state_dict) @inlineCallbacks def _load(self): """Load the workflow state dictionary from zookeeper.""" try: data, stat = yield self._client.get(self.zk_state_path) except NoNodeException: returnValue({"state": None}) unit_data = serializer.load(data) data = serializer.load(unit_data.get("workflow_state", {}).get( self.zk_state_id, "")) returnValue(data) class DiskWorkflowState(ZookeeperWorkflowState): """Stores the workflow state and history on disk. Also stores state to zookeeper, but always reads state from disk only. """ def __init__(self, client, domain_state, state_directory): super(DiskWorkflowState, self).__init__( client, domain_state) self.state_file_path, self.state_history_path = fs_workflow_paths( state_directory, domain_state) def _store(self, state_dict): """Persist the workflow state. Stores the state as the sole contents of the state file. For history, append workflow state to history file. Internally the history file is stored a csv, with a new row per entry with CSV escaping. """ state_serialized = serializer.dump(state_dict) # State File with open(self.state_file_path, "w") as handle: handle.write(state_serialized) # History File with open(self.state_history_path, "a") as handle: writer = csv.writer(handle) writer.writerow((state_serialized,)) handle.flush() return super(DiskWorkflowState, self)._store(state_dict) def _load(self): """Load the on-disk workflow state. """ if not os.path.exists(self.state_file_path): return {"state": None} with open(self.state_file_path, "r") as handle: content = handle.read() # TODO load ZK state and overwrite with disk state if different? return serializer.load(content) class WorkflowStateClient(ZookeeperWorkflowState): """A remote accessor to a unit or unit relation workflow state in zookeeper. Meant for out of process usage to examine the client's state. Currently read-only. 
For example, to get the workflow state of a unit:: >> from juju.unit.workflow import WorkflowStateClient >> state = yield WorkflowStateClient(client, unit_state).get_state() >> print state "started" This client can also be used with unit relations:: >> from juju.unit.workflow import WorkflowStateClient >> state = yield WorkflowStateClient(client, unit_relation).get_state() >> print state "up" """ def _store(self, state_dict): raise NotImplementedError("Read only client") class UnitWorkflowState(DiskWorkflowState): _workflow = UnitWorkflow def __init__(self, client, unit, lifecycle, state_directory): super(UnitWorkflowState, self).__init__( client, unit, state_directory) self._lifecycle = lifecycle @inlineCallbacks def _invoke_lifecycle(self, method, *args, **kw): try: result = yield method(*args, **kw) except (FileNotFound, CharmError) as e: raise TransitionError(e) returnValue(result) @inlineCallbacks def _get_preconditions(self): """Given StateMachine state, return expected executor/lifecycle state. :return: (run_executor, run_lifecycle) Once the executor and lifecycle are in the expected state, it should be safe to call StateMachine.synchronize(), and to run other transitions as appropriate. """ mid_upgrade = (False, True) started = (True, True) other = (True, False) state = yield self.get_state() if state == "charm_upgrade_error": returnValue(mid_upgrade) if state == "started": if (yield self.get_inflight()) == "upgrade_charm": # We don't want any risk of queued hooks firing while we're in # a potentially-broken mid-upgrade state. returnValue(mid_upgrade) returnValue(started) returnValue(other) @inlineCallbacks def synchronize(self, executor): """Ensure the workflow's lifecycle is in the correct state, given current zookeeper state. :param executor: the unit agent's shared HookExecutor, which should not run if we come up during an incomplete charm upgrade. In addition, if the lifecycle has never been started before, the necessary state transitions are run.
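In summary, the preconditions established here (see
_get_preconditions) are::

    charm_upgrade_error             -> executor stopped, lifecycle running
    started, upgrade_charm inflight -> executor stopped, lifecycle running
    started                         -> executor running, lifecycle running
    any other state                 -> executor running, lifecycle stopped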
""" self._assert_locked() run_executor, run_lifecycle = yield self._get_preconditions() if run_executor: if not executor.running: executor.start() elif executor.running: yield executor.stop() if run_lifecycle: if not self._lifecycle.running: yield self._lifecycle.start( fire_hooks=False, start_relations=False) elif self._lifecycle.running: yield self._lifecycle.stop(fire_hooks=False) yield super(UnitWorkflowState, self).synchronize() # Install transitions def do_install(self): return self._invoke_lifecycle(self._lifecycle.install) def do_retry_install(self): return self._invoke_lifecycle(self._lifecycle.install, fire_hooks=False) def do_retry_install_hook(self): return self._invoke_lifecycle(self._lifecycle.install) # Start transitions def do_start(self): return self._invoke_lifecycle(self._lifecycle.start) def do_retry_start(self): return self._invoke_lifecycle(self._lifecycle.start, fire_hooks=False) def do_retry_start_hook(self): return self._invoke_lifecycle(self._lifecycle.start) # Stop transitions def do_stop(self): return self._invoke_lifecycle(self._lifecycle.stop) def do_retry_stop(self): return self._invoke_lifecycle(self._lifecycle.stop, fire_hooks=False) def do_retry_stop_hook(self): return self._invoke_lifecycle(self._lifecycle.stop) # Upgrade transititions def do_upgrade_charm(self): return self._invoke_lifecycle(self._lifecycle.upgrade_charm) def do_retry_upgrade_charm(self): return self._invoke_lifecycle(self._lifecycle.upgrade_charm, fire_hooks=False) def do_retry_upgrade_charm_hook(self): return self._invoke_lifecycle(self._lifecycle.upgrade_charm) # Config transitions def do_error_configure(self): return self._invoke_lifecycle(self._lifecycle.stop, fire_hooks=False) def do_configure(self): return self._invoke_lifecycle(self._lifecycle.configure) def do_error_retry_configure(self): return self._invoke_lifecycle(self._lifecycle.stop, fire_hooks=False) @inlineCallbacks def do_retry_configure(self): yield self._invoke_lifecycle(self._lifecycle.start, fire_hooks=False) yield self._invoke_lifecycle(self._lifecycle.configure, fire_hooks=False) @inlineCallbacks def do_retry_configure_hook(self): yield self._invoke_lifecycle(self._lifecycle.start, fire_hooks=False) yield self._invoke_lifecycle(self._lifecycle.configure) class RelationWorkflowState(DiskWorkflowState): _workflow = RelationWorkflow def __init__( self, client, unit_relation, relation_name, lifecycle, state_dir): super(RelationWorkflowState, self).__init__( client, unit_relation, state_dir) self._lifecycle = lifecycle self.relation_name = relation_name # Catch any related-change hook errors self._lifecycle.set_hook_error_handler(self.on_hook_error) self._log = logging.getLogger("unit.relation.workflow") @inlineCallbacks def synchronize(self): """Ensure the workflow's lifecycle is in the correct state, given current zookeeper state. In addition, if the lifecycle has never been started before, the necessary state transitions are run. 
""" self._assert_locked() state = yield self.get_state() if state == "up": watches, scheduler = True, True elif state in (None, "down", "departed"): watches, scheduler = False, False elif state == "error": watches, scheduler = True, False yield self._lifecycle.stop() if watches or scheduler: yield self._lifecycle.start( start_watches=watches, start_scheduler=scheduler) yield super(RelationWorkflowState, self).synchronize() @property def lifecycle(self): return self._lifecycle def get_relation_info(self): """Return relation info for use in persistence.""" rs = {} rs[self._state.internal_relation_id] = dict( relation_name=self.relation_name, relation_scope=self._state.relation_scope) return rs @inlineCallbacks def on_hook_error(self, relation_change, error): """Handle relation-change hook errors. Invoked by the hook scheduler on error. The relation-change hooks are executed out of band, as a result of watch invocations. We have the relation lifecycle accept this method as an error handler, so we can drive workflow changes as a result of hook errors. @param: relation_change: describes the change for which the hook is being invoked. @param: error: The error from hook invocation. """ with (yield self.lock()): yield self.fire_transition("error", change_type=relation_change.change_type, error_message=str(error)) @inlineCallbacks def do_stop(self): """Transition the workflow to the 'down' state. Turns off the unit-relation lifecycle monitoring and hook execution. :param error_info: If called on relation hook error, contains error variables. """ yield self._lifecycle.stop() @inlineCallbacks def do_reset(self): """Transition the workflow to the 'up' state from an error state. Turns on the unit-relation lifecycle monitoring and hook execution. """ yield self._lifecycle.start(start_watches=False) @inlineCallbacks def do_error(self, **error_info): """A relation hook error, stops further execution hooks but continues to watch for changes. """ yield self._lifecycle.stop(stop_watches=False) @inlineCallbacks def do_restart(self): """Transition the workflow to the 'up' state from the down state. Turns on the unit-relation lifecycle monitoring and hook execution. """ yield self._lifecycle.start() @inlineCallbacks def do_start(self): """Transition the workflow to the 'up' state. Turns on the unit-relation lifecycle monitoring and hook execution. """ yield self._lifecycle.start() @inlineCallbacks def do_depart(self): """Transition a relation to the departed state, from any state. We ignore hook errors, as we won't logically process any additional events for the relation once it doesn't exist. However we do note the error in the log. """ # Ensure that no further relation hook executions can occur. 
yield self._lifecycle.stop() # Handle errors ourselves, don't try to transition again self._lifecycle.set_hook_error_handler(None) try: yield self._lifecycle.depart() except Exception, e: self._log.error("Depart hook error, ignoring: %s", str(e)) returnValue({"change_type": "depart", "error_message": str(e)}) do_down_depart = do_depart do_error_depart = do_depart juju-0.7.orig/juju/unit/tests/__init__.py0000644000000000000000000000000012135220114016606 0ustar 00000000000000juju-0.7.orig/juju/unit/tests/test_address.py0000644000000000000000000001422712135220114017553 0ustar 00000000000000import subprocess from twisted.internet.defer import inlineCallbacks, succeed, returnValue from twisted.web import client from juju.errors import JujuError from juju.lib.testing import TestCase from juju.state.tests.common import StateTestBase from juju.unit.address import ( EC2UnitAddress, LocalUnitAddress, OrchestraUnitAddress, DummyUnitAddress, MAASUnitAddress, OpenStackUnitAddress, UnitAddress, get_unit_address) from juju.state.environment import GlobalSettingsStateManager class AddressTest(StateTestBase): @inlineCallbacks def get_address_for(self, provider_type): settings = GlobalSettingsStateManager(self.client) yield settings.set_provider_type(provider_type) address = yield get_unit_address(self.client) returnValue(address) @inlineCallbacks def test_get_ec2_address(self): address = yield self.get_address_for("ec2") self.assertTrue(isinstance(address, EC2UnitAddress)) @inlineCallbacks def test_get_openstack_address(self): address = yield self.get_address_for("openstack") self.assertTrue(isinstance(address, OpenStackUnitAddress)) @inlineCallbacks def test_get_openstack_s3_address(self): address = yield self.get_address_for("openstack_s3") self.assertTrue(isinstance(address, OpenStackUnitAddress)) @inlineCallbacks def test_get_local_address(self): address = yield self.get_address_for("local") self.assertTrue(isinstance(address, LocalUnitAddress)) @inlineCallbacks def test_get_orchestra_address(self): address = yield self.get_address_for("orchestra") self.assertTrue(isinstance(address, OrchestraUnitAddress)) @inlineCallbacks def test_get_dummy_address(self): address = yield self.get_address_for("dummy") self.assertTrue(isinstance(address, DummyUnitAddress)) @inlineCallbacks def test_get_MAAS_address(self): address = yield self.get_address_for("maas") self.assertTrue(isinstance(address, MAASUnitAddress)) def test_get_unknown_address(self): return self.assertFailure(self.get_address_for("foobar"), JujuError) class SubclassAddressTest(TestCase): class TestingAddress(UnitAddress): """An address class that neglects to implement the required methods""" def test_get_public_address(self): err = self.assertRaises(NotImplementedError, self.TestingAddress().get_public_address) self.assertIn("TestingAddress.get_public_address", str(err)) def test_get_private_address(self): err = self.assertRaises(NotImplementedError, self.TestingAddress().get_private_address) self.assertIn("TestingAddress.get_private_address", str(err)) class DummyAddressTest(TestCase): def setUp(self): self.address = DummyUnitAddress() def test_get_address(self): self.assertEqual( (yield self.address.get_public_address()), "localhost") self.assertEqual( (yield self.address.get_private_address()), "localhost") class EC2AddressTest(TestCase): def setUp(self): self.address = EC2UnitAddress() @inlineCallbacks def test_get_address(self): urls = [ "http://169.254.169.254/latest/meta-data/local-hostname", 
"http://169.254.169.254/latest/meta-data/public-hostname"] def verify_args(url): self.assertEqual(urls.pop(0), url) return succeed("foobar\n") self.patch(client, "getPage", verify_args) self.assertEqual( (yield self.address.get_private_address()), "foobar") self.assertEqual( (yield self.address.get_public_address()), "foobar") class OpenStackAddressTest(TestCase): def setUp(self): self.address = OpenStackUnitAddress() self.patch(client, "getPage", self._fetch_metadata) def _fetch_metadata(self, url): head, tail = url.rsplit("/", 1) self.assertEqual("http://169.254.169.254/2009-04-04/meta-data", head) return succeed(self.meta.pop(tail)) @inlineCallbacks def test_get_private_address(self): self.meta = {"local-ipv4": "192.168.0.2"} self.assertEqual("192.168.0.2", (yield self.address.get_private_address())) @inlineCallbacks def test_get_public_address_present(self): self.meta = {"public-ipv4": "8.8.8.8"} self.assertEqual("8.8.8.8", (yield self.address.get_public_address())) @inlineCallbacks def test_get_public_address_missing(self): self.meta = {"public-ipv4": "", "local-ipv4": "192.168.0.2"} self.assertEqual("192.168.0.2", (yield self.address.get_public_address())) class LocalAddressTest(TestCase): def setUp(self): self.address = LocalUnitAddress() @inlineCallbacks def test_get_address(self): self.patch( subprocess, "check_output", lambda args: "192.168.1.122 127.0.0.1\n") self.assertEqual( (yield self.address.get_public_address()), "192.168.1.122") self.assertEqual( (yield self.address.get_private_address()), "192.168.1.122") class OrchestraAddressTest(TestCase): def setUp(self): self.address = OrchestraUnitAddress() @inlineCallbacks def test_get_address(self): self.patch( subprocess, "check_output", lambda args: "slice.foobar.domain.net\n") self.assertEqual( (yield self.address.get_public_address()), "slice.foobar.domain.net") self.assertEqual( (yield self.address.get_private_address()), "slice.foobar.domain.net") class MAASAddressTest(TestCase): def setUp(self): self.address = OrchestraUnitAddress() @inlineCallbacks def test_get_address(self): self.patch( subprocess, "check_output", lambda args: "absent.friends.net\n") self.assertEqual( (yield self.address.get_public_address()), "absent.friends.net") self.assertEqual( (yield self.address.get_private_address()), "absent.friends.net") juju-0.7.orig/juju/unit/tests/test_charm.py0000644000000000000000000001344212135220114017216 0ustar 00000000000000from functools import partial import os import shutil from twisted.internet.defer import inlineCallbacks, returnValue, succeed, fail from twisted.web.error import Error from twisted.web.client import downloadPage from juju.charm import get_charm_from_path from juju.charm.bundle import CharmBundle from juju.charm.publisher import CharmPublisher from juju.charm.tests import local_charm_id from juju.charm.tests.test_directory import sample_directory from juju.errors import FileNotFound from juju.lib import under from juju.state.errors import CharmStateNotFound from juju.state.tests.common import StateTestBase from juju.unit.charm import download_charm from juju.lib.mocker import MATCH class CharmPublisherTestBase(StateTestBase): @inlineCallbacks def setUp(self): yield super(CharmPublisherTestBase, self).setUp() yield self.push_default_config() self.provider = self.config.get_default().get_machine_provider() self.storage = self.provider.get_file_storage() @inlineCallbacks def publish_charm(self, charm_path=sample_directory): charm = get_charm_from_path(charm_path) publisher = CharmPublisher(self.client, 
self.storage) yield publisher.add_charm(local_charm_id(charm), charm) charm_states = yield publisher.publish() returnValue((charm, charm_states[0])) class DownloadTestCase(CharmPublisherTestBase): @inlineCallbacks def test_charm_download_file(self): """Downloading a charm should store the charm locally. """ charm, charm_state = yield self.publish_charm() charm_directory = self.makeDir() # Download the charm yield download_charm( self.client, charm_state.id, charm_directory) # Verify the downloaded copy checksum = charm.get_sha256() charm_id = local_charm_id(charm) charm_key = under.quote("%s:%s" % (charm_id, checksum)) charm_path = os.path.join(charm_directory, charm_key) self.assertTrue(os.path.exists(charm_path)) bundle = CharmBundle(charm_path) self.assertEquals(bundle.get_revision(), charm.get_revision()) self.assertEqual(checksum, bundle.get_sha256()) @inlineCallbacks def test_charm_missing_download_file(self): """Downloading a file that doesn't exist raises FileNotFound. """ charm, charm_state = yield self.publish_charm() charm_directory = self.makeDir() # Delete the file file_path = charm_state.bundle_url[len("file://"):] os.remove(file_path) # Download the charm yield self.assertFailure( download_charm(self.client, charm_state.id, charm_directory), FileNotFound) @inlineCallbacks def test_charm_download_http(self): """Downloading a charm should store the charm locally. """ mock_storage = self.mocker.patch(self.storage) def match_string(expected, value): self.assertTrue(isinstance(value, basestring)) self.assertIn(expected, value) return True mock_storage.get_url(MATCH( partial(match_string, "local_3a_series_2f_dummy-1"))) self.mocker.result("http://example.com/foobar.zip") download_page = self.mocker.replace(downloadPage) download_page( MATCH(partial(match_string, "http://example.com/foobar.zip")), MATCH(partial(match_string, "local_3a_series_2f_dummy-1"))) def bundle_in_place(url, local_path): # must keep ref to charm else temp file goes out of scope. charm = get_charm_from_path(sample_directory) bundle = charm.as_bundle() shutil.copyfile(bundle.path, local_path) self.mocker.call(bundle_in_place) self.mocker.result(succeed(True)) self.mocker.replay() charm, charm_state = yield self.publish_charm() charm_directory = self.makeDir() self.assertEqual( charm_state.bundle_url, "http://example.com/foobar.zip") # Download the charm yield download_charm( self.client, charm_state.id, charm_directory) @inlineCallbacks def test_charm_download_http_error(self): """Errors in downloading a charm are reported as charm not found.
""" def match_string(expected, value): self.assertTrue(isinstance(value, basestring)) self.assertIn(expected, value) return True mock_storage = self.mocker.patch(self.storage) mock_storage.get_url( MATCH(partial(match_string, "local_3a_series_2f_dummy-1"))) remote_url = "http://example.com/foobar.zip" self.mocker.result(remote_url) download_page = self.mocker.replace(downloadPage) download_page( MATCH(partial(match_string, "http://example.com/foobar.zip")), MATCH(partial(match_string, "local_3a_series_2f_dummy-1"))) self.mocker.result(fail(Error("400", "Bad Stuff", ""))) self.mocker.replay() charm, charm_state = yield self.publish_charm() charm_directory = self.makeDir() self.assertEqual(charm_state.bundle_url, remote_url) error = yield self.assertFailure( download_charm(self.client, charm_state.id, charm_directory), FileNotFound) self.assertIn(remote_url, str(error)) @inlineCallbacks def test_charm_download_not_found(self): """An error is raised if trying to download a non existant charm. """ charm_directory = self.makeDir() # Download the charm error = yield self.assertFailure( download_charm( self.client, "local:mickey-21", charm_directory), CharmStateNotFound) self.assertEquals(str(error), "Charm 'local:mickey-21' was not found") juju-0.7.orig/juju/unit/tests/test_deploy.py0000644000000000000000000001361112135220114017416 0ustar 00000000000000import logging import os import os.path from twisted.internet.defer import inlineCallbacks, succeed, Deferred from juju.charm.bundle import CharmBundle from juju.charm.publisher import CharmPublisher from juju.charm.tests import local_charm_id from juju.lib import under from juju.lib.mocker import MATCH from juju.machine.tests.test_constraints import ( dummy_constraints, series_constraints) from juju.machine.unit import UnitMachineDeployment from juju.state.environment import GlobalSettingsStateManager from juju.state.machine import MachineStateManager from juju.state.service import ServiceStateManager from juju.state.tests.common import StateTestBase from juju.unit.deploy import UnitDeployer from juju.tests.common import get_test_zookeeper_address MATCH_BUNDLE = MATCH(lambda x: isinstance(x, CharmBundle)) class UnitDeployerTest(StateTestBase): @inlineCallbacks def setUp(self): yield super(UnitDeployerTest, self).setUp() self.output = self.capture_logging(level=logging.DEBUG) yield self.push_default_config() # Load the environment with the charm state and charm binary environment = self.config.get_default() provider = environment.get_machine_provider() storage = provider.get_file_storage() publisher = CharmPublisher(self.client, storage) yield publisher.add_charm(local_charm_id(self.charm), self.charm) self.charm_state, = yield publisher.publish() # Create a service from the charm, then add a unit and assign # it to a machine. 
self.service_state_manager = ServiceStateManager(self.client) self.machine_state_manager = MachineStateManager(self.client) self.service = yield self.service_state_manager.add_service_state( "myblog", self.charm_state, dummy_constraints) self.unit_state = yield self.service.add_unit_state() self.machine_state = yield self.machine_state_manager.\ add_machine_state(series_constraints) yield self.unit_state.assign_to_machine(self.machine_state) self.env_settings = GlobalSettingsStateManager(self.client) yield self.env_settings.set_environment_id("snowflake") # NOTE machine_id must be a str to use with one of the # deployment classes self.juju_dir = self.makeDir() self.unit_manager = UnitDeployer( self.client, str(self.machine_state.id), self.juju_dir) yield self.unit_manager.start() def test_start_initializes(self): """Verify starting unit manager initializes any necessary resources.""" self.assertTrue(os.path.isdir(self.unit_manager.charms_directory)) self.assertEqual(self.unit_manager.deploy_factory, UnitMachineDeployment) @inlineCallbacks def test_start_with_provider_type(self): # bug test against 1100245 self.unit_manager = UnitDeployer( self.client, str(self.machine_state.id), self.juju_dir) yield self.unit_manager.start("subordinate") self.assertEqual(self.unit_manager.env_id, "snowflake") @inlineCallbacks def test_charm_download(self): """Downloading a charm should store the charm locally.""" yield self.unit_manager.download_charm(self.charm_state) checksum = self.charm.get_sha256() charm_id = local_charm_id(self.charm) charm_key = under.quote("%s:%s" % (charm_id, checksum)) charm_path = os.path.join( self.unit_manager.charms_directory, charm_key) self.assertTrue(os.path.exists(charm_path)) bundle = CharmBundle(charm_path) self.assertEquals( bundle.get_revision(), self.charm.get_revision()) self.assertEquals(bundle.get_sha256(), checksum) self.assertIn( "Downloading charm %s" % charm_id, self.output.getvalue()) @inlineCallbacks def test_start_service_unit(self): """Verify starting a service unit kicks off the desired deployment.""" mock_deployment = self.mocker.patch(self.unit_manager.deploy_factory) mock_deployment.start( "snowflake", "0", get_test_zookeeper_address(), MATCH_BUNDLE) test_deferred = Deferred() def test_complete(env_id, machine_id, servers, bundle): test_deferred.callback(True) self.mocker.call(test_complete) self.mocker.replay() yield self.unit_manager.start_service_unit("myblog/0") yield test_deferred self.assertLogLines( self.output.getvalue(), ["Downloading charm local:series/dummy-1 to %s" % os.path.join(self.juju_dir, "charms"), "Starting service unit myblog/0...", "Started service unit myblog/0"]) @inlineCallbacks def test_kill_service_unit(self): """Verify killing a service unit destroys the deployment.""" mock_deployment = self.mocker.patch(self.unit_manager.deploy_factory) mock_deployment.start( "snowflake", "0", get_test_zookeeper_address(), MATCH_BUNDLE) self.mocker.result(succeed(True)) mock_deployment.destroy() self.mocker.result(succeed(True)) test_deferred = Deferred() def test_complete(): test_deferred.callback(True) self.mocker.call(test_complete) self.mocker.replay() # Start yield self.unit_manager.start_service_unit("myblog/0") # and stop. 
yield self.unit_state.unassign_from_machine() yield self.unit_manager.kill_service_unit("myblog/0") yield test_deferred self.assertLogLines( self.output.getvalue(), ["Downloading charm local:series/dummy-1 to %s" % os.path.join(self.juju_dir, "charms"), "Starting service unit myblog/0...", "Started service unit myblog/0", "Stopping service unit myblog/0...", "Stopped service unit myblog/0"]) juju-0.7.orig/juju/unit/tests/test_lifecycle.py0000644000000000000000000015731312135220114020071 0ustar 00000000000000import StringIO import itertools import logging import os import shutil import stat import sys import zookeeper from twisted.internet.defer import (inlineCallbacks, Deferred, fail, returnValue) from juju.charm.directory import CharmDirectory from juju.charm.tests.test_repository import unbundled_repository from juju.charm.url import CharmURL from juju.control.tests.test_upgrade_charm import CharmUpgradeTestBase from juju.errors import CharmInvocationError, CharmError, CharmUpgradeError from juju.hooks.invoker import Invoker from juju.hooks.executor import HookExecutor from juju.lib import serializer from juju.machine.tests.test_constraints import series_constraints from juju.state.endpoint import RelationEndpoint from juju.state.errors import UnitRelationStateNotFound from juju.state.machine import MachineStateManager from juju.state.relation import ClientServerUnitWatcher from juju.state.service import NO_HOOKS from juju.state.tests.test_relation import RelationTestBase from juju.state.hook import RelationChange from juju.unit.lifecycle import ( UnitLifecycle, UnitRelationLifecycle, RelationInvoker) from juju.unit.tests.test_charm import CharmPublisherTestBase from juju.lib.testing import TestCase from juju.lib.mocker import MATCH, ANY class UnwriteablePath(object): def __init__(self, path): self.path = path def __enter__(self): self.mode = os.stat(self.path).st_mode os.chmod(self.path, 0000) def __exit__(self, *exc_info): os.chmod(self.path, self.mode) class LifecycleTestBase(RelationTestBase): juju_directory = None @inlineCallbacks def setUp(self): yield super(LifecycleTestBase, self).setUp() if self.juju_directory is None: self.juju_directory = self.makeDir() self.hook_log = self.capture_logging("hook.output", level=logging.DEBUG) self.agent_log = self.capture_logging("unit-agent", level=logging.DEBUG) self.patch( HookExecutor, "LOCK_PATH", os.path.join(self.makeDir(), "hook.lock")) self.executor = HookExecutor() self.executor.start() self.change_environment( PATH=os.environ["PATH"], JUJU_ENV_UUID="snowflake", JUJU_UNIT_NAME="service-unit/0") @inlineCallbacks def setup_default_test_relation(self): mysql_ep = RelationEndpoint( "mysql", "client-server", "app", "server") wordpress_ep = RelationEndpoint( "wordpress", "client-server", "db", "client") self.states = yield self.add_relation_service_unit_from_endpoints( mysql_ep, wordpress_ep) self.unit_directory = os.path.join(self.juju_directory, "units", self.states["unit"].unit_name.replace("/", "-")) os.makedirs(os.path.join(self.unit_directory, "charm", "hooks")) self.state_directory = os.path.join(self.juju_directory, "state") os.makedirs(self.state_directory) def frozen_charm(self): return UnwriteablePath(os.path.join(self.unit_directory, "charm")) def write_hook(self, name, text, no_exec=False, hooks_dir=None): if hooks_dir is None: hooks_dir = os.path.join(self.unit_directory, "charm", "hooks") if not os.path.exists(hooks_dir): os.makedirs(hooks_dir) hook_path = os.path.join(hooks_dir, name) hook_file = open(hook_path, "w") 
hook_file.write(text.strip()) hook_file.flush() hook_file.close() if not no_exec: os.chmod(hook_path, stat.S_IRWXU) return hook_path def wait_on_hook(self, name=None, count=None, sequence=(), debug=False, executor=None): """Wait on the given named hook to be executed. @param: name: if specified, only one hook name can be waited on at a given time. @param: count: Multiples of the same name can be captured by specifying the count parameter. @param: sequence: A list of hook names executed in sequence to be waited on. @param: debug: This parameter enables debug stdout logging. @param: executor: A HookExecutor instance to use instead of the default """ d = Deferred() results = [] assert name is not None or sequence, "Hook match must be specified" def observer(hook_path): hook_name = os.path.basename(hook_path) results.append(hook_name) if debug: print "-> exec hook", hook_name if d.called: return if results == sequence: d.callback(True) if hook_name == name and count is None: d.callback(True) if hook_name == name and results.count(hook_name) == count: d.callback(True) executor = executor or self.executor executor.set_observer(observer) return d def wait_on_state(self, workflow, state, debug=False): state_changed = Deferred() def observer(workflow_state, state_variables): if debug: print " workflow state", state, workflow if workflow_state == state: state_changed.callback(True) workflow.set_observer(observer) return state_changed def capture_output(self, stdout=True, channels=()): """Convenience method to capture log output. Useful tool for observing interaction between components. """ if stdout: output = sys.stdout else: output = StringIO.StringIO() channels = zip(channels, itertools.repeat(0)) for log_name, indent in itertools.chain(( ("statemachine", 0), ("hook.executor", 2), ("hook.scheduler", 1), ("unit.deploy", 2), ("unit.lifecycle", 1), ("unit.relation.watch", 1), ("unit.relation.lifecycle", 1)), channels): formatter = logging.Formatter( (" " * indent) + "%(name)s: %(message)s") self.capture_logging( log_name, level=logging.DEBUG, log_file=output, formatter=formatter) print return output class LifecycleResolvedTest(LifecycleTestBase): @inlineCallbacks def setUp(self): yield super(LifecycleResolvedTest, self).setUp() yield self.setup_default_test_relation() self.lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) @inlineCallbacks def tearDown(self): if self.lifecycle.running: yield self.lifecycle.stop(fire_hooks=False) yield super(LifecycleResolvedTest, self).tearDown() @inlineCallbacks def wb_test_start_with_relation_errors(self): """ White box testing to ensure that an error when starting the lifecycle is propagated appropriately, and that we collect all results before returning.
""" mock_service = self.mocker.patch(self.lifecycle._service) mock_service.watch_relation_states(MATCH(lambda x: callable(x))) self.mocker.result(fail(SyntaxError())) mock_unit = self.mocker.patch(self.lifecycle._unit) mock_unit.watch_relation_resolved(MATCH(lambda x: callable(x))) results = [] wait = Deferred() @inlineCallbacks def complete(*args): yield wait results.append(True) returnValue(True) self.mocker.call(complete) self.mocker.replay() # Start the unit, assert a failure, and capture the deferred wait_failure = self.assertFailure(self.lifecycle.start(), SyntaxError) # Verify we have no results for the second callback or the start call self.assertFalse(results) self.assertFalse(wait_failure.called) # Let the second callback complete wait.callback(True) # Wait for the start error to bubble up. yield wait_failure # Verify the second deferred was waited on. self.assertTrue(results) @inlineCallbacks def test_resolved_relation_watch_unit_lifecycle_not_running(self): """If the unit is not running then no relation resolving is performed. However the resolution value remains the same. """ # Start the unit. yield self.lifecycle.start() # Simulate relation down on an individual unit relation workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) with (yield workflow.lock()): yield workflow.transition_state("down") resolved = self.wait_on_state(workflow, "up") # Stop the unit lifecycle yield self.lifecycle.stop() # Set the relation to resolved yield self.states["unit"].set_relation_resolved( {self.states["unit_relation"].internal_relation_id: NO_HOOKS}) # Ensure we didn't attempt a transition. yield self.sleep(0.1) self.assertFalse(resolved.called) self.assertEqual( {self.states["unit_relation"].internal_relation_id: NO_HOOKS}, (yield self.states["unit"].get_relation_resolved())) @inlineCallbacks def test_resolved_relation_watch_relation_up(self): """If a relation marked as to be resolved is already running, then no work is performed. """ # Start the unit. yield self.lifecycle.start() # get a hold of the unit relation and verify state workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) # Set the relation to resolved yield self.states["unit"].set_relation_resolved( {self.states["unit_relation"].internal_relation_id: NO_HOOKS}) # Give a moment for the watch to fire, invoke callback, and reset. yield self.sleep(0.1) # Ensure we're still up and the relation resolved setting has been # cleared. self.assertEqual( None, (yield self.states["unit"].get_relation_resolved())) self.assertEqual("up", (yield workflow.get_state())) @inlineCallbacks def test_resolved_relation_watch_from_error(self): """Unit lifecycle's will process a unit relation resolved setting, and transition a down relation back to a running state. """ log_output = self.capture_logging( "unit.lifecycle", level=logging.DEBUG) # Start the unit. 
yield self.lifecycle.start() # Simulate an error condition workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) with (yield workflow.lock()): yield workflow.fire_transition("error") resolved = self.wait_on_state(workflow, "up") # Set the relation to resolved yield self.states["unit"].set_relation_resolved( {self.states["unit_relation"].internal_relation_id: NO_HOOKS}) # Wait for the relation to come back up value = yield self.states["unit"].get_relation_resolved() yield resolved # Verify state value = yield workflow.get_state() self.assertEqual(value, "up") self.assertIn( "processing relation resolved changed", log_output.getvalue()) @inlineCallbacks def test_resolved_relation_watch(self): """Unit lifecycles will process a unit relation resolved setting, and transition a down relation back to a running state. """ log_output = self.capture_logging( "unit.lifecycle", level=logging.DEBUG) # Start the unit. yield self.lifecycle.start() # Simulate relation down on an individual unit relation workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) with (yield workflow.lock()): yield workflow.transition_state("down") resolved = self.wait_on_state(workflow, "up") # Set the relation to resolved yield self.states["unit"].set_relation_resolved( {self.states["unit_relation"].internal_relation_id: NO_HOOKS}) # Wait for the relation to come back up value = yield self.states["unit"].get_relation_resolved() yield resolved # Verify state value = yield workflow.get_state() self.assertEqual(value, "up") self.assertIn( "processing relation resolved changed", log_output.getvalue()) class UnitLifecycleTest(LifecycleTestBase): @inlineCallbacks def setUp(self): yield super(UnitLifecycleTest, self).setUp() yield self.setup_default_test_relation() self.lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) @inlineCallbacks def tearDown(self): if self.lifecycle.running: yield self.lifecycle.stop(fire_hooks=False) yield super(UnitLifecycleTest, self).tearDown() @inlineCallbacks def test_hook_invocation(self): """Verify lifecycle methods invoke corresponding charm hooks. """ # install hook file_path = self.makeFile() self.write_hook( "install", '#!/bin/sh\n echo "hello world" > %s' % file_path) yield self.lifecycle.install() self.assertEqual(open(file_path).read().strip(), "hello world") # Start hook file_path = self.makeFile() self.write_hook( "start", '#!/bin/sh\n echo "sugarcane" > %s' % file_path) yield self.lifecycle.start() self.assertEqual(open(file_path).read().strip(), "sugarcane") # Stop hook file_path = self.makeFile() self.write_hook( "stop", '#!/bin/sh\n echo "siesta" > %s' % file_path) yield self.lifecycle.stop() self.assertEqual(open(file_path).read().strip(), "siesta") # verify the sockets are cleaned up.
self.assertEqual(os.listdir(self.unit_directory), ["charm"]) @inlineCallbacks def test_start_sans_hook(self): """The lifecycle start can be invoked without firing hooks.""" self.write_hook("start", "#!/bin/sh\n exit 1") start_executed = self.wait_on_hook("start") yield self.lifecycle.start(fire_hooks=False) self.assertFalse(start_executed.called) @inlineCallbacks def test_stop_sans_hook(self): """The lifecycle stop can be invoked without firing hooks.""" self.write_hook("stop", "#!/bin/sh\n exit 1") stop_executed = self.wait_on_hook("stop") yield self.lifecycle.start() yield self.lifecycle.stop(fire_hooks=False) self.assertFalse(stop_executed.called) @inlineCallbacks def test_install_sans_hook(self): """The lifecycle install can be invoked without firing hooks.""" self.write_hook("install", "#!/bin/sh\n exit 1") install_executed = self.wait_on_hook("install") yield self.lifecycle.install(fire_hooks=False) self.assertFalse(install_executed.called) @inlineCallbacks def test_running(self): self.assertFalse(self.lifecycle.running) yield self.lifecycle.install() self.assertFalse(self.lifecycle.running) yield self.lifecycle.start() self.assertTrue(self.lifecycle.running) yield self.lifecycle.stop() self.assertFalse(self.lifecycle.running) def test_hook_error(self): """Verify that a hook execution error raises an exception.""" self.write_hook("install", '#!/bin/sh\n exit 1') d = self.lifecycle.install() return self.failUnlessFailure(d, CharmInvocationError) def test_hook_not_executable(self): """A hook that is not executable raises an exception.""" self.write_hook("install", '#!/bin/sh\n exit 0', no_exec=True) return self.failUnlessFailure( self.lifecycle.install(), CharmError) def test_hook_not_formatted_correctly(self): """A hook that is not formatted correctly raises an exception.""" self.write_hook("install", '!/bin/sh\n exit 0') return self.failUnlessFailure( self.lifecycle.install(), CharmInvocationError) def write_start_and_relation_hooks(self, relation_name=None): """Write some minimal start, config-changed, stop, and relation hooks. Returns the output file of the relation hooks. """ file_path = self.makeFile() if relation_name is None: relation_name = self.states["service_relation"].relation_name self.write_hook("start", ("#!/bin/bash\n" "echo hello")) self.write_hook("config-changed", ("#!/bin/bash\n" "echo configure")) self.write_hook("stop", ("#!/bin/bash\n" "echo goodbye")) self.write_hook( "%s-relation-joined" % relation_name, ("#!/bin/bash\n" "echo joined >> %s\n" % file_path)) self.write_hook( "%s-relation-changed" % relation_name, ("#!/bin/bash\n" "echo changed >> %s\n" % file_path)) self.write_hook( "%s-relation-departed" % relation_name, ("#!/bin/bash\n" "echo departed >> %s\n" % file_path)) self.assertFalse(os.path.exists(file_path)) return file_path @inlineCallbacks def test_config_hook_invoked_on_configure(self): """Invoking the configure lifecycle method will execute the config-changed hook. """ output = self.capture_logging("unit.lifecycle", level=logging.DEBUG) # The configure hook requires a running unit lifecycle.
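# (Presumably enforced via an internal running guard; a hypothetical
# sketch of what configure() is expected to check, not the actual
# source:
#
#   def configure(self, fire_hooks=True):
#       assert self.running, "Lifecycle is not running"
#       ...
#
# which is why the call below fails with AssertionError before
# start() has been invoked.)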
yield self.assertFailure(self.lifecycle.configure(), AssertionError) # Config hook file_path = self.makeFile() self.write_hook( "config-changed", '#!/bin/sh\n echo "palladium" > %s' % file_path) yield self.lifecycle.start() yield self.lifecycle.configure() self.assertEqual(open(file_path).read().strip(), "palladium") self.assertIn("configured unit", output.getvalue()) @inlineCallbacks def test_service_relation_watching(self): """When the unit lifecycle is started, the assigned relations of the service are watched, with unit relation lifecycles created for each. Relation hook invocations do not maintain global order or determinism across relations. They only maintain ordering and determinism within a relation. A shared scheduler across relations would be needed to maintain such behavior. """ file_path = self.write_start_and_relation_hooks() wordpress1_states = yield self.add_opposite_service_unit(self.states) yield self.lifecycle.start() yield self.wait_on_hook("app-relation-changed") self.assertTrue(os.path.exists(file_path)) self.assertEqual([x.strip() for x in open(file_path).readlines()], ["joined", "changed"]) # Queue up our wait condition, a sequence of 3 hooks firing hooks_complete = self.wait_on_hook( sequence=[ "app-relation-joined", # joined event fires join hook, "app-relation-changed", # followed by changed hook "app-relation-departed"]) # add another. wordpress2_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-2", "client-server", "db", "client")))) # modify one. wordpress1_states["unit_relation"].set_data( {"hello": "world"}) # delete one. self.client.delete( "/relations/%s/client/%s" % ( wordpress2_states["relation"].internal_id, wordpress2_states["unit"].internal_id)) # verify results, waiting for hooks to complete yield hooks_complete self.assertEqual( set([x.strip() for x in open(file_path).readlines()]), set(["joined", "changed", "joined", "changed", "departed"])) @inlineCallbacks def test_removed_relation_depart(self): """ If a removed relation is detected, the unit relation lifecycle is stopped. """ file_path = self.write_start_and_relation_hooks() self.write_hook("app-relation-broken", "#!/bin/bash\n echo broken") yield self.lifecycle.start() wordpress_states = yield self.add_opposite_service_unit(self.states) # Wait for the watch and hook to fire. yield self.wait_on_hook("app-relation-changed") self.assertTrue(os.path.exists(file_path)) self.assertEqual([x.strip() for x in open(file_path).readlines()], ["joined", "changed"]) self.assertTrue(self.lifecycle.get_relation_workflow( self.states["relation"].internal_id)) # Remove the relation between mysql and wordpress yield self.relation_manager.remove_relation_state( self.states["relation"]) # Wait until the unit relation workflow has processed the event. yield self.wait_on_state( self.lifecycle.get_relation_workflow( self.states["relation"].internal_id), "departed") # Modify the unit relation settings, to generate a spurious event. yield wordpress_states["unit_relation"].set_data( {"hello": "world"}) # Verify no notice was received for the modification before we were stopped. self.assertEqual([x.strip() for x in open(file_path).readlines()], ["joined", "changed"]) # Verify the unit relation lifecycle has been disposed of.
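# (Disposal here presumably means the workflow was dropped from the
# lifecycle's internal mapping of relation id -> workflow; a
# hypothetical sketch, not the actual source:
#
#   workflow = self._relation_workflows.pop(relation_id)
#
# so a later lookup raises KeyError, as asserted below.)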
self.assertRaises(KeyError, self.lifecycle.get_relation_workflow, self.states["relation"].internal_id) @inlineCallbacks def test_lifecycle_start_stop_starts_relations(self): """Starting a stopped lifecycle will restart relation events. """ wordpress1_states = yield self.add_opposite_service_unit(self.states) wordpress2_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-2", "client-server", "db", "client")))) yield wordpress1_states['service_relations'][-1].add_unit_state( self.states['unit']) yield wordpress2_states['service_relations'][-1].add_unit_state( self.states['unit']) # Start and stop lifecycle file_path = self.write_start_and_relation_hooks() yield self.lifecycle.start() yield self.wait_on_hook("app-relation-changed", count=2) self.assertTrue(os.path.exists(file_path)) yield self.lifecycle.stop() ######################################################## # Add, remove relations, and modify related unit settings. # The following isn't enough to trigger a hook notification. # yield wordpress1_states["relation"].unassign_service( # wordpress1_states["service"]) # # The removal of the external relation means we stop getting notified # of it, but the underlying unit agents of the service are responsible # for removing their presence nodes within the relationship, which # triggers a hook invocation. yield self.client.delete("/relations/%s/client/%s" % ( wordpress1_states["relation"].internal_id, wordpress1_states["unit"].internal_id)) yield wordpress2_states["unit_relation"].set_data( {"hello": "world"}) yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-3", "client-server", "db", "client")))) # Verify no hooks are executed. yield self.sleep(0.1) res = [x.strip() for x in open(file_path)] if ((res != ["joined", "changed", "joined", "changed"]) and (res != ["joined", "joined", "changed", "changed"])): self.fail("Invalid join sequence %s" % res) # XXX - With scheduler local state recovery, we should get the modify. # Start and verify events. hooks_executed = self.wait_on_hook( sequence=[ "config-changed", "start", "app-relation-departed", "app-relation-joined", # joined event fires joined hook, "app-relation-changed" # followed by changed hook ]) yield self.lifecycle.start() yield hooks_executed res.extend(["departed", "joined", "changed"]) self.assertEqual([x.strip() for x in open(file_path)], res) @inlineCallbacks def test_lock_start_stop_watch(self): """The lifecycle internally employs a lock to prevent simultaneous execution of methods that modify internal state. This allows a long-running hook to be called safely; subsequent invocations of lifecycle methods block until they can acquire the lock. """ self.write_hook("start", "#!/bin/bash\necho start\n") self.write_hook("stop", "#!/bin/bash\necho stop\n") results = [] finish_callback = [Deferred() for i in range(4)] # Control the speed of hook execution original_invoker = Invoker.__call__ invoker = self.mocker.patch(Invoker) @inlineCallbacks def long_hook(ctx, hook_path): results.append(os.path.basename(hook_path)) yield finish_callback[len(results) - 1] yield original_invoker(ctx, hook_path) for i in range(4): invoker( MATCH(lambda x: x.endswith("start") or x.endswith("stop"))) self.mocker.call(long_hook, with_object=True) self.mocker.replay() # Hook execution sequence to match on.
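# (wait_on_hook(sequence=...), defined in LifecycleTestBase above,
# registers an executor observer that appends each executed hook's
# basename to a results list and fires its deferred once the list
# equals the requested sequence, roughly:
#
#   def observer(hook_path):
#       results.append(os.path.basename(hook_path))
#       if results == sequence:
#           d.callback(True)
# )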
test_complete = self.wait_on_hook(sequence=["config-changed", "start", "stop", "config-changed", "start"]) # Fire off the lifecycle methods execution_callbacks = [self.lifecycle.start(), self.lifecycle.stop(), self.lifecycle.start(), self.lifecycle.stop()] self.assertEqual([0, 0, 0, 0], [x.called for x in execution_callbacks]) # kill the delay on the second and third finish_callback[1].callback(True) finish_callback[2].callback(True) self.assertEqual([0, 0, 0, 0], [x.called for x in execution_callbacks]) # let them pass, kill the delay on the first finish_callback[0].callback(True) yield test_complete self.assertEqual([False, True, True, False], [x.called for x in execution_callbacks]) # Finish the last hook finish_callback[3].callback(True) yield self.wait_on_hook("stop") self.assertEqual([True, True, True, True], [x.called for x in execution_callbacks]) @inlineCallbacks def test_start_stop_relations(self): yield self.lifecycle.start() # Simulate relation down on an individual unit relation workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) # Stop the unit lifecycle yield self.lifecycle.stop() self.assertEqual("down", (yield workflow.get_state())) # Start again yield self.lifecycle.start() self.assertEqual("up", (yield workflow.get_state())) @inlineCallbacks def test_start_without_relations(self): yield self.lifecycle.start() # Simulate relation down on an individual unit relation workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) with (yield workflow.lock()): yield workflow.transition_state("down") resolved = self.wait_on_state(workflow, "up") # Stop the unit lifecycle yield self.lifecycle.stop() self.assertEqual("down", (yield workflow.get_state())) # Start again without start_relations yield self.lifecycle.start(start_relations=False) self.assertEqual("down", (yield workflow.get_state())) # Give a moment for the watch to fire erroneously yield self.sleep(0.1) # Ensure we didn't attempt a transition. self.assertFalse(resolved.called) @inlineCallbacks def test_stop_without_relations(self): yield self.lifecycle.start() # Simulate relation down on an individual unit relation workflow = self.lifecycle.get_relation_workflow( self.states["unit_relation"].internal_relation_id) self.assertEqual("up", (yield workflow.get_state())) # Stop the unit lifecycle yield self.lifecycle.stop(stop_relations=False) self.assertEqual("up", (yield workflow.get_state())) # Start again without start_relations yield self.lifecycle.start(start_relations=False) self.assertEqual("up", (yield workflow.get_state())) @inlineCallbacks def test_remembers_relation_removes(self): # Add another relation that *won't* be trashed; there's no way to tell # the difference between a running relation that's loaded from disk (as # this one should be) and one that's just picked up from the call to # watch_relation_states, but this should ensure the tests at least hit # the correct path.
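# (The recovery exercised below relies on UnitLifecycle persisting its
# known relations under state_directory; the on-disk format is
# internal to UnitLifecycle, but conceptually a fresh lifecycle can
# diff stored relations against the current watch results, e.g.
#
#   departed = set(stored_relation_ids) - set(current_relation_ids)
#
# and still fire depart/broken hooks for the missing ones. A sketch
# only, under those assumptions.)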
other_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-2", "client-server", "db", "client")))) yield self.lifecycle.start() going_workflow = self.lifecycle.get_relation_workflow( self.states["service_relations"][0].internal_relation_id) staying_workflow = self.lifecycle.get_relation_workflow( other_states["service_relations"][0].internal_relation_id) # Stop the lifecycle as though the process were being shut down yield self.lifecycle.stop(fire_hooks=False, stop_relations=False) self.assertEqual("up", (yield going_workflow.get_state())) self.assertEqual("up", (yield staying_workflow.get_state())) # This lifecycle is not responding to events; while it's not looking, # trash one of the relations. broken_complete = self.wait_on_hook("app-relation-broken") yield self.relation_manager.remove_relation_state( self.states["relation"]) # Check it's really not responding to events. yield self.sleep(0.1) self.assertFalse(broken_complete.called) # Trash the unit relation presence nodes yield self.client.close() yield self.client.connect() # Verify that the unit relation presence nodes have been trashed staying_service_relation = other_states["service_relations"][1] going_service_relation = self.states["service_relations"][1] yield self.assertFailure( staying_service_relation.get_unit_state(self.states["unit"]), UnitRelationStateNotFound) yield self.assertFailure( going_service_relation.get_unit_state(self.states["unit"]), UnitRelationStateNotFound) # Create a new lifecycle with the same params; i.e. one that doesn't # share memory state. new_lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) # Demonstrate that state was stored by the first one and picked up # by the second one, by showing that "missing" relations still have # depart hooks fired despite the second one having no direct knowledge # of the departed relation's existence. yield new_lifecycle.start(fire_hooks=False, start_relations=False) yield broken_complete # Verify that the unit relation presence node has been restored... yield staying_service_relation.get_unit_state(self.states["unit"]) # ...but that we didn't create one for the departed relation. yield self.assertFailure( going_service_relation.get_unit_state(self.states["unit"]), UnitRelationStateNotFound) # The workflow, which we grabbed earlier from the original lifecycle, # should now be "departed" (rather than "up", which it was when we # stopped the original lifecycle). self.assertEqual("departed", (yield going_workflow.get_state())) self.assertEqual("up", (yield staying_workflow.get_state())) yield new_lifecycle.stop() def test_multiple_rel_order(self): """Given multiple new relations, verify peers are processed first.""" class rel(object): def __init__(self, r): self.relation_role = r rels = {'5': rel('peer'), '4': rel('client'), '2': rel('server'), '1': rel('peer')} self.assertEqual( self.lifecycle._sort_relations(rels.keys(), rels), ["1", "5", "2", "4"]) @inlineCallbacks def test_start_invoker(self): """Verify the invoker is started by the unit relation lifecycle""" # Setup three different wordpress services, wordpress, # wordpress-1, wordpress-2 with one service unit each.
Each of # these will be on the relation with ident app:0, app:1, app:2 self.write_hook( "start", '#!/bin/sh\n echo hello\n') blog_states = yield self.add_opposite_service_unit(self.states) yield blog_states['service_relations'][-1].add_unit_state( self.states['unit']) for i in range(1, 3): blog_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-%d" % i, "client-server", "db", "client")))) yield blog_states['service_relations'][-1].add_unit_state( self.states['unit']) finished = self.wait_on_hook("app-relation-changed", 3) yield self.lifecycle.start() yield finished # This will verify that config-changed and start both are # cached properly. The remaining hook caching will be for the # relation hook contexts, along with some permutation based on how # the hooks are actually scheduled. However, this is tested in # UnitRelationLifecycleTest.test_start_invoker so need not be # covered here. self.assertLogLines( self.hook_log.getvalue(), ["Cached relation hook contexts: ['app:0', 'app:1', 'app:2']"] * 2) class UnitLifecycleUpgradeTest( LifecycleTestBase, CharmPublisherTestBase, CharmUpgradeTestBase): @inlineCallbacks def setUp(self): yield super(UnitLifecycleUpgradeTest, self).setUp() yield self.setup_default_test_relation() self.lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) @inlineCallbacks def test_no_actual_upgrade_bad_hook(self): self.write_hook("upgrade-charm", "#!/bin/bash\nexit 1\n") done = self.wait_on_hook("upgrade-charm") yield self.assertFailure( self.lifecycle.upgrade_charm(), CharmInvocationError) yield done self.assertFalse(self.executor.running) @inlineCallbacks def test_no_actual_upgrade_good_hook(self): self.write_hook("upgrade-charm", "#!/bin/bash\nexit 0\n") # Ensure we don't actually upgrade with self.frozen_charm(): done = self.wait_on_hook("upgrade-charm") yield self.lifecycle.upgrade_charm() yield done self.assertTrue(self.executor.running) @inlineCallbacks def test_no_actual_upgrade_dont_fire_hooks(self): self.write_hook("upgrade-charm", "#!/bin/bash\nexit 1\n") with self.frozen_charm(): done = self.wait_on_hook("upgrade-charm") yield self.lifecycle.upgrade_charm(fire_hooks=False) yield self.sleep(0.1) self.assertFalse(done.called) @inlineCallbacks def prepare_real_upgrade(self, hook_exit): repo = self.increment_charm(self.charm) hooks_dir = os.path.join(repo.path, "series", "mysql", "hooks") self.write_hook( "upgrade-charm", "#!/bin/bash\nexit %s\n" % hook_exit, hooks_dir=hooks_dir) charm = yield repo.find(CharmURL.parse("local:series/mysql")) charm, charm_state = yield self.publish_charm(charm.path) yield self.states["service"].set_charm_id(charm_state.id) @inlineCallbacks def test_full_run_bad_write(self): self.capture_logging("charm.upgrade") yield self.prepare_real_upgrade(0) with self.frozen_charm(): yield self.assertFailure( self.lifecycle.upgrade_charm(), CharmUpgradeError) self.assertFalse(self.executor.running) @inlineCallbacks def test_full_run_bad_hook(self): yield self.prepare_real_upgrade(1) done = self.wait_on_hook("upgrade-charm") yield self.assertFailure( self.lifecycle.upgrade_charm(), CharmInvocationError) yield done self.assertFalse(self.executor.running) @inlineCallbacks def test_full_run_good_hook(self): yield self.prepare_real_upgrade(0) done = self.wait_on_hook("upgrade-charm") yield self.lifecycle.upgrade_charm() yield done
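# A successful upgrade should leave the hook executor running again
# (it is paused for the duration of the upgrade; compare the bad-hook
# and bad-write cases above, which assert it stays stopped).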
self.assertTrue(self.executor.running) class RelationInvokerTest(TestCase): def test_relation_invoker_environment(self): """Verify the relation invoker has populated the environment as per the charm specification for hook invocation.""" self.change_environment( PATH=os.environ["PATH"], JUJU_ENV_UUID="snowflake", JUJU_UNIT_NAME="service-unit/0") change = RelationChange("clients:42", "joined", "s/2") unit_hook_path = self.makeDir() invoker = RelationInvoker(None, change, "", "", unit_hook_path, None) environ = invoker.get_environment() self.assertEqual(environ["JUJU_RELATION"], "clients") self.assertEqual(environ["JUJU_RELATION_ID"], "clients:42") self.assertEqual(environ["JUJU_REMOTE_UNIT"], "s/2") self.assertEqual(environ["JUJU_ENV_UUID"], "snowflake") self.assertEqual(environ["CHARM_DIR"], os.path.join(unit_hook_path, "charm")) class UnitRelationLifecycleTest(LifecycleTestBase): hook_template = ( "#!/bin/bash\n" "echo %(change_type)s >> %(file_path)s\n" "echo JUJU_RELATION=$JUJU_RELATION >> %(file_path)s\n" "echo JUJU_RELATION_ID=$JUJU_RELATION_ID >> %(file_path)s\n" "echo JUJU_REMOTE_UNIT=$JUJU_REMOTE_UNIT >> %(file_path)s") @inlineCallbacks def setUp(self): yield super(UnitRelationLifecycleTest, self).setUp() yield self.setup_default_test_relation() self.relation_name = self.states["service_relation"].relation_name self.lifecycle = UnitRelationLifecycle( self.client, self.states["unit"].unit_name, self.states["unit_relation"], self.states["service_relation"].relation_ident, self.unit_directory, self.state_directory, self.executor) self.log_stream = self.capture_logging("unit.relation.lifecycle", logging.DEBUG) @inlineCallbacks def tearDown(self): yield self.lifecycle.stop() yield super(UnitRelationLifecycleTest, self).tearDown() @inlineCallbacks def test_initial_start_lifecycle_no_related_no_exec(self): """ If there are no related units on startup, the relation joined hook is not invoked. """ file_path = self.makeFile() self.write_hook( "%s-relation-changed" % self.relation_name, ("/bin/bash\n" "echo executed >> %s\n" % file_path)) yield self.lifecycle.start() self.assertFalse(os.path.exists(file_path)) self.assertTrue(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) @inlineCallbacks def test_stop_can_continue_watching(self): """Stopping the lifecycle with stop_watches=False leaves the watches running; changes observed while execution is stopped are queued, and their hooks fire once execution is resumed. """ file_path = self.makeFile() self.write_hook( "%s-relation-changed" % self.relation_name, ("#!/bin/bash\n" "echo executed >> %s\n" % file_path)) rel_states = yield self.add_opposite_service_unit(self.states) yield self.lifecycle.start() self.assertTrue(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) yield self.wait_on_hook( sequence=["app-relation-joined", "app-relation-changed"]) changed_executed = self.wait_on_hook("app-relation-changed") yield self.lifecycle.stop(stop_watches=False) self.assertTrue(self.lifecycle.watching) self.assertFalse(self.lifecycle.executing) rel_states["unit_relation"].set_data( serializer.dump(dict(hello="world"))) # Sleep to give an error a chance. yield self.sleep(0.1) self.assertFalse(changed_executed.called) yield self.lifecycle.start(start_watches=False) self.assertTrue(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) yield changed_executed @inlineCallbacks def test_join_hook_error(self): """Join hook errors don't execute synthetic change hook. A related unit membership event is aliased to execute both a join hook and a change hook; an error executing the join hook stops the execution of the change hook.
""" yield self.add_opposite_service_unit(self.states) errors = [] def on_error(change, error): assert not errors, "Changed hook fired after join error" errors.append((change, error)) self.lifecycle.set_hook_error_handler(on_error) self.write_hook( "app-relation-joined", "!/bin/bash\nexit 1") self.write_hook( "app-relation-changed", "!/bin/bash\nexit 1") yield self.lifecycle.start() yield self.wait_on_hook("app-relation-joined") # Give a chance for the relation-changed hook to erroneously fire. yield self.sleep(0.3) self.assertEqual(len(errors), 1) @inlineCallbacks def test_initial_start_lifecycle_with_related(self): """ If there are related units on startup, the relation changed hook is invoked. """ yield self.add_opposite_service_unit(self.states) file_path = self.makeFile() self.write_hook("%s-relation-joined" % self.relation_name, self.hook_template % dict(change_type="joined", file_path=file_path)) self.write_hook("%s-relation-changed" % self.relation_name, self.hook_template % dict(change_type="changed", file_path=file_path)) yield self.lifecycle.start() yield self.wait_on_hook( sequence=["app-relation-joined", "app-relation-changed"]) self.assertTrue(os.path.exists(file_path)) with open(file_path)as f: contents = f.read() self.assertEqual(contents, ("joined\n" "JUJU_RELATION=app\n" "JUJU_RELATION_ID=app:0\n" "JUJU_REMOTE_UNIT=wordpress/0\n" "changed\n" "JUJU_RELATION=app\n" "JUJU_RELATION_ID=app:0\n" "JUJU_REMOTE_UNIT=wordpress/0\n")) @inlineCallbacks def test_hooks_executed_during_lifecycle_start_stop_start(self): """If the unit relation lifecycle is stopped, hooks will no longer be executed.""" file_path = self.makeFile() self.write_hook("%s-relation-joined" % self.relation_name, self.hook_template % dict(change_type="joined", file_path=file_path)) self.write_hook("%s-relation-changed" % self.relation_name, self.hook_template % dict(change_type="changed", file_path=file_path)) # starting is async yield self.lifecycle.start() self.assertTrue(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) # stopping is sync. self.lifecycle.stop() self.assertFalse(self.lifecycle.watching) self.assertFalse(self.lifecycle.executing) # Add a related unit. yield self.add_opposite_service_unit(self.states) # Give a chance for things to go bad yield self.sleep(0.1) self.assertFalse(os.path.exists(file_path)) # Now start again yield self.lifecycle.start() self.assertTrue(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) # Verify we get our join event. yield self.wait_on_hook("app-relation-changed") self.assertTrue(os.path.exists(file_path)) @inlineCallbacks def test_hook_error_handler(self): # use an error handler that completes async. self.write_hook("app-relation-joined", "#!/bin/bash\nexit 0\n") self.write_hook("app-relation-changed", "#!/bin/bash\nexit 1\n") results = [] finish_callback = Deferred() @inlineCallbacks def error_handler(change, e): yield self.client.create( "/errors", str(e), flags=zookeeper.EPHEMERAL | zookeeper.SEQUENCE) results.append((change.change_type, e)) yield self.lifecycle.stop() finish_callback.callback(True) self.lifecycle.set_hook_error_handler(error_handler) # Add a related unit. 
yield self.add_opposite_service_unit(self.states) yield self.lifecycle.start() yield finish_callback self.assertEqual(len(results), 1) self.assertEqual(results[0][0], "joined") self.assertTrue(isinstance(results[0][1], CharmInvocationError)) hook_relative_path = "charm/hooks/app-relation-changed" output = ( "started relation:app lifecycle", "Executing hook app-relation-joined", "Executing hook app-relation-changed", "Error in app-relation-changed hook: %s '%s/%s': exit code 1." % ( "Error processing", self.unit_directory, hook_relative_path), "Invoked error handler for app-relation-changed hook", "stopped relation:app lifecycle\n") self.assertEqual(self.log_stream.getvalue(), "\n".join(output)) @inlineCallbacks def test_depart(self): """If a relation is departed, the relation-broken hook is executed. """ file_path = self.makeFile() self.write_hook("%s-relation-broken" % self.relation_name, self.hook_template % dict(change_type="broken", file_path=file_path)) yield self.lifecycle.start() wordpress_states = yield self.add_opposite_service_unit(self.states) yield self.wait_on_hook( sequence=["app-relation-joined", "app-relation-changed"]) yield self.lifecycle.stop() yield self.relation_manager.remove_relation_state( wordpress_states["relation"]) hook_complete = self.wait_on_hook("app-relation-broken") yield self.lifecycle.depart() yield hook_complete contents = open(file_path).read() self.assertEqual( contents, ("broken\n" "JUJU_RELATION=app\n" "JUJU_RELATION_ID=app:0\n" "JUJU_REMOTE_UNIT=\n")) @inlineCallbacks def test_lock_start_stop(self): """ The relation lifecycle internally uses a lock when it is interacting with zk, and acquires the lock to protect its internal data structures. """ original_method = ClientServerUnitWatcher.start watcher = self.mocker.patch(ClientServerUnitWatcher) finish_callback = Deferred() @inlineCallbacks def long_op(*args): yield finish_callback yield original_method(*args) watcher.start() self.mocker.call(long_op, with_object=True) self.mocker.replay() start_complete = self.lifecycle.start() stop_complete = self.lifecycle.stop() yield self.sleep(0.1) self.assertFalse(start_complete.called) self.assertFalse(stop_complete.called) finish_callback.callback(True) yield start_complete self.assertTrue(stop_complete.called) @inlineCallbacks def test_start_scheduler(self): yield self.lifecycle.start(start_scheduler=False) self.assertTrue(self.lifecycle.watching) self.assertFalse(self.lifecycle.executing) hooks_complete = self.wait_on_hook( sequence=["app-relation-joined", "app-relation-changed"]) # Watches are firing, but scheduler is not running hooks yield self.add_opposite_service_unit(self.states) yield self.sleep(0.1) self.assertFalse(hooks_complete.called) # Shut down everything yield self.lifecycle.stop() self.assertFalse(self.lifecycle.watching) self.assertFalse(self.lifecycle.executing) # Start the scheduler only, without the watches yield self.lifecycle.start(start_watches=False) self.assertFalse(self.lifecycle.watching) self.assertTrue(self.lifecycle.executing) # The scheduler should run the hooks it queued up earlier yield hooks_complete @inlineCallbacks def test_start_invoker(self): """Verify the invoker is started by the unit relation lifecycle""" # Setup 5 different wordpress services, wordpress, wordpress-1 # through wordpress-4 with one service unit each. Each of these # will be on the relation app:0 ...
app:4 blog_states = yield self.add_opposite_service_unit(self.states) yield blog_states['service_relations'][-1].add_unit_state( self.states['unit']) for i in range(1, 5): blog_states = yield self.add_opposite_service_unit( (yield self.add_relation_service_unit_to_another_endpoint( self.states, RelationEndpoint( "wordpress-%d" % i, "client-server", "db", "client")))) yield blog_states['service_relations'][-1].add_unit_state( self.states['unit']) yield self.lifecycle.start() yield self.wait_on_hook( sequence=["app-relation-joined", "app-relation-changed"]) # Currently the only action taken by the invoker.start is to # cache the relation hook context; app:0 is excluded because # it is the parent self.assertIn( ("Cached relation hook contexts on 'app:0': " "['app:1', 'app:2', 'app:3', 'app:4']"), self.hook_log.getvalue()) class SubordinateUnitLifecycleTest(LifecycleTestBase, CharmPublisherTestBase): @inlineCallbacks def setUp(self): yield super(SubordinateUnitLifecycleTest, self).setUp() self.output = self.capture_logging(level=logging.DEBUG) environment = self.config.get_default() self.provider = environment.get_machine_provider() self.storage = self.provider.get_file_storage() @inlineCallbacks def get_local_charm(self, charm_name): "override get_local_charm to use copy" repo = self.makeDir() os.rmdir(repo) shutil.copytree(unbundled_repository, repo) charm_dir = CharmDirectory( os.path.join(repo, "series", charm_name)) charm, charm_state = yield self.publish_charm(charm_dir.path) @self.addCleanup def rm_repo(): shutil.rmtree(repo) returnValue(charm_state) @inlineCallbacks def setup_default_subordinate_relation(self): mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info", "server", "global") logging_ep = RelationEndpoint("logging", "juju-info", "juju-info", "client", "container") mysql, my_units = yield self.get_service_and_units_by_charm_name( "mysql", 1) self.assertFalse((yield mysql.is_subordinate())) log, log_units = yield self.get_service_and_units_by_charm_name( "logging") self.assertTrue((yield log.is_subordinate())) # add the relationship so we can create units with containers relation_state, service_states = (yield self.relation_manager.add_relation_state( mysql_ep, logging_ep)) mu1 = my_units[0] msm = MachineStateManager(self.client) m1 = yield msm.add_machine_state(series_constraints) yield m1.set_instance_id(str(m1.id)) yield mu1.assign_to_machine(m1) yield my_units[0].set_private_address("10.10.10.42") self.unit_directory = os.path.join(self.juju_directory, "units", mu1.unit_name.replace("/", "-")) os.makedirs(os.path.join(self.unit_directory, "charm", "hooks")) self.state_directory = os.path.join(self.juju_directory, "state") os.makedirs(self.state_directory) returnValue(dict(principal=(mu1, mysql), subordinate=[None, log], relation_state=relation_state)) @inlineCallbacks def test_subordinate_lifecycle_start(self): state = yield self.setup_default_subordinate_relation() unit, service = state["principal"] lifecycle = UnitLifecycle( self.client, unit, service, self.unit_directory, self.state_directory, self.executor) test_deferred = Deferred() results = [] def test_complete(unit_name, machine_id, charm_dir): results.append([unit_name, machine_id, charm_dir]) test_deferred.callback(True) mock_unit_deploy = self.mocker.patch(lifecycle) mock_unit_deploy._do_unit_deploy("logging/0", 0, MATCH(lambda x: os.path.exists(x))) self.mocker.call(test_complete) self.mocker.replay() yield lifecycle.start() yield test_deferred # mocker has already verified the call signature, we're happy 
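# A note on the mocker idioms used throughout the tests above, as a
# minimal illustrative sketch (not an actual test): MATCH wraps a
# predicate so any argument satisfying it is accepted, mocker.result
# sets the patched call's return value, and mocker.call substitutes a
# side-effect callable.
#
#   mock = self.mocker.patch(obj)
#   mock.method(MATCH(lambda x: callable(x)))
#   self.mocker.result(True)
#   self.mocker.replay()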
juju-0.7.orig/juju/unit/tests/test_workflow.py0000644000000000000000000012532012135220114017775 0ustar 00000000000000import csv import itertools import logging import os from twisted.internet.defer import inlineCallbacks, returnValue from juju.control.tests.test_upgrade_charm import CharmUpgradeTestBase from juju.unit.tests.test_charm import CharmPublisherTestBase from juju.unit.tests.test_lifecycle import LifecycleTestBase from juju.charm.directory import CharmDirectory from juju.charm.url import CharmURL from juju.lib import serializer from juju.lib.statemachine import WorkflowState from juju.unit.lifecycle import UnitLifecycle, UnitRelationLifecycle from juju.unit.workflow import ( UnitWorkflowState, RelationWorkflowState, WorkflowStateClient, is_unit_running, is_relation_running) class WorkflowTestBase(LifecycleTestBase): @inlineCallbacks def setUp(self): yield super(WorkflowTestBase, self).setUp() self.output = self.makeFile() @inlineCallbacks def assertState(self, workflow, state): workflow_state = yield workflow.get_state() self.assertEqual(workflow_state, state) @inlineCallbacks def read_persistent_state(self, unit=None, history_id=None, workflow=None): unit = unit or self.states["unit"] history_id = history_id or unit.unit_name data, stat = yield self.client.get("/units/%s" % unit.internal_id) workflow = workflow or self.workflow state = open(workflow.state_file_path).read() history = open(workflow.state_history_path) zk_state = serializer.load(data)["workflow_state"] returnValue((serializer.load(state), [serializer.load(r[0]) for r in csv.reader(history)], serializer.load(zk_state[history_id]))) @inlineCallbacks def assert_history(self, expected, **kwargs): f_state, history, zk_state = yield self.read_persistent_state(**kwargs) self.assertEquals(f_state, zk_state) self.assertEquals(f_state, history[-1]) self.assertEquals(history, expected) def assert_history_concise(self, *chunks, **kwargs): state = None history = [] for chunk in chunks: for transition in chunk[:-1]: history.append({ "state": state, "state_variables": {}, "transition_id": transition}) state = chunk[-1] history.append({"state": state, "state_variables": {}}) return self.assert_history(history, **kwargs) def write_exit_hook(self, name, code=0, hooks_dir=None): self.write_hook( name, "#!/bin/bash\necho %s >> %s\n exit %s" % (name, self.output, code), hooks_dir=hooks_dir) class UnitWorkflowTestBase(WorkflowTestBase): @inlineCallbacks def setUp(self): yield super(UnitWorkflowTestBase, self).setUp() yield self.setup_default_test_relation() self.lifecycle = UnitLifecycle( self.client, self.states["unit"], self.states["service"], self.unit_directory, self.state_directory, self.executor) self.workflow = UnitWorkflowState( self.client, self.states["unit"], self.lifecycle, self.state_directory) self.write_exit_hook("install") self.write_exit_hook("start") self.write_exit_hook("stop") self.write_exit_hook("config-changed") self.write_exit_hook("upgrade-charm") @inlineCallbacks def assert_transition(self, transition, success=True): with (yield self.workflow.lock()): result = yield self.workflow.fire_transition(transition) self.assertEquals(result, success) @inlineCallbacks def assert_transition_alias(self, transition, success=True): with (yield self.workflow.lock()): result = yield self.workflow.fire_transition_alias(transition) self.assertEquals(result, success) @inlineCallbacks def assert_state(self, expected): actual = yield self.workflow.get_state() self.assertEquals(actual, expected) def assert_hooks(self, *hooks): with 
open(self.output) as f: lines = tuple(l.strip() for l in f) self.assertEquals(lines, hooks) class UnitWorkflowTest(UnitWorkflowTestBase): @inlineCallbacks def test_install(self): yield self.assert_transition("install") yield self.assert_state("started") self.assert_hooks("install", "config-changed", "start") yield self.assert_history_concise( ("install", "installed"), ("start", "started")) @inlineCallbacks def test_install_with_error_and_retry(self): """If the install hook fails, the workflow transitions to the install_error state. If the install is retried, a success transition will take us to the started state. """ self.write_exit_hook("install", 1) yield self.assert_transition("install", False) yield self.assert_state("install_error") yield self.assert_transition("retry_install") yield self.assert_state("started") self.assert_hooks("install", "config-changed", "start") yield self.assert_history_concise( ("install", "error_install", "install_error"), ("retry_install", "installed"), ("start", "started")) @inlineCallbacks def test_install_error_with_retry_hook(self): """If the install hook fails, the workflow transitions to the install_error state. """ self.write_exit_hook("install", 1) yield self.assert_transition("install", False) yield self.assert_state("install_error") yield self.assert_transition("retry_install_hook", False) yield self.assert_state("install_error") self.write_exit_hook("install") yield self.assert_transition_alias("retry_hook") yield self.assert_state("started") self.assert_hooks( "install", "install", "install", "config-changed", "start") yield self.assert_history_concise( ("install", "error_install", "install_error"), ("retry_install_hook", "install_error"), ("retry_install_hook", "installed"), ("start", "started")) @inlineCallbacks def test_start(self): with (yield self.workflow.lock()): yield self.workflow.set_state("installed") yield self.assert_transition("start") yield self.assert_state("started") self.assert_hooks("config-changed", "start") yield self.assert_history_concise( ("installed",), ("start", "started")) @inlineCallbacks def test_start_with_error(self): """Executing the start transition with a hook error results in the workflow going to the start_error state. The start can be retried. """ self.write_exit_hook("start", 1) # The install transition succeeded; error from success_transition # is ignored in StateMachine yield self.assert_transition("install") yield self.assert_state("start_error") yield self.assert_transition("retry_start") yield self.assert_state("started") self.assert_hooks("install", "config-changed", "start") yield self.assert_history_concise( ("install", "installed"), ("start", "error_start", "start_error"), ("retry_start", "started")) @inlineCallbacks def test_start_error_with_retry_hook(self): """Executing the start transition with a hook error results in the workflow going to the start_error state. The start can be retried.
""" self.write_exit_hook("start", 1) yield self.assert_transition("install") yield self.assert_state("start_error") yield self.assert_transition("retry_start_hook", False) yield self.assert_state("start_error") self.write_exit_hook("start") yield self.assert_transition_alias("retry_hook") yield self.assert_state("started") self.assert_hooks( "install", "config-changed", "start", "config-changed", "start", "config-changed", "start") yield self.assert_history_concise( ("install", "installed"), ("start", "error_start", "start_error"), ("retry_start_hook", "start_error"), ("retry_start_hook", "started")) @inlineCallbacks def test_stop(self): """Executing the stop transition, results in the workflow going to the down state. """ yield self.assert_transition("install") yield self.assert_transition("stop") yield self.assert_state("stopped") self.assert_hooks("install", "config-changed", "start", "stop") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("stop", "stopped")) @inlineCallbacks def test_stop_with_error(self): self.write_exit_hook("stop", 1) yield self.assert_transition("install") yield self.assert_transition("stop", False) yield self.assert_state("stop_error") yield self.assert_transition("retry_stop") yield self.assert_state("stopped") self.assert_hooks("install", "config-changed", "start", "stop") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("stop", "error_stop", "stop_error"), ("retry_stop", "stopped")) @inlineCallbacks def test_stop_error_with_retry_hook(self): self.write_exit_hook("stop", 1) yield self.assert_transition("install") yield self.assert_transition("stop", False) yield self.assert_state("stop_error") yield self.assert_transition("retry_stop_hook", False) yield self.assert_state("stop_error") self.write_exit_hook("stop") yield self.assert_transition_alias("retry_hook") yield self.assert_state("stopped") self.assert_hooks( "install", "config-changed", "start", "stop", "stop", "stop") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("stop", "error_stop", "stop_error"), ("retry_stop_hook", "stop_error"), ("retry_stop_hook", "stopped")) @inlineCallbacks def test_configure(self): """Configuring a unit results in the config-changed hook being run. 
""" yield self.assert_transition("install") yield self.assert_transition("configure") yield self.assert_state("started") self.assert_hooks( "install", "config-changed", "start", "config-changed") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("configure", "started")) @inlineCallbacks def test_configure_error_and_retry(self): """An error while configuring, transitions the unit and stops the lifecycle.""" yield self.assert_transition("install") self.write_exit_hook("config-changed", 1) yield self.assert_transition("configure", False) yield self.assert_state("configure_error") yield self.assert_transition("retry_configure") yield self.assert_state("started") self.assert_hooks( "install", "config-changed", "start", "config-changed") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("configure", "error_configure", "configure_error"), ("retry_configure", "started")) @inlineCallbacks def test_configure_error_and_retry_hook(self): """An error while configuring, transitions the unit and stops the lifecycle.""" yield self.assert_transition("install") self.write_exit_hook("config-changed", 1) yield self.assert_transition("configure", False) yield self.assert_state("configure_error") yield self.assert_transition("retry_configure_hook", False) yield self.assert_state("configure_error") self.write_exit_hook("config-changed") yield self.assert_transition_alias("retry_hook") yield self.assert_state("started") self.assert_hooks( "install", "config-changed", "start", "config-changed", "config-changed", "config-changed") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("configure", "error_configure", "configure_error"), ("retry_configure_hook", "error_retry_configure", "configure_error"), ("retry_configure_hook", "started")) @inlineCallbacks def test_is_unit_running(self): running, state = yield is_unit_running( self.client, self.states["unit"]) self.assertIdentical(running, False) self.assertIdentical(state, None) with (yield self.workflow.lock()): yield self.workflow.fire_transition("install") running, state = yield is_unit_running( self.client, self.states["unit"]) self.assertIdentical(running, True) self.assertEqual(state, "started") with (yield self.workflow.lock()): yield self.workflow.fire_transition("stop") running, state = yield is_unit_running( self.client, self.states["unit"]) self.assertIdentical(running, False) self.assertEqual(state, "stopped") @inlineCallbacks def test_client_with_no_state(self): workflow_client = WorkflowStateClient(self.client, self.states["unit"]) state = yield workflow_client.get_state() self.assertEqual(state, None) @inlineCallbacks def test_client_with_state(self): with (yield self.workflow.lock()): yield self.workflow.fire_transition("install") workflow_client = WorkflowStateClient(self.client, self.states["unit"]) self.assertEqual( (yield workflow_client.get_state()), "started") @inlineCallbacks def test_client_readonly(self): with (yield self.workflow.lock()): yield self.workflow.fire_transition("install") workflow_client = WorkflowStateClient( self.client, self.states["unit"]) self.assertEqual( (yield workflow_client.get_state()), "started") with (yield workflow_client.lock()): yield self.assertFailure( workflow_client.set_state("stopped"), NotImplementedError) self.assertEqual( (yield workflow_client.get_state()), "started") @inlineCallbacks def assert_synchronize(self, start_state, state, lifecycle, executor, sync_lifecycle=None, sync_executor=None, 

    @inlineCallbacks
    def assert_synchronize(self, start_state, state, lifecycle, executor,
                           sync_lifecycle=None, sync_executor=None,
                           start_inflight=None):
        # Handle cases where we expect to be in a different state pre-sync
        # to the final state post-sync.
        if sync_lifecycle is None:
            sync_lifecycle = lifecycle
        if sync_executor is None:
            sync_executor = executor
        super_sync = WorkflowState.synchronize

        @inlineCallbacks
        def check_sync(obj):
            # We don't care about RelationWorkflowState syncing here
            if type(obj) == UnitWorkflowState:
                self.assertEquals(
                    self.lifecycle.running, sync_lifecycle)
                self.assertEquals(
                    self.executor.running, sync_executor)
            yield super_sync(obj)

        all_start_states = itertools.product((True, False), (True, False))
        for initial_lifecycle, initial_executor in all_start_states:
            if initial_executor and not self.executor.running:
                self.executor.start()
            elif not initial_executor and self.executor.running:
                yield self.executor.stop()
            if initial_lifecycle and not self.lifecycle.running:
                yield self.lifecycle.start(fire_hooks=False)
            elif not initial_lifecycle and self.lifecycle.running:
                yield self.lifecycle.stop(fire_hooks=False)

            with (yield self.workflow.lock()):
                yield self.workflow.set_state(start_state)
                yield self.workflow.set_inflight(start_inflight)

            # self.patch is not suitable because we can't unpatch until
            # the end of the test, and we don't really want [many]
            # distinct one-line test_synchronize_foo methods.
            WorkflowState.synchronize = check_sync
            try:
                yield self.workflow.synchronize(self.executor)
            finally:
                WorkflowState.synchronize = super_sync

            new_inflight = yield self.workflow.get_inflight()
            self.assertEquals(new_inflight, None)
            new_state = yield self.workflow.get_state()
            self.assertEquals(new_state, state)
            vars = yield self.workflow.get_state_variables()
            self.assertEquals(vars, {})
            self.assertEquals(self.lifecycle.running, lifecycle)
            self.assertEquals(self.executor.running, executor)

    def assert_default_synchronize(self, state):
        return self.assert_synchronize(state, state, False, True)

    @inlineCallbacks
    def test_synchronize_automatic(self):
        # No transition in flight
        yield self.assert_synchronize(
            None, "started", True, True, False, True)
        yield self.assert_synchronize(
            "installed", "started", True, True, False, True)
        yield self.assert_synchronize(
            "started", "started", True, True)
        yield self.assert_synchronize(
            "charm_upgrade_error", "charm_upgrade_error", True, False)

    @inlineCallbacks
    def test_synchronize_trivial(self):
        yield self.assert_default_synchronize("install_error")
        yield self.assert_default_synchronize("start_error")
        yield self.assert_default_synchronize("configure_error")
        yield self.assert_default_synchronize("stop_error")
        yield self.assert_default_synchronize("stopped")
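
    # In the in-flight cases below a transition was recorded as started
    # but never completed; synchronize() is expected to finish it, so the
    # workflow lands in the transition's destination state (for example
    # "configure_error" with an in-flight "retry_configure_hook" ends up
    # "started") and no in-flight marker survives.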

    @inlineCallbacks
    def test_synchronize_inflight(self):
        # With transition inflight (we check the important one
        # (upgrade_charm) and a couple of others at random, but testing
        # every single one is entirely redundant).
        yield self.assert_synchronize(
            "started", "started", True, True, True, False,
            "upgrade_charm")
        yield self.assert_synchronize(
            None, "started", True, True, False, True, "install")
        yield self.assert_synchronize(
            "configure_error", "started", True, True, False, True,
            "retry_configure_hook")


class UnitWorkflowUpgradeTest(
        UnitWorkflowTestBase, CharmPublisherTestBase,
        CharmUpgradeTestBase):

    expected_upgrade = None

    @inlineCallbacks
    def ready_upgrade(self, bad_hook):
        repository = self.increment_charm(self.charm)
        hooks_dir = os.path.join(
            repository.path, "series", "mysql", "hooks")
        self.write_exit_hook(
            "upgrade-charm", int(bad_hook), hooks_dir=hooks_dir)
        charm = yield repository.find(
            CharmURL.parse("local:series/mysql"))
        charm, charm_state = yield self.publish_charm(charm.path)
        yield self.states["service"].set_charm_id(charm_state.id)
        self.expected_upgrade = charm_state.id

    @inlineCallbacks
    def assert_charm_upgraded(self, expect_upgraded):
        charm_id = yield self.states["unit"].get_charm_id()
        self.assertEquals(
            charm_id == self.expected_upgrade, expect_upgraded)
        if expect_upgraded:
            expect_revision = CharmURL.parse(
                self.expected_upgrade).revision
            charm = CharmDirectory(
                os.path.join(self.unit_directory, "charm"))
            self.assertEquals(charm.get_revision(), expect_revision)

    @inlineCallbacks
    def test_upgrade_not_available(self):
        """Upgrading when there's no new version runs the hook anyway."""
        yield self.assert_transition("install")
        yield self.assert_state("started")
        yield self.assert_transition("upgrade_charm")
        yield self.assert_state("started")
        self.assert_hooks(
            "install", "config-changed", "start", "upgrade-charm")
        yield self.assert_history_concise(
            ("install", "installed"),
            ("start", "started"),
            ("upgrade_charm", "started"))

    @inlineCallbacks
    def test_upgrade(self):
        """Upgrading a workflow results in the upgrade hook being
        executed.
        """
        yield self.assert_transition("install")
        yield self.assert_state("started")
        yield self.ready_upgrade(False)
        yield self.assert_charm_upgraded(False)
        yield self.assert_transition("upgrade_charm")
        yield self.assert_state("started")
        yield self.assert_charm_upgraded(True)
        self.assert_hooks(
            "install", "config-changed", "start", "upgrade-charm")
        yield self.assert_history_concise(
            ("install", "installed"),
            ("start", "started"),
            ("upgrade_charm", "started"))

    @inlineCallbacks
    def test_upgrade_error_retry(self):
        """A hook error during an upgrade transitions to
        charm_upgrade_error.
        """
        yield self.assert_transition("install")
        self.write_exit_hook("upgrade-charm", 1)
        yield self.assert_state("started")
        yield self.ready_upgrade(True)
        yield self.assert_charm_upgraded(False)
        yield self.assert_transition("upgrade_charm", False)
        yield self.assert_state("charm_upgrade_error")
        self.assertFalse(self.executor.running)
        # The upgrade should have completed before the hook blew up.
        yield self.assert_charm_upgraded(True)
        # The bad hook is still in place, but we don't run it again
        yield self.assert_transition("retry_upgrade_charm")
        yield self.assert_state("started")
        yield self.assert_charm_upgraded(True)
        self.assertTrue(self.executor.running)
        self.assert_hooks(
            "install", "config-changed", "start", "upgrade-charm")
        yield self.assert_history_concise(
            ("install", "installed"),
            ("start", "started"),
            ("upgrade_charm", "upgrade_charm_error",
             "charm_upgrade_error"),
            ("retry_upgrade_charm", "started"))
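
    # As the assertions above observe, the charm payload is replaced
    # before the upgrade-charm hook runs, so a hook failure leaves the
    # new charm in place; that is why retry_upgrade_charm can safely skip
    # re-running the hook, while retry_upgrade_charm_hook (next) insists
    # on a clean hook run.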
""" yield self.assert_transition("install") yield self.assert_state("started") yield self.ready_upgrade(True) yield self.assert_charm_upgraded(False) yield self.assert_transition("upgrade_charm", False) yield self.assert_state("charm_upgrade_error") self.assertFalse(self.executor.running) # The upgrade should have completed before the hook blew up. yield self.assert_charm_upgraded(True) yield self.assert_transition("retry_upgrade_charm_hook", False) yield self.assert_state("charm_upgrade_error") self.assertFalse(self.executor.running) yield self.assert_charm_upgraded(True) self.write_exit_hook("upgrade-charm") yield self.assert_transition_alias("retry_hook") yield self.assert_state("started") self.assertTrue(self.executor.running) yield self.assert_charm_upgraded(True) self.assert_hooks( "install", "config-changed", "start", "upgrade-charm", "upgrade-charm", "upgrade-charm") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("upgrade_charm", "upgrade_charm_error", "charm_upgrade_error"), ("retry_upgrade_charm_hook", "retry_upgrade_charm_error", "charm_upgrade_error"), ("retry_upgrade_charm_hook", "started")) @inlineCallbacks def test_upgrade_error_before_hook(self): """If we blow up during the critical pre-hook bits, we should still end up in the same error state""" self.capture_logging("charm.upgrade") yield self.assert_transition("install") yield self.assert_state("started") yield self.ready_upgrade(False) # Induce a surprising error with self.frozen_charm(): yield self.assert_charm_upgraded(False) yield self.assert_transition("upgrade_charm", False) yield self.assert_state("charm_upgrade_error") self.assertFalse(self.executor.running) # The upgrade did not complete yield self.assert_charm_upgraded(False) yield self.assert_transition("retry_upgrade_charm") yield self.assert_state("started") self.assertTrue(self.executor.running) yield self.assert_charm_upgraded(True) # The hook must run here, even though it's a retry, because the actual # charm only just got overwritten: and so we know that we've never even # tried to execute a hook for this upgrade, and we must do so to fulfil # the guarantee that that hook runs first after upgrade. self.assert_hooks( "install", "config-changed", "start", "upgrade-charm") yield self.assert_history_concise( ("install", "installed"), ("start", "started"), ("upgrade_charm", "upgrade_charm_error", "charm_upgrade_error"), ("retry_upgrade_charm", "started")) class UnitRelationWorkflowTest(WorkflowTestBase): @inlineCallbacks def setUp(self): yield super(UnitRelationWorkflowTest, self).setUp() yield self.setup_default_test_relation() self.relation_name = self.states["service_relation"].relation_name self.relation_ident = self.states["service_relation"].relation_ident self.log_stream = self.capture_logging( "unit.relation.lifecycle", logging.DEBUG) self.lifecycle = UnitRelationLifecycle( self.client, self.states["unit"].unit_name, self.states["unit_relation"], self.relation_ident, self.unit_directory, self.state_directory, self.executor) self.workflow = RelationWorkflowState( self.client, self.states["unit_relation"], self.states["unit"].unit_name, self.lifecycle, self.state_directory) @inlineCallbacks def tearDown(self): yield self.lifecycle.stop() yield super(UnitRelationWorkflowTest, self).tearDown() @inlineCallbacks def test_is_relation_running(self): """The unit relation's workflow state can be categorized as a boolean. 
""" with (yield self.workflow.lock()): running, state = yield is_relation_running( self.client, self.states["unit_relation"]) self.assertIdentical(running, False) self.assertIdentical(state, None) relation_state = self.workflow.get_relation_info() self.assertEquals(relation_state, {"relation-0000000000": {"relation_name": "mysql/0", "relation_scope": "global"}}) yield self.workflow.fire_transition("start") running, state = yield is_relation_running( self.client, self.states["unit_relation"]) self.assertIdentical(running, True) self.assertEqual(state, "up") relation_state = self.workflow.get_relation_info() self.assertEquals(relation_state, {"relation-0000000000": {"relation_name": "mysql/0", "relation_scope": "global"}}) yield self.workflow.fire_transition("stop") running, state = yield is_relation_running( self.client, self.states["unit_relation"]) self.assertIdentical(running, False) self.assertEqual(state, "down") relation_state = self.workflow.get_relation_info() self.assertEquals(relation_state, {"relation-0000000000": {"relation_name": "mysql/0", "relation_scope": "global"}}) @inlineCallbacks def test_up_down_cycle(self): """The workflow can be transition from up to down, and back. """ self.write_hook("%s-relation-changed" % self.relation_name, "#!/bin/bash\nexit 0\n") with (yield self.workflow.lock()): yield self.workflow.fire_transition("start") yield self.assertState(self.workflow, "up") hook_executed = self.wait_on_hook("app-relation-changed") # Add a new unit, while we're stopped. with (yield self.workflow.lock()): yield self.workflow.fire_transition("stop") yield self.add_opposite_service_unit(self.states) yield self.assertState(self.workflow, "down") self.assertFalse(hook_executed.called) # Come back up; check unit add detected. with (yield self.workflow.lock()): yield self.workflow.fire_transition("restart") yield self.assertState(self.workflow, "up") yield hook_executed self.assert_history_concise( ("start", "up"), ("stop", "down"), ("restart", "up"), history_id=self.workflow.zk_state_id) @inlineCallbacks def test_join_hook_with_error(self): """A join hook error stops execution of synthetic change hooks. """ self.capture_logging("unit.relation.lifecycle", logging.DEBUG) self.write_hook("%s-relation-joined" % self.relation_name, "#!/bin/bash\nexit 1\n") self.write_hook("%s-relation-changed" % self.relation_name, "#!/bin/bash\nexit 1\n") with (yield self.workflow.lock()): yield self.workflow.fire_transition("start") yield self.assertState(self.workflow, "up") # Add a new unit, and wait for the broken hook to result in # the transition to the down state. yield self.add_opposite_service_unit(self.states) yield self.wait_on_state(self.workflow, "error") f_state, history, zk_state = yield self.read_persistent_state( history_id=self.workflow.zk_state_id) self.assertEqual(f_state, zk_state) error = "Error processing '%s': exit code 1." % ( os.path.join(self.unit_directory, "charm", "hooks", "app-relation-joined")) self.assertEqual(f_state, {"state": "error", "state_variables": { "change_type": "joined", "error_message": error}}) @inlineCallbacks def test_change_hook_with_error(self): """An error while processing a change hook, results in the workflow transitioning to the down state. 
""" self.capture_logging("unit.relation.lifecycle", logging.DEBUG) self.write_hook("%s-relation-joined" % self.relation_name, "#!/bin/bash\nexit 0\n") self.write_hook("%s-relation-changed" % self.relation_name, "#!/bin/bash\nexit 1\n") with (yield self.workflow.lock()): yield self.workflow.fire_transition("start") yield self.assertState(self.workflow, "up") # Add a new unit, and wait for the broken hook to result in # the transition to the down state. yield self.add_opposite_service_unit(self.states) yield self.wait_on_state(self.workflow, "error") f_state, history, zk_state = yield self.read_persistent_state( history_id=self.workflow.zk_state_id) self.assertEqual(f_state, zk_state) error = "Error processing '%s': exit code 1." % ( os.path.join(self.unit_directory, "charm", "hooks", "app-relation-changed")) self.assertEqual(f_state, {"state": "error", "state_variables": { "change_type": "modified", "error_message": error}}) @inlineCallbacks def test_depart(self): """When the workflow is transition to the down state, a relation broken hook is executed, and the unit stops responding to relation changes. """ with (yield self.workflow.lock()): yield self.workflow.fire_transition("start") yield self.assertState(self.workflow, "up") wait_on_hook = self.wait_on_hook("app-relation-changed") states = yield self.add_opposite_service_unit(self.states) yield wait_on_hook wait_on_hook = self.wait_on_hook("app-relation-broken") wait_on_state = self.wait_on_state(self.workflow, "departed") with (yield self.workflow.lock()): yield self.workflow.fire_transition("depart") yield wait_on_hook yield wait_on_state # verify further changes to the related unit don't result in # hook executions. results = [] def collect_executions(*args): results.append(args) self.executor.set_observer(collect_executions) yield states["unit_relation"].set_data(dict(a=1)) # Sleep to give errors a chance. yield self.sleep(0.1) self.assertFalse(results) def test_lifecycle_attribute(self): """The workflow lifecycle is accessible from the workflow.""" self.assertIdentical(self.workflow.lifecycle, self.lifecycle) @inlineCallbacks def test_client_read_none(self): workflow = WorkflowStateClient( self.client, self.states["unit_relation"]) self.assertEqual(None, (yield workflow.get_state())) @inlineCallbacks def test_client_read_state(self): """The relation workflow client can read the state of a unit relation.""" with (yield self.workflow.lock()): yield self.workflow.fire_transition("start") yield self.assertState(self.workflow, "up") self.write_hook("%s-relation-changed" % self.relation_name, "#!/bin/bash\necho hello\n") wait_on_hook = self.wait_on_hook("app-relation-changed") yield self.add_opposite_service_unit(self.states) yield wait_on_hook workflow = WorkflowStateClient( self.client, self.states["unit_relation"]) self.assertEqual("up", (yield workflow.get_state())) @inlineCallbacks def test_client_read_only(self): workflow_client = WorkflowStateClient( self.client, self.states["unit_relation"]) with (yield workflow_client.lock()): yield self.assertFailure( workflow_client.set_state("up"), NotImplementedError) @inlineCallbacks def assert_synchronize(self, start_state, state, watches, scheduler, sync_watches=None, sync_scheduler=None, start_inflight=None): # Handle cases where we expect to be in a different state pre-sync # to the final state post-sync. 

    @inlineCallbacks
    def assert_synchronize(self, start_state, state, watches, scheduler,
                           sync_watches=None, sync_scheduler=None,
                           start_inflight=None):
        # Handle cases where we expect to be in a different state pre-sync
        # to the final state post-sync.
        if sync_watches is None:
            sync_watches = watches
        if sync_scheduler is None:
            sync_scheduler = scheduler
        super_sync = WorkflowState.synchronize

        @inlineCallbacks
        def check_sync(obj):
            self.assertEquals(
                self.workflow.lifecycle.watching, sync_watches)
            self.assertEquals(
                self.workflow.lifecycle.executing, sync_scheduler)
            yield super_sync(obj)

        start_states = itertools.product((True, False), (True, False))
        for (initial_watches, initial_scheduler) in start_states:
            yield self.workflow.lifecycle.stop()
            yield self.workflow.lifecycle.start(
                start_watches=initial_watches,
                start_scheduler=initial_scheduler)
            self.assertEquals(
                self.workflow.lifecycle.watching, initial_watches)
            self.assertEquals(
                self.workflow.lifecycle.executing, initial_scheduler)

            with (yield self.workflow.lock()):
                yield self.workflow.set_state(start_state)
                yield self.workflow.set_inflight(start_inflight)

            # self.patch is not suitable because we can't unpatch until
            # the end of the test, and we don't really want 13 distinct
            # one-line test_synchronize_foo methods.
            WorkflowState.synchronize = check_sync
            try:
                yield self.workflow.synchronize()
            finally:
                WorkflowState.synchronize = super_sync

            new_inflight = yield self.workflow.get_inflight()
            self.assertEquals(new_inflight, None)
            new_state = yield self.workflow.get_state()
            self.assertEquals(new_state, state)
            self.assertEquals(
                self.workflow.lifecycle.watching, watches)
            self.assertEquals(
                self.workflow.lifecycle.executing, scheduler)

    @inlineCallbacks
    def test_synchronize(self):
        # No transition in flight
        yield self.assert_synchronize(
            None, "up", True, True, False, False)
        yield self.assert_synchronize("down", "down", False, False)
        yield self.assert_synchronize("departed", "departed", False, False)
        yield self.assert_synchronize("error", "error", True, False)
        yield self.assert_synchronize("up", "up", True, True)

        # With transition inflight
        yield self.assert_synchronize(
            None, "up", True, True, False, False, "start")
        yield self.assert_synchronize(
            "up", "down", False, False, True, True, "stop")
        yield self.assert_synchronize(
            "down", "up", True, True, False, False, "restart")
        yield self.assert_synchronize(
            "up", "error", True, False, True, True, "error")
        yield self.assert_synchronize(
            "error", "up", True, True, True, False, "reset")
        yield self.assert_synchronize(
            "up", "departed", False, False, True, True, "depart")
        yield self.assert_synchronize(
            "down", "departed", False, False, False, False,
            "down_depart")
        yield self.assert_synchronize(
            "error", "departed", False, False, True, False,
            "error_depart")
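
    # Note that "depart", "down_depart" and "error_depart" above all
    # converge on the departed state, from up, down and error
    # respectively; the tests that follow exercise those depart paths end
    # to end, including a failing relation-broken hook.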

    @inlineCallbacks
    def test_depart_hook_error(self):
        """A depart hook error still results in a transition to the
        departed state, with a state variable noting the error."""
        self.write_hook("%s-relation-broken" % self.relation_name,
                        "#!/bin/bash\nexit 1\n")
        error_output = self.capture_logging("unit.relation.workflow")

        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("start")
        yield self.assertState(self.workflow, "up")

        wait_on_hook = self.wait_on_hook("app-relation-changed")
        states = yield self.add_opposite_service_unit(self.states)
        yield wait_on_hook

        wait_on_hook = self.wait_on_hook("app-relation-broken")
        wait_on_state = self.wait_on_state(self.workflow, "departed")
        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("depart")
        yield wait_on_hook
        yield wait_on_state

        # Verify further changes to the related unit don't result in
        # hook executions.
        results = []

        def collect_executions(*args):
            results.append(args)

        self.executor.set_observer(collect_executions)
        yield states["unit_relation"].set_data(dict(a=1))

        # Sleep to give errors a chance.
        yield self.sleep(0.1)
        self.assertFalse(results)

        # Verify final state and log output.
        msg = "Depart hook error, ignoring: "
        error_msg = "Error processing "
        error_msg += repr(os.path.join(
            self.unit_directory, "charm", "hooks",
            "app-relation-broken"))
        error_msg += ": exit code 1."
        self.assertEqual(
            error_output.getvalue(), (msg + error_msg + "\n"))

        current_state = yield self.workflow.get_state()
        self.assertEqual(current_state, "departed")

        f_state, history, zk_state = yield self.read_persistent_state(
            history_id=self.workflow.zk_state_id)
        self.assertEqual(f_state, zk_state)
        self.assertEqual(
            f_state,
            {"state": "departed",
             "state_variables": {
                 "change_type": "depart",
                 "error_message": error_msg}})

    @inlineCallbacks
    def test_depart_down(self):
        """When the workflow is transitioned from down to departed, a
        relation-broken hook is executed, and the unit stops responding
        to relation changes.
        """
        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("start")
            yield self.assertState(self.workflow, "up")
            yield self.workflow.fire_transition("stop")
            yield self.assertState(self.workflow, "down")

        states = yield self.add_opposite_service_unit(self.states)

        wait_on_hook = self.wait_on_hook("app-relation-broken")
        wait_on_state = self.wait_on_state(self.workflow, "departed")
        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("depart")
        yield wait_on_hook
        yield wait_on_state

        # Verify further changes to the related unit don't result in
        # hook executions.
        results = []

        def collect_executions(*args):
            results.append(args)

        self.executor.set_observer(collect_executions)
        yield states["unit_relation"].set_data(dict(a=1))

        # Sleep to give errors a chance.
        yield self.sleep(0.1)
        self.assertFalse(results)
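
    # The recurring collect_executions observer in these depart tests
    # registers a callback on the hook executor purely to record
    # invocations; a short sleep followed by assertFalse(results) then
    # demonstrates that no hooks fire once the relation has departed.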

    @inlineCallbacks
    def test_depart_error(self):
        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("start")
            yield self.assertState(self.workflow, "up")
            yield self.workflow.fire_transition("error")
            yield self.assertState(self.workflow, "error")

        states = yield self.add_opposite_service_unit(self.states)

        wait_on_hook = self.wait_on_hook("app-relation-broken")
        wait_on_state = self.wait_on_state(self.workflow, "departed")
        with (yield self.workflow.lock()):
            yield self.workflow.fire_transition("depart")
        yield wait_on_hook
        yield wait_on_state

        # Verify further changes to the related unit don't result in
        # hook executions.
        results = []

        def collect_executions(*args):
            results.append(args)

        self.executor.set_observer(collect_executions)
        yield states["unit_relation"].set_data(dict(a=1))

        # Sleep to give errors a chance.
        yield self.sleep(0.1)
        self.assertFalse(results)
juju-0.7.orig/misc/bash_completion.d/0000755000000000000000000000000012135220114015734 5ustar 00000000000000
juju-0.7.orig/misc/devel-tools/0000755000000000000000000000000012135220114014601 5ustar 00000000000000
juju-0.7.orig/misc/bash_completion.d/juju0000644000000000000000000000661612135220114016645 0ustar 00000000000000shopt -s progcomp
_juju ()
{
    local cur cmds cmdIdx cmd cmdOpts fixedWords i globalOpts
    local curOpt optEnums
    local IFS=$' \n'

    COMPREPLY=()
    cur=${COMP_WORDS[COMP_CWORD]}

    cmds='add-relation add-unit bootstrap debug-hooks debug-log deploy
        destroy-environment destroy-service expose open-tunnel
        remove-relation remove-unit resolved scp set ssh status
        terminate-machine unexpose upgrade-charm get'
    globalOpts=( -h --verbose -v --log-file)

    # do ordinary expansion if we are anywhere after a -- argument
    for ((i = 1; i < COMP_CWORD; ++i)); do
        [[ ${COMP_WORDS[i]} == "--" ]] && return 0
    done

    # find the command; it's the first word not starting in -
    cmd=
    for ((cmdIdx = 1; cmdIdx < ${#COMP_WORDS[@]}; ++cmdIdx)); do
        if [[ ${COMP_WORDS[cmdIdx]} != -* ]]; then
            cmd=${COMP_WORDS[cmdIdx]}
            break
        fi
    done

    # complete command name if we are not already past the command
    if [[ $COMP_CWORD -le cmdIdx ]]; then
        COMPREPLY=( $( compgen -W "$cmds ${globalOpts[*]}" -- "$cur" ) )
        return 0
    fi

    # find the option for which we want to complete a value
    curOpt=
    if [[ $cur != -* ]] && [[ $COMP_CWORD -gt 1 ]]; then
        curOpt=${COMP_WORDS[COMP_CWORD - 1]}
        if [[ "$curOpt" == = ]]; then
            curOpt=${COMP_WORDS[COMP_CWORD - 2]}
        elif [[ "$cur" == : ]]; then
            cur=
            curOpt="$curOpt:"
        elif [[ "$curOpt" == : ]]; then
            curOpt=${COMP_WORDS[COMP_CWORD - 2]}:
        fi
    fi

    cmdOpts=( )
    optEnums=( )
    fixedWords=( )
    case "$cmd" in
    add-relation|debug-hooks|destroy-environment|destroy-service|expose|unexpose|open-tunnel|remove-relation|remove-unit|scp|set|ssh|terminate-machine)
        cmdOpts=( --environment --help)
        ;;
    get)
        cmdOpts=( --environment --help --schema)
        ;;
    bootstrap)
        cmdOpts=( --environment --help)
        ;;
    add-unit)
        cmdOpts=( --environment --num-units --help)
        ;;
    deploy)
        cmdOpts=( --environment --num-units --repository --config --help)
        ;;
    debug-log)
        cmdOpts=( --environment --exclude --include --replay --level
            --limit --output --help)
        case "$curOpt" in
        -l|--level)
            optEnums=( DEBUG INFO ERROR WARNING CRITICAL )
            ;;
        esac
        ;;
    resolved)
        cmdOpts=( --retry --environment --help)
        ;;
    status)
        cmdOpts=( --output --format --environment --help)
        case "$curOpt" in
        --format)
            optEnums=( json yaml png svg dot )
            ;;
        esac
        ;;
    upgrade-charm)
        cmdOpts=( --dry-run --environment --repository --help)
        ;;
    *)
        cmdOpts=(--help)
        ;;
    esac

    IFS=$'\n'
    if [[ "$cur" == = ]] && [[ ${#optEnums[@]} -gt 0 ]]; then
        # complete directly after "--option=", list all enum values
        COMPREPLY=( "${optEnums[@]}" )
        return 0
    else
        fixedWords=( "${cmdOpts[@]}"
            "${globalOpts[@]}"
            "${optEnums[@]}"
            "${fixedWords[@]}" )
    fi

    if [[ ${#fixedWords[@]} -gt 0 ]]; then
        COMPREPLY=( $( compgen -W "${fixedWords[*]}" -- "$cur" ) )
    fi

    return 0
}

complete -F _juju -o default juju
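
# Usage note (an assumption from the directory name, not stated in the
# script itself): dropping this file into /etc/bash_completion.d/ makes
# the completions load in new shells; to try it in the current shell,
# ". misc/bash_completion.d/juju" should be enough to register _juju.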
juju-0.7.orig/misc/devel-tools/juju-inspect-local-provider0000755000000000000000000000241612135220114022072 0ustar 00000000000000#!/bin/bash -x
# Gather a collection of data into an output file to make
# debugging any possible issues with the local provider simpler

if [ "`id -u`" != "0" ]; then
    echo "This script should be run as root"
    exit 1;
fi

if [ ${#} = 1 ]; then
    image_name=$1
fi

# 11.10 (Oneiric) is the first supported release for the local provider
# due to its improved LXC support
source /etc/lsb-release
major_release=`echo $DISTRIB_RELEASE | cut -d . -f 1`
minor_release=`echo $DISTRIB_RELEASE | cut -d . -f 2`

if [ $major_release -lt 11 ] || \
   [ $major_release -eq 11 -a $minor_release -lt 10 ]; then
    echo "Oneiric 11.10 is the first supported release of the local provider"
    exit 1;
fi

# Collect various status information about the system
echo "#Local provider inspection"

uname -a
ifconfig
virsh net-list --all
ls /var/cache/lxc
ls /var/cache/lxc/*
lxc-ls

# guess about the user's data-dir
if [ -n "${SUDO_USER}" ]; then
    user=$SUDO_USER
fi

image=/var/lib/lxc/$image_name
if [ -n "$image_name" -a -e "$image" ]; then
    cat "$image/config"
    chroot "$image/rootfs" bash -xc "
        cat /etc/juju/juju.conf;
        ls /usr/lib/juju/juju;
        dpkg-query -s juju;
        cat /etc/hostname;
        cat /etc/resolv.conf;
        cat /etc/hosts;
        tail -n 100 /var/log/juju/*.log
    "
fi
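
# Hypothetical example run (the container name and output path below are
# made up; the script takes an optional LXC image name, must run as root,
# and traces its commands via bash -x):
#   sudo ./juju-inspect-local-provider juju-sample-0 > inspect-output.txt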