juju-deployer-0.6.4/README:

Juju Deployer
-------------

A deployment tool for juju that allows stack-like configurations of complex
deployments. It supports configuration in yaml or json.

Installation
------------

::

  $ virtualenv --system-site-packages deployer
  $ ./deployer/bin/easy_install juju-deployer
  $ ./deployer/bin/juju-deployer -h

Usage
-----

Stack Definitions
-----------------

High level view of v3 stacks::

  blog:
    series: precise
    services:
      blog:
        charm: wordpress
        branch: lp:charms/precise/wordpress
      db:
        charm: mysql
        branch: lp:charms/precise/mysql
    relations:
      - [db, blog]

  blog-prod:
    inherits: blog
    services:
      blog:
        num_units: 3
        constraints: instance-type=m1.medium
        options:
          wp-content: include-file://content-branch.txt
      db:
        constraints: instance-type=m1.large
        options:
          tuning: include-base64://db-tuning.txt
      cachelb:
        charm: varnish
        branch: lp:charms/precise/varnish
    relations:
      - [cachelb, blog]

We've got two deployment stacks here, blog and blog-prod. The blog stack
defines a simple wordpress deploy with mysql and two relations. The blog-prod
stack inherits from blog, overriding the blog and db service configurations
and adding a varnish cache in front of the blog.

Version 4 bundles are currently under development. The development document
for these types of bundles is available `here `_.

Development
-----------

Obtain source::

  $ bzr branch lp:juju-deployer/darwin deployer
  $ cd deployer

Test runner::

  $ python setup.py test

Background
----------

This is a wrapper for Juju that allows stack-like configurations of complex
deployments. It was created to deploy Openstack but should be able to deploy
other complex service configurations in the same manner. See deployments.cfg
and deployments.cfg.sample for examples of how to describe service stacks in
JSON.
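The `inherits` mechanism in the stacks above can be thought of as a recursive dictionary merge of the child stack onto its parent. A minimal sketch of that idea, assuming plain-dict configuration — `deep_merge` and `resolve` are hypothetical helpers for illustration, not juju-deployer's actual API:

```python
# Hypothetical sketch of resolving a v3 stack's "inherits" chain;
# deep_merge/resolve are illustrative helpers, not juju-deployer's real API.

def deep_merge(base, override):
    """Recursively merge an overriding stack onto its parent."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# A cut-down version of the blog / blog-prod stacks from the README.
stacks = {
    "blog": {
        "series": "precise",
        "services": {
            "blog": {"charm": "wordpress"},
            "db": {"charm": "mysql"},
        },
    },
    "blog-prod": {
        "inherits": "blog",
        "services": {
            "blog": {"num_units": 3},
            "cachelb": {"charm": "varnish"},
        },
    },
}

def resolve(name):
    """Expand a stack's inherits chain into one flattened definition."""
    stack = stacks[name]
    parent = stack.get("inherits")
    child = {k: v for k, v in stack.items() if k != "inherits"}
    return deep_merge(resolve(parent), child) if parent else child

prod = resolve("blog-prod")
print(prod["services"]["blog"])   # {'charm': 'wordpress', 'num_units': 3}
print(sorted(prod["services"]))   # ['blog', 'cachelb', 'db']
```

The include-file:// and include-base64:// option values seen above are resolved in a similar spirit when the stack is processed, substituting the contents of the referenced file (base64-encoded in the latter case).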
juju-deployer-0.6.4/setup.cfg:

[build_sphinx]
source-dir = doc/
build-dir = doc/_build
all_files = 1

[upload_sphinx]
upload-dir = doc/_build/html

[egg_info]
tag_build = 
tag_date = 0
tag_svn_revision = 0

juju-deployer-0.6.4/PKG-INFO:

Metadata-Version: 1.1
Name: juju-deployer
Version: 0.6.4
Summary: A tool for deploying complex stacks with juju.
Home-page: http://launchpad.net/juju-deployer
Author: Kapil Thangavelu
Author-email: kapil.foss@gmail.com
License: UNKNOWN
Description: (identical to the README text above)
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python
Classifier: Topic :: Internet
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Developers

juju-deployer-0.6.4/MANIFEST.in:

include README
include LICENSE
recursive-include deployer/tests/test_data *

juju-deployer-0.6.4/setup.py:

from setuptools import setup, find_packages

setup(
    name="juju-deployer",
    version="0.6.4",
    description="A tool for deploying complex stacks with juju.",
    long_description=open("README").read(),
    author="Kapil Thangavelu",
    author_email="kapil.foss@gmail.com",
    url="http://launchpad.net/juju-deployer",
    install_requires=["jujuclient>=0.18", "PyYAML>=3.10"],
    packages=find_packages(),
    classifiers=[
        "Development Status :: 4 - Beta",
        "Programming Language :: Python",
        "Topic :: Internet",
        "Topic :: Software Development :: Libraries :: Python Modules",
        "Intended Audience :: System Administrators",
        "Intended Audience :: Developers"],
    test_suite="deployer.tests",
    entry_points={
        "console_scripts": [
            "juju-deployer = deployer.cli:main"]})
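The console_scripts entry point declared in setup.py is what makes the `juju-deployer` command available after installation: setuptools generates a small wrapper script that imports the referenced module and calls the referenced function. A rough sketch of that mechanism — the real generated script differs in detail across setuptools/pip versions, and `load_entry_point` here is only an illustration:

```python
# Approximate shape of the console-script wrapper that setuptools generates
# for "juju-deployer = deployer.cli:main"; the real generated script differs
# in detail across setuptools/pip versions.

def load_entry_point(target):
    """Resolve a 'package.module:function' entry-point string (illustrative)."""
    module_name, func_name = target.split(":")
    module = __import__(module_name, fromlist=[func_name])
    return getattr(module, func_name)

# The installed wrapper effectively runs:
#   sys.exit(load_entry_point("deployer.cli:main")())
# Demonstrated here with a standard-library target so the sketch is runnable:
func = load_entry_point("os.path:basename")
print(func("/tmp/stack.yaml"))  # stack.yaml
```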
juju-deployer-0.6.4/juju_deployer.egg-info/entry_points.txt:

[console_scripts]
juju-deployer = deployer.cli:main

juju-deployer-0.6.4/juju_deployer.egg-info/dependency_links.txt: (empty)

juju-deployer-0.6.4/juju_deployer.egg-info/SOURCES.txt:

LICENSE
MANIFEST.in
README
setup.cfg
setup.py
deployer/__init__.py
deployer/charm.py
deployer/cli.py
deployer/config.py
deployer/deployment.py
deployer/errors.py
deployer/feedback.py
deployer/guiserver.py
deployer/relation.py
deployer/service.py
deployer/utils.py
deployer/vcs.py
deployer/action/__init__.py
deployer/action/base.py
deployer/action/diff.py
deployer/action/export.py
deployer/action/importer.py
deployer/env/__init__.py
deployer/env/base.py
deployer/env/go.py
deployer/env/gui.py
deployer/env/mem.py
deployer/env/py.py
deployer/env/watchers.py
deployer/tests/__init__.py
deployer/tests/base.py
deployer/tests/mock.py
deployer/tests/test_base.py
deployer/tests/test_charm.py
deployer/tests/test_config.py
deployer/tests/test_constraints.py
deployer/tests/test_deployment.py
deployer/tests/test_diff.py
deployer/tests/test_goenv.py
deployer/tests/test_guienv.py
deployer/tests/test_guiserver.py
deployer/tests/test_importer.py
deployer/tests/test_pyenv.py
deployer/tests/test_service.py
deployer/tests/test_utils.py
deployer/tests/test_watchers.py
deployer/tests/test_data/blog-haproxy-services.yaml
deployer/tests/test_data/blog.snippet
deployer/tests/test_data/blog.yaml
deployer/tests/test_data/negative.cfg
deployer/tests/test_data/negative.yaml
deployer/tests/test_data/stack-default.cfg
deployer/tests/test_data/stack-include.template
deployer/tests/test_data/stack-include.yaml
deployer/tests/test_data/stack-includes.cfg
deployer/tests/test_data/stack-inherits.cfg
deployer/tests/test_data/stack-placement-invalid-2.yaml
deployer/tests/test_data/stack-placement-invalid-subordinate.yaml
deployer/tests/test_data/stack-placement-invalid.yaml
deployer/tests/test_data/stack-placement-maas.yml
deployer/tests/test_data/stack-placement.yaml
deployer/tests/test_data/wiki.yaml
deployer/tests/test_data/openstack/openstack.cfg
deployer/tests/test_data/openstack/openstack_base.cfg
deployer/tests/test_data/openstack/ubuntu_base.cfg
deployer/tests/test_data/precise/appsrv/metadata.yaml
deployer/tests/test_data/v4/container-existing-machine.yaml
deployer/tests/test_data/v4/container-new.yaml
deployer/tests/test_data/v4/container.yaml
deployer/tests/test_data/v4/fill_placement.yaml
deployer/tests/test_data/v4/hulk-smash-nounits-nomachines.yaml
deployer/tests/test_data/v4/hulk-smash-nounits.yaml
deployer/tests/test_data/v4/hulk-smash.yaml
deployer/tests/test_data/v4/placement-invalid-number.yaml
deployer/tests/test_data/v4/placement-invalid-subordinate.yaml
deployer/tests/test_data/v4/placement.yaml
deployer/tests/test_data/v4/series.yaml
deployer/tests/test_data/v4/simple.yaml
deployer/tests/test_data/v4/validate.yaml
juju_deployer.egg-info/PKG-INFO
juju_deployer.egg-info/SOURCES.txt
juju_deployer.egg-info/dependency_links.txt
juju_deployer.egg-info/entry_points.txt
juju_deployer.egg-info/requires.txt
juju_deployer.egg-info/top_level.txt

juju-deployer-0.6.4/juju_deployer.egg-info/PKG-INFO:

Metadata-Version: 1.1
Name: juju-deployer
Version: 0.6.4
Summary: A tool for deploying complex stacks with juju.
Home-page: http://launchpad.net/juju-deployer
Author: Kapil Thangavelu
Author-email: kapil.foss@gmail.com
License: UNKNOWN
Description: (identical to the README text above)
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python
Classifier: Topic :: Internet
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Developers

juju-deployer-0.6.4/juju_deployer.egg-info/requires.txt:

jujuclient>=0.18
PyYAML>=3.10

juju-deployer-0.6.4/juju_deployer.egg-info/top_level.txt:

deployer

juju-deployer-0.6.4/LICENSE:

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and
other kinds of works.

The licenses for most software and other practical works are designed to take
away your freedom to share and change the works. By contrast, the GNU General
Public License is intended to guarantee your freedom to share and change all
versions of a program--to make sure it remains free software for all its
users. We, the Free Software Foundation, use the GNU General Public License
for most of our software; it applies also to any other work released this way
by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the
freedom to distribute copies of free software (and charge for them if you
wish), that you receive source code or can get it if you want it, that you
can change the software or use pieces of it in new free programs, and that
you know you can do these things.

To protect your rights, we need to prevent others from denying you these
rights or asking you to surrender the rights. Therefore, you have certain
responsibilities if you distribute copies of the software, or if you modify
it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or
for a fee, you must pass on to the recipients the same freedoms that you
received. You must make sure that they, too, receive or can get the source
code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License giving
you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that
there is no warranty for this free software. For both users' and authors'
sake, the GPL requires that modified versions be marked as changed, so that
their problems will not be attributed erroneously to authors of previous
versions.

Some devices are designed to deny users access to install or run modified
versions of the software inside them, although the manufacturer can do so.
This is fundamentally incompatible with the aim of protecting users' freedom
to change the software. The systematic pattern of such abuse occurs in the
area of products for individuals to use, which is precisely where it is most
unacceptable. Therefore, we have designed this version of the GPL to
prohibit the practice for those products.
If such problems arise substantially in other domains, we stand ready to
extend this provision to those domains in future versions of the GPL, as
needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States
should not allow patents to restrict development and use of software on
general-purpose computers, but in those that do, we wish to avoid the special
danger that patents applied to a free program could make it effectively
proprietary. To prevent this, the GPL assures that patents cannot be used to
render the program non-free.

The precise terms and conditions for copying, distribution and modification
follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License.
Each licensee is addressed as "you". "Licensees" and "recipients" may be
individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a
fashion requiring copyright permission, other than the making of an exact
copy. The resulting work is called a "modified version" of the earlier work
or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the
Program.

To "propagate" a work means to do anything with it that, without permission,
would make you directly or secondarily liable for infringement under
applicable copyright law, except executing it on a computer or modifying a
private copy. Propagation includes copying, distribution (with or without
modification), making available to the public, and in some countries other
activities as well.

To "convey" a work means any kind of propagation that enables other parties
to make or receive copies.
Mere interaction with a user through a computer network, with no transfer of
a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the
extent that it includes a convenient and prominently visible feature that
(1) displays an appropriate copyright notice, and (2) tells the user that
there is no warranty for the work (except to the extent that warranties are
provided), that licensees may convey the work under this License, and how to
view a copy of this License. If the interface presents a list of user
commands or options, such as a menu, a prominent item in the list meets this
criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work for making
modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that is
widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than
the work as a whole, that (a) is included in the normal form of packaging a
Major Component, but which is not part of that Major Component, and
(b) serves only to enable use of the work with that Major Component, or to
implement a Standard Interface for which an implementation is available to
the public in source code form. A "Major Component", in this context, means
a major essential component (kernel, window system, and so on) of the
specific operating system (if any) on which the executable work runs, or a
compiler used to produce the work, or an object code interpreter used to run
it.

The "Corresponding Source" for a work in object code form means all the
source code needed to generate, install, and (for an executable work) run
the object code and to modify the work, including scripts to control those
activities.
However, it does not include the work's System Libraries, or general-purpose
tools or generally available free programs which are used unmodified in
performing those activities but which are not part of the work. For example,
Corresponding Source includes interface definition files associated with
source files for the work, and the source code for shared libraries and
dynamically linked subprograms that the work is specifically designed to
require, such as by intimate data communication or control flow between
those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate
automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of copyright
on the Program, and are irrevocable provided the stated conditions are met.
This License explicitly affirms your unlimited permission to run the
unmodified Program. The output from running a covered work is covered by
this License only if the output, given its content, constitutes a covered
work. This License acknowledges your rights of fair use or other equivalent,
as provided by copyright law.

You may make, run and propagate covered works that you do not convey,
without conditions so long as your license otherwise remains in force. You
may convey covered works to others for the sole purpose of having them make
modifications exclusively for you, or provide you with facilities for
running those works, provided that you comply with the terms of this License
in conveying all material for which you do not control copyright. Those thus
making or running the covered works for you must do so exclusively on your
behalf, under your direction and control, on terms that prohibit them from
making any copies of your copyrighted material outside their relationship
with you.
Conveying under any other circumstances is permitted solely under the
conditions stated below. Sublicensing is not allowed; section 10 makes it
unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure
under any applicable law fulfilling obligations under article 11 of the WIPO
copyright treaty adopted on 20 December 1996, or similar laws prohibiting or
restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention is
effected by exercising rights under this License with respect to the covered
work, and you disclaim any intention to limit operation or modification of
the work as a means of enforcing, against the work's users, your or third
parties' legal rights to forbid circumvention of technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive
it, in any medium, provided that you conspicuously and appropriately publish
on each copy an appropriate copyright notice; keep intact all notices
stating that this License and any non-permissive terms added in accord with
section 7 apply to the code; keep intact all notices of the absence of any
warranty; and give all recipients a copy of this License along with the
Program.

You may charge any price or no price for each copy that you convey, and you
may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce
it from the Program, in the form of source code under the terms of
section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified it, and
giving a relevant date.
b) The work must carry prominent notices stating that it is released under
this License and any conditions added under section 7. This requirement
modifies the requirement in section 4 to "keep intact all notices".

c) You must license the entire work, as a whole, under this License to
anyone who comes into possession of a copy. This License will therefore
apply, along with any applicable section 7 additional terms, to the whole of
the work, and all its parts, regardless of how they are packaged. This
License gives no permission to license the work in any other way, but it
does not invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your work need not
make them do so.

A compilation of a covered work with other separate and independent works,
which are not by their nature extensions of the covered work, and which are
not combined with it such as to form a larger program, in or on a volume of
a storage or distribution medium, is called an "aggregate" if the
compilation and its resulting copyright are not used to limit the access or
legal rights of the compilation's users beyond what the individual works
permit. Inclusion of a covered work in an aggregate does not cause this
License to apply to the other parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of
sections 4 and 5, provided that you also convey the machine-readable
Corresponding Source under the terms of this License, in one of these ways:

a) Convey the object code in, or embodied in, a physical product (including
a physical distribution medium), accompanied by the Corresponding Source
fixed on a durable physical medium customarily used for software
interchange.
b) Convey the object code in, or embodied in, a physical product (including
a physical distribution medium), accompanied by a written offer, valid for
at least three years and valid for as long as you offer spare parts or
customer support for that product model, to give anyone who possesses the
object code either (1) a copy of the Corresponding Source for all the
software in the product that is covered by this License, on a durable
physical medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this conveying of
source, or (2) access to copy the Corresponding Source from a network server
at no charge.

c) Convey individual copies of the object code with a copy of the written
offer to provide the Corresponding Source. This alternative is allowed only
occasionally and noncommercially, and only if you received the object code
with such an offer, in accord with subsection 6b.

d) Convey the object code by offering access from a designated place (gratis
or for a charge), and offer equivalent access to the Corresponding Source in
the same way through the same place at no further charge. You need not
require recipients to copy the Corresponding Source along with the object
code. If the place to copy the object code is a network server, the
Corresponding Source may be on a different server (operated by you or a
third party) that supports equivalent copying facilities, provided you
maintain clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the Corresponding
Source, you remain obligated to ensure that it is available for as long as
needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided you
inform other peers where the object code and Corresponding Source of the
work are being offered to the general public at no charge under
subsection 6d.
A separable portion of the object code, whose source code is excluded from
the Corresponding Source as a System Library, need not be included in
conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family, or
household purposes, or (2) anything designed or sold for incorporation into
a dwelling. In determining whether a product is a consumer product, doubtful
cases shall be resolved in favor of coverage. For a particular product
received by a particular user, "normally used" refers to a typical or common
use of that class of product, regardless of the status of the particular
user or of the way in which the particular user actually uses, or expects or
is expected to use, the product. A product is a consumer product regardless
of whether the product has substantial commercial, industrial or
non-consumer uses, unless such uses represent the only significant mode of
use of the product.

"Installation Information" for a User Product means any methods, procedures,
authorization keys, or other information required to install and execute
modified versions of a covered work in that User Product from a modified
version of its Corresponding Source. The information must suffice to ensure
that the continued functioning of the modified object code is in no case
prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as part of
a transaction in which the right of possession and use of the User Product
is transferred to the recipient in perpetuity or for a fixed term
(regardless of how the transaction is characterized), the Corresponding
Source conveyed under this section must be accompanied by the Installation
Information.
But this requirement does not apply if neither you nor any third party
retains the ability to install modified object code on the User Product
(for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates for
a work that has been modified or installed by the recipient, or for the User
Product in which it has been modified or installed. Access to a network may
be denied when the modification itself materially and adversely affects the
operation of the network or violates the rules and protocols for
communication across the network.

Corresponding Source conveyed, and Installation Information provided, in
accord with this section must be in a format that is publicly documented
(and with an implementation available to the public in source code form),
and must require no special password or key for unpacking, reading or
copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License
by making exceptions from one or more of its conditions. Additional
permissions that are applicable to the entire Program shall be treated as
though they were included in this License, to the extent that they are valid
under applicable law. If additional permissions apply only to part of the
Program, that part may be used separately under those permissions, but the
entire Program remains governed by this License without regard to the
additional permissions.

When you convey a copy of a covered work, you may at your option remove any
additional permissions from that copy, or from any part of it. (Additional
permissions may be written to require their own removal in certain cases
when you modify the work.) You may place additional permissions on material,
added by you to a covered work, for which you have or can give appropriate
copyright permission.
Notwithstanding any other provision of this License, for material you add to
a covered work, you may (if authorized by the copyright holders of that
material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the terms of
sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or author
attributions in that material or in the Appropriate Legal Notices displayed
by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in reasonable
ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or authors
of the material; or

e) Declining to grant rights under trademark law for use of some trade
names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that material by
anyone who conveys the material (or modified versions of it) with
contractual assumptions of liability to the recipient, for any liability
that these contractual assumptions directly impose on those licensors and
authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further restriction,
you may remove that term. If a license document contains a further
restriction but permits relicensing or conveying under this License, you may
add to a covered work material governed by the terms of that license
document, provided that the further restriction does not survive such
relicensing or conveying.
If you add terms to a covered work in accord with this section, you must
place, in the relevant source files, a statement of the additional terms
that apply to those files, or a notice indicating where to find the
applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of
a separately written license, or stated as exceptions; the above
requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly provided
under this License. Any attempt otherwise to propagate or modify it is void,
and will automatically terminate your rights under this License (including
any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from
a particular copyright holder is reinstated (a) provisionally, unless and
until the copyright holder explicitly and finally terminates your license,
and (b) permanently, if the copyright holder fails to notify you of the
violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated
permanently if the copyright holder notifies you of the violation by some
reasonable means, this is the first time you have received notice of
violation of this License (for any work) from that copyright holder, and you
cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under this
License. If your rights have been terminated and not permanently reinstated,
you do not qualify to receive new licenses for the same material under
section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a
copy of the Program.
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. 
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read .juju-deployer-0.6.4/deployer/0000775000175000017500000000000012666044061022271 5ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/charm.py0000664000175000017500000002010212665642670023741 0ustar tvansteenburghtvansteenburgh00000000000000import logging import os import urllib2 import shutil from .vcs import Git, Bzr from .utils import ( _check_call, _get_juju_home, extract_zip, get_qualified_charm_url, path_join, path_exists, STORE_URL, temp_file, yaml_load) class Charm(object): log = logging.getLogger('deployer.charm') def __init__(self, name, path, branch, rev, build, charm_url=""): self.name = name self._path = path self.branch = branch self.rev = rev self._charm_url = charm_url self._build = build self.vcs = self.get_vcs() def is_git_branch(self): return self.branch.startswith('git') or \ self.branch.find("review.openstack.org") != -1 or \ "github.com" in self.branch or \ "git.launchpad.net" in self.branch or \ os.path.exists(os.path.join(self.branch, '.git')) def get_vcs(self): if not self.branch: return None if self.is_git_branch(): return Git(self.path, self.branch, self.log) elif self.branch.startswith("bzr") or self.branch.startswith('lp:') \ or 
os.path.exists(os.path.join(self.branch, '.bzr')) \ or self.branch.startswith('file:'): return Bzr(self.path, self.branch, self.log) raise ValueError( "Could not determine vcs backend for %s" % self.branch) @classmethod def from_service(cls, name, repo_path, deploy_series, data): """ name: service name data['charm']: charm name or store charm url data['charm_url'] store charm url """ branch, rev, series = None, None, None charm_branch = data.get('branch') if charm_branch is not None: branch, sep, rev = charm_branch.partition('@') charm_path, store_url, build = None, None, None name = data.get('charm', name) if name.startswith('cs:'): store_url = name elif name.startswith('local:'): # Support vcs charms specifying their parts = name[len('local:'):].split('/') if len(parts) == 2: series, name = parts elif data.get('series'): series = data['series'] name = parts.pop() else: series = deploy_series charm_path = path_join(repo_path, series, name) elif os.path.isabs(name): # charm points to an absolute local path charm_path = name.rstrip(os.path.sep) elif 'series' in data: series = data['series'] charm_path = path_join(repo_path, series, name) else: charm_path = path_join(repo_path, deploy_series, name) if not store_url: store_url = data.get('charm_url', None) if store_url and branch: cls.log.error( "Service: %s has both charm url: %s and branch: %s specified", name, store_url, branch) if not store_url: build = data.get('build', '') return cls(name, charm_path, branch, rev, build, store_url) def is_absolute(self): """Charm config points to an absolute path on disk. """ return os.path.isabs(self.name) @property def repo_path(self): """The Juju repository path in which this charm resides. For most charms this returns None, leaving the repo path to be determined by the Deployment that deploys the charm. For charms at an absolute path, however, the repo path is by definition the directory two levels up from the charm. (And the series directory is one level up.) 
This allows us to deploy charms from anywhere on the filesystem without first gathering them under one repository path. """ if self.is_absolute(): d = os.path.dirname return d(d(self.path)) return None def is_local(self): if self._charm_url: if self._charm_url.startswith('cs:'): return False return True def exists(self): return self.is_local() and path_exists(self.path) def is_subordinate(self): return self.metadata.get('subordinate', False) @property def charm_url(self): if self._charm_url: return self._charm_url series = os.path.basename(os.path.dirname(self.path)) charm_name = self.metadata['name'] return "local:%s/%s" % (series, charm_name) def build(self): if not self._build: return self.log.debug("Building charm %s with %s", self.path, self._build) _check_call([self._build], self.log, "Charm build failed %s @ %s", self._build, self.path, cwd=self.path, shell=True) def fetch(self): if self._charm_url: self._fetch_store_charm() return elif self.is_absolute(): return elif not self.branch: self.log.warning("Invalid charm specification %s", self.name) return self.log.debug(" Branching charm %s @ %s", self.branch, self.path) self.vcs.branch() if self.rev: self.vcs.update(self.rev) self.build() @property def path(self): if not self.is_local() and not self._path: self._path = self._get_charm_store_cache() return self._path @property def series_path(self): if self.is_absolute() or not self.is_local(): return None return os.path.dirname(self.path) def _fetch_store_charm(self, update=False): cache_dir = self._get_charm_store_cache() self.log.debug("Cache dir %s", cache_dir) if os.path.exists(cache_dir) and not update: return qualified_url = get_qualified_charm_url(self.charm_url) self.log.debug("Retrieving store charm %s" % qualified_url) if update and os.path.exists(cache_dir): shutil.rmtree(cache_dir) store_url = "%s/charm/%s" % (STORE_URL, qualified_url[3:]) with temp_file() as fh: ufh = urllib2.urlopen(store_url) shutil.copyfileobj(ufh, fh) fh.flush() 
extract_zip(fh.name, self.path) self.config def _get_charm_store_cache(self): assert not self.is_local(), "Attempt to get store charm for local" # Cache jhome = _get_juju_home() cache_dir = os.path.join(jhome, ".deployer-store-cache") if not os.path.exists(cache_dir): os.mkdir(cache_dir) return os.path.join( cache_dir, self.charm_url.replace(':', '_').replace("/", "_")) def update(self, build=False): if not self.branch: return assert self.exists() self.log.debug(" Updating charm %s from %s", self.path, self.branch) self.vcs.update(self.rev) if build: self.build() def is_modified(self): if not self.branch: return False return self.vcs.is_modified() @property def config(self): config_path = path_join(self.path, "config.yaml") if not path_exists(config_path): return {} with open(config_path) as fh: return yaml_load(fh.read()).get('options', {}) @property def metadata(self): md_path = path_join(self.path, "metadata.yaml") if not path_exists(md_path): if not path_exists(self.path): raise RuntimeError("No charm metadata @ %s", md_path) with open(md_path) as fh: return yaml_load(fh.read()) def get_provides(self): p = {'juju-info': [{'name': 'juju-info'}]} for key, value in self.metadata['provides'].items(): value['name'] = key p.setdefault(value['interface'], []).append(value) return p def get_requires(self): r = {} for key, value in self.metadata['requires'].items(): value['name'] = key r.setdefault(value['interface'], []).append(value) return r juju-deployer-0.6.4/deployer/service.py0000664000175000017500000003432412632572262024313 0ustar tvansteenburghtvansteenburgh00000000000000import itertools from feedback import Feedback class Service(object): def __init__(self, name, svc_data): self.svc_data = svc_data self.name = name def __repr__(self): return "" % (self.name) @property def annotations(self): a = self.svc_data.get('annotations') if a is None: return a # core annotations only supports string key / values d = {} for k, v in a.items(): d[str(k)] = str(v) return d 
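The `Service` class above normalizes raw v3 service stanzas lazily through properties: `num_units` is coerced to int, a scalar `to`/`force-machine` placement becomes a list of strings, and annotation keys/values are stringified. A minimal standalone sketch of those same normalization rules (the `svc_data` values here are hypothetical, and this is not the shipped class):

```python
# Hypothetical raw stanza as it might arrive from YAML.
svc_data = {
    'charm': 'wordpress',
    'num_units': '3',        # may arrive as a string
    'to': 0,                 # scalar placement -> list of strings
    'annotations': {1: True},  # keys and values coerced to str
}

# num_units defaults to 1 and is coerced to int.
num_units = int(svc_data.get('num_units', 1))

# 'to' wins over the legacy 'force-machine' key; scalars become lists,
# and every entry is stringified (so machine 0 survives as '0').
placement = svc_data.get('to')
if placement is None:
    placement = svc_data.get('force-machine')
if placement is not None and not isinstance(placement, list):
    placement = [placement]
placement = [str(p) for p in placement] if placement else []

# Core annotations only support string keys/values.
annotations = dict(
    (str(k), str(v)) for k, v in svc_data['annotations'].items())
```

Note the separate `None` checks: a simple truthiness test would discard a legitimate placement on machine `0`.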
@property def config(self): return self.svc_data.get('options', None) @property def constraints(self): return self.svc_data.get('constraints', None) @property def num_units(self): return int(self.svc_data.get('num_units', 1)) @property def unit_placement(self): # Separate checks to support machine 0 placement. value = self.svc_data.get('to') if value is None: value = self.svc_data.get('force-machine') if value is not None and not isinstance(value, list): value = [value] return value and map(str, value) or [] @property def expose(self): return self.svc_data.get('expose', False) class ServiceUnitPlacement(object): def __init__(self, service, deployment, status, arbitrary_machines=False): self.service = service self.deployment = deployment self.status = status self.arbitrary_machines = arbitrary_machines @staticmethod def _format_placement(machine, container=None): if container: return "%s:%s" % (container, machine) else: return machine def colocate(self, status, placement, u_idx, container, svc): """Colocate one service with an existing service either within a container alongside that service or hulk-smashed onto the same unit. status: the environment status. placement: the placement directive of the unit to be colocated. u_idx: the unit index of the unit to be colocated. container: a string containing the container type, or None. svc: the service object for this placement. """ with_service = status['services'].get(placement) if with_service is None: # Should be caught in validate relations but sanity check # for concurrency. self.deployment.log.error( "Service %s to be deployed with non-existent service %s", svc.name, placement) # Prefer continuing deployment with a new machine rather # than an in-progress abort. 
return None svc_units = with_service['units'] if int(u_idx) >= len(svc_units): self.deployment.log.warning( "Service:%s, Deploy-with-service:%s, Requested-unit-index=%s, " "Cannot solve, falling back to default placement", svc.name, placement, u_idx) return None unit_names = svc_units.keys() unit_names.sort() machine = svc_units[unit_names[int(u_idx)]].get('machine') if not machine: self.deployment.log.warning( "Service:%s deploy-with unit missing machine %s", svc.name, unit_names[int(u_idx)]) return None return self._format_placement(machine, container) class ServiceUnitPlacementV3(ServiceUnitPlacement): def _parse_placement(self, unit_placement): placement = unit_placement container = None u_idx = None if ':' in unit_placement: container, placement = unit_placement.split(":") if '=' in placement: placement, u_idx = placement.split("=") return container, placement, u_idx def validate(self): feedback = Feedback() unit_placement = self.service.unit_placement if unit_placement is None: return feedback if not isinstance(unit_placement, list): unit_placement = [unit_placement] unit_placement = map(str, unit_placement) services = dict([(s.name, s) for s in self.deployment.get_services()]) machines = self.deployment.get_machines() for idx, p in enumerate(unit_placement): container, p, u_idx = self._parse_placement(p) if container and container not in ('lxc', 'kvm'): feedback.error( "Invalid container type:%s service: %s placement: %s" \ % (container, self.service.name, unit_placement[idx])) if u_idx: if p in ('maas', 'zone'): continue if not u_idx.isdigit(): feedback.error( "Invalid service:%s placement: %s" % ( self.service.name, unit_placement[idx])) if p.isdigit(): if p == '0' or p in machines or self.arbitrary_machines: continue else: feedback.error( ("Service placement to machine " "not supported %s to %s") % ( self.service.name, unit_placement[idx])) elif p in services: if services[p].unit_placement: feedback.error( "Nested placement not supported %s -> %s -> %s" % 
( self.service.name, p, services[p].unit_placement)) elif self.deployment.get_charm_for(p).is_subordinate(): feedback.error( "Cannot place to a subordinate service: %s -> %s" % ( self.service.name, p)) else: feedback.error( "Invalid service placement %s to %s" % ( self.service.name, unit_placement[idx])) return feedback def get(self, unit_number): """Get the placement directive for a given unit. unit_number: the number of the unit to deploy """ status = self.status svc = self.service unit_mapping = svc.unit_placement if not unit_mapping: return None if len(unit_mapping) <= unit_number: return None unit_placement = placement = str(unit_mapping[unit_number]) container = None u_idx = unit_number if ':' in unit_placement: container, placement = unit_placement.split(":") if '=' in placement: placement, u_idx = placement.split("=") if placement.isdigit(): if self.arbitrary_machines or placement == '0': return self._format_placement(placement, container) if placement == 'maas': return u_idx elif placement == "zone": return "zone=%s" % u_idx return self.colocate(status, placement, u_idx, container, svc) class ServiceUnitPlacementV4(ServiceUnitPlacement): def __init__(self, service, deployment, status, arbitrary_machines=False, machines_map=None): super(ServiceUnitPlacementV4, self).__init__( service, deployment, status, arbitrary_machines=arbitrary_machines) # Arbitrary machines will not be allowed in v4 bundles. self.arbitrary_machines = False self.machines_map = machines_map # Ensure that placement spec is filled according to the bundle # specification. self._fill_placement() def _fill_placement(self): """Fill the placement spec with necessary data. From the spec: A unit placement may be specified with a service name only, in which case its unit number is assumed to be one more than the unit number of the previous unit in the list with the same service, or zero if there were none. If there are less elements in To than NumUnits, the last element is replicated to fill it. 
If there are no elements (or To is omitted), "new" is replicated. """ unit_mapping = self.service.unit_placement unit_count = self.service.num_units if not unit_mapping: self.service.svc_data['to'] = ['new'] * unit_count return self.service.svc_data['to'] = ( unit_mapping + list(itertools.repeat(unit_mapping[-1], unit_count - len(unit_mapping))) ) unit_mapping = self.service.unit_placement colocate_counts = {} for idx, mapping in enumerate(unit_mapping): service = mapping if ':' in mapping: service = mapping.split(':')[1] if service in self.deployment.data['services']: unit_number = colocate_counts.setdefault(service, 0) unit_mapping[idx] = "{}/{}".format(mapping, unit_number) colocate_counts[service] += 1 self.service.svc_data['to'] = unit_mapping def _parse_placement(self, placement): """Parse a unit placement statement. In version 4 bundles, unit placement statements take the form of (:)?(||new) This splits the placement into a container, a placement, and a unit number. Both container and unit number are optional and can be None. """ container = unit_number = None if ':' in placement: container, placement = placement.split(':') if '/' in placement: placement, unit_number = placement.split('/') return container, placement, unit_number def validate(self): """Validate the placement of a service and all of its units. If a service has a 'to' block specified, the list of machines, units, containers, and/or services must be internally consistent, consistent with other services in the deployment, and consistent with any machines specified in the 'machines' block of the deployment. A feedback object is returned, potentially with errors and warnings inside it. 
""" feedback = Feedback() unit_placement = self.service.unit_placement if unit_placement is None: return feedback if not isinstance(unit_placement, (list, tuple)): unit_placement = [unit_placement] unit_placement = map(str, unit_placement) services = dict([(s.name, s) for s in self.deployment.get_services()]) machines = self.deployment.get_machines() service_name = self.service.name for i, placement in enumerate(unit_placement): container, target, unit_number = self._parse_placement(placement) # Validate the container type. if container and container not in ('lxc', 'kvm'): feedback.error( 'Invalid container type: {} service: {} placement: {}' ''.format(container, service_name, placement)) # Specify an existing machine (or, if the number is in the # list of machine specs, one of those). if str(target) in machines: continue if target.isdigit(): feedback.error( 'Service placement to machine not supported: {} to {}' ''.format(service_name, placement)) # Specify a service for co-location. elif target in services: # Specify a particular unit for co-location. if unit_number is not None: try: unit_number = int(unit_number) except (TypeError, ValueError): feedback.error( 'Invalid unit number for placement: {} to {}' ''.format(service_name, unit_number)) continue if unit_number > services[target].num_units: feedback.error( 'Service unit does not exist: {} to {}/{}' ''.format(service_name, target, unit_number)) continue if self.deployment.get_charm_for(target).is_subordinate(): feedback.error( 'Cannot place to a subordinate service: {} -> {}' ''.format(service_name, target)) # Create a new machine or container. 
            elif target == 'new':
                continue
            else:
                feedback.error(
                    'Invalid service placement: {} to {}'
                    ''.format(service_name, placement))
        return feedback

    def get_new_machines_for_containers(self):
        """Return a list of containers in the service's unit placement that
        have been requested to be put on new machines."""
        new_machines = []
        unit = itertools.count()
        for placement in self.service.unit_placement:
            if ':new' in placement:
                # Generate a name for this machine to be used in the
                # machines_map used later; as a quick path forward, simply
                # use the unit's name.
                new_machines.append(
                    '{}/{}'.format(self.service.name, unit.next()))
        return new_machines

    def get(self, unit_number):
        """Get the placement directive for a given unit.

        unit_number: the number of the unit to deploy
        """
        status = self.status
        svc = self.service

        unit_mapping = svc.unit_placement
        if not unit_mapping:
            return None

        unit_placement = placement = str(unit_mapping[unit_number])
        container = None
        u_idx = unit_number

        # Shortcut for new machines.
        if placement == 'new':
            return None

        container, placement, unit_number = self._parse_placement(
            unit_placement)
        if placement in self.machines_map:
            return self._format_placement(
                self.machines_map[placement], container)
        # Handle <container>:new
        if placement == 'new':
            return self._format_placement(
                self.machines_map['%s/%d' % (self.service.name, u_idx)],
                container)

        return self.colocate(status, placement, u_idx, container, svc)

juju-deployer-0.6.4/deployer/relation.py

import yaml


class Endpoint(object):

    def __init__(self, ep):
        self.ep = ep
        self.name = None
        if ":" in self.ep:
            self.service, self.name = self.ep.split(":")
        else:
            self.service = ep


class EndpointPair(object):
    # Really simple endpoint service matching that does not work for multiple
    # relations between two services (used by diff at the moment)

    def __init__(self, ep_x, ep_y=None):
        self.ep_x = Endpoint(ep_x)
        self.ep_y = ep_y and Endpoint(ep_y)
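The v4 placement grammar handled by `_parse_placement` above is `(<container>:)?(<machine>|<service>|new)(/<unit>)?`. A hedged, standalone sketch of that split (illustrative function name, not the shipped method):

```python
def parse_placement(placement):
    """Split a v4 placement string into (container, target, unit_number).

    Container and unit number are optional and come back as None when
    absent, mirroring the two-step split used in the deployer code.
    """
    container = unit_number = None
    if ':' in placement:
        container, placement = placement.split(':')
    if '/' in placement:
        placement, unit_number = placement.split('/')
    return container, placement, unit_number
```

For example, `parse_placement('lxc:mysql/0')` yields `('lxc', 'mysql', '0')`, while a bare `'new'` yields `(None, 'new', None)`.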
    def __eq__(self, ep_pair):
        if not isinstance(ep_pair, EndpointPair):
            return False
        return (ep_pair.ep_x.service in self
                and ep_pair.ep_y.service in self)

    def __contains__(self, svc_name):
        return (svc_name == self.ep_x.service
                or svc_name == self.ep_y.service)

    def __hash__(self):
        return hash(tuple(sorted(
            (self.ep_x.service, self.ep_y.service))))

    def __repr__(self):
        return "%s <-> %s" % (
            self.ep_x.ep, self.ep_y.ep)

    @staticmethod
    def to_yaml(dumper, data):
        return dumper.represent_list([[data.ep_x.ep, data.ep_y.ep]])

yaml.add_representer(EndpointPair, EndpointPair.to_yaml)

juju-deployer-0.6.4/deployer/env/go.py

import time

from .base import BaseEnvironment
from ..utils import (
    ErrorExit,
    parse_constraints,
)
from jujuclient import (
    EnvError,
    Environment as EnvironmentClient,
    UnitErrors,
)
from .watchers import (
    raise_on_errors,
    WaitForMachineTermination,
    WaitForUnits,
)


class GoEnvironment(BaseEnvironment):

    def __init__(self, name, options=None, endpoint=None):
        self.name = name
        self.options = options
        self.api_endpoint = endpoint
        self.client = None

    def add_machine(self, series="", constraints={}):
        """Add a top level machine to the Juju environment.

        Use the given series and constraints.
        Return the machine identifier (e.g. "1").

        series: a string such as 'precise' or 'trusty'.
        constraints: a map of constraints (such as mem, arch, etc.)
            which can be parsed by utils.parse_constraints
        """
        return self.client.add_machine(
            series=series,
            constraints=parse_constraints(constraints))['Machine']

    def add_unit(self, service_name, machine_spec):
        return self.client.add_unit(service_name, machine_spec)

    def add_units(self, service_name, num_units):
        return self.client.add_units(service_name, num_units)

    def add_relation(self, endpoint_a, endpoint_b):
        return self.client.add_relation(endpoint_a, endpoint_b)

    def close(self):
        if self.client:
            self.client.close()

    def connect(self):
        self.log.debug("Connecting to environment...")
        with open("/dev/null", 'w') as fh:
            self._check_call(
                self._named_env(["juju", "api-endpoints"]),
                self.log, "Error getting env api endpoints, env bootstrapped?",
                stderr=fh)
        self.client = EnvironmentClient.connect(self.name)
        self.log.debug("Connected to environment")

    def get_config(self, svc_name):
        return self.client.get_config(svc_name)

    def get_constraints(self, svc_name):
        try:
            return self.client.get_constraints(svc_name)
        except EnvError as err:
            if 'constraints do not apply to subordinate services' in str(err):
                return {}
            raise

    def expose(self, name):
        return self.client.expose(name)

    def reset(self, terminate_machines=False, terminate_delay=0,
              timeout=360, watch=False, force_terminate=False):
        """Destroy/reset the environment."""
        status = self.status()
        destroyed = False
        for s in status.get('services', {}).keys():
            self.log.debug(" Destroying service %s", s)
            self.client.destroy_service(s)
            destroyed = True
        if destroyed:
            # Mark any errors as resolved so destruction can proceed.
            self.resolve_errors()
            if not (terminate_machines and force_terminate):
                # Wait for units to be removed. Only necessary if we're not
                # force-terminating machines.
                self.wait_for_units(timeout, goal_state='removed', watch=watch)
        # The only value to not terminating is keeping the data on the
        # machines around.
        if not terminate_machines:
            self.log.info(
                " warning: juju-core machines are not reusable for units")
            return
        self._terminate_machines(
            status, watch, terminate_delay, force_terminate)

    def _terminate_machines(self, status, watch, terminate_wait, force):
        """Terminate all machines, optionally wait for termination.
        """
        # Terminate machines
        self.log.debug(
            " Terminating machines %s", 'forcefully' if force else '')

        # Don't bother if there are no service unit machines
        if len(status['machines']) == 1:
            return

        # containers before machines, container hosts post wait.
        machines = status['machines'].keys()
        container_hosts = set()
        containers = set()

        def machine_sort(x, y):
            for ctype in ('lxc', 'kvm'):
                for m in (x, y):
                    if ctype in m:
                        container_hosts.add(m.split('/', 1)[0])
                        containers.add(m)
                        if m == x:
                            return -1
                        if m == y:
                            return 1
            return cmp(x, y)

        machines.sort(machine_sort)

        for mid in machines:
            self._terminate_machine(mid, container_hosts, force=force)

        if containers:
            watch = self.client.get_watch(120)
            WaitForMachineTermination(
                watch, containers).run(self._delta_event_log)

        for mid in container_hosts:
            self._terminate_machine(mid, force=force)

        if terminate_wait:
            self.log.info(" Waiting for machine termination")
            callback = watch and self._delta_event_log or None
            self.client.wait_for_no_machines(terminate_wait, callback)

    def _terminate_machine(self, mid, container_hosts=(), force=False):
        if mid == "0":
            return
        if mid in container_hosts:
            return
        self.log.debug(" Terminating machine %s", mid)
        self.terminate_machine(mid, force=force)

    def _check_timeout(self, etime):
        w_timeout = etime - time.time()
        if w_timeout < 0:
            self.log.error("Timeout reached while resolving errors")
            raise ErrorExit()
        return w_timeout

    def resolve_errors(self, retry_count=0, timeout=600, watch=False,
                       delay=5):
        """Resolve any unit errors in the environment.

        If retry_count is given then the hook execution is reattempted.
        The system will do up to retry_count passes through the system
        resolving errors.
        If retry count is not given, the error is marked resolved
        permanently.
        """
        etime = time.time() + timeout
        count = 0
        while True:
            error_units = self._get_units_in_error()
            for e_uid in error_units:
                try:
                    self.client.resolved(e_uid, retry=bool(retry_count))
                    self.log.debug(" Resolving error on %s", e_uid)
                except EnvError as err:
                    # Match on the error message, as done elsewhere in this
                    # module; membership tests on the exception object itself
                    # raise TypeError.
                    if 'already resolved' in str(err):
                        continue
            if not error_units:
                if not count:
                    self.log.debug(" No unit errors found.")
                else:
                    self.log.debug(" No more unit errors found.")
                return
            w_timeout = self._check_timeout(etime)
            if retry_count:
                time.sleep(delay)
            count += 1
            try:
                self.wait_for_units(
                    timeout=int(w_timeout), watch=True,
                    on_errors=raise_on_errors(UnitErrors))
            except UnitErrors as err:
                if retry_count == count:
                    self.log.info(
                        " Retry count %d exhausted, but units in error (%s)",
                        retry_count,
                        " ".join(u['Name'] for u in err.errors))
                    return
            else:
                return

    def set_annotation(self, entity_name, annotation, entity_type='service'):
        """Set an annotation on an entity.

        entity_name: the name of the entity (machine, service, etc.)
            to annotate.
        annotation: a dict of key/value pairs to set on the entity.
        entity_type: the type of entity (machine, service, etc.) to annotate.
        """
        return self.client.set_annotation(
            entity_name, entity_type, annotation)

    def status(self):
        return self.client.get_stat()

    def log_errors(self, errors):
        """Log the given unit errors.

        This can be used in the WaitForUnits error handling machinery,
        e.g. see deployer.watchers.log_on_errors.
""" messages = [ 'unit: {Name}: machine: {MachineId} agent-state: {Status} ' 'details: {StatusInfo}'.format(**error) for error in errors ] self.log.error( 'The following units had errors:\n {}'.format( ' \n'.join(messages))) def wait_for_units( self, timeout, goal_state="started", watch=False, services=None, on_errors=None): """Wait for units to reach a given condition.""" callback = self._delta_event_log if watch else None watcher = self.client.get_watch(timeout) WaitForUnits( watcher, goal_state=goal_state, services=services, on_errors=on_errors).run(callback) def _delta_event_log(self, et, ct, d): # event type, change type, data name = d.get('Name', d.get('Id', 'unknown')) state = d.get('Status', d.get('Life', 'unknown')) if et == "relation": name = self._format_endpoints(d['Endpoints']) state = "created" if ct == "remove": state = "removed" self.log.debug( " Delta %s: %s %s:%s", et, name, ct, state) def _format_endpoints(self, eps): if len(eps) == 1: ep = eps.pop() return "[%s:%s:%s]" % ( ep['ServiceName'], ep['Relation']['Name'], ep['Relation']['Role']) return "[%s:%s <-> %s:%s]" % ( eps[0]['ServiceName'], eps[0]['Relation']['Name'], eps[1]['ServiceName'], eps[1]['Relation']['Name']) juju-deployer-0.6.4/deployer/env/base.py0000664000175000017500000001460012600347110024333 0ustar tvansteenburghtvansteenburgh00000000000000import logging from ..utils import ( _get_juju_home, path_join, yaml_load, ErrorExit, yaml_dump, temp_file, _check_call) class BaseEnvironment(object): log = logging.getLogger("deployer.env") @property def env_config_path(self): jhome = _get_juju_home() env_config_path = path_join(jhome, 'environments.yaml') return env_config_path def _check_call(self, *args, **kwargs): if self.options and self.options.retry_count: kwargs['max_retry'] = self.options.retry_count return _check_call(*args, **kwargs) def _named_env(self, params): if self.name: params.extend(["-e", self.name]) return params def _get_env_config(self): with open(self.env_config_path) 
as fh: config = yaml_load(fh.read()) if self.name: if self.name not in config['environments']: self.log.error("Environment %r not found", self.name) raise ErrorExit() return config['environments'][self.name] else: env_name = config.get('default') if env_name is None: if len(config['environments'].keys()) == 1: env_name = config['environments'].keys().pop() else: self.log.error("Ambigious operation environment") raise ErrorExit() return config['environments'][env_name] def _write_config(self, svc_name, config, fh): fh.write(yaml_dump({svc_name: config})) fh.flush() # def _write_config(self, svc_name, config, fh): # fh.write(yaml_dump(config)) # fh.flush() def _get_units_in_error(self, status=None): units = [] if status is None: status = self.status() for s in status.get('services', {}).keys(): for uid, u in status['services'][s].get('units', {}).items(): if 'error' in u['agent-state']: units.append(uid) for uid, u in u.get('subordinates', {}).items(): if 'error' in u['agent-state']: units.append(uid) return units def bootstrap(self, constraints=None): self.log.info("bootstrapping, this might take a while...") params = ["juju", "bootstrap"] if constraints: params.extend(['--constraints', constraints]) self._named_env(params) self._check_call( params, self.log, "Failed to bootstrap") # block until topology is returned self.get_cli_status() self.log.info(" Bootstrap complete") def deploy(self, name, charm_url, repo=None, config=None, constraints=None, num_units=1, force_machine=None): params = self._named_env(["juju", "deploy"]) with temp_file() as fh: if config: fh.write(yaml_dump({name: config})) fh.flush() params.extend(["--config", fh.name]) if constraints: if isinstance(constraints, list): constraints = ' '.join(constraints) if isinstance(constraints, dict): constraints = ' '.join([ '{}={}'.format(k, v) for k, v in constraints.items() ]) params.extend(['--constraints', constraints]) if num_units not in (1, None): params.extend(["--num-units", str(num_units)]) if 
charm_url.startswith('local'): if repo == "": repo = "." params.extend(["--repository=%s" % repo]) if force_machine is not None: params.extend(["--to=%s" % force_machine]) params.extend([charm_url, name]) self._check_call( params, self.log, "Error deploying service %r", name) def expose(self, name): params = self._named_env(["juju", "expose", name]) self._check_call( params, self.log, "Error exposing service %r", name) def terminate_machine(self, mid, wait=False, force=False): """Terminate a machine. Unless ``force=True``, the machine can't have any running units. After removing the units or destroying the service, use wait_for_units to know when its safe to delete the machine (i.e., units have finished executing stop hooks and are removed). """ if ((isinstance(mid, int) and mid == 0) or (mid.isdigit() and int(mid) == 0)): # Don't kill state server raise RuntimeError("Can't terminate machine 0") params = self._named_env(["juju", "terminate-machine"]) params.append(mid) if force: params.append('--force') try: self._check_call( params, self.log, "Error terminating machine %r" % mid) except ErrorExit, e: if ("machine %s does not exist" % mid) in e.error.output: return raise def get_service_address(self, svc_name): status = self.get_cli_status() if svc_name not in status['services']: self.log.warning("Service %s does not exist", svc_name) return None svc = status['services'][svc_name] if 'subordinate-to' in svc: ps = svc['subordinate-to'][0] self.log.info( 'Service %s is a subordinate to %s, finding principle service' % (svc_name, ps)) return self.get_service_address(svc['subordinate-to'][0]) units = svc.get('units', {}) unit_keys = list(sorted(units.keys())) if unit_keys: return units[unit_keys[0]].get('public-address', '') self.log.warning("Service %s has no units" % svc_name) def get_cli_status(self): params = self._named_env(["juju", "status", "--format=yaml"]) with open('/dev/null', 'w') as fh: output = self._check_call( params, self.log, "Error getting status, is 
it bootstrapped?", stderr=fh) status = yaml_load(output) return status def add_unit(self, service_name, machine_spec): raise NotImplementedError() def set_annotation(self, entity, annotations, entity_type='service'): raise NotImplementedError() juju-deployer-0.6.4/deployer/env/watchers.py0000664000175000017500000000703012600342600025237 0ustar tvansteenburghtvansteenburgh00000000000000"""A collection of juju-core environment watchers.""" from jujuclient import WatchWrapper from ..utils import ErrorExit class WaitForMachineTermination(WatchWrapper): """Wait until the given machines are terminated.""" def __init__(self, watch, machines): super(WaitForMachineTermination, self).__init__(watch) self.machines = set(machines) self.known = set() def process(self, entity_type, change, data): if entity_type != 'machine': return if change == 'remove' and data['Id'] in self.machines: self.machines.remove(data['Id']) else: self.known.add(data['Id']) def complete(self): for m in self.machines: if m in self.known: return False return True class WaitForUnits(WatchWrapper): """Wait for units of the environment to reach a particular goal state. If services are provided, only consider the units belonging to the given services. If the on_errors callable is provided, call the given function each time a change set is processed and a new unit is found in an error state. The callable is called passing a list of units' data corresponding to the units in an error state. """ def __init__( self, watch, goal_state='started', services=None, on_errors=None): super(WaitForUnits, self).__init__(watch) self.goal_state = goal_state self.services = services self.on_errors = on_errors # The units dict maps unit names to units data. self.units = {} # The units_in_error list contains the names of the units in error. 
self.units_in_error = [] def process(self, entity, action, data): if entity != 'unit': return if (self.services is None) or (data['Service'] in self.services): unit_name = data['Name'] if action == 'remove' and unit_name in self.units: del self.units[unit_name] else: self.units[unit_name] = data def complete(self): ready = True new_errors = [] goal_state = self.goal_state on_errors = self.on_errors units_in_error = self.units_in_error for unit_name, data in self.units.items(): status = data['Status'] if status == 'error': if unit_name not in units_in_error: units_in_error.append(unit_name) new_errors.append(data) elif status != goal_state: ready = False if new_errors and goal_state != 'removed' and callable(on_errors): on_errors(new_errors) return ready def log_on_errors(env): """Return a function receiving errors and logging them. The resulting function is suitable to be used as the on_errors callback for WaitForUnits (see above). """ return env.log_errors def exit_on_errors(env): """Return a function receiving errors, logging them and exiting the app. The resulting function is suitable to be used as the on_errors callback for WaitForUnits (see above). """ def callback(errors): log_on_errors(env)(errors) raise ErrorExit() return callback def raise_on_errors(exception): """Return a function receiving errors and raising the given exception. The resulting function is suitable to be used as the on_errors callback for WaitForUnits (see above). 
""" def callback(errors): raise exception(errors) return callback juju-deployer-0.6.4/deployer/env/py.py0000664000175000017500000001152412600342600024052 0ustar tvansteenburghtvansteenburgh00000000000000import time from deployer.errors import UnitErrors from deployer.utils import ErrorExit from .base import BaseEnvironment class PyEnvironment(BaseEnvironment): def __init__(self, name, options=None): self.name = name self.options = options def add_units(self, service_name, num_units): params = self._named_env(["juju", "add-unit"]) if num_units > 1: params.extend(["-n", str(num_units)]) params.append(service_name) self._check_call( params, self.log, "Error adding units to %s", service_name) def add_relation(self, endpoint_a, endpoint_b): params = self._named_env(["juju", "add-relation"]) params.extend([endpoint_a, endpoint_b]) self._check_call( params, self.log, "Error adding relation to %s %s", endpoint_a, endpoint_b) def close(self): """ NoOp """ def connect(self): """ NoOp """ def _destroy_service(self, service_name): params = self._named_env(["juju", "destroy-service"]) params.append(service_name) self._check_call( params, self.log, "Error destroying service %s" % service_name) def get_config(self, svc_name): params = self._named_env(["juju", "get"]) params.append(svc_name) return self._check_call( params, self.log, "Error retrieving config for %r", svc_name) def get_constraints(self, svc_name): params = self._named_env(["juju", "get-constraints"]) params.append(svc_name) return self._check_call( params, self.log, "Error retrieving constraints for %r", svc_name) def reset(self, terminate_machines=False, terminate_delay=0, timeout=360, watch=False): status = self.status() for s in status.get('services'): self.log.debug(" Destroying service %s", s) self._destroy_service(s) if not terminate_machines: return True for m in status.get('machines'): if int(m) == 0: continue self.log.debug(" Terminating machine %s", m) self.terminate_machine(str(m)) if terminate_delay: 
self.log.debug(" Waiting for terminate delay") time.sleep(terminate_delay) def resolve_errors(self, retry_count=0, timeout=600, watch=False, delay=5): pass def status(self): return self.get_cli_status() def log_errors(self, errors): """Log the given unit errors. This can be used in the WaitForUnits error handling machinery, e.g. see deployer.watchers.log_on_errors. """ messages = [ 'unit: {unit[name]}: machine: {unit[machine]} ' 'agent-state: {unit[agent-state]}'.format(unit=error) for error in errors ] self.log.error( 'The following units had errors:\n {}'.format( ' \n'.join(messages))) def wait_for_units( self, timeout, goal_state="started", watch=False, services=None, on_errors=None): # Note that the services keyword argument is ignored in this pyJuju # implementation: we wait for all the units in the environment. max_time = time.time() + timeout while max_time > time.time(): status = self.status() pending = [] error_units = self._get_units_in_error(status) errors = [] for s in status.get("services", {}).values(): for uid, u in s.get("units", {}).items(): state = u.get("agent-state") or "pending" if uid in error_units: errors.append({"name": uid, "machine": u["machine"], "agent-state": state}) elif state != goal_state: pending.append(u) for rid in u.get("relation-errors", {}).keys(): errors.append({"name": uid, "machine": u["machine"], "agent-state": "relation-error: %s" % rid}) for sid, sub in u.get("subordinates", {}).items(): state = sub.get("agent-state") or "pending" if sid in error_units: errors.append({"name": sid, "machine": u["machine"], "agent-state": state}) elif state != goal_state: pending.append(sid) if not pending and not errors: break if errors: on_errors(errors) juju-deployer-0.6.4/deployer/env/mem.py0000664000175000017500000001431012600342600024174 0ustar tvansteenburghtvansteenburgh00000000000000from deployer.utils import parse_constraints from jujuclient import (UnitErrors, EnvError) class MemoryEnvironment(object): """ In memory deployment: 
not all features implemented (notably subordinates and their relations). """ def __init__(self, name, deployment): """ Two main dicts: _services (return-able as part of status(), and _services_data (to hold e.g. config, constraints) """ super(MemoryEnvironment, self).__init__() self.name = name self._deployment = deployment self._services = {} self._services_data = {} self._relations = {} self._do_deploy() def add_units(self, svc_name, num): """Add units """ next_num = self._services_data[svc_name]['next_unit_num'] for idx in xrange(next_num, next_num + num): self._services[svc_name]['units'].append( '{}/{}'.format(svc_name, idx)) self._services_data[svc_name]['next_unit_num'] = \ next_num + num def remove_unit(self, unit_name): """ Remove a unit by name """ svc_name = unit_name.split('/')[0] units_idx = {unit: idx for idx, unit in enumerate(self._services[svc_name]['units'])} try: self._services[svc_name]['units'].pop( units_idx[unit_name]) except KeyError: raise UnitErrors("Invalid unit name") def _get_service(self, svc_name): """ Get service by name (as returned by status()) """ if not svc_name in self._services: raise EnvError("Invalid service name") return self._services[svc_name] def add_relation(self, endpoint_a, endpoint_b): """Add relations """ def destroy_service(self, svc_name): """ Destroy a service """ if not svc_name in self._services: raise EnvError("Invalid service name") del self._services[svc_name] def close(self): """ """ def connect(self): """ """ def set_config(self, svc_name, cfg_dict): """ Set service config from passed dict, keeping the structure as needed for status() return """ config = self.get_config(svc_name) if cfg_dict: for cfg_k, cfg_v in cfg_dict.items(): config_entry = config.setdefault(cfg_k, {}) config_entry['value'] = cfg_v def set_constraints(self, svc_name, constr_str): """ Set service constraints from "key=value ..." 
passed string """ constraints = parse_constraints(constr_str) if constr_str else {} self._services_data[svc_name]['constraints'] = constraints def get_config(self, svc_name): """ Return service configs - note its structure: config{thename: {'value': thevalue}, ...} """ return self._services_data[svc_name]['config'] def get_constraints(self, svc_name): """ Return service constraints dict """ return self._services_data[svc_name]['constraints'] def get_cli_status(self): pass def reset(self): pass def resolve_errors(self, retry_count=0, timeout=600, watch=False, delay=5): pass def _do_deploy(self): """ Fake deploy: build in-memory representation of the deployed set of services from deployment """ self._compile_relations() for service in self._deployment.get_services(): svc_name = service.name charm = self._deployment.get_charm_for(svc_name) relations = self._relations.setdefault(svc_name, {}) self._services[svc_name] = { 'units': [], 'charm': charm.name, 'relations': relations, } self._services_data[svc_name] = { 'next_unit_num': 0, 'config': {}, 'constraints': {}, } # XXX: Incomplete relations support: only add units for non-subords num_units = 0 if charm.is_subordinate() else service.num_units self.add_units(svc_name, num_units) self.set_config(svc_name, service.config) self.set_constraints(svc_name, service.constraints) def _compile_relations(self): """ Compile a relation dictionary by svc_name, with their values structured for status() return """ for rel in self._deployment.get_relations(): for src, dst in (rel[0], rel[1]), (rel[1], rel[0]): try: src_requires = self._deployment.get_charm_for( src).get_requires() dst_provides = self._deployment.get_charm_for( dst).get_provides() except KeyError: continue # Create dicts key-ed by: # { interface_type: (interface_name, svc_name)} src_dict = {} dst_dict = {} # {rel_name: [{interface: name }...]} for _, interfaces in src_requires.items(): for interface in interfaces: src_dict[interface.get('interface')] = ( 
interface.get('name'), src) for _, interfaces in dst_provides.items(): for interface in interfaces: dst_dict[interface.get('interface')] = ( interface.get('name'), dst) # Create juju env relation entries as: # {svc_name: { interface_name: [ svc_name2, ...] }, ...} for src_rel, (if_name, src_svc_name) in src_dict.items(): if src_rel in dst_dict: src_rels = self._relations.setdefault(src_svc_name, {}) src_rels.setdefault(if_name, []) dst_svc_name = dst_dict[src_rel][1] src_rels[if_name].append(dst_svc_name) def status(self): """ Return all services status """ return {'services': self._services} def wait_for_units(self, *args, **kwargs): pass juju-deployer-0.6.4/deployer/env/gui.py0000664000175000017500000000375012600342600024210 0ustar tvansteenburghtvansteenburgh00000000000000"""GUI server environment implementation. The environment defined here is intended to be used by the Juju GUI server. See . """ from .go import GoEnvironment, EnvironmentClient from ..utils import get_qualified_charm_url, parse_constraints class GUIEnvironment(GoEnvironment): """A Juju environment for the juju-deployer. Add support for deployments via the Juju API and for authenticating with the provided credentials. """ def __init__(self, endpoint, username, password): super(GUIEnvironment, self).__init__('gui', endpoint=endpoint) self._username = username self._password = password def connect(self): """Connect the API client to the Juju backend. This method is overridden so that a call to connect is a no-op if the client is already connected. """ if self.client is None: self.client = EnvironmentClient(self.api_endpoint) self.client.login(self._password, user=self._username) def close(self): """Close the API connection. Also set the client attribute to None after the disconnection. """ super(GUIEnvironment, self).close() self.client = None def deploy( self, name, charm_url, repo=None, config=None, constraints=None, num_units=1, force_machine=None): """Deploy a service using the API. 
Using the API in place of the command line introduces some limitations: - it is not possible to use a local charm/repository. The repo argument is ignored but listed since the Importer always passes the value as a positional argument. """ charm_url = get_qualified_charm_url(charm_url) constraints = parse_constraints(constraints) self.client.deploy( name, charm_url, config=config, constraints=constraints, num_units=num_units, machine_spec=force_machine) juju-deployer-0.6.4/deployer/env/__init__.py0000664000175000017500000000052512600342600025160 0ustar tvansteenburghtvansteenburgh00000000000000# from .go import GoEnvironment from .py import PyEnvironment from ..utils import _check_call def select_runtime(name, options): # pyjuju does juju --version result = _check_call(["juju", "version"], None, ignoreerr=True) if result is None: return PyEnvironment(name, options) return GoEnvironment(name, options) juju-deployer-0.6.4/deployer/errors.py0000664000175000017500000000017612600342600024147 0ustar tvansteenburghtvansteenburgh00000000000000# TODO make deployer specific exceptions, also move errorexit from utils to here. from jujuclient import UnitErrors, EnvError juju-deployer-0.6.4/deployer/cli.py0000664000175000017500000002122412600346160023404 0ustar tvansteenburghtvansteenburgh00000000000000#!/usr/bin/env python """ Juju Deployer Deployment automation for juju. """ import argparse import errno import logging import os import sys import time from deployer.config import ConfigStack from deployer.env import select_runtime from deployer.action import diff, importer from deployer.utils import ErrorExit, setup_logging, get_env_name def setup_parser(): parser = argparse.ArgumentParser() parser.add_argument( '-c', '--config', help=('File containing deployment(s) json config. 
This ' 'option can be repeated, with later files overriding ' 'values in earlier ones.'), dest='configs', action='append') parser.add_argument( '-d', '--debug', help='Enable debugging to stdout', dest="debug", action="store_true", default=False) parser.add_argument( '-L', '--local-mods', help='Allow deployment of locally-modified charms', dest="no_local_mods", default=True, action='store_false') parser.add_argument( '-u', '--update-charms', help='Update existing charm branches', dest="update_charms", default=False, action="store_true") parser.add_argument( '-l', '--ls', help='List available deployments', dest="list_deploys", action="store_true", default=False) parser.add_argument( '-D', '--destroy-services', help='Destroy all services (do not terminate machines)', dest="destroy_services", action="store_true", default=False) parser.add_argument( '-T', '--terminate-machines', help=('Terminate all machines but the bootstrap node. ' 'Destroy any services that exist on each. ' 'Use -TT to forcefully terminate.'), dest="terminate_machines", action="count", default=0) parser.add_argument( '-t', '--timeout', help='Timeout (sec) for entire deployment (45min default)', dest='timeout', action='store', type=int, default=2700) parser.add_argument( "-f", '--find-service', action="store", type=str, help='Find hostname from first unit of a specific service.', dest="find_service") parser.add_argument( "-b", '--branch-only', action="store_true", help='Update vcs branches and exit.', dest="branch_only") parser.add_argument( "-S", '--skip-unit-wait', action="store_true", help="Don't wait for units to come up, deploy, add rels and exit.") parser.add_argument( '-B', '--bootstrap', help=('Bootstrap specified environment, blocks until ready'), dest="bootstrap", action="store_true", default=False) parser.add_argument( '-s', '--deploy-delay', action='store', type=float, help=("Time in seconds to sleep between 'deploy' commands, " "to allow machine provider to process requests. 
On " "terminate machines this also signals waiting for " "machine removal."), dest="deploy_delay", default=0) parser.add_argument( '-e', '--environment', action='store', dest='juju_env', help='Deploy to a specific Juju environment.', default=os.getenv('JUJU_ENV')) parser.add_argument( '-o', '--override', action='append', type=str, help=('Override *all* config options of the same name ' 'across all services. Input as key=value.'), dest='overrides', default=None) parser.add_argument( '--series', type=str, help=('Override distro series in config files'), dest='series', default=None) parser.add_argument( '-v', '--verbose', action='store_true', default=False, dest="verbose", help='Verbose output') parser.add_argument( '-W', '--watch', help='Watch environment changes on console', dest="watch", action="store_true", default=False) parser.add_argument( '-r', "--retry", default=0, type=int, dest="retry_count", help=("Resolve CLI and unit errors via number of retries (default: 0)." " Either standalone or in a deployment")) parser.add_argument( '--ignore-errors', action='store_true', dest='ignore_errors', help='Proceed with the bundle deployment ignoring units errors. ' 'Unit errors are also automatically ignored if --retry != 0') parser.add_argument( "--diff", action="store_true", default=False, help=("Generate a delta between a configured deployment and a running" " environment.")) parser.add_argument( '-w', '--relation-wait', action='store', dest='rel_wait', default=60, type=int, help=('Number of seconds to wait before checking for ' 'relation errors after all relations have been added ' 'and subordinates started. 
(default: 60)')) parser.add_argument( '-n', '--no-relations', default=False, dest='no_relations', action='store_true', help=('Do not add relations to environment, just services/units ' '(default: False)')) parser.add_argument("--description", help=argparse.SUPPRESS, action="store_true") parser.add_argument("deployment", nargs="?") return parser def main(): stime = time.time() try: run() except ErrorExit: logging.getLogger('deployer.cli').info( "Deployment stopped. run time: %0.2f", time.time() - stime) sys.exit(1) def run(): parser = setup_parser() options = parser.parse_args() if options.description: print("Tool for declarative management of complex deployments.") sys.exit(0) # Debug implies watching and verbose if options.debug: options.watch = options.verbose = True setup_logging(options.verbose, options.debug) log = logging.getLogger("deployer.cli") start_time = time.time() env_name = get_env_name(options.juju_env) try: env = select_runtime(env_name, options) except OSError as e: if e.errno != errno.ENOENT: raise log.error("No juju binary found, have you installed juju?") sys.exit(1) log.debug('Using runtime %s on %s', env.__class__.__name__, env_name) config = ConfigStack(options.configs or [], options.series) # Destroy services and exit if options.destroy_services or options.terminate_machines: log.info("Resetting environment...") env.connect() env.reset(terminate_machines=bool(options.terminate_machines), terminate_delay=options.deploy_delay, watch=options.watch, force_terminate=options.terminate_machines > 1) log.info("Environment reset in %0.2f", time.time() - start_time) sys.exit(0) # Display service info and exit if options.find_service: address = env.get_service_address(options.find_service) if address is None: log.error("Service not found %r", options.find_service) sys.exit(1) elif not address: log.warning("Service: %s has no address for first unit", options.find_service) else: log.info("Service: %s address: %s", options.find_service, address) 
print(address) sys.exit(0) # Just resolve/retry hooks in the environment if not options.deployment and options.retry_count: log.info("Retrying hooks for error resolution") env.connect() env.resolve_errors( options.retry_count, watch=options.watch, timeout=options.timeout) # Arg check on config files and deployment name. if not options.configs: log.error("Config files must be specified") sys.exit(1) config.load() # Just list the available deployments if options.list_deploys: print("\n".join(sorted(config.keys()))) sys.exit(0) # Do something to a deployment if not options.deployment: # If there's only one option then use it. if len(config.keys()) == 1: options.deployment = config.keys()[0] log.info("Using deployment %s", options.deployment) else: log.error( "Deployment name must be specified. available: %s", tuple(sorted(config.keys()))) sys.exit(1) deployment = config.get(options.deployment) if options.diff: diff.Diff(env, deployment, options).run() return # Import it log.info("Starting deployment of %s", options.deployment) importer.Importer(env, deployment, options).run() # Deploy complete log.info("Deployment complete in %0.2f seconds" % ( time.time() - start_time)) if __name__ == '__main__': main() juju-deployer-0.6.4/deployer/tests/0000775000175000017500000000000012666044061023433 5ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/tests/test_config.py0000664000175000017500000001746012600342600026305 0ustar tvansteenburghtvansteenburgh00000000000000import logging import mock import os import tempfile import yaml from deployer.deployment import Deployment from deployer.config import ConfigStack from deployer.utils import ErrorExit from .base import Base class ConfigTest(Base): def setUp(self): self.output = self.capture_logging( "deployer.config", level=logging.DEBUG) def test_config_basic(self): config = ConfigStack(['configs/ostack-testing-sample.cfg']) config.load() self.assertEqual( config.keys(), [u'openstack-precise-ec2', 
u'openstack-precise-ec2-trunk', u'openstack-ubuntu-testing']) self.assertRaises(ErrorExit, config.get, 'zeeland') result = config.get("openstack-precise-ec2") self.assertTrue(isinstance(result, Deployment)) self.assertEqual(config.version, 3) def test_config(self): config = ConfigStack([ os.path.join(self.test_data_dir, "stack-default.cfg"), os.path.join(self.test_data_dir, "stack-inherits.cfg")]) config.load() self.assertEqual( config.keys(), [u'my-files-frontend-dev', u'wordpress']) deployment = config.get("wordpress") self.assertTrue(deployment) self.assertEqual(config.version, 3) def test_config_v4(self): config = ConfigStack([ os.path.join(self.test_data_dir, 'v4', 'simple.yaml')]) config.load() self.assertEqual( config.keys(), [os.path.join(self.test_data_dir, 'v4', 'simple.yaml')]) with mock.patch('deployer.config.ConfigStack._resolve_inherited') \ as mock_resolve: deployment = config.get(config.keys()[0]) self.assertTrue(deployment) self.assertFalse(mock_resolve.called) self.assertEqual(config.version, 4) def test_config_include_file(self): config = ConfigStack([ os.path.join(self.test_data_dir, "stack-includes.cfg")]) config.load() # ensure picked up stacks from both files self.assertEqual( config.keys(), [u'my-files-frontend-dev', u'wordpress']) # ensure inheritance was adhered to during cross-file load wordpress = [s.name for s in config.get('wordpress').get_services()] my_app = [s.name for s in config.get('my-files-frontend-dev').get_services()] self.assertTrue(set(wordpress).issubset(set(my_app))) def test_inherits_config_overridden(self): config = ConfigStack([ os.path.join(self.test_data_dir, "stack-default.cfg"), os.path.join(self.test_data_dir, "stack-inherits.cfg")]) config.load() deployment = config.get('my-files-frontend-dev') db = deployment.get_service('db') # base deployment (wordpress)'s db tuning level should have been # over-ridden self.assertEquals(db.config.get('tuning-level'), 'fastest') def test_multi_inheritance_multi_files(self): 
config = ConfigStack([ os.path.join(self.test_data_dir, "openstack", "openstack.cfg"), os.path.join(self.test_data_dir, "openstack", "ubuntu_base.cfg"), os.path.join( self.test_data_dir, "openstack", "openstack_base.cfg"), ]) self._test_multiple_inheritance(config) def test_multi_inheritance_multi_included_files(self): # openstack.cfg # includes -> ubuntu_base.cfg includes # includes -> openstack_base.cfg test_conf = yaml.load(open( os.path.join(self.test_data_dir, "openstack", "openstack.cfg"))) includes = [ os.path.join(self.test_data_dir, "openstack", "ubuntu_base.cfg"), os.path.join(self.test_data_dir, "openstack", "openstack_base.cfg") ] for key in ['include-config', 'include-configs']: test_conf[key] = includes with tempfile.NamedTemporaryFile() as tmp_cfg: tmp_cfg.write(yaml.dump(test_conf)) tmp_cfg.flush() config = ConfigStack([tmp_cfg.name]) self._test_multiple_inheritance(config) del test_conf[key] def test_multi_inheritance_included_multi_configs(self): # openstack.cfg # includes -> [ubuntu_base.cfg, openstack_base.cfg] config = ConfigStack([ os.path.join(self.test_data_dir, "openstack", "openstack.cfg"), ]) self._test_multiple_inheritance(config) def _test_multiple_inheritance(self, config): config.load() deployment = config.get('precise-grizzly') services = [s.name for s in list(deployment.get_services())] self.assertEquals(['mysql', 'nova-cloud-controller'], services) nova = deployment.get_service('nova-cloud-controller') self.assertEquals(nova.config['openstack-origin'], 'cloud:precise-grizzly') deployment = config.get('precise-grizzly-quantum') services = [s.name for s in list(deployment.get_services())] self.assertEquals(services, ['mysql', 'nova-cloud-controller', 'quantum-gateway']) nova = deployment.get_service('nova-cloud-controller') self.assertEquals(nova.config['network-manager'], 'Quantum') self.assertEquals(nova.config['openstack-origin'], 'cloud:precise-grizzly') ex_rels = [('quantum-gateway', 'nova-cloud-controller'), ('quantum-gateway', 
'mysql'), ('nova-cloud-controller', 'mysql')] self.assertEquals(ex_rels, list(deployment.get_relations())) def test_config_series_override(self): config = ConfigStack(['configs/wiki.yaml'], 'trusty') config.load() result = config.get("wiki") self.assertTrue(isinstance(result, Deployment)) self.assertEquals(result.series, 'trusty') class NetworkConfigFetchingTests(Base): """Configuration files can be specified via URL that is then fetched.""" def setUp(self): self.output = self.capture_logging( "deployer.config", level=logging.DEBUG) def test_urls_are_fetched(self): # If a config file is specified as a URL, that URL is fetched and # placed at a temporary location where it is read and treated as a # regular config file. CONFIG_URL = 'http://site.invalid/config-1' config = ConfigStack([]) config.config_files = [CONFIG_URL] class FauxResponse(file): def getcode(self): return 200 def faux_urlopen(url): self.assertEqual(url, CONFIG_URL) return FauxResponse('configs/ostack-testing-sample.cfg') config.urlopen = faux_urlopen config.load() self.assertEqual( config.keys(), [u'openstack-precise-ec2', u'openstack-precise-ec2-trunk', u'openstack-ubuntu-testing']) self.assertRaises(ErrorExit, config.get, 'zeeland') result = config.get("openstack-precise-ec2") self.assertTrue(isinstance(result, Deployment)) def test_unfetchable_urls_generate_an_error(self): # If a config file is specified as a URL, that URL is fetched and # placed at a temporary location where it is read and treated as a # regular config file. 
CONFIG_URL = 'http://site.invalid/config-1' config = ConfigStack([]) config.config_files = [CONFIG_URL] class FauxResponse(file): def getcode(self): return 400 def faux_urlopen(url): self.assertEqual(url, CONFIG_URL) return FauxResponse('configs/ostack-testing-sample.cfg') config.urlopen = faux_urlopen self.assertRaises(ErrorExit, config.load) juju-deployer-0.6.4/deployer/tests/test_base.py0000664000175000017500000000216212600342600025743 0ustar tvansteenburghtvansteenburgh00000000000000import os import shutil import unittest import StringIO import logging import tempfile class Base(unittest.TestCase): def capture_logging(self, name="", level=logging.INFO, log_file=None, formatter=None): if log_file is None: log_file = StringIO.StringIO() log_handler = logging.StreamHandler(log_file) if formatter: log_handler.setFormatter(formatter) logger = logging.getLogger(name) logger.addHandler(log_handler) old_logger_level = logger.level logger.setLevel(level) @self.addCleanup def reset_logging(): logger.removeHandler(log_handler) logger.setLevel(old_logger_level) return log_file def mkdir(self): d = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, d) return d def change_environment(self, **kw): """ """ original_environ = dict(os.environ) @self.addCleanup def cleanup_env(): os.environ.clear() os.environ.update(original_environ) os.environ.update(kw) juju-deployer-0.6.4/deployer/tests/test_watchers.py0000664000175000017500000001627412600342600026662 0ustar tvansteenburghtvansteenburgh00000000000000"""Tests juju-core environment watchers.""" import unittest import mock from deployer.env import watchers from deployer.utils import ErrorExit class TestWaitForUnits(unittest.TestCase): def make_watch(self, *changesets): """Create and return a mock watcher returning the given change sets.""" watch = mock.MagicMock() watch.__iter__.return_value = changesets return watch def make_change(self, unit_name, status): """Create and return a juju-core mega-watcher unit change.""" service = 
unit_name.split('/')[0] return {'Name': unit_name, 'Service': service, 'Status': status} def test_success(self): # It is possible to watch for units to be started. watch = self.make_watch( [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('haproxy/1', 'pending'))], [('unit', 'change', self.make_change('django/0', 'started')), ('unit', 'change', self.make_change('haproxy/1', 'started'))], ) watcher = watchers.WaitForUnits(watch) callback = mock.Mock() watcher.run(callback) # The callback has been called once for each change in the change sets, # excluding the initial state. self.assertEqual(2, callback.call_count) # The watcher has been properly stopped. watch.stop.assert_called_once_with() def test_watch_restart(self): # It is possible for a watch to be restarted. watch = self.make_watch( [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('haproxy/1', 'pending'))], [('unit', 'change', self.make_change('django/0', 'started'))], # On a restart we get reinformed of things already current [('unit', 'change', self.make_change('django/0', 'started')), ('unit', 'change', self.make_change('haproxy/1', 'pending'))], [('unit', 'change', self.make_change('django/0', 'started')), ('unit', 'change', self.make_change('haproxy/1', 'started'))] ) watcher = watchers.WaitForUnits(watch) callback = mock.Mock() watcher.run(callback) # The callback has been called once for each change in the change sets, # excluding the initial state. self.assertEqual(5, callback.call_count) # The watcher has been properly stopped. watch.stop.assert_called_once_with() def test_errors_handling(self): # Errors can be handled providing an on_errors callable. 
watch = self.make_watch( [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('django/42', 'pending')), ('unit', 'change', self.make_change('haproxy/1', 'pending'))], [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('django/42', 'error')), ('unit', 'change', self.make_change('haproxy/1', 'error'))], [('unit', 'change', self.make_change('django/0', 'error')), ('unit', 'change', self.make_change('django/42', 'error')), ('unit', 'change', self.make_change('haproxy/1', 'error'))], ) on_errors = mock.Mock() watcher = watchers.WaitForUnits(watch, on_errors=on_errors) watcher.run() # The watcher has been properly stopped. watch.stop.assert_called_once_with() # The errors handler has been called once for each changeset containing # errors. self.assertEqual(2, on_errors.call_count) on_errors.assert_has_calls([ mock.call([ {'Status': 'error', 'Name': 'django/42', 'Service': 'django'}, {'Status': 'error', 'Name': 'haproxy/1', 'Service': 'haproxy'} ]), mock.call([ {'Status': 'error', 'Name': 'django/0', 'Service': 'django'}]), ]) def test_specific_services(self): # It is possible to only watch units belonging to specific services. 
watch = self.make_watch( [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('django/42', 'pending')), ('unit', 'change', self.make_change('haproxy/1', 'pending')), ('unit', 'change', self.make_change('haproxy/47', 'pending'))], [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('django/42', 'started')), ('unit', 'change', self.make_change('haproxy/1', 'error')), ('unit', 'change', self.make_change('haproxy/47', 'pending'))], [('unit', 'change', self.make_change('django/0', 'error')), ('unit', 'change', self.make_change('django/42', 'started')), ('unit', 'change', self.make_change('haproxy/1', 'error')), ('unit', 'change', self.make_change('haproxy/47', 'pending'))], ) on_errors = mock.Mock() watcher = watchers.WaitForUnits( watch, services=['django'], on_errors=on_errors) watcher.run() # The watcher has been properly stopped, even if haproxy/47 is pending. watch.stop.assert_called_once_with() # The errors handler has been called for the django error, not for the # haproxy one. on_errors.assert_called_once_with( [{'Status': 'error', 'Name': 'django/0', 'Service': 'django'}]) def test_goal_states(self): # It is possible to watch for the given goal state watch = self.make_watch( [('unit', 'change', self.make_change('django/0', 'pending')), ('unit', 'change', self.make_change('haproxy/1', 'pending'))], ) watcher = watchers.WaitForUnits(watch, goal_state='pending') callback = mock.Mock() watcher.run(callback) # Since all the units are already pending, the watcher has been # properly stopped. watch.stop.assert_called_once_with() class TestLogOnErrors(unittest.TestCase): def setUp(self): # Set up a mock environment. self.env = mock.Mock() def test_returned_callable(self): # The returned function uses the env to log errors. 
        callback = watchers.log_on_errors(self.env)
        self.assertEqual(self.env.log_errors, callback)


class TestExitOnErrors(unittest.TestCase):

    def setUp(self):
        # Set up a mock environment.
        self.env = mock.Mock()

    def test_returned_callable(self):
        # The returned function uses the env to log errors and then exits the
        # application.
        callback = watchers.exit_on_errors(self.env)
        with self.assertRaises(ErrorExit):
            callback('bad wolf')
        self.env.log_errors.assert_called_once_with('bad wolf')


class TestRaiseOnErrors(unittest.TestCase):

    def test_returned_callable(self):
        # The returned function raises the given exception passing the errors.
        callback = watchers.raise_on_errors(ValueError)
        with self.assertRaises(ValueError) as cm:
            callback('bad wolf')
        self.assertEqual('bad wolf', bytes(cm.exception))
juju-deployer-0.6.4/deployer/tests/test_goenv.py0000664000175000017500000000624512600536031026160 0ustar tvansteenburghtvansteenburgh00000000000000import logging
import os
import time
import sys
import unittest

from deployer.env.go import GoEnvironment

from .base import Base


# Takes roughly about 6m on core2 + ssd, mostly cloudinit time
@unittest.skipIf(
    (not bool(os.environ.get("TEST_ENDPOINT"))),
    "Test env must be defined: TEST_ENDPOINT")
class LiveEnvironmentTest(Base):

    @classmethod
    def setUpClass(cls):
        """Base class sets JUJU_HOME to a new tmp dir, but for these tests
        we need a real JUJU_HOME so that calls to ``juju api-endpoints``
        work properly.
        """
        if not os.environ.get('JUJU_HOME'):
            raise RuntimeError('JUJU_HOME must be set')

    @classmethod
    def tearDownClass(cls):
        """Base class deletes the tmp JUJU_HOME dir it created.
        Override so we don't delete our real JUJU_HOME!
        """
        pass

    def setUp(self):
        self.endpoint = os.environ.get("TEST_ENDPOINT")
        self.output = self.capture_logging(
            "deployer", log_file=sys.stderr, level=logging.DEBUG)
        self.env = GoEnvironment(
            os.environ.get("JUJU_ENV"), endpoint=self.endpoint)
        self.env.connect()
        status = self.env.status()
        self.assertFalse(status.get('services'))
        # Destroy everything.. consistent baseline
        self.env.reset(
            terminate_machines=len(status['machines'].keys()) > 1,
            terminate_delay=240)

    def tearDown(self):
        self.env.reset(
            terminate_machines=True, terminate_delay=240,
            force_terminate=True)
        self.env.close()

    def test_env(self):
        status = self.env.status()
        self.env.deploy("test-blog", "cs:precise/wordpress")
        self.env.deploy("test-db", "cs:precise/mysql")
        self.env.add_relation("test-db", "test-blog")
        self.env.add_units('test-blog', 1)
        # Sleep because juju core watches are eventually consistent (5s window)
        # and status rpc is broken (http://pad.lv/1203105)
        time.sleep(6)
        self.env.wait_for_units(timeout=800)
        status = self.env.status()
        services = ["test-blog", "test-db"]
        self.assertEqual(
            sorted(status['services'].keys()), services)
        for s in services:
            for k, u in status['services'][s]['units'].items():
                self.assertIn(u['agent-state'], ("allocating", "started"))

    def test_add_machine(self):
        machine_name = self.env.add_machine()
        # Sleep because juju core watches are eventually consistent (5s window)
        # and status rpc is broken (http://pad.lv/1203105)
        time.sleep(6)
        status = self.env.status()
        self.assertIn(machine_name, status['machines'])

    def test_set_annotation(self):
        machine_name = self.env.add_machine()
        self.env.set_annotation(
            machine_name, {'foo': 'bar'}, entity_type='machine')
        # Sleep because juju core watches are eventually consistent (5s window)
        # and status rpc is broken (http://pad.lv/1203105)
        time.sleep(6)
        self.env.status()
        self.assertIn('foo', self.env.client.get_annotation(
            machine_name, 'machine')['Annotations'])
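The live tests in test_goenv.py above work around juju-core's eventually consistent mega-watcher with fixed `time.sleep(6)` calls. A bounded polling loop is a common, less flaky alternative; the sketch below is illustrative only (the `wait_until` helper and its parameters are not part of deployer's API):

```python
import time


def wait_until(predicate, timeout=30, interval=0.5, clock=time):
    """Poll predicate() until it returns a truthy value or timeout elapses.

    Returns the truthy value on success; raises RuntimeError on timeout.
    The clock is injectable so tests can avoid real sleeping.
    """
    deadline = clock.time() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if clock.time() >= deadline:
            raise RuntimeError('condition not met within %ss' % timeout)
        clock.sleep(interval)
```

In test_add_machine, for example, something like `wait_until(lambda: machine_name in self.env.status()['machines'])` would replace the fixed sleep while still bounding the total wait.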
juju-deployer-0.6.4/deployer/tests/test_utils.py0000664000175000017500000001412112647473442026213 0ustar tvansteenburghtvansteenburgh00000000000000import os

from StringIO import StringIO
from subprocess import CalledProcessError

from mock import (
    MagicMock,
    patch,
)

from .base import Base
from deployer.utils import (
    _check_call,
    _is_qualified_charm_url,
    DeploymentError,
    dict_merge,
    ErrorExit,
    get_qualified_charm_url,
    HTTPError,
    mkdir,
    URLError,
)


class UtilTests(Base):

    def test_relation_list_merge(self):
        self.assertEqual(
            dict_merge(
                {'relations': [['m1', 'x1']]},
                {'relations': [['m2', 'x2']]}),
            {'relations': [['m1', 'x1'], ['m2', 'x2']]})

    def test_no_rels_in_target(self):
        self.assertEqual(
            dict_merge(
                {'a': 1},
                {'relations': [['m1', 'x1'], ['m2', 'x2']]}),
            {'a': 1, 'relations': [['m1', 'x1'], ['m2', 'x2']]})

    @patch('subprocess.check_output')
    def test_check_call_fail_no_retry(self, check_output):
        _e = CalledProcessError(returncode=1, cmd=['fail'])
        check_output.side_effect = _e
        self.assertRaises(
            ErrorExit,
            _check_call, params=['fail'], log=MagicMock())

    @patch('time.sleep')
    @patch('subprocess.check_output')
    def test_check_call_fail_retry(self, check_output, sleep):
        _e = CalledProcessError(returncode=1, cmd=['fail'])
        check_output.side_effect = _e
        self.assertRaises(
            ErrorExit,
            _check_call, params=['fail'], log=MagicMock(), max_retry=3)
        # 1 failure + 3 retries
        self.assertEquals(len(check_output.call_args_list), 4)

    @patch('time.sleep')
    @patch('subprocess.check_output')
    def test_check_call_succeed_after_retry(self, check_output, sleep):
        # call succeeds after the 3rd try
        _e = CalledProcessError(returncode=1, cmd=['maybe_fail'])
        check_output.side_effect = [
            _e, _e, 'good', _e]
        output = _check_call(
            params=['maybe_fail'], log=MagicMock(), max_retry=3)
        self.assertEquals(output, 'good')
        # 2 failures + 1 success = 3 calls
        self.assertEquals(len(check_output.call_args_list), 3)

    def test_check_call_uses_shell(self):
        cmd = 'echo "foo"'
        self.assertRaises(
            OSError, _check_call, params=[cmd], log=MagicMock())
        output = _check_call(params=[cmd], log=MagicMock(), shell=True)
        self.assertEqual(output, "foo\n")


class TestMkdir(Base):

    def setUp(self):
        self.playground = self.mkdir()

    def test_create_dir(self):
        # A directory is correctly created.
        path = os.path.join(self.playground, 'foo')
        mkdir(path)
        self.assertTrue(os.path.isdir(path))

    def test_intermediate_dirs(self):
        # All intermediate directories are created.
        path = os.path.join(self.playground, 'foo', 'bar', 'leaf')
        mkdir(path)
        self.assertTrue(os.path.isdir(path))

    def test_expand_user(self):
        # The ~ construction is expanded.
        with patch('os.environ', {'HOME': self.playground}):
            mkdir('~/in/my/home')
        path = os.path.join(self.playground, 'in', 'my', 'home')
        self.assertTrue(os.path.isdir(path))

    def test_existing_dir(self):
        # The function exits without errors if the target directory exists.
        path = os.path.join(self.playground, 'foo')
        os.mkdir(path)
        mkdir(path)

    def test_existing_file(self):
        # An OSError is raised if a file already exists in the target path.
        path = os.path.join(self.playground, 'foo')
        with open(path, 'w'):
            with self.assertRaises(OSError):
                mkdir(path)

    def test_failure(self):
        # Errors are correctly re-raised.
        path = os.path.join(self.playground, 'foo')
        os.chmod(self.playground, 0000)
        self.addCleanup(os.chmod, self.playground, 0700)
        with self.assertRaises(OSError):
            mkdir(os.path.join(path))
        self.assertFalse(os.path.exists(path))


class TestCharmRevisioning(Base):
    """Test the functions related to charm URL revisioning."""

    def test_is_qualified_false(self):
        url = "cs:precise/mysql"
        self.assertFalse(_is_qualified_charm_url(url))

    def test_is_qualified_1_digit(self):
        url = "cs:precise/mysql-2"
        self.assertTrue(_is_qualified_charm_url(url))

    def test_is_qualified_many_digits(self):
        url = "cs:precise/mysql-2014"
        self.assertTrue(_is_qualified_charm_url(url))

    def test_is_qualified_no_digits(self):
        url = "cs:precise/mysql-"
        self.assertFalse(_is_qualified_charm_url(url))

    def test_get_qualified_url(self):
        fake_json = """
            {"cs:precise/mysql": {"revision":333} }
        """

        def mocked_urlopen(url):
            return StringIO(fake_json)

        path = 'deployer.utils.urlopen'
        with patch(path, mocked_urlopen):
            url = get_qualified_charm_url('cs:precise/mysql')
        self.assertEqual('cs:precise/mysql-333', url)

    def test_get_qualified_url_raise_exception_on_HTTPError(self):
        def mocked_urlopen(url):
            raise HTTPError(url, 404, 'Bad Earl', None, None)

        with patch('deployer.utils.urlopen', mocked_urlopen):
            with self.assertRaises(DeploymentError) as exc:
                get_qualified_charm_url('cs:precise/mysql')
        expected = ('HTTP Error 404: '
                    'Bad Earl (https://api.jujucharms.com/charmstore/charm-info'
                    '?charms=cs:precise/mysql)')
        self.assertEqual([expected], exc.exception.message)

    def test_get_qualified_url_raise_exception_on_URLError(self):
        def mocked_urlopen(url):
            raise URLError('Hinky URL')

        with patch('deployer.utils.urlopen', mocked_urlopen):
            with self.assertRaises(DeploymentError) as exc:
                get_qualified_charm_url('cs:precise/mysql')
        expected = (' '
                    '(https://api.jujucharms.com/charmstore/charm-info'
                    '?charms=cs:precise/mysql)')
        self.assertEqual([expected], exc.exception.message)
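The _check_call tests above pin down a bounded-retry contract: one initial call plus up to `max_retry` retries on CalledProcessError, sleeping between attempts, and re-raising once the budget is exhausted. A simplified standalone sketch of that pattern follows; it is not deployer's actual `_check_call` implementation, and the `check_call_with_retry` name and injectable `sleep` parameter are illustrative:

```python
import time
from subprocess import CalledProcessError


def check_call_with_retry(func, max_retry=0, delay=1, sleep=time.sleep):
    """Call func(), retrying up to max_retry times on CalledProcessError.

    Returns func()'s result on the first success; re-raises the last
    error once the budget (1 initial call + max_retry retries) is spent.
    """
    for attempt in range(max_retry + 1):
        try:
            return func()
        except CalledProcessError:
            if attempt == max_retry:
                raise
            sleep(delay)
```

Injecting `sleep` keeps a test of this pattern fast, which is the same effect the tests above achieve by patching `time.sleep`.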
juju-deployer-0.6.4/deployer/tests/test_deployment.py0000664000175000017500000004310112600342600027207 0ustar tvansteenburghtvansteenburgh00000000000000import base64 import StringIO import os from deployer.deployment import Deployment from deployer.utils import setup_logging, ErrorExit from .base import Base, skip_if_offline class FauxService(object): """A fake service with a unit_placement attribute, used for testing the sort functionality. """ def __init__(self, name=None, unit_placement=None): self.name = name self.unit_placement = unit_placement class DeploymentTest(Base): def setUp(self): self.output = setup_logging( debug=True, verbose=True, stream=StringIO.StringIO()) def get_named_deployment_and_fetch_v3(self, file_name, stack_name): deployment = self.get_named_deployment_v3(file_name, stack_name) # Fetch charms in order to allow proper late binding config and # placement validation. repo_path = self.mkdir() os.mkdir(os.path.join(repo_path, "precise")) deployment.repo_path = repo_path deployment.fetch_charms() return deployment def get_deployment_and_fetch_v4(self, file_name): deployment = self.get_deployment_v4(file_name) # Fetch charms in order to allow proper late binding config and # placement validation. repo_path = self.mkdir() os.mkdir(os.path.join(repo_path, "precise")) deployment.repo_path = repo_path deployment.fetch_charms() return deployment @skip_if_offline def test_deployer(self): d = self.get_named_deployment_and_fetch_v3('blog.yaml', 'wordpress-prod') services = d.get_services() self.assertTrue([s for s in services if s.name == "newrelic"]) # Ensure inheritance order reflects reality, instead of merge value. 
self.assertEqual( d.data['inherits'], ['wordpress-stage', 'wordpress-base', 'metrics-base']) # Load up overrides and resolves d.load_overrides(["key=abc"]) d.resolve() # Verify include-base64 self.assertEqual(d.get_service('newrelic').config, {'key': 'abc'}) self.assertEqual( base64.b64decode(d.get_service('blog').config['wp-content']), "HelloWorld") # TODO verify include-file # Verify relations self.assertEqual( list(d.get_relations()), [('blog', 'db'), ('blog', 'cache'), ('blog', 'haproxy')]) def test_maas_name_and_zone_placement(self): d = self.get_named_deployment_v3("stack-placement-maas.yml", "stack") d.validate_placement() placement = d.get_unit_placement('ceph', {}) self.assertEqual(placement.get(0), "arnolt") placement = d.get_unit_placement('heat', {}) self.assertEqual(placement.get(0), "zone=zebra") @skip_if_offline def test_validate_placement_sorting(self): d = self.get_named_deployment_and_fetch_v3("stack-placement.yaml", "stack") services = d.get_services() self.assertEqual(services[0].name, 'nova-compute') try: d.validate_placement() except ErrorExit: self.fail("Should not fail") def test_machines_placement_sort(self): d = Deployment('test', None, None) self.assertEqual( d._machines_placement_sort( FauxService(unit_placement=1), FauxService() ), 1) self.assertEqual( d._machines_placement_sort( FauxService(), FauxService(unit_placement=1) ), -1) self.assertEqual( d._machines_placement_sort( FauxService(name="x", unit_placement=['asdf']), FauxService(name="y", unit_placement=['lxc:x/1']) ), -1) self.assertEqual( d._machines_placement_sort( FauxService(name="y", unit_placement=['lxc:x/1']), FauxService(name="x", unit_placement=['asdf']) ), 1) self.assertEqual( d._machines_placement_sort( FauxService(name="x", unit_placement=['y']), FauxService(name="y") ), 1) self.assertEqual( d._machines_placement_sort( FauxService(name="x", unit_placement=['asdf']), FauxService(name="y", unit_placement=['hjkl']) ), -1) self.assertEqual( d._machines_placement_sort( 
FauxService(name="x"), FauxService(name="y") ), -1) def test_colocate(self): status = { 'services': { 'foo': { 'units': { '1': { 'machine': 1 }, '2': {} } } } } d = self.get_named_deployment_v3("stack-placement.yaml", "stack") p = d.get_unit_placement('ceph', status) svc = FauxService(name='bar') self.assertEqual(p.colocate(status, 'asdf', '1', '', svc), None) self.assertIn('Service bar to be deployed with non-existent service ' 'asdf', self.output.getvalue()) self.assertEqual(p.colocate(status, 'foo', '2', '', svc), None) self.assertIn('Service:bar, Deploy-with-service:foo, ' 'Requested-unit-index=2, Cannot solve, ' 'falling back to default placement', self.output.getvalue()) self.assertEqual(p.colocate(status, 'foo', '1', '', svc), None) self.assertIn('Service:bar deploy-with unit missing machine 2', self.output.getvalue()) self.assertEqual(p.colocate(status, 'foo', '0', '', svc), 1) @skip_if_offline def test_validate_invalid_placement_nested(self): d = self.get_named_deployment_and_fetch_v3("stack-placement-invalid.yaml", "stack") services = d.get_services() self.assertEqual(services[0].name, 'nova-compute') try: d.validate_placement() except ErrorExit: pass else: self.fail("Should fail") @skip_if_offline def test_validate_invalid_placement_no_with_service(self): d = self.get_named_deployment_and_fetch_v3( "stack-placement-invalid-2.yaml", "stack") services = d.get_services() self.assertEqual(services[0].name, 'nova-compute') try: d.validate_placement() except ErrorExit: pass else: self.fail("Should fail") @skip_if_offline def test_validate_invalid_placement_subordinate_v3(self): # Placement validation fails if a subordinate charm is provided. 
deployment = self.get_named_deployment_and_fetch_v3( 'stack-placement-invalid-subordinate.yaml', 'stack') with self.assertRaises(ErrorExit): deployment.validate_placement() output = self.output.getvalue() self.assertIn( 'Cannot place to a subordinate service: ceph -> nrpe\n', output) self.assertIn( 'Cannot place to a subordinate service: nova-compute -> nrpe\n', output) @skip_if_offline def test_validate_invalid_placement_subordinate_v4(self): # Placement validation fails if a subordinate charm is provided. deployment = self.get_deployment_and_fetch_v4( 'placement-invalid-subordinate.yaml') with self.assertRaises(ErrorExit): deployment.validate_placement() output = self.output.getvalue() self.assertIn( 'Cannot place to a subordinate service: nova-compute -> nrpe\n', output) def test_validate_invalid_unit_number(self): # Placement validation fails if an invalid unit number is provided. deployment = self.get_deployment_v4('placement-invalid-number.yaml') with self.assertRaises(ErrorExit): deployment.validate_placement() output = self.output.getvalue() self.assertIn( 'Invalid unit number for placement: django to bad-wolf\n', output) def test_get_unit_placement_v3(self): d = self.get_named_deployment_v3("stack-placement.yaml", "stack") status = { 'services': { 'nova-compute': { 'units': { 'nova-compute/2': {'machine': '1'}, 'nova-compute/3': {'machine': '2'}, 'nova-compute/4': {'machine': '3'}}}}} placement = d.get_unit_placement('ceph', status) self.assertEqual(placement.get(0), '1') self.assertEqual(placement.get(1), '2') self.assertEqual(placement.get(2), None) placement = d.get_unit_placement('quantum', status) self.assertEqual(placement.get(0), 'lxc:1') self.assertEqual(placement.get(2), 'lxc:3') self.assertEqual(placement.get(3), None) placement = d.get_unit_placement('verity', status) self.assertEqual(placement.get(0), 'lxc:3') placement = d.get_unit_placement('mysql', status) self.assertEqual(placement.get(0), '0') placement = d.get_unit_placement('semper', 
status)
        self.assertEqual(placement.get(0), '3')
        placement = d.get_unit_placement('lxc-service', status)
        self.assertEqual(placement.get(0), 'lxc:2')
        self.assertEqual(placement.get(1), 'lxc:3')
        self.assertEqual(placement.get(2), 'lxc:1')
        self.assertEqual(placement.get(3), 'lxc:1')
        self.assertEqual(placement.get(4), 'lxc:3')

    def test_fill_placement_v4(self):
        d = self.get_deployment_v4('fill_placement.yaml')
        self.assertEqual(
            d.get_unit_placement('mediawiki1', 0).service.svc_data['to'],
            ['new', 'new'])
        self.assertEqual(
            d.get_unit_placement('mediawiki2', 0).service.svc_data['to'],
            ['0', '0'])
        self.assertEqual(
            d.get_unit_placement('mediawiki3', 0).service.svc_data['to'],
            ['mediawiki1/0', 'mediawiki1/1'])

    def test_parse_placement_v4(self):
        # Short-cut to winding up with a valid placement.
        d = self.get_deployment_v4('simple.yaml')
        placement = d.get_unit_placement('mysql', {})
        c, p, u = placement._parse_placement('mysql')
        self.assertEqual(c, None)
        self.assertEqual(p, 'mysql')
        self.assertEqual(u, None)
        c, p, u = placement._parse_placement('mysql/1')
        self.assertEqual(c, None)
        self.assertEqual(p, 'mysql')
        self.assertEqual(u, '1')
        c, p, u = placement._parse_placement('lxc:mysql')
        self.assertEqual(c, 'lxc')
        self.assertEqual(p, 'mysql')
        self.assertEqual(u, None)

    def test_validate_v4(self):
        d = self.get_deployment_v4('validate.yaml')
        placement = d.get_unit_placement('mysql', {})
        feedback = placement.validate()
        self.assertEqual(feedback.get_errors(), [
            'Invalid container type: asdf service: mysql placement: asdf:0',
            'Service placement to machine not supported: mysql to asdf:0',
            'Invalid service placement: mysql to lxc:asdf',
            'Service placement to machine not supported: mysql to 1',
            'Service unit does not exist: mysql to wordpress/3',
            'Invalid service placement: mysql to asdf'])

    def test_get_unit_placement_v4_simple(self):
        d = self.get_deployment_v4('simple.yaml')
        placement = d.get_unit_placement('mysql', {})
        self.assertEqual(placement.get(0), None)
        placement = d.get_unit_placement('mediawiki', {})
        self.assertEqual(placement.get(0), None)

    def test_get_unit_placement_v4_placement(self):
        d = self.get_deployment_v4('placement.yaml')
        machines = {
            '1': 1,
            '2': 2,
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', {})
        d.set_machines(machines)
        self.assertEqual(placement.get(0), 2)
        placement = d.get_unit_placement('mediawiki', {})
        self.assertEqual(placement.get(0), 1)

    def test_get_unit_placement_v4_hulk_smash(self):
        d = self.get_deployment_v4('hulk-smash.yaml')
        machines = {
            '1': 1,
        }
        status = {
            'services': {
                'mediawiki': {
                    'units': {
                        'mediawiki/1': {'machine': 1}
                    }
                }
            }
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', status)
        self.assertEqual(placement.get(0), 1)
        placement = d.get_unit_placement('mediawiki', status)
        self.assertEqual(placement.get(0), 1)

    def test_get_unit_placement_v4_hulk_smash_nounits(self):
        d = self.get_deployment_v4('hulk-smash-nounits.yaml')
        machines = {
            '1': 1,
        }
        status = {
            'services': {
                'mediawiki': {
                    'units': {
                        'mediawiki/1': {'machine': 1}
                    }
                }
            }
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', status)
        self.assertEqual(placement.get(0), 1)
        placement = d.get_unit_placement('mediawiki', status)
        self.assertEqual(placement.get(0), 1)

    def test_get_unit_placement_v4_hulk_smash_nounits_nomachines(self):
        d = self.get_deployment_v4('hulk-smash-nounits-nomachines.yaml')
        machines = {
            '1': 1,
        }
        status = {
            'services': {
                'mediawiki': {
                    'units': {
                        'mediawiki/1': {'machine': 1}
                    }
                }
            }
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', status)
        self.assertEqual(placement.get(0), 1)
        # Since we don't have a placement, even with the status, this should
        # still be None.
        placement = d.get_unit_placement('mediawiki', status)
        self.assertEqual(placement.get(0), None)

    def test_get_unit_placement_v4_container(self):
        d = self.get_deployment_v4('container.yaml')
        machines = {
            '1': 1,
        }
        status = {
            'services': {
                'mediawiki': {
                    'units': {
                        'mediawiki/1': {'machine': 1},
                    }
                }
            }
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', status)
        self.assertEqual(placement.get(0), 'lxc:1')
        placement = d.get_unit_placement('mediawiki', status)
        self.assertEqual(placement.get(0), 1)

    def test_get_unit_placement_v4_container_new(self):
        d = self.get_deployment_v4('container-new.yaml')
        machines = {
            '1': 1,
            'mysql/0': 2
        }
        status = {
            'services': {
                'mediawiki': {
                    'units': {
                        'mediawiki/1': {'machine': 1}
                    }
                }
            }
        }
        d.set_machines(machines)
        placement = d.get_unit_placement('mysql', status)
        self.assertEqual(placement.get_new_machines_for_containers(),
                         ['mysql/0'])
        self.assertEqual(placement.get(0), 'lxc:2')
        placement = d.get_unit_placement('mediawiki', status)
        self.assertEqual(placement.get(0), 1)

    def test_multiple_relations_no_weight(self):
        data = {"relations": {"wordpress": {"consumes": ["mysql"]},
                              "nginx": {"consumes": ["wordpress"]}}}
        d = Deployment("foo", data, include_dirs=())
        self.assertEqual(
            [('nginx', 'wordpress'), ('wordpress', 'mysql')],
            list(d.get_relations()))

    def test_multiple_relations_weighted(self):
        data = {
            "relations": {
                "keystone": {
                    "weight": 100,
                    "consumes": ["mysql"]
                },
                "nova-compute": {
                    "weight": 50,
                    "consumes": ["mysql"]
                },
                "glance": {
                    "weight": 70,
                    "consumes": ["mysql"]
                },
            }
        }
        d = Deployment("foo", data, include_dirs=())
        self.assertEqual(
            [('keystone', 'mysql'), ('glance', 'mysql'),
             ('nova-compute', 'mysql')],
            list(d.get_relations()))

    def test_getting_service_names(self):
        # It is possible to retrieve the service names.
        deployment = self.get_named_deployment_v3(
            "stack-placement.yaml", "stack")
        service_names = deployment.get_service_names()
        expected_service_names = [
            'ceph', 'mysql', 'nova-compute', 'quantum', 'semper', 'verity',
            'lxc-service']
        self.assertEqual(set(expected_service_names), set(service_names))

    def test_resolve_config_handles_empty_options(self):
        """resolve_config should handle options being "empty" lp:1361883"""
        deployment = self.get_named_deployment_v3("negative.cfg", "negative")
        self.assertEqual(
            deployment.data["services"]["foo"]["options"], {})
        deployment.resolve_config()

    def test_resolve_config_handles_none_options(self):
        """resolve_config should handle options being "none" lp:1361883"""
        deployment = self.get_named_deployment_v3("negative.yaml", "negative")
        self.assertEqual(
            deployment.data["services"]["foo"]["options"], None)
        deployment.resolve_config()


juju-deployer-0.6.4/deployer/tests/base.py

import inspect
import logging
import os
import unittest
import shutil
import StringIO
import tempfile

import mock

import deployer
from deployer.config import ConfigStack

# Skip during launchpad recipe package builds (DEB_BUILD_ARCH) or if
# explicitly requested with 'TEST_OFFLINE=1'
TEST_OFFLINE = ("DEB_BUILD_ARCH" in os.environ or "TEST_OFFLINE" in os.environ)
TEST_OFFLINE_REASON = "Requires configured bzr launchpad id and network access"
skip_if_offline = unittest.skipIf(TEST_OFFLINE, TEST_OFFLINE_REASON)


class Base(unittest.TestCase):

    test_data_dir = os.path.join(
        os.path.dirname(inspect.getabsfile(deployer)), "tests", "test_data")

    @classmethod
    def setUpClass(cls):
        os.environ["JUJU_HOME"] = tempfile.mkdtemp()

    @classmethod
    def tearDownClass(cls):
        shutil.rmtree(os.environ["JUJU_HOME"])

    def get_named_deployment_v3(self, file_name, stack_name):
        """ Get v3 deployment from a test_data file.
        """
        return ConfigStack(
            [os.path.join(
                self.test_data_dir, file_name)]).get(stack_name)

    def get_deployment_v4(self, file_name):
        """Get v4 deployment from a test_data file.
        """
        f = os.path.join(self.test_data_dir, 'v4', file_name)
        return ConfigStack([f]).get(f)

    def capture_logging(self, name="", level=logging.INFO,
                        log_file=None, formatter=None):
        if log_file is None:
            log_file = StringIO.StringIO()
        log_handler = logging.StreamHandler(log_file)
        if formatter:
            log_handler.setFormatter(formatter)
        logger = logging.getLogger(name)
        logger.addHandler(log_handler)
        old_logger_level = logger.level
        logger.setLevel(level)

        @self.addCleanup
        def reset_logging():
            logger.removeHandler(log_handler)
            logger.setLevel(old_logger_level)
        return log_file

    def mkdir(self):
        d = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, d)
        return d

    def change_environment(self, **kw):
        """ """
        original_environ = dict(os.environ)

        @self.addCleanup
        def cleanup_env():
            os.environ.clear()
            os.environ.update(original_environ)

        os.environ.update(kw)


def patch_env_status(env, services, machines=()):
    """Simulate that the given mock env has the status described in services.

    This function is used so that tests do not have to wait minutes for
    service units presence when the importer is used with the given env.

    The services argument is a dict mapping service names with the number of
    their units. This will be reflected by the status returned when the
    importer adds the units (see
    "deployer.action.importer.Importer.add_unit").

    The machines argument can be used to simulate that the given machines are
    present in the Juju environment.
    """
    services_status = dict(
        (k, {'units': dict((i, {}) for i in range(v))})
        for k, v in services.items()
    )
    machines_status = dict((i, {}) for i in machines)
    env.status.side_effect = [
        # There are no services initially.
        {'services': {}, 'machines': machines_status},
        {'services': {}, 'machines': machines_status},
        # This is the workaround check for subordinate charms presence:
        # see lp:1421315 for details.
        {'services': services_status, 'machines': machines_status},
        {'services': services_status, 'machines': machines_status},
        # After we exited the workaround loop, we can just mock further
        # status results.
        mock.MagicMock(),
        mock.MagicMock(),
        mock.MagicMock(),
    ]


juju-deployer-0.6.4/deployer/tests/test_guiserver.py

"""Tests for the GUI server bundles deployment support."""

from contextlib import contextmanager
import os
import shutil
import tempfile
import unittest

import mock
import yaml

from deployer import guiserver
from deployer.feedback import Feedback
from deployer.tests.base import (
    patch_env_status,
    skip_if_offline,
)


class TestGetDefaultGuiserverOptions(unittest.TestCase):

    def setUp(self):
        self.options = guiserver.get_default_guiserver_options()

    def test_option_keys(self):
        # All the required options are returned.
        # When adding/modifying options, ensure the defaults are sane for us.
        expected_keys = set([
            'bootstrap', 'branch_only', 'configs', 'debug', 'deploy_delay',
            'deployment', 'description', 'destroy_services', 'diff',
            'find_service', 'ignore_errors', 'juju_env', 'list_deploys',
            'no_local_mods', 'no_relations', 'overrides', 'rel_wait',
            'retry_count', 'series', 'skip_unit_wait', 'terminate_machines',
            'timeout', 'update_charms', 'verbose', 'watch'
        ])
        self.assertEqual(expected_keys, set(self.options.__dict__.keys()))

    def test_option_values(self):
        # The options values are suitable to be used by the GUI server.
        # When adding/modifying options, ensure the defaults are sane for us.
        options = self.options
        self.assertFalse(options.bootstrap)
        self.assertFalse(options.branch_only)
        self.assertIsNone(options.configs)
        self.assertFalse(options.debug)
        self.assertEqual(0, options.deploy_delay)
        self.assertIsNone(options.deployment)
        self.assertFalse(options.destroy_services)
        self.assertFalse(options.diff)
        self.assertIsNone(options.find_service)
        self.assertTrue(options.ignore_errors)
        self.assertEqual(os.getenv("JUJU_ENV"), options.juju_env)
        self.assertFalse(options.list_deploys)
        self.assertTrue(options.no_local_mods)
        self.assertIsNone(options.overrides)
        self.assertEqual(60, options.rel_wait)
        self.assertEqual(0, options.retry_count)
        self.assertIsNone(options.series)
        self.assertFalse(options.terminate_machines)
        self.assertEqual(2700, options.timeout)
        self.assertFalse(options.update_charms)
        self.assertFalse(options.verbose)
        self.assertFalse(options.watch)


class TestDeploymentError(unittest.TestCase):

    def test_error(self):
        # A single error is properly stored and represented.
        exception = guiserver.DeploymentError(['bad wolf'])
        self.assertEqual(['bad wolf'], exception.errors)
        self.assertEqual('bad wolf', str(exception))

    def test_multiple_errors(self):
        # Multiple deployment errors are properly stored and represented.
        errors = ['error 1', 'error 2']
        exception = guiserver.DeploymentError(errors)
        self.assertEqual(errors, exception.errors)
        self.assertEqual('error 1\nerror 2', str(exception))


class TestGUIDeployment(unittest.TestCase):

    def setUp(self):
        # Set up a GUIDeployment instance and a Feedback object.
        self.deployment = guiserver.GUIDeployment('mybundle', 'mydata', 4)
        self.feedback = Feedback()

    def test_valid_deployment(self):
        # If the bundle is well formed, the deployment proceeds normally.
        self.assertIsNone(self.deployment._handle_feedback(self.feedback))

    def test_warnings(self):
        # Warning messages are properly logged.
        self.feedback.warn('we are the Borg')
        with mock.patch.object(self.deployment, 'log') as mock_log:
            self.deployment._handle_feedback(self.feedback)
        mock_log.warning.assert_called_once_with('we are the Borg')

    def test_errors(self):
        # A DeploymentError is raised if errors are found in the bundle.
        self.feedback.error('error 1')
        self.feedback.error('error 2')
        with self.assertRaises(guiserver.DeploymentError) as cm:
            self.deployment._handle_feedback(self.feedback)
        self.assertEqual(['error 1', 'error 2'], cm.exception.errors)


class DeployerFunctionsTestMixin(object):
    """Base set up for the functions that make use of the juju-deployer."""

    apiurl = 'wss://api.example.com:17070'
    username = 'who'
    password = 'Secret!'
    name = 'mybundle'
    bundle = yaml.safe_load("""
        services:
          wordpress:
            charm: "cs:precise/wordpress-20"
            num_units: 1
            options:
              debug: "no"
              engine: nginx
              tuning: single
            annotations:
              "gui-x": "425.347"
              "gui-y": "414.547"
          mysql:
            charm: "cs:precise/mysql-28"
            num_units: 2
            constraints:
              arch: i386
              mem: 4G
              cpu-cores: 4
            annotations:
              "gui-x": "494.347"
              "gui-y": "164.547"
        relations:
          - - "mysql:db"
            - "wordpress:db"
        series: precise
    """)

    def check_environment_life(self, mock_environment):
        """Check the calls executed on the given mock environment.

        Ensure that, in order to retrieve the list of currently deployed
        services, the environment is instantiated, connected, env.status is
        called and then the connection is closed.
        """
        mock_environment.assert_called_once_with(
            self.apiurl, self.username, self.password)
        mock_env_instance = mock_environment()
        mock_env_instance.connect.assert_called_once_with()
        mock_env_instance.status.assert_called_once_with()
        mock_env_instance.close.assert_called_once_with()

    @contextmanager
    def assert_overlapping_services(self, mock_environment):
        """Ensure a ValueError is raised in the context manager block.

        The given mock environment object is set up so that its status
        simulates an existing service. The name of this service overlaps with
        the name of one of the services in the bundle.
        """
        mock_env_instance = mock_environment()
        mock_env_instance.status.return_value = {'services': {'mysql': {}}}
        # Ensure a ValueError is raised by the code in the context block.
        with self.assertRaises(ValueError) as context_manager:
            yield
        # The error reflects the overlapping service name.
        error = str(context_manager.exception)
        self.assertEqual('service(s) already in the environment: mysql', error)
        # Even if an error occurs, the environment connection is closed.
        mock_env_instance.close.assert_called_once_with()


@skip_if_offline
@mock.patch('deployer.guiserver.GUIEnvironment')
class TestValidate(DeployerFunctionsTestMixin, unittest.TestCase):

    def test_validation(self, mock_environment):
        # The validation is correctly run.
        guiserver.validate(
            self.apiurl, self.username, self.password, self.bundle)
        # The environment is correctly instantiated and used.
        self.check_environment_life(mock_environment)

    def test_overlapping_services(self, mock_environment):
        # The validation fails if the bundle includes a service name already
        # present in the Juju environment.
        with self.assert_overlapping_services(mock_environment):
            guiserver.validate(
                self.apiurl, self.username, self.password, self.bundle)


@skip_if_offline
@mock.patch('deployer.guiserver.GUIEnvironment')
class TestImportBundle(DeployerFunctionsTestMixin, unittest.TestCase):

    # The options attribute simulates the options passed to the Importer.
    options = guiserver.get_default_guiserver_options()

    @contextmanager
    def patch_juju_home(self):
        """Patch the value used by the bundle importer as Juju home."""
        base_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base_dir)
        juju_home = os.path.join(base_dir, 'juju-home')
        with mock.patch('deployer.guiserver.JUJU_HOME', juju_home):
            try:
                yield juju_home
            finally:
                del os.environ['JUJU_HOME']

    def import_bundle(self, version=4):
        """Call the import_bundle function."""
        guiserver.import_bundle(
            self.apiurl, self.username, self.password, self.name, self.bundle,
            version, self.options)

    def cleanup_series_path(self):
        """Remove the series path created by the Deployment object."""
        if os.path.isdir('precise'):
            os.rmdir('precise')

    @mock.patch('deployer.guiserver.Importer')
    def test_importing_bundle(self, mock_importer, mock_environment):
        # The juju-deployer importer is correctly set up and run.
        with self.patch_juju_home():
            self.import_bundle()
        # The environment is correctly instantiated and used.
        self.check_environment_life(mock_environment)
        # The importer is correctly instantiated.
        self.assertEqual(1, mock_importer.call_count)
        importer_args = mock_importer.call_args[0]
        self.assertEqual(3, len(importer_args))
        env, deployment, options = importer_args
        # The first argument passed to the importer is the environment.
        self.assertIs(mock_environment(), env)
        # The second argument is the deployment object.
        self.assertIsInstance(deployment, guiserver.GUIDeployment)
        self.assertEqual(self.name, deployment.name)
        self.assertEqual(self.bundle, deployment.data)
        # The third and last argument is the options object.
        self.assertIs(self.options, options)
        # The importer is started.
        mock_importer().run.assert_called_once_with()

    def test_overlapping_services(self, mock_environment):
        # The import fails if the bundle includes a service name already
        # present in the Juju environment.
        with self.assert_overlapping_services(mock_environment):
            with self.patch_juju_home():
                self.import_bundle()

    @mock.patch('deployer.guiserver.Importer')
    def test_juju_home(self, mock_importer, mock_environment):
        # A customized Juju home is created and used during the import
        # process.
        with self.patch_juju_home() as juju_home:
            assert not os.path.isdir(juju_home), 'directory should not exist'
            # Ensure JUJU_HOME is included in the context when the Importer
            # instance is run.
            run = lambda: self.assertEqual(juju_home, os.getenv('JUJU_HOME'))
            mock_importer().run = run
            self.import_bundle()
            # The JUJU_HOME directory has been created.
            self.assertTrue(os.path.isdir(juju_home))

    @mock.patch('time.sleep')
    def test_importer_behavior(self, mock_sleep, mock_environment):
        # The importer executes the expected environment calls.
        self.addCleanup(self.cleanup_series_path)
        patch_env_status(mock_environment(), {'mysql': 1, 'wordpress': 1})
        mock_environment.reset_mock()
        with self.patch_juju_home():
            self.import_bundle(version=3)
        mock_sleep.assert_has_calls([mock.call(5.1), mock.call(60)])
        # If any of the calls below fails, then we have to change the
        # signatures of deployer.guiserver.GUIEnvironment methods.
        mock_environment.assert_called_once_with(
            self.apiurl, self.username, self.password)
        mock_environment().assert_has_calls([
            mock.call.connect(),
            mock.call.status(),
            mock.call.deploy(
                'mysql', 'cs:precise/mysql-28', '', None,
                {'arch': 'i386', 'cpu-cores': 4, 'mem': '4G'}, 2, None),
            mock.call.set_annotation(
                'mysql', {'gui-y': '164.547', 'gui-x': '494.347'}),
            mock.call.deploy(
                'wordpress', 'cs:precise/wordpress-20', '',
                {'debug': 'no', 'engine': 'nginx', 'tuning': 'single'},
                None, 1, None),
            mock.call.set_annotation(
                'wordpress', {'gui-y': '414.547', 'gui-x': '425.347'}),
            mock.call.add_units('mysql', 1),
            mock.call.add_relation('mysql:db', 'wordpress:db'),
            mock.call.close(),
        ], any_order=True)

    def test_deployment_errors(self, mock_environment):
        # A DeploymentError is raised if the deployment fails.
        bundle = {
            'services': {
                'wordpress': {
                    'charm': 'cs:precise/wordpress-20',
                    'options': {'no-such-option': 42},  # Invalid options.
                },
                'mysql': {
                    'charm': 'cs:precise/mysql-28',
                    'options': {'bad': 'wolf'},  # Invalid options.
                },
            },
        }
        version = 4
        self.addCleanup(self.cleanup_series_path)
        with self.patch_juju_home():
            with self.assertRaises(guiserver.DeploymentError) as cm:
                guiserver.import_bundle(
                    self.apiurl, self.username, self.password, self.name,
                    bundle, version, self.options)
        expected_errors = set([
            'Invalid config charm cs:precise/wordpress-20 no-such-option=42',
            'Invalid config charm cs:precise/mysql-28 bad=wolf',
        ])
        self.assertEqual(expected_errors, set(cm.exception.errors))


juju-deployer-0.6.4/deployer/tests/test_pyenv.py

import StringIO

from .base import Base
from deployer.env import watchers
from deployer.env.py import PyEnvironment
from deployer.errors import UnitErrors
from deployer.utils import setup_logging, ErrorExit


class FakePyEnvironment(PyEnvironment):

    def __init__(self, name, status):
        super(FakePyEnvironment, self).__init__(name)
        self._status = status

    def status(self):
        return self._status


class PyEnvironmentTest(Base):

    def setUp(self):
        self.output = setup_logging(
            debug=True, verbose=True, stream=StringIO.StringIO())

    def test_wait_for_units_error_no_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"wordpress":
                    {"units":
                        {"wordpress/0":
                            {"agent-state": "install-error",
                             "machine": 1},
                         },
                     },
                 },
             })
        with self.assertRaises(UnitErrors) as cm:
            env.wait_for_units(
                timeout=240, on_errors=watchers.raise_on_errors(UnitErrors))
        errors = cm.exception.errors
        self.assertEqual(1, len(errors))
        unit = errors[0]
        self.assertEqual("wordpress/0", unit["name"])
        self.assertEqual("install-error", unit["agent-state"])
        self.assertEqual(1, unit["machine"])

    def test_wait_for_units_error_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"wordpress":
                    {"units":
                        {"wordpress/0":
                            {"agent-state": "install-error",
                             "machine": 1},
                         },
                     },
                 },
             })
        with self.assertRaises(ErrorExit):
            env.wait_for_units(
                timeout=240, on_errors=watchers.exit_on_errors(env))
        output = self.output.getvalue()
        self.assertIn('The following units had errors:', output)
        self.assertIn(
            'unit: wordpress/0: machine: 1 agent-state: install-error',
            output)

    def test_wait_for_units_sub_error_no_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"wordpress":
                    {"units":
                        {"wordpress/0":
                            {"agent-state": "started",
                             "machine": 1,
                             "subordinates":
                                {"nrpe/0":
                                    {"agent-state": "install-error"},
                                 }
                             },
                         },
                     },
                 },
             })
        with self.assertRaises(UnitErrors) as cm:
            env.wait_for_units(
                timeout=240, on_errors=watchers.raise_on_errors(UnitErrors))
        errors = cm.exception.errors
        self.assertEqual(1, len(errors))
        unit = errors[0]
        self.assertEqual("nrpe/0", unit["name"])
        self.assertEqual("install-error", unit["agent-state"])
        self.assertEqual(1, unit["machine"])

    def test_wait_for_units_no_error_no_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"wordpress":
                    {"units":
                        {"wordpress/0":
                            {"agent-state": "started",
                             "machine": 1,
                             "subordinates":
                                {"nrpe/0":
                                    {"agent-state": "started"},
                                 }
                             },
                         },
                     },
                 },
             })
        try:
            env.wait_for_units(
                timeout=240, on_errors=watchers.raise_on_errors(UnitErrors))
        except UnitErrors as err:
            self.fail("Unexpected exception: %s" % err)

    def test_wait_for_units_relation_error_no_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"wordpress":
                    {"units":
                        {"wordpress/0":
                            {"agent-state": "started",
                             "machine": 1,
                             "relation-errors":
                                {"nrpe": ["nrpe/1"]},
                             },
                         },
                     },
                 },
             })
        with self.assertRaises(UnitErrors) as cm:
            env.wait_for_units(
                timeout=240, on_errors=watchers.raise_on_errors(UnitErrors))
        errors = cm.exception.errors
        self.assertEqual(1, len(errors))
        unit = errors[0]
        self.assertEqual("wordpress/0", unit["name"])
        self.assertEqual("relation-error: nrpe", unit["agent-state"])
        self.assertEqual(1, unit["machine"])

    def test_wait_for_units_subordinate_no_unit_no_error_no_exit(self):
        env = FakePyEnvironment(
            "foo",
            {"services":
                {"nrpe":
                    {"subordinate": "true",
                     "units": {},
                     },
                 },
             })
        try:
            env.wait_for_units(
                timeout=240, on_errors=watchers.raise_on_errors(UnitErrors))
        except UnitErrors as err:
            self.fail("Unexpected exception: %s" % err)


juju-deployer-0.6.4/deployer/tests/test_importer.py

import os

import mock

from deployer.config import ConfigStack
from deployer.action.importer import Importer

from base import (
    Base,
    patch_env_status,
    skip_if_offline,
)


class Options(dict):

    def __getattr__(self, key):
        return self[key]


class ImporterTest(Base):

    def setUp(self):
        self.juju_home = self.mkdir()
        self.change_environment(JUJU_HOME=self.juju_home)
        self.options = Options({
            'bootstrap': False,
            'branch_only': False,
            'configs': [os.path.join(self.test_data_dir, 'wiki.yaml')],
            'debug': True,
            'deploy_delay': 0,
            'destroy_services': None,
            'diff': False,
            'find_service': None,
            'ignore_errors': False,
            'list_deploys': False,
            'no_local_mods': True,
            'no_relations': False,
            'overrides': None,
            'rel_wait': 60,
            'retry_count': 0,
            'series': None,
            'skip_unit_wait': False,
            'terminate_machines': False,
            'timeout': 2700,
            'update_charms': False,
            'verbose': True,
            'watch': False})

    @skip_if_offline
    @mock.patch('deployer.action.importer.time')
    def test_importer(self, mock_time):
        # Trying to track down where this comes from http://pad.lv/1243827
        stack = ConfigStack(self.options.configs)
        deploy = stack.get('wiki')
        env = mock.MagicMock()
        patch_env_status(env, {'wiki': 1, 'db': 1})
        importer = Importer(env, deploy, self.options)
        importer.run()
        config = {'name': '$hi_world _are_you_there? {guess_who}'}
        self.assertEqual(
            env.method_calls[3],
            mock.call.deploy(
                'wiki', 'cs:precise/mediawiki',
                os.environ.get("JUJU_REPOSITORY", ""),
                config, None, 1, None))
        env.add_relation.assert_called_once_with('wiki', 'db')

    @skip_if_offline
    @mock.patch('deployer.action.importer.time')
    def test_importer_no_relations(self, mock_time):
        self.options.no_relations = True
        stack = ConfigStack(self.options.configs)
        deploy = stack.get('wiki')
        env = mock.MagicMock()
        patch_env_status(env, {'wiki': 1, 'db': 1})
        importer = Importer(env, deploy, self.options)
        importer.run()
        self.assertFalse(env.add_relation.called)

    @skip_if_offline
    @mock.patch('deployer.action.importer.time')
    def test_importer_add_machine_series(self, mock_time):
        self.options.configs = [
            os.path.join(self.test_data_dir, 'v4', 'series.yaml')]
        stack = ConfigStack(self.options.configs)
        deploy = stack.get(self.options.configs[0])
        env = mock.MagicMock()
        patch_env_status(env, {'mediawiki': 1, 'mysql': 1})
        importer = Importer(env, deploy, self.options)
        importer.run()
        self.assertEqual(env.add_machine.call_count, 2)
        self.assertEqual(
            env.add_machine.call_args_list[0][1],
            {'series': 'precise', 'constraints': 'mem=512M'})
        self.assertEqual(
            env.add_machine.call_args_list[1][1],
            {'series': 'trusty', 'constraints': 'mem=512M'})

    @skip_if_offline
    @mock.patch('deployer.action.importer.time')
    def test_importer_existing_machine(self, mock_time):
        self.options.configs = [
            os.path.join(self.test_data_dir, 'v4',
                         'container-existing-machine.yaml')]
        stack = ConfigStack(self.options.configs)
        deploy = stack.get(self.options.configs[0])
        env = mock.MagicMock()
        patch_env_status(env, {'mediawiki': 1, 'mysql': 1}, machines=[1])
        importer = Importer(env, deploy, self.options)
        importer.run()
        self.assertFalse(env.add_machine.called)


juju-deployer-0.6.4/deployer/tests/test_constraints.py

from deployer.service import Service

from .base import Base
from 
..utils import parse_constraints


class ConstraintsTest(Base):

    def test_constraints(self):
        data = {
            'branch': 'lp:precise/mysql',
            'constraints': "instance-type=m1.small",
        }
        s = Service('db', data)
        self.assertEquals(s.constraints, "instance-type=m1.small")

        data = {
            'branch': 'lp:precise/mysql',
            'constraints': "cpu-cores=4 mem=2048M root-disk=10G",
        }
        s = Service('db', data)
        c = parse_constraints(s.constraints)
        self.assertEquals(s.constraints, "cpu-cores=4 mem=2048M root-disk=10G")
        self.assertEqual(c['cpu-cores'], 4)
        self.assertEqual(c['mem'], 2048)
        self.assertEqual(c['root-disk'], 10 * 1024)

    def test_constraints_none(self):
        # If passed None, parse_constraints returns None.
        result = parse_constraints(None)
        self.assertIsNone(result)

    def test_constraints_dict(self):
        # The function can also accept a dict.
        value = {
            'mem': '1G',
            'cpu-cores': '5',
            'root-disk': '100',
        }
        result = parse_constraints(value)
        self.assertEqual(result['cpu-cores'], 5)
        self.assertEqual(result['mem'], 1024)
        self.assertEqual(result['root-disk'], 100)

    def test_constraints_accepts_no_spec(self):
        value = {
            'mem': '1',
            'root-disk': '10',
        }
        result = parse_constraints(value)
        self.assertEqual(result['mem'], 1)
        self.assertEqual(result['root-disk'], 10)

    def test_constraints_accept_M(self):
        value = {
            'mem': '1M',
            'root-disk': '10M',
        }
        mult = 1
        result = parse_constraints(value)
        self.assertEqual(result['mem'], 1 * mult)
        self.assertEqual(result['root-disk'], 10 * mult)

    def test_constraints_accept_G(self):
        value = {
            'mem': '1G',
            'root-disk': '10G',
        }
        result = parse_constraints(value)
        mult = 1024
        self.assertEqual(result['mem'], 1 * mult)
        self.assertEqual(result['root-disk'], 10 * mult)

    def test_constraints_accept_T(self):
        value = {
            'mem': '1T',
            'root-disk': '10T',
        }
        result = parse_constraints(value)
        mult = 1024 * 1024
        self.assertEqual(result['mem'], 1 * mult)
        self.assertEqual(result['root-disk'], 10 * mult)

    def test_constraints_accept_P(self):
        value = {
            'mem': '1P',
            'root-disk': '10P',
        }
        result = parse_constraints(value)
        mult = 1024 * 1024 * 1024
        self.assertEqual(result['mem'], 1 * mult)
        self.assertEqual(result['root-disk'], 10 * mult)

    def test_constraints_reject_other_sizes(self):
        value = {
            'mem': '1E',
        }
        with self.assertRaises(ValueError) as exc:
            result = parse_constraints(value)
        self.assertEqual('Constraint mem has invalid value 1E',
                         exc.exception.message)

    def test_other_numeric_constraints_have_no_units(self):
        # If any other numeric constraint gets a units specifier an error is
        # raised.
        keys = ['cpu-power', 'cpu-cores']
        for k in keys:
            value = {
                k: '1T',
            }
            with self.assertRaises(ValueError) as exc:
                parse_constraints(value)
            self.assertEqual('Constraint {} has invalid value 1T'.format(k),
                             exc.exception.message)

    def test_non_numerics_are_not_converted(self):
        # Constraints that expect strings are not affected by the parsing.
        keys = ['arch', 'container']
        for k in keys:
            value = {
                k: '1T',
            }
            result = parse_constraints(value)
            self.assertEqual(result[k], '1T')

    def test_tags(self):
        # Tags are provided as a list of strings.
        constraints = parse_constraints({'tags': 'foo,bar'})
        self.assertEqual({'tags': ['foo', 'bar']}, constraints)

    def test_single_tag(self):
        # A single tag is converted to a list.
        constraints = parse_constraints({'tags': 'foo'})
        self.assertEqual({'tags': ['foo']}, constraints)


juju-deployer-0.6.4/deployer/tests/test_service.py

from deployer.service import Service
from .base import Base


class ServiceTest(Base):

    def test_service(self):
        data = {
            'branch': 'lp:precise/mysql'}
        s = Service('db', data)
        self.assertEqual(s.name, "db")
        self.assertEqual(s.num_units, 1)
        self.assertEqual(s.constraints, None)
        self.assertEqual(s.config, None)

        data = {
            'branch': 'lp:precise/mysql',
            'constraints': "instance-type=m1.small",
            'options': {"services": "include-file://stack-include.yaml"},
            'num_units': 10}
        s = Service('db', data)
        self.assertEquals(s.num_units, 10)
        self.assertEquals(s.constraints, "instance-type=m1.small")
        self.assertEquals(
            s.config, {"services": "include-file://stack-include.yaml"})


juju-deployer-0.6.4/deployer/tests/test_guienv.py

"""Tests for the GUIEnvironment."""

import unittest

import mock

from deployer.env.gui import (
    GUIEnvironment,
)


@mock.patch('deployer.env.gui.EnvironmentClient')
class TestGUIEnvironment(unittest.TestCase):

    endpoint = 'wss://api.example.com:17070'
    username = 'who'
    password = 'Secret!'

    def setUp(self):
        self.env = GUIEnvironment(self.endpoint, self.username, self.password)

    def test_connect(self, mock_client):
        # The environment uses the provided endpoint and password to connect
        # to the Juju API server.
        self.env.connect()
        mock_client.assert_called_once_with(self.endpoint)
        mock_client().login.assert_called_once_with(
            self.password, user=self.username)

    def test_multiple_connections(self, mock_client):
        # The environment does not attempt a second connection if it is
        # already connected to the API backend.
self.env.connect() self.env.connect() self.assertEqual(1, mock_client.call_count) def test_close(self, mock_client): # The client attribute is set to None when the connection is closed. self.env.connect() self.env.close() self.assertIsNone(self.env.client) def test_deploy(self, mock_client): # The environment uses the API to deploy charms. # Constraints are converted to numeric values before calling the # client deploy. self.env.connect() config = {'foo': 'bar'} constraints = {'cpu-cores': '4', 'mem': '5G'} # Deploy a service: the repo argument is ignored. self.env.deploy( 'myservice', 'cs:precise/service-42', repo='/my/repo/', config=config, constraints=constraints, num_units=2, force_machine=1) expected = {'cpu-cores': 4, 'mem': 5 * 1024} mock_client().deploy.assert_called_once_with( 'myservice', 'cs:precise/service-42', config=config, constraints=expected, num_units=2, machine_spec=1) def test_deploy_unqualified_url(self, mock_client): # Show that an unqualified URL is given the latest revision. 
self.env.connect() def mocked_get_qualified_charm_url(url): return url + '-98' path = 'deployer.env.gui.get_qualified_charm_url' with mock.patch(path, mocked_get_qualified_charm_url): self.env.deploy( 'myservice', 'cs:precise/service', repo='/my/repo/') mock_client().deploy.assert_called_once_with( 'myservice', 'cs:precise/service-98', config=None, constraints=None, num_units=1, machine_spec=None) juju-deployer-0.6.4/deployer/tests/test_data/0000775000175000017500000000000012666044061025403 5ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/tests/test_data/negative.yaml0000664000175000017500000000012712600342600030055 0ustar tvansteenburghtvansteenburgh00000000000000negative: services: foo: options: # lp:1361883 juju-deployer-0.6.4/deployer/tests/test_data/negative.cfg0000664000175000017500000000023212600342600027647 0ustar tvansteenburghtvansteenburgh00000000000000{ "negative": { "services": { "foo": { # lp:1361883 "options": {} } } } } juju-deployer-0.6.4/deployer/tests/test_data/blog-haproxy-services.yaml0000664000175000017500000000043612600342600032512 0ustar tvansteenburghtvansteenburgh00000000000000- service_name: blog service_host: 0.0.0.0 service_port: 80 service_options: - mode http - option httplog - balance uri - hash-type consistent - timeout client 60000 - timeout server 60000 server_options: - check inter 2000 - rise 2 - fall 5 juju-deployer-0.6.4/deployer/tests/test_data/stack-placement-invalid-subordinate.yaml0000664000175000017500000000030012600342600035260 0ustar tvansteenburghtvansteenburgh00000000000000stack: series: precise services: nova-compute: charm: cs:precise/nova-compute to: nrpe ceph: to: lxc:nrpe=0 nrpe: charm: cs:precise/nrpe units: 2 juju-deployer-0.6.4/deployer/tests/test_data/stack-placement-invalid-2.yaml0000664000175000017500000000040312600342600033106 0ustar tvansteenburghtvansteenburgh00000000000000stack: series: precise services: nova-compute: charm: cs:precise/nova-compute units: 3 ceph: units: 3 
      to: [nova-compute, nova-compute, nova-compute]
    mysql:
      to: lxc:nova-compute
    wordpress:
      to: lxc:foobar

juju-deployer-0.6.4/deployer/tests/test_data/v4/placement-invalid-number.yaml
series: trusty
services:
  django:
    charm: cs:trusty/django
    to:
      - lxc:haproxy/bad-wolf
  haproxy:
    charm: cs:trusty/haproxy

juju-deployer-0.6.4/deployer/tests/test_data/v4/hulk-smash.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "mediawiki/0"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1:
    constraints: 'mem=512M'

juju-deployer-0.6.4/deployer/tests/test_data/v4/container-existing-machine.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "lxc:1"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1: {}

juju-deployer-0.6.4/deployer/tests/test_data/v4/placement-invalid-subordinate.yaml
series: precise
services:
  nova-compute:
    charm: cs:precise/nova-compute
    to:
      - nrpe
  ceph:
    charm: cs:precise/ceph
    to:
      - lxc:nrpe/0
  nrpe:
    charm: cs:precise/nrpe
    units: 2

juju-deployer-0.6.4/deployer/tests/test_data/v4/fill_placement.yaml
services:
  mediawiki1:
    charm: cs:precise/mediawiki-10
    num_units: 2
  mediawiki2:
    charm: cs:precise/mediawiki-10
    num_units: 2
    to:
      - 0
  mediawiki3:
    charm: cs:precise/mediawiki-10
    num_units: 2
    to:
      - mediawiki1
      - mediawiki1
machines: {}
series: precise

juju-deployer-0.6.4/deployer/tests/test_data/v4/validate.yaml
services:
  mysql:
    charm: 'cs:precise/mysql-1'
    num_units: 5
    to:
      - 'asdf:0'
      - 'lxc:asdf'
      - '1'
      - 'wordpress/3'
      - 'asdf'
  wordpress:
    charm: 'cs:precise/wordpress-1'
    num_units: 1
machines:
  3:
    constraints: ''

juju-deployer-0.6.4/deployer/tests/test_data/v4/placement.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "2"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1:
    constraints: 'mem=512M'
  2:
    constraints: 'mem=512M'

juju-deployer-0.6.4/deployer/tests/test_data/v4/hulk-smash-nounits-nomachines.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "mediawiki"
series: precise
relations:
  - - mediawiki:db
    - mysql:db

juju-deployer-0.6.4/deployer/tests/test_data/v4/hulk-smash-nounits.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "mediawiki"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1:
    constraints: 'mem=512M'

juju-deployer-0.6.4/deployer/tests/test_data/v4/container-new.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "lxc:new"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1:
    constraints: 'mem=512M'

juju-deployer-0.6.4/deployer/tests/test_data/v4/container.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    options:
      debug: false
      name: Please set name of wiki
      skin: vector
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:precise/mysql-28
    num_units: 1
    options:
      binlog-format: MIXED
      block-size: 5
      dataset-size: 80%
      flavor: distro
      ha-bindiface: eth0
      ha-mcastport: 5411
      max-connections: -1
      preferred-storage-engine: InnoDB
      query-cache-size: -1
      query-cache-type: "OFF"
      rbd-name: mysql1
      tuning-level: safest
      vip_cidr: 24
      vip_iface: eth0
    annotations:
      gui-x: "610"
      gui-y: "255"
    to:
      - "lxc:1"
series: precise
relations:
  - - mediawiki:db
    - mysql:db
machines:
  1:
    constraints: "mem=512M"

juju-deployer-0.6.4/deployer/tests/test_data/v4/series.yaml
services:
  mediawiki:
    charm: cs:precise/mediawiki-10
    num_units: 1
    annotations:
      gui-x: "609"
      gui-y: "-15"
    to:
      - "1"
  mysql:
    charm: cs:trusty/mysql-1
    num_units: 1
    to:
      - "2"
series: trusty
machines:
  1:
    series: precise
    constraints: 'mem=512M'
  2:
    constraints: 'mem=512M'

juju-deployer-0.6.4/deployer/tests/test_data/v4/simple.yaml
services:
  mediawiki:
    charm:
cs:precise/mediawiki-10 num_units: 1 options: debug: false name: Please set name of wiki skin: vector annotations: gui-x: "609" gui-y: "-15" mysql: charm: cs:precise/mysql-28 num_units: 1 options: binlog-format: MIXED block-size: 5 dataset-size: 80% flavor: distro ha-bindiface: eth0 ha-mcastport: 5411 max-connections: -1 preferred-storage-engine: InnoDB query-cache-size: -1 query-cache-type: "OFF" rbd-name: mysql1 tuning-level: safest vip_cidr: 24 vip_iface: eth0 annotations: gui-x: "610" gui-y: "255" machines: {} series: precise relations: - - mediawiki:db - mysql:db juju-deployer-0.6.4/deployer/tests/test_data/wiki.yaml0000664000175000017500000000032612600342600027217 0ustar tvansteenburghtvansteenburgh00000000000000wiki: series: precise services: wiki: charm: cs:precise/mediawiki options: name: $hi_world _are_you_there? {guess_who} db: charm: cs:precise/mysql relations: - [wiki, db]juju-deployer-0.6.4/deployer/tests/test_data/stack-include.yaml0000664000175000017500000000134512600342600031004 0ustar tvansteenburghtvansteenburgh00000000000000- service_name: thumbnail-private service_host: 0.0.0.0 service_port: 1001 service_options: ['timeout client 65000', 'timeout server 65000', 'option httpchk', 'balance leastconn'] server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 4'] - service_name: updown-download-private service_host: 0.0.0.0 service_port: 1011 service_options: ['timeout client 65000', 'timeout server 300000', 'option httpchk', 'balance leastconn'] server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 16'] - service_name: updown-upload service_host: 0.0.0.0 service_port: 1021 service_options: ['balance leastconn', 'timeout server 600000'] server_options: ['check inter 2000', 'rise 2', 'fall 5', 'maxconn 8'] juju-deployer-0.6.4/deployer/tests/test_data/stack-includes.cfg0000664000175000017500000000241412600342600030762 0ustar tvansteenburghtvansteenburgh00000000000000{ "include-config": "stack-default.cfg", 
"my-files-frontend-dev": { "inherits": "wordpress", "services": { "my-files-fe": { "constraints": "instance-type=m1.small" }, "my-files-lb": { "constraints": "instance-type=m1.small", "options": { "services": "include-file://stack-include.yaml" } }, "my-nagios": { "constraints": "instance-type=m1.small" } }, "relations": { "my-nrpe-files-fe:monitors": { "weight": 99, "consumes": ["my-nagios:monitors"] }, "my-files-fe:juju-info": { "weight": 98, "consumes": ["my-nagios:nagios"] }, "my-nrpe-files-lb:monitors": { "weight": 89, "consumes": ["my-nagios:monitors"] }, "my-files-lb:juju-info": { "weight": 88, "consumes": ["my-nagios:nagios"] }, "my-files-lb:local-monitors": { "weight": 88, "consumes": ["my-nrpe-files-lb:local-monitors"] } } } } juju-deployer-0.6.4/deployer/tests/test_data/blog.snippet0000664000175000017500000000001212600342600027707 0ustar tvansteenburghtvansteenburgh00000000000000HelloWorldjuju-deployer-0.6.4/deployer/tests/test_data/stack-placement-invalid.yaml0000664000175000017500000000040212600342600032746 0ustar tvansteenburghtvansteenburgh00000000000000stack: series: precise services: nova-compute: charm: cs:precise/nova-compute units: 3 ceph: units: 3 to: [nova-compute, nova-compute, nova-compute] mysql: to: lxc:nova-compute wordpress: to: lxc:mysql juju-deployer-0.6.4/deployer/tests/test_data/blog.yaml0000664000175000017500000000227012600342600027177 0ustar tvansteenburghtvansteenburgh00000000000000 metrics-base: services: newrelic: branch: lp:charms/precise/newrelic options: key: measureallthethings wordpress-base: series: precise services: blog: charm: cs:precise/wordpress db: charm: cs:precise/mysql wordpress-stage: series: precise inherits: - wordpress-base - metrics-base services: blog: constraints: instance-type=m1.small num_units: 3 options: tuning: optimized engine: apache wp-content: include-base64://blog.snippet cache: branch: lp:charms/precise/memcached options: size: 100 haproxy: charm_url: cs:precise/haproxy options: services: 
include-file://blog-haproxy-services.yaml relations: - [blog, db] - - blog - cache - - blog - haproxy wordpress-prod: series: precise inherits: wordpress-stage services: blog: options: engine: nginx tuning: optimized constraints: instance-type=m1.large db: constraints: instance-type=m1.large options: tuning-level: safest juju-deployer-0.6.4/deployer/tests/test_data/precise/0000775000175000017500000000000012666044061027035 5ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/tests/test_data/precise/appsrv/0000775000175000017500000000000012666044061030350 5ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/tests/test_data/precise/appsrv/metadata.yaml0000664000175000017500000000001512600342600032774 0ustar tvansteenburghtvansteenburgh00000000000000charm: true juju-deployer-0.6.4/deployer/tests/test_data/stack-default.cfg0000664000175000017500000000336012600342600030601 0ustar tvansteenburghtvansteenburgh00000000000000{ "wordpress": { "series": "precise", "services": { "wordpress": { "constraints": "instance-type=m1.small", "options": { "engine": "", "enable_modules": "proxy rewrite proxy_http proxy_balancer ssl headers", "vhost_https_template": "include-base64://stack-include.template" } }, "db": { "constraints": "instance-type=m1.small", "charm": "mysql", "options": { "tuning-level": "safest"} }, "memcached": { "constraints": "instance-type=m1.small", "charm_url": "" }, "haproxy": { }, "my-app-cache": { "constraints": "instance-type=m1.small", "options": { "x_balancer_name_allowed": "true" } }, "my-nrpe-app-cache": { }, "my-app-cache-lb": { "constraints": "instance-type=m1.small", "options": { "enable_monitoring": "true" } }, "my-nrpe-app-cache-lb": { } }, "relations": { "my-app-fe:balancer": { "weight": 100, "consumes": ["my-app-lb:website"] }, "my-app-lb:reverseproxy": { "weight": 90, "consumes": ["my-app-cache:cached-website"] }, "my-app-cache:website": { "weight": 80, "consumes": ["my-app-cache-lb:website"] } } } 
} juju-deployer-0.6.4/deployer/tests/test_data/stack-placement.yaml0000664000175000017500000000114012600342600031322 0ustar tvansteenburghtvansteenburgh00000000000000stack: series: precise services: nova-compute: charm: cs:precise/nova-compute units: 3 ceph: units: 3 to: [nova-compute, nova-compute] mysql: to: 0 quantum: units: 4 to: ["lxc:nova-compute", "lxc:nova-compute", "lxc:nova-compute", "lxc:nova-compute"] verity: to: lxc:nova-compute=2 semper: to: nova-compute=2 lxc-service: # Ensure more than nova-compute, catches lp:1357196 num_units: 5 to: [ "lxc:nova-compute=1", "lxc:nova-compute=2", "lxc:nova-compute=0", "lxc:nova-compute=0", "lxc:nova-compute=2" ] juju-deployer-0.6.4/deployer/tests/test_data/stack-include.template0000664000175000017500000000054412600342600031655 0ustar tvansteenburghtvansteenburgh00000000000000 ServerAdmin webmaster@myapp.com ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined LogLevel warn DocumentRoot /srv/myapp.com/www/root ProxyRequests off Order deny,allow Allow from all ProxyPreserveHost off "]) def write(self, files): for f in files: with open(os.path.join( self.path, f), 'w') as fh: fh.write(files[f]) self._call( ["bzr", "add", f], "Could not add file %s" % f) def commit(self, msg): self._call( ["bzr", "commit", "-m", msg], "Could not commit at %(path)s") def revert(self): self._call( ["bzr", "revert"], "Could not revert at %(path)s") def tag(self, name): self._call( ["bzr", "tag", name], "Could not tag at %(path)s") branch = update = pull = None class BzrCharmTest(Base): def setUp(self): self.repo_path = d = self.mkdir() self.series_path = os.path.join(d, "precise") os.mkdir(self.series_path) self.output = self.capture_logging( "deployer.charm", level=logging.DEBUG) def setup_vcs_charm(self): self.branch = Bzr(self.mkdir()) self.branch.init() self.branch.write( {'metadata.yaml': yaml_dump({ 'name': 'couchdb', 'summary': 'RESTful document oriented database', 'provides': { 'db': { 'interface': 
'couchdb'}}}), 'revision': '3'}) self.branch.commit('initial') self.branch.write({'revision': '4'}) self.branch.commit('next') self.branch.tag('v2') self.branch.write({'revision': '5'}) self.branch.commit('next') self.charm_data = { "charm": "couchdb", "build": None, "branch": self.branch.path, "rev": None, "charm_url": None, } def test_vcs_charm(self): self.setup_vcs_charm() params = dict(self.charm_data) charm = Charm.from_service( "scratch", self.repo_path, "precise", params) charm.fetch() self.assertEqual(charm.metadata['name'], 'couchdb') self.assertEqual(charm.vcs.get_cur_rev(), '3') charm.rev = '2' charm.update() self.assertEqual(charm.vcs.get_cur_rev(), '2') self.assertFalse(charm.is_modified()) with open(os.path.join(charm.path, 'revision'), 'w') as fh: fh.write('0') self.assertTrue(charm.is_modified()) Bzr(charm.path).revert() charm.rev = None # Update goes to latest with no revision specified charm.update() self.assertEqual(charm.vcs.get_cur_rev(), '3') def test_vcs_fetch_with_rev(self): self.setup_vcs_charm() params = dict(self.charm_data) params['branch'] = params['branch'] + '@2' charm = Charm.from_service( "scratch", self.repo_path, "precise", params) charm.fetch() self.assertEqual(charm.vcs.get_cur_rev(), '2') charms_vcs_series = [ ({"charm": "local:precise/mongodb", "branch": "lp:charms/precise/couchdb"}, 'trusty', 'precise/mongodb'), ({"series": "trusty", "charm": "couchdb", "branch": "lp:charms/precise/couchdb"}, 'precise', 'trusty/couchdb')] def test_vcs_charm_with_series(self): for data, dseries, path in self.charms_vcs_series: charm = Charm.from_service( "db", "/tmp", dseries, data) self.assertEqual( charm.path, os.path.join('/tmp', path)) self.assertEqual( charm.series_path, os.path.join('/tmp', path.split('/')[0])) def test_charm_error(self): branch = self.mkdir() params = { 'charm': 'couchdb', 'branch': "file://%s" % branch} charm = Charm.from_service( "scratch", self.repo_path, "precise", params) self.assertRaises(ErrorExit, charm.fetch) 
self.assertIn('bzr: ERROR: Not a branch: ', self.output.getvalue()) class Git(BaseGit): def __init__(self, path): super(Git, self).__init__( path, "", logging.getLogger("deployer.repo")) def init(self): self._call( ["git", "init", self.path], "Could not initialize repo at %(path)s") def write(self, files): for f in files: with open(os.path.join( self.path, f), 'w') as fh: fh.write(files[f]) self._call( ["git", "add", f], "Could not add file %s" % f) def commit(self, msg): self._call( ["git", "commit", "-m", msg], "Could not commit at %(path)s") def revert(self): self._call( ["git", "reset", "--hard"], "Could not revert at %(path)s") def tag(self, name): self._call( ["git", "tag", name], "Could not tag at %(path)s") branch = update = pull = None class GitCharmTest(Base): def setUp(self): self.repo_path = d = self.mkdir() self.series_path = os.path.join(d, "precise") os.mkdir(self.series_path) self.output = self.capture_logging( "deployer.charm", level=logging.DEBUG) def setup_vcs_charm(self): self.branch = Git(self.mkdir()) self.branch.init() self.branch.write( {'metadata.yaml': yaml_dump({ 'name': 'couchdb', 'summary': 'RESTful document oriented database', 'provides': { 'db': { 'interface': 'couchdb'}}}), 'revision': '3'}) self.branch.commit('initial') self.branch.write({'revision': '4'}) self.branch.commit('next') self.branch.tag('v2') self.tagged_revision = self.branch.get_cur_rev() self.branch.write({'revision': '5'}) self.branch.commit('next') self.charm_data = { "charm": "couchdb", "build": None, "branch": self.branch.path, "rev": None, "charm_url": None, } def test_vcs_charm(self): self.setup_vcs_charm() params = dict(self.charm_data) charm = Charm.from_service( "scratch", self.repo_path, "precise", params) charm.fetch() self.assertEqual(charm.metadata['name'], 'couchdb') HEAD = charm.vcs.get_cur_rev() self.assertFalse(charm.is_modified()) with open(os.path.join(charm.path, 'revision'), 'w') as fh: fh.write('0') self.assertTrue(charm.is_modified()) 
Git(charm.path).revert() charm.rev = None # Update goes to latest with no revision specified charm.update() self.assertEqual(charm.vcs.get_cur_rev(), HEAD) def test_vcs_fetch_with_rev(self): self.setup_vcs_charm() params = dict(self.charm_data) rev2 = self.branch._call( "git rev-parse HEAD~1".split(), self.branch.err_cur_rev, ) params['branch'] = '{}@{}'.format(params['branch'], rev2) charm = Charm.from_service( "scratch", self.repo_path, "precise", params) charm.fetch() self.assertEqual(charm.vcs.get_cur_rev(), rev2) def test_vcs_fetch_with_tag(self): self.setup_vcs_charm() params = dict(self.charm_data) params['branch'] = '{}@{}'.format(params['branch'], 'v2') charm = Charm.from_service( "scratch", self.repo_path, "precise", params) charm.fetch() self.assertEqual(charm.vcs.get_cur_rev(), self.tagged_revision) def test_vcs_fetch_with_refspec(self): self.setup_vcs_charm() params = dict(self.charm_data) params['branch'] = \ "https://review.openstack.org/openstack/charm-ceilometer" + \ "#changeref=286669/1" charm = Charm.from_service( "scratch", self.repo_path, "trusty", params) charm.fetch() self.assertEqual(charm.vcs.get_remote_origin(), "https://review.openstack.org/" + "openstack/charm-ceilometer") self.assertEqual(charm.vcs.extended_options, {'changeref': "286669/1"}) def test_charm_vcs_unknown(self): branch = self.mkdir() params = { 'charm': 'couchdb', 'branch': "%s" % branch} try: Charm.from_service( "scratch", self.repo_path, "precise", params) self.fail("should have failed, vcs ambigious") except ValueError, e: self.assertIn("Could not determine vcs backend", str(e)) juju-deployer-0.6.4/deployer/tests/mock.py0000664000175000017500000022340712600342600024732 0ustar tvansteenburghtvansteenburgh00000000000000# mock.py # Test tools for mocking and patching. 
# Copyright (C) 2007-2012 Michael Foord & the mock team # E-mail: fuzzyman AT voidspace DOT org DOT uk # mock 1.0 # http://www.voidspace.org.uk/python/mock/ # Released subject to the BSD License # Please see http://www.voidspace.org.uk/python/license.shtml # Scripts maintained at http://www.voidspace.org.uk/python/index.shtml # Comments, suggestions and bug reports welcome. __all__ = ( 'Mock', 'MagicMock', 'patch', 'sentinel', 'DEFAULT', 'ANY', 'call', 'create_autospec', 'FILTER_DIR', 'NonCallableMock', 'NonCallableMagicMock', 'mock_open', 'PropertyMock', ) __version__ = '1.0.1' import pprint import sys try: import inspect except ImportError: # for alternative platforms that # may not have inspect inspect = None try: from functools import wraps as original_wraps except ImportError: # Python 2.4 compatibility def wraps(original): def inner(f): f.__name__ = original.__name__ f.__doc__ = original.__doc__ f.__module__ = original.__module__ f.__wrapped__ = original return f return inner else: if sys.version_info[:2] >= (3, 3): wraps = original_wraps else: def wraps(func): def inner(f): f = original_wraps(func)(f) f.__wrapped__ = func return f return inner try: unicode except NameError: # Python 3 basestring = unicode = str try: long except NameError: # Python 3 long = int try: BaseException except NameError: # Python 2.4 compatibility BaseException = Exception try: next except NameError: def next(obj): return obj.next() BaseExceptions = (BaseException,) if 'java' in sys.platform: # jython import java BaseExceptions = (BaseException, java.lang.Throwable) try: _isidentifier = str.isidentifier except AttributeError: # Python 2.X import keyword import re regex = re.compile(r'^[a-z_][a-z0-9_]*$', re.I) def _isidentifier(string): if string in keyword.kwlist: return False return regex.match(string) inPy3k = sys.version_info[0] == 3 # Needed to work around Python 3 bug where use of "super" interferes with # defining __class__ as a descriptor _super = super self = 'im_self' 
builtin = '__builtin__' if inPy3k: self = '__self__' builtin = 'builtins' FILTER_DIR = True def _is_instance_mock(obj): # can't use isinstance on Mock objects because they override __class__ # The base class for all mocks is NonCallableMock return issubclass(type(obj), NonCallableMock) def _is_exception(obj): return ( isinstance(obj, BaseExceptions) or isinstance(obj, ClassTypes) and issubclass(obj, BaseExceptions) ) class _slotted(object): __slots__ = ['a'] DescriptorTypes = ( type(_slotted.a), property, ) def _getsignature(func, skipfirst, instance=False): if inspect is None: raise ImportError('inspect module not available') if isinstance(func, ClassTypes) and not instance: try: func = func.__init__ except AttributeError: return skipfirst = True elif not isinstance(func, FunctionTypes): # for classes where instance is True we end up here too try: func = func.__call__ except AttributeError: return if inPy3k: try: argspec = inspect.getfullargspec(func) except TypeError: # C function / method, possibly inherited object().__init__ return regargs, varargs, varkw, defaults, kwonly, kwonlydef, ann = argspec else: try: regargs, varargs, varkwargs, defaults = inspect.getargspec(func) except TypeError: # C function / method, possibly inherited object().__init__ return # instance methods and classmethods need to lose the self argument if getattr(func, self, None) is not None: regargs = regargs[1:] if skipfirst: # this condition and the above one are never both True - why? 
regargs = regargs[1:] if inPy3k: signature = inspect.formatargspec( regargs, varargs, varkw, defaults, kwonly, kwonlydef, ann, formatvalue=lambda value: "") else: signature = inspect.formatargspec( regargs, varargs, varkwargs, defaults, formatvalue=lambda value: "") return signature[1:-1], func def _check_signature(func, mock, skipfirst, instance=False): if not _callable(func): return result = _getsignature(func, skipfirst, instance) if result is None: return signature, func = result # can't use self because "self" is common as an argument name # unfortunately even not in the first place src = "lambda _mock_self, %s: None" % signature checksig = eval(src, {}) _copy_func_details(func, checksig) type(mock)._mock_check_sig = checksig def _copy_func_details(func, funcopy): funcopy.__name__ = func.__name__ funcopy.__doc__ = func.__doc__ #funcopy.__dict__.update(func.__dict__) funcopy.__module__ = func.__module__ if not inPy3k: funcopy.func_defaults = func.func_defaults return funcopy.__defaults__ = func.__defaults__ funcopy.__kwdefaults__ = func.__kwdefaults__ def _callable(obj): if isinstance(obj, ClassTypes): return True if getattr(obj, '__call__', None) is not None: return True return False def _is_list(obj): # checks for list or tuples # XXXX badly named! return type(obj) in (list, tuple) def _instance_callable(obj): """Given an object, return True if the object is callable. For classes, return True if instances would be callable.""" if not isinstance(obj, ClassTypes): # already an instance return getattr(obj, '__call__', None) is not None klass = obj # uses __bases__ instead of __mro__ so that we work with old style classes if klass.__dict__.get('__call__') is not None: return True for base in klass.__bases__: if _instance_callable(base): return True return False def _set_signature(mock, original, instance=False): # creates a function with signature (*args, **kwargs) that delegates to a # mock. 
It still does signature checking by calling a lambda with the same # signature as the original. if not _callable(original): return skipfirst = isinstance(original, ClassTypes) result = _getsignature(original, skipfirst, instance) if result is None: # was a C function (e.g. object().__init__ ) that can't be mocked return signature, func = result src = "lambda %s: None" % signature checksig = eval(src, {}) _copy_func_details(func, checksig) name = original.__name__ if not _isidentifier(name): name = 'funcopy' context = {'_checksig_': checksig, 'mock': mock} src = """def %s(*args, **kwargs): _checksig_(*args, **kwargs) return mock(*args, **kwargs)""" % name exec (src, context) funcopy = context[name] _setup_func(funcopy, mock) return funcopy def _setup_func(funcopy, mock): funcopy.mock = mock # can't use isinstance with mocks if not _is_instance_mock(mock): return def assert_called_with(*args, **kwargs): return mock.assert_called_with(*args, **kwargs) def assert_called_once_with(*args, **kwargs): return mock.assert_called_once_with(*args, **kwargs) def assert_has_calls(*args, **kwargs): return mock.assert_has_calls(*args, **kwargs) def assert_any_call(*args, **kwargs): return mock.assert_any_call(*args, **kwargs) def reset_mock(): funcopy.method_calls = _CallList() funcopy.mock_calls = _CallList() mock.reset_mock() ret = funcopy.return_value if _is_instance_mock(ret) and not ret is mock: ret.reset_mock() funcopy.called = False funcopy.call_count = 0 funcopy.call_args = None funcopy.call_args_list = _CallList() funcopy.method_calls = _CallList() funcopy.mock_calls = _CallList() funcopy.return_value = mock.return_value funcopy.side_effect = mock.side_effect funcopy._mock_children = mock._mock_children funcopy.assert_called_with = assert_called_with funcopy.assert_called_once_with = assert_called_once_with funcopy.assert_has_calls = assert_has_calls funcopy.assert_any_call = assert_any_call funcopy.reset_mock = reset_mock mock._mock_delegate = funcopy def 
_is_magic(name): return '__%s__' % name[2:-2] == name class _SentinelObject(object): "A unique, named, sentinel object." def __init__(self, name): self.name = name def __repr__(self): return 'sentinel.%s' % self.name class _Sentinel(object): """Access attributes to return a named object, usable as a sentinel.""" def __init__(self): self._sentinels = {} def __getattr__(self, name): if name == '__bases__': # Without this help(mock) raises an exception raise AttributeError return self._sentinels.setdefault(name, _SentinelObject(name)) sentinel = _Sentinel() DEFAULT = sentinel.DEFAULT _missing = sentinel.MISSING _deleted = sentinel.DELETED class OldStyleClass: pass ClassType = type(OldStyleClass) def _copy(value): if type(value) in (dict, list, tuple, set): return type(value)(value) return value ClassTypes = (type,) if not inPy3k: ClassTypes = (type, ClassType) _allowed_names = set( [ 'return_value', '_mock_return_value', 'side_effect', '_mock_side_effect', '_mock_parent', '_mock_new_parent', '_mock_name', '_mock_new_name' ] ) def _delegating_property(name): _allowed_names.add(name) _the_name = '_mock_' + name def _get(self, name=name, _the_name=_the_name): sig = self._mock_delegate if sig is None: return getattr(self, _the_name) return getattr(sig, name) def _set(self, value, name=name, _the_name=_the_name): sig = self._mock_delegate if sig is None: self.__dict__[_the_name] = value else: setattr(sig, name, value) return property(_get, _set) class _CallList(list): def __contains__(self, value): if not isinstance(value, list): return list.__contains__(self, value) len_value = len(value) len_self = len(self) if len_value > len_self: return False for i in range(0, len_self - len_value + 1): sub_list = self[i:i+len_value] if sub_list == value: return True return False def __repr__(self): return pprint.pformat(list(self)) def _check_and_set_parent(parent, value, name, new_name): if not _is_instance_mock(value): return False if ((value._mock_name or value._mock_new_name) or 
(value._mock_parent is not None) or (value._mock_new_parent is not None)): return False _parent = parent while _parent is not None: # setting a mock (value) as a child or return value of itself # should not modify the mock if _parent is value: return False _parent = _parent._mock_new_parent if new_name: value._mock_new_parent = parent value._mock_new_name = new_name if name: value._mock_parent = parent value._mock_name = name return True class Base(object): _mock_return_value = DEFAULT _mock_side_effect = None def __init__(self, *args, **kwargs): pass class NonCallableMock(Base): """A non-callable version of `Mock`""" def __new__(cls, *args, **kw): # every instance has its own class # so we can create magic methods on the # class without stomping on other mocks new = type(cls.__name__, (cls,), {'__doc__': cls.__doc__}) instance = object.__new__(new) return instance def __init__( self, spec=None, wraps=None, name=None, spec_set=None, parent=None, _spec_state=None, _new_name='', _new_parent=None, **kwargs ): if _new_parent is None: _new_parent = parent __dict__ = self.__dict__ __dict__['_mock_parent'] = parent __dict__['_mock_name'] = name __dict__['_mock_new_name'] = _new_name __dict__['_mock_new_parent'] = _new_parent if spec_set is not None: spec = spec_set spec_set = True self._mock_add_spec(spec, spec_set) __dict__['_mock_children'] = {} __dict__['_mock_wraps'] = wraps __dict__['_mock_delegate'] = None __dict__['_mock_called'] = False __dict__['_mock_call_args'] = None __dict__['_mock_call_count'] = 0 __dict__['_mock_call_args_list'] = _CallList() __dict__['_mock_mock_calls'] = _CallList() __dict__['method_calls'] = _CallList() if kwargs: self.configure_mock(**kwargs) _super(NonCallableMock, self).__init__( spec, wraps, name, spec_set, parent, _spec_state ) def attach_mock(self, mock, attribute): """ Attach a mock as an attribute of this one, replacing its name and parent. 
Calls to the attached mock will be recorded in the `method_calls` and `mock_calls` attributes of this one.""" mock._mock_parent = None mock._mock_new_parent = None mock._mock_name = '' mock._mock_new_name = None setattr(self, attribute, mock) def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. Only attributes on the `spec` can be fetched as attributes from the mock. If `spec_set` is True then only attributes on the spec can be set.""" self._mock_add_spec(spec, spec_set) def _mock_add_spec(self, spec, spec_set): _spec_class = None if spec is not None and not _is_list(spec): if isinstance(spec, ClassTypes): _spec_class = spec else: _spec_class = _get_class(spec) spec = dir(spec) __dict__ = self.__dict__ __dict__['_spec_class'] = _spec_class __dict__['_spec_set'] = spec_set __dict__['_mock_methods'] = spec def __get_return_value(self): ret = self._mock_return_value if self._mock_delegate is not None: ret = self._mock_delegate.return_value if ret is DEFAULT: ret = self._get_child_mock( _new_parent=self, _new_name='()' ) self.return_value = ret return ret def __set_return_value(self, value): if self._mock_delegate is not None: self._mock_delegate.return_value = value else: self._mock_return_value = value _check_and_set_parent(self, value, None, '()') __return_value_doc = "The value to be returned when the mock is called." 
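The `return_value` property above lazily creates a child mock the first time it is read, and assigning to it replaces that child. A minimal sketch of the behaviour, using `unittest.mock` (the standard-library descendant of this module; with the vendored copy, `from mock import Mock, sentinel` is the equivalent import):

```python
from unittest.mock import Mock, sentinel

m = Mock()
# reading return_value creates a child mock on first access
auto = m.return_value
assert m() is auto

# assigning return_value overrides the lazily created child
m.return_value = sentinel.result
assert m('arg') is sentinel.result
```

`sentinel.result` here is an arbitrary named sentinel, exactly the `_SentinelObject` mechanism defined earlier in this file.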
return_value = property(__get_return_value, __set_return_value, __return_value_doc) @property def __class__(self): if self._spec_class is None: return type(self) return self._spec_class called = _delegating_property('called') call_count = _delegating_property('call_count') call_args = _delegating_property('call_args') call_args_list = _delegating_property('call_args_list') mock_calls = _delegating_property('mock_calls') def __get_side_effect(self): sig = self._mock_delegate if sig is None: return self._mock_side_effect return sig.side_effect def __set_side_effect(self, value): value = _try_iter(value) sig = self._mock_delegate if sig is None: self._mock_side_effect = value else: sig.side_effect = value side_effect = property(__get_side_effect, __set_side_effect) def reset_mock(self): "Restore the mock object to its initial state." self.called = False self.call_args = None self.call_count = 0 self.mock_calls = _CallList() self.call_args_list = _CallList() self.method_calls = _CallList() for child in self._mock_children.values(): if isinstance(child, _SpecState): continue child.reset_mock() ret = self._mock_return_value if _is_instance_mock(ret) and ret is not self: ret.reset_mock() def configure_mock(self, **kwargs): """Set attributes on the mock through keyword arguments. 
Attributes plus return values and side effects can be set on child mocks using standard dot notation and unpacking a dictionary in the method call: >>> attrs = {'method.return_value': 3, 'other.side_effect': KeyError} >>> mock.configure_mock(**attrs)""" for arg, val in sorted(kwargs.items(), # we sort on the number of dots so that # attributes are set before we set attributes on # attributes key=lambda entry: entry[0].count('.')): args = arg.split('.') final = args.pop() obj = self for entry in args: obj = getattr(obj, entry) setattr(obj, final, val) def __getattr__(self, name): if name == '_mock_methods': raise AttributeError(name) elif self._mock_methods is not None: if name not in self._mock_methods or name in _all_magics: raise AttributeError("Mock object has no attribute %r" % name) elif _is_magic(name): raise AttributeError(name) result = self._mock_children.get(name) if result is _deleted: raise AttributeError(name) elif result is None: wraps = None if self._mock_wraps is not None: # XXXX should we get the attribute without triggering code # execution? wraps = getattr(self._mock_wraps, name) result = self._get_child_mock( parent=self, name=name, wraps=wraps, _new_name=name, _new_parent=self ) self._mock_children[name] = result elif isinstance(result, _SpecState): result = create_autospec( result.spec, result.spec_set, result.instance, result.parent, result.name ) self._mock_children[name] = result return result def __repr__(self): _name_list = [self._mock_new_name] _parent = self._mock_new_parent last = self dot = '.' if _name_list == ['()']: dot = '' seen = set() while _parent is not None: last = _parent _name_list.append(_parent._mock_new_name + dot) dot = '.' 
if _parent._mock_new_name == '()': dot = '' _parent = _parent._mock_new_parent # use ids here so as not to call __hash__ on the mocks if id(_parent) in seen: break seen.add(id(_parent)) _name_list = list(reversed(_name_list)) _first = last._mock_name or 'mock' if len(_name_list) > 1: if _name_list[1] not in ('()', '().'): _first += '.' _name_list[0] = _first name = ''.join(_name_list) name_string = '' if name not in ('mock', 'mock.'): name_string = ' name=%r' % name spec_string = '' if self._spec_class is not None: spec_string = ' spec=%r' if self._spec_set: spec_string = ' spec_set=%r' spec_string = spec_string % self._spec_class.__name__ return "<%s%s%s id='%s'>" % ( type(self).__name__, name_string, spec_string, id(self) ) def __dir__(self): """Filter the output of `dir(mock)` to only useful members. XXXX """ extras = self._mock_methods or [] from_type = dir(type(self)) from_dict = list(self.__dict__) if FILTER_DIR: from_type = [e for e in from_type if not e.startswith('_')] from_dict = [e for e in from_dict if not e.startswith('_') or _is_magic(e)] return sorted(set(extras + from_type + from_dict + list(self._mock_children))) def __setattr__(self, name, value): if name in _allowed_names: # property setters go through here return object.__setattr__(self, name, value) elif (self._spec_set and self._mock_methods is not None and name not in self._mock_methods and name not in self.__dict__): raise AttributeError("Mock object has no attribute '%s'" % name) elif name in _unsupported_magics: msg = 'Attempting to set unsupported magic method %r.' 
% name raise AttributeError(msg) elif name in _all_magics: if self._mock_methods is not None and name not in self._mock_methods: raise AttributeError("Mock object has no attribute '%s'" % name) if not _is_instance_mock(value): setattr(type(self), name, _get_method(name, value)) original = value value = lambda *args, **kw: original(self, *args, **kw) else: # only set _new_name and not name so that mock_calls is tracked # but not method calls _check_and_set_parent(self, value, None, name) setattr(type(self), name, value) self._mock_children[name] = value elif name == '__class__': self._spec_class = value return else: if _check_and_set_parent(self, value, name, name): self._mock_children[name] = value return object.__setattr__(self, name, value) def __delattr__(self, name): if name in _all_magics and name in type(self).__dict__: delattr(type(self), name) if name not in self.__dict__: # for magic methods that are still MagicProxy objects and # not set on the instance itself return if name in self.__dict__: object.__delattr__(self, name) obj = self._mock_children.get(name, _missing) if obj is _deleted: raise AttributeError(name) if obj is not _missing: del self._mock_children[name] self._mock_children[name] = _deleted def _format_mock_call_signature(self, args, kwargs): name = self._mock_name or 'mock' return _format_call_signature(name, args, kwargs) def _format_mock_failure_message(self, args, kwargs): message = 'Expected call: %s\nActual call: %s' expected_string = self._format_mock_call_signature(args, kwargs) call_args = self.call_args if len(call_args) == 3: call_args = call_args[1:] actual_string = self._format_mock_call_signature(*call_args) return message % (expected_string, actual_string) def assert_called_with(_mock_self, *args, **kwargs): """assert that the mock was called with the specified arguments. 
Raises an AssertionError if the args and keyword args passed in are different to the last call to the mock.""" self = _mock_self if self.call_args is None: expected = self._format_mock_call_signature(args, kwargs) raise AssertionError('Expected call: %s\nNot called' % (expected,)) if self.call_args != (args, kwargs): msg = self._format_mock_failure_message(args, kwargs) raise AssertionError(msg) def assert_called_once_with(_mock_self, *args, **kwargs): """assert that the mock was called exactly once and with the specified arguments.""" self = _mock_self if not self.call_count == 1: msg = ("Expected to be called once. Called %s times." % self.call_count) raise AssertionError(msg) return self.assert_called_with(*args, **kwargs) def assert_has_calls(self, calls, any_order=False): """assert the mock has been called with the specified calls. The `mock_calls` list is checked for the calls. If `any_order` is False (the default) then the calls must be sequential. There can be extra calls before or after the specified calls. If `any_order` is True then the calls can be in any order, but they must all appear in `mock_calls`.""" if not any_order: if calls not in self.mock_calls: raise AssertionError( 'Calls not found.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) return all_calls = list(self.mock_calls) not_found = [] for kall in calls: try: all_calls.remove(kall) except ValueError: not_found.append(kall) if not_found: raise AssertionError( '%r not all found in call list' % (tuple(not_found),) ) def assert_any_call(self, *args, **kwargs): """assert the mock has been called with the specified arguments. 
The assert passes if the mock has *ever* been called, unlike `assert_called_with` and `assert_called_once_with` that only pass if the call is the most recent one.""" kall = call(*args, **kwargs) if kall not in self.call_args_list: expected_string = self._format_mock_call_signature(args, kwargs) raise AssertionError( '%s call not found' % expected_string ) def _get_child_mock(self, **kw): """Create the child mocks for attributes and return value. By default child mocks will be the same type as the parent. Subclasses of Mock may want to override this to customize the way child mocks are made. For non-callable mocks the callable variant will be used (rather than any custom subclass).""" _type = type(self) if not issubclass(_type, CallableMixin): if issubclass(_type, NonCallableMagicMock): klass = MagicMock elif issubclass(_type, NonCallableMock) : klass = Mock else: klass = _type.__mro__[1] return klass(**kw) def _try_iter(obj): if obj is None: return obj if _is_exception(obj): return obj if _callable(obj): return obj try: return iter(obj) except TypeError: # XXXX backwards compatibility # but this will blow up on first call - so maybe we should fail early? 
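The assertion helpers above (`assert_called_with`, `assert_has_calls`, `assert_any_call`) differ in what they check: most-recent call, an ordered subsequence of `mock_calls`, or any call ever made. A sketch against `unittest.mock`, the standard-library descendant of this module:

```python
from unittest.mock import Mock, call

m = Mock()
m.add(1)
m.add(2)
m.sub(3)

m.add.assert_any_call(1)        # passes for any earlier call
m.add.assert_called_with(2)     # checks only the most recent call
# sequential subsequence of mock_calls
m.assert_has_calls([call.add(1), call.add(2)])
# with any_order=True, only membership matters
m.assert_has_calls([call.sub(3), call.add(1)], any_order=True)
```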
return obj class CallableMixin(Base): def __init__(self, spec=None, side_effect=None, return_value=DEFAULT, wraps=None, name=None, spec_set=None, parent=None, _spec_state=None, _new_name='', _new_parent=None, **kwargs): self.__dict__['_mock_return_value'] = return_value _super(CallableMixin, self).__init__( spec, wraps, name, spec_set, parent, _spec_state, _new_name, _new_parent, **kwargs ) self.side_effect = side_effect def _mock_check_sig(self, *args, **kwargs): # stub method that can be replaced with one with a specific signature pass def __call__(_mock_self, *args, **kwargs): # can't use self in-case a function / method we are mocking uses self # in the signature _mock_self._mock_check_sig(*args, **kwargs) return _mock_self._mock_call(*args, **kwargs) def _mock_call(_mock_self, *args, **kwargs): self = _mock_self self.called = True self.call_count += 1 self.call_args = _Call((args, kwargs), two=True) self.call_args_list.append(_Call((args, kwargs), two=True)) _new_name = self._mock_new_name _new_parent = self._mock_new_parent self.mock_calls.append(_Call(('', args, kwargs))) seen = set() skip_next_dot = _new_name == '()' do_method_calls = self._mock_parent is not None name = self._mock_name while _new_parent is not None: this_mock_call = _Call((_new_name, args, kwargs)) if _new_parent._mock_new_name: dot = '.' if skip_next_dot: dot = '' skip_next_dot = False if _new_parent._mock_new_name == '()': skip_next_dot = True _new_name = _new_parent._mock_new_name + dot + _new_name if do_method_calls: if _new_name == name: this_method_call = this_mock_call else: this_method_call = _Call((name, args, kwargs)) _new_parent.method_calls.append(this_method_call) do_method_calls = _new_parent._mock_parent is not None if do_method_calls: name = _new_parent._mock_name + '.' 
+ name _new_parent.mock_calls.append(this_mock_call) _new_parent = _new_parent._mock_new_parent # use ids here so as not to call __hash__ on the mocks _new_parent_id = id(_new_parent) if _new_parent_id in seen: break seen.add(_new_parent_id) ret_val = DEFAULT effect = self.side_effect if effect is not None: if _is_exception(effect): raise effect if not _callable(effect): result = next(effect) if _is_exception(result): raise result return result ret_val = effect(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value if (self._mock_wraps is not None and self._mock_return_value is DEFAULT): return self._mock_wraps(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value return ret_val class Mock(CallableMixin, NonCallableMock): """ Create a new `Mock` object. `Mock` takes several optional arguments that specify the behaviour of the Mock object: * `spec`: This can be either a list of strings or an existing object (a class or instance) that acts as the specification for the mock object. If you pass in an object then a list of strings is formed by calling dir on the object (excluding unsupported magic attributes and methods). Accessing any attribute not in this list will raise an `AttributeError`. If `spec` is an object (rather than a list of strings) then `mock.__class__` returns the class of the spec object. This allows mocks to pass `isinstance` tests. * `spec_set`: A stricter variant of `spec`. If used, attempting to *set* or get an attribute on the mock that isn't on the object passed as `spec_set` will raise an `AttributeError`. * `side_effect`: A function to be called whenever the Mock is called. See the `side_effect` attribute. Useful for raising exceptions or dynamically changing return values. The function is called with the same arguments as the mock, and unless it returns `DEFAULT`, the return value of this function is used as the return value. Alternatively `side_effect` can be an exception class or instance. 
In this case the exception will be raised when the mock is called. If `side_effect` is an iterable then each call to the mock will return the next value from the iterable. If any of the members of the iterable are exceptions they will be raised instead of returned. * `return_value`: The value returned when the mock is called. By default this is a new Mock (created on first access). See the `return_value` attribute. * `wraps`: Item for the mock object to wrap. If `wraps` is not None then calling the Mock will pass the call through to the wrapped object (returning the real result). Attribute access on the mock will return a Mock object that wraps the corresponding attribute of the wrapped object (so attempting to access an attribute that doesn't exist will raise an `AttributeError`). If the mock has an explicit `return_value` set then calls are not passed to the wrapped object and the `return_value` is returned instead. * `name`: If the mock has a name then it will be used in the repr of the mock. This can be useful for debugging. The name is propagated to child mocks. Mocks can also be called with arbitrary keyword arguments. These will be used to set attributes on the mock after it is created. 
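The constructor arguments documented above can be exercised directly; this sketch (using `unittest.mock`, the standard-library descendant of this module) shows an iterable `side_effect` and a `spec`:

```python
from unittest.mock import Mock

# side_effect as an iterable: each call yields the next value,
# and members that are exceptions are raised instead of returned
m = Mock(side_effect=[1, 2, ValueError('boom')])
assert m() == 1
assert m() == 2
try:
    m()
except ValueError:
    pass

# spec restricts the attribute surface and lets isinstance() pass;
# Greeter is a hypothetical class for illustration
class Greeter(object):
    def hello(self):
        return 'hi'

g = Mock(spec=Greeter)
assert isinstance(g, Greeter)
# accessing an attribute not on the spec raises AttributeError
```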
""" def _dot_lookup(thing, comp, import_path): try: return getattr(thing, comp) except AttributeError: __import__(import_path) return getattr(thing, comp) def _importer(target): components = target.split('.') import_path = components.pop(0) thing = __import__(import_path) for comp in components: import_path += ".%s" % comp thing = _dot_lookup(thing, comp, import_path) return thing def _is_started(patcher): # XXXX horrible return hasattr(patcher, 'is_local') class _patch(object): attribute_name = None _active_patches = set() def __init__( self, getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ): if new_callable is not None: if new is not DEFAULT: raise ValueError( "Cannot use 'new' and 'new_callable' together" ) if autospec is not None: raise ValueError( "Cannot use 'autospec' and 'new_callable' together" ) self.getter = getter self.attribute = attribute self.new = new self.new_callable = new_callable self.spec = spec self.create = create self.has_local = False self.spec_set = spec_set self.autospec = autospec self.kwargs = kwargs self.additional_patchers = [] def copy(self): patcher = _patch( self.getter, self.attribute, self.new, self.spec, self.create, self.spec_set, self.autospec, self.new_callable, self.kwargs ) patcher.attribute_name = self.attribute_name patcher.additional_patchers = [ p.copy() for p in self.additional_patchers ] return patcher def __call__(self, func): if isinstance(func, ClassTypes): return self.decorate_class(func) return self.decorate_callable(func) def decorate_class(self, klass): for attr in dir(klass): if not attr.startswith(patch.TEST_PREFIX): continue attr_value = getattr(klass, attr) if not hasattr(attr_value, "__call__"): continue patcher = self.copy() setattr(klass, attr, patcher(attr_value)) return klass def decorate_callable(self, func): if hasattr(func, 'patchings'): func.patchings.append(self) return func @wraps(func) def patched(*args, **keywargs): # don't use a with here (backwards compatability 
with Python 2.4) extra_args = [] entered_patchers = [] # can't use try...except...finally because of Python 2.4 # compatibility exc_info = tuple() try: try: for patching in patched.patchings: arg = patching.__enter__() entered_patchers.append(patching) if patching.attribute_name is not None: keywargs.update(arg) elif patching.new is DEFAULT: extra_args.append(arg) args += tuple(extra_args) return func(*args, **keywargs) except: if (patching not in entered_patchers and _is_started(patching)): # the patcher may have been started, but an exception # raised whilst entering one of its additional_patchers entered_patchers.append(patching) # Pass the exception to __exit__ exc_info = sys.exc_info() # re-raise the exception raise finally: for patching in reversed(entered_patchers): patching.__exit__(*exc_info) patched.patchings = [self] if hasattr(func, 'func_code'): # not in Python 3 patched.compat_co_firstlineno = getattr( func, "compat_co_firstlineno", func.func_code.co_firstlineno ) return patched def get_original(self): target = self.getter() name = self.attribute original = DEFAULT local = False try: original = target.__dict__[name] except (AttributeError, KeyError): original = getattr(target, name, DEFAULT) else: local = True if not self.create and original is DEFAULT: raise AttributeError( "%s does not have the attribute %r" % (target, name) ) return original, local def __enter__(self): """Perform the patch.""" new, spec, spec_set = self.new, self.spec, self.spec_set autospec, kwargs = self.autospec, self.kwargs new_callable = self.new_callable self.target = self.getter() # normalise False to None if spec is False: spec = None if spec_set is False: spec_set = None if autospec is False: autospec = None if spec is not None and autospec is not None: raise TypeError("Can't specify spec and autospec") if ((spec is not None or autospec is not None) and spec_set not in (True, None)): raise TypeError("Can't provide explicit spec_set *and* spec or autospec") original, local 
= self.get_original() if new is DEFAULT and autospec is None: inherit = False if spec is True: # set spec to the object we are replacing spec = original if spec_set is True: spec_set = original spec = None elif spec is not None: if spec_set is True: spec_set = spec spec = None elif spec_set is True: spec_set = original if spec is not None or spec_set is not None: if original is DEFAULT: raise TypeError("Can't use 'spec' with create=True") if isinstance(original, ClassTypes): # If we're patching out a class and there is a spec inherit = True Klass = MagicMock _kwargs = {} if new_callable is not None: Klass = new_callable elif spec is not None or spec_set is not None: this_spec = spec if spec_set is not None: this_spec = spec_set if _is_list(this_spec): not_callable = '__call__' not in this_spec else: not_callable = not _callable(this_spec) if not_callable: Klass = NonCallableMagicMock if spec is not None: _kwargs['spec'] = spec if spec_set is not None: _kwargs['spec_set'] = spec_set # add a name to mocks if (isinstance(Klass, type) and issubclass(Klass, NonCallableMock) and self.attribute): _kwargs['name'] = self.attribute _kwargs.update(kwargs) new = Klass(**_kwargs) if inherit and _is_instance_mock(new): # we can only tell if the instance should be callable if the # spec is not a list this_spec = spec if spec_set is not None: this_spec = spec_set if (not _is_list(this_spec) and not _instance_callable(this_spec)): Klass = NonCallableMagicMock _kwargs.pop('name') new.return_value = Klass(_new_parent=new, _new_name='()', **_kwargs) elif autospec is not None: # spec is ignored, new *must* be default, spec_set is treated # as a boolean. Should we check spec is not None and that spec_set # is a bool? if new is not DEFAULT: raise TypeError( "autospec creates the mock for you. Can't specify " "autospec and new." 
) if original is DEFAULT: raise TypeError("Can't use 'autospec' with create=True") spec_set = bool(spec_set) if autospec is True: autospec = original new = create_autospec(autospec, spec_set=spec_set, _name=self.attribute, **kwargs) elif kwargs: # can't set keyword args when we aren't creating the mock # XXXX If new is a Mock we could call new.configure_mock(**kwargs) raise TypeError("Can't pass kwargs to a mock we aren't creating") new_attr = new self.temp_original = original self.is_local = local setattr(self.target, self.attribute, new_attr) if self.attribute_name is not None: extra_args = {} if self.new is DEFAULT: extra_args[self.attribute_name] = new for patching in self.additional_patchers: arg = patching.__enter__() if patching.new is DEFAULT: extra_args.update(arg) return extra_args return new def __exit__(self, *exc_info): """Undo the patch.""" if not _is_started(self): raise RuntimeError('stop called on unstarted patcher') if self.is_local and self.temp_original is not DEFAULT: setattr(self.target, self.attribute, self.temp_original) else: delattr(self.target, self.attribute) if not self.create and not hasattr(self.target, self.attribute): # needed for proxy objects like django settings setattr(self.target, self.attribute, self.temp_original) del self.temp_original del self.is_local del self.target for patcher in reversed(self.additional_patchers): if _is_started(patcher): patcher.__exit__(*exc_info) def start(self): """Activate a patch, returning any created mock.""" result = self.__enter__() self._active_patches.add(self) return result def stop(self): """Stop an active patch.""" self._active_patches.discard(self) return self.__exit__() def _get_target(target): try: target, attribute = target.rsplit('.', 1) except (TypeError, ValueError): raise TypeError("Need a valid target to patch. 
You supplied: %r" % (target,)) getter = lambda: _importer(target) return getter, attribute def _patch_object( target, attribute, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs ): """ patch.object(target, attribute, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs) patch the named member (`attribute`) on an object (`target`) with a mock object. `patch.object` can be used as a decorator, class decorator or a context manager. Arguments `new`, `spec`, `create`, `spec_set`, `autospec` and `new_callable` have the same meaning as for `patch`. Like `patch`, `patch.object` takes arbitrary keyword arguments for configuring the mock object it creates. When used as a class decorator `patch.object` honours `patch.TEST_PREFIX` for choosing which methods to wrap. """ getter = lambda: target return _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ) def _patch_multiple(target, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs): """Perform multiple patches in a single call. It takes the object to be patched (either as an object or a string to fetch the object by importing) and keyword arguments for the patches:: with patch.multiple(settings, FIRST_PATCH='one', SECOND_PATCH='two'): ... Use `DEFAULT` as the value if you want `patch.multiple` to create mocks for you. In this case the created mocks are passed into a decorated function by keyword, and a dictionary is returned when `patch.multiple` is used as a context manager. `patch.multiple` can be used as a decorator, class decorator or a context manager. The arguments `spec`, `spec_set`, `create`, `autospec` and `new_callable` have the same meaning as for `patch`. These arguments will be applied to *all* patches done by `patch.multiple`. When used as a class decorator `patch.multiple` honours `patch.TEST_PREFIX` for choosing which methods to wrap. 
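As the `patch.object` docstring above describes, the named attribute is swapped for a mock and restored on exit. A minimal sketch (using `unittest.mock`, the standard-library descendant of this module; `Service` is a hypothetical class for illustration):

```python
from unittest.mock import patch

class Service(object):
    def fetch(self):
        return 'real'

svc = Service()
with patch.object(svc, 'fetch', return_value='fake') as mocked:
    assert svc.fetch() == 'fake'

# the mock recorded the call, and the original attribute is restored
mocked.assert_called_once_with()
assert svc.fetch() == 'real'
```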
""" if type(target) in (unicode, str): getter = lambda: _importer(target) else: getter = lambda: target if not kwargs: raise ValueError( 'Must supply at least one keyword argument with patch.multiple' ) # need to wrap in a list for python 3, where items is a view items = list(kwargs.items()) attribute, new = items[0] patcher = _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, {} ) patcher.attribute_name = attribute for attribute, new in items[1:]: this_patcher = _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, {} ) this_patcher.attribute_name = attribute patcher.additional_patchers.append(this_patcher) return patcher def patch( target, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs ): """ `patch` acts as a function decorator, class decorator or a context manager. Inside the body of the function or with statement, the `target` is patched with a `new` object. When the function/with statement exits the patch is undone. If `new` is omitted, then the target is replaced with a `MagicMock`. If `patch` is used as a decorator and `new` is omitted, the created mock is passed in as an extra argument to the decorated function. If `patch` is used as a context manager the created mock is returned by the context manager. `target` should be a string in the form `'package.module.ClassName'`. The `target` is imported and the specified object replaced with the `new` object, so the `target` must be importable from the environment you are calling `patch` from. The target is imported when the decorated function is executed, not at decoration time. The `spec` and `spec_set` keyword arguments are passed to the `MagicMock` if patch is creating one for you. In addition you can pass `spec=True` or `spec_set=True`, which causes patch to pass in the object being mocked as the spec/spec_set object. 
`new_callable` allows you to specify a different class, or callable object, that will be called to create the `new` object. By default `MagicMock` is used. A more powerful form of `spec` is `autospec`. If you set `autospec=True` then the mock with be created with a spec from the object being replaced. All attributes of the mock will also have the spec of the corresponding attribute of the object being replaced. Methods and functions being mocked will have their arguments checked and will raise a `TypeError` if they are called with the wrong signature. For mocks replacing a class, their return value (the 'instance') will have the same spec as the class. Instead of `autospec=True` you can pass `autospec=some_object` to use an arbitrary object as the spec instead of the one being replaced. By default `patch` will fail to replace attributes that don't exist. If you pass in `create=True`, and the attribute doesn't exist, patch will create the attribute for you when the patched function is called, and delete it again afterwards. This is useful for writing tests against attributes that your production code creates at runtime. It is off by by default because it can be dangerous. With it switched on you can write passing tests against APIs that don't actually exist! Patch can be used as a `TestCase` class decorator. It works by decorating each test method in the class. This reduces the boilerplate code when your test methods share a common patchings set. `patch` finds tests by looking for method names that start with `patch.TEST_PREFIX`. By default this is `test`, which matches the way `unittest` finds tests. You can specify an alternative prefix by setting `patch.TEST_PREFIX`. Patch can be used as a context manager, with the with statement. Here the patching applies to the indented block after the with statement. If you use "as" then the patched object will be bound to the name after the "as"; very useful if `patch` is creating a mock object for you. 
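The context-manager form described above, with the dotted-string target, looks like this in practice (a sketch using `unittest.mock`, the standard-library descendant of this module):

```python
import os
from unittest.mock import patch

# 'os.getcwd' is imported and replaced for the duration of the block
with patch('os.getcwd', return_value='/nowhere') as fake_cwd:
    assert os.getcwd() == '/nowhere'

# the patch is undone on exit, and the mock kept its call record
fake_cwd.assert_called_once_with()
```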
`patch` takes arbitrary keyword arguments. These will be passed to the `Mock` (or `new_callable`) on construction. `patch.dict(...)`, `patch.multiple(...)` and `patch.object(...)` are available for alternate use-cases. """ getter, attribute = _get_target(target) return _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ) class _patch_dict(object): """ Patch a dictionary, or dictionary like object, and restore the dictionary to its original state after the test. `in_dict` can be a dictionary or a mapping like container. If it is a mapping then it must at least support getting, setting and deleting items plus iterating over keys. `in_dict` can also be a string specifying the name of the dictionary, which will then be fetched by importing it. `values` can be a dictionary of values to set in the dictionary. `values` can also be an iterable of `(key, value)` pairs. If `clear` is True then the dictionary will be cleared before the new values are set. `patch.dict` can also be called with arbitrary keyword arguments to set values in the dictionary:: with patch.dict('sys.modules', mymodule=Mock(), other_module=Mock()): ... `patch.dict` can be used as a context manager, decorator or class decorator. When used as a class decorator `patch.dict` honours `patch.TEST_PREFIX` for choosing which methods to wrap. """ def __init__(self, in_dict, values=(), clear=False, **kwargs): if isinstance(in_dict, basestring): in_dict = _importer(in_dict) self.in_dict = in_dict # support any argument supported by dict(...) 
constructor self.values = dict(values) self.values.update(kwargs) self.clear = clear self._original = None def __call__(self, f): if isinstance(f, ClassTypes): return self.decorate_class(f) @wraps(f) def _inner(*args, **kw): self._patch_dict() try: return f(*args, **kw) finally: self._unpatch_dict() return _inner def decorate_class(self, klass): for attr in dir(klass): attr_value = getattr(klass, attr) if (attr.startswith(patch.TEST_PREFIX) and hasattr(attr_value, "__call__")): decorator = _patch_dict(self.in_dict, self.values, self.clear) decorated = decorator(attr_value) setattr(klass, attr, decorated) return klass def __enter__(self): """Patch the dict.""" self._patch_dict() def _patch_dict(self): values = self.values in_dict = self.in_dict clear = self.clear try: original = in_dict.copy() except AttributeError: # dict like object with no copy method # must support iteration over keys original = {} for key in in_dict: original[key] = in_dict[key] self._original = original if clear: _clear_dict(in_dict) try: in_dict.update(values) except AttributeError: # dict like object with no update method for key in values: in_dict[key] = values[key] def _unpatch_dict(self): in_dict = self.in_dict original = self._original _clear_dict(in_dict) try: in_dict.update(original) except AttributeError: for key in original: in_dict[key] = original[key] def __exit__(self, *args): """Unpatch the dict.""" self._unpatch_dict() return False start = __enter__ stop = __exit__ def _clear_dict(in_dict): try: in_dict.clear() except AttributeError: keys = list(in_dict) for key in keys: del in_dict[key] def _patch_stopall(): """Stop all active patches.""" for patch in list(_patch._active_patches): patch.stop() patch.object = _patch_object patch.dict = _patch_dict patch.multiple = _patch_multiple patch.stopall = _patch_stopall patch.TEST_PREFIX = 'test' magic_methods = ( "lt le gt ge eq ne " "getitem setitem delitem " "len contains iter " "hash str sizeof " "enter exit " "divmod neg pos abs 
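The `patch.dict` behaviour implemented above (snapshot, update, restore) can be seen with a plain dictionary; a sketch using `unittest.mock`, the standard-library descendant of this module:

```python
from unittest.mock import patch

config = {'db': 'prod'}

# inside the block the dict holds the merged values
with patch.dict(config, {'db': 'test', 'debug': True}):
    assert config == {'db': 'test', 'debug': True}

# on exit the original contents are restored
assert config == {'db': 'prod'}
```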
invert " "complex int float index " "trunc floor ceil " ) numerics = "add sub mul div floordiv mod lshift rshift and xor or pow " inplace = ' '.join('i%s' % n for n in numerics.split()) right = ' '.join('r%s' % n for n in numerics.split()) extra = '' if inPy3k: extra = 'bool next ' else: extra = 'unicode long nonzero oct hex truediv rtruediv ' # not including __prepare__, __instancecheck__, __subclasscheck__ # (as they are metaclass methods) # __del__ is not supported at all as it causes problems if it exists _non_defaults = set('__%s__' % method for method in [ 'cmp', 'getslice', 'setslice', 'coerce', 'subclasses', 'format', 'get', 'set', 'delete', 'reversed', 'missing', 'reduce', 'reduce_ex', 'getinitargs', 'getnewargs', 'getstate', 'setstate', 'getformat', 'setformat', 'repr', 'dir' ]) def _get_method(name, func): "Turns a callable object (like a mock) into a real function" def method(self, *args, **kw): return func(self, *args, **kw) method.__name__ = name return method _magics = set( '__%s__' % method for method in ' '.join([magic_methods, numerics, inplace, right, extra]).split() ) _all_magics = _magics | _non_defaults _unsupported_magics = set([ '__getattr__', '__setattr__', '__init__', '__new__', '__prepare__', '__instancecheck__', '__subclasscheck__', '__del__' ]) _calculate_return_value = { '__hash__': lambda self: object.__hash__(self), '__str__': lambda self: object.__str__(self), '__sizeof__': lambda self: object.__sizeof__(self), '__unicode__': lambda self: unicode(object.__str__(self)), } _return_values = { '__lt__': NotImplemented, '__gt__': NotImplemented, '__le__': NotImplemented, '__ge__': NotImplemented, '__int__': 1, '__contains__': False, '__len__': 0, '__exit__': False, '__complex__': 1j, '__float__': 1.0, '__bool__': True, '__nonzero__': True, '__oct__': '1', '__hex__': '0x1', '__long__': long(1), '__index__': 1, } def _get_eq(self): def __eq__(other): ret_val = self.__eq__._mock_return_value if ret_val is not DEFAULT: return ret_val return 
self is other return __eq__ def _get_ne(self): def __ne__(other): ret_val = self.__ne__._mock_return_value if ret_val is not DEFAULT: return ret_val return self is not other return __ne__ def _get_iter(self): def __iter__(): ret_val = self.__iter__._mock_return_value if ret_val is DEFAULT: return iter([]) # if ret_val was already an iterator, then calling iter on it should # return the iterator unchanged return iter(ret_val) return __iter__ _side_effect_methods = { '__eq__': _get_eq, '__ne__': _get_ne, '__iter__': _get_iter, } def _set_return_value(mock, method, name): fixed = _return_values.get(name, DEFAULT) if fixed is not DEFAULT: method.return_value = fixed return return_calculator = _calculate_return_value.get(name) if return_calculator is not None: try: return_value = return_calculator(mock) except AttributeError: # the attribute needed to compute the return value is missing; # surface the AttributeError as the return value instead return_value = AttributeError(name) method.return_value = return_value return side_effector = _side_effect_methods.get(name) if side_effector is not None: method.side_effect = side_effector(mock) class MagicMixin(object): def __init__(self, *args, **kw): _super(MagicMixin, self).__init__(*args, **kw) self._mock_set_magics() def _mock_set_magics(self): these_magics = _magics if self._mock_methods is not None: these_magics = _magics.intersection(self._mock_methods) remove_magics = _magics - these_magics for entry in remove_magics: if entry in type(self).__dict__: # remove unneeded magic methods delattr(self, entry) # don't overwrite existing attributes if called a second time these_magics = these_magics - set(type(self).__dict__) _type = type(self) for entry in these_magics: setattr(_type, entry, MagicProxy(entry, self)) class NonCallableMagicMock(MagicMixin, NonCallableMock): """A version of `MagicMock` that isn't callable.""" def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. 
Only attributes on the `spec` can be fetched as attributes from the mock. If `spec_set` is True then only attributes on the spec can be set.""" self._mock_add_spec(spec, spec_set) self._mock_set_magics() class MagicMock(MagicMixin, Mock): """ MagicMock is a subclass of Mock with default implementations of most of the magic methods. You can use MagicMock without having to configure the magic methods yourself. If you use the `spec` or `spec_set` arguments then *only* magic methods that exist in the spec will be created. Attributes and the return value of a `MagicMock` will also be `MagicMocks`. """ def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. Only attributes on the `spec` can be fetched as attributes from the mock. If `spec_set` is True then only attributes on the spec can be set.""" self._mock_add_spec(spec, spec_set) self._mock_set_magics() class MagicProxy(object): def __init__(self, name, parent): self.name = name self.parent = parent def __call__(self, *args, **kwargs): m = self.create_mock() return m(*args, **kwargs) def create_mock(self): entry = self.name parent = self.parent m = parent._get_child_mock(name=entry, _new_name=entry, _new_parent=parent) setattr(parent, entry, m) _set_return_value(parent, m, entry) return m def __get__(self, obj, _type=None): return self.create_mock() class _ANY(object): "A helper object that compares equal to everything." 
def __eq__(self, other): return True def __ne__(self, other): return False def __repr__(self): return '<ANY>' ANY = _ANY() def _format_call_signature(name, args, kwargs): message = '%s(%%s)' % name formatted_args = '' args_string = ', '.join([repr(arg) for arg in args]) kwargs_string = ', '.join([ '%s=%r' % (key, value) for key, value in kwargs.items() ]) if args_string: formatted_args = args_string if kwargs_string: if formatted_args: formatted_args += ', ' formatted_args += kwargs_string return message % formatted_args class _Call(tuple): """ A tuple for holding the results of a call to a mock, either in the form `(args, kwargs)` or `(name, args, kwargs)`. If args or kwargs are empty then a call tuple will compare equal to a tuple without those values. This makes comparisons less verbose:: _Call(('name', (), {})) == ('name',) _Call(('name', (1,), {})) == ('name', (1,)) _Call(((), {'a': 'b'})) == ({'a': 'b'},) The `_Call` object provides a useful shortcut for comparing with call:: _Call(((1, 2), {'a': 3})) == call(1, 2, a=3) _Call(('foo', (1, 2), {'a': 3})) == call.foo(1, 2, a=3) If the _Call has no name then it will match any name. 
""" def __new__(cls, value=(), name=None, parent=None, two=False, from_kall=True): name = '' args = () kwargs = {} _len = len(value) if _len == 3: name, args, kwargs = value elif _len == 2: first, second = value if isinstance(first, basestring): name = first if isinstance(second, tuple): args = second else: kwargs = second else: args, kwargs = first, second elif _len == 1: value, = value if isinstance(value, basestring): name = value elif isinstance(value, tuple): args = value else: kwargs = value if two: return tuple.__new__(cls, (args, kwargs)) return tuple.__new__(cls, (name, args, kwargs)) def __init__(self, value=(), name=None, parent=None, two=False, from_kall=True): self.name = name self.parent = parent self.from_kall = from_kall def __eq__(self, other): if other is ANY: return True try: len_other = len(other) except TypeError: return False self_name = '' if len(self) == 2: self_args, self_kwargs = self else: self_name, self_args, self_kwargs = self other_name = '' if len_other == 0: other_args, other_kwargs = (), {} elif len_other == 3: other_name, other_args, other_kwargs = other elif len_other == 1: value, = other if isinstance(value, tuple): other_args = value other_kwargs = {} elif isinstance(value, basestring): other_name = value other_args, other_kwargs = (), {} else: other_args = () other_kwargs = value else: # len 2 # could be (name, args) or (name, kwargs) or (args, kwargs) first, second = other if isinstance(first, basestring): other_name = first if isinstance(second, tuple): other_args, other_kwargs = second, {} else: other_args, other_kwargs = (), second else: other_args, other_kwargs = first, second if self_name and other_name != self_name: return False # this order is important for ANY to work! 
return (other_args, other_kwargs) == (self_args, self_kwargs) def __ne__(self, other): return not self.__eq__(other) def __call__(self, *args, **kwargs): if self.name is None: return _Call(('', args, kwargs), name='()') name = self.name + '()' return _Call((self.name, args, kwargs), name=name, parent=self) def __getattr__(self, attr): if self.name is None: return _Call(name=attr, from_kall=False) name = '%s.%s' % (self.name, attr) return _Call(name=name, parent=self, from_kall=False) def __repr__(self): if not self.from_kall: name = self.name or 'call' if name.startswith('()'): name = 'call%s' % name return name if len(self) == 2: name = 'call' args, kwargs = self else: name, args, kwargs = self if not name: name = 'call' elif not name.startswith('()'): name = 'call.%s' % name else: name = 'call%s' % name return _format_call_signature(name, args, kwargs) def call_list(self): """For a call object that represents multiple calls, `call_list` returns a list of all the intermediate calls as well as the final call.""" vals = [] thing = self while thing is not None: if thing.from_kall: vals.append(thing) thing = thing.parent return _CallList(reversed(vals)) call = _Call(from_kall=False) def create_autospec(spec, spec_set=False, instance=False, _parent=None, _name=None, **kwargs): """Create a mock object using another object as a spec. Attributes on the mock will use the corresponding attribute on the `spec` object as their spec. Functions or methods being mocked will have their arguments checked to check that they are called with the correct signature. If `spec_set` is True then attempting to set attributes that don't exist on the spec object will raise an `AttributeError`. If a class is used as a spec then the return value of the mock (the instance of the class) will have the same spec. You can use a class as the spec for an instance object by passing `instance=True`. The returned mock will only be callable if instances of the mock are callable. 
`create_autospec` also takes arbitrary keyword arguments that are passed to the constructor of the created mock.""" if _is_list(spec): # can't pass a list instance to the mock constructor as it will be # interpreted as a list of strings spec = type(spec) is_type = isinstance(spec, ClassTypes) _kwargs = {'spec': spec} if spec_set: _kwargs = {'spec_set': spec} elif spec is None: # None we mock with a normal mock without a spec _kwargs = {} _kwargs.update(kwargs) Klass = MagicMock if type(spec) in DescriptorTypes: # descriptors don't have a spec # because we don't know what type they return _kwargs = {} elif not _callable(spec): Klass = NonCallableMagicMock elif is_type and instance and not _instance_callable(spec): Klass = NonCallableMagicMock _new_name = _name if _parent is None: # for a top level object no _new_name should be set _new_name = '' mock = Klass(parent=_parent, _new_parent=_parent, _new_name=_new_name, name=_name, **_kwargs) if isinstance(spec, FunctionTypes): # should only happen at the top level because we don't # recurse for functions mock = _set_signature(mock, spec) else: _check_signature(spec, mock, is_type, instance) if _parent is not None and not instance: _parent._mock_children[_name] = mock if is_type and not instance and 'return_value' not in kwargs: mock.return_value = create_autospec(spec, spec_set, instance=True, _name='()', _parent=mock) for entry in dir(spec): if _is_magic(entry): # MagicMock already does the useful magic methods for us continue if isinstance(spec, FunctionTypes) and entry in FunctionAttributes: # allow a mock to actually be a function continue # XXXX do we need a better way of getting attributes without # triggering code execution (?) Probably not - we need the actual # object to mock it so we would rather trigger a property than mock # the property descriptor. Likewise we want to mock out dynamically # provided attributes. # XXXX what about attributes that raise exceptions other than # AttributeError on being fetched? 
# we could be resilient against it, or catch and propagate the # exception when the attribute is fetched from the mock try: original = getattr(spec, entry) except AttributeError: continue kwargs = {'spec': original} if spec_set: kwargs = {'spec_set': original} if not isinstance(original, FunctionTypes): new = _SpecState(original, spec_set, mock, entry, instance) mock._mock_children[entry] = new else: parent = mock if isinstance(spec, FunctionTypes): parent = mock.mock new = MagicMock(parent=parent, name=entry, _new_name=entry, _new_parent=parent, **kwargs) mock._mock_children[entry] = new skipfirst = _must_skip(spec, entry, is_type) _check_signature(original, new, skipfirst=skipfirst) # so functions created with _set_signature become instance attributes, # *plus* their underlying mock exists in _mock_children of the parent # mock. Adding to _mock_children may be unnecessary where we are also # setting as an instance attribute? if isinstance(new, FunctionTypes): setattr(mock, entry, new) return mock def _must_skip(spec, entry, is_type): if not isinstance(spec, ClassTypes): if entry in getattr(spec, '__dict__', {}): # instance attribute - shouldn't skip return False spec = spec.__class__ if not hasattr(spec, '__mro__'): # old style class: can't have descriptors anyway return is_type for klass in spec.__mro__: result = klass.__dict__.get(entry, DEFAULT) if result is DEFAULT: continue if isinstance(result, (staticmethod, classmethod)): return False return is_type # shouldn't get here unless function is a dynamically provided attribute # XXXX untested behaviour return is_type def _get_class(obj): try: return obj.__class__ except AttributeError: # in Python 2, _sre.SRE_Pattern objects have no __class__ return type(obj) class _SpecState(object): def __init__(self, spec, spec_set=False, parent=None, name=None, ids=None, instance=False): self.spec = spec self.ids = ids self.spec_set = spec_set self.parent = parent self.instance = instance self.name = name FunctionTypes = ( 
# python function type(create_autospec), # instance method type(ANY.__eq__), # unbound method type(_ANY.__eq__), ) FunctionAttributes = set([ 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name', ]) file_spec = None def mock_open(mock=None, read_data=''): """ A helper function to create a mock to replace the use of `open`. It works for `open` called directly or used as a context manager. The `mock` argument is the mock object to configure. If `None` (the default) then a `MagicMock` will be created for you, with the API limited to methods or attributes available on standard file handles. `read_data` is a string for the `read` method of the file handle to return. This is an empty string by default. """ global file_spec if file_spec is None: # set on first use if inPy3k: import _io file_spec = list(set(dir(_io.TextIOWrapper)).union(set(dir(_io.BytesIO)))) else: file_spec = file if mock is None: mock = MagicMock(name='open', spec=open) handle = MagicMock(spec=file_spec) handle.write.return_value = None handle.__enter__.return_value = handle handle.read.return_value = read_data mock.return_value = handle return mock class PropertyMock(Mock): """ A mock intended to be used as a property, or other descriptor, on a class. `PropertyMock` provides `__get__` and `__set__` methods so you can specify a return value when it is fetched. Fetching a `PropertyMock` instance from an object calls the mock, with no args. Setting it calls the mock with the value being set. 
""" def _get_child_mock(self, **kwargs): return MagicMock(**kwargs) def __get__(self, obj, obj_type): return self() def __set__(self, obj, val): self(val) juju-deployer-0.6.4/deployer/tests/__init__.py0000664000175000017500000000000012600342600025516 0ustar tvansteenburghtvansteenburgh00000000000000juju-deployer-0.6.4/deployer/tests/test_diff.py0000664000175000017500000001056112600342600025743 0ustar tvansteenburghtvansteenburgh00000000000000""" Unittest for juju-deployer diff action (--diff) """ # pylint: disable=C0103 import StringIO import os import shutil import tempfile import unittest from deployer.config import ConfigStack from deployer.env.mem import MemoryEnvironment from deployer.utils import setup_logging from .base import Base, skip_if_offline, TEST_OFFLINE, TEST_OFFLINE_REASON from ..action.diff import Diff @skip_if_offline class DiffTest(Base): def setUp(self): self.output = setup_logging( debug=True, verbose=True, stream=StringIO.StringIO()) # Because fetch_charms is expensive, do it once for all tests @classmethod def setUpClass(cls): super(DiffTest, cls).setUpClass() # setUpClass not being skipped, here, could have to do with # decorator on derived class. So skip explicitly. if TEST_OFFLINE: raise unittest.SkipTest(TEST_OFFLINE_REASON) deployment = ConfigStack( [os.path.join( cls.test_data_dir, "blog.yaml")]).get('wordpress-prod') cls._dir = tempfile.mkdtemp() os.mkdir(os.path.join(cls._dir, "precise")) deployment.repo_path = cls._dir deployment.fetch_charms() deployment.resolve() cls._deployment = deployment @classmethod def tearDownClass(cls): super(DiffTest, cls).tearDownClass() # TearDownClass not being skipped, here, could have to do with # decorator on derived class. So skip explicitly. 
if TEST_OFFLINE: raise unittest.SkipTest(TEST_OFFLINE_REASON) shutil.rmtree(cls._dir) @classmethod def get_deployment(cls): """ Return saved deployment at class initialization """ return cls._deployment def test_diff_nil(self): dpl = self.get_deployment() # No changes, assert nil diff env = MemoryEnvironment(dpl.name, dpl) diff = Diff(env, dpl, {}).do_diff() self.assertEqual(diff, {}) def test_diff_num_units(self): # Removing 1 unit must show -1 'num_units' dpl = self.get_deployment() env = MemoryEnvironment(dpl.name, dpl) env.remove_unit(env.status()['services']['haproxy']['units'][0]) diff = Diff(env, dpl, {}).do_diff() self.assertEqual( diff['services']['modified']['haproxy']['num_units'], -1) # re-adding a unit -> nil diff env.add_units('haproxy', 1) diff = Diff(env, dpl, {}).do_diff() self.assertEqual(diff, {}) def test_diff_config(self): dpl = self.get_deployment() env = MemoryEnvironment(dpl.name, dpl) env.set_config('blog', {'tuning': 'bare'}) diff = Diff(env, dpl, {}).do_diff() mod_blog = diff['services']['modified']['blog'] self.assertTrue(mod_blog['env-config']['tuning'] != mod_blog['cfg-config']['tuning']) self.assertEquals(mod_blog['env-config']['tuning'], 'bare') def test_diff_config_many(self): dpl = self.get_deployment() env = MemoryEnvironment(dpl.name, dpl) env.set_config('blog', {'tuning': 'bare', 'engine': 'duck'}) diff = Diff(env, dpl, {}).do_diff() mod_blog = diff['services']['modified']['blog'] self.assertEqual( set(mod_blog['env-config'].keys()), set(['tuning', 'engine'])) self.assertTrue(mod_blog['env-config']['tuning'] != mod_blog['cfg-config']['tuning']) self.assertTrue(mod_blog['env-config']['engine'] != mod_blog['cfg-config']['engine']) def test_diff_constraints(self): dpl = self.get_deployment() env = MemoryEnvironment(dpl.name, dpl) env.set_constraints('haproxy', 'foo=bar') diff = Diff(env, dpl, {}).do_diff() mod_haproxy = diff['services']['modified']['haproxy'] self.assertTrue( mod_haproxy['env-constraints'] != 
mod_haproxy['cfg-constraints']) self.assertEqual(mod_haproxy['env-constraints'], {'foo': 'bar'}) def test_diff_service_destroy(self): dpl = self.get_deployment() env = MemoryEnvironment(dpl.name, dpl) env.destroy_service('haproxy') diff = Diff(env, dpl, {}).do_diff() self.assertTrue(str(diff['relations']['missing'][0]).find('haproxy') != -1) self.assertTrue(diff['services']['missing'].keys() == ['haproxy']) juju-deployer-0.6.4/deployer/guiserver.py0000664000175000017500000000756512600342600024657 0ustar tvansteenburghtvansteenburgh00000000000000"""Juju GUI server bundles deployment support. The following functions are used by the Juju GUI server to validate and start bundle deployments. The validate and import_bundle operations represents the public API: they are directly called in the GUI server bundles support code, which also takes care of handling any exception they can raise. Those functions are blocking, and therefore the GUI server executes them in separate processes. See . """ import os from deployer.action.importer import Importer from deployer.cli import setup_parser from deployer.deployment import Deployment from deployer.env.gui import GUIEnvironment from deployer.utils import ( DeploymentError, mkdir, ) # This value is used by the juju-deployer Importer object to store charms. # This directory is usually created in the machine where the Juju GUI charm is # deployed the first time a bundle deployment is requested. JUJU_HOME = '/var/lib/juju-gui/juju-home' def get_default_guiserver_options(): """Return the default importer options used by the GUI server.""" # Options used by the juju-deployer. The defaults work for us, except for # the ignore_errors flag. 
return setup_parser().parse_args(['--ignore-errors']) class GUIDeployment(Deployment): """Handle bundle deployments requested by the GUI server.""" def __init__(self, name, data, version): super(GUIDeployment, self).__init__(name, data, [], version=version) def _handle_feedback(self, feedback): """Raise a DeploymentError if the given feedback includes errors. The GUI server will catch and report failures propagating them through the WebSocket connection to the client. """ for message in feedback.get_warnings(): self.log.warning(message) if feedback.has_errors: # Errors are logged by the GUI server. raise DeploymentError(feedback.get_errors()) def _validate(env, bundle): """Bundle validation logic, used by both validate and import_bundle. This function receives a connected environment and the bundle as a YAML decoded object. """ # Retrieve the services deployed in the Juju environment. env_status = env.status() env_services = set(env_status['services'].keys()) # Retrieve the services in the bundle. bundle_services = set(bundle.get('services', {}).keys()) # Calculate overlapping services. overlapping = env_services.intersection(bundle_services) if overlapping: services = ', '.join(overlapping) error = 'service(s) already in the environment: {}'.format(services) raise ValueError(error) def validate(apiurl, username, password, bundle): """Validate a bundle.""" env = GUIEnvironment(apiurl, username, password) env.connect() try: _validate(env, bundle) finally: env.close() def import_bundle(apiurl, username, password, name, bundle, version, options): """Import a bundle. To connect to the Juju environment, use the given API URL, user name and password. The name and bundle arguments are used to deploy the bundle. The version argument specifies whether the given bundle content uses the legacy v3 or the new v4 bundle syntax. The given options are used to start the bundle deployment process. 
""" env = GUIEnvironment(apiurl, username, password) deployment = GUIDeployment(name, bundle, version=version) importer = Importer(env, deployment, options) env.connect() # The Importer tries to retrieve the Juju home from the JUJU_HOME # environment variable: create a customized directory (if required) and # set up the environment context for the Importer. mkdir(JUJU_HOME) os.environ['JUJU_HOME'] = JUJU_HOME try: _validate(env, bundle) importer.run() finally: env.close() juju-deployer-0.6.4/deployer/vcs.py0000664000175000017500000001117312666041305023437 0ustar tvansteenburghtvansteenburgh00000000000000import subprocess import os import re from bzrlib.workingtree import WorkingTree from .utils import ErrorExit class Vcs(object): err_update = ( "Could not update branch %(path)s from %(branch_url)s\n\n %(output)s") err_branch = "Could not branch %(branch_url)s to %(path)s\n\n %(output)s" err_is_mod = "Couldn't determine if %(path)s was modified\n\n %(output)s" err_pull = ( "Could not pull branch @ %(branch_url)s to %(path)s\n\n %(output)s") err_cur_rev = ( "Could not determine current revision %(path)s\n\n %(output)s") def __init__(self, path, origin, log): self.path = path self.log = log self.extended_options = self.get_extended_options(origin) if self.extended_options: self.origin = origin.split("#")[0] else: self.origin = origin def _call(self, args, error_msg, cwd=None, stderr=()): try: if stderr is not None and not stderr: stderr = subprocess.STDOUT output = subprocess.check_output( args, cwd=cwd or self.path, stderr=stderr) except subprocess.CalledProcessError, e: self.log.error(error_msg % self.get_err_msg_ctx(e)) raise ErrorExit() return output.strip() def get_err_msg_ctx(self, e): return { 'path': self.path, 'branch_url': self.origin, 'exit_code': e.returncode, 'output': e.output, 'vcs': self.__class__.__name__.lower()} def get_extended_options(self, origin): regexp = re.compile(r"[\?#&](?P<name>[^&=]+)=(?P<value>[^&=]+)") matched = regexp.findall(origin) if matched: 
ret = dict() for option in matched: (name, value) = option if name in ret: raise Exception("%s option already defined" % name) ret[name] = value return ret return {} def get_cur_rev(self): raise NotImplementedError() def update(self, rev=None): raise NotImplementedError() def branch(self): raise NotImplementedError() def pull(self): raise NotImplementedError() def is_modified(self): raise NotImplementedError() # upstream missing revisions? class Bzr(Vcs): def get_cur_rev(self): params = ["bzr", "revno", "--tree"] return self._call(params, self.err_cur_rev, stderr=None) def update(self, rev=None): params = ["bzr", "up"] if rev: params.extend(["-r", str(rev)]) self._call(params, self.err_update) def branch(self): params = ["bzr", "co", "--lightweight", self.origin, self.path] cwd = os.path.dirname(os.path.dirname(self.path)) if not cwd: cwd = "." self._call(params, self.err_branch, cwd) def is_modified(self): # To replace with bzr cli, we need to be able to detect # changes to a wc @ a rev or @ trunk. tree = WorkingTree.open(self.path) return tree.has_changes() class Git(Vcs): def get_cur_rev(self): params = ["git", "rev-parse", "HEAD"] return self._call(params, self.err_cur_rev) def update(self, rev=None): params = ["git", "reset", "--merge"] if rev: params.append(rev) self._call(params, self.err_update) def branch(self): params = ["git", "clone", "--depth", "1"] # Deal with branches in the format <branch_url>;<branch> components = self.origin.split(';') if len(components) == 2: params += ["--branch", components[1], components[0], self.path] else: params += [self.origin, self.path] cwd = os.path.dirname(os.path.dirname(self.path)) if not cwd: cwd = "." 
self._call(params, self.err_branch, cwd) change_ref = self.extended_options.get('changeref', None) if change_ref: change_ref = 'refs/changes/{}/{}'.format( change_ref.split("/")[0][-2:], change_ref) self._call(["git", "fetch", "--depth", "1", self.origin, change_ref], self.err_branch, self.path) self._call(["git", "checkout", "FETCH_HEAD"], self.err_branch, self.path) def is_modified(self): params = ["git", "status", "-s"] return bool(self._call(params, self.err_is_mod).strip()) def get_remote_origin(self): params = ["git", "config", "--get", "remote.origin.url"] return self._call(params, "") juju-deployer-0.6.4/deployer/utils.py0000664000175000017500000002556212647473442024025 0ustar tvansteenburghtvansteenburgh00000000000000from copy import deepcopy from contextlib import contextmanager import errno import logging from logging.config import dictConfig as logConfig import json import os from os.path import ( abspath, expanduser, isabs, isdir, join as path_join, exists as path_exists, ) import stat import subprocess import time import tempfile from urllib2 import ( HTTPError, URLError, urlopen, ) import zipfile try: from yaml import CSafeLoader, CSafeDumper SafeLoader, SafeDumper = CSafeLoader, CSafeDumper except ImportError: from yaml import SafeLoader import yaml class ErrorExit(Exception): def __init__(self, error=None): self.error = error class DeploymentError(Exception): """One or more errors occurred during the deployment preparation.""" def __init__(self, errors): self.errors = errors super(DeploymentError, self).__init__(errors) def __str__(self): return '\n'.join(self.errors) STORE_URL = "https://api.jujucharms.com/charmstore" # Utility functions def yaml_dump(value): return yaml.dump(value, default_flow_style=False) def yaml_load(value): return yaml.load(value, Loader=SafeLoader) # We're not using safe dumper because we're using other custom # representers as well. 
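The `Vcs.get_extended_options` regex above pulls optional `key=value` pairs (such as the `changeref` used by `Git.branch`) off a VCS origin URL. A minimal standalone restatement of that parsing, for illustration only (the module-level constant name and the sample URL below are not from the deployer itself):

```python
import re

# Illustrative restatement of Vcs.get_extended_options: extract
# ?key=value / #key=value / &key=value options from an origin URL.
OPTION_RE = re.compile(r"[\?#&](?P<name>[^&=]+)=(?P<value>[^&=]+)")


def get_extended_options(origin):
    ret = {}
    for name, value in OPTION_RE.findall(origin):
        if name in ret:
            raise Exception("%s option already defined" % name)
        ret[name] = value
    return ret


# A git origin carrying a Gerrit-style change ref (sample URL):
print(get_extended_options("https://example.com/repo.git#changeref=1234/5"))
# → {'changeref': '1234/5'}
```

When any options are found, the constructor strips everything after the first `#` from the origin, so the options never reach the underlying `git`/`bzr` command line.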
def _unicode_representer(dumper, uni): node = yaml.ScalarNode(tag=u'tag:yaml.org,2002:str', value=uni) return node yaml.add_representer(unicode, _unicode_representer) DEFAULT_LOGGING = """ version: 1 formatters: standard: format: '%(asctime)s %(message)s' datefmt: "%Y-%m-%d %H:%M:%S" detailed: format: '%(asctime)s [%(levelname)s] %(name)s: %(message)s' datefmt: "%Y-%m-%d %H:%M:%S" handlers: console: class: logging.StreamHandler formatter: standard level: DEBUG stream: ext://sys.stderr loggers: deployer: level: INFO propagate: true deploy.cli: level: DEBUG propagate: true deploy.charm: level: DEBUG propagate: true deploy.env: level: DEBUG propagate: true deploy.deploy: level: DEBUG propagate: true deploy.importer: level: DEBUG propagate: true "": level: INFO handlers: - console """ def setup_logging(verbose=False, debug=False, stream=None): config = yaml_load(DEFAULT_LOGGING) log_options = {} if verbose: log_options.update({"loggers": { "deployer": {"level": "DEBUG", "propagate": True}}}) if debug: log_options.update( {"handlers": {"console": {"formatter": "detailed"}}}) config = dict_merge(config, log_options) logConfig(config) # Allow tests to reuse this func to mass configure log streams. 
if stream: root = logging.getLogger() previous = root.handlers[0] root.handlers[0] = current = logging.StreamHandler(stream) current.setFormatter(previous.formatter) return stream @contextmanager def temp_file(): t = tempfile.NamedTemporaryFile() try: yield t finally: t.close() def extract_zip(zip_path, dir_path): zf = zipfile.ZipFile(zip_path, "r") for info in zf.infolist(): mode = info.external_attr >> 16 if stat.S_ISLNK(mode): source = zf.read(info.filename) target = os.path.join(dir_path, info.filename) if os.path.exists(target): os.remove(target) os.symlink(source, target) continue extract_path = zf.extract(info, dir_path) os.chmod(extract_path, mode) UNITS_DICT = { 'M': 1, 'G': 1024, 'T': 1024 * 1024, 'P': 1024 * 1024 * 1024, } def _to_number(value): """Convert a string to a numeric. The returned value is either an int or a float, depending on the input. Raises a ValueError if the value cannot be parsed. """ try: num = int(value) except ValueError: num = float(value) return num def _convert_units_specifier(value): """Convert a string that may have a unit specifier. Given a string possibly containing a unit specifier, return the string representing the number and the units multiplier. """ units = UNITS_DICT.get(value[-1], None) if units is not None: value = value[:-1] else: units = 1 return value, units def parse_constraints(value): """Parse the constraints, converting size specifiers into ints. Specifiers of 'M' and 'G' are supported. The resulting value is the number of megabytes. The input is either a string of the form "k1=v1 ... kn=vn" or a dictionary. 
""" if value is None: return value constraints_with_units = ['mem', 'root-disk'] numerics = ['cpu-cores', 'cpu-power'] + constraints_with_units lists = ['tags'] constraints = {} if isinstance(value, dict): constraints.update(value) else: pairs = value.strip().split() for item in pairs: k, v = item.split('=') constraints[k] = v for k, v in constraints.items(): if k in lists: v = v.split(',') elif k in numerics: # Currently numeric constraints are never passed as slices. For # this reason, it is safe to assume that a numeric value is always # a string and never a list. units = 1 if k in constraints_with_units: v, units = _convert_units_specifier(v) try: v = _to_number(v) * units except ValueError: raise ValueError( 'Constraint {} has invalid value {}'.format( k, constraints[k])) constraints[k] = v return constraints def _get_juju_home(): jhome = os.environ.get("JUJU_HOME") if jhome is None: jhome = path_join(os.environ.get('HOME'), '.juju') return jhome def _check_call(params, log, *args, **kw): max_retry = kw.get('max_retry', None) cur = kw.get('cur_try', 1) shell = kw.get('shell', False) try: cwd = abspath(".") if 'cwd' in kw: cwd = kw['cwd'] stderr = subprocess.STDOUT if 'stderr' in kw: stderr = kw['stderr'] output = subprocess.check_output( params, cwd=cwd, stderr=stderr, env=os.environ, shell=shell) except subprocess.CalledProcessError, e: if 'ignoreerr' in kw: return #print "subprocess error" #print " ".join(params), "\ncwd: %s\n" % cwd, "\n ".join( # ["%s=%s" % (k, os.environ[k]) for k in os.environ # if k.startswith("JUJU")]) #print e.output log.error(*args) log.error("Command (%s) Output:\n\n %s", " ".join(params), e.output) if not max_retry or cur > max_retry: raise ErrorExit(e) kw['cur_try'] = cur + 1 log.error("Retrying (%s of %s)" % (cur, max_retry)) time.sleep(1) output = _check_call(params, log, args, **kw) return output # Utils from deployer 1 def relations_combine(onto, source): target = deepcopy(onto) # Support list of relations targets if 
isinstance(onto, list) and isinstance(source, list): target.extend(source) return target for (key, value) in source.items(): if key in target: if isinstance(target[key], dict) and isinstance(value, dict): target[key] = relations_combine(target[key], value) elif isinstance(target[key], list) and isinstance(value, list): target[key] = list(set(target[key] + value)) else: target[key] = value return target def dict_merge(onto, source): target = deepcopy(onto) for (key, value) in source.items(): if key == 'relations' and key in target: target[key] = relations_combine(target[key], value) elif (key in target and isinstance(target[key], dict) and isinstance(value, dict)): target[key] = dict_merge(target[key], value) else: target[key] = value return target def resolve_include(fname, include_dirs): if isabs(fname): return fname for path in include_dirs: full_path = path_join(path, fname) if path_exists(full_path): return full_path return None def mkdir(path): """Create a leaf directory and all intermediate ones. Also expand ~ and ~user constructions. If path exists and it's a directory, return without errors. """ path = expanduser(path) try: os.makedirs(path) except OSError as err: # Re-raise the error if the target path exists but it is not a dir. if (err.errno != errno.EEXIST) or (not isdir(path)): raise def _is_qualified_charm_url(url): """Test an URL to see if it is revisioned.""" parts = url.rsplit('-', 1) return len(parts) > 1 and parts[-1].isdigit() def get_qualified_charm_url(url): """Given a charm URL, if not revisioned, return the latest revisioned URL. If the URL is already revisioned, return it. Otherwise ask the Charm store for the latest revision and return that URL. 
""" if _is_qualified_charm_url(url): return url info_url = "%s/charm-info?charms=%s" % (STORE_URL, url) try: fh = urlopen(info_url) except (HTTPError, URLError) as e: errmsg = '{} ({})'.format(e, info_url) raise DeploymentError([errmsg]) content = json.loads(fh.read()) rev = content[url]['revision'] return "%s-%d" % (url, rev) def get_env_name(param_env_name): """Get the environment name. """ if param_env_name: return param_env_name elif os.environ.get("JUJU_ENV"): return os.environ['JUJU_ENV'] juju_home = _get_juju_home() env_ptr = os.path.join(juju_home, "current-environment") if os.path.exists(env_ptr): with open(env_ptr) as fh: return fh.read().strip() with open(os.path.join(juju_home, 'environments.yaml')) as fh: conf = yaml_load(fh.read()) if not 'default' in conf: raise ValueError("No Environment specified") return conf['default'] def x_in_y(x, y): """Check to see if the second argument is named in the first argument's unit placement spec. Both arguments provided are services with unit placement directives. If the first service appears in the second service's unit placement, either colocated on a default unit, colocated with a specific unit, or containerized alongside that service, then True is returned, False otherwise. 
""" for placement in y.unit_placement: if ':' in placement: _, placement = placement.split(':') if '/' in placement: placement, _ = placement.split('/') if x.name == placement: return True return False juju-deployer-0.6.4/deployer/feedback.py0000664000175000017500000000141712600342600024356 0ustar tvansteenburghtvansteenburgh00000000000000 WARN = 3 ERROR = 7 class Feedback(object): def __init__(self): self.messages = [] self.has_errors = False def error(self, msg): self.messages.append((ERROR, msg)) self.has_errors = True def warn(self, msg): self.messages.append((WARN, msg)) def __iter__(self): return iter(self.messages) def __nonzero__(self): return bool(self.messages) def get_errors(self): return [m for (m_kind, m) in self.messages if m_kind == ERROR] def get_warnings(self): return [m for (m_kind, m) in self.messages if m_kind == WARN] def extend(self, other): self.messages.extend(other.messages) if not self.has_errors and other.has_errors: self.has_errors = True juju-deployer-0.6.4/deployer/config.py0000664000175000017500000001014512600342600024075 0ustar tvansteenburghtvansteenburgh00000000000000from os.path import abspath, isabs, join, dirname import logging import os import tempfile import shutil import urllib2 import urlparse from .deployment import Deployment from .utils import ErrorExit, yaml_load, path_exists, dict_merge class ConfigStack(object): log = logging.getLogger("deployer.config") def __init__(self, config_files, cli_series=None): self.config_files = config_files self.cli_series = cli_series self.version = 3 self.data = {} self.yaml = {} self.include_dirs = [] self.urlopen = urllib2.urlopen self.load() def _yaml_load(self, config_file): if config_file in self.yaml: return self.yaml[config_file] if urlparse.urlparse(config_file).scheme: response = self.urlopen(config_file) if response.getcode() == 200: temp = tempfile.NamedTemporaryFile(delete=True) shutil.copyfileobj(response, temp) temp.flush() config_file = temp.name else: 
self.log.warning("Could not retrieve %s", config_file) raise ErrorExit() with open(config_file) as fh: try: yaml_result = yaml_load(fh.read()) except Exception, e: self.log.warning( "Couldn't load config file @ %r, error: %s:%s", config_file, type(e), e) raise # Check if this is a v4 bundle. services = yaml_result.get('services') if isinstance(services, dict) and 'services' not in services: self.version = 4 yaml_result = {config_file: yaml_result} self.yaml[config_file] = yaml_result return self.yaml[config_file] def keys(self): return sorted(self.data) def get(self, key): if key not in self.data: self.log.warning("Deployment %r not found. Available %s", key, ", ".join(self.keys())) raise ErrorExit() deploy_data = self.data[key] if self.version < 4: deploy_data = self._resolve_inherited(deploy_data) if self.cli_series: deploy_data['series'] = self.cli_series return Deployment( key, deploy_data, self.include_dirs, repo_path=os.environ.get("JUJU_REPOSITORY", ""), version=self.version) def load(self): data = {} include_dirs = [] for fp in self._resolve_included(): if path_exists(fp): include_dirs.append(dirname(abspath(fp))) d = self._yaml_load(fp) data = dict_merge(data, d) self.data = data for k in ['include-config', 'include-configs']: if k in self.data: self.data.pop(k) self.include_dirs = include_dirs def _inherits(self, d): parents = d.get('inherits', ()) if isinstance(parents, basestring): parents = [parents] return parents def _resolve_inherited(self, deploy_data): if 'inherits' not in deploy_data: return deploy_data inherits = parents = self._inherits(deploy_data) for parent_name in parents: parent = self.get(parent_name) inherits.extend(self._inherits(parent.data)) deploy_data = dict_merge(parent.data, deploy_data) deploy_data['inherits'] = inherits return deploy_data def _includes(self, config_file): files = [config_file] d = self._yaml_load(config_file) incs = d.get('include-configs') or d.get('include-config') if isinstance(incs, basestring): inc_fs = 
[incs] else: inc_fs = incs if inc_fs: for inc_f in inc_fs: if not isabs(inc_f): inc_f = join(dirname(config_file), inc_f) files.extend(self._includes(inc_f)) return files def _resolve_included(self): files = [] [files.extend(self._includes(cf)) for cf in self.config_files] return files juju-deployer-0.6.4/deployer/__init__.py0000664000175000017500000000000212600342600024356 0ustar tvansteenburghtvansteenburgh00000000000000# juju-deployer-0.6.4/deployer/deployment.py0000664000175000017500000002615112600342600025014 0ustar tvansteenburghtvansteenburgh00000000000000from base64 import b64encode import logging import pprint import os import yaml from .charm import Charm from .feedback import Feedback from .service import Service, ServiceUnitPlacementV3, ServiceUnitPlacementV4 from .relation import Endpoint from .utils import path_join, yaml_dump, ErrorExit, resolve_include, x_in_y class Deployment(object): log = logging.getLogger("deployer.deploy") def __init__(self, name, data, include_dirs, repo_path="", version=3): self.name = name self.data = data self.include_dirs = include_dirs self.repo_path = repo_path self.version = version self.machines = {} @property def series(self): # Series could use a little help, charm series should be inferred # directly from a store url return self.data.get('series', 'precise') @property def series_path(self): return path_join(self.repo_path, self.series) def pretty_print(self): pprint.pprint(self.data) def get_service(self, name): if name not in self.data['services']: return return Service(name, self.data['services'][name]) def get_services(self): services = [] for name, svc_data in self.data.get('services', {}).items(): services.append(Service(name, svc_data)) if self.version == 3: # Sort unplaced units first, then sort by name for placed units. 
            services.sort(key=lambda svc: (bool(svc.unit_placement),
                                           svc.name))
        else:
            services.sort(self._machines_placement_sort)
        return services

    def set_machines(self, machines):
        """Set a dict of machines, mapping from the names in the machine
        spec to the machine names in the environment status.
        """
        self.machines = machines

    def get_machine(self, id):
        """Return a dict containing machine options.

        None is returned if the machine doesn't exist in the bundle YAML.
        """
        return self.get_machines().get(id, None)

    def get_machines(self):
        """Return a dict mapping machine names to machine options.

        An empty dict is returned if no machines are defined in the
        bundle YAML.
        """
        machines = {}
        for key, machine in self.data.get('machines', {}).items():
            machines[str(key)] = machine
        return machines

    def get_service_names(self):
        """Return a sequence of service names for this deployment."""
        return self.data.get('services', {}).keys()

    @staticmethod
    def _machines_placement_sort(svc_a, svc_b):
        """Sort machines with machine placement in mind.

        If svc_a is colocated alongside svc_b, svc_b needs to be deployed
        first, so that it exists by the time svc_a is deployed, and vice
        versa; this sorts first based on this fact, then secondly based on
        whether or not the service has a unit placement, and then finally
        based on the name of the service.
        """
        if svc_a.unit_placement:
            if svc_b.unit_placement:
                # Check for colocation. This naively assumes that there is
                # no circularity in placements.
                if x_in_y(svc_b, svc_a):
                    return 1
                if x_in_y(svc_a, svc_b):
                    return -1
                # If no colocation exists, simply compare names.
                return cmp(svc_a.name, svc_b.name)
            return 1
        if svc_b.unit_placement:
            return -1
        return cmp(svc_a.name, svc_b.name)

    def get_unit_placement(self, svc, status):
        if isinstance(svc, (str, unicode)):
            svc = self.get_service(svc)
        if self.version == 3:
            return ServiceUnitPlacementV3(svc, self, status)
        else:
            return ServiceUnitPlacementV4(svc, self, status,
                                          machines_map=self.machines)

    def get_relations(self):
        if 'relations' not in self.data:
            return

        # Strip duplicate rels
        seen = set()

        def check(a, b):
            k = tuple(sorted([a, b]))
            if k in seen:
                #self.log.warning("  Skipping duplicate relation %r" % (k,))
                return
            seen.add(k)
            return True

        # Support an ordered list of [endpoints]
        if isinstance(self.data['relations'], list):
            for end_a, end_b in self.data['relations']:
                # Allow shorthand of [end_a, [end_b, end_c]]
                if isinstance(end_b, list):
                    for eb in end_b:
                        if check(end_a, eb):
                            yield (end_a, eb)
                else:
                    if check(end_a, end_b):
                        yield (end_a, end_b)
            return

        # Legacy format (dictionary of dictionaries with weights)
        rels = {}
        for k, v in self.data['relations'].items():
            expanded = []
            for c in v['consumes']:
                expanded.append((k, c))
            by_weight = rels.setdefault(v.get('weight', 0), [])
            by_weight.extend(expanded)
        for k in sorted(rels, reverse=True):
            for r in rels[k]:
                if check(*r):
                    yield r
        #self.log.debug(
        #    "Found relations %s\n  %s" % (" ".join(map(str, seen))))

    def get_charms(self):
        for k, v in self.data.get('services', {}).items():
            yield Charm.from_service(k, self.repo_path, self.series, v)

    def get_charm_for(self, svc_name):
        svc_data = self.data['services'][svc_name]
        return Charm.from_service(
            svc_name, self.repo_path, self.series, svc_data)

    def fetch_charms(self, update=False, no_local_mods=False):
        for charm in self.get_charms():
            if charm.is_local():
                if charm.exists():
                    if no_local_mods and charm.is_modified():
                        self.log.warning(
                            "Charm %r has local modifications", charm.path)
                        raise ErrorExit()
                    if charm.rev or update:
                        if charm.rev and charm.is_modified():
                            self.log.warning(
                                "Charm %r with pinned revision has"
                                " local modifications", charm.path)
                            raise ErrorExit()
                        charm.update(build=True)
                    continue
            elif not os.path.exists(charm.series_path):
                os.mkdir(charm.series_path)
            charm.fetch()

    def resolve(self, cli_overrides=()):
        # Once we have charms definitions available, we can do verification
        # of config options.
        self.load_overrides(cli_overrides)
        self.resolve_config()
        self.validate_relations()
        self.validate_placement()

    def load_overrides(self, cli_overrides=()):
        """Load overrides."""
        overrides = {}
        overrides.update(self.data.get('overrides', {}))
        for o in cli_overrides:
            key, value = o.split('=', 1)
            overrides[key] = value
        for k, v in overrides.iteritems():
            found = False
            for svc_name, svc_data in self.data['services'].items():
                charm = self.get_charm_for(svc_name)
                if k in charm.config:
                    if not svc_data.get('options'):
                        svc_data['options'] = {}
                    svc_data['options'][k] = v
                    found = True
            if not found:
                self.log.warning(
                    "Override %s does not match any charms", k)

    def resolve_config(self):
        """Load any lazy config values (includes), and verify config
        options.
""" self.log.debug("Resolving configuration") # XXX TODO, rename resolve, validate relations # against defined services feedback = Feedback() for svc_name, svc_data in self.data.get('services', {}).items(): if not 'options' in svc_data: continue charm = self.get_charm_for(svc_name) config = charm.config options = {} svc_options = svc_data.get('options', {}) if svc_options is None: svc_options = {} for k, v in svc_options.items(): if not k in config: feedback.error( "Invalid config charm %s %s=%s" % (charm.name, k, v)) continue iv = self._resolve_include(svc_name, k, v) if isinstance(iv, Feedback): feedback.extend(iv) continue if iv is not None: v = iv options[k] = v svc_data['options'] = options self._handle_feedback(feedback) def _resolve_include(self, svc_name, k, v): feedback = Feedback() for include_type in ["file", "base64"]: if (not isinstance(v, basestring) or not v.startswith( "include-%s://" % include_type)): continue include, fname = v.split("://", 1) ip = resolve_include(fname, self.include_dirs) if ip is None: feedback.error( "Invalid config %s.%s include not found %s" % ( svc_name, k, v)) continue with open(ip) as fh: v = fh.read() if include_type == "base64": v = b64encode(v) return v if feedback: return feedback def validate_relations(self): # Could extend to do interface matching against charms. 
        services = dict([(s.name, s) for s in self.get_services()])
        feedback = Feedback()
        for e_a, e_b in self.get_relations():
            for ep in [Endpoint(e_a), Endpoint(e_b)]:
                if ep.service not in services:
                    feedback.error(
                        ("Invalid relation in config,"
                         " service %s not found, rel %s") % (
                            ep.service, "%s <-> %s" % (e_a, e_b)))
                    continue
        self._handle_feedback(feedback)

    def validate_placement(self):
        services = dict([(s.name, s) for s in self.get_services()])
        feedback = Feedback()
        for name, s in services.items():
            placement = self.get_unit_placement(s, {})
            feedback.extend(placement.validate())
        self._handle_feedback(feedback)

    def _handle_feedback(self, feedback):
        for e in feedback.get_errors():
            self.log.error(e)
        for w in feedback.get_warnings():
            self.log.warning(w)
        if feedback.has_errors:
            raise ErrorExit()

    def save(self, path):
        with open(path, "w") as fh:
            fh.write(yaml_dump(self.data))

    @staticmethod
    def to_yaml(dumper, deployment):
        return dumper.represent_dict(deployment.data)

yaml.add_representer(Deployment, Deployment.to_yaml)

juju-deployer-0.6.4/deployer/action/
juju-deployer-0.6.4/deployer/action/diff.py

import logging
import time

from .base import BaseAction
from ..relation import EndpointPair
from ..utils import parse_constraints, yaml_dump


class Diff(BaseAction):

    log = logging.getLogger("deployer.diff")

    def __init__(self, env, deployment, options):
        self.options = options
        self.env = env
        self.deployment = deployment
        self.env_status = None
        self.env_state = {'services': {}, 'relations': []}

    def load_env(self):
        """ """
        rels = set()
        for svc_name in self.env_status['services']:
            if svc_name not in self.env_status['services']:
                self.env_state['services'][svc_name] = 'missing'
            self.env_state['services'].setdefault(svc_name, {})[
                'options'] = self.env.get_config(svc_name)
            self.env_state['services'][svc_name][
                'constraints'] = self.env.get_constraints(svc_name)
            self.env_state['services'][svc_name][
                'unit_count'] = len(
                    self.env_status['services'][svc_name].get('units', {}))
            rels.update(self._load_rels(svc_name))
        self.env_state['relations'] = sorted(rels)

    def _load_rels(self, svc_name):
        rels = set()
        svc_rels = self.env_status['services'][svc_name].get(
            'relations', {})
        # There is ambiguity here for multiple rels between two
        # services without the relation id, which we need support
        # from core for.
        for r_name, r_svcs in svc_rels.items():
            for r_svc in r_svcs:
                # Skip peer relations
                if r_svc == svc_name:
                    continue
                rr_name = self._get_rel_name(svc_name, r_svc)
                rels.add(
                    tuple(sorted([
                        "%s:%s" % (svc_name, r_name),
                        "%s:%s" % (r_svc, rr_name)])))
        return rels

    def _get_rel_name(self, src, tgt):
        svc_rels = self.env_status['services'][tgt]['relations']
        found = None
        for r, eps in svc_rels.items():
            if src in eps:
                if found:
                    raise ValueError("Ambiguous relations for service")
                found = r
        return found

    def get_delta(self):
        delta = {}
        rels_delta = self._get_relations_delta()
        if rels_delta:
            delta['relations'] = rels_delta
        svc_delta = self._get_services_delta()
        if svc_delta:
            delta['services'] = svc_delta
        return delta

    def _get_relations_delta(self):
        # Simple endpoint diff, no qualified endpoint checking.
        # Env relations are always qualified (at least in go).
        delta = {}
        env_rels = set(
            EndpointPair(*x) for x in self.env_state.get('relations', ()))
        dep_rels = set(
            [EndpointPair(*y) for y in self.deployment.get_relations()])
        for r in dep_rels.difference(env_rels):
            delta.setdefault('missing', []).append(r)
        for r in env_rels.difference(dep_rels):
            delta.setdefault('unknown', []).append(r)
        return delta

    def _get_services_delta(self):
        delta = {}
        env_svcs = set(self.env_status['services'].keys())
        dep_svcs = set([s.name for s in self.deployment.get_services()])
        missing = dep_svcs - env_svcs
        if missing:
            delta['missing'] = {}
            for a in missing:
                delta['missing'][a] = self.deployment.get_service(
                    a).svc_data
        unknown = env_svcs - dep_svcs
        if unknown:
            delta['unknown'] = {}
            for r in unknown:
                delta['unknown'][r] = self.env_state.get(r)
        for cs in env_svcs.intersection(dep_svcs):
            d_s = self.deployment.get_service(cs).svc_data
            e_s = self.env_state['services'][cs]
            mod = self._diff_service(
                e_s, d_s, self.deployment.get_charm_for(cs))
            if not mod:
                continue
            if 'modified' not in delta:
                delta['modified'] = {}
            delta['modified'][cs] = mod
        return delta

    def _diff_service(self, e_s, d_s, charm):
        mod = {}
        d_sc = parse_constraints(d_s.get('constraints', ''))
        # 'tags' is a special case, as it can be multi-valued: convert to
        # list if cfg one is a string.
        if isinstance(d_sc.get('tags'), basestring):
            d_sc['tags'] = [d_sc['tags']]
        if d_sc != e_s['constraints']:
            mod['env-constraints'] = e_s['constraints']
            mod['cfg-constraints'] = d_sc
        for k, v in d_s.get('options', {}).items():
            # Deploy options not known to the env may originate
            # from charm version delta or be an invalid config.
            if k not in e_s['options']:
                continue
            e_v = e_s['options'].get(k, {}).get('value')
            if e_v != v:
                mod.setdefault('env-config', {}).update({k: e_v})
                mod.setdefault('cfg-config', {}).update({k: v})
        if not charm or not charm.is_subordinate():
            if e_s['unit_count'] != d_s.get('num_units', 1):
                mod['num_units'] = e_s['unit_count'] - d_s.get('num_units', 1)
        return mod

    def do_diff(self):
        self.start_time = time.time()
        self.deployment.resolve()
        self.env.connect()
        self.env_status = self.env.status()
        self.load_env()
        return self.get_delta()

    def run(self):
        diff = self.do_diff()
        if diff:
            print yaml_dump(diff)

juju-deployer-0.6.4/deployer/action/base.py

#
class BaseAction(object):
    pass

juju-deployer-0.6.4/deployer/action/export.py

import logging

from .base import BaseAction


class Export(BaseAction):

    log = logging.getLogger("deployer.export")

    def __init__(self, env, deployment, options):
        self.options = options
        self.env = env
        self.deployment = deployment
        self.env_status = None
        self.env_state = {'services': {}, 'relations': {}}

    def run(self):
        pass

juju-deployer-0.6.4/deployer/action/importer.py

import logging
import time

from .base import BaseAction
from ..env import watchers
from ..utils import ErrorExit


class Importer(BaseAction):

    log = logging.getLogger("deployer.import")

    def __init__(self, env, deployment, options):
        self.options = options
        self.env = env
        self.deployment = deployment

    def add_units(self):
        self.log.debug("Adding units...")
        # Add units to existing services that don't match count.
        env_status = self.env.status()
        # Workaround an issue where watch output doesn't include
        # subordinate services right away, and add_unit would fail while
        # attempting to add units to a non-existent service.
        # See lp:1421315 for details.
        cur_units = 0
        for svc in self.deployment.get_services():
            delay = time.time() + 60
            while delay > time.time():
                if svc.name in env_status['services']:
                    cur_units = len(
                        env_status['services'][svc.name].get('units', ()))
                    if cur_units > 0:
                        break
                time.sleep(5)
                env_status = self.env.status()
            delta = (svc.num_units - cur_units)
            if delta <= 0:
                self.log.debug(
                    " Service %r does not need any more units added.",
                    svc.name)
                continue
            charm = self.deployment.get_charm_for(svc.name)
            if charm.is_subordinate():
                self.log.warning(
                    "Config specifies num units for subordinate: %s",
                    svc.name)
                continue
            self.log.info(
                "Adding %d more units to %s" % (abs(delta), svc.name))
            if svc.unit_placement:
                # Reload status before each placed unit is deployed, so
                # that co-location to other units can take place properly.
                env_status = self.env.status()
                placement = self.deployment.get_unit_placement(
                    svc, env_status)
                for mid in range(cur_units, svc.num_units):
                    self.env.add_unit(svc.name, placement.get(mid))
            else:
                self.env.add_units(svc.name, abs(delta))

    def machine_exists(self, id):
        """Check if the given id exists on the current environment."""
        return str(id) in map(str, self.env.status().get('machines', {}))

    def create_machines(self):
        """Create machines as specified in the machine spec in the bundle.

        A machine spec consists of a named machine (the name is, by
        convention, an integer) with an optional series, optional
        constraints and optional annotations:

            0:
                series: "precise"
                constraints: "mem=4G arch=i386"
                annotations:
                    foo: bar
            1:
                constraints: "mem=4G"

        This method first attempts to create any machines in the
        'machines' section of the bundle specification with the given
        constraints and annotations. Then, if there are any machines
        requested for containers in the style of :new, it creates those
        machines and adds them to the machines map.

        If the machine already exists on the environment, then the new
        machine creation is skipped.
""" machines = self.deployment.get_machines() machines_map = {} if machines: self.log.info("Creating machines...") for machine_name, spec in machines.items(): if self.machine_exists(machine_name): # XXX frankban: do we really want this? The machine # identifiers as included in v4 bundles are not intended # to refer to real machines. A mapping could be provided # but this kind of implicit mapping seems weak. self.log.info( """Machine %s already exists on environment, """ """skipping creation""" % machine_name) machines_map[machine_name] = str(machine_name) else: self.log.info("Machine %s will be created" % machine_name) machines_map[machine_name] = self.env.add_machine( series=spec.get('series', self.deployment.data['series']), constraints=spec.get('constraints')) if isinstance(spec, dict): annotations = spec.get('annotations') if annotations: self.env.set_annotation( machines_map[machine_name], annotations, entity_type='machine') # In the case of :new, we need to create a machine # before creating the container on which the service will be # deployed. This is stored in the machines map which will be used # in the service placement. for service in self.deployment.get_services(): placement = self.deployment.get_unit_placement(service, None) for container_host in placement.get_new_machines_for_containers(): if self.machine_exists(machine_name): self.log.info("Machine %s already exists," "skipping creation" % machine_name) machines_map[container_host] = str(container_host) else: self.log.info("A new container will be created" "on machine: %s" % container_host) machines_map[container_host] = self.env.add_machine() self.deployment.set_machines(machines_map) def get_charms(self): # Get Charms self.log.debug("Getting charms...") self.deployment.fetch_charms( update=self.options.update_charms, no_local_mods=self.options.no_local_mods) # Load config overrides/includes and verify rels after we can # validate them. 
        self.deployment.resolve(self.options.overrides or ())

    def deploy_services(self, add_units=True):
        """Deploy the services specified in the deployment.

        add_units: whether or not to add units to the service as it is
        deployed; newer versions of bundles may have machines specified
        in a machine spec, and units will be placed accordingly if this
        flag is false.
        """
        self.log.info("Deploying services...")
        env_status = self.env.status()
        reloaded = False
        for svc in self.deployment.get_services():
            if svc.name in env_status['services']:
                self.log.debug(
                    " Service %r already deployed. Skipping" % svc.name)
                continue
            charm = self.deployment.get_charm_for(svc.name)
            self.log.info(
                " Deploying service %s using %s", svc.name,
                charm.charm_url if not charm.is_absolute() else charm.path)
            if svc.unit_placement:
                # We sorted all the non placed services first, so we only
                # need to update status once after we're done with them, in
                # the instance of v3 bundles; in the more complex case of
                # v4 bundles, we'll need to refresh each time.
                if not reloaded:
                    self.log.debug(
                        " Refetching status for placement deploys")
                    time.sleep(5.1)
                    env_status = self.env.status()
                # In the instance of version 3 deployments, we will not
                # need to fetch the status more than once. In version 4
                # bundles, however, we will need to fetch the status each
                # time in order to allow for the machine specification to
                # be taken into account.
                if self.deployment.version == 3:
                    reloaded = True
                num_units = 1
            else:
                num_units = svc.num_units
            # Only add a single unit if requested. This is done after the
            # above work to ensure that the status is still retrieved as
            # necessary.
            if not add_units:
                num_units = 1
            placement = self.deployment.get_unit_placement(svc, env_status)
            if charm.is_subordinate():
                num_units = None
            self.env.deploy(
                svc.name,
                charm.charm_url,
                charm.repo_path or self.deployment.repo_path,
                svc.config,
                svc.constraints,
                num_units,
                placement.get(0))
            if svc.annotations:
                self.log.debug(" Setting annotations")
                self.env.set_annotation(svc.name, svc.annotations)
            if self.options.deploy_delay:
                self.log.debug(" Waiting for deploy delay")
                time.sleep(self.options.deploy_delay)

    def add_relations(self):
        self.log.info("Adding relations...")
        # Relations
        status = self.env.status()
        created = False
        for end_a, end_b in self.deployment.get_relations():
            if self._rel_exists(status, end_a, end_b):
                continue
            self.log.info(" Adding relation %s <-> %s", end_a, end_b)
            self.env.add_relation(end_a, end_b)
            created = True
            # per the original, not sure the use case.
            #self.log.debug(" Waiting 5s before next relation")
            #time.sleep(5)
        return created

    def _rel_exists(self, status, end_a, end_b):
        # Checks for a named relation on one side that matches the local
        # endpoint and remote service.
        (name_a, name_b, rem_a, rem_b) = (end_a, end_b, None, None)
        if ":" in end_a:
            name_a, rem_a = end_a.split(":", 1)
        if ":" in end_b:
            name_b, rem_b = end_b.split(":", 1)
        rels_svc_a = status['services'][name_a].get('relations', {})
        found = False
        for r, related in rels_svc_a.items():
            if name_b in related:
                if rem_a and r not in rem_a:
                    continue
                found = True
                break
        if found:
            return True
        return False

    def check_timeout(self):
        timeout = self.options.timeout - (time.time() - self.start_time)
        if timeout < 0:
            self.log.error("Reached deployment timeout.. exiting")
            raise ErrorExit()
        return timeout

    def wait_for_units(self, ignore_errors=False):
        if self.options.skip_unit_wait:
            return
        timeout = self.check_timeout()
        # Set up the callback to be called in case of unit errors: if
        # ignore_errors is True errors are just logged, otherwise we exit
        # the program.
        if ignore_errors:
            on_errors = watchers.log_on_errors(self.env)
        else:
            on_errors = watchers.exit_on_errors(self.env)
        self.env.wait_for_units(
            int(timeout), watch=self.options.watch,
            services=self.deployment.get_service_names(),
            on_errors=on_errors)

    def run(self):
        options = self.options
        self.start_time = time.time()

        # Get charms
        self.get_charms()
        if options.branch_only:
            return

        if options.bootstrap:
            self.env.bootstrap()
        self.env.connect()

        if self.deployment.version > 3:
            self.create_machines()
        # We can shortcut and add the units during deployment for v3
        # bundles.
        self.deploy_services(add_units=(self.deployment.version == 3))

        # Workaround api issue in juju-core, where any action takes 5s
        # to be consistent to subsequent watch api interactions, see
        # http://pad.lv/1203105 which will obviate this wait.
        time.sleep(5.1)
        self.add_units()

        ignore_errors = bool(options.retry_count) or options.ignore_errors
        self.log.debug("Waiting for units before adding relations")
        self.wait_for_units(ignore_errors=ignore_errors)

        # Reset our environment connection, as it may grow stale during
        # the watch (we're using a sync client so not responding to pings
        # unless actively using the conn).
        self.env.close()
        self.env.connect()

        self.check_timeout()
        rels_created = False
        if not options.no_relations:
            rels_created = self.add_relations()

        # Wait for the units to be up before waiting for rel stability.
        if rels_created and not options.skip_unit_wait:
            self.log.debug(
                "Waiting for relation convergence %ds", options.rel_wait)
            time.sleep(options.rel_wait)
            self.wait_for_units(ignore_errors=ignore_errors)

        if options.retry_count:
            self.log.info("Looking for errors to auto-retry")
            self.env.resolve_errors(
                options.retry_count,
                options.timeout - (time.time() - self.start_time))

        # Finally expose things
        for svc in self.deployment.get_services():
            if svc.expose:
                self.log.info(" Exposing service %r" % svc.name)
                self.env.expose(svc.name)

juju-deployer-0.6.4/deployer/action/__init__.py

#
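The constraint handling in `deployer/utils.py` above normalizes size constraints (`mem`, `root-disk`) to megabytes, accepting an optional M/G/T/P suffix. The following is a simplified standalone sketch of that behavior for illustration; `parse_size` is a hypothetical name, not part of the deployer API, and the upstream `parse_constraints` additionally handles whole `k=v` strings, dicts, and the `tags` list case.

```python
# Unit multipliers, mirroring UNITS_DICT in deployer/utils.py:
# values are expressed relative to one megabyte.
UNITS = {'M': 1, 'G': 1024, 'T': 1024 * 1024, 'P': 1024 * 1024 * 1024}


def parse_size(value):
    """Return the number of megabytes for a size constraint string.

    A bare number is taken to already be in megabytes; a trailing
    M/G/T/P suffix selects the multiplier, as in the deployer source.
    """
    multiplier = UNITS.get(value[-1], None)
    if multiplier is not None:
        value = value[:-1]
    else:
        multiplier = 1
    try:
        number = int(value)
    except ValueError:
        number = float(value)
    return number * multiplier


print(parse_size('4G'))    # -> 4096 (4 GiB in megabytes)
print(parse_size('2048'))  # -> 2048 (bare numbers are already MB)
```

This is why a bundle constraint like `mem=4G` and an environment-reported `mem=4096` compare as equal in the diff action.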