juju-0.7.orig/.testr.conf
[DEFAULT]
test_command=./test --reporter=subunit $LISTOPT $IDLIST
test_list_option=-n
juju-0.7.orig/COPYING
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.
juju-0.7.orig/Makefile 0000644 0000000 0000000 00000003371 12135220114 013055 0 ustar 0000000 0000000 PEP8=pep8
COVERAGE_FILES=`find juju -name "*py" | grep -v "tests\|lib/mocker.py\|lib/testing.py"`
all:
@echo "You've just watched the fastest build on earth."
tests:
./test
coverage:
python -c "import coverage as c; c.main()" run ./test
python -c "import coverage as c; c.main()" html -d htmlcov $(COVERAGE_FILES)
gnome-open htmlcov/index.html
ftests:
./test --functional
tags:
@ctags --python-kinds=-iv -R juju
etags:
@ctags -e --python-kinds=-iv -R juju
present_pep8=$(shell which $(PEP8))
present_pyflakes=$(shell which pyflakes)
warn_missing_linters:
@test -n "$(present_pep8)" || echo "WARNING: $(PEP8) not installed."
@test -n "$(present_pyflakes)" || echo "WARNING: pyflakes not installed."
# "check": Check uncommitted changes for lint.
check_changes=$(shell bzr status -S | grep '^[ +]*[MN]' | awk '{print $$2;}' | grep "\\.py$$")
check: warn_missing_linters
@test -z $(present_pep8) || (echo $(check_changes) | xargs -r $(PEP8) --repeat)
@test -z $(present_pyflakes) || (echo $(check_changes) | xargs -r pyflakes)
# "review": Check all changes compared to trunk for lint.
review_changes=$(shell bzr status -S -r ancestor:$(JUJU_TRUNK) | grep '^[ +]*[MN]' | awk '{print $$2;}' | grep "\\.py$$")
review: warn_missing_linters
#@test -z $(present_pep8) || (echo $(review_changes) | xargs -r $(PEP8) --repeat)
@test -z $(present_pyflakes) || (echo $(review_changes) | xargs -r pyflakes)
ptests_changes=$(shell bzr status -S -r branch::prev | grep -P '^[ +]*[MN]' | awk '{print $$2;}'| grep "test_.*\\.py$$")
ptests:
@echo $(ptests_changes) | xargs -r ./test
btests_changes=$(shell bzr status -S -r ancestor:$(JUJU_TRUNK)/ | grep "test.*\\.py$$" | awk '{print $$2;}')
btests:
@./test $(btests_changes)
.PHONY: tags check review warn_missing_linters
juju-0.7.orig/README 0000644 0000000 0000000 00000002204 12135220114 012267 0 ustar 0000000 0000000 juju
====
Welcome to juju, we hope you enjoy your stay.
You can always get the latest juju code by running::
$ bzr branch lp:juju
The juju bug tracker is at https://bugs.launchpad.net/juju
Documentation for getting setup and running juju can be found at
http://juju.ubuntu.com/docs
====
Juju Developers
Except where stated otherwise, the files contained within this source
code tree are covered under the following copyright and license:
Copyright 2010, 2011 Canonical Ltd. All Rights Reserved.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
juju-0.7.orig/bin/ 0000755 0000000 0000000 00000000000 12135220114 012161 5 ustar 0000000 0000000 juju-0.7.orig/juju/ 0000755 0000000 0000000 00000000000 12135220114 012366 5 ustar 0000000 0000000 juju-0.7.orig/misc/ 0000755 0000000 0000000 00000000000 12135220114 012344 5 ustar 0000000 0000000 juju-0.7.orig/setup.py 0000644 0000000 0000000 00000002165 12135220114 013127 0 ustar 0000000 0000000 from distutils.core import setup
from glob import glob
from juju import __version__
import os
def find_packages():
"""
Compatibility wrapper.
Taken from storm setup.py.
"""
try:
from setuptools import find_packages
return find_packages()
except ImportError:
pass
packages = []
for directory, subdirectories, files in os.walk("juju"):
if "__init__.py" in files:
packages.append(directory.replace(os.sep, '.'))
return packages
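The fallback branch above reimplements setuptools' `find_packages()` with a plain `os.walk`: any directory containing an `__init__.py` is a package, and path separators become dots. A minimal standalone sketch of the same idea (the tree names here are hypothetical, not part of juju):

```python
import os
import tempfile


def walk_packages(root):
    """Collect dotted package names: every directory under root
    (including root itself) that contains an __init__.py."""
    packages = []
    for directory, _subdirs, files in os.walk(root):
        if "__init__.py" in files:
            packages.append(os.path.relpath(directory).replace(os.sep, "."))
    return packages


# Build a throwaway tree: pkg/ and pkg/sub/ are packages, pkg/data/ is not.
base = tempfile.mkdtemp()
os.chdir(base)
for d in ("pkg", os.path.join("pkg", "sub"), os.path.join("pkg", "data")):
    os.makedirs(d)
open(os.path.join("pkg", "__init__.py"), "w").close()
open(os.path.join("pkg", "sub", "__init__.py"), "w").close()

print(sorted(walk_packages("pkg")))  # ['pkg', 'pkg.sub']
```

The try/except wrapper in setup.py prefers setuptools when it is importable and only falls back to this walk otherwise, so the sdist builds even on a bare distutils install.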
setup(
name="juju",
version=__version__,
description="Cloud automation and orchestration",
author="Juju Developers",
author_email="juju@lists.ubuntu.com",
url="https://launchpad.net/juju",
license="GPL",
packages=find_packages(),
scripts=glob("./bin/*"),
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Information Technology",
"Programming Language :: Python",
"Topic :: Database",
"Topic :: Internet :: WWW/HTTP",
],
)
juju-0.7.orig/test 0000755 0000000 0000000 00000003025 12135220114 012316 0 ustar 0000000 0000000 #!/usr/bin/env python
import os
import sys
from twisted.scripts.trial import run
from juju.tests.common import zookeeper_test_context
from juju.lib.testing import TestCase
FUNCTIONAL = '--functional'
def main(args):
if "ZOOKEEPER_PATH" not in os.environ:
# Look for a system install of ZK
env_path = "/etc/zookeeper/conf/environment"
if os.path.exists(env_path):
print "Using system zookeeper classpath from %s" % env_path
os.environ["ZOOKEEPER_PATH"] = "system"
else:
print ("Environment variable ZOOKEEPER_PATH must be defined "
"and should point to directory of Zookeeper installation")
exit()
matched = [arg for arg in args if arg.startswith("juju")]
if FUNCTIONAL in sys.argv:
sys.argv.remove(FUNCTIONAL)
sys.argv.append("juju.ftests")
elif matched:
pass
else:
packages = [p for p in os.listdir("juju") \
if os.path.isdir("juju%s%s"%(os.sep, p))]
packages.remove("ftests")
sys.argv.extend(["juju.%s"%p for p in packages])
if 'JUJU_TEST_TIMEOUT' in os.environ:
try:
TestCase.timeout = float(os.environ['JUJU_TEST_TIMEOUT'])
except ValueError:
print ("JUJU_TEST_TIMEOUT must be a number")
exit()
with zookeeper_test_context(
os.environ["ZOOKEEPER_PATH"],
os.environ.get("ZOOKEEPER_TEST_PORT", 28181)) as zk:
run()
if __name__ == "__main__":
main(sys.argv[1:])
juju-0.7.orig/bin/close-port 0000755 0000000 0000000 00000000475 12135220114 014204 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import close_port
if __name__ == '__main__':
close_port()
juju-0.7.orig/bin/config-get 0000755 0000000 0000000 00000000476 12135220114 014140 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import config_get
if __name__ == '__main__':
config_get()
juju-0.7.orig/bin/juju 0000755 0000000 0000000 00000000276 12135220114 013071 0 ustar 0000000 0000000 #!/usr/bin/env python
import sys
from juju.control import main
from juju.errors import JujuError
try:
main(sys.argv[1:])
except JujuError, error:
sys.exit("error: %s" % (error,))
juju-0.7.orig/bin/juju-admin 0000755 0000000 0000000 00000000277 12135220114 014160 0 ustar 0000000 0000000 #!/usr/bin/env python
import sys
from juju.control import admin
from juju.errors import JujuError
try:
admin(sys.argv[1:])
except JujuError, error:
sys.exit("error: %s" % (error,))
juju-0.7.orig/bin/juju-log 0000755 0000000 0000000 00000000460 12135220114 013643 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import log
if __name__ == '__main__':
log()
juju-0.7.orig/bin/open-port 0000755 0000000 0000000 00000000473 12135220114 014036 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import open_port
if __name__ == '__main__':
open_port()
juju-0.7.orig/bin/relation-get 0000755 0000000 0000000 00000000502 12135220114 014476 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import relation_get
if __name__ == '__main__':
relation_get()
juju-0.7.orig/bin/relation-ids 0000755 0000000 0000000 00000000501 12135220114 014475 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import relation_ids
if __name__ == '__main__':
relation_ids()
juju-0.7.orig/bin/relation-list 0000755 0000000 0000000 00000000505 12135220114 014675 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import relation_list
if __name__ == '__main__':
relation_list()
juju-0.7.orig/bin/relation-set 0000755 0000000 0000000 00000000503 12135220114 014513 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import relation_set
if __name__ == '__main__':
relation_set()
juju-0.7.orig/bin/unit-get 0000755 0000000 0000000 00000000472 12135220114 013646 0 ustar 0000000 0000000 #!/usr/bin/env python
# We avoid using PYTHONPATH because it can cause side effects on hook execution
import os, sys
if "JUJU_PYTHONPATH" in os.environ:
sys.path[:0] = filter(None, os.environ["JUJU_PYTHONPATH"].split(":"))
from juju.hooks.commands import unit_get
if __name__ == '__main__':
unit_get()
juju-0.7.orig/juju/__init__.py 0000644 0000000 0000000 00000000026 12135220114 014475 0 ustar 0000000 0000000 #
__version__ = '0.7'
juju-0.7.orig/juju/agents/ 0000755 0000000 0000000 00000000000 12135220114 013647 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/ 0000755 0000000 0000000 00000000000 12135220114 013460 5 ustar 0000000 0000000 juju-0.7.orig/juju/control/ 0000755 0000000 0000000 00000000000 12135220114 014046 5 ustar 0000000 0000000 juju-0.7.orig/juju/environment/ 0000755 0000000 0000000 00000000000 12135220114 014732 5 ustar 0000000 0000000 juju-0.7.orig/juju/errors.py 0000644 0000000 0000000 00000014553 12135220114 014264 0 ustar 0000000 0000000 """
This file holds the generic errors which are sensible for several
areas of juju.
"""
class JujuError(Exception):
"""All errors in juju are subclasses of this.
This error should not be raised by itself, though, since it means
pretty much nothing. It's useful mostly as something to catch instead.
"""
class IncompatibleVersion(JujuError):
"""Raised when there is a mismatch in versions using the topology.
This mismatch will occur when the /topology node has the key
version set to a version different from
juju.state.topology.VERSION in the code itself. This scenario
can occur when a new client accesses an environment deployed with
previous code, or upon the update of the code in the environment
itself.
Although this checking is done at the level of the topology, upon
every read, the error is defined here because of its
generality. Doing the check in the topology is just because of the
centrality of that piece within juju.
"""
def __init__(self, current, wanted):
self.current = current
self.wanted = wanted
def __str__(self):
return (
"Incompatible juju protocol versions (found %r, want %r)" % (
self.current, self.wanted))
class FileNotFound(JujuError):
"""Raised when a file is not found, obviously! :-)
@ivar path: Path of the directory or file which wasn't found.
"""
def __init__(self, path):
self.path = path
def __str__(self):
return "File was not found: %r" % (self.path,)
class CharmError(JujuError):
"""An error occurred while processing a charm."""
def __init__(self, path, message):
self.path = path
self.message = message
def __str__(self):
return "Error processing %r: %s" % (self.path, self.message)
class CharmInvocationError(CharmError):
"""A charm's hook invocation exited with an error"""
def __init__(self, path, exit_code, signal=None):
self.path = path
self.exit_code = exit_code
self.signal = signal
def __str__(self):
if self.signal is None:
return "Error processing %r: exit code %s." % (
self.path, self.exit_code)
else:
return "Error processing %r: signal %s." % (
self.path, self.signal)
class CharmUpgradeError(CharmError):
"""Something went wrong trying to upgrade a charm"""
def __init__(self, message):
self.message = message
def __str__(self):
return "Cannot upgrade charm: %s" % self.message
class FileAlreadyExists(JujuError):
"""Raised when something refuses to overwrite an existing file.
@ivar path: Path of the directory or file which already exists.
"""
def __init__(self, path):
self.path = path
def __str__(self):
return "File already exists, won't overwrite: %r" % (self.path,)
class NoConnection(JujuError):
"""Raised when the CLI is unable to establish a Zookeeper connection."""
class InvalidHost(NoConnection):
"""Raised when the CLI cannot connect to ZK because of an invalid host."""
class InvalidUser(NoConnection):
"""Raised when the CLI cannot connect to ZK because of an invalid user."""
class EnvironmentNotFound(NoConnection):
"""Raised when the juju environment cannot be found."""
def __init__(self, info="no details available"):
self._info = info
def __str__(self):
return "juju environment not found: %s" % self._info
class EnvironmentPending(NoConnection):
"""Raised when the juju environment is not accessible."""
class ConstraintError(JujuError):
"""Machine constraints are inappropriate or incomprehensible"""
class UnknownConstraintError(ConstraintError):
"""Constraint name not recognised"""
def __init__(self, name):
self.name = name
def __str__(self):
return "Unknown constraint: %r" % self.name
class ProviderError(JujuError):
"""Raised when an exception occurs in a provider."""
class CloudInitError(ProviderError):
"""Raised when a cloud-init file is misconfigured"""
class MachinesNotFound(ProviderError):
"""Raised when a provider can't fulfil a request for machines."""
def __init__(self, instance_ids):
self.instance_ids = list(instance_ids)
def __str__(self):
return "Cannot find machine%s: %s" % (
"" if len(self.instance_ids) == 1 else "s",
", ".join(map(str, self.instance_ids)))
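The `__str__` above pluralizes "machine" based on how many instance ids were requested. A standalone sketch of that formatting pattern (the function name and ids are illustrative, not part of juju):

```python
def format_missing(instance_ids):
    """Build a 'Cannot find machine(s): ...' message, adding the
    plural 's' only when more than one id is listed."""
    ids = list(instance_ids)
    return "Cannot find machine%s: %s" % (
        "" if len(ids) == 1 else "s",
        ", ".join(map(str, ids)))


print(format_missing(["i-01"]))          # Cannot find machine: i-01
print(format_missing(["i-01", "i-02"]))  # Cannot find machines: i-01, i-02
```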
class ProviderInteractionError(ProviderError):
"""Raised when an unexpected error occurs interacting with a provider"""
class CannotTerminateMachine(JujuError):
"""Cannot terminate machine because of some reason"""
def __init__(self, id, reason):
self.id = id
self.reason = reason
def __str__(self):
return "Cannot terminate machine %d: %s" % (self.id, self.reason)
class InvalidPlacementPolicy(JujuError):
"""The provider does not support the user specified placement policy.
"""
def __init__(self, user_policy, provider_type, provider_policies):
self.user_policy = user_policy
self.provider_type = provider_type
self.provider_policies = provider_policies
def __str__(self):
return (
"Unsupported placement policy: %r "
"for provider: %r, supported policies %s" % (
self.user_policy,
self.provider_type,
", ".join(self.provider_policies)))
class ServiceError(JujuError):
"""Some problem with an upstart service"""
class SSLVerificationError(JujuError):
"""User friendly wrapper for SSL certificate errors
Unfortunately the SSL exceptions on certificate validation failure are not
very useful, just being:
('SSL routines','SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')
"""
def __init__(self, ssl_error):
# TODO: pass and report hostname that did not validate
self.ssl_error = ssl_error
def __str__(self):
return ("Bad HTTPS certificate, "
"set 'ssl-hostname-verification' to false to permit")
class SSLVerificationUnsupported(JujuError):
"""Verifying https certificates unsupported as txaws lacks support"""
def __str__(self):
return ("HTTPS certificates cannot be verified as txaws.client.ssl is"
" missing.\n"
"Upgrade txaws or set 'ssl-hostname-verification' to false.")
juju-0.7.orig/juju/ftests/ 0000755 0000000 0000000 00000000000 12135220114 013676 5 ustar 0000000 0000000 juju-0.7.orig/juju/hooks/ 0000755 0000000 0000000 00000000000 12135220114 013511 5 ustar 0000000 0000000 juju-0.7.orig/juju/lib/ 0000755 0000000 0000000 00000000000 12135220114 013134 5 ustar 0000000 0000000 juju-0.7.orig/juju/machine/ 0000755 0000000 0000000 00000000000 12135220114 013772 5 ustar 0000000 0000000 juju-0.7.orig/juju/providers/ 0000755 0000000 0000000 00000000000 12135220114 014403 5 ustar 0000000 0000000 juju-0.7.orig/juju/state/ 0000755 0000000 0000000 00000000000 12135220114 013506 5 ustar 0000000 0000000 juju-0.7.orig/juju/tests/ 0000755 0000000 0000000 00000000000 12135220114 013530 5 ustar 0000000 0000000 juju-0.7.orig/juju/unit/ 0000755 0000000 0000000 00000000000 12135220114 013345 5 ustar 0000000 0000000 juju-0.7.orig/juju/agents/__init__.py 0000644 0000000 0000000 00000000002 12135220114 015750 0 ustar 0000000 0000000 #
juju-0.7.orig/juju/agents/base.py 0000644 0000000 0000000 00000026573 12135220114 015150 0 ustar 0000000 0000000 import argparse
import os
import logging
import stat
import sys
import yaml
import zookeeper
from twisted.application import service
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.scripts._twistd_unix import UnixApplicationRunner, UnixAppLogger
from twisted.python.log import PythonLoggingObserver
from txzookeeper import ZookeeperClient
from txzookeeper.managed import ManagedClient
from juju.control.options import setup_twistd_options
from juju.errors import NoConnection, JujuError
from juju.lib.zklog import ZookeeperHandler
from juju.lib.zk import CLIENT_SESSION_TIMEOUT
from juju.state.environment import GlobalSettingsStateManager
def load_client_id(path):
try:
with open(path) as f:
return yaml.load(f.read())
except IOError:
return None
def save_client_id(path, client_id):
parent = os.path.dirname(path)
if not os.path.exists(parent):
os.makedirs(parent)
with open(path, "w") as f:
f.write(yaml.dump(client_id))
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
class TwistedOptionNamespace(object):
"""
An argparse namespace implementation that is compatible with twisted
config dictionary usage.
"""
def __getitem__(self, key):
return self.__dict__[key]
def __setitem__(self, key, value):
self.__dict__[key] = value
def get(self, key, default=None):
return self.__dict__.get(key, default)
def has_key(self, key):
return key in self.__dict__
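The namespace above exists because argparse exposes parsed options as attributes while twistd expects a config dict; one object that answers to both styles bridges the two. A minimal runnable sketch of that bridge (class and option names are illustrative):

```python
import argparse


class DictNamespace(object):
    """An argparse namespace that also supports dict-style access,
    mirroring the TwistedOptionNamespace pattern."""

    def __getitem__(self, key):
        return self.__dict__[key]

    def __setitem__(self, key, value):
        self.__dict__[key] = value

    def get(self, key, default=None):
        return self.__dict__.get(key, default)


parser = argparse.ArgumentParser()
parser.add_argument("--loglevel", default="DEBUG")
config = parser.parse_args([], namespace=DictNamespace())

# The same option is reachable both ways.
print(config.loglevel)        # DEBUG
print(config["loglevel"])     # DEBUG
print(config.get("missing"))  # None
```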
class AgentLogger(UnixAppLogger):
def __init__(self, options):
super(AgentLogger, self).__init__(options)
self._loglevel = options.get("loglevel", logging.DEBUG)
def _getLogObserver(self):
if self._logfilename == "-":
log_file = sys.stdout
else:
log_file = open(self._logfilename, "a")
# Setup file logger
log_handler = logging.StreamHandler(log_file)
formatter = logging.Formatter(
"%(asctime)s: %(name)s@%(levelname)s: %(message)s")
log_handler.setFormatter(formatter)
# Also capture zookeeper logs (XXX not compatible with rotation)
zookeeper.set_log_stream(log_file)
zookeeper.set_debug_level(0)
# Configure logging.
root = logging.getLogger()
root.addHandler(log_handler)
root.setLevel(logging.getLevelName(self._loglevel))
# Twisted logging is painfully verbose on twisted.web, and
# there isn't a good way to distinguish different channels
within twisted, so just utilize error level logging only for
# all of twisted.
twisted_log = logging.getLogger("twisted")
twisted_log.setLevel(logging.ERROR)
observer = PythonLoggingObserver()
return observer.emit
class AgentRunner(UnixApplicationRunner):
application = None
loggerFactory = AgentLogger
def createOrGetApplication(self):
return self.application
class BaseAgent(object, service.Service):
name = "juju-agent-unknown"
client = None
# Flag when enabling persistent topology watches, testing aid.
_watch_enabled = True
# Distributed debug log handler
_debug_log_handler = None
@classmethod
def run(cls):
"""Runs the agent as a unix daemon.
Main entry point for starting an agent: parses cli options and sets up
a daemon using twistd as per options.
"""
parser = argparse.ArgumentParser()
cls.setup_options(parser)
config = parser.parse_args(namespace=TwistedOptionNamespace())
runner = AgentRunner(config)
agent = cls()
agent.configure(config)
runner.application = agent.as_app()
runner.run()
@classmethod
def setup_options(cls, parser):
"""Configure the argparse cli parser for the agent."""
return cls.setup_default_options(parser)
@classmethod
def setup_default_options(cls, parser):
"""Setup default twistd daemon and agent options.
This method is intended as a utility for subclasses.
@param parser an argparse instance.
@type C{argparse.ArgumentParser}
"""
setup_twistd_options(parser, cls)
setup_default_agent_options(parser, cls)
def as_app(self):
"""
Return the agent as a C{twisted.application.service.Application}
"""
app = service.Application(self.name)
self.setServiceParent(app)
return app
def configure(self, options):
"""Configure the agent to handle its cli options.
Invoked before the service is started.
@param options
@type C{TwistedOptionNamespace} an argparse namespace corresponding
to a dict.
"""
if not options.get("zookeeper_servers"):
raise NoConnection("No zookeeper connection configured.")
if not os.path.exists(options.get("juju_directory", "")):
raise JujuError(
"Invalid juju-directory %r, does not exist." % (
options.get("juju_directory")))
if options["session_file"] is None:
raise JujuError("No session file specified")
self.config = options
@inlineCallbacks
def _kill_existing_session(self):
try:
# We might have died suddenly, in which case the session may
# still be alive. If this is the case, shoot it in the head, so
# it doesn't interfere with our attempts to recreate our state.
# (We need to be able to recreate our state *anyway*, and it's
# much simpler to force ourselves to recreate it every time than
# it is to mess around partially recreating partial state.)
client_id = load_client_id(self.config["session_file"])
if client_id is None:
return
temp_client = yield ZookeeperClient().connect(
self.config["zookeeper_servers"], client_id=client_id)
yield temp_client.close()
except zookeeper.ZooKeeperException:
# We don't really care what went wrong; just that we're not able
# to connect using the old session, and therefore we should be ok
# to start a fresh one without transient state hanging around.
pass
@inlineCallbacks
def connect(self):
"""Return an authenticated connection to the juju zookeeper."""
yield self._kill_existing_session()
self.client = yield ManagedClient(
session_timeout=CLIENT_SESSION_TIMEOUT).connect(
self.config["zookeeper_servers"])
save_client_id(
self.config["session_file"], self.client.client_id)
principals = self.config.get("principals", ())
for principal in principals:
self.client.add_auth("digest", principal)
# bug work around to keep auth fast
if principals:
yield self.client.exists("/")
returnValue(self.client)
def start(self):
"""Callback invoked on the agent's startup.
The agent will already be connected to zookeeper. Subclasses are
responsible for implementing.
"""
raise NotImplementedError
def stop(self):
"""Callback invoked on when the agent is shutting down."""
pass
# Twisted IService implementation, used for delegates to maintain naming
# conventions.
@inlineCallbacks
def startService(self):
yield self.connect()
# Start the global settings watch prior to starting the agent.
# Allows for debug log to be enabled early.
if self.get_watch_enabled():
yield self.start_global_settings_watch()
yield self.start()
@inlineCallbacks
def stopService(self):
try:
yield self.stop()
finally:
if self.client and self.client.connected:
self.client.close()
session_file = self.config["session_file"]
if os.path.exists(session_file):
os.unlink(session_file)
def set_watch_enabled(self, flag):
"""Set boolean flag for whether this agent should watching zookeeper.
This is mainly used for testing, to allow for setting up the
various data scenarios, before enabling an agent watch which will
be observing state.
"""
self._watch_enabled = bool(flag)
def get_watch_enabled(self):
"""Returns a boolean if the agent should be settings state watches.
The meaning of this flag is typically agent specific, as each
agent has separate watches they'd like to establish on agent specific
state within zookeeper. In general if this flag is False, the agent
should refrain from establishing a watch on startup. This flag is
typically used by tests to isolate and test the watch behavior
independent of the agent startup, via construction of test data.
"""
return self._watch_enabled
def start_global_settings_watch(self):
"""Start watching the runtime state for configuration changes."""
self.global_settings_state = GlobalSettingsStateManager(self.client)
return self.global_settings_state.watch_settings_changes(
self.on_global_settings_change)
@inlineCallbacks
def on_global_settings_change(self, change):
"""On global settings change, take action.
"""
if (yield self.global_settings_state.is_debug_log_enabled()):
yield self.start_debug_log()
else:
self.stop_debug_log()
@inlineCallbacks
def start_debug_log(self):
"""Enable the distributed debug log handler.
"""
if self._debug_log_handler is not None:
returnValue(None)
context_name = self.get_agent_name()
self._debug_log_handler = ZookeeperHandler(
self.client, context_name)
yield self._debug_log_handler.open()
log_root = logging.getLogger()
log_root.addHandler(self._debug_log_handler)
def stop_debug_log(self):
"""Disable any configured debug log handler.
"""
if self._debug_log_handler is None:
return
handler, self._debug_log_handler = self._debug_log_handler, None
log_root = logging.getLogger()
log_root.removeHandler(handler)
def get_agent_name(self):
"""Return the agent's name and context such that it can be identified.
Subclasses should override this to provide additional context and
unique naming.
"""
return self.__class__.__name__
def setup_default_agent_options(parser, cls):
principals_default = os.environ.get("JUJU_PRINCIPALS", "").split()
parser.add_argument(
"--principal", "-e",
action="append", dest="principals", default=principals_default,
help="Agent principals to utilize for the zookeeper connection")
servers_default = os.environ.get("JUJU_ZOOKEEPER", "")
parser.add_argument(
"--zookeeper-servers", "-z", default=servers_default,
help="juju Zookeeper servers to connect to ($JUJU_ZOOKEEPER)")
juju_home = os.environ.get("JUJU_HOME", "/var/lib/juju")
parser.add_argument(
"--juju-directory", default=juju_home, type=os.path.abspath,
help="juju working directory ($JUJU_HOME)")
parser.add_argument(
"--session-file", default=None, type=os.path.abspath,
help="like a pidfile, but for the zookeeper session id")
juju-0.7.orig/juju/agents/dummy.py 0000644 0000000 0000000 00000000517 12135220114 015357 0 ustar 0000000 0000000
from .base import BaseAgent
class DummyAgent(BaseAgent):
"""A do nothing juju agent.
A bit like a dog, it just lies around basking in the sun,
doing nothing; nonetheless it's quite content. :-)
"""
def start(self):
"""nothing to see here, move along."""
if __name__ == '__main__':
DummyAgent.run()
juju-0.7.orig/juju/agents/machine.py 0000644 0000000 0000000 00000010231 12135220114 015622 0 ustar 0000000 0000000 import logging
import os
from twisted.internet.defer import inlineCallbacks
from juju.errors import JujuError
from juju.state.machine import MachineStateManager
from juju.state.service import ServiceStateManager
from juju.unit.deploy import UnitDeployer
from .base import BaseAgent
log = logging.getLogger("juju.agents.machine")
class MachineAgent(BaseAgent):
"""A juju machine agent.
The machine agent is responsible for monitoring service units
assigned to a machine. If a new unit is assigned to machine, the
machine agent will download the charm, create a working
space for the service unit agent, and then launch it.
Additionally the machine agent will monitor the running service
unit agents on the machine, via their ephemeral nodes, and
restart them if they die.
"""
name = "juju-machine-agent"
unit_agent_module = "juju.agents.unit"
@property
def units_directory(self):
return os.path.join(self.config["juju_directory"], "units")
@property
def unit_state_directory(self):
return os.path.join(self.config["juju_directory"], "state")
@inlineCallbacks
def start(self):
"""Start the machine agent.
Creates state directories on the machine, retrieves the machine state,
and enables watch on assigned units.
"""
if not os.path.exists(self.units_directory):
os.makedirs(self.units_directory)
if not os.path.exists(self.unit_state_directory):
os.makedirs(self.unit_state_directory)
# Get state managers we'll be utilizing.
self.service_state_manager = ServiceStateManager(self.client)
self.unit_deployer = UnitDeployer(
self.client, self.get_machine_id(), self.config["juju_directory"])
yield self.unit_deployer.start()
# Retrieve the machine state for the machine we represent.
machine_manager = MachineStateManager(self.client)
self.machine_state = yield machine_manager.get_machine_state(
self.get_machine_id())
# Watch assigned units for the machine.
if self.get_watch_enabled():
self.machine_state.watch_assigned_units(
self.watch_service_units)
# Connect the machine agent, broadcasting presence to the world.
yield self.machine_state.connect_agent()
log.info("Machine agent started id:%s" % self.get_machine_id())
@inlineCallbacks
def watch_service_units(self, old_units, new_units):
"""Callback invoked when the assigned service units change.
"""
if old_units is None:
old_units = set()
log.debug(
"Units changed old:%s new:%s", old_units, new_units)
stopped = old_units - new_units
started = new_units - old_units
for unit_name in stopped:
log.debug("Stopping service unit: %s ...", unit_name)
try:
yield self.unit_deployer.kill_service_unit(unit_name)
except Exception:
log.exception("Error stopping unit: %s", unit_name)
for unit_name in started:
log.debug("Starting service unit: %s ...", unit_name)
try:
yield self.unit_deployer.start_service_unit(unit_name)
except Exception:
log.exception("Error starting unit: %s", unit_name)
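The stop/start bookkeeping in watch_service_units is plain set arithmetic. A minimal standalone sketch (diff_units is a hypothetical helper, not part of the agent):

```python
def diff_units(old_units, new_units):
    """Return (stopped, started) unit-name sets, treating None as empty.

    Mirrors the old - new / new - old subtraction used by the callback.
    """
    old_units = old_units or set()
    new_units = new_units or set()
    return old_units - new_units, new_units - old_units

stopped, started = diff_units({"mysql/0", "wp/0"}, {"wp/0", "wp/1"})
```

The first watch invocation passes old_units=None, which is why the callback (and this sketch) normalizes it to an empty set.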
def get_machine_id(self):
"""Get the id of the machine as known within the zk state."""
return self.config["machine_id"]
def get_agent_name(self):
return "Machine:%s" % (self.get_machine_id())
def configure(self, options):
super(MachineAgent, self).configure(options)
if not options.get("machine_id"):
msg = ("--machine-id must be provided in the command line, "
"or $JUJU_MACHINE_ID in the environment")
raise JujuError(msg)
@classmethod
def setup_options(cls, parser):
super(MachineAgent, cls).setup_options(parser)
machine_id = os.environ.get("JUJU_MACHINE_ID", "")
parser.add_argument(
"--machine-id", default=machine_id)
return parser
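setup_options wires an environment variable in as the flag's default, so either `--machine-id` or `$JUJU_MACHINE_ID` satisfies configure(). A self-contained illustration of the same pattern (the parser here is a throwaway, not the agent's):

```python
import argparse
import os

# Environment variable supplies the default; the cli flag overrides it.
os.environ["JUJU_MACHINE_ID"] = "7"
parser = argparse.ArgumentParser()
parser.add_argument(
    "--machine-id", default=os.environ.get("JUJU_MACHINE_ID", ""))

from_env = parser.parse_args([]).machine_id          # "7"
from_cli = parser.parse_args(["--machine-id", "9"]).machine_id  # "9"
```

Note the default is captured when add_argument runs, matching the agent's behavior of reading the environment at option-setup time.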
if __name__ == "__main__":
MachineAgent().run()
juju-0.7.orig/juju/agents/provision.py
import logging
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from zookeeper import NoNodeException
from juju.environment.config import EnvironmentsConfig
from juju.errors import ProviderError
from juju.lib.twistutils import concurrent_execution_guard
from juju.state.errors import MachineStateNotFound, StopWatcher
from juju.state.firewall import FirewallManager
from juju.state.machine import MachineStateManager
from juju.state.service import ServiceStateManager
from .base import BaseAgent
log = logging.getLogger("juju.agents.provision")
class ProvisioningAgent(BaseAgent):
name = "juju-provisioning-agent"
_current_machines = ()
# time in seconds
machine_check_period = 60
def get_agent_name(self):
return "provision:%s" % (self.environment.type)
@inlineCallbacks
def start(self):
self._running = True
self.environment = yield self.configure_environment()
self.provider = self.environment.get_machine_provider()
self.machine_state_manager = MachineStateManager(self.client)
self.service_state_manager = ServiceStateManager(self.client)
self.firewall_manager = FirewallManager(
self.client, self.is_running, self.provider)
if self.get_watch_enabled():
self.machine_state_manager.watch_machine_states(
self.watch_machine_changes)
self.service_state_manager.watch_service_states(
self.firewall_manager.watch_service_changes)
from twisted.internet import reactor
reactor.callLater(
self.machine_check_period, self.periodic_machine_check)
log.info("Started provisioning agent")
else:
log.info("Started provisioning agent without watches enabled")
def stop(self):
log.info("Stopping provisioning agent")
self._running = False
return succeed(True)
def is_running(self):
"""Whether this agent is running or not."""
return self._running
@inlineCallbacks
def configure_environment(self):
"""The provisioning agent configures its environment on start or change.
The environment contains the configuration the agent needs to interact
with its machine provider, in order to do its work. This configuration
data is deployed lazily over an encrypted connection upon first usage.
The agent waits for this data to exist before completing its startup.
"""
try:
get_d, watch_d = self.client.get_and_watch("/environment")
environment_data, stat = yield get_d
watch_d.addCallback(self._on_environment_changed)
except NoNodeException:
# Wait till the environment node appears. play twisted gymnastics
exists_d, watch_d = self.client.exists_and_watch("/environment")
stat = yield exists_d
if stat:
environment = yield self.configure_environment()
else:
watch_d.addCallback(
lambda result: self.configure_environment())
if not stat:
environment = yield watch_d
returnValue(environment)
config = EnvironmentsConfig()
config.parse(environment_data)
returnValue(config.get_default())
@inlineCallbacks
def _on_environment_changed(self, event):
"""Reload the environment if its data changes."""
if event.type_name == "deleted":
return
self.environment = yield self.configure_environment()
self.provider = self.environment.get_machine_provider()
def periodic_machine_check(self):
"""A periodic checking of machine states and provider machines.
In addition to the on demand changes to zookeeper states that are
monitored by L{watch_machine_changes}, the periodic machine check
performs non zookeeper state related verification by periodically
checking the last current provider machine states against the
last known zookeeper state.
Primarily this helps in recovering from transient error conditions
which may have prevented processing of an individual machine state, as
well as verifying the current state of the provider's running machines
against the zk state, thus pruning unused resources.
"""
from twisted.internet import reactor
d = self.process_machines(self._current_machines)
d.addBoth(
lambda result: reactor.callLater(
self.machine_check_period, self.periodic_machine_check))
return d
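The addBoth() above guarantees the next check is queued whether process_machines succeeds or fails; in synchronous terms that is a try/finally. A pure-Python sketch of that guarantee (run_check and flaky_check are hypothetical stand-ins):

```python
def run_check(check, schedule_next):
    """Run one periodic check; always queue the next run, even on error."""
    try:
        return check()
    finally:
        schedule_next()

calls = []

def flaky_check():
    calls.append("check")
    raise RuntimeError("provider hiccup")

try:
    run_check(flaky_check, lambda: calls.append("rescheduled"))
except RuntimeError:
    pass
# the reschedule happened despite the failure
```

This is why a transient provider error never kills the periodic loop: the rescheduling is attached to both callback paths of the deferred.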
@inlineCallbacks
def watch_machine_changes(self, old_machines, new_machines):
"""Watches and processes machine state changes.
This function is used to subscribe to topology changes, and
specifically changes to machines within the topology. It performs
work against the machine provider to ensure that the currently
running state of the juju cluster corresponds to the topology
via creation and deletion of machines.
The subscription utilized is a permanent one, meaning that this
function will automatically be rescheduled to run whenever a topology
state change happens that involves machines.
This function also caches the current set of machines as an agent
instance attribute.
@param old_machines machine ids as existed in the previous topology.
@param new_machines machine ids as exist in the current topology.
"""
if not self._running:
raise StopWatcher()
log.debug("Machines changed old:%s new:%s", old_machines, new_machines)
self._current_machines = new_machines
try:
yield self.process_machines(self._current_machines)
except Exception:
# Log and effectively retry later in periodic_machine_check
log.exception(
"Got unexpected exception in processing machines,"
" will retry")
@concurrent_execution_guard("_processing_machines")
@inlineCallbacks
def process_machines(self, current_machines):
"""Ensure the currently running machines correspond to state.
At the end of each process_machines execution, verify that all
running machines within the provider correspond to machine_ids within
the topology. If they don't then shut them down.
Utilizes concurrent execution guard, to ensure that this is only being
executed at most once per process.
"""
# XXX this is obviously broken, but the margins of 80 columns prevent
# me from describing. hint think concurrent agents, and use a lock.
# map of instance_id -> machine
try:
provider_machines = yield self.provider.get_machines()
except ProviderError:
log.exception("Cannot get machine list")
return
provider_machines = dict(
[(m.instance_id, m) for m in provider_machines])
instance_ids = []
for machine_state_id in current_machines:
try:
instance_id = yield self.process_machine(
machine_state_id, provider_machines)
except (MachineStateNotFound, ProviderError):
log.exception("Cannot process machine %s", machine_state_id)
continue
instance_ids.append(instance_id)
# Terminate all unused juju machines running within the cluster.
unused = set(provider_machines.keys()) - set(instance_ids)
for instance_id in unused:
log.info("Shutting down machine id:%s ...", instance_id)
machine = provider_machines[instance_id]
try:
yield self.provider.shutdown_machine(machine)
except ProviderError:
log.exception("Cannot shutdown machine %s", instance_id)
continue
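The pruning step above keys provider machines by instance_id and subtracts the ids known to state. A standalone sketch under the assumption that machine records are dicts with an "instance_id" key (prune_unused is a hypothetical helper):

```python
def prune_unused(provider_machines, known_instance_ids):
    """Return provider machine records with no corresponding state id.

    provider_machines: iterable of {"instance_id": ...} records;
    known_instance_ids: ids collected while processing machine states.
    """
    by_id = dict((m["instance_id"], m) for m in provider_machines)
    unused = set(by_id) - set(known_instance_ids)
    # sorted() only to make the result deterministic for callers
    return [by_id[i] for i in sorted(unused)]

leftover = prune_unused(
    [{"instance_id": "i-1"}, {"instance_id": "i-2"}], ["i-1"])
```

In the agent each leftover record is then passed to provider.shutdown_machine, with per-machine error handling so one failure does not abort the sweep.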
@inlineCallbacks
def process_machine(self, machine_state_id, provider_machine_map):
"""Ensure a provider machine for a machine state id.
For each machine_id in new machines which represents the current state
of the topology:
* Check to ensure its state reflects that it has been
launched. If it hasn't then create the machine and update
the state.
* Watch the machine's assigned services so that changes can
be applied to the firewall for service exposing support.
"""
# fetch the machine state
machine_state = yield self.machine_state_manager.get_machine_state(
machine_state_id)
instance_id = yield machine_state.get_instance_id()
# Verify a machine id has state and is running, else launch it.
if instance_id is None or instance_id not in provider_machine_map:
log.info("Starting machine id:%s ...", machine_state.id)
constraints = yield machine_state.get_constraints()
machines = yield self.provider.start_machine(
{"machine-id": machine_state.id, "constraints": constraints})
instance_id = machines[0].instance_id
yield machine_state.set_instance_id(instance_id)
# The firewall manager also needs to be checked for any
# outstanding retries on this machine
yield self.firewall_manager.process_machine(machine_state)
returnValue(instance_id)
if __name__ == '__main__':
ProvisioningAgent().run()
juju-0.7.orig/juju/agents/tests/
juju-0.7.orig/juju/agents/unit.py
import os
import logging
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.errors import JujuError
from juju.state.service import ServiceStateManager, RETRY_HOOKS
from juju.hooks.protocol import UnitSettingsFactory
from juju.hooks.executor import HookExecutor
from juju.unit.address import get_unit_address
from juju.unit.lifecycle import UnitLifecycle, HOOK_SOCKET_FILE
from juju.unit.workflow import UnitWorkflowState
from juju.agents.base import BaseAgent
log = logging.getLogger("juju.agents.unit")
def unit_path(juju_path, unit_state):
return os.path.join(
juju_path, "units", unit_state.unit_name.replace("/", "-"))
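unit_path flattens a "service/number" unit name into a filesystem-safe directory name. It can be exercised standalone; UnitState below is a stand-in exposing only the attribute the function reads:

```python
import os
from collections import namedtuple

UnitState = namedtuple("UnitState", "unit_name")

def unit_path(juju_path, unit_state):
    # "mysql/0" -> ".../units/mysql-0"
    return os.path.join(
        juju_path, "units", unit_state.unit_name.replace("/", "-"))

path = unit_path("/var/lib/juju", UnitState("mysql/0"))
```

UnitAgent.start builds the same path inline when computing unit_directory.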
class UnitAgent(BaseAgent):
"""A juju Unit Agent.
Provides for the management of a charm, via hook execution in response to
external events in the coordination space (zookeeper).
"""
name = "juju-unit-agent"
@classmethod
def setup_options(cls, parser):
super(UnitAgent, cls).setup_options(parser)
unit_name = os.environ.get("JUJU_UNIT_NAME", "")
parser.add_argument("--unit-name", default=unit_name)
@property
def unit_name(self):
return self.config["unit_name"]
def get_agent_name(self):
return "unit:%s" % self.unit_name
def configure(self, options):
"""Configure the unit agent."""
super(UnitAgent, self).configure(options)
if not options.get("unit_name"):
msg = ("--unit-name must be provided in the command line, "
"or $JUJU_UNIT_NAME in the environment")
raise JujuError(msg)
self.executor = HookExecutor()
self.api_factory = UnitSettingsFactory(
self.executor.get_hook_context,
self.executor.get_invoker,
logging.getLogger("unit.hook.api"))
self.api_socket = None
self.workflow = None
@inlineCallbacks
def start(self):
"""Start the unit agent process."""
service_state_manager = ServiceStateManager(self.client)
# Retrieve our unit and configure working directories.
service_name = self.unit_name.split("/")[0]
self.service_state = yield service_state_manager.get_service_state(
service_name)
self.unit_state = yield self.service_state.get_unit_state(
self.unit_name)
self.unit_directory = os.path.join(
self.config["juju_directory"], "units",
self.unit_state.unit_name.replace("/", "-"))
self.state_directory = os.path.join(
self.config["juju_directory"], "state")
# Setup the server portion of the cli api exposed to hooks.
socket_path = os.path.join(self.unit_directory, HOOK_SOCKET_FILE)
if os.path.exists(socket_path):
os.unlink(socket_path)
from twisted.internet import reactor
self.api_socket = reactor.listenUNIX(socket_path, self.api_factory)
# Setup the unit state's address
address = yield get_unit_address(self.client)
yield self.unit_state.set_public_address(
(yield address.get_public_address()))
yield self.unit_state.set_private_address(
(yield address.get_private_address()))
# Inform the system, we're alive.
yield self.unit_state.connect_agent()
# Start paying attention to the debug-log setting
if self.get_watch_enabled():
yield self.unit_state.watch_hook_debug(self.cb_watch_hook_debug)
self.lifecycle = UnitLifecycle(
self.client, self.unit_state, self.service_state,
self.unit_directory, self.state_directory, self.executor)
self.workflow = UnitWorkflowState(
self.client, self.unit_state, self.lifecycle, self.state_directory)
# Set up correct lifecycle and executor state given the persistent
# unit workflow state, and fire any starting transitions if necessary.
with (yield self.workflow.lock()):
yield self.workflow.synchronize(self.executor)
if self.get_watch_enabled():
yield self.unit_state.watch_resolved(self.cb_watch_resolved)
yield self.service_state.watch_config_state(
self.cb_watch_config_changed)
yield self.unit_state.watch_upgrade_flag(
self.cb_watch_upgrade_flag)
@inlineCallbacks
def stop(self):
"""Stop the unit agent process."""
if self.lifecycle.running:
yield self.lifecycle.stop(fire_hooks=False, stop_relations=False)
yield self.executor.stop()
if self.api_socket:
yield self.api_socket.stopListening()
yield self.api_factory.stopFactory()
@inlineCallbacks
def cb_watch_resolved(self, change):
"""Update the unit's state when it's resolved.
Resolved operations form the basis of error recovery for unit
workflows. A resolved operation can optionally specify hook
execution. The unit agent runs the error recovery transition
if the unit is not in a running state.
"""
# Would be nice if we could fold this into an atomic
# get and delete primitive.
# Check resolved setting
resolved = yield self.unit_state.get_resolved()
if resolved is None:
returnValue(None)
# Clear out the setting
yield self.unit_state.clear_resolved()
with (yield self.workflow.lock()):
if (yield self.workflow.get_state()) == "started":
returnValue(None)
try:
log.info("Resolved detected, firing retry transition")
if resolved["retry"] == RETRY_HOOKS:
yield self.workflow.fire_transition_alias("retry_hook")
else:
yield self.workflow.fire_transition_alias("retry")
except Exception:
log.exception("Unknown error while transitioning for resolved")
@inlineCallbacks
def cb_watch_hook_debug(self, change):
"""Update the hooks to be debugged when the settings change.
"""
debug = yield self.unit_state.get_hook_debug()
debug_hooks = debug and debug.get("debug_hooks") or None
self.executor.set_debug(debug_hooks)
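The `debug and debug.get("debug_hooks") or None` idiom normalizes every falsy result (missing node, missing key, empty list) to None before it reaches the executor. A sketch of its behavior (normalize is a hypothetical name; the dicts are illustrative settings payloads):

```python
def normalize(debug):
    # Same expression as in cb_watch_hook_debug.
    return debug and debug.get("debug_hooks") or None

absent = normalize(None)                      # node absent
empty = normalize({})                         # empty settings
empty_list = normalize({"debug_hooks": []})   # empty list collapses too
present = normalize({"debug_hooks": ["install"]})
```

The and/or chain differs from a plain `.get()` in that an empty hook list also collapses to None, which is the intended "debugging off" signal here.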
@inlineCallbacks
def cb_watch_upgrade_flag(self, change):
"""Update the unit's charm when requested.
"""
upgrade_flag = yield self.unit_state.get_upgrade_flag()
if not upgrade_flag:
log.info("No upgrade flag set.")
return
log.info("Upgrade detected")
# Clear the flag immediately; this means that upgrade requests will
# be *ignored* by units which are not "started", and will need to be
# reissued when the units are in acceptable states.
yield self.unit_state.clear_upgrade_flag()
new_id = yield self.service_state.get_charm_id()
old_id = yield self.unit_state.get_charm_id()
if new_id == old_id:
log.info("Upgrade ignored: already running latest charm")
return
with (yield self.workflow.lock()):
state = yield self.workflow.get_state()
if state != "started":
if upgrade_flag["force"]:
yield self.lifecycle.upgrade_charm(
fire_hooks=False, force=True)
log.info("Forced upgrade complete")
return
log.warning(
"Cannot upgrade: unit is in non-started state %s. Reissue "
"upgrade command to try again.", state)
return
log.info("Starting upgrade")
if (yield self.workflow.fire_transition("upgrade_charm")):
log.info("Upgrade complete")
else:
log.info("Upgrade failed")
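The branching in cb_watch_upgrade_flag reduces to a small decision table: no flag, already-current charm, non-started state (forced or refused), or a normal upgrade. A pure-function sketch of that table (decide_upgrade and its outcome strings are hypothetical, not agent API):

```python
def decide_upgrade(flag, new_id, old_id, state):
    """Classify an upgrade request the way the callback's branches do."""
    if not flag:
        return "ignore"              # no upgrade flag set
    if new_id == old_id:
        return "already-current"     # already running latest charm
    if state != "started":
        return "forced" if flag.get("force") else "refused"
    return "upgrade"
```

Note the real callback clears the flag before these checks, so a refused upgrade must be reissued rather than retried automatically.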
@inlineCallbacks
def cb_watch_config_changed(self, change):
"""Trigger hook on configuration change"""
# Verify it is running
with (yield self.workflow.lock()):
current_state = yield self.workflow.get_state()
log.debug("Configuration Changed")
if current_state != "started":
log.debug(
"Configuration updated on service in a non-started state")
returnValue(None)
yield self.workflow.fire_transition("configure")
if __name__ == '__main__':
UnitAgent.run()
juju-0.7.orig/juju/agents/tests/__init__.py
#
juju-0.7.orig/juju/agents/tests/common.py
import os
from twisted.internet.defer import inlineCallbacks, succeed
from txzookeeper.tests.utils import deleteTree
from juju.agents.base import TwistedOptionNamespace
from juju.state.tests.common import StateTestBase
from juju.tests.common import get_test_zookeeper_address
class AgentTestBase(StateTestBase):
agent_class = None
juju_directory = None
setup_environment = True
@inlineCallbacks
def setUp(self):
self.juju_directory = self.makeDir()
yield super(AgentTestBase, self).setUp()
assert self.agent_class, "Agent Class must be specified on test"
if self.setup_environment:
yield self.push_default_config()
self.agent = self.agent_class()
self.options = yield self.get_agent_config()
self.agent.configure(self.options)
self.agent.set_watch_enabled(False)
def tearDown(self):
if self.agent.client and self.agent.client.connected:
self.agent.client.close()
if self.client.connected:
deleteTree("/", self.client.handle)
self.client.close()
def get_agent_config(self):
options = TwistedOptionNamespace()
options["juju_directory"] = self.juju_directory
options["zookeeper_servers"] = get_test_zookeeper_address()
options["session_file"] = self.makeFile()
return succeed(options)
@inlineCallbacks
def debug_pprint_tree(self, path="/", indent=1):
children = yield self.client.get_children(path)
for n in children:
print " " * indent, "/" + n
yield self.debug_pprint_tree(
os.path.join(path, n),
indent + 1)
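debug_pprint_tree walks zookeeper children recursively, printing one indented line per node. A synchronous analogue over a nested-dict "tree" (pprint_tree and the dict layout are illustrative):

```python
def pprint_tree(tree, indent=1, out=None):
    """Collect one ' ' * indent + '/name' line per node, depth-first."""
    out = out if out is not None else []
    for name in sorted(tree):
        out.append(" " * indent + "/" + name)
        pprint_tree(tree[name], indent + 1, out)
    return out

lines = pprint_tree({"machines": {"0": {}}, "services": {}})
```

The real helper yields on get_children instead of indexing a dict, but the indentation and recursion shape are the same.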
juju-0.7.orig/juju/agents/tests/test_base.py
import argparse
import json
import logging
import os
import stat
import sys
import yaml
from twisted.application.app import AppLogger
from twisted.application.service import IService, IServiceCollection
from twisted.internet.defer import (
fail, succeed, Deferred, inlineCallbacks, returnValue)
from twisted.python.components import Componentized
from twisted.python import log
import zookeeper
from txzookeeper import ZookeeperClient
from juju.lib.testing import TestCase
from juju.lib.mocker import MATCH
from juju.tests.common import get_test_zookeeper_address
from juju.agents.base import (
BaseAgent, TwistedOptionNamespace, AgentRunner, AgentLogger)
from juju.agents.dummy import DummyAgent
from juju.errors import NoConnection, JujuError
from juju.lib.zklog import ZookeeperHandler
from juju.agents.tests.common import AgentTestBase
MATCH_APP = MATCH(lambda x: isinstance(x, Componentized))
MATCH_HANDLER = MATCH(lambda x: isinstance(x, ZookeeperHandler))
class BaseAgentTest(TestCase):
@inlineCallbacks
def setUp(self):
yield super(BaseAgentTest, self).setUp()
self.juju_home = self.makeDir()
self.change_environment(JUJU_HOME=self.juju_home)
def test_as_app(self):
"""The agent class can be accessed as an application."""
app = BaseAgent().as_app()
multi_service = IService(app, None)
self.assertTrue(IServiceCollection.providedBy(multi_service))
services = list(multi_service)
self.assertEqual(len(services), 1)
def test_twistd_default_options(self):
"""The agent cli parsing populates standard twistd options."""
parser = argparse.ArgumentParser()
BaseAgent.setup_options(parser)
# Daemon group
self.assertEqual(
parser.get_default("logfile"), "%s.log" % BaseAgent.name)
self.assertEqual(parser.get_default("pidfile"), "")
self.assertEqual(parser.get_default("loglevel"), "DEBUG")
self.assertFalse(parser.get_default("nodaemon"))
self.assertEqual(parser.get_default("rundir"), ".")
self.assertEqual(parser.get_default("chroot"), None)
self.assertEqual(parser.get_default("umask"), '0022')
self.assertEqual(parser.get_default("uid"), None)
self.assertEqual(parser.get_default("gid"), None)
self.assertEqual(parser.get_default("euid"), None)
self.assertEqual(parser.get_default("prefix"), BaseAgent.name)
self.assertEqual(parser.get_default("syslog"), False)
# Development Group
self.assertFalse(parser.get_default("debug"))
self.assertFalse(parser.get_default("profile"))
self.assertFalse(parser.get_default("savestats"))
self.assertEqual(parser.get_default("profiler"), "cprofile")
# Hidden defaults
self.assertEqual(parser.get_default("reactor"), "epoll")
self.assertEqual(parser.get_default("originalname"), None)
# Agent options
self.assertEqual(parser.get_default("principals"), [])
self.assertEqual(parser.get_default("zookeeper_servers"), "")
self.assertEqual(parser.get_default("juju_directory"), self.juju_home)
self.assertEqual(parser.get_default("session_file"), None)
def test_twistd_flags_correspond(self):
parser = argparse.ArgumentParser()
BaseAgent.setup_options(parser)
args = [
"--profile",
"--savestats",
"--nodaemon"]
options = parser.parse_args(args, namespace=TwistedOptionNamespace())
self.assertEqual(options.get("savestats"), True)
self.assertEqual(options.get("nodaemon"), True)
self.assertEqual(options.get("profile"), True)
def test_agent_logger(self):
parser = argparse.ArgumentParser()
BaseAgent.setup_options(parser)
log_file_path = self.makeFile()
options = parser.parse_args(
["--logfile", log_file_path, "--session-file", self.makeFile()],
namespace=TwistedOptionNamespace())
def match_observer(observer):
return isinstance(observer.im_self, log.PythonLoggingObserver)
def cleanup(observer):
# post test cleanup of global state.
log.removeObserver(observer)
logging.getLogger().handlers = []
original_log_with_observer = log.startLoggingWithObserver
def _start_log_with_observer(observer):
self.addCleanup(cleanup, observer)
# by default logging will replace stdout/stderr
return original_log_with_observer(observer, 0)
app = self.mocker.mock()
app.getComponent(log.ILogObserver, None)
self.mocker.result(None)
start_log_with_observer = self.mocker.replace(
log.startLoggingWithObserver)
start_log_with_observer(MATCH(match_observer))
self.mocker.call(_start_log_with_observer)
self.mocker.replay()
agent_logger = AgentLogger(options)
agent_logger.start(app)
# We suppress twisted messages below the error level.
output = open(log_file_path).read()
self.assertFalse(output)
# also verify we didn't mess with the app logging.
app_log = logging.getLogger()
app_log.info("Good")
# and that twisted errors still go through.
log.err("Something bad happened")
output = open(log_file_path).read()
self.assertIn("Good", output)
self.assertIn("Something bad happened", output)
def test_custom_log_level(self):
parser = argparse.ArgumentParser()
BaseAgent.setup_options(parser)
options = parser.parse_args(
["--loglevel", "INFO"], namespace=TwistedOptionNamespace())
self.assertEqual(options.loglevel, "INFO")
def test_twistd_option_namespace(self):
"""
The twisted option namespace bridges argparse attribute access
to twisted dictionary access for cli options.
"""
options = TwistedOptionNamespace()
options.x = 1
self.assertEqual(options['x'], 1)
self.assertEqual(options.get('x'), 1)
self.assertEqual(options.get('y'), None)
self.assertRaises(KeyError, options.__getitem__, 'y')
options['y'] = 2
self.assertEqual(options.y, 2)
self.assertTrue(options.has_key('y'))
self.assertFalse(options.has_key('z'))
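The behavior this test pins down is easy to re-derive: a namespace whose attributes and keys are the same storage. A minimal re-implementation (OptionNamespace is a hypothetical stand-in for TwistedOptionNamespace, not the real class):

```python
class OptionNamespace(object):
    """Bridge attribute-style and dict-style access over one __dict__."""

    def __getitem__(self, key):
        return self.__dict__[key]      # missing key raises KeyError

    def __setitem__(self, key, value):
        self.__dict__[key] = value

    def get(self, key, default=None):
        return self.__dict__.get(key, default)

    def has_key(self, key):            # twisted-style API shown in the test
        return key in self.__dict__

ns = OptionNamespace()
ns.x = 1
ns["y"] = 2
```

This is why argparse can populate the object via setattr while twistd code reads it like the usual options dictionary.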
def test_runner_attribute_application(self):
"""The agent runner retrieves the application as an attribute."""
runner = AgentRunner({})
self.assertEqual(runner.createOrGetApplication(), None)
runner.application = 21
self.assertEqual(runner.createOrGetApplication(), 21)
def test_run(self):
"""Invokes the run class method on an agent.
This creates an agent instance, parses the cli args, passes them to
the agent, and starts the agent runner.
"""
self.change_args(
"es-agent", "--zookeeper-servers", get_test_zookeeper_address(),
"--session-file", self.makeFile())
runner = self.mocker.patch(AgentRunner)
runner.run()
mock_agent = self.mocker.patch(BaseAgent)
def match_args(config):
self.assertEqual(config["zookeeper_servers"],
get_test_zookeeper_address())
return True
mock_agent.configure(MATCH(match_args))
self.mocker.passthrough()
self.mocker.replay()
BaseAgent.run()
def test_full_run(self):
"""Verify a functional agent start via the 'run' method.
This test requires Zookeeper running on the default port of localhost.
The mocked portions are to prevent the daemon start from altering the
test environment (sys.stdout/sys.stderr, and reactor start).
"""
zookeeper.set_debug_level(0)
started = Deferred()
class DummyAgent(BaseAgent):
started = False
def start(self):
started.callback(self)
def validate_started(agent):
self.assertTrue(agent.client.connected)
started.addCallback(validate_started)
self.change_args(
"es-agent", "--nodaemon",
"--zookeeper-servers", get_test_zookeeper_address(),
"--session-file", self.makeFile())
runner = self.mocker.patch(AgentRunner)
logger = self.mocker.patch(AppLogger)
logger.start(MATCH_APP)
runner.startReactor(None, sys.stdout, sys.stderr)
logger.stop()
self.mocker.replay()
DummyAgent.run()
return started
@inlineCallbacks
def test_stop_service_stub_closes_agent(self):
"""The base agent's stopService will invoke the stop method.
Additionally it will close the agent's zookeeper client if
the client is still connected.
"""
mock_agent = self.mocker.patch(BaseAgent)
mock_client = self.mocker.mock(ZookeeperClient)
session_file = self.makeFile()
# connection is closed after agent.stop invoked.
with self.mocker.order():
mock_agent.stop()
self.mocker.passthrough()
# client existence check
mock_agent.client
self.mocker.result(mock_client)
# client connected check
mock_agent.client
self.mocker.result(mock_client)
mock_client.connected
self.mocker.result(True)
# client close
mock_agent.client
self.mocker.result(mock_client)
mock_client.close()
# delete session file
mock_agent.config
self.mocker.result({"session_file": session_file})
self.mocker.replay()
agent = BaseAgent()
yield agent.stopService()
self.assertFalse(os.path.exists(session_file))
@inlineCallbacks
def test_stop_service_stub_ignores_disconnected_agent(self):
"""The base agent's stopService will invoke the stop method.
If the client is not connected then no attempt is made to close it.
"""
mock_agent = self.mocker.patch(BaseAgent)
mock_client = self.mocker.mock(ZookeeperClient)
session_file = self.makeFile()
# connection is closed after agent.stop invoked.
with self.mocker.order():
mock_agent.stop()
# client existence check
mock_agent.client
self.mocker.result(mock_client)
# client connected check
mock_agent.client
self.mocker.result(mock_client)
mock_client.connected
self.mocker.result(False)
mock_agent.config
self.mocker.result({"session_file": session_file})
self.mocker.replay()
agent = BaseAgent()
yield agent.stopService()
self.assertFalse(os.path.exists(session_file))
def test_run_base_raises_error(self):
"""The base class agent raises a NotImplementedError when started."""
client = self.mocker.patch(ZookeeperClient)
client.connect(get_test_zookeeper_address())
client_mock = self.mocker.mock()
self.mocker.result(succeed(client_mock))
client_mock.client_id
self.mocker.result((123, "abc"))
self.mocker.replay()
agent = BaseAgent()
agent.set_watch_enabled(False)
agent.configure({
"zookeeper_servers": get_test_zookeeper_address(),
"juju_directory": self.makeDir(),
"session_file": self.makeFile()})
d = agent.startService()
self.failUnlessFailure(d, NotImplementedError)
return d
def test_connect_cli_option(self):
"""The zookeeper server can be passed via cli argument."""
mock_client = self.mocker.mock()
client = self.mocker.patch(ZookeeperClient)
client.connect("x2.example.com")
self.mocker.result(succeed(mock_client))
mock_client.client_id
self.mocker.result((123, "abc"))
self.mocker.replay()
agent = BaseAgent()
agent.configure({"zookeeper_servers": "x2.example.com",
"juju_directory": self.makeDir(),
"session_file": self.makeFile()})
result = agent.connect()
self.assertEqual(result.result, mock_client)
self.assertEqual(agent.client, mock_client)
def test_nonexistent_directory(self):
"""If the juju directory does not exist an error should be raised.
"""
juju_directory = self.makeDir()
os.rmdir(juju_directory)
data = {"zookeeper_servers": get_test_zookeeper_address(),
"juju_directory": juju_directory,
"session_file": self.makeFile()}
self.assertRaises(JujuError, BaseAgent().configure, data)
def test_bad_session_file(self):
"""If the session file cannot be created an error should be raised.
"""
data = {"zookeeper_servers": get_test_zookeeper_address(),
"juju_directory": self.makeDir(),
"session_file": None}
self.assertRaises(JujuError, BaseAgent().configure, data)
def test_directory_cli_option(self):
"""The juju directory can be configured on the cli."""
juju_directory = self.makeDir()
self.change_args(
"es-agent", "--zookeeper-servers", get_test_zookeeper_address(),
"--juju-directory", juju_directory,
"--session-file", self.makeFile())
agent = BaseAgent()
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(namespace=TwistedOptionNamespace())
agent.configure(options)
self.assertEqual(
agent.config["juju_directory"], juju_directory)
def test_directory_env(self):
"""The juju directory passed via environment."""
self.change_args("es-agent")
juju_directory = self.makeDir()
self.change_environment(
JUJU_HOME=juju_directory,
JUJU_ZOOKEEPER=get_test_zookeeper_address())
agent = BaseAgent()
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(
["--session-file", self.makeFile()],
namespace=TwistedOptionNamespace())
agent.configure(options)
self.assertEqual(
agent.config["juju_directory"], juju_directory)
def test_connect_env(self):
"""Zookeeper connection information can be passed via environment."""
self.change_args("es-agent")
self.change_environment(
JUJU_HOME=self.makeDir(),
JUJU_ZOOKEEPER="x1.example.com",
JUJU_PRINCIPALS="admin:abc agent:xyz")
client = self.mocker.patch(ZookeeperClient)
client.connect("x1.example.com")
self.mocker.result(succeed(client))
client.client_id
self.mocker.result((123, "abc"))
client.add_auth("digest", "admin:abc")
client.add_auth("digest", "agent:xyz")
client.exists("/")
self.mocker.replay()
agent = BaseAgent()
agent.set_watch_enabled(False)
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(
["--session-file", self.makeFile()],
namespace=TwistedOptionNamespace())
agent.configure(options)
d = agent.startService()
self.failUnlessFailure(d, NotImplementedError)
return d
def test_connect_closes_running_session(self):
self.change_args("es-agent")
self.change_environment(
JUJU_HOME=self.makeDir(),
JUJU_ZOOKEEPER="x1.example.com")
session_file = self.makeFile()
with open(session_file, "w") as f:
f.write(yaml.dump((123, "abc")))
mock_client_1 = self.mocker.mock()
client = self.mocker.patch(ZookeeperClient)
client.connect("x1.example.com", client_id=(123, "abc"))
self.mocker.result(succeed(mock_client_1))
mock_client_1.close()
self.mocker.result(None)
mock_client_2 = self.mocker.mock()
client.connect("x1.example.com")
self.mocker.result(succeed(mock_client_2))
mock_client_2.client_id
self.mocker.result((456, "def"))
self.mocker.replay()
agent = BaseAgent()
agent.set_watch_enabled(False)
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(
["--session-file", session_file],
namespace=TwistedOptionNamespace())
agent.configure(options)
d = agent.startService()
self.failUnlessFailure(d, NotImplementedError)
return d
def test_connect_handles_expired_session(self):
self.change_args("es-agent")
self.change_environment(
JUJU_HOME=self.makeDir(),
JUJU_ZOOKEEPER="x1.example.com")
session_file = self.makeFile()
with open(session_file, "w") as f:
f.write(yaml.dump((123, "abc")))
client = self.mocker.patch(ZookeeperClient)
client.connect("x1.example.com", client_id=(123, "abc"))
self.mocker.result(fail(zookeeper.SessionExpiredException()))
mock_client = self.mocker.mock()
client.connect("x1.example.com")
self.mocker.result(succeed(mock_client))
mock_client.client_id
self.mocker.result((456, "def"))
self.mocker.replay()
agent = BaseAgent()
agent.set_watch_enabled(False)
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(
["--session-file", session_file],
namespace=TwistedOptionNamespace())
agent.configure(options)
d = agent.startService()
self.failUnlessFailure(d, NotImplementedError)
return d
def test_connect_handles_nonsense_session(self):
self.change_args("es-agent")
self.change_environment(
JUJU_HOME=self.makeDir(),
JUJU_ZOOKEEPER="x1.example.com")
session_file = self.makeFile()
with open(session_file, "w") as f:
f.write(yaml.dump("cheesy wotsits"))
client = self.mocker.patch(ZookeeperClient)
client.connect("x1.example.com", client_id="cheesy wotsits")
self.mocker.result(fail(zookeeper.ZooKeeperException()))
mock_client = self.mocker.mock()
client.connect("x1.example.com")
self.mocker.result(succeed(mock_client))
mock_client.client_id
self.mocker.result((456, "def"))
self.mocker.replay()
agent = BaseAgent()
agent.set_watch_enabled(False)
parser = argparse.ArgumentParser()
agent.setup_options(parser)
options = parser.parse_args(
["--session-file", session_file],
namespace=TwistedOptionNamespace())
agent.configure(options)
d = agent.startService()
self.failUnlessFailure(d, NotImplementedError)
return d
def test_zookeeper_hosts_not_configured(self):
"""a NoConnection error is raised if no zookeeper host is specified."""
agent = BaseAgent()
self.assertRaises(
NoConnection, agent.configure, {"zookeeper_servers": None})
def test_watch_enabled_accessors(self):
agent = BaseAgent()
self.assertTrue(agent.get_watch_enabled())
agent.set_watch_enabled(False)
self.assertFalse(agent.get_watch_enabled())
@inlineCallbacks
def test_session_file_permissions(self):
session_file = self.makeFile()
agent = DummyAgent()
agent.configure({
"session_file": session_file,
"juju_directory": self.makeDir(),
"zookeeper_servers": get_test_zookeeper_address()})
yield agent.startService()
mode = os.stat(session_file).st_mode
mask = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
self.assertEquals(mode & mask, stat.S_IRUSR | stat.S_IWUSR)
yield agent.stopService()
self.assertFalse(os.path.exists(session_file))
class AgentDebugLogSettingsWatch(AgentTestBase):
agent_class = BaseAgent
@inlineCallbacks
def get_log_entry(self, number, wait=True):
entry_path = "/logs/log-%010d" % number
exists_d, watch_d = self.client.exists_and_watch(entry_path)
exists = yield exists_d
if not exists and wait:
yield watch_d
elif not exists:
returnValue(False)
data, stat = yield self.client.get(entry_path)
returnValue(json.loads(data))
def test_get_agent_name(self):
self.assertEqual(self.agent.get_agent_name(), "BaseAgent")
@inlineCallbacks
def test_runtime_watching_toggles_log(self):
"""Redundant changes with regard to the current configuration
are ignored."""
yield self.agent.connect()
root_log = logging.getLogger()
mock_log = self.mocker.replace(root_log)
mock_log.addHandler(MATCH_HANDLER)
self.mocker.result(True)
mock_log.removeHandler(MATCH_HANDLER)
self.mocker.result(True)
mock_log.addHandler(MATCH_HANDLER)
self.mocker.result(True)
self.mocker.replay()
yield self.agent.start_global_settings_watch()
yield self.agent.global_settings_state.set_debug_log(True)
yield self.agent.global_settings_state.set_debug_log(True)
yield self.agent.global_settings_state.set_debug_log(False)
yield self.agent.global_settings_state.set_debug_log(False)
yield self.agent.global_settings_state.set_debug_log(True)
# Give a moment for watches to fire.
yield self.sleep(0.1)
@inlineCallbacks
def test_log_enable_disable(self):
"""The log can be enabled and disabled."""
root_log = logging.getLogger()
root_log.setLevel(logging.DEBUG)
self.capture_logging(None, level=logging.DEBUG)
yield self.agent.connect()
self.assertFalse((yield self.client.exists("/logs")))
yield self.agent.start_debug_log()
root_log.debug("hello world")
yield self.agent.stop_debug_log()
root_log.info("goodbye")
root_log.info("world")
entry = yield self.get_log_entry(0)
self.assertTrue(entry)
self.assertEqual(entry["levelname"], "DEBUG")
entry = yield self.get_log_entry(1, wait=False)
self.assertFalse(entry)
# Otherwise zookeeper occasionally closes during teardown
yield self.sleep(0.1)
juju-0.7.orig/juju/agents/tests/test_dummy.py
from juju.lib.testing import TestCase
from juju.agents.dummy import DummyAgent
class DummyTestCase(TestCase):
def test_start_dummy(self):
"""
Does nothing.
"""
agent = DummyAgent()
result = agent.start()
self.assertEqual(result, None)
juju-0.7.orig/juju/agents/tests/test_machine.py
import argparse
import logging
import os
from twisted.internet.defer import (
inlineCallbacks, returnValue, fail, Deferred)
from juju.agents.base import TwistedOptionNamespace
from juju.agents.machine import MachineAgent
from juju.errors import JujuError
from juju.charm.bundle import CharmBundle
from juju.charm.directory import CharmDirectory
from juju.charm.publisher import CharmPublisher
from juju.charm.tests import local_charm_id
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.lib.mocker import MATCH
from juju.machine.tests.test_constraints import (
dummy_constraints, series_constraints)
from juju.state.machine import MachineStateManager, MachineState
from juju.state.service import ServiceStateManager
from juju.tests.common import get_test_zookeeper_address
from .common import AgentTestBase
MATCH_BUNDLE = MATCH(lambda x: isinstance(x, CharmBundle))
class MachineAgentTest(AgentTestBase, RepositoryTestBase):
agent_class = MachineAgent
@inlineCallbacks
def setUp(self):
yield super(MachineAgentTest, self).setUp()
self.output = self.capture_logging(level=logging.DEBUG)
environment = self.config.get_default()
# Load the environment with the charm state and charm binary
self.provider = environment.get_machine_provider()
self.storage = self.provider.get_file_storage()
self.charm = CharmDirectory(self.sample_dir1)
self.publisher = CharmPublisher(self.client, self.storage)
yield self.publisher.add_charm(local_charm_id(self.charm), self.charm)
charm_states = yield self.publisher.publish()
self.charm_state = charm_states[0]
# Create a service from the charm from which we can create units for
# the machine.
self.service_state_manager = ServiceStateManager(self.client)
self.service = yield self.service_state_manager.add_service_state(
"fatality-blog", self.charm_state, dummy_constraints)
@inlineCallbacks
def get_agent_config(self):
# gets invoked by AgentTestBase.setUp
options = yield super(MachineAgentTest, self).get_agent_config()
machine_state_manager = MachineStateManager(self.client)
self.machine_state = yield machine_state_manager.add_machine_state(
series_constraints)
self.change_environment(
JUJU_MACHINE_ID="0",
JUJU_HOME=self.juju_directory)
options["machine_id"] = str(self.machine_state.id)
# Start the agent with watching enabled
returnValue(options)
@inlineCallbacks
def test_start_begins_watch_and_initializes_directories(self):
self.agent.set_watch_enabled(True)
mock_machine_state = self.mocker.patch(MachineState)
mock_machine_state.watch_assigned_units(
self.agent.watch_service_units)
self.mocker.replay()
yield self.agent.startService()
self.assertTrue(os.path.isdir(self.agent.units_directory))
self.assertTrue(os.path.isdir(self.agent.unit_state_directory))
self.assertIn(
"Machine agent started id:%s" % self.agent.get_machine_id(),
self.output.getvalue())
yield self.agent.stopService()
def test_agent_machine_id_environment_extraction(self):
self.change_args("es-agent")
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
config = parser.parse_args(namespace=TwistedOptionNamespace())
self.assertEqual(
config["machine_id"], "0")
def test_get_agent_name(self):
self.assertEqual(self.agent.get_agent_name(), "Machine:0")
def test_agent_machine_id_cli_error(self):
"""
If the machine id can't be found, a detailed error message
is given.
"""
# initially setup by get_agent_config in setUp
self.change_environment(JUJU_MACHINE_ID="")
self.change_args("es-agent",
"--zookeeper-servers", get_test_zookeeper_address(),
"--juju-directory", self.makeDir(),
"--session-file", self.makeFile())
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
options = parser.parse_args(namespace=TwistedOptionNamespace())
e = self.assertRaises(
JujuError,
self.agent.configure,
options)
self.assertIn(
("--machine-id must be provided in the command line,"
" or $JUJU_MACHINE_ID in the environment"),
str(e))
def test_agent_machine_id_cli_extraction(self):
"""Command line passing of machine id works and has precedence
over environment arg passing."""
self.change_environment(JUJU_MACHINE_ID=str(21))
self.change_args("es-agent", "--machine-id", "0")
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
config = parser.parse_args(namespace=TwistedOptionNamespace())
self.assertEqual(
config["machine_id"], "0")
def test_machine_agent_knows_its_machine_id(self):
self.assertEqual(self.agent.get_machine_id(), "0")
@inlineCallbacks
def test_watch_new_service_unit(self):
"""
Adding a new service unit is detected by the watch.
"""
from juju.unit.deploy import UnitDeployer
mock_deployer = self.mocker.patch(UnitDeployer)
mock_deployer.start_service_unit("fatality-blog/0")
test_deferred = Deferred()
def test_complete(service_name):
test_deferred.callback(True)
self.mocker.call(test_complete)
self.mocker.replay()
self.agent.set_watch_enabled(True)
yield self.agent.startService()
# Create a new service unit
self.service_unit = yield self.service.add_unit_state()
yield self.service_unit.assign_to_machine(self.machine_state)
yield test_deferred
self.assertIn(
"Units changed old:set([]) new:set(['fatality-blog/0'])",
self.output.getvalue())
@inlineCallbacks
def test_watch_new_service_unit_error(self):
"""
An error while starting a new service is logged
"""
# Inject an error into the service deployment
from juju.unit.deploy import UnitDeployer
mock_deployer = self.mocker.patch(UnitDeployer)
mock_deployer.start_service_unit("fatality-blog/0")
self.mocker.result(fail(SyntaxError("Bad")))
self.mocker.replay()
yield self.agent.startService()
yield self.agent.watch_service_units(None, set(["fatality-blog/0"]))
self.assertIn("Starting service unit: %s" % "fatality-blog/0",
self.output.getvalue())
self.assertIn("Error starting unit: %s" % "fatality-blog/0",
self.output.getvalue())
self.assertIn("SyntaxError: Bad", self.output.getvalue())
@inlineCallbacks
def test_service_unit_removed(self):
"""
Service unit removed with manual invocation of watch_service_units.
"""
from juju.unit.deploy import UnitDeployer
mock_deployer = self.mocker.patch(UnitDeployer)
started = Deferred()
mock_deployer.start_service_unit("fatality-blog/0")
self.mocker.call(started.callback)
stopped = Deferred()
mock_deployer.kill_service_unit("fatality-blog/0")
self.mocker.call(stopped.callback)
self.mocker.replay()
# Start the agent with watching enabled
self.agent.set_watch_enabled(True)
yield self.agent.startService()
# Create a new service unit
self.service_unit = yield self.service.add_unit_state()
yield self.service_unit.assign_to_machine(self.machine_state)
# Need to ensure there's no concurrency creating an overlap
# between assigning to and unassigning from the machine, since
# it is then possible for the watch in the machine agent to not
# observe *any* change ("you cannot reliably see every change
# that happens to a node in ZooKeeper").
yield started
# And now remove it
yield self.service_unit.unassign_from_machine()
yield stopped
@inlineCallbacks
def test_watch_removed_service_unit_error(self):
"""
An error while removing a service unit is logged
"""
from juju.unit.deploy import UnitDeployer
mock_deployer = self.mocker.patch(UnitDeployer)
mock_deployer.kill_service_unit("fatality-blog/0")
self.mocker.result(fail(OSError("Bad")))
self.mocker.replay()
yield self.agent.startService()
yield self.agent.watch_service_units(set(["fatality-blog/0"]), set())
self.assertIn("Stopping service unit: %s" % "fatality-blog/0",
self.output.getvalue())
self.assertIn("Error stopping unit: %s" % "fatality-blog/0",
self.output.getvalue())
self.assertIn("OSError: Bad", self.output.getvalue())
juju-0.7.orig/juju/agents/tests/test_provision.py
import logging
from twisted.internet.defer import inlineCallbacks, fail, succeed
from twisted.internet import reactor
from juju.agents.provision import ProvisioningAgent
from juju.environment.environment import Environment
from juju.environment.errors import EnvironmentsConfigError
from juju.machine.tests.test_constraints import dummy_cs, series_constraints
from juju.errors import ProviderInteractionError
from juju.lib.mocker import MATCH
from juju.providers.dummy import DummyMachine
from juju.state.errors import StopWatcher
from juju.state.machine import MachineState, MachineStateManager
from juju.state.tests.test_service import ServiceStateManagerTestBase
from .common import AgentTestBase
MATCH_MACHINE = MATCH(lambda x: isinstance(x, DummyMachine))
MATCH_MACHINE_STATE = MATCH(lambda x: isinstance(x, MachineState))
MATCH_SET = MATCH(lambda x: isinstance(x, set))
class ProvisioningTestBase(AgentTestBase):
agent_class = ProvisioningAgent
@inlineCallbacks
def setUp(self):
yield super(ProvisioningTestBase, self).setUp()
self.machine_manager = MachineStateManager(self.client)
def add_machine_state(self, constraints=None):
return self.machine_manager.add_machine_state(
constraints or series_constraints)
class ProvisioningAgentStartupTest(ProvisioningTestBase):
setup_environment = False
@inlineCallbacks
def setUp(self):
yield super(ProvisioningAgentStartupTest, self).setUp()
yield self.agent.connect()
@inlineCallbacks
def test_agent_waits_for_environment(self):
"""
When the agent starts, it waits for the /environment node to exist.
As soon as it does, the agent will fetch the environment, and
deserialize it into an environment object.
"""
env_loaded_deferred = self.agent.configure_environment()
reactor.callLater(
0.3, self.push_default_config, with_constraints=False)
result = yield env_loaded_deferred
self.assertTrue(isinstance(result, Environment))
self.assertEqual(result.name, "firstenv")
@inlineCallbacks
def test_agent_with_existing_environment(self):
"""An agent should load an existing environment to configure itself."""
yield self.push_default_config()
def verify_environment(result):
self.assertTrue(isinstance(result, Environment))
self.assertEqual(result.name, "firstenv")
d = self.agent.configure_environment()
d.addCallback(verify_environment)
yield d
@inlineCallbacks
def test_agent_with_invalid_environment(self):
yield self.client.create("/environment", "WAHOO!")
d = self.agent.configure_environment()
yield self.assertFailure(d, EnvironmentsConfigError)
def test_agent_with_nonexistent_environment_created_concurrently(self):
"""
If the environment node does not initially exist but it is created
while the agent is processing the NoNodeException, it should detect
this and configure normally.
"""
exists_and_watch = self.agent.client.exists_and_watch
mock_client = self.mocker.patch(self.agent.client)
mock_client.exists_and_watch("/environment")
def inject_creation(path):
self.push_default_config(with_constraints=False)
return exists_and_watch(path)
self.mocker.call(inject_creation)
self.mocker.replay()
def verify_configured(result):
self.assertTrue(isinstance(result, Environment))
self.assertEqual(result.type, "dummy")
# mocker magic test
d = self.agent.configure_environment()
d.addCallback(verify_configured)
return d
class ProvisioningAgentTest(ProvisioningTestBase):
@inlineCallbacks
def setUp(self):
yield super(ProvisioningAgentTest, self).setUp()
self.agent.set_watch_enabled(False)
yield self.agent.startService()
self.output = self.capture_logging("juju.agents.provision",
logging.DEBUG)
def test_get_agent_name(self):
self.assertEqual(self.agent.get_agent_name(), "provision:dummy")
@inlineCallbacks
def test_watch_machine_changes_processes_new_machine_id(self):
"""The agent should process a new machine id by creating it"""
machine_state0 = yield self.add_machine_state()
machine_state1 = yield self.add_machine_state()
yield self.agent.watch_machine_changes(
None, [machine_state0.id, machine_state1.id])
self.assertIn(
"Machines changed old:None new:[0, 1]", self.output.getvalue())
self.assertIn("Starting machine id:0", self.output.getvalue())
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 2)
instance_id = yield machine_state0.get_instance_id()
self.assertEqual(instance_id, 0)
instance_id = yield machine_state1.get_instance_id()
self.assertEqual(instance_id, 1)
@inlineCallbacks
def test_watch_machine_changes_ignores_running_machine(self):
"""
If there is an existing machine instance and state, when a
new machine state is added, the existing instance is preserved,
and a new instance is created.
"""
machine_state0 = yield self.add_machine_state()
machines = yield self.agent.provider.start_machine(
{"machine-id": machine_state0.id})
machine = machines.pop()
yield machine_state0.set_instance_id(machine.instance_id)
machine_state1 = yield self.add_machine_state()
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 1)
yield self.agent.watch_machine_changes(
None, [machine_state0.id, machine_state1.id])
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 2)
instance_id = yield machine_state1.get_instance_id()
self.assertEqual(instance_id, 1)
@inlineCallbacks
def test_watch_machine_changes_terminates_unused(self):
"""
Any running provider machine instances without corresponding
machine states are terminated.
"""
# start an unused machine within the dummy provider instance
yield self.agent.provider.start_machine({"machine-id": "machine-1"})
yield self.agent.watch_machine_changes(None, [])
self.assertIn("Shutting down machine id:0", self.output.getvalue())
machines = yield self.agent.provider.get_machines()
self.assertFalse(machines)
@inlineCallbacks
def test_watch_machine_changes_stop_watches(self):
"""Verify that the watches stops once the agent stops."""
yield self.agent.start()
yield self.agent.stop()
yield self.assertFailure(
self.agent.watch_machine_changes(None, []),
StopWatcher)
@inlineCallbacks
def test_new_machine_state_removed_while_processing(self):
"""
If the machine state is removed while the event is processing the
state, the watch function should process it normally.
"""
yield self.agent.watch_machine_changes(
None, [0])
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 0)
@inlineCallbacks
def test_process_machines_non_concurrency(self):
"""
Process machines should only be executed serially by an
agent.
"""
machine_state0 = yield self.add_machine_state()
machine_state1 = yield self.add_machine_state()
call_1 = self.agent.process_machines([machine_state0.id])
# The second call should return immediately due to the
# instance attribute guard.
call_2 = self.agent.process_machines([machine_state1.id])
self.assertEqual(call_2.called, True)
self.assertEqual(call_2.result, False)
# The first call should have started a provider machine
yield call_1
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 1)
instance_id_0 = yield machine_state0.get_instance_id()
self.assertEqual(instance_id_0, 0)
instance_id_1 = yield machine_state1.get_instance_id()
self.assertEqual(instance_id_1, None)
@inlineCallbacks
def test_new_machine_state_removed_while_processing_get_provider_id(self):
"""
If the machine state is removed while the event is processing the
state, the watch function should process it normally.
"""
yield self.agent.watch_machine_changes(
None, [0])
machines = yield self.agent.provider.get_machines()
self.assertEquals(len(machines), 0)
@inlineCallbacks
def test_on_environment_change_agent_reconfigures(self):
"""
If the environment changes, the agent reconfigures itself.
"""
provider = self.agent.provider
yield self.push_default_config()
yield self.sleep(0.2)
self.assertNotIdentical(provider, self.agent.provider)
@inlineCallbacks
def test_machine_state_reflects_invalid_provider_state(self):
"""
If a machine state has an invalid instance_id, it should be detected,
and a new machine started and the machine state updated with the
new instance_id.
"""
m1 = yield self.add_machine_state()
yield m1.set_instance_id("zebra")
m2 = yield self.add_machine_state()
yield self.agent.watch_machine_changes(None, [m1.id, m2.id])
m1_instance_id = yield m1.get_instance_id()
self.assertEqual(m1_instance_id, 0)
m2_instance_id = yield m2.get_instance_id()
self.assertEqual(m2_instance_id, 1)
def test_periodic_task(self):
"""
The agent schedules periodic checks that execute the
process_machines call.
"""
mock_reactor = self.mocker.patch(reactor)
mock_reactor.callLater(self.agent.machine_check_period,
self.agent.periodic_machine_check)
mock_agent = self.mocker.patch(self.agent)
mock_agent.process_machines(())
self.mocker.result(succeed(None))
self.mocker.replay()
# mocker magic test
self.agent.periodic_machine_check()
@inlineCallbacks
def test_transient_provider_error_on_start_machine(self):
"""
If there's an error when processing changes, the agent should log
the error and continue.
"""
machine_state0 = yield self.add_machine_state(
dummy_cs.parse(["cpu=10"]).with_series("series"))
machine_state1 = yield self.add_machine_state(
dummy_cs.parse(["cpu=20"]).with_series("series"))
mock_provider = self.mocker.patch(self.agent.provider)
mock_provider.start_machine({
"machine-id": 0, "constraints": {
"arch": "amd64", "cpu": 10, "mem": 512,
"provider-type": "dummy", "ubuntu-series": "series"}})
self.mocker.result(fail(ProviderInteractionError()))
mock_provider.start_machine({
"machine-id": 1, "constraints": {
"arch": "amd64", "cpu": 20, "mem": 512,
"provider-type": "dummy", "ubuntu-series": "series"}})
self.mocker.passthrough()
self.mocker.replay()
yield self.agent.watch_machine_changes(
[], [machine_state0.id, machine_state1.id])
machine1_instance_id = yield machine_state1.get_instance_id()
self.assertEqual(machine1_instance_id, 0)
self.assertIn(
"Cannot process machine 0",
self.output.getvalue())
@inlineCallbacks
def test_transient_provider_error_on_shutdown_machine(self):
"""
A transient provider error on shutdown will be ignored
and the shutdown will be reattempted (assuming similar
state conditions) on the next execution of process machines.
"""
yield self.agent.provider.start_machine({"machine-id": 1})
mock_provider = self.mocker.patch(self.agent.provider)
mock_provider.shutdown_machine(MATCH_MACHINE)
self.mocker.result(fail(ProviderInteractionError()))
mock_provider.shutdown_machine(MATCH_MACHINE)
self.mocker.passthrough()
self.mocker.replay()
try:
yield self.agent.process_machines([])
except:
self.fail("Should not raise")
machines = yield self.agent.provider.get_machines()
self.assertTrue(machines)
yield self.agent.process_machines([])
machines = yield self.agent.provider.get_machines()
self.assertFalse(machines)
self.assertIn(
"Cannot shutdown machine 0",
self.output.getvalue())
@inlineCallbacks
def test_transient_provider_error_on_get_machines(self):
machine_state0 = yield self.add_machine_state()
mock_provider = self.mocker.patch(self.agent.provider)
mock_provider.get_machines()
self.mocker.result(fail(ProviderInteractionError()))
mock_provider.get_machines()
self.mocker.passthrough()
self.mocker.replay()
try:
yield self.agent.process_machines([machine_state0.id])
except:
self.fail("Should not raise")
instance_id = yield machine_state0.get_instance_id()
self.assertEqual(instance_id, None)
yield self.agent.process_machines(
[machine_state0.id])
instance_id = yield machine_state0.get_instance_id()
self.assertEqual(instance_id, 0)
self.assertIn(
"Cannot get machine list",
self.output.getvalue())
@inlineCallbacks
def test_transient_unhandled_error_in_process_machines(self):
"""Verify that watch_machine_changes handles the exception.
Provider implementations may use libraries like txaws that do
not handle every error. However, this should not stop the
watch from re-establishing itself, as would happen if the
exception were not caught.
"""
machine_state0 = yield self.add_machine_state()
machine_state1 = yield self.add_machine_state()
# Simulate a failure scenario seen occasionally when working
# with OpenStack and txaws
mock_agent = self.mocker.patch(self.agent)
# Simulate transient error
mock_agent.process_machines([machine_state0.id])
self.mocker.result(fail(
TypeError("'NoneType' object is not iterable")))
# Let it succeed on second try. In this case, the scenario is
# that the watch triggered before the periodic_machine_check
# was run again
mock_agent.process_machines([machine_state0.id, machine_state1.id])
self.mocker.passthrough()
self.mocker.replay()
# Verify that watch_machine_changes does not fail even in the case of
# the transient error, although no work was done
try:
yield self.agent.watch_machine_changes([], [machine_state0.id])
except:
self.fail("Should not raise")
instance_id = yield machine_state0.get_instance_id()
self.assertEqual(instance_id, None)
# Second attempt: verify it did in fact process the machine
yield self.agent.watch_machine_changes(
[machine_state0.id], [machine_state0.id, machine_state1.id])
self.assertEqual((yield machine_state0.get_instance_id()), 0)
self.assertEqual((yield machine_state1.get_instance_id()), 1)
# But only after attempting and failing the first time
self.assertIn(
"Got unexpected exception in processing machines, will retry",
self.output.getvalue())
self.assertIn(
"'NoneType' object is not iterable",
self.output.getvalue())
@inlineCallbacks
def test_start_agent_with_watch(self):
mock_reactor = self.mocker.patch(reactor)
mock_reactor.callLater(
self.agent.machine_check_period,
self.agent.periodic_machine_check)
self.mocker.replay()
self.agent.set_watch_enabled(True)
yield self.agent.start()
machine_state0 = yield self.add_machine_state()
exists_d, watch_d = self.client.exists_and_watch(
"/machines/%s" % machine_state0.internal_id)
yield exists_d
# Wait for the provisioning agent to wake and modify
# the machine id.
yield watch_d
instance_id = yield machine_state0.get_instance_id()
self.assertEqual(instance_id, 0)
class FirewallManagerTest(
ProvisioningTestBase, ServiceStateManagerTestBase):
@inlineCallbacks
def setUp(self):
yield super(FirewallManagerTest, self).setUp()
self.agent.set_watch_enabled(False)
yield self.agent.startService()
@inlineCallbacks
def test_watch_service_changes_is_called(self):
"""Verify FirewallManager is called when services change"""
from juju.state.firewall import FirewallManager
mock_manager = self.mocker.patch(FirewallManager)
seen = []
def record_watch_changes(old_services, new_services):
seen.append((old_services, new_services))
return succeed(True)
mock_manager.watch_service_changes(MATCH_SET, MATCH_SET)
self.mocker.count(3, 3)
self.mocker.call(record_watch_changes)
mock_reactor = self.mocker.patch(reactor)
mock_reactor.callLater(
self.agent.machine_check_period,
self.agent.periodic_machine_check)
self.mocker.replay()
self.agent.set_watch_enabled(True)
yield self.agent.start()
# Modify services, poking zookeeper after each modification to
# ensure the service watch is processed
yield self.add_service("wordpress")
while len(seen) < 1:
yield self.poke_zk()
mysql = yield self.add_service("mysql")
while len(seen) < 2:
yield self.poke_zk()
yield self.service_state_manager.remove_service_state(mysql)
while len(seen) < 3:
yield self.poke_zk()
self.assertEqual(
seen,
[(set(), set(["wordpress"])),
(set(["wordpress"]), set(["mysql", "wordpress"])),
(set(["mysql", "wordpress"]), set(["wordpress"]))])
@inlineCallbacks
def test_process_machine_is_called(self):
"""Verify FirewallManager is called when machines are processed"""
from juju.state.firewall import FirewallManager
mock_manager = self.mocker.patch(FirewallManager)
seen = []
def record_machine(machine):
seen.append(machine)
return succeed(True)
mock_manager.process_machine(MATCH_MACHINE_STATE)
self.mocker.call(record_machine)
self.mocker.replay()
machine_state = yield self.add_machine_state()
yield self.agent.process_machines([machine_state.id])
self.assertEqual(seen, [machine_state])
juju-0.7.orig/juju/agents/tests/test_unit.py
import argparse
import logging
import os
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.agents.unit import UnitAgent
from juju.agents.base import TwistedOptionNamespace
from juju.charm import get_charm_from_path
from juju.charm.url import CharmURL
from juju.errors import JujuError
from juju.hooks.executor import HookExecutor
from juju.lib import serializer
from juju.state.environment import GlobalSettingsStateManager
from juju.state.errors import ServiceStateNotFound
from juju.state.service import NO_HOOKS, RETRY_HOOKS
from juju.unit.lifecycle import UnitLifecycle
from juju.unit.workflow import UnitWorkflowState
from juju.agents.tests.common import AgentTestBase
from juju.control.tests.test_upgrade_charm import CharmUpgradeTestBase
from juju.hooks.tests.test_invoker import get_cli_environ_path
from juju.tests.common import get_test_zookeeper_address
from juju.unit.tests.test_charm import CharmPublisherTestBase
from juju.unit.tests.test_workflow import WorkflowTestBase
class UnitAgentTestBase(AgentTestBase, WorkflowTestBase):
agent_class = UnitAgent
@inlineCallbacks
def setUp(self):
self.patch(HookExecutor,
"LOCK_PATH",
os.path.join(self.makeDir(), "hook.lock"))
yield super(UnitAgentTestBase, self).setUp()
settings = GlobalSettingsStateManager(self.client)
yield settings.set_provider_type("dummy")
self.change_environment(
PATH=get_cli_environ_path(),
JUJU_ENV_UUID="snowflake",
JUJU_UNIT_NAME="mysql/0")
@inlineCallbacks
def tearDown(self):
if self.agent.api_socket:
yield self.agent.api_socket.stopListening()
yield super(UnitAgentTestBase, self).tearDown()
@inlineCallbacks
def get_agent_config(self):
yield self.setup_default_test_relation()
options = yield super(UnitAgentTestBase, self).get_agent_config()
options["unit_name"] = str(self.states["unit"].unit_name)
returnValue(options)
def write_empty_hooks(self, start=True, stop=True, install=True, **kw):
# NB Tests that use this helper method must properly wait on
# the agent being stopped (yield self.agent.stopService()) to
# avoid the environment being restored while asynchronously
# the stop hook continues to execute. Otherwise
# JUJU_UNIT_NAME, which hook invocation depends on, will
# not be available and the stop hook will fail (somewhat
# mysteriously!). The alternative is to set stop=False so that
# the stop hook will not be created when writing the empty
# hooks.
output_file = self.makeFile()
if install:
self.write_hook(
"install", "#!/bin/bash\necho install >> %s" % output_file)
if start:
self.write_hook(
"start", "#!/bin/bash\necho start >> %s" % output_file)
if stop:
self.write_hook(
"stop", "#!/bin/bash\necho stop >> %s" % output_file)
for k in kw.keys():
hook_name = k.replace("_", "-")
self.write_hook(
hook_name,
"#!/bin/bash\necho %s >> %s" % (hook_name, output_file))
return output_file
def parse_output(self, output_file):
return filter(None, open(output_file).read().split("\n"))
class UnitAgentTest(UnitAgentTestBase):
@inlineCallbacks
def test_agent_start_stop_start_service(self):
"""Verify workflow state when starting and stopping the unit agent."""
self.write_empty_hooks()
yield self.agent.startService()
current_state = yield self.agent.workflow.get_state()
self.assertEqual(current_state, "started")
self.assertTrue(self.agent.lifecycle.running)
self.assertTrue(self.agent.executor.running)
workflow = self.agent.lifecycle.get_relation_workflow(
self.states["unit_relation"].internal_relation_id)
relation_state = yield workflow.get_state()
self.assertEquals(relation_state, "up")
yield self.agent.stopService()
current_state = yield self.agent.workflow.get_state()
# NOTE: stopping the unit agent does *not* imply that the service
# should not continue to run; ie don't transition to "stopped", and
# don't mark the relation states as "down"
self.assertEqual(current_state, "started")
self.assertFalse(self.agent.lifecycle.running)
self.assertFalse(self.agent.executor.running)
relation_state = yield workflow.get_state()
self.assertEquals(relation_state, "up")
# and check we can restart as well
yield self.agent.startService()
current_state = yield self.agent.workflow.get_state()
self.assertEqual(current_state, "started")
self.assertTrue(self.agent.lifecycle.running)
self.assertTrue(self.agent.executor.running)
relation_state = yield workflow.get_state()
self.assertEquals(relation_state, "up")
yield self.agent.stopService()
current_state = yield self.agent.workflow.get_state()
self.assertEqual(current_state, "started")
self.assertFalse(self.agent.lifecycle.running)
self.assertFalse(self.agent.executor.running)
relation_state = yield workflow.get_state()
self.assertEquals(relation_state, "up")
@inlineCallbacks
def test_agent_start_from_started_workflow(self):
lifecycle = UnitLifecycle(
self.client, self.states["unit"], self.states["service"],
self.unit_directory, self.state_directory, self.executor)
workflow = UnitWorkflowState(
self.client, self.states["unit"], lifecycle,
os.path.join(self.juju_directory, "state"))
with (yield workflow.lock()):
yield workflow.fire_transition("install")
yield lifecycle.stop(fire_hooks=False, stop_relations=False)
yield self.agent.startService()
current_state = yield self.agent.workflow.get_state()
self.assertEqual(current_state, "started")
self.assertTrue(self.agent.lifecycle.running)
self.assertTrue(self.agent.executor.running)
@inlineCallbacks
def test_agent_start_from_error_workflow(self):
lifecycle = UnitLifecycle(
self.client, self.states["unit"], self.states["service"],
self.unit_directory, self.state_directory, self.executor)
workflow = UnitWorkflowState(
self.client, self.states["unit"], lifecycle,
os.path.join(self.juju_directory, "state"))
with (yield workflow.lock()):
yield workflow.fire_transition("install")
self.write_exit_hook("stop", 1)
yield workflow.fire_transition("stop")
yield self.agent.startService()
current_state = yield self.agent.workflow.get_state()
self.assertEqual(current_state, "stop_error")
self.assertFalse(self.agent.lifecycle.running)
self.assertTrue(self.agent.executor.running)
def test_agent_unit_name_environment_extraction(self):
"""Verify extraction of unit name from the environment."""
self.change_args("unit-agent")
self.change_environment(JUJU_UNIT_NAME="rabbit/1")
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
options = parser.parse_args(namespace=TwistedOptionNamespace())
self.assertEqual(options["unit_name"], "rabbit/1")
def test_agent_unit_name_cli_extraction_error(self):
"""Failure to extract the unit name results in a nice error message.

"""
# We don't want JUJU_UNIT_NAME set, so that the expected
# JujuError will be raised
self.change_environment(
PATH=get_cli_environ_path())
self.change_args(
"unit-agent",
"--juju-directory", self.makeDir(),
"--zookeeper-servers", get_test_zookeeper_address(),
"--session-file", self.makeFile())
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
options = parser.parse_args(namespace=TwistedOptionNamespace())
e = self.assertRaises(JujuError,
self.agent.configure,
options)
self.assertEquals(
str(e),
"--unit-name must be provided in the command line, or "
"$JUJU_UNIT_NAME in the environment")
def test_agent_unit_name_cli_extraction(self):
"""The unit agent can parse its unit-name from the cli.
"""
self.change_args("unit-agent", "--unit-name", "rabbit/1")
parser = argparse.ArgumentParser()
self.agent.setup_options(parser)
options = parser.parse_args(namespace=TwistedOptionNamespace())
self.assertEqual(options["unit_name"], "rabbit/1")
def test_get_agent_name(self):
self.assertEqual(self.agent.get_agent_name(), "unit:mysql/0")
def test_agent_invalid_unit_name(self):
"""If the unit agent is given an invalid unit name, an error
is raised."""
options = {}
options["juju_directory"] = self.juju_directory
options["zookeeper_servers"] = get_test_zookeeper_address()
options["session_file"] = self.makeFile()
options["unit_name"] = "rabbit-1"
agent = self.agent_class()
agent.configure(options)
return self.assertFailure(agent.startService(), ServiceStateNotFound)
@inlineCallbacks
def test_agent_records_address_on_startup(self):
"""On startup the agent will record the unit's addresses.
"""
yield self.agent.startService()
self.assertEqual(
(yield self.agent.unit_state.get_public_address()),
"localhost")
self.assertEqual(
(yield self.agent.unit_state.get_private_address()),
"localhost")
@inlineCallbacks
def test_agent_executes_install_and_start_hooks_on_startup(self):
"""On initial startup, the unit agent executes install and start hooks.
"""
output_file = self.write_empty_hooks()
hooks_complete = self.wait_on_hook(
sequence=["install", "config-changed", "start"],
executor=self.agent.executor)
yield self.agent.startService()
# Verify the hook has executed.
yield hooks_complete
# config-changed is not mentioned in the output below as the
# hook is optional and not written by default
self.assertEqual(self.parse_output(output_file),
["install", "start"])
yield self.assertState(self.agent.workflow, "started")
yield self.agent.stopService()
@inlineCallbacks
def test_agent_install_error_transitions_install_error(self):
self.write_hook("install", "#!/bin/bash\nexit 1\n")
hooks_complete = self.wait_on_hook(
"install",
executor=self.agent.executor)
yield self.agent.startService()
# Verify the hook has executed.
yield hooks_complete
yield self.assertState(self.agent.workflow, "install_error")
@inlineCallbacks
def test_agent_executes_relation_changed_hook(self):
"""If a relation changes after the unit is started, a relation change
hook is executed."""
self.write_empty_hooks()
file_path = self.makeFile()
self.write_hook("app-relation-changed",
("#!/bin/sh\n"
"echo $JUJU_REMOTE_UNIT >> %s\n" % file_path))
yield self.agent.startService()
hook_complete = self.wait_on_hook(
"app-relation-changed", executor=self.agent.executor)
wordpress_states = yield self.add_opposite_service_unit(
self.states)
# Verify the hook has executed.
yield hook_complete
self.assertEqual(open(file_path).read().strip(),
wordpress_states["unit"].unit_name)
@inlineCallbacks
def test_agent_executes_config_changed_hook(self):
"""Service config changes fire a config-changed hook."""
self.agent.set_watch_enabled(True)
self.write_empty_hooks()
file_path = self.makeFile()
self.write_hook("config-changed",
("#!/bin/sh\n"
"config-get foo >> %s\n" % file_path))
yield self.agent.startService()
transition_complete = self.wait_on_state(
self.agent.workflow, "started")
service = self.states["service"]
config = yield service.get_config()
config["foo"] = "bar"
yield config.write()
# Verify the hook has executed, and transition has completed.
yield transition_complete
self.assertEqual(open(file_path).read().strip(), "bar")
@inlineCallbacks
def test_agent_can_execute_config_changed_in_relation_hook(self):
"""Service config changes fire a config-changed hook."""
self.agent.set_watch_enabled(True)
self.write_empty_hooks()
file_path = self.makeFile()
self.write_hook("app-relation-changed",
("#!/bin/sh\n"
"config-get foo >> %s\n" % file_path))
# set service config
service = self.states["service"]
config = yield service.get_config()
config["foo"] = "bar"
yield config.write()
yield self.agent.startService()
hook_complete = self.wait_on_hook(
"app-relation-changed", executor=self.agent.executor)
# trigger the hook that will read service options
yield self.add_opposite_service_unit(self.states)
# Verify the hook has executed.
yield hook_complete
self.assertEqual(open(file_path).read().strip(), "bar")
@inlineCallbacks
def test_agent_hook_api_usage(self):
"""Hooks can use the relation CLI tools (relation-list,
relation-get, relation-set) to inspect and modify relation state."""
self.write_empty_hooks()
file_path = self.makeFile()
self.write_hook("app-relation-changed",
"\n".join(
["#!/bin/sh",
"echo `relation-list` >> %s" % file_path,
"echo `relation-set greeting=hello`",
"echo `relation-set planet=earth`",
"echo `relation-get planet %s` >> %s" % (
self.states["unit"].unit_name, file_path)]))
yield self.agent.startService()
hook_complete = self.wait_on_hook(
"app-relation-changed", executor=self.agent.executor)
yield self.add_opposite_service_unit(self.states)
# Verify the hook has executed.
yield hook_complete
# Verify hook output
output = open(file_path).read().strip().split("\n")
self.assertEqual(output, ["wordpress/0", "earth"])
# Verify zookeeper state
contents = yield self.states["unit_relation"].get_data()
self.assertEqual(
{"greeting": "hello", "planet": "earth",
"private-address": "mysql-0.example.com"},
serializer.load(contents))
self.failUnlessIn("wordpress/0", output)
@inlineCallbacks
def test_agent_executes_depart_hook(self):
"""When a relation is removed, the relation-broken hook
is executed."""
self.write_empty_hooks(app_relation_changed=True)
file_path = self.makeFile()
self.write_hook("app-relation-broken",
("#!/bin/sh\n"
"echo broken hook >> %s\n" % file_path))
yield self.agent.startService()
hook_complete = self.wait_on_hook(
"app-relation-changed", executor=self.agent.executor)
yield self.add_opposite_service_unit(self.states)
yield hook_complete
# Watch the unit relation workflow complete
workflow_complete = self.wait_on_state(
self.agent.lifecycle.get_relation_workflow(
self.states["relation"].internal_id),
"departed")
yield self.relation_manager.remove_relation_state(
self.states["relation"])
hook_complete = self.wait_on_hook(
"app-relation-broken", executor=self.agent.executor)
# Verify the hook has executed.
yield hook_complete
self.assertEqual(open(file_path).read().strip(), "broken hook")
# Wait for the workflow transition to complete.
yield workflow_complete
@inlineCallbacks
def test_agent_debug_watch(self):
"""The unit agent subscribes to changes to the hook debug settings.
"""
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield self.states["unit"].enable_hook_debug(["*"])
# Wait for watch to fire invoke callback and reset
yield self.sleep(0.1)
# Check the propagation to the executor
self.assertNotEquals(
self.agent.executor.get_hook_path("x"), "x")
class UnitAgentResolvedTest(UnitAgentTestBase):
@inlineCallbacks
def test_resolved_unit_already_running(self):
"""If the unit is already running, the resolved setting is cleared
and no transition is performed.
"""
self.write_empty_hooks()
start_deferred = self.wait_on_hook(
"start", executor=self.agent.executor)
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield start_deferred
self.assertEqual(
"started", (yield self.agent.workflow.get_state()))
yield self.agent.unit_state.set_resolved(RETRY_HOOKS)
# Wait for watch to fire and reset
yield self.sleep(0.1)
self.assertEqual(
"started", (yield self.agent.workflow.get_state()))
self.assertEqual(
None, (yield self.agent.unit_state.get_resolved()))
@inlineCallbacks
def test_resolved_install_error(self):
"""If the unit has an install error it will automatically
be transitioned to the started state after the recovery.
"""
self.write_empty_hooks()
install_deferred = self.wait_on_hook(
"install", executor=self.agent.executor)
self.write_hook("install", "#!/bin/sh\nexit 1")
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield install_deferred
self.assertEqual(
"install_error", (yield self.agent.workflow.get_state()))
install_deferred = self.wait_on_state(self.agent.workflow, "started")
self.write_hook("install", "#!/bin/sh\nexit 0")
yield self.agent.unit_state.set_resolved(RETRY_HOOKS)
yield install_deferred
self.assertEqual("started", (yield self.agent.workflow.get_state()))
# Ensure we clear out background activity from the watch firing
yield self.poke_zk()
@inlineCallbacks
def test_resolved_start_error(self):
"""If the unit has a start error it will automatically
be transitioned to started after the recovery.
"""
self.write_empty_hooks()
hook_deferred = self.wait_on_hook(
"start", executor=self.agent.executor)
self.write_hook("start", "#!/bin/sh\nexit 1")
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield hook_deferred
self.assertEqual(
"start_error", (yield self.agent.workflow.get_state()))
state_deferred = self.wait_on_state(self.agent.workflow, "started")
yield self.agent.unit_state.set_resolved(NO_HOOKS)
yield state_deferred
self.assertEqual("started", (yield self.agent.workflow.get_state()))
# Resolving to the started state from the resolved watch will cause the
# lifecycle start to execute in the background context; wait
# for it to finish.
yield self.sleep(0.1)
@inlineCallbacks
def test_resolved_stopped(self):
"""If the unit has a stop error it will automatically
be transitioned to stopped after the recovery.
"""
self.write_empty_hooks()
self.write_hook("stop", "#!/bin/sh\nexit 1")
hook_deferred = self.wait_on_hook(
"start", executor=self.agent.executor)
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield hook_deferred
hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor)
with (yield self.agent.workflow.lock()):
yield self.agent.workflow.fire_transition("stop")
yield hook_deferred
self.assertEqual("stop_error", (yield self.agent.workflow.get_state()))
state_deferred = self.wait_on_state(self.agent.workflow, "stopped")
self.write_hook("stop", "#!/bin/sh\nexit 0")
yield self.agent.unit_state.set_resolved(RETRY_HOOKS)
yield state_deferred
self.assertEqual("stopped", (yield self.agent.workflow.get_state()))
# Ensure we clear out background activity from the watch firing
yield self.poke_zk()
@inlineCallbacks
def test_hook_error_on_resolved_retry_remains_in_error_state(self):
"""If the hook fails again when retried after being marked
resolved, the unit remains in the error state.
"""
self.write_empty_hooks()
self.write_hook("stop", "#!/bin/sh\nexit 1")
hook_deferred = self.wait_on_hook(
"start", executor=self.agent.executor)
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield hook_deferred
hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor)
with (yield self.agent.workflow.lock()):
yield self.agent.workflow.fire_transition("stop")
yield hook_deferred
self.assertEqual("stop_error", (yield self.agent.workflow.get_state()))
hook_deferred = self.wait_on_hook("stop", executor=self.agent.executor)
yield self.agent.unit_state.set_resolved(RETRY_HOOKS)
yield hook_deferred
# Ensure we clear out background activity from the watch firing
yield self.poke_zk()
self.assertEqual("stop_error", (yield self.agent.workflow.get_state()))
class UnitAgentUpgradeTest(
UnitAgentTestBase, CharmPublisherTestBase, CharmUpgradeTestBase):
@inlineCallbacks
def setUp(self):
yield super(UnitAgentTestBase, self).setUp()
settings = GlobalSettingsStateManager(self.client)
yield settings.set_provider_type("dummy")
self.makeDir(path=os.path.join(self.juju_directory, "charms"))
@inlineCallbacks
def wait_for_log(self, logger_name, message, level=logging.DEBUG):
output = self.capture_logging(logger_name, level=level)
while message not in output.getvalue():
yield self.sleep(0.1)
@inlineCallbacks
def mark_charm_upgrade(self):
# Create a new version of the charm
repository = self.increment_charm(self.charm)
# Upload the new charm version
charm = yield repository.find(CharmURL.parse("local:series/mysql"))
charm, charm_state = yield self.publish_charm(charm.path)
# Mark the unit for upgrade
yield self.states["service"].set_charm_id(charm_state.id)
yield self.states["unit"].set_upgrade_flag()
@inlineCallbacks
def test_agent_upgrade_watch(self):
"""The agent watches for unit upgrades."""
yield self.mark_charm_upgrade()
self.agent.set_watch_enabled(True)
hook_done = self.wait_on_hook(
"upgrade-charm", executor=self.agent.executor)
yield self.agent.startService()
yield hook_done
yield self.assertState(self.agent.workflow, "started")
@inlineCallbacks
def test_agent_upgrade(self):
"""The agent can successfully upgrade its charm."""
log_written = self.wait_for_log("juju.agents.unit", "Upgrade complete")
hook_done = self.wait_on_hook(
"upgrade-charm", executor=self.agent.executor)
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield self.mark_charm_upgrade()
yield hook_done
yield log_written
self.assertIdentical(
(yield self.states["unit"].get_upgrade_flag()),
False)
new_charm = get_charm_from_path(
os.path.join(self.agent.unit_directory, "charm"))
self.assertEqual(
self.charm.get_revision() + 1, new_charm.get_revision())
@inlineCallbacks
def test_agent_upgrade_version_current(self):
"""If the unit is running the latest charm, do nothing."""
log_written = self.wait_for_log(
"juju.agents.unit",
"Upgrade ignored: already running latest charm")
old_charm_id = yield self.states["unit"].get_charm_id()
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield self.states["unit"].set_upgrade_flag()
yield log_written
self.assertIdentical(
(yield self.states["unit"].get_upgrade_flag()), False)
self.assertEquals(
(yield self.states["unit"].get_charm_id()), old_charm_id)
@inlineCallbacks
def test_agent_upgrade_bad_unit_state(self):
"""The upgrade fails if the unit is in a bad state."""
# Upload a new version of the unit's charm
repository = self.increment_charm(self.charm)
charm = yield repository.find(CharmURL.parse("local:series/mysql"))
charm, charm_state = yield self.publish_charm(charm.path)
old_charm_id = yield self.states["unit"].get_charm_id()
log_written = self.wait_for_log(
"juju.agents.unit",
"Cannot upgrade: unit is in non-started state configure_error. "
"Reissue upgrade command to try again.")
self.agent.set_watch_enabled(True)
yield self.agent.startService()
# Mark the unit for upgrade, with an invalid state.
with (yield self.agent.workflow.lock()):
yield self.agent.workflow.fire_transition("error_configure")
yield self.states["service"].set_charm_id(charm_state.id)
yield self.states["unit"].set_upgrade_flag()
yield log_written
self.assertIdentical(
(yield self.states["unit"].get_upgrade_flag()), False)
self.assertEquals(
(yield self.states["unit"].get_charm_id()), old_charm_id)
@inlineCallbacks
def test_agent_force_upgrade_bad_unit_state(self):
"""The upgrade runs if forced and the unit is in a bad state."""
# Upload a new version of the unit's charm
repository = self.increment_charm(self.charm)
charm = yield repository.find(CharmURL.parse("local:series/mysql"))
charm, charm_state = yield self.publish_charm(charm.path)
old_charm_id = yield self.states["unit"].get_charm_id()
output = self.capture_logging("juju.agents.unit", level=logging.DEBUG)
self.agent.set_watch_enabled(True)
yield self.agent.startService()
# Mark the unit for upgrade, with an invalid state.
with (yield self.agent.workflow.lock()):
yield self.agent.workflow.fire_transition("error_configure")
yield self.states["service"].set_charm_id(charm_state.id)
yield self.states["unit"].set_upgrade_flag(force=True)
# It's hard to watch something with no hooks and no state changes.
yield self.sleep(0.1)
self.assertIdentical(
(yield self.states["unit"].get_upgrade_flag()), False)
self.assertIn("Forced upgrade complete", output.getvalue())
self.assertEquals(
(yield self.states["unit"].get_charm_id()), "local:series/mysql-2")
self.assertEquals(old_charm_id, "local:series/dummy-1")
@inlineCallbacks
def test_agent_upgrade_no_flag(self):
"""An upgrade stops if there is no upgrade flag set."""
log_written = self.wait_for_log(
"juju.agents.unit", "No upgrade flag set")
old_charm_id = yield self.states["unit"].get_charm_id()
self.agent.set_watch_enabled(True)
yield self.agent.startService()
yield log_written
self.assertIdentical(
(yield self.states["unit"].get_upgrade_flag()),
False)
new_charm_id = yield self.states["unit"].get_charm_id()
self.assertEquals(new_charm_id, old_charm_id)
juju-0.7.orig/juju/charm/__init__.py
from provider import get_charm_from_path
__all__ = ["get_charm_from_path"]
juju-0.7.orig/juju/charm/base.py
from juju.errors import CharmError
def get_revision(file_content, metadata, path):
if file_content is None:
return metadata.obsolete_revision
try:
result = int(file_content.strip())
if result >= 0:
return result
except (ValueError, TypeError):
pass
raise CharmError(path, "invalid charm revision %r" % file_content)
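The revision rule above (file content wins when it strips to a non-negative integer, metadata is the fallback when the file is absent) can be sketched standalone; the names below are illustrative, not part of the juju API:

```python
# Standalone sketch of the rule in get_revision() above: a missing
# file defers to the metadata fallback, otherwise the content must
# strip to a non-negative integer or an error is raised.
def parse_revision(file_content, fallback):
    if file_content is None:
        return fallback
    try:
        result = int(file_content.strip())
        if result >= 0:
            return result
    except (ValueError, TypeError):
        pass
    raise ValueError("invalid charm revision %r" % (file_content,))

assert parse_revision("7\n", None) == 7  # file content wins
assert parse_revision(None, 3) == 3      # fallback to metadata
```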
class CharmBase(object):
"""Abstract base class for charm implementations.
"""
_sha256 = None
def _unsupported(self, attr):
raise NotImplementedError("%s.%s not supported" %
(self.__class__.__name__, attr))
def get_revision(self):
"""Get the revision, preferably from the revision file.
Will fall back to metadata if not available.
"""
self._unsupported("get_revision()")
def set_revision(self, revision):
"""Update the revision file, if possible.
Some subclasses may not be able to do this.
"""
self._unsupported("set_revision()")
def as_bundle(self):
"""Transform this charm into a charm bundle, if possible.
Some subclasses may not be able to do this.
"""
self._unsupported("as_bundle()")
def compute_sha256(self):
"""Compute the sha256 for this charm.
Every charm subclass must implement this.
"""
self._unsupported("compute_sha256()")
def get_sha256(self):
"""Return the cached sha256, or compute it if necessary.
If the sha256 value for this charm is not yet cached,
the compute_sha256() method will be called to compute it.
"""
if self._sha256 is None:
self._sha256 = self.compute_sha256()
return self._sha256
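The lazy caching in get_sha256 above is the standard compute-once pattern; a standalone sketch with a counter to show the digest is computed only once (class name and digest value are illustrative):

```python
# Compute-once caching as in CharmBase.get_sha256(): the digest is
# computed on first access and served from the cache afterwards.
class LazyDigest(object):
    def __init__(self):
        self._sha256 = None
        self.computations = 0

    def compute_sha256(self):
        self.computations += 1
        return "deadbeef"

    def get_sha256(self):
        if self._sha256 is None:
            self._sha256 = self.compute_sha256()
        return self._sha256

d = LazyDigest()
assert d.get_sha256() == "deadbeef"
assert d.get_sha256() == "deadbeef"
assert d.computations == 1  # second call hit the cache
```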
juju-0.7.orig/juju/charm/bundle.py
import hashlib
import tempfile
import os
import stat
from zipfile import ZipFile, BadZipfile
from juju.charm.base import CharmBase, get_revision
from juju.charm.config import ConfigOptions
from juju.charm.metadata import MetaData
from juju.errors import CharmError
from juju.lib.filehash import compute_file_hash
class CharmBundle(CharmBase):
"""ZIP-archive that contains charm directory content."""
type = "bundle"
def __init__(self, path):
self.path = isinstance(path, file) and path.name or path
try:
zf = ZipFile(path, 'r')
except BadZipfile, exc:
raise CharmError(path, "must be a zip file (%s)" % exc)
if "metadata.yaml" not in zf.namelist():
raise CharmError(
path, "charm does not contain required file 'metadata.yaml'")
self.metadata = MetaData()
self.metadata.parse(zf.read("metadata.yaml"))
try:
revision_content = zf.read("revision")
except KeyError:
revision_content = None
self._revision = get_revision(
revision_content, self.metadata, self.path)
if self._revision is None:
raise CharmError(self.path, "has no revision")
self.config = ConfigOptions()
if "config.yaml" in zf.namelist():
self.config.parse(zf.read("config.yaml"))
def get_revision(self):
return self._revision
def compute_sha256(self):
"""Return the SHA256 digest for this charm bundle.
The digest is extracted out of the final bundle file itself.
"""
return compute_file_hash(hashlib.sha256, self.path)
def extract_to(self, directory_path):
"""Extract the bundle to directory path and return a
CharmDirectory handle"""
from .directory import CharmDirectory
zf = ZipFile(self.path, "r")
for info in zf.infolist():
mode = info.external_attr >> 16
if stat.S_ISLNK(mode):
source = zf.read(info.filename)
target = os.path.join(directory_path, info.filename)
# Support extracting over existing charm.
# TODO: a directory changed to a file needs install manifests
if os.path.exists(target):
os.remove(target)
os.symlink(source, target)
continue
# Preserve mode
extract_path = zf.extract(info, directory_path)
os.chmod(extract_path, mode)
return CharmDirectory(directory_path)
def as_bundle(self):
return self
def as_directory(self):
"""Returns the bundle as a CharmDirectory using a temporary
path"""
dn = tempfile.mkdtemp(prefix="tmp-charm-")
return self.extract_to(dn)
juju-0.7.orig/juju/charm/config.py
import copy
import os
import sys
import yaml
from juju.lib import serializer
from juju.lib.format import YAMLFormat
from juju.lib.schema import (SchemaError, KeyDict, Dict, String,
Constant, OneOf, Int, Float)
from juju.charm.errors import (
ServiceConfigError, ServiceConfigValueError)
OPTION_SCHEMA = KeyDict({
"type": OneOf(Constant("string"),
Constant("str"), # Obsolete
Constant("int"),
Constant("boolean"),
Constant("float")),
"default": OneOf(String(), Int(), Float()),
"description": String(),
},
optional=["default", "description"],
)
# Schema used to validate ConfigOptions specifications
CONFIG_SCHEMA = KeyDict({
"options": Dict(String(), OPTION_SCHEMA),
})
WARNED_STR_IS_OBSOLETE = False
class ConfigOptions(object):
"""Represents the configuration options exposed by a charm.
The intended usage is that Charm provide access to these objects
and then use them to `validate` inputs provided in the `juju
set` and `juju deploy` code paths.
"""
def __init__(self):
self._data = {}
def as_dict(self):
return copy.deepcopy(self._data)
def load(self, pathname):
"""Construct a ConfigOptions instance from a YAML file.
It is currently allowed for `pathname` to be missing. An empty
file with no allowable options will be assumed in that case.
"""
data = None
if os.path.exists(pathname):
with open(pathname) as fh:
data = fh.read()
else:
pathname = None
data = "options: {}\n"
if not data:
raise ServiceConfigError(
pathname, "Missing required service options metadata")
self.parse(data, pathname)
return self
def parse(self, data, pathname=None):
"""Load data into the config object.
Data can be a properly encoded YAML string or a dict, such as
one returned by `get_serialization_data`.
Each call to `parse` replaces any existing data.
`data`: Python dict or YAML encoded dict containing a valid
config options specification.
`pathname`: optional pathname included in some errors
"""
if isinstance(data, basestring):
try:
raw_data = serializer.yaml_load(data)
except yaml.MarkedYAMLError, e:
# Capture the path name on the error if present.
if pathname is not None:
e.problem_mark = serializer.yaml_mark_with_path(
pathname, e.problem_mark)
raise
elif isinstance(data, dict):
raw_data = data
else:
raise ServiceConfigError(
pathname or "",
"Unknown data type for config options: %s" % type(data))
data = self.parse_serialization_data(raw_data, pathname)
self._data = data
# validate defaults
self.get_defaults()
def parse_serialization_data(self, data, pathname=None):
"""Verify we have sensible option metadata.
Returns the `options` dict from within the YAML data.
"""
if not data or not isinstance(data, dict):
raise ServiceConfigError(
pathname or "",
"Expected YAML dict of options metadata")
try:
data = CONFIG_SCHEMA.coerce(data, [])
except SchemaError, error:
raise ServiceConfigError(
pathname or "", "Invalid options specification: %s" % error)
# XXX Drop this after everyone has migrated their config to 'string'.
global WARNED_STR_IS_OBSOLETE
if not WARNED_STR_IS_OBSOLETE:
for name, info in data["options"].iteritems():
for field, value in info.iteritems():
if field == "type" and value == "str":
sys.stderr.write(
"WARNING: Charm is using obsolete 'str' type "
"in config.yaml. Rename it to 'string'. %r \n" % (
pathname or ""))
WARNED_STR_IS_OBSOLETE = True
break
return data["options"]
def _validate_one(self, name, value):
# see if there is a type associated with the option
kind = self._data[name].get("type", "string")
if kind not in validation_kinds:
raise ServiceConfigValueError(
"Unknown service option type: %s" % kind)
# apply validation
validator = validation_kinds[kind]
value, valid = validator(value, self._data[name])
if not valid:
# Return value such that it roundtrips; this allows us to
# report back the boolean false instead of the Python
# output format, False
raise ServiceConfigValueError(
"Invalid value for %s: %s" % (
name, YAMLFormat().format_raw(value)))
return value
def get_defaults(self):
"""Return a mapping of option: default for all options."""
d = {}
for name, options in self._data.items():
if "default" in options:
d[name] = self._validate_one(name, options["default"])
return d
def validate(self, options):
"""Validate options using the loaded validation data.
This method validates all the provided options, and returns a
new dictionary with values properly typed.
If a provided option is unknown or its value fails validation,
ServiceConfigError is raised.
"""
d = {}
for option, value in options.items():
if option not in self._data:
raise ServiceConfigValueError(
"%s is not a valid configuration option." % (option))
d[option] = self._validate_one(option, value)
return d
def get_serialization_data(self):
return dict(options=self._data.copy())
# Validators return (type mapped value, valid boolean)
def validate_str(value, options):
if isinstance(value, basestring):
return value, True
return value, False
def validate_int(value, options):
try:
return int(value), True
except ValueError:
return value, False
def validate_float(value, options):
try:
return float(value), True
except ValueError:
return value, False
def validate_boolean(value, options):
if isinstance(value, bool):
return value, True
if value.lower() == "true":
return True, True
if value.lower() == "false":
return False, True
return value, False
# maps service option types to callables
validation_kinds = {
"string": validate_str,
"str": validate_str, # Obsolete
"int": validate_int,
"float": validate_float,
"boolean": validate_boolean,
}
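The validator contract used by _validate_one above (take `(value, option_metadata)`, return `(coerced_value, valid_flag)`) can be exercised standalone; this sketch re-defines two of the validators rather than importing juju:

```python
# Standalone sketch of the validator contract: each validator returns
# (coerced_value, valid_flag) so that invalid input round-trips
# unchanged and can be reported back verbatim.
def validate_int(value, options):
    try:
        return int(value), True
    except (ValueError, TypeError):
        return value, False

def validate_boolean(value, options):
    if isinstance(value, bool):
        return value, True
    text = str(value).lower()
    if text in ("true", "false"):
        return text == "true", True
    return value, False

assert validate_int("42", {}) == (42, True)
assert validate_int("forty-two", {}) == ("forty-two", False)
assert validate_boolean("True", {}) == (True, True)
assert validate_boolean("nope", {}) == ("nope", False)
```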
juju-0.7.orig/juju/charm/directory.py
import os
import stat
import zipfile
import tempfile
from juju.charm.base import CharmBase, get_revision
from juju.charm.bundle import CharmBundle
from juju.charm.config import ConfigOptions
from juju.charm.errors import InvalidCharmFile
from juju.charm.metadata import MetaData
class CharmDirectory(CharmBase):
"""Directory that holds charm content.
:param path: Path to charm directory
The directory must contain the following files::
- ``metadata.yaml``
"""
type = "dir"
def __init__(self, path):
self.path = path
self.metadata = MetaData(os.path.join(path, "metadata.yaml"))
revision_content = None
revision_path = os.path.join(self.path, "revision")
if os.path.exists(revision_path):
with open(revision_path) as f:
revision_content = f.read()
self._revision = get_revision(
revision_content, self.metadata, self.path)
if self._revision is None:
self.set_revision(0)
elif revision_content is None:
self.set_revision(self._revision)
self.config = ConfigOptions()
self.config.load(os.path.join(path, "config.yaml"))
self._temp_bundle = None
self._temp_bundle_file = None
def get_revision(self):
return self._revision
def set_revision(self, revision):
self._revision = revision
with open(os.path.join(self.path, "revision"), "w") as f:
f.write(str(revision) + "\n")
def make_archive(self, path):
"""Create archive of directory and write to ``path``.
:param path: Path to archive
- build/* - This is used for packing the charm itself and any
similar tasks.
- */.* - Hidden files are all ignored for now. This will most
likely be changed into a specific ignore list (.bzr, etc)
"""
zf = zipfile.ZipFile(path, 'w', zipfile.ZIP_DEFLATED)
for dirpath, dirnames, filenames in os.walk(self.path):
relative_path = dirpath[len(self.path) + 1:]
if relative_path and not self._ignore(relative_path):
zf.write(dirpath, relative_path)
for name in filenames:
archive_name = os.path.join(relative_path, name)
if not self._ignore(archive_name):
real_path = os.path.join(dirpath, name)
self._check_type(real_path)
if os.path.islink(real_path):
self._check_link(real_path)
self._write_symlink(
zf, os.readlink(real_path), archive_name)
else:
zf.write(real_path, archive_name)
zf.close()
def _check_type(self, path):
"""Check the path
"""
s = os.stat(path)
if stat.S_ISDIR(s.st_mode) or stat.S_ISREG(s.st_mode):
return path
raise InvalidCharmFile(
self.metadata.name, path, "Invalid file type for a charm")
def _check_link(self, path):
link_path = os.readlink(path)
if link_path[0] == "/":
raise InvalidCharmFile(
self.metadata.name, path, "Absolute links are invalid")
path_dir = os.path.dirname(path)
link_path = os.path.join(path_dir, link_path)
if not link_path.startswith(os.path.abspath(self.path)):
raise InvalidCharmFile(
self.metadata.name, path, "Only internal symlinks are allowed")
def _write_symlink(self, zf, link_target, link_path):
"""Package symlinks with appropriate zipfile metadata."""
info = zipfile.ZipInfo()
info.filename = link_path
info.create_system = 3
# Preserve the pre-existing voodoo mode in a slightly clearer form.
info.external_attr = (stat.S_IFLNK | 0755) << 16
zf.writestr(info, link_target)
def _ignore(self, path):
if path == "build" or path.startswith("build/"):
return True
if path.startswith('.'):
return True
return False
def as_bundle(self):
if self._temp_bundle is None:
prefix = "%s-%d.charm." % (self.metadata.name, self.get_revision())
temp_file = tempfile.NamedTemporaryFile(prefix=prefix)
self.make_archive(temp_file.name)
self._temp_bundle = CharmBundle(temp_file.name)
# Attach the life time of temp_file to self:
self._temp_bundle_file = temp_file
return self._temp_bundle
def as_directory(self):
return self
def compute_sha256(self):
"""
Compute sha256, based on the bundle.
"""
return self.as_bundle().compute_sha256()
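The `_write_symlink` trick above, storing the Unix mode in the high 16 bits of `external_attr`, can be exercised with nothing but the standard library. A minimal sketch (the `write_symlink` helper and the in-memory archive are illustrative, not part of juju):

```python
import io
import stat
import zipfile

def write_symlink(zf, link_target, link_path):
    # Mark the entry as a symlink: the high 16 bits of external_attr
    # carry the Unix mode, and create_system=3 means "made on Unix".
    info = zipfile.ZipInfo(link_path)
    info.create_system = 3
    info.external_attr = (stat.S_IFLNK | 0o755) << 16
    zf.writestr(info, link_target)

buf = io.BytesIO()
zf = zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED)
write_symlink(zf, "metadata.yaml", "foobar")
zf.close()

# Reading it back, the mode bits identify the entry as a symlink and
# the stored payload is the link target.
zf = zipfile.ZipFile(buf)
entry = zf.getinfo("foobar")
is_link = stat.S_ISLNK(entry.external_attr >> 16)
target = zf.read("foobar")
zf.close()
```

Extraction code (such as `CharmBundle.extract_to`) checks those same mode bits to decide whether to recreate a symlink rather than a regular file.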
juju-0.7.orig/juju/charm/errors.py
from juju.errors import CharmError, JujuError
class CharmNotFound(JujuError):
"""A charm was not found in the repository."""
# This isn't semantically an error with the charm itself; it's an
# error even finding the charm.
def __init__(self, repository_path, charm_name):
self.repository_path = repository_path
self.charm_name = charm_name
def __str__(self):
return "Charm '%s' not found in repository %s" % (
self.charm_name, self.repository_path)
class CharmURLError(CharmError):
def __init__(self, url, message):
self.url = url
self.message = message
def __str__(self):
return "Bad charm URL %r: %s" % (self.url, self.message)
class MetaDataError(CharmError):
"""Raised when an error in the info file of a charm is found."""
def __init__(self, *args):
super(MetaDataError, self).__init__(*args)
def __str__(self):
return super(MetaDataError, self).__str__()
class InvalidCharmHook(CharmError):
"""A named hook was not found to be valid for the charm."""
def __init__(self, charm_name, hook_name):
self.charm_name = charm_name
self.hook_name = hook_name
def __str__(self):
return "Charm %r does not contain hook %r" % (
self.charm_name, self.hook_name)
class InvalidCharmFile(CharmError):
"""An invalid file was found in a charm."""
def __init__(self, charm_name, file_path, msg):
self.charm_name = charm_name
self.file_path = file_path
self.msg = msg
def __str__(self):
return "Charm %r invalid file %r %s" % (
self.charm_name, self.file_path, self.msg)
class NewerCharmNotFound(CharmError):
"""A newer charm was not found."""
def __init__(self, charm_id):
self.charm_id = charm_id
def __str__(self):
return "Charm %r is the latest revision known" % self.charm_id
class ServiceConfigError(CharmError):
"""Indicates an issue related to definition of service options."""
class ServiceConfigValueError(JujuError):
"""Indicates an issue related to values of service options."""
class RepositoryNotFound(JujuError):
"""Indicates inability to locate an appropriate repository"""
def __init__(self, specifier):
self.specifier = specifier
def __str__(self):
if self.specifier is None:
return "No repository specified"
return "No repository found at %r" % self.specifier
juju-0.7.orig/juju/charm/metadata.py
import logging
import os
import yaml
from juju.charm.errors import MetaDataError
from juju.errors import FileNotFound
from juju.lib import serializer
from juju.lib.format import is_valid_charm_format
from juju.lib.schema import (
SchemaError, Bool, Constant, Dict, Int,
KeyDict, OneOf, UnicodeOrString)
log = logging.getLogger("juju.charm")
UTF8_SCHEMA = UnicodeOrString("utf-8")
SCOPE_GLOBAL = "global"
SCOPE_CONTAINER = "container"
INTERFACE_SCHEMA = KeyDict({
"interface": UTF8_SCHEMA,
"limit": OneOf(Constant(None), Int()),
"scope": OneOf(Constant(SCOPE_GLOBAL), Constant(SCOPE_CONTAINER)),
"optional": Bool()},
optional=["scope"])
class InterfaceExpander(object):
"""Schema coercer that expands the interface shorthand notation.
We need this class because our charm shorthand is difficult to
work with (unfortunately). So we coerce shorthand and then store
the desired format in ZK.
Supports the following variants::
provides:
server: riak
admin: http
foobar:
interface: blah
provides:
server:
interface: mysql
limit:
optional: false
In all input cases, the output is the fully specified interface
representation as seen in the mysql interface description above.
"""
def __init__(self, limit):
"""Create relation interface reshaper.
@limit: the limit for this relation. Used to provide defaults
for a given kind of relation role (peer, provider, consumer)
"""
self.limit = limit
def coerce(self, value, path):
"""Coerce `value` into an expanded interface.
Helper method to support each of the variants, either the
charm does not specify limit and optional, such as foobar in
the above example; or the interface spec is just a string,
such as the ``server: riak`` example.
"""
if not isinstance(value, dict):
return {
"interface": UTF8_SCHEMA.coerce(value, path),
"limit": self.limit,
"scope": SCOPE_GLOBAL,
"optional": False}
else:
# Optional values are context-sensitive and/or have
# defaults, which is different than what KeyDict can
# readily support. So just do it here first, then
# coerce.
if "limit" not in value:
value["limit"] = self.limit
if "optional" not in value:
value["optional"] = False
value["scope"] = value.get("scope", SCOPE_GLOBAL)
return INTERFACE_SCHEMA.coerce(value, path)
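Stripped of the schema machinery, the coercion above reduces to a small function. A self-contained sketch (`expand_interface` is a hypothetical stand-in mirroring `coerce()`, not juju API):

```python
SCOPE_GLOBAL = "global"

def expand_interface(value, limit):
    # The "server: riak" shorthand: the value is just an interface name.
    if not isinstance(value, dict):
        return {"interface": value, "limit": limit,
                "scope": SCOPE_GLOBAL, "optional": False}
    # The dict form: fill in the context-sensitive defaults, exactly as
    # coerce() does before handing off to the KeyDict schema.
    expanded = dict(value)
    expanded.setdefault("limit", limit)
    expanded.setdefault("optional", False)
    expanded.setdefault("scope", SCOPE_GLOBAL)
    return expanded

shorthand = expand_interface("riak", limit=None)
partial = expand_interface({"interface": "mysql", "limit": 2}, limit=1)
```

Either way, the fully expanded dict is what ends up stored in ZK, so later consumers never see the shorthand.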
SCHEMA = KeyDict({
"name": UTF8_SCHEMA,
"revision": Int(),
"summary": UTF8_SCHEMA,
"description": UTF8_SCHEMA,
"format": Int(),
"peers": Dict(UTF8_SCHEMA, InterfaceExpander(limit=1)),
"provides": Dict(UTF8_SCHEMA, InterfaceExpander(limit=None)),
"requires": Dict(UTF8_SCHEMA, InterfaceExpander(limit=1)),
"subordinate": Bool(),
}, optional=set(
["format", "provides", "requires", "peers", "revision",
"subordinate"]))
class MetaData(object):
"""Represents the charm info file.
The main metadata for a charm (name, revision, etc) is maintained
in the charm's info file. This class is able to parse,
validate, and provide access to data in the info file.
"""
def __init__(self, path=None):
self._data = {}
if path is not None:
self.load(path)
@property
def name(self):
"""The charm name."""
return self._data.get("name")
@property
def obsolete_revision(self):
"""The charm revision.
The charm revision acts as a version, but unlike e.g. package
versions, the charm revision is a monotonically increasing
integer. This should not be stored in metadata any more, but remains
for backward compatibility's sake.
"""
return self._data.get("revision")
@property
def summary(self):
"""The charm summary."""
return self._data.get("summary")
@property
def description(self):
"""The charm description."""
return self._data.get("description")
@property
def format(self):
"""Optional charm format, defaults to 1"""
return self._data.get("format", 1)
@property
def provides(self):
"""The charm provides relations."""
return self._data.get("provides")
@property
def requires(self):
"""The charm requires relations."""
return self._data.get("requires")
@property
def peers(self):
"""The charm peers relations."""
return self._data.get("peers")
@property
def is_subordinate(self):
"""Indicates the charm requires a contained relationship.
This property will effect the deployment options of its
charm. When a charm is_subordinate it can only be deployed
when its contained relationship is satisfied. See the
subordinates specification.
"""
return self._data.get("subordinate", False)
def get_serialization_data(self):
"""Get internal dictionary representing the state of this instance.
This is useful to embed this information inside other storage-related
dictionaries.
"""
return dict(self._data)
def load(self, path):
"""Load and parse the info file.
@param path: Path of the file to load.
Internally, this function will pass the content of the file to
the C{parse()} method.
"""
if not os.path.isfile(path):
raise FileNotFound(path)
with open(path) as f:
self.parse(f.read(), path)
def parse(self, content, path=None):
"""Parse the info file described by the given content.
@param content: Content of the info file to parse.
@param path: Optional path of the loaded file. Used when raising
errors.
@raise MetaDataError: When errors are found in the info data.
"""
try:
self.parse_serialization_data(
serializer.yaml_load(content), path)
except yaml.MarkedYAMLError, e:
# Capture the path name on the error if present.
if path is not None:
e.problem_mark = serializer.yaml_mark_with_path(
path, e.problem_mark)
raise
if "revision" in self._data and path:
log.warning(
"%s: revision field is obsolete. Move it to the 'revision' "
"file." % path)
if self.provides:
for rel in self.provides:
if rel.startswith("juju-"):
raise MetaDataError(
"Charm %s attempting to provide relation in "
"implicit relation namespace: %s" %
(self.name, rel))
interface = self.provides[rel]["interface"]
if interface.startswith("juju-"):
raise MetaDataError(
"Charm %s attempting to provide interface in implicit namespace: "
"%s (relation: %s)" % (self.name, interface, rel))
if self.is_subordinate:
proper_subordinate = False
if self.requires:
for relation_data in self.requires.values():
if relation_data.get("scope") == SCOPE_CONTAINER:
proper_subordinate = True
if not proper_subordinate:
raise MetaDataError(
"%s labeled subordinate but lacking scope:container `requires` relation" %
path)
if not is_valid_charm_format(self.format):
raise MetaDataError("Charm %s uses an unknown format: %s" % (
self.name, self.format))
def parse_serialization_data(self, serialization_data, path=None):
"""Parse the unprocessed serialization data and load in this instance.
@param serialization_data: Unprocessed data matching the
metadata schema.
@param path: Optional path of the loaded file. Used when
raising errors.
@raise MetaDataError: When errors are found in the info data.
"""
try:
self._data = SCHEMA.coerce(serialization_data, [])
except SchemaError, error:
if path:
path_info = " %s:" % path
else:
path_info = ""
raise MetaDataError("Bad data in charm info:%s %s" %
(path_info, error))
juju-0.7.orig/juju/charm/provider.py
"""Charm Factory
Register a set of input handlers and spawn the correct charm
implementation.
"""
from juju.errors import CharmError
import os.path
def _is_bundle(filename):
"""is_bundle(filename) -> boolean"""
return os.path.isfile(filename) and filename.endswith(".charm")
def get_charm_from_path(specification):
"""
Given the specification of a charm (usually a pathname) map it
to an implementation and create an instance of the proper type.
"""
if _is_bundle(specification):
from .bundle import CharmBundle
return CharmBundle(specification)
elif os.path.isdir(specification):
from .directory import CharmDirectory
return CharmDirectory(specification)
raise CharmError(
specification, "unable to process %s into a charm" % specification)
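The dispatch above can be demonstrated without the charm classes themselves; this sketch (`classify` is a stand-in, not juju API) applies the same two tests to a scratch directory:

```python
import os
import os.path
import tempfile

def classify(path):
    # Mirror get_charm_from_path(): ".charm" files are bundles,
    # directories are charm directories, anything else is an error.
    if os.path.isfile(path) and path.endswith(".charm"):
        return "bundle"
    if os.path.isdir(path):
        return "dir"
    raise ValueError("unable to process %s into a charm" % path)

scratch = tempfile.mkdtemp()
bundle_path = os.path.join(scratch, "dummy.charm")
open(bundle_path, "w").close()

dir_kind = classify(scratch)
bundle_kind = classify(bundle_path)
```

The imports of `CharmBundle` and `CharmDirectory` are deferred into the branches in the real factory, so merely importing the provider module doesn't pull in both implementations.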
juju-0.7.orig/juju/charm/publisher.py
import logging
from zookeeper import NodeExistsException, NoNodeException
from twisted.internet.defer import (
DeferredList, inlineCallbacks, returnValue, succeed, FirstError)
from juju.lib import under
from juju.state.charm import CharmStateManager
from juju.state.errors import CharmStateNotFound, StateChanged
log = logging.getLogger("juju.charm")
class CharmPublisher(object):
"""
Publishes a charm to an environment.
"""
def __init__(self, client, storage):
self._client = client
self._storage = storage
self._charm_state_manager = CharmStateManager(self._client)
self._charm_add_queue = []
self._charm_state_cache = {}
@classmethod
@inlineCallbacks
def for_environment(cls, environment):
provider = environment.get_machine_provider()
storage = provider.get_file_storage()
client = yield provider.connect()
returnValue(cls(client, storage))
@inlineCallbacks
def add_charm(self, charm_id, charm):
"""Schedule a charm for addition to an juju environment.
Returns true if the charm is scheduled for upload, false if
the charm is already present in juju.
"""
self._charm_add_queue.append((charm_id, charm))
if charm_id in self._charm_state_cache:
returnValue(False)
try:
state = yield self._charm_state_manager.get_charm_state(
charm_id)
except CharmStateNotFound:
pass
else:
log.info("Using cached charm version of %s" % charm.metadata.name)
self._charm_state_cache[charm_id] = state
returnValue(False)
returnValue(True)
def _publish_one(self, charm_id, charm):
if charm_id in self._charm_state_cache:
return succeed(self._charm_state_cache[charm_id])
bundle = charm.as_bundle()
charm_file = open(bundle.path, "rb")
charm_store_path = under.quote(
"%s:%s" % (charm_id, bundle.get_sha256()))
def close_charm_file(passthrough):
charm_file.close()
return passthrough
def get_charm_url(result):
return self._storage.get_url(charm_store_path)
d = self._storage.put(charm_store_path, charm_file)
d.addBoth(close_charm_file)
d.addCallback(get_charm_url)
d.addCallback(self._cb_store_charm_state, charm_id, bundle)
d.addErrback(self._eb_verify_duplicate, charm_id, bundle)
return d
def publish(self):
"""Publish all added charms to provider storage and zookeeper.
Returns the charm_state of all scheduled charms.
"""
publish_deferreds = []
for charm_id, charm in self._charm_add_queue:
publish_deferreds.append(self._publish_one(charm_id, charm))
publish_deferred = DeferredList(publish_deferreds,
fireOnOneErrback=1,
consumeErrors=1)
# callbacks and deferreds to unwind the dlist
publish_deferred.addCallback(self._cb_extract_charm_state)
publish_deferred.addErrback(self._eb_extract_error)
return publish_deferred
def _cb_extract_charm_state(self, result):
return [r[1] for r in result]
def _eb_extract_error(self, failure):
failure.trap(FirstError)
return failure.value.subFailure
def _cb_store_charm_state(self, charm_url, charm_id, charm):
return self._charm_state_manager.add_charm_state(
charm_id, charm, charm_url)
@inlineCallbacks
def _eb_verify_duplicate(self, failure, charm_id, charm):
"""Detects duplicates vs. conflicts, raises stateerror on conflict."""
failure.trap(NodeExistsException)
try:
charm_state = \
yield self._charm_state_manager.get_charm_state(charm_id)
except NoNodeException:
# Check if the state goes away due to concurrent removal
msg = "Charm removed concurrently during publish, please retry."
raise StateChanged(msg)
if charm_state.get_sha256() != charm.get_sha256():
msg = "Concurrent upload of charm has different checksum %s" % (
charm_id)
raise StateChanged(msg)
juju-0.7.orig/juju/charm/repository.py
import json
import logging
import os
import tempfile
import urllib
import urlparse
import yaml
from twisted.internet.defer import fail, inlineCallbacks, returnValue, succeed
from twisted.web.client import downloadPage, getPage
from twisted.web.error import Error
from txaws.client.ssl import VerifyingContextFactory
from juju.charm.provider import get_charm_from_path
from juju.charm.url import CharmURL
from juju.errors import FileNotFound
from juju.lib import under
from .errors import (
CharmNotFound, CharmError, RepositoryNotFound, ServiceConfigValueError)
log = logging.getLogger("juju.charm")
CS_STORE_URL = "https://store.juju.ubuntu.com"
def _makedirs(path):
try:
os.makedirs(path)
except OSError:
pass
def _cache_key(charm_url):
charm_url.assert_revision()
return under.quote("%s.charm" % charm_url)
class LocalCharmRepository(object):
"""Charm repository in a local directory."""
type = "local"
def __init__(self, path):
if path is None or not os.path.isdir(path):
raise RepositoryNotFound(path)
self.path = path
def _collection(self, collection):
path = os.path.join(self.path, collection.series)
if not os.path.exists(path):
return
for dentry in os.listdir(path):
if dentry.startswith("."):
continue
dentry_path = os.path.join(path, dentry)
try:
yield get_charm_from_path(dentry_path)
except FileNotFound:
# There is a broken charm in the repo, but that
# shouldn't stop us from continuing
continue
except yaml.YAMLError, e:
# Log yaml errors for feedback to developers.
log.warning("Charm %r has a YAML error: %s", dentry, e)
continue
except (CharmError, ServiceConfigValueError), e:
# Log invalid config.yaml and metadata.yaml semantic errors
log.warning("Charm %r has an error: %r %s", dentry, e, e)
continue
except CharmNotFound:
# This could just be a random directory/file in the repo
continue
except Exception, e:
# Catch all (perms, unknowns, etc)
log.warning(
"Unexpected error while processing %s: %r",
dentry, e)
def find(self, charm_url):
"""Find a charm with the given name.
If multiple charms are found with different versions, the most
recent one (greatest revision) will be returned.
"""
assert charm_url.collection.schema == "local", "schema mismatch"
latest = None
for charm in self._collection(charm_url.collection):
if charm.metadata.name == charm_url.name:
if charm.get_revision() == charm_url.revision:
return succeed(charm)
if (latest is None or
latest.get_revision() < charm.get_revision()):
latest = charm
if latest is None or charm_url.revision is not None:
return fail(CharmNotFound(self.path, charm_url))
return succeed(latest)
def latest(self, charm_url):
d = self.find(charm_url.with_revision(None))
d.addCallback(lambda c: c.get_revision())
return d
def __str__(self):
return "local charm repository: %s" % self.path
class RemoteCharmRepository(object):
cache_path = os.path.expanduser("~/.juju/cache")
type = "store"
def __init__(self, url_base, cache_path=None):
self.url_base = url_base
if cache_path is not None:
self.cache_path = cache_path
self.no_stats = bool(os.environ.get("JUJU_TESTING"))
def __str__(self):
return "charm store"
@inlineCallbacks
def _get_info(self, charm_url):
charm_id = str(charm_url)
url = "%s/charm-info?charms=%s" % (
self.url_base, urllib.quote(charm_id))
if self.no_stats:
url += "&stats=0"
try:
host = urlparse.urlparse(url).hostname
all_info = json.loads(
(yield getPage(
url, contextFactory=VerifyingContextFactory(host))))
charm_info = all_info[charm_id]
for warning in charm_info.get("warnings", []):
log.warning("%s: %s", charm_id, warning)
errors = charm_info.get("errors", [])
if errors:
raise CharmError(charm_id, "; ".join(errors))
returnValue(charm_info)
except Error:
raise CharmNotFound(self.url_base, charm_url)
@inlineCallbacks
def _download(self, charm_url, cache_path):
url = "%s/charm/%s" % (self.url_base, urllib.quote(charm_url.path))
downloads = os.path.join(self.cache_path, "downloads")
_makedirs(downloads)
f = tempfile.NamedTemporaryFile(
prefix=_cache_key(charm_url), suffix=".part", dir=downloads,
delete=False)
f.close()
downloading_path = f.name
host = urlparse.urlparse(url).hostname
if self.no_stats:
url += "?stats=0"
try:
yield downloadPage(
url,
downloading_path,
contextFactory=VerifyingContextFactory(host))
except Error:
raise CharmNotFound(self.url_base, charm_url)
os.rename(downloading_path, cache_path)
@inlineCallbacks
def find(self, charm_url):
info = yield self._get_info(charm_url)
revision = info["revision"]
if charm_url.revision is None:
charm_url = charm_url.with_revision(revision)
else:
assert revision == charm_url.revision, "bad url revision"
cache_path = os.path.join(self.cache_path, _cache_key(charm_url))
cached = os.path.exists(cache_path)
if not cached:
yield self._download(charm_url, cache_path)
charm = get_charm_from_path(cache_path)
assert charm.get_revision() == revision, "bad charm revision"
if charm.get_sha256() != info["sha256"]:
os.remove(cache_path)
name = "%s (%s)" % (
charm_url, "cached" if cached else "downloaded")
raise CharmError(name, "SHA256 mismatch")
returnValue(charm)
@inlineCallbacks
def latest(self, charm_url):
info = yield self._get_info(charm_url.with_revision(None))
returnValue(info["revision"])
def resolve(vague_name, repository_path, default_series):
"""Get a Charm and associated identifying information
:param str vague_name: a lazily specified charm name, suitable for use with
:meth:`CharmURL.infer`
:param repository_path: where on the local filesystem to find a repository
(only currently meaningful when `vague_name` is specified with
`"local:"`)
:type repository_path: str or None
:param str default_series: the Ubuntu series to insert when `vague_name` is
inadequately specified.
:return: a tuple of a :class:`juju.charm.url.CharmURL` and a
:class:`juju.charm.base.CharmBase` subclass, which together contain
both the charm's data and all information necessary to specify its
source.
"""
url = CharmURL.infer(vague_name, default_series)
if url.collection.schema == "local":
repo = LocalCharmRepository(repository_path)
elif url.collection.schema == "cs":
# The eventual charm store URL; point to the elastic IP for now.
repo = RemoteCharmRepository(CS_STORE_URL)
return repo, url
juju-0.7.orig/juju/charm/tests/
juju-0.7.orig/juju/charm/url.py
import copy
import re
from juju.charm.errors import CharmURLError
_USER_RE = re.compile("^[a-z0-9][a-zA-Z0-9+.-]+$")
_SERIES_RE = re.compile("^[a-z]+([a-z-]+[a-z])?$")
_NAME_RE = re.compile("^[a-z][a-z0-9]*(-[a-z0-9]*[a-z][a-z0-9]*)*$")
class CharmCollection(object):
"""Holds enough information to specify a repository and location
:attr str schema: Defines which repository; valid values are "cs" (for the
Juju charm store) and "local" (for a local repository).
:attr user: Remote repositories can have sections owned by individual
users.
:type user: str or None
:attr series: Which version of Ubuntu is targeted by charms in this
collection.
"""
def __init__(self, schema, user, series):
self.schema = schema
self.user = user
self.series = series
def __str__(self):
if self.user is None:
return "%s:%s" % (self.schema, self.series)
return "%s:~%s/%s" % (self.schema, self.user, self.series)
class CharmURL(object):
"""Holds enough information to specify a charm.
:attr collection: Where to look for the charm.
:type collection: :class:`CharmCollection`
:attr str name: The charm's name.
:attr revision: The charm's revision, if specified.
:type revision: int or None
"""
def __init__(self, collection, name, revision):
self.collection = collection
self.name = name
self.revision = revision
def __str__(self):
if self.revision is None:
return "%s/%s" % (self.collection, self.name)
return "%s/%s-%s" % (self.collection, self.name, self.revision)
@property
def path(self):
return str(self).split(":", 1)[1]
def with_revision(self, revision):
other = copy.deepcopy(self)
other.revision = revision
return other
def assert_revision(self):
if self.revision is None:
raise CharmURLError(str(self), "expected a revision")
@classmethod
def parse(cls, string):
"""Turn an unambiguous string representation into a CharmURL."""
def fail(message):
raise CharmURLError(string, message)
if not isinstance(string, basestring):
fail("not a string type")
if ":" not in string:
fail("no schema specified")
schema, rest = string.split(":", 1)
if schema not in ("cs", "local"):
fail("invalid schema")
parts = rest.split("/")
if len(parts) not in (2, 3):
fail("invalid form")
user = None
if parts[0].startswith("~"):
if schema == "local":
fail("users not allowed in local URLs")
user = parts[0][1:]
if not _USER_RE.match(user):
fail("invalid user")
parts = parts[1:]
if len(parts) != 2:
fail("no series specified")
revision = None
series, name = parts
if not _SERIES_RE.match(series):
fail("invalid series")
if "-" in name:
maybe_name, maybe_revision = name.rsplit("-", 1)
if maybe_revision.isdigit():
name, revision = maybe_name, int(maybe_revision)
if not _NAME_RE.match(name):
fail("invalid name")
return cls(CharmCollection(schema, user, series), name, revision)
@classmethod
def infer(cls, vague_name, default_series):
"""Turn a potentially fuzzy alias into a CharmURL."""
try:
# it might already be a valid URL string
return cls.parse(vague_name)
except CharmURLError:
# ok, it's not, we have to do some work
pass
if vague_name.startswith("~"):
raise CharmURLError(
vague_name, "a URL with a user must specify a schema")
if ":" in vague_name:
schema, rest = vague_name.split(":", 1)
else:
schema, rest = "cs", vague_name
url_string = "%s:%s" % (schema, rest)
parts = rest.split("/")
if len(parts) == 1:
url_string = "%s:%s/%s" % (schema, default_series, rest)
elif len(parts) == 2:
if parts[0].startswith("~"):
url_string = "%s:%s/%s/%s" % (
schema, parts[0], default_series, parts[1])
try:
return cls.parse(url_string)
except CharmURLError as err:
err.message += " (URL inferred from '%s')" % vague_name
raise
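The trailing-revision rule in `parse()`, where a `-<digits>` suffix counts as a revision only if what remains is still a valid name, is easy to get wrong; this standalone sketch reuses the same `_NAME_RE` pattern (`split_revision` is illustrative, not juju API):

```python
import re

_NAME_RE = re.compile("^[a-z][a-z0-9]*(-[a-z0-9]*[a-z][a-z0-9]*)*$")

def split_revision(name):
    # Peel off a trailing "-<digits>" as the revision, as parse() does.
    revision = None
    if "-" in name:
        maybe_name, maybe_revision = name.rsplit("-", 1)
        if maybe_revision.isdigit():
            name, revision = maybe_name, int(maybe_revision)
    if not _NAME_RE.match(name):
        raise ValueError("invalid name: %r" % name)
    return name, revision

with_rev = split_revision("wordpress-2")
plain = split_revision("haproxy")
hyphenated = split_revision("rabbitmq-server")
```

Note that a non-numeric suffix like `-server` is simply part of the name, so hyphenated charm names parse cleanly.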
juju-0.7.orig/juju/charm/tests/__init__.py
def local_charm_id(charm):
return "local:series/%s-%s" % (
charm.metadata.name, charm.get_revision())
juju-0.7.orig/juju/charm/tests/repository/
juju-0.7.orig/juju/charm/tests/test_base.py
from juju.charm.base import CharmBase, get_revision
from juju.charm.metadata import MetaData
from juju.errors import CharmError
from juju.lib import serializer
from juju.lib.testing import TestCase
class MyCharm(CharmBase):
pass
class CharmBaseTest(TestCase):
def setUp(self):
self.charm = MyCharm()
def assertUnsupported(self, callable, attr_name):
try:
callable()
except NotImplementedError, e:
self.assertEquals(str(e),
"MyCharm.%s not supported" % attr_name)
else:
self.fail("MyCharm.%s didn't fail" % attr_name)
def test_unsupported(self):
self.assertUnsupported(self.charm.as_bundle, "as_bundle()")
self.assertUnsupported(self.charm.get_sha256, "compute_sha256()")
self.assertUnsupported(self.charm.compute_sha256, "compute_sha256()")
self.assertUnsupported(self.charm.get_revision, "get_revision()")
self.assertUnsupported(
lambda: self.charm.set_revision(1), "set_revision()")
def test_compute_and_cache_sha256(self):
"""
The value computed by compute_sha256() on a child class
is returned by get_sha256, and cached permanently.
"""
sha256 = ["mysha"]
class CustomCharm(CharmBase):
def compute_sha256(self):
return sha256[0]
charm = CustomCharm()
self.assertEquals(charm.get_sha256(), "mysha")
sha256 = ["anothervalue"]
# Should still be the same, since the old one was cached.
self.assertEquals(charm.get_sha256(), "mysha")
class GetRevisionTest(TestCase):
def assert_good_content(self, content, value):
self.assertEquals(get_revision(content, None, None), value)
def assert_bad_content(self, content):
err = self.assertRaises(
CharmError, get_revision, content, None, "path")
self.assertEquals(
str(err),
"Error processing 'path': invalid charm revision %r" % content)
def test_with_content(self):
self.assert_good_content("0\n", 0)
self.assert_good_content("123\n", 123)
self.assert_bad_content("")
self.assert_bad_content("-1\n")
self.assert_bad_content("three hundred and six or so")
def test_metadata_fallback(self):
metadata = MetaData()
self.assertEquals(get_revision(None, metadata, None), None)
metadata.parse(
serializer.yaml_dump(
{"name": "x", "summary": "y", "description": "z","revision": 33},
))
self.assertEquals(get_revision(None, metadata, None), 33)
juju-0.7.orig/juju/charm/tests/test_bundle.py
import os
import hashlib
import inspect
import shutil
import stat
import zipfile
from juju.lib import serializer
from juju.lib.testing import TestCase
from juju.lib.filehash import compute_file_hash
from juju.charm.metadata import MetaData
from juju.charm.bundle import CharmBundle
from juju.errors import CharmError
from juju.charm.directory import CharmDirectory
from juju.charm.provider import get_charm_from_path
from juju.charm import tests
repository_directory = os.path.join(
os.path.dirname(inspect.getabsfile(tests)), "repository")
sample_directory = os.path.join(repository_directory, "series", "dummy")
class BundleTest(TestCase):
def setUp(self):
directory = CharmDirectory(sample_directory)
# add sample directory
self.filename = self.makeFile(suffix=".charm")
directory.make_archive(self.filename)
def copy_charm(self):
dir_ = os.path.join(self.makeDir(), "sample")
shutil.copytree(sample_directory, dir_)
return dir_
def test_initialization(self):
bundle = CharmBundle(self.filename)
self.assertEquals(bundle.path, self.filename)
def test_error_not_zip(self):
filename = self.makeFile("@#$@$")
err = self.assertRaises(CharmError, CharmBundle, filename)
self.assertEquals(
str(err),
"Error processing %r: must be a zip file (File is not a zip file)"
% filename)
def test_error_zip_but_doesnt_have_metadata_file(self):
filename = self.makeFile()
zf = zipfile.ZipFile(filename, 'w')
zf.writestr("README.txt", "This is not a valid charm.")
zf.close()
err = self.assertRaises(CharmError, CharmBundle, filename)
self.assertEquals(
str(err),
"Error processing %r: charm does not contain required "
"file 'metadata.yaml'" % filename)
def test_no_revision_at_all(self):
filename = self.makeFile()
zf_dst = zipfile.ZipFile(filename, "w")
zf_src = zipfile.ZipFile(self.filename, "r")
for name in zf_src.namelist():
if name == "revision":
continue
zf_dst.writestr(name, zf_src.read(name))
zf_src.close()
zf_dst.close()
err = self.assertRaises(CharmError, CharmBundle, filename)
self.assertEquals(
str(err), "Error processing %r: has no revision" % filename)
def test_revision_in_metadata(self):
filename = self.makeFile()
zf_dst = zipfile.ZipFile(filename, "w")
zf_src = zipfile.ZipFile(self.filename, "r")
for name in zf_src.namelist():
if name == "revision":
continue
content = zf_src.read(name)
if name == "metadata.yaml":
data = serializer.yaml_load(content)
data["revision"] = 303
content = serializer.yaml_dump(data)
zf_dst.writestr(name, content)
zf_src.close()
zf_dst.close()
charm = CharmBundle(filename)
self.assertEquals(charm.get_revision(), 303)
def test_competing_revisions(self):
zf = zipfile.ZipFile(self.filename, "a")
zf.writestr("revision", "999")
data = serializer.yaml_load(zf.read("metadata.yaml"))
data["revision"] = 303
zf.writestr("metadata.yaml", serializer.yaml_dump(data))
zf.close()
charm = CharmBundle(self.filename)
self.assertEquals(charm.get_revision(), 999)
def test_cannot_set_revision(self):
charm = CharmBundle(self.filename)
self.assertRaises(NotImplementedError, charm.set_revision, 123)
def test_bundled_config(self):
"""Make sure that config is accessible from a bundle."""
from juju.charm.tests.test_config import sample_yaml_data
bundle = CharmBundle(self.filename)
self.assertEquals(bundle.config.get_serialization_data(),
sample_yaml_data)
def test_info(self):
bundle = CharmBundle(self.filename)
self.assertTrue(bundle.metadata is not None)
self.assertTrue(isinstance(bundle.metadata, MetaData))
self.assertEquals(bundle.metadata.name, "dummy")
self.assertEqual(bundle.type, "bundle")
def test_as_bundle(self):
bundle = CharmBundle(self.filename)
self.assertEquals(bundle.as_bundle(), bundle)
def test_executable_extraction(self):
sample_directory = os.path.join(
repository_directory, "series", "varnish-alternative")
charm_directory = CharmDirectory(sample_directory)
source_hook_path = os.path.join(sample_directory, "hooks", "install")
self.assertTrue(os.access(source_hook_path, os.X_OK))
bundle = charm_directory.as_bundle()
directory = bundle.as_directory()
hook_path = os.path.join(directory.path, "hooks", "install")
self.assertTrue(os.access(hook_path, os.X_OK))
config_path = os.path.join(directory.path, "config.yaml")
self.assertFalse(os.access(config_path, os.X_OK))
def get_charm_sha256(self):
return compute_file_hash(hashlib.sha256, self.filename)
def test_compute_sha256(self):
sha256 = self.get_charm_sha256()
bundle = CharmBundle(self.filename)
self.assertEquals(bundle.compute_sha256(), sha256)
def test_charm_base_inheritance(self):
"""
get_sha256() should be implemented in the base class,
and should use compute_sha256 to calculate the digest.
"""
sha256 = self.get_charm_sha256()
bundle = CharmBundle(self.filename)
self.assertEquals(bundle.get_sha256(), sha256)
def test_file_handle_as_path(self):
sha256 = self.get_charm_sha256()
fh = open(self.filename)
bundle = CharmBundle(fh)
self.assertEquals(bundle.get_sha256(), sha256)
def test_extract_to(self):
filename = self.makeFile()
charm = get_charm_from_path(self.filename)
f2 = charm.extract_to(filename)
# f2 should be a charm directory
self.assertInstance(f2, CharmDirectory)
self.assertInstance(f2.get_sha256(), basestring)
self.assertEqual(f2.path, filename)
def test_extract_symlink(self):
extract_dir = self.makeDir()
charm_path = self.copy_charm()
sym_path = os.path.join(charm_path, 'foobar')
os.symlink('metadata.yaml', sym_path)
charm_dir = CharmDirectory(charm_path)
bundle = charm_dir.as_bundle()
bundle.extract_to(extract_dir)
self.assertIn("foobar", os.listdir(extract_dir))
self.assertTrue(os.path.islink(os.path.join(extract_dir, "foobar")))
self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')),
'metadata.yaml')
# Verify we can extract it over again
os.remove(sym_path)
os.symlink('./config.yaml', sym_path)
charm_dir = CharmDirectory(charm_path)
bundle = charm_dir.as_bundle()
bundle.extract_to(extract_dir)
self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')),
'./config.yaml')
def test_extract_symlink_mode(self):
# lp:973260 - charms packed by different tools that record symlink
# mode permissions differently (i.e. the charm store) don't extract
# correctly.
charm_path = self.copy_charm()
sym_path = os.path.join(charm_path, 'foobar')
os.symlink('metadata.yaml', sym_path)
charm_dir = CharmDirectory(charm_path)
normal_path = charm_dir.as_bundle().path
zf_src = zipfile.ZipFile(normal_path, "r")
foreign_path = os.path.join(self.makeDir(), "store.charm")
zf_dst = zipfile.ZipFile(foreign_path, "w")
for info in zf_src.infolist():
if info.filename == "foobar":
# This is what the charm store does:
info.external_attr = (stat.S_IFLNK | 0777) << 16
zf_dst.writestr(info, zf_src.read(info.filename))
zf_src.close()
zf_dst.close()
bundle = CharmBundle(foreign_path)
extract_dir = self.makeDir()
bundle.extract_to(extract_dir)
self.assertIn("foobar", os.listdir(extract_dir))
self.assertTrue(os.path.islink(os.path.join(extract_dir, "foobar")))
self.assertEqual(os.readlink(os.path.join(extract_dir, 'foobar')),
'metadata.yaml')
def test_as_directory(self):
filename = self.makeFile()
charm = get_charm_from_path(self.filename)
f2 = charm.as_directory()
# f2 should be a charm directory
self.assertInstance(f2, CharmDirectory)
self.assertInstance(f2.get_sha256(), basestring)
# verify that it was extracted to a new temp dirname
self.assertNotEqual(f2.path, filename)
fn = os.path.split(f2.path)[1]
# verify that it used the expected prefix
self.assertStartsWith(fn, "tmp")
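A minimal standalone sketch of the trick `test_extract_symlink_mode` exercises: a zip member acts as a symlink when `S_IFLNK` is set in the high 16 bits of `external_attr`, with the link target stored as the member's content. The helper names below are illustrative, not part of the juju API:

```python
import io
import stat
import zipfile

def write_symlink(zf, name, target):
    # The link target is stored as the member's data; the Unix file-type
    # bits live in the top 16 bits of external_attr.
    info = zipfile.ZipInfo(name)
    info.external_attr = (stat.S_IFLNK | 0o777) << 16
    zf.writestr(info, target)

def is_symlink(info):
    # Recover the Unix mode from external_attr and test the type bits.
    return stat.S_ISLNK(info.external_attr >> 16)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    write_symlink(zf, "foobar", "metadata.yaml")
with zipfile.ZipFile(buf, "r") as zf:
    assert is_symlink(zf.getinfo("foobar"))
    assert zf.read("foobar") == b"metadata.yaml"
```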
juju-0.7.orig/juju/charm/tests/test_config.py
from StringIO import StringIO
import sys
import yaml
from juju.lib import serializer
from juju.lib.testing import TestCase
from juju.charm.config import ConfigOptions
from juju.charm.errors import ServiceConfigError, ServiceConfigValueError
sample_configuration = """
options:
title:
default: My Title
description: A descriptive title used for the service.
type: string
outlook:
description: No default outlook.
type: string
username:
default: admin001
description: The name of the initial account (given admin permissions).
type: string
skill-level:
description: A number indicating skill.
type: int
"""
sample_yaml_data = serializer.yaml_load(sample_configuration)
sample_config_defaults = {"title": "My Title",
"username": "admin001"}
class ConfigOptionsTest(TestCase):
def setUp(self):
self.config = ConfigOptions()
def test_load(self):
"""Validate we can load data or get expected errors."""
# load valid data
filename = self.makeFile(sample_configuration)
self.config.load(filename)
self.assertEqual(self.config.get_serialization_data(),
sample_yaml_data)
# test with dict based data
self.config.parse(sample_yaml_data)
self.assertEqual(self.config.get_serialization_data(),
sample_yaml_data)
# and with an unhandled type
self.assertRaises(TypeError, self.config.load, 1.234)
def test_load_file(self):
sample_path = self.makeFile(sample_configuration)
config = ConfigOptions()
config.load(sample_path)
self.assertEqual(config.get_serialization_data(),
sample_yaml_data)
# and an expected exception
# on an empty file
empty_file = self.makeFile("")
error = self.assertRaises(ServiceConfigError, config.load, empty_file)
self.assertEqual(
str(error),
("Error processing %r: "
"Missing required service options metadata") % empty_file)
# a missing filename is allowed
config = config.load("missing_file")
def test_defaults(self):
self.config.parse(sample_configuration)
defaults = self.config.get_defaults()
self.assertEqual(defaults, sample_config_defaults)
def test_defaults_validated(self):
e = self.assertRaises(
ServiceConfigValueError,
self.config.parse,
serializer.yaml_dump(
{"options": {
"foobar": {
"description": "beyond what?",
"type": "string",
"default": True}}}))
self.assertEqual(
str(e), "Invalid value for foobar: true")
def test_as_dict(self):
# load valid data
filename = self.makeFile(sample_configuration)
self.config.load(filename)
# Verify dictionary serialization
schema_dict = self.config.as_dict()
self.assertEqual(
schema_dict,
serializer.yaml_load(sample_configuration)["options"])
# Verify the dictionary is a copy
# Poke at embedded objects
schema_dict["outlook"]["default"] = 1
schema2_dict = self.config.as_dict()
self.assertFalse("default" in schema2_dict["outlook"])
def test_parse(self):
"""Verify that parse checks and raises."""
# no options dict
self.assertRaises(
ServiceConfigError, self.config.parse, {"foo": "bar"})
# and with bad data expected exceptions
error = self.assertRaises(yaml.YAMLError,
self.config.parse, "foo: [1, 2", "/tmp/zamboni")
self.assertIn("/tmp/zamboni", str(error))
def test_validate(self):
sample_input = {"title": "Helpful Title", "outlook": "Peachy"}
self.config.parse(sample_configuration)
data = self.config.validate(sample_input)
# This should include an overridden value, a default and a new value.
self.assertEqual(data,
{"outlook": "Peachy",
"title": "Helpful Title"})
# now try to set a value outside the expected
sample_input["bad"] = "value"
error = self.assertRaises(ServiceConfigValueError,
self.config.validate, sample_input)
self.assertEqual(error.message,
"bad is not a valid configuration option.")
# validating with an empty instance
# the service takes no options
config = ConfigOptions()
self.assertRaises(
ServiceConfigValueError, config.validate, sample_input)
def test_validate_float(self):
self.config.parse(serializer.yaml_dump(
{"options": {
"score": {
"description": "A number indicating score.",
"type": "float"}}}))
error = self.assertRaises(ServiceConfigValueError,
self.config.validate, {"score": "arg"})
self.assertEquals(str(error), "Invalid value for score: arg")
data = self.config.validate({"score": "82.1"})
self.assertEqual(data, {"score": 82.1})
def test_validate_string(self):
self.config.parse(sample_configuration)
error = self.assertRaises(ServiceConfigValueError,
self.config.validate, {"title": True})
self.assertEquals(str(error), "Invalid value for title: true")
data = self.config.validate({"title": u"Good"})
self.assertEqual(data, {"title": u"Good"})
def test_validate_boolean(self):
self.config.parse(serializer.yaml_dump(
{"options": {
"active": {
"description": "A boolean indicating activity.",
"type": "boolean"}}}))
error = self.assertRaises(ServiceConfigValueError,
self.config.validate, {"active": "Blob"})
self.assertEquals(str(error), "Invalid value for active: Blob")
data = self.config.validate({"active": "False"})
self.assertEqual(data, {"active": False})
data = self.config.validate({"active": "True"})
self.assertEqual(data, {"active": True})
data = self.config.validate({"active": True})
self.assertEqual(data, {"active": True})
def test_validate_integer(self):
self.config.parse(sample_configuration)
error = self.assertRaises(ServiceConfigValueError,
self.config.validate, {"skill-level": "NaN"})
self.assertEquals(str(error), "Invalid value for skill-level: NaN")
data = self.config.validate({"skill-level": "9001"})
# it's over 9000!
self.assertEqual(data, {"skill-level": 9001})
def test_validate_with_obsolete_str(self):
"""
Test the handling for the obsolete 'str' option type (it's
'string' now). Remove support for it after a while, and take
this test with it.
"""
config = serializer.yaml_load(sample_configuration)
config["options"]["title"]["type"] = "str"
obsolete_config = serializer.yaml_dump(config)
sio = StringIO()
self.patch(sys, "stderr", sio)
self.config.parse(obsolete_config)
data = self.config.validate({"title": "Helpful Title"})
self.assertEqual(data["title"], "Helpful Title")
self.assertIn("obsolete 'str'", sio.getvalue())
# Trying it again, it should not warn since we don't want
# to pester the charm author.
sio.truncate(0)
self.config.parse(obsolete_config)
data = self.config.validate({"title": "Helpful Title"})
self.assertEqual(data["title"], "Helpful Title")
self.assertEqual(sio.getvalue(), "")
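The per-type coercion rules that the `test_validate_*` cases above pin down can be sketched standalone. This is illustrative only; the real logic lives in `juju.charm.config.ConfigOptions`, and string handling in particular is simplified here:

```python
def coerce_option(type_name, value):
    """Coerce a raw config value per its declared type (simplified sketch)."""
    if type_name == "string":
        # Mirrors the tests: a real bool is rejected for a string option.
        if not isinstance(value, str):
            raise ValueError("Invalid value: %r" % (value,))
        return value
    if type_name == "boolean":
        # Only real bools or the strings "True"/"False" are accepted.
        if isinstance(value, bool):
            return value
        if value in ("True", "False"):
            return value == "True"
        raise ValueError("Invalid value: %r" % (value,))
    try:
        return {"int": int, "float": float}[type_name](value)
    except ValueError:
        raise ValueError("Invalid value: %r" % (value,))
```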
juju-0.7.orig/juju/charm/tests/test_directory.py
import gc
import os
import hashlib
import inspect
import shutil
import zipfile
from juju.errors import CharmError, FileNotFound
from juju.charm.errors import InvalidCharmFile
from juju.charm.metadata import MetaData
from juju.charm.directory import CharmDirectory
from juju.charm.bundle import CharmBundle
from juju.lib import serializer
from juju.lib.filehash import compute_file_hash
from juju.charm import tests
from juju.charm.tests.test_repository import RepositoryTestBase
sample_directory = os.path.join(
os.path.dirname(
inspect.getabsfile(tests)), "repository", "series", "dummy")
class DirectoryTest(RepositoryTestBase):
def setUp(self):
super(DirectoryTest, self).setUp()
# Ensure the empty/ directory exists under the dummy sample
# charm. Depending on how the source code is exported,
# empty directories may be ignored.
empty_dir = os.path.join(sample_directory, "empty")
if not os.path.isdir(empty_dir):
os.mkdir(empty_dir)
def copy_charm(self):
dir_ = os.path.join(self.makeDir(), "sample")
shutil.copytree(sample_directory, dir_)
return dir_
def delete_revision(self, dir_):
os.remove(os.path.join(dir_, "revision"))
def set_metadata_revision(self, dir_, revision):
metadata_path = os.path.join(dir_, "metadata.yaml")
with open(metadata_path) as f:
data = serializer.yaml_load(f.read())
data["revision"] = revision
with open(metadata_path, "w") as f:
f.write(serializer.yaml_dump(data))
def test_metadata_is_required(self):
directory = self.makeDir()
self.assertRaises(FileNotFound, CharmDirectory, directory)
def test_no_revision(self):
dir_ = self.copy_charm()
self.delete_revision(dir_)
charm = CharmDirectory(dir_)
self.assertEquals(charm.get_revision(), 0)
with open(os.path.join(dir_, "revision")) as f:
self.assertEquals(f.read(), "0\n")
def test_nonsense_revision(self):
dir_ = self.copy_charm()
with open(os.path.join(dir_, "revision"), "w") as f:
f.write("shifty look")
err = self.assertRaises(CharmError, CharmDirectory, dir_)
self.assertEquals(
str(err),
"Error processing %r: invalid charm revision 'shifty look'" % dir_)
def test_revision_in_metadata(self):
dir_ = self.copy_charm()
self.delete_revision(dir_)
self.set_metadata_revision(dir_, 999)
log = self.capture_logging("juju.charm")
charm = CharmDirectory(dir_)
self.assertEquals(charm.get_revision(), 999)
self.assertIn(
"revision field is obsolete. Move it to the 'revision' file.",
log.getvalue())
def test_competing_revisions(self):
dir_ = self.copy_charm()
self.set_metadata_revision(dir_, 999)
log = self.capture_logging("juju.charm")
charm = CharmDirectory(dir_)
self.assertEquals(charm.get_revision(), 1)
self.assertIn(
"revision field is obsolete. Move it to the 'revision' file.",
log.getvalue())
def test_set_revision(self):
dir_ = self.copy_charm()
charm = CharmDirectory(dir_)
charm.set_revision(123)
self.assertEquals(charm.get_revision(), 123)
with open(os.path.join(dir_, "revision")) as f:
self.assertEquals(f.read(), "123\n")
def test_info(self):
directory = CharmDirectory(sample_directory)
self.assertTrue(directory.metadata is not None)
self.assertTrue(isinstance(directory.metadata, MetaData))
self.assertEquals(directory.metadata.name, "dummy")
self.assertEquals(directory.type, "dir")
def test_make_archive(self):
# make archive from sample directory
directory = CharmDirectory(sample_directory)
f = self.makeFile(suffix=".charm")
directory.make_archive(f)
# open archive in .zip-format and assert integrity
from zipfile import ZipFile
zf = ZipFile(f)
self.assertEqual(zf.testzip(), None)
# assert included
included = [info.filename for info in zf.infolist()]
self.assertEqual(
set(included),
set(("metadata.yaml", "empty/", "src/", "src/hello.c",
"config.yaml", "hooks/", "hooks/install", "revision")))
def test_as_bundle(self):
directory = CharmDirectory(self.sample_dir1)
charm_bundle = directory.as_bundle()
self.assertEquals(type(charm_bundle), CharmBundle)
self.assertEquals(charm_bundle.metadata.name, "sample")
self.assertIn("sample-1.charm", charm_bundle.path)
total_compressed = 0
total_uncompressed = 0
zip_file = zipfile.ZipFile(charm_bundle.path)
for n in zip_file.namelist():
info = zip_file.getinfo(n)
total_compressed += info.compress_size
total_uncompressed += info.file_size
self.assertTrue(total_compressed < total_uncompressed)
def test_as_bundle_file_lifetime(self):
"""
The temporary bundle file created should have a life time
equivalent to that of the directory object itself.
"""
directory = CharmDirectory(self.sample_dir1)
charm_bundle = directory.as_bundle()
gc.collect()
self.assertTrue(os.path.isfile(charm_bundle.path))
del directory
gc.collect()
self.assertFalse(os.path.isfile(charm_bundle.path))
def test_compute_sha256(self):
"""
Computing the sha256 of a directory will use the bundled
charm, since the hash of the file itself is needed.
"""
directory = CharmDirectory(self.sample_dir1)
sha256 = directory.compute_sha256()
charm_bundle = directory.as_bundle()
self.assertEquals(type(charm_bundle), CharmBundle)
self.assertEquals(compute_file_hash(hashlib.sha256,
charm_bundle.path),
sha256)
def test_as_bundle_with_relative_path(self):
"""
Ensure that as_bundle works correctly with relative paths.
"""
current_dir = os.getcwd()
os.chdir(self.sample_dir2)
self.addCleanup(os.chdir, current_dir)
charm_dir = "../%s" % os.path.basename(self.sample_dir1)
directory = CharmDirectory(charm_dir)
charm_bundle = directory.as_bundle()
self.assertEquals(type(charm_bundle), CharmBundle)
self.assertEquals(charm_bundle.metadata.name, "sample")
def test_charm_base_inheritance(self):
"""
get_sha256() should be implemented in the base class,
and should use compute_sha256 to calculate the digest.
"""
directory = CharmDirectory(self.sample_dir1)
bundle = directory.as_bundle()
digest = compute_file_hash(hashlib.sha256, bundle.path)
self.assertEquals(digest, directory.get_sha256())
def test_as_directory(self):
directory = CharmDirectory(self.sample_dir1)
self.assertIs(directory.as_directory(), directory)
def test_config(self):
"""Validate that ConfigOptions are available on the charm"""
from juju.charm.tests.test_config import sample_yaml_data
directory = CharmDirectory(sample_directory)
self.assertEquals(directory.config.get_serialization_data(),
sample_yaml_data)
def test_file_type(self):
charm_dir = self.copy_charm()
os.mkfifo(os.path.join(charm_dir, "foobar"))
directory = CharmDirectory(charm_dir)
e = self.assertRaises(InvalidCharmFile, directory.as_bundle)
self.assertIn("foobar' Invalid file type for a charm", str(e))
def test_internal_symlink(self):
charm_path = self.copy_charm()
external_file = self.makeFile(content='baz')
os.symlink(external_file, os.path.join(charm_path, "foobar"))
directory = CharmDirectory(charm_path)
e = self.assertRaises(InvalidCharmFile, directory.as_bundle)
self.assertIn("foobar' Absolute links are invalid", str(e))
def test_extract_symlink(self):
charm_path = self.copy_charm()
external_file = self.makeFile(content='lorem ipsum')
os.symlink(external_file, os.path.join(charm_path, "foobar"))
directory = CharmDirectory(charm_path)
e = self.assertRaises(InvalidCharmFile, directory.as_bundle)
self.assertIn("foobar' Absolute links are invalid", str(e))
juju-0.7.orig/juju/charm/tests/test_errors.py
import os
from juju.charm.errors import (
CharmURLError, CharmNotFound, InvalidCharmHook, NewerCharmNotFound,
RepositoryNotFound, ServiceConfigError, InvalidCharmFile, MetaDataError)
from juju.errors import CharmError, JujuError
from juju.lib.testing import TestCase
class CharmErrorsTest(TestCase):
def test_NewerCharmNotFound(self):
error = NewerCharmNotFound("local:name:21")
self.assertEquals(
str(error),
"Charm 'local:name:21' is the latest revision known")
self.assertTrue(isinstance(error, CharmError))
def test_CharmURLError(self):
error = CharmURLError("foobar:/adfsa:slashot", "bad magic")
self.assertEquals(
str(error),
"Bad charm URL 'foobar:/adfsa:slashot': bad magic")
self.assertTrue(isinstance(error, CharmError))
def test_CharmNotFound(self):
error = CharmNotFound("/path", "cassandra")
self.assertEquals(
str(error),
"Charm 'cassandra' not found in repository /path")
self.assertTrue(isinstance(error, JujuError))
def test_InvalidCharmHook(self):
error = InvalidCharmHook("mysql", "magic-relation-changed")
self.assertEquals(
str(error),
"Charm 'mysql' does not contain hook 'magic-relation-changed'")
self.assertTrue(isinstance(error, CharmError))
def test_InvalidCharmFile(self):
error = InvalidCharmFile("mysql", "hooks/foobar", "bad file")
self.assertEquals(
str(error),
"Charm 'mysql' invalid file 'hooks/foobar' bad file")
self.assertTrue(isinstance(error, CharmError))
def test_MetaDataError(self):
error = MetaDataError("foobar is bad")
self.assertEquals(
str(error),
"foobar is bad")
self.assertTrue(isinstance(error, CharmError))
def test_RepositoryNotFound(self):
error = RepositoryNotFound(None)
self.assertEquals(str(error), "No repository specified")
self.assertTrue(isinstance(error, JujuError))
path = os.path.join(self.makeDir(), "missing")
error = RepositoryNotFound(path)
self.assertEquals(str(error), "No repository found at %r" % path)
self.assertTrue(isinstance(error, JujuError))
def test_ServiceConfigError(self):
error = ServiceConfigError("foobar", "blah")
self.assertEquals(str(error), "Error processing 'foobar': blah")
self.assertTrue(isinstance(error, JujuError))
juju-0.7.orig/juju/charm/tests/test_metadata.py
# -*- encoding: utf-8 -*-
import os
import yaml
import inspect
from juju.charm import tests
from juju.charm.metadata import (
MetaData, MetaDataError, InterfaceExpander, SchemaError)
from juju.errors import FileNotFound
from juju.lib.testing import TestCase
from juju.lib import serializer
test_repository_path = os.path.join(
os.path.dirname(inspect.getabsfile(tests)),
"repository")
sample_path = os.path.join(
test_repository_path, "series", "dummy", "metadata.yaml")
sample_configuration = open(sample_path).read()
class MetaDataTest(TestCase):
def setUp(self):
self.metadata = MetaData()
self.sample = sample_configuration
def change_sample(self):
"""Return a context manager for hacking the sample data.
This should be used as follows:
with self.change_sample() as data:
data["some-key"] = "some-data"
The changed sample file content will be available in self.sample
once the context is done executing.
"""
class HackManager(object):
def __enter__(mgr):
mgr.data = serializer.yaml_load(self.sample)
return mgr.data
def __exit__(mgr, exc_type, exc_val, exc_tb):
self.sample = serializer.yaml_dump(mgr.data)
return False
return HackManager()
def test_path_argument_loads_charm_info(self):
info = MetaData(sample_path)
self.assertEquals(info.name, "dummy")
def test_check_basic_info_before_loading(self):
"""
Attributes should be set to None before anything is loaded.
"""
self.assertEquals(self.metadata.name, None)
self.assertEquals(self.metadata.obsolete_revision, None)
self.assertEquals(self.metadata.summary, None)
self.assertEquals(self.metadata.description, None)
self.assertEquals(self.metadata.is_subordinate, False)
self.assertEquals(self.metadata.format, 1)
def test_parse_and_check_basic_info(self):
"""
Parsing the content file should work. :-) Basic information will
be available as attributes of the info file.
"""
self.metadata.parse(self.sample)
self.assertEquals(self.metadata.name, "dummy")
self.assertEquals(self.metadata.obsolete_revision, None)
self.assertEquals(self.metadata.summary, u"That's a dummy charm.")
self.assertEquals(self.metadata.description,
u"This is a longer description which\n"
u"potentially contains multiple lines.\n")
self.assertEquals(self.metadata.is_subordinate, False)
def test_is_subordinate(self):
"""Validate that the rules for detecting proper subordinate charms work."""
logging_path = os.path.join(
test_repository_path, "series", "logging", "metadata.yaml")
logging_configuration = open(logging_path).read()
self.metadata.parse(logging_configuration)
self.assertTrue(self.metadata.is_subordinate)
def test_subordinate_without_container_relation(self):
"""Validate that the rules for detecting proper subordinate charms work.
Case where no container relation is specified.
"""
with self.change_sample() as data:
data["subordinate"] = True
error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample, "some/path")
self.assertIn("some/path labeled subordinate but lacking scope:container `requires` relation",
str(error))
def test_scope_constraint(self):
"""Verify the scope constraint is parsed properly."""
logging_path = os.path.join(
test_repository_path, "series", "logging", "metadata.yaml")
logging_configuration = open(logging_path).read()
self.metadata.parse(logging_configuration)
# Verify the scope settings
self.assertEqual(self.metadata.provides[u"logging-client"]["scope"],
"global")
self.assertEqual(self.metadata.requires[u"logging-directory"]["scope"],
"container")
self.assertTrue(self.metadata.is_subordinate)
def assert_parse_with_revision(self, with_path):
"""
Parsing the content file should work. :-) Basic information will
be available as attributes of the info file.
"""
with self.change_sample() as data:
data["revision"] = 123
log = self.capture_logging("juju.charm")
self.metadata.parse(self.sample, "some/path" if with_path else None)
if with_path:
self.assertIn(
"some/path: revision field is obsolete. Move it to the "
"'revision' file.",
log.getvalue())
self.assertEquals(self.metadata.name, "dummy")
self.assertEquals(self.metadata.obsolete_revision, 123)
self.assertEquals(self.metadata.summary, u"That's a dummy charm.")
self.assertEquals(self.metadata.description,
u"This is a longer description which\n"
u"potentially contains multiple lines.\n")
self.assertEquals(
self.metadata.get_serialization_data()["revision"], 123)
def test_parse_with_revision(self):
self.assert_parse_with_revision(True)
self.assert_parse_with_revision(False)
def test_load_calls_parse_calls_parse_serialization_data(self):
"""
We'll break the rules a little bit here and test the implementation
itself just so that we don't have to test *everything* twice. If
load() calls parse() which calls parse_serialization_data(), then
whatever happens with parse_serialization_data(), happens with the
others.
"""
serialization_data = {"Hi": "there!"}
yaml_data = serializer.yaml_dump(serialization_data)
path = self.makeFile(yaml_data)
mock = self.mocker.patch(self.metadata)
mock.parse(yaml_data, path)
self.mocker.passthrough()
mock.parse_serialization_data(serialization_data, path)
self.mocker.replay()
self.metadata.load(path)
# Do your magic Mocker!
def test_metadata_parse_error_includes_path_with_load(self):
broken = ("""\
description: helo
name: hi
requires: {interface: zebra
revision: 0
summary: hola""")
path = self.makeFile()
e = self.assertRaises(
yaml.YAMLError, self.metadata.parse, broken, path)
self.assertIn(path, str(e))
def test_schema_error_includes_path_with_load(self):
"""
When using load(), the exception message should mention the
path name which was attempted.
"""
with self.change_sample() as data:
data["revision"] = "1"
filename = self.makeFile(self.sample)
error = self.assertRaises(MetaDataError,
self.metadata.load, filename)
self.assertEquals(str(error),
"Bad data in charm info: %s: revision: "
"expected int, got '1'" % filename)
def test_load_missing_file(self):
"""
When using load(), the exception message should mention the
path name which was attempted.
"""
filename = self.makeFile()
error = self.assertRaises(FileNotFound,
self.metadata.load, filename)
self.assertEquals(error.path, filename)
def test_name_summary_and_description_are_utf8(self):
"""
Textual fields are decoded to unicode by the schema using UTF-8.
"""
value = u"áéíóú"
str_value = value.encode("utf-8")
with self.change_sample() as data:
data["name"] = str_value
data["summary"] = str_value
data["description"] = str_value
self.metadata.parse(self.sample)
self.assertEquals(self.metadata.name, value)
self.assertEquals(self.metadata.summary, value)
self.assertEquals(self.metadata.description, value)
def test_get_serialized_data(self):
"""
The get_serialization_data() function should return an object which
may be passed to parse_serialization_data() to restore the state of
the instance.
"""
self.metadata.parse(self.sample)
serialization_data = self.metadata.get_serialization_data()
self.assertEquals(serialization_data["name"], "dummy")
def test_provide_implicit_relation(self):
"""Verify providing a juju-* reserved relation errors"""
with self.change_sample() as data:
data["provides"] = {"juju-foo": {"interface": "juju-magic", "scope": "container"}}
# verify relation level error
error = self.assertRaises(MetaDataError,
self.metadata.parse, self.sample)
self.assertIn("Charm dummy attempting to provide relation in implicit relation namespace: juju-foo",
str(error))
# verify interface level error
with self.change_sample() as data:
data["provides"] = {"foo-rel": {"interface": "juju-magic", "scope": "container"}}
error = self.assertRaises(MetaDataError,
self.metadata.parse, self.sample)
self.assertIn(
"Charm dummy attempting to provide interface in implicit namespace: juju-magic (relation: foo-rel)",
str(error))
def test_format(self):
# Defaults to 1
self.metadata.parse(self.sample)
self.assertEquals(self.metadata.format, 1)
# Explicitly set to 1
with self.change_sample() as data:
data["format"] = 1
self.metadata.parse(self.sample)
self.assertEquals(self.metadata.format, 1)
# Explicitly set to 2
with self.change_sample() as data:
data["format"] = 2
self.metadata.parse(self.sample)
self.assertEquals(self.metadata.format, 2)
# Explicitly set to 3; however this is an unknown format for Juju
with self.change_sample() as data:
data["format"] = 3
error = self.assertRaises(MetaDataError, self.metadata.parse, self.sample)
self.assertIn("Charm dummy uses an unknown format: 3", str(error))
class ParseTest(TestCase):
"""Test the parsing of some well-known sample files"""
def get_metadata(self, charm_name):
"""Get the associated metadata for a given charm, eg ``wordpress``"""
metadata = MetaData(os.path.join(
test_repository_path, "series", charm_name, "metadata.yaml"))
self.assertEqual(metadata.name, charm_name)
return metadata
def test_mysql_sample(self):
"""Test parse of a relation written in shorthand format.
Such relations are defined as follows::
provides:
server: mysql
"""
metadata = self.get_metadata("mysql")
self.assertEqual(metadata.peers, None)
self.assertEqual(
metadata.provides["server"],
{"interface": "mysql", "limit": None, "optional": False, "scope": "global"})
self.assertEqual(metadata.requires, None)
def test_riak_sample(self):
"""Test multiple interfaces defined in long form, with defaults."""
metadata = self.get_metadata("riak")
self.assertEqual(
metadata.peers["ring"],
{"interface": "riak", "limit": 1, "optional": False, "scope": "global"})
self.assertEqual(
metadata.provides["endpoint"],
{"interface": "http", "limit": None, "optional": False, "scope": "global"})
self.assertEqual(
metadata.provides["admin"],
{"interface": "http", "limit": None, "optional": False, "scope": "global"})
self.assertEqual(metadata.requires, None)
def test_wordpress_sample(self):
"""Test multiple interfaces defined in long form, without defaults."""
metadata = self.get_metadata("wordpress")
self.assertEqual(metadata.peers, None)
self.assertEqual(
metadata.provides["url"],
{"interface": "http", "limit": None, "optional": False, "scope": "global"})
self.assertEqual(
metadata.requires["db"],
{"interface": "mysql", "limit": 1, "optional": False, "scope": "global"})
self.assertEqual(
metadata.requires["cache"],
{"interface": "varnish", "limit": 2, "optional": True, "scope": "global"})
def test_interface_expander(self):
"""Test rewriting of a given interface specification into long form.
InterfaceExpander uses `coerce` to do one of two things:
- Rewrite shorthand to the long form used for actual storage
- Fill in defaults, including a configurable `limit`
This test ensures test coverage on each of these branches, along
with ensuring the conversion object properly raises SchemaError
exceptions on invalid data.
"""
expander = InterfaceExpander(limit=None)
# shorthand is properly rewritten
self.assertEqual(
expander.coerce("http", ["provides"]),
{"interface": "http", "limit": None, "optional": False, "scope": "global"})
# defaults are properly applied
self.assertEqual(
expander.coerce(
{"interface": "http"}, ["provides"]),
{"interface": "http", "limit": None, "optional": False, "scope": "global"})
self.assertEqual(
expander.coerce(
{"interface": "http", "limit": 2}, ["provides"]),
{"interface": "http", "limit": 2, "optional": False, "scope": "global"})
self.assertEqual(
expander.coerce(
{"interface": "http", "optional": True, "scope": "global"},
["provides"]),
{"interface": "http", "limit": None, "optional": True, "scope": "global"})
# invalid data raises SchemaError
self.assertRaises(
SchemaError,
expander.coerce, 42, ["provides"])
self.assertRaises(
SchemaError,
expander.coerce,
{"interface": "http", "optional": None, "scope": "global"}, ["provides"])
self.assertRaises(
SchemaError,
expander.coerce,
{"interface": "http", "limit": "none, really"}, ["provides"])
# can change `limit` default
expander = InterfaceExpander(limit=1)
self.assertEqual(
expander.coerce("http", ["consumes"]),
{"interface": "http", "limit": 1, "optional": False, "scope": "global"})
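The shorthand expansion those assertions describe can be captured in a few lines. This is a sketch of the long form `InterfaceExpander.coerce` produces, minus the schema validation that raises `SchemaError` on bad input:

```python
def expand_interface(spec, limit=None):
    # A bare string like "http" is shorthand for {"interface": "http"};
    # defaults are then filled in, with the `limit` default configurable.
    if isinstance(spec, str):
        spec = {"interface": spec}
    return {"interface": spec["interface"],
            "limit": spec.get("limit", limit),
            "optional": spec.get("optional", False),
            "scope": spec.get("scope", "global")}
```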
juju-0.7.orig/juju/charm/tests/test_provider.py
import os
import inspect
from juju.lib.testing import TestCase
from juju.charm import tests
from juju.charm.provider import get_charm_from_path
sample_directory = os.path.join(
os.path.dirname(inspect.getabsfile(tests)), "repository", "series", "dummy")
class CharmFromPathTest(TestCase):
def test_charm_from_path(self):
# from a directory
charm = get_charm_from_path(sample_directory)
assert charm.get_sha256()
filename = self.makeFile(suffix=".charm")
charm.make_archive(filename)
# and from a bundle
charm = get_charm_from_path(filename)
self.assertEquals(charm.path, filename)
self.assertInstance(charm.get_sha256(), basestring)
# and verify the implementation detail that a second call, after the
# provider callable has been cached, doesn't raise
charm = get_charm_from_path(filename)
self.assertEquals(charm.path, filename)
self.assertInstance(charm.get_sha256(), basestring)
juju-0.7.orig/juju/charm/tests/test_publisher.py 0000644 0000000 0000000 00000017763 12135220114 020246 0 ustar 0000000 0000000 import fcntl
import os
import zookeeper
from twisted.internet.defer import inlineCallbacks, fail
from twisted.python.failure import Failure
from txzookeeper.tests.utils import deleteTree
from juju.charm.bundle import CharmBundle
from juju.charm.directory import CharmDirectory
from juju.charm.publisher import CharmPublisher
from juju.charm.tests import local_charm_id
from juju.lib import under, serializer
from juju.providers.dummy import FileStorage
from juju.state.charm import CharmStateManager
from juju.state.errors import StateChanged
from juju.environment.tests.test_config import (
EnvironmentsConfigTestBase, SAMPLE_ENV)
from juju.lib.mocker import MATCH
from .test_repository import RepositoryTestBase
def _count_open_files():
"""Count this process's open file descriptors above stderr (fd 2)."""
count = 0
for sfd in os.listdir("/proc/self/fd"):
ifd = int(sfd)
if ifd < 3:
continue
try:
fcntl.fcntl(ifd, fcntl.F_GETFD)
count += 1
except IOError:
pass
return count
class CharmPublisherTest(RepositoryTestBase):
@inlineCallbacks
def setUp(self):
super(CharmPublisherTest, self).setUp()
zookeeper.set_debug_level(0)
self.charm = CharmDirectory(self.sample_dir1)
self.charm_id = local_charm_id(self.charm)
self.charm_key = under.quote(self.charm_id)
# provider storage key
self.charm_storage_key = under.quote(
"%s:%s" % (self.charm_id, self.charm.get_sha256()))
self.client = self.get_zookeeper_client()
self.storage_dir = self.makeDir()
self.storage = FileStorage(self.storage_dir)
self.publisher = CharmPublisher(self.client, self.storage)
yield self.client.connect()
yield self.client.create("/charms")
def tearDown(self):
deleteTree("/", self.client.handle)
self.client.close()
super(CharmPublisherTest, self).tearDown()
@inlineCallbacks
def test_add_charm_and_publish(self):
open_file_count = _count_open_files()
yield self.publisher.add_charm(self.charm_id, self.charm)
result = yield self.publisher.publish()
self.assertEquals(_count_open_files(), open_file_count)
children = yield self.client.get_children("/charms")
self.assertEqual(children, [self.charm_key])
fh = yield self.storage.get(self.charm_storage_key)
bundle = CharmBundle(fh)
self.assertEqual(self.charm.get_sha256(), bundle.get_sha256())
self.assertEqual(
result[0].bundle_url, "file://%s/%s" % (
self.storage_dir, self.charm_storage_key))
@inlineCallbacks
def test_published_charm_sans_unicode(self):
yield self.publisher.add_charm(self.charm_id, self.charm)
yield self.publisher.publish()
data, stat = yield self.client.get("/charms/%s" % self.charm_key)
self.assertNotIn("unicode", data)
@inlineCallbacks
def test_add_charm_with_concurrent(self):
"""
Publishing a charm that was concurrently published by another
publisher, after add_charm, works fine: it writes to storage
regardless. Using the sha256 as part of the storage key helps ensure
the uniqueness of the stored bits. The sha256 is also stored with the
charm state. This relation between the charm state and the binary bits
guarantees that any charm published in zookeeper uses the binary bits
it was published with.
"""
yield self.publisher.add_charm(self.charm_id, self.charm)
concurrent_publisher = CharmPublisher(
self.client, self.storage)
charm = CharmDirectory(self.sample_dir1)
yield concurrent_publisher.add_charm(self.charm_id, charm)
yield self.publisher.publish()
# modify the charm to create a conflict scenario
self.makeFile("zebra",
path=os.path.join(self.sample_dir1, "junk.txt"))
# assert the charm now has a different sha post modification
modified_charm_sha = charm.get_sha256()
self.assertNotEqual(
modified_charm_sha,
self.charm.get_sha256())
# verify that publishing raises a StateChanged error
def verify_failure(result):
if not isinstance(result, Failure):
self.fail("Should have raised state error")
result.trap(StateChanged)
return True
yield concurrent_publisher.publish().addBoth(verify_failure)
# verify the zk state
charm_nodes = yield self.client.get_children("/charms")
self.assertEqual(charm_nodes, [self.charm_key])
content, stat = yield self.client.get(
"/charms/%s" % charm_nodes[0])
# assert the checksum matches the initially published checksum
self.assertEqual(
serializer.yaml_load(content)["sha256"],
self.charm.get_sha256())
store_path = os.path.join(self.storage_dir, self.charm_storage_key)
self.assertTrue(os.path.exists(store_path))
# and the modified binary bits were stored
modified_charm_storage_key = under.quote(
"%s:%s" % (self.charm_id, modified_charm_sha))
modified_store_path = os.path.join(
self.storage_dir, modified_charm_storage_key)
self.assertTrue(os.path.exists(modified_store_path))
@inlineCallbacks
def test_add_charm_with_concurrent_removal(self):
"""
If a charm is published and the publisher detects that the charm
already exists, it attempts to retrieve the charm state to verify
there is no checksum mismatch. If the charm is concurrently removed,
the publisher should fail with a StateChanged error.
"""
manager = self.mocker.patch(CharmStateManager)
manager.get_charm_state(self.charm_id)
self.mocker.passthrough()
def match_charm_bundle(bundle):
return isinstance(bundle, CharmBundle)
def match_charm_url(url):
return url.startswith("file://")
manager.add_charm_state(
self.charm_id, MATCH(match_charm_bundle), MATCH(match_charm_url))
self.mocker.result(fail(zookeeper.NodeExistsException()))
manager.get_charm_state(self.charm_id)
self.mocker.result(fail(zookeeper.NoNodeException()))
self.mocker.replay()
yield self.publisher.add_charm(self.charm_id, self.charm)
yield self.failUnlessFailure(self.publisher.publish(), StateChanged)
@inlineCallbacks
def test_add_charm_already_known(self):
"""Adding an already-known charm is effectively a noop: it is not added
to the internal publisher queue again.
"""
output = self.capture_logging("juju.charm")
# Do an initial publishing of the charm
scheduled = yield self.publisher.add_charm(self.charm_id, self.charm)
self.assertTrue(scheduled)
result = yield self.publisher.publish()
self.assertEqual(result[0].name, self.charm.metadata.name)
publisher = CharmPublisher(self.client, self.storage)
scheduled = yield publisher.add_charm(self.charm_id, self.charm)
self.assertFalse(scheduled)
scheduled = yield publisher.add_charm(self.charm_id, self.charm)
self.assertFalse(scheduled)
result = yield publisher.publish()
self.assertEqual(result[0].name, self.charm.metadata.name)
self.assertEqual(result[1].name, self.charm.metadata.name)
self.assertIn(
"Using cached charm version of %s" % self.charm.metadata.name,
output.getvalue())
class EnvironmentPublisherTest(EnvironmentsConfigTestBase):
def setUp(self):
super(EnvironmentPublisherTest, self).setUp()
self.write_config(SAMPLE_ENV)
self.config.load()
self.environment = self.config.get("myfirstenv")
zookeeper.set_debug_level(0)
@inlineCallbacks
def test_publisher_for_environment(self):
publisher = yield CharmPublisher.for_environment(self.environment)
self.assertTrue(isinstance(publisher, CharmPublisher))
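The storage-key scheme these publisher tests depend on (charm id plus content sha256, quoted into a flat key so distinct bits can never collide under one name) can be sketched as follows. The percent-encoding here is illustrative only; juju's real implementation quotes with its own `under.quote`.

```python
import hashlib

try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote  # Python 2

def storage_key(charm_id, bundle_bytes):
    """Derive a provider storage key from a charm id and its bundle bytes."""
    digest = hashlib.sha256(bundle_bytes).hexdigest()
    # safe="" encodes ':' and '/' too, keeping the key namespace flat
    return quote("%s:%s" % (charm_id, digest), safe="")
```

Because the digest is part of the key, a concurrently published charm with different content lands under a different key, which is exactly what `test_add_charm_with_concurrent` asserts.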
juju-0.7.orig/juju/charm/tests/test_repository.py 0000644 0000000 0000000 00000055134 12135220114 020462 0 ustar 0000000 0000000 import json
import os
import inspect
import shutil
from twisted.internet.defer import fail, inlineCallbacks, succeed
from twisted.web.error import Error
from txaws.client.ssl import VerifyingContextFactory
from juju.charm.directory import CharmDirectory
from juju.charm.errors import CharmNotFound, CharmURLError, RepositoryNotFound
from juju.charm.repository import (
LocalCharmRepository, RemoteCharmRepository, resolve, CS_STORE_URL)
from juju.charm.url import CharmURL
from juju.charm import provider
from juju.errors import CharmError
from juju.lib import under
from juju.charm import tests
from juju.lib.mocker import ANY, MATCH
from juju.lib.testing import TestCase
unbundled_repository = os.path.join(
os.path.dirname(inspect.getabsfile(tests)), "repository")
class RepositoryTestBase(TestCase):
@inlineCallbacks
def setUp(self):
yield super(RepositoryTestBase, self).setUp()
self.bundled_repo_path = self.makeDir()
os.mkdir(os.path.join(self.bundled_repo_path, "series"))
self.unbundled_repo_path = self.makeDir()
os.rmdir(self.unbundled_repo_path)
shutil.copytree(unbundled_repository, self.unbundled_repo_path)
self.sample_dir1 = os.path.join(
self.unbundled_repo_path, "series", "old")
self.sample_dir2 = os.path.join(
self.unbundled_repo_path, "series", "new")
class LocalRepositoryTest(RepositoryTestBase):
def setUp(self):
super(LocalRepositoryTest, self).setUp()
# bundle sample charms
CharmDirectory(self.sample_dir1).make_archive(
os.path.join(self.bundled_repo_path, "series", "old.charm"))
CharmDirectory(self.sample_dir2).make_archive(
os.path.join(self.bundled_repo_path, "series", "new.charm"))
# define repository objects
self.repository1 = LocalCharmRepository(self.unbundled_repo_path)
self.repository2 = LocalCharmRepository(self.bundled_repo_path)
self.output = self.capture_logging("juju.charm")
@inlineCallbacks
def assert_there(self, name, repo, revision, latest_revision=None):
url = self.charm_url(name)
charm = yield repo.find(url)
self.assertEquals(charm.get_revision(), revision)
latest = yield repo.latest(url)
self.assertEquals(latest, latest_revision or revision)
@inlineCallbacks
def assert_not_there(self, name, repo, revision=None):
url = self.charm_url(name)
msg = "Charm 'local:series/%s' not found in repository %s" % (
name, repo.path)
err = yield self.assertFailure(repo.find(url), CharmNotFound)
self.assertEquals(str(err), msg)
if revision is None:
err = yield self.assertFailure(repo.latest(url), CharmNotFound)
self.assertEquals(str(err), msg)
def charm_url(self, name):
return CharmURL.parse("local:series/" + name)
def test_no_path(self):
err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, None)
self.assertEquals(str(err), "No repository specified")
def test_bad_path(self):
path = os.path.join(self.makeDir(), "blah")
err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, path)
self.assertEquals(str(err), "No repository found at %r" % path)
with open(path, "w"):
pass
err = self.assertRaises(RepositoryNotFound, LocalCharmRepository, path)
self.assertEquals(str(err), "No repository found at %r" % path)
def test_find_inappropriate_url(self):
url = CharmURL.parse("cs:foo/bar")
err = self.assertRaises(AssertionError, self.repository1.find, url)
self.assertEquals(str(err), "schema mismatch")
def test_completely_missing(self):
return self.assert_not_there("zebra", self.repository1)
def test_unknown_files_ignored(self):
self.makeFile(
"Foobar",
path=os.path.join(self.repository1.path, "series", "zebra"))
return self.assert_not_there("zebra", self.repository1)
@inlineCallbacks
def test_random_error_logged(self):
get_charm = self.mocker.replace(provider.get_charm_from_path)
get_charm(ANY)
self.mocker.throw(SyntaxError("magic"))
self.mocker.count(0, 3)
self.mocker.replay()
yield self.assertFailure(
self.repository1.find(self.charm_url("zebra")),
CharmNotFound)
self.assertIn(
"Unexpected error while processing",
self.output.getvalue())
self.assertIn(
"SyntaxError('magic',)",
self.output.getvalue())
def test_unknown_directories_ignored(self):
self.makeDir(
path=os.path.join(self.repository1.path, "series", "zebra"))
return self.assert_not_there("zebra", self.repository1)
@inlineCallbacks
def test_broken_charm_metadata_ignored(self):
charm_path = self.makeDir(
path=os.path.join(self.repository1.path, "series", "zebra"))
fh = open(os.path.join(charm_path, "metadata.yaml"), "w+")
fh.write("""\
description: helo
name: hi
requires: {interface: zebra
revision: 0
summary: hola""")
fh.close()
yield self.assertFailure(
self.repository1.find(self.charm_url("zebra")), CharmNotFound)
output = self.output.getvalue()
self.assertIn(
"Charm 'zebra' has a YAML error", output)
self.assertIn(
"%s/series/zebra/metadata.yaml" % self.repository1.path, output)
@inlineCallbacks
def test_broken_charm_config_ignored(self):
"""YAML errors propagate to the log, but the search continues."""
fh = open(
os.path.join(
self.repository1.path, "series", "mysql", "config.yaml"),
"w+")
fh.write("""\
description: helo
name: hi
requires: {interface: zebra
revision: 0
summary: hola""")
fh.close()
yield self.repository1.find(self.charm_url("sample"))
output = self.output.getvalue()
self.assertIn(
"Charm 'mysql' has a YAML error", output)
self.assertIn(
"%s/series/mysql/config.yaml" % self.repository1.path, output)
@inlineCallbacks
def test_ignore_dot_files(self):
"""Dot files are ignored when browsing the repository."""
fh = open(
os.path.join(
self.repository1.path, "series", ".foo"),
"w+")
fh.write("Something")
fh.close()
yield self.repository1.find(self.charm_url("sample"))
output = self.output.getvalue()
self.assertNotIn("Charm '.foo' has an error", output)
@inlineCallbacks
def test_invalid_charm_config_ignored(self):
fh = open(
os.path.join(
self.repository1.path, "series", "mysql", "config.yaml"),
"w+")
fh.write("foobar: {}")
fh.close()
stream = self.capture_logging("juju.charm")
yield self.assertFailure(
self.repository1.find(self.charm_url("mysql")), CharmNotFound)
output = stream.getvalue()
self.assertIn(
"Charm 'mysql' has an error", output)
self.assertIn(
"%s/series/mysql/config.yaml" % self.repository1.path, output)
def test_repo_type(self):
self.assertEqual(self.repository1.type, "local")
@inlineCallbacks
def test_success_unbundled(self):
yield self.assert_there("sample", self.repository1, 2)
yield self.assert_there("sample-1", self.repository1, 1, 2)
yield self.assert_there("sample-2", self.repository1, 2)
yield self.assert_not_there("sample-3", self.repository1, 2)
@inlineCallbacks
def test_success_bundled(self):
yield self.assert_there("sample", self.repository2, 2)
yield self.assert_there("sample-1", self.repository2, 1, 2)
yield self.assert_there("sample-2", self.repository2, 2)
yield self.assert_not_there("sample-3", self.repository2, 2)
@inlineCallbacks
def test_no_revision_gets_latest(self):
yield self.assert_there("sample", self.repository1, 2)
yield self.assert_there("sample-1", self.repository1, 1, 2)
yield self.assert_there("sample-2", self.repository1, 2)
yield self.assert_not_there("sample-3", self.repository1, 2)
revision_path = os.path.join(
self.repository1.path, "series/old/revision")
with open(revision_path, "w") as f:
f.write("3")
yield self.assert_there("sample", self.repository1, 3)
yield self.assert_not_there("sample-1", self.repository1, 3)
yield self.assert_there("sample-2", self.repository1, 2, 3)
yield self.assert_there("sample-3", self.repository1, 3)
class RemoteRepositoryTest(RepositoryTestBase):
def setUp(self):
super(RemoteRepositoryTest, self).setUp()
self.cache_path = os.path.join(
self.makeDir(), "notexistyet")
self.download_path = os.path.join(self.cache_path, "downloads")
def delete():
if os.path.exists(self.cache_path):
shutil.rmtree(self.cache_path)
self.addCleanup(delete)
self.charm = CharmDirectory(
os.path.join(self.unbundled_repo_path, "series", "dummy"))
with open(self.charm.as_bundle().path, "rb") as f:
self.bundle_data = f.read()
self.sha256 = self.charm.as_bundle().get_sha256()
self.getPage = self.mocker.replace("twisted.web.client.getPage")
self.downloadPage = self.mocker.replace(
"twisted.web.client.downloadPage")
def repo(self, url_base):
return RemoteCharmRepository(url_base, self.cache_path)
def cache_location(self, url_str, revision):
charm_url = CharmURL.parse(url_str)
cache_key = under.quote(
"%s.charm" % (charm_url.with_revision(revision)))
return os.path.join(self.cache_path, cache_key)
def charm_info(self, url_str, revision, warnings=None, errors=None):
info = {"revision": revision, "sha256": self.sha256}
if errors:
info["errors"] = errors
if warnings:
info["warnings"] = warnings
return json.dumps({url_str: info})
def mock_charm_info(self, url, result):
def match_context(value):
return isinstance(value, VerifyingContextFactory)
self.getPage(url, contextFactory=MATCH(match_context))
self.mocker.result(result)
def mock_download(self, url, error=None):
def match_context(value):
return isinstance(value, VerifyingContextFactory)
self.downloadPage(url, ANY, contextFactory=MATCH(match_context))
if error:
return self.mocker.result(fail(error))
def download(_, path, contextFactory):
self.assertTrue(path.startswith(self.download_path))
with open(path, "wb") as f:
f.write(self.bundle_data)
return succeed(None)
self.mocker.call(download)
@inlineCallbacks
def assert_find_uncached(self, dns_name, url_str, info_url, find_url):
self.mock_charm_info(info_url, succeed(self.charm_info(url_str, 1)))
self.mock_download(find_url)
self.mocker.replay()
repo = self.repo(dns_name)
charm = yield repo.find(CharmURL.parse(url_str))
self.assertEquals(charm.get_sha256(), self.sha256)
self.assertEquals(charm.path, self.cache_location(url_str, 1))
self.assertEquals(os.listdir(self.download_path), [])
@inlineCallbacks
def assert_find_cached(self, dns_name, url_str, info_url):
os.makedirs(self.cache_path)
cache_location = self.cache_location(url_str, 1)
shutil.copy(self.charm.as_bundle().path, cache_location)
self.mock_charm_info(info_url, succeed(self.charm_info(url_str, 1)))
self.mocker.replay()
repo = self.repo(dns_name)
charm = yield repo.find(CharmURL.parse(url_str))
self.assertEquals(charm.get_sha256(), self.sha256)
self.assertEquals(charm.path, cache_location)
def assert_find_error(self, dns_name, url_str, err_type, message):
self.mocker.replay()
repo = self.repo(dns_name)
d = self.assertFailure(repo.find(CharmURL.parse(url_str)), err_type)
def verify(error):
self.assertEquals(str(error), message)
d.addCallback(verify)
return d
@inlineCallbacks
def assert_latest(self, dns_name, url_str, revision):
self.mocker.replay()
repo = self.repo(dns_name)
result = yield repo.latest(CharmURL.parse(url_str))
self.assertEquals(result, revision)
def assert_latest_error(self, dns_name, url_str, err_type, message):
self.mocker.replay()
repo = self.repo(dns_name)
d = self.assertFailure(repo.latest(CharmURL.parse(url_str)), err_type)
def verify(error):
self.assertEquals(str(error), message)
d.addCallback(verify)
return d
def test_find_plain_uncached_no_stat(self):
self.change_environment(JUJU_TESTING="1")
return self.assert_find_uncached(
"https://somewhe.re", "cs:series/name",
"https://somewhe.re/charm-info?charms=cs%3Aseries/name&stats=0",
"https://somewhe.re/charm/series/name-1?stats=0")
def test_find_plain_uncached(self):
return self.assert_find_uncached(
"https://somewhe.re", "cs:series/name",
"https://somewhe.re/charm-info?charms=cs%3Aseries/name",
"https://somewhe.re/charm/series/name-1")
def test_find_revision_uncached(self):
return self.assert_find_uncached(
"https://somewhe.re", "cs:series/name-1",
"https://somewhe.re/charm-info?charms=cs%3Aseries/name-1",
"https://somewhe.re/charm/series/name-1")
def test_find_user_uncached(self):
return self.assert_find_uncached(
"https://somewhereel.se", "cs:~user/srs/name",
"https://somewhereel.se/charm-info?charms=cs%3A%7Euser/srs/name",
"https://somewhereel.se/charm/%7Euser/srs/name-1")
def test_find_plain_cached(self):
return self.assert_find_cached(
"https://somewhe.re", "cs:series/name",
"https://somewhe.re/charm-info?charms=cs%3Aseries/name")
def test_find_revision_cached(self):
return self.assert_find_cached(
"https://somewhe.re", "cs:series/name-1",
"https://somewhe.re/charm-info?charms=cs%3Aseries/name-1")
def test_find_user_cached(self):
return self.assert_find_cached(
"https://somewhereel.se", "cs:~user/srs/name",
"https://somewhereel.se/charm-info?charms=cs%3A%7Euser/srs/name")
def test_find_info_http_error(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
fail(Error("500")))
return self.assert_find_error(
"https://anoth.er", "cs:series/name", CharmNotFound,
"Charm 'cs:series/name' not found in repository https://anoth.er")
@inlineCallbacks
def test_find_info_store_warning(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name-1",
succeed(self.charm_info(
"cs:series/name-1", 1, warnings=["omg", "halp"])))
self.mock_download("https://anoth.er/charm/series/name-1")
self.mocker.replay()
repo = self.repo("https://anoth.er")
log = self.capture_logging("juju.charm")
charm = yield repo.find(CharmURL.parse("cs:series/name-1"))
self.assertIn("omg", log.getvalue())
self.assertIn("halp", log.getvalue())
self.assertEquals(charm.get_sha256(), self.sha256)
def test_find_info_store_error(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name-101",
succeed(self.charm_info(
"cs:series/name-101", 101, errors=["oh", "noes"])))
return self.assert_find_error(
"https://anoth.er", "cs:series/name-101", CharmError,
"Error processing 'cs:series/name-101': oh; noes")
def test_find_info_bad_revision(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name-99",
succeed(self.charm_info("cs:series/name-99", 1)))
return self.assert_find_error(
"https://anoth.er", "cs:series/name-99", AssertionError,
"bad url revision")
def test_find_download_error(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(json.dumps({"cs:series/name": {"revision": 123}})))
self.mock_download(
"https://anoth.er/charm/series/name-123", Error("999"))
return self.assert_find_error(
"https://anoth.er", "cs:series/name", CharmNotFound,
"Charm 'cs:series/name-123' not found in repository "
"https://anoth.er")
def test_find_charm_revision_mismatch(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(json.dumps({"cs:series/name": {"revision": 99}})))
self.mock_download("https://anoth.er/charm/series/name-99")
return self.assert_find_error(
"https://anoth.er", "cs:series/name", AssertionError,
"bad charm revision")
@inlineCallbacks
def test_find_downloaded_hash_mismatch(self):
cache_location = self.cache_location("cs:series/name-1", 1)
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(json.dumps(
{"cs:series/name": {"revision": 1, "sha256": "NO YUO"}})))
self.mock_download("https://anoth.er/charm/series/name-1")
yield self.assert_find_error(
"https://anoth.er", "cs:series/name", CharmError,
"Error processing 'cs:series/name-1 (downloaded)': SHA256 "
"mismatch")
self.assertFalse(os.path.exists(cache_location))
@inlineCallbacks
def test_find_cached_hash_mismatch(self):
os.makedirs(self.cache_path)
cache_location = self.cache_location("cs:series/name-1", 1)
shutil.copy(self.charm.as_bundle().path, cache_location)
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(json.dumps(
{"cs:series/name": {"revision": 1, "sha256": "NO YUO"}})))
yield self.assert_find_error(
"https://anoth.er", "cs:series/name", CharmError,
"Error processing 'cs:series/name-1 (cached)': SHA256 mismatch")
self.assertFalse(os.path.exists(cache_location))
def test_latest_plain(self):
self.mock_charm_info(
"https://somewhe.re/charm-info?charms=cs%3Afoo/bar",
succeed(self.charm_info("cs:foo/bar", 99)))
return self.assert_latest("https://somewhe.re", "cs:foo/bar-1", 99)
def test_latest_user(self):
self.mock_charm_info(
"https://somewhereel.se/charm-info?charms=cs%3A%7Efee/foo/bar",
succeed(self.charm_info("cs:~fee/foo/bar", 123)))
return self.assert_latest(
"https://somewhereel.se", "cs:~fee/foo/bar", 123)
def test_latest_revision(self):
self.mock_charm_info(
"https://somewhereel.se/charm-info?charms=cs%3A%7Efee/foo/bar",
succeed(self.charm_info("cs:~fee/foo/bar", 123)))
return self.assert_latest(
"https://somewhereel.se", "cs:~fee/foo/bar-99", 123)
def test_latest_http_error(self):
self.mock_charm_info(
"https://andanoth.er/charm-info?charms=cs%3A%7Eblib/blab/blob",
fail(Error("404")))
return self.assert_latest_error(
"https://andanoth.er", "cs:~blib/blab/blob", CharmNotFound,
"Charm 'cs:~blib/blab/blob' not found in repository "
"https://andanoth.er")
@inlineCallbacks
def test_latest_store_warning(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(self.charm_info(
"cs:series/name", 1, warnings=["eww", "yuck"])))
self.mocker.replay()
repo = self.repo("https://anoth.er")
log = self.capture_logging("juju.charm")
revision = yield repo.latest(CharmURL.parse("cs:series/name-1"))
self.assertIn("eww", log.getvalue())
self.assertIn("yuck", log.getvalue())
self.assertEquals(revision, 1)
def test_latest_store_error(self):
self.mock_charm_info(
"https://anoth.er/charm-info?charms=cs%3Aseries/name",
succeed(self.charm_info(
"cs:series/name", 1, errors=["blam", "dink"])))
return self.assert_latest_error(
"https://anoth.er", "cs:series/name-1", CharmError,
"Error processing 'cs:series/name': blam; dink")
def test_repo_type(self):
self.mocker.replay()
self.assertEqual(self.repo("http://fbaro.com").type, "store")
class ResolveTest(RepositoryTestBase):
def assert_resolve_local(self, vague, default, expect):
path = self.makeDir()
repo, url = resolve(vague, path, default)
self.assertEquals(str(url), expect)
self.assertTrue(isinstance(repo, LocalCharmRepository))
self.assertEquals(repo.path, path)
def test_resolve_local(self):
self.assert_resolve_local(
"local:series/sample", "default", "local:series/sample")
self.assert_resolve_local(
"local:sample", "default", "local:default/sample")
def assert_resolve_remote(self, vague, default, expect):
repo, url = resolve(vague, None, default)
self.assertEquals(str(url), expect)
self.assertTrue(isinstance(repo, RemoteCharmRepository))
self.assertEquals(repo.url_base, CS_STORE_URL)
def test_resolve_remote(self):
self.assert_resolve_remote(
"sample", "default", "cs:default/sample")
self.assert_resolve_remote(
"series/sample", "default", "cs:series/sample")
self.assert_resolve_remote(
"cs:sample", "default", "cs:default/sample")
self.assert_resolve_remote(
"cs:series/sample", "default", "cs:series/sample")
self.assert_resolve_remote(
"cs:~user/sample", "default", "cs:~user/default/sample")
self.assert_resolve_remote(
"cs:~user/series/sample", "default", "cs:~user/series/sample")
def test_resolve_nonsense(self):
error = self.assertRaises(
CharmURLError, resolve, "blah:whatever", None, "series")
self.assertEquals(
str(error),
"Bad charm URL 'blah:series/whatever': invalid schema (URL "
"inferred from 'blah:whatever')")
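The resolution behaviour ResolveTest pins down can be sketched as a small pure function. This illustrates the observed name-to-URL mapping only; the real `resolve` also selects the repository object and validates the result through `CharmURL`.

```python
def infer_charm_url(vague, default_series):
    """Expand a vague charm name into a full charm URL string.

    A missing schema defaults to "cs"; when no series is given, the
    provided default series is inserted after any ~user component.
    """
    if ":" in vague:
        schema, rest = vague.split(":", 1)
    else:
        schema, rest = "cs", vague
    parts = rest.split("/")
    user = None
    if parts[0].startswith("~"):
        user, parts = parts[0], parts[1:]
    if len(parts) == 1:  # only a name: insert the default series
        parts.insert(0, default_series)
    path = "/".join(parts)
    if user is not None:
        path = "%s/%s" % (user, path)
    return "%s:%s" % (schema, path)
```

Note that this sketch does not reject bad schemas such as "blah:" — in the real code that check happens when the inferred URL is parsed.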
juju-0.7.orig/juju/charm/tests/test_url.py 0000644 0000000 0000000 00000012764 12135220114 017047 0 ustar 0000000 0000000 from juju.charm.errors import CharmURLError
from juju.charm.url import CharmCollection, CharmURL
from juju.lib.testing import TestCase
class CharmCollectionTest(TestCase):
def test_str(self):
self.assertEquals(
str(CharmCollection("foo", "bar", "baz")), "foo:~bar/baz")
self.assertEquals(
str(CharmCollection("ping", None, "pong")), "ping:pong")
class CharmURLTest(TestCase):
def assert_url(self, url, schema, user, series, name, rev):
self.assertEquals(url.collection.schema, schema)
self.assertEquals(url.collection.user, user)
self.assertEquals(url.collection.series, series)
self.assertEquals(url.name, name)
self.assertEquals(url.revision, rev)
def assert_error(self, err, url_str, message):
self.assertEquals(
str(err), "Bad charm URL %r: %s" % (url_str, message))
def assert_parse(self, string, schema, user, series, name, rev):
url = CharmURL.parse(string)
self.assert_url(url, schema, user, series, name, rev)
self.assertEquals(str(url), string)
self.assertEquals(url.path, string.split(":", 1)[1])
def test_parse(self):
self.assert_parse(
"cs:~user/series/name", "cs", "user", "series", "name", None)
self.assert_parse(
"cs:~user/series/name-0", "cs", "user", "series", "name", 0)
self.assert_parse(
"cs:series/name", "cs", None, "series", "name", None)
self.assert_parse(
"cs:series/name-0", "cs", None, "series", "name", 0)
self.assert_parse(
"cs:series/name0", "cs", None, "series", "name0", None)
self.assert_parse(
"cs:series/n0-0n-n0", "cs", None, "series", "n0-0n-n0", None)
self.assert_parse(
"local:series/name", "local", None, "series", "name", None)
self.assert_parse(
"local:series/name-0", "local", None, "series", "name", 0)
def assert_cannot_parse(self, string, message):
err = self.assertRaises(CharmURLError, CharmURL.parse, string)
self.assert_error(err, string, message)
def test_cannot_parse(self):
self.assert_cannot_parse(
None, "not a string type")
self.assert_cannot_parse(
"series/name-1", "no schema specified")
self.assert_cannot_parse(
"bs:~user/series/name-1", "invalid schema")
self.assert_cannot_parse(
"cs:~1/series/name-1", "invalid user")
self.assert_cannot_parse(
"cs:~user/1/name-1", "invalid series")
self.assert_cannot_parse(
"cs:~user/series/name-1-2", "invalid name")
self.assert_cannot_parse(
"cs:~user/series/name-1-n-2", "invalid name")
self.assert_cannot_parse(
"cs:~user/series/name--a-2", "invalid name")
self.assert_cannot_parse(
"cs:~user/series/huh/name-1", "invalid form")
self.assert_cannot_parse(
"cs:~user/name", "no series specified")
self.assert_cannot_parse(
"cs:name", "invalid form")
self.assert_cannot_parse(
"local:~user/series/name", "users not allowed in local URLs")
self.assert_cannot_parse(
"local:~user/name", "users not allowed in local URLs")
self.assert_cannot_parse(
"local:name", "invalid form")
def test_revision(self):
url1 = CharmURL.parse("cs:foo/bar")
error = self.assertRaises(CharmURLError, url1.assert_revision)
self.assertEquals(
str(error), "Bad charm URL 'cs:foo/bar': expected a revision")
url2 = url1.with_revision(0)
url1.collection.schema = "local" # change url1, verify deep copied
url2.assert_revision()
self.assertEquals(str(url2), "cs:foo/bar-0")
url3 = url2.with_revision(999)
url3.assert_revision()
self.assertEquals(str(url3), "cs:foo/bar-999")
def assert_infer(self, string, schema, user, series, name, rev):
url = CharmURL.infer(string, "default")
self.assert_url(url, schema, user, series, name, rev)
def test_infer(self):
self.assert_infer(
"name", "cs", None, "default", "name", None)
self.assert_infer(
"name-0", "cs", None, "default", "name", 0)
self.assert_infer(
"series/name", "cs", None, "series", "name", None)
self.assert_infer(
"series/name-0", "cs", None, "series", "name", 0)
self.assert_infer(
"cs:name", "cs", None, "default", "name", None)
self.assert_infer(
"cs:name-0", "cs", None, "default", "name", 0)
self.assert_infer(
"cs:~user/name", "cs", "user", "default", "name", None)
self.assert_infer(
"cs:~user/name-0", "cs", "user", "default", "name", 0)
self.assert_infer(
"local:name", "local", None, "default", "name", None)
self.assert_infer(
"local:name-0", "local", None, "default", "name", 0)
def test_cannot_infer(self):
err = self.assertRaises(
CharmURLError, CharmURL.infer, "name", "invalid!series")
self.assertEquals(
str(err),
"Bad charm URL 'cs:invalid!series/name': invalid series (URL "
"inferred from 'name')")
err = self.assertRaises(
CharmURLError, CharmURL.infer, "~user/name", "default")
self.assertEquals(
str(err),
"Bad charm URL '~user/name': a URL with a user must specify a "
"schema")
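The URL grammar the parse tests above pin down can be approximated with a single regular expression: `cs:[~user/]series/name[-revision]`, where a hyphenated name segment must contain a letter (so a trailing `-0` is a revision) and `local:` URLs may not carry a user. This is a hypothetical sketch, not juju's `CharmURL`, which reports a distinct error message for each failure mode.

```python
import re

_CHARM_URL = re.compile(
    r"^(?P<schema>cs|local):"
    r"(?:~(?P<user>[a-z][a-z0-9]*)/)?"
    r"(?P<series>[a-z]+)/"
    # each extra name segment needs at least one letter, so a pure
    # numeric suffix is left for the revision group below
    r"(?P<name>[a-z][a-z0-9]*(?:-[a-z0-9]*[a-z][a-z0-9]*)*)"
    r"(?:-(?P<revision>\d+))?$")

def parse_charm_url(url):
    """Parse a charm URL into its components, or raise ValueError."""
    match = _CHARM_URL.match(url)
    if match is None or (
            match.group("schema") == "local" and match.group("user")):
        raise ValueError("bad charm URL: %r" % (url,))
    parts = match.groupdict()
    if parts["revision"] is not None:
        parts["revision"] = int(parts["revision"])
    return parts
```

The letter-per-segment rule is what makes `cs:series/n0-0n-n0` parse as an unrevisioned name while `cs:series/name-0` parses as name plus revision 0, matching the assertions in `test_parse`.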
juju-0.7.orig/juju/charm/tests/repository/series/ 0000755 0000000 0000000 00000000000 12135220114 020333 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/configtest/ 0000755 0000000 0000000 00000000000 12135220114 022500 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/ 0000755 0000000 0000000 00000000000 12135220114 021466 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/ 0000755 0000000 0000000 00000000000 12135220114 022333 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/logging/ 0000755 0000000 0000000 00000000000 12135220114 021761 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/mysql/ 0000755 0000000 0000000 00000000000 12135220114 021500 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/ 0000755 0000000 0000000 00000000000 12135220114 024014 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/ 0000755 0000000 0000000 00000000000 12135220114 023313 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/new/ 0000755 0000000 0000000 00000000000 12135220114 021124 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/old/ 0000755 0000000 0000000 00000000000 12135220114 021111 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/riak/ 0000755 0000000 0000000 00000000000 12135220114 021261 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/varnish/ 0000755 0000000 0000000 00000000000 12135220114 022005 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/ 0000755 0000000 0000000 00000000000 12135220114 024321 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/wordpress/ 0000755 0000000 0000000 00000000000 12135220114 022363 5 ustar 0000000 0000000 
juju-0.7.orig/juju/charm/tests/repository/series/configtest/config.yaml 0000644 0000000 0000000 00000000235 12135220114 024631 0 ustar 0000000 0000000 options:
foo:
type: string
default: "foo-default"
description: "Foo"
bar:
type: string
default: "bar-default"
description: "Bar"
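A minimal sketch (not juju's actual implementation) of how defaults from a config.yaml schema like the configtest fixture above get applied to user-supplied settings; the schema dict simply mirrors the fixture:

```python
# Schema shaped like the configtest config.yaml fixture above.
SCHEMA = {
    "foo": {"type": "string", "default": "foo-default", "description": "Foo"},
    "bar": {"type": "string", "default": "bar-default", "description": "Bar"},
}

def apply_defaults(schema, user_settings):
    """Return effective settings: user values where given, defaults otherwise."""
    effective = {}
    for name, meta in schema.items():
        if name in user_settings:
            effective[name] = user_settings[name]
        elif "default" in meta:
            effective[name] = meta["default"]
    return effective
```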
juju-0.7.orig/juju/charm/tests/repository/series/configtest/hooks/ 0000755 0000000 0000000 00000000000 12135220114 023623 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/configtest/metadata.yaml 0000644 0000000 0000000 00000000174 12135220114 025146 0 ustar 0000000 0000000 name: configtest
summary: "Testing Defaults"
description: "Test for bug #873643"
provides:
website:
interface: http
juju-0.7.orig/juju/charm/tests/repository/series/configtest/revision 0000644 0000000 0000000 00000000002 12135220114 024251 0 ustar 0000000 0000000 1
juju-0.7.orig/juju/charm/tests/repository/series/dummy/.dir/ 0000755 0000000 0000000 00000000000 12135220114 022322 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/.ignored 0000644 0000000 0000000 00000000001 12135220114 023105 0 ustar 0000000 0000000 # juju-0.7.orig/juju/charm/tests/repository/series/dummy/build/ 0000755 0000000 0000000 00000000000 12135220114 022565 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/config.yaml 0000644 0000000 0000000 00000000543 12135220114 023621 0 ustar 0000000 0000000 options:
title: {default: My Title, description: A descriptive title used for the service., type: string}
outlook: {description: No default outlook., type: string}
username: {default: admin001, description: The name of the initial account (given admin permissions)., type: string}
skill-level: {description: A number indicating skill., type: int}
juju-0.7.orig/juju/charm/tests/repository/series/dummy/empty/ 0000755 0000000 0000000 00000000000 12135220114 022624 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/hooks/ 0000755 0000000 0000000 00000000000 12135220114 022611 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/metadata.yaml 0000644 0000000 0000000 00000000214 12135220114 024127 0 ustar 0000000 0000000 name: dummy
summary: "That's a dummy charm."
description: |
This is a longer description which
potentially contains multiple lines.
juju-0.7.orig/juju/charm/tests/repository/series/dummy/revision 0000644 0000000 0000000 00000000001 12135220114 023236 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/dummy/src/ 0000755 0000000 0000000 00000000000 12135220114 022255 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/.dir/ignored 0000644 0000000 0000000 00000000000 12135220114 023662 0 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/build/ignored 0000644 0000000 0000000 00000000000 12135220114 024125 0 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/dummy/hooks/install 0000755 0000000 0000000 00000000031 12135220114 024177 0 ustar 0000000 0000000 #!/bin/bash
echo "Done!"
juju-0.7.orig/juju/charm/tests/repository/series/dummy/src/hello.c 0000644 0000000 0000000 00000000114 12135220114 023520 0 ustar 0000000 0000000 #include <stdio.h>
main()
{
printf ("Hello World!\n");
return 0;
}
juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/config.yaml 0000644 0000000 0000000 00000000157 12135220114 024467 0 ustar 0000000 0000000 options:
blog-title: {default: My Title, description: A descriptive title used for the blog., type: string}
juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/metadata.yaml 0000644 0000000 0000000 00000000433 12135220114 024777 0 ustar 0000000 0000000 name: funkyblog
summary: "Blog engine"
description: "A funky blog engine"
provides:
url:
interface: http
limit:
optional: false
requires:
write-db:
interface: mysql
limit: 1
optional: false
read-db:
interface: mysql
limit: 1
optional: false
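An illustrative sketch of flattening a charm's relation declarations, using a dict shaped like the funkyblog metadata.yaml above; the helper name is hypothetical, not a juju API:

```python
# Metadata shaped like the funkyblog fixture above (only the fields we need).
METADATA = {
    "name": "funkyblog",
    "provides": {"url": {"interface": "http"}},
    "requires": {
        "write-db": {"interface": "mysql", "limit": 1, "optional": False},
        "read-db": {"interface": "mysql", "limit": 1, "optional": False},
    },
}

def endpoints(metadata):
    """Yield (role, relation_name, interface) for each declared relation."""
    for role in ("provides", "requires", "peers"):
        for name, spec in metadata.get(role, {}).items():
            yield (role, name, spec["interface"])
```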
juju-0.7.orig/juju/charm/tests/repository/series/funkyblog/revision 0000644 0000000 0000000 00000000001 12135220114 024103 0 ustar 0000000 0000000 3 juju-0.7.orig/juju/charm/tests/repository/series/logging/.ignored 0000644 0000000 0000000 00000000001 12135220114 023400 0 ustar 0000000 0000000 # juju-0.7.orig/juju/charm/tests/repository/series/logging/hooks/ 0000755 0000000 0000000 00000000000 12135220114 023104 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/logging/metadata.yaml 0000644 0000000 0000000 00000000601 12135220114 024422 0 ustar 0000000 0000000 name: logging
summary: "Subordinate logging test charm"
description: |
This is a longer description which
potentially contains multiple lines.
subordinate: true
provides:
logging-client:
interface: logging
requires:
logging-directory:
interface: logging
scope: container
juju-info-fallback:
interface: juju-info
scope: container juju-0.7.orig/juju/charm/tests/repository/series/logging/revision 0000644 0000000 0000000 00000000001 12135220114 023531 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/logging/hooks/install 0000755 0000000 0000000 00000000031 12135220114 024472 0 ustar 0000000 0000000 #!/bin/bash
echo "Done!"
juju-0.7.orig/juju/charm/tests/repository/series/mysql/config.yaml 0000644 0000000 0000000 00000001607 12135220114 023635 0 ustar 0000000 0000000 options:
query-cache-size:
default: -1
type: int
description: Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own.
awesome:
default: false
type: boolean
description: Set true to make this database engine truly awesome
tuning-level:
default: safest
type: string
description: Valid values are 'safest', 'fast', and 'unsafe'. If set to safest, all settings are tuned to have maximum safety at the cost of performance. Fast will turn off most controls, but may lose data on crashes. unsafe will turn off all protections.
monkey-madness:
default: 0.5
type: float
description: The amount of randomness to be desired in any data that is returned, from 0 (sane) to 1 (monkeys running the asylum).
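A hedged sketch of coercing raw option values to the types declared in the mysql config.yaml above (string, int, boolean, float); the cast table is illustrative, not juju's validator:

```python
# Casts for the four option types used in the mysql fixture above.
CASTS = {
    "string": str,
    "int": int,
    "float": float,
    "boolean": lambda v: str(v).lower() in ("true", "1", "yes"),
}

def coerce(value, type_name):
    """Cast a raw (often string) option value to its declared type."""
    return CASTS[type_name](value)
```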
juju-0.7.orig/juju/charm/tests/repository/series/mysql/metadata.yaml 0000644 0000000 0000000 00000000164 12135220114 024145 0 ustar 0000000 0000000 name: mysql
summary: "Database engine"
description: "A pretty popular database"
provides:
server: mysql
format: 1
juju-0.7.orig/juju/charm/tests/repository/series/mysql/revision 0000644 0000000 0000000 00000000001 12135220114 023250 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/metadata.yaml 0000644 0000000 0000000 00000000254 12135220114 026461 0 ustar 0000000 0000000 name: mysql-alternative
summary: "Database engine"
description: "A pretty popular database"
provides:
prod:
interface: mysql
dev:
interface: mysql
limit: 2
juju-0.7.orig/juju/charm/tests/repository/series/mysql-alternative/revision 0000644 0000000 0000000 00000000001 12135220114 025564 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/config.yaml 0000644 0000000 0000000 00000001607 12135220114 025450 0 ustar 0000000 0000000 options:
query-cache-size:
default: -1
type: int
description: Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own.
awesome:
default: false
type: boolean
description: Set true to make this database engine truly awesome
tuning-level:
default: safest
type: string
description: Valid values are 'safest', 'fast', and 'unsafe'. If set to safest, all settings are tuned to have maximum safety at the cost of performance. Fast will turn off most controls, but may lose data on crashes. unsafe will turn off all protections.
monkey-madness:
default: 0.5
type: float
description: The amount of randomness to be desired in any data that is returned, from 0 (sane) to 1 (monkeys running the asylum).
juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/metadata.yaml 0000644 0000000 0000000 00000000176 12135220114 025763 0 ustar 0000000 0000000 name: mysql-format-v2
summary: "Database engine"
description: "A pretty popular database"
provides:
server: mysql
format: 2
juju-0.7.orig/juju/charm/tests/repository/series/mysql-format-v2/revision 0000644 0000000 0000000 00000000001 12135220114 025063 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/new/metadata.yaml 0000644 0000000 0000000 00000000216 12135220114 023567 0 ustar 0000000 0000000 name: sample
summary: "That's a sample charm."
description: |
This is a longer description which
potentially contains multiple lines.
juju-0.7.orig/juju/charm/tests/repository/series/new/revision 0000644 0000000 0000000 00000000002 12135220114 022675 0 ustar 0000000 0000000 2
juju-0.7.orig/juju/charm/tests/repository/series/old/metadata.yaml 0000644 0000000 0000000 00000000216 12135220114 023554 0 ustar 0000000 0000000 name: sample
summary: "That's a sample charm."
description: |
This is a longer description which
potentially contains multiple lines.
juju-0.7.orig/juju/charm/tests/repository/series/old/revision 0000644 0000000 0000000 00000000002 12135220114 022662 0 ustar 0000000 0000000 1
juju-0.7.orig/juju/charm/tests/repository/series/riak/metadata.yaml 0000644 0000000 0000000 00000000317 12135220114 023726 0 ustar 0000000 0000000 name: riak
summary: "K/V storage engine"
description: "Scalable K/V Store in Erlang with Clocks :-)"
provides:
endpoint:
interface: http
admin:
interface: http
peers:
ring:
interface: riak
juju-0.7.orig/juju/charm/tests/repository/series/riak/revision 0000644 0000000 0000000 00000000001 12135220114 023031 0 ustar 0000000 0000000 7 juju-0.7.orig/juju/charm/tests/repository/series/varnish/metadata.yaml 0000644 0000000 0000000 00000000157 12135220114 024454 0 ustar 0000000 0000000 name: varnish
summary: "Database engine"
description: "Another popular database"
provides:
webcache: varnish
juju-0.7.orig/juju/charm/tests/repository/series/varnish/revision 0000644 0000000 0000000 00000000001 12135220114 023555 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/hooks/ 0000755 0000000 0000000 00000000000 12135220114 025444 5 ustar 0000000 0000000 juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/metadata.yaml 0000644 0000000 0000000 00000000173 12135220114 026766 0 ustar 0000000 0000000 name: varnish-alternative
summary: "Database engine"
description: "Another popular database"
provides:
webcache: varnish
juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/revision 0000644 0000000 0000000 00000000001 12135220114 026071 0 ustar 0000000 0000000 1 juju-0.7.orig/juju/charm/tests/repository/series/varnish-alternative/hooks/install 0000755 0000000 0000000 00000000035 12135220114 027036 0 ustar 0000000 0000000 #!/bin/bash
echo hello world juju-0.7.orig/juju/charm/tests/repository/series/wordpress/config.yaml 0000644 0000000 0000000 00000000157 12135220114 024517 0 ustar 0000000 0000000 options:
blog-title: {default: My Title, description: A descriptive title used for the blog., type: string}
juju-0.7.orig/juju/charm/tests/repository/series/wordpress/metadata.yaml 0000644 0000000 0000000 00000000437 12135220114 025033 0 ustar 0000000 0000000 name: wordpress
summary: "Blog engine"
description: "A pretty popular blog engine"
provides:
url:
interface: http
limit:
optional: false
requires:
db:
interface: mysql
limit: 1
optional: false
cache:
interface: varnish
limit: 2
optional: true
juju-0.7.orig/juju/charm/tests/repository/series/wordpress/revision 0000644 0000000 0000000 00000000001 12135220114 024133 0 ustar 0000000 0000000 3 juju-0.7.orig/juju/control/__init__.py 0000644 0000000 0000000 00000012032 12135220114 016155 0 ustar 0000000 0000000 import argparse
import logging
import sys
import zookeeper
from .command import Commander
from .utils import ParseError
from juju.environment.config import EnvironmentsConfig
from juju import __version__
import add_relation
import add_unit
import bootstrap
import config_get
import config_set
import constraints_get
import constraints_set
import debug_hooks
import debug_log
import deploy
import destroy_environment
import destroy_service
import expose
import open_tunnel
import remove_relation
import remove_unit
import resolved
import scp
import status
import ssh
import terminate_machine
import unexpose
import upgrade_charm
import initialize
SUBCOMMANDS = [
add_relation,
add_unit,
bootstrap,
config_get,
config_set,
constraints_get,
constraints_set,
debug_log,
debug_hooks,
deploy,
destroy_environment,
destroy_service,
expose,
open_tunnel,
remove_relation,
remove_unit,
resolved,
scp,
status,
ssh,
terminate_machine,
unexpose,
upgrade_charm
]
ADMIN_SUBCOMMANDS = [
initialize]
log = logging.getLogger("juju.control.cli")
class JujuParser(argparse.ArgumentParser):
def add_subparsers(self, **kwargs):
kwargs.setdefault("parser_class", argparse.ArgumentParser)
return super(JujuParser, self).add_subparsers(**kwargs)
def error(self, message):
self.print_help(sys.stderr)
self.exit(2, '%s: error: %s\n' % (self.prog, message))
class JujuFormatter(argparse.HelpFormatter):
def _metavar_formatter(self, action, default_metavar):
"""Override to get rid of redundant printing of positional args.
"""
if action.metavar is not None:
result = action.metavar
elif default_metavar == "==SUPPRESS==":
result = ""
else:
result = default_metavar
def format(tuple_size):
if isinstance(result, tuple):
return result
else:
return (result, ) * tuple_size
return format
def setup_parser(subcommands, **kw):
"""Setup a command line argument/option parser."""
parser = JujuParser(formatter_class=JujuFormatter, **kw)
parser.add_argument(
"--verbose", "-v", default=False,
action="store_true",
help="Enable verbose logging")
parser.add_argument(
"--version", action="version", version='juju %s' % (__version__))
parser.add_argument(
"--log-file", "-l", default=sys.stderr, type=argparse.FileType('a'),
help="Log output to file")
subparsers = parser.add_subparsers()
for module in subcommands:
configure_subparser = getattr(module, "configure_subparser", None)
passthrough = getattr(module, "passthrough", None)
if configure_subparser:
sub_parser = configure_subparser(subparsers)
else:
sub_parser = subparsers.add_parser(
module.__name__.split('.')[-1], help=module.command.__doc__)
sub_parser.set_defaults(
command=Commander(module.command, passthrough=passthrough),
parser=sub_parser)
return parser
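A stripped-down sketch of the setup_parser() pattern above: each subcommand registers a parser and stores its handler via set_defaults(); the demo command itself is hypothetical:

```python
import argparse

def build_parser(commands):
    """commands maps name -> (handler, help text), mimicking SUBCOMMANDS."""
    parser = argparse.ArgumentParser(prog="demo")
    parser.add_argument("--verbose", "-v", default=False, action="store_true")
    subparsers = parser.add_subparsers(dest="subcommand")
    for name, (handler, help_text) in commands.items():
        # Each subcommand parser carries its own handler, as juju does
        # with Commander via set_defaults(command=...).
        sub = subparsers.add_parser(name, help=help_text)
        sub.set_defaults(command=handler)
    return parser

parser = build_parser({"greet": (lambda opts: "hello", "Say hello")})
options = parser.parse_args(["greet"])
```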
def setup_logging(options):
level = logging.DEBUG if options.verbose else logging.INFO
logging.basicConfig(
format="%(asctime)s %(levelname)s %(message)s",
level=level,
stream=options.log_file)
if level is not logging.DEBUG:
zookeeper.set_debug_level(0)
def admin(args):
"""juju Admin command line interface entry point.
The admin cli is used to provide an entry point into infrastructure
tools like initializing the zookeeper layout, launching machine and
    provisioning agents, etc. It's not intended to be used by end users
but consumed internally by the framework.
"""
parser = setup_parser(
subcommands=ADMIN_SUBCOMMANDS,
prog="juju-admin",
description="juju cloud orchestration internal tools")
parser.set_defaults(log=log)
options = parser.parse_args(args)
setup_logging(options)
options.command(options)
def main(args):
"""The main end user cli command for juju users."""
parser = setup_parser(
subcommands=SUBCOMMANDS,
prog="juju",
description="juju cloud orchestration admin")
# Some commands, like juju ssh, do a further parse on options by
# delegating to another command (such as the underlying ssh). But
    # first need to parse all args non-strictly just to determine
    # which command is being used.
options, extra = parser.parse_known_args(args)
if options.command.passthrough:
try:
# Augments options with subparser specific passthrough parsing
options.command.passthrough(options, extra)
except ParseError, e:
options.parser.error(str(e))
else:
# Otherwise, do be strict
options = parser.parse_args(args)
env_config = EnvironmentsConfig()
env_config.load_or_write_sample()
options.environments = env_config
options.log = log
setup_logging(options)
options.command(options)
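A sketch of the passthrough technique main() uses: parse_known_args() collects flags destined for an underlying command (here, ssh-style flags) instead of rejecting them; the subcommand shape is illustrative:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
subparsers = parser.add_subparsers(dest="cmd")
ssh = subparsers.add_parser("ssh")
ssh.add_argument("unit")

# Unrecognized arguments are returned in `extra` rather than raising an
# error, so they can later be handed to the delegated command.
options, extra = parser.parse_known_args(
    ["ssh", "wordpress/0", "-L", "8080:localhost:80"])
```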
juju-0.7.orig/juju/control/add_relation.py 0000644 0000000 0000000 00000005113 12135220114 017045 0 ustar 0000000 0000000 """Implementation of add-relation juju subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment
from juju.state.errors import NoMatchingEndpoints, AmbiguousRelation
from juju.state.relation import RelationStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
"""Configure add-relation subcommand"""
sub_parser = subparsers.add_parser("add-relation", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to add the relation in.")
sub_parser.add_argument(
"--verbose",
help="Provide additional information when running the command.")
sub_parser.add_argument(
"descriptors", nargs=2, metavar="[:]",
help="Define the relation endpoints to be joined.")
return sub_parser
def command(options):
"""Add a relation between services in juju."""
environment = get_environment(options)
return add_relation(
options.environments,
environment,
options.verbose,
options.log,
*options.descriptors)
@inlineCallbacks
def add_relation(env_config, environment, verbose, log, *descriptors):
"""Add relation between relation endpoints described by `descriptors`"""
provider = environment.get_machine_provider()
client = yield provider.connect()
relation_state_manager = RelationStateManager(client)
service_state_manager = ServiceStateManager(client)
endpoint_pairs = yield service_state_manager.join_descriptors(
*descriptors)
if verbose:
log.info("Endpoint pairs: %s", endpoint_pairs)
if len(endpoint_pairs) == 0:
raise NoMatchingEndpoints()
elif len(endpoint_pairs) > 1:
for pair in endpoint_pairs[1:]:
if not (pair[0].relation_name.startswith("juju-") or
pair[1].relation_name.startswith("juju-")):
raise AmbiguousRelation(descriptors, endpoint_pairs)
# At this point we just have one endpoint pair. We need to pick
# just one of the endpoints if it's a peer endpoint, since that's
# our current API - join descriptors takes two descriptors, but
# add_relation_state takes one or two endpoints. TODO consider
# refactoring.
endpoints = endpoint_pairs[0]
if endpoints[0] == endpoints[1]:
endpoints = endpoints[0:1]
yield relation_state_manager.add_relation_state(*endpoints)
yield client.close()
log.info("Added %s relation to all service units.",
endpoints[0].relation_type)
juju-0.7.orig/juju/control/add_unit.py 0000644 0000000 0000000 00000004322 12135220114 016210 0 ustar 0000000 0000000 """Implementation of add unit subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control import legacy
from juju.control.utils import get_environment, sync_environment_state
from juju.errors import JujuError
from juju.state.placement import place_unit
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
"""Configure add-unit subcommand"""
sub_parser = subparsers.add_parser("add-unit", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"--num-units", "-n", default=1, type=int, metavar="NUM",
help="Number of service units to add.")
sub_parser.add_argument(
"service_name",
help="Name of the service a unit should be added for")
return sub_parser
def command(options):
"""Add a new service unit."""
environment = get_environment(options)
return add_unit(
options.environments,
environment,
options.verbose,
options.log,
options.service_name,
options.num_units)
@inlineCallbacks
def add_unit(config, environment, verbose, log, service_name, num_units):
"""Add a unit of a service to the environment.
"""
provider = environment.get_machine_provider()
placement_policy = provider.get_placement_policy()
client = yield provider.connect()
try:
yield legacy.check_environment(
client, provider.get_legacy_config_keys())
yield sync_environment_state(client, config, environment.name)
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(service_name)
if (yield service_state.is_subordinate()):
raise JujuError("Subordinate services acquire units from "
"their principal service.")
for i in range(num_units):
unit_state = yield service_state.add_unit_state()
yield place_unit(client, placement_policy, unit_state)
log.info("Unit %r added to service %r",
unit_state.unit_name, service_state.service_name)
finally:
yield client.close()
juju-0.7.orig/juju/control/bootstrap.py 0000644 0000000 0000000 00000002441 12135220114 016436 0 ustar 0000000 0000000 from twisted.internet.defer import inlineCallbacks
from juju.control import legacy
from juju.control.utils import expand_constraints, get_environment
def configure_subparser(subparsers):
"""Configure bootstrap subcommand"""
sub_parser = subparsers.add_parser("bootstrap", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"--constraints",
help="default hardware constraints for this environment.",
default=[],
type=expand_constraints)
return sub_parser
@inlineCallbacks
def command(options):
"""
Bootstrap machine providers in the specified environment.
"""
environment = get_environment(options)
provider = environment.get_machine_provider()
legacy_keys = provider.get_legacy_config_keys()
if legacy_keys:
legacy.error(legacy_keys)
constraint_set = yield provider.get_constraint_set()
constraints = constraint_set.parse(options.constraints)
constraints = constraints.with_series(environment.default_series)
options.log.info(
"Bootstrapping environment %r (origin: %s type: %s)..." % (
environment.name, environment.origin, environment.type))
yield provider.bootstrap(constraints)
juju-0.7.orig/juju/control/command.py 0000644 0000000 0000000 00000004431 12135220114 016040 0 ustar 0000000 0000000 from twisted.internet import defer
from twisted.python.failure import Failure
from argparse import Namespace
from StringIO import StringIO
import sys
class Commander(object):
"""Command container.
Command objects are constructed in the argument parser in package
__init__ and used to control the execution of juju command
line activities.
Keyword Arguments:
callback -- a callable object which will be triggered in the
reactor loop when Commander.__call__ is invoked.
"""
def __init__(self, callback, passthrough=False):
if not callable(callback):
raise ValueError(
"Commander callback argument must be a callable")
self.callback = callback
self.passthrough = passthrough
self.options = None
self.exit_code = 0
def __call__(self, options):
from twisted.internet import reactor
if not options or not isinstance(options, Namespace):
raise ValueError(
"%s.__call__ must be passed a valid argparse.Namespace" %
self.__class__.__name__)
self.options = options
options.log.debug("Initializing %s runtime" %
options.parser.prog)
reactor.callWhenRunning(self._run)
reactor.run()
sys.exit(self.exit_code)
def _run(self):
d = defer.maybeDeferred(self.callback, self.options)
d.addBoth(self._handle_exit)
return d
def _handle_exit(self, result, stream=None):
from twisted.internet import reactor
if stream is None:
stream = sys.stderr
if isinstance(result, Failure):
if self.options.verbose:
tracebackIO = StringIO()
result.printTraceback(file=tracebackIO)
stream.write(tracebackIO.getvalue())
self.options.log.error(tracebackIO.getvalue())
self.options.log.error(result.getErrorMessage())
if self.exit_code == 0:
self.exit_code = 1
else:
command_name = self.callback.__module__.rsplit('.', 1)[-1]
self.options.log.info("%r command finished successfully" %
command_name)
if reactor.running:
reactor.stop()
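A simplified, Twisted-free sketch of the Commander error handling above: run the callback and record a nonzero exit code on failure; the class name is hypothetical:

```python
class MiniCommander:
    """Mimics Commander's exit-code bookkeeping without the reactor."""

    def __init__(self, callback):
        if not callable(callback):
            raise ValueError("Commander callback argument must be a callable")
        self.callback = callback
        self.exit_code = 0

    def __call__(self, options):
        try:
            return self.callback(options)
        except Exception:
            # Mirror _handle_exit: a failure forces a nonzero exit code.
            if self.exit_code == 0:
                self.exit_code = 1
```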
juju-0.7.orig/juju/control/config_get.py 0000644 0000000 0000000 00000005514 12135220114 016531 0 ustar 0000000 0000000 import argparse
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment
from juju.lib.format import YAMLFormat
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"get",
formatter_class=argparse.RawDescriptionHelpFormatter,
help=config_get.__doc__,
description=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to utilize.")
sub_parser.add_argument(
"--schema", "-s", action="store_true", default=False,
help="Display the schema only")
sub_parser.add_argument(
"service_name",
help="The name of the service to retrieve settings for")
return sub_parser
def command(options):
"""Get service config options.
Charms may define dynamic options which may be tweaked at
deployment time, or over the lifetime of the service. This
    command displays the current value of these settings
in yaml format.
$ juju get wordpress
{'service': 'wordpress',
'charm': 'local:series/wordpress-3',
'settings': {'blog-title': {
'description': 'A descriptive title used for the blog.',
'type': 'string',
'value': 'Hello World'}}},
"""
environment = get_environment(options)
return config_get(environment,
options.service_name,
options.schema)
@inlineCallbacks
def config_get(environment, service_name, display_schema):
"""Get service settings.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
# Get the service
service_manager = ServiceStateManager(client)
service = yield service_manager.get_service_state(service_name)
# Retrieve schema
charm = yield service.get_charm_state()
schema = yield charm.get_config()
schema_dict = schema.as_dict()
display_dict = {"service": service.service_name,
"charm": (yield service.get_charm_id()),
"settings": schema_dict}
# Get current settings
settings = yield service.get_config()
settings = dict(settings.items())
# Merge current settings into schema/display dict
for k, v in schema_dict.items():
# Display defaults for unset values.
if k in settings:
v['value'] = settings[k]
else:
v['value'] = None
if 'default' in v:
                if v['default'] == settings.get(k):
v['default'] = True
else:
del v['default']
print YAMLFormat().format(display_dict)
finally:
yield client.close()
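A standalone sketch of the merge loop in config_get() above: set options show their current value, unset options show None, and a True 'default' marker flags values that still equal the charm default (using .get() so unset options cannot raise KeyError):

```python
def merge_settings(schema_dict, settings):
    """Merge current settings into a copy of the schema for display."""
    display = {}
    for name, meta in schema_dict.items():
        meta = dict(meta)
        meta["value"] = settings.get(name)
        if "default" in meta:
            if meta["default"] == settings.get(name):
                meta["default"] = True  # value still equals the default
            else:
                del meta["default"]
        display[name] = meta
    return display
```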
juju-0.7.orig/juju/control/config_set.py 0000644 0000000 0000000 00000006711 12135220114 016545 0 ustar 0000000 0000000 import argparse
import yaml
from twisted.internet.defer import inlineCallbacks
from juju.charm.errors import ServiceConfigValueError
from juju.control.utils import get_environment
from juju.lib import serializer
from juju.lib.format import get_charm_formatter
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"set",
help=config_set.__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
description=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
        help="Environment to operate in.")
sub_parser.add_argument(
"service_name",
help="The name of the service the options apply to.")
sub_parser.add_argument("--config",
type=argparse.FileType("r"),
help=(
"a filename containing a YAML dict of values "
"for the current service_name"))
sub_parser.add_argument("service_options",
nargs="*",
help="""name=value for option to set""")
return sub_parser
def command(options):
"""Set service options.
Service charms may define dynamic options which may be tweaked
at deployment time, or over the lifetime of the service. This
command allows changing these settings.
$ juju set option=value [option=value]
or
$ juju set --config local.yaml
"""
environment = get_environment(options)
if options.config:
if options.service_options:
raise ServiceConfigValueError(
"--config and command line options cannot "
"be used in a single invocation")
yaml_data = options.config.read()
try:
data = serializer.yaml_load(yaml_data)
except yaml.YAMLError:
raise ServiceConfigValueError(
"Config file %r not valid YAML" % options.config.name)
if not data or not isinstance(data, dict):
raise ServiceConfigValueError(
"Config file %r invalid" % options.config.name
)
data = data.get(options.service_name)
if data:
# set data directly
options.service_options = data
return config_set(environment,
options.service_name,
options.service_options)
@inlineCallbacks
def config_set(environment, service_name, service_options):
"""Set service settings.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
# Get the service and the charm
service_manager = ServiceStateManager(client)
service = yield service_manager.get_service_state(service_name)
charm = yield service.get_charm_state()
charm_format = (yield charm.get_metadata()).format
formatter = get_charm_formatter(charm_format)
# Use the charm's ConfigOptions instance to validate the
# arguments to config_set. Invalid options passed to this method
    # will throw an exception.
if isinstance(service_options, dict):
options = service_options
else:
options = formatter.parse_keyvalue_pairs(service_options)
config = yield charm.get_config()
options = config.validate(options)
# Apply the change
state = yield service.get_config()
state.update(options)
yield state.write()
juju-0.7.orig/juju/control/constraints_get.py 0000644 0000000 0000000 00000005117 12135220114 017632 0 ustar 0000000 0000000 import argparse
import sys
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment, sync_environment_state
from juju.lib import serializer
from juju.state.environment import EnvironmentStateManager
from juju.state.machine import MachineStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"get-constraints",
help=command.__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
description=constraints_get.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to affect")
sub_parser.add_argument(
"entities",
nargs="*",
help="names of machines, units or services")
return sub_parser
def command(options):
"""Show currently applicable constraints"""
environment = get_environment(options)
return constraints_get(
options.environments, environment, options.entities, options.log)
@inlineCallbacks
def constraints_get(env_config, environment, entity_names, log):
"""
Show the complete set of applicable constraints for each specified entity.
This will show the final computed values of all constraints (including
internal constraints which cannot be set directly via set-constraints).
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
result = {}
try:
yield sync_environment_state(client, env_config, environment.name)
if entity_names:
msm = MachineStateManager(client)
ssm = ServiceStateManager(client)
for name in entity_names:
if name.isdigit():
kind = "machine"
entity = yield msm.get_machine_state(name)
elif "/" in name:
kind = "service unit"
entity = yield ssm.get_unit_state(name)
else:
kind = "service"
entity = yield ssm.get_service_state(name)
log.info("Fetching constraints for %s %s", kind, name)
constraints = yield entity.get_constraints()
result[name] = dict(constraints)
else:
esm = EnvironmentStateManager(client)
log.info("Fetching constraints for environment")
constraints = yield esm.get_constraints()
result = dict(constraints)
contents = serializer.yaml_dump(result)
sys.stdout.write(contents)
finally:
yield client.close()
juju-0.7.orig/juju/control/constraints_set.py 0000644 0000000 0000000 00000006463 12135220114 017653 0 ustar 0000000 0000000 import argparse
from twisted.internet.defer import inlineCallbacks
from juju.control import legacy
from juju.control.utils import get_environment, sync_environment_state
from juju.state.environment import EnvironmentStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"set-constraints",
help=command.__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
description=constraints_set.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to affect")
sub_parser.add_argument(
"--service", "-s", default=None,
help="Service to set constraints on")
sub_parser.add_argument(
"constraints",
nargs="+",
help="name=value for constraint to set")
return sub_parser
def command(options):
"""Set machine constraints for the environment, or for a named service.
"""
environment = get_environment(options)
env_config = options.environments
return constraints_set(
env_config, environment, options.service, options.constraints)
@inlineCallbacks
def constraints_set(env_config, environment, service_name, constraint_strs):
"""
Machine constraints allow you to pick the hardware to which your services
will be deployed. Examples:
$ juju set-constraints --service mysql mem=8G cpu=4
$ juju set-constraints instance-type=t1.micro
Available constraints vary by provider type, and will be ignored if not
understood by the current environment's provider. The current set of
available constraints across all providers is:
On Amazon EC2:
* arch (CPU architecture: i386/amd64/arm; amd64 by default)
* cpu (processing power in Amazon ECU; 1 by default)
* mem (memory in [MGT]iB; 512M by default)
* instance-type (unset by default)
* ec2-zone (unset by default)
On Orchestra:
* orchestra-classes (unset by default)
On MAAS:
* maas-name (unset by default)
Service settings, if specified, will override environment settings, which
will in turn override the juju defaults of mem=512M, cpu=1, arch=amd64.
New constraints set on an entity will completely replace that entity's
pre-existing constraints.
To override an environment constraint with the juju default when setting
service constraints, just specify "name=" (rather than just not specifying
the constraint at all, which will cause it to inherit the environment's
value).
To entirely unset a constraint, specify "name=any".
"""
provider = environment.get_machine_provider()
constraint_set = yield provider.get_constraint_set()
constraints = constraint_set.parse(constraint_strs)
client = yield provider.connect()
try:
yield legacy.check_constraints(client, constraint_strs)
yield sync_environment_state(client, env_config, environment.name)
if service_name is None:
esm = EnvironmentStateManager(client)
yield esm.set_constraints(constraints)
else:
ssm = ServiceStateManager(client)
service = yield ssm.get_service_state(service_name)
yield service.set_constraints(constraints)
finally:
yield client.close()
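The `name=` and `name=any` semantics described in the docstring can be sketched with a toy parser. This illustrates the rules only; it is not juju's actual ConstraintSet.parse, and the defaults dict just restates the docstring's juju defaults:

```python
# juju defaults as stated in the set-constraints docstring.
DEFAULTS = {"mem": "512M", "cpu": "1", "arch": "amd64"}

def parse_constraints(pairs):
    """Toy parser for name=value strings: 'name=' reverts the constraint
    to the juju default, 'name=any' unsets it entirely."""
    result = {}
    for pair in pairs:
        name, _, value = pair.partition("=")
        if value == "any":
            result[name] = None                 # explicitly unset
        elif value == "":
            result[name] = DEFAULTS.get(name)   # revert to juju default
        else:
            result[name] = value
    return result

print(parse_constraints(["mem=8G", "cpu=", "ec2-zone=any"]))
# {'mem': '8G', 'cpu': '1', 'ec2-zone': None}
```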
# juju-0.7.orig/juju/control/debug_hooks.py
"""
Command for debugging hooks on a service unit.
"""
import base64
import os
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control.utils import get_ip_address_for_unit
from juju.control.utils import get_environment
from juju.charm.errors import InvalidCharmHook
from juju.state.charm import CharmStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser("debug-hooks", help=command.__doc__)
sub_parser.add_argument(
"-e", "--environment",
help="juju environment to operate in.")
sub_parser.add_argument(
"unit_name",
help="Name of unit")
sub_parser.add_argument(
"hook_names", default=["*"], nargs="*",
help="Name of hook, defaults to all")
return sub_parser
@inlineCallbacks
def validate_hooks(client, unit_state, hook_names):
# Assemble a list of valid hooks for the charm.
valid_hooks = ["start", "stop", "install", "config-changed"]
service_manager = ServiceStateManager(client)
endpoints = yield service_manager.get_relation_endpoints(
unit_state.service_name)
endpoint_names = [ep.relation_name for ep in endpoints]
for endpoint_name in endpoint_names:
valid_hooks.extend([
endpoint_name + "-relation-joined",
endpoint_name + "-relation-changed",
endpoint_name + "-relation-departed",
endpoint_name + "-relation-broken",
])
# Verify the debug names.
for hook_name in hook_names:
if hook_name in valid_hooks:
continue
break
else:
returnValue(True)
# We dereference to the charm to give a fully qualified error
# message. I wish this was a little easier to dereference, the
# service_manager.get_relation_endpoints effectively does this
# already.
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(
unit_state.service_name)
charm_id = yield service_state.get_charm_id()
charm_manager = CharmStateManager(client)
charm = yield charm_manager.get_charm_state(charm_id)
raise InvalidCharmHook(charm.id, hook_name)
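The valid-hook list assembled above is the four static hooks plus four relation hooks per endpoint. A minimal standalone sketch of the same expansion (the helper name is illustrative):

```python
def valid_hook_names(endpoint_names):
    """Build the hook names debug-hooks accepts: the static hooks plus
    joined/changed/departed/broken variants per relation endpoint."""
    hooks = ["start", "stop", "install", "config-changed"]
    for name in endpoint_names:
        hooks.extend(name + suffix for suffix in (
            "-relation-joined", "-relation-changed",
            "-relation-departed", "-relation-broken"))
    return hooks

print(valid_hook_names(["db"]))
```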
@inlineCallbacks
def command(options):
"""Interactively debug a hook remotely on a service unit.
"""
environment = get_environment(options)
provider = environment.get_machine_provider()
client = yield provider.connect()
# Verify unit and retrieve ip address
options.log.debug("Retrieving unit and machine information.")
ip_address, unit = yield get_ip_address_for_unit(
client, provider, options.unit_name)
# Verify hook name
if options.hook_names != ["*"]:
options.log.debug("Verifying hook names...")
yield validate_hooks(client, unit, options.hook_names)
# Enable debug log
options.log.debug(
"Enabling hook debug on unit (%r)..." % options.unit_name)
yield unit.enable_hook_debug(options.hook_names)
# If we don't have an ipaddress the unit isn't up yet, wait for it.
if not ip_address:
options.log.info("Waiting for unit")
# Wait and verify the agent is running.
while 1:
exists_d, watch_d = unit.watch_agent()
exists = yield exists_d
if exists:
options.log.info("Unit running")
break
yield watch_d
# Refetch the unit address
ip_address, unit = yield get_ip_address_for_unit(
client, provider, options.unit_name)
# Connect via ssh and start tmux.
options.log.info("Connecting to remote machine %s...", ip_address)
# Encode the script as base64 so that we can deliver it with a single
# ssh command while still retaining standard input on the terminal fd.
script = SCRIPT.replace("{unit_name}", options.unit_name)
script_b64 = base64.encodestring(script).replace("\n", "").strip()
cmd = '"F=`mktemp`; echo %s | base64 -d > \$F; . \$F"' % script_b64
# Yield to facilitate testing.
yield os.system(
"ssh -t ubuntu@%s 'sudo /bin/bash -c %s'" % (ip_address, cmd))
options.log.info("Debug session ended.")
# Ends hook debugging.
yield client.close()
SCRIPT = r"""
# Wait for tmux to be installed.
while [ ! -f /usr/bin/tmux ]; do
sleep 1
done
if [ ! -f ~/.tmux.conf ]; then
if [ -f /usr/share/byobu/profiles/tmux ]; then
# Use byobu/tmux profile for familiar keybindings and branding
echo "source-file /usr/share/byobu/profiles/tmux" > ~/.tmux.conf
else
# Otherwise, use the legacy juju/tmux configuration
cat > ~/.tmux.conf <<END
# Prevent ESC key from adding delay and breaking Vim's ESC > arrow key
set-option -s escape-time 0
END
fi
fi
# The beauty below is a workaround for a bug in tmux (1.5 in Oneiric) or
# epoll that doesn't support /dev/null or whatever. Without it the
# command hangs.
tmux new-session -d -s {unit_name} 2>&1 | cat > /dev/null || true
tmux attach -t {unit_name}
"""
# juju-0.7.orig/juju/control/debug_log.py
"""
Command for distributed debug logging output via the cli.
"""
from fnmatch import fnmatch
import logging
import sys
from twisted.internet.defer import inlineCallbacks
from juju.control.options import ensure_abs_path
from juju.control.utils import get_environment
from juju.state.environment import GlobalSettingsStateManager
from juju.lib.zklog import LogIterator
def configure_subparser(subparsers):
"""Configure debug-log subcommand"""
sub_parser = subparsers.add_parser("debug-log", help=command.__doc__,
description=debug_log.__doc__)
sub_parser.add_argument(
"-e", "--environment",
help="juju environment to operate in.")
sub_parser.add_argument(
"-r", "--replay", default=False,
action="store_true",
help="Display all existing logs first.")
sub_parser.add_argument(
"-i", "--include", action="append",
help=("Filter log messages to only show these log channels or agents."
"Multiple values can be specified, also supports unix globbing.")
)
sub_parser.add_argument(
"-x", "--exclude", action="append",
help=("Filter log messages to exclude these log channels or agents."
"Multiple values can be specified, also supports unix globbing.")
)
sub_parser.add_argument(
"-l", "--level", default="DEBUG",
choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
help="Log level to show")
sub_parser.add_argument(
"-n", "--limit", type=int,
help="Show n log messages and exit.")
sub_parser.add_argument(
"-o", "--output", default="-",
help="File to log to, defaults to stdout",
type=ensure_abs_path)
return sub_parser
def command(options):
"""Distributed juju debug log watching."""
environment = get_environment(options)
return debug_log(
options.environments,
environment,
options.log,
options)
@inlineCallbacks
def debug_log(config, environment, log, options):
""" Enables a distributed log for all agents in the environment, and
displays all log entries that have not been seen yet. """
provider = environment.get_machine_provider()
client = yield provider.connect()
log.info("Enabling distributed debug log.")
settings_manager = GlobalSettingsStateManager(client)
yield settings_manager.set_debug_log(True)
if not options.limit:
log.info("Tailing logs - Ctrl-C to stop.")
iterator = LogIterator(client, replay=options.replay)
# Setup the logging output with the user specified file.
if options.output == "-":
log_file = sys.stdout
else:
log_file = open(options.output, "a")
handler = logging.StreamHandler(log_file)
log_level = logging.getLevelName(options.level)
handler.setLevel(log_level)
formatter = logging.Formatter(
"%(asctime)s %(context)s: %(name)s %(levelname)s: %(message)s")
handler.setFormatter(formatter)
def match(data):
local_name = data["context"].split(":")[-1]
if options.exclude:
for exclude in options.exclude:
if fnmatch(local_name, exclude) or \
fnmatch(data["context"], exclude) or \
fnmatch(data["name"], exclude):
return False
if options.include:
for include in options.include:
if fnmatch(local_name, include) or \
fnmatch(data["context"], include) or \
fnmatch(data["name"], include):
return True
return False
return True
count = 0
try:
while True:
entry = yield iterator.next()
if not match(entry):
continue
# json doesn't distinguish lists v. tuples but python string
# formatting doesn't accept lists.
entry["args"] = tuple(entry["args"])
record = logging.makeLogRecord(entry)
if entry["levelno"] < handler.level:
continue
handler.handle(record)
count += 1
if options.limit is not None and count == options.limit:
break
finally:
yield settings_manager.set_debug_log(False)
client.close()
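The match() filter above applies excludes before includes, each pattern as a unix glob. The same precedence, simplified to a single name field, can be sketched standalone (purely illustrative):

```python
from fnmatch import fnmatch

def match(name, includes=None, excludes=None):
    """Exclude patterns always win; include patterns, when given, act
    as a whitelist; with no includes everything not excluded passes."""
    if excludes:
        for pattern in excludes:
            if fnmatch(name, pattern):
                return False
    if includes:
        return any(fnmatch(name, pattern) for pattern in includes)
    return True

print(match("unit:mysql/0", includes=["*mysql*"]))   # True
print(match("unit:mysql/0", excludes=["*mysql*"]))   # False
```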
# juju-0.7.orig/juju/control/deploy.py
import os
from twisted.internet.defer import inlineCallbacks
from juju.control import legacy
from juju.control.utils import (
expand_constraints, expand_path, get_environment, sync_environment_state)
from juju.charm.errors import ServiceConfigValueError
from juju.charm.publisher import CharmPublisher
from juju.charm.repository import resolve
from juju.errors import CharmError
from juju.lib import serializer
from juju.state.endpoint import RelationEndpoint
from juju.state.placement import place_unit
from juju.state.relation import RelationStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser("deploy", help=command.__doc__,
description=deploy.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to deploy the charm in.")
sub_parser.add_argument(
"--num-units", "-n", default=1, type=int, metavar="NUM",
help="Number of service units to deploy.")
sub_parser.add_argument(
"-u", "--upgrade", default=False, action="store_true",
help="Deploy the charm on disk, increments revision if needed")
sub_parser.add_argument(
"--repository",
help="Directory for charm lookup and retrieval",
default=os.environ.get("JUJU_REPOSITORY"),
type=expand_path)
sub_parser.add_argument(
"--constraints",
help="Hardware constraints for the service",
default=[],
type=expand_constraints)
sub_parser.add_argument(
"charm", nargs=None,
help="Charm name")
sub_parser.add_argument(
"service_name", nargs="?",
help="Service name of deployed charm")
sub_parser.add_argument(
"--config",
help="YAML file containing service options")
return sub_parser
def command(options):
"""
Deploy a charm to juju!
"""
environment = get_environment(options)
return deploy(
options.environments,
environment,
options.repository,
options.charm,
options.service_name,
options.log,
options.constraints,
options.config,
options.upgrade,
num_units=options.num_units)
def parse_config_options(config_file, service_name, charm):
if not os.path.exists(config_file) or \
not os.access(config_file, os.R_OK):
raise ServiceConfigValueError(
"Config file %r not accessible." % config_file)
with open(config_file) as fh:
options = serializer.yaml_load(fh.read())
if not options or not isinstance(options, dict) or \
service_name not in options:
raise ServiceConfigValueError(
"Invalid options file passed to --config.\n"
"Expected a YAML dict with service name (%r)." % service_name)
# Validate and type service options and return
return charm.config.validate(options[service_name])
@inlineCallbacks
def deploy(env_config, environment, repository_path, charm_name,
service_name, log, constraint_strs, config_file=None, upgrade=False,
num_units=1):
"""Deploy a charm within an environment.
This will publish the charm to the environment, creating
a service from the charm, and get it set to be launched
on a new machine. If --repository is not specified, it
will be taken from the environment variable JUJU_REPOSITORY.
"""
repo, charm_url = resolve(
charm_name, repository_path, environment.default_series)
log.info("Searching for charm %s in %s" % (charm_url, repo))
charm = yield repo.find(charm_url)
if upgrade:
if repo.type != "local" or charm.type != "dir":
raise CharmError(
charm.path,
"Only local directory charms can be upgraded on deploy")
charm.set_revision(charm.get_revision() + 1)
charm_id = str(charm_url.with_revision(charm.get_revision()))
# Validate config options prior to deployment attempt
service_options = {}
service_name = service_name or charm_url.name
if config_file:
service_options = parse_config_options(
config_file, service_name, charm)
charm = yield repo.find(charm_url)
charm_id = str(charm_url.with_revision(charm.get_revision()))
provider = environment.get_machine_provider()
placement_policy = provider.get_placement_policy()
constraint_set = yield provider.get_constraint_set()
constraints = constraint_set.parse(constraint_strs)
client = yield provider.connect()
try:
yield legacy.check_constraints(client, constraint_strs)
yield legacy.check_environment(
client, provider.get_legacy_config_keys())
yield sync_environment_state(client, env_config, environment.name)
# Publish the charm to juju
storage = yield provider.get_file_storage()
publisher = CharmPublisher(client, storage)
yield publisher.add_charm(charm_id, charm)
result = yield publisher.publish()
# In future we might have multiple charms be published at
# the same time. For now, extract the charm_state from the
# list.
charm_state = result[0]
# Create the service state
service_manager = ServiceStateManager(client)
service_state = yield service_manager.add_service_state(
service_name, charm_state, constraints)
# Use the charm's ConfigOptions instance to validate service
# options. Invalid options will raise an exception and prevent
# the deploy.
state = yield service_state.get_config()
charm_config = yield charm_state.get_config()
# return the validated options with the defaults included
service_options = charm_config.validate(service_options)
state.update(service_options)
yield state.write()
# Create desired number of service units
if (yield service_state.is_subordinate()):
log.info("Subordinate %r awaiting relationship "
"to principal for deployment.", service_name)
else:
for i in xrange(num_units):
unit_state = yield service_state.add_unit_state()
yield place_unit(client, placement_policy, unit_state)
# Check if we have any peer relations to establish
if charm.metadata.peers:
relation_manager = RelationStateManager(client)
for peer_name, peer_info in charm.metadata.peers.items():
yield relation_manager.add_relation_state(
RelationEndpoint(service_name,
peer_info["interface"],
peer_name,
"peer"))
log.info("Charm deployed as service: %r", service_name)
finally:
yield client.close()
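parse_config_options above expects the YAML file to decode to a dict keyed by service name. That shape check can be sketched with a plain dict standing in for the YAML load (function name illustrative; ValueError stands in for juju's ServiceConfigValueError):

```python
def validate_service_options(options, service_name):
    """Mirror parse_config_options' shape check: the config must be a
    non-empty dict containing the service name as a top-level key."""
    if not options or not isinstance(options, dict) or \
            service_name not in options:
        raise ValueError(
            "Expected a YAML dict with service name (%r)." % service_name)
    return options[service_name]

print(validate_service_options({"mysql": {"mem": "8G"}}, "mysql"))
# {'mem': '8G'}
```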
# juju-0.7.orig/juju/control/destroy_environment.py
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control.utils import get_environment
def configure_subparser(subparsers):
"""Configure destroy-environment subcommand"""
sub_parser = subparsers.add_parser(
"destroy-environment", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
return sub_parser
@inlineCallbacks
def command(options):
"""
Terminate all machines and resources for an environment.
"""
environment = get_environment(options)
provider = environment.get_machine_provider()
value = raw_input(
"WARNING: this command will destroy the %r environment (type: %s).\n"
"This includes all machines, services, data, and other resources. "
"Continue [y/N] " % (
environment.name, environment.type))
if value.strip().lower() not in ("y", "yes"):
options.log.info("Environment destruction aborted")
returnValue(None)
options.log.info("Destroying environment %r (type: %s)..." % (
environment.name, environment.type))
yield provider.destroy_environment()
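The confirmation guard above accepts only an explicit yes before destroying anything. Extracted as a tiny testable predicate (the helper name is illustrative):

```python
def confirmed(response):
    """The destroy-environment guard: only 'y'/'yes', in any case and
    ignoring surrounding whitespace, proceeds; anything else aborts."""
    return response.strip().lower() in ("y", "yes")

print(confirmed(" Yes "))  # True
print(confirmed(""))       # False: the default answer is No
```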
# juju-0.7.orig/juju/control/destroy_service.py
"""Implementation of destroy service subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.state.errors import UnsupportedSubordinateServiceRemoval
from juju.state.relation import RelationStateManager
from juju.state.service import ServiceStateManager
from juju.control.utils import get_environment
def configure_subparser(subparsers):
"""Configure destroy-service subcommand"""
sub_parser = subparsers.add_parser("destroy-service", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to add the relation in.")
sub_parser.add_argument(
"service_name",
help="Name of the service to stop")
return sub_parser
def command(options):
"""Destroy a running service, its units, and break its relations."""
environment = get_environment(options)
return destroy_service(
options.environments,
environment,
options.verbose,
options.log,
options.service_name)
@inlineCallbacks
def destroy_service(config, environment, verbose, log, service_name):
provider = environment.get_machine_provider()
client = yield provider.connect()
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(service_name)
if (yield service_state.is_subordinate()):
# We can destroy the service only if it does not have relations.
# That implies that principals have already been torn
# down (or were never added).
relation_manager = RelationStateManager(client)
relations = yield relation_manager.get_relations_for_service(
service_state)
if relations:
principal_service = None
# if we have a container we can destroy the subordinate
# (revisit in the future)
for relation in relations:
if relation.relation_scope != "container":
continue
services = yield relation.get_service_states()
remote_service = [s for s in services if s.service_name !=
service_state.service_name][0]
if not (yield remote_service.is_subordinate()):
principal_service = remote_service
break
if principal_service:
raise UnsupportedSubordinateServiceRemoval(
service_state.service_name,
principal_service.service_name)
yield service_manager.remove_service_state(service_state)
log.info("Service %r destroyed.", service_state.service_name)
# juju-0.7.orig/juju/control/expose.py
"""Implementation of expose subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
"""Configure expose subcommand"""
sub_parser = subparsers.add_parser("expose", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"service_name",
help="Name of the service that should be exposed.")
return sub_parser
def command(options):
"""Expose a service to the internet."""
environment = get_environment(options)
return expose(
options.environments,
environment,
options.verbose,
options.log,
options.service_name)
@inlineCallbacks
def expose(
config, environment, verbose, log, service_name):
"""Expose a service."""
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(service_name)
already_exposed = yield service_state.get_exposed_flag()
if not already_exposed:
yield service_state.set_exposed_flag()
log.info("Service %r was exposed.", service_name)
else:
log.info("Service %r was already exposed.", service_name)
finally:
yield client.close()
# juju-0.7.orig/juju/control/initialize.py
from base64 import b64decode
import os
from twisted.internet.defer import inlineCallbacks
from txzookeeper import ZookeeperClient
from juju.lib import serializer
from juju.state.initialize import StateHierarchy
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser("initialize", help=command.__doc__)
sub_parser.add_argument(
"--instance-id", required=True,
help="Provider instance id for the bootstrap node")
sub_parser.add_argument(
"--admin-identity", required=True,
help="Admin access control identity for zookeeper ACLs")
sub_parser.add_argument(
"--constraints-data", required=True,
help="Base64-encoded yaml dump of the environment constraints data")
sub_parser.add_argument(
"--provider-type", required=True,
help="Environment machine provider type")
return sub_parser
@inlineCallbacks
def command(options):
"""
Initialize Zookeeper hierarchy
"""
zk_address = os.environ.get("ZOOKEEPER_ADDRESS", "127.0.0.1:2181")
client = yield ZookeeperClient(zk_address).connect()
try:
constraints_data = serializer.load(b64decode(options.constraints_data))
hierarchy = StateHierarchy(
client,
options.admin_identity,
options.instance_id,
constraints_data,
options.provider_type)
yield hierarchy.initialize()
finally:
yield client.close()
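The --constraints-data flag carries a serialized dump that is base64-encoded to survive transport on the command line. A round-trip sketch of that encoding, using json as a stand-in for juju's yaml-based serializer:

```python
import base64
import json

constraints = {"arch": "amd64", "cpu": 1, "mem": 512}
# Producer side: serialize, then base64-encode for safe CLI transport.
blob = base64.b64encode(json.dumps(constraints).encode()).decode()
# Consumer side (what `juju initialize` does): decode, then deserialize.
decoded = json.loads(base64.b64decode(blob))
assert decoded == constraints
print(decoded)
```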
# juju-0.7.orig/juju/control/legacy.py
from twisted.internet.defer import inlineCallbacks
from juju.errors import JujuError
from juju.state.environment import EnvironmentStateManager
_ERROR = """
Your environments.yaml contains deprecated keys; they must not be used other
than in legacy deployments. The affected keys are:
%s
This error can be resolved according to the instructions available at:
https://juju.ubuntu.com/DeprecatedEnvironmentSettings
"""
def error(keys):
raise JujuError(_ERROR % "\n ".join(sorted(keys)))
@inlineCallbacks
def check_environment(client, keys):
if not keys:
return
esm = EnvironmentStateManager(client)
if not (yield esm.get_in_legacy_environment()):
error(keys)
@inlineCallbacks
def check_constraints(client, constraint_strs):
if not constraint_strs:
return
esm = EnvironmentStateManager(client)
if (yield esm.get_in_legacy_environment()):
raise JujuError(
"Constraints are not valid in legacy deployments. To use machine "
"constraints, please deploy your environment again from scratch. "
"You can continue to use this environment as before, but any "
"attempt to set constraints will fail.")
# juju-0.7.orig/juju/control/open_tunnel.py
from twisted.internet.defer import inlineCallbacks, Deferred
from juju.control.utils import get_environment
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser("open-tunnel", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e", help="Environment to operate on.")
# TODO Coming next:
#sub_parser.add_argument(
# "unit_or_machine", nargs="*", help="Name of unit or machine")
return sub_parser
@inlineCallbacks
def command(options):
"""Establish a tunnel to the environment.
"""
environment = get_environment(options)
provider = environment.get_machine_provider()
yield provider.connect(share=True)
options.log.info("Tunnel to the environment is open. "
"Press CTRL-C to close it.")
yield hanging_deferred()
def hanging_deferred():
# Hang forever.
return Deferred()
# juju-0.7.orig/juju/control/options.py
"""
Argparse implementation of twistd standard unix options.
"""
import os
import argparse
from twisted.python.util import uidFromString, gidFromString
from twisted.scripts._twistd_unix import _umask
def ensure_abs_path(path):
"""
Ensure the parent directory to the given path exists. Returns
the absolute file location to the given path
"""
if path == "-":
return path
path = os.path.abspath(path)
parent_dir = os.path.dirname(path)
if not os.path.exists(parent_dir):
os.makedirs(parent_dir)
return path
def setup_twistd_options(parser, agent):
"""
Mimic the standard twisted options with some sane defaults
"""
# Standard twisted app options
development_group = parser.add_argument_group("Development options")
development_group.add_argument(
"--debug", "-b", action="store_true",
help="Run the application in the python debugger",
)
development_group.add_argument(
"--profile", "-p", action="store_true",
help="Run in profile mode, dumping results to specified file",
)
development_group.add_argument(
"--savestats", "-s", action="store_true",
help="Save the Stats object rather than text output of the profiler",
)
# Standard unix daemon options
unix_group = parser.add_argument_group("Unix Daemon options")
unix_group.add_argument(
"--rundir", "-d", default=".",
help="Change to supplied directory before running",
type=os.path.abspath,
)
unix_group.add_argument(
"--pidfile", default="",
help="Path to the pid file",
)
unix_group.add_argument(
"--logfile", default="%s.log" % agent.name,
help="Log to a specified file, - for stdout",
type=ensure_abs_path,
)
unix_group.add_argument(
"--loglevel", default="DEBUG",
choices=("DEBUG", "INFO", "ERROR", "WARNING", "CRITICAL"),
help="Log level")
unix_group.add_argument(
"--chroot", default=None,
help="Chroot to a supplied directory before running",
type=os.path.abspath,
)
unix_group.add_argument(
"--umask", default='0022', type=_umask,
help="The (octal) file creation mask to apply.",
)
unix_group.add_argument(
"--uid", "-u", default=None, type=uidFromString,
help="The uid to run as.",
)
unix_group.add_argument(
"--gid", "-g", default=None, type=gidFromString,
help="The gid to run as.",
)
unix_group.add_argument(
"--nodaemon", "-n", default=False,
dest="nodaemon", action="store_true",
help="Don't daemonize (stay in foreground)",
)
unix_group.add_argument(
"--syslog", default=False, action="store_true",
help="Log to syslog, not to file",
)
unix_group.add_argument(
"--sysprefix", dest="prefix", default=agent.name,
help="Syslog prefix [default: %s]" % (agent.name),
)
# Hidden options expected by twistd, with sane defaults
parser.add_argument(
"--save", default=True, action="store_false",
dest="no_save",
help=argparse.SUPPRESS,
)
parser.add_argument(
"--profiler", default="cprofile",
help=argparse.SUPPRESS,
)
parser.add_argument(
"--reactor", "-r", default="epoll",
help=argparse.SUPPRESS,
)
parser.add_argument(
"--originalname",
help=argparse.SUPPRESS,
)
parser.add_argument(
"--euid",
help=argparse.SUPPRESS,
)
# juju-0.7.orig/juju/control/remove_relation.py
"""Implementation of remove-relation juju subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment
from juju.state.errors import (AmbiguousRelation, NoMatchingEndpoints,
UnsupportedSubordinateServiceRemoval)
from juju.state.relation import RelationStateManager
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
"""Configure remove-relation subcommand"""
sub_parser = subparsers.add_parser("remove-relation", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to add the relation in.")
sub_parser.add_argument(
"--verbose",
help="Provide additional information when running the command.")
sub_parser.add_argument(
"descriptors", nargs=2, metavar="[:]",
help="Define the relation endpoints for the relation to be removed.")
return sub_parser
def command(options):
"""Remove a relation between services in juju."""
environment = get_environment(options)
return remove_relation(
options.environments,
environment,
options.verbose,
options.log,
*options.descriptors)
@inlineCallbacks
def remove_relation(env_config, environment, verbose, log, *descriptors):
"""Remove relation between relation endpoints described by `descriptors`"""
provider = environment.get_machine_provider()
client = yield provider.connect()
relation_state_manager = RelationStateManager(client)
service_state_manager = ServiceStateManager(client)
endpoint_pairs = yield service_state_manager.join_descriptors(
*descriptors)
if verbose:
log.info("Endpoint pairs: %s", endpoint_pairs)
if len(endpoint_pairs) == 0:
raise NoMatchingEndpoints()
elif len(endpoint_pairs) > 1:
raise AmbiguousRelation(descriptors, endpoint_pairs)
# At this point we just have one endpoint pair. We need to pick
# just one of the endpoints if it's a peer endpoint, since that's
# our current API - join descriptors takes two descriptors, but
# add_relation_state takes one or two endpoints. TODO consider
# refactoring.
endpoints = endpoint_pairs[0]
if endpoints[0] == endpoints[1]:
endpoints = endpoints[0:1]
relation_state = yield relation_state_manager.get_relation_state(
*endpoints)
# Look at both endpoints, if we are dealing with a container relation
# decide if one end is a principal.
service_pair = [] # ordered such that sub, principal
is_container = False
has_principal = False
for ep in endpoints:
if ep.relation_scope == "container":
is_container = True
service = yield service_state_manager.get_service_state(
ep.service_name)
if (yield service.is_subordinate()):
service_pair.append(service)
else:
service_pair.insert(0, service)
has_principal = True
if is_container and len(service_pair) == 2 and has_principal:
sub, principal = service_pair
raise UnsupportedSubordinateServiceRemoval(sub.service_name,
principal.service_name)
yield relation_state_manager.remove_relation_state(relation_state)
yield client.close()
log.info("Removed %s relation from all service units.",
endpoints[0].relation_type)
# juju-0.7.orig/juju/control/remove_unit.py
"""Implementation of remove unit subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.state.errors import UnsupportedSubordinateServiceRemoval
from juju.state.service import ServiceStateManager, parse_service_name
from juju.control.utils import get_environment
def configure_subparser(subparsers):
"""Configure remove-unit subcommand"""
sub_parser = subparsers.add_parser("remove-unit", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"unit_names", nargs="+", metavar="SERVICE_UNIT",
help="Name of the service unit to remove.")
return sub_parser
def command(options):
"""Remove a service unit."""
environment = get_environment(options)
return remove_unit(
options.environments,
environment,
options.verbose,
options.log,
options.unit_names)
@inlineCallbacks
def remove_unit(config, environment, verbose, log, unit_names):
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
service_manager = ServiceStateManager(client)
for unit_name in unit_names:
service_name = parse_service_name(unit_name)
service_state = yield service_manager.get_service_state(
service_name)
unit_state = yield service_state.get_unit_state(unit_name)
if (yield service_state.is_subordinate()):
container = yield unit_state.get_container()
raise UnsupportedSubordinateServiceRemoval(
unit_state.unit_name,
container.unit_name)
yield service_state.remove_unit_state(unit_state)
log.info("Unit %r removed from service %r",
unit_state.unit_name, service_state.service_name)
finally:
yield client.close()
juju-0.7.orig/juju/control/resolved.py
"""Implementation of resolved subcommand"""
import argparse
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control.utils import get_environment
from juju.state.service import ServiceStateManager, RETRY_HOOKS, NO_HOOKS
from juju.state.relation import RelationStateManager
from juju.state.errors import RelationStateNotFound
from juju.unit.workflow import is_unit_running, is_relation_running
def configure_subparser(subparsers):
"""Configure resolved subcommand"""
sub_parser = subparsers.add_parser(
"resolved",
help=command.__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
description=resolved.__doc__)
sub_parser.add_argument(
"--retry", "-r", action="store_true",
help="Retry failed hook.")
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"service_unit_name",
help="Name of the service unit that should be resolved")
sub_parser.add_argument(
"relation_name", nargs="?", default=None,
help="Name of the unit relation that should be resolved")
return sub_parser
def command(options):
"""Mark an error as resolved in a unit or unit relation."""
environment = get_environment(options)
return resolved(
options.environments,
environment,
options.verbose,
options.log,
options.service_unit_name,
options.relation_name,
options.retry)
@inlineCallbacks
def resolved(
config, environment, verbose, log, unit_name, relation_name, retry):
"""Mark an error as resolved in a unit or unit relation.
If one of a unit's charm non-relation hooks returns a non-zero exit
status, the entire unit can be considered to be in a non-running state.
As a resolution, the unit can be manually returned to a running state
via the juju resolved command. Optionally this command can also
rerun the failed hook.
This resolution also applies separately to each of the unit's relations,
should one of the relation hooks fail. In that case there is no
notion of retrying (the change is gone), but resolving will allow
additional relation hooks for that relation to proceed.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
service_manager = ServiceStateManager(client)
relation_manager = RelationStateManager(client)
unit_state = yield service_manager.get_unit_state(unit_name)
service_state = yield service_manager.get_service_state(
unit_name.split("/")[0])
retry = retry and RETRY_HOOKS or NO_HOOKS
if not relation_name:
running, workflow_state = yield is_unit_running(client, unit_state)
if running:
log.info("Unit %r already running: %s", unit_name, workflow_state)
client.close()
returnValue(False)
yield unit_state.set_resolved(retry)
log.info("Marked unit %r as resolved", unit_name)
returnValue(True)
# Check for the matching relations
service_relations = yield relation_manager.get_relations_for_service(
service_state)
service_relations = [
sr for sr in service_relations if sr.relation_name == relation_name]
if not service_relations:
raise RelationStateNotFound()
# Verify the relations are in need of resolution.
resolved_relations = {}
for service_relation in service_relations:
unit_relation = yield service_relation.get_unit_state(unit_state)
running, state = yield is_relation_running(client, unit_relation)
if not running:
resolved_relations[unit_relation.internal_relation_id] = retry
if not resolved_relations:
log.warning("Matched relations are all running")
client.close()
returnValue(False)
# Mark the relations as resolved.
yield unit_state.set_relation_resolved(resolved_relations)
log.info(
"Marked unit %r relation %r as resolved", unit_name, relation_name)
client.close()
juju-0.7.orig/juju/control/scp.py
from argparse import RawDescriptionHelpFormatter
import os
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control.utils import (
get_environment, get_ip_address_for_machine, get_ip_address_for_unit,
parse_passthrough_args, ParseError)
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"scp",
help=command.__doc__,
usage=("%(prog)s [-h] [-e ENV] "
"[remote_host:]file1 ... [remote_host:]file2"),
formatter_class=RawDescriptionHelpFormatter,
description=(
"positional arguments:\n"
" [remote_host:]file The remote host can be the name of either\n"
" a Juju unit/machine or a remote system"))
sub_parser.add_argument(
"--environment", "-e",
help="Environment to operate on.", metavar="ENV")
return sub_parser
def passthrough(options, extra):
"""Second parsing phase to parse `extra` to passthrough to scp itself.
Partitions into flags and file specifications.
"""
flags, positional = parse_passthrough_args(extra, "cFiloPS")
if not positional:
raise ParseError("too few arguments")
options.scp_flags = flags
options.paths = positional
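The passthrough phase above delegates to `parse_passthrough_args`, with the string `"cFiloPS"` naming the scp flags that consume a following value. A minimal, hypothetical sketch of that partitioning (the real helper lives in `juju.control.utils` and may differ in detail):

```python
def split_passthrough_args(args, flags_taking_value):
    """Partition CLI args into flags (plus their values) and positionals.

    `flags_taking_value` lists single-letter flags that consume the next
    argument, mirroring the "cFiloPS" string passed for scp.
    """
    flags, positional = [], []
    i = 0
    while i < len(args):
        arg = args[i]
        if arg.startswith("-") and len(arg) > 1:
            flags.append(arg)
            # A flag like "-o" that takes a value also consumes the
            # following argument.
            if (len(arg) == 2 and arg[1] in flags_taking_value
                    and i + 1 < len(args)):
                i += 1
                flags.append(args[i])
        else:
            positional.append(arg)
        i += 1
    return flags, positional
```

For example, `split_passthrough_args(["-o", "X=y", "unit/0:/tmp/a", "."], "cFiloPS")` keeps `-o X=y` together as flags and leaves the two file specifications as positionals.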
def open_scp(flags, paths):
# XXX - TODO - Might be nice if we had the ability to get the user's
# private key path and utilize it here, ie the symmetric end to
# get user public key.
args = ["scp"]
# Unlike ssh, choose not to share connections by default, given
# that the target usage may be for large files. The user's ssh
# config would probably be the best place to get this anyway.
args.extend(flags)
args.extend(paths)
os.execvp("scp", args)
@inlineCallbacks
def _expand_unit_or_machine(client, provider, path):
"""Expands service unit or machine ID into DNS name"""
parts = path.split(":")
if len(parts) > 1:
remote_system = parts[0]
ip_address = None
if remote_system.isdigit():
# machine id, will not pick up dotted IP addresses
ip_address, _ = yield get_ip_address_for_machine(
client, provider, remote_system)
elif "/" in remote_system:
# service unit
ip_address, _ = yield get_ip_address_for_unit(
client, provider, remote_system)
if ip_address:
returnValue("ubuntu@%s:%s" % (ip_address, ":".join(parts[1:])))
returnValue(path) # no need to expand
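The expansion above dispatches on the text before the first colon: digits mean a machine id, a `/` means a service unit, and anything else is passed through to scp as an ordinary remote host. That classification, isolated as a hypothetical pure helper (the real code then resolves machines and units to addresses asynchronously):

```python
def classify_scp_target(path):
    """Classify an scp path the way _expand_unit_or_machine dispatches it.

    Returns one of "local", "machine", "unit", or "host". Illustrative
    only; not part of the juju source.
    """
    remote, sep, _ = path.partition(":")
    if not sep:
        return "local"    # no colon: a plain local path
    if remote.isdigit():
        return "machine"  # juju machine id, e.g. "2:/tmp/log"
    if "/" in remote:
        return "unit"     # service unit, e.g. "mysql/0:/tmp/log"
    return "host"         # ordinary remote host for scp
```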
@inlineCallbacks
def command(options):
"""Use scp to copy files to/from given unit or machine.
"""
# Unlike juju ssh, no attempt to verify liveness of the agent,
# instead it's just a matter of whether the underlying scp will work
# or not.
environment = get_environment(options)
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
paths = [(yield _expand_unit_or_machine(client, provider, path))
for path in options.paths]
open_scp(options.scp_flags, paths)
finally:
yield client.close()
juju-0.7.orig/juju/control/ssh.py
from argparse import RawDescriptionHelpFormatter
import os
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import (
get_environment, get_ip_address_for_machine, get_ip_address_for_unit,
parse_passthrough_args, ParseError)
from juju.state.errors import MachineStateNotFound
from juju.state.sshforward import prepare_ssh_sharing
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"ssh",
help=command.__doc__,
usage=("%(prog)s [-h] [-e ENV] unit_or_machine [command]"),
formatter_class=RawDescriptionHelpFormatter,
description=(
"positional arguments:\n"
" unit_or_machine Name of unit or machine\n"
" [command] Optional command to run on machine"))
sub_parser.add_argument(
"--environment", "-e",
help="Environment to operate on.", metavar="ENV")
return sub_parser
def passthrough(options, extra):
"""Second parsing phase to parse `extra` to passthrough to ssh itself.
Partitions into flags, unit_or_machine, and optional ssh command.
"""
flags, positional = parse_passthrough_args(extra, "bcDeFIiLlmOopRSWw")
if not positional:
raise ParseError("too few arguments")
options.ssh_flags = flags
options.unit_or_machine = positional.pop(0)
options.ssh_command = positional # if any
def open_ssh(flags, ip_address, ssh_command):
# XXX - TODO - Might be nice if we had the ability to get the user's
# private key path and utilize it here, ie the symmetric end to
# get user public key.
args = ["ssh"]
args.extend(prepare_ssh_sharing())
args.extend(flags)
args.extend(["ubuntu@%s" % ip_address])
args.extend(ssh_command)
os.execvp("ssh", args)
@inlineCallbacks
def command(options):
"""Launch an ssh shell on the given unit or machine.
"""
environment = get_environment(options)
provider = environment.get_machine_provider()
client = yield provider.connect()
label = machine = unit = None
# First check if it's a juju machine id
if options.unit_or_machine.isdigit():
options.log.debug(
"Fetching machine address using juju machine id.")
ip_address, machine = yield get_ip_address_for_machine(
client, provider, options.unit_or_machine)
machine.get_ip_address = get_ip_address_for_machine
label = "machine"
# Next check if it's a unit
elif "/" in options.unit_or_machine:
options.log.debug(
"Fetching machine address using unit name.")
ip_address, unit = yield get_ip_address_for_unit(
client, provider, options.unit_or_machine)
unit.get_ip_address = get_ip_address_for_unit
label = "unit"
else:
raise MachineStateNotFound(options.unit_or_machine)
agent_state = machine or unit
# Now verify the relevant agent is operational via its agent.
exists_d, watch_d = agent_state.watch_agent()
exists = yield exists_d
if not exists:
# If not wait on it.
options.log.info("Waiting for %s to come up." % label)
yield watch_d
# Double check the address we have is valid, else refetch.
if ip_address is None:
ip_address, machine = yield agent_state.get_ip_address(
client, provider, options.unit_or_machine)
yield client.close()
options.log.info("Connecting to %s %s at %s",
label, options.unit_or_machine, ip_address)
open_ssh(options.ssh_flags, ip_address, options.ssh_command)
juju-0.7.orig/juju/control/status.py
from fnmatch import fnmatch
import argparse
import functools
import json
import sys
import yaml
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control.utils import get_environment
from juju.errors import ProviderError
from juju.state.errors import UnitRelationStateNotFound
from juju.state.charm import CharmStateManager
from juju.state.machine import MachineStateManager
from juju.state.service import ServiceStateManager, parse_service_name
from juju.state.relation import RelationStateManager
from juju.unit.workflow import WorkflowStateClient
# a minimal registry for renderers
# maps from format name to callable
renderers = {}
def configure_subparser(subparsers):
sub_parser = subparsers.add_parser(
"status", help=status.__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
description=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to report status for.")
sub_parser.add_argument("--output",
help="An optional filename to output "
"the result to",
type=argparse.FileType("w"),
default=sys.stdout)
sub_parser.add_argument("--format",
help="Select an output format",
default="yaml"
)
sub_parser.add_argument("scope",
nargs="*",
help="scope of status request; service or unit names "
"must match at least one of these")
return sub_parser
def command(options):
"""Output status information about a deployment.
This command will report on the runtime state of various system
entities.
$ juju status
will return data on entire default deployment.
$ juju status -e DEPLOYMENT2
will return data on the DEPLOYMENT2 environment.
"""
environment = get_environment(options)
renderer = renderers.get(options.format)
if renderer is None:
formats = sorted(renderers.keys())
formats = ", ".join(formats)
raise SystemExit(
"Unsupported render format %s (valid formats: %s)." % (
options.format, formats))
return status(environment,
options.scope,
renderer,
options.output,
options.log)
@inlineCallbacks
def status(environment, scope, renderer, output, log):
"""Display environment status information.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
# Collect status information
command = StatusCommand(client, provider, log)
state = yield command(scope)
#state = yield collect(scope, provider, client, log)
finally:
yield client.close()
# Render
renderer(state, output, environment)
def digest_scope(scope):
"""Parse scope used to filter status information.
`scope`: a list of name specifiers. see collect()
Returns a tuple of (service_filter, unit_filter). The values in
either filter list will be passed as a glob to fnmatch
"""
services = []
units = []
if scope is not None:
for value in scope:
if "/" in value:
units.append(value)
else:
services.append(value)
return (services, units)
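`digest_scope` splits the positional scope arguments on a simple rule (a `/` marks a unit name), and the per-service and per-unit filters are later applied as globs via `fnmatch`. Both behaviors together, as a compact sketch (`matches` is an illustrative helper, not in the source):

```python
from fnmatch import fnmatch

def digest_scope(scope):
    """Split scope names: names containing "/" are units, the rest are
    services (mirrors the helper above)."""
    services, units = [], []
    for value in scope or ():
        (units if "/" in value else services).append(value)
    return services, units

def matches(name, patterns):
    # Each filter entry is treated as a glob, as in _process_services
    # and _process_units below.
    return any(fnmatch(name, p) for p in patterns)
```

So `juju status 'word*' mysql/0` would filter services against the glob `word*` and units against `mysql/0`.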
class StatusCommand(object):
def __init__(self, client, provider, log):
"""
Callable status command object.
`client`: ZK client connection
`provider`: machine provider for the environment
`log`: a Python stdlib logger.
"""
self.client = client
self.provider = provider
self.log = log
self.service_manager = ServiceStateManager(client)
self.relation_manager = RelationStateManager(client)
self.machine_manager = MachineStateManager(client)
self.charm_manager = CharmStateManager(client)
self._reset()
def _reset(self, scope=None):
# init per-run state
# self.state is assembled by the various process methods
# intermediate access to state is made more convenient
# using these references to its internals.
self.service_data = {} # service name: service info
self.machine_data = {} # machine id: machine state
self.unit_data = {} # unit_name: unit_info
# used in collecting subordinates (which are added to state in a
# two-phase pass)
self.subordinates = {} # service : set(principal service names)
self.state = dict(services=self.service_data,
machines=self.machine_data)
# Filtering info
self.seen_machines = set()
self.filter_services, self.filter_units = digest_scope(scope)
@inlineCallbacks
def __call__(self, scope=None):
"""Extract status information into nested dicts for rendering.
`scope`: an optional list of name specifiers. Globbing based wildcards
supported. Defaults to all units, services and relations.
"""
self._reset(scope)
# Pass 1 Gather Data (including principals and subordinates)
# this builds unit info and container relationships
# which is assembled in pass 2 below
yield self._process_services()
# Pass 2: Nest information according to principal/subordinates
# rules
self._process_subordinates()
yield self._process_machines()
returnValue(self.state)
@inlineCallbacks
def _process_services(self):
"""
For each service gather the following information::
<service name>:
charm: <charm id>
exposed: <exposed flag>
relations: <relation data>
units: <unit data>
"""
services = yield self.service_manager.get_all_service_states()
for service in services:
if len(self.filter_services):
found = False
for filter_service in self.filter_services:
if fnmatch(service.service_name, filter_service):
found = True
break
if not found:
continue
yield self._process_service(service)
@inlineCallbacks
def _process_service(self, service):
"""
Gather the service info (described in _process_services).
`service`: ServiceState instance
"""
relation_data = {}
service_data = self.service_data
charm_id = yield service.get_charm_id()
charm = yield self.charm_manager.get_charm_state(charm_id)
service_data[service.service_name] = (
dict(units={},
charm=charm.id,
relations=relation_data))
if (yield service.is_subordinate()):
service_data[service.service_name]["subordinate"] = True
yield self._process_expose(service)
relations, rel_svc_map = yield self._process_relation_map(
service)
unit_matched = yield self._process_units(service,
relations,
rel_svc_map)
# after filtering units check if any matched or remove the
# service from the output
if self.filter_units and not unit_matched:
del service_data[service.service_name]
return
yield self._process_relations(service, relations, rel_svc_map)
@inlineCallbacks
def _process_units(self, service, relations, rel_svc_map):
"""
Gather unit information for a service::
<unit name>:
agent-state: <unit workflow state>
machine: <machine id>
open-ports: ["port/protocol", ...]
public-address: <public dns name or address>
subordinates: <nested subordinate unit info>
`service`: ServiceState instance
`relations`: list of ServiceRelationState instance for this service
`rel_svc_map`: maps relation internal ids to the remote endpoint
service name. This references the name of the remote
endpoint and so is generated per service.
"""
units = yield service.get_all_unit_states()
unit_matched = False
for unit in units:
if len(self.filter_units):
found = False
for filter_unit in self.filter_units:
if fnmatch(unit.unit_name, filter_unit):
found = True
break
if not found:
continue
yield self._process_unit(service, unit, relations, rel_svc_map)
unit_matched = True
returnValue(unit_matched)
@inlineCallbacks
def _process_unit(self, service, unit, relations, rel_svc_map):
""" Generate unit info for a single unit of a single service.
`unit`: ServiceUnitState
see `_process_units` for an explanation of other arguments.
"""
u = self.unit_data[unit.unit_name] = dict()
container = yield unit.get_container()
if container:
u["container"] = container.unit_name
self.subordinates.setdefault(unit.service_name,
set()).add(container.service_name)
machine_id = yield unit.get_assigned_machine_id()
u["machine"] = machine_id
unit_workflow_client = WorkflowStateClient(self.client, unit)
unit_state = yield unit_workflow_client.get_state()
if not unit_state:
u["agent-state"] = "pending"
else:
unit_connected = yield unit.has_agent()
u["agent-state"] = unit_state.replace("_", "-") \
if unit_connected else "down"
exposed = self.service_data[service.service_name].get("exposed")
open_ports = yield unit.get_open_ports()
if exposed:
u["open-ports"] = ["{port}/{proto}".format(**port_info)
for port_info in open_ports]
elif open_ports:
# Ensure a hint is provided that there are open ports if
# not exposed by setting the key in the output
self.service_data[service.service_name]["exposed"] = False
u["public-address"] = yield unit.get_public_address()
# indicate we should include information about this
# machine later
self.seen_machines.add(machine_id)
# collect info on each relation for the service unit
yield self._process_unit_relations(service, unit,
relations, rel_svc_map)
@inlineCallbacks
def _process_relation_map(self, service):
"""Generate a mapping from a service's relations to the service name of
the remote endpoints.
returns: ([ServiceRelationState, ...], mapping)
"""
relation_data = self.service_data[service.service_name]["relations"]
relation_mgr = self.relation_manager
relations = yield relation_mgr.get_relations_for_service(service)
rel_svc_map = {}
for relation in relations:
rel_services = yield relation.get_service_states()
# A single related service implies a peer relation. More
# imply a bi-directional provides/requires relationship.
# In the latter case we omit the local side of the relation
# when reporting.
if len(rel_services) > 1:
# Filter out self from multi-service relations.
rel_services = [
rsn for rsn in rel_services if rsn.service_name !=
service.service_name]
if len(rel_services) > 1:
raise ValueError("Unexpected relationship with more "
"than 2 endpoints")
rel_service = rel_services[0]
relation_data.setdefault(relation.relation_name, set()).add(
rel_service.service_name)
rel_svc_map[relation.internal_relation_id] = (
rel_service.service_name)
returnValue((relations, rel_svc_map))
@inlineCallbacks
def _process_relations(self, service, relations, rel_svc_map):
"""Generate relation information for a given service
Each service with relations will have a relations dict
nested under it with one or more relations described::
relations:
<relation name>:
- <remote service name>
"""
relation_data = self.service_data[service.service_name]["relations"]
for relation in relations:
rel_services = yield relation.get_service_states()
# A single related service implies a peer relation. More
# imply a bi-directional provides/requires relationship.
# In the latter case we omit the local side of the relation
# when reporting.
if len(rel_services) > 1:
# Filter out self from multi-service relations.
rel_services = [
rsn for rsn in rel_services if rsn.service_name !=
service.service_name]
if len(rel_services) > 1:
raise ValueError("Unexpected relationship with more "
"than 2 endpoints")
rel_service = rel_services[0]
relation_data.setdefault(
relation.relation_name, set()).add(
rel_service.service_name)
rel_svc_map[relation.internal_relation_id] = (
rel_service.service_name)
# Normalize the sets back to lists
for r in relation_data:
relation_data[r] = sorted(relation_data[r])
@inlineCallbacks
def _process_unit_relations(self, service, unit, relations, rel_svc_map):
"""Collect UnitRelationState information per relation and per unit.
Includes information under each unit for its relations including
its relation state and information about any possible errors.
see `_process_relations` for argument information
"""
u = self.unit_data[unit.unit_name]
relation_errors = {}
for relation in relations:
try:
relation_unit = yield relation.get_unit_state(unit)
except UnitRelationStateNotFound:
# This exception will occur when relations are
# established between services without service
# units, and therefore never have any
# corresponding service relation units.
# UPDATE: common with subordinate services, and
# some testing scenarios.
continue
relation_workflow_client = WorkflowStateClient(
self.client, relation_unit)
workflow_state = yield relation_workflow_client.get_state()
rel_svc_name = rel_svc_map.get(relation.internal_relation_id)
if rel_svc_name and workflow_state not in ("up", None):
relation_errors.setdefault(
relation.relation_name, set()).add(rel_svc_name)
if relation_errors:
# Normalize sets and store.
u["relation-errors"] = dict(
[(r, sorted(relation_errors[r])) for r in relation_errors])
def _process_subordinates(self):
"""Properly nest subordinate units under their principal service's
unit nodes. Services and units are generated in one pass, then
iterated by this method to structure the output data to reflect
actual unit containment.
Subordinate units will include the following::
subordinate: true
subordinate-to:
- <principal service name>
Principal services that have subordinates will include::
subordinates:
<subordinate unit name>:
agent-state: <agent state>
"""
service_data = self.service_data
for unit_name, u in self.unit_data.iteritems():
container = u.get("container")
if container:
d = self.unit_data[container].setdefault("subordinates", {})
d[unit_name] = u
# remove keys that don't appear in the output or that come from the container
for key in ("container", "machine", "public-address"):
u.pop(key, None)
else:
service_name = parse_service_name(unit_name)
service_data[service_name]["units"][unit_name] = u
for sub_service, principal_services in self.subordinates.iteritems():
service_data[sub_service]["subordinate-to"] = sorted(
principal_services)
service_data[sub_service].pop("units", None)
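The nesting step above can be reduced to a small pure function: any unit carrying a `container` key is moved under its principal's `subordinates` map, dropping the keys that only make sense at the top level. A sketch under that assumption (not the actual method, which also rewires service-level data):

```python
def nest_subordinates(unit_data):
    """Nest units with a "container" key under their principal unit.

    Mutates the per-unit dicts in place and returns the top-level
    units keyed by name.
    """
    top = {}
    for name, info in unit_data.items():
        container = info.get("container")
        if container:
            # Attach under the principal unit's "subordinates" map.
            unit_data[container].setdefault("subordinates", {})[name] = info
            # These keys are redundant once nested in the container.
            for key in ("container", "machine", "public-address"):
                info.pop(key, None)
        else:
            top[name] = info
    return top
```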
@inlineCallbacks
def _process_expose(self, service):
"""Indicate if a service is exposed or not."""
exposed = yield service.get_exposed_flag()
if exposed:
self.service_data[service.service_name].update(exposed=exposed)
returnValue(exposed)
@inlineCallbacks
def _process_machines(self):
"""Gather machine information.
machines:
<machine id>:
agent-state: <agent state>
dns-name: <dns name>
instance-id: <provider instance id>
instance-state: <provider state>
"""
machines = yield self.machine_manager.get_all_machine_states()
provider_machines = yield self._process_provider_machines()
for machine_state in machines:
if (self.filter_services or self.filter_units) and \
machine_state.id not in self.seen_machines:
continue
yield self._process_machine(machine_state, provider_machines)
@inlineCallbacks
def _process_provider_machines(self):
"""Retrieve known provider machines into map[instance-id] = machine.
"""
index = {}
try:
provider_machines = yield self.provider.get_machines()
except ProviderError:
self.log.exception(
"Can't retrieve machine information from provider")
returnValue(index)
# missing is only when requesting by id.
for m in provider_machines:
index[m.instance_id] = m
returnValue(index)
@inlineCallbacks
def _process_machine(self, machine_state, provider_machines):
"""
`machine_state`: MachineState instance
"""
instance_id = yield machine_state.get_instance_id()
m = {"instance-id": instance_id \
if instance_id is not None else "pending"}
if instance_id is None:
self.machine_data[machine_state.id] = m
return
pm = provider_machines.get(instance_id)
if pm is None:
self.log.exception(
"Machine provider information missing: machine %s" % (
machine_state.id))
self.machine_data[machine_state.id] = m
return
m["dns-name"] = pm.dns_name
m["instance-state"] = pm.state
if (yield machine_state.has_agent()):
# if the agent's connected, we're fine
m["agent-state"] = "running"
else:
units = (
yield machine_state.get_all_service_unit_states())
for unit in units:
unit_workflow_client = WorkflowStateClient(
self.client, unit)
if (yield unit_workflow_client.get_state()):
# for unit to have a state, its agent must
# have run, which implies the machine agent
# must have been running correctly at some
# point in the past
m["agent-state"] = "down"
break
# otherwise we're probably just still waiting
if "agent-state" not in m:
m["agent-state"] = "not-started"
self.machine_data[machine_state.id] = m
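The machine agent-state decision above condenses to three cases: a live machine agent is "running"; a dead agent whose units have ever recorded workflow state must have run before, so it is "down"; otherwise it presumably never started. As an illustrative helper (hypothetical name, not in the source):

```python
def machine_agent_state(machine_agent_alive, any_unit_ever_ran):
    """Condensed form of the agent-state branches in _process_machine."""
    if machine_agent_alive:
        return "running"
    if any_unit_ever_ran:
        # A unit workflow state implies the machine agent ran at some
        # point in the past, so it is down rather than never-started.
        return "down"
    return "not-started"
```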
def render_yaml(data, filelike, environment):
# remove the root node's empty name
yaml.dump(
data, filelike, default_flow_style=False, Dumper=yaml.CSafeDumper)
renderers["yaml"] = render_yaml
def jsonify(data, filelike, pretty=True, **kwargs):
args = dict(skipkeys=True)
args.update(kwargs)
if pretty:
args["sort_keys"] = True
args["indent"] = 4
return json.dump(data, filelike, **args)
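The JSON helper's defaults can be exercised with an in-memory buffer; this mirror of the function above (restated so the snippet is self-contained) shows that pretty-printing sorts keys and indents by four spaces:

```python
import io
import json

def jsonify(data, filelike, pretty=True, **kwargs):
    # Mirror of the renderer helper above: drop unserializable keys
    # and, when pretty-printing, emit sorted keys with 4-space indents.
    args = dict(skipkeys=True)
    args.update(kwargs)
    if pretty:
        args["sort_keys"] = True
        args["indent"] = 4
    return json.dump(data, filelike, **args)

buf = io.StringIO()
jsonify({"b": 1, "a": 2}, buf)
# with pretty=True the keys come out sorted, "a" before "b"
```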
def render_json(data, filelike, environment):
jsonify(data, filelike)
renderers["json"] = render_json
# Supplement kwargs provided to pydot.Cluster/Edge/Node.
# The first key is used as the data type selector.
DEFAULT_STYLE = {
"service_container": {
"bgcolor": "#dedede",
},
"service": {
"color": "#772953",
"shape": "component",
"style": "filled",
"fontcolor": "#ffffff",
},
"unit": {
"color": "#DD4814",
"fontcolor": "#ffffff",
"shape": "box",
"style": "filled",
},
"subunit": {
"color": "#c9c9c9",
"fontcolor": "#ffffff",
"shape": "box",
"style": "filled",
"rank": "same"
},
"relation": {
"dir": "none"}
}
def safe_dot_label(name):
"""Convert a name to a label safe for use in DOT.
Works around an issue where service names like wiki-db will produce DOT
items with names like cluster_wiki-db where the trailing '-' invalidates
the name.
"""
return name.replace("-", "_")
def render_dot(
data, filelike, environment, format="dot", style=DEFAULT_STYLE):
"""Render a Graphviz output of the status information.
"""
try:
import pydot
except ImportError:
raise SystemExit("You need to install the pydot "
"library to support DOT visualizations")
dot = pydot.Dot(graph_name=environment.name)
# first create a cluster for each service
seen_relations = set()
for service_name, service in data["services"].iteritems():
cluster = pydot.Cluster(
safe_dot_label(service_name),
shape="component",
label="%s service" % service_name,
**style["service_container"])
snode = pydot.Node(safe_dot_label(service_name),
label="<%s<br/>%s>" % (
service_name,
service["charm"]),
**style["service"])
cluster.add_node(snode)
for unit_name, unit in service.get("units", {}).iteritems():
subordinates = unit.get("subordinates")
if subordinates:
container = pydot.Subgraph()
un = pydot.Node(safe_dot_label(unit_name),
label="<%s<br/>%s>" % (
unit_name,
unit.get("public-address")),
**style["unit"])
container.add_node(un)
for sub in subordinates:
s = pydot.Node(safe_dot_label(sub),
label="<%s<br/>>" % (sub),
**style["subunit"])
container.add_node(s)
container.add_edge(pydot.Edge(un, s, **style["relation"]))
cluster.add_subgraph(container)
else:
un = pydot.Node(safe_dot_label(unit_name),
label="<%s<br/>%s>" % (
unit_name,
unit.get("public-address")),
**style["unit"])
cluster.add_node(un)
cluster.add_edge(pydot.Edge(snode, un))
dot.add_subgraph(cluster)
# now map the relationships
for kind, relation in service["relations"].iteritems():
if not isinstance(relation, list):
relation = (relation,)
for rel in relation:
src = safe_dot_label(rel)
dest = safe_dot_label(service_name)
descriptor = ":".join(tuple(sorted((src, dest))))
#kind = safe_dot_label("%s/%s" % (descriptor, kind))
if descriptor not in seen_relations:
seen_relations.add(descriptor)
dot.add_edge(pydot.Edge(
src,
dest,
label=kind,
**style["relation"]
))
if format == "dot":
filelike.write(dot.to_string())
else:
filelike.write(dot.create(format=format))
renderers["dot"] = render_dot
renderers["svg"] = functools.partial(render_dot, format="svg")
renderers["png"] = functools.partial(render_dot, format="png")
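`render_dot` keys each relation edge by a sorted, ":"-joined descriptor so that a bi-directional relation between two services is drawn only once. That deduplication in isolation, as an illustrative sketch:

```python
def dedupe_relation_edges(edges):
    """Keep one edge per unordered (src, dest) pair, as render_dot does
    with its sorted ":"-joined descriptor."""
    seen = set()
    kept = []
    for src, dest in edges:
        # Sorting makes (a, b) and (b, a) produce the same descriptor.
        descriptor = ":".join(sorted((src, dest)))
        if descriptor not in seen:
            seen.add(descriptor)
            kept.append((src, dest))
    return kept
```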
juju-0.7.orig/juju/control/terminate_machine.py
"""Implementation of terminate-machine subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import sync_environment_state, get_environment
from juju.errors import CannotTerminateMachine
from juju.state.errors import MachineStateNotFound
from juju.state.machine import MachineStateManager
def configure_subparser(subparsers):
"""Configure terminate-machine subcommand"""
sub_parser = subparsers.add_parser(
"terminate-machine", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="Environment to terminate machines in.")
sub_parser.add_argument(
"machine_ids", metavar="ID", type=int, nargs="*",
help="Machine IDs to terminate")
return sub_parser
def command(options):
"""Terminate machines in an environment."""
environment = get_environment(options)
return terminate_machine(
options.environments,
environment,
options.verbose,
options.log,
options.machine_ids)
@inlineCallbacks
def terminate_machine(config, environment, verbose, log, machine_ids):
"""Terminates the machines in `machine_ids`.
Like the underlying code in MachineStateManager, it's permissible
if the machine ID is already terminated or even never running. If
we determine this is not desired behavior, presumably propagate
that back to the state manager.
XXX However, we currently special case support of not terminating
the "root" machine, that is the one running the provisioning
agent. At some point, this will be managed like any other service,
but until then it seems best to ensure it's not terminated at this
level.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
terminated_machine_ids = []
try:
yield sync_environment_state(client, config, environment.name)
machine_state_manager = MachineStateManager(client)
for machine_id in machine_ids:
if machine_id == 0:
raise CannotTerminateMachine(
0, "environment would be destroyed")
removed = yield machine_state_manager.remove_machine_state(
machine_id)
if not removed:
raise MachineStateNotFound(machine_id)
terminated_machine_ids.append(machine_id)
finally:
yield client.close()
if terminated_machine_ids:
log.info(
"Machines terminated: %s",
", ".join(str(id) for id in terminated_machine_ids))
juju-0.7.orig/juju/control/tests/
juju-0.7.orig/juju/control/unexpose.py
"""Implementation of unexpose subcommand"""
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment
from juju.state.service import ServiceStateManager
def configure_subparser(subparsers):
"""Configure unexpose subcommand"""
sub_parser = subparsers.add_parser("unexpose", help=command.__doc__)
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"service_name",
help="Name of the service that should be unexposed.")
return sub_parser
def command(options):
"""Remove internet access to a service."""
environment = get_environment(options)
return unexpose(
options.environments,
environment,
options.verbose,
options.log,
options.service_name)
@inlineCallbacks
def unexpose(
config, environment, verbose, log, service_name):
"""Unexpose a service."""
provider = environment.get_machine_provider()
client = yield provider.connect()
try:
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(service_name)
already_exposed = yield service_state.get_exposed_flag()
if already_exposed:
yield service_state.clear_exposed_flag()
log.info("Service %r was unexposed.", service_name)
else:
log.info("Service %r was not exposed.", service_name)
finally:
yield client.close()
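The unexpose module follows the same shape as every juju.control subcommand: a `configure_subparser()` that attaches the module's flags, and a `command()` entry point that `main()` dispatches to. A standalone sketch of that registration pattern with plain `argparse` (hypothetical, not part of juju; the real code wires many such modules into one shared parser):

```python
import argparse

def configure_subparser(subparsers):
    """Configure an unexpose-style subcommand."""
    sub_parser = subparsers.add_parser(
        "unexpose", help="Remove internet access to a service.")
    sub_parser.add_argument(
        "--environment", "-e", help="juju environment to operate in.")
    sub_parser.add_argument(
        "service_name", help="Name of the service that should be unexposed.")
    return sub_parser

# main() builds one parser and lets each subcommand module register itself.
parser = argparse.ArgumentParser(prog="juju")
subparsers = parser.add_subparsers(dest="subcommand")
configure_subparser(subparsers)

options = parser.parse_args(["unexpose", "-e", "dev", "wordpress"])
print(options.subcommand, options.environment, options.service_name)
# unexpose dev wordpress
```

This keeps each subcommand's flags next to its implementation while `main()` owns only the dispatch.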
juju-0.7.orig/juju/control/upgrade_charm.py
"""Implementation of charm-upgrade subcommand"""
import os
from twisted.internet.defer import inlineCallbacks
from juju.control.utils import get_environment, expand_path
from juju.charm.directory import CharmDirectory
from juju.charm.errors import NewerCharmNotFound
from juju.charm.publisher import CharmPublisher
from juju.charm.repository import resolve
from juju.charm.url import CharmURL
from juju.state.service import ServiceStateManager
from juju.unit.workflow import is_unit_running
def configure_subparser(subparsers):
"""Configure charm-upgrade subcommand"""
sub_parser = subparsers.add_parser("upgrade-charm", help=command.__doc__,
description=upgrade_charm.__doc__)
sub_parser.add_argument(
"--dry-run", "-n", action="store_true",
help="Dry-Run, show which charm would be deployed for upgrade.")
sub_parser.add_argument(
"--force", action="store_true", default=False,
help="Force an upgrade, regardless of unit state, no hooks executed.")
sub_parser.add_argument(
"--environment", "-e",
help="juju environment to operate in.")
sub_parser.add_argument(
"--repository",
help="Directory for charm lookup and retrieval",
default=os.environ.get('JUJU_REPOSITORY'),
type=expand_path)
sub_parser.add_argument(
"service_name",
help="Name of the service that should be upgraded")
return sub_parser
def command(options):
"""Upgrade a service's charm."""
environment = get_environment(options)
return upgrade_charm(
options.environments,
environment,
options.verbose,
options.log,
options.repository,
options.service_name,
options.dry_run,
options.force)
@inlineCallbacks
def upgrade_charm(
config, environment, verbose, log, repository_path, service_name,
dry_run, force):
"""Upgrades a service's charm.
First determines if an upgrade is available, then updates the
service charm reference, and marks the units as needing upgrades.
If --repository is not specified, it will be taken from the environment
variable JUJU_REPOSITORY.
"""
provider = environment.get_machine_provider()
client = yield provider.connect()
service_manager = ServiceStateManager(client)
service_state = yield service_manager.get_service_state(service_name)
old_charm_id = yield service_state.get_charm_id()
old_charm_url = CharmURL.parse(old_charm_id)
old_charm_url.assert_revision()
repo, charm_url = resolve(
str(old_charm_url.with_revision(None)),
repository_path,
environment.default_series)
new_charm_url = charm_url.with_revision(
(yield repo.latest(charm_url)))
if charm_url.collection.schema == "local":
if old_charm_url.revision >= new_charm_url.revision:
new_revision = old_charm_url.revision + 1
charm = yield repo.find(new_charm_url)
if isinstance(charm, CharmDirectory):
if dry_run:
log.info("%s would be set to revision %s",
charm.path, new_revision)
else:
log.info("Setting %s to revision %s",
charm.path, new_revision)
charm.set_revision(new_revision)
new_charm_url.revision = new_revision
new_charm_id = str(new_charm_url)
# Verify it's newer than what's deployed
if not new_charm_url.revision > old_charm_url.revision:
if dry_run:
log.info("Service already running latest charm %r", old_charm_id)
else:
raise NewerCharmNotFound(old_charm_id)
elif dry_run:
log.info("Service would be upgraded from charm %r to %r",
old_charm_id, new_charm_id)
# On dry run, stop before modifying state.
if not dry_run:
# Publish the new charm
storage = provider.get_file_storage()
publisher = CharmPublisher(client, storage)
charm = yield repo.find(new_charm_url)
yield publisher.add_charm(new_charm_id, charm)
result = yield publisher.publish()
charm_state = result[0]
# Update the service charm reference
yield service_state.set_charm_id(charm_state.id)
# Update the service configuration
# Mark the units for upgrades
units = yield service_state.get_all_unit_states()
for unit in units:
if force:
# Save some roundtrips
if not dry_run:
yield unit.set_upgrade_flag(force=force)
continue
running, state = yield is_unit_running(client, unit)
if not force and not running:
log.info(
"Unit %r is not in a running state (state: %r), won't upgrade",
unit.unit_name, state or "uninitialized")
continue
if not dry_run:
yield unit.set_upgrade_flag()
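The local-schema branch in `upgrade_charm` above encodes a small revision rule. A standalone sketch, using a hypothetical helper name that is not part of juju:

```python
def next_local_revision(deployed, latest_in_repo):
    """Sketch of upgrade_charm's rule for local charms: if the deployed
    revision is already >= the newest revision found in the local
    repository, bump one past the deployed revision so the upgrade
    candidate is guaranteed to be newer; otherwise use the repository's."""
    if deployed >= latest_in_repo:
        return deployed + 1
    return latest_in_repo

print(next_local_revision(62, 62))  # 63: same revision, force a bump
print(next_local_revision(62, 60))  # 63: repo lags, still bump past deployed
print(next_local_revision(62, 70))  # 70: repo is ahead, use its revision
```

In the real code the bump is only written back (via `charm.set_revision`) when the charm is a `CharmDirectory`, since an archived charm's revision cannot be rewritten in place.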
juju-0.7.orig/juju/control/utils.py
import os
from itertools import tee
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.environment.errors import EnvironmentsConfigError
from juju.state.errors import ServiceUnitStateMachineNotAssigned
from juju.state.environment import EnvironmentStateManager
from juju.state.machine import MachineStateManager
from juju.state.service import ServiceStateManager
def get_environment(options):
env_name = options.environment or os.environ.get("JUJU_ENV")
environment = options.environments.get(env_name)
if environment is None and options.environment:
raise EnvironmentsConfigError(
"Invalid environment %r" % options.environment)
elif environment is None:
environment = options.environments.get_default()
return environment
def sync_environment_state(client, config, name):
"""Push the local environment config to zookeeper.
This needs to be done:
* On any command which can cause the provisioning agent to take action
against the provider (i.e. create/destroy a machine), because the PA
needs to use credentials stored in the environment config to do so.
* On any command which uses constraints-related code (even if indirectly)
because Constraints objects are provider-specific, and need to be
created with the help of a MachineProvider; and the only way state code
can get a MachineProvider is by getting one from ZK (we certainly don't
want to thread the relevant provider from juju.control and/or the PA
itself all the way through the state code). So, we sync, to ensure
that state code can use an EnvironmentStateManager to get a provider.
"""
esm = EnvironmentStateManager(client)
return esm.set_config_state(config, name)
@inlineCallbacks
def get_ip_address_for_machine(client, provider, machine_id):
"""Returns public DNS name and machine state for the machine id.
:param client: a connected zookeeper client.
:param provider: the `MachineProvider` in charge of the juju.
:param machine_id: machine ID of the desired machine to connect to.
:return: tuple of the DNS name and a `MachineState`.
"""
manager = MachineStateManager(client)
machine_state = yield manager.get_machine_state(machine_id)
instance_id = yield machine_state.get_instance_id()
provider_machine = yield provider.get_machine(instance_id)
returnValue((provider_machine.dns_name, machine_state))
@inlineCallbacks
def get_ip_address_for_unit(client, provider, unit_name):
"""Returns public DNS name and unit state for the service unit.
:param client: a connected zookeeper client.
:param provider: the `MachineProvider` in charge of the juju.
:param unit_name: service unit running on a machine to connect to.
:return: tuple of the DNS name and a `MachineState`.
:raises: :class:`juju.state.errors.ServiceUnitStateMachineNotAssigned`
"""
manager = ServiceStateManager(client)
service_unit = yield manager.get_unit_state(unit_name)
machine_id = yield service_unit.get_assigned_machine_id()
if machine_id is None:
raise ServiceUnitStateMachineNotAssigned(unit_name)
returnValue(
((yield service_unit.get_public_address()), service_unit))
def expand_path(p):
return os.path.abspath(os.path.expanduser(p))
def expand_constraints(s):
if s:
return s.split(" ")
return []
class ParseError(Exception):
"""Used to support returning custom parse errors in passthrough parsing.
Enables similar support to what is seen in argparse, without using its
internals.
"""
def parse_passthrough_args(args, flags_taking_arg=()):
"""Scans left to right, partitioning flags and positional args.
:param args: Unparsed args from argparse
:param flags_taking_arg: One character flags that combine
with arguments.
:return: tuple of flags and positional args
:raises: :class:`juju.control.utils.ParseError`
TODO May need to support long options for other passthrough commands.
"""
args = iter(args)
flags_taking_arg = set(flags_taking_arg)
flags = []
positional = []
while True:
args, peek_args = tee(args)
try:
peeked = peek_args.next()
except StopIteration:
break
if peeked.startswith("-"):
flags.append(args.next())
# Only need to consume the next arg if the flag both takes
# an arg (say -L) and then it has an extra arg following
# (8080:localhost:80), rather than being combined, such as
# -L8080:localhost:80
if len(peeked) == 2 and peeked[1] in flags_taking_arg:
try:
flags.append(args.next())
except StopIteration:
raise ParseError(
"argument -%s: expected one argument" % peeked[1])
else:
# At this point no more flags for the command itself (more
# can follow after the first positional arg, as seen in
# working with ssh, for example), so consume the rest and
# stop parsing options
positional = list(args)
break
return flags, positional
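The scanning loop above can be sketched as a self-contained Python 3 version (hypothetical name `parse_passthrough`; the original uses Python 2's `iterator.next()`):

```python
from itertools import tee

def parse_passthrough(args, flags_taking_arg=()):
    """Sketch of parse_passthrough_args: collect leading flags (plus the
    separate argument of any one-letter flag in flags_taking_arg, e.g.
    '-L 8080:localhost:80' as opposed to the combined '-L8080:...'),
    then pass everything from the first positional argument through."""
    it = iter(args)
    flags_taking_arg = set(flags_taking_arg)
    flags, positional = [], []
    while True:
        # Peek without consuming, by duplicating the iterator.
        it, peek = tee(it)
        try:
            peeked = next(peek)
        except StopIteration:
            break
        if peeked.startswith("-"):
            flags.append(next(it))
            # Consume a following arg only for a bare two-char flag that
            # is declared to take one (e.g. -L 8080:localhost:80).
            if len(peeked) == 2 and peeked[1] in flags_taking_arg:
                try:
                    flags.append(next(it))
                except StopIteration:
                    raise ValueError(
                        "argument -%s: expected one argument" % peeked[1])
        else:
            # First positional: stop option parsing, pass the rest through.
            positional = list(it)
            break
    return flags, positional

flags, positional = parse_passthrough(
    ["-o", "-L", "8080:localhost:80", "host", "-v"], flags_taking_arg="L")
print(flags)       # ['-o', '-L', '8080:localhost:80']
print(positional)  # ['host', '-v']
```

Note that `-v` lands in the positional list: once the first positional is seen, later flags belong to the passthrough command (e.g. ssh), not to juju.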
juju-0.7.orig/juju/control/tests/__init__.py
#
juju-0.7.orig/juju/control/tests/common.py
from twisted.internet import reactor
from twisted.internet.defer import Deferred, inlineCallbacks
from juju.environment.tests.test_config import EnvironmentsConfigTestBase
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ControlToolTest(EnvironmentsConfigTestBase):
def setup_cli_reactor(self):
"""Mock mock out reactor start and stop.
This is necessary when executing the CLI via tests since
commands will run a reactor as part of their execution, then
shut it down. Obviously this would cause issues with running
multiple tests under Twisted Trial.
Returns a `Deferred` that a test can wait on until the mocked
reactor is stopped. At that point, code running in the context of
the mock reactor run is in fact complete, and assertions and
tearDown can now be done.
"""
mock_reactor = self.mocker.patch(reactor)
mock_reactor.run()
mock_reactor.stop()
wait_on_stopped = Deferred()
def f():
wait_on_stopped.callback("reactor has stopped")
self.mocker.call(f)
reactor.running = True
return wait_on_stopped
def setUp(self):
self.log = self.capture_logging()
return super(ControlToolTest, self).setUp()
def setup_exit(self, code=0):
mock_exit = self.mocker.replace("sys.exit")
mock_exit(code)
class MachineControlToolTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(MachineControlToolTest, self).setUp()
# Dummy out the construction of our root machine (id=0); this
# will go away in a later release. Right now there's no service
# unit holding it, so we have to special-case it.
yield self.add_machine_state()
@inlineCallbacks
def destroy_service(self, service_name):
"""Destroys the service equivalently to destroy-service subcommand."""
service_state = yield self.service_state_manager.get_service_state(
service_name)
yield self.service_state_manager.remove_service_state(service_state)
juju-0.7.orig/juju/control/tests/sample_cluster.yaml
machines:
0: {dns-name: ec2-50-19-158-109.compute-1.amazonaws.com, instance-id: i-215dd84f}
1: {dns-name: ec2-50-17-16-228.compute-1.amazonaws.com, instance-id: i-8d58dde3}
2: {dns-name: ec2-72-44-49-114.compute-1.amazonaws.com, instance-id: i-9558ddfb}
3: {dns-name: ec2-50-19-47-106.compute-1.amazonaws.com, instance-id: i-6d5bde03}
4: {dns-name: ec2-174-129-132-248.compute-1.amazonaws.com, instance-id: i-7f5bde11}
5: {dns-name: ec2-50-19-152-136.compute-1.amazonaws.com, instance-id: i-755bde1b}
6: {dns-name: ec2-50-17-168-124.compute-1.amazonaws.com, instance-id: i-4b5bde25}
services:
demo-wiki:
charm: local:mediawiki-62
relations: {cache: wiki-cache, db: wiki-db, website: wiki-balancer}
units:
demo-wiki/0:
machine: 2
relations:
cache: {state: up}
db: {state: up}
website: {state: up}
state: started
demo-wiki/1:
machine: 6
relations:
cache: {state: up}
db: {state: up}
website: {state: up}
state: started
wiki-balancer:
charm: local:haproxy-13
relations: {reverseproxy: demo-wiki}
units:
wiki-balancer/0:
machine: 4
relations:
reverseproxy: {state: up}
state: started
wiki-cache:
charm: local:memcached-10
relations: {cache: demo-wiki}
units:
wiki-cache/0:
machine: 3
relations:
cache: {state: up}
state: started
wiki-cache/1:
machine: 5
relations:
cache: {state: up}
state: started
wiki-db:
charm: local:mysql-93
relations: {db: demo-wiki}
units:
wiki-db/0:
machine: 1
relations:
db: {state: up}
state: started
juju-0.7.orig/juju/control/tests/test_add_relation.py
import logging
from twisted.internet.defer import inlineCallbacks
from juju.control import main, add_relation
from juju.control.tests.common import ControlToolTest
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ControlAddRelationTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlAddRelationTest, self).setUp()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_add_relation_method(self):
"""Test adding a relation via the supporting method in the cmd obj."""
environment = self.config.get("firstenv")
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("wordpress")
yield add_relation.add_relation(
self.config, environment, False,
logging.getLogger("juju.control.cli"), "mysql", "wordpress")
self.assertIn(
"Added mysql relation to all service units.",
self.output.getvalue())
@inlineCallbacks
def test_add_peer_relation(self):
"""Test that services that peer can have that relation added."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("riak")
main(["add-relation", "riak", "riak"])
yield wait_on_reactor_stopped
self.assertIn(
"Added riak relation to all service units.",
self.output.getvalue())
@inlineCallbacks
def test_add_relation(self):
"""Test that the command works when run from the CLI itself."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("wordpress")
main(["add-relation", "mysql", "wordpress"])
yield wait_on_reactor_stopped
self.assertIn(
"Added mysql relation to all service units.",
self.output.getvalue())
@inlineCallbacks
def test_verbose_flag(self):
"""Test the verbose flag."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("riak")
main(["--verbose", "add-relation", "riak:ring", "riak:ring"])
yield wait_on_reactor_stopped
self.assertIn("Endpoint pairs", self.output.getvalue())
self.assertIn(
"Added riak relation to all service units.",
self.output.getvalue())
@inlineCallbacks
def test_use_relation_name(self):
"""Test that the descriptor can be qualified with a relation_name."""
yield self.add_service_from_charm("mysql-alternative")
yield self.add_service_from_charm("wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("riak")
main(["add-relation", "mysql-alternative:dev", "wordpress"])
yield wait_on_reactor_stopped
self.assertIn(
"Added mysql relation to all service units.",
self.output.getvalue())
@inlineCallbacks
def test_add_relation_multiple(self):
"""Test that the command can be used to create multiple relations."""
environment = self.config.get("firstenv")
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("wordpress")
yield self.add_service_from_charm("varnish")
yield add_relation.add_relation(
self.config, environment, False,
logging.getLogger("juju.control.cli"), "mysql", "wordpress")
self.assertIn(
"Added mysql relation to all service units.",
self.output.getvalue())
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "wordpress", "varnish"])
yield wait_on_reactor_stopped
self.assertIn(
"Added varnish relation to all service units.",
self.output.getvalue())
# test for various errors
def test_with_no_args(self):
"""Test that two descriptor arguments are required for command."""
# in argparse, before reactor startup
self.assertRaises(SystemExit, main, ["add-relation"])
self.assertIn(
"juju add-relation: error: too few arguments",
self.stderr.getvalue())
def test_too_many_arguments_provided(self):
"""Test command rejects more than 2 descriptor arguments."""
self.assertRaises(
SystemExit, main, ["add-relation", "foo", "fum", "bar"])
self.assertIn(
"juju: error: unrecognized arguments: bar",
self.stderr.getvalue())
@inlineCallbacks
def test_missing_service_added(self):
"""Test command fails if a service is missing."""
yield self.add_service_from_charm("mysql")
# but not wordpress
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "wordpress", "mysql"])
yield wait_on_reactor_stopped
self.assertIn(
"Service 'wordpress' was not found",
self.output.getvalue())
@inlineCallbacks
def test_no_common_relation_type(self):
"""Test command fails if the services cannot be added in a relation."""
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("riak")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "riak", "mysql"])
yield wait_on_reactor_stopped
self.assertIn("No matching endpoints", self.output.getvalue())
@inlineCallbacks
def test_ambiguous_pairing(self):
"""Test command fails if more than one way to connect services."""
yield self.add_service_from_charm("mysql-alternative")
yield self.add_service_from_charm("wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "wordpress", "mysql-alternative"])
yield wait_on_reactor_stopped
self.assertIn(
"Ambiguous relation 'wordpress mysql-alternative'; could refer "
"to:\n 'wordpress:db mysql-alternative:dev' (mysql client / "
"mysql server)\n 'wordpress:db mysql-alternative:prod' (mysql "
"client / mysql server)",
self.output.getvalue())
@inlineCallbacks
def test_missing_charm(self):
"""Test command fails if service is added w/o corresponding charm."""
yield self.add_service("mysql_no_charm")
yield self.add_service_from_charm("wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "wordpress", "mysql_no_charm"])
yield wait_on_reactor_stopped
self.assertIn("No matching endpoints", self.output.getvalue())
@inlineCallbacks
def test_relation_added_twice(self):
"""Test command fails if it's run twice."""
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("wordpress")
yield add_relation.add_relation(
self.config, self.config.get("firstenv"), False,
logging.getLogger("juju.control.cli"), "mysql", "wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-relation", "wordpress", "mysql"])
yield wait_on_reactor_stopped
self.assertIn(
"Relation mysql already exists between wordpress and mysql",
self.output.getvalue())
@inlineCallbacks
def test_invalid_environment(self):
"""Test command with an environment that hasn't been set up."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
main(["add-relation", "--environment", "roman-candle",
"wordpress", "mysql"])
yield wait_on_reactor_stopped
self.assertIn(
"Invalid environment 'roman-candle'",
self.output.getvalue())
@inlineCallbacks
def test_relate_to_implicit(self):
"""Validate we can implicitly relate to an implicitly provided relation"""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("logging")
main(["add-relation", "mysql", "logging"])
yield wait_on_reactor_stopped
self.assertIn(
"Added juju-info relation to all service units.",
self.output.getvalue())
juju-0.7.orig/juju/control/tests/test_add_unit.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.lib.serializer import yaml_dump
from juju.state.environment import EnvironmentStateManager
from .common import MachineControlToolTest
class ControlAddUnitTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlAddUnitTest, self).setUp()
self.service_state1 = yield self.add_service_from_charm("mysql")
self.service_unit1 = yield self.service_state1.add_unit_state()
self.machine_state1 = yield self.add_machine_state()
yield self.service_unit1.assign_to_machine(self.machine_state1)
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_add_unit(self):
"""
'juju add-unit <service name>' will add a new service
unit of the given service.
"""
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 1)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
# trash environment to check syncing
yield self.client.delete("/environment")
main(["add-unit", "mysql"])
yield finished
# verify the env state was synced
esm = EnvironmentStateManager(self.client)
yield esm.get_config()
# verify the unit and its machine assignment.
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 2)
topology = yield self.get_topology()
unit = yield self.service_state1.get_unit_state("mysql/1")
machine_id = topology.get_service_unit_machine(
self.service_state1.internal_id, unit.internal_id)
self.assertNotEqual(machine_id, None)
self.assertIn(
"Unit 'mysql/1' added to service 'mysql'",
self.output.getvalue())
yield self.assert_machine_assignments("mysql", [1, 2])
@inlineCallbacks
def test_add_multiple_units(self):
"""
'juju add-unit --num-units N <service name>' will add
N new service units of the given service.
"""
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 1)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "--num-units", "5", "mysql"])
yield finished
# verify the unit and its machine assignment.
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 6)
topology = yield self.get_topology()
unit = yield self.service_state1.get_unit_state("mysql/1")
machine_id = topology.get_service_unit_machine(
self.service_state1.internal_id, unit.internal_id)
self.assertNotEqual(machine_id, None)
for i in xrange(1, 6):
self.assertIn(
"Unit 'mysql/%d' added to service 'mysql'" % i,
self.output.getvalue())
yield self.assert_machine_assignments("mysql", [1, 2, 3, 4, 5, 6])
@inlineCallbacks
def test_add_unit_unknown_service(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "volcano"])
yield finished
self.assertIn(
"Service 'volcano' was not found", self.output.getvalue())
@inlineCallbacks
def test_add_unit_subordinate_service(self):
yield self.add_service_from_charm("logging")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "logging"])
yield finished
self.assertIn(
"Subordinate services acquire units from "
"their principal service.",
self.output.getvalue())
@inlineCallbacks
def test_add_unit_reuses_machines(self):
"""Verify that if machines are not in use, add-unit uses them."""
# add machine to wordpress, then destroy and reallocate later
# in this test to mysql as mysql/1's machine
wordpress_service_state = yield self.add_service_from_charm(
"wordpress")
wordpress_unit_state = yield wordpress_service_state.add_unit_state()
wordpress_machine_state = yield self.add_machine_state()
yield wordpress_unit_state.assign_to_machine(wordpress_machine_state)
yield wordpress_unit_state.unassign_from_machine()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "mysql"])
yield finished
self.assertIn(
"Unit 'mysql/1' added to service 'mysql'",
self.output.getvalue())
yield self.assert_machine_assignments("wordpress", [None])
yield self.assert_machine_assignments("mysql", [1, 2])
@inlineCallbacks
def test_policy_from_environment(self):
config = {
"environments": {"firstenv": {
"placement": "local",
"type": "dummy"}}}
yield self.push_config("firstenv", config)
ms0 = yield self.machine_state_manager.get_machine_state(0)
yield self.service_unit1.unassign_from_machine()
yield self.service_unit1.assign_to_machine(ms0)
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 1)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "mysql"])
yield finished
# Verify the local policy was used
topology = yield self.get_topology()
unit = yield self.service_state1.get_unit_state("mysql/1")
machine_id = topology.get_service_unit_machine(
self.service_state1.internal_id, unit.internal_id)
self.assertNotEqual(machine_id, None)
self.assertIn(
"Unit 'mysql/1' added to service 'mysql'",
self.output.getvalue())
# adding a second unit still assigns to machine 0 with local policy
yield self.assert_machine_assignments("mysql", [0, 0])
@inlineCallbacks
def test_legacy_option_in_legacy_env(self):
yield self.client.delete("/constraints")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "mysql"])
yield finished
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 2)
@inlineCallbacks
def test_legacy_option_in_fresh_env(self):
local_config = {
"environments": {"firstenv": {
"some-legacy-key": "blah",
"type": "dummy"}}}
self.write_config(yaml_dump(local_config))
self.config.load()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["add-unit", "mysql"])
yield finished
output = self.output.getvalue()
self.assertIn(
"Your environments.yaml contains deprecated keys", output)
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 1)
juju-0.7.orig/juju/control/tests/test_admin.py
from juju import control
from juju.control import setup_logging, admin, setup_parser
from juju.lib.mocker import ANY
from .common import ControlToolTest
class DummySubcommand(object):
@staticmethod
def configure_subparser(subparsers):
subparser = subparsers.add_parser("dummy")
subparser.add_argument("--opt1", default=1)
return subparser
@staticmethod
def command(*args):
"""Doc String"""
pass
class AdminCommandOptionTest(ControlToolTest):
def test_admin_subcommand_execution(self):
"""
Set up an admin subcommand, and verify that it's invoked.
"""
self.setup_cli_reactor()
self.setup_exit(0)
self.patch(control, "ADMIN_SUBCOMMANDS", [DummySubcommand])
setup_logging_mock = self.mocker.mock(setup_logging)
setup_parser_mock = self.mocker.proxy(setup_parser)
self.patch(control, "setup_logging", setup_logging_mock)
self.patch(control, "setup_parser", setup_parser_mock)
command_mock = self.mocker.proxy(DummySubcommand.command)
self.patch(DummySubcommand, "command", command_mock)
setup_parser_mock(
subcommands=ANY,
prog="juju-admin",
description="juju cloud orchestration internal tools")
self.mocker.passthrough()
setup_logging_mock(ANY)
command_mock(ANY)
self.mocker.passthrough()
self.mocker.replay()
admin(["dummy"])
juju-0.7.orig/juju/control/tests/test_bootstrap.py
from twisted.internet.defer import inlineCallbacks, succeed
from juju.control import main
from juju.lib.serializer import yaml_dump as dump
from juju.providers.dummy import MachineProvider
from .common import ControlToolTest
class ControlBootstrapTest(ControlToolTest):
@inlineCallbacks
def test_bootstrap(self):
"""
'juju-control bootstrap' will invoke the bootstrap method of all
configured machine providers in all environments.
"""
config = {
"environments": {
"firstenv": {
"type": "dummy", "default-series": "homer"},
"secondenv": {
"type": "dummy", "default-series": "marge"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(0)
provider = self.mocker.patch(MachineProvider)
provider.bootstrap({
"ubuntu-series": "homer",
"provider-type": "dummy",
"arch": "arm",
"cpu": 2.0,
"mem": 512.0})
self.mocker.result(succeed(True))
self.mocker.replay()
self.capture_stream("stderr")
main(["bootstrap", "-e", "firstenv",
"--constraints", "arch=arm cpu=2"])
yield finished
lines = filter(None, self.log.getvalue().split("\n"))
self.assertEqual(
lines,
[("Bootstrapping environment 'firstenv' "
"(origin: distro type: dummy)..."),
"'bootstrap' command finished successfully"])
@inlineCallbacks
def test_bootstrap_multiple_environments_no_default_specified(self):
"""
If there are multiple environments but no default, and no environment
specified on the cli, then an error message is given.
"""
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"},
"secondenv": {
"type": "dummy", "admin-secret": "marge"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
output = self.capture_logging()
main(["bootstrap"])
yield finished
msg = "There are multiple environments and no explicit default"
self.assertIn(msg, self.log.getvalue())
self.assertIn(msg, output.getvalue())
@inlineCallbacks
def test_bootstrap_invalid_environment_specified(self):
"""
If the environment specified does not exist an error message is given.
"""
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
output = self.capture_logging()
main(["bootstrap", "-e", "thirdenv"])
yield finished
msg = "Invalid environment 'thirdenv'"
self.assertIn(msg, self.log.getvalue())
self.assertIn(msg, output.getvalue())
@inlineCallbacks
def test_bootstrap_legacy_config_keys(self):
"""
If the config contains deprecated legacy keys, an error message is given.
"""
config = {
"environments": {
"firstenv": {
"type": "dummy", "some-legacy-key": "blah"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
output = self.capture_logging()
main(["bootstrap"])
yield finished
msg = "Your environments.yaml contains deprecated keys"
self.assertIn(msg, self.log.getvalue())
self.assertIn(msg, output.getvalue())
juju-0.7.orig/juju/control/tests/test_config_get.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.charm.tests import local_charm_id
from juju.lib import serializer
from .common import MachineControlToolTest
class ControlJujuGetTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlJujuGetTest, self).setUp()
self.output = self.capture_logging()
@inlineCallbacks
def test_get_service_config(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
self.service_state = yield self.add_service_from_charm("wordpress")
config = yield self.service_state.get_config()
# A value not declared in the charm's config schema won't be displayed.
settings = {"blog-title": "Hello World", "world": 123}
config.update(settings)
yield config.write()
output = self.capture_stream("stdout")
main(["get", "wordpress"])
yield finished
data = serializer.yaml_load(output.getvalue())
self.assertEqual(
{"service": "wordpress",
"charm": "local:series/wordpress-3",
'settings': {'blog-title': {
'description': 'A descriptive title used for the blog.',
'type': 'string',
'value': 'Hello World'}}},
data)
@inlineCallbacks
def test_get_service_config_with_no_value(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
self.service_state = yield self.add_service_from_charm(
"dummy", local_charm_id(self.charm))
config = yield self.service_state.get_config()
config["title"] = "hello movie"
config["skill-level"] = 24
yield config.write()
output = self.capture_stream("stdout")
main(["get", "dummy"])
yield finished
data = serializer.yaml_load(output.getvalue())
self.assertEqual(
{"service": "dummy",
"charm": "local:series/dummy-1",
"settings": {
'outlook': {
'description': 'No default outlook.',
'type': 'string',
'value': None},
'skill-level': {
'description': 'A number indicating skill.',
'value': 24,
'type': 'int'},
'title': {
'description': ('A descriptive title used '
'for the service.'),
'value': 'hello movie',
'type': 'string'},
'username': {
'description': ('The name of the initial account (given '
'admin permissions).'),
'value': 'admin001',
'default': True,
'type': 'string'}}},
data)
@inlineCallbacks
def test_get_invalid_service(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["get", "whatever"])
yield finished
self.assertIn("Service 'whatever' was not found",
self.output.getvalue())
juju-0.7.orig/juju/control/tests/test_config_set.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.control.config_set import config_set
from juju.lib import serializer
from .common import MachineControlToolTest
class ControlJujuSetTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlJujuSetTest, self).setUp()
self.service_state = yield self.add_service_from_charm("wordpress")
self.service_unit = yield self.service_state.add_unit_state()
self.environment = self.config.get_default()
self.stderr = self.capture_stream("stderr")
self.output = self.capture_logging()
@inlineCallbacks
def test_set_and_get(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set",
"wordpress",
"blog-title=Hello Tribune?"])
yield finished
# Verify the state is accessible
state = yield self.service_state.get_config()
self.assertEqual(state, {"blog-title": "Hello Tribune?"})
@inlineCallbacks
def test_set_with_config_file(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
config_file = self.makeFile(serializer.yaml_dump(
dict(wordpress={"blog-title": "Hello World"})))
main(["set", "wordpress",
"--config=%s" % config_file])
yield finished
# Verify the state is accessible
state = yield self.service_state.get_config()
self.assertEqual(state, {"blog-title": "Hello World"})
@inlineCallbacks
def test_set_with_invalid_file(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
# missing the service_name dict (will do nothing to values)
config_file = self.makeFile(
serializer.yaml_dump({"blog-title": "Hello World"}))
main(["set", "wordpress", "--config=%s" % config_file])
yield finished
state = yield self.service_state.get_config()
self.assertEqual(state, {'blog-title': 'My Title'})
@inlineCallbacks
def test_set_with_garbage_file(self):
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
# file exists but is not valid YAML
config_file = self.makeFile("blah")
main(["-v", "set", "wordpress",
"--config=%s" % config_file])
yield finished
self.assertIn(
"Config file %r invalid" % config_file, self.stderr.getvalue())
state = yield self.service_state.get_config()
self.assertEqual(state, {'blog-title': 'My Title'})
@inlineCallbacks
def test_config_and_cli_options_errors(self):
"""Verify --config and cli kvpairs can't be used together"""
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
# valid file, but incorrect cli usage
config_file = self.makeFile(serializer.yaml_dump(dict(
wordpress={"blog-title": "Hello World"})))
main(["-v", "set", "wordpress",
"blog-title=Test",
"--config=%s" % config_file])
yield finished
self.assertIn(
"--config and command line options", self.stderr.getvalue())
@inlineCallbacks
def test_set_invalid_option(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set",
"wordpress",
"blog-roll=What's a blog-roll?"])
yield finished
# Make sure we got an error message to the user
self.assertIn("blog-roll is not a valid configuration option.",
self.output.getvalue())
@inlineCallbacks
def test_set_invalid_service(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set",
"whatever",
"blog-roll=What's a blog-roll?"])
yield finished
self.assertIn("Service 'whatever' was not found",
self.output.getvalue())
@inlineCallbacks
def test_set_valid_option(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set",
"wordpress",
"blog-title=My title"])
yield finished
# Verify the state is accessible
state = yield self.service_state.get_config()
self.assertEqual(state, {"blog-title": "My title"})
@inlineCallbacks
def test_multiple_calls_with_defaults(self):
"""Bug #873643
Calling config set multiple times would result in the
subsequent calls resetting values to defaults if the values
were not explicitly set in each call. This verifies that each
value need not be present in each call for proper functioning.
"""
# apply all defaults as done through deploy
self.service_state = yield self.add_service_from_charm("configtest")
self.service_unit = yield self.service_state.add_unit_state()
# Publish the defaults as deploy should have done
charm = yield self.service_state.get_charm_state()
config_options = yield charm.get_config()
defaults = config_options.get_defaults()
state = yield self.service_state.get_config()
yield state.update(defaults)
yield state.write()
# Now perform two updates, in each case moving one value away
# from its default and checking the end result is as expected
yield config_set(self.environment, "configtest",
["foo=new foo"])
# force update
yield state.read()
self.assertEqual(state, {"foo": "new foo",
"bar": "bar-default"})
# Second update: move the other value away from its default and
# check the end result is as expected
yield config_set(self.environment, "configtest",
["bar=new bar"])
# force update
yield state.read()
self.assertEqual(state, {"foo": "new foo",
"bar": "new bar"})
@inlineCallbacks
def test_boolean_option_str_format_v1(self):
"""Verify possible to set a boolean option with format v1"""
self.service_state = yield self.add_service_from_charm("mysql")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set", "mysql", "awesome=true"])
yield finished
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": True, "monkey-madness": 0.5,
"query-cache-size": -1, "tuning-level": "safest"})
@inlineCallbacks
def test_int_option_coerced_format_v1(self):
"""Verify int option possible in format v1"""
self.service_state = yield self.add_service_from_charm("mysql")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set", "mysql", "query-cache-size=10"])
yield finished
self.assertEqual(self.stderr.getvalue(), "")
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": False, "monkey-madness": 0.5,
"query-cache-size": 10, "tuning-level": "safest"})
@inlineCallbacks
def test_float_option_str_format_v1(self):
"""Verify possible to set a float option with format v1"""
self.service_state = yield self.add_service_from_charm("mysql")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set", "mysql", "monkey-madness=0.99999999"])
yield finished
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": False, "monkey-madness": 0.99999999,
"query-cache-size": -1, "tuning-level": "safest"})
@inlineCallbacks
def test_valid_options_format_v2(self):
"""Verify that config settings can be properly parsed and applied"""
self.service_state = yield self.add_service_from_charm(
"mysql-format-v2")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set",
"mysql-format-v2",
"query-cache-size=100",
"awesome=true",
"tuning-level=unsafe",
"monkey-madness=0.97"])
yield finished
self.assertEqual(self.stderr.getvalue(), "")
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": True, "monkey-madness": 0.97,
"query-cache-size": 100, "tuning-level": "unsafe"})
@inlineCallbacks
def test_invalid_float_option_format_v2(self):
"""Verify that config settings reject invalid floats"""
self.service_state = yield self.add_service_from_charm(
"mysql-format-v2")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set", "mysql-format-v2",
"monkey-madness=barrels of monkeys"])
yield finished
self.assertEqual(
self.output.getvalue(),
"Invalid value for monkey-madness: barrels of monkeys\n")
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": False, "monkey-madness": 0.5,
"query-cache-size": -1, "tuning-level": "safest"})
@inlineCallbacks
def test_invalid_int_option_format_v2(self):
"""Verify that config settings reject invalid ints"""
self.service_state = yield self.add_service_from_charm(
"mysql-format-v2")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set", "mysql-format-v2", "query-cache-size=big"])
yield finished
self.assertEqual(
self.output.getvalue(),
"Invalid value for query-cache-size: big\n")
state = yield self.service_state.get_config()
self.assertEqual(
state,
{"awesome": False, "monkey-madness": 0.5,
"query-cache-size": -1, "tuning-level": "safest"})
juju-0.7.orig/juju/control/tests/test_constraints_get.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.lib import serializer
from juju.machine.tests.test_constraints import dummy_cs
from juju.state.environment import EnvironmentStateManager
from .common import MachineControlToolTest
env_log = "Fetching constraints for environment"
machine_log = "Fetching constraints for machine 1"
service_log = "Fetching constraints for service mysql"
unit_log = "Fetching constraints for service unit mysql/0"
class ConstraintsGetTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ConstraintsGetTest, self).setUp()
env_constraints = dummy_cs.parse(["mem=1024"])
esm = EnvironmentStateManager(self.client)
yield esm.set_constraints(env_constraints)
self.expect_env = {
"arch": "amd64", "cpu": 1.0, "mem": 1024.0,
"provider-type": "dummy", "ubuntu-series": None}
service_constraints = dummy_cs.parse(["cpu=10"])
service = yield self.add_service_from_charm(
"mysql", constraints=service_constraints)
# unit will snapshot the state of service when added
unit = yield service.add_unit_state()
self.expect_unit = {
"arch": "amd64", "cpu": 10.0, "mem": 1024.0,
"provider-type": "dummy", "ubuntu-series": "series"}
# machine gets its own constraints
machine_constraints = dummy_cs.parse(["cpu=15", "mem=8G"])
machine = yield self.add_machine_state(
constraints=machine_constraints.with_series("series"))
self.expect_machine = {
"arch": "amd64", "cpu": 15.0, "mem": 8192.0,
"provider-type": "dummy", "ubuntu-series": "series"}
yield unit.assign_to_machine(machine)
# service gets new constraints, leaves unit untouched
yield service.set_constraints(dummy_cs.parse(["mem=16G"]))
self.expect_service = {
"arch": "amd64", "cpu": 1.0, "mem": 16384.0,
"provider-type": "dummy", "ubuntu-series": "series"}
self.log = self.capture_logging()
self.stdout = self.capture_stream("stdout")
self.finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
def assert_messages(self, *messages):
log = self.log.getvalue()
for message in messages:
self.assertIn(message, log)
@inlineCallbacks
def test_env(self):
main(["get-constraints"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
self.assertEquals(result, self.expect_env)
self.assert_messages(env_log)
@inlineCallbacks
def test_service(self):
main(["get-constraints", "mysql"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
self.assertEquals(result, {"mysql": self.expect_service})
self.assert_messages(service_log)
@inlineCallbacks
def test_unit(self):
main(["get-constraints", "mysql/0"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
self.assertEquals(result, {"mysql/0": self.expect_unit})
self.assert_messages(unit_log)
@inlineCallbacks
def test_machine(self):
main(["get-constraints", "1"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
self.assertEquals(result, {"1": self.expect_machine})
self.assert_messages(machine_log)
@inlineCallbacks
def test_all(self):
main(["get-constraints", "mysql", "mysql/0", "1"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
expect = {"mysql": self.expect_service,
"mysql/0": self.expect_unit,
"1": self.expect_machine}
self.assertEquals(result, expect)
self.assert_messages(service_log, unit_log, machine_log)
@inlineCallbacks
def test_syncs_environment(self):
"""If the environment were not synced, it would be impossible to create
the Constraints, so tool success proves sync."""
yield self.client.delete("/environment")
main(["get-constraints", "mysql/0"])
yield self.finished
result = serializer.load(self.stdout.getvalue())
self.assertEquals(result, {"mysql/0": self.expect_unit})
self.assert_messages(unit_log)
juju-0.7.orig/juju/control/tests/test_constraints_set.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.state.environment import EnvironmentStateManager
from .common import MachineControlToolTest
class ControlSetConstraintsTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlSetConstraintsTest, self).setUp()
self.service_state = yield self.add_service_from_charm("mysql")
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_set_service_constraints(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set-constraints", "--service", "mysql", "cpu=8", "mem=1G"])
yield finished
constraints = yield self.service_state.get_constraints()
expect = {
"arch": "amd64", "cpu": 8, "mem": 1024,
"provider-type": "dummy", "ubuntu-series": "series"}
self.assertEquals(constraints, expect)
@inlineCallbacks
def test_bad_service_constraint(self):
initial_constraints = yield self.service_state.get_constraints()
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
main(["set-constraints", "--service", "mysql", "arch=proscenium"])
yield finished
self.assertIn(
"Bad 'arch' constraint 'proscenium': unknown architecture",
self.output.getvalue())
constraints = yield self.service_state.get_constraints()
self.assertEquals(constraints, initial_constraints)
@inlineCallbacks
def test_environment_constraint(self):
yield self.client.delete("/environment")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set-constraints", "arch=arm", "cpu=any"])
yield finished
esm = EnvironmentStateManager(self.client)
yield esm.get_config()
constraints = yield esm.get_constraints()
self.assertEquals(constraints, {
"ubuntu-series": None,
"provider-type": "dummy",
"arch": "arm",
"cpu": None,
"mem": 512.0})
@inlineCallbacks
def test_legacy_environment(self):
yield self.client.delete("/constraints")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["set-constraints", "arch=arm", "cpu=any"])
yield finished
self.assertIn(
"Constraints are not valid in legacy deployments.",
self.output.getvalue())
juju-0.7.orig/juju/control/tests/test_control.py
import logging
import time
import os
from StringIO import StringIO
from argparse import Namespace
from twisted.internet.defer import inlineCallbacks
from juju.environment.errors import EnvironmentsConfigError
from juju.control import setup_logging, main, setup_parser
from juju.control.options import ensure_abs_path
from juju.control.command import Commander
from juju.state.tests.common import StateTestBase
from juju.lib.testing import TestCase
from juju import __version__
from .common import ControlToolTest
class ControlInitializationTest(ControlToolTest):
# The EnvironmentTestBase will replace our $HOME, so that tests will
# write properly to a temporary directory.
def test_write_sample_config(self):
"""
When juju-control is called without a valid environment
configuration, it should write one down and raise an error to
let the user know it should be edited.
"""
try:
main(["bootstrap"])
except EnvironmentsConfigError, error:
self.assertTrue(error.sample_written)
else:
self.fail("EnvironmentsConfigError not raised")
class ControlOutputTest(ControlToolTest, StateTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlOutputTest, self).setUp()
yield self.push_default_config()
def test_sans_args_produces_help(self):
"""
When juju-control is called without arguments, it
produces a standard help message.
"""
stderr = self.capture_stream("stderr")
self.mocker.replay()
try:
main([])
except SystemExit, e:
self.assertEqual(e.args[0], 2)
else:
self.fail("Should have exited")
output = stderr.getvalue()
self.assertIn("add-relation", output)
self.assertIn("destroy-environment", output)
self.assertIn("juju cloud orchestration admin", output)
def test_version(self):
stderr = self.capture_stream("stderr")
try:
main(['--version'])
except SystemExit, e:
self.assertEqual(e.args[0], 0)
else:
self.fail("Should have exited")
output = stderr.getvalue()
self.assertIn(__version__, output)
def test_custom_parser_does_not_extend_to_subcommand(self):
stderr = self.capture_stream("stderr")
self.mocker.replay()
try:
main(["deploy"])
except SystemExit, e:
self.assertEqual(e.args[0], 2)
else:
self.fail("Should have exited")
output = stderr.getvalue()
self.assertIn("juju deploy: error: too few arguments", output)
class ControlCommandOptionTest(ControlToolTest):
def test_command_suboption(self):
"""
The argparser setup, invokes command module configure_subparser
functions to allow the command to delineate additional cli
options.
"""
from juju.control import destroy_environment, bootstrap
def configure_destroy_environment(subparsers):
subparser = subparsers.add_parser("destroy-environment")
subparser.add_argument("--opt1", default=1)
return subparser
def configure_bootstrap(subparsers):
subparser = subparsers.add_parser("bootstrap")
subparser.add_argument("--opt2", default=2)
return subparser
self.patch(destroy_environment, "configure_subparser",
configure_destroy_environment)
self.patch(bootstrap, "configure_subparser", configure_bootstrap)
parser = setup_parser(subcommands=[destroy_environment, bootstrap])
shutdown_opts = parser.parse_args(["destroy-environment"])
bootstrap_opts = parser.parse_args(["bootstrap"])
missing = object()
self.assertEquals(shutdown_opts.opt1, 1)
self.assertEquals(getattr(shutdown_opts, "opt2", missing), missing)
self.assertEquals(bootstrap_opts.opt2, 2)
self.assertEquals(getattr(bootstrap_opts, "opt1", missing), missing)
class ControlUtilityTest(TestCase):
def test_ensured_abs_path(self):
parent_dir = self.makeDir()
file_path = self.makeFile(dirname=parent_dir)
os.rmdir(parent_dir)
self.assertEqual(ensure_abs_path(file_path), file_path)
self.assertTrue(os.path.exists(parent_dir))
def test_ensure_abs_path_with_stdout_symbol(self):
self.assertEqual(ensure_abs_path("-"), "-")
def test_ensured_abs_path_with_existing(self):
temp_dir = self.makeDir()
self.assertTrue(os.path.exists(temp_dir))
file_path = os.path.join(temp_dir, "zebra.txt")
self.assertEqual(ensure_abs_path(file_path), file_path)
self.assertTrue(os.path.exists(temp_dir))
def test_ensure_abs_path_relative(self):
current_dir = os.path.abspath(os.getcwd())
self.addCleanup(os.chdir, current_dir)
temp_dir = self.makeDir()
os.chdir(temp_dir)
file_path = ensure_abs_path("zebra.txt")
self.assertEqual(file_path, os.path.join(temp_dir, "zebra.txt"))
class ControlLoggingTest(TestCase):
def test_logging_format(self):
self.log = StringIO()
setup_logging(Namespace(verbose=False, log_file=self.log))
# ensure that we use GMT regardless of system settings
logging.getLogger().handlers[0].formatter.converter = time.gmtime
record = logging.makeLogRecord(
{"created": 0, "msecs": 0, "levelno": logging.INFO})
logging.getLogger().handle(record)
self.assertEqual(
self.log.getvalue(),
"1970-01-01 00:00:00,000 Level None \n")
def test_default_logging(self):
"""
Default log-level is informational.
"""
self.log = self.capture_logging()
setup_logging(Namespace(verbose=False, log_file=None))
root = logging.getLogger()
name = logging.getLevelName(root.getEffectiveLevel())
self.assertEqual(name, "INFO")
def test_verbose_logging(self):
"""
When verbose logging is enabled, the log level is set to debugging.
"""
setup_logging(Namespace(verbose=True, log_file=None))
root = logging.getLogger()
self.assertEqual(logging.getLevelName(root.level), "DEBUG")
custom = logging.getLogger("custom")
self.assertEqual(custom.getEffectiveLevel(), root.getEffectiveLevel())
def test_default_loggers(self):
"""
Verify that the default loggers are bound when the logging
system is started.
"""
root = logging.getLogger()
self.assertEqual(root.handlers, [])
setup_logging(Namespace(verbose=False, log_file=None))
self.assertNotEqual(root.handlers, [])
def tearDown(self):
# remove the logging handlers we installed
root = logging.getLogger()
root.handlers = []
class AttrDict(dict):
def __getattr__(self, key):
return self[key]
class TestCommander(ControlToolTest):
def get_sample_namespace(self):
# Command expects these objects forming a non-obvious contract
# with the runtime
self.debugStream = debugStream = StringIO()
debugStream.__call__ = debugStream.write
self.infoStream = infoStream = StringIO()
infoStream.__call__ = infoStream.write
self.errorStream = errorStream = StringIO()
errorStream.__call__ = errorStream.write
log = AttrDict(debug=debugStream, info=infoStream, error=errorStream)
return Namespace(log=log,
verbose=False,
parser=AttrDict(prog="juju"))
def test_invalid_callback(self):
# non callable callback
self.failUnlessRaises(ValueError, Commander, time.daylight)
# valid callback
Commander(time.time)
def test_run_invalid_call(self):
c = Commander(time.time)
# called with invalid options
self.failUnlessRaises(ValueError, c, None)
def change_value(self, options):
self.test_value = 42
return self.test_value
@inlineCallbacks
def deferred_callback(self, options):
self.test_value = 42
yield self.test_value
@inlineCallbacks
def deferred_callback_with_exception(self, options):
raise Exception("Some generic error condition")
def test_call_without_deferred(self):
self.test_value = None
self.setup_cli_reactor()
self.setup_exit(0)
com = Commander(self.change_value)
ns = self.get_sample_namespace()
self.mocker.replay()
com(ns)
self.assertEqual(self.test_value, 42)
def test_call_with_deferred(self):
self.test_value = None
self.setup_cli_reactor()
self.setup_exit(0)
com = Commander(self.deferred_callback)
ns = self.get_sample_namespace()
self.mocker.replay()
com(ns)
self.assertEqual(self.test_value, 42)
def test_call_with_deferred_exception(self):
self.test_value = None
self.setup_cli_reactor()
self.setup_exit(1)
com = Commander(self.deferred_callback_with_exception)
ns = self.get_sample_namespace()
self.mocker.replay()
com(ns)
# verify that the exception message is all that comes out of stderr
self.assertEqual(self.errorStream.getvalue(),
"Some generic error condition")
def test_verbose_successful(self):
self.test_value = None
self.setup_cli_reactor()
self.setup_exit(0)
com = Commander(self.deferred_callback)
ns = self.get_sample_namespace()
ns.verbose = True
self.mocker.replay()
com(ns)
self.assertEqual(self.test_value, 42)
def test_verbose_error_with_traceback(self):
self.test_value = None
self.setup_cli_reactor()
err = self.capture_stream("stderr")
self.setup_exit(1)
com = Commander(self.deferred_callback_with_exception)
ns = self.get_sample_namespace()
ns.verbose = True
self.mocker.replay()
com(ns)
self.assertIn("traceback", err.getvalue())
juju-0.7.orig/juju/control/tests/test_debug_hooks.py
import logging
import os
from twisted.internet.defer import (
inlineCallbacks, returnValue, succeed, Deferred)
from juju.control import main
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.environment.environment import Environment
from juju.state.service import ServiceUnitState
from juju.lib.mocker import ANY
from juju.control.tests.common import ControlToolTest
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ControlDebugHookTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlDebugHookTest, self).setUp()
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
# Setup a machine in the provider
self.provider_machine = (yield self.provider.start_machine(
{"machine-id": 0, "dns-name": "antigravity.example.com"}))[0]
# Setup the zk tree with a service, unit, and machine.
self.service = yield self.add_service_from_charm("mysql")
self.unit = yield self.service.add_unit_state()
self.machine = yield self.add_machine_state()
yield self.machine.set_instance_id(0)
yield self.unit.assign_to_machine(self.machine)
# capture the output.
self.output = self.capture_logging(
"juju.control.cli", level=logging.INFO)
self.stderr = self.capture_stream("stderr")
self.setup_exit(0)
@inlineCallbacks
def test_debug_hook_invalid_hook_name(self):
"""If an invalid hook name is used, an appropriate error
message is raised that references the charm.
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["debug-hooks", "mysql/0", "bad-happened"])
yield finished
self.assertIn(
"Charm 'local:series/mysql-1' does not contain hook "
"'bad-happened'",
self.output.getvalue())
@inlineCallbacks
def test_debug_hook_invalid_unit_name(self):
"""An invalid unit causes an appropriate error.
"""
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["debug-hooks", "mysql/42"])
yield finished
self.assertIn(
"Service unit 'mysql/42' was not found",
self.output.getvalue())
@inlineCallbacks
def test_debug_hook_invalid_service(self):
"""An invalid service causes an appropriate error.
"""
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["debug-hooks", "magic/42"])
yield finished
self.assertIn(
"Service 'magic' was not found",
self.output.getvalue())
@inlineCallbacks
def test_debug_hook_unit_agent_not_available(self):
"""Simulate the unit agent isn't available when the command is run.
The command will set the debug flag, and wait for the unit
agent to be available.
"""
mock_unit = self.mocker.patch(ServiceUnitState)
# First time, doesn't exist, will wait on watch
mock_unit.watch_agent()
self.mocker.result((succeed(False), succeed(True)))
# Second time, setup the unit address
mock_unit.watch_agent()
def setup_unit_address():
set_d = self.unit.set_public_address("x1.example.com")
exist_d = Deferred()
set_d.addCallback(lambda result: exist_d.callback(True))
return (exist_d, succeed(True))
self.mocker.call(setup_unit_address)
# Intercept the ssh call
self.patch(os, "system", lambda x: True)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["debug-hooks", "mysql/0"])
yield finished
self.assertIn("Waiting for unit", self.output.getvalue())
self.assertIn("Unit running", self.output.getvalue())
self.assertIn("Connecting to remote machine x1.example.com",
self.output.getvalue())
@inlineCallbacks
def test_debug_hook(self):
"""The debug cli will setup unit debug setting and ssh to a screen.
"""
system_mock = self.mocker.replace(os.system)
system_mock(ANY)
def on_ssh(command):
self.assertStartsWith(command, "ssh -t ubuntu@x2.example.com")
# In the function, os.system yields to facilitate testing.
self.assertEqual(
(yield self.unit.get_hook_debug()),
{"debug_hooks": ["*"]})
returnValue(True)
self.mocker.call(on_ssh)
finished = self.setup_cli_reactor()
self.mocker.replay()
# Setup the unit address.
yield self.unit.set_public_address("x2.example.com")
main(["debug-hooks", "mysql/0"])
yield finished
self.assertIn(
"Connecting to remote machine", self.output.getvalue())
self.assertIn(
"Debug session ended.", self.output.getvalue())
@inlineCallbacks
def test_standard_named_debug_hook(self):
"""A hook can be debugged by name.
"""
yield self.verify_hook_debug("start")
self.mocker.reset()
self.setup_exit(0)
yield self.verify_hook_debug("stop")
self.mocker.reset()
self.setup_exit(0)
yield self.verify_hook_debug("server-relation-changed")
self.mocker.reset()
self.setup_exit(0)
yield self.verify_hook_debug("server-relation-changed",
"server-relation-broken")
self.mocker.reset()
self.setup_exit(0)
yield self.verify_hook_debug("server-relation-joined",
"server-relation-departed")
@inlineCallbacks
def verify_hook_debug(self, *hook_names):
"""Utility function to verify hook debugging by name
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
system_mock = self.mocker.replace(os.system)
system_mock(ANY)
def on_ssh(command):
self.assertStartsWith(command, "ssh -t ubuntu@x11.example.com")
self.assertEqual(
(yield self.unit.get_hook_debug()),
{"debug_hooks": list(hook_names)})
returnValue(True)
self.mocker.call(on_ssh)
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.set_public_address("x11.example.com")
args = ["debug-hooks", "mysql/0"]
args.extend(hook_names)
main(args)
yield finished
self.assertIn(
"Connecting to remote machine", self.output.getvalue())
self.assertIn(
"Debug session ended.", self.output.getvalue())
juju-0.7.orig/juju/control/tests/test_debug_log.py
import json
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.control.tests.common import ControlToolTest
from juju.lib.tests.test_zklog import LogTestBase
class ControlDebugLogTest(LogTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlDebugLogTest, self).setUp()
yield self.push_default_config()
@inlineCallbacks
def test_replay(self):
"""
Older logs can be replayed without affecting the current
position pointer.
"""
self.log = yield self.get_configured_log()
for i in range(20):
self.log.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
yield self.client.set("/logs", json.dumps({"next-log-index": 15}))
main(["debug-log", "--replay", "--limit", "5"])
yield cli_done
output = stream.getvalue().split("\n")
for i in range(5):
self.assertIn("unit:mysql/0: test-zk-log INFO: %s" % i, output[i])
# We can run it again and get the same output
self.mocker.reset()
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
yield self.client.set("/logs", json.dumps({"next-log-index": 15}))
main(["debug-log", "--replay", "--limit", "5"])
yield cli_done
output2 = stream.getvalue().split("\n")
self.assertEqual(output, output2)
content, stat = yield self.client.get("/logs")
self.assertEqual(json.loads(content), {"next-log-index": 15})
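A sketch of the `--replay` semantics this test verifies: entries are read from the start of the log, and the stored `next-log-index` pointer is never consulted or mutated. The data structures here are assumed for illustration, not juju's actual storage format:

```python
def replay(entries, state, limit):
    # --replay deliberately ignores state["next-log-index"] and leaves it
    # untouched; only the first `limit` entries are returned
    return entries[:limit]

logs = ["unit:mysql/0: test-zk-log INFO: %s" % i for i in range(20)]
state = {"next-log-index": 15}
first = replay(logs, state, 5)
second = replay(logs, state, 5)
# running replay twice yields identical output and an unchanged pointer
assert first == second
assert state == {"next-log-index": 15}
```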
@inlineCallbacks
def test_include_agent(self):
"""Messages can be filtered to include only certain agents."""
log = yield self.get_configured_log("hook.output", "unit:cassandra/10")
log2 = yield self.get_configured_log("hook.output", "unit:cassandra/1")
# example of an agent context name sans ":"
log3 = yield self.get_configured_log("unit.workflow", "mysql/1")
for i in range(5):
log.info(str(i))
for i in range(5):
log2.info(str(i))
for i in range(5):
log3.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "--include", "cassandra/1", "--limit", "4"])
yield cli_done
output = stream.getvalue()
self.assertNotIn("mysql/1", output)
self.assertNotIn("cassandra/10", output)
self.assertIn("cassandra/1", output)
@inlineCallbacks
def test_include_log(self):
"""Messages can be filtered to include only certain log channels."""
log = yield self.get_configured_log("hook.output", "unit:cassandra/1")
log2 = yield self.get_configured_log("unit.workflow", "unit:mysql/1")
log3 = yield self.get_configured_log("provisioning", "agent:provision")
for i in range(5):
log.info(str(i))
for i in range(5):
log2.info(str(i))
for i in range(5):
log3.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "--include", "unit.workflow",
"-i", "agent:provision", "--limit", "8"])
yield cli_done
output = stream.getvalue()
self.assertNotIn("cassandra/1", output)
self.assertIn("mysql/1", output)
self.assertIn("provisioning", output)
@inlineCallbacks
def test_exclude_agent(self):
"""Messages can be filterd to exclude certain agents."""
log = yield self.get_configured_log("hook.output", "unit:cassandra/1")
log2 = yield self.get_configured_log("unit.workflow", "unit:mysql/1")
for i in range(5):
log.info(str(i))
for i in range(5):
log2.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "--exclude", "cassandra/1", "--limit", "4"])
yield cli_done
output = stream.getvalue()
self.assertNotIn("cassandra/1", output)
self.assertIn("mysql/1", output)
@inlineCallbacks
def test_exclude_log(self):
"""Messages can be filtered to exclude certain log channels."""
log = yield self.get_configured_log("hook.output", "unit:cassandra/1")
log2 = yield self.get_configured_log("provisioning", "agent:provision")
log3 = yield self.get_configured_log("unit.workflow", "unit:mysql/1")
for i in range(5):
log.info(str(i))
for i in range(5):
log2.info(str(i))
for i in range(5):
log3.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "-x", "unit:cass*", "-x", "provisioning",
"--limit", "4"])
yield cli_done
output = stream.getvalue()
self.assertNotIn("cassandra/1", output)
self.assertNotIn("provisioning", output)
self.assertIn("mysql/1", output)
@inlineCallbacks
def test_complex_filter(self):
"""Messages can be filtered to include only certain log channels."""
log = yield self.get_configured_log("hook.output", "unit:cassandra/1")
log2 = yield self.get_configured_log("hook.output", "unit:cassandra/2")
log3 = yield self.get_configured_log("hook.output", "unit:cassandra/3")
for i in range(5):
log.info(str(i))
for i in range(5):
log2.info(str(i))
for i in range(5):
log3.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(
["debug-log", "-i", "cassandra/*", "-x", "cassandra/1", "-n", "8"])
yield cli_done
output = stream.getvalue()
self.assertNotIn("cassandra/1", output)
self.assertIn("cassandra/2", output)
self.assertIn("cassandra/3", output)
@inlineCallbacks
def test_log_level(self):
"""Messages can be filtered by log level."""
log = yield self.get_configured_log()
for i in range(5):
log.info(str(i))
for i in range(5):
log.debug(str(i))
for i in range(5):
log.warning(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "--level", "WARNING", "--limit", "4"])
yield cli_done
output = stream.getvalue().split("\n")
for i in range(4):
self.assertIn("WARNING", output[i])
@inlineCallbacks
def test_log_file(self):
"""Logs can be sent to a file."""
log = yield self.get_configured_log()
for i in range(5):
log.info(str(i))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
file_path = self.makeFile()
main(["debug-log", "--output", file_path, "--limit", "4"])
yield cli_done
output = open(file_path).read().split("\n")
for i in range(4):
self.assertIn("unit:mysql/0: test-zk-log INFO: %s" % i, output[i])
@inlineCallbacks
def test_log_object(self):
"""Messages that utilize string interpolation are rendered correctly.
"""
class Foobar(object):
def __init__(self, v):
self._v = v
def __str__(self):
return str("Foobar:%s" % self._v)
log = yield self.get_configured_log("unit.workflow", "unit:mysql/1")
log.info("found a %s", Foobar(21))
log.info("jack jumped into a %s", Foobar("cauldron"))
cli_done = self.setup_cli_reactor()
self.setup_exit()
self.mocker.replay()
stream = self.capture_stream("stdout")
main(["debug-log", "--limit", "2"])
yield cli_done
output = stream.getvalue()
self.assertIn("Foobar:21", output)
self.assertIn("Foobar:cauldron", output)
juju-0.7.orig/juju/control/tests/test_deploy.py
import logging
import os
from twisted.internet.defer import inlineCallbacks, succeed
from juju.control import deploy, main
from juju.environment.environment import Environment
from juju.environment.config import EnvironmentsConfig
from juju.errors import CharmError
from juju.charm.directory import CharmDirectory
from juju.charm.repository import RemoteCharmRepository
from juju.charm.url import CharmURL
from juju.charm.errors import ServiceConfigValueError
from juju.lib import serializer
from juju.state.charm import CharmStateManager
from juju.state.environment import EnvironmentStateManager
from juju.state.errors import ServiceStateNameInUse, ServiceStateNotFound
from juju.state.service import ServiceStateManager
from juju.state.relation import RelationStateManager
from juju.lib.mocker import MATCH
from .common import MachineControlToolTest
class ControlDeployTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlDeployTest, self).setUp()
config = {
"environments": {
"firstenv": {
"type": "dummy",
"admin-secret": "homer",
"placement": "unassigned",
"default-series": "series"}}}
yield self.push_config("firstenv", config)
def test_deploy_multiple_environments_none_specified(self):
"""
If multiple environments are configured, with no default,
one must be specified for the deploy command.
"""
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"},
"secondenv": {
"type": "dummy", "admin-secret": "marge"}}}
self.write_config(serializer.dump(config))
stderr = self.capture_logging()
main(["deploy", "--repository", self.unbundled_repo_path, "mysql"])
self.assertIn("There are multiple environments", stderr.getvalue())
def test_no_repository(self):
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main(["deploy", "local:redis"])
self.assertIn("No repository specified", stderr.getvalue())
def test_repository_from_environ(self):
""" test using environment to set a default repository """
self.change_environment(JUJU_REPOSITORY=self.unbundled_repo_path)
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main([
"deploy", "local:redis"])
self.assertNotIn("No repository specified", stderr.getvalue())
def test_charm_not_found(self):
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main([
"deploy", "--repository", self.unbundled_repo_path, "local:redis"])
self.assertIn(
"Charm 'local:series/redis' not found in repository",
stderr.getvalue())
def test_nonsense_constraint(self):
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main([
"deploy", "--repository", self.unbundled_repo_path, "local:sample",
"--constraints", "arch=arm tweedledee tweedledum"])
self.assertIn(
"Could not interpret 'tweedledee' constraint: need more than 1 "
"value to unpack",
stderr.getvalue())
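The error message asserted above ("need more than 1 value to unpack") suggests each space-separated token is unpacked into a `name=value` pair. A hedged sketch of that parsing, with assumed function and message shapes:

```python
def parse_constraints(text):
    # split on whitespace; each token must be a single name=value pair
    pairs = {}
    for token in text.split():
        try:
            name, value = token.split("=")
        except ValueError:
            raise ValueError(
                "Could not interpret %r constraint" % token)
        pairs[name] = value
    return pairs

assert parse_constraints("arch=arm mem=64G") == {"arch": "arm",
                                                 "mem": "64G"}
```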
@inlineCallbacks
def test_deploy_service_name_conflict(self):
"""Raise an error if a service name conflicts with an existing service
"""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"beekeeper", logging.getLogger("deploy"), [])
# deploy the service a second time to generate a name conflict
d = deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"beekeeper", logging.getLogger("deploy"), [])
error = yield self.failUnlessFailure(d, ServiceStateNameInUse)
self.assertEqual(
str(error),
"Service name 'beekeeper' is already in use")
@inlineCallbacks
def test_deploy_no_service_name_short_charm_name(self):
"""Uses charm name as service name if possible."""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
None, logging.getLogger("deploy"), [])
service = yield ServiceStateManager(
self.client).get_service_state("sample")
self.assertEqual(service.service_name, "sample")
@inlineCallbacks
def test_deploy_no_service_name_long_charm_name(self):
"""Uses charm name as service name if possible."""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path,
"local:series/sample", None, logging.getLogger("deploy"), [])
service = yield ServiceStateManager(
self.client).get_service_state("sample")
self.assertEqual(service.service_name, "sample")
def xtest_deploy_with_nonexistent_environment_specified(self):
self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"},
"secondenv": {
"type": "dummy", "admin-secret": "marge"}}}
self.write_config(serializer.dump(config))
stderr = self.capture_logging()
main(["deploy", "--environment", "roman-candle",
"--repository", self.unbundled_repo_path, "sample"])
self.assertIn("Invalid environment 'roman-candle'", stderr.getvalue())
def test_deploy_with_environment_specified(self):
self.setup_cli_reactor()
self.setup_exit(0)
command = self.mocker.replace("juju.control.deploy.deploy")
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"},
"secondenv": {
"type": "dummy", "admin-secret": "marge"}}}
self.write_config(serializer.dump(config))
def match_config(config):
return isinstance(config, EnvironmentsConfig)
def match_environment(environment):
return isinstance(environment, Environment) and \
environment.name == "secondenv"
command(MATCH(match_config), MATCH(match_environment),
self.unbundled_repo_path, "local:sample", None,
MATCH(lambda x: isinstance(x, logging.Logger)),
["cpu=36", "mem=64G"], None, False, num_units=1)
self.mocker.result(succeed(True))
self.mocker.replay()
main(["deploy", "--environment", "secondenv", "--repository",
self.unbundled_repo_path, "--constraints", "cpu=36 mem=64G",
"local:sample"])
@inlineCallbacks
def test_deploy(self):
"""Create service, and service unit on machine from charm"""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), ["cpu=123"])
topology = yield self.get_topology()
service_id = topology.find_service_with_name("myblog")
self.assertEqual(service_id, "service-%010d" % 0)
exists = yield self.client.exists("/services/%s" % service_id)
self.assertTrue(exists)
service_state_manager = ServiceStateManager(self.client)
service_state = yield service_state_manager.get_service_state("myblog")
charm_id = yield service_state.get_charm_id()
self.assertEquals(charm_id, "local:series/sample-2")
constraints = yield service_state.get_constraints()
expect_constraints = {
"arch": "amd64", "cpu": 123, "mem": 512,
"provider-type": "dummy", "ubuntu-series": "series"}
self.assertEquals(constraints, expect_constraints)
machine_ids = topology.get_machines()
self.assertEqual(
machine_ids,
["machine-%010d" % 0, "machine-%010d" % 1])
exists = yield self.client.exists("/machines/%s" % machine_ids[0])
self.assertTrue(exists)
unit_ids = topology.get_service_units(service_id)
self.assertEqual(unit_ids, ["unit-%010d" % 0])
exists = yield self.client.exists("/units/%s" % unit_ids[0])
self.assertTrue(exists)
@inlineCallbacks
def test_deploy_upgrade(self):
"""A charm can be deployed and get the latest version"""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), [])
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog2", logging.getLogger("deploy"), [], upgrade=True)
services = ServiceStateManager(self.client)
service1 = yield services.get_service_state("myblog")
s1_charm_id = yield service1.get_charm_id()
service2 = yield services.get_service_state("myblog2")
s2_charm_id = yield service2.get_charm_id()
self.assertNotEqual(s1_charm_id, s2_charm_id)
charms = CharmStateManager(self.client)
charm1 = yield charms.get_charm_state(s1_charm_id)
charm2 = yield charms.get_charm_state(s2_charm_id)
self.assertEqual(charm1.revision + 1, charm2.revision)
@inlineCallbacks
def test_deploy_upgrade_bundle(self):
"""The upgrade option is invalid with a charm bundle."""
# bundle sample charms
output = self.capture_logging("deploy")
CharmDirectory(self.sample_dir1).make_archive(
os.path.join(self.bundled_repo_path, "series", "old.charm"))
CharmDirectory(self.sample_dir2).make_archive(
os.path.join(self.bundled_repo_path, "series", "new.charm"))
environment = self.config.get("firstenv")
error = yield self.assertFailure(
deploy.deploy(
self.config, environment,
self.bundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), [], upgrade=True),
CharmError)
self.assertIn("Searching for charm", output.getvalue())
self.assertIn("Only local directory charms can be upgraded on deploy",
str(error))
@inlineCallbacks
def test_deploy_upgrade_remote(self):
"""The upgrade option is invalid with a remote charm."""
repo = self.mocker.mock(RemoteCharmRepository)
repo.type
self.mocker.result("store")
resolve = self.mocker.replace("juju.control.deploy.resolve")
resolve("cs:sample", None, "series")
self.mocker.result((repo, CharmURL.infer("cs:sample", "series")))
repo.find(MATCH(lambda x: isinstance(x, CharmURL)))
self.mocker.result(CharmDirectory(self.sample_dir1))
self.mocker.replay()
environment = self.config.get("firstenv")
error = yield self.assertFailure(deploy.deploy(
self.config, environment, None, "cs:sample",
"myblog", logging.getLogger("deploy"), [], upgrade=True),
CharmError)
self.assertIn("Only local directory charms can be upgraded on deploy",
str(error))
@inlineCallbacks
def test_deploy_multiple_units(self):
"""Create service, and service unit on machine from charm"""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), [], num_units=5)
topology = yield self.get_topology()
service_id = topology.find_service_with_name("myblog")
self.assertEqual(service_id, "service-%010d" % 0)
exists = yield self.client.exists("/services/%s" % service_id)
self.assertTrue(exists)
# Verify standard placement policy - unit placed on a new machine
machine_ids = topology.get_machines()
self.assertEqual(
set(machine_ids),
set(["machine-%010d" % i for i in xrange(6)]))
for i in xrange(6):
self.assertTrue(
(yield self.client.exists("/machines/%s" % machine_ids[i])))
unit_ids = topology.get_service_units(service_id)
self.assertEqual(
set(unit_ids),
set(["unit-%010d" % i for i in xrange(5)]))
for i in xrange(5):
self.assertTrue(
(yield self.client.exists("/units/%s" % unit_ids[i])))
@inlineCallbacks
def test_deploy_sends_environment(self):
"""Uses charm name as service name if possible."""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
None, logging.getLogger("deploy"), [])
env_state_manager = EnvironmentStateManager(self.client)
env_config = yield env_state_manager.get_config()
self.assertEquals(serializer.load(env_config.serialize("firstenv")),
serializer.load(self.config.serialize("firstenv")))
@inlineCallbacks
def test_deploy_reuses_machines(self):
"""Verify that if machines are not in use, deploy uses them."""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:mysql",
None, logging.getLogger("deploy"), [])
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path,
"local:wordpress", None, logging.getLogger("deploy"), [])
yield self.destroy_service("mysql")
yield self.destroy_service("wordpress")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path,
"local:wordpress", None, logging.getLogger("deploy"), [])
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:mysql",
None, logging.getLogger("deploy"), [])
yield self.assert_machine_assignments("wordpress", [1])
yield self.assert_machine_assignments("mysql", [2])
def test_deploy_missing_config(self):
"""Missing config files should prevent the deployment"""
stderr = self.capture_logging()
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
# missing config file
main(["deploy", "--config", "missing",
"--repository", self.unbundled_repo_path, "local:sample"])
self.assertIn("Config file 'missing'", stderr.getvalue())
@inlineCallbacks
def test_deploy_with_bad_config(self):
"""Valid config options should be available to the deployed
service."""
config_file = self.makeFile(
serializer.dump(dict(otherservice=dict(application_file="foo"))))
environment = self.config.get("firstenv")
failure = deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), [], config_file)
error = yield self.assertFailure(failure, ServiceConfigValueError)
self.assertIn(
"Expected a YAML dict with service name ('myblog').", str(error))
@inlineCallbacks
def test_deploy_with_invalid_config(self):
"""Can't deploy with config that doesn't pass charm validation."""
config_file = self.makeFile(
serializer.dump(dict(myblog=dict(application_file="foo"))))
environment = self.config.get("firstenv")
failure = deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:sample",
"myblog", logging.getLogger("deploy"), [], config_file)
error = yield self.assertFailure(failure, ServiceConfigValueError)
self.assertIn(
"application_file is not a valid configuration option",
str(error))
yield self.assertFailure(
ServiceStateManager(self.client).get_service_state("myblog"),
ServiceStateNotFound)
@inlineCallbacks
def test_deploy_with_config(self):
"""Valid config options should be available to the deployed
service."""
config_file = self.makeFile(serializer.dump(dict(
myblog=dict(outlook="sunny",
username="tester01"))))
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:dummy",
"myblog", logging.getLogger("deploy"), [], config_file)
# Verify that options in the yaml are available as state after
# the deploy call (successfully applied)
service = yield ServiceStateManager(
self.client).get_service_state("myblog")
config = yield service.get_config()
self.assertEqual(config["outlook"], "sunny")
self.assertEqual(config["username"], "tester01")
# a default value from the config.yaml
self.assertEqual(config["title"], "My Title")
@inlineCallbacks
def test_deploy_with_default_config(self):
"""Valid config options should be available to the deployed
service."""
environment = self.config.get("firstenv")
# Here we explicitly pass no config file, but the service's
# associated config.yaml defines defaults which we expect to
# find anyway.
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:dummy",
"myblog", logging.getLogger("deploy"), [], None)
# Verify that options in the yaml are available as state after
# the deploy call (successfully applied)
service = yield ServiceStateManager(
self.client).get_service_state("myblog")
config = yield service.get_config()
self.assertEqual(config["title"], "My Title")
@inlineCallbacks
def test_deploy_adds_peer_relations(self):
"""Deploy automatically adds a peer relations."""
environment = self.config.get("firstenv")
yield deploy.deploy(
self.config, environment, self.unbundled_repo_path, "local:riak",
None, logging.getLogger("deploy"), [])
service_manager = ServiceStateManager(self.client)
service_state = yield service_manager.get_service_state("riak")
relation_manager = RelationStateManager(self.client)
relations = yield relation_manager.get_relations_for_service(
service_state)
self.assertEqual(len(relations), 1)
self.assertEqual(relations[0].relation_name, "ring")
@inlineCallbacks
def test_deploy_policy_from_environment(self):
config = {
"environments": {"firstenv": {
"placement": "local",
"type": "dummy",
"default-series": "series"}}}
yield self.push_config("firstenv", config)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["deploy", "--environment", "firstenv", "--repository",
self.unbundled_repo_path, "local:sample", "beekeeper"])
yield finished
# and verify its placed on node 0 (as per local policy)
service = yield self.service_state_manager.get_service_state(
"beekeeper")
units = yield service.get_all_unit_states()
unit = units[0]
machine_id = yield unit.get_assigned_machine_id()
self.assertEqual(machine_id, 0)
@inlineCallbacks
def test_deploy_informs_with_subordinate(self):
"""Verify subordinate charm doesn't deploy.
And that it properly notifies the user.
"""
log = self.capture_logging()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
# missing config file
main(["deploy",
"--repository", self.unbundled_repo_path, "local:logging"])
yield finished
self.assertIn(
"Subordinate 'logging' awaiting relationship to "
"principal for deployment.\n",
log.getvalue())
# and verify no units assigned to service
service_state = yield self.service_state_manager.get_service_state("logging")
self.assertEqual(service_state.service_name, "logging")
units = yield service_state.get_unit_names()
self.assertEqual(units, [])
@inlineCallbacks
def test_deploy_legacy_keys_in_legacy_env(self):
yield self.client.delete("/constraints")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["deploy", "--repository", self.unbundled_repo_path,
"local:sample", "beekeeper"])
yield finished
service_manager = ServiceStateManager(self.client)
yield service_manager.get_service_state("beekeeper")
@inlineCallbacks
def test_deploy_legacy_keys_in_fresh_env(self):
yield self.push_default_config()
local_config = {
"environments": {"firstenv": {
"type": "dummy",
"some-legacy-key": "blah",
"default-series": "series"}}}
self.write_config(serializer.dump(local_config))
self.config.load()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
stderr = self.capture_logging()
main(["deploy", "--repository", self.unbundled_repo_path,
"local:sample", "beekeeper"])
yield finished
self.assertIn(
"Your environments.yaml contains deprecated keys",
stderr.getvalue())
service_manager = ServiceStateManager(self.client)
yield self.assertFailure(
service_manager.get_service_state("beekeeper"),
ServiceStateNotFound)
@inlineCallbacks
def test_deploy_constraints_in_legacy_env(self):
yield self.client.delete("/constraints")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
stderr = self.capture_logging()
main(["deploy", "--repository", self.unbundled_repo_path,
"local:sample", "beekeeper", "--constraints", "arch=i386"])
yield finished
self.assertIn(
"Constraints are not valid in legacy deployments.",
stderr.getvalue())
service_manager = ServiceStateManager(self.client)
yield self.assertFailure(
service_manager.get_service_state("beekeeper"),
ServiceStateNotFound)
juju-0.7.orig/juju/control/tests/test_destroy_environment.py
from twisted.internet.defer import succeed, inlineCallbacks
from juju.lib.serializer import dump
from juju.lib.mocker import MATCH
from juju.providers.dummy import MachineProvider
from juju.control import main
from .common import ControlToolTest
class ControlDestroyEnvironmentTest(ControlToolTest):
@inlineCallbacks
def test_destroy_multiple_environments_no_default(self):
"""With multiple environments a default needs to be set or passed.
"""
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main(["destroy-environment"])
yield finished
self.assertIn(
"There are multiple environments and no explicit default",
stderr.getvalue())
@inlineCallbacks
def test_destroy_invalid_environment(self):
"""If an invalid environment is specified, an error message is given.
"""
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
stderr = self.capture_logging()
main(["destroy-environment", "-e", "thirdenv"])
yield finished
self.assertIn("Invalid environment 'thirdenv'", stderr.getvalue())
@inlineCallbacks
def test_destroy_environment_prompt_no(self):
"""If a user returns no to the prompt, destroy-environment is aborted.
"""
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
self.setup_exit(0)
mock_raw = self.mocker.replace(raw_input)
mock_raw(MATCH(lambda x: x.startswith(
"WARNING: this command will destroy the 'secondenv' "
"environment (type: dummy)")))
self.mocker.result("n")
self.mocker.replay()
main(["destroy-environment", "-e", "secondenv"])
yield finished
self.assertIn(
"Environment destruction aborted", self.log.getvalue())
@inlineCallbacks
def test_destroy_environment(self):
"""Command will terminate instances in only one environment."""
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
envs = set(("firstenv", "secondenv"))
def track_destroy_environment_call(self):
envs.remove(self.environment_name)
return succeed(True)
provider = self.mocker.patch(MachineProvider)
provider.destroy_environment()
self.mocker.call(track_destroy_environment_call, with_object=True)
self.setup_exit(0)
mock_raw = self.mocker.replace(raw_input)
mock_raw(MATCH(lambda x: x.startswith(
"WARNING: this command will destroy the 'secondenv' "
"environment (type: dummy)")))
self.mocker.result("y")
self.mocker.replay()
main(["destroy-environment", "-e", "secondenv"])
yield finished
self.assertIn("Destroying environment 'secondenv' (type: dummy)...",
self.log.getvalue())
self.assertEqual(envs, set(["firstenv"]))
@inlineCallbacks
def test_destroy_default_environment(self):
"""Command works with default environment, if specified."""
config = {
"default": "thirdenv",
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"},
"thirdenv": {"type": "dummy"}}}
self.write_config(dump(config))
finished = self.setup_cli_reactor()
envs = set(("firstenv", "secondenv", "thirdenv"))
def track_destroy_environment_call(self):
envs.remove(self.environment_name)
return succeed(True)
provider = self.mocker.patch(MachineProvider)
provider.destroy_environment()
self.mocker.call(track_destroy_environment_call, with_object=True)
self.setup_exit(0)
mock_raw = self.mocker.replace(raw_input)
mock_raw(MATCH(lambda x: x.startswith(
"WARNING: this command will destroy the 'thirdenv' "
"environment (type: dummy)")))
self.mocker.result("y")
self.mocker.replay()
main(["destroy-environment"])
yield finished
self.assertIn("Destroying environment 'thirdenv' (type: dummy)...",
self.log.getvalue())
self.assertEqual(envs, set(["firstenv", "secondenv"]))
juju-0.7.orig/juju/control/tests/test_destroy_service.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from .common import ControlToolTest
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.state.tests.test_service import ServiceStateManagerTestBase
from juju.providers import dummy # for coverage/trial interaction
class ControlStopServiceTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlStopServiceTest, self).setUp()
self.service_state1 = yield self.add_service_from_charm("mysql")
self.service1_unit = yield self.service_state1.add_unit_state()
self.service_state2 = yield self.add_service_from_charm("wordpress")
yield self.add_relation(
"database",
(self.service_state1, "db", "server"),
(self.service_state2, "db", "client"))
self.output = self.capture_logging()
@inlineCallbacks
def test_stop_service(self):
"""
'juju-control stop_service ' will shutdown all configured instances
in all environments.
"""
topology = yield self.get_topology()
service_id = topology.find_service_with_name("mysql")
self.assertNotEqual(service_id, None)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["destroy-service", "mysql"])
yield finished
topology = yield self.get_topology()
self.assertFalse(topology.has_service(service_id))
exists = yield self.client.exists("/services/%s" % service_id)
self.assertFalse(exists)
self.assertIn("Service 'mysql' destroyed.", self.output.getvalue())
@inlineCallbacks
def test_stop_unknown_service(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["destroy-service", "volcano"])
yield finished
self.assertIn(
"Service 'volcano' was not found", self.output.getvalue())
@inlineCallbacks
def test_destroy_subordinate_service(self):
log_service = yield self.add_service_from_charm("logging")
lu1 = yield log_service.add_unit_state()
yield self.add_relation(
"juju-info", "container",
(self.service_state1, "juju-info", "server"),
(log_service, "juju-info", "client"),
)
finished = self.setup_cli_reactor()
topology = yield self.get_topology()
service_id = topology.find_service_with_name("logging")
self.assertNotEqual(service_id, None)
self.setup_exit(0)
self.mocker.replay()
main(["destroy-service", "logging"])
yield finished
topology = yield self.get_topology()
service_id = topology.find_service_with_name("logging")
self.assertTrue(topology.has_service(service_id))
exists = yield self.client.exists("/services/%s" % service_id)
self.assertTrue(exists)
self.assertEquals(
"Unsupported attempt to destroy subordinate service 'logging' "
"while principal service 'mysql' is related.\n",
self.output.getvalue())
@inlineCallbacks
def test_destroy_principal_with_subordinates(self):
log_service = yield self.add_service_from_charm("logging")
lu1 = yield log_service.add_unit_state()
yield self.add_relation(
"juju-info",
(self.service_state1, "juju-info", "server"),
(log_service, "juju-info", "client"))
finished = self.setup_cli_reactor()
topology = yield self.get_topology()
service_id = topology.find_service_with_name("mysql")
logging_id = topology.find_service_with_name("logging")
self.assertNotEqual(service_id, None)
self.assertNotEqual(logging_id, None)
self.setup_exit(0)
self.mocker.replay()
main(["destroy-service", "mysql"])
yield finished
topology = yield self.get_topology()
service_id = topology.find_service_with_name("mysql")
self.assertFalse(topology.has_service(service_id))
exists = yield self.client.exists("/services/%s" % service_id)
self.assertFalse(exists)
# Verify the subordinate state was not removed as well
# destroy should allow the destruction of subordinate services
# with no relations. This means removing the principal and then
# breaking the relation will allow for actual removal from
# Zookeeper. see test_destroy_subordinate_without_relations.
exists = yield self.client.exists("/services/%s" % logging_id)
self.assertTrue(exists)
self.assertIn("Service 'mysql' destroyed.", self.output.getvalue())
@inlineCallbacks
def test_destroy_subordinate_without_relations(self):
"""Verify we can remove a subordinate w/o relations."""
yield self.add_service_from_charm("logging")
finished = self.setup_cli_reactor()
topology = yield self.get_topology()
logging_id = topology.find_service_with_name("logging")
self.assertNotEqual(logging_id, None)
self.setup_exit(0)
self.mocker.replay()
main(["destroy-service", "logging"])
yield finished
topology = yield self.get_topology()
self.assertFalse(topology.has_service(logging_id))
exists = yield self.client.exists("/services/%s" % logging_id)
self.assertFalse(exists)
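The subordinate checks above follow one rule: a subordinate service may only be destroyed once no principal service is related to it. A minimal sketch of that guard, with hypothetical helper and container names standing in for juju's topology objects:

```python
def check_destroyable(service_name, subordinates, relations):
    """Raise if `service_name` is a subordinate with a related principal.

    `subordinates` is a set of subordinate service names; `relations` is
    a list of (principal, subordinate) pairs. Both are illustrative
    stand-ins for juju's real topology state.
    """
    if service_name in subordinates:
        principals = [p for p, s in relations if s == service_name]
        if principals:
            raise ValueError(
                "Unsupported attempt to destroy subordinate service "
                "%r while principal service %r is related."
                % (service_name, principals[0]))

# An unrelated subordinate passes; a related one is rejected.
check_destroyable("logging", {"logging"}, [])
try:
    check_destroyable("logging", {"logging"}, [("mysql", "logging")])
except ValueError as e:
    print(e)
```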
juju-0.7.orig/juju/control/tests/test_expose.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.control.tests.common import ControlToolTest
from juju.lib import serializer
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ExposeControlTest(
ServiceStateManagerTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ExposeControlTest, self).setUp()
config = {
"environments": {"firstenv": {"type": "dummy"}}}
self.write_config(serializer.dump(config))
self.config.load()
self.service_state = yield self.add_service_from_charm("wordpress")
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_expose_service(self):
"""Test subcommand sets the exposed flag for service."""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["expose", "wordpress"])
yield finished
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertTrue(exposed_flag)
self.assertIn("Service 'wordpress' was exposed.", self.output.getvalue())
@inlineCallbacks
def test_expose_service_twice(self):
"""Test subcommand can run multiple times, keeping service exposed."""
yield self.service_state.set_exposed_flag()
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertTrue(exposed_flag)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["expose", "wordpress"])
yield finished
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertTrue(exposed_flag)
self.assertIn("Service 'wordpress' was already exposed.",
self.output.getvalue())
# various errors
def test_expose_with_no_args(self):
"""Test subcommand takes at least one service argument."""
# in argparse, before reactor startup
self.assertRaises(SystemExit, main, ["expose"])
self.assertIn(
"juju expose: error: too few arguments",
self.stderr.getvalue())
def test_expose_with_too_many_args(self):
"""Test subcommand takes at most one service argument."""
self.assertRaises(
SystemExit, main, ["expose", "foo", "fum"])
self.assertIn(
"juju: error: unrecognized arguments: fum",
self.stderr.getvalue())
@inlineCallbacks
def test_expose_unknown_service(self):
"""Test subcommand fails if service does not exist."""
finished = self.setup_cli_reactor()
self.setup_exit(0) # XXX change when bug 697093 is fixed
self.mocker.replay()
main(["expose", "foobar"])
yield finished
self.assertIn(
"Service 'foobar' was not found",
self.output.getvalue())
@inlineCallbacks
def test_invalid_environment(self):
"""Test command with an environment that hasn't been set up."""
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
main(["expose", "--environment", "roman-candle", "wordpress"])
yield finished
self.assertIn(
"Invalid environment 'roman-candle'",
self.output.getvalue())
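test_expose_service_twice relies on the exposed flag being idempotent: setting it a second time is a no-op that only changes the reported message. A toy stand-in (not juju's actual ServiceState API, where the flag lives in ZooKeeper) makes the behaviour concrete:

```python
class ExposedFlag:
    # Stand-in for a service's exposed flag; illustrative only.
    def __init__(self):
        self._exposed = False

    def set(self):
        """Expose the service; return True if it was already exposed."""
        already = self._exposed
        self._exposed = True
        return already

    def get(self):
        return self._exposed

flag = ExposedFlag()
print(flag.set())  # False: first expose
print(flag.set())  # True: "was already exposed"
print(flag.get())  # True either way
```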
juju-0.7.orig/juju/control/tests/test_initialize.py
from base64 import b64encode
from twisted.internet.defer import succeed
from txzookeeper import ZookeeperClient
from juju.state.initialize import StateHierarchy
from juju.lib.serializer import dump
from juju.control import admin
from .common import ControlToolTest
class AdminInitializeTest(ControlToolTest):
def test_initialize(self):
"""The admin cli dispatches the initialize method with arguments."""
client = self.mocker.patch(ZookeeperClient)
hierarchy = self.mocker.patch(StateHierarchy)
self.setup_cli_reactor()
client.connect()
self.mocker.result(succeed(client))
hierarchy.initialize()
self.mocker.result(succeed(True))
client.close()
self.capture_stream('stderr')
self.setup_exit(0)
self.mocker.replay()
constraints_data = b64encode(dump({
"ubuntu-series": "foo", "provider-type": "bar"}))
admin(["initialize",
"--instance-id", "foobar",
"--admin-identity", "admin:genie",
"--constraints-data", constraints_data,
"--provider-type", "dummy"])
def test_bad_constraints_data(self):
"""Test that failing to unpack --constraints-data aborts initialize"""
client = self.mocker.patch(ZookeeperClient)
self.setup_cli_reactor()
client.connect()
self.mocker.result(succeed(client))
self.capture_stream('stderr')
self.setup_exit(1)
self.mocker.replay()
admin(["initialize",
"--instance-id", "foobar",
"--admin-identity", "admin:genie",
"--constraints-data", "zaphod's just this guy, you know?",
"--provider-type", "dummy"])
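The initialize tests pass --constraints-data as a base64-encoded serialized mapping, and the bad-data test relies on an undecodable blob aborting the command. A sketch of that round trip, using json as a stand-in for juju's own serializer (an assumption; the real code uses juju.lib.serializer):

```python
import base64
import json

def pack_constraints(data):
    # Serialize then base64-encode, mirroring how the test builds
    # --constraints-data (hypothetical helper name).
    return base64.b64encode(json.dumps(data).encode("utf-8"))

def unpack_constraints(blob):
    # Reverse step; a malformed blob raises, which is what
    # test_bad_constraints_data depends on to exit non-zero.
    return json.loads(base64.b64decode(blob, validate=True))

packed = pack_constraints({"ubuntu-series": "foo", "provider-type": "bar"})
print(unpack_constraints(packed))  # round-trips the original mapping
```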
juju-0.7.orig/juju/control/tests/test_open_tunnel.py
from juju.control import main, open_tunnel
from juju.lib.serializer import dump
from juju.providers.dummy import MachineProvider
from .common import ControlToolTest
class OpenTunnelTest(ControlToolTest):
def test_open_tunnel(self):
"""
'juju open-tunnel' connects to the environment and keeps the
tunnel open until interrupted.
"""
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"}}}
self.write_config(dump(config))
self.setup_cli_reactor()
self.setup_exit(0)
provider = self.mocker.patch(MachineProvider)
provider.connect(share=True)
hanging_deferred = self.mocker.replace(open_tunnel.hanging_deferred)
def callback(deferred):
deferred.callback(None)
return deferred
hanging_deferred()
self.mocker.passthrough(callback)
self.mocker.replay()
self.capture_stream("stderr")
main(["open-tunnel"])
lines = filter(None, self.log.getvalue().split("\n"))
self.assertEqual(
lines,
["Tunnel to the environment is open. Press CTRL-C to close it.",
"'open_tunnel' command finished successfully"])
juju-0.7.orig/juju/control/tests/test_remove_relation.py
import logging
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.control import main, remove_relation
from juju.control.tests.common import ControlToolTest
from juju.lib import serializer
from juju.machine.tests.test_constraints import dummy_constraints
from juju.state.errors import ServiceStateNotFound
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ControlRemoveRelationTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlRemoveRelationTest, self).setUp()
config = {
"environments": {
"firstenv": {
"type": "dummy", "admin-secret": "homer"}}}
self.write_config(serializer.dump(config))
self.config.load()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def add_relation_state(self, *service_names):
for service_name in service_names:
# probe if service already exists
try:
yield self.service_state_manager.get_service_state(
service_name)
except ServiceStateNotFound:
yield self.add_service_from_charm(service_name)
endpoint_pairs = yield self.service_state_manager.join_descriptors(
*service_names)
endpoints = endpoint_pairs[0]
if endpoints[0] == endpoints[1]:
endpoints = endpoints[0:1]
relation_state = (yield self.relation_state_manager.add_relation_state(
*endpoints))[0]
returnValue(relation_state)
@inlineCallbacks
def assertRemoval(self, relation_state):
topology = yield self.get_topology()
self.assertFalse(topology.has_relation(relation_state.internal_id))
@inlineCallbacks
def test_remove_relation(self):
"""Test that the command works when run from the CLI itself."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
relation_state = yield self.add_relation_state("mysql", "wordpress")
yield self.add_relation_state("varnish", "wordpress")
main(["remove-relation", "mysql", "wordpress"])
yield wait_on_reactor_stopped
self.assertIn(
"Removed mysql relation from all service units.",
self.output.getvalue())
yield self.assertRemoval(relation_state)
@inlineCallbacks
def test_remove_peer_relation(self):
"""Test that services that peer can have that relation removed."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
relation_state = yield self.add_relation_state("riak", "riak")
main(["remove-relation", "riak", "riak"])
yield wait_on_reactor_stopped
self.assertIn(
"Removed riak relation from all service units.",
self.output.getvalue())
yield self.assertRemoval(relation_state)
@inlineCallbacks
def test_remove_relation_command(self):
"""Test removing a relation via supporting method in the cmd obj."""
relation_state = yield self.add_relation_state("mysql", "wordpress")
environment = self.config.get("firstenv")
yield remove_relation.remove_relation(
self.config, environment, False,
logging.getLogger("juju.control.cli"), "mysql", "wordpress")
self.assertIn(
"Removed mysql relation from all service units.",
self.output.getvalue())
yield self.assertRemoval(relation_state)
@inlineCallbacks
def test_verbose_flag(self):
"""Test the verbose flag."""
relation_state = yield self.add_relation_state("riak", "riak")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["--verbose", "remove-relation", "riak:ring", "riak:ring"])
yield wait_on_reactor_stopped
self.assertIn("Endpoint pairs", self.output.getvalue())
self.assertIn(
"Removed riak relation from all service units.",
self.output.getvalue())
yield self.assertRemoval(relation_state)
# test for various errors
def test_with_no_args(self):
"""Test two descriptor arguments are required for command."""
self.assertRaises(SystemExit, main, ["remove-relation"])
self.assertIn(
"juju remove-relation: error: too few arguments",
self.stderr.getvalue())
def test_too_many_arguments_provided(self):
"""Test that command rejects more than 2 descriptor arguments."""
self.assertRaises(
SystemExit, main, ["remove-relation", "foo", "fum", "bar"])
self.assertIn(
"juju: error: unrecognized arguments: bar",
self.stderr.getvalue())
@inlineCallbacks
def test_missing_service(self):
"""Test command fails if a service in the relation is missing."""
yield self.add_service_from_charm("mysql")
# but not wordpress
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-relation", "wordpress", "mysql"])
yield wait_on_reactor_stopped
self.assertIn(
"Service 'wordpress' was not found",
self.output.getvalue())
@inlineCallbacks
def test_no_common_relation_type(self):
"""Test command fails if no common relation between services."""
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("riak")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-relation", "riak", "mysql"])
yield wait_on_reactor_stopped
self.assertIn("No matching endpoints", self.output.getvalue())
@inlineCallbacks
def test_ambiguous_pairing(self):
"""Test command fails because the relation is ambiguous."""
yield self.add_service_from_charm("mysql-alternative")
yield self.add_service_from_charm("wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-relation", "wordpress", "mysql-alternative"])
yield wait_on_reactor_stopped
self.assertIn(
"Ambiguous relation 'wordpress mysql-alternative'; could refer "
"to:\n 'wordpress:db mysql-alternative:dev' (mysql client / "
"mysql server)\n 'wordpress:db mysql-alternative:prod' (mysql "
"client / mysql server)",
self.output.getvalue())
@inlineCallbacks
def test_missing_charm(self):
"""Test command fails because service has no corresponding charm."""
yield self.add_service("mysql_no_charm")
yield self.add_service_from_charm("wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-relation", "wordpress", "mysql_no_charm"])
yield wait_on_reactor_stopped
self.assertIn("No matching endpoints", self.output.getvalue())
@inlineCallbacks
def test_remove_relation_missing_relation(self):
"""Test that the command works when run from the CLI itself."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
yield self.add_service_from_charm("mysql")
yield self.add_service_from_charm("wordpress")
main(["remove-relation", "mysql", "wordpress"])
yield wait_on_reactor_stopped
self.assertIn(
"Relation not found",
self.output.getvalue())
@inlineCallbacks
def test_remove_subordinate_relation_with_principal(self):
yield self.add_service_from_charm("wordpress")
log_charm = yield self.get_subordinate_charm()
yield self.service_state_manager.add_service_state(
"logging",
log_charm,
dummy_constraints)
yield self.add_relation_state("logging", "wordpress")
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-relation", "logging", "wordpress"])
yield wait_on_reactor_stopped
self.assertIn("Unsupported attempt to destroy "
"subordinate service 'wordpress' while "
"principal service 'logging' is related.",
self.output.getvalue())
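The add_relation_state helper above collapses identical endpoints so that a peer relation (riak to riak) passes a single endpoint on to the relation state manager. That normalization step in isolation:

```python
def normalize_endpoints(pair):
    # A peer relation joins a service to itself, yielding two equal
    # endpoints; keep only one, as the helper does with
    # `endpoints = endpoints[0:1]`. (Hypothetical function name.)
    return pair[0:1] if pair[0] == pair[1] else pair

print(normalize_endpoints(("riak:ring", "riak:ring")))    # ('riak:ring',)
print(normalize_endpoints(("mysql:db", "wordpress:db")))  # both kept
```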
juju-0.7.orig/juju/control/tests/test_remove_unit.py
import logging
import sys
from twisted.internet.defer import inlineCallbacks
import zookeeper
from .common import ControlToolTest
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.control import main
from juju.state.endpoint import RelationEndpoint
from juju.state.tests.test_service import ServiceStateManagerTestBase
class ControlRemoveUnitTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlRemoveUnitTest, self).setUp()
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
# Setup some service units.
self.service_state1 = yield self.add_service_from_charm("mysql")
self.service_unit1 = yield self.service_state1.add_unit_state()
self.service_unit2 = yield self.service_state1.add_unit_state()
self.service_unit3 = yield self.service_state1.add_unit_state()
# Add an assigned machine to one of them.
self.machine = yield self.add_machine_state()
yield self.machine.set_instance_id(0)
yield self.service_unit1.assign_to_machine(self.machine)
# Setup a machine in the provider matching the assigned.
self.provider_machine = yield self.provider.start_machine(
{"machine-id": 0, "dns-name": "antigravity.example.com"})
self.output = self.capture_logging(level=logging.DEBUG)
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_remove_unit(self):
"""
'juju remove-unit <unit>' will remove the given unit.
"""
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 3)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "mysql/0"])
yield finished
topology = yield self.get_topology()
self.assertFalse(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit1.internal_id))
topology = yield self.get_topology()
self.assertTrue(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit2.internal_id))
self.assertFalse(
topology.get_service_units_in_machine(self.machine.internal_id))
self.assertIn(
"Unit 'mysql/0' removed from service 'mysql'",
self.output.getvalue())
@inlineCallbacks
def test_remove_multiple_units(self):
"""
'juju remove-unit <unit> [<unit> ...]' removes the desired units.
"""
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 3)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "mysql/0", "mysql/2"])
yield finished
topology = yield self.get_topology()
self.assertFalse(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit1.internal_id))
topology = yield self.get_topology()
self.assertTrue(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit2.internal_id))
self.assertFalse(
topology.get_service_units_in_machine(self.machine.internal_id))
self.assertIn(
"Unit 'mysql/0' removed from service 'mysql'",
self.output.getvalue())
self.assertIn(
"Unit 'mysql/2' removed from service 'mysql'",
self.output.getvalue())
@inlineCallbacks
def test_remove_unassigned_unit(self):
"""Remove unit also works if the unit is not assigned to a machine.
"""
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 3)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "mysql/1"])
yield finished
# verify the unit and its machine assignment.
unit_names = yield self.service_state1.get_unit_names()
self.assertEqual(len(unit_names), 2)
topology = yield self.get_topology()
self.assertFalse(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit2.internal_id))
topology = yield self.get_topology()
self.assertTrue(topology.has_service_unit(
self.service_state1.internal_id, self.service_unit1.internal_id))
@inlineCallbacks
def test_remove_unit_unknown_service(self):
"""If the service doesn't exist, return an appropriate error message.
"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "volcano/0"])
yield finished
self.assertIn(
"Service 'volcano' was not found", self.output.getvalue())
@inlineCallbacks
def test_remove_unit_with_subordinate(self):
wordpress = yield self.add_service_from_charm("wordpress")
logging = yield self.add_service_from_charm("logging")
wordpress_ep = RelationEndpoint("wordpress", "juju-info", "juju-info",
"server", "global")
logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
"client", "container")
relation_state, service_states = (yield
self.relation_state_manager.add_relation_state(
wordpress_ep, logging_ep))
wp1 = yield wordpress.add_unit_state()
yield logging.add_unit_state(container=wp1)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "logging/0"])
yield finished
self.assertIn(
"Unsupported attempt to destroy subordinate service "
"'logging/0' while principal service 'wordpress/0' is related.",
self.output.getvalue())
@inlineCallbacks
def test_remove_unit_bad_parse(self):
"""Verify that a bad service unit name results in an appropriate error.
"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "volcano-0"])
yield finished
self.assertIn(
"Not a proper unit name: 'volcano-0'", self.output.getvalue())
@inlineCallbacks
def test_remove_unit_unknown_unit(self):
"""If the unit doesn't exist an appropriate error message is returned.
"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["remove-unit", "mysql/3"])
yield finished
self.assertIn(
"Service unit 'mysql/3' was not found", self.output.getvalue())
@inlineCallbacks
def test_zookeeper_logging_default(self):
"""By default zookeeper logging is turned off, unless in verbose
mode.
"""
log_file = self.makeFile()
def reset_zk_log():
zookeeper.set_debug_level(0)
zookeeper.set_log_stream(sys.stdout)
self.addCleanup(reset_zk_log)
finished = self.setup_cli_reactor()
self.setup_exit(0)
# Do this as late as possible to prevent zk background logging
# from causing problems.
zookeeper.set_debug_level(zookeeper.LOG_LEVEL_INFO)
zookeeper.set_log_stream(open(log_file, "w"))
self.mocker.replay()
main(["remove-unit", "mysql/3"])
yield finished
output = open(log_file).read()
self.assertEqual(output, "")
@inlineCallbacks
def test_zookeeper_logging_enabled(self):
"""In verbose mode, zookeeper logging is enabled.
"""
log_file = self.makeFile()
zookeeper.set_debug_level(10)
zookeeper.set_log_stream(open(log_file, "w"))
def reset_zk_log():
zookeeper.set_debug_level(0)
zookeeper.set_log_stream(sys.stdout)
self.addCleanup(reset_zk_log)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["-v", "remove-unit", "mysql/3"])
yield finished
output = open(log_file).read()
self.assertTrue(output)
self.assertIn("ZOO_DEBUG", output)
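Both zookeeper logging tests register reset_zk_log with addCleanup before touching process-wide logging state, so the reset runs even when the test body fails. The same pattern with a plain module-level global standing in for the zookeeper bindings:

```python
import unittest

DEBUG_LEVEL = 0  # stands in for zookeeper's process-wide debug level

def set_debug_level(level):
    global DEBUG_LEVEL
    DEBUG_LEVEL = level

class CleanupExample(unittest.TestCase):
    def test_restores_global_state(self):
        # Register the reset first, mirroring reset_zk_log above, so
        # cleanup runs regardless of how the test body exits.
        self.addCleanup(set_debug_level, 0)
        set_debug_level(10)
        self.assertEqual(DEBUG_LEVEL, 10)

suite = unittest.TestLoader().loadTestsFromTestCase(CleanupExample)
unittest.TextTestRunner(verbosity=0).run(suite)
print(DEBUG_LEVEL)  # 0: the cleanup restored the global
```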
juju-0.7.orig/juju/control/tests/test_resolved.py
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.control import main
from juju.control.tests.common import ControlToolTest
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.state.service import RETRY_HOOKS, NO_HOOKS
from juju.state.tests.test_service import ServiceStateManagerTestBase
from juju.state.errors import ServiceStateNotFound
from juju.unit.workflow import UnitWorkflowState, RelationWorkflowState
from juju.unit.lifecycle import UnitRelationLifecycle
from juju.hooks.executor import HookExecutor
class ControlResolvedTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlResolvedTest, self).setUp()
yield self.add_relation_state("wordpress", "mysql")
yield self.add_relation_state("wordpress", "varnish")
self.service1 = yield self.service_state_manager.get_service_state(
"mysql")
self.service_unit1 = yield self.service1.add_unit_state()
self.service_unit2 = yield self.service1.add_unit_state()
self.unit1_workflow = UnitWorkflowState(
self.client, self.service_unit1, None, self.makeDir())
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("started")
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
self.executor = HookExecutor()
@inlineCallbacks
def add_relation_state(self, *service_names):
for service_name in service_names:
try:
yield self.service_state_manager.get_service_state(
service_name)
except ServiceStateNotFound:
yield self.add_service_from_charm(service_name)
endpoint_pairs = yield self.service_state_manager.join_descriptors(
*service_names)
endpoints = endpoint_pairs[0]
if endpoints[0] == endpoints[1]:
endpoints = endpoints[0:1]
relation_state = (yield self.relation_state_manager.add_relation_state(
*endpoints))[0]
returnValue(relation_state)
@inlineCallbacks
def get_named_service_relation(self, service_state, relation_name):
if isinstance(service_state, str):
service_state = yield self.service_state_manager.get_service_state(
service_state)
rels = yield self.relation_state_manager.get_relations_for_service(
service_state)
rels = [sr for sr in rels if sr.relation_name == relation_name]
if len(rels) == 1:
returnValue(rels[0])
returnValue(rels)
@inlineCallbacks
def setup_unit_relations(self, service_relation, *units):
"""
Given a service relation and set of unit tuples in the form
unit_state, unit_relation_workflow_state, will add unit relations
for these units and update their workflow state to the desired/given
state.
"""
for unit, state in units:
unit_relation = yield service_relation.add_unit_state(unit)
lifecycle = UnitRelationLifecycle(
self.client, unit.unit_name, unit_relation,
service_relation.relation_ident,
self.makeDir(), self.makeDir(), self.executor)
workflow_state = RelationWorkflowState(
self.client, unit_relation, service_relation.relation_name,
lifecycle, self.makeDir())
with (yield workflow_state.lock()):
yield workflow_state.set_state(state)
@inlineCallbacks
def test_resolved(self):
"""
'juju resolved <unit>' will schedule a unit for retrying from
an error state.
"""
# Push the unit into an error state
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("start_error")
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
self.assertEqual(
(yield self.service_unit1.get_resolved()), None)
main(["resolved", "mysql/0"])
yield finished
self.assertEqual(
(yield self.service_unit1.get_resolved()), {"retry": NO_HOOKS})
self.assertIn(
"Marked unit 'mysql/0' as resolved",
self.output.getvalue())
@inlineCallbacks
def test_resolved_retry(self):
"""
'juju resolved --retry <unit>' will schedule a unit for retrying
from an error state, with a retry of hook executions.
"""
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("start_error")
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
self.assertEqual(
(yield self.service_unit1.get_resolved()), None)
main(["resolved", "--retry", "mysql/0"])
yield finished
self.assertEqual(
(yield self.service_unit1.get_resolved()), {"retry": RETRY_HOOKS})
self.assertIn(
"Marked unit 'mysql/0' as resolved",
self.output.getvalue())
@inlineCallbacks
def test_relation_resolved(self):
"""
'juju resolved <unit> <relation>' will schedule the broken unit
relations for being resolved.
"""
service_relation = yield self.get_named_service_relation(
self.service1, "server")
yield self.setup_unit_relations(
service_relation,
(self.service_unit1, "down"),
(self.service_unit2, "up"))
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("start_error")
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
self.assertEqual(
(yield self.service_unit1.get_relation_resolved()), None)
main(["resolved", "--retry", "mysql/0",
service_relation.relation_name])
yield finished
self.assertEqual(
(yield self.service_unit1.get_relation_resolved()),
{service_relation.internal_relation_id: RETRY_HOOKS})
self.assertEqual(
(yield self.service_unit2.get_relation_resolved()),
None)
self.assertIn(
"Marked unit 'mysql/0' relation 'server' as resolved",
self.output.getvalue())
@inlineCallbacks
def test_resolved_relation_some_already_resolved(self):
"""
'juju resolved --retry <unit> <relation>' will mark as resolved
all down units that are not already marked resolved.
"""
service2 = yield self.service_state_manager.get_service_state(
"wordpress")
service_unit1 = yield service2.add_unit_state()
service_relation = yield self.get_named_service_relation(
service2, "db")
yield self.setup_unit_relations(
service_relation, (service_unit1, "down"))
service_relation2 = yield self.get_named_service_relation(
service2, "cache")
yield self.setup_unit_relations(
service_relation2, (service_unit1, "down"))
yield service_unit1.set_relation_resolved(
{service_relation.internal_relation_id: NO_HOOKS})
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "--retry", "wordpress/0", "cache"])
yield finished
self.assertEqual(
(yield service_unit1.get_relation_resolved()),
{service_relation.internal_relation_id: NO_HOOKS,
service_relation2.internal_relation_id: RETRY_HOOKS})
self.assertIn(
"Marked unit 'wordpress/0' relation 'cache' as resolved",
self.output.getvalue())
@inlineCallbacks
def test_resolved_relation_some_already_resolved_conflict(self):
"""
'juju resolved <unit> <relation>' will not overwrite relations
that are already marked as resolved.
"""
service2 = yield self.service_state_manager.get_service_state(
"wordpress")
service_unit1 = yield service2.add_unit_state()
service_relation = yield self.get_named_service_relation(
service2, "db")
yield self.setup_unit_relations(
service_relation, (service_unit1, "down"))
yield service_unit1.set_relation_resolved(
{service_relation.internal_relation_id: NO_HOOKS})
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "--retry", "wordpress/0", "db"])
yield finished
self.assertEqual(
(yield service_unit1.get_relation_resolved()),
{service_relation.internal_relation_id: NO_HOOKS})
self.assertIn(
"Service unit 'wordpress/0' already has relations marked as resol",
self.output.getvalue())
@inlineCallbacks
def test_resolved_unknown_service(self):
"""
'juju resolved <unit>' will report if a service is invalid.
"""
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "zebra/0"])
yield finished
self.assertIn("Service 'zebra' was not found", self.output.getvalue())
@inlineCallbacks
def test_resolved_unknown_unit(self):
"""
'juju resolved <unit>' will report if a unit is invalid.
"""
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "mysql/5"])
yield finished
self.assertIn(
"Service unit 'mysql/5' was not found", self.output.getvalue())
@inlineCallbacks
def test_resolved_unknown_unit_relation(self):
"""
'juju resolved <unit> <relation>' will report if a relation is invalid.
"""
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
self.assertEqual(
(yield self.service_unit1.get_resolved()), None)
main(["resolved", "mysql/0", "magic"])
yield finished
self.assertIn("Relation not found", self.output.getvalue())
@inlineCallbacks
def test_resolved_already_running(self):
"""
'juju resolved <unit>' will report if the unit is already running.
"""
# Just verify we don't accidentally mark up another unit of the service
unit2_workflow = UnitWorkflowState(
self.client, self.service_unit2, None, self.makeDir())
with (yield unit2_workflow.lock()):
yield unit2_workflow.set_state("start_error")
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "mysql/0"])
yield finished
self.assertEqual(
(yield self.service_unit2.get_resolved()), None)
self.assertEqual(
(yield self.service_unit1.get_resolved()), None)
self.assertNotIn(
"Unit 'mysql/0' already running: started",
self.output.getvalue())
@inlineCallbacks
def test_resolved_already_resolved(self):
"""
'juju resolved ' will report if
the unit is already resolved.
"""
# Mark the unit as resolved and as in an error state.
yield self.service_unit1.set_resolved(RETRY_HOOKS)
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("start_error")
unit2_workflow = UnitWorkflowState(
self.client, self.service_unit2, None, self.makeDir())
with (yield unit2_workflow.lock()):
yield unit2_workflow.set_state("start_error")
self.assertEqual(
(yield self.service_unit2.get_resolved()), None)
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "mysql/0"])
yield finished
self.assertEqual(
(yield self.service_unit1.get_resolved()),
{"retry": RETRY_HOOKS})
self.assertNotIn(
"Marked unit 'mysql/0' as resolved",
self.output.getvalue())
self.assertIn(
"Service unit 'mysql/0' is already marked as resolved.",
self.output.getvalue())
@inlineCallbacks
def test_resolved_relation_already_running(self):
"""
'juju resolved ' will report
if the relation is already running.
"""
service2 = yield self.service_state_manager.get_service_state(
"wordpress")
service_unit1 = yield service2.add_unit_state()
service_relation = yield self.get_named_service_relation(
service2, "db")
yield self.setup_unit_relations(
service_relation, (service_unit1, "up"))
self.setup_exit(0)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["resolved", "wordpress/0", "db"])
yield finished
self.assertIn("Matched relations are all running",
self.output.getvalue())
self.assertEqual(
(yield service_unit1.get_relation_resolved()),
None)
juju-0.7.orig/juju/control/tests/test_scp.py
import logging
import os
from twisted.internet.defer import inlineCallbacks
from juju.environment.environment import Environment
from juju.state.tests.test_service import ServiceStateManagerTestBase
from juju.lib.mocker import ARGS, KWARGS
from juju.control import main
from .common import ControlToolTest
class SCPTest(ServiceStateManagerTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(SCPTest, self).setUp()
self.setup_exit(0)
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
# Setup a machine in the provider
self.provider_machine = (yield self.provider.start_machine(
{"machine-id": 0, "dns-name": "antigravity.example.com"}))[0]
# Setup the zk tree with a service, unit, and machine.
self.service = yield self.add_service_from_charm("mysql")
self.unit = yield self.service.add_unit_state()
yield self.unit.set_public_address(
"%s.example.com" % self.unit.unit_name.replace("/", "-"))
self.machine = yield self.add_machine_state()
yield self.machine.set_instance_id(0)
yield self.unit.assign_to_machine(self.machine)
# capture the output.
self.output = self.capture_logging(
"juju.control.cli", level=logging.INFO)
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_scp_unit_name(self):
"""Verify scp command is invoked against the host for a unit name."""
# Verify expected call against scp
mock_exec = self.mocker.replace(os.execvp)
mock_exec("scp", [
"scp", "ubuntu@mysql-0.example.com:/foo/*", "10.1.2.3:."])
# But no other calls
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.connect_agent()
main(["scp", "mysql/0:/foo/*", "10.1.2.3:."])
yield finished
self.assertEquals(calls, [])
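The "but no other calls" blocks above pair one explicit `os.execvp` expectation with a wildcard `mock_exec(ARGS, KWARGS)` stub that records any unmatched invocation, so the test can assert the recording stayed empty. A stdlib-only sketch of that idiom (`make_recording_stub` is an illustrative name, not part of juju or mocker):

```python
# Minimal sketch of the "track unwanted calls" idiom used in these
# tests: the expected invocation is checked explicitly, while a
# catch-all stub records anything else for a final emptiness assert.
def make_recording_stub(expected):
    calls = []  # unexpected (args, kwargs) pairs land here

    def fake_execvp(*args, **kwargs):
        if args != expected:
            calls.append((args, kwargs))

    return fake_execvp, calls

expected = ("scp", ["scp", "ubuntu@mysql-0.example.com:/foo/*", "10.1.2.3:."])
stub, calls = make_recording_stub(expected)
stub(*expected)         # the expected call is not recorded
assert calls == []      # mirrors self.assertEquals(calls, [])
stub("ssh", ["ssh"])    # anything else is captured
assert len(calls) == 1
```

The same pattern underlies each `calls = []` / `self.assertEquals(calls, [])` pair in this file.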
@inlineCallbacks
def test_scp_machine_id(self):
"""Verify scp command is invoked against the host for a machine ID."""
# We need to do this because separate instances of DummyProvider don't
# share instance state.
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
# Verify expected call against scp
mock_exec = self.mocker.replace(os.execvp)
mock_exec(
"scp",
["scp", "ubuntu@antigravity.example.com:foo/*", "10.1.2.3:."])
# But no other calls
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.connect_agent()
main(["scp", "0:foo/*", "10.1.2.3:."])
yield finished
self.assertEquals(calls, [])
@inlineCallbacks
def test_passthrough_args(self):
"""Verify that args are passed through to the underlying scp command.
For example, something like the following command should be valid::
$ juju scp -o "ConnectTimeout 60" foo mysql/0:/foo/bar
"""
# Verify expected call against scp
mock_exec = self.mocker.replace(os.execvp)
mock_exec("scp", [
"scp", "-r", "-o", "ConnectTimeout 60",
"foo", "ubuntu@mysql-0.example.com:/foo/bar"])
# But no other calls
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["scp", "-r", "-o", "ConnectTimeout 60",
"foo", "mysql/0:/foo/bar"])
yield finished
self.assertEquals(calls, [])
class ParseErrorsTest(ServiceStateManagerTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ParseErrorsTest, self).setUp()
self.stderr = self.capture_stream("stderr")
def test_passthrough_args_parse_error(self):
"""Verify that bad passthrough args will get an argparse error."""
e = self.assertRaises(
SystemExit, main, ["scp", "-P", "mysql/0"])
self.assertEqual(e.code, 2)
self.assertIn("juju scp: error: too few arguments",
self.stderr.getvalue())
juju-0.7.orig/juju/control/tests/test_ssh.py
import logging
import os
from twisted.internet.defer import inlineCallbacks, succeed, Deferred
from juju.environment.environment import Environment
from juju.charm.tests.test_repository import RepositoryTestBase
from juju.state.machine import MachineState
from juju.state.service import ServiceUnitState
from juju.state.tests.test_service import ServiceStateManagerTestBase
from juju.lib.mocker import ARGS, KWARGS
from juju.control import main
from .common import ControlToolTest
class ControlShellTest(
ServiceStateManagerTestBase, ControlToolTest, RepositoryTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlShellTest, self).setUp()
self.setup_exit(0)
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
# Setup a machine in the provider
self.provider_machine = (yield self.provider.start_machine(
{"machine-id": 0, "dns-name": "antigravity.example.com"}))[0]
# Setup the zk tree with a service, unit, and machine.
self.service = yield self.add_service_from_charm("mysql")
self.unit = yield self.service.add_unit_state()
yield self.unit.set_public_address(
"%s.example.com" % self.unit.unit_name.replace("/", "-"))
self.machine = yield self.add_machine_state()
yield self.machine.set_instance_id(0)
yield self.unit.assign_to_machine(self.machine)
# capture the output.
self.output = self.capture_logging(
"juju.control.cli", level=logging.INFO)
@inlineCallbacks
def test_shell_with_unit(self):
"""
'juju ssh mysql/0' will execute ssh against the machine
hosting the unit.
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@mysql-0.example.com"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.connect_agent()
main(["ssh", self.unit.unit_name])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Connecting to unit mysql/0 at mysql-0.example.com",
self.output.getvalue())
@inlineCallbacks
def test_passthrough_args(self):
"""Verify that args are passed through to the underlying ssh command.
For example, something like the following command should be valid::
$ juju ssh -L8080:localhost:8080 -o "ConnectTimeout 60" mysql/0 ls /
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"-L8080:localhost:8080", "-o", "ConnectTimeout 60",
"ubuntu@mysql-0.example.com", "ls *"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.connect_agent()
main(["ssh", "-L8080:localhost:8080", "-o", "ConnectTimeout 60",
self.unit.unit_name, "ls *"])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Connecting to unit mysql/0 at mysql-0.example.com",
self.output.getvalue())
@inlineCallbacks
def test_shell_with_unit_and_unconnected_unit_agent(self):
"""If a unit doesn't have a connected unit agent,
the ssh command will wait till one exists before connecting.
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_unit = self.mocker.patch(ServiceUnitState)
mock_unit.watch_agent()
self.mocker.result((succeed(False), succeed(True)))
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@mysql-0.example.com"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.connect_agent()
main(["ssh", "mysql/0"])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Waiting for unit to come up", self.output.getvalue())
@inlineCallbacks
def test_shell_with_machine_and_unconnected_machine_agent(self):
"""If a machine doesn't have a connected machine agent,
the ssh command will wait till one exists before connecting.
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_machine = self.mocker.patch(MachineState)
mock_machine.watch_agent()
self.mocker.result((succeed(False), succeed(True)))
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@antigravity.example.com"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.machine.connect_agent()
main(["ssh", "0"])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Waiting for machine to come up", self.output.getvalue())
@inlineCallbacks
def test_shell_with_unit_and_unset_dns(self):
"""If a machine agent isn't connects, its also possible that
the provider machine may not yet have a dns name, if the
instance hasn't started. In that case after the machine agent
has connected, verify the provider dns name is valid."""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_unit = self.mocker.patch(ServiceUnitState)
mock_unit.watch_agent()
address_set = Deferred()
@inlineCallbacks
def set_unit_dns():
yield self.unit.set_public_address("mysql-0.example.com")
address_set.callback(True)
self.mocker.call(set_unit_dns)
self.mocker.result((succeed(False), succeed(False)))
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@mysql-0.example.com"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.unit.set_public_address(None)
main(["ssh", "mysql/0"])
# Wait till we've set the unit address before connecting the agent.
yield address_set
yield self.unit.connect_agent()
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Waiting for unit to come up", self.output.getvalue())
@inlineCallbacks
def test_shell_with_machine_and_unset_dns(self):
"""If a machine agent isn't connects, its also possible that
the provider machine may not yet have a dns name, if the
instance hasn't started. In that case after the machine agent
has connected, verify the provider dns name is valid."""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_machine = self.mocker.patch(MachineState)
mock_machine.watch_agent()
def set_machine_dns():
self.provider_machine.dns_name = "antigravity.example.com"
self.mocker.call(set_machine_dns)
self.mocker.result((succeed(False), succeed(False)))
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@antigravity.example.com"])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
self.provider_machine.dns_name = None
yield self.machine.connect_agent()
main(["ssh", "0"])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Waiting for machine to come up", self.output.getvalue())
@inlineCallbacks
def test_shell_with_machine_id(self):
"""
'juju ssh ' will execute ssh against the machine
with the corresponding id.
"""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
mock_exec = self.mocker.replace(os.execvp)
mock_exec("ssh", [
"ssh",
"-o",
"ControlPath " + self.tmp_home + "/.juju/ssh/master-%r@%h:%p",
"-o", "ControlMaster no",
"ubuntu@antigravity.example.com",
])
# Track unwanted calls:
calls = []
mock_exec(ARGS, KWARGS)
self.mocker.count(0, None)
self.mocker.call(lambda *args, **kwargs: calls.append((args, kwargs)))
finished = self.setup_cli_reactor()
self.mocker.replay()
yield self.machine.connect_agent()
main(["ssh", "0"])
yield finished
self.assertEquals(calls, [])
self.assertIn(
"Connecting to machine 0 at antigravity.example.com",
self.output.getvalue())
@inlineCallbacks
def test_shell_with_unassigned_unit(self):
"""If the service unit is not assigned, attempting to
connect, raises an error."""
finished = self.setup_cli_reactor()
self.mocker.replay()
unit_state = yield self.service.add_unit_state()
main(["ssh", unit_state.unit_name])
yield finished
self.assertIn(
"Service unit 'mysql/1' is not assigned to a machine",
self.output.getvalue())
@inlineCallbacks
def test_shell_with_invalid_machine(self):
"""If the machine does not exist, attempting to
connect, raises an error."""
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
finished = self.setup_cli_reactor()
self.mocker.replay()
main(["ssh", "1"])
yield finished
self.assertIn("Machine 1 was not found", self.output.getvalue())
class ParseErrorsTest(ServiceStateManagerTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ParseErrorsTest, self).setUp()
self.stderr = self.capture_stream("stderr")
def test_passthrough_args_parse_error(self):
"""Verify that bad passthrough args will get an argparse error."""
e = self.assertRaises(
SystemExit, main, ["ssh", "-L", "mysql/0"])
self.assertEqual(e.code, 2)
self.assertIn("juju ssh: error: too few arguments",
self.stderr.getvalue())
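Every ssh invocation asserted in the tests above shares the same option prefix: a per-connection `ControlPath` socket template under `~/.juju/ssh` with `ControlMaster` disabled, followed by any passthrough flags, the `ubuntu@` target, and an optional remote command. A small helper capturing that argv shape (`build_ssh_args` is hypothetical; it mirrors the expected lists, not juju's actual implementation):

```python
def build_ssh_args(juju_home, target_host, passthrough=(), command=()):
    # ControlPath names a per-connection socket template (%r user,
    # %h host, %p port); "ControlMaster no" keeps the CLI from
    # becoming the shared master itself.
    return (["ssh",
             "-o", "ControlPath " + juju_home + "/.juju/ssh/master-%r@%h:%p",
             "-o", "ControlMaster no"]
            + list(passthrough)
            + ["ubuntu@" + target_host]
            + list(command))

args = build_ssh_args("/home/test", "mysql-0.example.com",
                      ["-L8080:localhost:8080", "-o", "ConnectTimeout 60"],
                      ["ls *"])
assert args[0] == "ssh"
assert "ubuntu@mysql-0.example.com" in args
assert args[-1] == "ls *"
```

Keeping the prefix in one place is what lets each test assert the full argv list verbatim.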
juju-0.7.orig/juju/control/tests/test_status.py
from fnmatch import fnmatch
import inspect
import json
import logging
import os
from StringIO import StringIO
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.agents.base import TwistedOptionNamespace
from juju.agents.machine import MachineAgent
from juju.environment.environment import Environment
from juju.control import status
from juju.control import tests
from juju.lib import serializer
from juju.state.endpoint import RelationEndpoint
from juju.state.environment import GlobalSettingsStateManager
from juju.state.tests.test_service import ServiceStateManagerTestBase
from juju.tests.common import get_test_zookeeper_address
from juju.unit.workflow import ZookeeperWorkflowState
from .common import ControlToolTest
tests_path = os.path.dirname(inspect.getabsfile(tests))
sample_path = os.path.join(tests_path, "sample_cluster.yaml")
sample_cluster = serializer.load(open(sample_path, "r"))
def dump_stringio(stringio, filename):
"""Debug utility to dump a StringIO to a filename."""
fp = open(filename, "w")
fp.write(stringio.getvalue())
fp.close()
@inlineCallbacks
def collect(scope, provider, client, log):
"""Collect and return status info as dict"""
# provided for backwards compatibility with
# original API
# used only in testing
s = status.StatusCommand(client, provider, log)
state = yield s(scope)
returnValue(state)
class StatusTestBase(ServiceStateManagerTestBase, ControlToolTest):
# Status tests set up a large tree every time; allow extra time for it.
# TODO: create minimal trees needed per test.
timeout = 10
@inlineCallbacks
def setUp(self):
yield super(StatusTestBase, self).setUp()
settings = GlobalSettingsStateManager(self.client)
yield settings.set_provider_type("dummy")
self.log = self.capture_logging()
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
self.machine_count = 0
self.output = StringIO()
@inlineCallbacks
def set_unit_state(self, unit_state, state, port_protos=()):
yield unit_state.set_public_address(
"%s.example.com" % unit_state.unit_name.replace("/", "-"))
workflow_client = ZookeeperWorkflowState(self.client, unit_state)
with (yield workflow_client.lock()):
yield workflow_client.set_state(state)
for port_proto in port_protos:
yield unit_state.open_port(*port_proto)
@inlineCallbacks
def add_relation_unit_states(self, relation_state, unit_states, states):
for unit_state, state in zip(unit_states, states):
relation_unit_state = yield relation_state.add_unit_state(
unit_state)
workflow_client = ZookeeperWorkflowState(
self.client, relation_unit_state)
with (yield workflow_client.lock()):
yield workflow_client.set_state(state)
@inlineCallbacks
def add_relation_with_relation_units(
self,
source_endpoint, source_units, source_states,
dest_endpoint, dest_units, dest_states):
relation_state, service_relation_states = \
yield self.relation_state_manager.add_relation_state(
*[source_endpoint, dest_endpoint])
source_relation_state, dest_relation_state = service_relation_states
yield self.add_relation_unit_states(
source_relation_state, source_units, source_states)
yield self.add_relation_unit_states(
dest_relation_state, dest_units, dest_states)
@inlineCallbacks
def add_unit(self, service, machine, with_agent=lambda _: True,
units=None, container=None):
unit = yield service.add_unit_state(container=container)
self.assertTrue(machine or container)
if machine is not None:
yield unit.assign_to_machine(machine)
if with_agent(unit.unit_name):
yield unit.connect_agent()
if units is not None:
units.setdefault(service.service_name, []).append(unit)
returnValue(unit)
@inlineCallbacks
def build_topology(self, base=None, skip_unit_agents=()):
"""Build a simulated topology with a default machine configuration.
This method returns a dict that can be used to get handles to
the constructed objects.
"""
state = {}
# build out the topology using the state managers
m1 = yield self.add_machine_state()
m2 = yield self.add_machine_state()
m3 = yield self.add_machine_state()
m4 = yield self.add_machine_state()
m5 = yield self.add_machine_state()
m6 = yield self.add_machine_state()
m7 = yield self.add_machine_state()
# inform the provider about the machine
yield self.provider.start_machine({"machine-id": 0,
"dns-name": "steamcloud-1.com"})
yield self.provider.start_machine({"machine-id": 1,
"dns-name": "steamcloud-2.com"})
yield self.provider.start_machine({"machine-id": 2,
"dns-name": "steamcloud-3.com"})
yield self.provider.start_machine({"machine-id": 3,
"dns-name": "steamcloud-4.com"})
yield self.provider.start_machine({"machine-id": 4,
"dns-name": "steamcloud-5.com"})
yield self.provider.start_machine({"machine-id": 5,
"dns-name": "steamcloud-6.com"})
yield self.provider.start_machine({"machine-id": 6,
"dns-name": "steamcloud-7.com"})
yield m1.set_instance_id(0)
yield m2.set_instance_id(1)
yield m3.set_instance_id(2)
yield m4.set_instance_id(3)
yield m5.set_instance_id(4)
yield m6.set_instance_id(5)
yield m7.set_instance_id(6)
state["machines"] = [m1, m2, m3, m4, m5, m6, m7]
# "Deploy" services
wordpress = yield self.add_service_from_charm("wordpress")
mysql = yield self.add_service_from_charm("mysql")
yield mysql.set_exposed_flag() # but w/ no open ports
varnish = yield self.add_service_from_charm("varnish")
yield varnish.set_exposed_flag()
# w/o additional metadata
memcache = yield self.add_service("memcache")
state["services"] = dict(wordpress=wordpress, mysql=mysql,
varnish=varnish, memcache=memcache)
def with_unit(name):
for pattern in skip_unit_agents:
if fnmatch(name, pattern):
return False
return True
units = {}
wpu = yield self.add_unit(wordpress, m1, with_unit, units)
myu1 = yield self.add_unit(mysql, m2, with_unit, units)
myu2 = yield self.add_unit(mysql, m3, with_unit, units)
vu1 = yield self.add_unit(varnish, m4, with_unit, units)
vu2 = yield self.add_unit(varnish, m5, with_unit, units)
mc1 = yield self.add_unit(memcache, m6, with_unit, units)
mc2 = yield self.add_unit(memcache, m7, with_unit, units)
state["units"] = units
# add unit states to services and assign to machines
# Set the lifecycle state and open ports, if any, for each unit state.
yield self.set_unit_state(wpu, "started", [(80, "tcp"), (443, "tcp")])
yield self.set_unit_state(myu1, "started")
yield self.set_unit_state(myu2, "stop_error")
yield self.set_unit_state(vu1, "started", [(80, "tcp")])
yield self.set_unit_state(vu2, "started", [(80, "tcp")])
yield self.set_unit_state(mc1, None)
yield self.set_unit_state(mc2, "installed")
# Wordpress integrates with each of the following
# services. Each relation endpoint is used to define the
# specific relation to be established.
mysql_ep = RelationEndpoint(
"mysql", "client-server", "db", "server")
memcache_ep = RelationEndpoint(
"memcache", "client-server", "cache", "server")
varnish_ep = RelationEndpoint(
"varnish", "client-server", "proxy", "client")
wordpress_db_ep = RelationEndpoint(
"wordpress", "client-server", "db", "client")
wordpress_cache_ep = RelationEndpoint(
"wordpress", "client-server", "cache", "client")
wordpress_proxy_ep = RelationEndpoint(
"wordpress", "client-server", "proxy", "server")
# Create relation service units for each of these relations
yield self.add_relation_with_relation_units(
mysql_ep, [myu1, myu2], ["up", "departed"],
wordpress_db_ep, [wpu], ["up"])
yield self.add_relation_with_relation_units(
memcache_ep, [mc1, mc2], ["up", "down"],
wordpress_cache_ep, [wpu], ["up"])
yield self.add_relation_with_relation_units(
varnish_ep, [vu1, vu2], ["up", "up"],
wordpress_proxy_ep, [wpu], ["up"])
state["relations"] = dict(
wordpress=[wpu],
mysql=[myu1, myu2],
varnish=[vu1, vu2],
memcache=[mc1, mc2]
)
returnValue(state)
def mock_environment(self):
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
class StatusTest(StatusTestBase):
@inlineCallbacks
def add_provider_machine(self):
m = yield self.add_machine_state()
yield self.provider.start_machine(
{"machine-id": self.machine_count,
"dns-name": "steamcloud-%s.com" % self.machine_count})
yield m.set_instance_id(self.machine_count)
self.machine_count += 1
returnValue(m)
@inlineCallbacks
def test_status_provider_machine(self):
"""Verify only one call to the provider for n machines.
"""
yield self.add_provider_machine()
yield self.add_provider_machine()
mock_provider = self.mocker.patch(self.provider)
mock_provider.get_machines()
self.mocker.count(1)
self.mocker.passthrough()
self.mocker.replay()
state = yield collect(
None, mock_provider, self.client, None)
self.assertEqual(
state,
{'services': {},
'machines': {
0: {'agent-state':
'not-started',
'instance-state':
'unknown',
'instance-id': 0,
'dns-name': 'steamcloud-0.com'},
1: {'agent-state': 'not-started',
'instance-state': 'unknown',
'instance-id': 1,
'dns-name': 'steamcloud-1.com'}}})
@inlineCallbacks
def test_peer_relation(self):
"""Verify status works with peer relations.
"""
m1 = yield self.add_provider_machine()
m2 = yield self.add_provider_machine()
riak = yield self.add_service_from_charm("riak")
riak_u1 = yield self.add_unit(riak, m1)
riak_u2 = yield self.add_unit(riak, m2, with_agent=lambda _: False)
yield self.set_unit_state(riak_u1, "started")
yield self.set_unit_state(riak_u2, "started")
_, (peer_rel,) = yield self.relation_state_manager.add_relation_state(
RelationEndpoint("riak", "peer", "ring", "peer"))
riak_u1_relation = yield peer_rel.add_unit_state(riak_u1)
riak_u1_workflow = ZookeeperWorkflowState(
self.client, riak_u1_relation)
with (yield riak_u1_workflow.lock()):
yield riak_u1_workflow.set_state("up")
yield peer_rel.add_unit_state(riak_u2)
state = yield collect(
["riak"], self.provider, self.client, None)
self.assertEqual(
state["services"]["riak"],
{"charm": "local:series/riak-7",
"relations": {"ring": ["riak"]},
"units": {"riak/0": {"machine": 0,
"public-address": "riak-0.example.com",
"agent-state": "started"},
"riak/1": {"machine": 1,
"public-address": "riak-1.example.com",
"agent-state": "down"}}})
@inlineCallbacks
def test_service_with_multiple_instances_of_named_relation(self):
m1 = yield self.add_provider_machine()
m2 = yield self.add_provider_machine()
m3 = yield self.add_provider_machine()
mysql = yield self.add_service_from_charm("mysql")
mysql_ep = RelationEndpoint(
"mysql", "client-server", "db", "server")
mysql_1 = yield self.add_unit(mysql, m1)
myblog = yield self.add_service_from_charm(
"myblog", charm_name="wordpress")
myblog_db_ep = RelationEndpoint(
"myblog", "client-server", "db", "client")
myblog_1 = yield self.add_unit(myblog, m2)
teamblog = yield self.add_service_from_charm(
"teamblog", charm_id=(yield myblog.get_charm_id()))
teamblog_db_ep = RelationEndpoint(
"teamblog", "client-server", "db", "client")
teamblog_1 = yield self.add_unit(teamblog, m3)
yield self.add_relation_with_relation_units(
mysql_ep, [mysql_1], ["up"],
myblog_db_ep, [myblog_1], ["up"])
yield self.add_relation_with_relation_units(
mysql_ep, [mysql_1], ["up"],
teamblog_db_ep, [teamblog_1], ["up"])
state = yield collect(None, self.provider, self.client, None)
self.assertEqual(
state["services"]["mysql"]["units"]["mysql/0"],
{"agent-state": "pending",
"machine": 0,
"public-address": None})
self.assertEqual(
state["services"]["mysql"]["relations"],
{"db": ["myblog", "teamblog"]})
@inlineCallbacks
def test_service_with_multiple_rels_to_same_endpoint(self):
m1 = yield self.add_provider_machine()
m2 = yield self.add_provider_machine()
mysql = yield self.add_service_from_charm("mysql")
mysql_ep = RelationEndpoint(
"mysql", "client-server", "db", "server")
mysql_1 = yield self.add_unit(mysql, m1)
myblog = yield self.add_service_from_charm(
"myblog", charm_name="funkyblog")
write_db_ep = RelationEndpoint(
"myblog", "client-server", "write-db", "client")
read_db_ep = RelationEndpoint(
"myblog", "client-server", "read-db", "client")
myblog_1 = yield self.add_unit(myblog, m2)
yield self.add_relation_with_relation_units(
mysql_ep, [mysql_1], ["down"],
write_db_ep, [myblog_1], ["up"])
yield self.add_relation_with_relation_units(
mysql_ep, [mysql_1], ["down"],
read_db_ep, [myblog_1], ["up"])
state = yield collect(None, self.provider, self.client, None)
# Even though there are two relations to this service, we
# collapse them to one; the additional displays are redundant.
self.assertEqual(
state["services"]["mysql"]["relations"],
{"db": ["myblog"]})
self.assertEqual(
state["services"]["mysql"]["units"]["mysql/0"],
{"agent-state": "pending",
"machine": 0,
"relation-errors": {"db": ["myblog"]},
"public-address": None})
@inlineCallbacks
def test_collect(self):
yield self.build_topology(skip_unit_agents=("varnish/1",))
agent = MachineAgent()
options = TwistedOptionNamespace()
options["juju_directory"] = self.makeDir()
options["zookeeper_servers"] = get_test_zookeeper_address()
options["session_file"] = self.makeFile()
options["machine_id"] = "0"
agent.configure(options)
agent.set_watch_enabled(False)
agent.client = self.client
yield agent.start()
# collect everything
state = yield collect(None, self.provider, self.client, None)
services = state["services"]
self.assertIn("wordpress", services)
self.assertIn("varnish", services)
self.assertIn("mysql", services)
# and verify the specifics of a single service
self.assertTrue("mysql" in services)
units = list(services["mysql"]["units"])
self.assertEqual(len(units), 2)
self.assertEqual(state["machines"][0],
{"instance-id": 0,
"instance-state": "unknown",
"dns-name": "steamcloud-1.com",
"agent-state": "running"})
self.assertEqual(services["mysql"]["relations"],
{"db": ["wordpress"]})
self.assertEqual(services["wordpress"]["relations"],
{"cache": ["memcache"],
"db": ["mysql"],
"proxy": ["varnish"]})
self.assertEqual(
services["varnish"],
{"units":
{"varnish/1": {
"machine": 4,
"agent-state": "down",
"open-ports": ["80/tcp"],
"public-address": "varnish-1.example.com"},
"varnish/0": {
"machine": 3,
"agent-state": "started",
"public-address": "varnish-0.example.com",
"open-ports": ["80/tcp"]}},
"exposed": True,
"charm": "local:series/varnish-1",
"relations": {"proxy": ["wordpress"]}})
self.assertEqual(
services["wordpress"],
{"charm": "local:series/wordpress-3",
"exposed": False,
"relations": {
"cache": ["memcache"],
"db": ["mysql"],
"proxy": ["varnish"]},
"units": {
"wordpress/0": {
"machine": 0,
"public-address": "wordpress-0.example.com",
"agent-state": "started"}}})
self.assertEqual(
services["memcache"],
{"charm": "local:series/dummy-1",
"relations": {"cache": ["wordpress"]},
"units": {
"memcache/0": {
"machine": 5,
"public-address": "memcache-0.example.com",
"agent-state": "pending"},
"memcache/1": {
"machine": 6,
"public-address": "memcache-1.example.com",
"relation-errors": {
"cache": ["wordpress"]},
"agent-state": "installed"}}}
)
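The `open-ports` entries asserted above render each `(port, protocol)` pair opened via `open_port` as a `"80/tcp"`-style string. A one-line sketch of that formatting (assumed from the expected output, not quoted from juju's code):

```python
def format_ports(port_protos):
    # (80, "tcp") -> "80/tcp", preserving the order ports were opened
    return ["%d/%s" % (port, proto) for port, proto in port_protos]

assert format_ports([(80, "tcp"), (443, "tcp")]) == ["80/tcp", "443/tcp"]
```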
@inlineCallbacks
def test_collect_filtering(self):
yield self.build_topology()
# collect by service name
state = yield collect(
["wordpress"], self.provider, self.client, None)
# Validate that only the expected service is present
# in the state
self.assertEqual(state["machines"].keys(), [0])
self.assertEqual(state["services"].keys(), ["wordpress"])
# collect by unit name
state = yield collect(["*/0"], self.provider, self.client, None)
self.assertEqual(set(state["machines"].keys()), set([0, 1, 3, 5]))
self.assertEqual(set(state["services"].keys()),
set(["memcache", "varnish", "mysql", "wordpress"]))
# collect by unit name
state = yield collect(["*/1"], self.provider, self.client, None)
self.assertEqual(set(state["machines"].keys()), set([2, 4, 6]))
# verify that only the proper units and services are present
self.assertEqual(
state["services"],
{"memcache": {
"charm": "local:series/dummy-1",
"relations": {"cache": ["wordpress"]},
"units": {
"memcache/1": {
"machine": 6,
"agent-state": "installed",
"public-address": "memcache-1.example.com",
"relation-errors": {"cache": ["wordpress"]}}}},
"mysql": {
"exposed": True,
"charm": "local:series/mysql-1",
"relations": {"db": ["wordpress"]},
"units": {
"mysql/1": {
"machine": 2,
"public-address": "mysql-1.example.com",
"open-ports": [],
"agent-state": "stop-error",
"relation-errors": {"db": ["wordpress"]}}}},
"varnish": {
"exposed": True,
"charm": "local:series/varnish-1",
"relations": {"proxy": ["wordpress"]},
"units": {
"varnish/1": {
"machine": 4,
"public-address": "varnish-1.example.com",
"open-ports": ["80/tcp"],
"agent-state": "started",
}}}})
# filter a missing service
state = yield collect(
["cluehammer"], self.provider, self.client, None)
self.assertEqual(set(state["machines"].keys()), set([]))
self.assertEqual(set(state["services"].keys()), set([]))
# filter a missing unit
state = yield collect(["*/7"], self.provider, self.client, None)
self.assertEqual(set(state["machines"].keys()), set([]))
self.assertEqual(set(state["services"].keys()), set([]))
@inlineCallbacks
def test_collect_with_unassigned_machines(self):
yield self.build_topology()
# get a service's units and unassign one of them
wordpress = yield self.service_state_manager.get_service_state(
"wordpress")
units = yield wordpress.get_all_unit_states()
# There is only a single wordpress machine in the topology.
unit = units[0]
machine_id = yield unit.get_assigned_machine_id()
yield unit.unassign_from_machine()
yield unit.set_public_address(None)
# test that the machine is in state information w/o assignment
state = yield collect(None, self.provider, self.client, None)
# verify that the unassigned machine appears in the state
self.assertEqual(state["machines"][machine_id],
{"dns-name": "steamcloud-1.com",
"instance-id": 0,
"instance-state": "unknown",
"agent-state": "not-started"})
# verify that we have a record of the unassigned service;
# but note that unassigning this machine without removing the
# service unit and relation units now produces other dangling
# records in the topology
self.assertEqual(
state["services"]["wordpress"]["units"],
{"wordpress/0":
{"machine": None,
"public-address": None,
"agent-state": "started"}})
@inlineCallbacks
def test_collect_with_removed_unit(self):
yield self.build_topology()
# get a service's units and unassign one of them
wordpress = yield self.service_state_manager.get_service_state(
"wordpress")
units = yield wordpress.get_all_unit_states()
# There is only a single wordpress machine in the topology.
unit = units[0]
machine_id = yield unit.get_assigned_machine_id()
yield wordpress.remove_unit_state(unit)
# test that wordpress has no assigned service units
state = yield collect(None, self.provider, self.client, None)
self.assertEqual(
state["services"]["wordpress"],
{"charm": "local:series/wordpress-3",
"relations": {"cache": ["memcache"],
"db": ["mysql"],
"proxy": ["varnish"]},
"units": {}})
# but its machine is still available as reported by status
seen_machines = set()
for service, service_data in state["services"].iteritems():
for unit, unit_data in service_data["units"].iteritems():
seen_machines.add(unit_data["machine"])
self.assertIn(machine_id, state["machines"])
self.assertNotIn(machine_id, seen_machines)
@inlineCallbacks
def test_provider_pending_machine_state(self):
# verify that we get some error reporting if the provider
# doesn't have proper machine info
yield self.build_topology()
# add a new machine to the topology (but not the provider)
# and status it
m8 = yield self.add_machine_state()
wordpress = yield self.service_state_manager.get_service_state(
"wordpress")
wpu = yield wordpress.add_unit_state()
yield wpu.assign_to_machine(m8)
# test that we identify we don't have machine state
state = yield collect(
None, self.provider, self.client, logging.getLogger())
self.assertEqual(state["machines"][7]["instance-id"],
"pending")
@inlineCallbacks
def test_render_yaml(self):
yield self.build_topology()
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.render_yaml, self.output, None)
state = serializer.yaml_load(self.output.getvalue())
self.assertEqual(set(state["machines"].keys()),
set([0, 1, 2, 3, 4, 5, 6]))
services = state["services"]
self.assertEqual(set(services["memcache"].keys()),
set(["charm", "relations", "units"]))
self.assertEqual(set(services["mysql"].keys()),
set(["exposed", "charm", "relations", "units"]))
self.assertEqual(set(services["varnish"].keys()),
set(["exposed", "charm", "relations", "units"]))
self.assertEqual(set(services["wordpress"].keys()),
set(["charm", "exposed", "relations", "units"]))
for service in services.itervalues():
self.assertGreaterEqual( # may also include "exposed" key
set(service.keys()),
set(["units", "relations", "charm"]))
self.assertTrue(service["charm"].startswith("local:series/"))
self.assertEqual(state["machines"][0],
{"instance-id": 0,
"instance-state": "unknown",
"dns-name": "steamcloud-1.com",
"agent-state": "down"})
self.assertEqual(services["mysql"]["relations"],
{"db": ["wordpress"]})
self.assertEqual(services["mysql"]["units"]["mysql/1"]["open-ports"],
[])
self.assertEqual(services["wordpress"]["relations"],
{"cache": ["memcache"],
"db": ["mysql"],
"proxy": ["varnish"]})
@inlineCallbacks
def test_render_json(self):
yield self.build_topology()
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.render_json, self.output, None)
state = json.loads(self.output.getvalue())
self.assertEqual(set(state["machines"].keys()),
set([unicode(i) for i in [0, 1, 2, 3, 4, 5, 6]]))
services = state["services"]
self.assertEqual(set(services["memcache"].keys()),
set(["charm", "relations", "units"]))
self.assertEqual(set(services["mysql"].keys()),
set(["exposed", "charm", "relations", "units"]))
self.assertEqual(set(services["varnish"].keys()),
set(["exposed", "charm", "relations", "units"]))
self.assertEqual(set(services["wordpress"].keys()),
set(["charm", "exposed", "relations", "units"]))
for service in services.itervalues():
self.assertTrue(service["charm"].startswith("local:series/"))
self.assertEqual(state["machines"][u"0"],
{"instance-id": 0,
"instance-state": "unknown",
"dns-name": "steamcloud-1.com",
"agent-state": "down"})
self.assertEqual(services["mysql"]["relations"],
{"db": ["wordpress"]})
self.assertEqual(services["mysql"]["units"]["mysql/1"]["open-ports"],
[])
self.assertEqual(services["wordpress"]["relations"],
{"cache": ["memcache"],
"db": ["mysql"],
"proxy": ["varnish"]})
self.assertEqual(
services["varnish"],
{
"exposed": True,
"units":
{"varnish/1": {
"machine": 4,
"public-address": "varnish-1.example.com",
"open-ports": ["80/tcp"],
"agent-state": "started"},
"varnish/0": {
"machine": 3,
"public-address": "varnish-0.example.com",
"open-ports": ["80/tcp"],
"agent-state": "started"},
},
"charm": "local:series/varnish-1",
"relations": {"proxy": ["wordpress"]}})
@inlineCallbacks
def test_render_dot(self):
yield self.build_topology()
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.render_dot, self.output, None)
result = self.output.getvalue()
#dump_stringio(self.output, "/tmp/ens.dot")
# make mild assertions about the expected DOT output
# because the DOT language is simple we can test that some
# relationships are present
self.assertIn('memcache -> "memcache/1"', result)
self.assertIn('varnish -> "varnish/0"', result)
self.assertIn('varnish -> "varnish/1"', result)
# test that relationships are being rendered
self.assertIn("wordpress -> memcache", result)
self.assertIn("mysql -> wordpress", result)
# assert that properties were applied to a relationship
# self.assertIn("wordpress -> varnish [dir=none, "
# "label=\"varnish:wordpress/proxy\"]",
# result)
# verify that the renderer picked up the DNS name of the
# machines (and they are associated with the proper machine)
self.assertIn(
'"mysql/0" [color="#DD4814", fontcolor="#ffffff", '
"shape=box, style=filled, label=<mysql/0<br/>mysql-0."
"example.com>]",
result)
self.assertIn(
'"mysql/1" [color="#DD4814", fontcolor="#ffffff", shape=box, '
'style=filled, label=<mysql/1<br/>mysql-1.example.com>]',
result)
# Check the charms are present in the service node.
self.assertIn(
'memcache [color="#772953", fontcolor="#ffffff", shape=component, '
'style=filled, label=<memcache<br/>local:series/dummy-1>]', result)
self.assertIn(
'varnish [color="#772953", fontcolor="#ffffff", shape=component, '
'style=filled, label=<varnish<br/>local:series/varnish-1>]', result)
self.assertIn(
'mysql [color="#772953", fontcolor="#ffffff", shape=component, '
'style=filled, label=<mysql<br/>local:series/mysql-1>]', result)
self.assertIn("local:series/dummy-1", result)
def test_render_dot_bad_clustering(self):
"""Test around Bug #792448.
Deployment producing bad status dot output, but sane normal
output.
"""
self.mocker.replay()
output = StringIO()
renderer = status.renderers["dot"]
renderer(sample_cluster, output, self.environment, format="dot")
# Verify that the invalid names were properly corrected
self.assertIn("subgraph cluster_wiki_db {",
output.getvalue())
self.assertIn('wiki_cache -> "wiki_cache/0"',
output.getvalue())
@inlineCallbacks
def test_render_svg(self):
yield self.build_topology()
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.renderers["svg"],
self.output,
None)
# look for a hint the process completed.
self.assertIn("</svg>", self.output.getvalue())
@inlineCallbacks
def test_subordinate_status_output(self):
state = yield self.build_topology()
# supplement status with additional subordinates
# add logging to mysql and wordpress
logging = yield self.add_service_from_charm("logging")
mysql_ep = RelationEndpoint("mysql", "client-server",
"juju-info", "server")
wordpress_db_ep = RelationEndpoint("wordpress", "client-server",
"juju-info", "server")
logging_ep = RelationEndpoint("logging", "client-server",
"juju-info", "client", "container")
my_log_rel, my_log_services = (
yield self.relation_state_manager.add_relation_state(
mysql_ep, logging_ep))
wp_log_rel, wp_log_services = (
yield self.relation_state_manager.add_relation_state(
wordpress_db_ep, logging_ep))
units = state["units"]
log_units = units.setdefault("logging", {})
wp1 = iter(units["wordpress"]).next()
mu1, mu2 = list(units["mysql"])
yield self.add_unit(logging, None, container=mu1, units=log_units)
yield self.add_unit(logging, None, container=wp1, units=log_units)
yield self.add_unit(logging, None, container=mu2, units=log_units)
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.render_yaml, self.output, None)
state = serializer.load(self.output.getvalue())
# verify our changes
log_state = state["services"]["logging"]
self.assertEqual(set(log_state["relations"]["juju-info"]),
set(["mysql", "wordpress"]))
self.assertEqual(set(log_state["subordinate-to"]),
set(["mysql", "wordpress"]))
wp_state = state["services"]["wordpress"]
self.assertEqual(wp_state["relations"]["juju-info"], ["logging"])
wp_subs = wp_state["units"]["wordpress/0"]["subordinates"]
logging_sub = wp_subs["logging/1"]
# this assertion verifies that we don't see keys we don't
# expect as well
self.assertEqual(logging_sub, {"agent-state": "pending"})
@inlineCallbacks
def test_subordinate_status_output_no_container(self):
state = yield self.build_topology()
# supplement status with additional subordinates
# add logging to mysql and wordpress
logging = yield self.add_service_from_charm("logging")
mysql_ep = RelationEndpoint("mysql", "client-server",
"juju-info", "server")
wordpress_db_ep = RelationEndpoint("wordpress", "client-server",
"juju-info", "server")
logging_ep = RelationEndpoint("logging", "client-server",
"juju-info", "client", "container")
my_log_rel, my_log_services = (
yield self.relation_state_manager.add_relation_state(
mysql_ep, logging_ep))
wp_log_rel, wp_log_services = (
yield self.relation_state_manager.add_relation_state(
wordpress_db_ep, logging_ep))
units = state["units"]
log_units = units.setdefault("logging", {})
wp1 = iter(units["wordpress"]).next()
mu1, mu2 = list(units["mysql"])
yield self.add_unit(logging, None, container=mu1, units=log_units)
yield self.add_unit(logging, None, container=wp1, units=log_units)
yield self.add_unit(logging, None, container=mu2, units=log_units)
# remove mysql/0
yield state["services"]["mysql"].remove_unit_state(mu1)
self.mock_environment()
self.mocker.replay()
yield status.status(self.environment, [],
status.render_yaml, self.output, None)
output = serializer.load(self.output.getvalue())
self.assertNotIn(mu1.unit_name, output["services"]["mysql"]["units"])
self.assertIn(mu2.unit_name, output["services"]["mysql"]["units"])
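The filtering behaviour exercised by `test_collect_filtering` above (service
names and unit globs such as `*/1` selecting a subset of the status dict) can
be sketched in isolation. `filter_status` is a hypothetical stand-in for
illustration only, not juju's `collect` implementation:

```python
from fnmatch import fnmatch


def filter_status(status, patterns):
    """Keep only units whose unit name matches a glob pattern, or whose
    whole service was named explicitly; drop services left with no units."""
    services = {}
    for name, data in status["services"].items():
        units = {unit: d for unit, d in data["units"].items()
                 if any(fnmatch(unit, p) or name == p for p in patterns)}
        if units:
            services[name] = dict(data, units=units)
    return {"services": services}


status = {"services": {
    "mysql": {"units": {"mysql/0": {}, "mysql/1": {}}},
    "varnish": {"units": {"varnish/0": {}}}}}

print(sorted(filter_status(status, ["*/1"])["services"]))    # ['mysql']
print(sorted(filter_status(status, ["mysql"])["services"]))  # ['mysql']
```

With `["*/1"]` only `mysql/1` survives, so varnish disappears entirely,
mirroring the empty-result cases for `*/7` and unknown service names above.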
# juju-0.7.orig/juju/control/tests/test_terminate_machine.py
import logging
from twisted.internet.defer import inlineCallbacks
from juju.control import main, terminate_machine
from juju.control.tests.common import MachineControlToolTest
from juju.errors import CannotTerminateMachine
from juju.state.errors import MachineStateInUse, MachineStateNotFound
from juju.state.environment import EnvironmentStateManager
class ControlTerminateMachineTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(ControlTerminateMachineTest, self).setUp()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_terminate_machine_method(self):
"""Verify that underlying method works as expected."""
environment = self.config.get("firstenv")
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
wordpress_service_state = \
yield self.add_service_from_charm("wordpress")
wordpress_unit_state = yield wordpress_service_state.add_unit_state()
wordpress_machine_state = yield self.add_machine_state()
yield wordpress_unit_state.assign_to_machine(wordpress_machine_state)
yield wordpress_unit_state.unassign_from_machine()
yield terminate_machine.terminate_machine(
self.config, environment, False,
logging.getLogger("juju.control.cli"), [2])
yield self.assert_machine_states([0, 1], [2])
@inlineCallbacks
def test_terminate_machine_method_root(self):
"""Verify supporting method throws `CannotTerminateMachine`."""
environment = self.config.get("firstenv")
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
ex = yield self.assertFailure(
terminate_machine.terminate_machine(
self.config, environment, False,
logging.getLogger("juju.control.cli"), [0]),
CannotTerminateMachine)
self.assertEqual(
str(ex),
"Cannot terminate machine 0: environment would be destroyed")
@inlineCallbacks
def test_terminate_machine_method_in_use(self):
"""Verify supporting method throws `MachineStateInUse`."""
environment = self.config.get("firstenv")
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
ex = yield self.assertFailure(
terminate_machine.terminate_machine(
self.config, environment, False,
logging.getLogger("juju.control.cli"), [1]),
MachineStateInUse)
self.assertEqual(ex.machine_id, 1)
yield self.assert_machine_states([0, 1], [])
@inlineCallbacks
def test_terminate_machine_method_unknown(self):
"""Verify supporting method throws `MachineStateNotFound`."""
environment = self.config.get("firstenv")
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
ex = yield self.assertFailure(
terminate_machine.terminate_machine(
self.config, environment, False,
logging.getLogger("juju.control.cli"), [42]),
MachineStateNotFound)
self.assertEqual(ex.machine_id, 42)
yield self.assert_machine_states([0, 1], [])
@inlineCallbacks
def test_terminate_unused_machine(self):
"""Verify a typical allocation, unassignment, and then termination."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
wordpress_service_state = \
yield self.add_service_from_charm("wordpress")
wordpress_unit_state = yield wordpress_service_state.add_unit_state()
wordpress_machine_state = yield self.add_machine_state()
yield wordpress_unit_state.assign_to_machine(wordpress_machine_state)
riak_service_state = yield self.add_service_from_charm("riak")
riak_unit_state = yield riak_service_state.add_unit_state()
riak_machine_state = yield self.add_machine_state()
yield riak_unit_state.assign_to_machine(riak_machine_state)
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
yield wordpress_unit_state.unassign_from_machine()
yield mysql_unit_state.unassign_from_machine()
yield self.assert_machine_states([0, 1, 2, 3], [])
# trash environment to check syncing
yield self.client.delete("/environment")
main(["terminate-machine", "1", "3"])
yield wait_on_reactor_stopped
# check environment synced
esm = EnvironmentStateManager(self.client)
yield esm.get_config()
self.assertIn(
"Machines terminated: 1, 3", self.output.getvalue())
yield self.assert_machine_states([0, 2], [1, 3])
@inlineCallbacks
def test_attempt_terminate_unknown_machine(self):
"""Try to terminate an unknown machine and get a not-found error in log."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0) # XXX should be 1, see bug #697093
self.mocker.replay()
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
main(["terminate-machine", "42", "1"])
yield wait_on_reactor_stopped
self.assertIn("Machine 42 was not found", self.output.getvalue())
yield self.assert_machine_states([1], [])
@inlineCallbacks
def test_attempt_terminate_root_machine(self):
"""Try to terminate root machine and get corresponding error in log."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0) # XXX should be 1, see bug #697093
self.mocker.replay()
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
main(["terminate-machine", "0", "1"])
yield wait_on_reactor_stopped
self.assertIn(
"Cannot terminate machine 0: environment would be destroyed",
self.output.getvalue())
yield self.assert_machine_states([0, 1], [])
@inlineCallbacks
def test_do_nothing(self):
"""Verify terminate-machine can take no args and then does nothing."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
main(["terminate-machine"])
yield wait_on_reactor_stopped
yield self.assert_machine_states([0, 1], [])
def test_wrong_arguments_provided_non_integer(self):
"""Test command rejects non-integer args."""
self.assertRaises(
SystemExit, main, ["terminate-machine", "bar"])
self.assertIn(
"juju terminate-machine: error: argument ID: "
"invalid int value: 'bar'",
self.stderr.getvalue())
@inlineCallbacks
def test_invalid_environment(self):
"""Test command with an environment that hasn't been set up."""
wait_on_reactor_stopped = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
mysql_service_state = yield self.add_service_from_charm("mysql")
mysql_unit_state = yield mysql_service_state.add_unit_state()
mysql_machine_state = yield self.add_machine_state()
yield mysql_unit_state.assign_to_machine(mysql_machine_state)
wordpress_service_state = \
yield self.add_service_from_charm("wordpress")
wordpress_unit_state = yield wordpress_service_state.add_unit_state()
wordpress_machine_state = yield self.add_machine_state()
yield wordpress_unit_state.assign_to_machine(wordpress_machine_state)
main(["terminate-machine", "--environment", "roman-candle",
"1", "2"])
yield wait_on_reactor_stopped
self.assertIn(
"Invalid environment 'roman-candle'",
self.output.getvalue())
yield self.assert_machine_states([0, 1, 2], [])
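The guard behaviour these termination tests exercise (machine 0 is protected,
in-use machines refuse termination, unknown ids are reported) can be sketched
as a standalone check. `CannotTerminate` and `check_terminatable` are
hypothetical stand-ins for illustration, not juju's actual API:

```python
class CannotTerminate(Exception):
    pass


def check_terminatable(machine_id, in_use_ids, known_ids):
    """Raise if a machine may not be terminated, mirroring the test errors."""
    if machine_id == 0:
        # Machine 0 hosts the environment's own state; removing it would
        # destroy the environment, as the tests above assert.
        raise CannotTerminate(
            "Cannot terminate machine 0: environment would be destroyed")
    if machine_id not in known_ids:
        raise CannotTerminate("Machine %d was not found" % machine_id)
    if machine_id in in_use_ids:
        raise CannotTerminate("Machine %d is in use" % machine_id)


for bad in (0, 1, 42):
    try:
        check_terminatable(bad, in_use_ids={1}, known_ids={0, 1, 2})
    except CannotTerminate as e:
        print(e)
check_terminatable(2, in_use_ids={1}, known_ids={0, 1, 2})  # machine 2 is free
```

Only machines that pass all three checks (like machine 2 here) would be
eligible for termination.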
# juju-0.7.orig/juju/control/tests/test_unexpose.py
from twisted.internet.defer import inlineCallbacks
from juju.control import main
from juju.control.tests.common import ControlToolTest
from juju.lib import serializer
from juju.state.tests.test_service import ServiceStateManagerTestBase
class UnexposeControlTest(
ServiceStateManagerTestBase, ControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(UnexposeControlTest, self).setUp()
config = {
"environments": {"firstenv": {"type": "dummy"}}}
self.write_config(serializer.dump(config))
self.config.load()
self.service_state = yield self.add_service_from_charm("wordpress")
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_unexpose_service(self):
"""Test subcommand clears exposed flag for service."""
yield self.service_state.set_exposed_flag()
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertTrue(exposed_flag)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["unexpose", "wordpress"])
yield finished
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertFalse(exposed_flag)
self.assertIn("Service 'wordpress' was unexposed.",
self.output.getvalue())
@inlineCallbacks
def test_unexpose_service_not_exposed(self):
"""Test subcommand keeps an unexposed service still unexposed."""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["unexpose", "wordpress"])
yield finished
exposed_flag = yield self.service_state.get_exposed_flag()
self.assertFalse(exposed_flag)
self.assertIn("Service 'wordpress' was not exposed.",
self.output.getvalue())
# various errors
def test_unexpose_with_no_args(self):
"""Test subcommand takes at least one service argument."""
# in argparse, before reactor startup
self.assertRaises(SystemExit, main, ["unexpose"])
self.assertIn(
"juju unexpose: error: too few arguments",
self.stderr.getvalue())
def test_unexpose_with_too_many_args(self):
"""Test subcommand takes at most one service argument."""
self.assertRaises(
SystemExit, main, ["unexpose", "foo", "fum"])
self.assertIn(
"juju: error: unrecognized arguments: fum",
self.stderr.getvalue())
@inlineCallbacks
def test_unexpose_unknown_service(self):
"""Test subcommand fails if service does not exist."""
finished = self.setup_cli_reactor()
self.setup_exit(0) # XXX change when bug 697093 is fixed
self.mocker.replay()
main(["unexpose", "foobar"])
yield finished
self.assertIn(
"Service 'foobar' was not found",
self.output.getvalue())
@inlineCallbacks
def test_invalid_environment(self):
"""Test command with an environment that hasn't been set up."""
finished = self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
main(["unexpose", "--environment", "roman-candle", "wordpress"])
yield finished
self.assertIn(
"Invalid environment 'roman-candle'",
self.output.getvalue())
# juju-0.7.orig/juju/control/tests/test_upgrade_charm.py
import json
import os
from twisted.internet.defer import inlineCallbacks, succeed
from juju.charm.directory import CharmDirectory
from juju.charm.repository import LocalCharmRepository, CS_STORE_URL
from juju.charm.tests.test_metadata import test_repository_path
from juju.charm.url import CharmURL
from juju.control import main
from juju.errors import FileNotFound
from juju.environment.environment import Environment
from juju.lib.mocker import ANY
from juju.lib.serializer import dump
from juju.unit.workflow import UnitWorkflowState
from .common import MachineControlToolTest
class CharmUpgradeTestBase(object):
def add_charm(
self, metadata, revision, repository_dir=None, bundle=False,
config=None):
"""
Helper method to create a charm in the given repo.
"""
if repository_dir is None:
repository_dir = self.makeDir()
series_dir = os.path.join(repository_dir, "series")
os.mkdir(series_dir)
charm_dir = self.makeDir()
with open(os.path.join(charm_dir, "metadata.yaml"), "w") as f:
f.write(dump(metadata))
with open(os.path.join(charm_dir, "revision"), "w") as f:
f.write(str(revision))
if config:
with open(os.path.join(charm_dir, "config.yaml"), "w") as f:
f.write(dump(config))
if bundle:
CharmDirectory(charm_dir).make_archive(
os.path.join(series_dir, "%s.charm" % metadata["name"]))
else:
os.rename(charm_dir, os.path.join(series_dir, metadata["name"]))
return LocalCharmRepository(repository_dir)
def increment_charm(self, charm):
metadata = charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, charm.get_revision() + 1)
return repository
class ControlCharmUpgradeTest(
MachineControlToolTest, CharmUpgradeTestBase):
@inlineCallbacks
def setUp(self):
yield super(ControlCharmUpgradeTest, self).setUp()
self.service_state1 = yield self.add_service_from_charm("mysql")
self.service_unit1 = yield self.service_state1.add_unit_state()
self.unit1_workflow = UnitWorkflowState(
self.client, self.service_unit1, None, self.makeDir())
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("started")
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_charm_upgrade(self):
"""
'juju upgrade-charm <service_name>' will schedule
a charm for upgrade.
"""
repository = self.increment_charm(self.charm)
mock_environment = self.mocker.patch(Environment)
mock_environment.get_machine_provider()
self.mocker.result(self.provider)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "--repository", repository.path, "mysql"])
yield finished
# Verify the service has a new charm reference
charm_id = yield self.service_state1.get_charm_id()
self.assertEqual(charm_id, "local:series/mysql-2")
# Verify the provider storage has been updated
charm = yield repository.find(CharmURL.parse("local:series/mysql"))
storage = self.provider.get_file_storage()
try:
yield storage.get(
"local_3a_series_2f_mysql-2_3a_%s" % charm.get_sha256())
except FileNotFound:
self.fail("New charm not uploaded")
# Verify the upgrade flag on the service units.
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertTrue(upgrade_flag)
@inlineCallbacks
def test_missing_repository(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "mysql"])
yield finished
self.assertIn("No repository specified", self.output.getvalue())
@inlineCallbacks
def test_repository_from_environ(self):
repository = self.increment_charm(self.charm)
self.change_environment(JUJU_REPOSITORY=repository.path)
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "mysql"])
yield finished
self.assertNotIn("No repository specified", self.output.getvalue())
@inlineCallbacks
def test_upgrade_charm_with_unupgradeable_units(self):
"""If there are units that won't be upgraded, they will be reported,
other units will be upgraded.
"""
repository = self.increment_charm(self.charm)
service_unit2 = yield self.service_state1.add_unit_state()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "--repository", repository.path, "mysql"])
yield finished
# Verify report of unupgradeable units
self.assertIn(
("Unit 'mysql/1' is not in a running state "
"(state: 'uninitialized'), won't upgrade"),
self.output.getvalue())
# Verify flags only set on upgradeable unit.
value = (yield service_unit2.get_upgrade_flag())
self.assertFalse(value)
value = (yield self.service_unit1.get_upgrade_flag())
self.assertTrue(value)
@inlineCallbacks
def test_force_charm_upgrade(self):
repository = self.increment_charm(self.charm)
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("start_error")
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "--repository", repository.path,
"--force", "mysql"])
yield finished
value = (yield self.service_unit1.get_upgrade_flag())
self.assertEqual(value, {'force': True})
@inlineCallbacks
def test_upgrade_charm_unknown_service(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "--repository", self.makeDir(), "volcano"])
yield finished
self.assertIn(
"Service 'volcano' was not found", self.output.getvalue())
@inlineCallbacks
def test_upgrade_charm_unknown_charm(self):
"""If a charm is not found in the repository, an error is given.
"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
repository_dir = self.makeDir()
os.mkdir(os.path.join(repository_dir, "series"))
main(["upgrade-charm", "--repository", repository_dir, "mysql"])
yield finished
self.assertIn(
"Charm 'local:series/mysql' not found in repository",
self.output.getvalue())
@inlineCallbacks
def test_upgrade_charm_unknown_charm_dryrun(self):
"""If a charm is not found in the repository, an error is given.
"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
repository_dir = self.makeDir()
os.mkdir(os.path.join(repository_dir, "series"))
main(["upgrade-charm", "--repository",
repository_dir, "mysql", "--dry-run"])
yield finished
self.assertIn(
"Charm 'local:series/mysql' not found in repository",
self.output.getvalue())
@inlineCallbacks
def test_upgrade_charm_dryrun_reports_unupgradeable_units(self):
"""If there are units that won't be upgraded, dry-run will report them.
"""
repository = self.increment_charm(self.charm)
service_unit2 = yield self.service_state1.add_unit_state()
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
main(["upgrade-charm", "-n",
"--repository", repository.path, "mysql"])
yield finished
# Verify dry run
self.assertIn(
"Service would be upgraded from charm", self.output.getvalue())
# Verify report of unupgradeable units
self.assertIn(
("Unit 'mysql/1' is not in a running state "
"(state: 'uninitialized'), won't upgrade"),
self.output.getvalue())
# Verify no flags have been set.
value = (yield service_unit2.get_upgrade_flag())
self.assertFalse(value)
value = (yield self.service_unit1.get_upgrade_flag())
self.assertFalse(value)
@inlineCallbacks
def test_apply_new_charm_defaults(self):
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
# Add a charm and its service.
metadata = {"name": "haiku",
"summary": "its short",
"description": "but with cadence"}
repository = self.add_charm(
metadata=metadata,
revision=1,
config={
"options": {
"foo": {"type": "string",
"default": "foo-default",
"description": "Foo"},
"bar": {"type": "string",
"default": "bar-default",
"description": "Bar"},
}
})
charm_dir = yield repository.find(CharmURL.parse("local:series/haiku"))
service_state = yield self.add_service_from_charm(
"haiku", charm_dir=charm_dir)
# Update a config value
config = yield service_state.get_config()
config["foo"] = "abc"
yield config.write()
# Upgrade the charm
repository = self.add_charm(
metadata=metadata,
revision=2,
config={
"options": {
"foo": {"type": "string",
"default": "foo-default",
"description": "Foo"},
"bar": {"type": "string",
"default": "bar-default",
"description": "Bar"},
"dca": {"type": "string",
"default": "default-dca",
"description": "Airport"},
}
})
main(["upgrade-charm", "--repository", repository.path, "haiku"])
yield finished
config = yield service_state.get_config()
self.assertEqual(
config,
{"foo": "abc", "dca": "default-dca", "bar": "bar-default"})
@inlineCallbacks
def test_latest_local_dry_run(self):
"""Do nothing; log that local charm would be re-revisioned and used"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 1)
main(["upgrade-charm", "--dry-run",
"--repository", repository.path, "mysql"])
yield finished
charm_path = os.path.join(repository.path, "series", "mysql")
self.assertIn(
"%s would be set to revision 2" % charm_path,
self.output.getvalue())
self.assertIn(
"Service would be upgraded from charm 'local:series/mysql-1' to "
"'local:series/mysql-2'",
self.output.getvalue())
with open(os.path.join(charm_path, "revision")) as f:
self.assertEquals(f.read(), "1")
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
@inlineCallbacks
def test_latest_local_live_fire(self):
"""Local charm should be re-revisioned and used; log that it was"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 1)
main(["upgrade-charm", "--repository", repository.path, "mysql"])
yield finished
charm_path = os.path.join(repository.path, "series", "mysql")
self.assertIn(
"Setting %s to revision 2" % charm_path,
self.output.getvalue())
with open(os.path.join(charm_path, "revision")) as f:
self.assertEquals(f.read(), "2\n")
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertTrue(upgrade_flag)
@inlineCallbacks
def test_latest_local_leapfrog_dry_run(self):
"""Do nothing; log that local charm would be re-revisioned and used"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 0)
main(["upgrade-charm", "--dry-run",
"--repository", repository.path, "mysql"])
yield finished
charm_path = os.path.join(repository.path, "series", "mysql")
self.assertIn(
"%s would be set to revision 2" % charm_path,
self.output.getvalue())
self.assertIn(
"Service would be upgraded from charm 'local:series/mysql-1' to "
"'local:series/mysql-2'",
self.output.getvalue())
with open(os.path.join(charm_path, "revision")) as f:
self.assertEquals(f.read(), "0")
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
@inlineCallbacks
def test_latest_local_leapfrog_live_fire(self):
"""Local charm should be re-revisioned and used; log that it was"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 0)
main(["upgrade-charm", "--repository", repository.path, "mysql"])
yield finished
charm_path = os.path.join(repository.path, "series", "mysql")
self.assertIn(
"Setting %s to revision 2" % charm_path,
self.output.getvalue())
with open(os.path.join(charm_path, "revision")) as f:
self.assertEquals(f.read(), "2\n")
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertTrue(upgrade_flag)
@inlineCallbacks
def test_latest_local_bundle_dry_run(self):
"""Do nothing; log that nothing would be done"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 1, bundle=True)
main(["upgrade-charm", "--dry-run",
"--repository", repository.path, "mysql"])
yield finished
self.assertIn(
"Service already running latest charm",
self.output.getvalue())
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
@inlineCallbacks
def test_latest_local_bundle_live_fire(self):
"""Do nothing; log that nothing was done"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
metadata = self.charm.metadata.get_serialization_data()
metadata["name"] = "mysql"
repository = self.add_charm(metadata, 1, bundle=True)
main(["upgrade-charm", "--repository", repository.path, "mysql"])
yield finished
self.assertIn(
"Charm 'local:series/mysql-1' is the latest revision known",
self.output.getvalue())
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
class RemoteUpgradeCharmTest(MachineControlToolTest):
@inlineCallbacks
def setUp(self):
yield super(RemoteUpgradeCharmTest, self).setUp()
charm = CharmDirectory(os.path.join(
test_repository_path, "series", "mysql"))
self.charm_state_manager.add_charm_state(
"cs:series/mysql-1", charm, "")
self.service_state1 = yield self.add_service_from_charm(
"mysql", "cs:series/mysql-1")
self.service_unit1 = yield self.service_state1.add_unit_state()
self.unit1_workflow = UnitWorkflowState(
self.client, self.service_unit1, None, self.makeDir())
with (yield self.unit1_workflow.lock()):
yield self.unit1_workflow.set_state("started")
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
self.output = self.capture_logging()
self.stderr = self.capture_stream("stderr")
@inlineCallbacks
def test_latest_dry_run(self):
"""Do nothing; log that nothing would be done"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
getPage = self.mocker.replace("twisted.web.client.getPage")
getPage(
CS_STORE_URL + "/charm-info?charms=cs%3Aseries/mysql",
contextFactory=ANY)
self.mocker.result(succeed(json.dumps(
{"cs:series/mysql": {"revision": 1, "sha256": "whatever"}})))
self.mocker.replay()
main(["upgrade-charm", "--dry-run", "mysql"])
yield finished
self.assertIn(
"Service already running latest charm",
self.output.getvalue())
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
@inlineCallbacks
def test_latest_live_fire(self):
"""Do nothing; log that nothing was done"""
finished = self.setup_cli_reactor()
self.setup_exit(0)
getPage = self.mocker.replace("twisted.web.client.getPage")
getPage(CS_STORE_URL + "/charm-info?charms=cs%3Aseries/mysql",
contextFactory=ANY)
self.mocker.result(succeed(json.dumps(
{"cs:series/mysql": {"revision": 1, "sha256": "whatever"}})))
self.mocker.replay()
main(["upgrade-charm", "mysql"])
yield finished
self.assertIn(
"Charm 'cs:series/mysql-1' is the latest revision known",
self.output.getvalue())
upgrade_flag = yield self.service_unit1.get_upgrade_flag()
self.assertFalse(upgrade_flag)
juju-0.7.orig/juju/control/tests/test_utils.py

import os
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.environment.tests.test_config import EnvironmentsConfigTestBase
from juju.control.tests.common import ControlToolTest
from juju.control.utils import (
get_environment, get_ip_address_for_machine, get_ip_address_for_unit,
expand_path, parse_passthrough_args, ParseError)
from juju.environment.config import EnvironmentsConfig
from juju.environment.errors import EnvironmentsConfigError
from juju.lib.serializer import yaml_dump as dump
from juju.lib.testing import TestCase
from juju.state.errors import ServiceUnitStateMachineNotAssigned
from juju.state.tests.test_service import ServiceStateManagerTestBase
class FakeOptions(object):
pass
class LookupTest(ServiceStateManagerTestBase, EnvironmentsConfigTestBase):
@inlineCallbacks
def setUp(self):
yield super(LookupTest, self).setUp()
self.environment = self.config.get_default()
self.provider = self.environment.get_machine_provider()
@inlineCallbacks
def start_machine(self, dns_name):
machine_state = yield self.add_machine_state()
provider_machine, = yield self.provider.start_machine(
{"machine-id": machine_state.id, "dns-name": dns_name})
yield machine_state.set_instance_id(provider_machine.instance_id)
returnValue(machine_state)
@inlineCallbacks
def test_get_ip_address_for_machine(self):
"""Verify can retrieve dns name, machine state with machine id."""
machine_state = yield self.add_machine_state()
provider_machine, = yield self.provider.start_machine(
{"machine-id": machine_state.id, "dns-name": "steamcloud-1.com"})
yield machine_state.set_instance_id(provider_machine.instance_id)
dns_name, lookedup_machine_state = yield get_ip_address_for_machine(
self.client, self.provider, machine_state.id)
self.assertEqual(dns_name, "steamcloud-1.com")
self.assertEqual(lookedup_machine_state.id, machine_state.id)
@inlineCallbacks
def test_get_ip_address_for_unit(self):
"""Verify can retrieve dns name, unit state with unit name."""
service_state = yield self.add_service("wordpress")
unit_state = yield service_state.add_unit_state()
machine_state = yield self.start_machine("steamcloud-1.com")
yield unit_state.assign_to_machine(machine_state)
yield unit_state.set_public_address("steamcloud-1.com")
dns_name, lookedup_unit_state = yield get_ip_address_for_unit(
self.client, self.provider, "wordpress/0")
self.assertEqual(dns_name, "steamcloud-1.com")
self.assertEqual(lookedup_unit_state.unit_name, "wordpress/0")
@inlineCallbacks
def test_get_ip_address_for_unit_with_unassigned_machine(self):
"""Service unit exists, but it doesn't have an assigned machine."""
service_state = yield self.add_service("wordpress")
yield service_state.add_unit_state()
e = yield self.assertFailure(
get_ip_address_for_unit(self.client, self.provider, "wordpress/0"),
ServiceUnitStateMachineNotAssigned)
self.assertEqual(
str(e),
"Service unit 'wordpress/0' is not assigned to a machine")
class PathExpandTest(TestCase):
def test_expand_path(self):
self.assertEqual(
os.path.abspath("."), expand_path("."))
self.assertEqual(
os.path.expanduser("~/foobar"), expand_path("~/foobar"))
class GetEnvironmentTest(ControlToolTest):
def test_get_environment_from_environment(self):
self.change_environment(JUJU_ENV="secondenv")
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
env_config = EnvironmentsConfig()
env_config.load_or_write_sample()
options = FakeOptions()
options.environment = None
options.environments = env_config
environment = get_environment(options)
self.assertEqual(environment.name, "secondenv")
def test_get_environment(self):
config = {
"environments": {"firstenv": {"type": "dummy"}}}
self.write_config(dump(config))
env_config = EnvironmentsConfig()
env_config.load_or_write_sample()
options = FakeOptions()
options.environment = None
options.environments = env_config
environment = get_environment(options)
self.assertEqual(environment.name, "firstenv")
def test_get_environment_default_with_multiple(self):
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
env_config = EnvironmentsConfig()
env_config.load_or_write_sample()
options = FakeOptions()
options.environment = None
options.environments = env_config
error = self.assertRaises(
EnvironmentsConfigError,
get_environment,
options)
self.assertIn(
"There are multiple environments and no explicit default",
str(error))
def test_get_nonexistent_environment(self):
config = {
"environments": {"firstenv": {"type": "dummy"},
"secondenv": {"type": "dummy"}}}
self.write_config(dump(config))
env_config = EnvironmentsConfig()
env_config.load_or_write_sample()
options = FakeOptions()
options.environment = "volcano"
options.environments = env_config
error = self.assertRaises(
EnvironmentsConfigError,
get_environment,
options)
self.assertIn("Invalid environment 'volcano'", str(error))
class ParsePassthroughArgsTest(ControlToolTest):
def test_parse_typical_ssh(self):
"""Verify that flags and positional args are properly partitioned."""
ssh_flags = "bcDeFIiLlmOopRSWw"
self.assertEqual(
parse_passthrough_args(
["-L8080:localhost:80", "-o", "Volume 11", "mysql/0", "ls a*"],
ssh_flags),
(["-L8080:localhost:80", "-o", "Volume 11"], ["mysql/0", "ls a*"]))
self.assertEqual(
parse_passthrough_args(
["-L8080:localhost:80", "-aC26", "0", "foobar", "do", "123"],
ssh_flags),
(["-L8080:localhost:80", "-aC26"], ["0", "foobar", "do", "123"]))
self.assertEqual(
parse_passthrough_args(["mysql/0"], ssh_flags),
([], ["mysql/0"]))
self.assertEqual(
parse_passthrough_args(
["mysql/0", "command", "-L8080:localhost:80"], ssh_flags),
([], ["mysql/0", "command", "-L8080:localhost:80"]))
def test_parse_flag_taking_args(self):
"""Verify that arg-taking flags properly combine with args"""
# some sample flags, from the ssh command
ssh_flags = "bcDeFIiLlmOopRSWw"
for flag in ssh_flags:
# This flag properly combines, either of the form -Xabc or -X abc
self.assertEqual(
parse_passthrough_args(
["-" + flag + "XYZ", "-1X", "-" + flag, "XYZ", "mysql/0"],
ssh_flags),
(["-" + flag + "XYZ", "-1X", "-" + flag, "XYZ"], ["mysql/0"]))
# And requires that it is combined
e = self.assertRaises(
ParseError,
parse_passthrough_args, ["-" + flag], ssh_flags)
self.assertEqual(str(e), "argument -%s: expected one argument" % flag)
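The partitioning behaviour these tests exercise can be sketched in a few lines. This is a simplified, self-contained stand-in for `juju.control.utils.parse_passthrough_args`, not the real implementation; the name `partition_passthrough` is illustrative:

```python
def partition_passthrough(args, flags_taking_args):
    """Split args into (flags, positional): flags are collected until the
    first positional argument, after which everything passes through."""
    flag_args = []
    i = 0
    while i < len(args):
        arg = args[i]
        if not arg.startswith("-"):
            break  # first positional token ends flag parsing
        flag_args.append(arg)
        # A lone arg-taking flag (e.g. "-o") must consume a following token.
        if len(arg) == 2 and arg[1] in flags_taking_args:
            if i + 1 >= len(args):
                raise ValueError(
                    "argument -%s: expected one argument" % arg[1])
            i += 1
            flag_args.append(args[i])
        i += 1
    return flag_args, args[i:]
```

Combined forms such as `-L8080:localhost:80` are kept as single tokens, matching the "properly combines" cases above.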
juju-0.7.orig/juju/environment/__init__.py

juju-0.7.orig/juju/environment/config.py

import os
import uuid
import yaml
from juju.environment.environment import Environment
from juju.environment.errors import EnvironmentsConfigError
from juju.errors import FileAlreadyExists, FileNotFound
from juju.lib import serializer
from juju.lib.schema import (
Constant, Dict, Int, KeyDict, OAuthString, OneOf, SchemaError, SelectDict,
String, Bool)
DEFAULT_CONFIG_PATH = "~/.juju/environments.yaml"
SAMPLE_CONFIG = """\
environments:
sample:
type: ec2
control-bucket: %(control-bucket)s
admin-secret: %(admin-secret)s
default-series: precise
ssl-hostname-verification: true
"""
_EITHER_PLACEMENT = OneOf(Constant("unassigned"), Constant("local"))
# See juju.providers.openstack.credentials for definition and more details
_OPENSTACK_AUTH_MODE = OneOf(
Constant("userpass"),
Constant("keypair"),
Constant("legacy"),
Constant("rax"),
)
SCHEMA = KeyDict({
"default": String(),
"environments": Dict(String(), SelectDict("type", {
"ec2": KeyDict({
"control-bucket": String(),
"admin-secret": String(),
"access-key": String(),
"secret-key": String(),
"region": OneOf(
Constant("us-east-1"),
Constant("us-west-1"),
Constant("us-west-2"),
Constant("eu-west-1"),
Constant("sa-east-1"),
Constant("ap-northeast-1"),
Constant("ap-southeast-1")),
"ec2-uri": String(),
"s3-uri": String(),
"ssl-hostname-verification": OneOf(
Constant(True),
Constant(False)),
"placement": _EITHER_PLACEMENT,
"default-series": String()},
optional=[
"access-key", "secret-key", "region", "ec2-uri", "s3-uri",
"placement", "ssl-hostname-verification"]),
"openstack": KeyDict({
"control-bucket": String(),
"admin-secret": String(),
"access-key": String(),
"secret-key": String(),
"default-instance-type": String(),
"default-image-id": OneOf(String(), Int()),
"auth-url": String(),
"project-name": String(),
"use-floating-ip": Bool(),
"auth-mode": _OPENSTACK_AUTH_MODE,
"region": String(),
"default-series": String(),
"ssl-hostname-verification": Bool(),
},
optional=[
"access-key", "secret-key", "auth-url", "project-name",
"auth-mode", "region", "use-floating-ip",
"ssl-hostname-verification", "default-instance-type"]),
"openstack_s3": KeyDict({
"control-bucket": String(),
"admin-secret": String(),
"access-key": String(),
"secret-key": String(),
"default-instance-type": String(),
"default-image-id": OneOf(String(), Int()),
"auth-url": String(),
"combined-key": String(),
"s3-uri": String(),
"use-floating-ip": Bool(),
"auth-mode": _OPENSTACK_AUTH_MODE,
"region": String(),
"default-series": String(),
"ssl-hostname-verification": Bool(),
},
optional=[
"access-key", "secret-key", "combined-key", "auth-url",
"s3-uri", "project-name", "auth-mode", "region",
"use-floating-ip", "ssl-hostname-verification",
"default-instance-type"]),
"maas": KeyDict({
"maas-server": String(),
"maas-oauth": OAuthString(),
"admin-secret": String(),
"placement": _EITHER_PLACEMENT,
# MAAS currently only provisions precise; any other default-series
# would just lead to errors down the line.
"default-series": String()},
optional=["placement"]),
"local": KeyDict({
"admin-secret": String(),
"data-dir": String(),
"placement": Constant("local"),
"default-series": String()},
optional=["placement"]),
"dummy": KeyDict({})}))},
optional=["default"])
class EnvironmentsConfig(object):
"""An environment configuration, with one or more environments.
"""
def __init__(self):
self._config = None
self._loaded_path = None
def get_default_path(self):
"""Return the default environment configuration file path."""
return os.path.expanduser(DEFAULT_CONFIG_PATH)
def _get_path(self, path):
if path is None:
return self.get_default_path()
return path
def load(self, path=None):
"""Load an enviornment configuration file.
@param path: An optional environment configuration file path.
Defaults to ~/.juju/environments.yaml
This method will call the C{parse()} method with the content
of the loaded file.
"""
path = self._get_path(path)
if not os.path.isfile(path):
raise FileNotFound(path)
with open(path) as file:
self.parse(file.read(), path)
def parse(self, content, path=None):
"""Parse an enviornment configuration.
@param content: The content to parse.
@param path: An optional environment configuration file path, used
when raising errors.
@raise EnvironmentsConfigError: On any parsing or validation problem.
"""
if not isinstance(content, basestring):
self._fail("Configuration must be a string", path, repr(content))
try:
config = serializer.yaml_load(content)
except yaml.YAMLError, error:
self._fail(error, path=path, content=content)
if not isinstance(config, dict):
self._fail("Configuration must be a dictionary", path, content)
try:
config = SCHEMA.coerce(config, [])
except SchemaError, error:
self._fail(error, path=path)
self._config = config
self._loaded_path = path
def _fail(self, error, path, content=None):
if path is None:
path_info = ""
else:
path_info = " %s:" % (path,)
error = str(error)
if content:
error += ":\n%s" % content
raise EnvironmentsConfigError(
"Environments configuration error:%s %s" %
(path_info, error))
def get_names(self):
"""Return the names of environments available in the configuration."""
return sorted(self._config["environments"].iterkeys())
def get(self, name):
"""Retrieve the Environment with the given name.
@return: The Environment, or None if one isn't found.
"""
environment_config = self._config["environments"].get(name)
if environment_config is not None:
return Environment(name, environment_config)
return None
def get_default(self):
"""Get the default environment for this configuration.
The default environment is either the single defined environment
in the configuration, or the one explicitly named through the
"default:" option in the outermost scope.
@raise EnvironmentsConfigError: If it can't determine a default
environment.
"""
environments_config = self._config.get("environments")
if len(environments_config) == 1:
return self.get(environments_config.keys()[0])
default = self._config.get("default")
if default:
if default not in environments_config:
raise EnvironmentsConfigError(
"Default environment '%s' was not found: %s" %
(default, self._loaded_path))
return self.get(default)
raise EnvironmentsConfigError("There are multiple environments and no "
"explicit default (set one explicitly?): "
"%s" % self._loaded_path)
def write_sample(self, path=None):
"""Write down a sample configuration file.
@param path: An optional environment configuration file path.
Defaults to ~/.juju/environments.yaml
"""
path = self._get_path(path)
dirname = os.path.dirname(path)
if os.path.exists(path):
raise FileAlreadyExists(path)
if not os.path.exists(dirname):
os.mkdir(dirname, 0700)
defaults = {
"control-bucket": "juju-%s" % (uuid.uuid4().hex),
"admin-secret": "%s" % (uuid.uuid4().hex),
"default-series": "precise"
}
with open(path, "w") as file:
file.write(SAMPLE_CONFIG % defaults)
os.chmod(path, 0600)
def load_or_write_sample(self):
"""Try to load the configuration, and if it doesn't work dump a sample.
This method will try to load the environment configuration from the
default location, and if it doesn't work, it will write down a
sample configuration there.
This is handy for a default initialization.
"""
try:
self.load()
except FileNotFound:
self.write_sample()
raise EnvironmentsConfigError("No environments configured. Please "
"edit: %s" % self.get_default_path(),
sample_written=True)
def serialize(self, name=None):
"""Serialize the environments configuration.
Optionally an environment name can be specified and only
that environment will be serialized.
Serialization dispatches to the individual environments as
they may serialize information not contained within the
original config file.
"""
if not name:
names = self.get_names()
else:
names = [name]
config = self._config.copy()
config["environments"] = {}
for name in names:
environment = self.get(name)
if environment is None:
raise EnvironmentsConfigError(
"Invalid environment %r" % name)
data = environment.get_serialization_data()
# all environment data should be contained
# in a nested dict under the environment name.
assert data.keys() == [name]
config["environments"].update(data)
return serializer.dump(config)
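The default-resolution rule implemented by get_default() above can be sketched standalone; `resolve_default` is an illustrative name operating on a plain dict rather than the parsed juju config:

```python
def resolve_default(config):
    """Return the default environment name, mirroring get_default():
    a single defined environment wins; otherwise the explicit
    "default" key must name one of the environments."""
    environments = config["environments"]
    if len(environments) == 1:
        return list(environments)[0]
    default = config.get("default")
    if default:
        if default not in environments:
            raise ValueError(
                "Default environment %r was not found" % default)
        return default
    raise ValueError(
        "There are multiple environments and no explicit default")
```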
juju-0.7.orig/juju/environment/environment.py

from juju.lib.loader import get_callable
class Environment(object):
"""An environment where machines can be run on."""
def __init__(self, name, environment_config):
self._name = name
self._environment_config = environment_config
self._machine_provider = None
@property
def name(self):
"""The name of this environment."""
return self._name
def get_serialization_data(self):
provider = self.get_machine_provider()
data = {self.name: provider.get_serialization_data()}
return data
@property
def type(self):
"""The type of the environment."""
return self._environment_config["type"]
def get_machine_provider(self):
"""Return a MachineProvider instance for the given provider name.
The returned instance will be retrieved from the module named
after the type of the given machine provider.
"""
if self._machine_provider is None:
provider_type = self._environment_config["type"]
MachineProvider = get_callable(
"juju.providers.%s.MachineProvider" % provider_type)
self._machine_provider = MachineProvider(
self._name, self._environment_config)
return self._machine_provider
@property
def placement(self):
"""The name of the default placement policy.
If the environment doesn't have a default unit placement
policy, None is returned.
"""
return self._environment_config.get("placement")
@property
def default_series(self):
"""The Ubuntu series to run on machines in this environment."""
return self._environment_config.get("default-series")
@property
def origin(self):
"""Returns the origin of the code."""
return self._environment_config.get("juju-origin", "distro")
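get_machine_provider() above resolves the provider class lazily from a dotted path via juju.lib.loader.get_callable. A rough, self-contained equivalent of that lookup (an assumption about its behaviour, not the actual loader) is:

```python
import importlib

def load_callable(dotted_name):
    """Resolve a dotted name like "pkg.module.attr" to the attribute
    it names, importing the containing module on demand."""
    module_name, attr = dotted_name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, attr)
```

Caching the result on first use, as Environment does with self._machine_provider, avoids repeating the import and attribute lookup on every call.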
juju-0.7.orig/juju/environment/errors.py

from juju.errors import JujuError
class EnvironmentsConfigError(JujuError):
"""Raised when the environment configuration file has problems."""
def __init__(self, message, sample_written=False):
super(EnvironmentsConfigError, self).__init__(message)
self.sample_written = sample_written
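The sample_written flag lets callers distinguish "no config existed, so a sample was written" from other configuration errors without inspecting the message. The pattern in miniature (ConfigError is an illustrative stand-in, not the juju class):

```python
class ConfigError(Exception):
    """Error that records whether a sample config file was written."""

    def __init__(self, message, sample_written=False):
        super(ConfigError, self).__init__(message)
        self.sample_written = sample_written
```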
juju-0.7.orig/juju/environment/tests/__init__.py

juju-0.7.orig/juju/environment/tests/test_config.py

import os
from twisted.internet.defer import inlineCallbacks
from juju.environment import environment
from juju.environment.config import EnvironmentsConfig, SAMPLE_CONFIG
from juju.environment.environment import Environment
from juju.environment.errors import EnvironmentsConfigError
from juju.errors import FileNotFound, FileAlreadyExists
from juju.lib import serializer
from juju.state.environment import EnvironmentStateManager
from juju.lib.testing import TestCase
DATA_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "data"))
SAMPLE_ENV = """
environments:
myfirstenv:
type: dummy
foo: bar
mysecondenv:
type: dummy
nofoo: 1
"""
SAMPLE_MAAS = """
environments:
sample:
type: maas
maas-server: somewhe.re
maas-oauth: foo:bar:baz
admin-secret: garden
default-series: precise
"""
SAMPLE_LOCAL = """
ensemble: environments
environments:
sample:
type: local
admin-secret: sekret
default-series: oneiric
"""
SAMPLE_OPENSTACK = """
environments:
sample:
type: openstack
admin-secret: sekret
control-bucket: container
default-image-id: 42
default-series: precise
"""
class EnvironmentsConfigTestBase(TestCase):
@inlineCallbacks
def setUp(self):
yield super(EnvironmentsConfigTestBase, self).setUp()
release_path = os.path.join(DATA_DIR, "lsb-release")
self.patch(environment, "LSB_RELEASE_PATH", release_path)
self.old_home = os.environ.get("HOME")
self.tmp_home = self.makeDir()
self.change_environment(HOME=self.tmp_home, PATH=os.environ["PATH"])
self.default_path = os.path.join(self.tmp_home,
".juju/environments.yaml")
self.other_path = os.path.join(self.tmp_home,
".juju/other-environments.yaml")
self.config = EnvironmentsConfig()
def write_config(self, config_text, other_path=False):
if other_path:
path = self.other_path
else:
path = self.default_path
parent_name = os.path.dirname(path)
if not os.path.exists(parent_name):
os.makedirs(parent_name)
with open(path, "w") as file:
file.write(config_text)
# The following methods expect to be called *after* a subclass has set
# self.client.
def push_config(self, name, config):
self.write_config(serializer.yaml_dump(config))
self.config.load()
esm = EnvironmentStateManager(self.client)
return esm.set_config_state(self.config, name)
@inlineCallbacks
def push_env_constraints(self, *constraint_strs):
esm = EnvironmentStateManager(self.client)
constraint_set = yield esm.get_constraint_set()
yield esm.set_constraints(constraint_set.parse(constraint_strs))
@inlineCallbacks
def push_default_config(self, with_constraints=True):
config = {
"environments": {"firstenv": {
"type": "dummy", "storage-directory": self.makeDir()}}}
yield self.push_config("firstenv", config)
if with_constraints:
yield self.push_env_constraints()
class EnvironmentsConfigTest(EnvironmentsConfigTestBase):
def test_get_default_path(self):
self.assertEquals(self.config.get_default_path(), self.default_path)
def compare_config(self, config1, sample_config2):
config1 = serializer.yaml_load(config1)
config2 = serializer.yaml_load(
sample_config2 % config1["environments"]["sample"])
self.assertEqual(config1, config2)
def setup_ec2_credentials(self):
self.change_environment(
AWS_ACCESS_KEY_ID="foobar",
AWS_SECRET_ACCESS_KEY="secrat")
def test_load_with_nonexistent_default_path(self):
"""
Raise an error if load() is called without a path and the
default doesn't exist.
"""
try:
self.config.load()
except FileNotFound, error:
self.assertEquals(error.path, self.default_path)
else:
self.fail("FileNotFound not raised")
def test_load_with_nonexistent_custom_path(self):
"""
Raise an error if load() is called with a non-existent path.
"""
path = "/non/existent/custom/path"
try:
self.config.load(path)
except FileNotFound, error:
self.assertEquals(error.path, path)
else:
self.fail("FileNotFound not raised")
def test_write_sample_environment_default_path(self):
"""
write_sample() should write a pre-defined sample configuration file.
"""
self.config.write_sample()
self.assertTrue(os.path.isfile(self.default_path))
with open(self.default_path) as file:
self.compare_config(file.read(), SAMPLE_CONFIG)
dir_path = os.path.dirname(self.default_path)
dir_stat = os.stat(dir_path)
self.assertEqual(dir_stat.st_mode & 0777, 0700)
stat = os.stat(self.default_path)
self.assertEqual(stat.st_mode & 0777, 0600)
def test_write_sample_contains_secret_key_and_control_bucket(self):
"""
write_sample() should write a pre-defined sample with an ec2 machine
provider type, a unique s3 control bucket, and an admin secret key.
"""
uuid_factory = self.mocker.replace("uuid.uuid4")
uuid_factory().hex
self.mocker.result("abc")
uuid_factory().hex
self.mocker.result("xyz")
self.mocker.replay()
self.config.write_sample()
self.assertTrue(os.path.isfile(self.default_path))
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
self.assertEqual(
config["environments"]["sample"]["type"], "ec2")
self.assertEqual(
config["environments"]["sample"]["control-bucket"],
"juju-abc")
self.assertEqual(
config["environments"]["sample"]["admin-secret"],
"xyz")
def test_write_sample_environment_with_default_path_and_existing_dir(self):
"""
write_sample() should not fail if the config directory already exists.
"""
os.mkdir(os.path.dirname(self.default_path))
self.config.write_sample()
self.assertTrue(os.path.isfile(self.default_path))
with open(self.default_path) as file:
self.compare_config(file.read(), SAMPLE_CONFIG)
def test_write_sample_environment_with_custom_path(self):
"""
write_sample() may receive an argument with a custom path.
"""
path = os.path.join(self.tmp_home, "sample-file")
self.config.write_sample(path)
self.assertTrue(os.path.isfile(path))
with open(path) as file:
self.compare_config(file.read(), SAMPLE_CONFIG)
def test_write_sample_wont_overwrite_existing_configuration(self):
"""
write_sample() must never overwrite an existing file.
"""
path = self.other_path
os.makedirs(os.path.dirname(path))
with open(path, "w") as file:
file.write("previous content")
try:
self.config.write_sample(path)
except FileAlreadyExists, error:
self.assertEquals(error.path, path)
else:
self.fail("FileAlreadyExists not raised")
def test_load_empty_environments(self):
"""
load() must raise an error if there are no environments defined
in the configuration file.
"""
# Use a different path to ensure the error message is right.
self.write_config("""
environments:
""", other_path=True)
try:
self.config.load(self.other_path)
except EnvironmentsConfigError, error:
self.assertEquals(str(error),
"Environments configuration error: %s: "
"environments: expected dict, got None"
% self.other_path)
else:
self.fail("EnvironmentsConfigError not raised")
def test_load_environments_with_wrong_type(self):
"""
load() must raise an error if the "environments:" option in
the YAML configuration file doesn't have a mapping underneath it.
"""
# Use a different path to ensure the error message is right.
self.write_config("""
environments:
- list
""", other_path=True)
try:
self.config.load(self.other_path)
except EnvironmentsConfigError, error:
self.assertEquals(str(error),
"Environments configuration error: %s: "
"environments: expected dict, got ['list']"
% self.other_path)
else:
self.fail("EnvironmentsConfigError not raised")
def test_wb_parse(self):
"""
We'll make an exception and use mocker here to test the
implementation itself: rather than repeating the same tests
for both parse() and load(), we just verify that one calls
the other internally.
"""
mock = self.mocker.patch(self.config)
mock.parse(SAMPLE_ENV, self.other_path)
self.write_config(SAMPLE_ENV, other_path=True)
self.mocker.replay()
self.config.load(self.other_path)
def test_parse_errors_without_filename(self):
"""
parse() may receive None as the file path, in which case the
error should not mention it.
"""
# Use a different path to ensure the error message is right.
try:
self.config.parse("""
environments:
""")
        except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"Environments configuration error: "
"environments: expected dict, got None")
else:
self.fail("EnvironmentsConfigError not raised")
def test_get_environment_names(self):
"""
        get_names() should return the names of the environments contained
        in the configuration file.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
self.assertEquals(self.config.get_names(),
["myfirstenv", "mysecondenv"])
def test_get_non_existing_environment(self):
"""
Trying to get() a non-existing configuration name should return None.
"""
self.config.parse(SAMPLE_ENV)
self.assertEquals(self.config.get("non-existing"), None)
def test_load_and_get_environment(self):
"""
get() should return an Environment instance.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
self.assertEquals(type(self.config.get("myfirstenv")), Environment)
def test_load_or_write_sample_with_non_existent_config(self):
"""
        When an environment configuration does not exist, the method
        load_or_write_sample() must write a sample configuration
        file, and raise an error to let the user know the request did
        not work and that they should edit this file.
"""
try:
self.config.load_or_write_sample()
        except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"No environments configured. Please edit: %s" %
self.default_path)
self.assertEquals(error.sample_written, True)
with open(self.default_path) as file:
self.compare_config(file.read(), SAMPLE_CONFIG)
else:
self.fail("EnvironmentsConfigError not raised")
def test_environment_config_error_sample_written_defaults_to_false(self):
"""
The error raised by load_or_write_sample() has a flag to let the
calling site know if a sample file was actually written down or not.
It must default to false, naturally.
"""
error = EnvironmentsConfigError("message")
self.assertFalse(error.sample_written)
def test_load_or_write_sample_will_load(self):
"""
load_or_write_sample() must load the configuration file if it exists.
"""
self.write_config(SAMPLE_ENV)
self.config.load_or_write_sample()
self.assertTrue(self.config.get("myfirstenv"))
def test_get_default_with_single_environment(self):
"""
get_default() must return the one defined environment, when it's
indeed a single one.
"""
config = serializer.yaml_load(SAMPLE_ENV)
del config["environments"]["mysecondenv"]
self.write_config(serializer.yaml_dump(config))
self.config.load()
env = self.config.get_default()
self.assertEquals(env.name, "myfirstenv")
def test_get_default_with_named_default(self):
"""
get_default() must otherwise return the environment named
through the "default:" option.
"""
config = serializer.yaml_load(SAMPLE_ENV)
config["default"] = "mysecondenv"
self.write_config(serializer.yaml_dump(config))
self.config.load()
env = self.config.get_default()
self.assertEquals(env.name, "mysecondenv")
def test_default_is_schema_protected(self):
"""
        The schema should enforce that the "default:" option is a string.
"""
config = serializer.yaml_load(SAMPLE_ENV)
config["default"] = 1
self.write_config(serializer.yaml_dump(config))
error = self.assertRaises(EnvironmentsConfigError, self.config.load)
self.assertEquals(
str(error),
"Environments configuration error: %s: "
"default: expected string, got 1" % self.default_path)
def test_get_default_with_named_but_missing_default(self):
"""
get_default() must raise an error if the environment named through
the "default:" option isn't found.
"""
config = serializer.yaml_load(SAMPLE_ENV)
config["default"] = "non-existent"
# Use a different path to ensure the error message is right.
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
try:
self.config.get_default()
        except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"Default environment 'non-existent' was not found: "
+ self.other_path)
else:
self.fail("EnvironmentsConfigError not raised")
def test_get_default_without_computable_default(self):
"""
get_default() must raise an error if there are multiple defined
environments and no explicit default was defined.
"""
# Use a different path to ensure the error message is right.
self.write_config(SAMPLE_ENV, other_path=True)
self.config.load(self.other_path)
try:
self.config.get_default()
        except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"There are multiple environments and no explicit default "
"(set one explicitly?): " + self.other_path)
else:
self.fail("EnvironmentsConfigError not raised")
def test_ensure_provider_types_are_set(self):
"""
The schema should refuse to receive a configuration which
contains a machine provider configuration without any type
information.
"""
config = serializer.yaml_load(SAMPLE_ENV)
# Delete the type.
del config["environments"]["myfirstenv"]["type"]
self.write_config(serializer.yaml_dump(config), other_path=True)
try:
self.config.load(self.other_path)
        except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"Environments configuration error: %s: "
"environments.myfirstenv.type: required value not found"
% self.other_path)
else:
self.fail("EnvironmentsConfigError not raised")
def test_serialize(self):
"""The config should be able to serialize itself."""
self.write_config(SAMPLE_ENV)
self.config.load()
config = self.config.serialize()
serialized = serializer.yaml_load(SAMPLE_ENV)
for d in serialized["environments"].values():
d["dynamicduck"] = "magic"
self.assertEqual(serializer.yaml_load(config), serialized)
def test_serialize_environment(self):
"""
The config serialization can take an environment name, in
which case that environment is serialized in isolation
into a valid config file that can be loaded.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
data = serializer.yaml_load(SAMPLE_ENV)
del data["environments"]["mysecondenv"]
data["environments"]["myfirstenv"]["dynamicduck"] = "magic"
self.assertEqual(
serializer.yaml_load(self.config.serialize("myfirstenv")),
data)
def test_load_serialized_environment(self):
"""
Serialize an environment, and then load it again
via an EnvironmentsConfig.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
serialized = self.config.serialize("myfirstenv")
config = EnvironmentsConfig()
config.parse(serialized)
self.assertTrue(
isinstance(config.get("myfirstenv"), Environment))
self.assertFalse(
isinstance(config.get("mysecondenv"), Environment))
def test_serialize_unknown_environment(self):
"""Serializing an unknown environment raises an error."""
self.write_config(SAMPLE_ENV)
self.config.load()
self.assertRaises(
EnvironmentsConfigError,
self.config.serialize, "zebra")
def test_serialize_custom_variables_outside_environment(self):
"""Serializing captures custom variables out of the environment."""
data = serializer.yaml_load(SAMPLE_ENV)
data["default"] = "myfirstenv"
self.write_config(serializer.yaml_dump(data))
self.config.load()
serialized = self.config.serialize()
config = EnvironmentsConfig()
config.parse(serialized)
environment = config.get_default()
self.assertEqual(environment.name, "myfirstenv")
def test_invalid_configuration_data_raise_environment_config_error(self):
self.write_config("ZEBRA")
self.assertRaises(EnvironmentsConfigError, self.config.load)
def test_nonstring_configuration_data_raise_environment_config_error(self):
error = self.assertRaises(
EnvironmentsConfigError, self.config.parse, None)
self.assertIn(
"Configuration must be a string:\nNone", str(error))
def test_yaml_load_error_raise_environment_config_error(self):
self.write_config("\0")
error = self.assertRaises(EnvironmentsConfigError, self.config.load)
self.assertIn(
"control characters are not allowed", str(error))
def test_ec2_verifies_region(self):
# sample doesn't include credentials
self.setup_ec2_credentials()
self.config.write_sample()
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
config["environments"]["sample"]["region"] = "ap-southeast-2"
self.write_config(serializer.yaml_dump(config), other_path=True)
e = self.assertRaises(EnvironmentsConfigError,
self.config.load,
self.other_path)
self.assertIn("expected 'us-east-1', got 'ap-southeast-2'",
str(e))
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
# Authorized keys are required for environment serialization.
config["environments"]["sample"]["authorized-keys"] = "mickey"
config["environments"]["sample"]["region"] = "ap-southeast-1"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
data = self.config.get_default().get_serialization_data()
self.assertEqual(data["sample"]["region"], "ap-southeast-1")
def assert_ec2_sample_config(self, delete_key):
self.config.write_sample()
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
del config["environments"]["sample"][delete_key]
self.write_config(serializer.yaml_dump(config), other_path=True)
try:
self.config.load(self.other_path)
        except EnvironmentsConfigError as error:
self.assertEquals(
str(error),
"Environments configuration error: %s: "
"environments.sample.%s: required value not found"
% (self.other_path, delete_key))
else:
self.fail("Did not properly require " + delete_key)
def test_ec2_sample_config_without_admin_secret(self):
self.assert_ec2_sample_config("admin-secret")
def test_ec2_sample_config_without_default_series(self):
self.assert_ec2_sample_config("default-series")
def test_ec2_sample_config_without_control_buckets(self):
self.assert_ec2_sample_config("control-bucket")
def test_ec2_verifies_placement(self):
# sample doesn't include credentials
self.setup_ec2_credentials()
self.config.write_sample()
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
config["environments"]["sample"]["placement"] = "random"
self.write_config(serializer.yaml_dump(config), other_path=True)
e = self.assertRaises(EnvironmentsConfigError,
self.config.load,
self.other_path)
self.assertIn("expected 'unassigned', got 'random'",
str(e))
with open(self.default_path) as file:
config = serializer.yaml_load(file.read())
# Authorized keys are required for environment serialization.
config["environments"]["sample"]["authorized-keys"] = "mickey"
config["environments"]["sample"]["placement"] = "local"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
data = self.config.get_default().get_serialization_data()
self.assertEqual(data["sample"]["placement"], "local")
def test_ec2_respects_default_series(self):
# sample doesn't include credentials
self.setup_ec2_credentials()
self.config.write_sample()
with open(self.default_path) as f:
config = serializer.yaml_load(f.read())
config["environments"]["sample"]["default-series"] = "astounding"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
provider = self.config.get_default().get_machine_provider()
self.assertEqual(provider.config["default-series"], "astounding")
def test_ec2_respects_ssl_hostname_verification(self):
self.setup_ec2_credentials()
self.config.write_sample()
with open(self.default_path) as f:
config = serializer.yaml_load(f.read())
config["environments"]["sample"]["ssl-hostname-verification"] = True
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
provider = self.config.get_default().get_machine_provider()
self.assertEqual(provider.config["ssl-hostname-verification"], True)
def test_maas_schema_requires(self):
requires = "maas-server maas-oauth admin-secret default-series".split()
for require in requires:
config = serializer.yaml_load(SAMPLE_MAAS)
del config["environments"]["sample"][require]
self.write_config(serializer.yaml_dump(config), other_path=True)
try:
self.config.load(self.other_path)
except EnvironmentsConfigError as error:
self.assertEquals(str(error),
"Environments configuration error: %s: "
"environments.sample.%s: "
"required value not found"
% (self.other_path, require))
else:
self.fail("Did not properly require %s when type == maas"
% require)
def test_maas_default_series(self):
config = serializer.yaml_load(SAMPLE_MAAS)
config["environments"]["sample"]["default-series"] = "magnificent"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
provider = self.config.get_default().get_machine_provider()
self.assertEqual(provider.config["default-series"], "magnificent")
def test_maas_verifies_placement(self):
config = serializer.yaml_load(SAMPLE_MAAS)
config["environments"]["sample"]["placement"] = "random"
self.write_config(serializer.yaml_dump(config), other_path=True)
e = self.assertRaises(
EnvironmentsConfigError, self.config.load, self.other_path)
self.assertIn("expected 'unassigned', got 'random'",
str(e))
config["environments"]["sample"]["placement"] = "local"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
data = self.config.get_default().placement
self.assertEqual(data, "local")
def test_lxc_requires_data_dir(self):
        """The lxc provider requires a data-dir value."""
config = serializer.yaml_load(SAMPLE_LOCAL)
self.write_config(serializer.yaml_dump(config), other_path=True)
error = self.assertRaises(
EnvironmentsConfigError, self.config.load, self.other_path)
self.assertIn("data-dir: required value not found", str(error))
def test_lxc_verifies_placement(self):
"""lxc dev only supports local placement."""
config = serializer.yaml_load(SAMPLE_LOCAL)
config["environments"]["sample"]["placement"] = "unassigned"
self.write_config(serializer.yaml_dump(config), other_path=True)
error = self.assertRaises(
EnvironmentsConfigError, self.config.load, self.other_path)
self.assertIn("expected 'local', got 'unassigned'", str(error))
def test_openstack_requires_default_image_id(self):
"""A VM image must be supplied for openstack provider."""
config = serializer.yaml_load(SAMPLE_OPENSTACK)
del config["environments"]["sample"]["default-image-id"]
self.write_config(serializer.yaml_dump(config), other_path=True)
error = self.assertRaises(
EnvironmentsConfigError, self.config.load, self.other_path)
self.assertIn("default-image-id: required value not found", str(error))
def test_openstack_ignores_placement(self):
"""The placement config is not verified for openstack provider."""
config = serializer.yaml_load(SAMPLE_OPENSTACK)
config["environments"]["sample"]["placement"] = "whatever"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
def test_openstack_s3_requires_default_image_id(self):
"""A VM image must be supplied for openstack_s3 provider."""
config = serializer.yaml_load(SAMPLE_OPENSTACK)
config["environments"]["sample"]["type"] = "openstack_s3"
del config["environments"]["sample"]["default-image-id"]
self.write_config(serializer.yaml_dump(config), other_path=True)
error = self.assertRaises(
EnvironmentsConfigError, self.config.load, self.other_path)
self.assertIn("default-image-id: required value not found", str(error))
def test_openstack_s3_ignores_placement(self):
"""The placement config is not verified for openstack_s3 provider."""
config = serializer.yaml_load(SAMPLE_OPENSTACK)
config["environments"]["sample"]["type"] = "openstack_s3"
config["environments"]["sample"]["placement"] = "whatever"
self.write_config(serializer.yaml_dump(config), other_path=True)
self.config.load(self.other_path)
# juju-0.7.orig/juju/environment/tests/test_environment.py
from juju.environment.tests.test_config import (
EnvironmentsConfigTestBase, SAMPLE_ENV)
from juju.providers import dummy
class EnvironmentTest(EnvironmentsConfigTestBase):
def test_attributes(self):
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
self.assertEquals(env.name, "myfirstenv")
self.assertEquals(env.type, "dummy")
self.assertEquals(env.origin, "distro")
def test_get_machine_provider(self):
"""
get_machine_provider() should return a MachineProvider instance
imported from a module named after the "type:" provided in the
machine provider configuration.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
machine_provider = env.get_machine_provider()
self.assertEquals(type(machine_provider), dummy.MachineProvider)
def test_get_machine_provider_passes_config_into_provider(self):
"""
get_machine_provider() should pass the machine provider configuration
when constructing the MachineProvider.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
dummy_provider = env.get_machine_provider()
self.assertEquals(dummy_provider.config.get("foo"), "bar")
def test_get_machine_provider_should_cache_results(self):
"""
get_machine_provider() must cache its results.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
machine_provider1 = env.get_machine_provider()
machine_provider2 = env.get_machine_provider()
self.assertIdentical(machine_provider1, machine_provider2)
def test_get_serialization_data(self):
"""
Getting the serialization data returns a dictionary with the
environment configuration.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
data = env.get_serialization_data()
self.assertEqual(
data,
{"myfirstenv":
{"type": "dummy",
"foo": "bar",
"dynamicduck": "magic"}})
def test_get_serialization_data_errors_passthrough(self):
"""Serialization errors are raised to the caller.
"""
self.write_config(SAMPLE_ENV)
self.config.load()
env = self.config.get("myfirstenv")
mock_env = self.mocker.patch(env)
mock_env.get_machine_provider()
mock_provider = self.mocker.mock(dummy.MachineProvider)
self.mocker.result(mock_provider)
mock_provider.get_serialization_data()
self.mocker.throw(SyntaxError())
self.mocker.replay()
self.assertRaises(SyntaxError, env.get_serialization_data)
# juju-0.7.orig/juju/environment/tests/data/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=99.10
DISTRIB_CODENAME="tremendous"
DISTRIB_DESCRIPTION="Ubuntu 99.10"
# juju-0.7.orig/juju/ftests/__init__.py
#
# juju-0.7.orig/juju/ftests/test_aws.py
"""
Validate AWS (EC2/S3) assumptions by exercising the API.
Requirements for functional test
- valid amazon credentials present in the environment as
AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID
"""
import random
from twisted.internet.defer import inlineCallbacks
from twisted.python.failure import Failure
from txaws.service import AWSServiceRegion
from txaws.s3.exception import S3Error
from juju.lib.testing import TestCase
class AWSFunctionalTest(TestCase):
def setUp(self):
region = AWSServiceRegion()
self.ec2 = region.get_ec2_client()
self.s3 = region.get_s3_client()
class EC2SecurityGroupTest(AWSFunctionalTest):
def setUp(self):
super(EC2SecurityGroupTest, self).setUp()
self.security_group = "juju-test-%s" % (random.random())
@inlineCallbacks
def tearDown(self):
yield self.ec2.delete_security_group(self.security_group)
@inlineCallbacks
def test_create_and_get_group(self):
"""Verify input/outputs of creating a security group.
Specifically we want to see if we can pick up the owner id off the
        group, which we need for group-to-group EC2 authorizations."""
data = yield self.ec2.create_security_group(
self.security_group, "test")
self.assertEqual(data, True)
info = yield self.ec2.describe_security_groups(self.security_group)
self.assertEqual(len(info), 1)
group = info.pop()
self.assertTrue(group.owner_id)
@inlineCallbacks
def test_create_and_authorize_group(self):
yield self.ec2.create_security_group(self.security_group, "test")
info = yield self.ec2.describe_security_groups(self.security_group)
group = info.pop()
yield self.ec2.authorize_security_group(
self.security_group,
            source_group_name=self.security_group,
            source_group_owner_id=group.owner_id)
info = yield self.ec2.describe_security_groups(self.security_group)
group = info.pop()
self.assertEqual(group.name, self.security_group)
class S3FilesTest(AWSFunctionalTest):
def setUp(self):
super(S3FilesTest, self).setUp()
self.control_bucket = "juju-test-%s" % (random.random())
return self.s3.create_bucket(self.control_bucket)
@inlineCallbacks
def tearDown(self):
listing = yield self.s3.get_bucket(self.control_bucket)
for ob in listing.contents:
yield self.s3.delete_object(self.control_bucket, ob.key)
yield self.s3.delete_bucket(self.control_bucket)
def test_put_object(self):
"""Verify input/outputs of putting an object in the bucket.
The output is just an empty string on success."""
d = self.s3.put_object(
self.control_bucket, "pirates/gold.txt", "blah blah")
def verify_result(result):
self.assertEqual(result, "")
d.addCallback(verify_result)
return d
@inlineCallbacks
def test_get_object(self):
"""Verify input/outputs of getting an object from the bucket."""
yield self.s3.put_object(
self.control_bucket, "pirates/ship.txt", "argh argh")
ob = yield self.s3.get_object(
self.control_bucket, "pirates/ship.txt")
self.assertEqual(ob, "argh argh")
def test_get_object_nonexistant(self):
"""Verify output when an object does not exist."""
d = self.s3.get_object(self.control_bucket, "pirates/treasure.txt")
def verify_result(result):
self.assertTrue(isinstance(result, Failure))
result.trap(S3Error)
d.addBoth(verify_result)
return d
# juju-0.7.orig/juju/ftests/test_connection.py
"""
Functional tests for secure zookeeper connections using an ssh forwarded port.
Requirements for functional test
- sshd running on localhost
- zookeeper running on localhost, listening on port 2181
- user can log into localhost via key authentication
- ~/.ssh/id_dsa exists and is configured as an authorized key
"""
import os
import pwd
import zookeeper
from juju.errors import NoConnection
from juju.lib.testing import TestCase
from juju.state.sshclient import SSHClient
from juju.tests.common import get_test_zookeeper_address
class ConnectionTest(TestCase):
def setUp(self):
super(ConnectionTest, self).setUp()
self.username = pwd.getpwuid(os.getuid())[0]
self.log = self.capture_logging("juju.state.sshforward")
self.old_user_name = SSHClient.remote_user
SSHClient.remote_user = self.username
self.client = SSHClient()
zookeeper.set_debug_level(0)
def tearDown(self):
super(ConnectionTest, self).tearDown()
zookeeper.set_debug_level(zookeeper.LOG_LEVEL_DEBUG)
SSHClient.remote_user = self.old_user_name
def test_connect(self):
"""
Forwarding a port spawns an ssh process with port forwarding arguments.
"""
connect_deferred = self.client.connect(
get_test_zookeeper_address(), timeout=20)
def validate_connected(client):
self.assertTrue(client.connected)
client.close()
connect_deferred.addCallback(validate_connected)
return connect_deferred
def test_invalid_host(self):
"""
        If a connection cannot be made before the timeout period, an
        exception is raised.

        With the sshclient layer, this test no longer returns a failure,
        and it is hard to clean up the process tunnel.
"""
SSHClient.remote_user = "rabbit"
connect_deferred = self.client.connect(
"foobar.example.com:2181", timeout=10)
self.failUnlessFailure(connect_deferred, NoConnection)
def validate_log(result):
output = self.log.getvalue()
self.assertTrue(output.strip().startswith(
"Invalid host for SSH forwarding"))
connect_deferred.addCallback(validate_log)
return connect_deferred
def test_invalid_user(self):
"""
        If a connection cannot be made before the timeout period, an
        exception is raised.

        With the sshclient layer, this test no longer returns a failure,
        and it is hard to clean up the process tunnel.
"""
SSHClient.remote_user = "rabbit"
connect_deferred = self.client.connect(
get_test_zookeeper_address(), timeout=10)
self.failUnlessFailure(connect_deferred, NoConnection)
def validate_log(result):
output = self.log.getvalue()
self.assertEqual(output.strip(), "Invalid SSH key")
connect_deferred.addCallback(validate_log)
return connect_deferred
# juju-0.7.orig/juju/ftests/test_ec2_provider.py
"""
Functional test for EC2 Provider.
Requirements for functional test
- valid amazon credentials present in the environment as
AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID
- an ssh key (id_dsa, id_rsa, identity) present in ~/.ssh
- if the key has a password an ssh-agent must be running with
SSH_AGENT_PID and SSH_AUTH_SOCK set in the environment.
These tests may take several minutes for each test.
"""
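# The credential requirements listed above can be checked up front before
# running the suite. A minimal sketch (the helper name and structure are
# illustrative only, not part of juju):

```python
import os

# Environment variables the functional tests expect (from the docstring above).
REQUIRED_VARS = ("AWS_SECRET_ACCESS_KEY", "AWS_ACCESS_KEY_ID")


def missing_aws_credentials(environ=None):
    """Return the names of required AWS variables absent from environ."""
    if environ is None:
        environ = os.environ
    return [name for name in REQUIRED_VARS if name not in environ]
```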
from cStringIO import StringIO
import os
import pwd
import sys
import zookeeper
from twisted.internet.defer import inlineCallbacks, Deferred, returnValue
from juju.errors import FileNotFound, EnvironmentNotFound
from juju.providers.ec2 import MachineProvider
from juju.state.sshclient import SSHClient
from juju.lib.testing import TestCase
def wait_for_startup(client, instances, interval=2):
"""Poll EC2 waiting for instance to transition to running state."""
# XXX should we instead be waiting for /initialized?
from twisted.internet import reactor
on_started = Deferred()
def poll_instance():
d = client.describe_instances(*[i.instance_id for i in instances])
d.addCallback(check_status)
def check_status(instances):
started = filter(lambda i: i.instance_state == "running", instances)
if len(started) == len(instances):
on_started.callback(instances)
else:
reactor.callLater(interval, poll_instance)
reactor.callLater(interval, poll_instance)
return on_started
def get_juju_branch_url():
"""
Inspect the current working tree, to determine the juju branch
to utilize when running functional tests in the cloud. If the current
    directory is a branch, then use its push location. If it is a
    checkout, then use its bound location.
Also verify the local tree has no uncommitted changes, and that all
local commits have been pushed upstream.
"""
import juju
from bzrlib import workingtree, branch, errors, transport
try:
tree, path = workingtree.WorkingTree.open_containing(
os.path.abspath(os.path.dirname(juju.__file__)))
except errors.NotBranchError:
return "lp:juju"
if tree.has_changes():
raise RuntimeError("Local branch has uncommitted changes")
# only a checkout will have a bound location, typically trunk
location = tree.branch.get_bound_location()
if location:
return location
    # else it is a development branch
location = tree.branch.get_push_location()
assert location, "Could not determine juju location for ftests"
    # verify the local branch has been pushed and is up to date
local_revno = tree.branch.revno()
location = location.replace("lp:", "bzr+ssh://bazaar.launchpad.net/")
t = transport.get_transport(location)
try:
remote_branch = branch.Branch.open_from_transport(t)
except errors.NotBranchError:
raise RuntimeError("Local branch not pushed")
remote_revno = remote_branch.revno()
if not local_revno <= remote_revno:
raise RuntimeError("Local branch has unpushed changes")
# the remote bzr invocation prefers lp: addresses
location = location.replace("bzr+ssh://bazaar.launchpad.net/", "lp:")
return str(location)
class EC2ProviderFunctionalTest(TestCase):
def setUp(self):
super(EC2ProviderFunctionalTest, self).setUp()
self.username = pwd.getpwuid(os.getuid())[0]
self.log = self.capture_logging("juju")
zookeeper.set_debug_level(0)
juju_branch = "" # get_juju_branch_url()
self.provider = MachineProvider("ec2-functional",
{"control-bucket": "juju-test-%s" % (self.username),
"admin-secret": "magic-beans",
"juju-branch": juju_branch})
class EC2MachineTest(EC2ProviderFunctionalTest):
def _filter_instances(self, instances):
provider_instances = []
group_name = "juju-%s" % self.provider.environment_name
for i in instances:
if i.instance_state not in ("running", "pending"):
continue
            if group_name not in i.reservation.groups:
continue
provider_instances.append(i)
return provider_instances
@inlineCallbacks
def test_shutdown(self):
"""
Shutting down the provider, terminates all instances associated to
the provider instance.
"""
running = yield self.provider.ec2.describe_instances()
running_prior = set(self._filter_instances(running))
result = yield self.provider.shutdown()
running_set = yield self.provider.ec2.describe_instances()
running_now = set(self._filter_instances(running_set))
if result:
shutdown = running_prior - running_now
self.assertEqual(len(result), len(shutdown))
result_ids = [r.instance_id for r in result]
for i in shutdown:
self.failUnlessIn(i.instance_id, result_ids)
@inlineCallbacks
def test_bootstrap_and_connect(self):
"""
Launching a bootstrap instance, creates an ec2 instance with
a zookeeper server running on it. This test may take up to 7m
"""
machines = yield self.provider.bootstrap()
instances = yield wait_for_startup(self.provider.ec2, machines)
test_complete = Deferred()
def verify_running():
sys.stderr.write("running; ")
@inlineCallbacks
def validate_connected(client):
self.assertTrue(client.connected)
sys.stderr.write("connected.")
exists_deferred, watch_deferred = client.exists_and_watch(
"/charms")
stat = yield exists_deferred
if stat:
test_complete.callback(None)
returnValue(True)
yield watch_deferred
stat = yield client.exists("/charms")
self.assertTrue(stat)
test_complete.callback(None)
def propogate_failure(failure):
test_complete.errback(failure)
return failure
def close_client(result, client):
client.close()
server = "%s:2181" % instances[0].dns_name
client = SSHClient()
client_deferred = client.connect(server, timeout=300)
client_deferred.addCallback(validate_connected)
client_deferred.addErrback(propogate_failure)
client_deferred.addBoth(close_client, client)
yield verify_running()
yield test_complete
    # set the timeout to something more reasonable for bootstrapping
test_bootstrap_and_connect.timeout = 300
@inlineCallbacks
def test_provider_with_nonexistant_zk_instance(self):
"""
        If the zookeeper instances stored in S3 do not exist, then
        connect should return the appropriate error message.
"""
self.addCleanup(self.provider.save_state, {})
yield self.provider.save_state({"zookeeper-instances": [
"i-a189723", "i-a213213"]})
d = self.provider.connect()
yield self.assertFailure(d, EnvironmentNotFound)
class EC2StorageTest(EC2ProviderFunctionalTest):
def setUp(self):
super(EC2StorageTest, self).setUp()
self.s3 = self.provider.s3
self.storage = self.provider.get_file_storage()
self.control_bucket = self.provider.config.get("control-bucket")
return self.s3.create_bucket(self.control_bucket)
@inlineCallbacks
def tearDown(self):
listing = yield self.s3.get_bucket(self.control_bucket)
for ob in listing.contents:
yield self.s3.delete_object(self.control_bucket, ob.key)
yield self.s3.delete_bucket(self.control_bucket)
@inlineCallbacks
def test_put_object(self):
content = "snakes eat rubies"
yield self.storage.put("files/reptile.txt", StringIO(content))
s3_content = yield self.s3.get_object(
self.control_bucket, "files/reptile.txt")
self.assertEqual(content, s3_content)
@inlineCallbacks
def test_get_object(self):
content = "snakes eat rubies"
yield self.storage.put("files/reptile.txt", StringIO(content))
file_obj = yield self.storage.get("files/reptile.txt")
s3_content = file_obj.read()
self.assertEqual(content, s3_content)
def test_get_object_nonexistant(self):
remote_path = "files/reptile.txt"
d = self.storage.get(remote_path)
self.failUnlessFailure(d, FileNotFound)
def validate_error_message(result):
self.assertEqual(
result.path, "s3://%s/%s" % (self.control_bucket, remote_path))
d.addCallback(validate_error_message)
return d
# juju-0.7.orig/juju/hooks/__init__.py
# juju-0.7.orig/juju/hooks/cli.py
import argparse
import base64
import copy
import json
import logging
import os
import sys
from argparse import ArgumentTypeError
from twisted.internet import defer
from twisted.internet import protocol
from juju.hooks.protocol import UnitAgentClient
from juju.lib.format import get_charm_formatter_from_env
_marker = object()
class CommandLineClient(object):
"""Template for writing Command Line Clients. Used to implement
the utility scripts available to hook authors. Provides a
framework for utilities connected to the Unit Agent process via a
UNIX Domain Socket. This provides facilities for standardized
logging, error handling, output transformation and exit codes.
There are a number of variables that can be set in a subclass to
help configure its behavior.
Instance Variables:
`exit_code` -- Indicate an error to the caller. The default
indicates no error. (default: 0)
`keyvalue_pairs` -- Commands may process key-value pairs in the
format 'alpha=a beta=b' as arguments. Setting this boolean to True
enables the parsing of these options and supports the additional
conventions described in the specifications/unit-agent-hooks
document. (default: False)
`require_cid` -- Whether the command requires a client_id to be
specified. (default: True)
"""
default_mode = "wb"
keyvalue_pairs = False
exit_code = 0
require_cid = True
manage_logging = True
manage_connection = True
def _setup_flags(self):
parser = self.parser
# Set up the default Arguments
parser.add_argument("-o", "--output",
type=argparse.FileType(self.default_mode),
help="""Specify an output file""")
parser.add_argument("-s", "--socket",
help="Unit Agent communicates with "
"tools over a socket. This value can be "
"overridden here or read from the "
"environment variable JUJU_AGENT_SOCKET"
)
parser.add_argument("--client-id",
help="A token used to connect the client "
"with an execution context and state "
"cache. This value can be overridden "
"here or read from the environment "
"variable JUJU_CLIENT_ID"
)
# output rendering
parser.add_argument("--format", default="smart")
# logging
parser.add_argument("--log-file", metavar="FILE",
default=sys.stderr,
type=argparse.FileType('a'),
help="Log output to file")
parser.add_argument("--log-level",
metavar="CRITICAL|DEBUG|INFO|ERROR|WARNING",
help="Display messages starting at given level",
type=parse_log_level, default=logging.WARNING)
def customize_parser(self):
"""Hook for subclasses to add special handling after the basic
parser and standard flags have been added. This hook is called
at such a time that if positional args are defined these will
be added before any key-value pair handling.
"""
pass
def setup_parser(self):
self.parser = argparse.ArgumentParser()
self._setup_flags()
self.customize_parser()
if self.keyvalue_pairs:
self.parser.add_argument("keyvalue_pairs", nargs="*")
return self.parser
def parse_args(self, arguments=None):
"""By default this processes command line arguments from
sys.argv. When `arguments` is passed, it should be a list of
arguments in the format normally provided by sys.argv.
arguments:
`arguments` -- optional list of arguments to parse in the
sys.argv standard format. (default: None)
"""
options = self.parser.parse_args(arguments)
self.options = options
if self.manage_logging:
self.setup_logging()
exit = False
if not exit and not options.socket:
options.socket = os.environ.get("JUJU_AGENT_SOCKET")
if not options.socket:
exit = SystemExit("No JUJU_AGENT_SOCKET/"
"-s option found")
# use argparse std error code for this error
exit.code = 2
if not exit and not options.client_id:
options.client_id = os.environ.get("JUJU_CLIENT_ID")
if not options.client_id and self.require_cid:
exit = SystemExit("No JUJU_CLIENT_ID/"
"--client-id option found")
exit.code = 2
if exit:
self.parser.print_usage(sys.stderr)
print >>sys.stderr, (str(exit))
raise exit
if self.keyvalue_pairs:
self.parse_kvpairs(self.options.keyvalue_pairs)
return options
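The lookup order above (explicit flag first, then the JUJU_AGENT_SOCKET environment variable, then a usage error with argparse's standard exit code 2) can be sketched standalone. This is a Python 3 sketch; `resolve_socket` is an illustrative name, not part of juju:

```python
import argparse
import os
import sys


def resolve_socket(argv, environ):
    # Mirror the flag-then-environment fallback of parse_args:
    # an explicit -s/--socket wins, otherwise JUJU_AGENT_SOCKET,
    # otherwise exit with argparse's standard usage-error code.
    parser = argparse.ArgumentParser()
    parser.add_argument("-s", "--socket")
    options = parser.parse_args(argv)
    socket_path = options.socket or environ.get("JUJU_AGENT_SOCKET")
    if not socket_path:
        parser.print_usage(sys.stderr)
        raise SystemExit(2)
    return socket_path
```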
def setup_logging(self):
logging.basicConfig(
format="%(asctime)s %(levelname)s: %(message)s",
level=self.options.log_level,
stream=self.options.log_file)
def parse_kvpairs(self, options):
formatter = get_charm_formatter_from_env()
data = formatter.parse_keyvalue_pairs(options)
# cache
self.options.keyvalue_pairs = data
return data
def _connect_to_agent(self):
from twisted.internet import reactor
def onConnectionMade(p):
self.client = p
return p
d = protocol.ClientCreator(
reactor, UnitAgentClient).connectUNIX(self.options.socket)
d.addCallback(onConnectionMade)
return d
def __call__(self, arguments=None):
from twisted.internet import reactor
self.setup_parser()
self.parse_args(arguments=arguments)
if self.manage_connection:
self._connect_to_agent().addCallback(self._run)
else:
reactor.callWhenRunning(self._run)
reactor.run()
sys.exit(self.exit_code)
def _run(self, result=None):
from twisted.internet import reactor
d = defer.maybeDeferred(self.run)
d.addCallbacks(self.render, self.render_error)
d.addBoth(lambda x: reactor.stop())
return d
def run(self):
"""Implemented by subclass. This method should implement any
behavior specific to the command and return (or yield with
inlineCallbacks) a value that will later be handed off to
render for formatting and output.
"""
pass
def render_error(self, result):
tb = result.getTraceback(elideFrameworkCode=True)
sys.stderr.write(tb)
logging.error(tb)
logging.error(str(result))
def render(self, result):
options = self.options
format = options.format
if options.output:
stream = options.output
else:
stream = sys.stdout
formatter = getattr(self, "format_%s" % format, None)
if formatter is not None:
formatter(result, stream)
else:
print >>sys.stderr, "unknown output format: %s" % format
if result:
print >>stream, str(result)
stream.flush()
def format_json(self, result, stream):
encoded = copy.copy(result)
if isinstance(result, dict):
for k, v in result.iteritems():
# Workaround the fact that JSON does not work with str
# values that have high bytes and are not actually UTF-8
# encoded; work around by first testing whether it can be
# decoded as UTF-8, and if not, wrapping as Base64
# encoded.
if isinstance(v, str):
try:
v.decode("utf8")
except UnicodeDecodeError:
encoded[k] = base64.b64encode(v)
json.dump(encoded, stream)
def format_smart(self, result, stream):
if result is not None:
charm_formatter = get_charm_formatter_from_env()
stream.write(charm_formatter.format_raw(result))
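The Base64 fallback in `format_json` can be sketched as a standalone Python 3 helper, with `bytes` standing in for Python 2 `str`; `json_safe` is an illustrative name, not part of juju:

```python
import base64
import json


def json_safe(result):
    # Mirror format_json's workaround: JSON cannot carry byte
    # strings with high bytes that are not valid UTF-8, so such
    # values are wrapped as Base64 before dumping.
    encoded = dict(result)
    for k, v in result.items():
        if isinstance(v, bytes):
            try:
                encoded[k] = v.decode("utf-8")
            except UnicodeDecodeError:
                encoded[k] = base64.b64encode(v).decode("ascii")
    return json.dumps(encoded, sort_keys=True)
```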
def parse_log_level(level):
"""Level name/level number => level number"""
if isinstance(level, basestring):
level = level.upper()
if level.isdigit():
level = int(level)
else:
# converts the name INFO to level number
level = logging.getLevelName(level)
if not isinstance(level, int):
logging.error("Invalid log level %s" % level)
level = logging.INFO
return level
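`parse_log_level` can be exercised in isolation. The sketch below is a Python 3 adaptation of the same logic (`str` in place of `basestring`); unknown level names still fall back to INFO:

```python
import logging


def parse_log_level(level):
    """Level name/level number => level number (Python 3 sketch)."""
    if isinstance(level, str):
        level = level.upper()
        if level.isdigit():
            level = int(level)
        else:
            # Maps a known name such as INFO to its numeric level;
            # unknown names yield a string, caught below.
            level = logging.getLevelName(level)
    if not isinstance(level, int):
        # Unknown names fall back to INFO rather than failing.
        level = logging.INFO
    return level
```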
def parse_port_protocol(port_protocol_string):
"""Returns (`port`, `protocol`) by converting `port_protocol_string`.
`port` is an integer for a valid port (1 through 65535).
`protocol` is restricted to TCP and UDP. TCP is the default.
Otherwise raises ArgumentTypeError(msg).
"""
split = port_protocol_string.split("/")
if len(split) == 2:
port_string, protocol = split
elif len(split) == 1:
port_string, protocol = split[0], "tcp"
else:
raise ArgumentTypeError(
"Invalid format for port/protocol, got %r" % port_protocol_string)
try:
port = int(port_string)
except ValueError:
raise ArgumentTypeError(
"Invalid port, must be an integer, got %r" % port_string)
if port < 1 or port > 65535:
raise ArgumentTypeError(
"Invalid port, must be from 1 to 65535, got %r" % port)
if protocol.lower() not in ("tcp", "udp"):
raise ArgumentTypeError(
"Invalid protocol, must be 'tcp' or 'udp', got %r" % protocol)
return port, protocol.lower()
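For reference, a Python 3 rendering of `parse_port_protocol`, shown wired into argparse as a `type=` callback the way OpenPortCli and ClosePortCli use it:

```python
import argparse


def parse_port_protocol(port_protocol_string):
    # "PORT[/PROTOCOL]": protocol defaults to tcp; port must be
    # an integer in 1..65535; protocol must be tcp or udp.
    parts = port_protocol_string.split("/")
    if len(parts) == 2:
        port_string, protocol = parts
    elif len(parts) == 1:
        port_string, protocol = parts[0], "tcp"
    else:
        raise argparse.ArgumentTypeError(
            "Invalid format for port/protocol, got %r" % port_protocol_string)
    try:
        port = int(port_string)
    except ValueError:
        raise argparse.ArgumentTypeError(
            "Invalid port, must be an integer, got %r" % port_string)
    if not 1 <= port <= 65535:
        raise argparse.ArgumentTypeError(
            "Invalid port, must be from 1 to 65535, got %r" % port)
    if protocol.lower() not in ("tcp", "udp"):
        raise argparse.ArgumentTypeError(
            "Invalid protocol, must be 'tcp' or 'udp', got %r" % protocol)
    return port, protocol.lower()


# Used as an argparse type, as the open-port/close-port commands do:
parser = argparse.ArgumentParser()
parser.add_argument("port_protocol", type=parse_port_protocol)
```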
juju-0.7.orig/juju/hooks/commands.py
import logging
import os
import pipes
import re
import sys
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.hooks.cli import (
CommandLineClient, parse_log_level, parse_port_protocol)
from juju.hooks.protocol import MustSpecifyRelationName
from juju.state.errors import InvalidRelationIdentity
BAD_CHARS = re.compile(r"[\-\./:()<>|?*]|(\\)")
class RelationGetCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
remote_unit = os.environ.get("JUJU_REMOTE_UNIT")
self.parser.add_argument(
"-r", dest="relation_id", default="", metavar="RELATION ID")
self.parser.add_argument("settings_name", default="", nargs="?")
self.parser.add_argument("unit_name", default=remote_unit, nargs="?")
@inlineCallbacks
def run(self):
if self.options.settings_name == "-":
self.options.settings_name = ""
if self.options.unit_name is None:
print >>sys.stderr, "Unit name is not defined"
return
result = None
try:
result = yield self.client.relation_get(self.options.client_id,
self.options.relation_id,
self.options.unit_name,
self.options.settings_name)
except InvalidRelationIdentity, e:
# This prevents the exception from getting wrapped by AMP
print >>sys.stderr, e.relation_ident
except Exception, e:
print >>sys.stderr, str(e)
returnValue(result)
def format_shell(self, result, stream):
options = self.options
settings_name = options.settings_name
if settings_name and settings_name != "-":
# result should be a single value
result = {settings_name.upper(): result}
if result:
errs = []
for k, v in sorted(os.environ.items()):
if k.startswith("VAR_"):
print >>stream, "%s=" % (k.upper(), )
errs.append(k)
for k, v in sorted(result.items()):
k = BAD_CHARS.sub("_", k.upper())
v = pipes.quote(v)
print >>stream, "VAR_%s=%s" % (k.upper(), v)
# Order of output within streams is assured, but we output
# on (commonly) two streams here and the ordering of those
# messages is significant to the user. Make a best
# effort. However, this cannot be guaranteed when these
# streams are collected by `HookProtocol`.
stream.flush()
if errs:
print >>sys.stderr, "The following were omitted from " \
"the environment. VAR_ prefixed variables indicate a " \
"usage error."
print >>sys.stderr, "".join(errs)
def relation_get():
"""Entry point for relation-get"""
client = RelationGetCli()
sys.exit(client())
class RelationSetCli(CommandLineClient):
keyvalue_pairs = True
def customize_parser(self):
self.parser.add_argument(
"-r", dest="relation_id", default="", metavar="RELATION ID")
@inlineCallbacks
def run(self):
try:
yield self.client.relation_set(self.options.client_id,
self.options.relation_id,
self.options.keyvalue_pairs)
except InvalidRelationIdentity, e:
# This prevents the exception from getting wrapped by AMP
print >>sys.stderr, e.relation_ident
except Exception, e:
print >>sys.stderr, str(e)
def relation_set():
"""Entry point for relation-set."""
client = RelationSetCli()
sys.exit(client())
class RelationIdsCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
relation_name = os.environ.get("JUJU_RELATION", "")
self.parser.add_argument(
"relation_name",
metavar="RELATION NAME",
nargs="?",
default=relation_name,
help=("Specify the relation name of the relation ids to list. "
"Defaults to $JUJU_RELATION, if available."))
@inlineCallbacks
def run(self):
if not self.options.relation_name:
raise MustSpecifyRelationName()
result = yield self.client.relation_ids(
self.options.client_id, self.options.relation_name)
returnValue(result)
def format_smart(self, result, stream):
for ident in result:
print >>stream, ident
def relation_ids():
"""Entry point for relation-ids."""
client = RelationIdsCli()
sys.exit(client())
class ListCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument(
"-r", dest="relation_id", default="", metavar="RELATION ID")
@inlineCallbacks
def run(self):
result = None
try:
result = yield self.client.list_relations(self.options.client_id,
self.options.relation_id)
except InvalidRelationIdentity, e:
# This prevents the exception from getting wrapped by AMP
print >>sys.stderr, e.relation_ident
except Exception, e:
print >>sys.stderr, str(e)
returnValue(result)
def format_eval(self, result, stream):
""" eval `juju-list` """
print >>stream, "export JUJU_MEMBERS=\"%s\"" % (" ".join(result))
def format_smart(self, result, stream):
for member in result:
print >>stream, member
def relation_list():
"""Entry point for relation-list."""
client = ListCli()
sys.exit(client())
class LoggingCli(CommandLineClient):
keyvalue_pairs = False
require_cid = False
def customize_parser(self):
self.parser.add_argument("message", nargs="+")
self.parser.add_argument("-l",
metavar="CRITICAL|DEBUG|INFO|ERROR|WARNING",
help="Send log message at the given level",
type=parse_log_level, default=logging.INFO)
def run(self, result=None):
return self.client.log(self.options.l,
self.options.message)
def render(self, result):
return None
def log():
"""Entry point for juju-log."""
client = LoggingCli()
sys.exit(client())
class ConfigGetCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument("option_name", default="", nargs="?")
@inlineCallbacks
def run(self):
# handle option_name being explicitly skipped on the cli
result = yield self.client.config_get(self.options.client_id,
self.options.option_name)
returnValue(result)
def config_get():
"""Entry point for config-get"""
client = ConfigGetCli()
sys.exit(client())
class OpenPortCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument(
"port_protocol",
metavar="PORT[/PROTOCOL]",
help="The port to open. The protocol defaults to TCP.",
type=parse_port_protocol)
def run(self):
return self.client.open_port(
self.options.client_id, *self.options.port_protocol)
def open_port():
"""Entry point for open-port."""
client = OpenPortCli()
sys.exit(client())
class ClosePortCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument(
"port_protocol",
metavar="PORT[/PROTOCOL]",
help="The port to close. The protocol defaults to TCP.",
type=parse_port_protocol)
def run(self):
return self.client.close_port(
self.options.client_id, *self.options.port_protocol)
def close_port():
"""Entry point for close-port."""
client = ClosePortCli()
sys.exit(client())
class UnitGetCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument("setting_name")
@inlineCallbacks
def run(self):
result = yield self.client.get_unit_info(self.options.client_id,
self.options.setting_name)
returnValue(result["data"])
def unit_get():
"""Entry point for unit-get"""
client = UnitGetCli()
sys.exit(client())
juju-0.7.orig/juju/hooks/executor.py
""" Hook Execution.
"""
import os
import fnmatch
import logging
import tempfile
from twisted.internet.defer import (
inlineCallbacks, DeferredQueue, Deferred, DeferredLock, returnValue,
DeferredFilesystemLock)
from twisted.internet.error import ProcessExitedAlready
DEBUG_HOOK_TEMPLATE = r"""#!/bin/bash
set -e
export JUJU_DEBUG=$(mktemp -d)
exec > $JUJU_DEBUG/debug.log 2>&1
# Save environment variables and export them for sourcing.
FILTER='^\(LS_COLORS\|LESSOPEN\|LESSCLOSE\|PWD\)='
env | grep -v $FILTER > $JUJU_DEBUG/env.sh
sed -i 's/^/export /' $JUJU_DEBUG/env.sh
# Create an internal script which will load the hook environment.
cat > $JUJU_DEBUG/hook.sh <<END
#!/bin/bash
. $JUJU_DEBUG/env.sh
echo \$\$ > $JUJU_DEBUG/hook.pid
exec /bin/bash
END
chmod +x $JUJU_DEBUG/hook.sh
# If the session already exists, the ssh command won the race, so just use it.
# The beauty below is a workaround for a bug in tmux (1.5 in Oneiric) or
# epoll that doesn't support /dev/null or whatever. Without it the
# command hangs.
tmux new-session -d -s $JUJU_UNIT_NAME 2>&1 | cat > /dev/null || true
tmux new-window -t $JUJU_UNIT_NAME -n {hook_name} "$JUJU_DEBUG/hook.sh"
# If we exit for whatever reason, kill the hook shell.
exit_handler() {
if [ -f $JUJU_DEBUG/hook.pid ]; then
kill -9 $(cat $JUJU_DEBUG/hook.pid) || true
fi
}
trap exit_handler EXIT
# Wait for the hook shell to start, and then wait for it to exit.
while [ ! -f $JUJU_DEBUG/hook.pid ]; do
sleep 1
done
HOOK_PID=$(cat $JUJU_DEBUG/hook.pid)
while kill -0 "$HOOK_PID" 2> /dev/null; do
sleep 1
done
"""
class HookExecutor(object):
"""Executes scheduled hooks.
A typical unit agent is subscribed to multiple event streams
across unit and relation lifecycles, all of which will attempt to
execute hooks in response to events. In order to serialize hook
execution and provide observability, a hook executor is shared
across the different components that want to execute hooks.
"""
STOP = object()
LOCK_PATH = "/var/lib/juju/hook.lock"
def __init__(self):
self._running = False
self._executions = DeferredQueue()
self._observer = None
self._log = logging.getLogger("hook.executor")
self._run_lock = DeferredLock()
# Serialized container hook execution
self._fs_lock = DeferredFilesystemLock(self.LOCK_PATH)
# The currently executing hook invoker. None if no hook is executing.
self._invoker = None
# The currently executing hook's context. None if no hook is executing.
self._hook_context = None
# The current names of hooks that should be debugged.
self._debug_hook_names = None
# The path to the last utilized tempfile debug hook.
self._debug_hook_file_path = None
@property
def running(self):
"""Returns a boolean, denoting if the executor is running."""
return self._running
@inlineCallbacks
def start(self):
"""Start the hook executor.
After the executor is started, it will continue to serially execute
any queued hook executions.
"""
assert self._running is False, "Already running"
self._running = True
self._log.debug("started")
while self._running:
next = yield self._executions.get()
# The stop logic here is to allow for two different
# scenarios. One is if the executor is currently waiting on
# the queue, putting a stop value there will immediately
# wake it up and cause it to stop.
# The other distinction is more subtle: if we invoke
# start/stop/start on the executor while it is
# executing a slow hook, then when the executor
# finishes with the hook it may now be in the
# running state, resulting in two executor closures
# executing hooks. We track stops to ensure that only one
# executor closure is running at a time.
if next is self.STOP:
continue
yield self._run_lock.acquire()
if not self._running:
self._run_lock.release()
continue
yield self._fs_lock.deferUntilLocked()
try:
yield self._run_one(*next)
finally:
try:
self._fs_lock.unlock()
except ValueError:
# Defensive: if on unlock we're not the owner, the
# implementation will raise an error. We don't care, as long
# as the system is not blocked by us; the lock will sanitize.
pass
self._run_lock.release()
@inlineCallbacks
def _run_one(self, invoker, path, exec_deferred):
"""Run a hook.
"""
hook_path = self.get_hook_path(path)
if not os.path.exists(hook_path):
self._log.info(
"Hook does not exist, skipping %s", hook_path)
exec_deferred.callback(False)
if self._observer:
self._observer(path)
returnValue(None)
self._log.debug("Running hook: %s", path)
# Store for context for callbacks, execution is serialized.
self._invoker = invoker
self._hook_context = invoker.get_context()
try:
yield invoker(hook_path)
except Exception, e:
self._invoker = self._hook_context = None
self._log.debug("Hook error: %s %s", path, e)
exec_deferred.errback(e)
else:
self._invoker = self._hook_context = None
self._log.debug("Hook complete: %s", path)
exec_deferred.callback(True)
if self._observer:
self._observer(path)
@inlineCallbacks
def stop(self):
"""Stop hook executions.
Returns a deferred that fires when the executor has stopped.
"""
assert self._running, "Already stopped"
yield self._run_lock.acquire()
self._running = False
self._executions.put(self.STOP)
self._run_lock.release()
self._log.debug("stopped")
@inlineCallbacks
def run_priority_hook(self, invoker, hook_path):
"""Execute a hook while the executor is stopped.
Executes a hook immediately, ignoring the existing queued
hook executions, requires the hook executor to be stopped.
"""
yield self._run_lock.acquire()
try:
assert not self._running, "Executor must not be running"
exec_deferred = Deferred()
yield self._run_one(invoker, hook_path, exec_deferred)
finally:
self._run_lock.release()
yield exec_deferred
def set_observer(self, observer):
"""Set a callback hook execution observer.
The callback receives a single parameter, the path to the hook,
and is invoked after the hook has been executed.
"""
self._observer = observer
def get_hook_context(self, client_id):
"""Retrieve the context of the currently executing hook.
This serves as the integration point with the hook api server,
which utilizes this function to retrieve a hook context for
a given client. Since we're serializing execution, it's effectively
a constant lookup to the currently executing hook's context.
"""
return self._hook_context
def get_hook_path(self, hook_path):
"""Retrieve a hook path. We use this to enable debugging.
:param hook_path: The requested hook path to execute.
If the executor is in debug mode, a path to a debug hook
is returned.
"""
hook_name = os.path.basename(hook_path)
# Cleanup/Release any previous hook debug scripts.
if self._debug_hook_file_path:
os.unlink(self._debug_hook_file_path)
self._debug_hook_file_path = None
# Check if debug is active, if not use the requested hook.
if not self._debug_hook_names:
return hook_path
# Check if it's a hook we want to debug
found = False
for debug_name in self._debug_hook_names:
if fnmatch.fnmatch(hook_name, debug_name):
found = True
if not found:
return hook_path
# Setup a debug hook script.
self._debug_hook_file_path = self._write_debug_hook(hook_name)
return self._debug_hook_file_path
def _write_debug_hook(self, hook_name):
debug_hook = DEBUG_HOOK_TEMPLATE.replace("{hook_name}", hook_name)
debug_hook_file = tempfile.NamedTemporaryFile(
suffix="-%s" % hook_name, delete=False)
debug_hook_file.write(debug_hook)
debug_hook_file.flush()
# We have to close the hook file, else linux throws a Text
# File busy on exec because the file is open for write.
debug_hook_file.close()
os.chmod(debug_hook_file.name, 0700)
return debug_hook_file.name
def get_invoker(self, client_id):
"""Retrieve the invoker of the currently executing hook.
This method enables a lookup point for the hook API.
"""
return self._invoker
def set_debug(self, hook_names):
"""Set some hooks to be debugged. Also used to clear debug.
:param hook_names: A list of hook names to debug. A value of
None disables debugging and ends any debug session
currently underway.
"""
if hook_names is not None and not isinstance(hook_names, list):
raise AssertionError("Invalid hook names %r" % (hook_names))
# Terminate an existing debug session when the debug ends.
if hook_names is None and self._invoker:
try:
self._invoker.send_signal("HUP")
except (ProcessExitedAlready, ValueError):
pass
self._debug_hook_names = hook_names
def __call__(self, invoker, hook_path):
"""Schedule a hook for execution.
Returns a deferred that fires when the hook has been executed.
"""
exec_deferred = Deferred()
self._executions.put(
(invoker, hook_path, exec_deferred))
return exec_deferred
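The temp-file handling in `_write_debug_hook` (write, flush, close before exec so Linux does not refuse with "Text file busy", then chmod 0700) can be exercised standalone in Python 3; `write_executable_script` is an illustrative name, not part of juju:

```python
import os
import stat
import subprocess
import tempfile


def write_executable_script(contents, suffix=""):
    # Mirrors _write_debug_hook: write to a NamedTemporaryFile with
    # delete=False, close it before exec (a file open for write
    # cannot be executed on Linux), then mark it owner-rwx (0700).
    script = tempfile.NamedTemporaryFile(
        mode="w", suffix=suffix, delete=False)
    script.write(contents)
    script.flush()
    script.close()
    os.chmod(script.name, stat.S_IRWXU)
    return script.name


path = write_executable_script("#!/bin/sh\necho hook-ran\n")
output = subprocess.check_output([path]).decode().strip()
os.unlink(path)
```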
juju-0.7.orig/juju/hooks/invoker.py
import os
import sys
from twisted.internet import protocol
from twisted.internet.defer import Deferred, inlineCallbacks, returnValue
from twisted.python.failure import Failure
from juju import errors
from juju.state.errors import RelationIdentNotFound, InvalidRelationIdentity
from juju.state.hook import RelationHookContext
class HookProtocol(protocol.ProcessProtocol):
"""Protocol used to communicate between the unit agent and hook process.
This class manages events associated with the hook process,
including its exit and status of file descriptors.
"""
def __init__(self, hook_name, context, log=None):
self._context = context
self._hook_name = hook_name
self._log = log
# The process has exited. The value of this Deferred is
# the exit code, and only if it is 0. Otherwise a
# `CharmInvocationError` is communicated through this
# Deferred.
self.exited = Deferred()
# The process has ended, that is, its file descriptors
# are closed. Output can now be fully read. The deferred
# semantics are the same as `exited` above.
self.ended = Deferred()
def outReceived(self, data):
"""Log `data` from stdout until the child process has ended."""
self._log.info(data)
def errReceived(self, data):
"""Log `data` from stderr until the child process has ended."""
self._log.info(data)
def _process_reason(self, reason, deferred):
"""Common code for working with both processEnded and processExited.
The semantics of `exited` and `ended` are the same with
respect to how they process the status code; the difference is
when these events occur. For more, see :class:`Invoker`.
"""
exit_code = reason.value.exitCode
if exit_code == 0:
return deferred.callback(exit_code)
elif exit_code is None and reason.value.signal:
error = errors.CharmInvocationError(
self._hook_name, exit_code, signal=reason.value.signal)
else:
error = errors.CharmInvocationError(self._hook_name, exit_code)
deferred.errback(error)
def processExited(self, reason):
"""Called when the process has exited."""
self._process_reason(reason, self.exited)
def processEnded(self, reason):
"""Called when file descriptors for the process are closed."""
self._process_reason(reason, self.ended)
class FormatSettingChanges(object):
"""Wrapper to delay executing __str__ of changes until written, if at all.
:param list changes: Each change is a pair (`relation_ident`,
`item`), where `item` may be an `AddedItem`, `DeletedItem`, or
`ModifiedItem`. If `relation_ident` is None, this implies that
it is a setting on the implied (or parent) context; it is
sorted first and the relation_ident for the implied context is
not logged.
"""
def __init__(self, changes):
self.changes = changes
def __str__(self):
changes = sorted(
self.changes,
key=lambda (relation_ident, item): (relation_ident, item.key))
lines = []
for relation_ident, item in changes:
if relation_ident is None:
lines.append(" %s" % str(item))
else:
lines.append(" %s on %r" % (str(item), relation_ident))
return "\n".join(lines)
class Invoker(object):
"""Responsible for the execution and management of hook invocation.
In a nutshell, *how* hooks are invoked, not *when* or *why*.
Responsible for the following:
* Manages socket connection with the unit agent.
* Connects the child process stdout/stderr file descriptors to
logging.
* Handles the exit of the hook process, including reporting its
exit code.
* Cleans up resources of the hook process upon its exit.
It's important to understand the difference between a process
exiting and the process ending (using the terminology established
by Twisted). Process exit is simple: it is the first event, and
occurs when the process returns its status code on exit.
Normally process ending occurs very shortly thereafter,
however, it may be briefly delayed because of pending writes to
its file descriptors.
In certain cases, however, hook scripts may invoke poorly written
commands that fork child processes in the background that will
wait around indefinitely, but do not close their file
descriptors. In this case, it is the responsibility of the Invoker
to wait briefly (for now hardcoded to 5 seconds), then reap such
processes.
"""
def __init__(self, context, change, client_id, socket_path,
unit_path, logger):
"""Takes the following arguments:
`context`: a `juju.state.hook.HookContext`
`change`: a `juju.state.hook.RelationChange`
`client_id`: a string uniquely identifying a client session
`socket_path`: the path to the UNIX Domain socket used by
clients to communicate with the Unit Agent
`logger`: instance of a `logging.Logger` object used to capture
hook output
"""
self.environment = {}
self._context = context
self._relation_contexts = {}
self._change = change
self._client_id = client_id
self._socket_path = socket_path
self._unit_path = unit_path
self._log = logger
self._charm_format = None
# The twisted.internet.process.Process instance.
self._process = None
# The hook executable path
self._process_executable = None
# Deferred tracking whether the process HookProtocol is ended
self._ended = None
# When set, a delayed call that ensures the process is
# properly terminated with loseConnection
self._reaper = None
# Add the initial context to the relation contexts if it is in
# fact a relation hook context
if isinstance(context, RelationHookContext):
self._relation_contexts[context.relation_ident] = context
@inlineCallbacks
def start(self):
"""Cache relation hook contexts for all relation idents."""
# Get all relation idents (None means "all")
relation_idents = set((yield self._context.get_relation_idents(None)))
if isinstance(self._context, RelationHookContext):
# Exclude the parent context for being looked up as a child
relation_idents.discard(self._context.relation_ident)
display_parent_relation_ident = " on %r" % \
self._context.relation_ident
else:
display_parent_relation_ident = ""
for relation_ident in relation_idents:
child = yield self._context.get_relation_hook_context(
relation_ident)
self._relation_contexts[relation_ident] = child
self._log.debug("Cached relation hook contexts%s: %r" % (
display_parent_relation_ident,
sorted(relation_idents)))
service = yield self._context.get_local_service()
charm = yield service.get_charm_state()
self._charm_format = (yield charm.get_metadata()).format
@property
def charm_format(self):
return self._charm_format
@property
def ended(self):
return self._ended
@property
def unit_path(self):
return self._unit_path
def get_environment(self):
"""
Returns the environment used to run the hook as a dict.
Defaults are provided based on information passed to __init__.
By setting keys inside Invoker.environment you can override
the defaults or provide additional variables.
"""
base = dict(JUJU_AGENT_SOCKET=self._socket_path,
JUJU_CLIENT_ID=self._client_id,
CHARM_DIR=os.path.join(self._unit_path, "charm"),
_JUJU_CHARM_FORMAT=str(self.charm_format),
JUJU_ENV_UUID=os.environ["JUJU_ENV_UUID"],
JUJU_UNIT_NAME=os.environ["JUJU_UNIT_NAME"],
DEBIAN_FRONTEND="noninteractive",
APT_LISTCHANGES_FRONTEND="none",
PATH=os.environ["PATH"],
JUJU_PYTHONPATH=":".join(sys.path))
base.update(self.environment)
return self.get_environment_from_change(base, self._change)
def get_environment_from_change(self, env, change):
"""Supplement the default environment with variables
originating from the `change` argument to __init__.
"""
return env
def get_context(self):
"""Returns the hook context for the invocation."""
return self._context
def get_cached_relation_hook_context(self, relation_ident):
"""Returns cached hook context corresponding to `relation_ident`"""
try:
return self._relation_contexts[relation_ident]
except KeyError:
parts = relation_ident.split(":")
if len(parts) != 2 or not parts[1].isdigit():
raise InvalidRelationIdentity(relation_ident)
else:
raise RelationIdentNotFound(relation_ident)
@inlineCallbacks
def get_relation_idents(self, relation_name):
"""Returns valid relation instances for the given name."""
idents = yield self._context.get_relation_idents(relation_name)
returnValue(
list(set(idents).intersection(set(self._relation_contexts))))
def validate_hook(self, hook_filename):
"""Verify that the hook_filename exists and is executable. """
if not os.path.exists(hook_filename):
raise errors.FileNotFound(hook_filename)
if not os.access(hook_filename, os.X_OK):
raise errors.CharmError(hook_filename,
"hook is not executable")
def send_signal(self, signal_id):
"""Send a signal of the given signal_id.
`signal_id`: numeric signal ids are used as given; some
symbolic string values are also interpreted, see
``twisted.internet.process._BaseProcess.signalProcess`` for
additional details.
Raises a `ValueError` if the process doesn't exist or
`ProcessExitedAlready` if the process has already ended.
"""
if not self._process:
raise ValueError("No Process")
return self._process.signalProcess(signal_id)
def _ensure_process_termination(self, ignored):
"""Cancels any scheduled reaper and terminates hook process, if around.
Canceling the reaper itself is necessary to ensure that
deferreds like this are not left in the reactor. This would
otherwise be the case for tests that await the log via the
`Invoker.end` deferred.
"""
if self._reaper:
if not self._reaper.called:
self._reaper.cancel()
self._process.loseConnection()
@inlineCallbacks
def _cleanup_process(self, hook, result):
"""Performs process cleanup:
* Flushes any changes (eg relation settings made by the
hook)
* Ensures that the result will be the exit code of the
process (if 0), or the `CharmInvocationError` from the
underlying `HookProtocol`, with cleaned up traceback.
* Also schedules a reaper to be called later that ensures
process termination.
"""
message = result
if isinstance(message, Failure):
message = message.getTraceback(elideFrameworkCode=True)
self._log.debug("hook %s exited, exit code %s." % (
os.path.basename(hook), message))
# Ensure that the process is terminated (via loseConnection)
# no more than 5 seconds (arbitrary) after it exits, unless it
# normally ends. If ended, the reaper is cancelled to ensure
# it is not left in the reactor.
#
# The 5 seconds was chosen to make it vanishingly unlikely
# that any output would be lost (as might be *occasionally*
# seen with a 50ms threshold in actual testing).
from twisted.internet import reactor
self._reaper = reactor.callLater(5, self._process.loseConnection)
# Flush context changes back to zookeeper if hook was successful.
if result == 0 and self._context:
relation_setting_changes = []
for context in self._relation_contexts.itervalues():
changes = yield context.flush()
if changes:
for change in changes:
if context is self._context:
relation_setting_changes.append((None, change))
else:
# Only log relation idents for relation settings
# on child relation hook contexts
relation_setting_changes.append(
(context.relation_ident, change))
if relation_setting_changes:
if hasattr(self._context, "relation_ident"):
display_parent_relation_ident = " on %r" % \
self._context.relation_ident
else:
display_parent_relation_ident = ""
self._log.debug(
"Flushed values for hook %r%s\n%s",
os.path.basename(hook),
display_parent_relation_ident,
FormatSettingChanges(relation_setting_changes))
returnValue(result)
def __call__(self, hook):
"""Execute `hook` in a runtime context and returns status code.
The `hook` parameter should be a complete path to the desired
executable. The returned value is a `Deferred` that is called
when the hook exits.
"""
# Sanity check the hook.
self.validate_hook(hook)
# Setup for actual invocation
env = self.get_environment()
hook_proto = HookProtocol(hook, self._context, self._log)
exited = hook_proto.exited
self._ended = ended = hook_proto.ended
from twisted.internet import reactor
self._process = reactor.spawnProcess(
hook_proto, hook, [hook], env,
os.path.join(self._unit_path, "charm"))
# Manage cleanup after hook exits
def cb_cleanup_process(result):
return self._cleanup_process(hook, result)
exited.addBoth(cb_cleanup_process)
ended.addBoth(self._ensure_process_termination)
return exited
juju-0.7.orig/juju/hooks/protocol.py 0000644 0000000 0000000 00000044404 12135220114 015732 0 ustar 0000000 0000000 """
Protocol
Twisted AMP protocol used between the UnitAgent (via the
juju/hooks/invoker template) and client scripts invoked by hooks on
behalf of charm authors.
Interactions with the server happen through an exchange of
commands. Each interaction with the UnitAgent is coordinated through
the use of a single command.
These commands have their concrete implementation relative to server
state in the UnitAgentServer class. The utility methods in
UnitAgentClient provide a synchronous interface for scripts derived
from juju.hooks.cli.
To extend the system with additional commands the following pattern is
used.
- Author a new BaseCommand subclass outlining the arguments and
return values the Command needs.
- Implement a responder for that command in UnitAgentServer returning
a dict with the response agreed upon by the new Command
- Implement a client side callable in UnitAgentClient which handles
any pre-wire data marshaling (with the goal of mapping to the
Command object's contract) and returns a result after waiting for any
asynchronous actions to complete.
UnitAgentClient and UnitAgentServer act as the client and server sides
of an RPC interface. Due to this they have a number of arguments in
common which are documented here.
arguments:
`client_id` -- Client specifier identifying a client to the server
side, thus connecting it with a juju.state.hook.HookContext (str)
`unit_name` -- String of the name of the unit being queried or
manipulated.
"""
import logging
from twisted.internet import defer
from twisted.internet import protocol
from twisted.protocols import amp
from juju.errors import JujuError
from juju.lib import serializer
from juju.lib.format import get_charm_formatter, get_charm_formatter_from_env
from juju.state.errors import (
InvalidRelationIdentity, RelationStateNotFound, UnitRelationStateNotFound,
RelationIdentNotFound)
from juju.state.hook import RelationHookContext
class NoSuchUnit(JujuError):
"""
The requested Unit Name wasn't found
"""
# AMP currently cannot construct the 3 required arguments for
# UnitRelationStateNotFound. This captures the error message in
# a way that can pass over the wire.
pass
class NotRelationContext(JujuError):
"""Relation commands can only be used in relation hooks"""
class NoSuchKey(JujuError):
""" The requested key did not exist.
"""
class MustSpecifyRelationName(JujuError):
"""No relation name was specified."""
def __str__(self):
return "Relation name must be specified"
class BaseCommand(amp.Command):
errors = {
NoSuchUnit: "NoSuchUnit",
NoSuchKey: "NoSuchKey",
NotRelationContext: "NotRelationContext",
UnitRelationStateNotFound: "UnitRelationStateNotFound",
MustSpecifyRelationName: "MustSpecifyRelationName",
InvalidRelationIdentity: "InvalidRelationIdentity",
RelationStateNotFound: "RelationStateNotFound",
RelationIdentNotFound: "RelationIdentNotFound"
}
# All the commands below this point should be documented in the
# specification in specifications/unit-agent-hooks.
class RelationGetCommand(BaseCommand):
commandName = "relation_get"
arguments = [("client_id", amp.String()),
("relation_id", amp.String()),
("unit_name", amp.String()),
("setting_name", amp.String())]
response = [("data", amp.String()),
("charm_format", amp.Integer())]
class RelationSetCommand(BaseCommand):
commandName = "relation_set"
arguments = [("client_id", amp.String()),
("relation_id", amp.String()),
("blob", amp.String())]
response = []
class RelationIdsCommand(BaseCommand):
commandName = "relation_ids"
arguments = [("client_id", amp.String()),
("relation_name", amp.String())]
response = [("ids", amp.String())]
class ListRelationsCommand(BaseCommand):
arguments = [("client_id", amp.String()),
("relation_id", amp.String())]
# whitespace delimited string
response = [("members", amp.String())]
class LogCommand(BaseCommand):
arguments = [("level", amp.Integer()),
("message", amp.String())]
response = []
class ConfigGetCommand(BaseCommand):
commandName = "config_get"
arguments = [("client_id", amp.String()),
("option_name", amp.String())]
response = [("data", amp.String())]
class OpenPortCommand(BaseCommand):
commandName = "open_port"
arguments = [("client_id", amp.String()),
("port", amp.Integer()),
("proto", amp.String())]
response = []
class ClosePortCommand(BaseCommand):
commandName = "close_port"
arguments = [("client_id", amp.String()),
("port", amp.Integer()),
("proto", amp.String())]
response = []
class UnitGetCommand(BaseCommand):
commandName = "get_unit_info"
arguments = [("client_id", amp.String()),
("setting_name", amp.String())]
response = [("data", amp.String())]
def require_relation_context(context):
"""Is this a valid context for relation hook commands?
A guard for relation methods ensuring they have the proper
RelationHookContext. A NotRelationContext exception is raised when
a non-RelationHookContext is provided.
"""
if not isinstance(context, RelationHookContext):
raise NotRelationContext(
"Calling relation related method without relation context: %s" %
type(context))
class UnitAgentServer(amp.AMP, object):
"""
Protocol used by the UnitAgent to provide a server side to CLI
tools
"""
def connectionMade(self):
"""Inform the factory a connection was made.
"""
super(UnitAgentServer, self).connectionMade()
self.factory.connectionMade(self)
@RelationGetCommand.responder
@defer.inlineCallbacks
def relation_get(self,
client_id, relation_id, unit_name, setting_name):
"""Get settings from a state.hook.RelationHookContext
:param setting_name: optional setting_name (str) indicating that
the client requested a single value only.
"""
context = self.factory.get_context(client_id)
invoker = self.factory.get_invoker(client_id)
if relation_id:
yield self.factory.log(
logging.DEBUG, "Getting relation %s" % relation_id)
context = invoker.get_cached_relation_hook_context(relation_id)
require_relation_context(context)
try:
if setting_name:
data = yield context.get_value(unit_name, setting_name)
else:
data = yield context.get(unit_name)
except UnitRelationStateNotFound, e:
raise NoSuchUnit(str(e))
formatter = get_charm_formatter(invoker.charm_format)
defer.returnValue(dict(
charm_format=invoker.charm_format,
data=formatter.dump(data)))
@RelationSetCommand.responder
@defer.inlineCallbacks
def relation_set(self, client_id, relation_id, blob):
"""Set values into state.hook.RelationHookContext.
:param blob: a YAML or JSON dumped string of a dict that will
contain the delta of settings to be applied to a unit_name.
"""
context = yield self.factory.get_context(client_id)
invoker = self.factory.get_invoker(client_id)
formatter = get_charm_formatter(invoker.charm_format)
data = formatter.load(blob)
if relation_id:
yield self.factory.log(
logging.DEBUG, "Setting relation %s" % relation_id)
context = invoker.get_cached_relation_hook_context(relation_id)
require_relation_context(context)
for k, v in data.items():
if formatter.should_delete(v):
yield context.delete_value(k)
else:
yield context.set_value(k, v)
defer.returnValue({})
@ListRelationsCommand.responder
@defer.inlineCallbacks
def list_relations(self, client_id, relation_id):
"""Lists the members of a relation."""
context = yield self.factory.get_context(client_id)
if relation_id:
yield self.factory.log(
logging.DEBUG, "Listing relation members for %s" % relation_id)
invoker = yield self.factory.get_invoker(client_id)
context = invoker.get_cached_relation_hook_context(relation_id)
require_relation_context(context)
members = yield context.get_members()
defer.returnValue(dict(members=" ".join(members)))
@RelationIdsCommand.responder
@defer.inlineCallbacks
def relation_ids(self, client_id, relation_name):
"""Get relation idents for this hook context.
:param client_id: hook's client id that is used to define a context
for a consistent view of state.
:param relation_name: The relation name to query relation ids
for this context. If no such relation name is specified,
raises `MustSpecifyRelationName`.
"""
if not relation_name:
raise MustSpecifyRelationName()
context = yield self.factory.get_context(client_id)
ids = yield context.get_relation_idents(relation_name)
defer.returnValue(dict(ids=" ".join(ids)))
@LogCommand.responder
@defer.inlineCallbacks
def log(self, level, message):
"""Log a message from the hook with the UnitAgent.
:param level: A python logging module log level integer
indicating the level the message should be logged at.
:param message: A string containing the message to be logged.
"""
yield self.factory.log(level, message)
defer.returnValue({})
@ConfigGetCommand.responder
@defer.inlineCallbacks
def config_get(self, client_id, option_name):
"""Retrieve one or more configuration options for a service.
Service is implied in the hooks context.
:param client_id: hook's client id, used to define a context for a
consistent view of state, as in the relation_ commands.
:param option_name: Optional name of an option to fetch from
the list.
"""
context = self.factory.get_context(client_id)
options = yield context.get_config()
if option_name:
options = options.get(option_name)
else:
options = dict(options)
# NOTE: no need to consider charm format for blob here, this
# blob has always been in YAML format
defer.returnValue(dict(data=serializer.dump(options)))
@OpenPortCommand.responder
@defer.inlineCallbacks
def open_port(self, client_id, port, proto):
"""Open `port` using `proto` for the service unit.
The service unit is implied by the hook's context.
`client_id` - hook's client id, used to define a context for a
consistent view of state.
`port` - port to be opened
`proto` - protocol of the port to be opened
"""
context = self.factory.get_context(client_id)
service_unit_state = yield context.get_local_unit_state()
container = yield service_unit_state.get_container()
if container:
# also open the port on the container
# we still do subordinate unit below to ease
# status reporting
yield container.open_port(port, proto)
yield service_unit_state.open_port(port, proto)
yield self.factory.log(logging.DEBUG, "opened %s/%s" % (port, proto))
defer.returnValue({})
@ClosePortCommand.responder
@defer.inlineCallbacks
def close_port(self, client_id, port, proto):
"""Close `port` using `proto` for the service unit.
The service unit is implied by the hook's context.
`client_id` - hook's client id, used to define a context for a
consistent view of state.
`port` - port to be closed
`proto` - protocol of the port to be closed
"""
context = self.factory.get_context(client_id)
service_unit_state = yield context.get_local_unit_state()
container = yield service_unit_state.get_container()
if container:
# also close the port on the container
# we still do subordinate unit below to ease
# status reporting
yield container.close_port(port, proto)
yield service_unit_state.close_port(port, proto)
yield self.factory.log(logging.DEBUG, "closed %s/%s" % (port, proto))
defer.returnValue({})
@UnitGetCommand.responder
@defer.inlineCallbacks
def get_unit_info(self, client_id, setting_name):
"""Retrieve a unit value with the given name.
:param client_id: The hook's client id, used to define a context
for a consistent view of state.
:param setting_name: The name of the setting to be retrieved.
"""
context = self.factory.get_context(client_id)
unit_state = yield context.get_local_unit_state()
yield self.factory.log(
logging.DEBUG, "Get unit setting: %r" % setting_name)
if setting_name == "private-address":
value = yield unit_state.get_private_address()
elif setting_name == "public-address":
value = yield unit_state.get_public_address()
else:
raise NoSuchKey("Unit has no setting: %r" % setting_name)
value = value or ""
defer.returnValue({"data": value})
class UnitAgentClient(amp.AMP, object):
"""
Helper used by the CLI tools to call the UnitAgentServer protocol run in
the UnitAgent.
"""
@defer.inlineCallbacks
def relation_get(self, client_id, relation_id, unit_name, setting_name):
""" See UnitAgentServer.relation_get
"""
if not setting_name:
setting_name = ""
result = yield self.callRemote(RelationGetCommand,
client_id=client_id,
relation_id=relation_id,
unit_name=unit_name,
setting_name=setting_name)
formatter = get_charm_formatter(result["charm_format"])
defer.returnValue(formatter.load(result["data"]))
@defer.inlineCallbacks
def relation_set(self, client_id, relation_id, data):
"""Set relation settings for unit_name
:param data: Python dict applied as a delta hook settings
"""
formatter = get_charm_formatter_from_env()
blob = formatter.dump(data)
yield self.callRemote(RelationSetCommand,
client_id=client_id,
relation_id=relation_id,
blob=blob)
defer.returnValue(None)
@defer.inlineCallbacks
def list_relations(self, client_id, relation_id):
result = yield self.callRemote(ListRelationsCommand,
client_id=client_id,
relation_id=relation_id)
members = result["members"].split()
defer.returnValue(members)
@defer.inlineCallbacks
def relation_ids(self, client_id, relation_name):
result = yield self.callRemote(RelationIdsCommand,
client_id=client_id,
relation_name=relation_name)
ids = result["ids"].split()
defer.returnValue(ids)
@defer.inlineCallbacks
def log(self, level, message):
if isinstance(message, (list, tuple)):
message = " ".join(message)
result = yield self.callRemote(LogCommand,
level=level,
message=message)
defer.returnValue(result)
@defer.inlineCallbacks
def config_get(self, client_id, option_name=None):
"""See UnitAgentServer.config_get."""
result = yield self.callRemote(ConfigGetCommand,
client_id=client_id,
option_name=option_name)
# Unbundle and deserialize
result = serializer.load(result["data"])
defer.returnValue(result)
@defer.inlineCallbacks
def open_port(self, client_id, port, proto):
"""Open `port` for `proto` for this unit identified by `client_id`."""
yield self.callRemote(
OpenPortCommand, client_id=client_id, port=port, proto=proto)
defer.returnValue(None)
@defer.inlineCallbacks
def close_port(self, client_id, port, proto):
"""Close `port` for `proto` for this unit identified by `client_id`."""
yield self.callRemote(
ClosePortCommand, client_id=client_id, port=port, proto=proto)
defer.returnValue(None)
@defer.inlineCallbacks
def get_unit_info(self, client_id, setting_name):
result = yield self.callRemote(
UnitGetCommand, client_id=client_id, setting_name=setting_name)
defer.returnValue(result)
class UnitSettingsFactory(protocol.ServerFactory, object):
protocol = UnitAgentServer
def __init__(self, context_provider, invoker_provider, logger=None):
""" Factory to be used by the server for communications.
:param context_provider: Callable(client_id) returning an
juju.state.hook.RelationHookContext. A given `client_id`
will map to a single HookContext.
:param invoker_provider: Callable(client_id) returning a
juju.hook.invoker.Invoker. A given `client_id` will map to a
single invoker.
:param logger: When not None, a python logging.Logger object. The
log is usually managed by the UnitAgent and is passed through
the factory.
"""
self.context_provider = context_provider
self.invoker_provider = invoker_provider
self._logger = logger
self.onMade = defer.Deferred()
def get_context(self, client_id):
return self.context_provider(client_id)
def get_invoker(self, client_id):
return self.invoker_provider(client_id)
def log(self, level, message):
if self._logger is not None:
self._logger.log(level, message)
def connectionMade(self, protocol):
if self.onMade:
self.onMade.callback(protocol)
self.onMade = None
juju-0.7.orig/juju/hooks/scheduler.py 0000644 0000000 0000000 00000030507 12135220114 016046 0 ustar 0000000 0000000 import logging
import os
from twisted.internet.defer import (
DeferredQueue, inlineCallbacks, succeed, Deferred,
QueueUnderflow, QueueOverflow)
from juju.lib import serializer
from juju.state.hook import RelationHookContext, RelationChange
ADDED = "joined"
REMOVED = "departed"
MODIFIED = "modified"
log = logging.getLogger("hook.scheduler")
def check_writeable(path):
try:
with open(path, "a"):
pass
except IOError:
raise AssertionError("%s is not writable!" % path)
class HookQueue(DeferredQueue):
# Single consumer, multi producer LIFO, with Nones treated
# as FIFO
def __init__(self, modify_callback):
self._modify_callback = modify_callback
super(HookQueue, self).__init__(backlog=1)
def put(self, change, offset=1):
"""
LIFO except for a None value which is FIFO
Add an object to this queue.
@raise QueueOverflow: Too many objects are in this queue.
"""
if self.waiting:
if change is None:
return self.waiting.pop(0).callback(change)
# Because there's a waiter we know the offset is 0
self._queue_change(change, offset=0)
if self.pending:
self.waiting.pop(0).callback(self.pending[0])
elif self.size is None or len(self.pending) < self.size:
# If the queue is currently processing, no need
# to store the stop change, it will catch the stop
# during the loop iter.
if change is None:
return
self._queue_change(change, offset)
else:
raise QueueOverflow()
def next_change(self):
"""Get the next change from the queue"""
if self.pending:
return succeed(self.pending[0])
elif self.backlog is None or len(self.waiting) < self.backlog:
d = Deferred(canceller=self._cancelGet)
self.waiting.append(d)
return d
else:
raise QueueUnderflow()
def finished_change(self):
"""The last change fetched has been processed."""
if self.pending:
value = self.pending.pop(0)
self.cb_modified()
return value
def cb_modified(self):
"""Callback invoked by queue when the state has been modified"""
self._modify_callback()
def _previous(self, unit_name, pending):
"""Find the most recent previous operation for a unit.
:param pending: sequence of pending operations to consider.
"""
for p in reversed(pending):
if p['unit_name'] == unit_name:
return pending.index(p), p
return None, None
def _wipe_member(self, unit_name, pending):
"""Remove a given unit from membership in pending."""
for p in pending:
if unit_name in p['members']:
p['members'].remove(unit_name)
def _queue_change(self, change, offset=1):
"""Queue up the node change for execution.
The process of queuing the change will automatically
merge with previously queued changes.
:param change: The change to queue up.
:param offset: Starting position of any queue merges.
If the queue is currently being processed, we
don't attempt to merge with the head of the queue
as its currently being operated on.
"""
# Find the previous change if any.
previous_idx, previous = self._previous(
change['unit_name'], self.pending[offset:])
# No previous change, just add
if previous_idx is None:
self.pending.append(change)
return self.cb_modified()
# Reduce
change_idx, change_type = self._reduce(
(previous_idx, previous['change_type']),
(-1, change['change_type']))
# Previous change, done.
if previous_idx == change_idx:
return
# New change, remove previous
elif change_type is not None:
self.pending.pop(previous_idx)
change['change_type'] = change_type
self.pending.append(change)
# Changes cancelled, remove previous, wipe membership
elif change_type is None or change_idx != previous_idx:
assert change['change_type'] == REMOVED
self._wipe_member(
change['unit_name'], self.pending[offset:])
self.pending.pop(previous_idx)
# Notify changed
self.cb_modified()
def _reduce(self, previous, new):
"""Given two change operations for a node, reduce to one operation.
We depend on zookeeper's total ordering behavior as we don't
attempt to handle nonsensical operation sequences like
removed followed by a modified, or modified followed by an
add.
"""
previous_clock, previous_change = previous
new_clock, new_change = new
if previous_change == REMOVED and new_change == ADDED:
return (new_clock, MODIFIED)
elif previous_change == ADDED and new_change == MODIFIED:
return (previous_clock, previous_change)
elif previous_change == ADDED and new_change == REMOVED:
return (None, None)
elif previous_change == MODIFIED and new_change == REMOVED:
return (new_clock, new_change)
elif previous_change == MODIFIED and new_change == MODIFIED:
return (previous_clock, previous_change)
elif previous_change == REMOVED and new_change == MODIFIED:
return (previous_clock, previous_change)
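The pairwise coalescing rules implemented by `_reduce` above can be restated as a lookup table. A minimal standalone sketch, with the clock positions omitted and `None` meaning the two queued changes cancel each other out entirely:

```python
ADDED, REMOVED, MODIFIED = "joined", "departed", "modified"

def reduce_changes(previous, new):
    """Given the change type already queued for a unit and a newly
    arriving one, return the single change type to keep."""
    table = {
        (REMOVED, ADDED): MODIFIED,    # depart+rejoin collapses to modified
        (ADDED, MODIFIED): ADDED,      # a fresh join already implies new settings
        (ADDED, REMOVED): None,        # join then depart cancels out
        (MODIFIED, REMOVED): REMOVED,
        (MODIFIED, MODIFIED): MODIFIED,
        (REMOVED, MODIFIED): REMOVED,  # modify after depart is ignored
    }
    return table[(previous, new)]
```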
class HookScheduler(object):
def __init__(self, client, executor, unit_relation, relation_ident,
unit_name, state_path):
self._running = False
self._state_path = state_path
# The thing that will actually run the hook for us
self._executor = executor
# For hook context construction.
self._client = client
self._unit_relation = unit_relation
self._relation_ident = relation_ident
self._relation_name = relation_ident.split(":")[0]
self._unit_name = unit_name
self._current_clock = None
if os.path.exists(self._state_path):
self._load_state()
else:
self._create_state()
def _create_state(self):
# Current units (as far as the next hook should know)
self._context_members = None
# Current units and settings versions (as far as the scheduler knows)
self._member_versions = {}
# Run queue (clock)
self._run_queue = HookQueue(self._save_state)
def _load_state(self):
with open(self._state_path) as f:
state = serializer.load(f.read())
if not state:
return self._create_state()
self._context_members = set(state["context_members"])
self._member_versions = state["member_versions"]
self._run_queue = HookQueue(self._save_state)
self._run_queue.pending = state["change_queue"]
def _save_state(self):
state = serializer.dump({
"context_members": sorted(self._context_members),
"member_versions": self._member_versions,
"change_queue": self._run_queue.pending})
temp_path = self._state_path + "~"
with open(temp_path, "w") as f:
f.write(state)
os.rename(temp_path, self._state_path)
def _execute(self, change):
"""Execute a hook script for a change.
"""
# Assemble the change and hook execution context
rel_change = RelationChange(
self._relation_ident, change['change_type'], change['unit_name'])
context = RelationHookContext(
self._client, self._unit_relation, self._relation_ident,
change['members'], unit_name=self._unit_name)
# Execute the change.
return self._executor(context, rel_change)
def _get_change(self, unit_name, change_type, members):
"""
Return a hook context, corresponding to the current state of the
system.
"""
return dict(unit_name=unit_name,
change_type=change_type,
members=sorted(members))
@property
def running(self):
return self._running is True
@inlineCallbacks
def run(self):
assert not self._running, "Scheduler is already running"
check_writeable(self._state_path)
self._running = True
log.debug("start")
while self._running:
change = yield self._run_queue.next_change()
if change is None:
if not self._running:
break
continue
log.debug(
"executing hook for %s:%s",
change['unit_name'], change['change_type'])
# Execute the hook
success = yield self._execute(change)
# Queue up modified immediately after change.
if change['change_type'] == ADDED:
self._run_queue.put(
self._get_change(change['unit_name'],
MODIFIED,
self._context_members))
if success:
self._run_queue.finished_change()
else:
log.debug("hook error, stopping scheduler execution")
self._running = False
break
log.info("stopped")
def stop(self):
"""Stop the hook execution.
Note this does not stop the scheduling; the relation watcher
that feeds changes to the scheduler needs to be stopped to
achieve that effect.
"""
log.debug("stopping")
if not self._running:
return
self._running = False
# Put a marker value onto the queue to designate "stop now".
# This is in case we're waiting on the queue when the stop
# occurs. The queue treats this None specially, as a transient
# value used to wake up extant waiters.
self._run_queue.put(None)
def pop(self):
"""Pop the next event on the queue.
The goal is that on a relation hook error we'll come back up
with the option of retrying the failed hook OR proceeding to
the next event. To proceed to the next event we pop the failed
event off the queue.
"""
assert not self._running, "Scheduler must be stopped for pop()"
return self._run_queue.finished_change()
def cb_change_members(self, old_units, new_units):
"""Watch callback invoked when the relation membership changes.
"""
log.debug("members changed: old=%s, new=%s", old_units, new_units)
if self._context_members is None:
self._context_members = set(old_units)
if set(self._member_versions) != set(old_units):
# Can happen when we miss seeing some changes ie. disconnected.
log.debug(
"old does not match last recorded units: %s",
sorted(self._member_versions))
added = set(new_units) - set(self._member_versions)
removed = set(self._member_versions) - set(new_units)
self._member_versions.update(dict((unit, 0) for unit in added))
for unit in removed:
del self._member_versions[unit]
for unit_name in sorted(added):
self._context_members.add(unit_name)
self._run_queue.put(
self._get_change(unit_name, ADDED, self._context_members),
int(self._running))
for unit_name in sorted(removed):
self._context_members.remove(unit_name)
self._run_queue.put(
self._get_change(unit_name, REMOVED, self._context_members),
int(self._running))
self._save_state()
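The membership diffing at the heart of `cb_change_members` is a pair of set differences, with deterministic ordering imposed by sorting. A minimal standalone sketch:

```python
def diff_members(last_known, new_units):
    """Compare the scheduler's last recorded membership with a freshly
    observed one, returning sorted (joined, departed) unit lists."""
    added = sorted(set(new_units) - set(last_known))
    removed = sorted(set(last_known) - set(new_units))
    return added, removed
```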
def cb_change_settings(self, unit_versions):
"""Watch callback invoked when related units change data.
"""
log.debug("settings changed: %s", unit_versions)
for (unit_name, version) in unit_versions:
if version > self._member_versions.get(unit_name, 0):
self._member_versions[unit_name] = version
self._run_queue.put(
self._get_change(
unit_name, MODIFIED, self._context_members),
int(self._running))
self._save_state()
juju-0.7.orig/juju/hooks/tests/ 0000755 0000000 0000000 00000000000 12135220114 014653 5 ustar 0000000 0000000 juju-0.7.orig/juju/hooks/tests/__init__.py 0000644 0000000 0000000 00000000000 12135220114 016752 0 ustar 0000000 0000000 juju-0.7.orig/juju/hooks/tests/hooks/ 0000755 0000000 0000000 00000000000 12135220114 015776 5 ustar 0000000 0000000 juju-0.7.orig/juju/hooks/tests/test_arguments.py 0000644 0000000 0000000 00000012456 12135220114 020301 0 ustar 0000000 0000000 import logging
import os
from juju.errors import JujuError
from juju.hooks.cli import CommandLineClient
from juju.lib.testing import TestCase
class TestArguments(TestCase):
"""
Test verifying the standard argument parsing and handling used
by cli hook tools functions properly.
"""
def setup_environment(self):
self.change_environment(
JUJU_AGENT_SOCKET="/tmp/juju_agent_socket",
JUJU_CLIENT_ID="xyzzy")
self.change_args("test-script")
def test_usage(self):
output = self.capture_stream("stdout")
cli = CommandLineClient()
cli.setup_parser()
cli.parser.print_usage()
# test for the existence of a standard argument to
# ensure the parser functions
self.assertIn("-s SOCKET", output.getvalue())
def test_default_socket_argument(self):
"""
verify that the socket argument is accepted from a command
line flag, or the environment or raises an error.
"""
self.setup_environment()
os.environ.pop("JUJU_AGENT_SOCKET", None)
cli = CommandLineClient()
cli.setup_parser()
options = cli.parse_args("-s /tmp/socket".split())
self.assertEquals(options.socket, "/tmp/socket")
# now set the environment variable to a known state
os.environ["JUJU_AGENT_SOCKET"] = "/tmp/socket2"
options = cli.parse_args()
self.assertEquals(options.socket, "/tmp/socket2")
err = self.capture_stream("stderr")
os.environ.pop("JUJU_AGENT_SOCKET", None)
error = self.failUnlessRaises(SystemExit, cli.parse_args)
self.assertEquals(str(error),
"No JUJU_AGENT_SOCKET/-s option found")
self.assertIn("No JUJU_AGENT_SOCKET/-s option found",
err.getvalue())
def test_single_keyvalue(self):
"""
Verify that a single key/vaule setting can be properly read
from the command line.
"""
self.setup_environment()
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.setup_parser()
options = cli.parse_args(["foo=bar"])
self.assertEqual(options.keyvalue_pairs["foo"], "bar")
# need to verify this is akin to the sys.argv parsing that
# will occur with single- and double-quoted strings around
# foo's right-hand side
options = cli.parse_args(["foo=bar none"])
self.assertEqual(options.keyvalue_pairs["foo"], "bar none")
def test_multiple_keyvalue(self):
self.setup_environment()
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.setup_parser()
options = cli.parse_args(["foo=bar", "baz=whatever"])
self.assertIn(("foo", "bar"), options.keyvalue_pairs.items())
self.assertIn(("baz", "whatever"), options.keyvalue_pairs.items())
def test_without_keyvalue_flag(self):
self.setup_environment()
output = self.capture_stream("stderr")
cli = CommandLineClient()
cli.keyvalue_pairs = False
cli.setup_parser()
# exit with the proper error code and make sure a message
# appears on stderr
error = self.assertRaises(SystemExit, cli.parse_args, ["foo=bar"])
self.assertEqual(error.code, 2)
self.assertIn("unrecognized arguments: foo=bar",
output.getvalue())
def test_bad_keyvalue_pair(self):
self.setup_environment()
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.setup_parser()
options = cli.parse_args(["foo=bar", "baz=whatever", "xxx=",
"yyy=", "zzz=zzz"])
self.assertIn(("foo", "bar"), options.keyvalue_pairs.items())
self.assertIn(("baz", "whatever"), options.keyvalue_pairs.items())
def test_fileinput(self):
self.setup_environment()
filename = self.makeFile("""This is config""")
# the @ sign maps to an argparse.FileType
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.default_mode = "rb"
cli.setup_parser()
options = cli.parse_args(["foo=@%s" % filename])
contents = options.keyvalue_pairs["foo"]
self.assertEquals("This is config", contents)
def test_fileinput_missing_file(self):
self.setup_environment()
filename = "missing"
# the @ sign maps to an argparse.FileType
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.default_mode = "rb"
cli.setup_parser()
# files in read-mode must exist at the time of the parse
self.assertRaises(JujuError, cli.parse_args,
["foo=@%s" % filename])
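The @-file behavior exercised by the two tests above can be summarized in a small standalone sketch. The helper name and signature are hypothetical; the real logic lives in the keyvalue handling of `CommandLineClient` in `juju.hooks.cli`, which these tests only constrain:

```python
def parse_keyvalue_pairs(args, mode="r"):
    # "key=value" stores the literal value; "key=@path" reads the
    # value from the named file, which must exist at parse time.
    pairs = {}
    for arg in args:
        key, _, value = arg.partition("=")
        if value.startswith("@"):
            with open(value[1:], mode) as fp:
                value = fp.read()
        pairs[key] = value
    return pairs
```

With this sketch, a missing file surfaces as an IOError from open(), which the real CLI wraps as the JujuError asserted in test_fileinput_missing_file.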
def test_fileoutput(self):
self.setup_environment()
filename = self.makeFile()
cli = CommandLineClient()
cli.setup_parser()
options = cli.parse_args(["-o", filename])
# validate that the output file is opened for binary writing
output = options.output
self.assertInstance(output, file)
self.assertEquals(output.mode, "wb")
def test_logging(self):
self.setup_environment()
cli = CommandLineClient()
cli.keyvalue_pairs = True
cli.setup_parser()
options = cli.parse_args(["foo=bar", "--log-level", "info"])
cli.setup_logging()
self.assertEquals(options.log_level, logging.INFO)
# juju-0.7.orig/juju/hooks/tests/test_cli.py
# -*- encoding: utf-8 -*-
import json
import logging
import os
import StringIO
from argparse import ArgumentTypeError
from contextlib import closing
from twisted.internet.defer import inlineCallbacks, returnValue
from juju.hooks.cli import (
CommandLineClient, parse_log_level, parse_port_protocol)
from juju.lib.testing import TestCase
class NoopCli(CommandLineClient):
"""
Do-nothing client used to test options.
"""
manage_logging = True
manage_connection = False
def run(self):
return self.options
def format_special(self, result, stream):
"""
render will look up this method based on the format
option and make the output special!!
print >>stream, result + "!!"
class ErrorCli(CommandLineClient):
"""
Client whose run() raises an error, used to test rendering of failures.
"""
manage_logging = True
manage_connection = False
def run(self):
self.exit_code = 1
raise ValueError("Checking render error")
class GetCli(CommandLineClient):
keyvalue_pairs = False
def customize_parser(self):
self.parser.add_argument("unit_name")
self.parser.add_argument("settings_name", nargs="*")
@inlineCallbacks
def run(self):
result = yield self.client.get(self.options.client_id,
self.options.unit_name,
self.options.settings_name)
returnValue(result)
class SetCli(CommandLineClient):
keyvalue_pairs = True
def customize_parser(self):
self.parser.add_argument("unit_name")
@inlineCallbacks
def run(self):
result = yield self.client.set(self.options.client_id,
self.options.unit_name,
self.options.keyvalue_pairs)
returnValue(result)
class TestCli(TestCase):
"""
Verify the integration of the protocols with the cli tool helper.
"""
def tearDown(self):
# remove the logging handlers we installed
root = logging.getLogger()
root.handlers = []
def setup_exit(self, code=0):
mock_exit = self.mocker.replace("sys.exit")
mock_exit(code)
def setup_cli_reactor(self):
"""
When executing the cli via tests, we need to mock out any reactor
start or shutdown.
"""
from twisted.internet import reactor
mock_reactor = self.mocker.patch(reactor)
mock_reactor.run()
mock_reactor.stop()
reactor.running = True
def setup_environment(self):
self.change_environment(JUJU_AGENT_SOCKET=self.makeFile(),
JUJU_CLIENT_ID="client_id")
self.change_args(__file__)
def test_empty_invocation(self):
self.setup_cli_reactor()
self.setup_environment()
self.setup_exit(0)
cli = CommandLineClient()
cli.manage_connection = False
self.mocker.replay()
cli()
def test_cli_get(self):
self.setup_environment()
self.setup_cli_reactor()
self.setup_exit(0)
cli = GetCli()
cli.manage_connection = False
obj = self.mocker.patch(cli)
obj.client.get("client_id", "test_unit", ["foobar"])
self.mocker.replay()
cli("test_unit foobar".split())
def test_cli_get_without_settings_name(self):
self.setup_cli_reactor()
self.setup_environment()
self.setup_exit(0)
cli = GetCli()
cli.manage_connection = False
obj = self.mocker.patch(cli)
obj.client.get("client_id", "test_unit", [])
self.mocker.replay()
cli("test_unit".split())
def test_cli_set(self):
"""
Verify that SetCli passes key/value pairs to the client.
"""
self.setup_environment()
self.setup_cli_reactor()
self.setup_exit(0)
cli = SetCli()
cli.manage_connection = False
obj = self.mocker.patch(cli)
obj.client.set("client_id", "test_unit",
{"foo": "bar", "sheep": "lamb"})
self.mocker.replay()
cli("test_unit foo=bar sheep=lamb".split())
def test_cli_set_fileinput(self):
"""
Verify that SetCli reads values from files via the @ notation.
"""
self.setup_environment()
self.setup_cli_reactor()
self.setup_exit(0)
contents = "this is a test"
filename = self.makeFile(contents)
cli = SetCli()
cli.manage_connection = False
obj = self.mocker.patch(cli)
obj.client.set("client_id", "test_unit",
{"foo": "bar", "sheep": contents})
self.mocker.replay()
# verify that the @ notation read the file contents
cmdline = "test_unit foo=bar sheep=@%s" % (filename)
cli(cmdline.split())
def test_json_output(self):
self.setup_environment()
self.setup_cli_reactor()
self.setup_exit(0)
filename = self.makeFile()
data = dict(a="b", c="d")
cli = NoopCli()
obj = self.mocker.patch(cli)
obj.run()
self.mocker.result(data)
self.mocker.replay()
cli(("--format json -o %s" % filename).split())
with open(filename, "r") as fp:
result = fp.read()
self.assertEquals(json.loads(result), data)
def test_special_format(self):
self.setup_environment()
self.setup_cli_reactor()
self.setup_exit(0)
filename = self.makeFile()
data = "Base Value"
cli = NoopCli()
obj = self.mocker.patch(cli)
obj.run()
self.mocker.result(data)
self.mocker.replay()
cli(("--format special -o %s" % filename).split())
with open(filename, "r") as fp:
result = fp.read()
self.assertEquals(result, data + "!!\n")
def test_cli_no_socket(self):
# don't set up the environment with a socket
self.change_environment()
self.change_args(__file__)
cli = GetCli()
cli.manage_connection = False
cli.manage_logging = False
self.mocker.replay()
error_log = self.capture_stream("stderr")
error = self.failUnlessRaises(SystemExit, cli,
"test_unit foobar".split())
self.assertEquals(error.code, 2)
self.assertIn("No JUJU_AGENT_SOCKET", error_log.getvalue())
def test_cli_no_client_id(self):
# set up the environment, then remove the client id
self.setup_environment()
del os.environ["JUJU_CLIENT_ID"]
self.change_args(__file__)
cli = GetCli()
cli.manage_connection = False
cli.manage_logging = False
self.mocker.replay()
error_log = self.capture_stream("stderr")
error = self.failUnlessRaises(SystemExit, cli,
"test_unit foobar".split())
self.assertEquals(error.code, 2)
self.assertIn("No JUJU_CLIENT_ID", error_log.getvalue())
def test_log_level(self):
self.setup_environment()
self.change_args(__file__)
cli = GetCli()
cli.manage_connection = False
self.mocker.replay()
# bad log level
log = self.capture_logging()
cli.setup_parser()
cli.parse_args("--log-level XYZZY test_unit".split())
self.assertIn("Invalid log level", log.getvalue())
# still get a default
self.assertEqual(cli.options.log_level, logging.INFO)
# good symbolic name
cli.parse_args("--log-level CRITICAL test_unit".split())
self.assertEqual(cli.options.log_level, logging.CRITICAL)
# made up numeric level
cli.parse_args("--log-level 42 test_unit".split())
self.assertEqual(cli.options.log_level, 42)
def test_log_format(self):
self.setup_environment()
self.change_args(__file__)
cli = NoopCli()
cli.setup_parser()
cli.parse_args("--format smart".split())
self.assertEqual(cli.options.format, "smart")
cli.parse_args("--format json".split())
self.assertEqual(cli.options.format, "json")
out = self.capture_stream("stdout")
err = self.capture_stream("stderr")
self.setup_cli_reactor()
self.setup_exit(0)
self.mocker.replay()
cli("--format missing".split())
self.assertIn("missing", err.getvalue())
self.assertIn("Namespace", out.getvalue())
def test_render_error(self):
self.setup_environment()
self.change_args(__file__)
cli = ErrorCli()
# capture stderr to check the rendered traceback
err = self.capture_stream("stderr")
self.setup_cli_reactor()
self.setup_exit(1)
self.mocker.replay()
cli("")
# make sure we got a traceback on stderr
self.assertIn("Checking render error", err.getvalue())
def test_parse_log_level(self):
self.assertEquals(parse_log_level("INFO"), logging.INFO)
self.assertEquals(parse_log_level("ERROR"), logging.ERROR)
self.assertEquals(parse_log_level(logging.INFO), logging.INFO)
self.assertEquals(parse_log_level(logging.ERROR), logging.ERROR)
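The inputs accepted above (symbolic names, numeric strings, and ints, with invalid names falling back to INFO as test_log_level shows) suggest a minimal sketch of parse_log_level. This is an assumption about the real implementation in `juju.hooks.cli`, which these tests only constrain:

```python
import logging

def parse_log_level(level):
    # Pass numeric levels through; map symbolic names via the logging
    # module; anything unrecognized falls back to INFO with a warning.
    if isinstance(level, int):
        return level
    if level.isdigit():
        return int(level)
    value = logging.getLevelName(level.upper())
    if isinstance(value, int):
        return value
    logging.warning("Invalid log level %r", level)
    return logging.INFO
```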
def test_parse_port_protocol(self):
self.assertEqual(parse_port_protocol("80"), (80, "tcp"))
self.assertEqual(parse_port_protocol("443/tcp"), (443, "tcp"))
self.assertEqual(parse_port_protocol("53/udp"), (53, "udp"))
self.assertEqual(parse_port_protocol("443/TCP"), (443, "tcp"))
self.assertEqual(parse_port_protocol("53/UDP"), (53, "udp"))
error = self.assertRaises(ArgumentTypeError,
parse_port_protocol, "eighty")
self.assertEqual(
str(error),
"Invalid port, must be an integer, got 'eighty'")
error = self.assertRaises(ArgumentTypeError,
parse_port_protocol, "fifty-three/udp")
self.assertEqual(
str(error),
"Invalid port, must be an integer, got 'fifty-three'")
error = self.assertRaises(ArgumentTypeError,
parse_port_protocol, "53/udp/")
self.assertEqual(
str(error),
"Invalid format for port/protocol, got '53/udp/'")
error = self.assertRaises(ArgumentTypeError,
parse_port_protocol, "53/udp/bad-format")
self.assertEqual(
str(error),
"Invalid format for port/protocol, got '53/udp/bad-format'")
error = self.assertRaises(ArgumentTypeError, parse_port_protocol, "0")
self.assertEqual(
str(error),
"Invalid port, must be from 1 to 65535, got 0")
error = self.assertRaises(
ArgumentTypeError, parse_port_protocol, "65536")
self.assertEqual(
str(error),
"Invalid port, must be from 1 to 65535, got 65536")
error = self.assertRaises(ArgumentTypeError,
parse_port_protocol, "53/not-a-valid-protocol")
self.assertEqual(
str(error),
"Invalid protocol, must be 'tcp' or 'udp', "
"got 'not-a-valid-protocol'")
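The error messages asserted above fully pin down the parser's behavior, so it can be sketched independently. This is a minimal reconstruction consistent with the assertions, not the actual code in `juju.hooks.cli`:

```python
from argparse import ArgumentTypeError

def parse_port_protocol(value):
    # Split off an optional "/protocol" suffix; more than one "/" is
    # an error. The protocol defaults to tcp and is case-insensitive.
    parts = value.split("/")
    if len(parts) > 2:
        raise ArgumentTypeError(
            "Invalid format for port/protocol, got %r" % value)
    port_str = parts[0]
    protocol = parts[1] if len(parts) == 2 else "tcp"
    try:
        port = int(port_str)
    except ValueError:
        raise ArgumentTypeError(
            "Invalid port, must be an integer, got %r" % port_str)
    if not 1 <= port <= 65535:
        raise ArgumentTypeError(
            "Invalid port, must be from 1 to 65535, got %d" % port)
    protocol = protocol.lower()
    if protocol not in ("tcp", "udp"):
        raise ArgumentTypeError(
            "Invalid protocol, must be 'tcp' or 'udp', got %r" % protocol)
    return (port, protocol)
```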
def assert_smart_output_v1(self, sample, formatted=object()):
"""Verifies output serialization"""
# No roundtripping is verified because str(obj) is in general
# not roundtrippable
cli = CommandLineClient()
with closing(StringIO.StringIO()) as output:
cli.format_smart(sample, output)
self.assertEqual(output.getvalue(), formatted)
def assert_format_smart_v1(self):
"""Verifies legacy smart format v1 which uses Python str encoding"""
self.assert_smart_output_v1(None, "") # No \n in output for None
self.assert_smart_output_v1("", "\n")
self.assert_smart_output_v1("A string", "A string\n")
self.assert_smart_output_v1(
"High bytes: \xca\xfe", "High bytes: \xca\xfe\n")
self.assert_smart_output_v1(u"", "\n")
self.assert_smart_output_v1(
u"A unicode string (but really ascii)",
"A unicode string (but really ascii)\n")
# Maintain LP bug #901495, fixed in v2 format; this happens because
# str(obj) is used
e = self.assertRaises(
UnicodeEncodeError,
self.assert_smart_output_v1, u"中文")
self.assertEqual(
str(e),
("'ascii' codec can't encode characters in position 0-1: "
"ordinal not in range(128)"))
self.assert_smart_output_v1({}, "{}\n")
self.assert_smart_output_v1(
{u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com"},
"{u'public-address': u'ec2-1-2-3-4.compute-1.amazonaws.com'}\n")
self.assert_smart_output_v1(False, "False\n")
self.assert_smart_output_v1(True, "True\n")
self.assert_smart_output_v1(0.0, "0.0\n")
self.assert_smart_output_v1(3.14159, "3.14159\n")
self.assert_smart_output_v1(6.02214178e23, "6.02214178e+23\n")
self.assert_smart_output_v1(0, "0\n")
self.assert_smart_output_v1(42, "42\n")
def test_format_smart_v1_implied(self):
"""Smart format v1 is implied if _JUJU_CHARM_FORMAT is not defined"""
# Double check env setup
self.assertNotIn("_JUJU_CHARM_FORMAT", os.environ)
self.assert_format_smart_v1()
def test_format_smart_v1(self):
"""Verify legacy format v1 works"""
self.change_environment(_JUJU_CHARM_FORMAT="1")
self.assert_format_smart_v1()
def assert_smart_output(self, sample, formatted):
cli = CommandLineClient()
with closing(StringIO.StringIO()) as output:
cli.format_smart(sample, output)
self.assertEqual(output.getvalue(), formatted)
def test_format_smart_v2(self):
"""Verifies smart format v2 writes raw strings properly"""
self.change_environment(_JUJU_CHARM_FORMAT="2")
# For each case, verify actual output serialization along with
# roundtripping through YAML
self.assert_smart_output(None, "") # No newline in output for None
self.assert_smart_output("", "")
self.assert_smart_output("A string", "A string")
self.assert_smart_output(
"High bytes: \xCA\xFE", "High bytes: \xca\xfe")
self.assert_smart_output("中文", "\xe4\xb8\xad\xe6\x96\x87")
self.assert_smart_output(
{u"public-address": u"ec2-1-2-3-4.compute-1.amazonaws.com",
u"foo": u"bar",
u"configured": True},
("configured: true\n"
"foo: bar\n"
"public-address: ec2-1-2-3-4.compute-1.amazonaws.com"))
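The v2 expectations above amount to three rules: None writes nothing, plain strings pass through raw, and other values are YAML-serialized in block style with the trailing newline stripped. A sketch under those assumptions (the hypothetical name mirrors, but is not, the real format_smart in `juju.hooks.cli`):

```python
import yaml

def format_smart_v2(result, stream):
    # None produces no output at all; raw strings pass through
    # unchanged; everything else is YAML block style with sorted
    # keys and the trailing newline stripped.
    if result is None:
        return
    if isinstance(result, str):
        stream.write(result)
    else:
        stream.write(
            yaml.safe_dump(result, default_flow_style=False).rstrip("\n"))
```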
# juju-0.7.orig/juju/hooks/tests/test_communications.py
from StringIO import StringIO
import logging
from twisted.internet import protocol, defer, error
from juju.errors import JujuError
from juju.hooks.protocol import (UnitAgentClient, UnitAgentServer,
UnitSettingsFactory, NoSuchUnit,
NoSuchKey, MustSpecifyRelationName)
from juju.lib.mocker import ANY
from juju.lib.testing import TestCase
from juju.lib.twistutils import gather_results
from juju.state.errors import UnitRelationStateNotFound
def _loseAndPass(err, proto):
# be specific, pass on the error to the client.
err.trap(error.ConnectionLost, error.ConnectionDone)
del proto.connectionLost
proto.connectionLost(err)
class UnitAgentServerMock(UnitAgentServer):
def _get_data(self):
return self.factory.data
def _set_data(self, dictlike):
"""
protected method used in testing to rewrite internal data state
"""
self.factory.data = dictlike
data = property(_get_data, _set_data)
def _set_members(self, members):
# replace the content of the current list
self.factory.members = members
def _set_relation_idents(self, relation_idents):
self.factory.relation_idents = relation_idents
@property
def config(self):
return self.factory.config
def config_set(self, dictlike):
"""Write service state directly. """
self.factory.config_set(dictlike)
class MockServiceUnitState(object):
def __init__(self):
self.ports = set()
self.config = {}
def get_container(self):
return None
def open_port(self, port, proto):
self.ports.add((port, proto))
def close_port(self, port, proto):
self.ports.discard((port, proto))
def get_public_address(self):
return self.config.get("public-address", "")
def get_private_address(self):
return self.config.get("private-address", "")
# Rebind the class name to a singleton instance shared by the mocks.
MockServiceUnitState = MockServiceUnitState()
class MockServiceState(object):
def get_unit_state(self, unit_name):
return MockServiceUnitState
class MockInvoker(object):
def __init__(self, charm_format):
self.charm_format = charm_format
class UnitSettingsFactoryLocal(UnitSettingsFactory):
"""
For testing a UnitSettingsFactory with local storage. Loosely
mimics a HookContext
"""
protocol = UnitAgentServerMock
def __init__(self):
super(UnitSettingsFactoryLocal, self).__init__(
self.context_provider, self.invoker)
self.data = {} # relation data
self.config = {} # service options
self.members = []
self.relation_idents = []
self._agent_io = StringIO()
self._invoker = MockInvoker(charm_format=1)
# hook context and a logger to the settings factory
logger = logging.getLogger("unit-settings-fact-test")
handler = logging.StreamHandler(self._agent_io)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)
base = super(UnitSettingsFactoryLocal, self)
base.__init__(self.context_provider, self.invoker, logger)
def context_provider(self, client_id):
return self
def invoker(self, client_id):
return self._invoker
def get_value(self, unit_name, setting_name):
return self.data[unit_name][setting_name]
def get(self, unit_name):
if unit_name not in self.data:
# Cheat a little: the real implementation raises this error
# itself; here we raise it with fake relation data.
raise UnitRelationStateNotFound("mysql/1",
"server",
unit_name)
return self.data[unit_name]
def get_members(self):
return self.members
def get_relation_idents(self, relation_name):
return self.relation_idents
def set_value(self, key, value):
self.data.setdefault(self._unit_name, {})[key] = value
def set(self, blob):
self.data[self._unit_name] = blob
def _set_unit_name(self, unit_name):
self._unit_name = unit_name
def config_set(self, data):
"""Directly update service options for testing."""
self.config.update(data)
def get_config(self, option_name=None):
d = self.config.copy()
if option_name:
d = d[option_name]
return d
def get_local_unit_state(self):
return MockServiceUnitState
class LiveFireBase(TestCase):
"""
Utility for connected reactor-using tests.
"""
def _listen_server(self, addr):
from twisted.internet import reactor
self.server_factory = UnitSettingsFactoryLocal()
self.server_socket = reactor.listenUNIX(addr, self.server_factory)
self.addCleanup(self.server_socket.stopListening)
return self.server_socket
def _connect_client(self, addr):
from twisted.internet import reactor
d = protocol.ClientCreator(
reactor, self.client_protocol).connectUNIX(addr)
return d
def setUp(self):
"""
Create an amp server and connect a client to it.
"""
super(LiveFireBase, self).setUp()
sock = self.makeFile()
self._listen_server(sock)
on_client_connect = self._connect_client(sock)
def getProtocols(results):
[(_, client), (_, server)] = results
self.client = client
self.server = server
dl = defer.DeferredList([on_client_connect,
self.server_factory.onMade])
return dl.addCallback(getProtocols)
def tearDown(self):
"""
Clean up client and server connections, and check the error received at
C{connectionLost}.
"""
L = []
for conn in self.client, self.server:
if conn.transport is not None:
# depend on amp's function connection-dropping behavior
d = defer.Deferred().addErrback(_loseAndPass, conn)
conn.connectionLost = d.errback
conn.transport.loseConnection()
L.append(d)
super(LiveFireBase, self).tearDown()
return gather_results(L)
class TestCommunications(LiveFireBase):
"""
Verify that client and server can communicate with the proper
protocol.
"""
client_protocol = UnitAgentClient
server_protocol = UnitAgentServer
@defer.inlineCallbacks
def setUp(self):
yield super(TestCommunications, self).setUp()
self.log = self.capture_logging(
level=logging.DEBUG,
formatter=logging.Formatter("%(levelname)s %(message)s"))
@defer.inlineCallbacks
def test_relation_get_command(self):
# Allow our testing class to pass the usual guard
require_test_context = self.mocker.replace(
"juju.hooks.protocol.require_relation_context")
require_test_context(ANY)
self.mocker.result(True)
self.mocker.count(3)
self.mocker.replay()
# provide fake data to the server so the client can test it for
# verification
self.server.data = dict(test_node=dict(a="b", foo="bar"))
self.assertIn("test_node", self.server.factory.data)
data = yield self.client.relation_get(
"client_id", "", "test_node", "a")
self.assertEquals(data, "b")
data = yield self.client.relation_get(
"client_id", "", "test_node", "foo")
self.assertEquals(data, "bar")
# A request for "" asks for all the settings
data = yield self.client.relation_get(
"client_id", "", "test_node", "")
self.assertEquals(data["a"], "b")
self.assertEquals(data["foo"], "bar")
@defer.inlineCallbacks
def test_get_no_such_unit(self):
"""
An attempt to retrieve a value for a nonexistent unit raises
an appropriate error.
"""
# Allow our testing class to pass the usual guard
require_test_context = self.mocker.replace(
"juju.hooks.protocol.require_relation_context")
require_test_context(ANY)
self.mocker.result(True)
self.mocker.replay()
yield self.assertFailure(
self.client.relation_get(
"client_id", "", "missing_unit/99", ""),
NoSuchUnit)
@defer.inlineCallbacks
def test_relation_with_nonrelation_context(self):
"""
Verify that relation commands cannot be called from a
non-relation context and that an appropriate error is
raised.
"""
# Allow our testing class to pass the usual guard
from juju.hooks.protocol import NotRelationContext
failure = self.client.relation_get(
"client_id", "", "missing_unit/99", "")
yield self.assertFailure(failure, NotRelationContext)
@defer.inlineCallbacks
def test_relation_set_command(self):
# Allow our testing class to pass the usual guard
require_test_context = self.mocker.replace(
"juju.hooks.protocol.require_relation_context")
require_test_context(ANY)
self.mocker.result(True)
self.mocker.replay()
self.assertEquals(self.server.data, {})
# for testing mock the context being stored in the factory
self.server_factory._set_unit_name("test_node")
result = yield self.client.relation_set(
"client_id", "",
dict(a="b", foo="bar"))
# set returns nothing
self.assertEqual(result, None)
# verify the data exists in the server now
self.assertTrue(self.server.data)
self.assertEquals(self.server.data["test_node"]["a"], "b")
self.assertEquals(self.server.data["test_node"]["foo"], "bar")
def test_must_specify_relation_name(self):
"""Verify the `MustSpecifyRelationName` exception."""
error = MustSpecifyRelationName()
self.assertTrue(isinstance(error, JujuError))
self.assertEquals(
str(error),
"Relation name must be specified")
@defer.inlineCallbacks
def test_relation_ids(self):
"""Verify api support of relation_ids command"""
# NOTE: this is the point where the externally visible usage
# of "relation ids" (as seen in the relation-ids command) is
# converted to "relation idents", hence the use of both
# conventions here. (It has to be somewhere.)
self.server.factory.relation_type = "server"
self.server._set_relation_idents(["db:0", "db:1", "db:42"])
relation_idents = yield self.client.relation_ids("client_id", "db")
self.assertEqual(relation_idents, ["db:0", "db:1", "db:42"])
# A relation name must be specified.
e = yield self.assertFailure(
self.client.relation_ids("client_id", ""),
MustSpecifyRelationName)
self.assertEqual(str(e), "Relation name must be specified")
@defer.inlineCallbacks
def test_list_relations(self):
# Allow our testing class to pass the usual guard
require_test_context = self.mocker.replace(
"juju.hooks.protocol.require_relation_context")
require_test_context(ANY)
self.mocker.result(True)
self.mocker.replay()
self.server.factory.relation_type = "peer"
self.server._set_members(["riak/1", "riak/2"])
members = yield self.client.list_relations("client_id", "")
self.assertIn("riak/1", members)
self.assertIn("riak/2", members)
@defer.inlineCallbacks
def test_log_command(self):
# This is the default calling convention from clients
yield self.client.log(logging.WARNING, ["This", "is", "a", "WARNING"])
yield self.client.log(logging.INFO, "This is INFO")
yield self.client.log(logging.CRITICAL, ["This is CRITICAL"])
self.assertIn("WARNING This is a WARNING", self.log.getvalue())
self.assertIn("INFO This is INFO", self.log.getvalue())
self.assertIn("CRITICAL This is CRITICAL", self.log.getvalue())
@defer.inlineCallbacks
def test_config_get_command(self):
"""Verify ConfigGetCommand.
Test that the communication between the client and server sides
of the protocol marshals data as expected. Using mock data and
services, this exists only to verify that self.client.config_get
returns the expected data.
"""
self.server.config_set(dict(a="b", foo="bar"))
data = yield self.client.config_get("client_id", "a")
self.assertEquals(data, "b")
data = yield self.client.config_get("client_id", "foo")
self.assertEquals(data, "bar")
# A request for "" asks for all the settings
data = yield self.client.config_get("client_id", "")
self.assertEquals(data["a"], "b")
self.assertEquals(data["foo"], "bar")
# test with valid option names
data = yield self.client.config_get("client_id", "a")
self.assertEquals(data, "b")
data = yield self.client.config_get("client_id", "foo")
self.assertEquals(data, "bar")
# test with invalid option name
data = yield self.client.config_get("client_id", "missing")
self.assertEquals(data, None)
@defer.inlineCallbacks
def test_port_commands(self):
mock_service_unit_state = MockServiceState().get_unit_state("mock/0")
yield self.client.open_port("client-id", 80, "tcp")
self.assertEqual(mock_service_unit_state.ports, set([(80, "tcp")]))
yield self.client.open_port("client-id", 53, "udp")
yield self.client.close_port("client-id", 80, "tcp")
self.assertEqual(mock_service_unit_state.ports, set([(53, "udp")]))
yield self.client.close_port("client-id", 53, "udp")
self.assertEqual(mock_service_unit_state.ports, set())
self.assertIn(
"DEBUG opened 80/tcp\n"
"DEBUG opened 53/udp\n"
"DEBUG closed 80/tcp\n"
"DEBUG closed 53/udp\n",
self.log.getvalue())
@defer.inlineCallbacks
def test_unit_get_commands(self):
mock_service_unit_state = MockServiceState().get_unit_state("mock/0")
mock_service_unit_state.config["public-address"] = "foobar.example.com"
value = yield self.client.get_unit_info("client-id", "public-address")
self.assertEqual(value, {"data": "foobar.example.com"})
yield self.assertFailure(
self.client.get_unit_info("client-id", "garbage"), NoSuchKey)
# Shouldn't ever happen in practice (unit agent inits on startup)
value = yield self.client.get_unit_info("client-id", "private-address")
self.assertEqual(value, {"data": ""})
# juju-0.7.orig/juju/hooks/tests/test_executor.py
import logging
import os
import subprocess
import sys
from twisted.internet.defer import inlineCallbacks, Deferred
from twisted.internet.error import ProcessExitedAlready
import juju.hooks.executor
from juju.hooks.executor import HookExecutor
from juju.hooks.invoker import Invoker
from juju.lib.testing import TestCase
from juju.lib.twistutils import gather_results
class HookExecutorTest(TestCase):
def setUp(self):
self.lock_path = os.path.join(self.makeDir(), "hook.lock")
self.patch(HookExecutor, "LOCK_PATH", self.lock_path)
self._executor = HookExecutor()
self.output = self.capture_logging("hook.executor", logging.DEBUG)
@inlineCallbacks
def test_observer(self):
"""An observer can be registered against the executor
to receive callbacks when hooks are executed."""
results = []
d = Deferred()
def observer(hook_path):
results.append(hook_path)
if len(results) == 3:
d.callback(True)
self._executor.set_observer(observer)
self._executor.start()
class _Invoker(object):
def get_context(self):
return None
def __call__(self, hook_path):
results.append(hook_path)
hook_path = self.makeFile("hook content")
yield self._executor(_Invoker(), hook_path)
# Also observes nonexistent hooks
yield self._executor(_Invoker(), self.makeFile())
self.assertEqual(len(results), 3)
@inlineCallbacks
def test_start_deferred_ends_on_stop(self):
"""The executor start method returns a deferred that
fires when the executor has been stopped."""
stopped = []
def on_start_finish(result):
self.assertTrue(stopped)
d = self._executor.start()
d.addCallback(on_start_finish)
stopped.append(True)
yield self._executor.stop()
self._executor.debug = True
yield d
def test_start_start(self):
"""Attempting to start twice raises an exception."""
self._executor.start()
return self.assertFailure(self._executor.start(), AssertionError)
def test_stop_stop(self):
"""Attempt to stop twice raises an exception."""
self._executor.start()
self._executor.stop()
return self.assertFailure(self._executor.stop(), AssertionError)
@inlineCallbacks
def test_debug_hook(self):
"""A debug hook is executed if a debug hook name is found.
"""
self.output = self.capture_logging(
"hook.executor", level=logging.DEBUG)
results = []
class _Invoker(object):
def get_context(self):
return None
def __call__(self, hook_path):
results.append(hook_path)
self._executor.set_debug(["*"])
self._executor.start()
yield self._executor(_Invoker(), "abc")
self.assertNotEqual(results, ["abc"])
self.assertIn("abc", self.output.getvalue())
def test_get_debug_hook_path_executable(self):
"""The debug hook path returned by the executor should be executable.
"""
self.patch(
juju.hooks.executor, "DEBUG_HOOK_TEMPLATE",
"#!/bin/bash\n echo {hook_name}\n exit 0")
self._executor.set_debug(["*"])
debug_hook = self._executor.get_hook_path("something/good")
stdout = open(self.makeFile(), "w+")
p = subprocess.Popen(debug_hook, stdout=stdout.fileno())
self.assertEqual(p.wait(), 0)
stdout.seek(0)
self.assertEqual(stdout.read(), "good\n")
@inlineCallbacks
def test_end_debug_with_exited_process(self):
"""Ending debug with a process that has already ended is a noop."""
results = []
class _Invoker(object):
process_ended = Deferred()
def get_context(self):
return None
def __call__(self, hook_path):
results.append(hook_path)
return self.process_ended
def send_signal(self, signal_id):
if results:
results.append(1)
raise ProcessExitedAlready()
results.append(2)
raise ValueError("No such process")
self._executor.start()
self._executor.set_debug(["abc"])
hook_done = self._executor(_Invoker(), "abc")
self._executor.set_debug(None)
_Invoker.process_ended.callback(True)
yield hook_done
self.assertEqual(len(results), 2)
self.assertNotEqual(results[0], "abc")
self.assertEqual(results[1], 1)
@inlineCallbacks
def test_end_debug_with_hook_not_started(self):
results = []
class _Invoker(object):
process_ended = Deferred()
def get_context(self):
return None
def __call__(self, hook_path):
results.append(hook_path)
return self.process_ended
def send_signal(self, signal_id):
if len(results) == 1:
results.append(1)
raise ValueError()
results.append(2)
raise ProcessExitedAlready()
self._executor.start()
self._executor.set_debug(["abc"])
hook_done = self._executor(_Invoker(), "abc")
self._executor.set_debug(None)
_Invoker.process_ended.callback(True)
yield hook_done
self.assertEqual(len(results), 2)
self.assertNotEqual(results[0], "abc")
self.assertEqual(results[1], 1)
@inlineCallbacks
def test_end_debug_with_debug_running(self):
"""If a debug hook is running, it is signaled if the debug is disabled.
"""
self.patch(
juju.hooks.executor, "DEBUG_HOOK_TEMPLATE",
"\n".join(("#!/bin/bash",
"exit_handler() {",
" echo clean exit",
" exit 0",
"}",
'trap "exit_handler" HUP',
"sleep 0.2",
"exit 1")))
unit_dir = self.makeDir()
charm_dir = os.path.join(unit_dir, "charm")
self.makeDir(path=charm_dir)
self._executor.set_debug(["*"])
log = logging.getLogger("invoker")
# Populate environment variables for default invoker.
self.change_environment(
JUJU_UNIT_NAME="dummy/1",
JUJU_ENV_UUID="snowflake",
PATH="/bin/:/usr/bin")
output = self.capture_logging("invoker", level=logging.DEBUG)
invoker = Invoker(
None, None, "constant", self.makeFile(), unit_dir, log)
self._executor.start()
hook_done = self._executor(invoker, "abc")
# Give a moment for execution to start.
yield self.sleep(0.1)
self._executor.set_debug(None)
yield hook_done
self.assertIn("clean exit", output.getvalue())
def test_get_debug_hook_path(self):
"""A debug hook file path is returned if a debug hook name is found.
"""
# Default is to return the file path.
file_path = self.makeFile()
hook_name = os.path.basename(file_path)
self.assertEquals(self._executor.get_hook_path(file_path), file_path)
# Hook names can be specified as globs.
self._executor.set_debug(["*"])
debug_hook_path = self._executor.get_hook_path(file_path)
self.assertNotEquals(file_path, debug_hook_path)
# The hook base name is suffixed onto the debug hook file
self.assertIn(os.path.basename(file_path),
os.path.basename(debug_hook_path))
# Verify the debug hook contents.
debug_hook_file = open(debug_hook_path)
debug_contents = debug_hook_file.read()
debug_hook_file.close()
self.assertIn("hook.sh", debug_contents)
self.assertIn("-n %s" % hook_name, debug_contents)
self.assertTrue(os.access(debug_hook_path, os.X_OK))
# The hook debug can be set back to none.
self._executor.set_debug(None)
self.assertEquals(self._executor.get_hook_path(file_path), file_path)
# The executor can debug only selected hooks.
self._executor.set_debug(["abc"])
self.assertEquals(self._executor.get_hook_path(file_path), file_path)
# The debug hook file is removed on the next hook path access.
self.assertFalse(os.path.exists(debug_hook_path))
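The glob matching that `set_debug` relies on above can be sketched with the stdlib `fnmatch` module; `is_debugged` here is a hypothetical helper for illustration, not the executor's actual API:

```python
import fnmatch

def is_debugged(hook_name, debug_patterns):
    # "*" debugs every hook; a specific name matches only itself;
    # None (debugging disabled) matches nothing.
    return any(
        fnmatch.fnmatch(hook_name, pattern)
        for pattern in debug_patterns or ())

print(is_debugged("install", ["*"]))    # every hook is debugged
print(is_debugged("install", ["abc"]))  # only "abc" would be
print(is_debugged("install", None))     # debugging disabled
```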
def test_hook_exception_propagates(self):
"""An error in a hook is propagated to the execution deferred."""
class _Invoker:
def get_context(self):
return None
def __call__(self, hook_path):
raise AttributeError("Foo")
hook_path = self.makeFile("never got here")
self._executor.start()
return self.assertFailure(
self._executor(_Invoker(), hook_path), AttributeError)
@inlineCallbacks
def test_executor_running_property(self):
self._executor.start()
self.assertTrue(self._executor.running)
yield self._executor.stop()
self.assertFalse(self._executor.running)
@inlineCallbacks
def test_nonexistent_hook_skipped(self):
"""If a hook does not exist, a warning is logged and the hook is skipped.
"""
class _Invoker:
def get_context(self):
return None
self._executor.start()
hook_path = self.makeFile()
value = yield self._executor(_Invoker(), hook_path)
self.assertEqual(value, False)
self.assertIn("Hook does not exist, skipping %s" % hook_path,
self.output.getvalue())
@inlineCallbacks
def test_start_stop_start(self):
"""The executor can be stopped and restarted."""
results = []
def invoke(hook_path):
results.append(hook_path)
self._executor(invoke, "1")
start_complete = self._executor.start()
self._executor.stop()
yield start_complete
self.assertEqual(len(results), 1)
self._executor(invoke, "1")
self._executor(invoke, "2")
start_complete = self._executor.start()
self._executor.stop()
yield start_complete
self.assertEqual(len(results), 3)
@inlineCallbacks
def test_run_priority_hook_while_already_running(self):
"""Attempting to run a priority hook while running is an error.
"""
def invoke(hook_path):
pass
self._executor.start()
error = yield self.assertFailure(
self._executor.run_priority_hook(invoke, "foobar"),
AssertionError)
self.assertEquals(str(error), "Executor must not be running")
@inlineCallbacks
def test_prioritize_with_queued(self):
"""A prioritized hook will execute before queued hooks.
"""
results = []
execs = []
hooks = [self.makeFile(str(i)) for i in range(5)]
class _Invoker(object):
def get_context(self):
return None
def __call__(self, hook_path):
results.append(hook_path)
invoke = _Invoker()
for i in hooks:
execs.append(self._executor(invoke, i))
priority_hook = self.makeFile("me first")
yield self._executor.run_priority_hook(invoke, priority_hook)
self._executor.start()
yield gather_results(execs)
hooks.insert(0, priority_hook)
self.assertEqual(results, hooks)
def assert_lock_pid(self, pid=None):
pid = pid or os.getpid()
self.assertTrue(os.path.lexists(self.lock_path))
fpid = int(os.readlink(self.lock_path))
self.assertEqual(pid, fpid)
@inlineCallbacks
def test_fs_lock(self):
"""An FS Lock is acquired while hooks are executing."""
results = []
test = self
class _Invoker(object):
def get_context(self):
return None
def __call__(self, hook_path):
test.assert_lock_pid()
results.append(hook_path)
invoker = _Invoker()
d = self._executor.start()
yield self._executor(invoker, self.makeFile("a"))
yield self._executor(invoker, self.makeFile("b"))
self._executor.stop()
yield d
self.assertEqual(len(results), 2)
@inlineCallbacks
def test_fs_lock_invalid_pid(self):
"""An invalid pid on the lock is handled."""
os.symlink(str(2 ** 31 - 1), self.lock_path)
test = self
class _Invoker(object):
def get_context(self):
return None
def __call__(self, hook_path):
test.assert_lock_pid()
invoker = _Invoker()
self._executor.start()
# This would block forever if unsuccessful
yield self._executor(invoker, self.makeFile("a"))
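A minimal sketch of the pid-symlink locking scheme these lock tests exercise: acquiring records the holder's pid as the symlink target, and a stale pid (one naming no live process) is reclaimed. The `acquire` helper and paths are assumptions for illustration, not juju's API:

```python
import os
import tempfile

def acquire(path):
    """Take the lock at `path`, breaking it if the holder is dead."""
    if os.path.lexists(path):
        pid = int(os.readlink(path))
        try:
            os.kill(pid, 0)  # signal 0: existence check only
        except OSError:
            os.remove(path)  # stale holder; reclaim the lock
        else:
            return False     # held by a live process
    os.symlink(str(os.getpid()), path)
    return True

lock_path = os.path.join(tempfile.mkdtemp(), "hook.lock")
# Simulate the stale lock from the test above: a pid that is
# (almost certainly) not a running process.
os.symlink(str(2 ** 31 - 1), lock_path)
assert acquire(lock_path)
assert int(os.readlink(lock_path)) == os.getpid()
```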
def test_serialized_execution(self):
"""Hook execution is serialized via the HookExecution api.
"""
wait_callback = [Deferred() for i in range(5)]
finish_callback = [Deferred() for i in range(5)]
results = []
@inlineCallbacks
def invoker(hook_path):
results.append(hook_path)
yield finish_callback[len(results) - 1]
wait_callback[len(results) - 1].callback(True)
start_complete = self._executor.start()
for i in range(5):
self._executor(invoker, "hook-%s" % i)
self.assertEqual(len(results), 1)
finish_callback[1].callback(True)
self.assertEqual(len(results), 1)
# Verify stop behavior
stop_complete = yield self._executor.stop()
# Finish the running execution.
finish_callback[0].callback(True)
# Verify we've stopped executing.
yield stop_complete
self.assertTrue(start_complete.called)
self.assertEqual(len(results), 1)
# Start the executioner again.
self._executor.start()
for finish in finish_callback[2:]:
finish.callback(True)
self.assertEqual(len(results), 5)
# juju/hooks/tests/test_invoker.py
# -*- encoding: utf-8 -*-
from StringIO import StringIO
import base64
import json
import logging
import os
import stat
import sys
from twisted.internet import defer
from twisted.internet.process import Process
import juju
from juju import errors
from juju.control.tests.test_status import StatusTestBase
from juju.environment.tests.test_config import EnvironmentsConfigTestBase
from juju.lib.pick import pick_attr
from juju.hooks import invoker
from juju.hooks import commands
from juju.hooks.protocol import UnitSettingsFactory
from juju.lib import serializer
from juju.lib.mocker import MATCH
from juju.lib.twistutils import get_module_directory
from juju.state import hook
from juju.state.endpoint import RelationEndpoint
from juju.state.errors import RelationIdentNotFound
from juju.state.relation import RelationStateManager
from juju.state.tests.test_relation import RelationTestBase
class MockUnitAgent(object):
"""Pretends to implement the client state cache, and the UA hook socket.
"""
def __init__(self, client, socket_path, charm_dir):
self.client = client
self.socket_path = socket_path
self.charm_dir = charm_dir
self._clients = {} # client_id -> HookContext
self._invokers = {} # client_id -> Invoker
self._agent_log = logging.getLogger("unit-agent")
self._agent_io = StringIO()
handler = logging.StreamHandler(self._agent_io)
self._agent_log.addHandler(handler)
self.server_listen()
def make_context(self, relation_ident, change_type, unit_name,
unit_relation, client_id):
"""Create, record and return a HookContext object for a change."""
change = hook.RelationChange(relation_ident, change_type, unit_name)
context = hook.RelationHookContext(
self.client, unit_relation, relation_ident, unit_name=unit_name)
self._clients[client_id] = context
return context, change
def get_logger(self):
"""Build a logger to be used for a hook."""
logger = logging.getLogger("hook")
log_file = StringIO()
handler = logging.StreamHandler(log_file)
logger.addHandler(handler)
return logger
@defer.inlineCallbacks
def get_invoker(self, relation_ident, change_type,
unit_name, unit_relation, client_id="client_id"):
"""Build an Invoker for the execution of a hook.
`relation_ident`: relation identity of the relation the Invoker is for.
`change_type`: the string name of the type of change the hook
is invoked for.
`unit_name`: the name of the local unit of the change.
`unit_relation`: a UnitRelationState instance for the hook.
`client_id`: unique client identifier.
`service`: The local service of the executing hook.
"""
context, change = self.make_context(
relation_ident, change_type,
unit_name, unit_relation, client_id)
logger = self.get_logger()
exe = invoker.Invoker(context, change,
self.get_client_id(),
self.socket_path,
self.charm_dir,
logger)
yield exe.start()
self._invokers[client_id] = exe
defer.returnValue(exe)
def get_client_id(self):
# simulate associating a client_id with a client connection
# for later context look up. In reality this would be a mapping.
return "client_id"
def get_context(self, client_id):
return self._clients[client_id]
def lookup_invoker(self, client_id):
return self._invokers[client_id]
def stop(self):
"""Stop the process invocation.
Trigger any registered cleanup functions.
"""
self.server_socket.stopListening()
def server_listen(self):
from twisted.internet import reactor
# Provide the hook context and a logger to the settings factory.
logger = logging.getLogger("unit-agent")
self.log_file = StringIO()
handler = logging.StreamHandler(self.log_file)
logger.addHandler(handler)
self.server_factory = UnitSettingsFactory(
self.get_context, self.lookup_invoker, logger)
self.server_socket = reactor.listenUNIX(
self.socket_path, self.server_factory)
def capture_separate_log(name, level):
"""Support the separate capture of logging at different log levels.
TestCase.capture_logging only allows one level to be captured at
any given time. Given that the hook log captures both data AND
traditional logging, it's useful to separate them.
"""
logger = logging.getLogger(name)
output = StringIO()
handler = logging.StreamHandler(output)
handler.setLevel(level)
logger.addHandler(handler)
return output
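The per-handler level filtering that `capture_separate_log` depends on can be seen with plain stdlib logging: two handlers on one logger, each filtering at its own level. This is a self-contained sketch, not the test helper itself:

```python
import logging
from io import StringIO

def capture(name, level):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)  # let the handlers do the filtering
    output = StringIO()
    handler = logging.StreamHandler(output)
    handler.setLevel(level)
    logger.addHandler(handler)
    return output

everything = capture("demo", logging.DEBUG)
errors_only = capture("demo", logging.ERROR)
logging.getLogger("demo").debug("routine detail")
logging.getLogger("demo").error("something broke")
# `everything` holds both records; `errors_only` holds just the error.
```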
def get_cli_environ_path(*search_path):
"""Construct a path environment variable.
This path will contain the juju bin directory and any paths
passed as *search_path.
@param search_path: additional directories to put on PATH
"""
search_path = list(search_path)
# Look for the top level juju bin directory and make sure
# that is available for the client utilities.
bin_path = os.path.normpath(
os.path.join(get_module_directory(juju), "..", "bin"))
search_path.append(bin_path)
search_path.extend(os.environ.get("PATH", "").split(":"))
return ":".join(search_path)
class InvokerTestBase(EnvironmentsConfigTestBase):
@defer.inlineCallbacks
def setUp(self):
yield super(InvokerTestBase, self).setUp()
yield self.push_default_config()
def update_invoker_env(self, local_unit, remote_unit):
"""Update os.env for a hook invocation.
Update the invoker (and hence the hook) environment with the
path to the juju cli utils, and the local unit name.
"""
test_hook_path = os.path.join(
os.path.abspath(
os.path.dirname(__file__)).replace("/_trial_temp", ""),
"hooks")
self.change_environment(
PATH=get_cli_environ_path(test_hook_path, "/usr/bin", "/bin"),
JUJU_ENV_UUID="snowflake",
JUJU_UNIT_NAME=local_unit,
JUJU_REMOTE_UNIT=remote_unit)
def get_test_hook(self, hook):
"""Search for the test hook under the testing directory.
Returns the full path name of the hook to be invoked from its
basename.
"""
dirname = os.path.dirname(__file__)
abspath = os.path.abspath(dirname)
hook_file = os.path.join(abspath, "hooks", hook)
if not os.path.exists(hook_file):
# attempt to find it via sys_path
for p in sys.path:
hook_file = os.path.normpath(
os.path.join(p, dirname, "hooks", hook))
if os.path.exists(hook_file):
return hook_file
raise IOError("%s doesn't exist" % hook_file)
return hook_file
def get_cli_hook(self, hook):
bin_path = os.path.normpath(
os.path.join(get_module_directory(juju), "..", "bin"))
return os.path.join(bin_path, hook)
def create_hook(self, hook, arguments):
bin_path = self.get_cli_hook(hook)
fn = self.makeFile("#!/bin/bash\n'%s' %s" % (bin_path, arguments))
# make the hook executable
os.chmod(fn, stat.S_IEXEC | stat.S_IREAD)
return fn
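`create_hook` above writes a one-line bash wrapper and flips the execute bit; the chmod / `os.access` interplay it relies on can be sketched standalone:

```python
import os
import stat
import tempfile

fd, script = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("#!/bin/bash\necho ok\n")

# A file with only the read bit set is not executable ...
os.chmod(script, stat.S_IREAD)
assert not os.access(script, os.X_OK)
# ... until the execute bit is set, as create_hook does.
os.chmod(script, stat.S_IEXEC | stat.S_IREAD)
assert os.access(script, os.X_OK)
```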
def create_capturing_hook(self, hook, files=()):
"""Create a hook to enable capturing of results into files.
This method helps test scenarios of bash scripts that depend
on exact captures of stdout and stderr as well as set -eu
(Juju hook commands do not return nonzero exit codes, except
in the case of parse failures).
bin-path (the path to Juju commands) is always defined, along
with paths to temporary files, as keyed to the list of
`files`.
"""
args = {"bin-path": os.path.normpath(
os.path.join(get_module_directory(juju), "..", "bin"))}
for f in files:
args[f] = self.makeFile()
hook_fn = self.makeFile(hook.format(**args))
os.chmod(hook_fn, stat.S_IEXEC | stat.S_IREAD)
return hook_fn, args
def assert_file(self, path, data):
"""Assert file reached by `path` contains exactly this `data`."""
with open(path) as f:
self.assertEqual(f.read(), data)
class TestCompleteInvoker(InvokerTestBase, StatusTestBase):
@defer.inlineCallbacks
def setUp(self):
yield super(TestCompleteInvoker, self).setUp()
self.update_invoker_env("mysql/0", "wordpress/0")
self.socket_path = self.makeFile()
unit_dir = self.makeDir()
self.makeDir(path=os.path.join(unit_dir, "charm"))
self.ua = MockUnitAgent(
self.client,
self.socket_path,
unit_dir)
@defer.inlineCallbacks
def tearDown(self):
self.ua.stop()
yield super(TestCompleteInvoker, self).tearDown()
@defer.inlineCallbacks
def build_default_relationships(self):
state = yield self.build_topology(skip_unit_agents=("*",))
myr = yield self.relation_state_manager.get_relations_for_service(
state["services"]["mysql"])
self.mysql_relation = yield myr[0].add_unit_state(
state["relations"]["mysql"][0])
wpr = yield self.relation_state_manager.get_relations_for_service(
state["services"]["wordpress"])
wpr = [r for r in wpr if r.internal_relation_id ==
self.mysql_relation.internal_relation_id][0]
self.wordpress_relation = yield wpr.add_unit_state(
state["relations"]["wordpress"][0])
defer.returnValue(state)
@defer.inlineCallbacks
def test_get_from_different_unit(self):
"""Verify that relation-get works with a remote unit.
This test will run the logic of relation-get and will ensure
that, even though we're running the hook within the context of
unit A, a hook can obtain the data from unit B using
relation-get. To do this a more complete simulation of the
runtime is needed than with the local test cases below.
"""
yield self.build_default_relationships()
yield self.wordpress_relation.set_data({"hello": "world"})
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0",
self.mysql_relation,
client_id="client_id")
yield exe(self.create_hook(
"relation-get", "--format=json - wordpress/0"))
self.assertEqual({"hello": "world"},
json.loads(hook_log.getvalue()))
@defer.inlineCallbacks
def test_spawn_cli_get_hook_no_args(self):
"""Validate the get hook works with no (or all default) args.
This should default to the remote unit. We do pass a format
arg so we can marshal the data.
"""
yield self.build_default_relationships()
yield self.wordpress_relation.set_data({"hello": "world"})
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
result = yield exe(self.create_hook("relation-get", "--format=json"))
self.assertEqual(result, 0)
# Verify that it's the wordpress data.
self.assertEqual({"hello": "world"},
json.loads(hook_log.getvalue()))
@defer.inlineCallbacks
def test_spawn_cli_get_implied_unit(self):
"""Validate the get hook can transmit values to the hook."""
yield self.build_default_relationships()
hook_log = self.capture_logging("hook")
# Populate and verify some data we will
# later extract with the hook
expected = {"name": "rabbit",
"forgotten": "lyrics",
"nottobe": "requested"}
yield self.wordpress_relation.set_data(expected)
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
# invoke relation-get and verify the result
result = yield exe(self.create_hook("relation-get", "--format=json -"))
self.assertEqual(result, 0)
data = json.loads(hook_log.getvalue())
self.assertEqual(data["name"], "rabbit")
self.assertEqual(data["forgotten"], "lyrics")
@defer.inlineCallbacks
def test_spawn_cli_get_format_shell(self):
"""Validate the get hook can transmit values to the hook."""
yield self.build_default_relationships()
hook_log = self.capture_logging("hook")
# Populate and verify some data we will
# later extract with the hook
expected = {"name": "rabbit",
"forgotten": "lyrics"}
yield self.wordpress_relation.set_data(expected)
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
# invoke relation-get and verify the result
result = yield exe(
self.create_hook("relation-get", "--format=shell -"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual('VAR_FORGOTTEN=lyrics\nVAR_NAME=rabbit\n\n', data)
# and with a single value request
hook_log.truncate(0)
result = yield exe(
self.create_hook("relation-get", "--format=shell name"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual('VAR_NAME=rabbit\n\n', data)
@defer.inlineCallbacks
def test_relation_get_format_shell_bad_vars(self):
"""If illegal values are make somehow available warn."""
yield self.build_default_relationships()
hook_log = self.capture_logging("hook")
# Populate and verify some data we will
# later extract with the hook
expected = {"BAR": "none", "funny-chars*": "should work"}
yield self.wordpress_relation.set_data(expected)
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
exe.environment["VAR_FOO"] = "jungle"
result = yield exe(
self.create_hook("relation-get", "--format=shell -"))
self.assertEqual(result, 0)
yield exe.ended
data = hook_log.getvalue()
self.assertIn('VAR_BAR=none', data)
# Verify that illegal shell variable names get converted
# in an expected way
self.assertIn("VAR_FUNNY_CHARS_='should work'", data)
# Verify that it sets VAR_FOO to null because it shouldn't
# exist in the environment
self.assertIn("VAR_FOO=", data)
self.assertIn("The following were omitted from", data)
@defer.inlineCallbacks
def test_hook_exec_in_charm_directory(self):
"""Hooks are executed in the charm directory."""
yield self.build_default_relationships()
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
self.assertTrue(os.path.isdir(exe.unit_path))
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
# verify the hook's execution directory
hook_path = self.makeFile("#!/bin/bash\necho $PWD")
os.chmod(hook_path, stat.S_IEXEC | stat.S_IREAD)
result = yield exe(hook_path)
self.assertEqual(hook_log.getvalue().strip(),
os.path.join(exe.unit_path, "charm"))
self.assertEqual(result, 0)
# Reset the output capture
hook_log.seek(0)
hook_log.truncate()
# Verify the environment variable is set.
hook_path = self.makeFile("#!/bin/bash\necho $CHARM_DIR")
os.chmod(hook_path, stat.S_IEXEC | stat.S_IREAD)
result = yield exe(hook_path)
self.assertEqual(hook_log.getvalue().strip(),
os.path.join(exe.unit_path, "charm"))
@defer.inlineCallbacks
def test_spawn_cli_config_get(self):
"""Validate that config-get returns expected values."""
yield self.build_default_relationships()
hook_log = self.capture_logging("hook")
# Populate and verify some data we will
# later extract with the hook
expected = {"name": "rabbit",
"forgotten": "lyrics",
"nottobe": "requested"}
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.mysql_relation,
client_id="client_id")
context = yield self.ua.get_context("client_id")
config = yield context.get_config()
config.update(expected)
yield config.write()
# invoke relation-get and verify the result
result = yield exe(self.create_hook("config-get", "--format=json"))
self.assertEqual(result, 0)
data = json.loads(hook_log.getvalue())
self.assertEqual(data["name"], "rabbit")
self.assertEqual(data["forgotten"], "lyrics")
class RelationInvokerTestBase(InvokerTestBase, RelationTestBase):
@defer.inlineCallbacks
def setUp(self):
yield super(RelationInvokerTestBase, self).setUp()
yield self._default_relations()
self.update_invoker_env("mysql/0", "wordpress/0")
self.socket_path = self.makeFile()
unit_dir = self.makeDir()
self.makeDir(path=os.path.join(unit_dir, "charm"))
self.ua = MockUnitAgent(
self.client,
self.socket_path,
unit_dir)
self.log = self.capture_logging(
formatter=logging.Formatter(
"%(name)s:%(levelname)s:: %(message)s"),
level=logging.DEBUG)
@defer.inlineCallbacks
def tearDown(self):
self.ua.stop()
yield super(RelationInvokerTestBase, self).tearDown()
@defer.inlineCallbacks
def _default_relations(self):
wordpress_ep = RelationEndpoint(
"wordpress", "client-server", "app", "client")
mysql_ep = RelationEndpoint(
"mysql", "client-server", "db", "server")
self.wordpress_states = yield self.\
add_relation_service_unit_from_endpoints(wordpress_ep, mysql_ep)
self.mysql_states = yield self.add_opposite_service_unit(
self.wordpress_states)
self.relation = self.mysql_states["unit_relation"]
class InvokerTest(RelationInvokerTestBase):
@defer.inlineCallbacks
def test_environment(self):
"""Test various way to manipulate the calling environment.
"""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
exe.environment.update(dict(FOO="bar"))
env = exe.get_environment()
# these come from the init argument
self.assertEqual(env["JUJU_AGENT_SOCKET"], self.socket_path)
self.assertEqual(env["JUJU_CLIENT_ID"], "client_id")
# this comes from updating the Invoker.environment
self.assertEqual(env["FOO"], "bar")
# and this comes from the unit agent passing through its environment
self.assertTrue(env["PATH"])
self.assertEqual(env["JUJU_UNIT_NAME"], "mysql/0")
# Set for all hooks
self.assertEqual(env["DEBIAN_FRONTEND"], "noninteractive")
self.assertEqual(env["APT_LISTCHANGES_FRONTEND"], "none")
# Specific to the charm that is running, in this case it's the
# dummy charm (this is the default charm used when the
# add_service method is used)
self.assertEqual(env["_JUJU_CHARM_FORMAT"], "1")
self.assertEqual(env["JUJU_ENV_UUID"], "snowflake")
@defer.inlineCallbacks
def test_missing_hook(self):
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
self.failUnlessRaises(errors.FileNotFound, exe, "hook-missing")
@defer.inlineCallbacks
def test_noexec_hook(self):
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
hook = self.get_test_hook("noexec-hook")
error = self.failUnlessRaises(errors.CharmError, exe, hook)
self.assertEqual(error.path, hook)
self.assertEqual(error.message, "hook is not executable")
@defer.inlineCallbacks
def test_unhandled_signaled_on_hook(self):
"""A hook that executes as a result of an unhandled signal is an error.
"""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
hook_exec = exe(self.get_test_hook("sleep-hook"))
# Send the process a signal to kill it
exe._process.signalProcess("HUP")
error = yield self.assertFailure(
hook_exec, errors.CharmInvocationError)
self.assertIn(
"sleep-hook': signal 1.", str(error))
@defer.inlineCallbacks
def test_spawn_success(self):
"""Validate hook with success from exit."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
result = yield exe(self.get_test_hook("success-hook"))
self.assertEqual(result, 0)
yield exe.ended
self.assertIn("WIN", self.log.getvalue())
self.assertIn("exit code 0", self.log.getvalue())
@defer.inlineCallbacks
def test_spawn_fail(self):
"""Validate hook with fail from exit."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
d = exe(self.get_test_hook("fail-hook"))
result = yield self.assertFailure(d, errors.CharmInvocationError)
self.assertEqual(result.exit_code, 1)
# The level name appears in the log output; we are checking that
# the proper level was logged here
yield exe.ended
self.assertIn("INFO", self.log.getvalue())
# and the message
self.assertIn("FAIL", self.log.getvalue())
self.assertIn("exit code 1", self.log.getvalue())
@defer.inlineCallbacks
def test_hanging_hook(self):
"""Verify that a hook that's slow to end is terminated.
Test this by having the hook fork a process that hangs around
for a while, necessitating reaping. This happens because the
child process does not close the parent's file descriptors (as
expected with daemonization, for example).
http://www.snailbook.com/faq/background-jobs.auto.html
provides some insight into what can happen.
"""
from twisted.internet import reactor
# Ordinarily the reaper for any such hanging hooks will run in
# 5s, but we are impatient. Force it to end much sooner by
# intercepting the reaper setup.
mock_reactor = self.mocker.patch(reactor)
# Although we can match precisely on the
# Process.loseConnection, Mocker gets confused with also
# trying to match the delay time, using something like
# `MATCH(lambda x: isinstance(x, (int, float)))`. So instead
# we hardcode it here as just 5.
mock_reactor.callLater(
5, MATCH(lambda x: isinstance(x.im_self, Process)))
def intercept_reaper_setup(delay, reaper):
# Given this is an external process, let's sleep for a
# short period of time
return reactor.callLater(0.2, reaper)
self.mocker.call(intercept_reaper_setup)
self.mocker.replay()
# The hook script will immediately exit with a status code of
# 0, but it created a child process (via shell backgrounding)
# that is running (and will sleep for >10s)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
result = yield exe(self.get_test_hook("hanging-hook"))
self.assertEqual(result, 0)
# Verify after waiting for the process to close (which means
# the reaper ran!), we get output for the first phase of the
# hanging hook, but not after its second, more extended sleep.
yield exe.ended
self.assertIn("Slept for 50ms", self.log.getvalue())
self.assertNotIn("Slept for 1s", self.log.getvalue())
# Lastly there's a nice long sleep that would occur after the
# default timeout of this test. Successful completion of this
# test without a timeout means this sleep was never executed.
def test_path_setup(self):
"""Validate that the path allows finding the executable."""
from twisted.python.procutils import which
exe = which("relation-get")
self.assertTrue(exe)
self.assertTrue(exe[0].endswith("relation-get"))
@defer.inlineCallbacks
def test_spawn_cli_get_hook(self):
"""Validate the get hook can transmit values to the hook"""
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Populate and verify some data we will
# later extract with the hook
context = self.ua.get_context("client_id")
expected = {"a": "b", "c": "d",
"private-address": "mysql-0.example.com"}
yield context.set(expected)
data = yield context.get("mysql/0")
self.assertEqual(expected, data)
# invoke the hook and process the results
# verifying they are expected
result = yield exe(self.create_hook("relation-get",
"--format=json - mysql/0"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual(json.loads(data), expected)
@defer.inlineCallbacks
def test_spawn_cli_get_value_hook(self):
"""Validate the get hook can transmit values to the hook."""
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Populate and verify some data we will
# later extract with the hook
context = self.ua.get_context("client_id")
expected = {"name": "rabbit", "private-address": "mysql-0.example.com"}
yield context.set(expected)
data = yield context.get("mysql/0")
self.assertEqual(expected, data)
# invoke the hook and process the results
# verifying they are expected
result = yield exe(self.create_hook("relation-get",
"--format=json name mysql/0"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual("rabbit", json.loads(data))
@defer.inlineCallbacks
def test_spawn_cli_get_unit_private_address(self):
"""Private addresses can be retrieved."""
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
result = yield exe(self.create_hook("unit-get", "private-address"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual("mysql-0.example.com", data.strip())
@defer.inlineCallbacks
def test_spawn_cli_get_unit_unknown_public_address(self):
"""If for some hysterical raison, the public address hasn't been set.
We shouldn't error. This should never happen, the unit agent is sets
it on startup.
"""
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
result = yield exe(self.create_hook("unit-get", "public-address"))
self.assertEqual(result, 0)
data = hook_log.getvalue()
self.assertEqual("", data.strip())
def test_get_remote_unit_arg(self):
"""Simple test around remote arg parsing."""
self.change_environment(JUJU_UNIT_NAME="mysql/0",
JUJU_CLIENT_ID="client_id",
JUJU_AGENT_SOCKET=self.socket_path)
client = commands.RelationGetCli()
client.setup_parser()
options = client.parse_args(["-", "mysql/1"])
self.assertEqual(options.unit_name, "mysql/1")
@defer.inlineCallbacks
def test_spawn_cli_set_hook(self):
"""Validate the set hook can set values in zookeeper."""
output = self.capture_logging("hook", level=logging.DEBUG)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Invoke the hook and process the results verifying they are expected
hook = self.create_hook("relation-set", "a=b c=d")
result = yield exe(hook)
self.assertEqual(result, 0)
# Verify the context was flushed to zk
zk_data = yield self.relation.get_data()
self.assertEqual(
{"a": "b", "c": "d", "private-address": "mysql-0.example.com"},
serializer.load(zk_data))
yield exe.ended
self.assertIn(
"Flushed values for hook %r on 'database:42'\n"
" Setting changed: u'a'=u'b' (was unset)\n"
" Setting changed: u'c'=u'd' (was unset)" % (
os.path.basename(hook)),
output.getvalue())
@defer.inlineCallbacks
def test_spawn_cli_set_can_delete_and_modify(self):
"""Validate the set hook can delete values in zookeeper."""
output = self.capture_logging("hook", level=logging.DEBUG)
hook_directory = self.makeDir()
hook_file_path = self.makeFile(
content=("#!/bin/bash\n"
"relation-set existing= absent= new-value=2 "
"changed=abc changed2=xyz\n"
"exit 0\n"),
basename=os.path.join(hook_directory, "set-delete-test"))
os.chmod(hook_file_path, stat.S_IRWXU)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Populate with data that will be deleted
context = self.ua.get_context("client_id")
yield context.set(
{u"existing": u"42",
u"changed": u"a" * 101,
u"changed2": u"a" * 100})
yield context.flush()
# Invoke the hook and process the results verifying they are expected
self.assertTrue(os.path.exists(hook_file_path))
result = yield exe(hook_file_path)
self.assertEqual(result, 0)
# Verify the context was flushed to zk
zk_data = yield self.relation.get_data()
self.assertEqual(
{"new-value": "2", "changed": "abc", "changed2": "xyz",
"private-address": "mysql-0.example.com"},
serializer.load(zk_data))
# Verify that unicode/strings longer than 100 characters in
# representation (including quotes and the u marker) are cut
# off; 100 is the default cutoff used in the change items
# __str__ method
yield exe.ended
self.assertIn(
"Flushed values for hook 'set-delete-test' on 'database:42'\n"
" Setting changed: u'changed'=u'abc' (was u'%s)\n"
" Setting changed: u'changed2'=u'xyz' (was u'%s)\n"
" Setting deleted: u'existing' (was u'42')\n"
" Setting changed: u'new-value'=u'2' (was unset)" % (
"a" * 98, "a" * 98),
output.getvalue())
@defer.inlineCallbacks
def test_spawn_cli_set_noop_only_logs_on_change(self):
"""Validate the set hook only logs flushes when there are changes."""
output = self.capture_logging("hook", level=logging.DEBUG)
hook_directory = self.makeDir()
hook_file_path = self.makeFile(
content=("#!/bin/bash\n"
"relation-set no-change=42 absent=\n"
"exit 0\n"),
basename=os.path.join(hook_directory, "set-does-nothing"))
os.chmod(hook_file_path, stat.S_IRWXU)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Populate with data that will *not* be modified
context = self.ua.get_context("client_id")
yield context.set({"no-change": "42", "untouched": "xyz"})
yield context.flush()
# Invoke the hook and process the results verifying they are expected
self.assertTrue(os.path.exists(hook_file_path))
result = yield exe(hook_file_path)
self.assertEqual(result, 0)
# Verify the context was flushed to zk
zk_data = yield self.relation.get_data()
self.assertEqual({"no-change": "42", "untouched": "xyz",
"private-address": "mysql-0.example.com"},
serializer.load(zk_data))
self.assertNotIn(
"Flushed values for hook 'set-does-nothing'",
output.getvalue())
@defer.inlineCallbacks
def test_logging(self):
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
# The echo hook will echo out the value of the MESSAGE
# variable; it will also echo the ERROR variable to stderr
message = "This is only a test"
error = "All is full of fail"
default = "Default level"
exe.environment["MESSAGE"] = message
exe.environment["ERROR"] = error
exe.environment["DEFAULT"] = default
result = yield exe(self.get_test_hook("echo-hook"))
self.assertEqual(result, 0)
yield exe.ended
self.assertIn(message, self.log.getvalue())
# Logging used to log an empty response dict
# ensure this doesn't happen [b=915506]
self.assertNotIn("{}", self.log.getvalue())
# The 'error' was sent via juju-log
# to the UA. Our test UA has a fake log stream
# which we can check now
output = self.ua.log_file.getvalue()
self.assertIn("ERROR:: " + error, self.log.getvalue())
self.assertIn("INFO:: " + default, self.log.getvalue())
assert message not in output, "Log includes unintended messages"
@defer.inlineCallbacks
def test_spawn_cli_list_hook_smart(self):
"""Validate the get hook can transmit values to the hook."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
context = self.ua.get_context("client_id")
# Directly manipulate the context to the expected list of
# members that we will later extract with the hook
expected = ["alpha/0", "beta/0"]
context._members = expected
# Invoke the hook and process the results verifying they are expected
exe.environment["FORMAT"] = "smart"
result = yield exe(self.create_hook("relation-list",
"--format=smart"))
self.assertEqual(result, 0)
yield exe.ended
self.assertIn("alpha/0\nbeta/0\n", self.log.getvalue())
@defer.inlineCallbacks
def test_spawn_cli_list_hook_eval(self):
"""Validate the get hook can transmit values to the hook."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
context = self.ua.get_context("client_id")
# Directly manipulate the context to the expected list of
# members that we will later extract with the hook
expected = ["alpha/0", "beta/0"]
context._members = expected
# Invoke the hook and process the results verifying they are expected
result = yield exe(self.create_hook("relation-list",
"--format=eval"))
self.assertEqual(result, 0)
yield exe.ended
self.assertIn("alpha/0 beta/0", self.log.getvalue())
@defer.inlineCallbacks
def test_spawn_cli_list_hook_json(self):
"""Validate the get hook can transmit values to the hook."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
context = self.ua.get_context("client_id")
# Directly manipulate the context to the expected list of
# members that we will later extract with the hook
expected = ["alpha/0", "beta/0"]
context._members = expected
# Invoke the hook and process the results verifying they are expected
result = yield exe(self.create_hook("relation-list", "--format json"))
self.assertEqual(result, 0)
yield exe.ended
self.assertIn('["alpha/0", "beta/0"]', self.log.getvalue())

class ChildRelationHookContextsTest(RelationInvokerTestBase):
@defer.inlineCallbacks
def add_a_blog(self, blog_name):
blog_states = yield self.add_opposite_service_unit(
(yield self.add_relation_service_unit_to_another_endpoint(
self.mysql_states,
RelationEndpoint(
blog_name, "client-server", "app", "client"))))
yield blog_states['service_relations'][-1].add_unit_state(
self.mysql_states['unit'])
defer.returnValue(blog_states)
@defer.inlineCallbacks
def add_db_admin_tool(self, admin_name):
"""Add another relation, using a different relation name"""
admin_ep = RelationEndpoint(
admin_name, "client-server", "admin-app", "client")
mysql_ep = RelationEndpoint(
"mysql", "client-server", "db-admin", "server")
yield self.add_relation_service_unit_from_endpoints(
admin_ep, mysql_ep)
@defer.inlineCallbacks
def assert_zk_data(self, context, expected):
internal_relation_id, _ = yield context.get_relation_id_and_scope(
context.relation_ident)
internal_unit_id = (yield context.get_local_unit_state()).internal_id
path = yield context.get_settings_path(internal_unit_id)
data, stat = yield self.client.get(path)
self.assertEqual(serializer.load(data), expected)
@defer.inlineCallbacks
def test_implied_relation_hook_context(self):
"""Verify implied hook context is cached and can get relation ids."""
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
implied = exe.get_context()
self.assertEqual(implied.relation_ident, "db:0")
# Verify that the same hook context for the implied relation
# is returned if referenced by its relation id
self.assertEqual(
implied,
self.ua.server_factory.get_invoker("client_id").
get_cached_relation_hook_context("db:0"))
self.assertEqual(
set((yield implied.get_relation_idents("db"))),
set(["db:0", "db:1"]))
@defer.inlineCallbacks
def test_get_child_relation_hook_context(self):
"""Verify retrieval of a child hook context and methods on it."""
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
# Add another relation, verify it's not yet visible
yield self.add_a_blog("wordpress3")
db0 = exe.get_cached_relation_hook_context("db:0")
db1 = exe.get_cached_relation_hook_context("db:1")
self.assertEqual(db1.relation_ident, "db:1")
self.assertEqual(
set((yield db1.get_relation_idents("db"))),
set(["db:0", "db:1"]))
self.assertEqual(
db1,
exe.get_cached_relation_hook_context("db:1"))
# Not yet visible relation
self.assertRaises(
RelationIdentNotFound,
exe.get_cached_relation_hook_context, "db:2")
# Nonexistent relation idents
self.assertRaises(
RelationIdentNotFound,
exe.get_cached_relation_hook_context, "db:12345")
# Reread parent and child contexts
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
db0 = yield exe.get_context()
db1 = exe.get_cached_relation_hook_context("db:1")
db2 = exe.get_cached_relation_hook_context("db:2")
# Verify that any changes are written out; write directly here
# using the relation contexts
yield db0.set({u"a": u"42", u"b": u"xyz"})
yield db1.set({u"c": u"21"})
yield db2.set({u"d": u"99"})
# Then actually execute a successful hook so flushes occur
# on both parent and children
result = yield exe(self.get_test_hook("success-hook"))
self.assertEqual(result, 0)
yield exe.ended
# Verify that all contexts were flushed properly to ZK
yield self.assert_zk_data(db0, {
u"a": u"42",
u"b": u"xyz",
"private-address": "mysql-0.example.com"})
yield self.assert_zk_data(db1, {
u"c": u"21",
"private-address": "mysql-0.example.com"})
yield self.assert_zk_data(db2, {
u"d": u"99",
"private-address": "mysql-0.example.com"})
# Verify log is written in order of relations: parent first,
# then children
self.assertLogLines(
self.log.getvalue(),
["Cached relation hook contexts on 'db:0': ['db:1']",
"Flushed values for hook 'success-hook' on 'db:0'",
" Setting changed: u'a'=u'42' (was unset)",
" Setting changed: u'b'=u'xyz' (was unset)",
" Setting changed: u'c'=u'21' (was unset) on 'db:1'",
" Setting changed: u'd'=u'99' (was unset) on 'db:2'"])
@defer.inlineCallbacks
def test_get_child_relation_hook_context_while_removing_relation(self):
"""Verify retrieval of a child hook context and methods on it."""
wordpress2_states = yield self.add_a_blog("wordpress2")
yield self.add_a_blog("wordpress3")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
relation_state_manager = RelationStateManager(self.client)
yield relation_state_manager.remove_relation_state(
wordpress2_states["relation"])
self.assertEqual(
set((yield exe.get_context().get_relation_idents("db"))),
set(["db:0", "db:1", "db:2"]))
db0 = exe.get_cached_relation_hook_context("db:0")
db1 = exe.get_cached_relation_hook_context("db:1")
db2 = exe.get_cached_relation_hook_context("db:2")
# Verify that any changes are written out; write directly here
# using the relation contexts
yield db0.set({u"a": u"42", u"b": u"xyz"})
yield db1.set({u"c": u"21"})
yield db2.set({u"d": u"99"})
# Then actually execute a successful hook so flushes occur
# on both parent and children
result = yield exe(self.get_test_hook("success-hook"))
self.assertEqual(result, 0)
yield exe.ended
# Verify that all contexts were flushed properly to ZK: even
# if the db:1 relation is gone, we allow its relation settings
# to be written out to ZK
yield self.assert_zk_data(db0, {
u"a": u"42",
u"b": u"xyz",
"private-address": "mysql-0.example.com"})
yield self.assert_zk_data(db1, {
u"c": "21",
"private-address": "mysql-0.example.com"})
yield self.assert_zk_data(db2, {
u"d": "99",
"private-address": "mysql-0.example.com"})
self.assertLogLines(
self.log.getvalue(),
["Cached relation hook contexts on 'db:0': ['db:1', 'db:2']",
"Flushed values for hook 'success-hook' on 'db:0'",
" Setting changed: u'a'=u'42' (was unset)",
" Setting changed: u'b'=u'xyz' (was unset)",
" Setting changed: u'c'=u'21' (was unset) on 'db:1'",
" Setting changed: u'd'=u'99' (was unset) on 'db:2'"])
# Reread parent and child contexts, verify db:1 relation has
# disappeared from topology
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
self.assertEqual(
set((yield exe.get_context().get_relation_idents("db"))),
set(["db:0", "db:2"]))
yield self.assertRaises(
RelationIdentNotFound,
exe.get_cached_relation_hook_context, "db:1")
@defer.inlineCallbacks
def test_relation_ids(self):
"""Verify `relation-ids` command returns ids separated by newlines."""
hook_log = self.capture_logging("hook")
# Invoker will be in the context of the mysql/0 service unit
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
# Invoker has already been started, verify cache coherence by
# adding another relation.
yield self.add_a_blog("wordpress2")
# Then verify the hook lists the relation ids corresponding to
# the relation name `db`
hook = self.create_hook("relation-ids", "db")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
# Smart formatting outputs one id per line
self.assertEqual(
hook_log.getvalue(), "db:0\n\n")
# But newlines are just whitespace to the shell or to Python,
# so they can be split accordingly, adhering to the letter of
# the spec
self.assertEqual(
hook_log.getvalue().split(), ["db:0"])
@defer.inlineCallbacks
def test_relation_ids_json_format(self):
"""Verify `relation-ids --format=json` command returns ids in json."""
yield self.add_a_blog("wordpress2")
yield self.add_db_admin_tool("admin")
hook_log = self.capture_logging("hook")
# Invoker will be in the context of the mysql/0 service unit
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
# Then verify the hook lists the relation ids corresponding to
# the relation name `db`
hook = self.create_hook("relation-ids", "--format=json db")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assertEqual(
set(json.loads(hook_log.getvalue())),
set(["db:0", "db:1"]))
@defer.inlineCallbacks
def test_relation_ids_no_relation_name(self):
"""Verify returns all relation ids if relation name not specified."""
yield self.add_a_blog("wordpress2")
yield self.add_db_admin_tool("admin")
# Invoker will be in the context of the mysql/0 service
# unit. This test file's mock unit agent does not set the
# additional environment variables for relation hooks that are
# set by juju.unit.lifecycle.RelationInvoker; instead they have
# to be set by individual tests.
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
exe.environment["JUJU_RELATION"] = "db"
# Then verify the hook lists the relation ids corresponding to
# to JUJU_RELATION (="db")
hook_log = self.capture_logging("hook")
hook = self.create_hook("relation-ids", "--format=json")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assertEqual(
set(json.loads(hook_log.getvalue())),
set(["db:0", "db:1"]))
# This time pretend this is a nonrelational hook context.
# Despite the relation arguments passed to get_invoker below,
# it is a nonrelational hook as far as the hook commands are
# concerned, because JUJU_RELATION is not set:
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
hook_log = self.capture_logging("hook")
hook = self.create_hook("relation-ids", "--format=json")
result = yield exe(hook)
# Currently, exceptions of all hook commands are only logged;
# the exit code is always set to 0.
self.assertEqual(result, 0)
yield exe.ended
self.assertIn(
("juju.hooks.protocol.MustSpecifyRelationName: "
"Relation name must be specified"),
hook_log.getvalue())
@defer.inlineCallbacks
def test_relation_ids_nonexistent_relation_name(self):
"""Verify an empty listing is returned if name does not exist"""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-ids does-not-exist 2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stdout", "stderr"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"], "")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_set_with_relation_id_option(self):
"""Verify `relation-set` works with -r option."""
# Invoker will be in the context of the db:0 relation
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
# But set from db:1
hook = self.create_hook("relation-set", "-r db:1 a=42 b=xyz")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
db1 = exe.get_cached_relation_hook_context("db:1")
yield self.assert_zk_data(db1, {
"a": "42",
"b": "xyz",
"private-address": "mysql-0.example.com"})
self.assertLogLines(
self.log.getvalue(),
["Cached relation hook contexts on 'db:0': ['db:1']",
"Flushed values for hook %r on 'db:0'" % os.path.basename(hook),
" Setting changed: u'a'=u'42' (was unset) on 'db:1'",
" Setting changed: u'b'=u'xyz' (was unset) on 'db:1'"])
@defer.inlineCallbacks
def test_relation_set_with_nonexistent_relation_id(self):
"""Verify error put on stderr when using nonexistent relation id."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-set -r db:12345 "
"k1=v1 k2=v2 2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stdout", "stderr"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"], "Relation not found for - db:12345\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_set_with_invalid_relation_id(self):
"""Verify message is written to stderr for invalid formatted id."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-set -r invalid-id:forty-two "
"k1=v1 k2=v2 2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stderr", "stdout"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"],
"Not a valid relation id: 'invalid-id:forty-two'\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_get_with_relation_id_option(self):
"""Verify `relation-get` works with -r option."""
yield self.add_a_blog("wordpress2")
hook_log = self.capture_logging("hook")
# Invoker will be in the context of the db:0 relation
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
# First set through the context
db1 = exe.get_cached_relation_hook_context("db:1")
yield db1.set({"name": "whiterabbit"})
# Then get from db:1
hook = self.create_hook("relation-get",
"--format=json -r db:1 - mysql/0")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assertEqual(
json.loads(hook_log.getvalue()),
{"private-address": "mysql-0.example.com",
"name": "whiterabbit"})
@defer.inlineCallbacks
def test_relation_get_with_nonexistent_relation_id(self):
"""Verify error put on stderr when using nonexistent relation id."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"settings=$({bin-path}/relation-get -r db:12345 - mysql/0 "
"2> {stderr})\n"
"echo -n $settings > {settings}\n",
files=["settings", "stderr"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["settings"], "")
self.assert_file(args["stderr"], "Relation not found for - db:12345\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_get_with_invalid_relation_id(self):
"""Verify message is written to stderr for invalid formatted id."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-get -r invalid-id:forty-two - "
"2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stderr", "stdout"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stderr"],
"Not a valid relation id: 'invalid-id:forty-two'\n")
self.assert_file(args["stdout"], "")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_get_unset_remote_unit_in_env(self):
"""Verify error is reported if JUJU_REMOTE_UNIT is not defined."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
self.assertNotIn("JUJU_REMOTE_UNIT", exe.environment)
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-get -r db:0 - 2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stderr", "stdout"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"], "Unit name is not defined\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_list_with_relation_id_option(self):
"""Verify `relation-list` works with -r option."""
yield self.add_a_blog("wordpress2")
hook_log = self.capture_logging("hook")
# Invoker will be in the context of the db:0 relation
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation,
client_id="client_id")
# Then verify relation membership can be listed for db:1
hook = self.create_hook("relation-list", "--format=json -r db:1")
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assertEqual(
json.loads(hook_log.getvalue()),
["wordpress2/0"])
@defer.inlineCallbacks
def test_relation_list_with_nonexistent_relation_id(self):
"""Verify a nonexistent relation id writes message to stderr."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-list -r db:12345 2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stdout", "stderr"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"], "Relation not found for - db:12345\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))
@defer.inlineCallbacks
def test_relation_list_with_invalid_relation_id(self):
"""Verify message is written to stderr for invalid formatted id."""
hook_log = self.capture_logging("hook", level=logging.DEBUG)
yield self.add_a_blog("wordpress2")
exe = yield self.ua.get_invoker(
"db:0", "add", "mysql/0", self.relation, client_id="client_id")
exe.environment["JUJU_REMOTE_UNIT"] = "wordpress/0"
hook, args = self.create_capturing_hook(
"#!/bin/bash\n"
"set -eu\n"
"data=$({bin-path}/relation-list -r invalid-id:forty-two "
"2> {stderr})\n"
"echo -n $data > {stdout}\n",
files=["stderr", "stdout"])
result = yield exe(hook)
self.assertEqual(result, 0)
yield exe.ended
self.assert_file(args["stdout"], "")
self.assert_file(args["stderr"],
"Not a valid relation id: 'invalid-id:forty-two'\n")
self.assertEqual(
hook_log.getvalue(),
"Cached relation hook contexts on 'db:0': ['db:1']\n"
"hook {} exited, exit code 0.\n".format(os.path.basename(hook)))

class PortCommandsTest(RelationInvokerTestBase):
def test_path_setup(self):
"""Validate that the path allows finding the executable."""
from twisted.python.procutils import which
open_port_exe = which("open-port")
self.assertTrue(open_port_exe)
self.assertTrue(open_port_exe[0].endswith("open-port"))
close_port_exe = which("close-port")
self.assertTrue(close_port_exe)
self.assertTrue(close_port_exe[0].endswith("close-port"))
@defer.inlineCallbacks
def test_open_and_close_ports(self):
"""Verify that port hook commands run and changes are immediate."""
unit_state = self.mysql_states["unit"]
self.assertEqual((yield unit_state.get_open_ports()), [])
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
result = yield exe(self.create_hook("open-port", "80"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"}])
result = yield exe(self.create_hook("open-port", "53/udp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"},
{"port": 53, "proto": "udp"}])
result = yield exe(self.create_hook("open-port", "53/tcp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"},
{"port": 53, "proto": "udp"},
{"port": 53, "proto": "tcp"}])
result = yield exe(self.create_hook("open-port", "443/tcp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"},
{"port": 53, "proto": "udp"},
{"port": 53, "proto": "tcp"},
{"port": 443, "proto": "tcp"}])
result = yield exe(self.create_hook("close-port", "80/tcp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 53, "proto": "udp"},
{"port": 53, "proto": "tcp"},
{"port": 443, "proto": "tcp"}])
yield exe.ended
self.assertLogLines(
self.log.getvalue(), [
"opened 80/tcp",
"opened 53/udp",
"opened 443/tcp",
"closed 80/tcp"])
@defer.inlineCallbacks
def test_open_port_args(self):
"""Verify that open-port properly reports arg parse errors."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
result = yield self.assertFailure(
exe(self.create_hook("open-port", "80/invalid-protocol")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"open-port: error: argument PORT[/PROTOCOL]: "
"Invalid protocol, must be 'tcp' or 'udp', got 'invalid-protocol'",
self.log.getvalue())
result = yield self.assertFailure(
exe(self.create_hook("open-port", "0/tcp")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"open-port: error: argument PORT[/PROTOCOL]: "
"Invalid port, must be from 1 to 65535, got 0",
self.log.getvalue())
result = yield self.assertFailure(
exe(self.create_hook("open-port", "80/udp/extra-info")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"open-port: error: argument PORT[/PROTOCOL]: "
"Invalid format for port/protocol, got '80/udp/extra-info",
self.log.getvalue())
@defer.inlineCallbacks
def test_close_port_args(self):
"""Verify that close-port properly reports arg parse errors."""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
result = yield self.assertFailure(
exe(self.create_hook("close-port", "80/invalid-protocol")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"close-port: error: argument PORT[/PROTOCOL]: "
"Invalid protocol, must be 'tcp' or 'udp', got 'invalid-protocol'",
self.log.getvalue())
result = yield self.assertFailure(
exe(self.create_hook("close-port", "0/tcp")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"close-port: error: argument PORT[/PROTOCOL]: "
"Invalid port, must be from 1 to 65535, got 0",
self.log.getvalue())
result = yield self.assertFailure(
exe(self.create_hook("close-port", "80/udp/extra-info")),
errors.CharmInvocationError)
self.assertEqual(result.exit_code, 2)
yield exe.ended
self.assertIn(
"close-port: error: argument PORT[/PROTOCOL]: "
"Invalid format for port/protocol, got '80/udp/extra-info",
self.log.getvalue())

class SubordinateRelationInvokerTest(InvokerTestBase, RelationTestBase):
@defer.inlineCallbacks
def setUp(self):
yield super(SubordinateRelationInvokerTest, self).setUp()
self.log = self.capture_logging(
formatter=logging.Formatter(
"%(name)s:%(levelname)s:: %(message)s"),
level=logging.DEBUG)
self.update_invoker_env("mysql/0", "logging/0")
self.socket_path = self.makeFile()
unit_dir = self.makeDir()
self.makeDir(path=os.path.join(unit_dir, "charm"))
self.ua = MockUnitAgent(
self.client,
self.socket_path,
unit_dir)
yield self._build_relation()
@defer.inlineCallbacks
def tearDown(self):
self.ua.stop()
yield super(SubordinateRelationInvokerTest, self).tearDown()
@defer.inlineCallbacks
def _build_relation(self):
mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info",
"server", "global")
logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
"client", "container")
mysql, my_units = yield self.get_service_and_units_by_charm_name(
"mysql", 2)
self.assertFalse((yield mysql.is_subordinate()))
log, log_units = yield self.get_service_and_units_by_charm_name(
"logging")
self.assertTrue((yield log.is_subordinate()))
# add the relationship so we can create units with containers
relation_state, service_states = (yield
self.relation_manager.add_relation_state(
mysql_ep, logging_ep))
log, log_units = yield self.get_service_and_units_by_charm_name(
"logging",
containers=my_units)
self.assertTrue((yield log.is_subordinate()))
for lu in log_units:
self.assertTrue((yield lu.is_subordinate()))
mu1, mu2 = my_units
lu1, lu2 = log_units
self.mysql_units = my_units
self.log_units = log_units
mystate = pick_attr(service_states, relation_role="server")
logstate = pick_attr(service_states, relation_role="client")
yield mystate.add_unit_state(mu1)
self.relation = yield logstate.add_unit_state(lu1)
# add the second container
yield mystate.add_unit_state(mu2)
self.relation2 = yield logstate.add_unit_state(lu2)
@defer.inlineCallbacks
def test_subordinate_invocation(self):
exe = yield self.ua.get_invoker(
"juju-info", "add", "mysql/0", self.relation)
result = yield exe(self.create_hook("relation-list",
"--format=smart"))
self.assertEqual(result, 0)
yield exe.ended
# verify that we see the proper unit
self.assertIn("mysql/0", self.log.getvalue())
# we don't see units in the other container
self.assertNotIn("mysql/1", self.log.getvalue())
# reset the log and verify the other container
self.log.seek(0)
exe = yield self.ua.get_invoker(
"juju-info", "add", "mysql/1", self.relation2)
result = yield exe(self.create_hook("relation-list",
"--format=smart"))
self.assertEqual(result, 0)
# verify that we see the proper unit
yield exe.ended
self.assertIn("mysql/1", self.log.getvalue())
# we don't see units in the other container
self.assertNotIn("mysql/0", self.log.getvalue())
@defer.inlineCallbacks
def test_open_and_close_ports(self):
"""Verify that port hook commands run and changes are immediate."""
unit_state = self.log_units[0]
container_state = self.mysql_units[0]
self.assertEqual((yield unit_state.get_open_ports()), [])
exe = yield self.ua.get_invoker(
"database:42", "add", "logging/0", self.relation)
result = yield exe(self.create_hook("open-port", "80"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"}])
self.assertEqual(
(yield container_state.get_open_ports()),
[{"port": 80, "proto": "tcp"}])
result = yield exe(self.create_hook("open-port", "53/udp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 80, "proto": "tcp"},
{"port": 53, "proto": "udp"}])
self.assertEqual(
(yield container_state.get_open_ports()),
[{"port": 80, "proto": "tcp"},
{"port": 53, "proto": "udp"}])
result = yield exe(self.create_hook("close-port", "80/tcp"))
self.assertEqual(result, 0)
self.assertEqual(
(yield unit_state.get_open_ports()),
[{"port": 53, "proto": "udp"}])
self.assertEqual(
(yield container_state.get_open_ports()),
[{"port": 53, "proto": "udp"}])
yield exe.ended
self.assertLogLines(
self.log.getvalue(), [
"opened 80/tcp",
"opened 53/udp",
"closed 80/tcp"])

class TestCharmFormatV1(RelationInvokerTestBase):
@defer.inlineCallbacks
def _default_relations(self):
"""Intercept mysql/wordpress setup to ensure v1 charm format"""
# The mysql charm in the test repository has no format defined
yield self.add_service_from_charm("mysql", charm_name="mysql")
yield super(TestCharmFormatV1, self)._default_relations()
@defer.inlineCallbacks
def test_environment(self):
"""Ensure that an implicit setting of format: 1 works properly"""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
env = exe.get_environment()
self.assertEqual(env["_JUJU_CHARM_FORMAT"], "1")
@defer.inlineCallbacks
def test_output(self):
"""Verify roundtripping"""
hook_debug_log = capture_separate_log("hook", level=logging.DEBUG)
hook_log = capture_separate_log("hook", level=logging.INFO)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
set_hook = self.create_hook(
"relation-set",
"b=true i=42 f=1.23 s=ascii u=䏿–‡")
yield exe(set_hook)
result = yield exe(self.create_hook("relation-get", "- mysql/0"))
self.assertEqual(result, 0)
# No guarantee on output ordering, so to keep this test stable,
# first a little parsing work:
output = hook_log.getvalue()
self.assertEqual(output[0], "{")
self.assertEqual(output[-3:-1], "}\n")
self.assertEqual(
set(output[1:-3].split(", ")),
set(["u'b': u'true'",
"u'f': u'1.23'",
"u'i': u'42'",
"u'private-address': u'mysql-0.example.com'",
"u's': u'ascii'",
"u'u': u'\\u4e2d\\u6587'"]))
self.assertLogLines(
hook_debug_log.getvalue(),
["Flushed values for hook %r on 'database:42'" % (
os.path.basename(set_hook),),
" Setting changed: u'b'=u'true' (was unset)",
" Setting changed: u'f'=u'1.23' (was unset)",
" Setting changed: u'i'=u'42' (was unset)",
" Setting changed: u's'=u'ascii' (was unset)",
" Setting changed: u'u'=u'\\u4e2d\\u6587' (was unset)"])
# Unlike v2 formatting, non-UTF-8 data will only fail on output
hook_log.truncate()
data_file = self.makeFile("But when I do drink, I prefer \xCA\xFE")
yield exe(self.create_hook(
"relation-set", "d=@%s" % data_file))
result = yield exe(self.create_hook("relation-get", "d mysql/0"))
self.assertEqual(result, 1)
self.assertIn(
"Error: \'utf8\' codec can\'t decode byte 0xca in position 30: "
"invalid continuation byte",
hook_log.getvalue())
class TestCharmFormatV2(RelationInvokerTestBase):
@defer.inlineCallbacks
def _default_relations(self):
"""Intercept mysql/wordpress setup to ensure v2 charm format"""
# The mysql-format-v2 charm defines format:2 in its metadata
yield self.add_service_from_charm(
"mysql", charm_name="mysql-format-v2")
yield super(TestCharmFormatV2, self)._default_relations()
def make_zipped_file(self):
data_file = self.makeFile()
with open(data_file, "wb") as f:
# gzip of 'abc' - note gzip also includes the source
# file name, so it's easiest to keep the bytes stable
# here as fixed data
f.write("\x1f\x8b\x08\x08\xbb\x8bAP\x02\xfftmpr"
"fyP0e\x00KLJ\x06\x00\xc2A$5\x03\x00\x00\x00")
return data_file
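As a sanity check on the fixed bytes above: the blob is a complete gzip stream (FNAME flag set, embedded name "tmprfyP0e") whose payload is "abc". A quick decode sketch, assuming only Python's stdlib gzip module:

```python
import gzip

# The stable gzip bytes used by make_zipped_file above.
blob = (b"\x1f\x8b\x08\x08\xbb\x8bAP\x02\xfftmpr"
        b"fyP0e\x00KLJ\x06\x00\xc2A$5\x03\x00\x00\x00")

# Decompressing recovers the original payload.
assert gzip.decompress(blob) == b"abc"
```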
@defer.inlineCallbacks
def test_environment(self):
"""Ensure that an explicit setting of format: 2 works properly"""
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation)
env = exe.get_environment()
self.assertEqual(env["_JUJU_CHARM_FORMAT"], "2")
@defer.inlineCallbacks
def test_smart_output(self):
"""Verify roundtripping"""
hook_debug_log = capture_separate_log("hook", level=logging.DEBUG)
hook_log = capture_separate_log("hook", level=logging.INFO)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Test the support of raw strings, both from a file and from
# command line. Unicode can also be used - this is just
# rendered as UTF-8 in the shell; the source here is also
# UTF-8 - note it is not a Unicode string, it's a bytestring.
data_file = self.make_zipped_file()
set_hook = self.create_hook(
"relation-set",
"b=true f=1.23 i=42 s=ascii u=䏿–‡ d=@%s "
"r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % (
data_file))
yield exe(set_hook)
result = yield exe(self.create_hook("relation-get", "- mysql/0"))
self.assertEqual(result, 0)
# relation-get - uses YAML to dump keys. YAML guarantees that
# the keys will be sorted lexicographically; note that we
# output UTF-8 for Unicode when dumping YAML, so our source
# text (with this test file in UTF-8 itself) matches the
# output text, as seen in the characters for "zhongwen"
# (Chinese language).
self.assertEqual(
hook_log.getvalue(),
"b: 'true'\n"
"d: !!binary |\n H4sICLuLQVAC/3RtcHJmeVAwZQBLTEoGAMJBJDUDAAAA\n"
"f: '1.23'\n"
"i: '42'\n"
"private-address: mysql-0.example.com\n"
"r: !!binary |\n QnV0IHdoZW4gSSBkbyBkcmluaywgSSBwcmVmZXIgyv4=\n"
"s: ascii\n"
"u: 䏿–‡\n")
# Note: backslashes are necessarily doubled here; r"XYZ"
# strings don't help with hex escapes
self.assertLogLines(
hook_debug_log.getvalue(),
["Flushed values for hook %r on 'database:42'" % (
os.path.basename(set_hook),),
" Setting changed: 'b'='true' (was unset)",
" Setting changed: 'd'='\\x1f\\x8b\\x08\\x08\\xbb\\x8bAP\\x02"
"\\xfftmprfyP0e\\x00KLJ\\x06\\x00\\xc2A$5\\x03"
"\\x00\\x00\\x00' (was unset)",
" Setting changed: 'f'='1.23' (was unset)",
" Setting changed: 'i'='42' (was unset)",
" Setting changed: 'r'='But when I do drink, "
"I prefer \\xca\\xfe' (was unset)",
" Setting changed: 's'='ascii' (was unset)",
" Setting changed: 'u'=u'\\u4e2d\\u6587' (was unset)"
])
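The `!!binary` values in the expected YAML above are just the base64 encoding of the raw settings bytes; for example, the `d` value decodes back to the gzip blob. A quick check (not part of the test itself):

```python
import base64

# The gzip bytes from make_zipped_file, and the !!binary text
# expected for key 'd' in the YAML output above.
blob = (b"\x1f\x8b\x08\x08\xbb\x8bAP\x02\xfftmpr"
        b"fyP0e\x00KLJ\x06\x00\xc2A$5\x03\x00\x00\x00")
encoded = "H4sICLuLQVAC/3RtcHJmeVAwZQBLTEoGAMJBJDUDAAAA"

# Decoding the base64 recovers the raw bytes exactly.
assert base64.b64decode(encoded) == blob
```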
@defer.inlineCallbacks
def test_exact_roundtrip_binary_data(self):
"""Verify that binary data, including \x00, is roundtripped exactly"""
hook_log = capture_separate_log("hook", level=logging.INFO)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
data_file = self.make_zipped_file()
# relation-set can only read null bytes from a file; bash
# would otherwise silently drop them
set_hook = self.create_hook("relation-set", "zipped=@%s" % (
data_file))
result = yield exe(set_hook)
self.assertEqual(result, 0)
# Abuse the create_hook method a little bit by adding a pipe
get_hook = self.create_hook("relation-get", "zipped mysql/0 | zcat")
result = yield exe(get_hook)
self.assertEqual(result, 0)
# Using the hook log for this verification does generate one
# extra \n (seen elsewhere in our tests), but this is just
# test noise: we are guaranteed roundtrip fidelity by using
# the picky tool that is zcat - no extraneous data accepted.
self.assertEqual(hook_log.getvalue(), "abc\n")
@defer.inlineCallbacks
def test_json_output(self):
"""Verify roundtripping"""
hook_log = capture_separate_log("hook", level=logging.INFO)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0", self.relation,
client_id="client_id")
# Test the support of raw strings, both from a file and from
# command line. In addition, test Unicode indirectly by using
# UTF-8. Because the source of this file is marked as UTF-8,
# we can embed such characters directly in bytestrings, not
# just Unicode strings. This also works within the context of
# the shell.
raw = "But when I do drink, I prefer \xca\xfe"
data_file = self.makeFile(raw)
set_hook = self.create_hook(
"relation-set",
"b=true f=1.23 i=42 s=ascii u=䏿–‡ d=@%s "
"r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % (
data_file,))
yield exe(set_hook)
result = yield exe(self.create_hook(
"relation-get", "--format=json - mysql/0"))
self.assertEqual(result, 0)
# YAML serialization internally has converted (transparently)
# UTF-8 to Unicode, which can be rendered by JSON. However the
# "cafe" bytestring is invalid JSON, so verify that it's been
# Base64 encoded.
encoded = base64.b64encode(raw)
self.assertEqual(
hook_log.getvalue(),
'{"b": "true", '
'"d": "%s", '
'"f": "1.23", '
'"i": "42", '
'"private-address": "mysql-0.example.com", '
'"s": "ascii", '
'"r": "%s", '
'"u": "\\u4e2d\\u6587"}\n' % (encoded, encoded))
@defer.inlineCallbacks
def common_relation_set(self):
hook_log = capture_separate_log("hook", level=logging.INFO)
exe = yield self.ua.get_invoker(
"database:42", "add", "mysql/0",
self.relation, client_id="client_id")
raw = "But when I do drink, I prefer \xCA\xFE"
data_file = self.makeFile(raw)
set_hook = self.create_hook(
"relation-set",
"s='some text' u=中文 d=@%s "
"r=\"$(echo -en 'But when I do drink, I prefer \\xCA\\xFE')\"" % (
data_file))
result = yield exe(set_hook)
self.assertEqual(result, 0)
defer.returnValue((exe, hook_log))
@defer.inlineCallbacks
def test_relation_get_ascii(self):
"""Verify that ascii data is roundtripped"""
exe, hook_log = yield self.common_relation_set()
result = yield exe(self.create_hook("relation-get", "s mysql/0"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "some text\n")
@defer.inlineCallbacks
def test_relation_get_raw(self):
"""Verify that raw data is roundtripped"""
exe, hook_log = yield self.common_relation_set()
result = yield exe(self.create_hook("relation-get", "r mysql/0"))
self.assertEqual(result, 0)
self.assertEqual(
hook_log.getvalue(), "But when I do drink, I prefer \xca\xfe\n")
@defer.inlineCallbacks
def test_relation_get_unicode(self):
"""Verify Unicode is roundtripped (via UTF-8) through the shell"""
exe, hook_log = yield self.common_relation_set()
result = yield exe(self.create_hook("relation-get", "u mysql/0"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "䏿–‡\n")
@defer.inlineCallbacks
def setup_config(self):
hook_log = self.capture_logging("hook")
exe = yield self.ua.get_invoker(
"db:42", "add", "mysql/0", self.relation, client_id="client_id")
context = yield self.ua.get_context("client_id")
config = yield context.get_config()
with open(self.make_zipped_file(), "rb") as f:
data = f.read()
config.update({
"b": True,
"f": 1.23,
"i": 42,
"s": "some text",
# uses UTF-8 encoding in this test script
"u": "䏿–‡",
# use high byte and null byte characters
"r": data
})
yield config.write()
defer.returnValue((exe, hook_log))
@defer.inlineCallbacks
def test_config_get_boolean(self):
"""Validate that config-get returns lowercase names of booleans."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "b"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "true\n")
@defer.inlineCallbacks
def test_config_get_float(self):
"""Validate that config-get returns floats without quotes."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "f"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "1.23\n")
@defer.inlineCallbacks
def test_config_get_int(self):
"""Validate that config-get returns ints without quotes."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "i"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "42\n")
@defer.inlineCallbacks
def test_config_get_ascii(self):
"""Validate that config-get returns ascii strings."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "s"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "some text\n")
@defer.inlineCallbacks
def test_config_get_raw(self):
"""Validate config-get can work with high and null bytes."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "r | zcat"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "abc\n")
@defer.inlineCallbacks
def test_config_get_unicode(self):
"""Validate that config-get returns raw strings containing UTF-8."""
exe, hook_log = yield self.setup_config()
result = yield exe(self.create_hook("config-get", "u"))
self.assertEqual(result, 0)
self.assertEqual(hook_log.getvalue(), "䏿–‡\n")
juju-0.7.orig/juju/hooks/tests/test_scheduler.py 0000644 0000000 0000000 00000066733 12135220114 020261 0 ustar 0000000 0000000 import logging
import os
from twisted.internet.defer import (
inlineCallbacks, fail, succeed, Deferred, returnValue)
from juju.lib import serializer
from juju.hooks.scheduler import HookScheduler
from juju.state.tests.test_service import ServiceStateManagerTestBase
class SomeError(Exception):
pass
class HookSchedulerTest(ServiceStateManagerTestBase):
@inlineCallbacks
def setUp(self):
yield super(HookSchedulerTest, self).setUp()
self.client = self.get_zookeeper_client()
self.unit_relation = self.mocker.mock()
self.executions = []
self.service = yield self.add_service_from_charm("wordpress")
self.state_file = self.makeFile()
self.executor = self.collect_executor
self._scheduler = None
self.log_stream = self.capture_logging(
"hook.scheduler", level=logging.DEBUG)
@property
def scheduler(self):
# Create lazily, so we can create with a state file if we want to,
# and swap out collect_executor when helpful to do so.
if self._scheduler is None:
self._scheduler = HookScheduler(
self.client, self.executor, self.unit_relation, "",
"wordpress/0", self.state_file)
return self._scheduler
def collect_executor(self, context, change):
self.executions.append((context, change))
return True
def write_single_unit_state(self):
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1"],
"member_versions": {"u-1": 0},
"unit_ops": {},
"clock_units": {},
"change_queue": [],
"clock_sequence": 1}))
def test_add_expanded_modified(self):
"""An add event is expanded to a modified event.
"""
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.run()
self.assertEqual(len(self.executions), 2)
self.assertTrue(
self.executions[-1][1].change_type == 'modified')
@inlineCallbacks
def test_add_expanded_on_error(self):
"""If the hook exec for an add fails, its still expanded.
"""
results = [succeed(False), succeed(True)]
collected = []
def executor(context, change):
res = results[len(collected)]
collected.append((context, change))
return res
self.executor = executor
self.scheduler.cb_change_members([], ["u-1"])
sched_done = self.scheduler.run()
self.assertEqual(len(collected), 1)
self.assertTrue(
collected[0][1].change_type == 'joined')
yield sched_done
self.assertFalse(self.scheduler.running)
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": ['u-1'],
"member_versions": {"u-1": 0},
"change_queue": [
{'change_type': 'joined',
'members': ['u-1'], 'unit_name': 'u-1'},
{'change_type': 'modified',
'members': ['u-1'], 'unit_name': 'u-1'}]})
@inlineCallbacks
def test_add_and_immediate_remove_can_ellipse_change(self):
"""A concurrent depart of a unit during the join hook elides expansion.
I.e. this is the one scenario where a change hook won't be
executed after a successful join: the remote side is
already gone, so there is little purpose in additionally
executing the modify.
"""
results = [Deferred() for i in range(5)]
collected = []
@inlineCallbacks
def executor(context, change):
res = yield results[len(collected)]
collected.append((context, change))
returnValue(res)
self.executor = executor
self.scheduler.cb_change_members([], ["u-1"])
sched_done = self.scheduler.run()
self.scheduler.cb_change_members(["u-1"], [])
self.assertFalse(collected)
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": [],
"member_versions": {},
"change_queue": [
{'change_type': 'joined', 'members': ['u-1'],
'unit_name': 'u-1'},
{'change_type': 'departed', 'members': [],
'unit_name': 'u-1'},
]})
for i in results:
i.callback(True)
self.scheduler.stop()
yield sched_done
self.assertEqual(
[(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \
for i in collected],
[("u-1", "joined", ["u-1"]),
("u-1", "departed", [])])
@inlineCallbacks
def test_hook_error_doesnt_stop_reduction_of_events_in_clock(self):
"""Reduction of events continues even in the face of a hook error.
"""
results = [succeed(False), succeed(True), succeed(True)]
collected = []
def executor(context, change):
res = results[len(collected)]
collected.append((context, change))
return res
self.executor = executor
self.scheduler.cb_change_members([], ["u-1"])
sched_done = self.scheduler.run()
self.assertEqual(len(collected), 1)
self.assertTrue(
collected[0][1].change_type == 'joined')
yield sched_done
self.assertFalse(self.scheduler.running)
self.scheduler.cb_change_members(["u-1"], ["u-2"])
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": ['u-2'],
"member_versions": {"u-2": 0},
"change_queue": [
{'change_type': 'joined', 'members': ['u-1'],
'unit_name': 'u-1'},
{'change_type': 'joined', 'members': ['u-1', 'u-2'],
'unit_name': 'u-2'},
{'change_type': 'departed', 'members': ['u-2'],
'unit_name': 'u-1'}]})
@inlineCallbacks
def test_depart_hook_error_membership_affects(self):
"""An error in a remove event should not distort the membership.
Also verifies that a restart after an error begins with the errored event.
"""
yield self.write_single_unit_state()
results = [
succeed(True), succeed(False), succeed(True), succeed(True)]
collected = []
def executor(context, change):
res = results[len(collected)]
collected.append((context, change))
return res
self.executor = executor
self.scheduler.cb_change_members(["u-1"], ["u-2"])
self.scheduler.run()
self.assertEqual(
[(i[1].unit_name, i[1].change_type) for i in collected],
[("u-2", "joined"), ("u-1", "departed")])
self.assertFalse(self.scheduler.running)
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": ['u-2'],
"member_versions": {"u-2": 0},
"change_queue": [
{'change_type': 'departed',
'members': ['u-2'],
'unit_name': 'u-1'},
{'change_type': 'modified',
'members': ['u-2'],
'unit_name': 'u-2'}]})
self.scheduler.run()
self.assertEqual(
[(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \
for i in collected],
[("u-2", "joined", ["u-1", "u-2"]),
("u-1", "departed", ["u-2"]),
("u-1", "departed", ["u-2"]),
("u-2", "modified", ["u-2"])])
@inlineCallbacks
def test_restart_after_error_and_pop_starts_with_next_event(self):
"""If a hook errors, the schedule is popped, the next hook is new.
"""
yield self.write_single_unit_state()
results = [
succeed(True), succeed(False), succeed(True), succeed(True)]
collected = []
def executor(context, change):
res = results[len(collected)]
collected.append((context, change))
return res
self.executor = executor
self.scheduler.cb_change_members(["u-1"], ["u-1", "u-2"])
self.scheduler.cb_change_settings((("u-1", 2),))
yield self.scheduler.run()
self.assertFalse(self.scheduler.running)
self.assertEqual(
(yield self.scheduler.pop()),
{'change_type': 'modified', 'unit_name': 'u-1',
'members': ['u-1', 'u-2']})
self.scheduler.run()
self.assertEqual(
[(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \
for i in collected],
[("u-2", "joined", ["u-1", "u-2"]),
("u-1", "modified", ["u-1", "u-2"]),
("u-2", "modified", ["u-1", "u-2"])])
@inlineCallbacks
def test_current_unit_op_changes_while_executing(self):
"""The current operation being executed changing during execution is ok
"""
yield self.write_single_unit_state()
results = [Deferred() for i in range(5)]
collected = []
@inlineCallbacks
def executor(context, change):
res = yield results[len(collected)]
collected.append((context, change))
returnValue(res)
self.executor = executor
self.scheduler.cb_change_settings((("u-1", 2),))
sched_done = self.scheduler.run()
self.scheduler.cb_change_members(["u-1"], [])
self.assertFalse(collected)
results[0].callback(True)
self.assertEqual(collected[-1][1].change_type, "modified")
results[1].callback(True)
self.scheduler.stop()
yield sched_done
self.assertEqual(collected[-1][1].change_type, "departed")
@inlineCallbacks
def test_next_unit_op_changes_during_previous_hook_exec(self):
results = [Deferred() for i in range(5)]
collected = []
@inlineCallbacks
def executor(context, change):
res = yield results[len(collected)]
collected.append((context, change))
returnValue(res)
self.executor = executor
self.scheduler.cb_change_members([], ["u-1", "u-2"])
sched_done = self.scheduler.run()
self.scheduler.cb_change_members(["u-1", "u-2"], ["u-1"])
self.assertFalse(collected)
for i in results:
i.callback(True)
self.scheduler.stop()
yield sched_done
self.assertEqual(
[(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \
for i in collected],
[("u-1", "joined", ["u-1"]),
("u-1", "modified", ["u-1"])])
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": ['u-1'],
"member_versions": {"u-1": 0},
"change_queue": []})
# Event reduction/coalescing cases
def test_reduce_removed_added(self):
""" A remove event for a node followed by an add event,
results in a modify event.
note this isn't possible, unit ids are unique.
"""
self.write_single_unit_state()
self.scheduler.cb_change_members(["u-1"], [])
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.run()
self.assertEqual(len(self.executions), 1)
self.assertEqual(self.executions[0][1].change_type, "modified")
output = ("members changed: old=['u-1'], new=[]",
"members changed: old=[], new=['u-1']",
"start",
"executing hook for u-1:modified\n")
self.assertEqual(self.log_stream.getvalue(), "\n".join(output))
def test_reduce_modify_remove_add(self):
"""A modify, remove, add event for a node results in a modify.
An extra validation of the previous test.
"""
self.write_single_unit_state()
self.scheduler.cb_change_settings([("u-1", 1)])
self.scheduler.cb_change_members(["u-1"], [])
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.run()
self.assertEqual(len(self.executions), 1)
self.assertEqual(self.executions[0][1].change_type, "modified")
def test_reduce_add_modify(self):
"""An add and modify event for a node are coalesced to an add."""
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.cb_change_settings([("u-1", 1)])
self.scheduler.cb_change_settings([("u-1", 1)])
self.scheduler.run()
self.assertEqual(len(self.executions), 2)
self.assertEqual(self.executions[0][1].change_type, "joined")
def test_reduce_add_remove(self):
"""an add followed by a removal results in a noop."""
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.cb_change_members(["u-1"], [])
self.scheduler.run()
self.assertEqual(len(self.executions), 0)
def test_reduce_modify_remove(self):
"""Modifying and then removing a node, results in just the removal."""
self.write_single_unit_state()
self.scheduler.cb_change_settings([("u-1", 1)])
self.scheduler.cb_change_members(["u-1"], [])
self.scheduler.run()
self.assertEqual(len(self.executions), 1)
self.assertEqual(self.executions[0][1].change_type, "departed")
def test_reduce_modify_modify(self):
"""Multiple modifies get coalesced to a single modify."""
# simulate normal startup, the first notify will always be the existing
# membership set.
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.run()
self.scheduler.stop()
self.assertEqual(len(self.executions), 2)
# Now continue the modify/modify reduction.
self.scheduler.cb_change_settings([("u-1", 1)])
self.scheduler.cb_change_settings([("u-1", 2)])
self.scheduler.cb_change_settings([("u-1", 3)])
self.scheduler.run()
self.assertEqual(len(self.executions), 3)
self.assertEqual(self.executions[1][1].change_type, "modified")
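The reduction cases above can be summarised as a small pairwise table. A hypothetical sketch of the coalescing rules the scheduler applies (the real scheduler also tracks members and settings versions):

```python
# Hypothetical coalescing table: (pending, incoming) -> reduced change,
# where None means the pair cancels out entirely.
RULES = {
    ("joined", "departed"): None,        # add then remove -> noop
    ("joined", "modified"): "joined",    # modify folds into pending add
    ("modified", "departed"): "departed",
    ("departed", "joined"): "modified",  # remove then re-add -> modified
    ("modified", "modified"): "modified",
}

def coalesce(pending, incoming):
    """Return the reduced change type, or None for a noop."""
    if pending is None:
        return incoming
    return RULES.get((pending, incoming), incoming)

assert coalesce("joined", "departed") is None
assert coalesce("modified", "departed") == "departed"
assert coalesce("departed", "joined") == "modified"
assert coalesce("joined", "modified") == "joined"
assert coalesce(None, "joined") == "joined"
```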
# Other stuff.
@inlineCallbacks
def test_start_stop(self):
self.assertFalse(self.scheduler.running)
d = self.scheduler.run()
self.assertTrue(self.scheduler.running)
# starting multiple times results in an error
self.assertFailure(self.scheduler.run(), AssertionError)
self.scheduler.stop()
self.assertFalse(self.scheduler.running)
yield d
# stopping multiple times is not an error
yield self.scheduler.stop()
self.assertFalse(self.scheduler.running)
def test_start_stop_start(self):
"""Stop values should only be honored if the scheduler is stopped.
"""
waits = [Deferred(), succeed(True), succeed(True), succeed(True)]
results = []
@inlineCallbacks
def executor(context, change):
res = yield waits[len(results)]
results.append((context, change))
returnValue(res)
scheduler = HookScheduler(
self.client, executor,
self.unit_relation, "", "wordpress/0", self.state_file)
# Start the scheduler
d = scheduler.run()
# Now queue up some changes.
scheduler.cb_change_members([], ["u-1"])
scheduler.cb_change_members(["u-1"], ["u-1", "u-2"])
# Stop the scheduler
scheduler.stop()
yield d
self.assertFalse(scheduler.running)
# Finish the hook execution
waits[0].callback(True)
d = scheduler.run()
self.assertTrue(scheduler.running)
# More changes
scheduler.cb_change_settings([("u-1", 1)])
scheduler.cb_change_settings([("u-2", 1)])
# Scheduler should still be running.
self.assertFalse(d.called)
self.assertEqual(len(results), 4)
@inlineCallbacks
def test_run_requires_writable_state(self):
# Induce lazy creation of scheduler, then break state file
self.scheduler
with open(self.state_file, "w"):
pass
os.chmod(self.state_file, 0)
e = yield self.assertFailure(self.scheduler.run(), AssertionError)
self.assertEquals(str(e), "%s is not writable!" % self.state_file)
def test_empty_state(self):
with open(self.state_file, "w") as f:
f.write(serializer.dump({}))
# Induce lazy creation to verify it can still survive
self.scheduler
@inlineCallbacks
def test_membership_visibility_per_change(self):
"""Hooks are executed against changes, those changes are
associated to a temporal timestamp, however the changes
are scheduled for execution, and the state/time of the
world may have advanced, to present a logically consistent
view, we try to guarantee at a minimum, that hooks will
always see the membership of a relation as it was at the
time of their associated change. In conjunction with the
event reduction, this keeps a consistent but up to date
world view.
"""
self.scheduler.cb_change_members([], ["u-1", "u-2"])
self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"])
self.scheduler.cb_change_settings([("u-2", 1)])
self.scheduler.run()
self.scheduler.stop()
# two reduced events, resulting u-2, u-3 add, two expanded
# u-2, u-3 modified
#self.assertEqual(len(self.executions), 4)
self.assertEqual(
[(i[1].unit_name, i[1].change_type, (yield i[0].get_members())) \
for i in self.executions],
[("u-2", "joined", ["u-2"]),
("u-3", "joined", ["u-2", "u-3"]),
("u-2", "modified", ["u-2", "u-3"]),
("u-3", "modified", ["u-2", "u-3"]),
])
# Now the first execution (u-2 add) should only see members
# from the time of its change, not the current members. However
# since u-1 has been subsequently removed, it no longer retains
# an entry in the membership list.
change_members = yield self.executions[0][0].get_members()
self.assertEqual(change_members, ["u-2"])
self.scheduler.cb_change_settings([("u-2", 2)])
self.scheduler.cb_change_members(["u-2", "u-3"], ["u-2"])
self.scheduler.run()
self.assertEqual(len(self.executions), 6)
self.assertEqual(self.executions[4][1].change_type, "modified")
# Verify modify events see the correct membership.
change_members = yield self.executions[4][0].get_members()
self.assertEqual(change_members, ["u-2", "u-3"])
@inlineCallbacks
def test_membership_visibility_with_change(self):
"""We express a stronger guarantee of the above, namely that
a hook wont see any 'active' members in a membership list, that
it hasn't previously been given a notify of before.
"""
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 0, "u-2": 0},
"change_queue": []}))
self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3", "u-4"])
self.scheduler.cb_change_settings([("u-2", 1)])
self.scheduler.run()
self.scheduler.stop()
# add for u-3, u-4, mod for u3, u4, remove for u-1, modify for u-2
self.assertEqual(len(self.executions), 6)
# Verify members for each change.
self.assertEqual(self.executions[0][1].change_type, "joined")
members = yield self.executions[0][0].get_members()
self.assertEqual(members, ["u-1", "u-2", "u-3"])
self.assertEqual(self.executions[1][1].change_type, "joined")
members = yield self.executions[1][0].get_members()
self.assertEqual(members, ["u-1", "u-2", "u-3", "u-4"])
self.assertEqual(self.executions[2][1].change_type, "departed")
members = yield self.executions[2][0].get_members()
self.assertEqual(members, ["u-2", "u-3", "u-4"])
self.assertEqual(self.executions[3][1].change_type, "modified")
members = yield self.executions[3][0].get_members()
self.assertEqual(members, ["u-2", "u-3", "u-4"])
with open(self.state_file) as f:
state = serializer.load(f.read())
self.assertEquals(state, {
"change_queue": [],
"context_members": ["u-2", "u-3", "u-4"],
"member_versions": {"u-2": 1, "u-3": 0, "u-4": 0}})
@inlineCallbacks
def test_state_is_loaded(self):
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1", "u-2", "u-3"],
"member_versions": {"u-1": 5, "u-2": 2, "u-3": 0},
"change_queue": [
{'unit_name': 'u-1',
'change_type': 'modified',
'members': ['u-1', 'u-2']},
{'unit_name': 'u-3',
'change_type': 'joined',
'members': ['u-1', 'u-2', 'u-3']}]}))
d = self.scheduler.run()
while len(self.executions) < 2:
yield self.sleep(0.1)
self.scheduler.stop()
yield d
self.assertEqual(self.executions[0][1].change_type, "modified")
members = yield self.executions[0][0].get_members()
self.assertEqual(members, ["u-1", "u-2"])
self.assertEqual(self.executions[1][1].change_type, "joined")
members = yield self.executions[1][0].get_members()
self.assertEqual(members, ["u-1", "u-2", "u-3"])
with open(self.state_file) as f:
state = serializer.load(f.read())
self.assertEquals(state, {
"context_members": ["u-1", "u-2", "u-3"],
"member_versions": {"u-1": 5, "u-2": 2, "u-3": 0},
"change_queue": []})
def test_state_is_stored(self):
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 0, "u-2": 2},
"change_queue": []}))
self.scheduler.cb_change_members(["u-1", "u-2"], ["u-2", "u-3"])
self.scheduler.cb_change_settings([("u-2", 3)])
# Add a stop instruction to the queue, which should *not* be saved.
self.scheduler.stop()
with open(self.state_file) as f:
state = serializer.load(f.read())
self.assertEquals(state, {
"context_members": ["u-2", "u-3"],
"member_versions": {"u-2": 3, "u-3": 0},
'change_queue': [{'change_type': 'joined',
'members': ['u-1', 'u-2', 'u-3'],
'unit_name': 'u-3'},
{'change_type': 'departed',
'members': ['u-2', 'u-3'],
'unit_name': 'u-1'},
{'change_type': 'modified',
'members': ['u-2', 'u-3'],
'unit_name': 'u-2'}],
})
@inlineCallbacks
def test_state_stored_after_tick(self):
def execute(context, change):
self.execute_calls += 1
if self.execute_calls > 1:
return fail(SomeError())
return succeed(True)
self.execute_calls = 0
self.executor = execute
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 1, "u-2": 0, "u-3": 0},
"change_queue": [
{"unit_name": "u-1", "change_type": "modified",
"members": ["u-1", "u-2"]},
{"unit_name": "u-3", "change_type": "added",
"members": ["u-1", "u-2", "u-3"]}]}))
d = self.scheduler.run()
while self.execute_calls < 2:
yield self.poke_zk()
yield self.assertFailure(d, SomeError)
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), {
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 1, "u-2": 0, "u-3": 0},
"change_queue": [
{"unit_name": "u-3", "change_type": "added",
"members": ["u-1", "u-2", "u-3"]}]})
@inlineCallbacks
def test_state_not_stored_mid_tick(self):
def execute(context, change):
self.execute_called = True
return fail(SomeError())
self.execute_called = False
self.executor = execute
initial_state = {
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 1, "u-2": 0, "u-3": 0},
"change_queue": [
{"change_type": "modified", "unit_name": "u-1",
"members":["u-1", "u-2"]},
{"change_type": "modified", "unit_name": "u-1",
"members":["u-1", "u-2", "u-3"]},
]}
with open(self.state_file, "w") as f:
f.write(serializer.dump(initial_state))
d = self.scheduler.run()
while not self.execute_called:
yield self.poke_zk()
yield self.assertFailure(d, SomeError)
with open(self.state_file) as f:
self.assertEquals(serializer.load(f.read()), initial_state)
def test_ignore_equal_settings_version(self):
"""
A modified event whose version is not greater than the latest known
version for that unit will be ignored.
"""
self.write_single_unit_state()
self.scheduler.cb_change_settings([("u-1", 0)])
self.scheduler.run()
self.assertEquals(len(self.executions), 0)
def test_settings_version_0_on_add(self):
"""
When a unit is added, we assume its settings version to be 0, and
therefore modified events with version 0 will be ignored.
"""
self.scheduler.cb_change_members([], ["u-1"])
self.scheduler.run()
self.assertEquals(len(self.executions), 2)
self.scheduler.cb_change_settings([("u-1", 0)])
self.assertEquals(len(self.executions), 2)
self.assertEqual(self.executions[0][1].change_type, "joined")
def test_membership_timeslip(self):
"""
Adds and removes are calculated based on known membership state, NOT
on old_units.
"""
with open(self.state_file, "w") as f:
f.write(serializer.dump({
"context_members": ["u-1", "u-2"],
"member_versions": {"u-1": 0, "u-2": 0},
"change_queue": []}))
self.scheduler.cb_change_members(["u-2"], ["u-3", "u-4"])
self.scheduler.run()
output = (
"members changed: old=['u-2'], new=['u-3', 'u-4']",
"old does not match last recorded units: ['u-1', 'u-2']",
"start",
"executing hook for u-3:joined",
"executing hook for u-4:joined",
"executing hook for u-1:departed",
"executing hook for u-2:departed",
"executing hook for u-3:modified",
"executing hook for u-4:modified\n")
self.assertEqual(self.log_stream.getvalue(), "\n".join(output))
juju-0.7.orig/juju/hooks/tests/hooks/echo-hook 0000755 0000000 0000000 00000000106 12135220114 017595 0 ustar 0000000 0000000
#!/bin/bash
echo $MESSAGE
juju-log -l ERROR $ERROR
juju-log $DEFAULT
juju-0.7.orig/juju/hooks/tests/hooks/fail-hook 0000755 0000000 0000000 00000000042 12135220114 017571 0 ustar 0000000 0000000
#!/bin/sh
echo "FAIL" >&2
exit 1
juju-0.7.orig/juju/hooks/tests/hooks/hanging-hook 0000755 0000000 0000000 00000000146 12135220114 020276 0 ustar 0000000 0000000
#!/bin/bash
sleep 0.05 && echo "Slept for 50ms" && sleep 1 && echo "Slept for 1s" && sleep 1000000 &
juju-0.7.orig/juju/hooks/tests/hooks/noexec-hook 0000644 0000000 0000000 00000000000 12135220114 020126 0 ustar 0000000 0000000
juju-0.7.orig/juju/hooks/tests/hooks/sleep-hook 0000755 0000000 0000000 00000000025 12135220114 017767 0 ustar 0000000 0000000
#!/bin/bash
sleep 10
juju-0.7.orig/juju/hooks/tests/hooks/success-hook 0000755 0000000 0000000 00000000045 12135220114 020331 0 ustar 0000000 0000000
#!/bin/sh
echo "EPIC WIN" && exit 0
juju-0.7.orig/juju/lib/__init__.py 0000644 0000000 0000000 00000000000 12135220114 015233 0 ustar 0000000 0000000
juju-0.7.orig/juju/lib/cache.py 0000644 0000000 0000000 00000000613 12135220114 014551 0 ustar 0000000 0000000
import time
class CachedValue(object):
def __init__(self, max_cache, value=None):
self._max_cache = max_cache
self.set(value)
def get(self):
delta = time.time() - self._timestamp
if delta > self._max_cache:
return None
return self._value
def set(self, value):
self._value = value
self._timestamp = time.time()
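A minimal sketch of how this time-based cache behaves. The class is reproduced here so the snippet is self-contained, and the 0.1-second expiry is an illustrative value, not one taken from the codebase:

```python
import time


class CachedValue(object):
    """Holds a value that expires max_cache seconds after it is set."""

    def __init__(self, max_cache, value=None):
        self._max_cache = max_cache
        self.set(value)

    def get(self):
        # Return None once the value is older than the allowed window.
        delta = time.time() - self._timestamp
        if delta > self._max_cache:
            return None
        return self._value

    def set(self, value):
        self._value = value
        self._timestamp = time.time()


cache = CachedValue(max_cache=0.1, value="fresh")
print(cache.get())   # "fresh" while within the window
time.sleep(0.2)
print(cache.get())   # None once the window has passed
```

Note that an expired entry is not evicted; `set()` simply refreshes both the value and its timestamp.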
juju-0.7.orig/juju/lib/filehash.py 0000644 0000000 0000000 00000001035 12135220114 015270 0 ustar 0000000 0000000
def compute_file_hash(hash_type, filename):
"""Simple helper to compute the digest of a file.
@param hash_type: A class like hashlib.sha256.
@param filename: File path to compute the digest from.
"""
hash = hash_type()
with open(filename) as file:
# Chunk the digest extraction to avoid loading large
# files in memory unnecessarily.
while True:
chunk = file.read(8192)
if not chunk:
break
hash.update(chunk)
return hash.hexdigest()
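For reference, a self-contained sketch of the helper in use. The file is opened in binary mode here, which hashing requires under Python 3; the original Python 2 module uses text mode, which is equivalent on that interpreter:

```python
import hashlib
import os
import tempfile


def compute_file_hash(hash_type, filename):
    """Compute the hex digest of a file in 8 KiB chunks."""
    hash_obj = hash_type()
    # Chunked reads avoid loading large files into memory at once.
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(8192)
            if not chunk:
                break
            hash_obj.update(chunk)
    return hash_obj.hexdigest()


fd, path = tempfile.mkstemp()
os.write(fd, b"hello juju")
os.close(fd)
print(compute_file_hash(hashlib.sha256, path))
os.unlink(path)
```

Any hashlib constructor (`hashlib.sha256`, `hashlib.md5`, ...) can be passed as `hash_type`, since each produces an object with `update()` and `hexdigest()`.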
juju-0.7.orig/juju/lib/format.py 0000644 0000000 0000000 00000010633 12135220114 015001 0 ustar 0000000 0000000
"""Utility functions and constants to support uniform i/o formatting."""
import json
import os
import yaml
from juju.errors import JujuError
class BaseFormat(object):
"""Maintains parallel code paths for input and output formatting
through the subclasses PythonFormat (Python str formatting with JSON
encoding) and YAMLFormat.
"""
def parse_keyvalue_pairs(self, pairs):
"""Parses key value pairs, using `_parse_value` for specific format"""
data = {}
for kv in pairs:
if "=" not in kv:
raise JujuError(
"Expected `option=value`. Found `%s`" % kv)
k, v = kv.split("=", 1)
if v.startswith("@"):
# Handle file input, any parsing/sanitization is done next
# with respect to charm format
filename = v[1:]
try:
with open(filename, "r") as f:
v = f.read()
except IOError:
raise JujuError(
"No such file or directory: %s (argument:%s)" % (
filename,
k))
except Exception, e:
raise JujuError("Bad file %s" % e)
data[k] = self._parse_value(k, v)
return data
def _parse_value(self, key, value):
"""Interprets value as a str"""
return value
def should_delete(self, value):
"""Whether `value` implies corresponding key should be deleted"""
return not value.strip()
class PythonFormat(BaseFormat):
"""Supports backwards compatibility through str and JSON encoding."""
charm_format = 1
def format(self, data):
"""Formats `data` using Python str encoding"""
return str(data)
def format_raw(self, data):
"""Add extra \n seen in Python format, so not truly raw"""
return self.format(data) + "\n"
# For the old format: 1, using JSON serialization introduces some
# subtle issues around Unicode conversion that then later results
# in bugginess. For compatibility reasons, we keep these old bugs
# around, by dumping and loading into JSON at appropriate points.
def dump(self, data):
"""Dumps using JSON serialization"""
return json.dumps(data)
def load(self, data):
"""Loads data, but also converts str to Unicode"""
return json.loads(data)
class YAMLFormat(BaseFormat):
"""New format that uses YAML, so no unexpected encoding issues"""
charm_format = 2
def format(self, data):
"""Formats `data` in Juju's preferred YAML format"""
# Return value such that it roundtrips; this allows us to
# report back the boolean false instead of the Python
# output format, False
if data is None:
return ""
serialized = yaml.dump(
data, indent=4, default_flow_style=False, width=80,
allow_unicode=True, Dumper=yaml.CSafeDumper)
if serialized.endswith("\n...\n"):
# Remove explicit doc end sentinel, still valid yaml
serialized = serialized[0:-5]
# Also remove any extra \n, will still be valid yaml
return serialized.rstrip("\n")
def format_raw(self, data):
"""Formats `data` as a raw string if str, otherwise as YAML"""
if isinstance(data, str):
return data
else:
return self.format(data)
# Use the same format for dump
dump = format
def load(self, data):
"""Loads data safely, ensuring no Python specific type info leaks"""
return yaml.load(data, Loader=yaml.CSafeLoader)
def is_valid_charm_format(charm_format):
"""True if `charm_format` is a valid format"""
return charm_format in (PythonFormat.charm_format, YAMLFormat.charm_format)
def get_charm_formatter(charm_format):
"""Map `charm_format` to the implementing strategy for that format"""
if charm_format == PythonFormat.charm_format:
return PythonFormat()
elif charm_format == YAMLFormat.charm_format:
return YAMLFormat()
else:
raise JujuError(
"Expected charm format to be either 1 or 2, got %s" % (
charm_format,))
def get_charm_formatter_from_env():
"""Return the formatter specified by $_JUJU_CHARM_FORMAT"""
return get_charm_formatter(int(
os.environ.get("_JUJU_CHARM_FORMAT", "1")))
juju-0.7.orig/juju/lib/loader.py 0000644 0000000 0000000 00000002160 12135220114 014753 0 ustar 0000000 0000000
_marker = object()
def _get_module_function(specification):
# converts foo.bar.baz to ['foo.bar', 'baz']
try:
data = specification.rsplit('.', 1)
except (ValueError, AttributeError):
data = []
if len(data) != 2:
raise ValueError("Invalid import specification: %r" % (
specification))
return data
def load_symbol(specification):
"""load a symbol from a dot delimited path in the import
namespace.
returns (module, symbol) or raises ImportError
"""
module_path, symbol_name = _get_module_function(specification)
module = __import__(module_path, fromlist=module_path.split())
symbol = getattr(module, symbol_name, _marker)
return (module, symbol)
def get_callable(specification):
"""
Convert a string version of a function name to the callable
object. If no callable can be found an ImportError is raised.
"""
module, callback = load_symbol(specification)
if callback is _marker or not callable(callback):
raise ImportError("No callback found for %s" % (
specification))
return callback
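The loader can be exercised with any importable dotted name. This is a simplified sketch of the two functions above (the original passes `fromlist=module_path.split()`; `fromlist=[symbol_name]` is used here for clarity, and either is enough to make `__import__` return the leaf module):

```python
_marker = object()


def load_symbol(specification):
    # Split "pkg.mod.attr" into module path and attribute name,
    # import the module, then fetch the symbol from it.
    if "." not in specification:
        raise ValueError("Invalid import specification: %r" % specification)
    module_path, symbol_name = specification.rsplit(".", 1)
    module = __import__(module_path, fromlist=[symbol_name])
    return module, getattr(module, symbol_name, _marker)


def get_callable(specification):
    # Resolve a dotted name to a callable, or raise ImportError.
    module, symbol = load_symbol(specification)
    if symbol is _marker or not callable(symbol):
        raise ImportError("No callback found for %s" % specification)
    return symbol


join = get_callable("posixpath.join")
print(join("a", "b"))  # a/b
```

The `_marker` sentinel distinguishes "attribute missing" from an attribute whose value is legitimately `None`.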
juju-0.7.orig/juju/lib/lxc/ 0000755 0000000 0000000 00000000000 12135220114 013722 5 ustar 0000000 0000000
juju-0.7.orig/juju/lib/mocker.py 0000644 0000000 0000000 00000232454 12135220114 015000 0 ustar 0000000 0000000
"""
Mocker
Graceful platform for test doubles in Python: mocks, stubs, fakes, and dummies.
Copyright (c) 2007-2010, Gustavo Niemeyer
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import __builtin__
import tempfile
import unittest
import inspect
import shutil
import types
import sys
import os
import gc
if sys.version_info < (2, 4):
from sets import Set as set # pragma: nocover
__all__ = ["Mocker", "Expect", "expect", "IS", "CONTAINS", "IN", "MATCH",
"ANY", "ARGS", "KWARGS", "MockerTestCase"]
__author__ = "Gustavo Niemeyer "
__license__ = "BSD"
__version__ = "1.0"
ERROR_PREFIX = "[Mocker] "
# --------------------------------------------------------------------
# Exceptions
class MatchError(AssertionError):
"""Raised when an unknown expression is seen in playback mode."""
# --------------------------------------------------------------------
# Helper for chained-style calling.
class expect(object):
"""This is a simple helper that allows a different call-style.
With this class one can comfortably do chaining of calls to the
mocker object responsible by the object being handled. For instance::
expect(obj.attr).result(3).count(1, 2)
Is the same as::
obj.attr
mocker.result(3)
mocker.count(1, 2)
"""
__mocker__ = None
def __init__(self, mock, attr=None):
self._mock = mock
self._attr = attr
def __getattr__(self, attr):
return self.__class__(self._mock, attr)
def __call__(self, *args, **kwargs):
mocker = self.__mocker__
if not mocker:
mocker = self._mock.__mocker__
getattr(mocker, self._attr)(*args, **kwargs)
return self
def Expect(mocker):
"""Create an expect() "function" using the given Mocker instance.
This helper allows defining an expect() "function" which works even
in trickier cases such as:
expect = Expect(mymocker)
expect(iter(mock)).generate([1, 2, 3])
"""
return type("Expect", (expect,), {"__mocker__": mocker})
# --------------------------------------------------------------------
# Extensions to Python's unittest.
class MockerTestCase(unittest.TestCase):
"""unittest.TestCase subclass with Mocker support.
@ivar mocker: The mocker instance.
This is a convenience only. Mocker may easily be used with the
standard C{unittest.TestCase} class if wanted.
Test methods have a Mocker instance available on C{self.mocker}.
At the end of each test method, expectations of the mocker will
be verified, and any requested changes made to the environment
will be restored.
In addition to the integration with Mocker, this class provides
a few additional helper methods.
"""
def __init__(self, methodName="runTest"):
# So here is the trick: we take the real test method, wrap it on
# a function that do the job we have to do, and insert it in the
# *instance* dictionary, so that getattr() will return our
# replacement rather than the class method.
test_method = getattr(self, methodName, None)
if test_method is not None:
def test_method_wrapper():
try:
result = test_method()
except:
raise
else:
if (self.mocker.is_recording() and
self.mocker.get_events()):
raise RuntimeError("Mocker must be put in replay "
"mode with self.mocker.replay()")
if (hasattr(result, "addCallback") and
hasattr(result, "addErrback")):
def verify(result):
self.mocker.verify()
return result
result.addCallback(verify)
else:
self.mocker.verify()
self.mocker.restore()
return result
# Copy all attributes from the original method..
for attr in dir(test_method):
# .. unless they're present in our wrapper already.
if not hasattr(test_method_wrapper, attr) or attr == "__doc__":
setattr(test_method_wrapper, attr,
getattr(test_method, attr))
setattr(self, methodName, test_method_wrapper)
# We could overload run() normally, but other well-known testing
# frameworks do it as well, and some of them won't call the super,
# which might mean that cleanup wouldn't happen. With that in mind,
# we make integration easier by using the following trick.
run_method = self.run
def run_wrapper(*args, **kwargs):
try:
return run_method(*args, **kwargs)
finally:
self.__cleanup()
self.run = run_wrapper
self.mocker = Mocker()
self.expect = Expect(self.mocker)
self.__cleanup_funcs = []
self.__cleanup_paths = []
super(MockerTestCase, self).__init__(methodName)
def __call__(self, *args, **kwargs):
# This is necessary for Python 2.3 only, because it didn't use run(),
# which is supported above.
try:
super(MockerTestCase, self).__call__(*args, **kwargs)
finally:
if sys.version_info < (2, 4):
self.__cleanup()
def __cleanup(self):
for path in self.__cleanup_paths:
if os.path.isfile(path):
os.unlink(path)
elif os.path.isdir(path):
shutil.rmtree(path)
self.mocker.reset()
for func, args, kwargs in self.__cleanup_funcs:
func(*args, **kwargs)
def addCleanup(self, func, *args, **kwargs):
self.__cleanup_funcs.append((func, args, kwargs))
def makeFile(self, content=None, suffix="", prefix="tmp", basename=None,
dirname=None, path=None):
"""Create a temporary file and return the path to it.
@param content: Initial content for the file.
@param suffix: Suffix to be given to the file's basename.
@param prefix: Prefix to be given to the file's basename.
@param basename: Full basename for the file.
@param dirname: Put file inside this directory.
The file is removed after the test runs.
"""
if path is not None:
self.__cleanup_paths.append(path)
elif basename is not None:
if dirname is None:
dirname = tempfile.mkdtemp()
self.__cleanup_paths.append(dirname)
path = os.path.join(dirname, basename)
else:
fd, path = tempfile.mkstemp(suffix, prefix, dirname)
self.__cleanup_paths.append(path)
os.close(fd)
if content is None:
os.unlink(path)
if content is not None:
file = open(path, "w")
file.write(content)
file.close()
return path
def makeDir(self, suffix="", prefix="tmp", dirname=None, path=None):
"""Create a temporary directory and return the path to it.
@param suffix: Suffix to be given to the file's basename.
@param prefix: Prefix to be given to the file's basename.
@param dirname: Put directory inside this parent directory.
The directory is removed after the test runs.
"""
if path is not None:
os.makedirs(path)
else:
path = tempfile.mkdtemp(suffix, prefix, dirname)
self.__cleanup_paths.append(path)
return path
def failUnlessIs(self, first, second, msg=None):
"""Assert that C{first} is the same object as C{second}."""
if first is not second:
raise self.failureException(msg or "%r is not %r" % (first, second))
def failIfIs(self, first, second, msg=None):
"""Assert that C{first} is not the same object as C{second}."""
if first is second:
raise self.failureException(msg or "%r is %r" % (first, second))
def failUnlessIn(self, first, second, msg=None):
"""Assert that C{first} is contained in C{second}."""
if first not in second:
raise self.failureException(msg or "%r not in %r" % (first, second))
def failUnlessStartsWith(self, first, second, msg=None):
"""Assert that C{first} starts with C{second}."""
if first[:len(second)] != second:
raise self.failureException(msg or "%r doesn't start with %r" %
(first, second))
def failIfStartsWith(self, first, second, msg=None):
"""Assert that C{first} doesn't start with C{second}."""
if first[:len(second)] == second:
raise self.failureException(msg or "%r starts with %r" %
(first, second))
def failUnlessEndsWith(self, first, second, msg=None):
"""Assert that C{first} starts with C{second}."""
if first[len(first)-len(second):] != second:
raise self.failureException(msg or "%r doesn't end with %r" %
(first, second))
def failIfEndsWith(self, first, second, msg=None):
"""Assert that C{first} doesn't start with C{second}."""
if first[len(first)-len(second):] == second:
raise self.failureException(msg or "%r ends with %r" %
(first, second))
def failIfIn(self, first, second, msg=None):
"""Assert that C{first} is not contained in C{second}."""
if first in second:
raise self.failureException(msg or "%r in %r" % (first, second))
def failUnlessApproximates(self, first, second, tolerance, msg=None):
"""Assert that C{first} is near C{second} by at most C{tolerance}."""
if abs(first - second) > tolerance:
raise self.failureException(msg or "abs(%r - %r) > %r" %
(first, second, tolerance))
def failIfApproximates(self, first, second, tolerance, msg=None):
"""Assert that C{first} is far from C{second} by at least C{tolerance}.
"""
if abs(first - second) <= tolerance:
raise self.failureException(msg or "abs(%r - %r) <= %r" %
(first, second, tolerance))
def failUnlessMethodsMatch(self, first, second):
"""Assert that public methods in C{first} are present in C{second}.
This method asserts that all public methods found in C{first} are also
present in C{second} and accept the same arguments. C{first} may
have its own private methods, though, and may not have all methods
found in C{second}. Note that if a private method in C{first} matches
the name of one in C{second}, their specification is still compared.
This is useful to verify if a fake or stub class has the same API as
the real class being simulated.
"""
first_methods = dict(inspect.getmembers(first, inspect.ismethod))
second_methods = dict(inspect.getmembers(second, inspect.ismethod))
for name, first_method in first_methods.iteritems():
first_argspec = inspect.getargspec(first_method)
first_formatted = inspect.formatargspec(*first_argspec)
second_method = second_methods.get(name)
if second_method is None:
if name[:1] == "_":
continue # First may have its own private methods.
raise self.failureException("%s.%s%s not present in %s" %
(first.__name__, name, first_formatted, second.__name__))
second_argspec = inspect.getargspec(second_method)
if first_argspec != second_argspec:
second_formatted = inspect.formatargspec(*second_argspec)
raise self.failureException("%s.%s%s != %s.%s%s" %
(first.__name__, name, first_formatted,
second.__name__, name, second_formatted))
def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):
"""
Fail unless an exception of class excClass is thrown by callableObj
when invoked with arguments args and keyword arguments kwargs. If a
different type of exception is thrown, it will not be caught, and the
test case will be deemed to have suffered an error, exactly as for an
unexpected exception. It returns the exception instance if it matches
the given exception class.
"""
try:
result = callableObj(*args, **kwargs)
except excClass, e:
return e
else:
excName = excClass
if hasattr(excClass, "__name__"):
excName = excClass.__name__
raise self.failureException(
"%s not raised (%r returned)" % (excName, result))
assertIs = failUnlessIs
assertIsNot = failIfIs
assertIn = failUnlessIn
assertNotIn = failIfIn
assertStartsWith = failUnlessStartsWith
assertNotStartsWith = failIfStartsWith
assertEndsWith = failUnlessEndsWith
assertNotEndsWith = failIfEndsWith
assertApproximates = failUnlessApproximates
assertNotApproximates = failIfApproximates
assertMethodsMatch = failUnlessMethodsMatch
assertRaises = failUnlessRaises
# The following are missing in Python < 2.4.
assertTrue = unittest.TestCase.failUnless
assertFalse = unittest.TestCase.failIf
# The following is provided for compatibility with Twisted's trial.
assertIdentical = assertIs
assertNotIdentical = assertIsNot
failUnlessIdentical = failUnlessIs
failIfIdentical = failIfIs
# --------------------------------------------------------------------
# Mocker.
class classinstancemethod(object):
def __init__(self, method):
self.method = method
def __get__(self, obj, cls=None):
def bound_method(*args, **kwargs):
return self.method(cls, obj, *args, **kwargs)
return bound_method
class MockerBase(object):
"""Controller of mock objects.
A mocker instance is used to command recording and replay of
expectations on any number of mock objects.
Expectations should be expressed for the mock object while in
record mode (the initial one) by using the mock object itself,
and using the mocker (and/or C{expect()} as a helper) to define
additional behavior for each event. For instance::
mock = mocker.mock()
mock.hello()
mocker.result("Hi!")
mocker.replay()
assert mock.hello() == "Hi!"
mocker.restore()
mocker.verify()
In this short excerpt a mock object is being created, then an
expectation of a call to the C{hello()} method was recorded, and
when called the method should return the value C{"Hi!"}. Then, the
mocker is put in replay mode, and the expectation is satisfied by
calling the C{hello()} method, which indeed returns C{"Hi!"}. Finally,
a call to the L{restore()} method is performed to undo any needed
changes made in the environment, and the L{verify()} method is
called to ensure that all defined expectations were met.
The same logic can be expressed more elegantly using the
C{with mocker:} statement, as follows::
mock = mocker.mock()
mock.hello()
mocker.result("Hi!")
with mocker:
assert mock.hello() == "Hi!"
Also, the MockerTestCase class, which integrates the mocker on
a unittest.TestCase subclass, may be used to reduce the overhead
of controlling the mocker. A test could be written as follows::
class SampleTest(MockerTestCase):
def test_hello(self):
mock = self.mocker.mock()
mock.hello()
self.mocker.result("Hi!")
self.mocker.replay()
self.assertEquals(mock.hello(), "Hi!")
"""
_recorders = []
# For convenience only.
on = expect
class __metaclass__(type):
def __init__(self, name, bases, dict):
# Make independent lists on each subclass, inheriting from parent.
self._recorders = list(getattr(self, "_recorders", ()))
def __init__(self):
self._recorders = self._recorders[:]
self._events = []
self._recording = True
self._ordering = False
self._last_orderer = None
def is_recording(self):
"""Return True if in recording mode, False if in replay mode.
Recording is the initial state.
"""
return self._recording
def replay(self):
"""Change to replay mode, where recorded events are reproduced.
If already in replay mode, the mocker will be restored, with all
expectations reset, and then put again in replay mode.
An alternative and more comfortable way to replay changes is
using the 'with' statement, as follows::
mocker = Mocker()
with mocker:
The 'with' statement will automatically put mocker in replay
mode, and will also verify if all events were correctly reproduced
at the end (using L{verify()}), and also restore any changes done
in the environment (with L{restore()}).
Also check the MockerTestCase class, which integrates the
unittest.TestCase class with mocker.
"""
if not self._recording:
for event in self._events:
event.restore()
else:
self._recording = False
for event in self._events:
event.replay()
def restore(self):
"""Restore changes in the environment, and return to recording mode.
This should always be called after the test is complete (succeeding
or not). There are ways to call this method automatically on
completion (e.g. using a C{with mocker:} statement, or using the
L{MockerTestCase} class).
"""
if not self._recording:
self._recording = True
for event in self._events:
event.restore()
def reset(self):
"""Reset the mocker state.
This will restore environment changes, if currently in replay
mode, and then remove all events previously recorded.
"""
if not self._recording:
self.restore()
self.unorder()
del self._events[:]
def get_events(self):
"""Return all recorded events."""
return self._events[:]
def add_event(self, event):
"""Add an event.
This method is used internally by the implementation, and
shouldn't be needed on normal mocker usage.
"""
self._events.append(event)
if self._ordering:
orderer = event.add_task(Orderer(event.path))
if self._last_orderer:
orderer.add_dependency(self._last_orderer)
self._last_orderer = orderer
return event
def verify(self):
"""Check if all expectations were met, and raise AssertionError if not.
The exception message will include a nice description of which
expectations were not met, and why.
"""
errors = []
for event in self._events:
try:
event.verify()
except AssertionError, e:
error = str(e)
if not error:
raise RuntimeError("Empty error message from %r"
% event)
errors.append(error)
if errors:
message = [ERROR_PREFIX + "Unmet expectations:", ""]
for error in errors:
lines = error.splitlines()
message.append("=> " + lines.pop(0))
message.extend([" " + line for line in lines])
message.append("")
raise AssertionError(os.linesep.join(message))
def mock(self, spec_and_type=None, spec=None, type=None,
name=None, count=True):
"""Return a new mock object.
@param spec_and_type: Handy positional argument which sets both
spec and type.
@param spec: Method calls will be checked for correctness against
the given class.
@param type: If set, the Mock's __class__ attribute will return
the given type. This will make C{isinstance()} calls
on the object work.
@param name: Name for the mock object, used in the representation of
expressions. The name is rarely needed, as it's usually
guessed correctly from the variable name used.
@param count: If set to false, expressions may be executed any number
of times, unless an expectation is explicitly set using
the L{count()} method. By default, expressions are
expected once.
"""
if spec_and_type is not None:
spec = type = spec_and_type
return Mock(self, spec=spec, type=type, name=name, count=count)
def proxy(self, object, spec=True, type=True, name=None, count=True,
passthrough=True):
"""Return a new mock object which proxies to the given object.
Proxies are useful when only part of the behavior of an object
is to be mocked. Unknown expressions may be passed through to
the real implementation implicitly (if the C{passthrough} argument
is True), or explicitly (using the L{passthrough()} method
on the event).
@param object: Real object to be proxied, and replaced by the mock
on replay mode. It may also be an "import path",
such as C{"time.time"}, in which case the object
will be the C{time} function from the C{time} module.
@param spec: Method calls will be checked for correctness against
the given object, which may be a class or an instance
where attributes will be looked up. Defaults to
the C{object} parameter. May be set to None explicitly,
in which case spec checking is disabled. Checks may
also be disabled explicitly on a per-event basis with
the L{nospec()} method.
@param type: If set, the Mock's __class__ attribute will return
the given type. This will make C{isinstance()} calls
on the object work. Defaults to the type of the
C{object} parameter. May be set to None explicitly.
@param name: Name for the mock object, used in the representation of
expressions. The name is rarely needed, as it's usually
guessed correctly from the variable name used.
@param count: If set to false, expressions may be executed any number
of times, unless an expectation is explicitly set using
the L{count()} method. By default, expressions are
expected once.
@param passthrough: If set to False, passthrough of actions on the
proxy to the real object will only happen when
explicitly requested via the L{passthrough()}
method.
"""
if isinstance(object, basestring):
if name is None:
name = object
import_stack = object.split(".")
attr_stack = []
while import_stack:
module_path = ".".join(import_stack)
try:
__import__(module_path)
except ImportError:
attr_stack.insert(0, import_stack.pop())
if not import_stack:
raise
continue
else:
object = sys.modules[module_path]
for attr in attr_stack:
object = getattr(object, attr)
break
if isinstance(object, types.UnboundMethodType):
object = object.im_func
if spec is True:
spec = object
if type is True:
type = __builtin__.type(object)
return Mock(self, spec=spec, type=type, object=object,
name=name, count=count, passthrough=passthrough)
def replace(self, object, spec=True, type=True, name=None, count=True,
passthrough=True):
"""Create a proxy, and replace the original object with the mock.
On replay, the original object will be replaced by the returned
proxy in all dictionaries found in the running interpreter via
the garbage collecting system. This should cover module
namespaces, class namespaces, instance namespaces, and so on.
@param object: Real object to be proxied, and replaced by the mock
on replay mode. It may also be an "import path",
such as C{"time.time"}, in which case the object
will be the C{time} function from the C{time} module.
@param spec: Method calls will be checked for correctness against
the given object, which may be a class or an instance
where attributes will be looked up. Defaults to
the C{object} parameter. May be set to None explicitly,
in which case spec checking is disabled. Checks may
also be disabled explicitly on a per-event basis with
the L{nospec()} method.
@param type: If set, the Mock's __class__ attribute will return
the given type. This will make C{isinstance()} calls
on the object work. Defaults to the type of the
C{object} parameter. May be set to None explicitly.
@param name: Name for the mock object, used in the representation of
expressions. The name is rarely needed, as it's usually
guessed correctly from the variable name used.
@param passthrough: If set to False, passthrough of actions on the
proxy to the real object will only happen when
explicitly requested via the L{passthrough()}
method.
"""
mock = self.proxy(object, spec, type, name, count, passthrough)
event = self._get_replay_restore_event()
event.add_task(ProxyReplacer(mock))
return mock
def patch(self, object, spec=True):
"""Patch an existing object to reproduce recorded events.
@param object: Class or instance to be patched.
@param spec: Method calls will be checked for correctness against
the given object, which may be a class or an instance
where attributes will be looked up. Defaults to
the C{object} parameter. May be set to None explicitly,
in which case spec checking is disabled. Checks may
also be disabled explicitly on a per-event basis with
the L{nospec()} method.
The result of this method is still a mock object, which can be
used like any other mock object to record events. The difference
is that when the mocker is put on replay mode, the *real* object
will be modified to behave according to recorded expectations.
Patching works in individual instances, and also in classes.
When an instance is patched, recorded events will only be
considered on this specific instance, and other instances should
behave normally. When a class is patched, the reproduction of
events will be considered on any instance of this class once
created (collectively).
Observe that, unlike with proxies which catch only events done
through the mock object, *all* accesses to recorded expectations
will be considered; even these coming from the object itself
(e.g. C{self.hello()} is considered if this method was patched).
While this is a very powerful feature, and many times the reason
to use patches in the first place, it's important to keep this
behavior in mind.
Patching of the original object only takes place when the mocker
is put on replay mode, and the patched object will be restored
to its original state once the L{restore()} method is called
(explicitly, or implicitly with alternative conventions, such as
a C{with mocker:} block, or a MockerTestCase class).
"""
if spec is True:
spec = object
patcher = Patcher()
event = self._get_replay_restore_event()
event.add_task(patcher)
mock = Mock(self, object=object, patcher=patcher,
passthrough=True, spec=spec)
patcher.patch_attr(object, '__mocker_mock__', mock)
return mock
def act(self, path):
"""This is called by mock objects whenever something happens to them.
This method is part of the implementation between the mocker
and mock objects.
"""
if self._recording:
event = self.add_event(Event(path))
for recorder in self._recorders:
recorder(self, event)
return Mock(self, path)
else:
# Try unsatisfied events first, and within the same satisfaction
# state, events which haven't run yet. We put the index in the
# ordering tuple instead of the actual event because we want a
# stable sort (ordering between 2 events is undefined).
events = self._events
order = [(events[i].satisfied()*2 + events[i].has_run(), i)
for i in range(len(events))]
order.sort()
postponed = None
for weight, i in order:
event = events[i]
if event.matches(path):
if event.may_run(path):
return event.run(path)
elif postponed is None:
postponed = event
if postponed is not None:
return postponed.run(path)
raise MatchError(ERROR_PREFIX + "Unexpected expression: %s" % path)
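The weighting scheme used by act() can be seen in isolation. A minimal sketch, with plain dicts standing in for Event objects (the dicts and their keys are inventions of this example):

```python
# Sketch of the dispatch priority used by act(): each event gets the
# weight satisfied()*2 + has_run(), and a lower weight is tried first,
# so unsatisfied events that haven't run yet come before everything else.
events = [
    {"name": "satisfied+run", "satisfied": True, "has_run": True},
    {"name": "fresh", "satisfied": False, "has_run": False},
    {"name": "satisfied", "satisfied": True, "has_run": False},
]
# The index is placed in the tuple (not the event itself) so the sort
# is stable: events with equal weight keep their recording order.
order = [(event["satisfied"] * 2 + event["has_run"], i)
         for i, event in enumerate(events)]
order.sort()
names = [events[i]["name"] for _, i in order]
# names == ["fresh", "satisfied", "satisfied+run"]
```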
def get_recorders(cls, self):
"""Return recorders associated with this mocker class or instance.
This method may be called on mocker instances and also on mocker
classes. See the L{add_recorder()} method for more information.
"""
return (self or cls)._recorders[:]
get_recorders = classinstancemethod(get_recorders)
def add_recorder(cls, self, recorder):
"""Add a recorder to this mocker class or instance.
@param recorder: Callable accepting C{(mocker, event)} as parameters.
This is part of the implementation of mocker.
All registered recorders are called for translating events that
happen during recording into expectations to be met once the state
is switched to replay mode.
This method may be called on mocker instances and also on mocker
classes. When called on a class, the recorder will be used by
all instances, and also inherited on subclassing. When called on
instances, the recorder is added only to the given instance.
"""
(self or cls)._recorders.append(recorder)
return recorder
add_recorder = classinstancemethod(add_recorder)
def remove_recorder(cls, self, recorder):
"""Remove the given recorder from this mocker class or instance.
This method may be called on mocker classes and also on mocker
instances. See the L{add_recorder()} method for more information.
"""
(self or cls)._recorders.remove(recorder)
remove_recorder = classinstancemethod(remove_recorder)
def result(self, value):
"""Make the last recorded event return the given value on replay.
@param value: Object to be returned when the event is replayed.
"""
self.call(lambda *args, **kwargs: value)
def generate(self, sequence):
"""Last recorded event will return a generator with the given sequence.
@param sequence: Sequence of values to be generated.
"""
def generate(*args, **kwargs):
for value in sequence:
yield value
self.call(generate)
def throw(self, exception):
"""Make the last recorded event raise the given exception on replay.
@param exception: Class or instance of exception to be raised.
"""
def raise_exception(*args, **kwargs):
raise exception
self.call(raise_exception)
def call(self, func, with_object=False):
"""Make the last recorded event cause the given function to be called.
@param func: Function to be called.
@param with_object: If True, the function will also receive the
patched or proxied object as its first argument. Only
available on proxies and patched objects.
The result of the function will be used as the event result.
"""
event = self._events[-1]
if with_object and event.path.root_object is None:
raise TypeError("Mock object isn't a proxy")
event.add_task(FunctionRunner(func, with_root_object=with_object))
def count(self, min, max=False):
"""Last recorded event must be replayed between min and max times.
@param min: Minimum number of times that the event must happen.
@param max: Maximum number of times that the event must happen. If
not given, it defaults to the same value of the C{min}
parameter. If set to None, there is no upper limit, and
the expectation is met as long as it happens at least
C{min} times.
"""
event = self._events[-1]
for task in event.get_tasks():
if isinstance(task, RunCounter):
event.remove_task(task)
event.add_task(RunCounter(min, max))
def is_ordering(self):
"""Return true if all events are being ordered.
See the L{order()} method.
"""
return self._ordering
def unorder(self):
"""Disable the ordered mode.
See the L{order()} method for more information.
"""
self._ordering = False
self._last_orderer = None
def order(self, *path_holders):
"""Create an expectation of order between two or more events.
@param path_holders: Objects returned as the result of recorded events.
By default, mocker won't force events to happen precisely in
the order they were recorded. Calling this method will change
this behavior so that events will only match if reproduced in
the correct order.
There are two ways in which this method may be used. Which one
is used in a given occasion depends only on convenience.
If no arguments are passed, the mocker will be put in a mode where
all the recorded events following the method call will only be met
if they happen in order. When that's used, the mocker may be put
back in unordered mode by calling the L{unorder()} method, or by
using a 'with' block, like so::
with mocker.order():
In this case, only expressions recorded inside the 'with' block will be
ordered, and the mocker will be back in unordered mode after the block.
The second way to use it is by specifying precisely which events
should be ordered. As an example::
mock = mocker.mock()
expr1 = mock.hello()
expr2 = mock.world
expr3 = mock.x.y.z
mocker.order(expr1, expr2, expr3)
This method of ordering only works when the expression returns
another object.
Also check the L{after()} and L{before()} methods, which are
alternative ways to perform this.
"""
if not path_holders:
self._ordering = True
return OrderedContext(self)
last_orderer = None
for path_holder in path_holders:
if type(path_holder) is Path:
path = path_holder
else:
path = path_holder.__mocker_path__
for event in self._events:
if event.path is path:
for task in event.get_tasks():
if isinstance(task, Orderer):
orderer = task
break
else:
orderer = Orderer(path)
event.add_task(orderer)
if last_orderer:
orderer.add_dependency(last_orderer)
last_orderer = orderer
break
def after(self, *path_holders):
"""Last recorded event must happen after events referred to.
@param path_holders: Objects returned as the result of recorded events
which should happen before the last recorded event.
As an example, the idiom::
expect(mock.x).after(mock.y, mock.z)
is an alternative way to say::
expr_x = mock.x
expr_y = mock.y
expr_z = mock.z
mocker.order(expr_y, expr_x)
mocker.order(expr_z, expr_x)
See L{order()} for more information.
"""
last_path = self._events[-1].path
for path_holder in path_holders:
self.order(path_holder, last_path)
def before(self, *path_holders):
"""Last recorded event must happen before events referred to.
@param path_holders: Objects returned as the result of recorded events
which should happen after the last recorded event.
As an example, the idiom::
expect(mock.x).before(mock.y, mock.z)
is an alternative way to say::
expr_x = mock.x
expr_y = mock.y
expr_z = mock.z
mocker.order(expr_x, expr_y)
mocker.order(expr_x, expr_z)
See L{order()} for more information.
"""
last_path = self._events[-1].path
for path_holder in path_holders:
self.order(last_path, path_holder)
def nospec(self):
"""Don't check method specification of real object on last event.
By default, when using a mock created as the result of a call to
L{proxy()}, L{replace()}, and L{patch()}, or when passing the spec
attribute to the L{mock()} method, method calls on the given object
are checked for correctness against the specification of the real
object (or the explicitly provided spec).
This method will disable that check specifically for the last
recorded event.
"""
event = self._events[-1]
for task in event.get_tasks():
if isinstance(task, SpecChecker):
event.remove_task(task)
def passthrough(self, result_callback=None):
"""Make the last recorded event run on the real object once seen.
@param result_callback: If given, this function will be called with
the result of the *real* method call as the only argument.
This can only be used on proxies, as returned by the L{proxy()}
and L{replace()} methods, or on mocks representing patched objects,
as returned by the L{patch()} method.
"""
event = self._events[-1]
if event.path.root_object is None:
raise TypeError("Mock object isn't a proxy")
event.add_task(PathExecuter(result_callback))
def __enter__(self):
"""Enter a 'with' context. This will run replay()."""
self.replay()
return self
def __exit__(self, type, value, traceback):
"""Exit from a 'with' context.
This will run restore() at all times, but will only run verify()
if the 'with' block itself hasn't raised an exception. Exceptions
in that block are never swallowed.
"""
self.restore()
if type is None:
self.verify()
return False
def _get_replay_restore_event(self):
"""Return the unique L{ReplayRestoreEvent}, creating it if needed.
Some tasks only want to replay/restore. When that's the case,
they shouldn't act on other events during replay, and they can
all be grouped together. Thus, we add a single
L{ReplayRestoreEvent} as the first element of the list.
"""
if not self._events or type(self._events[0]) != ReplayRestoreEvent:
self._events.insert(0, ReplayRestoreEvent())
return self._events[0]
class OrderedContext(object):
def __init__(self, mocker):
self._mocker = mocker
def __enter__(self):
return None
def __exit__(self, type, value, traceback):
self._mocker.unorder()
class Mocker(MockerBase):
__doc__ = MockerBase.__doc__
# Decorator to add recorders on the standard Mocker class.
recorder = Mocker.add_recorder
# --------------------------------------------------------------------
# Mock object.
class Mock(object):
def __init__(self, mocker, path=None, name=None, spec=None, type=None,
object=None, passthrough=False, patcher=None, count=True):
self.__mocker__ = mocker
self.__mocker_path__ = path or Path(self, object)
self.__mocker_name__ = name
self.__mocker_spec__ = spec
self.__mocker_object__ = object
self.__mocker_passthrough__ = passthrough
self.__mocker_patcher__ = patcher
self.__mocker_replace__ = False
self.__mocker_type__ = type
self.__mocker_count__ = count
def __mocker_act__(self, kind, args=(), kwargs={}, object=None):
if self.__mocker_name__ is None:
self.__mocker_name__ = find_object_name(self, 2)
action = Action(kind, args, kwargs, self.__mocker_path__)
path = self.__mocker_path__ + action
if object is not None:
path.root_object = object
try:
return self.__mocker__.act(path)
except MatchError, exception:
root_mock = path.root_mock
if (path.root_object is not None and
root_mock.__mocker_passthrough__):
return path.execute(path.root_object)
# Reinstantiate to show raise statement on traceback, and
# also to make the traceback shown shorter.
raise MatchError(str(exception))
except AssertionError, e:
lines = str(e).splitlines()
message = [ERROR_PREFIX + "Unmet expectation:", ""]
message.append("=> " + lines.pop(0))
message.extend([" " + line for line in lines])
message.append("")
raise AssertionError(os.linesep.join(message))
def __getattribute__(self, name):
if name.startswith("__mocker_"):
return super(Mock, self).__getattribute__(name)
if name == "__class__":
if self.__mocker__.is_recording() or self.__mocker_type__ is None:
return type(self)
return self.__mocker_type__
if name == "__length_hint__":
# This is used by Python 2.6+ to optimize the allocation
# of arrays in certain cases. Pretend it doesn't exist.
raise AttributeError("No __length_hint__ here!")
return self.__mocker_act__("getattr", (name,))
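The interception pattern behind __getattribute__ can be reduced to a few lines. A standalone sketch (the Recorder class and its _log attribute are inventions of this example, not part of mocker):

```python
class Recorder(object):
    # Minimal sketch of Mock-style interception: every attribute access
    # that isn't internal bookkeeping is logged instead of resolved.
    def __init__(self):
        object.__setattr__(self, "_log", [])

    def __getattribute__(self, name):
        # Names starting with "_" fall through to the real lookup,
        # mirroring how Mock treats its "__mocker_" attributes.
        if name.startswith("_"):
            return object.__getattribute__(self, name)
        object.__getattribute__(self, "_log").append(name)
        return None

recorder = Recorder()
recorder.hello
recorder.world
# recorder._log == ["hello", "world"]
```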
def __setattr__(self, name, value):
if name.startswith("__mocker_"):
return super(Mock, self).__setattr__(name, value)
return self.__mocker_act__("setattr", (name, value))
def __delattr__(self, name):
return self.__mocker_act__("delattr", (name,))
def __call__(self, *args, **kwargs):
return self.__mocker_act__("call", args, kwargs)
def __contains__(self, value):
return self.__mocker_act__("contains", (value,))
def __getitem__(self, key):
return self.__mocker_act__("getitem", (key,))
def __setitem__(self, key, value):
return self.__mocker_act__("setitem", (key, value))
def __delitem__(self, key):
return self.__mocker_act__("delitem", (key,))
def __len__(self):
# MatchError is turned into an AttributeError so that list() and
# friends act properly when trying to get length hints on
# something that doesn't offer them.
try:
result = self.__mocker_act__("len")
except MatchError, e:
raise AttributeError(str(e))
if type(result) is Mock:
return 0
return result
def __nonzero__(self):
try:
result = self.__mocker_act__("nonzero")
except MatchError, e:
return True
if type(result) is Mock:
return True
return result
def __iter__(self):
# XXX On py3k, when next() becomes __next__(), we'll be able
# to return the mock itself because it will be considered
# an iterator (we'll be mocking __next__ as well, which we
# can't now).
result = self.__mocker_act__("iter")
if type(result) is Mock:
return iter([])
return result
# When adding a new action kind here, also add support for it on
# Action.execute() and Path.__str__().
def find_object_name(obj, depth=0):
"""Try to detect how the object is named in a previous scope."""
try:
frame = sys._getframe(depth+1)
except:
return None
for name, frame_obj in frame.f_locals.iteritems():
if frame_obj is obj:
return name
self = frame.f_locals.get("self")
if self is not None:
try:
items = list(self.__dict__.iteritems())
except:
pass
else:
for name, self_obj in items:
if self_obj is obj:
return name
return None
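The frame-walking trick used by find_object_name() is compact enough to demonstrate on its own. A sketch assuming only the C{sys._getframe()} behavior used above (calling_frame_name and demo are hypothetical names local to this example):

```python
import sys

def calling_frame_name(obj):
    # Look one frame up and scan the caller's locals for a variable
    # bound to the exact same object (identity, not equality).
    frame = sys._getframe(1)
    for name, value in frame.f_locals.items():
        if value is obj:
            return name
    return None

def demo():
    connection = object()
    return calling_frame_name(connection)

# demo() returns "connection"
```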
# --------------------------------------------------------------------
# Action and path.
class Action(object):
def __init__(self, kind, args, kwargs, path=None):
self.kind = kind
self.args = args
self.kwargs = kwargs
self.path = path
self._execute_cache = {}
def __repr__(self):
if self.path is None:
return "Action(%r, %r, %r)" % (self.kind, self.args, self.kwargs)
return "Action(%r, %r, %r, %r)" % \
(self.kind, self.args, self.kwargs, self.path)
def __eq__(self, other):
return (self.kind == other.kind and
self.args == other.args and
self.kwargs == other.kwargs)
def __ne__(self, other):
return not self.__eq__(other)
def matches(self, other):
return (self.kind == other.kind and
match_params(self.args, self.kwargs, other.args, other.kwargs))
def execute(self, object):
# This caching scheme may fail if the object gets deallocated before
# the action, as the id might get reused. It's somewhat easy to fix
# that with a weakref callback. For our uses, though, the object
# should never get deallocated before the action itself, so we'll
# just keep it simple.
if id(object) in self._execute_cache:
return self._execute_cache[id(object)]
execute = getattr(object, "__mocker_execute__", None)
if execute is not None:
result = execute(self, object)
else:
kind = self.kind
if kind == "getattr":
result = getattr(object, self.args[0])
elif kind == "setattr":
result = setattr(object, self.args[0], self.args[1])
elif kind == "delattr":
result = delattr(object, self.args[0])
elif kind == "call":
result = object(*self.args, **self.kwargs)
elif kind == "contains":
result = self.args[0] in object
elif kind == "getitem":
result = object[self.args[0]]
elif kind == "setitem":
result = object[self.args[0]] = self.args[1]
elif kind == "delitem":
del object[self.args[0]]
result = None
elif kind == "len":
result = len(object)
elif kind == "nonzero":
result = bool(object)
elif kind == "iter":
result = iter(object)
else:
raise RuntimeError("Don't know how to execute %r kind." % kind)
self._execute_cache[id(object)] = result
return result
class Path(object):
def __init__(self, root_mock, root_object=None, actions=()):
self.root_mock = root_mock
self.root_object = root_object
self.actions = tuple(actions)
self.__mocker_replace__ = False
def parent_path(self):
if not self.actions:
return None
return self.actions[-1].path
parent_path = property(parent_path)
def __add__(self, action):
"""Return a new path which includes the given action at the end."""
return self.__class__(self.root_mock, self.root_object,
self.actions + (action,))
def __eq__(self, other):
"""Verify if the two paths are equal.
Two paths are equal if they refer to the same mock object, and
have actions with equal kind, args and kwargs.
"""
if (self.root_mock is not other.root_mock or
self.root_object is not other.root_object or
len(self.actions) != len(other.actions)):
return False
for action, other_action in zip(self.actions, other.actions):
if action != other_action:
return False
return True
def matches(self, other):
"""Verify if the two paths are equivalent.
Two paths are equivalent if they refer to the same mock object, and
have matching actions performed on them.
"""
if (self.root_mock is not other.root_mock or
len(self.actions) != len(other.actions)):
return False
for action, other_action in zip(self.actions, other.actions):
if not action.matches(other_action):
return False
return True
def execute(self, object):
"""Execute all actions sequentially on object, and return result.
"""
for action in self.actions:
object = action.execute(object)
return object
def __str__(self):
"""Transform the path into a nice string such as obj.x.y('z')."""
result = self.root_mock.__mocker_name__ or ""
for action in self.actions:
if action.kind == "getattr":
result = "%s.%s" % (result, action.args[0])
elif action.kind == "setattr":
result = "%s.%s = %r" % (result, action.args[0], action.args[1])
elif action.kind == "delattr":
result = "del %s.%s" % (result, action.args[0])
elif action.kind == "call":
args = [repr(x) for x in action.args]
items = list(action.kwargs.iteritems())
items.sort()
for pair in items:
args.append("%s=%r" % pair)
result = "%s(%s)" % (result, ", ".join(args))
elif action.kind == "contains":
result = "%r in %s" % (action.args[0], result)
elif action.kind == "getitem":
result = "%s[%r]" % (result, action.args[0])
elif action.kind == "setitem":
result = "%s[%r] = %r" % (result, action.args[0],
action.args[1])
elif action.kind == "delitem":
result = "del %s[%r]" % (result, action.args[0])
elif action.kind == "len":
result = "len(%s)" % result
elif action.kind == "nonzero":
result = "bool(%s)" % result
elif action.kind == "iter":
result = "iter(%s)" % result
else:
raise RuntimeError("Don't know how to format kind %r" %
action.kind)
return result
class SpecialArgument(object):
"""Base for special arguments for matching parameters."""
def __init__(self, object=None):
self.object = object
def __repr__(self):
if self.object is None:
return self.__class__.__name__
else:
return "%s(%r)" % (self.__class__.__name__, self.object)
def matches(self, other):
return True
def __eq__(self, other):
return type(other) == type(self) and self.object == other.object
class ANY(SpecialArgument):
"""Matches any single argument."""
ANY = ANY()
class ARGS(SpecialArgument):
"""Matches zero or more positional arguments."""
ARGS = ARGS()
class KWARGS(SpecialArgument):
"""Matches zero or more keyword arguments."""
KWARGS = KWARGS()
class IS(SpecialArgument):
def matches(self, other):
return self.object is other
def __eq__(self, other):
return type(other) == type(self) and self.object is other.object
class CONTAINS(SpecialArgument):
def matches(self, other):
try:
other.__contains__
except AttributeError:
try:
iter(other)
except TypeError:
# If an object can't be iterated, and has no __contains__
# hook, it'd blow up on the test below. We test this in
# advance to prevent catching more errors than we really
# want.
return False
return self.object in other
class IN(SpecialArgument):
def matches(self, other):
return other in self.object
class MATCH(SpecialArgument):
def matches(self, other):
return bool(self.object(other))
def __eq__(self, other):
return type(other) == type(self) and self.object is other.object
def match_params(args1, kwargs1, args2, kwargs2):
"""Match the two sets of parameters, considering special parameters."""
has_args = ARGS in args1
has_kwargs = KWARGS in args1
if has_kwargs:
args1 = [arg1 for arg1 in args1 if arg1 is not KWARGS]
elif len(kwargs1) != len(kwargs2):
return False
if not has_args and len(args1) != len(args2):
return False
# Either we have the same number of kwargs, or unknown keywords are
# accepted (KWARGS was used), so check just the ones in kwargs1.
for key, arg1 in kwargs1.iteritems():
if key not in kwargs2:
return False
arg2 = kwargs2[key]
if isinstance(arg1, SpecialArgument):
if not arg1.matches(arg2):
return False
elif arg1 != arg2:
return False
# Keywords match. Now either we have the same number of
# arguments, or ARGS was used. If ARGS wasn't used, arguments
# must match one-on-one necessarily.
if not has_args:
for arg1, arg2 in zip(args1, args2):
if isinstance(arg1, SpecialArgument):
if not arg1.matches(arg2):
return False
elif arg1 != arg2:
return False
return True
# Easy choice. Keywords are matching, and anything on args is accepted.
if (ARGS,) == args1:
return True
# We have something different there. If we don't have positional
# arguments on the original call, it can't match.
if not args2:
# Unless we have just several ARGS (which is bizarre, but..).
for arg1 in args1:
if arg1 is not ARGS:
return False
return True
# Ok, all bets are lost. We have to actually do the more expensive
# matching. This is an algorithm based on the idea of the Levenshtein
# Distance between two strings, but heavily hacked for this purpose.
args2l = len(args2)
if args1[0] is ARGS:
args1 = args1[1:]
array = [0]*args2l
else:
array = [1]*args2l
for i in range(len(args1)):
last = array[0]
if args1[i] is ARGS:
for j in range(1, args2l):
last, array[j] = array[j], min(array[j-1], array[j], last)
else:
array[0] = i or int(args1[i] != args2[0])
for j in range(1, args2l):
last, array[j] = array[j], last or int(args1[i] != args2[j])
if 0 not in array:
return False
if array[-1] != 0:
return False
return True
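The dynamic-programming pass above treats ARGS like a `*` wildcard over positional arguments. A naive recursive sketch of the same matching rule (exponential rather than Levenshtein-style, but easier to follow; the names here are local to this example):

```python
ARGS = object()  # stand-in for the module's ARGS marker

def args_match(pattern, actual):
    # ARGS matches zero or more positional arguments, like '*' in a
    # glob; everything else must match one-on-one.
    if not pattern:
        return not actual
    if pattern[0] is ARGS:
        # Either ARGS matches nothing, or it swallows one argument
        # and we try again.
        return (args_match(pattern[1:], actual) or
                (bool(actual) and args_match(pattern, actual[1:])))
    return (bool(actual) and pattern[0] == actual[0] and
            args_match(pattern[1:], actual[1:]))

# args_match((1, ARGS, 3), (1, 2, 2, 3)) is True
# args_match((1, ARGS, 3), (1, 2)) is False
```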
# --------------------------------------------------------------------
# Event and task base.
class Event(object):
"""Aggregation of tasks that keep track of a recorded action.
An event represents something that may or may not happen while the
mocked environment is running, such as an attribute access, or a
method call. The event is composed of several tasks that are
orchestrated together to create a composed meaning for the event,
including for which actions it should be run, what happens when it
runs, and what the expectations are about the actions run.
"""
def __init__(self, path=None):
self.path = path
self._tasks = []
self._has_run = False
def add_task(self, task):
"""Add a new task to this event."""
self._tasks.append(task)
return task
def remove_task(self, task):
self._tasks.remove(task)
def get_tasks(self):
return self._tasks[:]
def matches(self, path):
"""Return true if *all* tasks match the given path."""
for task in self._tasks:
if not task.matches(path):
return False
return bool(self._tasks)
def has_run(self):
return self._has_run
def may_run(self, path):
"""Return false if any task would certainly raise an error if run.
This will call the C{may_run()} method on each task and return
false if any of them returns false.
"""
for task in self._tasks:
if not task.may_run(path):
return False
return True
def run(self, path):
"""Run all tasks with the given action.
@param path: The path of the expression run.
Running an event means running all of its tasks individually and in
order. An event should only ever be run if all of its tasks claim to
match the given action.
The result of this method will be the last result of a task
which isn't None, or None if they're all None.
"""
self._has_run = True
result = None
errors = []
for task in self._tasks:
try:
task_result = task.run(path)
except AssertionError, e:
error = str(e)
if not error:
raise RuntimeError("Empty error message from %r" % task)
errors.append(error)
else:
if task_result is not None:
result = task_result
if errors:
message = [str(self.path)]
if str(path) != message[0]:
message.append("- Run: %s" % path)
for error in errors:
lines = error.splitlines()
message.append("- " + lines.pop(0))
message.extend([" " + line for line in lines])
raise AssertionError(os.linesep.join(message))
return result
def satisfied(self):
"""Return true if all tasks are satisfied.
Being satisfied means that there are no unmet expectations.
"""
for task in self._tasks:
try:
task.verify()
except AssertionError:
return False
return True
def verify(self):
"""Run verify on all tasks.
The verify method is supposed to raise an AssertionError if the
task has unmet expectations, with a one-line explanation about
why this item is unmet. This method should be safe to be called
multiple times without side effects.
"""
errors = []
for task in self._tasks:
try:
task.verify()
except AssertionError, e:
error = str(e)
if not error:
raise RuntimeError("Empty error message from %r" % task)
errors.append(error)
if errors:
message = [str(self.path)]
for error in errors:
lines = error.splitlines()
message.append("- " + lines.pop(0))
message.extend([" " + line for line in lines])
raise AssertionError(os.linesep.join(message))
def replay(self):
"""Put all tasks in replay mode."""
self._has_run = False
for task in self._tasks:
task.replay()
def restore(self):
"""Restore the state of all tasks."""
for task in self._tasks:
task.restore()
class ReplayRestoreEvent(Event):
"""Helper event for tasks which need replay/restore but shouldn't match."""
def matches(self, path):
return False
class Task(object):
"""Element used to track one specific aspect on an event.
A task is responsible for adding any kind of logic to an event.
Examples of that are counting the number of times the event was
made, verifying parameters if any, and so on.
"""
def matches(self, path):
"""Return true if the task is supposed to be run for the given path.
"""
return True
def may_run(self, path):
"""Return false if running this task would certainly raise an error."""
return True
def run(self, path):
"""Perform the task item, considering that the given action happened.
"""
def verify(self):
"""Raise AssertionError if expectations for this item are unmet.
The verify method is supposed to raise an AssertionError if the
task has unmet expectations, with a one-line explanation about
why this item is unmet. This method should be safe to be called
multiple times without side effects.
"""
def replay(self):
"""Put the task in replay mode.
Any expectations of the task should be reset.
"""
def restore(self):
"""Restore any environmental changes made by the task.
Verify should continue to work after this is called.
"""
# --------------------------------------------------------------------
# Task implementations.
class OnRestoreCaller(Task):
"""Call a given callback when restoring."""
def __init__(self, callback):
self._callback = callback
def restore(self):
self._callback()
class PathMatcher(Task):
"""Match the action path against a given path."""
def __init__(self, path):
self.path = path
def matches(self, path):
return self.path.matches(path)
def path_matcher_recorder(mocker, event):
event.add_task(PathMatcher(event.path))
Mocker.add_recorder(path_matcher_recorder)
class RunCounter(Task):
"""Task which verifies if the number of runs is within given boundaries.
"""
def __init__(self, min, max=False):
self.min = min
if max is None:
self.max = sys.maxint
elif max is False:
self.max = min
else:
self.max = max
self._runs = 0
def replay(self):
self._runs = 0
def may_run(self, path):
return self._runs < self.max
def run(self, path):
self._runs += 1
if self._runs > self.max:
self.verify()
def verify(self):
if not self.min <= self._runs <= self.max:
if self._runs < self.min:
raise AssertionError("Performed fewer times than expected.")
raise AssertionError("Performed more times than expected.")
class ImplicitRunCounter(RunCounter):
"""RunCounter inserted by default on any event.
This is a way to differentiate between explicitly added counters
and implicit ones.
"""
def run_counter_recorder(mocker, event):
"""By default, expect an event to run exactly once, unless counting is disabled."""
if event.path.root_mock.__mocker_count__:
event.add_task(ImplicitRunCounter(1))
Mocker.add_recorder(run_counter_recorder)
def run_counter_removal_recorder(mocker, event):
"""
Events created by getattr actions which lead to other events
may be repeated any number of times. For that, we remove implicit
run counters of any getattr actions leading to the current one.
"""
parent_path = event.path.parent_path
for event in mocker.get_events()[::-1]:
if (event.path is parent_path and
event.path.actions[-1].kind == "getattr"):
for task in event.get_tasks():
if type(task) is ImplicitRunCounter:
event.remove_task(task)
Mocker.add_recorder(run_counter_removal_recorder)
class MockReturner(Task):
"""Return a mock based on the action path."""
def __init__(self, mocker):
self.mocker = mocker
def run(self, path):
return Mock(self.mocker, path)
def mock_returner_recorder(mocker, event):
"""Events that lead to other events must return mock objects."""
parent_path = event.path.parent_path
for event in mocker.get_events():
if event.path is parent_path:
for task in event.get_tasks():
if isinstance(task, MockReturner):
break
else:
event.add_task(MockReturner(mocker))
break
Mocker.add_recorder(mock_returner_recorder)
class FunctionRunner(Task):
"""Task that runs a function every time it's run.
Arguments of the last action in the path are passed to the function,
and the function result is also returned.
"""
def __init__(self, func, with_root_object=False):
self._func = func
self._with_root_object = with_root_object
def run(self, path):
action = path.actions[-1]
if self._with_root_object:
return self._func(path.root_object, *action.args, **action.kwargs)
else:
return self._func(*action.args, **action.kwargs)
class PathExecuter(Task):
"""Task that executes a path in the real object, and returns the result."""
def __init__(self, result_callback=None):
self._result_callback = result_callback
def get_result_callback(self):
return self._result_callback
def run(self, path):
result = path.execute(path.root_object)
if self._result_callback is not None:
self._result_callback(result)
return result
class Orderer(Task):
"""Task to establish an order relation between two events.
An orderer task will only match once all its dependencies have
been run.
"""
def __init__(self, path):
self.path = path
self._run = False
self._dependencies = []
def replay(self):
self._run = False
def has_run(self):
return self._run
def may_run(self, path):
for dependency in self._dependencies:
if not dependency.has_run():
return False
return True
def run(self, path):
for dependency in self._dependencies:
if not dependency.has_run():
raise AssertionError("Should be after: %s" % dependency.path)
self._run = True
def add_dependency(self, orderer):
self._dependencies.append(orderer)
def get_dependencies(self):
return self._dependencies
class SpecChecker(Task):
"""Task to check if arguments of the last action conform to a real method.
"""
def __init__(self, method):
self._method = method
self._unsupported = False
if method:
try:
self._args, self._varargs, self._varkwargs, self._defaults = \
inspect.getargspec(method)
except TypeError:
self._unsupported = True
else:
if self._defaults is None:
self._defaults = ()
if type(method) is type(self.run):
self._args = self._args[1:]
def get_method(self):
return self._method
def _raise(self, message):
spec = inspect.formatargspec(self._args, self._varargs,
self._varkwargs, self._defaults)
raise AssertionError("Specification is %s%s: %s" %
(self._method.__name__, spec, message))
def verify(self):
if not self._method:
raise AssertionError("Method not found in real specification")
def may_run(self, path):
try:
self.run(path)
except AssertionError:
return False
return True
def run(self, path):
if not self._method:
raise AssertionError("Method not found in real specification")
if self._unsupported:
return # Can't check it. Happens with builtin functions. :-(
action = path.actions[-1]
obtained_len = len(action.args)
obtained_kwargs = action.kwargs.copy()
nodefaults_len = len(self._args) - len(self._defaults)
for i, name in enumerate(self._args):
if i < obtained_len and name in action.kwargs:
self._raise("%r provided twice" % name)
if (i >= obtained_len and i < nodefaults_len and
name not in action.kwargs):
self._raise("%r not provided" % name)
obtained_kwargs.pop(name, None)
if obtained_len > len(self._args) and not self._varargs:
self._raise("too many args provided")
if obtained_kwargs and not self._varkwargs:
self._raise("unknown kwargs: %s" % ", ".join(obtained_kwargs))
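SpecChecker reimplements, via inspect.getargspec, a check that the recorded call could bind to the real method's signature. On modern Python the same verification can be sketched with inspect.signature().bind(); the helper name check_call_matches and the example function are illustrative, not part of mocker:

```python
import inspect

def check_call_matches(func, args, kwargs):
    # Return True if func(*args, **kwargs) would bind cleanly to the
    # signature, without actually calling func. Signature.bind raises
    # TypeError for missing, duplicated, or excess arguments.
    try:
        inspect.signature(func).bind(*args, **kwargs)
    except TypeError:
        return False
    return True

def example(a, b, c=1):
    return a + b + c
```

check_call_matches(example, (1, 2), {"b": 3}) fails for the same reason SpecChecker reports "'b' provided twice".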
def spec_checker_recorder(mocker, event):
spec = event.path.root_mock.__mocker_spec__
if spec:
actions = event.path.actions
if len(actions) == 1:
if actions[0].kind == "call":
method = getattr(spec, "__call__", None)
event.add_task(SpecChecker(method))
elif len(actions) == 2:
if actions[0].kind == "getattr" and actions[1].kind == "call":
method = getattr(spec, actions[0].args[0], None)
event.add_task(SpecChecker(method))
Mocker.add_recorder(spec_checker_recorder)
class ProxyReplacer(Task):
"""Task which installs and uninstalls proxy mocks.
This task will replace a real object with a mock in all dictionaries
found in the running interpreter via the garbage collection system.
"""
def __init__(self, mock):
self.mock = mock
self.__mocker_replace__ = False
def replay(self):
global_replace(self.mock.__mocker_object__, self.mock)
def restore(self):
global_replace(self.mock, self.mock.__mocker_object__)
def global_replace(remove, install):
"""Replace object 'remove' with object 'install' on all dictionaries."""
for referrer in gc.get_referrers(remove):
if (type(referrer) is dict and
referrer.get("__mocker_replace__", True)):
for key, value in list(referrer.iteritems()):
if value is remove:
referrer[key] = install
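global_replace is the heart of proxy mocking: it walks every dictionary the garbage collector knows about and swaps one object for another. A self-contained Python 3 sketch of the same trick (swap_in_dicts and the Original/Fake classes are illustrative names):

```python
import gc

def swap_in_dicts(remove, install):
    # Find every plain dict referring to `remove` and substitute
    # `install`, honoring the same __mocker_replace__ opt-out marker
    # used by ProxyReplacer above.
    for referrer in gc.get_referrers(remove):
        if (type(referrer) is dict and
                referrer.get("__mocker_replace__", True)):
            for key, value in list(referrer.items()):
                if value is remove:
                    referrer[key] = install

class Original(object):
    pass

class Fake(object):
    pass

original, fake = Original(), Fake()
holder = {"target": original}  # any dict referencing `original`
swap_in_dicts(original, fake)  # holder now points at the fake
```

Module globals are dictionaries too, so names bound to the real object at module scope get rewritten as well, which is how this replacement reaches code that already imported the real object.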
class Undefined(object):
def __repr__(self):
return "Undefined"
Undefined = Undefined()
class Patcher(Task):
def __init__(self):
super(Patcher, self).__init__()
self._monitored = {} # {kind: {id(object): object}}
self._patched = {}
def is_monitoring(self, obj, kind):
monitored = self._monitored.get(kind)
if monitored:
if id(obj) in monitored:
return True
cls = type(obj)
if issubclass(cls, type):
cls = obj
bases = set([id(base) for base in cls.__mro__])
bases.intersection_update(monitored)
return bool(bases)
return False
def monitor(self, obj, kind):
if kind not in self._monitored:
self._monitored[kind] = {}
self._monitored[kind][id(obj)] = obj
def patch_attr(self, obj, attr, value):
original = obj.__dict__.get(attr, Undefined)
self._patched[id(obj), attr] = obj, attr, original
setattr(obj, attr, value)
def get_unpatched_attr(self, obj, attr):
cls = type(obj)
if issubclass(cls, type):
cls = obj
result = Undefined
for mro_cls in cls.__mro__:
key = (id(mro_cls), attr)
if key in self._patched:
result = self._patched[key][2]
if result is not Undefined:
break
elif attr in mro_cls.__dict__:
result = mro_cls.__dict__.get(attr, Undefined)
break
if isinstance(result, object) and hasattr(type(result), "__get__"):
if cls is obj:
obj = None
return result.__get__(obj, cls)
return result
def _get_kind_attr(self, kind):
if kind == "getattr":
return "__getattribute__"
return "__%s__" % kind
def replay(self):
for kind in self._monitored:
attr = self._get_kind_attr(kind)
seen = set()
for obj in self._monitored[kind].itervalues():
cls = type(obj)
if issubclass(cls, type):
cls = obj
if cls not in seen:
seen.add(cls)
unpatched = getattr(cls, attr, Undefined)
self.patch_attr(cls, attr,
PatchedMethod(kind, unpatched,
self.is_monitoring))
self.patch_attr(cls, "__mocker_execute__",
self.execute)
def restore(self):
for obj, attr, original in self._patched.itervalues():
if original is Undefined:
delattr(obj, attr)
else:
setattr(obj, attr, original)
self._patched.clear()
def execute(self, action, object):
attr = self._get_kind_attr(action.kind)
unpatched = self.get_unpatched_attr(object, attr)
try:
return unpatched(*action.args, **action.kwargs)
except AttributeError:
type, value, traceback = sys.exc_info()
if action.kind == "getattr":
# The normal behavior of Python is to try __getattribute__,
# and if it raises AttributeError, try __getattr__. We've
# tried the unpatched __getattribute__ above, and we'll now
# try __getattr__.
try:
__getattr__ = unpatched("__getattr__")
except AttributeError:
pass
else:
return __getattr__(*action.args, **action.kwargs)
raise type, value, traceback
class PatchedMethod(object):
def __init__(self, kind, unpatched, is_monitoring):
self._kind = kind
self._unpatched = unpatched
self._is_monitoring = is_monitoring
def __get__(self, obj, cls=None):
object = obj or cls
if not self._is_monitoring(object, self._kind):
return self._unpatched.__get__(obj, cls)
def method(*args, **kwargs):
if self._kind == "getattr" and args[0].startswith("__mocker_"):
return self._unpatched.__get__(obj, cls)(args[0])
mock = object.__mocker_mock__
return mock.__mocker_act__(self._kind, args, kwargs, object)
return method
def __call__(self, obj, *args, **kwargs):
# At least with __getattribute__, Python seems to use *both* the
# descriptor API and also call the class attribute directly. It
# looks like an interpreter bug, or at least an undocumented
# inconsistency.
return self.__get__(obj)(*args, **kwargs)
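PatchedMethod leans on the descriptor protocol: a plain function stored in a class __dict__ implements __get__, and unpatched.__get__(obj, cls) yields the bound method that dispatches to the original implementation. A standalone sketch (the Greeter class is illustrative):

```python
class Greeter(object):
    def greet(self, name):
        return "hello, %s" % name

plain_function = Greeter.__dict__["greet"]  # the raw function object
instance = Greeter()

# Functions are descriptors: __get__ binds them to an instance, which
# is exactly how PatchedMethod re-dispatches to the unpatched original.
bound = plain_function.__get__(instance, Greeter)
```

Calling bound("world") behaves exactly like instance.greet("world").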
def patcher_recorder(mocker, event):
mock = event.path.root_mock
if mock.__mocker_patcher__ and len(event.path.actions) == 1:
patcher = mock.__mocker_patcher__
patcher.monitor(mock.__mocker_object__, event.path.actions[0].kind)
Mocker.add_recorder(patcher_recorder)
juju-0.7.orig/juju/lib/pick.py
import itertools
_marker = object()
def pick_all_key(iterable, **kwargs):
"""Return all elements having the key/value pairs listed in kwargs."""
def filtermethod(element):
for k, v in kwargs.iteritems():
if element[k] != v:
return False
return True
return itertools.ifilter(filtermethod, iterable)
def pick_key(iterable, **kwargs):
"""Return the first element of iterable with all key/value pairs.
If no matching element is found, None is returned.
"""
try:
return pick_all_key(iterable, **kwargs).next()
except StopIteration:
return None
def pick_all_attr(iterable, **kwargs):
"""Return all elements having the key/value pairs listed in kwargs."""
def filtermethod(element):
for k, v in kwargs.iteritems():
el = getattr(element, k, _marker)
if el is _marker or el != v:
return False
return True
return itertools.ifilter(filtermethod, iterable)
def pick_attr(iterable, **kwargs):
"""Return the first element of iterable with all key/value pairs.
If no matching element is found, None is returned.
"""
try:
return pick_all_attr(iterable, **kwargs).next()
except StopIteration:
return None
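The helpers above are Python 2 (iteritems, ifilter, iterator.next). A Python 3 sketch of pick_attr, with an illustrative Unit class:

```python
_marker = object()

def pick_attr(iterable, **kwargs):
    # First element whose attributes match every key/value pair in
    # kwargs; None when nothing matches. `_marker` distinguishes a
    # missing attribute from one that merely compares unequal.
    for element in iterable:
        if all(getattr(element, k, _marker) == v
               for k, v in kwargs.items()):
            return element
    return None

class Unit(object):
    def __init__(self, name, state):
        self.name = name
        self.state = state
```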
juju-0.7.orig/juju/lib/port.py
import socket
def get_open_port(host=""):
"""Get an open port on the machine.
"""
temp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
temp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
temp_sock.bind((host, 0))
port = temp_sock.getsockname()[1]
temp_sock.close()
del temp_sock
return port
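Binding to port 0 asks the kernel for a free ephemeral port; note the port is released again on close(), so another process can race for it before the caller rebinds. A Python 3 sketch of the same idea using the socket context manager:

```python
import socket

def get_open_port(host=""):
    # Let the kernel choose a free ephemeral port by binding port 0,
    # then report the chosen port number. The socket is closed when
    # the `with` block exits, releasing the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind((host, 0))
        return sock.getsockname()[1]
```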
juju-0.7.orig/juju/lib/schema.py
"""A schema system for validation of dict-based values."""
import re
class SchemaError(Exception):
"""Raised when invalid input is received."""
def __init__(self, path, message):
self.path = path
self.message = message
info = "%s: %s" % ("".join(self.path), self.message)
super(Exception, self).__init__(info)
class SchemaExpectationError(SchemaError):
"""Raised when an expected value is not found."""
def __init__(self, path, expected, got):
self.expected = expected
self.got = got
message = "expected %s, got %s" % (expected, got)
super(SchemaExpectationError, self).__init__(path, message)
class Constant(object):
"""Something that must be equal to a constant value."""
def __init__(self, value):
self.value = value
def coerce(self, value, path):
if value != self.value:
raise SchemaExpectationError(path, repr(self.value), repr(value))
return value
class Any(object):
"""Anything at all."""
def coerce(self, value, path):
return value
class OneOf(object):
"""Must necessarily match one of the given schemas."""
def __init__(self, *schemas):
"""
@param schemas: Any number of other schema objects.
"""
self.schemas = schemas
def coerce(self, value, path):
"""
The result of the first schema which doesn't raise
L{SchemaError} from its C{coerce} method will be returned.
"""
best_error = None
for i, schema in enumerate(self.schemas):
try:
return schema.coerce(value, path)
except SchemaError, be:
if not best_error or len(be.path) > len(best_error.path):
best_error = be
raise best_error
class Bool(object):
"""Something that must be a C{bool}."""
def coerce(self, value, path):
if not isinstance(value, bool):
raise SchemaExpectationError(path, "bool", repr(value))
return value
class Int(object):
"""Something that must be an C{int} or C{long}."""
def coerce(self, value, path):
if not isinstance(value, (int, long)):
raise SchemaExpectationError(path, "int", repr(value))
return value
class Float(object):
"""Something that must be an C{int}, C{long}, or C{float}."""
def coerce(self, value, path):
if not isinstance(value, (int, long, float)):
raise SchemaExpectationError(path, "number", repr(value))
return value
class String(object):
"""Something that must be a C{str}."""
def coerce(self, value, path):
if not isinstance(value, str):
raise SchemaExpectationError(path, "string", repr(value))
return value
class Unicode(object):
"""Something that must be a C{unicode}."""
def coerce(self, value, path):
if not isinstance(value, unicode):
raise SchemaExpectationError(path, "unicode", repr(value))
return value
class Regex(object):
"""Something that must be a valid Python regular expression."""
def coerce(self, value, path):
try:
regex = re.compile(value)
except re.error:
raise SchemaExpectationError(path,
"regex",
repr(value))
return regex
class UnicodeOrString(object):
"""Something that must be a C{unicode} or C{str}.
If the value is a C{str}, it will automatically be decoded.
"""
def __init__(self, encoding):
"""
@param encoding: The encoding to automatically decode C{str}s with.
"""
self.encoding = encoding
def coerce(self, value, path):
if isinstance(value, str):
try:
value = value.decode(self.encoding)
except UnicodeDecodeError:
raise SchemaExpectationError(
path, "unicode or %s string" % self.encoding,
repr(value))
elif not isinstance(value, unicode):
raise SchemaExpectationError(
path, "unicode or %s string" % self.encoding,
repr(value))
return value
class List(object):
"""Something which must be a C{list}."""
def __init__(self, schema):
"""
@param schema: The schema that all values of the list must match.
"""
self.schema = schema
def coerce(self, value, path):
if not isinstance(value, list):
raise SchemaExpectationError(path, "list", repr(value))
new_list = list(value)
path.extend(["[", "?", "]"])
try:
for i, subvalue in enumerate(value):
path[-2] = str(i)
new_list[i] = self.schema.coerce(subvalue, path)
finally:
del path[-3:]
return new_list
class Tuple(object):
"""Something which must be a fixed-length tuple."""
def __init__(self, *schema):
"""
@param schema: A sequence of schemas, which will be applied to
each value in the tuple respectively.
"""
self.schema = schema
def coerce(self, value, path):
if not isinstance(value, tuple):
raise SchemaExpectationError(path, "tuple", repr(value))
if len(value) != len(self.schema):
raise SchemaExpectationError(
path, "tuple with %d elements" % len(self.schema),
repr(value))
new_value = []
path.extend(["[", "?", "]"])
try:
for i, (schema, value) in enumerate(zip(self.schema, value)):
path[-2] = str(i)
new_value.append(schema.coerce(value, path))
finally:
del path[-3:]
return tuple(new_value)
class Dict(object):
"""Something which must be a C{dict} with arbitrary keys."""
def __init__(self, key_schema, value_schema):
"""
@param key_schema: The schema that keys must match.
@param value_schema: The schema that values must match.
"""
self.key_schema = key_schema
self.value_schema = value_schema
def coerce(self, value, path):
if not isinstance(value, dict):
raise SchemaExpectationError(path, "dict", repr(value))
new_dict = {}
key_path = path
if not path:
value_path = ["?"]
else:
value_path = path + [".", "?"]
for key, subvalue in value.items():
new_key = self.key_schema.coerce(key, key_path)
try:
value_path[-1] = str(key)
except ValueError:
value_path[-1] = repr(key)
new_subvalue = self.value_schema.coerce(subvalue, value_path)
new_dict[new_key] = new_subvalue
return new_dict
class KeyDict(object):
"""Something which must be a C{dict} with defined keys.
The keys must be constant and the values must match a per-key schema.
"""
def __init__(self, schema, optional=None):
"""
@param schema: A dict mapping keys to schemas that the values
of those keys must match.
"""
self.optional = set(optional or ())
self.schema = schema
def coerce(self, value, path):
new_dict = {}
if not isinstance(value, dict):
raise SchemaExpectationError(path, "dict", repr(value))
path = path[:]
if path:
path.append(".")
path.append("?")
for k, v in value.iteritems():
if k in self.schema:
try:
path[-1] = str(k)
except ValueError:
path[-1] = repr(k)
new_dict[k] = self.schema[k].coerce(v, path)
else:
# Just preserve entries which are not in the schema.
# This is less likely to eat important values due to
# different app versions being used, for instance.
new_dict[k] = v
for k in self.schema:
if k not in value and k not in self.optional:
path[-1] = k
raise SchemaError(path, "required value not found")
# No need to restore path. It was copied.
return new_dict
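The coerce protocol composes: containers copy the value, recurse with a growing path list, and raise SchemaError with that path when validation fails. A condensed Python 3 sketch of Int plus KeyDict (simplified relative to the classes above, but preserving unknown keys the same way):

```python
class SchemaError(Exception):
    def __init__(self, path, message):
        self.path = path
        self.message = message
        super(SchemaError, self).__init__(
            "%s: %s" % ("".join(path), message))

class Int(object):
    def coerce(self, value, path):
        if not isinstance(value, int):
            raise SchemaError(path, "expected int, got %r" % (value,))
        return value

class KeyDict(object):
    # Fixed keys with per-key schemas; unknown keys are preserved,
    # and `path` records where a coercion failed (e.g. ".port").
    def __init__(self, schema, optional=None):
        self.schema = schema
        self.optional = set(optional or ())

    def coerce(self, value, path):
        if not isinstance(value, dict):
            raise SchemaError(path, "expected dict, got %r" % (value,))
        new_dict = dict(value)  # unknown entries pass through
        for key, subschema in self.schema.items():
            if key not in value:
                if key in self.optional:
                    continue
                raise SchemaError(
                    path + [".", str(key)], "required value not found")
            new_dict[key] = subschema.coerce(
                value[key], path + [".", str(key)])
        return new_dict
```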
class SelectDict(object):
"""Something that must be a C{dict} whose schema depends on some value."""
def __init__(self, key, schemas):
"""
@param key: a key we expect to be in each of the possible schemas,
which we use to select which schema to coerce to.
@param schemas: a dictionary mapping values for C{key} to schemas,
to which the eventual value should be coerced.
"""
self.key = key
self.schemas = schemas
def coerce(self, value, path):
if self.key not in value:
raise SchemaError(
path + ['.', self.key], "required value not found")
selected = value[self.key]
return self.schemas[selected].coerce(value, path)
class OAuthString(String):
"""A L{String} containing OAuth information, colon-separated.
The string should contain three parts::
consumer-key:resource-key:resource-secret
Each part is stripped of leading and trailing whitespace.
@return: A 3-tuple of C{consumer-key}, C{resource-key},
C{resource-secret}.
"""
def coerce(self, value, path):
value = super(OAuthString, self).coerce(value, path)
parts = tuple(part.strip() for part in value.split(":"))
if len(parts) != 3:
raise SchemaError(
path, "does not contain three colon-separated parts")
if "" in parts:
raise SchemaError(
path, "one or more parts are empty")
return parts
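The same parsing can be seen in isolation; parse_oauth_string is a hypothetical standalone version of the coercion above:

```python
def parse_oauth_string(value):
    # Split "consumer-key:resource-key:resource-secret", stripping
    # whitespace from each part and rejecting missing or empty parts.
    parts = tuple(part.strip() for part in value.split(":"))
    if len(parts) != 3:
        raise ValueError("does not contain three colon-separated parts")
    if "" in parts:
        raise ValueError("one or more parts are empty")
    return parts
```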
juju-0.7.orig/juju/lib/serializer.py
from yaml import CSafeLoader, CSafeDumper, Mark
from yaml import dump as _dump
from yaml import load as _load
def dump(value):
return _dump(value, Dumper=CSafeDumper)
yaml_dump = dump
def load(value):
return _load(value, Loader=CSafeLoader)
yaml_load = load
def yaml_mark_with_path(path, mark):
# The yaml C extension's Mark can't be modified, so build a new Mark that captures the path.
return Mark(
path, mark.index,
mark.line, mark.column,
mark.buffer, mark.pointer)
juju-0.7.orig/juju/lib/service.py
from twisted.internet.defer import inlineCallbacks
from twisted.internet.threads import deferToThread
from juju.errors import ServiceError
import os
import subprocess
def _check_call(args, env=None, output_path=None):
if not output_path:
output_path = os.devnull
with open(output_path, "a") as f:
return subprocess.check_call(
args,
stdout=f.fileno(), stderr=f.fileno(),
env=env)
def _cat(filename, use_sudo=False):
args = ("cat", filename)
if use_sudo and not os.access(filename, os.R_OK):
args = ("sudo",) + args
p = subprocess.Popen(
args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout_data, _ = p.communicate()
r = p.returncode
return (r, stdout_data)
class TwistedDaemonService(object):
"""Manage the starting and stopping of an Agent.
This manager tracks the agent via its --pidfile. The pidfile argument
specifies the location of the pid file that is used to track this service.
"""
def __init__(self, name, pidfile, use_sudo=False):
self._name = name
self._use_sudo = use_sudo
self._description = None
self._environ = None
self._command = None
self._daemon = True
self._output_path = None
self._pid_path = pidfile
self._pid = None
@property
def output_path(self):
if self._output_path is not None:
return self._output_path
return "/tmp/%s.output" % self._name
@output_path.setter
def output_path(self, path):
self._output_path = path
def set_description(self, description):
self._description = description
def set_daemon(self, value):
self._daemon = bool(value)
def set_environ(self, environ):
for k, v in environ.items():
environ[k] = str(v)
self._environ = environ
def set_command(self, command):
if self._daemon:
if "--pidfile" not in command:
command += ["--pidfile", self._pid_path]
else:
# pid file is in command (consume it for get_pid)
idx = command.index("--pidfile")
self._pid_path = command[idx+1]
self._command = command
@inlineCallbacks
def _trash_output(self):
if os.path.exists(self.output_path):
# Just using os.unlink will fail when we're running TEST_SUDO
# tests which hit this code path (because root will own
# self.output_path)
yield self._call("rm", "-f", self.output_path)
if os.path.exists(self._pid_path):
yield self._call("rm", "-f", self._pid_path)
def _call(self, *args, **kwargs):
if self._use_sudo:
if self._environ:
_args = ["%s=%s" % (k, v) for k, v in self._environ.items()]
else:
_args = []
_args.insert(0, "sudo")
_args.extend(args)
args = _args
return deferToThread(_check_call, args, env=self._environ,
output_path=self.output_path)
def install(self):
if self._command is None:
raise ServiceError("Cannot manage agent: %s no command set" % (
self._name))
@inlineCallbacks
def start(self):
if (yield self.is_running()):
raise ServiceError(
"%s already running: pid (%s)" % (
self._name, self.get_pid()))
if not self.is_installed():
yield self.install()
yield self._trash_output()
yield self._call(*self._command, output_path=self.output_path)
@inlineCallbacks
def destroy(self):
if (yield self.is_running()):
yield self._call("kill", self.get_pid())
yield self._trash_output()
def get_pid(self):
if self._pid is not None:
return self._pid
if not os.path.exists(self._pid_path):
return None
r, data = _cat(self._pid_path, use_sudo=self._use_sudo)
if r != 0:
return None
# verify that pid is a number but leave
# it as a string suitable for passing to kill
if not data.strip().isdigit():
return None
pid = data.strip()
self._pid = pid
return self._pid
def is_running(self):
pid = self.get_pid()
if not pid:
return False
proc_file = "/proc/%s" % pid
if not os.path.exists(proc_file):
return False
return True
def is_installed(self):
return False
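get_pid deliberately keeps the pid as a string, suitable for passing straight to kill, but only after checking that the pidfile contents are purely numeric. That validation in isolation (parse_pidfile_data is a hypothetical helper, not part of the class above):

```python
def parse_pidfile_data(data):
    # Accept only purely numeric pidfile contents; anything else is
    # treated as a corrupt pidfile. The pid stays a string so it can
    # be handed directly to `kill <pid>`.
    pid = data.strip()
    if not pid.isdigit():
        return None
    return pid
```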
juju-0.7.orig/juju/lib/statemachine.py
"""A simple state machine for twisted applications.
Responsibilities are divided between three classes. A workflow class,
composed of transitions, and responsible for verifying the transitions
available from each state. The transitions define their endpoints, and
optionally a transition action, and an error transition. When the
transition is executed to move a context between two endpoint states, the
transition action is invoked. If it fails with a TransitionError, the
error transition is fired. If it succeeds, it can return a dictionary
of values. These values are stored.
The workflow state class forms the basis for interacting with the workflow
system. It bridges an arbitrary domain object/context with its associated
workflow. It's used to fire transitions, to store and load state, and as a
location for defining any relevant transition actions.
"""
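The classes below are asynchronous (Twisted deferreds throughout), but the core idea can be sketched synchronously. TinyWorkflow and its do_install action are hypothetical illustrations, not part of the real API:

```python
class TinyWorkflow(object):
    # Transitions are (source, destination) edges keyed by id; firing
    # one runs an optional do_<transition_id> action, stores any dict
    # it returns as state variables, then moves to the destination.
    def __init__(self, transitions):
        self.transitions = transitions
        self.state = None
        self.variables = {}

    def fire(self, transition_id):
        source, destination = self.transitions[transition_id]
        if self.state != source:
            raise ValueError("%r not valid from state %r"
                             % (transition_id, self.state))
        action = getattr(self, "do_%s" % transition_id, None)
        if callable(action):
            result = action()
            if isinstance(result, dict):
                self.variables.update(result)
        self.state = destination
        return True

class InstallWorkflow(TinyWorkflow):
    def do_install(self):
        return {"installed": True}
```

Firing "install" from the initial None state runs do_install, records its returned variables, and lands in the "installed" state; firing it again fails because the source no longer matches.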
import logging
from twisted.internet.defer import DeferredLock, inlineCallbacks, returnValue
class WorkflowError(Exception):
pass
class InvalidStateError(WorkflowError):
pass
class InvalidTransitionError(WorkflowError):
pass
class TransitionError(WorkflowError):
pass
log = logging.getLogger("statemachine")
def class_name(instance):
return instance.__class__.__name__.lower()
class _ExitCaller(object):
def __init__(self, func):
self._func = func
def __enter__(self):
pass
def __exit__(self, *exc_info):
self._func()
class WorkflowState(object):
_workflow = None
def __init__(self, workflow=None):
if workflow:
self._workflow = workflow
self._observer = None
self._lock = DeferredLock()
@inlineCallbacks
def lock(self):
yield self._lock.acquire()
returnValue(_ExitCaller(self._lock.release))
def _assert_locked(self):
"""Should be called at the start of any method which changes state.
This is a frankly pitiful hack that should (handwave handwave) help
people to use this correctly; it doesn't stop anyone from calling
write methods on this object while someone *else* holds a lock, but
hopefully it will help us catch these situations when unit testing.
This method only exists as a place to put this documentation.
"""
assert self._lock.locked
@inlineCallbacks
def get_available_transitions(self):
"""Return a list of valid transitions from the current state.
"""
state_id = yield self.get_state()
returnValue(self._workflow.get_transitions(state_id))
@inlineCallbacks
def fire_transition_alias(self, transition_alias):
"""Fire a transition with the matching alias.
A transition from the current state with the given alias will
be located.
The purpose of alias is to allow groups of transitions, each
from a different state, to be invoked unambiguously by
a caller, for example::
>> state.fire_transition_alias("upgrade")
>> state.fire_transition_alias("settings-changed")
>> state.fire_transition_alias("error")
All will invoke the appropriate transition from their state
without the caller having to do state inspection or transition
id mangling.
Ambiguous (multiple) or missing matching transitions cause an
InvalidTransitionError exception to be raised.
"""
self._assert_locked()
found = []
for t in (yield self.get_available_transitions()):
if transition_alias == t.alias:
found.append(t)
if len(found) > 1:
current_state = yield self.get_state()
raise InvalidTransitionError(
"Multiple transitions for alias:%s state:%s transitions:%s" % (
transition_alias, current_state, found))
if len(found) == 0:
current_state = yield self.get_state()
raise InvalidTransitionError(
"No transition found for alias:%s state:%s" % (
transition_alias, current_state))
value = yield self.fire_transition(found[0].transition_id)
returnValue(value)
@inlineCallbacks
def transition_state(self, state_id):
"""Attempt a transition to the given state.
Will look for a transition to the given state from the
current state, and execute it if one exists.
Returns a boolean value based on whether the state
was achieved.
"""
self._assert_locked()
# verify it's a valid state id
if not self._workflow.has_state(state_id):
raise InvalidStateError(state_id)
transitions = yield self.get_available_transitions()
for transition in transitions:
if transition.destination == state_id:
break
else:
returnValue(False)
log.debug("%s: transition state (%s -> %s)",
class_name(self),
transition.source,
transition.destination)
result = yield self.fire_transition(transition.transition_id)
returnValue(result)
@inlineCallbacks
def fire_transition(self, transition_id, **state_variables):
"""Fire a transition with given id.
Invokes any transition actions, saves state and state variables, and
error transitions as needed.
"""
self._assert_locked()
# Verify and retrieve the transition.
available = yield self.get_available_transitions()
available_ids = [t.transition_id for t in available]
if transition_id not in available_ids:
current_state = yield self.get_state()
raise InvalidTransitionError(
"%r not a valid transition for state %s" % (
transition_id, current_state))
yield self.set_inflight(transition_id)
transition = self._workflow.get_transition(transition_id)
log.debug("%s: transition %s (%s -> %s) %r",
class_name(self),
transition_id,
transition.source,
transition.destination,
state_variables)
# Execute any per transition action.
action_id = "do_%s" % transition_id
action = getattr(self, action_id, None)
if callable(action):
try:
log.debug("%s: execute action %s",
class_name(self), action.__name__)
variables = yield action()
if isinstance(variables, dict):
state_variables.update(variables)
except TransitionError, e:
# If an error happens during the transition, allow for
# executing an error transition.
if transition.error_transition_id:
log.debug("%s: executing error transition %s, %s",
class_name(self),
transition.error_transition_id,
e)
yield self.fire_transition(
transition.error_transition_id)
else:
yield self.set_inflight(None)
log.debug("%s: transition %s failed %s",
class_name(self), transition_id, e)
# Bail, and note the error as a return value.
returnValue(False)
# Set the state with state variables (and implicitly clear inflight)
yield self.set_state(transition.destination, **state_variables)
log.debug("%s: transition complete %s (state %s) %r",
class_name(self), transition_id,
transition.destination, state_variables)
yield self._fire_automatic_transitions()
returnValue(True)
@inlineCallbacks
def _fire_automatic_transitions(self):
self._assert_locked()
available = yield self.get_available_transitions()
for t in available:
if t.automatic:
yield self.fire_transition(t.transition_id)
return
@inlineCallbacks
def get_state(self):
"""Get the current workflow state.
"""
state_dict = yield self._load()
if not state_dict:
returnValue(None)
returnValue(state_dict["state"])
@inlineCallbacks
def get_state_variables(self):
"""Retrieve a dictionary of variables associated to the current state.
"""
state_dict = yield self._load()
if not state_dict:
returnValue({})
returnValue(state_dict["state_variables"])
def set_observer(self, observer):
"""Set a callback, that will be notified on state changes.
The caller will receive the new state and the new state
variables as dictionary via positional args. ie.::
def callback(new_state, state_variables):
print new_state, state_variables
"""
self._observer = observer
@inlineCallbacks
def set_state(self, state, **variables):
"""Set the current workflow state, optionally setting state variables.
"""
self._assert_locked()
yield self._store(dict(state=state, state_variables=variables))
if self._observer:
self._observer(state, variables)
@inlineCallbacks
def set_inflight(self, transition_id):
"""Record intent to perform a transition, or completion of same.
Ideally, this would not be exposed to the public, but it's necessary
for writing sane tests.
"""
self._assert_locked()
state = yield self._load() or {}
state.setdefault("state", None)
state.setdefault("state_variables", {})
if transition_id is not None:
state["transition_id"] = transition_id
else:
state.pop("transition_id", None)
yield self._store(state)
@inlineCallbacks
def get_inflight(self):
"""Get the id of the transition that is currently executing.
(Or which was abandoned due to unexpected process death.)
"""
state = yield self._load() or {}
returnValue(state.get("transition_id"))
@inlineCallbacks
def synchronize(self):
"""Rerun inflight transition, if any, and any default transitions."""
self._assert_locked()
# First of all, complete any abandoned transition.
transition_id = yield self.get_inflight()
if transition_id is not None:
yield self.fire_transition(transition_id)
else:
yield self._fire_automatic_transitions()
def _load(self):
""" Load the state and variables from persistent storage.
"""
pass
def _store(self, state_dict):
""" Store the state and variables to persistent storage.
"""
pass
class Workflow(object):
def __init__(self, *transitions):
self.initialize(transitions)
def initialize(self, transitions):
"""Initialize the internal data structures with the given transitions.
"""
self._sources = {}
self._transitions = {}
for t in transitions:
self._sources.setdefault(t.source, []).append(t.transition_id)
self._sources.setdefault(t.destination, [])
self._transitions[t.transition_id] = t
def get_transitions(self, source_id):
"""Retrieve the transition objects valid from the given source state id.
"""
if source_id not in self._sources:
raise InvalidStateError(source_id)
transitions = self._sources[source_id]
return [self._transitions[t] for t in transitions]
def get_transition(self, transition_id):
"""Retrieve a transition object by id.
"""
return self._transitions[transition_id]
def has_state(self, state_id):
return state_id in self._sources
class Transition(object):
"""A transition encapsulates an edge in the statemachine graph.
:attr:`transition_id` The identity of the transition.
:attr:`label` A human readable label of the transition's purpose.
:attr:`source` The origin/source state of the transition.
:attr:`destination` The target/destination state of the transition.
:attr:`action_id` The name of the action method to use for this transition.
:attr:`error_transition_id`: A transition to fire if the action fails.
:attr:`automatic`: If true, always try to fire this transition whenever in
`source` state.
:attr:`alias` See :meth:`WorkflowState.fire_transition_alias`
"""
def __init__(self, transition_id, label, source, destination,
error_transition_id=None, automatic=False, alias=None):
self._transition_id = transition_id
self._label = label
self._source = source
self._destination = destination
self._error_transition_id = error_transition_id
self._automatic = automatic
self._alias = alias
@property
def transition_id(self):
"""The id of this transition.
"""
return self._transition_id
@property
def label(self):
return self._label
@property
def destination(self):
"""The destination state id of this transition.
"""
return self._destination
@property
def source(self):
"""The origin state id of this transition.
"""
return self._source
@property
def alias(self):
return self._alias
@property
def error_transition_id(self):
"""The id of a transition to fire upon an error of this transition.
"""
return self._error_transition_id
@property
def automatic(self):
"""Should this transition always fire whenever possible?
"""
return self._automatic
juju-0.7.orig/juju/lib/testing.py
import itertools
import logging
import os
import yaml
import StringIO
import sys
from twisted.internet.defer import Deferred, inlineCallbacks, returnValue
from twisted.internet import reactor
from twisted.trial.unittest import TestCase as TrialTestCase
from txzookeeper import ZookeeperClient
from txzookeeper.managed import ManagedClient
from juju.lib.mocker import MockerTestCase
from juju.tests.common import get_test_zookeeper_address
class TestCase(TrialTestCase, MockerTestCase):
"""
Base class for all juju tests.
"""
# Default timeout for any test
timeout = 5
# Default value for zookeeper test client
client = None
def capture_stream(self, stream_name):
original = getattr(sys, stream_name)
new = StringIO.StringIO()
@self.addCleanup
def reset_stream():
setattr(sys, stream_name, original)
setattr(sys, stream_name, new)
return new
def capture_logging(self, name="", level=logging.INFO,
log_file=None, formatter=None):
if log_file is None:
log_file = StringIO.StringIO()
log_handler = logging.StreamHandler(log_file)
if formatter:
log_handler.setFormatter(formatter)
logger = logging.getLogger(name)
logger.addHandler(log_handler)
old_logger_level = logger.level
logger.setLevel(level)
@self.addCleanup
def reset_logging():
logger.removeHandler(log_handler)
logger.setLevel(old_logger_level)
return log_file
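capture_logging works by attaching a StreamHandler that writes into an in-memory buffer and undoing everything in a cleanup. A standalone Python 3 sketch that returns the buffer plus an explicit restore callback (the TestCase version above uses addCleanup instead):

```python
import io
import logging

def capture_logging(name="", level=logging.INFO):
    # Route records for `name` into an in-memory buffer.
    log_file = io.StringIO()
    handler = logging.StreamHandler(log_file)
    logger = logging.getLogger(name)
    old_level = logger.level
    logger.addHandler(handler)
    logger.setLevel(level)

    def restore():
        # Detach the handler and put the level back.
        logger.removeHandler(handler)
        logger.setLevel(old_level)

    return log_file, restore
```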
_missing_attr = object()
def patch(self, object, attr, value):
"""Replace an object's attribute, and restore original value later.
Returns the original value of the attribute if any or None.
"""
original_value = getattr(object, attr, self._missing_attr)
@self.addCleanup
def restore_original():
if original_value is self._missing_attr:
try:
delattr(object, attr)
except AttributeError:
pass
else:
setattr(object, attr, original_value)
setattr(object, attr, value)
if original_value is self._missing_attr:
return None
return original_value
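The `_missing_attr` sentinel is what lets `patch` distinguish "attribute was absent" (restore by deleting) from "attribute was None" (restore by setting). A standalone sketch of that mechanism, with a hypothetical `Config` class and a cleanup list standing in for `addCleanup`:

```python
_missing = object()  # sentinel: distinguishes "absent" from "set to None"

def patch(obj, attr, value, cleanups):
    """Set obj.attr to value; record a cleanup restoring (or deleting) it."""
    original = getattr(obj, attr, _missing)
    def restore():
        if original is _missing:
            try:
                delattr(obj, attr)
            except AttributeError:
                pass
        else:
            setattr(obj, attr, original)
    cleanups.append(restore)
    setattr(obj, attr, value)
    return None if original is _missing else original

class Config(object):
    retries = 3

cleanups = []
old = patch(Config, "retries", 10, cleanups)
assert (old, Config.retries) == (3, 10)
for undo in reversed(cleanups):
    undo()
assert Config.retries == 3
```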
def change_args(self, *args):
"""Change the cli args to the specified, with restoration later."""
original_args = sys.argv
sys.argv = list(args)
@self.addCleanup
def restore():
sys.argv = original_args
def change_environment(self, **kw):
"""Reset the environment to kwargs. The tests runtime
environment will be initialized with only those values passed
as kwargs.
The original state of the environment will be restored after
the tests complete.
"""
# preserve key elements needed for testing
for env in ["AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"EC2_PRIVATE_KEY",
"EC2_CERT",
"HOME",
"ZOOKEEPER_ADDRESS"]:
if env not in kw:
kw[env] = os.environ.get(env, "")
original_environ = dict(os.environ)
@self.addCleanup
def cleanup_env():
os.environ.clear()
os.environ.update(original_environ)
os.environ.clear()
os.environ.update(kw)
def assertInstance(self, instance, type):
self.assertTrue(isinstance(instance, type))
    def assertLogLines(self, observed, expected):
        """Asserts that the lines of `expected` exist in order in the log."""
        logged = observed.split("\n")
        it = iter(expected)
        for line in logged:
            it, peekat = itertools.tee(it)
            peeked = next(peekat, None)
            if peeked is None:
                break  # all expectations satisfied
            if peeked in line:
                next(it)  # then consume this expectation and move on
        remaining = list(it)
        self.assertFalse(
            remaining,
            "Did not see all expected lines in log, in order: %s, %s" % (
                observed, remaining))
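Stripped of the `tee` bookkeeping, the check is ordered-subsequence matching: each expected fragment must match some log line, and the matches must occur in order. A plain-function sketch with hypothetical log content:

```python
def lines_appear_in_order(observed, expected):
    """True if each expected fragment matches some log line, in order."""
    remaining = iter(expected)
    want = next(remaining, None)
    for line in observed.split("\n"):
        if want is not None and want in line:
            want = next(remaining, None)  # satisfied; advance to the next one
    return want is None  # every expectation was consumed

log = "INFO: unit started\nDEBUG: relation joined\nINFO: hook done\n"
assert lines_appear_in_order(log, ["unit started", "hook done"])
assert not lines_appear_in_order(log, ["hook done", "unit started"])
```

Note that fragments are matched with `in`, so expectations are substrings of log lines, not exact lines.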
def sleep(self, delay):
"""Non-blocking sleep."""
deferred = Deferred()
reactor.callLater(delay, deferred.callback, None)
return deferred
@inlineCallbacks
def poke_zk(self):
"""Create a roundtrip communication to zookeeper.
An alternative to sleeping in many cases when waiting for
a zookeeper watch or interaction to trigger a callback.
"""
if self.client is None:
raise ValueError("No Zookeeper client to utilize")
yield self.client.exists("/zookeeper")
returnValue(True)
def get_zookeeper_client(self):
client = ManagedClient(
get_test_zookeeper_address(), session_timeout=1000)
return client
@inlineCallbacks
def dump_data(self, path="/"):
client = self.client
output = {}
@inlineCallbacks
def export_tree(path, indent):
d = {}
data, stat = yield client.get(path)
d['contents'] = _decode_fmt(data, yaml.load)
children = yield client.get_children(path)
for name in children:
if path == "/" and name == "zookeeper":
continue
cd = yield export_tree(path + '/' + name, indent)
d[name] = cd
returnValue(d)
output[path.rsplit('/', 1)[1]] = yield export_tree(path, '')
returnValue(output)
@inlineCallbacks
    def assertTree(self, path, expected):
        data = yield self.dump_data(path)
        self.assertEqual(data, expected)
@inlineCallbacks
def dump_tree(self, path="/", format=yaml.load):
client = self.client
output = []
out = output.append
@inlineCallbacks
def export_tree(path, indent):
data, stat = yield client.get(path)
name = path.rsplit("/", 1)[1]
properties = _decode_fmt(data, format)
out(indent + "/" + name)
indent += " "
for i in sorted(properties.iteritems()):
out(indent + "%s = %r" % i)
children = yield client.get_children(path)
for name in sorted(children):
if path == "/" and name == "zookeeper":
continue
yield export_tree(path + "/" + name, indent)
yield export_tree(path, "")
returnValue("\n".join(output) + "\n")
def _decode_fmt(s, decoder):
    s = s.strip()
    if not s:
        return {}
    try:
        data = decoder(s)
    except Exception:
        data = dict(string_value=s)
    return data
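The helper is deliberately lenient: empty node data becomes `{}`, decodable data is parsed, and anything else is wrapped rather than raising, so a tree dump never aborts on one malformed node. The same behavior, sketched with `json.loads` standing in for `yaml.load` to keep the example dependency-free:

```python
import json

def decode_fmt(s, decoder):
    """Decode node data leniently: empty -> {}, undecodable -> wrapped string."""
    s = s.strip()
    if not s:
        return {}
    try:
        return decoder(s)
    except Exception:
        return dict(string_value=s)

assert decode_fmt("   ", json.loads) == {}
assert decode_fmt('{"a": 1}', json.loads) == {"a": 1}
assert decode_fmt("not json", json.loads) == {"string_value": "not json"}
```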
juju-0.7.orig/juju/lib/twistutils.py

import inspect
import os
from twisted.internet import reactor
from twisted.internet.defer import (
Deferred, maybeDeferred, succeed, DeferredList)
from twisted.python.util import mergeFunctionMetadata
def concurrent_execution_guard(attribute):
"""Sets attribute to True/False during execution of the decorated method.
Used to ensure non concurrent execution of the decorated function via
an instance attribute. *The underlying function must return a defered*.
"""
def guard(f):
def guard_execute(self, *args, **kw):
value = getattr(self, attribute, None)
if value:
return succeed(False)
else:
setattr(self, attribute, True)
d = maybeDeferred(f, self, *args, **kw)
def post_execute(result):
setattr(self, attribute, False)
return result
d.addBoth(post_execute)
return d
return mergeFunctionMetadata(f, guard_execute)
return guard
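The guard's idea is a flag attribute checked before each call: while one invocation is in flight, further calls are refused instead of overlapping. Below is a *synchronous* sketch of that idea (the real decorator wraps the call in `maybeDeferred`, returns `succeed(False)` when refusing, and clears the flag in an `addBoth` callback); the `Worker` class is hypothetical:

```python
import functools

def concurrent_execution_guard(attribute):
    """Refuse (returning False) while a previous call is still running."""
    def guard(f):
        @functools.wraps(f)
        def guard_execute(self, *args, **kw):
            if getattr(self, attribute, None):
                return False  # already running: refuse re-entry
            setattr(self, attribute, True)
            try:
                return f(self, *args, **kw)
            finally:
                setattr(self, attribute, False)  # always clear the flag
        return guard_execute
    return guard

class Worker(object):
    @concurrent_execution_guard("_busy")
    def run(self, reenter=False):
        if reenter:
            return self.run()  # simulated overlapping call
        return True

w = Worker()
assert w.run() is True
assert w.run(reenter=True) is False  # the inner call hits the guard
```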
def gather_results(deferreds, consume_errors=True):
d = DeferredList(deferreds, fireOnOneErrback=1,
consumeErrors=consume_errors)
d.addCallback(lambda r: [x[1] for x in r])
d.addErrback(lambda f: f.value.subFailure)
return d
def get_module_directory(module):
"""Determine the directory of a module.
Trial rearranges the working directory such that the module
paths are relative to a modified current working directory,
which results in failing tests when run under coverage, we
manually remove the trial locations to ensure correct
directories are utilized.
"""
return os.path.abspath(os.path.dirname(inspect.getabsfile(module)).replace(
"/_trial_temp", ""))
def sleep(delay):
"""Non-blocking sleep.
:param int delay: time in seconds to sleep.
:return: a Deferred that fires after the desired delay.
:rtype: :class:`twisted.internet.defer.Deferred`
"""
deferred = Deferred()
reactor.callLater(delay, deferred.callback, None)
return deferred
juju-0.7.orig/juju/lib/under.py

import string
_SAFE_CHARS = set(string.ascii_letters + string.digits + ".-")
_CHAR_MAP = {}
for i in range(256):
c = chr(i)
_CHAR_MAP[c] = c if c in _SAFE_CHARS else "_%02x_" % i
_quote_char = _CHAR_MAP.__getitem__
def quote(unsafe):
return "".join(map(_quote_char, unsafe))
juju-0.7.orig/juju/lib/upstart.py

import os
import subprocess
from tempfile import NamedTemporaryFile
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet.threads import deferToThread
from twisted.internet.utils import getProcessOutput
from juju.errors import ServiceError
from juju.lib.twistutils import sleep
_CONF_TEMPLATE = """\
description "%s"
author "Juju Team "
start on runlevel [2345]
stop on runlevel [!2345]
respawn
%s
exec %s >> %s 2>&1
"""
def _silent_check_call(args):
with open(os.devnull, "w") as f:
return subprocess.check_call(
args, stdout=f.fileno(), stderr=f.fileno())
class UpstartService(object):
    # defined on the class for ease of testing (overridable per instance)
init_dir = "/etc/init"
def __init__(self, name, init_dir=None, use_sudo=False):
self._name = name
if init_dir is not None:
self.init_dir = init_dir
self._use_sudo = use_sudo
self._output_path = None
self._description = None
self._environ = {}
self._command = None
@property
def _conf_path(self):
return os.path.join(
self.init_dir, "%s.conf" % self._name)
@property
def output_path(self):
if self._output_path is not None:
return self._output_path
return "/tmp/%s.output" % self._name
def set_description(self, description):
self._description = description
def set_environ(self, environ):
self._environ = environ
def set_command(self, command):
self._command = command
def set_output_path(self, path):
self._output_path = path
@inlineCallbacks
def _trash_output(self):
if os.path.exists(self.output_path):
# Just using os.unlink will fail when we're running TEST_SUDO tests
# which hit this code path (because root will own self.output_path)
yield self._call("rm", self.output_path)
def _render(self):
if self._description is None:
raise ServiceError("Cannot render .conf: no description set")
if self._command is None:
raise ServiceError("Cannot render .conf: no command set")
return _CONF_TEMPLATE % (
self._description,
"\n".join('env %s="%s"' % kv
for kv in sorted(self._environ.items())),
self._command,
self.output_path)
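Filling the template produces a complete upstart `.conf`: description, runlevel triggers, one `env` line per variable (sorted for stable output), and the command with its output redirected. A sketch of that rendering with hypothetical values (command, paths, and variable names are illustrative, not taken from juju):

```python
CONF_TEMPLATE = """\
description "%s"
author "Juju Team"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
%s
exec %s >> %s 2>&1
"""

environ = {"JUJU_ZOOKEEPER": "localhost:2181", "JUJU_UNIT_NAME": "mysql/0"}
conf = CONF_TEMPLATE % (
    "juju unit agent",                                         # description
    "\n".join('env %s="%s"' % kv for kv in sorted(environ.items())),
    "/usr/bin/python -m juju.agents.unit",                     # command
    "/tmp/juju-unit-agent.output")                             # output path

assert 'env JUJU_UNIT_NAME="mysql/0"' in conf
assert conf.splitlines()[0] == 'description "juju unit agent"'
```

Sorting the environment items is what makes `_render` deterministic, so tests can compare the rendered file against a fixed expected string.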
def _call(self, *args):
if self._use_sudo:
args = ("sudo",) + args
return deferToThread(_silent_check_call, args)
def get_cloud_init_commands(self):
return ["cat >> %s <