fabric-1.10.2/.gitignore
*~
*.pyc
*.pyo
*.pyt
*.pytc
*.egg
.DS_Store
.*.swp
Fabric.egg-info
.coverage
docs/_build
dist
build/
tags
TAGS
.tox
tox.ini
.idea/
sites/*/_build
fabric-1.10.2/.travis.yml
language: python
python:
  - "2.6"
  - "2.7"
install:
  # Build/test dependencies
  - pip install -r requirements.txt
  # Get fab to test fab
  - pip install -e .
  # Deal with issue on Travis builders re: multiprocessing.Queue :(
  - "sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm"
  - "pip install jinja2"
before_script:
  # Allow us to SSH passwordless to localhost
  - ssh-keygen -f ~/.ssh/id_rsa -N ""
  - cp ~/.ssh/{id_rsa.pub,authorized_keys}
  # Creation of an SSH agent for testing forwarding
  - eval $(ssh-agent)
  - ssh-add
script:
  # Normal tests
  - fab test
  # Integration tests
  - fab -H localhost test:"--tests\=integration"
  # Build docs; www first without warnings so its intersphinx objects file
  # generates. Then docs (with warnings->errors), then www again (also w/
  # warnings on.) FUN TIMES WITH CIRCULAR DEPENDENCIES.
  - invoke www
  - invoke docs -o -W
  - invoke www -c -o -W
notifications:
  irc:
    channels: "irc.freenode.org#fabric"
    template:
      - "%{repository}@%{branch}: %{message} (%{build_url})"
    on_success: change
    on_failure: change
  email: false
fabric-1.10.2/AUTHORS
The following list contains individuals who contributed nontrivial code to
Fabric's codebase, ordered by date of first contribution. Individuals who
submitted bug reports or trivial one-line "you forgot to do X" patches are
generally credited in the commit log only.
IMPORTANT: as of 2012, this file is historical only and we'll probably stop
updating it. The changelog and/or Git history is the canonical source for
thanks, credits etc.
Christian Vest Hansen
Rob Cowie
Jeff Forcier
Travis Cline
Niklas Lindström
Kevin Horn
Max Battcher
Alexander Artemenko
Dennis Schoen
Erick Dennis
Sverre Johansen
Michael Stephens
Armin Ronacher
Curt Micol
Patrick McNerthney
Steve Steiner
Ali Saifee
Jorge Vargas
Peter Ellis
Brian Rosner
Xinan Wu
Alex Koshelev
Mich Matuson
Morgan Goose
Carl Meyer
Erich Heine
Travis Swicegood
Paul Smith
Stephen Goss
James Murty
Thomas Ballinger
Rick Harding
Kirill Pinchuk
Ales Zoulek
Casey Banner
Roman Imankulov
Rodrigue Alcazar
Jeremy Avnet
Matt Chisholm
Mark Merritt
Max Arnold
Szymon Reichmann
David Wolever
Jason Coombs
Ben Davis
Neilen Marais
Rory Geoghegan
Alexey Diyan
Kamil Kisiel
Jonas Lundberg
fabric-1.10.2/CONTRIBUTING.rst
Please see `contribution-guide.org <http://www.contribution-guide.org>`_ for
details on what we expect from contributors. Thanks!
fabric-1.10.2/INSTALL
For installation help, please see http://fabfile.org/ or (if using a source
checkout) sites/www/installing.rst.
fabric-1.10.2/LICENSE
Copyright (c) 2009-2015 Jeffrey E. Forcier
Copyright (c) 2008-2009 Christian Vest Hansen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
fabric-1.10.2/MANIFEST.in
include AUTHORS
include INSTALL
include LICENSE
include README.rst
recursive-include sites *
recursive-exclude sites/docs/_build *
recursive-exclude sites/www/_build *
include requirements.txt
recursive-include tests *
recursive-exclude tests *.pyc *.pyo
fabric-1.10.2/README.rst
Fabric is a Python (2.5-2.7) library and command-line tool for
streamlining the use of SSH for application deployment or systems
administration tasks.
It provides a basic suite of operations for executing local or remote shell
commands (normally or via ``sudo``) and uploading/downloading files, as well as
auxiliary functionality such as prompting the running user for input, or
aborting execution.
Typical use involves creating a Python module containing one or more functions,
then executing them via the ``fab`` command-line tool. Below is a small but
complete "fabfile" containing a single task:
.. code-block:: python

    from fabric.api import run

    def host_type():
        run('uname -s')

Once a task is defined, it may be run on one or more servers, like so::

    $ fab -H localhost,linuxbox host_type
    [localhost] run: uname -s
    [localhost] out: Darwin

    [linuxbox] run: uname -s
    [linuxbox] out: Linux

    Done.
    Disconnecting from localhost... done.
    Disconnecting from linuxbox... done.
In addition to use via the ``fab`` tool, Fabric's components may be imported
into other Python code, providing a Pythonic interface to the SSH protocol
suite at a higher level than that provided by e.g. the ``Paramiko`` library
(which Fabric itself uses.)
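For instance, here is a minimal sketch of driving a task from a plain Python
script via ``execute`` (the user and host names are illustrative placeholders,
not part of Fabric's documentation):

.. code-block:: python

    from fabric.api import env, run
    from fabric.tasks import execute

    def uptime():
        run('uptime')

    if __name__ == '__main__':
        env.user = 'deploy'  # hypothetical remote user
        execute(uptime, hosts=['web1.example.com'])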
fabric-1.10.2/dev-requirements.txt
# You should already have the dev version of Paramiko and your local Fabric
# checkout installed! Stable Paramiko may not be sufficient!
# Test runner/testing utils
nose
# Rudolf adds color to the output of 'fab test'. This is a custom fork
# addressing Python 2.7 and Nose's 'skip' plugin compatibility issues.
-e git+https://github.com/bitprophet/rudolf#egg=rudolf
# Mocking library
Fudge<1.0
# Documentation generation
Sphinx>=1.2
releases==0.6.1
invoke==0.10.1
invocations>=0.10,<0.11
alabaster>=0.6.1
semantic_version==2.4
wheel==0.24
fabric-1.10.2/fabfile/__init__.py
"""
Fabric's own fabfile.
"""
from __future__ import with_statement
import nose
from fabric.api import abort, local, task
@task(default=True)
def test(args=None):
    """
    Run all unit tests and doctests.

    Specify string argument ``args`` for additional args to ``nosetests``.
    """
    # Default to explicitly targeting the 'tests' folder, but only if nothing
    # is being overridden.
    tests = "" if args else " tests"
    default_args = "-sv --with-doctest --nologcapture --with-color %s" % tests
    default_args += (" " + args) if args else ""
    nose.core.run_exit(argv=[''] + default_args.split())
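# Example invocations from a source checkout (the second mirrors the
# integration-test call in .travis.yml):
#
#   fab test
#   fab test:"--tests\=integration"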
fabric-1.10.2/fabric/__init__.py
"""
See `fabric.api` for the publicly importable API.
"""
fabric-1.10.2/fabric/__main__.py
import fabric.main
fabric.main.main()
fabric-1.10.2/fabric/api.py
"""
Non-init module for doing convenient * imports from.
Necessary because if we did this in __init__, one would be unable to import
anything else inside the package -- like, say, the version number used in
setup.py -- without triggering loads of most of the code. Which doesn't work so
well when you're using setup.py to install e.g. ssh!
"""
from fabric.context_managers import (cd, hide, settings, show, path, prefix,
lcd, quiet, warn_only, remote_tunnel, shell_env)
from fabric.decorators import (hosts, roles, runs_once, with_settings, task,
serial, parallel)
from fabric.operations import (require, prompt, put, get, run, sudo, local,
reboot, open_shell)
from fabric.state import env, output
from fabric.utils import abort, warn, puts, fastprint
from fabric.tasks import execute
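# Typical fabfile usage is to pull names from this module; a sketch (the task
# body and service name are hypothetical):
#
#   from fabric.api import env, run, sudo, task
#
#   @task
#   def restart():
#       sudo('service nginx restart')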
fabric-1.10.2/fabric/auth.py
"""
Common authentication subroutines. Primarily for internal use.
"""
def get_password(user, host, port):
    from fabric.state import env
    from fabric.network import join_host_strings
    host_string = join_host_strings(user, host, port)
    return env.passwords.get(host_string, env.password)


def set_password(user, host, port, password):
    from fabric.state import env
    from fabric.network import join_host_strings
    host_string = join_host_strings(user, host, port)
    env.password = env.passwords[host_string] = password
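# Sketch of the caching behavior (all values hypothetical): once a password is
# learned for a host, it is stored both per host string and as the global
# fallback default:
#
#   set_password('deploy', 'web1.example.com', '22', 's3cret')
#   get_password('deploy', 'web1.example.com', '22')  # -> 's3cret'
#   get_password('deploy', 'db1.example.com', '22')   # -> 's3cret' (fallback)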
fabric-1.10.2/fabric/colors.py
"""
.. versionadded:: 0.9.2
Functions for wrapping strings in ANSI color codes.
Each function within this module returns the input string ``text``, wrapped
with ANSI color codes for the appropriate color.
For example, to print some text as green on supporting terminals::

    from fabric.colors import green
    print(green("This text is green!"))

Because these functions simply return modified strings, you can nest them::

    from fabric.colors import red, green
    print(red("This sentence is red, except for " +
              green("these words, which are green") + "."))
If ``bold`` is set to ``True``, the ANSI flag for bolding will be flipped on
for that particular invocation, which usually shows up as a bold or brighter
version of the original color on most terminals.
"""
def _wrap_with(code):
    def inner(text, bold=False):
        c = code
        if bold:
            c = "1;%s" % c
        return "\033[%sm%s\033[0m" % (c, text)
    return inner
red = _wrap_with('31')
green = _wrap_with('32')
yellow = _wrap_with('33')
blue = _wrap_with('34')
magenta = _wrap_with('35')
cyan = _wrap_with('36')
white = _wrap_with('37')
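# A small demo of the resulting escape sequences (counts are made up), guarded
# so importing this module stays side-effect free; output only renders
# correctly on an ANSI-capable terminal:
if __name__ == '__main__':
    print(green("OK", bold=True) + " 12 passed, " + red("2 failed"))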
fabric-1.10.2/fabric/context_managers.py
"""
Context managers for use with the ``with`` statement.
.. note:: When using Python 2.5, you will need to start your fabfile
with ``from __future__ import with_statement`` in order to make use of
the ``with`` statement (which is a regular, non ``__future__`` feature of
Python 2.6+.)
.. note:: If you are using multiple directly nested ``with`` statements, it can
be convenient to use multiple context expressions in one single with
statement. Instead of writing::
    with cd('/path/to/app'):
        with prefix('workon myvenv'):
            run('./manage.py syncdb')
            run('./manage.py loaddata myfixture')

you can write::

    with cd('/path/to/app'), prefix('workon myvenv'):
        run('./manage.py syncdb')
        run('./manage.py loaddata myfixture')

Note that you need Python 2.7+ for this to work. On Python 2.5 or 2.6, you
can do the following::

    from contextlib import nested

    with nested(cd('/path/to/app'), prefix('workon myvenv')):
        ...
Finally, note that `~fabric.context_managers.settings` implements
``nested`` itself -- see its API doc for details.
"""
from contextlib import contextmanager, nested
import socket
import select
from fabric.thread_handling import ThreadHandler
from fabric.state import output, win32, connections, env
from fabric import state
from fabric.utils import isatty
if not win32:
    import termios
    import tty


def _set_output(groups, which):
    """
    Refactored subroutine used by ``hide`` and ``show``.
    """
    previous = {}
    try:
        # Preserve original values, pull in new given value to use
        for group in output.expand_aliases(groups):
            previous[group] = output[group]
            output[group] = which
        # Yield control
        yield
    finally:
        # Restore original values
        output.update(previous)


def documented_contextmanager(func):
    wrapper = contextmanager(func)
    wrapper.undecorated = func
    return wrapper
@documented_contextmanager
def show(*groups):
"""
Context manager for setting the given output ``groups`` to True.
``groups`` must be one or more strings naming the output groups defined in
`~fabric.state.output`. The given groups will be set to True for the
duration of the enclosed block, and restored to their previous value
afterwards.
For example, to turn on debug output (which is typically off by default)::
def my_task():
with show('debug'):
run('ls /var/www')
As almost all output groups are displayed by default, `show` is most useful
for turning on the normally-hidden ``debug`` group, or when you know or
suspect that code calling your own code is trying to hide output with
`hide`.
"""
return _set_output(groups, True)
@documented_contextmanager
def hide(*groups):
"""
Context manager for setting the given output ``groups`` to False.
``groups`` must be one or more strings naming the output groups defined in
`~fabric.state.output`. The given groups will be set to False for the
duration of the enclosed block, and restored to their previous value
afterwards.
For example, to hide the "[hostname] run:" status lines, as well as
preventing printout of stdout and stderr, one might use `hide` as follows::
def my_task():
with hide('running', 'stdout', 'stderr'):
run('ls /var/www')
"""
return _set_output(groups, False)
@documented_contextmanager
def _setenv(variables):
"""
Context manager temporarily overriding ``env`` with given key/value pairs.
A callable that returns a dict can also be passed. This is necessary when
new values are being calculated from current values, in order to ensure that
the "current" value is current at the time that the context is entered, not
when the context manager is initialized. (See Issue #736.)
This context manager is used internally by `settings` and is not intended
to be used directly.
"""
if callable(variables):
variables = variables()
clean_revert = variables.pop('clean_revert', False)
previous = {}
new = []
for key, value in variables.iteritems():
if key in state.env:
previous[key] = state.env[key]
else:
new.append(key)
state.env[key] = value
try:
yield
finally:
if clean_revert:
for key, value in variables.iteritems():
# If the current env value for this key still matches the
# value we set it to beforehand, we are OK to revert it to the
# pre-block value.
if key in state.env and value == state.env[key]:
if key in previous:
state.env[key] = previous[key]
else:
del state.env[key]
else:
state.env.update(previous)
for key in new:
del state.env[key]
def settings(*args, **kwargs):
"""
Nest context managers and/or override ``env`` variables.
`settings` serves two purposes:
* Most usefully, it allows temporary overriding/updating of ``env`` with
any provided keyword arguments, e.g. ``with settings(user='foo'):``.
Original values, if any, will be restored once the ``with`` block closes.
* The keyword argument ``clean_revert`` has special meaning for
``settings`` itself (see below) and will be stripped out before
execution.
* In addition, it will use `contextlib.nested`_ to nest any given
non-keyword arguments, which should be other context managers, e.g.
``with settings(hide('stderr'), show('stdout')):``.
.. _contextlib.nested: http://docs.python.org/library/contextlib.html#contextlib.nested
These behaviors may be specified at the same time if desired. An example
will hopefully illustrate why this is considered useful::
def my_task():
with settings(
hide('warnings', 'running', 'stdout', 'stderr'),
warn_only=True
):
if run('ls /etc/lsb-release'):
return 'Ubuntu'
elif run('ls /etc/redhat-release'):
return 'RedHat'
The above task executes a `run` statement, but will warn instead of
aborting if the ``ls`` fails, and all output -- including the warning
itself -- is prevented from printing to the user. The end result, in this
scenario, is a completely silent task that allows the caller to figure out
what type of system the remote host is, without incurring the handful of
output that would normally occur.
Thus, `settings` may be used to set any combination of environment
variables in tandem with hiding (or showing) specific levels of output, or
in tandem with any other piece of Fabric functionality implemented as a
context manager.
If ``clean_revert`` is set to ``True``, ``settings`` will **not** revert
keys which are altered within the nested block, instead only reverting keys
whose values remain the same as those given. More examples will make this
clear; below is how ``settings`` operates normally::
# Before the block, env.parallel defaults to False, host_string to None
with settings(parallel=True, host_string='myhost'):
# env.parallel is True
# env.host_string is 'myhost'
env.host_string = 'otherhost'
# env.host_string is now 'otherhost'
# Outside the block:
# * env.parallel is False again
# * env.host_string is None again
The internal modification of ``env.host_string`` is nullified -- not always
desirable. That's where ``clean_revert`` comes in::
# Before the block, env.parallel defaults to False, host_string to None
with settings(parallel=True, host_string='myhost', clean_revert=True):
# env.parallel is True
# env.host_string is 'myhost'
env.host_string = 'otherhost'
# env.host_string is now 'otherhost'
# Outside the block:
# * env.parallel is False again
# * env.host_string remains 'otherhost'
Brand new keys which did not exist in ``env`` prior to using ``settings``
are also preserved if ``clean_revert`` is active. When ``False``, such keys
are removed when the block exits.
.. versionadded:: 1.4.1
The ``clean_revert`` kwarg.
"""
managers = list(args)
if kwargs:
managers.append(_setenv(kwargs))
return nested(*managers)
def cd(path):
"""
Context manager that keeps directory state when calling remote operations.
Any calls to `run`, `sudo`, `get`, or `put` within the wrapped block will
implicitly have a string similar to ``"cd && "`` prefixed in order
to give the sense that there is actually statefulness involved.
.. note::
`cd` only affects *remote* paths -- to modify *local* paths, use
`~fabric.context_managers.lcd`.
Because use of `cd` affects all such invocations, any code making use of
those operations, such as much of the ``contrib`` section, will also be
affected by use of `cd`.
Like the actual 'cd' shell builtin, `cd` may be called with relative paths
(keep in mind that your default starting directory is your remote user's
``$HOME``) and may be nested as well.
Below is a "normal" attempt at using the shell 'cd', which doesn't work due
to how shell-less SSH connections are implemented -- state is **not** kept
between invocations of `run` or `sudo`::
run('cd /var/www')
run('ls')
The above snippet will list the contents of the remote user's ``$HOME``
instead of ``/var/www``. With `cd`, however, it will work as expected::
with cd('/var/www'):
run('ls') # Turns into "cd /var/www && ls"
Finally, a demonstration (see inline comments) of nesting::
with cd('/var/www'):
run('ls') # cd /var/www && ls
with cd('website1'):
run('ls') # cd /var/www/website1 && ls
.. note::
This context manager is currently implemented by appending to (and, as
always, restoring afterwards) the current value of an environment
variable, ``env.cwd``. However, this implementation may change in the
future, so we do not recommend manually altering ``env.cwd`` -- only
the *behavior* of `cd` will have any guarantee of backwards
compatibility.
.. note::
Space characters will be escaped automatically to make dealing with
such directory names easier.
.. versionchanged:: 1.0
Applies to `get` and `put` in addition to the command-running
operations.
.. seealso:: `~fabric.context_managers.lcd`
"""
return _change_cwd('cwd', path)
def lcd(path):
"""
Context manager for updating local current working directory.
This context manager is identical to `~fabric.context_managers.cd`, except
that it changes a different env var (`lcwd`, instead of `cwd`) and thus
only affects the invocation of `~fabric.operations.local` and the local
arguments to `~fabric.operations.get`/`~fabric.operations.put`.
Relative path arguments are relative to the local user's current working
directory, which will vary depending on where Fabric (or Fabric-using code)
was invoked. You can check what this is with `os.getcwd
`_. It may be
useful to pin things relative to the location of the fabfile in use, which
may be found in ``env.real_fabfile``.
.. versionadded:: 1.0
"""
return _change_cwd('lcwd', path)
def _change_cwd(which, path):
path = path.replace(' ', '\ ')
if state.env.get(which) and not path.startswith('/') and not path.startswith('~'):
new_cwd = state.env.get(which) + '/' + path
else:
new_cwd = path
return _setenv({which: new_cwd})
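# Sketch of how nested cd()/lcd() calls compose via this helper (paths are
# illustrative): relative paths are joined onto the existing value, while
# absolute and ~-prefixed paths replace it:
#
#   with cd('/var/www'):        # env.cwd == '/var/www'
#       with cd('website1'):    # env.cwd == '/var/www/website1'
#           with cd('/tmp'):    # env.cwd == '/tmp'
#               ...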
def path(path, behavior='append'):
"""
Append the given ``path`` to the PATH used to execute any wrapped commands.
Any calls to `run` or `sudo` within the wrapped block will implicitly have
a string similar to ``"PATH=$PATH:<path> "`` prepended before the given
command.
You may customize the behavior of `path` by specifying the optional
``behavior`` keyword argument, as follows:
* ``'append'``: append given path to the current ``$PATH``, e.g.
  ``PATH=$PATH:<path>``. This is the default behavior.
* ``'prepend'``: prepend given path to the current ``$PATH``, e.g.
  ``PATH=<path>:$PATH``.
* ``'replace'``: ignore previous value of ``$PATH`` altogether, e.g.
  ``PATH=<path>``.
.. note::
This context manager is currently implemented by modifying (and, as
always, restoring afterwards) the current value of environment
variables, ``env.path`` and ``env.path_behavior``. However, this
implementation may change in the future, so we do not recommend
manually altering them directly.
.. versionadded:: 1.0
"""
return _setenv({'path': path, 'path_behavior': behavior})
def prefix(command):
"""
Prefix all wrapped `run`/`sudo` commands with given command plus ``&&``.
This is nearly identical to `~fabric.operations.cd`, except that nested
invocations append to a list of command strings instead of modifying a
single string.
Most of the time, you'll want to be using this alongside a shell script
which alters shell state, such as ones which export or alter shell
environment variables.
For example, one of the most common uses of this tool is with the
``workon`` command from ``virtualenvwrapper``::
with prefix('workon myvenv'):
run('./manage.py syncdb')
In the above snippet, the actual shell command run would be this::
$ workon myvenv && ./manage.py syncdb
This context manager is compatible with `~fabric.context_managers.cd`, so
if your virtualenv doesn't ``cd`` in its ``postactivate`` script, you could
do the following::
with cd('/path/to/app'):
with prefix('workon myvenv'):
run('./manage.py syncdb')
run('./manage.py loaddata myfixture')
Which would result in executions like so::
$ cd /path/to/app && workon myvenv && ./manage.py syncdb
$ cd /path/to/app && workon myvenv && ./manage.py loaddata myfixture
Finally, as alluded to near the beginning,
`~fabric.context_managers.prefix` may be nested if desired, e.g.::
with prefix('workon myenv'):
run('ls')
with prefix('source /some/script'):
run('touch a_file')
The result::
$ workon myenv && ls
$ workon myenv && source /some/script && touch a_file
Contrived, but hopefully illustrative.
"""
return _setenv(lambda: {'command_prefixes': state.env.command_prefixes + [command]})
@documented_contextmanager
def char_buffered(pipe):
    """
    Force local terminal ``pipe`` to be character, not line, buffered.

    Only applies on Unix-based systems; on Windows this is a no-op.
    """
    if win32 or not isatty(pipe):
        yield
    else:
        old_settings = termios.tcgetattr(pipe)
        tty.setcbreak(pipe)
        try:
            yield
        finally:
            termios.tcsetattr(pipe, termios.TCSADRAIN, old_settings)
def shell_env(**kw):
"""
Set shell environment variables for wrapped commands.
For example, the below shows how you might set a ZeroMQ related environment
variable when installing a Python ZMQ library::
with shell_env(ZMQ_DIR='/home/user/local'):
run('pip install pyzmq')
As with `~fabric.context_managers.prefix`, this effectively turns the
``run`` command into::
$ export ZMQ_DIR='/home/user/local' && pip install pyzmq
Multiple key-value pairs may be given simultaneously.
.. note::
If used to affect the behavior of `~fabric.operations.local` when
running from a Windows localhost, ``SET`` commands will be used to
implement this feature.
"""
return _setenv({'shell_env': kw})
def _forwarder(chan, sock):
    # Bidirectionally forward data between a socket and a Paramiko channel.
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(1024)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(1024)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()
@documented_contextmanager
def remote_tunnel(remote_port, local_port=None, local_host="localhost",
remote_bind_address="127.0.0.1"):
"""
Create a tunnel forwarding a locally-visible port to the remote target.
For example, you can let the remote host access a database that is
installed on the client host::
# Map localhost:6379 on the server to localhost:6379 on the client,
# so that the remote 'redis-cli' program ends up speaking to the local
# redis-server.
with remote_tunnel(6379):
run("redis-cli -i")
The database might be installed on a client only reachable from the client
host (as opposed to *on* the client itself)::
# Map localhost:6379 on the server to redis.internal:6379 on the client
with remote_tunnel(6379, local_host="redis.internal"):
run("redis-cli -i")
``remote_tunnel`` accepts up to four arguments:
* ``remote_port`` (mandatory) is the remote port to listen to.
* ``local_port`` (optional) is the local port to connect to; the default is
the same port as the remote one.
* ``local_host`` (optional) is the locally-reachable computer (DNS name or
IP address) to connect to; the default is ``localhost`` (that is, the
same computer Fabric is running on).
* ``remote_bind_address`` (optional) is the remote IP address to bind to
for listening, on the current target. It should be an IP address assigned
to an interface on the target (or a DNS name that resolves to such IP).
You can use "0.0.0.0" to bind to all interfaces.
.. note::
By default, most SSH servers only allow remote tunnels to listen to the
localhost interface (127.0.0.1). In these cases, `remote_bind_address`
is ignored by the server, and the tunnel will listen only to 127.0.0.1.
.. versionadded:: 1.6
"""
if local_port is None:
local_port = remote_port
sockets = []
channels = []
threads = []
def accept(channel, (src_addr, src_port), (dest_addr, dest_port)):
channels.append(channel)
sock = socket.socket()
sockets.append(sock)
try:
sock.connect((local_host, local_port))
except Exception, e:
print "[%s] rtunnel: cannot connect to %s:%d (from local)" % (env.host_string, local_host, local_port)
channel.close()
return
print "[%s] rtunnel: opened reverse tunnel: %r -> %r -> %r"\
% (env.host_string, channel.origin_addr,
channel.getpeername(), (local_host, local_port))
th = ThreadHandler('fwd', _forwarder, channel, sock)
threads.append(th)
transport = connections[env.host_string].get_transport()
transport.request_port_forward(remote_bind_address, remote_port, handler=accept)
try:
yield
finally:
for sock, chan, th in zip(sockets, channels, threads):
sock.close()
chan.close()
th.thread.join()
th.raise_if_needed()
transport.cancel_port_forward(remote_bind_address, remote_port)
quiet = lambda: settings(hide('everything'), warn_only=True)
quiet.__doc__ = """
Alias to ``settings(hide('everything'), warn_only=True)``.
Useful for wrapping remote interrogative commands which you expect to fail
occasionally, and/or which you want to silence.
Example::
with quiet():
have_build_dir = run("test -e /tmp/build").succeeded
When used in a task, the above snippet will not produce any ``run: test -e
/tmp/build`` line, nor will any stdout/stderr display, and command failure
is ignored.
.. seealso::
``env.warn_only``,
`~fabric.context_managers.settings`,
`~fabric.context_managers.hide`
.. versionadded:: 1.5
"""
warn_only = lambda: settings(warn_only=True)
warn_only.__doc__ = """
Alias to ``settings(warn_only=True)``.
.. seealso::
``env.warn_only``,
`~fabric.context_managers.settings`,
`~fabric.context_managers.quiet`
"""
fabric-1.10.2/fabric/contrib/__init__.py
fabric-1.10.2/fabric/contrib/console.py
"""
Console/terminal user interface functionality.
"""
from fabric.api import prompt
def confirm(question, default=True):
    """
    Ask user a yes/no question and return their response as True or False.

    ``question`` should be a simple, grammatically complete question such as
    "Do you wish to continue?", and will have a string similar to " [Y/n] "
    appended automatically. This function will *not* append a question mark
    for you.

    By default, when the user presses Enter without typing anything, "yes" is
    assumed. This can be changed by specifying ``default=False``.
    """
    # Set up suffix
    if default:
        suffix = "Y/n"
    else:
        suffix = "y/N"
    # Loop till we get something we like
    while True:
        response = prompt("%s [%s] " % (question, suffix)).lower()
        # Default
        if not response:
            return default
        # Yes
        if response in ['y', 'yes']:
            return True
        # No
        if response in ['n', 'no']:
            return False
        # Didn't get empty, yes or no, so complain and loop
        print("I didn't understand you. Please specify '(y)es' or '(n)o'.")
fabric-1.10.2/fabric/contrib/django.py
"""
.. versionadded:: 0.9.2
These functions streamline the process of initializing Django's settings module
environment variable. Once this is done, your fabfile may import from your
Django project, or Django itself, without requiring the use of ``manage.py``
plugins or having to set the environment variable yourself every time you use
your fabfile.
Currently, these functions only allow Fabric to interact with
local-to-your-fabfile Django installations. This is not as limiting as it
sounds; for example, you can use Fabric as a remote "build" tool as well as
using it locally. Imagine the following fabfile::
from fabric.api import run, local, hosts, cd
from fabric.contrib import django
django.project('myproject')
from myproject.myapp.models import MyModel
def print_instances():
for instance in MyModel.objects.all():
print(instance)
@hosts('production-server')
def print_production_instances():
with cd('/path/to/myproject'):
run('fab print_instances')
With Fabric installed on both ends, you could execute
``print_production_instances`` locally, which would trigger ``print_instances``
on the production server -- which would then be interacting with your
production Django database.
As another example, if your local and remote settings are similar, you can use
it to obtain e.g. your database settings, and then use those when executing a
remote (non-Fabric) command. This would allow you some degree of freedom even
if Fabric is only installed locally::
from fabric.api import run
from fabric.contrib import django
django.settings_module('myproject.settings')
from django.conf import settings
def dump_production_database():
run('mysqldump -u %s -p=%s %s > /tmp/prod-db.sql' % (
settings.DATABASE_USER,
settings.DATABASE_PASSWORD,
settings.DATABASE_NAME
))
The above snippet will work if run from a local, development environment, again
provided your local ``settings.py`` mirrors your remote one in terms of
database connection info.
"""
import os
def settings_module(module):
"""
Set ``DJANGO_SETTINGS_MODULE`` shell environment variable to ``module``.
Due to how Django works, imports from Django or a Django project will fail
unless the shell environment variable ``DJANGO_SETTINGS_MODULE`` is
correctly set (see `the Django settings docs
<http://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings>`_.)
This function provides a shortcut for doing so; call it near the top of
your fabfile or Fabric-using code, after which point any Django imports
should work correctly.
.. note::
This function sets a **shell** environment variable (via
``os.environ``) and is unrelated to Fabric's own internal "env"
variables.
"""
os.environ['DJANGO_SETTINGS_MODULE'] = module
def project(name):
"""
Sets ``DJANGO_SETTINGS_MODULE`` to ``'<name>.settings'``.
This function provides a handy shortcut for the common case where one is
using the Django default naming convention for their settings file and
location.
Uses `settings_module` -- see its documentation for details on why and how
to use this functionality.
"""
settings_module('%s.settings' % name)
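# Equivalence sketch (project name hypothetical): for a layout like
# myproject/settings.py, these two calls do the same thing:
#
#   project('myproject')
#   settings_module('myproject.settings')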
fabric-1.10.2/fabric/contrib/files.py
"""
Module providing easy API for working with remote files and folders.
"""
from __future__ import with_statement
import hashlib
import tempfile
import re
import os
from StringIO import StringIO
from functools import partial
from fabric.api import *
from fabric.utils import apply_lcwd
def exists(path, use_sudo=False, verbose=False):
"""
Return True if given path exists on the current remote host.
If ``use_sudo`` is True, will use `sudo` instead of `run`.
`exists` will, by default, hide all output (including the run line, stdout,
stderr and any warning resulting from the file not existing) in order to
avoid cluttering output. You may specify ``verbose=True`` to change this
behavior.
"""
func = use_sudo and sudo or run
cmd = 'test -e %s' % _expand_path(path)
# If verbose, run normally
if verbose:
with settings(warn_only=True):
return not func(cmd).failed
# Otherwise, be quiet
with settings(hide('everything'), warn_only=True):
return not func(cmd).failed
def is_link(path, use_sudo=False, verbose=False):
"""
Return True if the given path is a symlink on the current remote host.
If ``use_sudo`` is True, will use `.sudo` instead of `.run`.
`.is_link` will, by default, hide all output. Give ``verbose=True`` to
change this.
"""
func = sudo if use_sudo else run
cmd = 'test -L "$(echo %s)"' % path
args, kwargs = [], {'warn_only': True}
if not verbose:
args = [hide('everything')]
with settings(*args, **kwargs):
return func(cmd).succeeded
def first(*args, **kwargs):
"""
Given one or more file paths, returns first one found, or None if none
exist. May specify ``use_sudo`` and ``verbose`` which are passed to
`exists`.
"""
for directory in args:
if exists(directory, **kwargs):
return directory
def upload_template(filename, destination, context=None, use_jinja=False,
template_dir=None, use_sudo=False, backup=True, mirror_local_mode=False,
mode=None, pty=None):
"""
Render and upload a template text file to a remote host.
Returns the result of the inner call to `~fabric.operations.put` -- see its
documentation for details.
``filename`` should be the path to a text file, which may contain `Python
string interpolation formatting
`_ and will
be rendered with the given context dictionary ``context`` (if given.)
Alternately, if ``use_jinja`` is set to True and you have the Jinja2
templating library available, Jinja will be used to render the template
instead. Templates will be loaded from the invoking user's current working
directory by default, or from ``template_dir`` if given.
The resulting rendered file will be uploaded to the remote file path
``destination``. If the destination file already exists, it will be
renamed with a ``.bak`` extension unless ``backup=False`` is specified.
By default, the file will be copied to ``destination`` as the logged-in
user; specify ``use_sudo=True`` to use `sudo` instead.
The ``mirror_local_mode`` and ``mode`` kwargs are passed directly to an
internal `~fabric.operations.put` call; please see its documentation for
details on these two options.
The ``pty`` kwarg will be passed verbatim to any internal
`~fabric.operations.run`/`~fabric.operations.sudo` calls, such as those
used for testing directory-ness, making backups, etc.
.. versionchanged:: 1.1
Added the ``backup``, ``mirror_local_mode`` and ``mode`` kwargs.
.. versionchanged:: 1.9
Added the ``pty`` kwarg.
"""
func = use_sudo and sudo or run
if pty is not None:
func = partial(func, pty=pty)
# Normalize destination to be an actual filename, due to using StringIO
with settings(hide('everything'), warn_only=True):
if func('test -d %s' % _expand_path(destination)).succeeded:
sep = "" if destination.endswith('/') else "/"
destination += sep + os.path.basename(filename)
# Use mode kwarg to implement mirror_local_mode, again due to using
# StringIO
if mirror_local_mode and mode is None:
mode = os.stat(apply_lcwd(filename, env)).st_mode
# To prevent put() from trying to do this
# logic itself
mirror_local_mode = False
# Process template
text = None
if use_jinja:
try:
template_dir = template_dir or os.getcwd()
template_dir = apply_lcwd(template_dir, env)
from jinja2 import Environment, FileSystemLoader
jenv = Environment(loader=FileSystemLoader(template_dir))
text = jenv.get_template(filename).render(**context or {})
# Force to a byte representation of Unicode, or str()ification
# within Paramiko's SFTP machinery may cause decode issues for
# truly non-ASCII characters.
text = text.encode('utf-8')
except ImportError:
import traceback
tb = traceback.format_exc()
abort(tb + "\nUnable to import Jinja2 -- see above.")
else:
if template_dir:
filename = os.path.join(template_dir, filename)
filename = apply_lcwd(filename, env)
with open(os.path.expanduser(filename)) as inputfile:
text = inputfile.read()
if context:
text = text % context
# Back up original file
if backup and exists(destination):
func("cp %s{,.bak}" % _expand_path(destination))
# Upload the file.
return put(
local_path=StringIO(text),
remote_path=destination,
use_sudo=use_sudo,
mirror_local_mode=mirror_local_mode,
mode=mode
)
def sed(filename, before, after, limit='', use_sudo=False, backup='.bak',
flags='', shell=False):
"""
Run a search-and-replace on ``filename`` with given regex patterns.
Equivalent to ``sed -i<backup> -r -e "/<limit>/ s/<before>/<after>/<flags>g"
<filename>``. Setting ``backup`` to an empty string will disable backup
file creation.
For convenience, ``before`` and ``after`` will automatically escape forward
slashes, single quotes and parentheses for you, so you don't need to
specify e.g. ``http:\/\/foo\.com``, instead just using ``http://foo\.com``
is fine.
If ``use_sudo`` is True, will use `sudo` instead of `run`.
The ``shell`` argument will be eventually passed to `run`/`sudo`. It
defaults to False in order to avoid problems with many nested levels of
quotes and backslashes. However, setting it to True may help when using
``~fabric.operations.cd`` to wrap explicit or implicit ``sudo`` calls.
(``cd`` by its nature is a shell built-in, not a standalone command, so it
should be called within a shell.)
Other options may be specified with sed-compatible regex flags -- for
example, to make the search and replace case insensitive, specify
``flags="i"``. The ``g`` flag is always specified regardless, so you do not
need to remember to include it when overriding this parameter.
.. versionadded:: 1.1
The ``flags`` parameter.
.. versionadded:: 1.6
Added the ``shell`` keyword argument.
"""
func = use_sudo and sudo or run
# Characters to be escaped in both
for char in "/'":
before = before.replace(char, r'\%s' % char)
after = after.replace(char, r'\%s' % char)
# Characters to be escaped in replacement only (they're useful in regexen
# in the 'before' part)
for char in "()":
after = after.replace(char, r'\%s' % char)
if limit:
limit = r'/%s/ ' % limit
context = {
'script': r"'%ss/%s/%s/%sg'" % (limit, before, after, flags),
'filename': _expand_path(filename),
'backup': backup
}
# Test the OS because of differences between sed versions
with hide('running', 'stdout'):
platform = run("uname")
if platform in ('NetBSD', 'OpenBSD', 'QNX'):
# Attempt to protect against failures/collisions
hasher = hashlib.sha1()
hasher.update(env.host_string)
hasher.update(filename)
context['tmp'] = "/tmp/%s" % hasher.hexdigest()
# Use temp file to work around lack of -i
expr = r"""cp -p %(filename)s %(tmp)s \
&& sed -r -e %(script)s %(filename)s > %(tmp)s \
&& cp -p %(filename)s %(filename)s%(backup)s \
&& mv %(tmp)s %(filename)s"""
else:
context['extended_regex'] = '-E' if platform == 'Darwin' else '-r'
expr = r"sed -i%(backup)s %(extended_regex)s -e %(script)s %(filename)s"
command = expr % context
return func(command, shell=shell)
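# Example (filename and patterns hypothetical): on a GNU sed platform,
#   sed('/etc/ssh/sshd_config', 'PermitRootLogin yes', 'PermitRootLogin no')
# runs roughly:
#   sed -i.bak -r -e 's/PermitRootLogin yes/PermitRootLogin no/g' \
#       "$(echo /etc/ssh/sshd_config)"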
def uncomment(filename, regex, use_sudo=False, char='#', backup='.bak',
shell=False):
"""
Attempt to uncomment all lines in ``filename`` matching ``regex``.
The default comment delimiter is `#` and may be overridden by the ``char``
argument.
This function uses the `sed` function, and will accept the same
``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does.
`uncomment` will remove a single whitespace character following the comment
character, if it exists, but will preserve all preceding whitespace. For
example, ``# foo`` would become ``foo`` (the single space is stripped) but
``    # foo`` would become ``    foo`` (the single space is still stripped,
but the preceding 4 spaces are not.)
.. versionchanged:: 1.6
Added the ``shell`` keyword argument.
"""
return sed(
filename,
before=r'^([[:space:]]*)%s[[:space:]]?' % char,
after=r'\1',
limit=regex,
use_sudo=use_sudo,
backup=backup,
shell=shell
)
def comment(filename, regex, use_sudo=False, char='#', backup='.bak',
shell=False):
"""
Attempt to comment out all lines in ``filename`` matching ``regex``.
The default commenting character is `#` and may be overridden by the
``char`` argument.
This function uses the `sed` function, and will accept the same
``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does.
`comment` will prepend the comment character to the beginning of the line,
so that lines end up looking like so::
this line is uncommented
#this line is commented
# this line is indented and commented
In other words, comment characters will not "follow" indentation as they
sometimes do when inserted by hand. Neither will they have a trailing space
unless you specify e.g. ``char='# '``.
.. note::
In order to preserve the line being commented out, this function will
wrap your ``regex`` argument in parentheses, so you don't need to. It
will ensure that any preceding/trailing ``^`` or ``$`` characters are
correctly moved outside the parentheses. For example, calling
``comment(filename, r'^foo$')`` will result in a `sed` call with the
"before" regex of ``r'^(foo)$'`` (and the "after" regex, naturally, of
``r'#\\1'``.)
.. versionadded:: 1.5
Added the ``shell`` keyword argument.
"""
carot, dollar = '', ''
if regex.startswith('^'):
carot = '^'
regex = regex[1:]
if regex.endswith('$'):
dollar = '$'
regex = regex[:-1]
regex = "%s(%s)%s" % (carot, regex, dollar)
return sed(
filename,
before=regex,
after=r'%s\1' % char,
use_sudo=use_sudo,
backup=backup,
shell=shell
)
def contains(filename, text, exact=False, use_sudo=False, escape=True,
shell=False):
"""
Return True if ``filename`` contains ``text`` (which may be a regex.)
By default, this function will consider a partial line match (i.e. where
``text`` only makes up part of the line it's on). Specify ``exact=True`` to
change this behavior so that only a line containing exactly ``text``
results in a True return value.
This function leverages ``egrep`` on the remote end (so it may not follow
Python regular expression syntax perfectly), and skips ``env.shell``
wrapper by default.
If ``use_sudo`` is True, will use `sudo` instead of `run`.
If ``escape`` is False, no extra regular expression related escaping is
performed (this includes overriding ``exact`` so that no ``^``/``$`` is
added.)
The ``shell`` argument will eventually be passed to ``run``/``sudo``. See the
description of the same argument in ``~fabric.contrib.sed`` for details.
.. versionchanged:: 1.0
Swapped the order of the ``filename`` and ``text`` arguments to be
consistent with other functions in this module.
.. versionchanged:: 1.4
Updated the regular expression related escaping to try and solve
various corner cases.
.. versionchanged:: 1.4
Added ``escape`` keyword argument.
.. versionadded:: 1.6
Added the ``shell`` keyword argument.
"""
func = use_sudo and sudo or run
if escape:
text = _escape_for_regex(text)
if exact:
text = "^%s$" % text
with settings(hide('everything'), warn_only=True):
egrep_cmd = 'egrep "%s" %s' % (text, _expand_path(filename))
return func(egrep_cmd, shell=shell).succeeded
def append(filename, text, use_sudo=False, partial=False, escape=True,
shell=False):
"""
Append string (or list of strings) ``text`` to ``filename``.
When a list is given, each string inside is handled independently (but in
the order given.)
If ``text`` is already found in ``filename``, the append is not run, and
None is returned immediately. Otherwise, the given text is appended to the
end of the given ``filename`` via e.g. ``echo '$text' >> $filename``.
The test for whether ``text`` already exists defaults to a full line match,
e.g. ``^<text>$``, as this seems to be the most sensible approach for the
"append lines to a file" use case. You may override this and force partial
searching (e.g. ``^<text>``) by specifying ``partial=True``.
Because ``text`` is single-quoted, single quotes will be transparently
backslash-escaped. This can be disabled with ``escape=False``.
If ``use_sudo`` is True, will use `sudo` instead of `run`.
The ``shell`` argument will eventually be passed to ``run``/``sudo``. See the
description of the same argument in ``~fabric.contrib.sed`` for details.
.. versionchanged:: 0.9.1
Added the ``partial`` keyword argument.
.. versionchanged:: 1.0
Swapped the order of the ``filename`` and ``text`` arguments to be
consistent with other functions in this module.
.. versionchanged:: 1.0
Changed default value of ``partial`` kwarg to be ``False``.
.. versionchanged:: 1.4
Updated the regular expression related escaping to try and solve
various corner cases.
.. versionadded:: 1.6
Added the ``shell`` keyword argument.
"""
func = use_sudo and sudo or run
# Normalize non-list input to be a list
if isinstance(text, basestring):
text = [text]
for line in text:
regex = '^' + _escape_for_regex(line) + ('' if partial else '$')
if (exists(filename, use_sudo=use_sudo) and line
and contains(filename, regex, use_sudo=use_sudo, escape=False,
shell=shell)):
continue
line = line.replace("'", r"'\\''") if escape else line
func("echo '%s' >> %s" % (line, _expand_path(filename)))
def _escape_for_regex(text):
    """Escape ``text`` to allow literal matching using egrep"""
    regex = re.escape(text)
    # Seems like double escaping is needed for \
    regex = regex.replace('\\\\', '\\\\\\')
    # Triple-escaping seems to be required for $ signs
    regex = regex.replace(r'\$', r'\\\$')
    # Whereas single quotes should not be escaped
    regex = regex.replace(r"\'", "'")
    return regex


def _expand_path(path):
    return '"$(echo %s)"' % path
fabric-1.10.2/fabric/contrib/project.py
"""
Useful non-core functionality, e.g. functions composing multiple operations.
"""
from __future__ import with_statement
from os import getcwd, sep
import os.path
from datetime import datetime
from tempfile import mkdtemp
from fabric.network import needs_host, key_filenames, normalize
from fabric.operations import local, run, sudo, put
from fabric.state import env, output
from fabric.context_managers import cd
__all__ = ['rsync_project', 'upload_project']
@needs_host
def rsync_project(
remote_dir,
local_dir=None,
exclude=(),
delete=False,
extra_opts='',
ssh_opts='',
capture=False,
upload=True,
default_opts='-pthrvz'
):
"""
Synchronize a remote directory with the current project directory via rsync.
Where ``upload_project()`` makes use of ``scp`` to copy one's entire
project every time it is invoked, ``rsync_project()`` uses the ``rsync``
command-line utility, which only transfers files newer than those on the
remote end.
``rsync_project()`` is thus a simple wrapper around ``rsync``; for
details on how ``rsync`` works, please see its manpage. ``rsync`` must be
installed on both your local and remote systems in order for this operation
to work correctly.
This function makes use of Fabric's ``local()`` operation, and returns the
output of that function call; thus it will return the stdout, if any, of
the resultant ``rsync`` call.
``rsync_project()`` takes the following parameters:
* ``remote_dir``: the only required parameter, this is the path to the
directory on the remote server. Due to how ``rsync`` is implemented, the
exact behavior depends on the value of ``local_dir``:
* If ``local_dir`` ends with a trailing slash, the files will be
dropped inside of ``remote_dir``. E.g.
``rsync_project("/home/username/project/", "foldername/")`` will drop
the contents of ``foldername`` inside of ``/home/username/project``.
* If ``local_dir`` does **not** end with a trailing slash (and this
includes the default scenario, when ``local_dir`` is not specified),
``remote_dir`` is effectively the "parent" directory, and a new
directory named after ``local_dir`` will be created inside of it. So
``rsync_project("/home/username", "foldername")`` would create a new
directory ``/home/username/foldername`` (if needed) and place the
files there.
* ``local_dir``: by default, ``rsync_project`` uses your current working
directory as the source directory. This may be overridden by specifying
``local_dir``, which is a string passed verbatim to ``rsync``, and thus
may be a single directory (``"my_directory"``) or multiple directories
(``"dir1 dir2"``). See the ``rsync`` documentation for details.
* ``exclude``: optional, may be a single string, or an iterable of strings,
and is used to pass one or more ``--exclude`` options to ``rsync``.
* ``delete``: a boolean controlling whether ``rsync``'s ``--delete`` option
is used. If True, instructs ``rsync`` to remove remote files that no
longer exist locally. Defaults to False.
* ``extra_opts``: an optional, arbitrary string which you may use to pass
custom arguments or options to ``rsync``.
* ``ssh_opts``: Like ``extra_opts`` but specifically for the SSH options
string (rsync's ``--rsh`` flag.)
* ``capture``: Sent directly into an inner `~fabric.operations.local` call.
* ``upload``: a boolean controlling whether file synchronization is
performed up or downstream. Upstream by default.
* ``default_opts``: the default rsync options ``-pthrvz``, override if
desired (e.g. to remove verbosity, etc).
Furthermore, this function transparently honors Fabric's port and SSH key
settings. Calling this function when the current host string contains a
nonstandard port, or when ``env.key_filename`` is non-empty, will use the
specified port and/or SSH key filename(s).
For reference, the approximate ``rsync`` command-line call that is
constructed by this function is the following::
rsync [--delete] [--exclude exclude[0][, --exclude[1][, ...]]] \\
    [default_opts] [extra_opts] <local_dir> <host_string>:<remote_dir>
.. versionadded:: 1.4.0
The ``ssh_opts`` keyword argument.
.. versionadded:: 1.4.1
The ``capture`` keyword argument.
.. versionadded:: 1.8.0
The ``default_opts`` keyword argument.
"""
# Turn single-string exclude into a one-item list for consistency
if not hasattr(exclude, '__iter__'):
exclude = (exclude,)
# Create --exclude options from exclude list
exclude_opts = ' --exclude "%s"' * len(exclude)
# Double-backslash-escape
exclusions = tuple([str(s).replace('"', '\\\\"') for s in exclude])
# Honor SSH key(s)
key_string = ""
keys = key_filenames()
if keys:
key_string = "-i " + " -i ".join(keys)
# Port
user, host, port = normalize(env.host_string)
port_string = "-p %s" % port
# RSH
rsh_string = ""
rsh_parts = [key_string, port_string, ssh_opts]
if any(rsh_parts):
rsh_string = "--rsh='ssh %s'" % " ".join(rsh_parts)
# Set up options part of string
options_map = {
'delete': '--delete' if delete else '',
'exclude': exclude_opts % exclusions,
'rsh': rsh_string,
'default': default_opts,
'extra': extra_opts,
}
options = "%(delete)s%(exclude)s %(default)s %(extra)s %(rsh)s" % options_map
# Get local directory
if local_dir is None:
local_dir = '../' + getcwd().split(sep)[-1]
# Create and run final command string
if host.count(':') > 1:
# Square brackets are mandatory for IPv6 rsync address,
# even if port number is not specified
remote_prefix = "[%s@%s]" % (user, host)
else:
remote_prefix = "%s@%s" % (user, host)
if upload:
cmd = "rsync %s %s %s:%s" % (options, local_dir, remote_prefix, remote_dir)
else:
cmd = "rsync %s %s:%s %s" % (options, remote_prefix, remote_dir, local_dir)
if output.running:
print("[%s] rsync_project: %s" % (env.host_string, cmd))
return local(cmd, capture=capture)
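# Typical invocation (paths and exclusions are illustrative):
#
#   rsync_project(remote_dir="/srv/myapp", local_dir="./",
#                 exclude=(".git", "*.pyc"), delete=True)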
def upload_project(local_dir=None, remote_dir="", use_sudo=False):
"""
Upload the current project to a remote system via ``tar``/``gzip``.
``local_dir`` specifies the local project directory to upload, and defaults
to the current working directory.
``remote_dir`` specifies the target directory to upload into (meaning that
a copy of ``local_dir`` will appear as a subdirectory of ``remote_dir``)
and defaults to the remote user's home directory.
``use_sudo`` specifies which method should be used when executing commands
remotely. ``sudo`` will be used if use_sudo is True, otherwise ``run`` will
be used.
This function makes use of the ``tar`` and ``gzip`` programs/libraries,
thus it will not work too well on Win32 systems unless one is using Cygwin
or something similar. It will attempt to clean up the local and remote
tarfiles when it finishes executing, even in the event of a failure.
.. versionchanged:: 1.1
Added the ``local_dir`` and ``remote_dir`` kwargs.
.. versionchanged:: 1.7
Added the ``use_sudo`` kwarg.
"""
runner = use_sudo and sudo or run
local_dir = local_dir or os.getcwd()
# Remove final '/' in local_dir so that basename() works
local_dir = local_dir.rstrip(os.sep)
local_path, local_name = os.path.split(local_dir)
tar_file = "%s.tar.gz" % local_name
target_tar = os.path.join(remote_dir, tar_file)
tmp_folder = mkdtemp()
try:
tar_path = os.path.join(tmp_folder, tar_file)
local("tar -czf %s -C %s %s" % (tar_path, local_path, local_name))
put(tar_path, target_tar, use_sudo=use_sudo)
with cd(remote_dir):
try:
runner("tar -xzf %s" % tar_file)
finally:
runner("rm -f %s" % tar_file)
finally:
local("rm -rf %s" % tmp_folder)
fabric-1.10.2/fabric/decorators.py
"""
Convenience decorators for use in fabfiles.
"""
from __future__ import with_statement
import types
from functools import wraps
from Crypto import Random
from fabric import tasks
from .context_managers import settings
def task(*args, **kwargs):
    """
    Decorator declaring the wrapped function to be a new-style task.

    May be invoked as a simple, argument-less decorator (i.e. ``@task``) or
    with arguments customizing its behavior (e.g. ``@task(alias='myalias')``).

    Please see the new-style task documentation for details on how to use
    this decorator.

    .. versionchanged:: 1.2
        Added the ``alias``, ``aliases``, ``task_class`` and ``default``
        keyword arguments. See :ref:`task-decorator-arguments` for details.
    .. versionchanged:: 1.5
        Added the ``name`` keyword argument.

    .. seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.tasks.WrappedCallableTask`
    """
    invoked = bool(not args or kwargs)
    task_class = kwargs.pop("task_class", tasks.WrappedCallableTask)
    if not invoked:
        func, args = args[0], ()

    def wrapper(func):
        return task_class(func, *args, **kwargs)

    return wrapper if invoked else wrapper(func)
def _wrap_as_new(original, new):
    if isinstance(original, tasks.Task):
        return tasks.WrappedCallableTask(new)
    return new


def _list_annotating_decorator(attribute, *values):
    def attach_list(func):
        @wraps(func)
        def inner_decorator(*args, **kwargs):
            return func(*args, **kwargs)
        _values = values
        # Allow for single iterable argument as well as *args
        if len(_values) == 1 and not isinstance(_values[0], basestring):
            _values = _values[0]
        setattr(inner_decorator, attribute, list(_values))
        # Don't replace @task new-style task objects with inner_decorator by
        # itself -- wrap in a new Task object first.
        inner_decorator = _wrap_as_new(func, inner_decorator)
        return inner_decorator
    return attach_list
def hosts(*host_list):
"""
Decorator defining which host or hosts to execute the wrapped function on.
For example, the following will ensure that, barring an override on the
command line, ``my_func`` will be run on ``host1``, ``host2`` and
``host3``, and with specific users on ``host1`` and ``host3``::
@hosts('user1@host1', 'host2', 'user2@host3')
def my_func():
pass
`~fabric.decorators.hosts` may be invoked with either an argument list
(``@hosts('host1')``, ``@hosts('host1', 'host2')``) or a single, iterable
argument (``@hosts(['host1', 'host2'])``).
Note that this decorator actually just sets the function's ``.hosts``
attribute, which is then read prior to executing the function.
.. versionchanged:: 0.9.2
Allow a single, iterable argument (``@hosts(iterable)``) to be used
instead of requiring ``@hosts(*iterable)``.
"""
return _list_annotating_decorator('hosts', *host_list)
def roles(*role_list):
"""
Decorator defining a list of role names, used to look up host lists.
A role is simply defined as a key in `env` whose value is a list of one or
more host connection strings. For example, the following will ensure that,
barring an override on the command line, ``my_func`` will be executed
against the hosts listed in the ``webserver`` and ``dbserver`` roles::
env.roledefs.update({
'webserver': ['www1', 'www2'],
'dbserver': ['db1']
})
@roles('webserver', 'dbserver')
def my_func():
pass
As with `~fabric.decorators.hosts`, `~fabric.decorators.roles` may be
invoked with either an argument list or a single, iterable argument.
Similarly, this decorator uses the same mechanism as
`~fabric.decorators.hosts` and simply sets ``.roles``.
.. versionchanged:: 0.9.2
Allow a single, iterable argument to be used (same as
`~fabric.decorators.hosts`).
"""
return _list_annotating_decorator('roles', *role_list)
def runs_once(func):
"""
Decorator preventing wrapped function from running more than once.
By keeping internal state, this decorator allows you to mark a function
such that it will only run once per Python interpreter session, which in
typical use means "once per invocation of the ``fab`` program".
Any function wrapped with this decorator will silently fail to execute the
2nd, 3rd, ..., Nth time it is called, and will return the value of the
original run.
.. note:: ``runs_once`` does not work with parallel task execution.
"""
@wraps(func)
def decorated(*args, **kwargs):
if not hasattr(decorated, 'return_value'):
decorated.return_value = func(*args, **kwargs)
return decorated.return_value
decorated = _wrap_as_new(func, decorated)
# Mark as serial (disables parallelism) and return
return serial(decorated)
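# Illustrative sketch (not part of the original module): runs_once caching
# behavior. The body below executes on the first call only; subsequent calls
# (e.g. for additional hosts) return the cached value without re-running.
#
#     @task
#     @runs_once
#     def migrate():
#         return run("python manage.py migrate")  # hypothetical command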
def serial(func):
"""
Forces the wrapped function to always run sequentially, never in parallel.
This decorator takes precedence over the global value of :ref:`env.parallel
<env-parallel>`. However, if a task is decorated with both
`~fabric.decorators.serial` *and* `~fabric.decorators.parallel`,
`~fabric.decorators.parallel` wins.
.. versionadded:: 1.3
"""
if not getattr(func, 'parallel', False):
func.serial = True
return _wrap_as_new(func, func)
def parallel(pool_size=None):
"""
Forces the wrapped function to run in parallel, instead of sequentially.
This decorator takes precedence over the global value of :ref:`env.parallel
<env-parallel>`. It also takes precedence over `~fabric.decorators.serial`
if a task is decorated with both.
.. versionadded:: 1.3
"""
called_without_args = type(pool_size) == types.FunctionType
def real_decorator(func):
@wraps(func)
def inner(*args, **kwargs):
# Required for ssh/PyCrypto to be happy in multiprocessing
# (as far as we can tell, this is needed even with the extra such
# calls in newer versions of paramiko.)
Random.atfork()
return func(*args, **kwargs)
inner.parallel = True
inner.serial = False
inner.pool_size = None if called_without_args else pool_size
return _wrap_as_new(func, inner)
# Allow non-factory-style decorator use (@decorator vs @decorator())
if called_without_args:
return real_decorator(pool_size)
return real_decorator
def with_settings(*arg_settings, **kw_settings):
"""
Decorator equivalent of ``fabric.context_managers.settings``.
Allows you to wrap an entire function as if it was called inside a block
with the ``settings`` context manager. This may be useful if you know you
want a given setting applied to an entire function body, or wish to
retrofit old code without indenting everything.
For example, to turn aborts into warnings for an entire task function::
@with_settings(warn_only=True)
def foo():
...
.. seealso:: `~fabric.context_managers.settings`
.. versionadded:: 1.1
"""
def outer(func):
@wraps(func)
def inner(*args, **kwargs):
with settings(*arg_settings, **kw_settings):
return func(*args, **kwargs)
return _wrap_as_new(func, inner)
return outer
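# Illustrative sketch (not part of the original module): the two equivalent
# spellings of warn-only behavior for a whole task body.
#
#     @with_settings(warn_only=True)
#     def restart():
#         run("service app restart")      # failures warn instead of abort
#
#     # ...behaves the same as:
#     def restart():
#         with settings(warn_only=True):
#             run("service app restart")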
fabric-1.10.2/fabric/docs.py 0000664 0000000 0000000 00000004730 12541056305 0015556 0 ustar 00root root 0000000 0000000 from fabric.tasks import WrappedCallableTask
def unwrap_tasks(module, hide_nontasks=False):
"""
Replace task objects on ``module`` with their wrapped functions instead.
Specifically, look for instances of `~fabric.tasks.WrappedCallableTask` and
replace them with their ``.wrapped`` attribute (the original decorated
function.)
This is intended for use with the Sphinx autodoc tool, to be run near the
bottom of a project's ``conf.py``. It ensures that the autodoc extension
will have full access to the "real" function, in terms of function
signature and so forth. Without use of ``unwrap_tasks``, autodoc is unable
to access the function signature (though it is able to see e.g.
``__doc__``.)
For example, at the bottom of your ``conf.py``::
from fabric.docs import unwrap_tasks
import my_package.my_fabfile
unwrap_tasks(my_package.my_fabfile)
You can go above and beyond, and explicitly **hide** all non-task
functions, by saying ``hide_nontasks=True``. This renames all objects
failing the "is it a task?" check so they appear to be private, which will
then cause autodoc to skip over them.
``hide_nontasks`` is thus useful when you have a fabfile mixing in
subroutines with real tasks and want to document *just* the real tasks.
If you run this within an actual Fabric-code-using session (instead of
within a Sphinx ``conf.py``), please seek immediate medical attention.
.. versionadded:: 1.5
.. seealso:: `~fabric.tasks.WrappedCallableTask`, `~fabric.decorators.task`
"""
set_tasks = []
for name, obj in vars(module).items():
if isinstance(obj, WrappedCallableTask):
setattr(module, obj.name, obj.wrapped)
# Handle situation where a task's real name shadows a builtin.
# If the builtin comes after the task in vars().items(), the object
# we just setattr'd above will get re-hidden :(
set_tasks.append(obj.name)
# In the same vein, "privately" named wrapped functions whose task
# name is public, needs to get renamed so autodoc picks it up.
obj.wrapped.func_name = obj.name
else:
if name in set_tasks:
continue
has_docstring = getattr(obj, '__doc__', False)
if hide_nontasks and has_docstring and not name.startswith('_'):
setattr(module, '_%s' % name, obj)
delattr(module, name)
fabric-1.10.2/fabric/exceptions.py 0000664 0000000 0000000 00000001675 12541056305 0017014 0 ustar 00root root 0000000 0000000 """
Custom Fabric exception classes.
Most are simply distinct Exception subclasses for purposes of message-passing
(though typically still in actual error situations.)
"""
class NetworkError(Exception):
# Must allow for calling with zero args/kwargs, since pickle is apparently
# stupid with exceptions and tries to call it as such when passed around in
# a multiprocessing.Queue.
def __init__(self, message=None, wrapped=None):
self.message = message
self.wrapped = wrapped
def __str__(self):
return self.message or ""
def __repr__(self):
return "%s(%s) => %r" % (
self.__class__.__name__, self.message, self.wrapped
)
class CommandTimeout(Exception):
def __init__(self, timeout):
self.timeout = timeout
message = 'Command failed to finish in %s seconds' % (timeout)
self.message = message
super(CommandTimeout, self).__init__(message)
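# Illustrative sketch (not part of the original module): catching
# CommandTimeout around a timed run() call in a hypothetical fabfile.
#
#     from fabric.api import run
#     from fabric.exceptions import CommandTimeout
#
#     try:
#         run("make test", timeout=300)
#     except CommandTimeout, e:
#         print("Gave up after %s seconds" % e.timeout)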
fabric-1.10.2/fabric/io.py 0000664 0000000 0000000 00000022715 12541056305 0015240 0 ustar 00root root 0000000 0000000 from __future__ import with_statement
import sys
import time
import re
import socket
from select import select
from fabric.state import env, output, win32
from fabric.auth import get_password, set_password
import fabric.network
from fabric.network import ssh, normalize
from fabric.utils import RingBuffer
from fabric.exceptions import CommandTimeout
if win32:
import msvcrt
def _endswith(char_list, substring):
tail = char_list[-1 * len(substring):]
substring = list(substring)
return tail == substring
def _has_newline(bytelist):
return '\r' in bytelist or '\n' in bytelist
def output_loop(*args, **kwargs):
OutputLooper(*args, **kwargs).loop()
class OutputLooper(object):
def __init__(self, chan, attr, stream, capture, timeout):
self.chan = chan
self.stream = stream
self.capture = capture
self.timeout = timeout
self.read_func = getattr(chan, attr)
self.prefix = "[%s] %s: " % (
env.host_string,
"out" if attr == 'recv' else "err"
)
self.printing = getattr(output, 'stdout' if (attr == 'recv') else 'stderr')
self.linewise = (env.linewise or env.parallel)
self.reprompt = False
self.read_size = 4096
self.write_buffer = RingBuffer([], maxlen=len(self.prefix))
def _flush(self, text):
self.stream.write(text)
# Actually only flush if not in linewise mode.
# When linewise is set (e.g. in parallel mode) flushing makes
# doubling-up of line prefixes, and other mixed output, more likely.
if not env.linewise:
self.stream.flush()
self.write_buffer.extend(text)
def loop(self):
"""
Loop, reading from <chan>.<attr>(), writing to <stream> and buffering to <capture>.
Will raise `~fabric.exceptions.CommandTimeout` if network timeouts
continue to be seen past the defined ``self.timeout`` threshold.
(Timeouts before then are considered part of normal short-timeout fast
network reading; see Fabric issue #733 for background.)
"""
# Internal capture-buffer-like buffer, used solely for state keeping.
# Unlike 'capture', nothing is ever purged from this.
_buffer = []
# Initialize loop variables
initial_prefix_printed = False
seen_cr = False
line = []
# Allow prefix to be turned off.
if not env.output_prefix:
self.prefix = ""
start = time.time()
while True:
# Handle actual read
try:
bytelist = self.read_func(self.read_size)
except socket.timeout:
elapsed = time.time() - start
if self.timeout is not None and elapsed > self.timeout:
raise CommandTimeout(timeout=self.timeout)
continue
# Empty byte == EOS
if bytelist == '':
# If linewise, ensure we flush any leftovers in the buffer.
if self.linewise and line:
self._flush(self.prefix)
self._flush("".join(line))
break
# A None capture variable implies that we're in open_shell()
if self.capture is None:
# Just print directly -- no prefixes, no capturing, nada
# And since we know we're using a pty in this mode, just go
# straight to stdout.
self._flush(bytelist)
# Otherwise, we're in run/sudo and need to handle capturing and
# prompts.
else:
# Print to user
if self.printing:
printable_bytes = bytelist
# Small state machine to eat \n after \r
if printable_bytes[-1] == "\r":
seen_cr = True
if printable_bytes[0] == "\n" and seen_cr:
printable_bytes = printable_bytes[1:]
seen_cr = False
while _has_newline(printable_bytes) and printable_bytes != "":
# at most 1 split !
cr = re.search("(\r\n|\r|\n)", printable_bytes)
if cr is None:
break
end_of_line = printable_bytes[:cr.start(0)]
printable_bytes = printable_bytes[cr.end(0):]
if not initial_prefix_printed:
self._flush(self.prefix)
if _has_newline(end_of_line):
end_of_line = ''
if self.linewise:
self._flush("".join(line) + end_of_line + "\n")
line = []
else:
self._flush(end_of_line + "\n")
initial_prefix_printed = False
if self.linewise:
line += [printable_bytes]
else:
if not initial_prefix_printed:
self._flush(self.prefix)
initial_prefix_printed = True
self._flush(printable_bytes)
# Now we have handled printing, handle interactivity
read_lines = re.split(r"(\r|\n|\r\n)", bytelist)
for fragment in read_lines:
# Store in capture buffer
self.capture += fragment
# Store in internal buffer
_buffer += fragment
# Handle prompts
expected, response = self._get_prompt_response()
if expected:
del self.capture[-1 * len(expected):]
self.chan.sendall(str(response) + '\n')
else:
prompt = _endswith(self.capture, env.sudo_prompt)
try_again = (_endswith(self.capture, env.again_prompt + '\n')
or _endswith(self.capture, env.again_prompt + '\r\n'))
if prompt:
self.prompt()
elif try_again:
self.try_again()
# Print trailing new line if the last thing we printed was our line
# prefix.
if self.prefix and "".join(self.write_buffer) == self.prefix:
self._flush('\n')
def prompt(self):
# Obtain cached password, if any
password = get_password(*normalize(env.host_string))
# Remove the prompt itself from the capture buffer. This is
# backwards compatible with Fabric 0.9.x behavior; the user
# will still see the prompt on their screen (no way to avoid
# this) but at least it won't clutter up the captured text.
del self.capture[-1 * len(env.sudo_prompt):]
# If the password we just tried was bad, prompt the user again.
if (not password) or self.reprompt:
# Print the prompt and/or the "try again" notice if
# output is being hidden. In other words, since we need
# the user's input, they need to see why we're
# prompting them.
if not self.printing:
self._flush(self.prefix)
if self.reprompt:
self._flush(env.again_prompt + '\n' + self.prefix)
self._flush(env.sudo_prompt)
# Prompt for, and store, password. Give empty prompt so the
# initial display "hides" just after the actually-displayed
# prompt from the remote end.
self.chan.input_enabled = False
password = fabric.network.prompt_for_password(
prompt=" ", no_colon=True, stream=self.stream
)
self.chan.input_enabled = True
# Update env.password, env.passwords if necessary
user, host, port = normalize(env.host_string)
set_password(user, host, port, password)
# Reset reprompt flag
self.reprompt = False
# Send current password down the pipe
self.chan.sendall(password + '\n')
def try_again(self):
# Remove the "try again" prompt text from the end of the capture buffer
# (mirrors the negative-index deletion used in prompt() above).
del self.capture[-1 * len(env.again_prompt):]
# Set state so we re-prompt the user at the next prompt.
self.reprompt = True
def _get_prompt_response(self):
"""
Iterate through the request prompts dict and return the response and
original request if we find a match
"""
for tup in env.prompts.iteritems():
if _endswith(self.capture, tup[0]):
return tup
return None, None
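# Illustrative sketch (not part of the original module): driving the prompt
# auto-response machinery above. Keys of env.prompts are matched against the
# tail of the capture buffer via _endswith(); matching text is stripped from
# the capture and the value is sent down the channel. The prompt text below
# is invented for the example.
#
#     from fabric.api import env
#     env.prompts = {'Do you want to continue [Y/n]? ': 'Y'}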
def input_loop(chan, using_pty):
while not chan.exit_status_ready():
if win32:
have_char = msvcrt.kbhit()
else:
r, w, x = select([sys.stdin], [], [], 0.0)
have_char = (r and r[0] == sys.stdin)
if have_char and chan.input_enabled:
# Send all local stdin to remote end's stdin
byte = msvcrt.getch() if win32 else sys.stdin.read(1)
chan.sendall(byte)
# Optionally echo locally, if needed.
if not using_pty and env.echo_stdin:
# Not using fastprint() here -- it prints as 'user'
# output level, don't want it to be accidentally hidden
sys.stdout.write(byte)
sys.stdout.flush()
time.sleep(ssh.io_sleep)
fabric-1.10.2/fabric/job_queue.py 0000664 0000000 0000000 00000017153 12541056305 0016607 0 ustar 00root root 0000000 0000000 """
Sliding-window-based job/task queue class (& example of use.)
May use ``multiprocessing.Process`` or ``threading.Thread`` objects as queue
items, though within Fabric itself only ``Process`` objects are used/supported.
"""
from __future__ import with_statement
import time
import Queue
from multiprocessing import Process
from fabric.state import env
from fabric.network import ssh
from fabric.context_managers import settings
class JobQueue(object):
"""
The goal of this class is to make a queue of processes to run, and go
through them running X number at any given time.
So if the bubble is 5 start with 5 running and move the bubble of running
procs along the queue looking something like this:
Start
...........................
[~~~~~]....................
___[~~~~~].................
_________[~~~~~]...........
__________________[~~~~~]..
____________________[~~~~~]
___________________________
End
"""
def __init__(self, max_running, comms_queue):
"""
Set up the class with reasonable defaults.
"""
self._queued = []
self._running = []
self._completed = []
self._num_of_jobs = 0
self._max = max_running
self._comms_queue = comms_queue
self._finished = False
self._closed = False
self._debug = False
def _all_alive(self):
"""
States whether all procs in the running list are alive. Needed to determine
when to stop looping, pop off dead procs, and start queued ones.
"""
if self._running:
return all([x.is_alive() for x in self._running])
else:
return False
def __len__(self):
"""
Just going to use number of jobs as the JobQueue length.
"""
return self._num_of_jobs
def close(self):
"""
A sanity check, ensuring that new jobs cannot be added during the last
throes of the job queue's run.
"""
if self._debug:
print("job queue closed.")
self._closed = True
def append(self, process):
"""
Add the Process() to the queue, so that later it can be checked up on.
That is if the JobQueue is still open.
If the queue is closed, this will just silently do nothing.
To get data back out of this process, give ``process`` access to a
``multiprocessing.Queue`` object, and pass that same queue to this class'
constructor as ``comms_queue``. ``JobQueue.run`` will then include the
queue's contents in its return value.
"""
if not self._closed:
self._queued.append(process)
self._num_of_jobs += 1
if self._debug:
print("job queue appended %s." % process.name)
def run(self):
"""
This is the workhorse. It will take the initial jobs from ``_queued``,
start them, add them to _running, and then go into the main running
loop.
This loop will check for done procs, if found, move them out of
_running into _completed. It also checks for a _running queue with open
spots, which it will then fill as discovered.
To end the loop, there have to be no running procs, and no more procs
to be run in the queue.
This function returns an iterable of all its children's exit codes.
"""
def _advance_the_queue():
"""
Helper function that pops a new proc off the queue, starts it, then
adds it to the running list. This will eventually deplete ``_queued``,
which is a condition for stopping the main while loop.
It also sets the env.host_string from the job.name, so that fabric
knows that this is the host to be making connections on.
"""
job = self._queued.pop()
if self._debug:
print("Popping '%s' off the queue and starting it" % job.name)
with settings(clean_revert=True, host_string=job.name, host=job.name):
job.start()
self._running.append(job)
# Prep return value so we can start filling it during main loop
results = {}
for job in self._queued:
results[job.name] = dict.fromkeys(('exit_code', 'results'))
if not self._closed:
raise Exception("Need to close() before starting.")
if self._debug:
print("Job queue starting.")
while len(self._running) < self._max:
_advance_the_queue()
# Main loop!
while not self._finished:
while len(self._running) < self._max and self._queued:
_advance_the_queue()
if not self._all_alive():
for id, job in enumerate(self._running):
if not job.is_alive():
if self._debug:
print("Job queue found finished proc: %s." %
job.name)
done = self._running.pop(id)
self._completed.append(done)
if self._debug:
print("Job queue has %d running." % len(self._running))
if not (self._queued or self._running):
if self._debug:
print("Job queue finished.")
for job in self._completed:
job.join()
self._finished = True
# Each loop pass, try pulling results off the queue to keep its
# size down. At this point, we don't actually care if any results
# have arrived yet; they will be picked up after the main loop.
self._fill_results(results)
time.sleep(ssh.io_sleep)
# Consume anything left in the results queue. Note that there is no
# need to block here, as the main loop ensures that all workers will
# already have finished.
self._fill_results(results)
# Attach exit codes now that we're all done & have joined all jobs
for job in self._completed:
if isinstance(job, Process):
results[job.name]['exit_code'] = job.exitcode
return results
def _fill_results(self, results):
"""
Attempt to pull data off self._comms_queue and add to 'results' dict.
If no data is available (i.e. the queue is empty), bail immediately.
"""
while True:
try:
datum = self._comms_queue.get_nowait()
results[datum['name']]['results'] = datum['result']
except Queue.Empty:
break
#### Sample
def try_using(parallel_type):
"""
This will run the queue through its paces, and show a simple way of using
the job queue.
"""
def print_number(number):
"""
Simple function to give a simple task to execute.
"""
print(number)
if parallel_type == "multiprocessing":
from multiprocessing import Process as Bucket
elif parallel_type == "threading":
from threading import Thread as Bucket
# Make a job_queue with a bubble of len 5, and have it print verbosely
queue = Queue.Queue()
jobs = JobQueue(5, queue)
jobs._debug = True
# Add 20 procs onto the stack
for x in range(20):
jobs.append(Bucket(
target=print_number,
args=[x],
kwargs={},
))
# Close up the queue and then start its execution
jobs.close()
jobs.run()
if __name__ == '__main__':
try_using("multiprocessing")
try_using("threading")
fabric-1.10.2/fabric/main.py 0000664 0000000 0000000 00000062202 12541056305 0015550 0 ustar 00root root 0000000 0000000 """
This module contains Fab's `main` method plus related subroutines.
`main` is executed as the command line ``fab`` program and takes care of
parsing options and commands, loading the user settings file, loading a
fabfile, and executing the commands given.
The other callables defined in this module are internal only. Anything useful
to individuals leveraging Fabric as a library, should be kept elsewhere.
"""
import getpass
from operator import isMappingType
from optparse import OptionParser
import os
import sys
import types
# For checking callables against the API, & easy mocking
from fabric import api, state, colors
from fabric.contrib import console, files, project
from fabric.network import disconnect_all, ssh
from fabric.state import env_options
from fabric.tasks import Task, execute, get_task_details
from fabric.task_utils import _Dict, crawl
from fabric.utils import abort, indent, warn, _pty_size
# One-time calculation of "all internal callables" to avoid doing this on every
# check of a given fabfile callable (in is_classic_task()).
_modules = [api, project, files, console, colors]
_internals = reduce(lambda x, y: x + filter(callable, vars(y).values()),
_modules,
[]
)
# Module recursion cache
class _ModuleCache(object):
"""
Set-like object operating on modules and storing __name__s internally.
"""
def __init__(self):
self.cache = set()
def __contains__(self, value):
return value.__name__ in self.cache
def add(self, value):
return self.cache.add(value.__name__)
def clear(self):
return self.cache.clear()
_seen = _ModuleCache()
def load_settings(path):
"""
Take given file path and return dictionary of any key=value pairs found.
Usage docs are in sites/docs/usage/fab.rst, in "Settings files."
"""
if os.path.exists(path):
comments = lambda s: s and not s.startswith("#")
settings = filter(comments, open(path, 'r'))
return dict((k.strip(), v.strip()) for k, _, v in
[s.partition('=') for s in settings])
# Handle nonexistent or empty settings file
return {}
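# Illustrative sketch (not part of the original module): given a settings
# file (e.g. ~/.fabricrc) containing
#
#     # default remote account
#     user = deploy
#     timeout = 10
#
# load_settings() returns {'user': 'deploy', 'timeout': '10'} -- lines
# starting with '#' are skipped, each remaining line is partitioned on the
# first '=', and every value comes back as a string.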
def _is_package(path):
"""
Is the given path a Python package?
"""
return (
os.path.isdir(path)
and os.path.exists(os.path.join(path, '__init__.py'))
)
def find_fabfile(names=None):
"""
Attempt to locate a fabfile, either explicitly or by searching parent dirs.
Usage docs are in sites/docs/usage/fabfiles.rst, in "Fabfile discovery."
"""
# Obtain env value if not given specifically
if names is None:
names = [state.env.fabfile]
# Create .py version if necessary
if not names[0].endswith('.py'):
names += [names[0] + '.py']
# Does the name contain path elements?
if os.path.dirname(names[0]):
# If so, expand home-directory markers and test for existence
for name in names:
expanded = os.path.expanduser(name)
if os.path.exists(expanded):
if name.endswith('.py') or _is_package(expanded):
return os.path.abspath(expanded)
else:
# Otherwise, start in cwd and work upwards towards the filesystem root
path = '.'
# Stop before falling off root of filesystem (should be platform
# agnostic)
while os.path.split(os.path.abspath(path))[1]:
for name in names:
joined = os.path.join(path, name)
if os.path.exists(joined):
if name.endswith('.py') or _is_package(joined):
return os.path.abspath(joined)
path = os.path.join('..', path)
# Implicit 'return None' if nothing was found
def is_classic_task(tup):
"""
Takes (name, object) tuple, returns True if it's a non-Fab public callable.
"""
name, func = tup
try:
is_classic = (
callable(func)
and (func not in _internals)
and not name.startswith('_')
)
# Handle poorly behaved __eq__ implementations
except (ValueError, TypeError):
is_classic = False
return is_classic
def load_fabfile(path, importer=None):
"""
Import given fabfile path and return (docstring, callables).
Specifically, the fabfile's ``__doc__`` attribute (a string) and a
dictionary of ``{'name': callable}`` containing all callables which pass
the "is a Fabric task" test.
"""
if importer is None:
importer = __import__
# Get directory and fabfile name
directory, fabfile = os.path.split(path)
# If the directory isn't in the PYTHONPATH, add it so our import will work
added_to_path = False
index = None
if directory not in sys.path:
sys.path.insert(0, directory)
added_to_path = True
# If the directory IS in the PYTHONPATH, move it to the front temporarily,
# otherwise other fabfiles -- like Fabric's own -- may scoop the intended
# one.
else:
i = sys.path.index(directory)
if i != 0:
# Store index for later restoration
index = i
# Add to front, then remove from original position
sys.path.insert(0, directory)
del sys.path[i + 1]
# Perform the import (trimming off the .py)
imported = importer(os.path.splitext(fabfile)[0])
# Remove directory from path if we added it ourselves (just to be neat)
if added_to_path:
del sys.path[0]
# Put back in original index if we moved it
if index is not None:
sys.path.insert(index + 1, directory)
del sys.path[0]
# Actually load tasks
docstring, new_style, classic, default = load_tasks_from_module(imported)
tasks = new_style if state.env.new_style_tasks else classic
# Clean up after ourselves
_seen.clear()
return docstring, tasks, default
def load_tasks_from_module(imported):
"""
Handles loading all of the tasks for a given `imported` module
"""
# Obey the use of .__all__ if it is present
imported_vars = vars(imported)
if "__all__" in imported_vars:
imported_vars = [(name, imported_vars[name]) for name in \
imported_vars if name in imported_vars["__all__"]]
else:
imported_vars = imported_vars.items()
# Return a two-tuple value. First is the documentation, second is a
# dictionary of callables only (and don't include Fab operations or
# underscored callables)
new_style, classic, default = extract_tasks(imported_vars)
return imported.__doc__, new_style, classic, default
def extract_tasks(imported_vars):
"""
Handle extracting tasks from a given list of variables
"""
new_style_tasks = _Dict()
classic_tasks = {}
default_task = None
if 'new_style_tasks' not in state.env:
state.env.new_style_tasks = False
for tup in imported_vars:
name, obj = tup
if is_task_object(obj):
state.env.new_style_tasks = True
# Use instance.name if defined
if obj.name and obj.name != 'undefined':
new_style_tasks[obj.name] = obj
else:
obj.name = name
new_style_tasks[name] = obj
# Handle aliasing
if obj.aliases is not None:
for alias in obj.aliases:
new_style_tasks[alias] = obj
# Handle defaults
if obj.is_default:
default_task = obj
elif is_classic_task(tup):
classic_tasks[name] = obj
elif is_task_module(obj):
docs, newstyle, classic, default = load_tasks_from_module(obj)
for task_name, task in newstyle.items():
if name not in new_style_tasks:
new_style_tasks[name] = _Dict()
new_style_tasks[name][task_name] = task
if default is not None:
new_style_tasks[name].default = default
return new_style_tasks, classic_tasks, default_task
def is_task_module(a):
"""
Determine if the provided value is a task module
"""
#return (type(a) is types.ModuleType and
# any(map(is_task_object, vars(a).values())))
if isinstance(a, types.ModuleType) and a not in _seen:
# Flag module as seen
_seen.add(a)
# Signal that we need to check it out
return True
def is_task_object(a):
"""
Determine if the provided value is a ``Task`` object.
This returning True signals that all tasks within the fabfile
module must be Task objects.
"""
return isinstance(a, Task) and a.use_task_objects
def parse_options():
"""
Handle command-line options with optparse.OptionParser.
Return list of arguments, largely for use in `parse_arguments`.
"""
#
# Initialize
#
parser = OptionParser(
usage=("fab [options] "
"[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ..."))
#
# Define options that don't become `env` vars (typically ones which cause
# Fabric to do something other than its normal execution, such as
# --version)
#
# Display info about a specific command
parser.add_option('-d', '--display',
metavar='NAME',
help="print detailed info about command NAME"
)
# Control behavior of --list
LIST_FORMAT_OPTIONS = ('short', 'normal', 'nested')
parser.add_option('-F', '--list-format',
choices=LIST_FORMAT_OPTIONS,
default='normal',
metavar='FORMAT',
help="formats --list, choices: %s" % ", ".join(LIST_FORMAT_OPTIONS)
)
parser.add_option('-I', '--initial-password-prompt',
action='store_true',
default=False,
help="Force password prompt up-front"
)
# List Fab commands found in loaded fabfiles/source files
parser.add_option('-l', '--list',
action='store_true',
dest='list_commands',
default=False,
help="print list of possible commands and exit"
)
# Allow setting of arbitrary env vars at runtime.
parser.add_option('--set',
metavar="KEY=VALUE,...",
dest='env_settings',
default="",
help="comma separated KEY=VALUE pairs to set Fab env vars"
)
# Like --list, but text processing friendly
parser.add_option('--shortlist',
action='store_true',
dest='shortlist',
default=False,
help="alias for -F short --list"
)
# Version number (optparse gives you --version but we have to do it
# ourselves to get -V too. sigh)
parser.add_option('-V', '--version',
action='store_true',
dest='show_version',
default=False,
help="show program's version number and exit"
)
#
# Add in options which are also destined to show up as `env` vars.
#
for option in env_options:
parser.add_option(option)
#
# Finalize
#
# Return three-tuple of parser + the output from parse_args (opt obj, args)
opts, args = parser.parse_args()
return parser, opts, args
def _is_task(name, value):
"""
Is the object a task as opposed to e.g. a dict or int?
"""
return is_classic_task((name, value)) or is_task_object(value)
def _sift_tasks(mapping):
tasks, collections = [], []
for name, value in mapping.iteritems():
if _is_task(name, value):
tasks.append(name)
elif isMappingType(value):
collections.append(name)
tasks = sorted(tasks)
collections = sorted(collections)
return tasks, collections
def _task_names(mapping):
"""
Flatten & sort task names in a breadth-first fashion.
Tasks are always listed before submodules at the same level, but within
those two groups, sorting is alphabetical.
"""
tasks, collections = _sift_tasks(mapping)
for collection in collections:
module = mapping[collection]
if hasattr(module, 'default'):
tasks.append(collection)
join = lambda x: ".".join((collection, x))
tasks.extend(map(join, _task_names(module)))
return tasks
def _print_docstring(docstrings, name):
if not docstrings:
return False
docstring = crawl(name, state.commands).__doc__
if isinstance(docstring, basestring):
return docstring
def _normal_list(docstrings=True):
result = []
task_names = _task_names(state.commands)
# Want separator between name, description to be straight col
max_len = reduce(lambda a, b: max(a, len(b)), task_names, 0)
sep = ' '
trail = '...'
max_width = _pty_size()[1] - 1 - len(trail)
for name in task_names:
output = None
docstring = _print_docstring(docstrings, name)
if docstring:
lines = filter(None, docstring.splitlines())
first_line = lines[0].strip()
# Truncate it if it's longer than N chars
size = max_width - (max_len + len(sep) + len(trail))
if len(first_line) > size:
first_line = first_line[:size] + trail
output = name.ljust(max_len) + sep + first_line
# Or nothing (so just the name)
else:
output = name
result.append(indent(output))
return result
def _nested_list(mapping, level=1):
result = []
tasks, collections = _sift_tasks(mapping)
# Tasks come first
result.extend(map(lambda x: indent(x, spaces=level * 4), tasks))
for collection in collections:
module = mapping[collection]
# Section/module "header"
result.append(indent(collection + ":", spaces=level * 4))
# Recurse
result.extend(_nested_list(module, level + 1))
return result
COMMANDS_HEADER = "Available commands"
NESTED_REMINDER = " (remember to call as module.[...].task)"
def list_commands(docstring, format_):
"""
Print all found commands/tasks, then exit. Invoked with ``-l/--list``.
If ``docstring`` is non-empty, it will be printed before the task list.
``format_`` should conform to the options specified in
``LIST_FORMAT_OPTIONS``, e.g. ``"short"``, ``"normal"``.
"""
# Short-circuit with simple short output
if format_ == "short":
return _task_names(state.commands)
# Otherwise, handle more verbose modes
result = []
# Docstring at top, if applicable
if docstring:
trailer = "\n" if not docstring.endswith("\n") else ""
result.append(docstring + trailer)
header = COMMANDS_HEADER
if format_ == "nested":
header += NESTED_REMINDER
result.append(header + ":\n")
c = _normal_list() if format_ == "normal" else _nested_list(state.commands)
result.extend(c)
return result
def display_command(name):
"""
Print command function's docstring, then exit. Invoked with -d/--display.
"""
# Sanity check
command = crawl(name, state.commands)
if command is None:
msg = "Task '%s' does not appear to exist. Valid task names:\n%s"
abort(msg % (name, "\n".join(_normal_list(False))))
# Print out nicely presented docstring if found
if hasattr(command, '__details__'):
task_details = command.__details__()
else:
task_details = get_task_details(command)
if task_details:
print("Displaying detailed information for task '%s':" % name)
print('')
print(indent(task_details, strip=True))
print('')
# Or print notice if not
else:
print("No detailed information available for task '%s':" % name)
sys.exit(0)
def _escape_split(sep, argstr):
"""
Allows for escaping of the separator: e.g. task:arg='foo\, bar'
Note that, given the way bash et al. do command-line parsing, those
single quotes are required.
"""
escaped_sep = r'\%s' % sep
if escaped_sep not in argstr:
return argstr.split(sep)
before, _, after = argstr.partition(escaped_sep)
startlist = before.split(sep) # a regular split is fine here
unfinished = startlist[-1]
startlist = startlist[:-1]
# recurse because there may be more escaped separators
endlist = _escape_split(sep, after)
# finish building the escaped value. We use endlist[0] because the first
# part of the string sent in recursion is the rest of the escaped value.
unfinished += sep + endlist[0]
return startlist + [unfinished] + endlist[1:] # put together all the parts
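# Illustrative sketch (not part of the original module): how _escape_split
# honors a backslash-escaped separator.
#
#     _escape_split(',', r"arg1\, still arg1,arg2")
#     # -> ['arg1, still arg1', 'arg2']
#     _escape_split(',', "a,b,c")
#     # -> ['a', 'b', 'c']    (no escaped separator: plain str.split)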
def parse_arguments(arguments):
"""
Parse string list into list of tuples: command, args, kwargs, hosts, roles.
See sites/docs/usage/fab.rst, section on "per-task arguments" for details.
"""
cmds = []
for cmd in arguments:
args = []
kwargs = {}
hosts = []
roles = []
exclude_hosts = []
if ':' in cmd:
cmd, argstr = cmd.split(':', 1)
for pair in _escape_split(',', argstr):
result = _escape_split('=', pair)
if len(result) > 1:
k, v = result
# Catch, interpret host/hosts/role/roles/exclude_hosts
# kwargs
if k in ['host', 'hosts', 'role', 'roles', 'exclude_hosts']:
if k == 'host':
hosts = [v.strip()]
elif k == 'hosts':
hosts = [x.strip() for x in v.split(';')]
elif k == 'role':
roles = [v.strip()]
elif k == 'roles':
roles = [x.strip() for x in v.split(';')]
elif k == 'exclude_hosts':
exclude_hosts = [x.strip() for x in v.split(';')]
# Otherwise, record as usual
else:
kwargs[k] = v
else:
args.append(result[0])
cmds.append((cmd, args, kwargs, hosts, roles, exclude_hosts))
return cmds
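# Illustrative sketch (not part of the original module): one per-task
# command-line spec and the tuple it parses into.
#
#     parse_arguments(["deploy:v1.2,force=yes,hosts=web1;web2"])
#     # -> [('deploy', ['v1.2'], {'force': 'yes'},
#     #      ['web1', 'web2'], [], [])]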
def parse_remainder(arguments):
"""
Merge list of "remainder arguments" into a single command string.
"""
return ' '.join(arguments)
def update_output_levels(show, hide):
"""
Update state.output values as per given comma-separated list of key names.
For example, ``update_output_levels(show='debug,warnings')`` is
functionally equivalent to ``state.output['debug'] = True ;
state.output['warnings'] = True``. Conversely, anything given to ``hide``
sets the values to ``False``.
"""
if show:
for key in show.split(','):
state.output[key] = True
if hide:
for key in hide.split(','):
state.output[key] = False
def show_commands(docstring, format, code=0):
print("\n".join(list_commands(docstring, format)))
sys.exit(code)
def main(fabfile_locations=None):
"""
Main command-line execution loop.
"""
try:
# Parse command line options
parser, options, arguments = parse_options()
# Handle regular args vs -- args
arguments = parser.largs
remainder_arguments = parser.rargs
# Allow setting of arbitrary env keys.
# This comes *before* the "specific" env_options so that those may
# override these ones. Specific should override generic, if somebody
# was silly enough to specify the same key in both places.
# E.g. "fab --set shell=foo --shell=bar" should have env.shell set to
# 'bar', not 'foo'.
for pair in _escape_split(',', options.env_settings):
pair = _escape_split('=', pair)
# "--set x" => set env.x to True
# "--set x=" => set env.x to ""
key = pair[0]
value = True
if len(pair) == 2:
value = pair[1]
state.env[key] = value
# Update env with any overridden option values
# NOTE: This needs to remain the first thing that occurs
# post-parsing, since so many things hinge on the values in env.
for option in env_options:
state.env[option.dest] = getattr(options, option.dest)
# Handle --hosts, --roles, --exclude-hosts (comma separated string =>
# list)
for key in ['hosts', 'roles', 'exclude_hosts']:
if key in state.env and isinstance(state.env[key], basestring):
state.env[key] = state.env[key].split(',')
# Feed the env.tasks : tasks that are asked to be executed.
state.env['tasks'] = arguments
# Handle output control level show/hide
update_output_levels(show=options.show, hide=options.hide)
# Handle version number option
if options.show_version:
print("Fabric %s" % state.env.version)
print("Paramiko %s" % ssh.__version__)
sys.exit(0)
# Load settings from user settings file, into shared env dict.
state.env.update(load_settings(state.env.rcfile))
# Find local fabfile path or abort
fabfile = find_fabfile(fabfile_locations)
if not fabfile and not remainder_arguments:
abort("""Couldn't find any fabfiles!
Remember that -f can be used to specify fabfile path, and use -h for help.""")
# Store absolute path to fabfile in case anyone needs it
state.env.real_fabfile = fabfile
# Load fabfile (which calls its module-level code, including
# tweaks to env values) and put its commands in the shared commands
# dict
default = None
if fabfile:
docstring, callables, default = load_fabfile(fabfile)
state.commands.update(callables)
# Handle case where we were called bare, i.e. just "fab", and print
# a help message.
actions = (options.list_commands, options.shortlist, options.display,
arguments, remainder_arguments, default)
if not any(actions):
parser.print_help()
sys.exit(1)
# Abort if no commands found
if not state.commands and not remainder_arguments:
abort("Fabfile didn't contain any commands!")
# Now that we're settled on a fabfile, inform user.
if state.output.debug:
if fabfile:
print("Using fabfile '%s'" % fabfile)
else:
print("No fabfile loaded -- remainder command only")
# Shortlist is now just an alias for the "short" list format;
# it overrides use of --list-format if somebody were to specify both
if options.shortlist:
options.list_format = 'short'
options.list_commands = True
# List available commands
if options.list_commands:
show_commands(docstring, options.list_format)
# Handle show (command-specific help) option
if options.display:
display_command(options.display)
# If user didn't specify any commands to run, show help
if not (arguments or remainder_arguments or default):
parser.print_help()
sys.exit(0) # Or should it exit with error (1)?
# Parse arguments into commands to run (plus args/kwargs/hosts)
commands_to_run = parse_arguments(arguments)
# Parse remainders into a faux "command" to execute
remainder_command = parse_remainder(remainder_arguments)
# Figure out if any specified task names are invalid
unknown_commands = []
for tup in commands_to_run:
if crawl(tup[0], state.commands) is None:
unknown_commands.append(tup[0])
# Abort if any unknown commands were specified
if unknown_commands and not state.env.get('skip_unknown_tasks', False):
warn("Command(s) not found:\n%s" \
% indent(unknown_commands))
show_commands(None, options.list_format, 1)
# Generate remainder command and insert into commands, commands_to_run
if remainder_command:
r = ''
state.commands[r] = lambda: api.run(remainder_command)
commands_to_run.append((r, [], {}, [], [], []))
# Ditto for a default, if found
if not commands_to_run and default:
commands_to_run.append((default.name, [], {}, [], [], []))
# Initial password prompt, if requested
if options.initial_password_prompt:
prompt = "Initial value for env.password: "
state.env.password = getpass.getpass(prompt)
if state.output.debug:
names = ", ".join(x[0] for x in commands_to_run)
print("Commands to run: %s" % names)
# At this point all commands must exist, so execute them in order.
for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run:
execute(
name,
hosts=arg_hosts,
roles=arg_roles,
exclude_hosts=arg_exclude_hosts,
*args, **kwargs
)
# If we got here, no errors occurred, so print a final note.
if state.output.status:
print("\nDone.")
except SystemExit:
# a number of internal functions might raise this one.
raise
except KeyboardInterrupt:
if state.output.status:
sys.stderr.write("\nStopped.\n")
sys.exit(1)
except:
sys.excepthook(*sys.exc_info())
# we might leave stale threads if we don't explicitly exit()
sys.exit(1)
finally:
disconnect_all()
sys.exit(0)
fabric-1.10.2/fabric/network.py 0000664 0000000 0000000 00000060720 12541056305 0016320 0 ustar 00root root 0000000 0000000 """
Classes and subroutines dealing with network connections and related topics.
"""
from __future__ import with_statement
from functools import wraps
import getpass
import os
import re
import time
import socket
import sys
from StringIO import StringIO
from fabric.auth import get_password, set_password
from fabric.utils import abort, handle_prompt_abort, warn
from fabric.exceptions import NetworkError
try:
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
import paramiko as ssh
except ImportError, e:
import traceback
traceback.print_exc()
msg = """
There was a problem importing our SSH library (see traceback above).
Please make sure all dependencies are installed and importable.
""".rstrip()
sys.stderr.write(msg + '\n')
sys.exit(1)
ipv6_regex = re.compile(
'^\[?(?P<host>[0-9A-Fa-f:]+(?:%[a-z]+\d+)?)\]?(:(?P<port>\d+))?$')
def direct_tcpip(client, host, port):
return client.get_transport().open_channel(
'direct-tcpip',
(host, int(port)),
('', 0)
)
def is_key_load_error(e):
return (
e.__class__ is ssh.SSHException
and 'Unable to parse key file' in str(e)
)
def _tried_enough(tries):
from fabric.state import env
return tries >= env.connection_attempts
def get_gateway(host, port, cache, replace=False):
"""
Create and return a gateway socket, if one is needed.
This function checks ``env`` for gateway or proxy-command settings and
returns the necessary socket-like object for use by a final host
connection.
:param host:
Hostname of target server.
:param port:
Port to connect to on target server.
:param cache:
A ``HostConnectionCache`` object, in which gateway ``SSHClient``
objects are to be retrieved/cached.
:param replace:
Whether to forcibly replace a cached gateway client object.
:returns:
A ``socket.socket``-like object, or ``None`` if none was created.
"""
from fabric.state import env, output
sock = None
proxy_command = ssh_config().get('proxycommand', None)
if env.gateway:
gateway = normalize_to_string(env.gateway)
# ensure initial gateway connection
if replace or gateway not in cache:
if output.debug:
print "Creating new gateway connection to %r" % gateway
cache[gateway] = connect(*normalize(gateway) + (cache, False))
# now we should have an open gw connection and can ask it for a
# direct-tcpip channel to the real target. (bypass cache's own
# __getitem__ override to avoid hilarity - this is usually called
# within that method.)
sock = direct_tcpip(dict.__getitem__(cache, gateway), host, port)
elif proxy_command:
sock = ssh.ProxyCommand(proxy_command)
return sock
class HostConnectionCache(dict):
"""
Dict subclass allowing for caching of host connections/clients.
This subclass will intelligently create new client connections when keys
are requested, or return previously created connections instead.
It also handles creating new socket-like objects when required to implement
gateway connections and `ProxyCommand`, and handing them to the inner
connection methods.
Key values are the same as host specifiers throughout Fabric: optional
username + ``@``, mandatory hostname, optional ``:`` + port number.
Examples:
* ``example.com`` - typical Internet host address.
* ``firewall`` - atypical, but still legal, local host address.
* ``user@example.com`` - with specific username attached.
* ``bob@smith.org:222`` - with specific nonstandard port attached.
When the username is not given, ``env.user`` is used. ``env.user``
defaults to the currently running user at startup but may be overwritten by
user code or by specifying a command-line flag.
Note that differing explicit usernames for the same hostname will result in
multiple client connections being made. For example, specifying
``user1@example.com`` will create a connection to ``example.com``, logged
in as ``user1``; later specifying ``user2@example.com`` will create a new,
2nd connection as ``user2``.
The same applies to ports: specifying two different ports will result in
two different connections to the same host being made. If no port is given,
22 is assumed, so ``example.com`` is equivalent to ``example.com:22``.
"""
def connect(self, key):
"""
Force a new connection to ``key`` host string.
"""
from fabric.state import env
user, host, port = normalize(key)
key = normalize_to_string(key)
seek_gateway = True
# don't seek a gateway for the gateway itself (avoids infinite recursion)
if env.gateway:
seek_gateway = normalize_to_string(env.gateway) != key
self[key] = connect(
user, host, port, cache=self, seek_gateway=seek_gateway)
def __getitem__(self, key):
"""
Autoconnect + return connection object
"""
key = normalize_to_string(key)
if key not in self:
self.connect(key)
return dict.__getitem__(self, key)
#
# Dict overrides that normalize input keys
#
def __setitem__(self, key, value):
return dict.__setitem__(self, normalize_to_string(key), value)
def __delitem__(self, key):
return dict.__delitem__(self, normalize_to_string(key))
def __contains__(self, key):
return dict.__contains__(self, normalize_to_string(key))
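# Illustrative sketch (not part of the original module): both host strings
# below normalize to 'deploy@example.com:22' (explicit user plus the default
# port), so the second lookup is a cache hit against the same SSHClient.
#
#     cache = HostConnectionCache()
#     client = cache['deploy@example.com:22']  # connects on first access
#     same = cache['deploy@example.com']       # no reconnect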
def ssh_config(host_string=None):
"""
Return ssh configuration dict for current env.host_string host value.
Memoizes the loaded SSH config file, but not the specific per-host results.
This function performs the necessary "is SSH config enabled?" checks and
will simply return an empty dict if not. If SSH config *is* enabled and the
value of env.ssh_config_path is not a valid file, it will abort.
May give an explicit host string as ``host_string``.
"""
from fabric.state import env
dummy = {}
if not env.use_ssh_config:
return dummy
if '_ssh_config' not in env:
try:
conf = ssh.SSHConfig()
path = os.path.expanduser(env.ssh_config_path)
with open(path) as fd:
conf.parse(fd)
env._ssh_config = conf
except IOError:
warn("Unable to load SSH config file '%s'" % path)
return dummy
host = parse_host_string(host_string or env.host_string)['host']
return env._ssh_config.lookup(host)
def key_filenames():
"""
Returns list of SSH key filenames for the current env.host_string.
Takes into account ssh_config and env.key_filename, including normalization
to a list. Also performs ``os.path.expanduser`` expansion on any key
filenames.
"""
from fabric.state import env
keys = env.key_filename
# For ease of use, coerce stringish key filename into list
if isinstance(env.key_filename, basestring) or env.key_filename is None:
keys = [keys]
# Strip out any empty strings (such as the default value...meh)
keys = filter(bool, keys)
# Honor SSH config
conf = ssh_config()
if 'identityfile' in conf:
# Assume a list here as we require Paramiko 1.10+
keys.extend(conf['identityfile'])
return map(os.path.expanduser, keys)
def key_from_env(passphrase=None):
"""
Returns a paramiko-ready key from a text string of a private key
"""
from fabric.state import env, output
if 'key' in env:
if output.debug:
# NOTE: this may not be the most secure thing; OTOH anybody running
# the process must by definition have access to the key value,
# so only serious problem is if they're logging the output.
sys.stderr.write("Trying to honor in-memory key %r\n" % env.key)
for pkey_class in (ssh.rsakey.RSAKey, ssh.dsskey.DSSKey):
if output.debug:
sys.stderr.write("Trying to load it as %s\n" % pkey_class)
try:
return pkey_class.from_private_key(StringIO(env.key), passphrase)
except Exception, e:
# File is valid key, but is encrypted: raise it, this will
# cause cxn loop to prompt for passphrase & retry
if 'Private key file is encrypted' in str(e):
raise
# Otherwise, it probably means it wasn't a valid key of this
# type, so try the next one.
else:
pass
def parse_host_string(host_string):
# Split host_string to user (optional) and host/port
user_hostport = host_string.rsplit('@', 1)
hostport = user_hostport.pop()
user = user_hostport[0] if user_hostport and user_hostport[0] else None
# Split host/port string to host and optional port
# For IPv6 addresses square brackets are mandatory for host/port separation
if hostport.count(':') > 1:
# Looks like IPv6 address
r = ipv6_regex.match(hostport).groupdict()
host = r['host'] or None
port = r['port'] or None
else:
# Hostname or IPv4 address
host_port = hostport.rsplit(':', 1)
host = host_port.pop(0) or None
port = host_port[0] if host_port and host_port[0] else None
return {'user': user, 'host': host, 'port': port}
def normalize(host_string, omit_port=False):
"""
Normalizes a given host string, returning explicit host, user, port.
If ``omit_port`` is given and is True, only the host and user are returned.
This function will process SSH config files if Fabric is configured to do
so, and will use them to fill in some default values or swap in hostname
aliases.
"""
from fabric.state import env
# Gracefully handle "empty" input by returning empty output
if not host_string:
return ('', '') if omit_port else ('', '', '')
# Parse host string (need this early on to look up host-specific ssh_config
# values)
r = parse_host_string(host_string)
host = r['host']
# Env values (using defaults if somehow earlier defaults were replaced with
# empty values)
user = env.user or env.local_user
port = env.port or env.default_port
# SSH config data
conf = ssh_config(host_string)
# Only use ssh_config values if the env value appears unmodified from
# the true defaults. If the user has tweaked them, that new value
# takes precedence.
if user == env.local_user and 'user' in conf:
user = conf['user']
if port == env.default_port and 'port' in conf:
port = conf['port']
# Also override host if needed
if 'hostname' in conf:
host = conf['hostname']
# Merge explicit user/port values with the env/ssh_config derived ones
# (Host is already done at this point.)
user = r['user'] or user
port = r['port'] or port
if omit_port:
return user, host
return user, host, port
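# Illustrative sketch (not part of the original module): assuming env.user
# is 'bob', the default port of 22, and no ssh_config overrides:
#
#     normalize('example.com')             # -> ('bob', 'example.com', '22')
#     normalize('alice@example.com:2202')  # -> ('alice', 'example.com', '2202')
#     normalize('alice@example.com', omit_port=True)
#     # -> ('alice', 'example.com')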
def to_dict(host_string):
user, host, port = normalize(host_string)
return {
'user': user, 'host': host, 'port': port, 'host_string': host_string
}
def from_dict(arg):
return join_host_strings(arg['user'], arg['host'], arg['port'])
def denormalize(host_string):
"""
Strips out default values for the given host string.
If the user part is the default user, it is removed;
if the port is port 22, it also is removed.
"""
from fabric.state import env
r = parse_host_string(host_string)
user = ''
if r['user'] is not None and r['user'] != env.user:
user = r['user'] + '@'
port = ''
if r['port'] is not None and r['port'] != '22':
port = ':' + r['port']
host = r['host']
host = '[%s]' % host if port and host.count(':') > 1 else host
return user + host + port
def join_host_strings(user, host, port=None):
"""
Turns user/host/port strings into ``user@host:port`` combined string.
This function is not responsible for handling missing user/port strings;
for that, see the ``normalize`` function.
If ``host`` looks like IPv6 address, it will be enclosed in square brackets
If ``port`` is omitted, the returned string will be of the form
``user@host``.
"""
if port:
# Square brackets are necessary for IPv6 host/port separation
template = "%s@[%s]:%s" if host.count(':') > 1 else "%s@%s:%s"
return template % (user, host, port)
else:
return "%s@%s" % (user, host)
def normalize_to_string(host_string):
"""
normalize() returns a tuple; this returns another valid host string.
"""
return join_host_strings(*normalize(host_string))
def connect(user, host, port, cache, seek_gateway=True):
"""
Create and return a new SSHClient instance connected to given host.
:param user: Username to connect as.
:param host: Network hostname.
:param port: SSH daemon port.
:param cache:
A ``HostConnectionCache`` instance used to cache/store gateway hosts
when gatewaying is enabled.
:param seek_gateway:
Whether to try setting up a gateway socket for this connection. Used so
the actual gateway connection can prevent recursion.
"""
from state import env, output
#
# Initialization
#
# Init client
client = ssh.SSHClient()
# Load system hosts file (e.g. /etc/ssh/ssh_known_hosts)
known_hosts = env.get('system_known_hosts')
if known_hosts:
client.load_system_host_keys(known_hosts)
# Load known host keys (e.g. ~/.ssh/known_hosts) unless user says not to.
if not env.disable_known_hosts:
client.load_system_host_keys()
# Unless user specified not to, accept/add new, unknown host keys
if not env.reject_unknown_hosts:
client.set_missing_host_key_policy(ssh.AutoAddPolicy())
#
# Connection attempt loop
#
# Initialize loop variables
connected = False
password = get_password(user, host, port)
tries = 0
sock = None
# Loop until successful connect (keep prompting for new password)
while not connected:
# Attempt connection
try:
tries += 1
# (Re)connect gateway socket, if needed.
# Nuke cached client object if not on initial try.
if seek_gateway:
sock = get_gateway(host, port, cache, replace=tries > 0)
# Ready to connect
client.connect(
hostname=host,
port=int(port),
username=user,
password=password,
pkey=key_from_env(password),
key_filename=key_filenames(),
timeout=env.timeout,
allow_agent=not env.no_agent,
look_for_keys=not env.no_keys,
sock=sock
)
connected = True
# set a keepalive if desired
if env.keepalive:
client.get_transport().set_keepalive(env.keepalive)
return client
# BadHostKeyException corresponds to key mismatch, i.e. what on the
# command line results in the big banner error about man-in-the-middle
# attacks.
except ssh.BadHostKeyException, e:
raise NetworkError("Host key for %s did not match pre-existing key! Server's key was changed recently, or possible man-in-the-middle attack." % host, e)
# Prompt for new password to try on auth failure
except (
ssh.AuthenticationException,
ssh.PasswordRequiredException,
ssh.SSHException
), e:
msg = str(e)
# If we get SSHExceptionError and the exception message indicates
# SSH protocol banner read failures, assume it's caused by the
# server load and try again.
if e.__class__ is ssh.SSHException \
and msg == 'Error reading SSH protocol banner':
if _tried_enough(tries):
raise NetworkError(msg, e)
continue
# For whatever reason, empty password + no ssh key or agent
# results in an SSHException instead of an
# AuthenticationException. Since it's difficult to do
# otherwise, we must assume empty password + SSHException ==
# auth exception.
#
# Conversely: if we get SSHException and there
# *was* a password -- it is probably something non auth
# related, and should be sent upwards. (This is not true if the
# exception message does indicate key parse problems.)
#
# This also holds true for rejected/unknown host keys: we have to
# guess based on other heuristics.
if e.__class__ is ssh.SSHException \
and (password or msg.startswith('Unknown server')) \
and not is_key_load_error(e):
raise NetworkError(msg, e)
# Otherwise, assume an auth exception, and prompt for new/better
# password.
# Paramiko doesn't handle prompting for locked private
# keys (i.e. keys with a passphrase and not loaded into an agent)
# so we have to detect this and tweak our prompt slightly.
# (Otherwise, however, the logic flow is the same, because
# ssh's connect() method overrides the password argument to be
# either the login password OR the private key passphrase. Meh.)
#
# NOTE: This will come up if you normally use a
# passphrase-protected private key with ssh-agent, and enter an
# incorrect remote username, because ssh.connect:
# * Tries the agent first, which will fail as you gave the wrong
# username, so obviously any loaded keys aren't gonna work for a
# nonexistent remote account;
# * Then tries the on-disk key file, which is passphrased;
# * Realizes there's no password to try unlocking that key with,
# because you didn't enter a password, because you're using
# ssh-agent;
# * In this condition (trying a key file, password is None)
# ssh raises PasswordRequiredException.
text = None
if e.__class__ is ssh.PasswordRequiredException \
or is_key_load_error(e):
# NOTE: we can't easily say WHICH key's passphrase is needed,
# because ssh doesn't provide us with that info, and
# env.key_filename may be a list of keys, so we can't know
# which one raised the exception. Best not to try.
prompt = "[%s] Passphrase for private key"
text = prompt % env.host_string
password = prompt_for_password(text)
# Update env.password, env.passwords if empty
set_password(user, host, port, password)
# Ctrl-D / Ctrl-C for exit
# TODO: this may no longer actually serve its original purpose and may
# also hide TypeErrors from paramiko. Double check in v2.
except (EOFError, TypeError):
# Print a newline (in case user was sitting at prompt)
print('')
sys.exit(0)
# Handle DNS error / name lookup failure
except socket.gaierror, e:
raise NetworkError('Name lookup failed for %s' % host, e)
# Handle timeouts and retries, including generic errors
# NOTE: In 2.6, socket.error subclasses IOError
except socket.error, e:
not_timeout = type(e) is not socket.timeout
giving_up = _tried_enough(tries)
# Baseline error msg for when debug is off
msg = "Timed out trying to connect to %s" % host
# Expanded for debug on
err = msg + " (attempt %s of %s)" % (tries, env.connection_attempts)
if giving_up:
err += ", giving up"
err += ")"
# Debuggin'
if output.debug:
sys.stderr.write(err + '\n')
# Having said our piece, try again
if not giving_up:
# Sleep if it wasn't a timeout, so we still get timeout-like
# behavior
if not_timeout:
time.sleep(env.timeout)
continue
# Override error msg if we were retrying other errors
if not_timeout:
msg = "Low level socket error connecting to host %s on port %s: %s" % (
host, port, e[1]
)
# Here, all attempts failed. Tweak error msg to show # tries.
# TODO: find good humanization module, jeez
s = "s" if env.connection_attempts > 1 else ""
msg += " (tried %s time%s)" % (env.connection_attempts, s)
raise NetworkError(msg, e)
# Ensure that if we terminated without connecting and we were given an
# explicit socket, close it out.
finally:
if not connected and sock is not None:
sock.close()
def _password_prompt(prompt, stream):
# NOTE: Using encode-to-ascii to prevent (Windows, at least) getpass from
# choking if given Unicode.
return getpass.getpass(prompt.encode('ascii', 'ignore'), stream)
def prompt_for_password(prompt=None, no_colon=False, stream=None):
"""
Prompts for and returns a new password if required; otherwise, returns
None.
A trailing colon is appended unless ``no_colon`` is True.
If the user supplies an empty password, the user will be re-prompted until
they enter a non-empty password.
``prompt_for_password`` autogenerates the user prompt based on the current
host being connected to. To override this, specify a string value for
``prompt``.
``stream`` is the stream the prompt will be printed to; if not given,
defaults to ``sys.stderr``.
"""
from fabric.state import env
handle_prompt_abort("a connection or sudo password")
stream = stream or sys.stderr
# Construct prompt
default = "[%s] Login password for '%s'" % (env.host_string, env.user)
password_prompt = prompt if (prompt is not None) else default
if not no_colon:
password_prompt += ": "
# Get new password value
new_password = _password_prompt(password_prompt, stream)
# Otherwise, loop until user gives us a non-empty password (to prevent
# returning the empty string, and to avoid unnecessary network overhead.)
while not new_password:
print("Sorry, you can't enter an empty password. Please try again.")
new_password = _password_prompt(password_prompt, stream)
return new_password
def needs_host(func):
"""
Prompt user for value of ``env.host_string`` when ``env.host_string`` is
empty.
This decorator is basically a safety net for silly users who forgot to
specify the host/host list in one way or another. It should be used to wrap
operations which require a network connection.
Due to how we execute commands per-host in ``main()``, it's not possible to
specify multiple hosts at this point in time, so only a single host will be
prompted for.
Because this decorator sets ``env.host_string``, it will prompt once (and
only once) per command. As ``main()`` clears ``env.host_string`` between
commands, this decorator will also end up prompting the user once per
command (in the case where multiple commands have no hosts set, of course.)
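Example (illustrative fabfile usage)::

    @needs_host
    def host_type():
        run("uname -s")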
"""
from fabric.state import env
@wraps(func)
def host_prompting_wrapper(*args, **kwargs):
while not env.get('host_string', False):
handle_prompt_abort("the target host connection string")
host_string = raw_input("No hosts found. Please specify (single)"
" host string for connection: ")
env.update(to_dict(host_string))
return func(*args, **kwargs)
host_prompting_wrapper.undecorated = func
return host_prompting_wrapper
def disconnect_all():
"""
Disconnect from all currently connected servers.
Used at the end of ``fab``'s main loop, and also intended for use by
library users.
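Library usage sketch (``do_deploy`` is a hypothetical entry point)::

    from fabric.network import disconnect_all
    try:
        do_deploy()
    finally:
        disconnect_all()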
"""
from fabric.state import connections, output
# Explicitly disconnect from all servers
for key in connections.keys():
if output.status:
# Here we can't use the py3k print(x, end=" ")
# because of 2.5 backwards compatibility
sys.stdout.write("Disconnecting from %s... " % denormalize(key))
connections[key].close()
del connections[key]
if output.status:
sys.stdout.write("done.\n")
fabric-1.10.2/fabric/operations.py 0000664 0000000 0000000 00000144074 12541056305 0017017 0 ustar 00root root 0000000 0000000 """
Functions to be used in fabfiles and other non-core code, such as run()/sudo().
"""
from __future__ import with_statement
import os
import os.path
import posixpath
import re
import subprocess
import sys
import time
from glob import glob
from contextlib import closing, contextmanager
from fabric.context_managers import (settings, char_buffered, hide,
quiet as quiet_manager, warn_only as warn_only_manager)
from fabric.io import output_loop, input_loop
from fabric.network import needs_host, ssh, ssh_config
from fabric.sftp import SFTP
from fabric.state import env, connections, output, win32, default_channel
from fabric.thread_handling import ThreadHandler
from fabric.utils import (
abort,
error,
handle_prompt_abort,
indent,
_pty_size,
warn,
apply_lcwd
)
def _shell_escape(string):
"""
Escape double quotes, backticks and dollar signs in given ``string``.
For example::
>>> _shell_escape('abc$')
'abc\\\\$'
>>> _shell_escape('"')
'\\\\"'
"""
for char in ('"', '$', '`'):
string = string.replace(char, '\%s' % char)
return string
class _AttributeString(str):
"""
Simple string subclass to allow arbitrary attribute access.
"""
@property
def stdout(self):
return str(self)
class _AttributeList(list):
"""
Like _AttributeString, but for lists.
"""
pass
# Can't wait till Python versions supporting 'def func(*args, foo=bar)' become
# widespread :(
def require(*keys, **kwargs):
"""
Check for given keys in the shared environment dict and abort if not found.
Positional arguments should be strings signifying what env vars should be
checked for. If any of the given arguments do not exist, Fabric will abort
execution and print the names of the missing keys.
The optional keyword argument ``used_for`` may be a string, which will be
printed in the error output to inform users why this requirement is in
place. ``used_for`` is printed as part of a string similar to::
"Th(is|ese) variable(s) (are|is) used for %s"
so format it appropriately.
The optional keyword argument ``provided_by`` may be a list of functions or
function names or a single function or function name which the user should
be able to execute in order to set the key or keys; it will be included in
the error output if requirements are not met.
Note: it is assumed that the keyword arguments apply to all given keys as a
group. If you feel the need to specify more than one ``used_for``, for
example, you should break your logic into multiple calls to ``require()``.
.. versionchanged:: 1.1
Allow iterable ``provided_by`` values instead of just single values.
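Example (illustrative; ``staging`` is a hypothetical task that sets
``env.hosts``)::

    def staging():
        env.hosts = ['staging.example.com']

    def migrate():
        require('hosts', provided_by=[staging],
            used_for="connecting to the database servers")
        run('./manage.py migrate')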
"""
# If all keys exist and are non-empty, we're good, so keep going.
missing_keys = filter(lambda x: x not in env or (x in env and
isinstance(env[x], (dict, list, tuple, set)) and not env[x]), keys)
if not missing_keys:
return
# Pluralization
if len(missing_keys) > 1:
variable = "variables were"
used = "These variables are"
else:
variable = "variable was"
used = "This variable is"
# Regardless of kwargs, print what was missing. (Be graceful if used outside
# of a command.)
if 'command' in env:
prefix = "The command '%s' failed because the " % env.command
else:
prefix = "The "
msg = "%sfollowing required environment %s not defined:\n%s" % (
prefix, variable, indent(missing_keys)
)
# Print used_for if given
if 'used_for' in kwargs:
msg += "\n\n%s used for %s" % (used, kwargs['used_for'])
# And print provided_by if given
if 'provided_by' in kwargs:
funcs = kwargs['provided_by']
# non-iterable is given, treat it as a list of this single item
if not hasattr(funcs, '__iter__'):
funcs = [funcs]
if len(funcs) > 1:
command = "one of the following commands"
else:
command = "the following command"
to_s = lambda obj: getattr(obj, '__name__', str(obj))
provided_by = [to_s(obj) for obj in funcs]
msg += "\n\nTry running %s prior to this one, to fix the problem:\n%s"\
% (command, indent(provided_by))
abort(msg)
def prompt(text, key=None, default='', validate=None):
"""
Prompt user with ``text`` and return the input (like ``raw_input``).
A single space character will be appended for convenience, but nothing
else. Thus, you may want to end your prompt text with a question mark or a
colon, e.g. ``prompt("What hostname?")``.
If ``key`` is given, the user's input will be stored as ``env.<key>`` in
addition to being returned by `prompt`. If the key already existed in
``env``, its value will be overwritten and a warning printed to the user.
If ``default`` is given, it is displayed in square brackets and used if the
user enters nothing (i.e. presses Enter without entering any text).
``default`` defaults to the empty string. If non-empty, a space will be
appended, so that a call such as ``prompt("What hostname?",
default="foo")`` would result in a prompt of ``What hostname? [foo]`` (with
a trailing space after the ``[foo]``.)
The optional keyword argument ``validate`` may be a callable or a string:
* If a callable, it is called with the user's input, and should return the
value to be stored on success. On failure, it should raise an exception
with an exception message, which will be printed to the user.
* If a string, the value passed to ``validate`` is used as a regular
expression. It is thus recommended to use raw strings in this case. Note
that if the regular expression is not fully matching (i.e. bounded by
``^`` and ``$``), it will be made so. In other words, the input must fully
match the regex.
Either way, `prompt` will re-prompt until validation passes (or the user
hits ``Ctrl-C``).
.. note::
`~fabric.operations.prompt` honors :ref:`env.abort_on_prompts
<abort-on-prompts>` and will call `~fabric.utils.abort` instead of
prompting if that flag is set to ``True``. If you want to block on user
input regardless, try wrapping with
`~fabric.context_managers.settings`.
Examples::
# Simplest form:
environment = prompt('Please specify target environment: ')
# With default, and storing as env.dish:
prompt('Specify favorite dish: ', 'dish', default='spam & eggs')
# With validation, i.e. requiring integer input:
prompt('Please specify process nice level: ', key='nice', validate=int)
# With validation against a regular expression:
release = prompt('Please supply a release name',
validate=r'^\w+-\d+(\.\d+)?$')
# Prompt regardless of the global abort-on-prompts setting:
with settings(abort_on_prompts=False):
prompt('I seriously need an answer on this! ')
"""
handle_prompt_abort("a user-specified prompt() call")
# Store previous env value for later display, if necessary
if key:
previous_value = env.get(key)
# Set up default display
default_str = ""
if default != '':
default_str = " [%s] " % str(default).strip()
else:
default_str = " "
# Construct full prompt string
prompt_str = text.strip() + default_str
# Loop until we pass validation
value = None
while value is None:
# Get input
value = raw_input(prompt_str) or default
# Handle validation
if validate:
# Callable
if callable(validate):
# Callable validate() must raise an exception if validation
# fails.
try:
value = validate(value)
except Exception, e:
# Reset value so we stay in the loop
value = None
print("Validation failed for the following reason:")
print(indent(e.message) + "\n")
# String / regex must match and will be empty if validation fails.
else:
# Need to transform regex into full-matching one if it's not.
if not validate.startswith('^'):
validate = r'^' + validate
if not validate.endswith('$'):
validate += r'$'
result = re.findall(validate, value)
if not result:
print("Regular expression validation failed: '%s' does not match '%s'\n" % (value, validate))
# Reset value so we stay in the loop
value = None
# At this point, value must be valid, so update env if necessary
if key:
env[key] = value
# Print warning if we overwrote some other value
if key and previous_value is not None and previous_value != value:
warn("overwrote previous env variable '%s'; used to be '%s', is now '%s'." % (
key, previous_value, value
))
# And return the value, too, just in case someone finds that useful.
return value
@needs_host
def put(local_path=None, remote_path=None, use_sudo=False,
mirror_local_mode=False, mode=None, use_glob=True, temp_dir=""):
"""
Upload one or more files to a remote host.
`~fabric.operations.put` returns an iterable containing the absolute file
paths of all remote files uploaded. This iterable also exhibits a
``.failed`` attribute containing any local file paths which failed to
upload (and may thus be used as a boolean test.) You may also check
``.succeeded`` which is equivalent to ``not .failed``.
``local_path`` may be a relative or absolute local file or directory path,
and may contain shell-style wildcards, as understood by the Python ``glob``
module (give ``use_glob=False`` to disable this behavior). Tilde expansion
(as implemented by ``os.path.expanduser``) is also performed.
``local_path`` may alternately be a file-like object, such as the result of
``open('path')`` or a ``StringIO`` instance.
.. note::
In this case, `~fabric.operations.put` will attempt to read the entire
contents of the file-like object by rewinding it using ``seek`` (and
will use ``tell`` afterwards to preserve the previous file position).
``remote_path`` may also be a relative or absolute location, but applied to
the remote host. Relative paths are relative to the remote user's home
directory, but tilde expansion (e.g. ``~/.ssh/``) will also be performed if
necessary.
An empty string, in either path argument, will be replaced by the
appropriate end's current working directory.
While the SFTP protocol (which `put` uses) has no direct ability to upload
files to locations not owned by the connecting user, you may specify
``use_sudo=True`` to work around this. When set, this setting causes `put`
to upload the local files to a temporary location on the remote end
(defaults to remote user's ``$HOME``; this may be overridden via
``temp_dir``), and then use `sudo` to move them to ``remote_path``.
In some use cases, it is desirable to force a newly uploaded file to match
the mode of its local counterpart (such as when uploading executable
scripts). To do this, specify ``mirror_local_mode=True``.
Alternately, you may use the ``mode`` kwarg to specify an exact mode, in
the same vein as ``os.chmod``, such as an exact octal number (``0755``) or
a string representing one (``"0755"``).
`~fabric.operations.put` will honor `~fabric.context_managers.cd`, so
relative values in ``remote_path`` will be prepended by the current remote
working directory, if applicable. Thus, for example, the below snippet
would attempt to upload to ``/tmp/files/test.txt`` instead of
``~/files/test.txt``::
with cd('/tmp'):
put('/path/to/local/test.txt', 'files')
Use of `~fabric.context_managers.lcd` will affect ``local_path`` in the
same manner.
Examples::
put('bin/project.zip', '/tmp/project.zip')
put('*.py', 'cgi-bin/')
put('index.html', 'index.html', mode=0755)
.. note::
If a file-like object such as StringIO has a ``name`` attribute, that
will be used in Fabric's printed output instead of the default
``<file obj>``
.. versionchanged:: 1.0
Now honors the remote working directory as manipulated by
`~fabric.context_managers.cd`, and the local working directory as
manipulated by `~fabric.context_managers.lcd`.
.. versionchanged:: 1.0
Now allows file-like objects in the ``local_path`` argument.
.. versionchanged:: 1.0
Directories may be specified in the ``local_path`` argument and will
trigger recursive uploads.
.. versionchanged:: 1.0
Return value is now an iterable of uploaded remote file paths which
also exhibits the ``.failed`` and ``.succeeded`` attributes.
.. versionchanged:: 1.5
Allow a ``name`` attribute on file-like objects for log output
.. versionchanged:: 1.7
Added ``use_glob`` option to allow disabling of globbing.
"""
# Handle empty local path
local_path = local_path or os.getcwd()
# Test whether local_path is a path or a file-like object
local_is_path = not (hasattr(local_path, 'read') \
and callable(local_path.read))
ftp = SFTP(env.host_string)
with closing(ftp) as ftp:
home = ftp.normalize('.')
# Empty remote path implies cwd
remote_path = remote_path or home
# Expand tildes
if remote_path.startswith('~'):
remote_path = remote_path.replace('~', home, 1)
# Honor cd() (assumes Unix style file paths on remote end)
if not os.path.isabs(remote_path) and env.get('cwd'):
remote_path = env.cwd.rstrip('/') + '/' + remote_path
if local_is_path:
# Apply lcwd, expand tildes, etc
local_path = os.path.expanduser(local_path)
local_path = apply_lcwd(local_path, env)
if use_glob:
# Glob local path
names = glob(local_path)
else:
# Check if file exists first so ValueError gets raised
if os.path.exists(local_path):
names = [local_path]
else:
names = []
else:
names = [local_path]
# Make sure local arg exists
if local_is_path and not names:
err = "'%s' is not a valid local path or glob." % local_path
raise ValueError(err)
# Sanity check and weird cases
if ftp.exists(remote_path):
if local_is_path and len(names) != 1 and not ftp.isdir(remote_path):
raise ValueError("'%s' is not a directory" % remote_path)
# Iterate over all given local files
remote_paths = []
failed_local_paths = []
for lpath in names:
try:
if local_is_path and os.path.isdir(lpath):
p = ftp.put_dir(lpath, remote_path, use_sudo,
mirror_local_mode, mode, temp_dir)
remote_paths.extend(p)
else:
p = ftp.put(lpath, remote_path, use_sudo, mirror_local_mode,
mode, local_is_path, temp_dir)
remote_paths.append(p)
except Exception, e:
msg = "put() encountered an exception while uploading '%s'"
failure = lpath if local_is_path else ""
failed_local_paths.append(failure)
error(message=msg % lpath, exception=e)
ret = _AttributeList(remote_paths)
ret.failed = failed_local_paths
ret.succeeded = not ret.failed
return ret
@needs_host
def get(remote_path, local_path=None, use_sudo=False, temp_dir=""):
"""
Download one or more files from a remote host.
`~fabric.operations.get` returns an iterable containing the absolute paths
to all local files downloaded, which will be empty if ``local_path`` was a
StringIO object (see below for more on using StringIO). This object will
also exhibit a ``.failed`` attribute containing any remote file paths which
failed to download, and a ``.succeeded`` attribute equivalent to ``not
.failed``.
``remote_path`` is the remote file or directory path to download, which may
contain shell glob syntax, e.g. ``"/var/log/apache2/*.log"``, and will have
tildes replaced by the remote home directory. Relative paths will be
considered relative to the remote user's home directory, or the current
remote working directory as manipulated by `~fabric.context_managers.cd`.
If the remote path points to a directory, that directory will be downloaded
recursively.
``local_path`` is the local file path where the downloaded file or files
will be stored. If relative, it will honor the local current working
directory as manipulated by `~fabric.context_managers.lcd`. It may be
interpolated, using standard Python dict-based interpolation, with the
following variables:
* ``host``: The value of ``env.host_string``, e.g. ``myhostname`` or
``user@myhostname-222`` (the colon between hostname and port is turned
into a dash to maximize filesystem compatibility)
* ``dirname``: The directory part of the remote file path, e.g. the
``src/projectname`` in ``src/projectname/utils.py``.
* ``basename``: The filename part of the remote file path, e.g. the
``utils.py`` in ``src/projectname/utils.py``
* ``path``: The full remote path, e.g. ``src/projectname/utils.py``.
While the SFTP protocol (which `get` uses) has no direct ability to download
files from locations not owned by the connecting user, you may specify
``use_sudo=True`` to work around this. When set, this setting allows `get`
to copy (using sudo) the remote files to a temporary location on the remote end
(defaults to remote user's ``$HOME``; this may be overridden via ``temp_dir``),
and then download them to ``local_path``.
.. note::
When ``remote_path`` is an absolute directory path, only the inner
directories will be recreated locally and passed into the above
variables. So for example, ``get('/var/log', '%(path)s')`` would start
writing out files like ``apache2/access.log``,
``postgresql/8.4/postgresql.log``, etc, in the local working directory.
It would **not** write out e.g. ``var/log/apache2/access.log``.
Additionally, when downloading a single file, ``%(dirname)s`` and
``%(path)s`` do not make as much sense and will be empty and equivalent
to ``%(basename)s``, respectively. Thus a call like
``get('/var/log/apache2/access.log', '%(path)s')`` will save a local
file named ``access.log``, not ``var/log/apache2/access.log``.
This behavior is intended to be consistent with the command-line
``scp`` program.
If left blank, ``local_path`` defaults to ``"%(host)s/%(path)s"`` in order
to be safe for multi-host invocations.
.. warning::
If your ``local_path`` argument does not contain ``%(host)s`` and your
`~fabric.operations.get` call runs against multiple hosts, your local
files will be overwritten on each successive run!
If ``local_path`` does not make use of the above variables (i.e. if it is a
simple, explicit file path) it will act similar to ``scp`` or ``cp``,
overwriting pre-existing files if necessary, downloading into a directory
if given (e.g. ``get('/path/to/remote_file.txt', 'local_directory')`` will
create ``local_directory/remote_file.txt``) and so forth.
``local_path`` may alternately be a file-like object, such as the result of
``open('path', 'w')`` or a ``StringIO`` instance.
.. note::
Attempting to `get` a directory into a file-like object is not valid
and will result in an error.
.. note::
This function will use ``seek`` and ``tell`` to overwrite the entire
contents of the file-like object, in order to be consistent with the
behavior of `~fabric.operations.put` (which also considers the entire
file). However, unlike `~fabric.operations.put`, the file pointer will
not be restored to its previous location, as that doesn't make as much
sense here and/or may not even be possible.
.. note::
If a file-like object such as StringIO has a ``name`` attribute, that
will be used in Fabric's printed output instead of the default
``<file obj>``
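Examples (illustrative)::

    # Download one file, keeping its basename:
    get('/var/log/syslog', 'syslog')
    # Download a directory tree into per-host subdirectories:
    get('/var/log/apache2', '%(host)s/%(path)s')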
.. versionchanged:: 1.0
Now honors the remote working directory as manipulated by
`~fabric.context_managers.cd`, and the local working directory as
manipulated by `~fabric.context_managers.lcd`.
.. versionchanged:: 1.0
Now allows file-like objects in the ``local_path`` argument.
.. versionchanged:: 1.0
``local_path`` may now contain interpolated path- and host-related
variables.
.. versionchanged:: 1.0
Directories may be specified in the ``remote_path`` argument and will
trigger recursive downloads.
.. versionchanged:: 1.0
Return value is now an iterable of downloaded local file paths, which
also exhibits the ``.failed`` and ``.succeeded`` attributes.
.. versionchanged:: 1.5
Allow a ``name`` attribute on file-like objects for log output
"""
# Handle empty local path / default kwarg value
local_path = local_path or "%(host)s/%(path)s"
# Test whether local_path is a path or a file-like object
local_is_path = not (hasattr(local_path, 'write') \
and callable(local_path.write))
# Honor lcd() where it makes sense
if local_is_path:
local_path = apply_lcwd(local_path, env)
ftp = SFTP(env.host_string)
with closing(ftp) as ftp:
home = ftp.normalize('.')
# Expand home directory markers (tildes, etc)
if remote_path.startswith('~'):
remote_path = remote_path.replace('~', home, 1)
if local_is_path:
local_path = os.path.expanduser(local_path)
# Honor cd() (assumes Unix style file paths on remote end)
if not os.path.isabs(remote_path):
# Honor cwd if it's set (usually by with cd():)
if env.get('cwd'):
remote_path_escaped = env.cwd.rstrip('/')
remote_path_escaped = remote_path_escaped.replace('\\ ', ' ')
remote_path = remote_path_escaped + '/' + remote_path
# Otherwise, be relative to remote home directory (SFTP server's
# '.')
else:
remote_path = posixpath.join(home, remote_path)
# Track final local destination files so we can return a list
local_files = []
failed_remote_files = []
try:
# Glob remote path if it's a directory; otherwise use as-is
if (
ftp.isdir(remote_path)
or '*' in remote_path or '?' in remote_path
):
names = ftp.glob(remote_path)
else:
names = [remote_path]
# Handle invalid local-file-object situations
if not local_is_path:
if len(names) > 1 or ftp.isdir(names[0]):
error("[%s] %s is a glob or directory, but local_path is a file object!" % (env.host_string, remote_path))
for remote_path in names:
if ftp.isdir(remote_path):
result = ftp.get_dir(remote_path, local_path, use_sudo, temp_dir)
local_files.extend(result)
else:
# Perform actual get. If getting to real local file path,
# add result (will be true final path value) to
# local_files. File-like objects are omitted.
result = ftp.get(remote_path, local_path, use_sudo, local_is_path, os.path.basename(remote_path), temp_dir)
if local_is_path:
local_files.append(result)
except Exception, e:
failed_remote_files.append(remote_path)
msg = "get() encountered an exception while downloading '%s'"
error(message=msg % remote_path, exception=e)
ret = _AttributeList(local_files if local_is_path else [])
ret.failed = failed_remote_files
ret.succeeded = not ret.failed
return ret
def _sudo_prefix_argument(argument, value):
if value is None:
return ""
if str(value).isdigit():
value = "#%s" % value
return ' %s "%s"' % (argument, value)
def _sudo_prefix(user, group=None):
"""
Return ``env.sudo_prefix`` with ``user``/``group`` inserted if necessary.
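For example (sketch; assumes the default ``env.sudo_prefix`` and
``env.sudo_prompt``), ``_sudo_prefix('www-data')`` returns roughly::

    sudo -S -p 'sudo password:' -u "www-data"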
"""
# Insert env.sudo_prompt into env.sudo_prefix
prefix = env.sudo_prefix % env
if user is not None or group is not None:
return "%s%s%s " % (prefix,
_sudo_prefix_argument('-u', user),
_sudo_prefix_argument('-g', group))
return prefix
def _shell_wrap(command, shell_escape, shell=True, sudo_prefix=None):
"""
Conditionally wrap given command in env.shell (while honoring sudo.)
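For example (sketch; assumes the default ``env.shell`` of
``/bin/bash -l -c``)::

    >>> _shell_wrap('ls /tmp', shell_escape=True)
    '/bin/bash -l -c "ls /tmp"'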
"""
# Honor env.shell, while allowing the 'shell' kwarg to override it (at
# least in terms of turning it off.)
if shell and not env.use_shell:
shell = False
# Sudo plus space, or empty string
if sudo_prefix is None:
sudo_prefix = ""
else:
sudo_prefix += " "
# If we're shell wrapping, prefix shell and space. Next, escape the command
# if requested, and then quote it. Otherwise, empty string.
if shell:
shell = env.shell + " "
if shell_escape:
command = _shell_escape(command)
command = '"%s"' % command
else:
shell = ""
# Resulting string should now have correct formatting
return sudo_prefix + shell + command
def _prefix_commands(command, which):
"""
Prefixes ``command`` with all prefixes found in ``env.command_prefixes``.
``env.command_prefixes`` is a list of strings which is modified by the
`~fabric.context_managers.prefix` context manager.
This function also handles a special-case prefix, ``cwd``, used by
`~fabric.context_managers.cd`. The ``which`` kwarg should be a string,
``"local"`` or ``"remote"``, which will determine whether ``cwd`` or
``lcwd`` is used.
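For example (sketch), with ``cd('/tmp')`` and ``prefix('workon myenv')``
active, a remote command ``ls`` becomes roughly::

    cd /tmp >/dev/null && workon myenv && ls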
"""
# Local prefix list (to hold env.command_prefixes + any special cases)
prefixes = list(env.command_prefixes)
# Handle current working directory, which gets its own special case due to
# being a path string that gets grown/shrunk, instead of just a single
# string or lack thereof.
# Also place it at the front of the list, in case user is expecting another
# prefixed command to be "in" the current working directory.
cwd = env.cwd if which == 'remote' else env.lcwd
redirect = " >/dev/null" if not win32 else ''
if cwd:
prefixes.insert(0, 'cd %s%s' % (cwd, redirect))
glue = " && "
prefix = (glue.join(prefixes) + glue) if prefixes else ""
return prefix + command
def _prefix_env_vars(command, local=False):
"""
Prefixes ``command`` with any shell environment vars, e.g. ``PATH=foo ``.
Currently, this only applies the PATH updating implemented in
`~fabric.context_managers.path` and environment variables from
`~fabric.context_managers.shell_env`.
Will switch to using Windows style 'SET' commands when invoked by
``local()`` and on a Windows localhost.
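For example (sketch; POSIX remote end), with ``shell_env(FOO='bar')``
active, a remote command ``ls`` becomes roughly::

    export FOO="bar" && ls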
"""
env_vars = {}
# path(): local shell env var update, appending/prepending/replacing $PATH
path = env.path
if path:
if env.path_behavior == 'append':
path = '$PATH:\"%s\"' % path
elif env.path_behavior == 'prepend':
path = '\"%s\":$PATH' % path
elif env.path_behavior == 'replace':
path = '\"%s\"' % path
env_vars['PATH'] = path
# shell_env()
env_vars.update(env.shell_env)
if env_vars:
set_cmd, exp_cmd = '', ''
if win32 and local:
set_cmd = 'SET '
else:
exp_cmd = 'export '
exports = ' '.join(
'%s%s="%s"' % (set_cmd, k, v if k == 'PATH' else _shell_escape(v))
for k, v in env_vars.iteritems()
)
shell_env_str = '%s%s && ' % (exp_cmd, exports)
else:
shell_env_str = ''
return shell_env_str + command
def _execute(channel, command, pty=True, combine_stderr=None,
invoke_shell=False, stdout=None, stderr=None, timeout=None):
"""
Execute ``command`` over ``channel``.
``pty`` controls whether a pseudo-terminal is created.
``combine_stderr`` controls whether we call ``channel.set_combine_stderr``.
By default, the global setting for this behavior (:ref:`env.combine_stderr
<combine_stderr>`) is consulted, but you may specify ``True`` or ``False``
here to override it.
``invoke_shell`` controls whether we use ``exec_command`` or
``invoke_shell`` (plus a handful of other things, such as always forcing a
pty.)
Returns a three-tuple of (``stdout``, ``stderr``, ``status``), where
``stdout``/``stderr`` are captured output strings and ``status`` is the
program's return code, if applicable.
"""
# stdout/stderr redirection
stdout = stdout or sys.stdout
stderr = stderr or sys.stderr
# Timeout setting control
timeout = env.command_timeout if (timeout is None) else timeout
# What to do with Ctrl-C?
remote_interrupt = env.remote_interrupt
with char_buffered(sys.stdin):
# Combine stdout and stderr to get around oddball mixing issues
if combine_stderr is None:
combine_stderr = env.combine_stderr
channel.set_combine_stderr(combine_stderr)
# Assume pty use, and allow overriding of this either via kwarg or env
# var. (invoke_shell always wants a pty no matter what.)
using_pty = True
if not invoke_shell and (not pty or not env.always_use_pty):
using_pty = False
# Request pty with size params (default to 80x24, obtain real
# parameters if on POSIX platform)
if using_pty:
rows, cols = _pty_size()
channel.get_pty(width=cols, height=rows)
# Use SSH agent forwarding from 'ssh' if enabled by user
config_agent = ssh_config().get('forwardagent', 'no').lower() == 'yes'
forward = None
if env.forward_agent or config_agent:
forward = ssh.agent.AgentRequestHandler(channel)
# Kick off remote command
if invoke_shell:
channel.invoke_shell()
if command:
channel.sendall(command + "\n")
else:
channel.exec_command(command=command)
# Init stdout, stderr capturing. Must use lists instead of strings as
# strings are immutable and we're using these as pass-by-reference
stdout_buf, stderr_buf = [], []
if invoke_shell:
stdout_buf = stderr_buf = None
workers = (
ThreadHandler('out', output_loop, channel, "recv",
capture=stdout_buf, stream=stdout, timeout=timeout),
ThreadHandler('err', output_loop, channel, "recv_stderr",
capture=stderr_buf, stream=stderr, timeout=timeout),
ThreadHandler('in', input_loop, channel, using_pty)
)
if remote_interrupt is None:
remote_interrupt = invoke_shell
if remote_interrupt and not using_pty:
remote_interrupt = False
while True:
if channel.exit_status_ready():
break
else:
# Check for thread exceptions here so we can raise ASAP
# (without chance of getting blocked by, or hidden by an
# exception within, recv_exit_status())
for worker in workers:
worker.raise_if_needed()
try:
time.sleep(ssh.io_sleep)
except KeyboardInterrupt:
if not remote_interrupt:
raise
channel.send('\x03')
# Obtain exit code of remote program now that we're done.
status = channel.recv_exit_status()
# Wait for threads to exit so we aren't left with stale threads
for worker in workers:
worker.thread.join()
worker.raise_if_needed()
# Close channel
channel.close()
# Close any agent forward proxies
if forward is not None:
forward.close()
# Update stdout/stderr with captured values if applicable
if not invoke_shell:
stdout_buf = ''.join(stdout_buf).strip()
stderr_buf = ''.join(stderr_buf).strip()
# Tie off "loose" output by printing a newline. Helps to ensure any
# following print()s aren't on the same line as a trailing line prefix
# or similar. However, don't add an extra newline if we've already
# ended up with one, as that adds an entire blank line instead.
if output.running \
and ((output.stdout and stdout_buf and not stdout_buf.endswith("\n")) \
or (output.stderr and stderr_buf and not stderr_buf.endswith("\n"))):
print("")
return stdout_buf, stderr_buf, status
@needs_host
def open_shell(command=None):
"""
Invoke a fully interactive shell on the remote end.
If ``command`` is given, it will be sent down the pipe before handing
control over to the invoking user.
This function is most useful for when you need to interact with a heavily
shell-based command or series of commands, such as when debugging or when
fully interactive recovery is required upon remote program failure.
It should be considered an easy way to work an interactive shell session
into the middle of a Fabric script and is *not* a drop-in replacement for
`~fabric.operations.run`, which is also capable of interacting with the
remote end (albeit only while its given command is executing) and has much
stronger programmatic abilities such as error handling and stdout/stderr
capture.
Specifically, `~fabric.operations.open_shell` provides a better interactive
experience than `~fabric.operations.run`, but use of a full remote shell
prevents Fabric from determining whether programs run within the shell have
failed, and pollutes the stdout/stderr stream with shell output such as
login banners, prompts and echoed stdin.
Thus, this function does not have a return value and will not trigger
Fabric's failure handling if any remote programs result in errors.
.. versionadded:: 1.0
"""
_execute(channel=default_channel(), command=command, pty=True,
combine_stderr=True, invoke_shell=True)
@contextmanager
def _noop():
yield
def _run_command(command, shell=True, pty=True, combine_stderr=True,
sudo=False, user=None, quiet=False, warn_only=False, stdout=None,
stderr=None, group=None, timeout=None, shell_escape=None):
"""
Underpinnings of `run` and `sudo`. See their docstrings for more info.
"""
manager = _noop
if warn_only:
manager = warn_only_manager
# Quiet's behavior is a superset of warn_only's, so it wins.
if quiet:
manager = quiet_manager
with manager():
# Set up new var so original argument can be displayed verbatim later.
given_command = command
# Check if shell_escape has been overridden in env
if shell_escape is None:
shell_escape = env.get('shell_escape', True)
# Handle context manager modifications, and shell wrapping
wrapped_command = _shell_wrap(
_prefix_commands(_prefix_env_vars(command), 'remote'),
shell_escape,
shell,
_sudo_prefix(user, group) if sudo else None
)
# Execute info line
which = 'sudo' if sudo else 'run'
if output.debug:
print("[%s] %s: %s" % (env.host_string, which, wrapped_command))
elif output.running:
print("[%s] %s: %s" % (env.host_string, which, given_command))
# Actual execution, stdin/stdout/stderr handling, and termination
result_stdout, result_stderr, status = _execute(
channel=default_channel(), command=wrapped_command, pty=pty,
combine_stderr=combine_stderr, invoke_shell=False, stdout=stdout,
stderr=stderr, timeout=timeout)
# Assemble output string
out = _AttributeString(result_stdout)
err = _AttributeString(result_stderr)
# Error handling
out.failed = False
out.command = given_command
out.real_command = wrapped_command
if status not in env.ok_ret_codes:
out.failed = True
msg = "%s() received nonzero return code %s while executing" % (
which, status
)
if env.warn_only:
msg += " '%s'!" % given_command
else:
msg += "!\n\nRequested: %s\nExecuted: %s" % (
given_command, wrapped_command
)
error(message=msg, stdout=out, stderr=err)
# Attach return code to output string so users who have set things to
# warn only, can inspect the error code.
out.return_code = status
# Convenience mirror of .failed
out.succeeded = not out.failed
# Attach stderr for anyone interested in that.
out.stderr = err
return out
@needs_host
def run(command, shell=True, pty=True, combine_stderr=None, quiet=False,
warn_only=False, stdout=None, stderr=None, timeout=None, shell_escape=None):
"""
Run a shell command on a remote host.
If ``shell`` is True (the default), `run` will execute the given command
string via a shell interpreter, the value of which may be controlled by
setting ``env.shell`` (defaulting to something similar to ``/bin/bash -l -c
""``.) Any double-quote (``"``) or dollar-sign (``$``) characters
in ``command`` will be automatically escaped when ``shell`` is True.
`run` will return the result of the remote program's stdout as a single
(likely multiline) string. This string will exhibit ``failed`` and
``succeeded`` boolean attributes specifying whether the command failed or
succeeded, and will also include the return code as the ``return_code``
attribute. Furthermore, it includes a copy of the requested & actual
command strings executed, as ``.command`` and ``.real_command``,
respectively.
Any text entered in your local terminal will be forwarded to the remote
program as it runs, thus allowing you to interact with password or other
prompts naturally. For more on how this works, see
:doc:`/usage/interactivity`.
You may pass ``pty=False`` to forego creation of a pseudo-terminal on the
remote end in case the presence of one causes problems for the command in
question. However, this will force Fabric itself to echo any and all input
you type while the command is running, including sensitive passwords. (With
``pty=True``, the remote pseudo-terminal will echo for you, and will
intelligently handle password-style prompts.) See :ref:`pseudottys` for
details.
Similarly, if you need to programmatically examine the stderr stream of the
remote program (exhibited as the ``stderr`` attribute on this function's
return value), you may set ``combine_stderr=False``. Doing so has a high
chance of causing garbled output to appear on your terminal (though the
resulting strings returned by `~fabric.operations.run` will be properly
separated). For more info, please read :ref:`combine_streams`.
To ignore non-zero return codes, specify ``warn_only=True``. To both ignore
non-zero return codes *and* force a command to run silently, specify
``quiet=True``.
To override which local streams are used to display remote stdout and/or
stderr, specify ``stdout`` or ``stderr``. (By default, the regular
``sys.stdout`` and ``sys.stderr`` Python stream objects are used.)
For example, ``run("command", stderr=sys.stdout)`` would print the remote
standard error to the local standard out, while preserving it as its own
distinct attribute on the return value (as per above.) Alternately, you
could even provide your own stream objects or loggers, e.g. ``myout =
StringIO(); run("command", stdout=myout)``.
If you want an exception raised when the remote program takes too long to
run, specify ``timeout=N`` where ``N`` is an integer number of seconds,
after which to time out. This will cause ``run`` to raise a
`~fabric.exceptions.CommandTimeout` exception.
If you want to disable Fabric's automatic attempts at escaping quotes,
dollar signs etc., specify ``shell_escape=False``.
Examples::
run("ls /var/www/")
run("ls /home/myuser", shell=False)
output = run('ls /var/www/site1')
run("take_a_long_time", timeout=5)
.. versionadded:: 1.0
The ``succeeded`` and ``stderr`` return value attributes, the
``combine_stderr`` kwarg, and interactive behavior.
.. versionchanged:: 1.0
The default value of ``pty`` is now ``True``.
.. versionchanged:: 1.0.2
The default value of ``combine_stderr`` is now ``None`` instead of
``True``. However, the default *behavior* is unchanged, as the global
setting is still ``True``.
.. versionadded:: 1.5
The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs.
.. versionadded:: 1.5
The return value attributes ``.command`` and ``.real_command``.
.. versionadded:: 1.6
The ``timeout`` argument.
.. versionadded:: 1.7
The ``shell_escape`` argument.
"""
return _run_command(command, shell, pty, combine_stderr, quiet=quiet,
warn_only=warn_only, stdout=stdout, stderr=stderr, timeout=timeout,
shell_escape=shell_escape)
@needs_host
def sudo(command, shell=True, pty=True, combine_stderr=None, user=None,
quiet=False, warn_only=False, stdout=None, stderr=None, group=None,
timeout=None, shell_escape=None):
"""
Run a shell command on a remote host, with superuser privileges.
`sudo` is identical in every way to `run`, except that it will always wrap
the given ``command`` in a call to the ``sudo`` program to provide
superuser privileges.
`sudo` accepts additional ``user`` and ``group`` arguments, which are
passed to ``sudo`` and allow you to run as some user and/or group other
than root. On most systems, the ``sudo`` program can take a string
username/group or an integer userid/groupid (uid/gid); ``user`` and
``group`` may likewise be strings or integers.
You may set :ref:`env.sudo_user <sudo_user>` at module level or via
`~fabric.context_managers.settings` if you want multiple ``sudo`` calls to
have the same ``user`` value. An explicit ``user`` argument will, of
course, override this global setting.
Examples::
sudo("~/install_script.py")
sudo("mkdir /var/www/new_docroot", user="www-data")
sudo("ls /home/jdoe", user=1001)
result = sudo("ls /tmp/")
with settings(sudo_user='mysql'):
sudo("whoami") # prints 'mysql'
.. versionchanged:: 1.0
See the changed and added notes for `~fabric.operations.run`.
.. versionchanged:: 1.5
Now honors :ref:`env.sudo_user <sudo_user>`.
.. versionadded:: 1.5
The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs.
.. versionadded:: 1.5
The return value attributes ``.command`` and ``.real_command``.
.. versionadded:: 1.7
The ``shell_escape`` argument.
"""
return _run_command(
command, shell, pty, combine_stderr, sudo=True,
user=user if user else env.sudo_user,
group=group, quiet=quiet, warn_only=warn_only, stdout=stdout,
stderr=stderr, timeout=timeout, shell_escape=shell_escape,
)
def local(command, capture=False, shell=None):
"""
Run a command on the local system.
`local` is simply a convenience wrapper around the use of the builtin
Python ``subprocess`` module with ``shell=True`` activated. If you need to
do anything special, consider using the ``subprocess`` module directly.
``shell`` is passed directly to `subprocess.Popen
<http://docs.python.org/library/subprocess.html#subprocess.Popen>`_'s
``executable`` argument (which determines the local shell to use.) As per the
linked documentation, on Unix the default behavior is to use ``/bin/sh``,
so this option is useful for setting that value to e.g. ``/bin/bash``.
`local` is not currently capable of simultaneously printing and
capturing output, as `~fabric.operations.run`/`~fabric.operations.sudo`
do. The ``capture`` kwarg allows you to switch between printing and
capturing as necessary, and defaults to ``False``.
When ``capture=False``, the local subprocess' stdout and stderr streams are
hooked up directly to your terminal, though you may use the global
:doc:`output controls ` ``output.stdout`` and
``output.stderr`` to hide one or both if desired. In this mode, the return
value's stdout/stderr values are always empty.
When ``capture=True``, you will not see any output from the subprocess in
your terminal, but the return value will contain the captured
stdout/stderr.
In either case, as with `~fabric.operations.run` and
`~fabric.operations.sudo`, this return value exhibits the ``return_code``,
``stderr``, ``failed``, ``succeeded``, ``command`` and ``real_command``
attributes. See `run` for details.
`~fabric.operations.local` will honor the `~fabric.context_managers.lcd`
context manager, allowing you to control its current working directory
independently of the remote end (which honors
`~fabric.context_managers.cd`).
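Examples (illustrative)::

    local('make docs')
    sha = local('git rev-parse HEAD', capture=True)
    local('echo $0', shell='/bin/bash')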
.. versionchanged:: 1.0
Added the ``succeeded`` and ``stderr`` attributes.
.. versionchanged:: 1.0
Now honors the `~fabric.context_managers.lcd` context manager.
.. versionchanged:: 1.0
Changed the default value of ``capture`` from ``True`` to ``False``.
.. versionadded:: 1.9
The return value attributes ``.command`` and ``.real_command``.
"""
given_command = command
# Apply cd(), path() etc
with_env = _prefix_env_vars(command, local=True)
wrapped_command = _prefix_commands(with_env, 'local')
if output.debug:
print("[localhost] local: %s" % (wrapped_command))
elif output.running:
print("[localhost] local: " + given_command)
# Tie in to global output controls as best we can; our capture argument
# takes precedence over the output settings.
dev_null = None
if capture:
out_stream = subprocess.PIPE
err_stream = subprocess.PIPE
else:
dev_null = open(os.devnull, 'w+')
# Non-captured, hidden streams are discarded.
out_stream = None if output.stdout else dev_null
err_stream = None if output.stderr else dev_null
try:
cmd_arg = wrapped_command if win32 else [wrapped_command]
p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream,
stderr=err_stream, executable=shell,
close_fds=(not win32))
(stdout, stderr) = p.communicate()
finally:
if dev_null is not None:
dev_null.close()
# Handle error condition (deal with stdout being None, too)
out = _AttributeString(stdout.strip() if stdout else "")
err = _AttributeString(stderr.strip() if stderr else "")
out.command = given_command
out.real_command = wrapped_command
out.failed = False
out.return_code = p.returncode
out.stderr = err
if p.returncode not in env.ok_ret_codes:
out.failed = True
msg = "local() encountered an error (return code %s) while executing '%s'" % (p.returncode, command)
error(message=msg, stdout=out, stderr=err)
out.succeeded = not out.failed
# (If we were capturing, the stdout/stderr attributes hold the output;
# otherwise they wrap the empty string.)
return out
@needs_host
def reboot(wait=120, command='reboot'):
"""
Reboot the remote system.
Will temporarily tweak Fabric's reconnection settings (:ref:`timeout` and
:ref:`connection-attempts`) to ensure that reconnection does not give up
for at least ``wait`` seconds.
.. note::
As of Fabric 1.4, the ability to reconnect partway through a session no
longer requires use of internal APIs. While we are not officially
deprecating this function, adding more features to it will not be a
priority.
Users who want greater control
are encouraged to check out this function's (6 lines long, well
commented) source code and write their own adaptation using different
timeout/attempt values or additional logic.
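Such an adaptation might look like this (sketch only; ``my_reboot``
and all values are hypothetical)::

    def my_reboot(wait=300, step=10):
        attempts = int(round(float(wait) / float(step)))
        with settings(hide('running'), timeout=step,
            connection_attempts=attempts):
            sudo('reboot')
            time.sleep(step)
            connections.connect(env.host_string)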
.. versionadded:: 0.9.2
.. versionchanged:: 1.4
Changed the ``wait`` kwarg to be optional, and refactored to leverage
the new reconnection functionality; it may not actually have to wait
for ``wait`` seconds before reconnecting.
"""
# Shorter timeout for a more granular cycle than the default.
timeout = 5
# Use 'wait' as max total wait time
attempts = int(round(float(wait) / float(timeout)))
# Don't bleed settings, since this is supposed to be self-contained.
# User adaptations will probably want to drop the "with settings()" and
# just have globally set timeout/attempts values.
with settings(
hide('running'),
timeout=timeout,
connection_attempts=attempts
):
sudo(command)
# Try to make sure we don't slip in before pre-reboot lockdown
time.sleep(5)
# This is actually an internal-ish API call, but users can simply drop
# it in real fabfile use -- the next run/sudo/put/get/etc call will
# automatically trigger a reconnect.
# We use it here to force the reconnect while this function is still in
# control and has the above timeout settings enabled.
connections.connect(env.host_string)
# At this point we should be reconnected to the newly rebooted server.
fabric-1.10.2/fabric/sftp.py 0000664 0000000 0000000 00000031325 12541056305 0015602 0 ustar 00root root 0000000 0000000 from __future__ import with_statement
import hashlib
import os
import posixpath
import stat
import re
from fnmatch import filter as fnfilter
from fabric.state import output, connections, env
from fabric.utils import warn
from fabric.context_managers import settings
# TODO: use self.sftp.listdir_iter on Paramiko 1.15+
def _format_local(local_path, local_is_path):
"""Format a path for log output"""
if local_is_path:
return local_path
else:
# This allows users to set a name attr on their StringIO objects
# just like an open file object would have
return getattr(local_path, 'name', '<file obj>')
class SFTP(object):
"""
SFTP helper class, which is also a facade for ssh.SFTPClient.
"""
def __init__(self, host_string):
self.ftp = connections[host_string].open_sftp()
# Recall that __getattr__ is the "fallback" attribute getter, and is thus
# pretty safe to use for facade-like behavior as we're doing here.
def __getattr__(self, attr):
return getattr(self.ftp, attr)
def isdir(self, path):
try:
return stat.S_ISDIR(self.ftp.stat(path).st_mode)
except IOError:
return False
def islink(self, path):
try:
return stat.S_ISLNK(self.ftp.lstat(path).st_mode)
except IOError:
return False
def exists(self, path):
try:
self.ftp.lstat(path).st_mode
except IOError:
return False
return True
def glob(self, path):
from fabric.state import win32
dirpart, pattern = os.path.split(path)
rlist = self.ftp.listdir(dirpart)
names = fnfilter([f for f in rlist if not f[0] == '.'], pattern)
ret = [path]
if len(names):
s = '/'
ret = [dirpart.rstrip(s) + s + name.lstrip(s) for name in names]
if not win32:
ret = [posixpath.join(dirpart, name) for name in names]
return ret
def walk(self, top, topdown=True, onerror=None, followlinks=False):
from os.path import join
# We may not have read permission for top, in which case we can't get a
# list of the files the directory contains. os.path.walk always
# suppressed the exception then, rather than blow up for a minor reason
# when (say) a thousand readable directories are still left to visit.
# That logic is copied here.
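# Usage mirrors os.walk (illustrative sketch):
#   for dirpath, dirs, files in SFTP(env.host_string).walk('/var/log'):
#       ...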
try:
# (Adapted from the stdlib's walk; here we use the SFTP client's
# listdir instead.)
names = self.ftp.listdir(top)
except Exception, err:
if onerror is not None:
onerror(err)
return
dirs, nondirs = [], []
for name in names:
if self.isdir(join(top, name)):
dirs.append(name)
else:
nondirs.append(name)
if topdown:
yield top, dirs, nondirs
for name in dirs:
path = join(top, name)
if followlinks or not self.islink(path):
for x in self.walk(path, topdown, onerror, followlinks):
yield x
if not topdown:
yield top, dirs, nondirs
def mkdir(self, path, use_sudo):
from fabric.api import sudo, hide
if use_sudo:
with hide('everything'):
sudo('mkdir "%s"' % path)
else:
self.ftp.mkdir(path)
def get(self, remote_path, local_path, use_sudo, local_is_path, rremote=None, temp_dir=""):
from fabric.api import sudo, hide
# rremote => relative remote path, so get(/var/log) would result in
# this function being called with
# remote_path=/var/log/apache2/access.log and
# rremote=apache2/access.log
rremote = rremote if rremote is not None else remote_path
# Handle format string interpolation (e.g. %(dirname)s)
path_vars = {
'host': env.host_string.replace(':', '-'),
'basename': os.path.basename(rremote),
'dirname': os.path.dirname(rremote),
'path': rremote
}
if local_is_path:
# Naive fix to issue #711
escaped_path = re.sub(r'(%[^()]*\w)', r'%\1', local_path)
local_path = os.path.abspath(escaped_path % path_vars)
# Ensure we give ssh.SFTPClient a file by prepending and/or
# creating local directories as appropriate.
dirpath, filepath = os.path.split(local_path)
if dirpath and not os.path.exists(dirpath):
os.makedirs(dirpath)
if os.path.isdir(local_path):
local_path = os.path.join(local_path, path_vars['basename'])
if output.running:
print("[%s] download: %s <- %s" % (
env.host_string,
_format_local(local_path, local_is_path),
remote_path
))
# Warn about overwrites, but keep going
if local_is_path and os.path.exists(local_path):
msg = "Local file %s already exists and is being overwritten."
warn(msg % local_path)
# When using sudo, "bounce" the file through a guaranteed-unique file
# path in the default remote CWD (which, typically, the login user will
# have write permissions on) in order to sudo(cp) it.
if use_sudo:
target_path = remote_path
hasher = hashlib.sha1()
hasher.update(env.host_string)
hasher.update(target_path)
target_path = posixpath.join(temp_dir, hasher.hexdigest())
# Temporarily nuke 'cwd' so sudo() doesn't "cd" its cp command.
# (The target path has already been cwd-ified elsewhere.)
with settings(hide('everything'), cwd=""):
sudo('cp -p "%s" "%s"' % (remote_path, target_path))
# The user should always own the copied file.
sudo('chown %s "%s"' % (env.user, target_path))
# Only root and the user have the right to read the file
sudo('chmod %o "%s"' % (0400, target_path))
remote_path = target_path
try:
# File-like objects: reset to file seek 0 (to ensure full overwrite)
# and then use Paramiko's getfo() directly
getter = self.ftp.get
if not local_is_path:
local_path.seek(0)
getter = self.ftp.getfo
getter(remote_path, local_path)
finally:
# try to remove the temporary file after the download
if use_sudo:
with settings(hide('everything'), cwd=""):
sudo('rm -f "%s"' % remote_path)
# Return local_path object for posterity. (If mutated, caller will want
# to know.)
return local_path
def get_dir(self, remote_path, local_path, use_sudo, temp_dir):
# Decide what needs to be stripped from remote paths so they're all
# relative to the given remote_path
if os.path.basename(remote_path):
strip = os.path.dirname(remote_path)
else:
strip = os.path.dirname(os.path.dirname(remote_path))
# Store all paths gotten so we can return them when done
result = []
# Use our facsimile of os.walk to find all files within remote_path
for context, dirs, files in self.walk(remote_path):
# Normalize current directory to be relative
# E.g. remote_path of /var/log and current dir of /var/log/apache2
# would be turned into just 'apache2'
lcontext = rcontext = context.replace(strip, '', 1).lstrip('/')
# Prepend local path to that to arrive at the local mirrored
# version of this directory. So if local_path was 'mylogs', we'd
# end up with 'mylogs/apache2'
lcontext = os.path.join(local_path, lcontext)
# Download any files in current directory
for f in files:
# Construct full and relative remote paths to this file
rpath = posixpath.join(context, f)
rremote = posixpath.join(rcontext, f)
# If local_path isn't using a format string that expands to
# include its remote path, we need to add it here.
if "%(path)s" not in local_path \
and "%(dirname)s" not in local_path:
lpath = os.path.join(lcontext, f)
# Otherwise, just passthrough local_path to self.get()
else:
lpath = local_path
# Now we can make a call to self.get() with specific file paths
# on both ends.
result.append(self.get(rpath, lpath, use_sudo, True, rremote, temp_dir))
return result
def put(self, local_path, remote_path, use_sudo, mirror_local_mode, mode,
local_is_path, temp_dir):
from fabric.api import sudo, hide
pre = self.ftp.getcwd()
pre = pre if pre else ''
if local_is_path and self.isdir(remote_path):
basename = os.path.basename(local_path)
remote_path = posixpath.join(remote_path, basename)
if output.running:
print("[%s] put: %s -> %s" % (
env.host_string,
_format_local(local_path, local_is_path),
posixpath.join(pre, remote_path)
))
# When using sudo, "bounce" the file through a guaranteed-unique file
# path in the default remote CWD (which, typically, the login user will
# have write permissions on) in order to sudo(mv) it later.
if use_sudo:
target_path = remote_path
hasher = hashlib.sha1()
hasher.update(env.host_string)
hasher.update(target_path)
remote_path = posixpath.join(temp_dir, hasher.hexdigest())
# Read, ensuring we handle file-like objects correct re: seek pointer
putter = self.ftp.put
if not local_is_path:
old_pointer = local_path.tell()
local_path.seek(0)
putter = self.ftp.putfo
rattrs = putter(local_path, remote_path)
if not local_is_path:
local_path.seek(old_pointer)
# Handle modes if necessary
if (local_is_path and mirror_local_mode) or (mode is not None):
lmode = os.stat(local_path).st_mode if mirror_local_mode else mode
# Cast to octal integer in case of string
if isinstance(lmode, basestring):
lmode = int(lmode, 8)
lmode = lmode & 07777
rmode = rattrs.st_mode
# Only bitshift if we actually got an rmode
if rmode is not None:
rmode = (rmode & 07777)
if lmode != rmode:
if use_sudo:
# Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv
# command. (The target path has already been cwd-ified
# elsewhere.)
with settings(hide('everything'), cwd=""):
sudo('chmod %o \"%s\"' % (lmode, remote_path))
else:
self.ftp.chmod(remote_path, lmode)
if use_sudo:
# Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv command.
# (The target path has already been cwd-ified elsewhere.)
with settings(hide('everything'), cwd=""):
sudo("mv \"%s\" \"%s\"" % (remote_path, target_path))
# Revert to original remote_path for return value's sake
remote_path = target_path
return remote_path
def put_dir(self, local_path, remote_path, use_sudo, mirror_local_mode,
mode, temp_dir):
if os.path.basename(local_path):
strip = os.path.dirname(local_path)
else:
strip = os.path.dirname(os.path.dirname(local_path))
remote_paths = []
for context, dirs, files in os.walk(local_path):
rcontext = context.replace(strip, '', 1)
# normalize pathname separators with POSIX separator
rcontext = rcontext.replace(os.sep, '/')
rcontext = rcontext.lstrip('/')
rcontext = posixpath.join(remote_path, rcontext)
if not self.exists(rcontext):
self.mkdir(rcontext, use_sudo)
for d in dirs:
n = posixpath.join(rcontext, d)
if not self.exists(n):
self.mkdir(n, use_sudo)
for f in files:
local_path = os.path.join(context, f)
n = posixpath.join(rcontext, f)
p = self.put(local_path, n, use_sudo, mirror_local_mode, mode,
True, temp_dir)
remote_paths.append(p)
return remote_paths
fabric-1.10.2/fabric/state.py 0000664 0000000 0000000 00000027275 12541056305 0015757 0 ustar 00root root 0000000 0000000 """
Internal shared-state variables such as config settings and host lists.
"""
import os
import sys
from optparse import make_option
from fabric.network import HostConnectionCache, ssh
from fabric.version import get_version
from fabric.utils import _AliasDict, _AttributeDict
#
# Win32 flag
#
# Impacts a handful of platform specific behaviors. Note that Cygwin's Python
# is actually close enough to "real" UNIXes that it doesn't need (or want!) to
# use PyWin32 -- so we only test for literal Win32 setups (vanilla Python,
# ActiveState etc) here.
win32 = (sys.platform == 'win32')
#
# Environment dictionary - support structures
#
# By default, if the user (including code using Fabric as a library) doesn't
# set the username, we obtain the currently running username and use that.
def _get_system_username():
"""
Obtain name of current system user, which will be default connection user.
"""
import getpass
username = None
try:
username = getpass.getuser()
    # getpass.getuser is supported on both Unix and Windows systems.
    # getpass.getuser may call pwd.getpwuid, which in turn may raise KeyError
    # if it cannot find a username for the given UID, e.g. on ep.io and
    # similar "non VPS" style services. Rather than error out, leave the
    # 'default' username as None; callers can check for it later if needed.
except KeyError:
pass
except ImportError:
if win32:
import win32api
import win32security
import win32profile
username = win32api.GetUserName()
return username
def _rc_path():
"""
Return platform-specific default file path for $HOME/.fabricrc.
"""
rc_file = '.fabricrc'
rc_path = '~/' + rc_file
expanded_rc_path = os.path.expanduser(rc_path)
if expanded_rc_path == rc_path and win32:
from win32com.shell.shell import SHGetSpecialFolderPath
from win32com.shell.shellcon import CSIDL_PROFILE
expanded_rc_path = "%s/%s" % (
SHGetSpecialFolderPath(0, CSIDL_PROFILE),
rc_file
)
return expanded_rc_path
default_port = '22'  # kept as a string; it gets interpolated into host strings
default_ssh_config_path = os.path.join(os.path.expanduser('~'), '.ssh', 'config')
# Options/settings which exist both as environment keys and which can be set on
# the command line, are defined here. When used via `fab` they will be added to
# the optparse parser, and either way they are added to `env` below (i.e. the
# 'dest' value becomes the environment key and the value, the env value).
#
# Keep in mind that optparse changes hyphens to underscores when automatically
# deriving the `dest` name, e.g. `--reject-unknown-hosts` becomes
# `reject_unknown_hosts`.
#
# Furthermore, *always* specify some sort of default to avoid ending up with
# optparse.NO_DEFAULT (currently a two-tuple)! In general, None is a better
# default than ''.
#
# User-facing documentation for these are kept in sites/docs/env.rst.
env_options = [
make_option('-a', '--no_agent',
action='store_true',
default=False,
help="don't use the running SSH agent"
),
make_option('-A', '--forward-agent',
action='store_true',
default=False,
help="forward local agent to remote end"
),
make_option('--abort-on-prompts',
action='store_true',
default=False,
help="abort instead of prompting (for password, host, etc)"
),
make_option('-c', '--config',
dest='rcfile',
default=_rc_path(),
metavar='PATH',
help="specify location of config file to use"
),
make_option('--colorize-errors',
action='store_true',
default=False,
help="Color error output",
),
make_option('-D', '--disable-known-hosts',
action='store_true',
default=False,
help="do not load user known_hosts file"
),
make_option('-e', '--eagerly-disconnect',
action='store_true',
default=False,
help="disconnect from hosts as soon as possible"
),
make_option('-f', '--fabfile',
default='fabfile',
metavar='PATH',
help="python module file to import, e.g. '../other.py'"
),
make_option('-g', '--gateway',
default=None,
metavar='HOST',
help="gateway host to connect through"
),
make_option('--hide',
metavar='LEVELS',
help="comma-separated list of output levels to hide"
),
make_option('-H', '--hosts',
default=[],
help="comma-separated list of hosts to operate on"
),
make_option('-i',
action='append',
dest='key_filename',
metavar='PATH',
default=None,
help="path to SSH private key file. May be repeated."
),
make_option('-k', '--no-keys',
action='store_true',
default=False,
help="don't load private key files from ~/.ssh/"
),
make_option('--keepalive',
dest='keepalive',
type=int,
default=0,
metavar="N",
help="enables a keepalive every N seconds"
),
make_option('--linewise',
action='store_true',
default=False,
help="print line-by-line instead of byte-by-byte"
),
make_option('-n', '--connection-attempts',
type='int',
metavar='M',
dest='connection_attempts',
default=1,
help="make M attempts to connect before giving up"
),
make_option('--no-pty',
dest='always_use_pty',
action='store_false',
default=True,
help="do not use pseudo-terminal in run/sudo"
),
make_option('-p', '--password',
default=None,
help="password for use with authentication and/or sudo"
),
make_option('-P', '--parallel',
dest='parallel',
action='store_true',
default=False,
help="default to parallel execution method"
),
make_option('--port',
default=default_port,
help="SSH connection port"
),
make_option('-r', '--reject-unknown-hosts',
action='store_true',
default=False,
help="reject unknown hosts"
),
make_option('--system-known-hosts',
default=None,
help="load system known_hosts file before reading user known_hosts"
),
make_option('-R', '--roles',
default=[],
help="comma-separated list of roles to operate on"
),
make_option('-s', '--shell',
default='/bin/bash -l -c',
help="specify a new shell, defaults to '/bin/bash -l -c'"
),
make_option('--show',
metavar='LEVELS',
help="comma-separated list of output levels to show"
),
make_option('--skip-bad-hosts',
action="store_true",
default=False,
help="skip over hosts that can't be reached"
),
make_option('--skip-unknown-tasks',
action="store_true",
default=False,
help="skip over unknown tasks"
),
make_option('--ssh-config-path',
default=default_ssh_config_path,
metavar='PATH',
help="Path to SSH config file"
),
make_option('-t', '--timeout',
type='int',
default=10,
metavar="N",
help="set connection timeout to N seconds"
),
make_option('-T', '--command-timeout',
dest='command_timeout',
type='int',
default=None,
metavar="N",
help="set remote command timeout to N seconds"
),
make_option('-u', '--user',
default=_get_system_username(),
help="username to use when connecting to remote hosts"
),
make_option('-w', '--warn-only',
action='store_true',
default=False,
help="warn, instead of abort, when commands fail"
),
make_option('-x', '--exclude-hosts',
default=[],
metavar='HOSTS',
help="comma-separated list of hosts to exclude"
),
make_option('-z', '--pool-size',
dest='pool_size',
type='int',
metavar='INT',
default=0,
help="number of concurrent processes to use in parallel mode",
),
]
#
# Environment dictionary - actual dictionary object
#
# Global environment dict. Currently a catchall for everything: config settings
# such as global deep/broad mode, host lists, username etc.
# Most default values are specified in `env_options` above, in the interests of
# preserving DRY: anything in here is generally not settable via the command
# line.
env = _AttributeDict({
'abort_exception': None,
'again_prompt': 'Sorry, try again.',
'all_hosts': [],
'combine_stderr': True,
'colorize_errors': False,
'command': None,
'command_prefixes': [],
'cwd': '', # Must be empty string, not None, for concatenation purposes
'dedupe_hosts': True,
'default_port': default_port,
'eagerly_disconnect': False,
'echo_stdin': True,
'effective_roles': [],
'exclude_hosts': [],
'gateway': None,
'host': None,
'host_string': None,
'lcwd': '', # Must be empty string, not None, for concatenation purposes
'local_user': _get_system_username(),
'output_prefix': True,
'passwords': {},
'path': '',
'path_behavior': 'append',
'port': default_port,
'real_fabfile': None,
'remote_interrupt': None,
'roles': [],
'roledefs': {},
'shell_env': {},
'skip_bad_hosts': False,
'skip_unknown_tasks': False,
'ssh_config_path': default_ssh_config_path,
'ok_ret_codes': [0], # a list of return codes that indicate success
# -S so sudo accepts passwd via stdin, -p with our known-value prompt for
# later detection (thus %s -- gets filled with env.sudo_prompt at runtime)
'sudo_prefix': "sudo -S -p '%(sudo_prompt)s' ",
'sudo_prompt': 'sudo password:',
'sudo_user': None,
'tasks': [],
'prompts': {},
'use_exceptions_for': {'network': False},
'use_shell': True,
'use_ssh_config': False,
'user': None,
'version': get_version('short')
})
# Fill in exceptions settings
exceptions = ['network']
exception_dict = {}
for e in exceptions:
exception_dict[e] = False
env.use_exceptions_for = _AliasDict(exception_dict,
aliases={'everything': exceptions})
# Add in option defaults
for option in env_options:
env[option.dest] = option.default
#
# Command dictionary
#
# Keys are the command/function names, values are the callables themselves.
# This is filled in when main() runs.
commands = {}
#
# Host connection dict/cache
#
connections = HostConnectionCache()
def _open_session():
return connections[env.host_string].get_transport().open_session()
def default_channel():
"""
Return a channel object based on ``env.host_string``.
"""
try:
chan = _open_session()
except ssh.SSHException, err:
if str(err) == 'SSH session not active':
connections[env.host_string].close()
del connections[env.host_string]
chan = _open_session()
else:
raise
chan.settimeout(0.1)
chan.input_enabled = True
return chan
#
# Output controls
#
# Keys are "levels" or "groups" of output, values are always boolean,
# determining whether output falling into the given group is printed or not
# printed.
#
# By default, everything except 'debug' is printed, as this is what the average
# user, and new users, are most likely to expect.
#
# See docs/usage.rst for details on what these levels mean.
output = _AliasDict({
'status': True,
'aborts': True,
'warnings': True,
'running': True,
'stdout': True,
'stderr': True,
'exceptions': False,
'debug': False,
'user': True
}, aliases={
'everything': ['warnings', 'running', 'user', 'output', 'exceptions'],
'output': ['stdout', 'stderr'],
'commands': ['stdout', 'running']
})
fabric-1.10.2/fabric/task_utils.py 0000664 0000000 0000000 00000005343 12541056305 0017011 0 ustar 00root root 0000000 0000000 from fabric.utils import abort, indent
from fabric import state
# For attribute tomfoolery
class _Dict(dict):
pass
def _crawl(name, mapping):
"""
``name`` of ``'a.b.c'`` => ``mapping['a']['b']['c']``
"""
key, _, rest = name.partition('.')
value = mapping[key]
if not rest:
return value
return _crawl(rest, value)
def crawl(name, mapping):
try:
result = _crawl(name, mapping)
# Handle default tasks
if isinstance(result, _Dict):
if getattr(result, 'default', False):
result = result.default
# Ensure task modules w/ no default are treated as bad targets
else:
result = None
return result
except (KeyError, TypeError):
return None
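# Editor's illustration (not part of the original module): a minimal sketch of
# how crawl() resolves dotted task names against a nested mapping such as
# fabric.state.commands. Never called; only names defined above are used.
def _demo_crawl():
    mapping = {'deploy': _Dict({'migrate': lambda: 'ran'})}
    assert crawl('deploy.migrate', mapping)() == 'ran'
    # Missing keys, and module-style _Dicts lacking a default, yield None:
    assert crawl('deploy.nope', mapping) is None
    assert crawl('deploy', mapping) is None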
def merge(hosts, roles, exclude, roledefs):
"""
Merge given host and role lists into one list of deduped hosts.
"""
# Abort if any roles don't exist
bad_roles = [x for x in roles if x not in roledefs]
if bad_roles:
abort("The following specified roles do not exist:\n%s" % (
indent(bad_roles)
))
# Coerce strings to one-item lists
if isinstance(hosts, basestring):
hosts = [hosts]
# Look up roles, turn into flat list of hosts
role_hosts = []
for role in roles:
value = roledefs[role]
# Handle dict style roledefs
if isinstance(value, dict):
value = value['hosts']
# Handle "lazy" roles (callables)
if callable(value):
value = value()
role_hosts += value
# Strip whitespace from host strings.
cleaned_hosts = [x.strip() for x in list(hosts) + list(role_hosts)]
# Return deduped combo of hosts and role_hosts, preserving order within
# them (vs using set(), which may lose ordering) and skipping hosts to be
# excluded.
# But only if the user hasn't indicated they want this behavior disabled.
all_hosts = cleaned_hosts
if state.env.dedupe_hosts:
deduped_hosts = []
for host in cleaned_hosts:
if host not in deduped_hosts and host not in exclude:
deduped_hosts.append(host)
all_hosts = deduped_hosts
return all_hosts
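# Editor's illustration (assumed inputs; not part of the original module):
# explicit hosts come first, role hosts follow, whitespace is stripped, and
# (with env.dedupe_hosts left at its default of True) duplicates and excluded
# hosts are dropped while order is preserved.
def _demo_merge():
    hosts = merge(
        hosts=['a', 'b '],               # 'b ' is stripped, then excluded
        roles=['web'],
        exclude=['b'],
        roledefs={'web': ['c', 'a']},    # 'a' is deduped against the hosts list
    )
    assert hosts == ['a', 'c']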
def parse_kwargs(kwargs):
new_kwargs = {}
hosts = []
roles = []
exclude_hosts = []
for key, value in kwargs.iteritems():
if key == 'host':
hosts = [value]
elif key == 'hosts':
hosts = value
elif key == 'role':
roles = [value]
elif key == 'roles':
roles = value
elif key == 'exclude_hosts':
exclude_hosts = value
else:
new_kwargs[key] = value
return new_kwargs, hosts, roles, exclude_hosts
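# Editor's illustration (not part of the original module): parse_kwargs()
# strips the special host/role kwargs out of an execute() call, leaving the
# remainder to be passed verbatim to the task itself.
def _demo_parse_kwargs():
    new_kwargs, hosts, roles, excluded = parse_kwargs(
        {'hosts': ['web1'], 'role': 'db', 'branch': 'master'}
    )
    assert new_kwargs == {'branch': 'master'}
    assert (hosts, roles, excluded) == (['web1'], ['db'], [])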
fabric-1.10.2/fabric/tasks.py 0000664 0000000 0000000 00000040124 12541056305 0015750 0 ustar 00root root 0000000 0000000 from __future__ import with_statement
from functools import wraps
import inspect
import sys
import textwrap
from fabric import state
from fabric.utils import abort, warn, error
from fabric.network import to_dict, normalize_to_string, disconnect_all
from fabric.context_managers import settings
from fabric.job_queue import JobQueue
from fabric.task_utils import crawl, merge, parse_kwargs
from fabric.exceptions import NetworkError
if sys.version_info[:2] == (2, 5):
# Python 2.5 inspect.getargspec returns a tuple
# instead of ArgSpec namedtuple.
class ArgSpec(object):
def __init__(self, args, varargs, keywords, defaults):
self.args = args
self.varargs = varargs
self.keywords = keywords
self.defaults = defaults
self._tuple = (args, varargs, keywords, defaults)
def __getitem__(self, idx):
return self._tuple[idx]
def patched_get_argspec(func):
return ArgSpec(*inspect._getargspec(func))
inspect._getargspec = inspect.getargspec
inspect.getargspec = patched_get_argspec
def get_task_details(task):
details = [
textwrap.dedent(task.__doc__)
if task.__doc__
else 'No docstring provided']
argspec = inspect.getargspec(task)
default_args = [] if not argspec.defaults else argspec.defaults
num_default_args = len(default_args)
args_without_defaults = argspec.args[:len(argspec.args) - num_default_args]
args_with_defaults = argspec.args[-1 * num_default_args:]
details.append('Arguments: %s' % (
', '.join(
args_without_defaults + [
'%s=%r' % (arg, default)
for arg, default in zip(args_with_defaults, default_args)
])
))
return '\n'.join(details)
def _get_list(env):
def inner(key):
return env.get(key, [])
return inner
class Task(object):
"""
Abstract base class for objects wishing to be picked up as Fabric tasks.
Instances of subclasses will be treated as valid tasks when present in
fabfiles loaded by the :doc:`fab ` tool.
For details on how to implement and use `~fabric.tasks.Task` subclasses,
please see the usage documentation on :ref:`new-style tasks
`.
.. versionadded:: 1.1
"""
name = 'undefined'
use_task_objects = True
aliases = None
is_default = False
# TODO: make it so that this wraps other decorators as expected
def __init__(self, alias=None, aliases=None, default=False, name=None,
*args, **kwargs):
if alias is not None:
self.aliases = [alias, ]
if aliases is not None:
self.aliases = aliases
if name is not None:
self.name = name
self.is_default = default
def __details__(self):
return get_task_details(self.run)
def run(self):
raise NotImplementedError
def get_hosts_and_effective_roles(self, arg_hosts, arg_roles, arg_exclude_hosts, env=None):
"""
Return a tuple containing the host list the given task should be using
and the roles being used.
See :ref:`host-lists` for detailed documentation on how host lists are
set.
.. versionchanged:: 1.9
"""
env = env or {'hosts': [], 'roles': [], 'exclude_hosts': []}
roledefs = env.get('roledefs', {})
# Command line per-task takes precedence over anything else.
if arg_hosts or arg_roles:
return merge(arg_hosts, arg_roles, arg_exclude_hosts, roledefs), arg_roles
# Decorator-specific hosts/roles go next
func_hosts = getattr(self, 'hosts', [])
func_roles = getattr(self, 'roles', [])
if func_hosts or func_roles:
return merge(func_hosts, func_roles, arg_exclude_hosts, roledefs), func_roles
# Finally, the env is checked (which might contain globally set lists
# from the CLI or from module-level code). This will be the empty list
# if these have not been set -- which is fine, this method should
# return an empty list if no hosts have been set anywhere.
env_vars = map(_get_list(env), "hosts roles exclude_hosts".split())
env_vars.append(roledefs)
return merge(*env_vars), env.get('roles', [])
def get_pool_size(self, hosts, default):
# Default parallel pool size (calculate per-task in case variables
# change)
default_pool_size = default or len(hosts)
# Allow per-task override
# Also cast to int in case somebody gave a string
from_task = getattr(self, 'pool_size', None)
pool_size = int(from_task or default_pool_size)
# But ensure it's never larger than the number of hosts
pool_size = min((pool_size, len(hosts)))
# Inform user of final pool size for this task
if state.output.debug:
print("Parallel tasks now using pool size of %d" % pool_size)
return pool_size
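# Editor's illustration (not part of the original module): the effective pool
# size is the per-task override (or the supplied default, or the host count),
# capped at the number of hosts. Never called; uses only names defined above.
def _demo_get_pool_size():
    t = Task()
    t.pool_size = 10
    assert t.get_pool_size(['a', 'b', 'c'], default=0) == 3  # capped at hosts
    del t.pool_size
    assert t.get_pool_size(['a', 'b', 'c'], default=2) == 2  # plain default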
class WrappedCallableTask(Task):
"""
Wraps a given callable transparently, while marking it as a valid Task.
Generally used via `~fabric.decorators.task` and not directly.
.. versionadded:: 1.1
.. seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.decorators.task`
"""
def __init__(self, callable, *args, **kwargs):
super(WrappedCallableTask, self).__init__(*args, **kwargs)
self.wrapped = callable
# Don't use getattr() here -- we want to avoid touching self.name
# entirely so the superclass' value remains default.
if hasattr(callable, '__name__'):
if self.name == 'undefined':
self.__name__ = self.name = callable.__name__
else:
self.__name__ = self.name
if hasattr(callable, '__doc__'):
self.__doc__ = callable.__doc__
if hasattr(callable, '__module__'):
self.__module__ = callable.__module__
def __call__(self, *args, **kwargs):
return self.run(*args, **kwargs)
def run(self, *args, **kwargs):
return self.wrapped(*args, **kwargs)
def __getattr__(self, k):
return getattr(self.wrapped, k)
def __details__(self):
orig = self
while 'wrapped' in orig.__dict__:
orig = orig.__dict__.get('wrapped')
return get_task_details(orig)
def requires_parallel(task):
"""
Returns True if given ``task`` should be run in parallel mode.
Specifically:
* It's been explicitly marked with ``@parallel``, or:
* It's *not* been explicitly marked with ``@serial`` *and* the global
parallel option (``env.parallel``) is set to ``True``.
"""
return (
(state.env.parallel and not getattr(task, 'serial', False))
or getattr(task, 'parallel', False)
)
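# Editor's illustration (not part of the original module): @parallel wins
# outright; otherwise env.parallel applies unless the task was marked @serial.
def _demo_requires_parallel():
    def t():
        pass
    t.parallel = True
    assert requires_parallel(t)          # explicit @parallel
    del t.parallel
    t.serial = True
    with settings(parallel=True):
        assert not requires_parallel(t)  # @serial overrides env.parallel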
def _parallel_tasks(commands_to_run):
return any(map(
lambda x: requires_parallel(crawl(x[0], state.commands)),
commands_to_run
))
def _is_network_error_ignored():
return not state.env.use_exceptions_for['network'] and state.env.skip_bad_hosts
def _execute(task, host, my_env, args, kwargs, jobs, queue, multiprocessing):
"""
Primary single-host work body of execute()
"""
# Log to stdout
if state.output.running and not hasattr(task, 'return_value'):
print("[%s] Executing task '%s'" % (host, my_env['command']))
# Create per-run env with connection settings
local_env = to_dict(host)
local_env.update(my_env)
# Set a few more env flags for parallelism
if queue is not None:
local_env.update({'parallel': True, 'linewise': True})
# Handle parallel execution
if queue is not None: # Since queue is only set for parallel
name = local_env['host_string']
# Wrap in another callable that:
# * expands the env it's given to ensure parallel, linewise, etc are
# all set correctly and explicitly. Such changes are naturally
        #   insulated from the parent process.
# * nukes the connection cache to prevent shared-access problems
# * knows how to send the tasks' return value back over a Queue
# * captures exceptions raised by the task
def inner(args, kwargs, queue, name, env):
state.env.update(env)
def submit(result):
queue.put({'name': name, 'result': result})
try:
state.connections.clear()
submit(task.run(*args, **kwargs))
except BaseException, e: # We really do want to capture everything
# SystemExit implies use of abort(), which prints its own
# traceback, host info etc -- so we don't want to double up
# on that. For everything else, though, we need to make
# clear what host encountered the exception that will
# print.
if e.__class__ is not SystemExit:
if not (isinstance(e, NetworkError) and
_is_network_error_ignored()):
sys.stderr.write("!!! Parallel execution exception under host %r:\n" % name)
submit(e)
# Here, anything -- unexpected exceptions, or abort()
# driven SystemExits -- will bubble up and terminate the
# child process.
if not (isinstance(e, NetworkError) and
_is_network_error_ignored()):
raise
# Stuff into Process wrapper
kwarg_dict = {
'args': args,
'kwargs': kwargs,
'queue': queue,
'name': name,
'env': local_env,
}
p = multiprocessing.Process(target=inner, kwargs=kwarg_dict)
# Name/id is host string
p.name = name
# Add to queue
jobs.append(p)
# Handle serial execution
else:
with settings(**local_env):
return task.run(*args, **kwargs)
def _is_task(task):
return isinstance(task, Task)
def execute(task, *args, **kwargs):
"""
Execute ``task`` (callable or name), honoring host/role decorators, etc.
``task`` may be an actual callable object, or it may be a registered task
name, which is used to look up a callable just as if the name had been
given on the command line (including :ref:`namespaced tasks `,
e.g. ``"deploy.migrate"``.
The task will then be executed once per host in its host list, which is
(again) assembled in the same manner as CLI-specified tasks: drawing from
:option:`-H`, :ref:`env.hosts `, the `~fabric.decorators.hosts` or
`~fabric.decorators.roles` decorators, and so forth.
``host``, ``hosts``, ``role``, ``roles`` and ``exclude_hosts`` kwargs will
be stripped out of the final call, and used to set the task's host list, as
if they had been specified on the command line like e.g. ``fab
taskname:host=hostname``.
Any other arguments or keyword arguments will be passed verbatim into
``task`` (the function itself -- not the ``@task`` decorator wrapping your
function!) when it is called, so ``execute(mytask, 'arg1',
kwarg1='value')`` will (once per host) invoke ``mytask('arg1',
kwarg1='value')``.
:returns:
a dictionary mapping host strings to the given task's return value for
that host's execution run. For example, ``execute(foo, hosts=['a',
'b'])`` might return ``{'a': None, 'b': 'bar'}`` if ``foo`` returned
nothing on host `a` but returned ``'bar'`` on host `b`.
In situations where a task execution fails for a given host but overall
progress does not abort (such as when :ref:`env.skip_bad_hosts
` is True) the return value for that host will be the
error object or message.
.. seealso::
:ref:`The execute usage docs `, for an expanded explanation
and some examples.
.. versionadded:: 1.3
.. versionchanged:: 1.4
Added the return value mapping; previously this function had no defined
return value.
"""
my_env = {'clean_revert': True}
results = {}
# Obtain task
is_callable = callable(task)
if not (is_callable or _is_task(task)):
# Assume string, set env.command to it
my_env['command'] = task
task = crawl(task, state.commands)
if task is None:
msg = "%r is not callable or a valid task name" % (my_env['command'],)
if state.env.get('skip_unknown_tasks', False):
warn(msg)
return
else:
abort(msg)
# Set env.command if we were given a real function or callable task obj
else:
dunder_name = getattr(task, '__name__', None)
my_env['command'] = getattr(task, 'name', dunder_name)
# Normalize to Task instance if we ended up with a regular callable
if not _is_task(task):
task = WrappedCallableTask(task)
# Filter out hosts/roles kwargs
new_kwargs, hosts, roles, exclude_hosts = parse_kwargs(kwargs)
# Set up host list
my_env['all_hosts'], my_env['effective_roles'] = task.get_hosts_and_effective_roles(hosts, roles,
exclude_hosts, state.env)
parallel = requires_parallel(task)
if parallel:
# Import multiprocessing if needed, erroring out usefully
# if it can't.
try:
import multiprocessing
except ImportError:
import traceback
tb = traceback.format_exc()
abort(tb + """
At least one task needs to be run in parallel, but the
multiprocessing module cannot be imported (see above
traceback.) Please make sure the module is installed
or that the above ImportError is fixed.""")
else:
multiprocessing = None
# Get pool size for this task
pool_size = task.get_pool_size(my_env['all_hosts'], state.env.pool_size)
# Set up job queue in case parallel is needed
queue = multiprocessing.Queue() if parallel else None
jobs = JobQueue(pool_size, queue)
if state.output.debug:
jobs._debug = True
# Call on host list
if my_env['all_hosts']:
# Attempt to cycle on hosts, skipping if needed
for host in my_env['all_hosts']:
try:
results[host] = _execute(
task, host, my_env, args, new_kwargs, jobs, queue,
multiprocessing
)
except NetworkError, e:
results[host] = e
# Backwards compat test re: whether to use an exception or
# abort
if not state.env.use_exceptions_for['network']:
func = warn if state.env.skip_bad_hosts else abort
error(e.message, func=func, exception=e.wrapped)
else:
raise
# If requested, clear out connections here and not just at the end.
if state.env.eagerly_disconnect:
disconnect_all()
# If running in parallel, block until job queue is emptied
if jobs:
err = "One or more hosts failed while executing task '%s'" % (
my_env['command']
)
jobs.close()
# Abort if any children did not exit cleanly (fail-fast).
# This prevents Fabric from continuing on to any other tasks.
# Otherwise, pull in results from the child run.
ran_jobs = jobs.run()
for name, d in ran_jobs.iteritems():
if d['exit_code'] != 0:
if isinstance(d['results'], NetworkError) and \
_is_network_error_ignored():
error(d['results'].message, func=warn, exception=d['results'].wrapped)
elif isinstance(d['results'], BaseException):
error(err, exception=d['results'])
else:
error(err)
results[name] = d['results']
# Or just run once for local-only
else:
with settings(**my_env):
results[''] = task.run(*args, **new_kwargs)
# Return what we can from the inner task executions
return results
fabric-1.10.2/fabric/thread_handling.py 0000664 0000000 0000000 00000001311 12541056305 0017731 0 ustar 00root root 0000000 0000000 import threading
import sys
class ThreadHandler(object):
def __init__(self, name, callable, *args, **kwargs):
# Set up exception handling
self.exception = None
def wrapper(*args, **kwargs):
try:
callable(*args, **kwargs)
except BaseException:
self.exception = sys.exc_info()
# Kick off thread
thread = threading.Thread(None, wrapper, name, args, kwargs)
thread.setDaemon(True)
thread.start()
# Make thread available to instantiator
self.thread = thread
def raise_if_needed(self):
if self.exception:
e = self.exception
raise e[0], e[1], e[2]
fabric-1.10.2/fabric/utils.py 0000664 0000000 0000000 00000033726 12541056305 0015775 0 ustar 00root root 0000000 0000000 """
Internal subroutines for e.g. aborting execution with an error message,
or performing indenting on multiline output.
"""
import os
import sys
import textwrap
from traceback import format_exc
def _encode(msg, stream):
    if isinstance(msg, unicode) and hasattr(stream, 'encoding') and stream.encoding is not None:
return msg.encode(stream.encoding)
else:
return str(msg)
def isatty(stream):
"""Check if a stream is a tty.
Not all file-like objects implement the `isatty` method.
"""
fn = getattr(stream, 'isatty', None)
if fn is None:
return False
return fn()
def abort(msg):
"""
Abort execution, print ``msg`` to stderr and exit with error status (1.)
This function currently makes use of `SystemExit`_ in a manner that is
similar to `sys.exit`_ (but which skips the automatic printing to stderr,
allowing us to more tightly control it via settings).
Therefore, it's possible to detect and recover from inner calls to `abort`
by using ``except SystemExit`` or similar.
.. _sys.exit: http://docs.python.org/library/sys.html#sys.exit
.. _SystemExit: http://docs.python.org/library/exceptions.html#exceptions.SystemExit
"""
from fabric.state import output, env
if not env.colorize_errors:
red = lambda x: x
else:
from colors import red
if output.aborts:
sys.stderr.write(red("\nFatal error: %s\n" % _encode(msg, sys.stderr)))
sys.stderr.write(red("\nAborting.\n"))
if env.abort_exception:
raise env.abort_exception(msg)
else:
# See issue #1318 for details on the below; it lets us construct a
# valid, useful SystemExit while sidestepping the automatic stderr
# print (which would otherwise duplicate with the above in a
# non-controllable fashion).
e = SystemExit(1)
e.message = msg
raise e
def warn(msg):
"""
Print warning message, but do not abort execution.
This function honors Fabric's :doc:`output controls
<../../usage/output_controls>` and will print the given ``msg`` to stderr,
provided that the ``warnings`` output level (which is active by default) is
turned on.
"""
from fabric.state import output, env
if not env.colorize_errors:
magenta = lambda x: x
else:
from colors import magenta
if output.warnings:
msg = _encode(msg, sys.stderr)
sys.stderr.write(magenta("\nWarning: %s\n\n" % msg))
def indent(text, spaces=4, strip=False):
"""
Return ``text`` indented by the given number of spaces.
If text is not a string, it is assumed to be a list of lines and will be
joined by ``\\n`` prior to indenting.
When ``strip`` is ``True``, a minimum amount of whitespace is removed from
the left-hand side of the given string (so that relative indents are
preserved, but otherwise things are left-stripped). This allows you to
effectively "normalize" any previous indentation for some inputs.
"""
    # Normalize a list of strings into a single string for dedenting. "list"
    # here really means "anything without a splitlines method".
if not hasattr(text, 'splitlines'):
text = '\n'.join(text)
# Dedent if requested
if strip:
text = textwrap.dedent(text)
prefix = ' ' * spaces
output = '\n'.join(prefix + line for line in text.splitlines())
# Strip out empty lines before/aft
output = output.strip()
# Reintroduce first indent (which just got stripped out)
output = prefix + output
return output
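# Editor's illustration (not part of the original module): indent() prefixes
# every line, after joining list input and optionally dedenting.
def _demo_indent():
    assert indent("foo\nbar") == "    foo\n    bar"
    assert indent(["foo", "bar"], spaces=2) == "  foo\n  bar"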
def puts(text, show_prefix=None, end="\n", flush=False):
"""
An alias for ``print`` whose output is managed by Fabric's output controls.
In other words, this function simply prints to ``sys.stdout``, but will
hide its output if the ``user`` :doc:`output level
` is set to ``False``.
If ``show_prefix=False``, `puts` will omit the leading ``[hostname]``
which it tacks on by default. (It will also omit this prefix if
``env.host_string`` is empty.)
Newlines may be disabled by setting ``end`` to the empty string (``''``).
(This intentionally mirrors Python 3's ``print`` syntax.)
You may force output flushing (e.g. to bypass output buffering) by setting
``flush=True``.
.. versionadded:: 0.9.2
.. seealso:: `~fabric.utils.fastprint`
"""
from fabric.state import output, env
if show_prefix is None:
show_prefix = env.output_prefix
if output.user:
prefix = ""
if env.host_string and show_prefix:
prefix = "[%s] " % env.host_string
sys.stdout.write(prefix + _encode(text, sys.stdout) + end)
if flush:
sys.stdout.flush()
def fastprint(text, show_prefix=False, end="", flush=True):
"""
Print ``text`` immediately, without any prefix or line ending.
This function is simply an alias of `~fabric.utils.puts` with different
default argument values, such that the ``text`` is printed without any
embellishment and immediately flushed.
It is useful for any situation where you wish to print text which might
otherwise get buffered by Python's output buffering (such as within a
processor intensive ``for`` loop). Since such use cases typically also
require a lack of line endings (such as printing a series of dots to
signify progress) it also omits the traditional newline by default.
.. note::
Since `~fabric.utils.fastprint` calls `~fabric.utils.puts`, it is
likewise subject to the ``user`` :doc:`output level
`.
.. versionadded:: 0.9.2
.. seealso:: `~fabric.utils.puts`
"""
return puts(text=text, show_prefix=show_prefix, end=end, flush=flush)
def handle_prompt_abort(prompt_for):
import fabric.state
reason = "Needed to prompt for %s (host: %s), but %%s" % (
prompt_for, fabric.state.env.host_string
)
# Explicit "don't prompt me bro"
if fabric.state.env.abort_on_prompts:
abort(reason % "abort-on-prompts was set to True")
# Implicit "parallel == stdin/prompts have ambiguous target"
if fabric.state.env.parallel:
abort(reason % "input would be ambiguous in parallel mode")
class _AttributeDict(dict):
"""
Dictionary subclass enabling attribute lookup/assignment of keys/values.
For example::
>>> m = _AttributeDict({'foo': 'bar'})
>>> m.foo
'bar'
>>> m.foo = 'not bar'
>>> m['foo']
'not bar'
``_AttributeDict`` objects also provide ``.first()`` which acts like
``.get()`` but accepts multiple keys as arguments, and returns the value of
the first hit, e.g.::
>>> m = _AttributeDict({'foo': 'bar', 'biz': 'baz'})
>>> m.first('wrong', 'incorrect', 'foo', 'biz')
'bar'
"""
def __getattr__(self, key):
try:
return self[key]
except KeyError:
# to conform with __getattr__ spec
raise AttributeError(key)
def __setattr__(self, key, value):
self[key] = value
def first(self, *names):
for name in names:
value = self.get(name)
if value:
return value
class _AliasDict(_AttributeDict):
"""
`_AttributeDict` subclass that allows for "aliasing" of keys to other keys.
Upon creation, takes an ``aliases`` mapping, which should map alias names
to lists of key names. Aliases do not store their own value, but instead
set (override) all mapped keys' values. For example, in the following
`_AliasDict`, calling ``mydict['foo'] = True`` will set the values of
``mydict['bar']``, ``mydict['biz']`` and ``mydict['baz']`` all to True::
mydict = _AliasDict(
{'biz': True, 'baz': False},
aliases={'foo': ['bar', 'biz', 'baz']}
)
    Because it is possible for the aliased values to be in a heterogeneous
state, reading aliases is not supported -- only writing to them is allowed.
This also means they will not show up in e.g. ``dict.keys()``.
    .. note::
Aliases are recursive, so you may refer to an alias within the key list
of another alias. Naturally, this means that you can end up with
infinite loops if you're not careful.
`_AliasDict` provides a special function, `expand_aliases`, which will take
a list of keys as an argument and will return that list of keys with any
aliases expanded. This function will **not** dedupe, so any aliases which
overlap will result in duplicate keys in the resulting list.
"""
def __init__(self, arg=None, aliases=None):
init = super(_AliasDict, self).__init__
if arg is not None:
init(arg)
else:
init()
# Can't use super() here because of _AttributeDict's setattr override
dict.__setattr__(self, 'aliases', aliases)
def __setitem__(self, key, value):
# Attr test required to not blow up when deepcopy'd
if hasattr(self, 'aliases') and key in self.aliases:
for aliased in self.aliases[key]:
self[aliased] = value
else:
return super(_AliasDict, self).__setitem__(key, value)
def expand_aliases(self, keys):
ret = []
for key in keys:
if key in self.aliases:
ret.extend(self.expand_aliases(self.aliases[key]))
else:
ret.append(key)
return ret
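# Editor's illustration (not part of the original module) of alias expansion;
# note the recursion and the lack of deduping. Never called at import time.
def _demo_expand_aliases():
    d = _AliasDict(
        {'stdout': True, 'stderr': True, 'debug': False},
        aliases={'output': ['stdout', 'stderr'],
                 'everything': ['output', 'debug']},
    )
    assert d.expand_aliases(['everything']) == ['stdout', 'stderr', 'debug']
    # Writing to an alias fans out to every mapped key:
    d['output'] = False
    assert d['stdout'] is False and d['stderr'] is False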
def _pty_size():
"""
Obtain (rows, cols) tuple for sizing a pty on the remote end.
Defaults to 80x24 (which is also the 'ssh' lib's default) but will detect
local (stdout-based) terminal window size on non-Windows platforms.
"""
from fabric.state import win32
if not win32:
import fcntl
import termios
import struct
default_rows, default_cols = 24, 80
rows, cols = default_rows, default_cols
if not win32 and isatty(sys.stdout):
# We want two short unsigned integers (rows, cols)
fmt = 'HH'
# Create an empty (zeroed) buffer for ioctl to map onto. Yay for C!
buffer = struct.pack(fmt, 0, 0)
# Call TIOCGWINSZ to get window size of stdout, returns our filled
# buffer
try:
result = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ,
buffer)
# Unpack buffer back into Python data types
rows, cols = struct.unpack(fmt, result)
# Fall back to defaults if TIOCGWINSZ returns unreasonable values
if rows == 0:
rows = default_rows
if cols == 0:
cols = default_cols
# Deal with e.g. sys.stdout being monkeypatched, such as in testing.
# Or termios not having a TIOCGWINSZ.
except AttributeError:
pass
return rows, cols
def error(message, func=None, exception=None, stdout=None, stderr=None):
"""
Call ``func`` with given error ``message``.
If ``func`` is None (the default), the value of ``env.warn_only``
determines whether to call ``abort`` or ``warn``.
If ``exception`` is given, it is inspected to get a string message, which
is printed alongside the user-generated ``message``.
If ``stdout`` and/or ``stderr`` are given, they are assumed to be strings
to be printed.
"""
import fabric.state
if func is None:
        func = warn if fabric.state.env.warn_only else abort
# If exception printing is on, append a traceback to the message
if fabric.state.output.exceptions or fabric.state.output.debug:
exception_message = format_exc()
if exception_message:
message += "\n\n" + exception_message
# Otherwise, if we were given an exception, append its contents.
elif exception is not None:
# Figure out how to get a string out of the exception; EnvironmentError
# subclasses, for example, "are" integers and .strerror is the string.
# Others "are" strings themselves. May have to expand this further for
# other error types.
if hasattr(exception, 'strerror') and exception.strerror is not None:
underlying = exception.strerror
else:
underlying = exception
message += "\n\nUnderlying exception:\n" + indent(str(underlying))
if func is abort:
if stdout and not fabric.state.output.stdout:
message += _format_error_output("Standard output", stdout)
if stderr and not fabric.state.output.stderr:
message += _format_error_output("Standard error", stderr)
return func(message)
def _format_error_output(header, body):
term_width = _pty_size()[1]
header_side_length = (term_width - (len(header) + 2)) / 2
mark = "="
side = mark * header_side_length
return "\n\n%s %s %s\n\n%s\n\n%s" % (
side, header, side, body, mark * term_width
)
# TODO: replace with collections.deque(maxlen=xxx) in Python 2.6
class RingBuffer(list):
def __init__(self, value, maxlen):
        # Cache the bound super() proxy; reused by the overridden methods.
self._super = super(RingBuffer, self)
self._maxlen = maxlen
return self._super.__init__(value)
def _free(self):
return self._maxlen - len(self)
def append(self, value):
if self._free() == 0:
del self[0]
return self._super.append(value)
def extend(self, values):
overage = len(values) - self._free()
if overage > 0:
del self[0:overage]
return self._super.extend(values)
# Paranoia from here on out.
def insert(self, index, value):
raise ValueError("Can't insert into the middle of a ring buffer!")
def __setslice__(self, i, j, sequence):
raise ValueError("Can't set a slice of a ring buffer!")
def __setitem__(self, key, value):
if isinstance(key, slice):
raise ValueError("Can't set a slice of a ring buffer!")
else:
return self._super.__setitem__(key, value)
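# Editor's illustration (not part of the original module): appends and extends
# past `maxlen` silently discard the oldest items.
def _demo_ring_buffer():
    rb = RingBuffer([], maxlen=3)
    for c in "abcd":
        rb.append(c)
    assert rb == ['b', 'c', 'd']
    rb.extend(['e', 'f'])
    assert rb == ['d', 'e', 'f']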
def apply_lcwd(path, env):
# Apply CWD if a relative path
if not os.path.isabs(path) and env.lcwd:
path = os.path.join(env.lcwd, path)
return path
fabric-1.10.2/fabric/version.py 0000664 0000000 0000000 00000005472 12541056305 0016317 0 ustar 00root root 0000000 0000000 """
Current Fabric version constant plus version pretty-print method.
This functionality is contained in its own module to prevent circular import
problems with ``__init__.py`` (which is loaded by setup.py during installation,
which in turn needs access to this version information.)
"""
from subprocess import Popen, PIPE
from os.path import abspath, dirname
VERSION = (1, 10, 2, 'final', 0)
def git_sha():
loc = abspath(dirname(__file__))
try:
p = Popen(
"cd \"%s\" && git log -1 --format=format:%%h" % loc,
shell=True,
stdout=PIPE,
stderr=PIPE
)
return p.communicate()[0]
# OSError occurs on Unix-derived platforms lacking Popen's configured shell
# default, /bin/sh. E.g. Android.
except OSError:
return None
def get_version(form='short'):
"""
Return a version string for this package, based on `VERSION`.
Takes a single argument, ``form``, which should be one of the following
strings:
* ``branch``: just the major + minor, e.g. "0.9", "1.0".
* ``short`` (default): compact, e.g. "0.9rc1", "0.9.0". For package
filenames or SCM tag identifiers.
* ``normal``: human readable, e.g. "0.9", "0.9.1", "0.9 beta 1". For e.g.
documentation site headers.
* ``verbose``: like ``normal`` but fully explicit, e.g. "0.9 final". For
tag commit messages, or anywhere that it's important to remove ambiguity
between a branch and the first final release within that branch.
* ``all``: Returns all of the above, as a dict.
"""
# Setup
versions = {}
branch = "%s.%s" % (VERSION[0], VERSION[1])
tertiary = VERSION[2]
type_ = VERSION[3]
final = (type_ == "final")
type_num = VERSION[4]
firsts = "".join([x[0] for x in type_.split()])
# Branch
versions['branch'] = branch
# Short
v = branch
if (tertiary or final):
v += "." + str(tertiary)
if not final:
v += firsts
if type_num:
v += str(type_num)
versions['short'] = v
# Normal
v = branch
if tertiary:
v += "." + str(tertiary)
if not final:
if type_num:
v += " " + type_ + " " + str(type_num)
else:
v += " pre-" + type_
versions['normal'] = v
# Verbose
v = branch
if tertiary:
v += "." + str(tertiary)
if not final:
if type_num:
v += " " + type_ + " " + str(type_num)
else:
v += " pre-" + type_
else:
v += " final"
versions['verbose'] = v
try:
return versions[form]
except KeyError:
if form == 'all':
return versions
raise TypeError('"%s" is not a valid form specifier.' % form)
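# Editor's illustration (not part of the original module): with the VERSION
# tuple above, (1, 10, 2, 'final', 0), the forms come out as asserted below.
def _demo_get_version():
    assert get_version('branch') == '1.10'
    assert get_version('short') == '1.10.2'
    assert get_version('normal') == '1.10.2'
    assert get_version('verbose') == '1.10.2 final'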
__version__ = get_version('short')
if __name__ == "__main__":
print(get_version('all'))
fabric-1.10.2/integration/ 0000775 0000000 0000000 00000000000 12541056305 0015345 5 ustar 00root root 0000000 0000000 fabric-1.10.2/integration/test_contrib.py 0000664 0000000 0000000 00000007365 12541056305 0020431 0 ustar 00root root 0000000 0000000 import os
import types
import re
import sys
from fabric.api import run, local
from fabric.contrib import files, project
from utils import Integration
def tildify(path):
home = run("echo ~", quiet=True).stdout.strip()
return path.replace('~', home)
def expect(path):
assert files.exists(tildify(path))
def expect_contains(path, value):
assert files.contains(tildify(path), value)
def escape(path):
return path.replace(' ', r'\ ')
class FileCleaner(Integration):
def setup(self):
self.local = []
self.remote = []
def teardown(self):
super(FileCleaner, self).teardown()
for created in self.local:
os.unlink(created)
for created in self.remote:
run("rm %s" % escape(created))
class TestTildeExpansion(FileCleaner):
def test_append(self):
for target in ('~/append_test', '~/append_test with spaces'):
self.remote.append(target)
files.append(target, ['line'])
expect(target)
def test_exists(self):
for target in ('~/exists_test', '~/exists test with space'):
self.remote.append(target)
run("touch %s" % escape(target))
expect(target)
def test_sed(self):
for target in ('~/sed_test', '~/sed test with space'):
self.remote.append(target)
run("echo 'before' > %s" % escape(target))
files.sed(target, 'before', 'after')
expect_contains(target, 'after')
def test_upload_template(self):
for i, target in enumerate((
'~/upload_template_test',
'~/upload template test with space'
)):
src = "source%s" % i
local("touch %s" % src)
self.local.append(src)
self.remote.append(target)
files.upload_template(src, target)
expect(target)
class TestIsLink(FileCleaner):
    # TODO: add coverage for more is_link cases.
def test_is_link_is_true_on_symlink(self):
self.remote.extend(['/tmp/foo', '/tmp/bar'])
run("touch /tmp/foo")
run("ln -s /tmp/foo /tmp/bar")
assert files.is_link('/tmp/bar')
def test_is_link_is_false_on_non_link(self):
self.remote.append('/tmp/biz')
run("touch /tmp/biz")
assert not files.is_link('/tmp/biz')
rsync_sources = (
'integration/',
'integration/test_contrib.py',
'integration/test_operations.py',
'integration/utils.py'
)
class TestRsync(Integration):
def rsync(self, id_, **kwargs):
remote = '/tmp/rsync-test-%s/' % id_
if files.exists(remote):
run("rm -rf %s" % remote)
return project.rsync_project(
remote_dir=remote,
local_dir='integration',
ssh_opts='-o StrictHostKeyChecking=no',
capture=True,
**kwargs
)
def test_existing_default_args(self):
"""
Rsync uses -v by default
"""
r = self.rsync(1)
for x in rsync_sources:
assert re.search(r'^%s$' % x, r.stdout, re.M), "'%s' was not found in '%s'" % (x, r.stdout)
def test_overriding_default_args(self):
"""
Use of default_args kwarg can be used to nuke e.g. -v
"""
r = self.rsync(2, default_opts='-pthrz')
for x in rsync_sources:
assert not re.search(r'^%s$' % x, r.stdout, re.M), "'%s' was found in '%s'" % (x, r.stdout)
class TestUploadTemplate(FileCleaner):
def test_allows_pty_disable(self):
src = "source_file"
target = "remote_file"
local("touch %s" % src)
self.local.append(src)
self.remote.append(target)
# Just make sure it doesn't asplode. meh.
files.upload_template(src, target, pty=False)
expect(target)
fabric-1.10.2/integration/test_operations.py 0000664 0000000 0000000 00000015377 12541056305 0021156 0 ustar 00root root 0000000 0000000 from __future__ import with_statement
from StringIO import StringIO
import os
import posixpath
import shutil
from fabric.api import (
run, path, put, sudo, abort, warn_only, env, cd, local, settings, get
)
from fabric.contrib.files import exists
from utils import Integration
def assert_mode(path, mode):
remote_mode = run("stat -c \"%%a\" \"%s\"" % path).stdout
assert remote_mode == mode, "remote %r != expected %r" % (remote_mode, mode)
class TestOperations(Integration):
filepath = "/tmp/whocares"
dirpath = "/tmp/whatever/bin"
not_owned = "/tmp/notmine"
def setup(self):
super(TestOperations, self).setup()
run("mkdir -p %s" % " ".join([self.dirpath, self.not_owned]))
def teardown(self):
super(TestOperations, self).teardown()
# Revert any chown crap from put sudo tests
sudo("chown %s ." % env.user)
# Nuke to prevent bleed
sudo("rm -rf %s" % " ".join([self.dirpath, self.filepath]))
sudo("rm -rf %s" % self.not_owned)
def test_no_trailing_space_in_shell_path_in_run(self):
put(StringIO("#!/bin/bash\necho hi"), "%s/myapp" % self.dirpath, mode="0755")
with path(self.dirpath):
assert run('myapp').stdout == 'hi'
def test_string_put_mode_arg_doesnt_error(self):
put(StringIO("#!/bin/bash\necho hi"), self.filepath, mode="0755")
assert_mode(self.filepath, "755")
def test_int_put_mode_works_ok_too(self):
put(StringIO("#!/bin/bash\necho hi"), self.filepath, mode=0755)
assert_mode(self.filepath, "755")
def _chown(self, target):
sudo("chown root %s" % target)
def _put_via_sudo(self, source=None, target_suffix='myfile', **kwargs):
# Ensure target dir prefix is not owned by our user (so we fail unless
# the sudo part of things is working)
self._chown(self.not_owned)
source = source if source else StringIO("whatever")
# Drop temp file into that dir, via use_sudo, + any kwargs
return put(
source,
self.not_owned + '/' + target_suffix,
use_sudo=True,
**kwargs
)
def test_put_with_use_sudo(self):
self._put_via_sudo()
def test_put_with_dir_and_use_sudo(self):
# Test cwd should be root of fabric source tree. Use our own folder as
# the source, meh.
self._put_via_sudo(source='integration', target_suffix='')
def test_put_with_use_sudo_and_custom_temp_dir(self):
# TODO: allow dependency injection in sftp.put or w/e, test it in
# isolation instead.
# For now, just half-ass it by ensuring $HOME isn't writable
# temporarily.
self._chown('.')
self._put_via_sudo(temp_dir='/tmp')
def test_put_with_use_sudo_dir_and_custom_temp_dir(self):
self._chown('.')
self._put_via_sudo(source='integration', target_suffix='', temp_dir='/tmp')
def test_put_use_sudo_and_explicit_mode(self):
# Setup
target_dir = posixpath.join(self.filepath, 'blah')
subdir = "inner"
subdir_abs = posixpath.join(target_dir, subdir)
filename = "whatever.txt"
target_file = posixpath.join(subdir_abs, filename)
run("mkdir -p %s" % subdir_abs)
self._chown(subdir_abs)
local_path = os.path.join('/tmp', filename)
with open(local_path, 'w+') as fd:
fd.write('stuff\n')
# Upload + assert
with cd(target_dir):
put(local_path, subdir, use_sudo=True, mode='777')
assert_mode(target_file, '777')
def test_put_file_to_dir_with_use_sudo_and_mirror_mode(self):
# Ensure mode of local file, umask varies on eg travis vs various
# localhosts
source = 'whatever.txt'
try:
local("touch %s" % source)
local("chmod 644 %s" % source)
# Target for _put_via_sudo is a directory by default
uploaded = self._put_via_sudo(
source=source, mirror_local_mode=True
)
assert_mode(uploaded[0], '644')
finally:
local("rm -f %s" % source)
def test_put_directory_use_sudo_and_spaces(self):
localdir = 'I have spaces'
localfile = os.path.join(localdir, 'file.txt')
os.mkdir(localdir)
with open(localfile, 'w') as fd:
fd.write('stuff\n')
try:
uploaded = self._put_via_sudo(localdir, target_suffix='')
            # put() would have raised if the upload failed; assert anyway.
assert exists(uploaded[0])
assert exists(posixpath.dirname(uploaded[0]))
finally:
shutil.rmtree(localdir)
def test_agent_forwarding_functions(self):
# When paramiko #399 is present this will hang indefinitely
with settings(forward_agent=True):
run('ssh-add -L')
def test_get_with_use_sudo_unowned_file(self):
# Ensure target is not normally readable by us
target = self.filepath
sudo("echo 'nope' > %s" % target)
sudo("chown root:root %s" % target)
sudo("chmod 0440 %s" % target)
# Pull down with use_sudo, confirm contents
local_ = StringIO()
result = get(
local_path=local_,
remote_path=target,
use_sudo=True,
)
assert local_.getvalue() == "nope\n"
def test_get_with_use_sudo_groupowned_file(self):
# Issue #1226: file gotten w/ use_sudo, file normally readable via
# group perms (yes - so use_sudo not required - full use case involves
# full-directory get() where use_sudo *is* required). Prior to fix,
# temp file is chmod 404 which seems to cause perm denied due to group
# membership (despite 'other' readability).
target = self.filepath
sudo("echo 'nope' > %s" % target)
# Same group as connected user
gid = run("id -g")
sudo("chown root:%s %s" % (gid, target))
# Same perms as bug use case (only really need group read)
sudo("chmod 0640 %s" % target)
# Do eet
local_ = StringIO()
result = get(
local_path=local_,
remote_path=target,
use_sudo=True,
)
assert local_.getvalue() == "nope\n"
def test_get_from_unreadable_dir(self):
# Put file in dir as normal user
remotepath = "%s/myfile.txt" % self.dirpath
run("echo 'foo' > %s" % remotepath)
# Make dir unreadable (but still executable - impossible to obtain
# file if dir is both unreadable and unexecutable)
sudo("chown root:root %s" % self.dirpath)
sudo("chmod 711 %s" % self.dirpath)
# Try gettin' it
local_ = StringIO()
get(local_path=local_, remote_path=remotepath)
assert local_.getvalue() == 'foo\n'
fabric-1.10.2/integration/utils.py 0000664 0000000 0000000 00000000731 12541056305 0017060 0 ustar 00root root 0000000 0000000 import os
import sys
# Pull in regular tests' utilities
mod = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'tests'))
sys.path.insert(0, mod)
from mock_streams import mock_streams
#from utils import FabricTest
# Clean up
del sys.path[0]
class Integration(object):
def setup(self):
        # No-op so subclasses can call super() unconditionally.
pass
def teardown(self):
        # No-op so subclasses can call super() unconditionally.
pass
fabric-1.10.2/requirements.txt 0000664 0000000 0000000 00000001016 12541056305 0016304 0 ustar 00root root 0000000 0000000 # These requirements are for DEVELOPMENT ONLY!
# You do not need e.g. Sphinx or Fudge just to run the 'fab' tool.
# Instead, these are necessary for executing the test suite or developing the
# cutting edge (which may have different requirements from released versions.)
# Development version of Paramiko, just in case we're in one of those phases.
-e git+https://github.com/paramiko/paramiko#egg=paramiko
# Pull in actual "you already have local installed checkouts of Fabric +
# Paramiko" dev deps.
-r dev-requirements.txt
fabric-1.10.2/setup.py 0000664 0000000 0000000 00000004341 12541056305 0014536 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
from __future__ import with_statement
import sys
from setuptools import setup, find_packages
from fabric.version import get_version
with open('README.rst') as f:
readme = f.read()
long_description = """
To find out what's new in this version of Fabric, please see `the changelog
`_.
You can also install the `in-development version
`_ using
pip, with `pip install fabric==dev`.
----
%s
----
For more information, please see the Fabric website or execute ``fab --help``.
""" % (readme)
if sys.version_info[:2] < (2, 6):
install_requires=['paramiko>=1.10,<1.13']
else:
install_requires=['paramiko>=1.10']
setup(
name='Fabric',
version=get_version('short'),
description='Fabric is a simple, Pythonic tool for remote execution and deployment.',
long_description=long_description,
author='Jeff Forcier',
author_email='jeff@bitprophet.org',
url='http://fabfile.org',
packages=find_packages(),
test_suite='nose.collector',
tests_require=['nose', 'fudge<1.0', 'jinja2'],
install_requires=install_requires,
entry_points={
'console_scripts': [
'fab = fabric.main:main',
]
},
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: BSD License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Unix',
'Operating System :: POSIX',
'Programming Language :: Python',
'Programming Language :: Python :: 2.5',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Topic :: Software Development',
'Topic :: Software Development :: Build Tools',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: System :: Clustering',
'Topic :: System :: Software Distribution',
'Topic :: System :: Systems Administration',
],
)
fabric-1.10.2/sites/ 0000775 0000000 0000000 00000000000 12541056305 0014151 5 ustar 00root root 0000000 0000000 fabric-1.10.2/sites/_shared_static/ 0000775 0000000 0000000 00000000000 12541056305 0017125 5 ustar 00root root 0000000 0000000 fabric-1.10.2/sites/_shared_static/logo.png 0000664 0000000 0000000 00000014401 12541056305 0020573 0 ustar 00root root 0000000 0000000 [binary PNG image data omitted]