fabric-1.14.0/.gitignore

*~
*.pyc
*.pyo
*.pyt
*.pytc
*.egg
.DS_Store
.*.swp
Fabric.egg-info
.coverage
docs/_build
dist
build/
tags
TAGS
.tox
tox.ini
.idea/
sites/*/_build

fabric-1.14.0/.travis.yml

language: python
python:
  - "2.6"
  - "2.7"
install:
  # Build/test dependencies
  - pip install -r requirements.txt
  # Get fab to test fab
  - pip install -e .
  # Deal with issue on Travis builders re: multiprocessing.Queue :(
  - "sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm"
  - "pip install jinja2"
before_script:
  # Allow us to SSH passwordless to localhost
  - ssh-keygen -f ~/.ssh/id_rsa -N ""
  - cp ~/.ssh/{id_rsa.pub,authorized_keys}
  # Creation of an SSH agent for testing forwarding
  - eval $(ssh-agent)
  - ssh-add
script:
  # Normal tests
  - fab test
  # Integration tests
  - fab -H localhost test:"--tests\=integration"
  # Build docs; www first without warnings so its intersphinx objects file
  # generates. Then docs (with warnings->errors), then www again (also w/
  # warnings on.) FUN TIMES WITH CIRCULAR DEPENDENCIES.
  - invoke www
  - invoke docs -o -W
  - invoke www -c -o -W
notifications:
  irc:
    channels: "irc.freenode.org#fabric"
    template:
      - "%{repository}@%{branch}: %{message} (%{build_url})"
    on_success: change
    on_failure: change
  email: false

fabric-1.14.0/AUTHORS

The following list contains individuals who contributed nontrivial code to
Fabric's codebase, ordered by date of first contribution.

Individuals who submitted bug reports or trivial one-line "you forgot to do X"
patches are generally credited in the commit log only.

IMPORTANT: as of 2012, this file is historical only and we'll probably stop
updating it. The changelog and/or Git history is the canonical source for
thanks, credits etc.

Christian Vest Hansen
Rob Cowie
Jeff Forcier
Travis Cline
Niklas Lindström
Kevin Horn
Max Battcher
Alexander Artemenko
Dennis Schoen
Erick Dennis
Sverre Johansen
Michael Stephens
Armin Ronacher
Curt Micol
Patrick McNerthney
Steve Steiner
Ali Saifee
Jorge Vargas
Peter Ellis
Brian Rosner
Xinan Wu
Alex Koshelev
Mich Matuson
Morgan Goose
Carl Meyer
Erich Heine
Travis Swicegood
Paul Smith
Alex Koshelev
Stephen Goss
James Murty
Thomas Ballinger
Rick Harding
Kirill Pinchuk
Ales Zoulek
Casey Banner
Roman Imankulov
Rodrigue Alcazar
Jeremy Avnet
Matt Chisholm
Mark Merritt
Max Arnold
Szymon Reichmann
David Wolever
Jason Coombs
Ben Davis
Neilen Marais
Rory Geoghegan
Alexey Diyan
Kamil Kisiel
Jonas Lundberg

fabric-1.14.0/CHANGELOG.rst

The CHANGELOG lives in `sites/www/changelog.rst `_.

An HTML version lives online at `fabfile.org/changelog.html `_.

fabric-1.14.0/CONTRIBUTING.rst

Please see `contribution-guide.org `_ for details on what we expect from
contributors. Thanks!

fabric-1.14.0/INSTALL

For installation help, please see http://fabfile.org/ or (if using a source
checkout) sites/www/installing.rst.

fabric-1.14.0/LICENSE

Copyright (c) 2009-2017 Jeffrey E. Forcier
Copyright (c) 2008-2009 Christian Vest Hansen
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    * Redistributions of source code must retain the above copyright notice,
      this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

fabric-1.14.0/MANIFEST.in

include AUTHORS
include INSTALL
include LICENSE
include README.rst
recursive-include sites *
recursive-exclude sites/docs/_build *
recursive-exclude sites/www/_build *
include requirements.txt
recursive-include tests *
recursive-exclude tests *.pyc *.pyo

fabric-1.14.0/README.rst

Fabric is a Python (2.5-2.7) library and command-line tool for streamlining
the use of SSH for application deployment or systems administration tasks.

It provides a basic suite of operations for executing local or remote shell
commands (normally or via ``sudo``) and uploading/downloading files, as well
as auxiliary functionality such as prompting the running user for input, or
aborting execution.

Typical use involves creating a Python module containing one or more
functions, then executing them via the ``fab`` command-line tool. Below is a
small but complete "fabfile" containing a single task:

.. code-block:: python

    from fabric.api import run

    def host_type():
        run('uname -s')

If you save the above as ``fabfile.py`` (the default module that ``fab``
loads), you can run the tasks defined in it on one or more servers, like so::

    $ fab -H localhost,linuxbox host_type
    [localhost] run: uname -s
    [localhost] out: Darwin
    [linuxbox] run: uname -s
    [linuxbox] out: Linux

    Done.
    Disconnecting from localhost... done.
    Disconnecting from linuxbox... done.

In addition to use via the ``fab`` tool, Fabric's components may be imported
into other Python code, providing a Pythonic interface to the SSH protocol
suite at a higher level than that provided by e.g. the ``Paramiko`` library
(which Fabric itself uses.)

fabric-1.14.0/dev-requirements.txt

# You should already have the dev version of Paramiko and your local Fabric
# checkout installed! Stable Paramiko may not be sufficient!

# Test runner/testing utils
nose<2.0

# Rudolf adds color to the output of 'fab test'. This is a custom fork
# addressing Python 2.7 and Nose's 'skip' plugin compatibility issues.
-e git+https://github.com/bitprophet/rudolf#egg=rudolf

# Mocking library
Fudge<1.0

# Documentation generation
Sphinx==1.3.4
releases>=1.2.0,<2
invoke>=0.12,<0.13
invocations>=0.12,<0.13
alabaster>=0.6.1
semantic_version==2.4
wheel==0.24
twine==1.5

fabric-1.14.0/fabfile.py

"""
Fabric's own fabfile.
"""

from __future__ import with_statement

import nose

from fabric.api import task


@task(default=True)
def test(args=None):
    """
    Run all unit tests and doctests.

    Specify string argument ``args`` for additional args to ``nosetests``.
    """
    # Default to explicitly targeting the 'tests' folder, but only if nothing
    # is being overridden.
    tests = "" if args else " tests"
    default_args = "-sv --with-doctest --nologcapture --with-color %s" % tests
    default_args += (" " + args) if args else ""
    nose.core.run_exit(argv=[''] + default_args.split())

fabric-1.14.0/fabric/__init__.py

"""
See `fabric.api` for the publicly importable API.
"""

fabric-1.14.0/fabric/__main__.py

import fabric.main

fabric.main.main()

fabric-1.14.0/fabric/api.py

"""
Non-init module for doing convenient * imports from.

Necessary because if we did this in __init__, one would be unable to import
anything else inside the package -- like, say, the version number used in
setup.py -- without triggering loads of most of the code. Which doesn't work
so well when you're using setup.py to install e.g. ssh!
"""
from fabric.context_managers import (cd, hide, settings, show, path, prefix,
    lcd, quiet, warn_only, remote_tunnel, shell_env)
from fabric.decorators import (hosts, roles, runs_once, with_settings, task,
    serial, parallel)
from fabric.operations import (require, prompt, put, get, run, sudo, local,
    reboot, open_shell)
from fabric.state import env, output
from fabric.utils import abort, warn, puts, fastprint
from fabric.tasks import execute

fabric-1.14.0/fabric/auth.py

"""
Common authentication subroutines. Primarily for internal use.
"""


def get_password(user, host, port, login_only=False):
    from fabric.state import env
    from fabric.network import join_host_strings
    host_string = join_host_strings(user, host, port)
    sudo_password = env.sudo_passwords.get(host_string, env.sudo_password)
    login_password = env.passwords.get(host_string, env.password)
    return login_password if login_only else sudo_password or login_password


def set_password(user, host, port, password):
    from fabric.state import env
    from fabric.network import join_host_strings
    host_string = join_host_strings(user, host, port)
    env.password = env.passwords[host_string] = password

fabric-1.14.0/fabric/colors.py

"""
.. versionadded:: 0.9.2

Functions for wrapping strings in ANSI color codes.

Each function within this module returns the input string ``text``, wrapped
with ANSI color codes for the appropriate color.

For example, to print some text as green on supporting terminals::

    from fabric.colors import green

    print(green("This text is green!"))

Because these functions simply return modified strings, you can nest them::

    from fabric.colors import red, green

    print(red("This sentence is red, except for " + \
          green("these words, which are green") + "."))

If ``bold`` is set to ``True``, the ANSI flag for bolding will be flipped on
for that particular invocation, which usually shows up as a bold or brighter
version of the original color on most terminals.

.. versionchanged:: 1.11
    Added support for the shell env var ``FABRIC_DISABLE_COLORS``; if this
    variable is present and set to any non-empty value, all colorization
    driven by this module will be skipped/disabled.
"""

import os


def _wrap_with(code):

    def inner(text, bold=False):
        c = code
        if os.environ.get('FABRIC_DISABLE_COLORS'):
            return text
        if bold:
            c = "1;%s" % c
        return "\033[%sm%s\033[0m" % (c, text)
    return inner

red = _wrap_with('31')
green = _wrap_with('32')
yellow = _wrap_with('33')
blue = _wrap_with('34')
magenta = _wrap_with('35')
cyan = _wrap_with('36')
white = _wrap_with('37')

fabric-1.14.0/fabric/context_managers.py

"""
Context managers for use with the ``with`` statement.

.. note:: When using Python 2.5, you will need to start your fabfile with
    ``from __future__ import with_statement`` in order to make use of the
    ``with`` statement (which is a regular, non ``__future__`` feature of
    Python 2.6+.)

.. note:: If you are using multiple directly nested ``with`` statements, it
    can be convenient to use multiple context expressions in one single with
    statement. Instead of writing::

        with cd('/path/to/app'):
            with prefix('workon myvenv'):
                run('./manage.py syncdb')
                run('./manage.py loaddata myfixture')

    you can write::

        with cd('/path/to/app'), prefix('workon myvenv'):
            run('./manage.py syncdb')
            run('./manage.py loaddata myfixture')

    Note that you need Python 2.7+ for this to work. On Python 2.5 or 2.6,
    you can do the following::

        from contextlib import nested

        with nested(cd('/path/to/app'), prefix('workon myvenv')):
            ...

    Finally, note that `~fabric.context_managers.settings` implements
    ``nested`` itself -- see its API doc for details.
"""

from contextlib import contextmanager, nested
import socket
import select

from fabric.thread_handling import ThreadHandler
from fabric.state import output, win32, connections, env
from fabric import state
from fabric.utils import isatty

if not win32:
    import termios
    import tty


def _set_output(groups, which):
    """
    Refactored subroutine used by ``hide`` and ``show``.
    """
    previous = {}
    try:
        # Preserve original values, pull in new given value to use
        for group in output.expand_aliases(groups):
            previous[group] = output[group]
            output[group] = which
        # Yield control
        yield
    finally:
        # Restore original values
        output.update(previous)


def documented_contextmanager(func):
    wrapper = contextmanager(func)
    wrapper.undecorated = func
    return wrapper


@documented_contextmanager
def show(*groups):
    """
    Context manager for setting the given output ``groups`` to True.

    ``groups`` must be one or more strings naming the output groups defined
    in `~fabric.state.output`. The given groups will be set to True for the
    duration of the enclosed block, and restored to their previous value
    afterwards.

    For example, to turn on debug output (which is typically off by
    default)::

        def my_task():
            with show('debug'):
                run('ls /var/www')

    As almost all output groups are displayed by default, `show` is most
    useful for turning on the normally-hidden ``debug`` group, or when you
    know or suspect that code calling your own code is trying to hide output
    with `hide`.
    """
    return _set_output(groups, True)


@documented_contextmanager
def hide(*groups):
    """
    Context manager for setting the given output ``groups`` to False.

    ``groups`` must be one or more strings naming the output groups defined
    in `~fabric.state.output`. The given groups will be set to False for the
    duration of the enclosed block, and restored to their previous value
    afterwards.

    For example, to hide the "[hostname] run:" status lines, as well as
    preventing printout of stdout and stderr, one might use `hide` as
    follows::

        def my_task():
            with hide('running', 'stdout', 'stderr'):
                run('ls /var/www')
    """
    return _set_output(groups, False)


@documented_contextmanager
def _setenv(variables):
    """
    Context manager temporarily overriding ``env`` with given key/value pairs.

    A callable that returns a dict can also be passed. This is necessary when
    new values are being calculated from current values, in order to ensure
    that the "current" value is current at the time that the context is
    entered, not when the context manager is initialized. (See Issue #736.)

    This context manager is used internally by `settings` and is not intended
    to be used directly.
    """
    if callable(variables):
        variables = variables()
    clean_revert = variables.pop('clean_revert', False)
    previous = {}
    new = []
    for key, value in variables.iteritems():
        if key in state.env:
            previous[key] = state.env[key]
        else:
            new.append(key)
        state.env[key] = value
    try:
        yield
    finally:
        if clean_revert:
            for key, value in variables.iteritems():
                # If the current env value for this key still matches the
                # value we set it to beforehand, we are OK to revert it to the
                # pre-block value.
                if key in state.env and value == state.env[key]:
                    if key in previous:
                        state.env[key] = previous[key]
                    else:
                        del state.env[key]
        else:
            state.env.update(previous)
            for key in new:
                del state.env[key]


def settings(*args, **kwargs):
    """
    Nest context managers and/or override ``env`` variables.

    `settings` serves two purposes:

    * Most usefully, it allows temporary overriding/updating of ``env`` with
      any provided keyword arguments, e.g. ``with settings(user='foo'):``.
      Original values, if any, will be restored once the ``with`` block
      closes.

        * The keyword argument ``clean_revert`` has special meaning for
          ``settings`` itself (see below) and will be stripped out before
          execution.

    * In addition, it will use `contextlib.nested`_ to nest any given
      non-keyword arguments, which should be other context managers, e.g.
      ``with settings(hide('stderr'), show('stdout')):``.

    .. _contextlib.nested: http://docs.python.org/library/contextlib.html#contextlib.nested

    These behaviors may be specified at the same time if desired. An example
    will hopefully illustrate why this is considered useful::

        def my_task():
            with settings(
                hide('warnings', 'running', 'stdout', 'stderr'),
                warn_only=True
            ):
                if run('ls /etc/lsb-release'):
                    return 'Ubuntu'
                elif run('ls /etc/redhat-release'):
                    return 'RedHat'

    The above task executes a `run` statement, but will warn instead of
    aborting if the ``ls`` fails, and all output -- including the warning
    itself -- is prevented from printing to the user.
    The end result, in this scenario, is a completely silent task that allows
    the caller to figure out what type of system the remote host is, without
    incurring the handful of output that would normally occur.

    Thus, `settings` may be used to set any combination of environment
    variables in tandem with hiding (or showing) specific levels of output,
    or in tandem with any other piece of Fabric functionality implemented as
    a context manager.

    If ``clean_revert`` is set to ``True``, ``settings`` will **not** revert
    keys which are altered within the nested block, instead only reverting
    keys whose values remain the same as those given. More examples will make
    this clear; below is how ``settings`` operates normally::

        # Before the block, env.parallel defaults to False, host_string to None
        with settings(parallel=True, host_string='myhost'):
            # env.parallel is True
            # env.host_string is 'myhost'
            env.host_string = 'otherhost'
            # env.host_string is now 'otherhost'
        # Outside the block:
        # * env.parallel is False again
        # * env.host_string is None again

    The internal modification of ``env.host_string`` is nullified -- not
    always desirable. That's where ``clean_revert`` comes in::

        # Before the block, env.parallel defaults to False, host_string to None
        with settings(parallel=True, host_string='myhost', clean_revert=True):
            # env.parallel is True
            # env.host_string is 'myhost'
            env.host_string = 'otherhost'
            # env.host_string is now 'otherhost'
        # Outside the block:
        # * env.parallel is False again
        # * env.host_string remains 'otherhost'

    Brand new keys which did not exist in ``env`` prior to using ``settings``
    are also preserved if ``clean_revert`` is active. When ``False``, such
    keys are removed when the block exits.

    .. versionadded:: 1.4.1
        The ``clean_revert`` kwarg.
    """
    managers = list(args)
    if kwargs:
        managers.append(_setenv(kwargs))
    return nested(*managers)


def cd(path):
    """
    Context manager that keeps directory state when calling remote operations.

    Any calls to `run`, `sudo`, `get`, or `put` within the wrapped block will
    implicitly have a string similar to ``"cd <path> && "`` prefixed in order
    to give the sense that there is actually statefulness involved.

    .. note:: `cd` only affects *remote* paths -- to modify *local* paths,
        use `~fabric.context_managers.lcd`.

    Because use of `cd` affects all such invocations, any code making use of
    those operations, such as much of the ``contrib`` section, will also be
    affected by use of `cd`.

    Like the actual 'cd' shell builtin, `cd` may be called with relative paths
    (keep in mind that your default starting directory is your remote user's
    ``$HOME``) and may be nested as well.

    Below is a "normal" attempt at using the shell 'cd', which doesn't work
    due to how shell-less SSH connections are implemented -- state is **not**
    kept between invocations of `run` or `sudo`::

        run('cd /var/www')
        run('ls')

    The above snippet will list the contents of the remote user's ``$HOME``
    instead of ``/var/www``. With `cd`, however, it will work as expected::

        with cd('/var/www'):
            run('ls')  # Turns into "cd /var/www && ls"

    Finally, a demonstration (see inline comments) of nesting::

        with cd('/var/www'):
            run('ls')  # cd /var/www && ls
            with cd('website1'):
                run('ls')  # cd /var/www/website1 && ls

    .. note:: This context manager is currently implemented by appending to
        (and, as always, restoring afterwards) the current value of an
        environment variable, ``env.cwd``.
        However, this implementation may change in the future, so we do not
        recommend manually altering ``env.cwd`` -- only the *behavior* of
        `cd` will have any guarantee of backwards compatibility.

    .. note:: Space characters will be escaped automatically to make dealing
        with such directory names easier.

    .. versionchanged:: 1.0
        Applies to `get` and `put` in addition to the command-running
        operations.

    .. seealso:: `~fabric.context_managers.lcd`
    """
    return _change_cwd('cwd', path)


def lcd(path):
    """
    Context manager for updating local current working directory.

    This context manager is identical to `~fabric.context_managers.cd`,
    except that it changes a different env var (`lcwd`, instead of `cwd`) and
    thus only affects the invocation of `~fabric.operations.local` and the
    local arguments to `~fabric.operations.get`/`~fabric.operations.put`.

    Relative path arguments are relative to the local user's current working
    directory, which will vary depending on where Fabric (or Fabric-using
    code) was invoked. You can check what this is with `os.getcwd `_.

    It may be useful to pin things relative to the location of the fabfile in
    use, which may be found in :ref:`env.real_fabfile `

    .. versionadded:: 1.0
    """
    return _change_cwd('lcwd', path)


def _change_cwd(which, path):
    path = path.replace(' ', '\ ')
    if state.env.get(which) and not path.startswith('/') and not path.startswith('~'):
        new_cwd = state.env.get(which) + '/' + path
    else:
        new_cwd = path
    return _setenv({which: new_cwd})


def path(path, behavior='append'):
    """
    Append the given ``path`` to the PATH used to execute any wrapped commands.

    Any calls to `run` or `sudo` within the wrapped block will implicitly have
    a string similar to ``"PATH=$PATH:<path> "`` prepended before the given
    command.

    You may customize the behavior of `path` by specifying the optional
    ``behavior`` keyword argument, as follows:

    * ``'append'``: append given path to the current ``$PATH``, e.g.
      ``PATH=$PATH:<path>``. This is the default behavior.
    * ``'prepend'``: prepend given path to the current ``$PATH``, e.g.
      ``PATH=<path>:$PATH``.
    * ``'replace'``: ignore previous value of ``$PATH`` altogether, e.g.
      ``PATH=<path>``.

    .. note:: This context manager is currently implemented by modifying (and,
        as always, restoring afterwards) the current value of environment
        variables, ``env.path`` and ``env.path_behavior``. However, this
        implementation may change in the future, so we do not recommend
        manually altering them directly.

    .. versionadded:: 1.0
    """
    return _setenv({'path': path, 'path_behavior': behavior})


def prefix(command):
    """
    Prefix all wrapped `run`/`sudo` commands with given command plus ``&&``.

    This is nearly identical to `~fabric.context_managers.cd`, except that
    nested invocations append to a list of command strings instead of
    modifying a single string.

    Most of the time, you'll want to be using this alongside a shell script
    which alters shell state, such as ones which export or alter shell
    environment variables.

    For example, one of the most common uses of this tool is with the
    ``workon`` command from `virtualenvwrapper `_::

        with prefix('workon myvenv'):
            run('./manage.py syncdb')

    In the above snippet, the actual shell command run would be this::

        $ workon myvenv && ./manage.py syncdb

    This context manager is compatible with `~fabric.context_managers.cd`, so
    if your virtualenv doesn't ``cd`` in its ``postactivate`` script, you
    could do the following::

        with cd('/path/to/app'):
            with prefix('workon myvenv'):
                run('./manage.py syncdb')
                run('./manage.py loaddata myfixture')

    Which would result in executions like so::

        $ cd /path/to/app && workon myvenv && ./manage.py syncdb
        $ cd /path/to/app && workon myvenv && ./manage.py loaddata myfixture

    Finally, as alluded to near the beginning,
    `~fabric.context_managers.prefix` may be nested if desired, e.g.::

        with prefix('workon myenv'):
            run('ls')
            with prefix('source /some/script'):
                run('touch a_file')

    The result::

        $ workon myenv && ls
        $ workon myenv && source /some/script && touch a_file

    Contrived, but hopefully illustrative.
    """
    return _setenv(lambda: {'command_prefixes': state.env.command_prefixes + [command]})


@documented_contextmanager
def char_buffered(pipe):
    """
    Force local terminal ``pipe`` to be character, not line, buffered.

    Only applies on Unix-based systems; on Windows this is a no-op.
    """
    if win32 or not isatty(pipe):
        yield
    else:
        old_settings = termios.tcgetattr(pipe)
        tty.setcbreak(pipe)
        try:
            yield
        finally:
            termios.tcsetattr(pipe, termios.TCSADRAIN, old_settings)


def shell_env(**kw):
    """
    Set shell environment variables for wrapped commands.

    For example, the below shows how you might set a ZeroMQ related
    environment variable when installing a Python ZMQ library::

        with shell_env(ZMQ_DIR='/home/user/local'):
            run('pip install pyzmq')

    As with `~fabric.context_managers.prefix`, this effectively turns the
    ``run`` command into::

        $ export ZMQ_DIR='/home/user/local' && pip install pyzmq

    Multiple key-value pairs may be given simultaneously.

    .. note:: If used to affect the behavior of `~fabric.operations.local`
        when running from a Windows localhost, ``SET`` commands will be used
        to implement this feature.
    """
    return _setenv({'shell_env': kw})


def _forwarder(chan, sock):
    # Bidirectionally forward data between a socket and a Paramiko channel.
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(1024)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(1024)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()


@documented_contextmanager
def remote_tunnel(remote_port, local_port=None, local_host="localhost",
                  remote_bind_address="127.0.0.1"):
    """
    Create a tunnel forwarding a locally-visible port to the remote target.

    For example, you can let the remote host access a database that is
    installed on the client host::

        # Map localhost:6379 on the server to localhost:6379 on the client,
        # so that the remote 'redis-cli' program ends up speaking to the local
        # redis-server.
        with remote_tunnel(6379):
            run("redis-cli -i")

    The database might be installed on a client only reachable from the
    client host (as opposed to *on* the client itself)::

        # Map localhost:6379 on the server to redis.internal:6379 on the
        # client
        with remote_tunnel(6379, local_host="redis.internal"):
            run("redis-cli -i")

    ``remote_tunnel`` accepts up to four arguments:

    * ``remote_port`` (mandatory) is the remote port to listen to.
    * ``local_port`` (optional) is the local port to connect to; the default
      is the same port as the remote one.
    * ``local_host`` (optional) is the locally-reachable computer (DNS name
      or IP address) to connect to; the default is ``localhost`` (that is,
      the same computer Fabric is running on).
    * ``remote_bind_address`` (optional) is the remote IP address to bind to
      for listening, on the current target. It should be an IP address
      assigned to an interface on the target (or a DNS name that resolves to
      such IP). You can use "0.0.0.0" to bind to all interfaces.

    .. note:: By default, most SSH servers only allow remote tunnels to
        listen to the localhost interface (127.0.0.1). In these cases,
        `remote_bind_address` is ignored by the server, and the tunnel will
        listen only to 127.0.0.1.

    .. versionadded:: 1.6
    """
    if local_port is None:
        local_port = remote_port

    sockets = []
    channels = []
    threads = []

    def accept(channel, (src_addr, src_port), (dest_addr, dest_port)):
        channels.append(channel)
        sock = socket.socket()
        sockets.append(sock)

        try:
            sock.connect((local_host, local_port))
        except Exception:
            print "[%s] rtunnel: cannot connect to %s:%d (from local)" % (
                env.host_string, local_host, local_port)
            channel.close()
            return

        print "[%s] rtunnel: opened reverse tunnel: %r -> %r -> %r"\
            % (env.host_string, channel.origin_addr,
               channel.getpeername(), (local_host, local_port))

        th = ThreadHandler('fwd', _forwarder, channel, sock)
        threads.append(th)

    transport = connections[env.host_string].get_transport()
    transport.request_port_forward(remote_bind_address, remote_port,
                                   handler=accept)

    try:
        yield
    finally:
        for sock, chan, th in zip(sockets, channels, threads):
            sock.close()
            chan.close()
            th.thread.join()
            th.raise_if_needed()
        transport.cancel_port_forward(remote_bind_address, remote_port)


quiet = lambda: settings(hide('everything'), warn_only=True)
quiet.__doc__ = """
    Alias to ``settings(hide('everything'), warn_only=True)``.

    Useful for wrapping remote interrogative commands which you expect to
    fail occasionally, and/or which you want to silence.

    Example::

        with quiet():
            have_build_dir = run("test -e /tmp/build").succeeded

    When used in a task, the above snippet will not produce any ``run: test
    -e /tmp/build`` line, nor will any stdout/stderr display, and command
    failure is ignored.

    .. seealso::
        :ref:`env.warn_only `,
        `~fabric.context_managers.settings`,
        `~fabric.context_managers.hide`

    .. versionadded:: 1.5
"""


warn_only = lambda: settings(warn_only=True)
warn_only.__doc__ = """
    Alias to ``settings(warn_only=True)``.

    .. seealso::
        :ref:`env.warn_only `,
        `~fabric.context_managers.settings`,
        `~fabric.context_managers.quiet`
"""

fabric-1.14.0/fabric/contrib/__init__.py

fabric-1.14.0/fabric/contrib/console.py

"""
Console/terminal user interface functionality.
"""

from fabric.api import prompt


def confirm(question, default=True):
    """
    Ask user a yes/no question and return their response as True or False.

    ``question`` should be a simple, grammatically complete question such as
    "Do you wish to continue?", and will have a string similar to " [Y/n] "
    appended automatically. This function will *not* append a question mark
    for you.

    By default, when the user presses Enter without typing anything, "yes" is
    assumed. This can be changed by specifying ``default=False``.
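
    For instance, a fabfile task might use `confirm` to gate a destructive
    step (the task body and paths below are purely hypothetical)::

        from fabric.api import abort, run
        from fabric.contrib.console import confirm

        def clean_remote_tmp():
            if not confirm("Delete everything under /tmp/app?", default=False):
                abort("Aborting at user request.")
            run('rm -rf /tmp/app/*')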
""" # Set up suffix if default: suffix = "Y/n" else: suffix = "y/N" # Loop till we get something we like while True: response = prompt("%s [%s] " % (question, suffix)).lower() # Default if not response: return default # Yes if response in ['y', 'yes']: return True # No if response in ['n', 'no']: return False # Didn't get empty, yes or no, so complain and loop print("I didn't understand you. Please specify '(y)es' or '(n)o'.") fabric-1.14.0/fabric/contrib/django.py000066400000000000000000000063751315011462000175310ustar00rootroot00000000000000""" .. versionadded:: 0.9.2 These functions streamline the process of initializing Django's settings module environment variable. Once this is done, your fabfile may import from your Django project, or Django itself, without requiring the use of ``manage.py`` plugins or having to set the environment variable yourself every time you use your fabfile. Currently, these functions only allow Fabric to interact with local-to-your-fabfile Django installations. This is not as limiting as it sounds; for example, you can use Fabric as a remote "build" tool as well as using it locally. Imagine the following fabfile:: from fabric.api import run, local, hosts, cd from fabric.contrib import django django.project('myproject') from myproject.myapp.models import MyModel def print_instances(): for instance in MyModel.objects.all(): print(instance) @hosts('production-server') def print_production_instances(): with cd('/path/to/myproject'): run('fab print_instances') With Fabric installed on both ends, you could execute ``print_production_instances`` locally, which would trigger ``print_instances`` on the production server -- which would then be interacting with your production Django database. As another example, if your local and remote settings are similar, you can use it to obtain e.g. your database settings, and then use those when executing a remote (non-Fabric) command. This would allow you some degree of freedom even if Fabric is only installed locally:: from fabric.api import run from fabric.contrib import django django.settings_module('myproject.settings') from django.conf import settings def dump_production_database(): run('mysqldump -u %s -p=%s %s > /tmp/prod-db.sql' % ( settings.DATABASE_USER, settings.DATABASE_PASSWORD, settings.DATABASE_NAME )) The above snippet will work if run from a local, development environment, again provided your local ``settings.py`` mirrors your remote one in terms of database connection info. """ import os def settings_module(module): """ Set ``DJANGO_SETTINGS_MODULE`` shell environment variable to ``module``. Due to how Django works, imports from Django or a Django project will fail unless the shell environment variable ``DJANGO_SETTINGS_MODULE`` is correctly set (see `the Django settings docs `_.) This function provides a shortcut for doing so; call it near the top of your fabfile or Fabric-using code, after which point any Django imports should work correctly. .. note:: This function sets a **shell** environment variable (via ``os.environ``) and is unrelated to Fabric's own internal "env" variables. """ os.environ['DJANGO_SETTINGS_MODULE'] = module def project(name): """ Sets ``DJANGO_SETTINGS_MODULE`` to ``'.settings'``. This function provides a handy shortcut for the common case where one is using the Django default naming convention for their settings file and location. Uses `settings_module` -- see its documentation for details on why and how to use this functionality. 
""" settings_module('%s.settings' % name) fabric-1.14.0/fabric/contrib/files.py000066400000000000000000000427421315011462000173670ustar00rootroot00000000000000""" Module providing easy API for working with remote files and folders. """ from __future__ import with_statement import hashlib import os from StringIO import StringIO from functools import partial from fabric.api import run, sudo, hide, settings, env, put, abort from fabric.utils import apply_lcwd def exists(path, use_sudo=False, verbose=False): """ Return True if given path exists on the current remote host. If ``use_sudo`` is True, will use `sudo` instead of `run`. `exists` will, by default, hide all output (including the run line, stdout, stderr and any warning resulting from the file not existing) in order to avoid cluttering output. You may specify ``verbose=True`` to change this behavior. .. versionchanged:: 1.13 Replaced internal use of ``test -e`` with ``stat`` for improved remote cross-platform (e.g. Windows) compatibility. """ func = use_sudo and sudo or run cmd = 'stat %s' % _expand_path(path) # If verbose, run normally if verbose: with settings(warn_only=True): return not func(cmd).failed # Otherwise, be quiet with settings(hide('everything'), warn_only=True): return not func(cmd).failed def is_link(path, use_sudo=False, verbose=False): """ Return True if the given path is a symlink on the current remote host. If ``use_sudo`` is True, will use `.sudo` instead of `.run`. `.is_link` will, by default, hide all output. Give ``verbose=True`` to change this. """ func = sudo if use_sudo else run cmd = 'test -L "$(echo %s)"' % path args, kwargs = [], {'warn_only': True} if not verbose: args = [hide('everything')] with settings(*args, **kwargs): return func(cmd).succeeded def first(*args, **kwargs): """ Given one or more file paths, returns first one found, or None if none exist. May specify ``use_sudo`` and ``verbose`` which are passed to `exists`. """ for directory in args: if exists(directory, **kwargs): return directory def upload_template(filename, destination, context=None, use_jinja=False, template_dir=None, use_sudo=False, backup=True, mirror_local_mode=False, mode=None, pty=None, keep_trailing_newline=False, temp_dir=''): """ Render and upload a template text file to a remote host. Returns the result of the inner call to `~fabric.operations.put` -- see its documentation for details. ``filename`` should be the path to a text file, which may contain `Python string interpolation formatting `_ and will be rendered with the given context dictionary ``context`` (if given.) Alternately, if ``use_jinja`` is set to True and you have the Jinja2 templating library available, Jinja will be used to render the template instead. Templates will be loaded from the invoking user's current working directory by default, or from ``template_dir`` if given. The resulting rendered file will be uploaded to the remote file path ``destination``. If the destination file already exists, it will be renamed with a ``.bak`` extension unless ``backup=False`` is specified. By default, the file will be copied to ``destination`` as the logged-in user; specify ``use_sudo=True`` to use `sudo` instead. The ``mirror_local_mode``, ``mode``, and ``temp_dir`` kwargs are passed directly to an internal `~fabric.operations.put` call; please see its documentation for details on these two options. 
    The ``pty`` kwarg will be passed verbatim to any internal
    `~fabric.operations.run`/`~fabric.operations.sudo` calls, such as those
    used for testing directory-ness, making backups, etc.

    The ``keep_trailing_newline`` kwarg will be passed when creating the
    Jinja2 Environment; it is False by default, same as Jinja2's behaviour.

    .. versionchanged:: 1.1
        Added the ``backup``, ``mirror_local_mode`` and ``mode`` kwargs.
    .. versionchanged:: 1.9
        Added the ``pty`` kwarg.
    .. versionchanged:: 1.11
        Added the ``keep_trailing_newline`` kwarg.
    .. versionchanged:: 1.11
        Added the ``temp_dir`` kwarg.
    """
    func = use_sudo and sudo or run
    if pty is not None:
        func = partial(func, pty=pty)
    # Normalize destination to be an actual filename, due to using StringIO
    with settings(hide('everything'), warn_only=True):
        if func('test -d %s' % _expand_path(destination)).succeeded:
            sep = "" if destination.endswith('/') else "/"
            destination += sep + os.path.basename(filename)

    # Use mode kwarg to implement mirror_local_mode, again due to using
    # StringIO
    if mirror_local_mode and mode is None:
        mode = os.stat(apply_lcwd(filename, env)).st_mode
        # To prevent put() from trying to do this
        # logic itself
        mirror_local_mode = False

    # Process template
    text = None
    if use_jinja:
        try:
            template_dir = template_dir or os.getcwd()
            template_dir = apply_lcwd(template_dir, env)
            from jinja2 import Environment, FileSystemLoader
            jenv = Environment(loader=FileSystemLoader(template_dir),
                               keep_trailing_newline=keep_trailing_newline)
            text = jenv.get_template(filename).render(**context or {})
            # Force to a byte representation of Unicode, or str()ification
            # within Paramiko's SFTP machinery may cause decode issues for
            # truly non-ASCII characters.
            text = text.encode('utf-8')
        except ImportError:
            import traceback
            tb = traceback.format_exc()
            abort(tb + "\nUnable to import Jinja2 -- see above.")
    else:
        if template_dir:
            filename = os.path.join(template_dir, filename)
        filename = apply_lcwd(filename, env)
        with open(os.path.expanduser(filename)) as inputfile:
            text = inputfile.read()
        if context:
            text = text % context

    # Back up original file
    if backup and exists(destination):
        func("cp %s{,.bak}" % _expand_path(destination))

    # Upload the file.
    return put(
        local_path=StringIO(text),
        remote_path=destination,
        use_sudo=use_sudo,
        mirror_local_mode=mirror_local_mode,
        mode=mode,
        temp_dir=temp_dir
    )


def sed(filename, before, after, limit='', use_sudo=False, backup='.bak',
        flags='', shell=False):
    """
    Run a search-and-replace on ``filename`` with given regex patterns.

    Equivalent to ``sed -i<backup> -r -e "/<limit>/ s/<before>/<after>/<flags>g"
    <filename>``. Setting ``backup`` to an empty string will disable backup
    file creation.

    For convenience, ``before`` and ``after`` will automatically escape
    forward slashes, single quotes and parentheses for you, so you don't need
    to specify e.g. ``http:\/\/foo\.com``, instead just using
    ``http://foo\.com`` is fine.

    If ``use_sudo`` is True, will use `sudo` instead of `run`.

    The ``shell`` argument will be eventually passed to `run`/`sudo`. It
    defaults to False in order to avoid problems with many nested levels of
    quotes and backslashes. However, setting it to True may help when using
    ``~fabric.operations.cd`` to wrap explicit or implicit ``sudo`` calls.
    (``cd`` by its nature is a shell built-in, not a standalone command, so
    it should be called within a shell.)

    Other options may be specified with sed-compatible regex flags -- for
    example, to make the search and replace case insensitive, specify
    ``flags="i"``.
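
    For instance, a case-insensitive replacement in a hypothetical config
    file might look like::

        sed('/etc/ssh/sshd_config', 'permitrootlogin yes',
            'PermitRootLogin no', flags='i')
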
    The ``g`` flag is always specified regardless, so you do not need to
    remember to include it when overriding this parameter.

    .. versionadded:: 1.1
        The ``flags`` parameter.
    .. versionadded:: 1.6
        Added the ``shell`` keyword argument.
    """
    func = use_sudo and sudo or run
    # Characters to be escaped in both
    for char in "/'":
        before = before.replace(char, r'\%s' % char)
        after = after.replace(char, r'\%s' % char)
    # Characters to be escaped in replacement only (they're useful in regexen
    # in the 'before' part)
    for char in "()":
        after = after.replace(char, r'\%s' % char)
    if limit:
        limit = r'/%s/ ' % limit
    context = {
        'script': r"'%ss/%s/%s/%sg'" % (limit, before, after, flags),
        'filename': _expand_path(filename),
        'backup': backup
    }
    # Test the OS because of differences between sed versions
    with hide('running', 'stdout'):
        platform = run("uname", shell=False, pty=False)
    if platform in ('NetBSD', 'OpenBSD', 'QNX'):
        # Attempt to protect against failures/collisions
        hasher = hashlib.sha1()
        hasher.update(env.host_string)
        hasher.update(filename)
        context['tmp'] = "/tmp/%s" % hasher.hexdigest()
        # Use temp file to work around lack of -i
        expr = r"""cp -p %(filename)s %(tmp)s \
&& sed -r -e %(script)s %(filename)s > %(tmp)s \
&& cp -p %(filename)s %(filename)s%(backup)s \
&& mv %(tmp)s %(filename)s"""
    else:
        context['extended_regex'] = '-E' if platform == 'Darwin' else '-r'
        expr = r"sed -i%(backup)s %(extended_regex)s -e %(script)s %(filename)s"
    command = expr % context
    return func(command, shell=shell)


def uncomment(filename, regex, use_sudo=False, char='#', backup='.bak',
              shell=False):
    """
    Attempt to uncomment all lines in ``filename`` matching ``regex``.

    The default comment delimiter is `#` and may be overridden by the
    ``char`` argument.

    This function uses the `sed` function, and will accept the same
    ``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does.

    `uncomment` will remove a single whitespace character following the
    comment character, if it exists, but will preserve all preceding
    whitespace. For example, ``# foo`` would become ``foo`` (the single space
    is stripped) but ``    # foo`` would become ``    foo`` (the single space
    is still stripped, but the preceding 4 spaces are not.)

    .. versionchanged:: 1.6
        Added the ``shell`` keyword argument.
    """
    return sed(
        filename,
        before=r'^([[:space:]]*)%s[[:space:]]?' % char,
        after=r'\1',
        limit=regex,
        use_sudo=use_sudo,
        backup=backup,
        shell=shell
    )


def comment(filename, regex, use_sudo=False, char='#', backup='.bak',
            shell=False):
    """
    Attempt to comment out all lines in ``filename`` matching ``regex``.

    The default commenting character is `#` and may be overridden by the
    ``char`` argument.

    This function uses the `sed` function, and will accept the same
    ``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does.

    `comment` will prepend the comment character to the beginning of the
    line, so that lines end up looking like so::

        this line is uncommented
        #this line is commented
        #   this line is indented and commented

    In other words, comment characters will not "follow" indentation as they
    sometimes do when inserted by hand. Neither will they have a trailing
    space unless you specify e.g. ``char='# '``.

    .. note:: In order to preserve the line being commented out, this
        function will wrap your ``regex`` argument in parentheses, so you
        don't need to. It will ensure that any preceding/trailing ``^`` or
        ``$`` characters are correctly moved outside the parentheses.
        For example, calling ``comment(filename, r'^foo$')`` will result in
        a `sed` call with the "before" regex of ``r'^(foo)$'`` (and the
        "after" regex, naturally, of ``r'#\\1'``.)

    .. versionadded:: 1.5
        Added the ``shell`` keyword argument.
    """
    carot, dollar = '', ''
    if regex.startswith('^'):
        carot = '^'
        regex = regex[1:]
    if regex.endswith('$'):
        dollar = '$'
        regex = regex[:-1]
    regex = "%s(%s)%s" % (carot, regex, dollar)
    return sed(
        filename,
        before=regex,
        after=r'%s\1' % char,
        use_sudo=use_sudo,
        backup=backup,
        shell=shell
    )


def contains(filename, text, exact=False, use_sudo=False, escape=True,
             shell=False, case_sensitive=True):
    """
    Return True if ``filename`` contains ``text`` (which may be a regex.)

    By default, this function will consider a partial line match (i.e. where
    ``text`` only makes up part of the line it's on). Specify ``exact=True``
    to change this behavior so that only a line containing exactly ``text``
    results in a True return value.

    This function leverages ``egrep`` on the remote end (so it may not follow
    Python regular expression syntax perfectly), and skips ``env.shell``
    wrapper by default.

    If ``use_sudo`` is True, will use `sudo` instead of `run`.

    If ``escape`` is False, no extra regular expression related escaping is
    performed (this includes overriding ``exact`` so that no ``^``/``$`` is
    added.)

    The ``shell`` argument will be eventually passed to ``run/sudo``. See
    description of the same argument in ``~fabric.contrib.sed`` for details.

    If ``case_sensitive`` is False, the `-i` flag will be passed to
    ``egrep``.

    .. versionchanged:: 1.0
        Swapped the order of the ``filename`` and ``text`` arguments to be
        consistent with other functions in this module.
    .. versionchanged:: 1.4
        Updated the regular expression related escaping to try and solve
        various corner cases.
    .. versionchanged:: 1.4
        Added ``escape`` keyword argument.
    .. versionadded:: 1.6
        Added the ``shell`` keyword argument.
    .. versionadded:: 1.11
        Added the ``case_sensitive`` keyword argument.
    """
    func = use_sudo and sudo or run
    if escape:
        text = _escape_for_regex(text)
        if exact:
            text = "^%s$" % text
    with settings(hide('everything'), warn_only=True):
        egrep_cmd = 'egrep "%s" %s' % (text, _expand_path(filename))
        if not case_sensitive:
            egrep_cmd = egrep_cmd.replace('egrep', 'egrep -i', 1)
        return func(egrep_cmd, shell=shell).succeeded


def append(filename, text, use_sudo=False, partial=False, escape=True,
           shell=False):
    """
    Append string (or list of strings) ``text`` to ``filename``.

    When a list is given, each string inside is handled independently (but in
    the order given.)

    If ``text`` is already found in ``filename``, the append is not run, and
    None is returned immediately. Otherwise, the given text is appended to
    the end of the given ``filename`` via e.g. ``echo '$text' >> $filename``.

    The test for whether ``text`` already exists defaults to a full line
    match, e.g. ``^<text>$``, as this seems to be the most sensible approach
    for the "append lines to a file" use case. You may override this and
    force partial searching (e.g. ``^<text>``) by specifying
    ``partial=True``.

    Because ``text`` is single-quoted, single quotes will be transparently
    backslash-escaped. This can be disabled with ``escape=False``.

    If ``use_sudo`` is True, will use `sudo` instead of `run`.

    The ``shell`` argument will be eventually passed to ``run/sudo``. See
    description of the same argument in ``~fabric.contrib.sed`` for details.

    .. versionchanged:: 0.9.1
        Added the ``partial`` keyword argument.
    .. versionchanged:: 1.0
        Swapped the order of the ``filename`` and ``text`` arguments to be
        consistent with other functions in this module.
    .. versionchanged:: 1.0
        Changed default value of ``partial`` kwarg to be ``False``.
    .. versionchanged:: 1.4
        Updated the regular expression related escaping to try and solve
        various corner cases.
    .. versionadded:: 1.6
        Added the ``shell`` keyword argument.
    """
    func = use_sudo and sudo or run
    # Normalize non-list input to be a list
    if isinstance(text, basestring):
        text = [text]
    for line in text:
        regex = '^' + _escape_for_regex(line) + ('' if partial else '$')
        if (exists(filename, use_sudo=use_sudo) and line
            and contains(filename, regex, use_sudo=use_sudo, escape=False,
                         shell=shell)):
            continue
        line = line.replace("'", r"'\\''") if escape else line
        func("echo '%s' >> %s" % (line, _expand_path(filename)))


def _escape_for_regex(text):
    """Escape ``text`` to allow literal matching using egrep"""
    re_specials = '\\^$|(){}[]*+?.'
    sh_specials = '\\$`"'
    re_chars = []
    sh_chars = []
    for c in text:
        if c in re_specials:
            re_chars.append('\\')
        re_chars.append(c)
    for c in re_chars:
        if c in sh_specials:
            sh_chars.append('\\')
        sh_chars.append(c)
    return ''.join(sh_chars)


def is_win():
    """
    Return True if remote SSH server is running Windows, False otherwise.

    The idea is based on echoing quoted text: \*NIX systems will echo quoted
    text only, while Windows echoes quotation marks as well.
    """
    with settings(hide('everything'), warn_only=True):
        return '"' in run('echo "Will you echo quotation marks"')


def _expand_path(path):
    """
    Return a path expansion

    E.g.    ~/some/path     ->  /home/myuser/some/path
            /user/\*/share   ->  /user/local/share

    More examples can be found here: http://linuxcommand.org/lc3_lts0080.php

    .. versionchanged:: 1.0
        Avoid breaking remote Windows commands which do not support expansion.
    """
    return path if is_win() else '"$(echo %s)"' % path

fabric-1.14.0/fabric/contrib/project.py

"""
Useful non-core functionality, e.g. functions composing multiple operations.
"""
from __future__ import with_statement

from os import getcwd, sep
import os.path
from tempfile import mkdtemp

from fabric.network import needs_host, key_filenames, normalize
from fabric.operations import local, run, sudo, put
from fabric.state import env, output
from fabric.context_managers import cd

__all__ = ['rsync_project', 'upload_project']


@needs_host
def rsync_project(
    remote_dir,
    local_dir=None,
    exclude=(),
    delete=False,
    extra_opts='',
    ssh_opts='',
    capture=False,
    upload=True,
    default_opts='-pthrvz'
):
    """
    Synchronize a remote directory with the current project directory via
    rsync.

    Where ``upload_project()`` makes use of ``scp`` to copy one's entire
    project every time it is invoked, ``rsync_project()`` uses the ``rsync``
    command-line utility, which only transfers files newer than those on the
    remote end.

    ``rsync_project()`` is thus a simple wrapper around ``rsync``; for
    details on how ``rsync`` works, please see its manpage. ``rsync`` must be
    installed on both your local and remote systems in order for this
    operation to work correctly.

    This function makes use of Fabric's ``local()`` operation, and returns
    the output of that function call; thus it will return the stdout, if
    any, of the resultant ``rsync`` call.

    ``rsync_project()`` uses the current Fabric connection parameters (user,
    host, port) by default, adding them to rsync's ssh options (then mixing
    in ``ssh_opts``, if given -- see below.)
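
    As a quick sketch (directory names here are hypothetical), the following
    pushes a local ``static/`` folder's contents into ``/var/www/static`` on
    the current host, deleting remote files which no longer exist locally::

        rsync_project(
            remote_dir='/var/www/static',
            local_dir='static/',
            exclude=('*.pyc', '.git'),
            delete=True,
        )
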
    ``rsync_project()`` takes the following parameters:

    * ``remote_dir``: the only required parameter, this is the path to the
      directory on the remote server. Due to how ``rsync`` is implemented,
      the exact behavior depends on the value of ``local_dir``:

        * If ``local_dir`` ends with a trailing slash, the files will be
          dropped inside of ``remote_dir``. E.g.
          ``rsync_project("/home/username/project/", "foldername/")`` will
          drop the contents of ``foldername`` inside of
          ``/home/username/project``.
        * If ``local_dir`` does **not** end with a trailing slash (and this
          includes the default scenario, when ``local_dir`` is not
          specified), ``remote_dir`` is effectively the "parent" directory,
          and a new directory named after ``local_dir`` will be created
          inside of it. So ``rsync_project("/home/username", "foldername")``
          would create a new directory ``/home/username/foldername`` (if
          needed) and place the files there.

    * ``local_dir``: by default, ``rsync_project`` uses your current working
      directory as the source directory. This may be overridden by
      specifying ``local_dir``, which is a string passed verbatim to
      ``rsync``, and thus may be a single directory (``"my_directory"``) or
      multiple directories (``"dir1 dir2"``). See the ``rsync``
      documentation for details.
    * ``exclude``: optional, may be a single string, or an iterable of
      strings, and is used to pass one or more ``--exclude`` options to
      ``rsync``.
    * ``delete``: a boolean controlling whether ``rsync``'s ``--delete``
      option is used. If True, instructs ``rsync`` to remove remote files
      that no longer exist locally. Defaults to False.
    * ``extra_opts``: an optional, arbitrary string which you may use to
      pass custom arguments or options to ``rsync``.
    * ``ssh_opts``: Like ``extra_opts`` but specifically for the SSH options
      string (rsync's ``--rsh`` flag.)
    * ``capture``: Sent directly into an inner `~fabric.operations.local`
      call.
    * ``upload``: a boolean controlling whether file synchronization is
      performed up or downstream. Upstream by default.
    * ``default_opts``: the default rsync options ``-pthrvz``, override if
      desired (e.g. to remove verbosity, etc).

    Furthermore, this function transparently honors Fabric's port and SSH
    key settings. Calling this function when the current host string
    contains a nonstandard port, or when ``env.key_filename`` is non-empty,
    will use the specified port and/or SSH key filename(s).

    For reference, the approximate ``rsync`` command-line call that is
    constructed by this function is the following::

        rsync [--delete] [--exclude exclude[0][, --exclude[1][, ...]]] \\
            [default_opts] [extra_opts] <local_dir> <host_string>:<remote_dir>

    .. versionadded:: 1.4.0
        The ``ssh_opts`` keyword argument.
    .. versionadded:: 1.4.1
        The ``capture`` keyword argument.
    .. versionadded:: 1.8.0
        The ``default_opts`` keyword argument.
""" # Turn single-string exclude into a one-item list for consistency if not hasattr(exclude, '__iter__'): exclude = (exclude,) # Create --exclude options from exclude list exclude_opts = ' --exclude "%s"' * len(exclude) # Double-backslash-escape exclusions = tuple([str(s).replace('"', '\\\\"') for s in exclude]) # Honor SSH key(s) key_string = "" keys = key_filenames() if keys: key_string = "-i " + " -i ".join(keys) # Port user, host, port = normalize(env.host_string) port_string = "-p %s" % port # RSH rsh_string = "" if env.gateway is None: gateway_opts = "" else: gw_user, gw_host, gw_port = normalize(env.gateway) gw_str = "-A -o \"ProxyCommand=ssh %s -p %s %s@%s nc %s %s\"" gateway_opts = gw_str % ( key_string, gw_port, gw_user, gw_host, host, port ) rsh_parts = [key_string, port_string, ssh_opts, gateway_opts] if any(rsh_parts): rsh_string = "--rsh='ssh %s'" % " ".join(rsh_parts) # Set up options part of string options_map = { 'delete': '--delete' if delete else '', 'exclude': exclude_opts % exclusions, 'rsh': rsh_string, 'default': default_opts, 'extra': extra_opts, } options = "%(delete)s%(exclude)s %(default)s %(extra)s %(rsh)s" % options_map # Get local directory if local_dir is None: local_dir = '../' + getcwd().split(sep)[-1] # Create and run final command string if host.count(':') > 1: # Square brackets are mandatory for IPv6 rsync address, # even if port number is not specified remote_prefix = "[%s@%s]" % (user, host) else: remote_prefix = "%s@%s" % (user, host) if upload: cmd = "rsync %s %s %s:%s" % (options, local_dir, remote_prefix, remote_dir) else: cmd = "rsync %s %s:%s %s" % (options, remote_prefix, remote_dir, local_dir) if output.running: print("[%s] rsync_project: %s" % (env.host_string, cmd)) return local(cmd, capture=capture) def upload_project(local_dir=None, remote_dir="", use_sudo=False): """ Upload the current project to a remote system via ``tar``/``gzip``. ``local_dir`` specifies the local project directory to upload, and defaults to the current working directory. ``remote_dir`` specifies the target directory to upload into (meaning that a copy of ``local_dir`` will appear as a subdirectory of ``remote_dir``) and defaults to the remote user's home directory. ``use_sudo`` specifies which method should be used when executing commands remotely. ``sudo`` will be used if use_sudo is True, otherwise ``run`` will be used. This function makes use of the ``tar`` and ``gzip`` programs/libraries, thus it will not work too well on Win32 systems unless one is using Cygwin or something similar. It will attempt to clean up the local and remote tarfiles when it finishes executing, even in the event of a failure. .. versionchanged:: 1.1 Added the ``local_dir`` and ``remote_dir`` kwargs. .. versionchanged:: 1.7 Added the ``use_sudo`` kwarg. """ runner = use_sudo and sudo or run local_dir = local_dir or os.getcwd() # Remove final '/' in local_dir so that basename() works local_dir = local_dir.rstrip(os.sep) local_path, local_name = os.path.split(local_dir) local_path = local_path or '.' 
tar_file = "%s.tar.gz" % local_name target_tar = os.path.join(remote_dir, tar_file) tmp_folder = mkdtemp() try: tar_path = os.path.join(tmp_folder, tar_file) local("tar -czf %s -C %s %s" % (tar_path, local_path, local_name)) put(tar_path, target_tar, use_sudo=use_sudo) with cd(remote_dir): try: runner("tar -xzf %s" % tar_file) finally: runner("rm -f %s" % tar_file) finally: local("rm -rf %s" % tmp_folder) fabric-1.14.0/fabric/decorators.py000066400000000000000000000166741315011462000167770ustar00rootroot00000000000000""" Convenience decorators for use in fabfiles. """ from __future__ import with_statement import types from functools import wraps try: from Crypto import Random except ImportError: Random = None from fabric import tasks from .context_managers import settings def task(*args, **kwargs): """ Decorator declaring the wrapped function to be a new-style task. May be invoked as a simple, argument-less decorator (i.e. ``@task``) or with arguments customizing its behavior (e.g. ``@task(alias='myalias')``). Please see the :ref:`new-style task ` documentation for details on how to use this decorator. .. versionchanged:: 1.2 Added the ``alias``, ``aliases``, ``task_class`` and ``default`` keyword arguments. See :ref:`task-decorator-arguments` for details. .. versionchanged:: 1.5 Added the ``name`` keyword argument. .. seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.tasks.WrappedCallableTask` """ invoked = bool(not args or kwargs) task_class = kwargs.pop("task_class", tasks.WrappedCallableTask) if not invoked: func, args = args[0], () def wrapper(func): return task_class(func, *args, **kwargs) return wrapper if invoked else wrapper(func) def _wrap_as_new(original, new): if isinstance(original, tasks.Task): return tasks.WrappedCallableTask(new) return new def _list_annotating_decorator(attribute, *values): def attach_list(func): @wraps(func) def inner_decorator(*args, **kwargs): return func(*args, **kwargs) _values = values # Allow for single iterable argument as well as *args if len(_values) == 1 and not isinstance(_values[0], basestring): _values = _values[0] setattr(inner_decorator, attribute, list(_values)) # Don't replace @task new-style task objects with inner_decorator by # itself -- wrap in a new Task object first. inner_decorator = _wrap_as_new(func, inner_decorator) return inner_decorator return attach_list def hosts(*host_list): """ Decorator defining which host or hosts to execute the wrapped function on. For example, the following will ensure that, barring an override on the command line, ``my_func`` will be run on ``host1``, ``host2`` and ``host3``, and with specific users on ``host1`` and ``host3``:: @hosts('user1@host1', 'host2', 'user2@host3') def my_func(): pass `~fabric.decorators.hosts` may be invoked with either an argument list (``@hosts('host1')``, ``@hosts('host1', 'host2')``) or a single, iterable argument (``@hosts(['host1', 'host2'])``). Note that this decorator actually just sets the function's ``.hosts`` attribute, which is then read prior to executing the function. .. versionchanged:: 0.9.2 Allow a single, iterable argument (``@hosts(iterable)``) to be used instead of requiring ``@hosts(*iterable)``. """ return _list_annotating_decorator('hosts', *host_list) def roles(*role_list): """ Decorator defining a list of role names, used to look up host lists. A role is simply defined as a key in `env` whose value is a list of one or more host connection strings. 
For example, the following will ensure that, barring an override on the command line, ``my_func`` will be executed against the hosts listed in the ``webserver`` and ``dbserver`` roles:: env.roledefs.update({ 'webserver': ['www1', 'www2'], 'dbserver': ['db1'] }) @roles('webserver', 'dbserver') def my_func(): pass As with `~fabric.decorators.hosts`, `~fabric.decorators.roles` may be invoked with either an argument list or a single, iterable argument. Similarly, this decorator uses the same mechanism as `~fabric.decorators.hosts` and simply sets ``.roles``. .. versionchanged:: 0.9.2 Allow a single, iterable argument to be used (same as `~fabric.decorators.hosts`). """ return _list_annotating_decorator('roles', *role_list) def runs_once(func): """ Decorator preventing wrapped function from running more than once. By keeping internal state, this decorator allows you to mark a function such that it will only run once per Python interpreter session, which in typical use means "once per invocation of the ``fab`` program". Any function wrapped with this decorator will silently fail to execute the 2nd, 3rd, ..., Nth time it is called, and will return the value of the original run. .. note:: ``runs_once`` does not work with parallel task execution. """ @wraps(func) def decorated(*args, **kwargs): if not hasattr(decorated, 'return_value'): decorated.return_value = func(*args, **kwargs) return decorated.return_value decorated = _wrap_as_new(func, decorated) # Mark as serial (disables parallelism) and return return serial(decorated) def serial(func): """ Forces the wrapped function to always run sequentially, never in parallel. This decorator takes precedence over the global value of :ref:`env.parallel `. However, if a task is decorated with both `~fabric.decorators.serial` *and* `~fabric.decorators.parallel`, `~fabric.decorators.parallel` wins. .. versionadded:: 1.3 """ if not getattr(func, 'parallel', False): func.serial = True return _wrap_as_new(func, func) def parallel(pool_size=None): """ Forces the wrapped function to run in parallel, instead of sequentially. This decorator takes precedence over the global value of :ref:`env.parallel `. It also takes precedence over `~fabric.decorators.serial` if a task is decorated with both. .. versionadded:: 1.3 """ called_without_args = type(pool_size) == types.FunctionType def real_decorator(func): @wraps(func) def inner(*args, **kwargs): # Required for ssh/PyCrypto to be happy in multiprocessing # (as far as we can tell, this is needed even with the extra such # calls in newer versions of paramiko.) if Random: Random.atfork() return func(*args, **kwargs) inner.parallel = True inner.serial = False inner.pool_size = None if called_without_args else pool_size return _wrap_as_new(func, inner) # Allow non-factory-style decorator use (@decorator vs @decorator()) if called_without_args: return real_decorator(pool_size) return real_decorator def with_settings(*arg_settings, **kw_settings): """ Decorator equivalent of ``fabric.context_managers.settings``. Allows you to wrap an entire function as if it was called inside a block with the ``settings`` context manager. This may be useful if you know you want a given setting applied to an entire function body, or wish to retrofit old code without indenting everything. For example, to turn aborts into warnings for an entire task function:: @with_settings(warn_only=True) def foo(): ... .. seealso:: `~fabric.context_managers.settings` .. 
versionadded:: 1.1 """ def outer(func): @wraps(func) def inner(*args, **kwargs): with settings(*arg_settings, **kw_settings): return func(*args, **kwargs) return _wrap_as_new(func, inner) return outer fabric-1.14.0/fabric/docs.py000066400000000000000000000047301315011462000155500ustar00rootroot00000000000000from fabric.tasks import WrappedCallableTask def unwrap_tasks(module, hide_nontasks=False): """ Replace task objects on ``module`` with their wrapped functions instead. Specifically, look for instances of `~fabric.tasks.WrappedCallableTask` and replace them with their ``.wrapped`` attribute (the original decorated function.) This is intended for use with the Sphinx autodoc tool, to be run near the bottom of a project's ``conf.py``. It ensures that the autodoc extension will have full access to the "real" function, in terms of function signature and so forth. Without use of ``unwrap_tasks``, autodoc is unable to access the function signature (though it is able to see e.g. ``__doc__``.) For example, at the bottom of your ``conf.py``:: from fabric.docs import unwrap_tasks import my_package.my_fabfile unwrap_tasks(my_package.my_fabfile) You can go above and beyond, and explicitly **hide** all non-task functions, by saying ``hide_nontasks=True``. This renames all objects failing the "is it a task?" check so they appear to be private, which will then cause autodoc to skip over them. ``hide_nontasks`` is thus useful when you have a fabfile mixing in subroutines with real tasks and want to document *just* the real tasks. If you run this within an actual Fabric-code-using session (instead of within a Sphinx ``conf.py``), please seek immediate medical attention. .. versionadded: 1.5 .. seealso:: `~fabric.tasks.WrappedCallableTask`, `~fabric.decorators.task` """ set_tasks = [] for name, obj in vars(module).items(): if isinstance(obj, WrappedCallableTask): setattr(module, obj.name, obj.wrapped) # Handle situation where a task's real name shadows a builtin. # If the builtin comes after the task in vars().items(), the object # we just setattr'd above will get re-hidden :( set_tasks.append(obj.name) # In the same vein, "privately" named wrapped functions whose task # name is public, needs to get renamed so autodoc picks it up. obj.wrapped.func_name = obj.name else: if name in set_tasks: continue has_docstring = getattr(obj, '__doc__', False) if hide_nontasks and has_docstring and not name.startswith('_'): setattr(module, '_%s' % name, obj) delattr(module, name) fabric-1.14.0/fabric/exceptions.py000066400000000000000000000016751315011462000170060ustar00rootroot00000000000000""" Custom Fabric exception classes. Most are simply distinct Exception subclasses for purposes of message-passing (though typically still in actual error situations.) """ class NetworkError(Exception): # Must allow for calling with zero args/kwargs, since pickle is apparently # stupid with exceptions and tries to call it as such when passed around in # a multiprocessing.Queue. 
def __init__(self, message=None, wrapped=None): self.message = message self.wrapped = wrapped def __str__(self): return self.message or "" def __repr__(self): return "%s(%s) => %r" % ( self.__class__.__name__, self.message, self.wrapped ) class CommandTimeout(Exception): def __init__(self, timeout): self.timeout = timeout message = 'Command failed to finish in %s seconds' % (timeout) self.message = message super(CommandTimeout, self).__init__(message) fabric-1.14.0/fabric/io.py000066400000000000000000000225151315011462000152300ustar00rootroot00000000000000from __future__ import with_statement import sys import time import re import socket from select import select from fabric.state import env, output, win32 from fabric.auth import get_password, set_password import fabric.network from fabric.network import ssh, normalize from fabric.utils import RingBuffer from fabric.exceptions import CommandTimeout if win32: import msvcrt def _endswith(char_list, substring): tail = char_list[-1 * len(substring):] substring = list(substring) return tail == substring def _has_newline(bytelist): return '\r' in bytelist or '\n' in bytelist def output_loop(*args, **kwargs): OutputLooper(*args, **kwargs).loop() class OutputLooper(object): def __init__(self, chan, attr, stream, capture, timeout): self.chan = chan self.stream = stream self.capture = capture self.timeout = timeout self.read_func = getattr(chan, attr) self.prefix = "[%s] %s: " % ( env.host_string, "out" if attr == 'recv' else "err" ) self.printing = getattr(output, 'stdout' if (attr == 'recv') else 'stderr') self.linewise = (env.linewise or env.parallel) self.reprompt = False self.read_size = 4096 self.write_buffer = RingBuffer([], maxlen=len(self.prefix)) def _flush(self, text): self.stream.write(text) # Actually only flush if not in linewise mode. # When linewise is set (e.g. in parallel mode) flushing makes # doubling-up of line prefixes, and other mixed output, more likely. if not env.linewise: self.stream.flush() self.write_buffer.extend(text) def loop(self): """ Loop, reading from ``<chan>.<attr>()``, writing to ``<stream>`` and buffering to ``<capture>``. Will raise `~fabric.exceptions.CommandTimeout` if network timeouts continue to be seen past the defined ``self.timeout`` threshold. (Timeouts before then are considered part of normal short-timeout fast network reading; see Fabric issue #733 for background.) """ # Initialize loop variables initial_prefix_printed = False seen_cr = False line = [] # Allow prefix to be turned off. if not env.output_prefix: self.prefix = "" start = time.time() while True: # Handle actual read try: bytelist = self.read_func(self.read_size) except socket.timeout: elapsed = time.time() - start if self.timeout is not None and elapsed > self.timeout: raise CommandTimeout(timeout=self.timeout) continue # Empty byte == EOS if bytelist == '': # If linewise, ensure we flush any leftovers in the buffer. if self.linewise and line: self._flush(self.prefix) self._flush("".join(line)) break # A None capture variable implies that we're in open_shell() if self.capture is None: # Just print directly -- no prefixes, no capturing, nada # And since we know we're using a pty in this mode, just go # straight to stdout. self._flush(bytelist) # Otherwise, we're in run/sudo and need to handle capturing and # prompts.
else: # Print to user if self.printing: printable_bytes = bytelist # Small state machine to eat \n after \r if printable_bytes[-1] == "\r": seen_cr = True if printable_bytes[0] == "\n" and seen_cr: printable_bytes = printable_bytes[1:] seen_cr = False while _has_newline(printable_bytes) and printable_bytes != "": # at most 1 split ! cr = re.search("(\r\n|\r|\n)", printable_bytes) if cr is None: break end_of_line = printable_bytes[:cr.start(0)] printable_bytes = printable_bytes[cr.end(0):] if not initial_prefix_printed: self._flush(self.prefix) if _has_newline(end_of_line): end_of_line = '' if self.linewise: self._flush("".join(line) + end_of_line + "\n") line = [] else: self._flush(end_of_line + "\n") initial_prefix_printed = False if self.linewise: line += [printable_bytes] else: if not initial_prefix_printed: self._flush(self.prefix) initial_prefix_printed = True self._flush(printable_bytes) # Now we have handled printing, handle interactivity read_lines = re.split(r"(\r|\n|\r\n)", bytelist) for fragment in read_lines: # Store in capture buffer self.capture += fragment # Handle prompts expected, response = self._get_prompt_response() if expected: del self.capture[-1 * len(expected):] self.chan.sendall(str(response) + '\n') else: prompt = _endswith(self.capture, env.sudo_prompt) try_again = (_endswith(self.capture, env.again_prompt + '\n') or _endswith(self.capture, env.again_prompt + '\r\n')) if prompt: self.prompt() elif try_again: self.try_again() # Print trailing new line if the last thing we printed was our line # prefix. if self.prefix and "".join(self.write_buffer) == self.prefix: self._flush('\n') def prompt(self): # Obtain cached password, if any password = get_password(*normalize(env.host_string)) # Remove the prompt itself from the capture buffer. This is # backwards compatible with Fabric 0.9.x behavior; the user # will still see the prompt on their screen (no way to avoid # this) but at least it won't clutter up the captured text. del self.capture[-1 * len(env.sudo_prompt):] # If the password we just tried was bad, prompt the user again. if (not password) or self.reprompt: # Print the prompt and/or the "try again" notice if # output is being hidden. In other words, since we need # the user's input, they need to see why we're # prompting them. if not self.printing: self._flush(self.prefix) if self.reprompt: self._flush(env.again_prompt + '\n' + self.prefix) self._flush(env.sudo_prompt) # Prompt for, and store, password. Give empty prompt so the # initial display "hides" just after the actually-displayed # prompt from the remote end. self.chan.input_enabled = False password = fabric.network.prompt_for_password( prompt=" ", no_colon=True, stream=self.stream ) self.chan.input_enabled = True # Update env.password, env.passwords if necessary user, host, port = normalize(env.host_string) # TODO: in 2.x, make sure to only update sudo-specific password # config values, not login ones. set_password(user, host, port, password) # Reset reprompt flag self.reprompt = False # Send current password down the pipe self.chan.sendall(password + '\n') def try_again(self): # Remove text from capture buffer self.capture = self.capture[:len(env.again_prompt)] # Set state so we re-prompt the user at the next prompt. 
self.reprompt = True def _get_prompt_response(self): """ Iterate through the request prompts dict and return the response and original request if we find a match """ for tup in env.prompts.iteritems(): if _endswith(self.capture, tup[0]): return tup return None, None def input_loop(chan, using_pty): while not chan.exit_status_ready(): if win32: have_char = msvcrt.kbhit() else: r, w, x = select([sys.stdin], [], [], 0.0) have_char = (r and r[0] == sys.stdin) if have_char and chan.input_enabled: # Send all local stdin to remote end's stdin byte = msvcrt.getch() if win32 else sys.stdin.read(1) chan.sendall(byte) # Optionally echo locally, if needed. if not using_pty and env.echo_stdin: # Not using fastprint() here -- it prints as 'user' # output level, don't want it to be accidentally hidden sys.stdout.write(byte) sys.stdout.flush() time.sleep(ssh.io_sleep) fabric-1.14.0/fabric/job_queue.py000066400000000000000000000171161315011462000166000ustar00rootroot00000000000000""" Sliding-window-based job/task queue class (& example of use.) May use ``multiprocessing.Process`` or ``threading.Thread`` objects as queue items, though within Fabric itself only ``Process`` objects are used/supported. """ from __future__ import with_statement import time import Queue from multiprocessing import Process from fabric.network import ssh from fabric.context_managers import settings class JobQueue(object): """ The goal of this class is to make a queue of processes to run, and go through them running X number at any given time. So if the bubble is 5 start with 5 running and move the bubble of running procs along the queue looking something like this: Start ........................... [~~~~~].................... ___[~~~~~]................. _________[~~~~~]........... __________________[~~~~~].. ____________________[~~~~~] ___________________________ End """ def __init__(self, max_running, comms_queue): """ Set up the class with reasonable defaults. """ self._queued = [] self._running = [] self._completed = [] self._num_of_jobs = 0 self._max = max_running self._comms_queue = comms_queue self._finished = False self._closed = False self._debug = False def _all_alive(self): """ Simply states if all procs are alive or not. Needed to determine when to stop looping, and pop dead procs off and add live ones. """ if self._running: return all([x.is_alive() for x in self._running]) else: return False def __len__(self): """ Just going to use number of jobs as the JobQueue length. """ return self._num_of_jobs def close(self): """ A sanity check, so that the need to care about new jobs being added in the last throes of the job queue's run is negated. """ if self._debug: print("job queue closed.") self._closed = True def append(self, process): """ Add the Process() to the queue, so that later it can be checked up on. That is, if the JobQueue is still open. If the queue is closed, this will just silently do nothing. To get data back out of this process, give ``process`` access to the ``multiprocessing.Queue`` instance handed to this ``JobQueue`` as ``comms_queue``. Then ``JobQueue.run`` will include the queue's contents in its return value. """ if not self._closed: self._queued.append(process) self._num_of_jobs += 1 if self._debug: print("job queue appended %s." % process.name) def run(self): """ This is the workhorse. It will take the initial jobs from the _queue, start them, add them to _running, and then go into the main running loop. This loop will check for done procs, if found, move them out of _running into _completed.
It also checks for a _running queue with open spots, which it will then fill as discovered. To end the loop, there have to be no running procs, and no more procs to be run in the queue. This function returns a dictionary keyed by job name, containing each child's exit code and any results it reported. """ def _advance_the_queue(): """ Helper function to do the job of popping a new proc off the queue, starting it, then adding it to the running queue. This will eventually deplete the _queue, which is a condition of stopping the running while loop. It also sets the env.host_string from the job.name, so that Fabric knows that this is the host to be making connections on. """ job = self._queued.pop() if self._debug: print("Popping '%s' off the queue and starting it" % job.name) with settings(clean_revert=True, host_string=job.name, host=job.name): job.start() self._running.append(job) # Prep return value so we can start filling it during main loop results = {} for job in self._queued: results[job.name] = dict.fromkeys(('exit_code', 'results')) if not self._closed: raise Exception("Need to close() before starting.") if self._debug: print("Job queue starting.") while len(self._running) < self._max: _advance_the_queue() # Main loop! while not self._finished: while len(self._running) < self._max and self._queued: _advance_the_queue() if not self._all_alive(): for id, job in enumerate(self._running): if not job.is_alive(): if self._debug: print("Job queue found finished proc: %s." % job.name) done = self._running.pop(id) self._completed.append(done) if self._debug: print("Job queue has %d running." % len(self._running)) if not (self._queued or self._running): if self._debug: print("Job queue finished.") for job in self._completed: job.join() self._finished = True # Each loop pass, try pulling results off the queue to keep its # size down. At this point, we don't actually care if any results # have arrived yet; they will be picked up after the main loop. self._fill_results(results) time.sleep(ssh.io_sleep) # Consume anything left in the results queue. Note that there is no # need to block here, as the main loop ensures that all workers will # already have finished. self._fill_results(results) # Attach exit codes now that we're all done & have joined all jobs for job in self._completed: if isinstance(job, Process): results[job.name]['exit_code'] = job.exitcode return results def _fill_results(self, results): """ Attempt to pull data off self._comms_queue and add to 'results' dict. If no data is available (i.e. the queue is empty), bail immediately. """ while True: try: datum = self._comms_queue.get_nowait() results[datum['name']]['results'] = datum['result'] except Queue.Empty: break #### Sample def try_using(parallel_type): """ This will run the queue through its paces, and show a simple way of using the job queue. """ def print_number(number): """ Simple function to give a simple task to execute.
""" print(number) if parallel_type == "multiprocessing": from multiprocessing import Process as Bucket elif parallel_type == "threading": from threading import Thread as Bucket # Make a job_queue with a bubble of len 5, and have it print verbosely queue = Queue.Queue() jobs = JobQueue(5, queue) jobs._debug = True # Add 20 procs onto the stack for x in range(20): jobs.append(Bucket( target=print_number, args=[x], kwargs={}, )) # Close up the queue and then start it's execution jobs.close() jobs.run() if __name__ == '__main__': try_using("multiprocessing") try_using("threading") fabric-1.14.0/fabric/main.py000066400000000000000000000632161315011462000155500ustar00rootroot00000000000000""" This module contains Fab's `main` method plus related subroutines. `main` is executed as the command line ``fab`` program and takes care of parsing options and commands, loading the user settings file, loading a fabfile, and executing the commands given. The other callables defined in this module are internal only. Anything useful to individuals leveraging Fabric as a library, should be kept elsewhere. """ import getpass import inspect from operator import isMappingType from optparse import OptionParser import os import sys import types # For checking callables against the API, & easy mocking from fabric import api, state, colors from fabric.contrib import console, files, project from fabric.network import disconnect_all, ssh from fabric.state import env_options from fabric.tasks import Task, execute, get_task_details from fabric.task_utils import _Dict, crawl from fabric.utils import abort, indent, warn, _pty_size # One-time calculation of "all internal callables" to avoid doing this on every # check of a given fabfile callable (in is_classic_task()). _modules = [api, project, files, console, colors] _internals = reduce(lambda x, y: x + filter(callable, vars(y).values()), _modules, [] ) # Module recursion cache class _ModuleCache(object): """ Set-like object operating on modules and storing __name__s internally. """ def __init__(self): self.cache = set() def __contains__(self, value): return value.__name__ in self.cache def add(self, value): return self.cache.add(value.__name__) def clear(self): return self.cache.clear() _seen = _ModuleCache() def load_settings(path): """ Take given file path and return dictionary of any key=value pairs found. Usage docs are in sites/docs/usage/fab.rst, in "Settings files." """ if os.path.exists(path): comments = lambda s: s and not s.startswith("#") settings = filter(comments, open(path, 'r')) return dict((k.strip(), v.strip()) for k, _, v in [s.partition('=') for s in settings]) # Handle nonexistent or empty settings file return {} def _is_package(path): """ Is the given path a Python package? """ _exists = lambda s: os.path.exists(os.path.join(path, s)) return ( os.path.isdir(path) and (_exists('__init__.py') or _exists('__init__.pyc')) ) def find_fabfile(names=None): """ Attempt to locate a fabfile, either explicitly or by searching parent dirs. Usage docs are in sites/docs/usage/fabfiles.rst, in "Fabfile discovery." """ # Obtain env value if not given specifically if names is None: names = [state.env.fabfile] # Create .py version if necessary if not names[0].endswith('.py'): names += [names[0] + '.py'] # Does the name contain path elements? 
if os.path.dirname(names[0]): # If so, expand home-directory markers and test for existence for name in names: expanded = os.path.expanduser(name) if os.path.exists(expanded): if name.endswith('.py') or _is_package(expanded): return os.path.abspath(expanded) else: # Otherwise, start in cwd and work downwards towards filesystem root path = '.' # Stop before falling off root of filesystem (should be platform # agnostic) while os.path.split(os.path.abspath(path))[1]: for name in names: joined = os.path.join(path, name) if os.path.exists(joined): if name.endswith('.py') or _is_package(joined): return os.path.abspath(joined) path = os.path.join('..', path) # Implicit 'return None' if nothing was found def is_classic_task(tup): """ Takes (name, object) tuple, returns True if it's a non-Fab public callable. """ name, func = tup try: is_classic = ( callable(func) and (func not in _internals) and not name.startswith('_') and not (inspect.isclass(func) and issubclass(func, Exception)) ) # Handle poorly behaved __eq__ implementations except (ValueError, TypeError): is_classic = False return is_classic def load_fabfile(path, importer=None): """ Import given fabfile path and return (docstring, callables). Specifically, the fabfile's ``__doc__`` attribute (a string) and a dictionary of ``{'name': callable}`` containing all callables which pass the "is a Fabric task" test. """ if importer is None: importer = __import__ # Get directory and fabfile name directory, fabfile = os.path.split(path) # If the directory isn't in the PYTHONPATH, add it so our import will work added_to_path = False index = None if directory not in sys.path: sys.path.insert(0, directory) added_to_path = True # If the directory IS in the PYTHONPATH, move it to the front temporarily, # otherwise other fabfiles -- like Fabric's own -- may scoop the intended # one. else: i = sys.path.index(directory) if i != 0: # Store index for later restoration index = i # Add to front, then remove from original position sys.path.insert(0, directory) del sys.path[i + 1] # Perform the import (trimming off the .py) imported = importer(os.path.splitext(fabfile)[0]) # Remove directory from path if we added it ourselves (just to be neat) if added_to_path: del sys.path[0] # Put back in original index if we moved it if index is not None: sys.path.insert(index + 1, directory) del sys.path[0] # Actually load tasks docstring, new_style, classic, default = load_tasks_from_module(imported) tasks = new_style if state.env.new_style_tasks else classic # Clean up after ourselves _seen.clear() return docstring, tasks, default def load_tasks_from_module(imported): """ Handles loading all of the tasks for a given `imported` module """ # Obey the use of .__all__ if it is present imported_vars = vars(imported) if "__all__" in imported_vars: imported_vars = [(name, imported_vars[name]) for name in \ imported_vars if name in imported_vars["__all__"]] else: imported_vars = imported_vars.items() # Return a two-tuple value. 
First is the documentation, second is a # dictionary of callables only (and don't include Fab operations or # underscored callables) new_style, classic, default = extract_tasks(imported_vars) return imported.__doc__, new_style, classic, default def extract_tasks(imported_vars): """ Handle extracting tasks from a given list of variables """ new_style_tasks = _Dict() classic_tasks = {} default_task = None if 'new_style_tasks' not in state.env: state.env.new_style_tasks = False for tup in imported_vars: name, obj = tup if is_task_object(obj): state.env.new_style_tasks = True # Use instance.name if defined if obj.name and obj.name != 'undefined': new_style_tasks[obj.name] = obj else: obj.name = name new_style_tasks[name] = obj # Handle aliasing if obj.aliases is not None: for alias in obj.aliases: new_style_tasks[alias] = obj # Handle defaults if obj.is_default: default_task = obj elif is_classic_task(tup): classic_tasks[name] = obj elif is_task_module(obj): docs, newstyle, classic, default = load_tasks_from_module(obj) for task_name, task in newstyle.items(): if name not in new_style_tasks: new_style_tasks[name] = _Dict() new_style_tasks[name][task_name] = task if default is not None: new_style_tasks[name].default = default return new_style_tasks, classic_tasks, default_task def is_task_module(a): """ Determine if the provided value is a task module """ #return (type(a) is types.ModuleType and # any(map(is_task_object, vars(a).values()))) if isinstance(a, types.ModuleType) and a not in _seen: # Flag module as seen _seen.add(a) # Signal that we need to check it out return True def is_task_object(a): """ Determine if the provided value is a ``Task`` object. This returning True signals that all tasks within the fabfile module must be Task objects. """ return isinstance(a, Task) and a.use_task_objects def parse_options(): """ Handle command-line options with optparse.OptionParser. Return list of arguments, largely for use in `parse_arguments`. """ # # Initialize # parser = OptionParser( usage=("fab [options] " "[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...")) # # Define options that don't become `env` vars (typically ones which cause # Fabric to do something other than its normal execution, such as # --version) # # Display info about a specific command parser.add_option('-d', '--display', metavar='NAME', help="print detailed info about command NAME" ) # Control behavior of --list LIST_FORMAT_OPTIONS = ('short', 'normal', 'nested') parser.add_option('-F', '--list-format', choices=LIST_FORMAT_OPTIONS, default='normal', metavar='FORMAT', help="formats --list, choices: %s" % ", ".join(LIST_FORMAT_OPTIONS) ) parser.add_option('-I', '--initial-password-prompt', action='store_true', default=False, help="Force password prompt up-front" ) parser.add_option('--initial-sudo-password-prompt', action='store_true', default=False, help="Force sudo password prompt up-front" ) # List Fab commands found in loaded fabfiles/source files parser.add_option('-l', '--list', action='store_true', dest='list_commands', default=False, help="print list of possible commands and exit" ) # Allow setting of arbitrary env vars at runtime. 
parser.add_option('--set', metavar="KEY=VALUE,...", dest='env_settings', default="", help="comma separated KEY=VALUE pairs to set Fab env vars" ) # Like --list, but text processing friendly parser.add_option('--shortlist', action='store_true', dest='shortlist', default=False, help="alias for -F short --list" ) # Version number (optparse gives you --version but we have to do it # ourselves to get -V too. sigh) parser.add_option('-V', '--version', action='store_true', dest='show_version', default=False, help="show program's version number and exit" ) # # Add in options which are also destined to show up as `env` vars. # for option in env_options: parser.add_option(option) # # Finalize # # Return three-tuple of parser + the output from parse_args (opt obj, args) opts, args = parser.parse_args() return parser, opts, args def _is_task(name, value): """ Is the object a task as opposed to e.g. a dict or int? """ return is_classic_task((name, value)) or is_task_object(value) def _sift_tasks(mapping): tasks, collections = [], [] for name, value in mapping.iteritems(): if _is_task(name, value): tasks.append(name) elif isMappingType(value): collections.append(name) tasks = sorted(tasks) collections = sorted(collections) return tasks, collections def _task_names(mapping): """ Flatten & sort task names in a breadth-first fashion. Tasks are always listed before submodules at the same level, but within those two groups, sorting is alphabetical. """ tasks, collections = _sift_tasks(mapping) for collection in collections: module = mapping[collection] if hasattr(module, 'default'): tasks.append(collection) join = lambda x: ".".join((collection, x)) tasks.extend(map(join, _task_names(module))) return tasks def _print_docstring(docstrings, name): if not docstrings: return False docstring = crawl(name, state.commands).__doc__ if isinstance(docstring, basestring): return docstring def _normal_list(docstrings=True): result = [] task_names = _task_names(state.commands) # Want separator between name, description to be straight col max_len = reduce(lambda a, b: max(a, len(b)), task_names, 0) sep = ' ' trail = '...' max_width = _pty_size()[1] - 1 - len(trail) for name in task_names: output = None docstring = _print_docstring(docstrings, name) if docstring: lines = filter(None, docstring.splitlines()) first_line = lines[0].strip() # Truncate it if it's longer than N chars size = max_width - (max_len + len(sep) + len(trail)) if len(first_line) > size: first_line = first_line[:size] + trail output = name.ljust(max_len) + sep + first_line # Or nothing (so just the name) else: output = name result.append(indent(output)) return result def _nested_list(mapping, level=1): result = [] tasks, collections = _sift_tasks(mapping) # Tasks come first result.extend(map(lambda x: indent(x, spaces=level * 4), tasks)) for collection in collections: module = mapping[collection] # Section/module "header" result.append(indent(collection + ":", spaces=level * 4)) # Recurse result.extend(_nested_list(module, level + 1)) return result COMMANDS_HEADER = "Available commands" NESTED_REMINDER = " (remember to call as module.[...].task)" def list_commands(docstring, format_): """ Print all found commands/tasks, then exit. Invoked with ``-l/--list.`` If ``docstring`` is non-empty, it will be printed before the task list. ``format_`` should conform to the options specified in ``LIST_FORMAT_OPTIONS``, e.g. ``"short"``, ``"normal"``. 
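For instance, ``list_commands(None, "short")`` short-circuits and simply returns the flat task-name list, which is handy for e.g. shell tab-completion scripts.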
""" # Short-circuit with simple short output if format_ == "short": return _task_names(state.commands) # Otherwise, handle more verbose modes result = [] # Docstring at top, if applicable if docstring: trailer = "\n" if not docstring.endswith("\n") else "" result.append(docstring + trailer) header = COMMANDS_HEADER if format_ == "nested": header += NESTED_REMINDER result.append(header + ":\n") c = _normal_list() if format_ == "normal" else _nested_list(state.commands) result.extend(c) return result def display_command(name): """ Print command function's docstring, then exit. Invoked with -d/--display. """ # Sanity check command = crawl(name, state.commands) if command is None: msg = "Task '%s' does not appear to exist. Valid task names:\n%s" abort(msg % (name, "\n".join(_normal_list(False)))) # Print out nicely presented docstring if found if hasattr(command, '__details__'): task_details = command.__details__() else: task_details = get_task_details(command) if task_details: print("Displaying detailed information for task '%s':" % name) print('') print(indent(task_details, strip=True)) print('') # Or print notice if not else: print("No detailed information available for task '%s':" % name) sys.exit(0) def _escape_split(sep, argstr): """ Allows for escaping of the separator: e.g. task:arg='foo\, bar' It should be noted that the way bash et. al. do command line parsing, those single quotes are required. """ escaped_sep = r'\%s' % sep if escaped_sep not in argstr: return argstr.split(sep) before, _, after = argstr.partition(escaped_sep) startlist = before.split(sep) # a regular split is fine here unfinished = startlist[-1] startlist = startlist[:-1] # recurse because there may be more escaped separators endlist = _escape_split(sep, after) # finish building the escaped value. we use endlist[0] becaue the first # part of the string sent in recursion is the rest of the escaped value. unfinished += sep + endlist[0] return startlist + [unfinished] + endlist[1:] # put together all the parts def parse_arguments(arguments): """ Parse string list into list of tuples: command, args, kwargs, hosts, roles. See sites/docs/usage/fab.rst, section on "per-task arguments" for details. """ cmds = [] for cmd in arguments: args = [] kwargs = {} hosts = [] roles = [] exclude_hosts = [] if ':' in cmd: cmd, argstr = cmd.split(':', 1) for pair in _escape_split(',', argstr): result = _escape_split('=', pair) if len(result) > 1: k, v = result # Catch, interpret host/hosts/role/roles/exclude_hosts # kwargs if k in ['host', 'hosts', 'role', 'roles', 'exclude_hosts']: if k == 'host': hosts = [v.strip()] elif k == 'hosts': hosts = [x.strip() for x in v.split(';')] elif k == 'role': roles = [v.strip()] elif k == 'roles': roles = [x.strip() for x in v.split(';')] elif k == 'exclude_hosts': exclude_hosts = [x.strip() for x in v.split(';')] # Otherwise, record as usual else: kwargs[k] = v else: args.append(result[0]) cmds.append((cmd, args, kwargs, hosts, roles, exclude_hosts)) return cmds def parse_remainder(arguments): """ Merge list of "remainder arguments" into a single command string. """ return ' '.join(arguments) def update_output_levels(show, hide): """ Update state.output values as per given comma-separated list of key names. For example, ``update_output_levels(show='debug,warnings')`` is functionally equivalent to ``state.output['debug'] = True ; state.output['warnings'] = True``. Conversely, anything given to ``hide`` sets the values to ``False``. 
""" if show: for key in show.split(','): state.output[key] = True if hide: for key in hide.split(','): state.output[key] = False def show_commands(docstring, format, code=0): print("\n".join(list_commands(docstring, format))) sys.exit(code) def main(fabfile_locations=None): """ Main command-line execution loop. """ try: # Parse command line options parser, options, arguments = parse_options() # Handle regular args vs -- args arguments = parser.largs remainder_arguments = parser.rargs # Allow setting of arbitrary env keys. # This comes *before* the "specific" env_options so that those may # override these ones. Specific should override generic, if somebody # was silly enough to specify the same key in both places. # E.g. "fab --set shell=foo --shell=bar" should have env.shell set to # 'bar', not 'foo'. for pair in _escape_split(',', options.env_settings): pair = _escape_split('=', pair) # "--set x" => set env.x to True # "--set x=" => set env.x to "" key = pair[0] value = True if len(pair) == 2: value = pair[1] state.env[key] = value # Update env with any overridden option values # NOTE: This needs to remain the first thing that occurs # post-parsing, since so many things hinge on the values in env. for option in env_options: state.env[option.dest] = getattr(options, option.dest) # Handle --hosts, --roles, --exclude-hosts (comma separated string => # list) for key in ['hosts', 'roles', 'exclude_hosts']: if key in state.env and isinstance(state.env[key], basestring): state.env[key] = state.env[key].split(',') # Feed the env.tasks : tasks that are asked to be executed. state.env['tasks'] = arguments # Handle output control level show/hide update_output_levels(show=options.show, hide=options.hide) # Handle version number option if options.show_version: print("Fabric %s" % state.env.version) print("Paramiko %s" % ssh.__version__) sys.exit(0) # Load settings from user settings file, into shared env dict. state.env.update(load_settings(state.env.rcfile)) # Find local fabfile path or abort fabfile = find_fabfile(fabfile_locations) if not fabfile and not remainder_arguments: abort("""Couldn't find any fabfiles! Remember that -f can be used to specify fabfile path, and use -h for help.""") # Store absolute path to fabfile in case anyone needs it state.env.real_fabfile = fabfile # Load fabfile (which calls its module-level code, including # tweaks to env values) and put its commands in the shared commands # dict default = None if fabfile: docstring, callables, default = load_fabfile(fabfile) state.commands.update(callables) # Handle case where we were called bare, i.e. just "fab", and print # a help message. actions = (options.list_commands, options.shortlist, options.display, arguments, remainder_arguments, default) if not any(actions): parser.print_help() sys.exit(1) # Abort if no commands found if not state.commands and not remainder_arguments: abort("Fabfile didn't contain any commands!") # Now that we're settled on a fabfile, inform user. 
if state.output.debug: if fabfile: print("Using fabfile '%s'" % fabfile) else: print("No fabfile loaded -- remainder command only") # Shortlist is now just an alias for the "short" list format; # it overrides use of --list-format if somebody were to specify both if options.shortlist: options.list_format = 'short' options.list_commands = True # List available commands if options.list_commands: show_commands(docstring, options.list_format) # Handle show (command-specific help) option if options.display: display_command(options.display) # If user didn't specify any commands to run, show help if not (arguments or remainder_arguments or default): parser.print_help() sys.exit(0) # Or should it exit with error (1)? # Parse arguments into commands to run (plus args/kwargs/hosts) commands_to_run = parse_arguments(arguments) # Parse remainders into a faux "command" to execute remainder_command = parse_remainder(remainder_arguments) # Figure out if any specified task names are invalid unknown_commands = [] for tup in commands_to_run: if crawl(tup[0], state.commands) is None: unknown_commands.append(tup[0]) # Abort if any unknown commands were specified if unknown_commands and not state.env.get('skip_unknown_tasks', False): warn("Command(s) not found:\n%s" \ % indent(unknown_commands)) show_commands(None, options.list_format, 1) # Generate remainder command and insert into commands, commands_to_run if remainder_command: r = '' state.commands[r] = lambda: api.run(remainder_command) commands_to_run.append((r, [], {}, [], [], [])) # Ditto for a default, if found if not commands_to_run and default: commands_to_run.append((default.name, [], {}, [], [], [])) # Initial password prompt, if requested if options.initial_password_prompt: prompt = "Initial value for env.password: " state.env.password = getpass.getpass(prompt) # Ditto sudo_password if options.initial_sudo_password_prompt: prompt = "Initial value for env.sudo_password: " state.env.sudo_password = getpass.getpass(prompt) if state.output.debug: names = ", ".join(x[0] for x in commands_to_run) print("Commands to run: %s" % names) # At this point all commands must exist, so execute them in order. for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run: execute( name, hosts=arg_hosts, roles=arg_roles, exclude_hosts=arg_exclude_hosts, *args, **kwargs ) # If we got here, no errors occurred, so print a final note. if state.output.status: print("\nDone.") except SystemExit: # a number of internal functions might raise this one. raise except KeyboardInterrupt: if state.output.status: sys.stderr.write("\nStopped.\n") sys.exit(1) except: sys.excepthook(*sys.exc_info()) # we might leave stale threads if we don't explicitly exit() sys.exit(1) finally: disconnect_all() sys.exit(0) fabric-1.14.0/fabric/network.py000066400000000000000000000635021315011462000163130ustar00rootroot00000000000000""" Classes and subroutines dealing with network connections and related topics. """ from __future__ import with_statement from functools import wraps import getpass import os import re import time import socket import sys from StringIO import StringIO from fabric.auth import get_password, set_password from fabric.utils import handle_prompt_abort, warn from fabric.exceptions import NetworkError try: import warnings warnings.simplefilter('ignore', DeprecationWarning) import paramiko as ssh except ImportError, e: import traceback traceback.print_exc() msg = """ There was a problem importing our SSH library (see traceback above). 
Please make sure all dependencies are installed and importable. """.rstrip() sys.stderr.write(msg + '\n') sys.exit(1) ipv6_regex = re.compile( '^\[?(?P[0-9A-Fa-f:]+(?:%[a-z]+\d+)?)\]?(:(?P\d+))?$') def direct_tcpip(client, host, port): return client.get_transport().open_channel( 'direct-tcpip', (host, int(port)), ('', 0) ) def is_key_load_error(e): return ( e.__class__ is ssh.SSHException and 'Unable to parse key file' in str(e) ) def _tried_enough(tries): from fabric.state import env return tries >= env.connection_attempts def get_gateway(host, port, cache, replace=False): """ Create and return a gateway socket, if one is needed. This function checks ``env`` for gateway or proxy-command settings and returns the necessary socket-like object for use by a final host connection. :param host: Hostname of target server. :param port: Port to connect to on target server. :param cache: A ``HostConnectionCache`` object, in which gateway ``SSHClient`` objects are to be retrieved/cached. :param replace: Whether to forcibly replace a cached gateway client object. :returns: A ``socket.socket``-like object, or ``None`` if none was created. """ from fabric.state import env, output sock = None proxy_command = ssh_config().get('proxycommand', None) if env.gateway: gateway = normalize_to_string(env.gateway) # ensure initial gateway connection if replace or gateway not in cache: if output.debug: print "Creating new gateway connection to %r" % gateway cache[gateway] = connect(*normalize(gateway) + (cache, False)) # now we should have an open gw connection and can ask it for a # direct-tcpip channel to the real target. (bypass cache's own # __getitem__ override to avoid hilarity - this is usually called # within that method.) sock = direct_tcpip(dict.__getitem__(cache, gateway), host, port) elif proxy_command: sock = ssh.ProxyCommand(proxy_command) return sock class HostConnectionCache(dict): """ Dict subclass allowing for caching of host connections/clients. This subclass will intelligently create new client connections when keys are requested, or return previously created connections instead. It also handles creating new socket-like objects when required to implement gateway connections and `ProxyCommand`, and handing them to the inner connection methods. Key values are the same as host specifiers throughout Fabric: optional username + ``@``, mandatory hostname, optional ``:`` + port number. Examples: * ``example.com`` - typical Internet host address. * ``firewall`` - atypical, but still legal, local host address. * ``user@example.com`` - with specific username attached. * ``bob@smith.org:222`` - with specific nonstandard port attached. When the username is not given, ``env.user`` is used. ``env.user`` defaults to the currently running user at startup but may be overwritten by user code or by specifying a command-line flag. Note that differing explicit usernames for the same hostname will result in multiple client connections being made. For example, specifying ``user1@example.com`` will create a connection to ``example.com``, logged in as ``user1``; later specifying ``user2@example.com`` will create a new, 2nd connection as ``user2``. The same applies to ports: specifying two different ports will result in two different connections to the same host being made. If no port is given, 22 is assumed, so ``example.com`` is equivalent to ``example.com:22``. """ def connect(self, key): """ Force a new connection to ``key`` host string. 
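For example, ``cache.connect('deploy@example.com:2202')`` (an illustrative host string) creates and stores a fresh client even if one is already cached for that key.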
""" from fabric.state import env user, host, port = normalize(key) key = normalize_to_string(key) seek_gateway = True # break the loop when the host is gateway itself if env.gateway: seek_gateway = normalize_to_string(env.gateway) != key self[key] = connect( user, host, port, cache=self, seek_gateway=seek_gateway) def __getitem__(self, key): """ Autoconnect + return connection object """ key = normalize_to_string(key) if key not in self: self.connect(key) return dict.__getitem__(self, key) # # Dict overrides that normalize input keys # def __setitem__(self, key, value): return dict.__setitem__(self, normalize_to_string(key), value) def __delitem__(self, key): return dict.__delitem__(self, normalize_to_string(key)) def __contains__(self, key): return dict.__contains__(self, normalize_to_string(key)) def ssh_config(host_string=None): """ Return ssh configuration dict for current env.host_string host value. Memoizes the loaded SSH config file, but not the specific per-host results. This function performs the necessary "is SSH config enabled?" checks and will simply return an empty dict if not. If SSH config *is* enabled and the value of env.ssh_config_path is not a valid file, it will abort. May give an explicit host string as ``host_string``. """ from fabric.state import env dummy = {} if not env.use_ssh_config: return dummy if '_ssh_config' not in env: try: conf = ssh.SSHConfig() path = os.path.expanduser(env.ssh_config_path) with open(path) as fd: conf.parse(fd) env._ssh_config = conf except IOError: warn("Unable to load SSH config file '%s'" % path) return dummy host = parse_host_string(host_string or env.host_string)['host'] return env._ssh_config.lookup(host) def key_filenames(): """ Returns list of SSH key filenames for the current env.host_string. Takes into account ssh_config and env.key_filename, including normalization to a list. Also performs ``os.path.expanduser`` expansion on any key filenames. """ from fabric.state import env keys = env.key_filename # For ease of use, coerce stringish key filename into list if isinstance(env.key_filename, basestring) or env.key_filename is None: keys = [keys] # Strip out any empty strings (such as the default value...meh) keys = filter(bool, keys) # Honor SSH config conf = ssh_config() if 'identityfile' in conf: # Assume a list here as we require Paramiko 1.10+ keys.extend(conf['identityfile']) return map(os.path.expanduser, keys) def key_from_env(passphrase=None): """ Returns a paramiko-ready key from a text string of a private key """ from fabric.state import env, output if 'key' in env: if output.debug: # NOTE: this may not be the most secure thing; OTOH anybody running # the process must by definition have access to the key value, # so only serious problem is if they're logging the output. sys.stderr.write("Trying to honor in-memory key %r\n" % env.key) for pkey_class in (ssh.rsakey.RSAKey, ssh.dsskey.DSSKey): if output.debug: sys.stderr.write("Trying to load it as %s\n" % pkey_class) try: return pkey_class.from_private_key(StringIO(env.key), passphrase) except Exception, e: # File is valid key, but is encrypted: raise it, this will # cause cxn loop to prompt for passphrase & retry if 'Private key file is encrypted' in e: raise # Otherwise, it probably means it wasn't a valid key of this # type, so try the next one. 
else: pass def parse_host_string(host_string): # Split host_string to user (optional) and host/port user_hostport = host_string.rsplit('@', 1) hostport = user_hostport.pop() user = user_hostport[0] if user_hostport and user_hostport[0] else None # Split host/port string to host and optional port # For IPv6 addresses square brackets are mandatory for host/port separation if hostport.count(':') > 1: # Looks like IPv6 address r = ipv6_regex.match(hostport).groupdict() host = r['host'] or None port = r['port'] or None else: # Hostname or IPv4 address host_port = hostport.rsplit(':', 1) host = host_port.pop(0) or None port = host_port[0] if host_port and host_port[0] else None return {'user': user, 'host': host, 'port': port} def normalize(host_string, omit_port=False): """ Normalizes a given host string, returning explicit host, user, port. If ``omit_port`` is given and is True, only the host and user are returned. This function will process SSH config files if Fabric is configured to do so, and will use them to fill in some default values or swap in hostname aliases. Regarding SSH port used: * Ports explicitly given within host strings always win, no matter what. * When the host string lacks a port, SSH-config driven port configurations are used next. * When the SSH config doesn't specify a port (at all - including a default ``Host *`` block), Fabric's internal setting ``env.port`` is consulted. * If ``env.port`` is empty, ``env.default_port`` is checked (which should always be, as one would expect, port ``22``). """ from fabric.state import env # Gracefully handle "empty" input by returning empty output if not host_string: return ('', '') if omit_port else ('', '', '') # Parse host string (need this early on to look up host-specific ssh_config # values) r = parse_host_string(host_string) host = r['host'] # Env values (using defaults if somehow earlier defaults were replaced with # empty values) user = env.user or env.local_user # SSH config data conf = ssh_config(host_string) # Only use ssh_config values if the env value appears unmodified from # the true defaults. If the user has tweaked them, that new value # takes precedence. if user == env.local_user and 'user' in conf: user = conf['user'] # Also override host if needed if 'hostname' in conf: host = conf['hostname'] # Merge explicit user/port values with the env/ssh_config derived ones # (Host is already done at this point.) user = r['user'] or user if omit_port: return user, host # determine port from ssh config if enabled ssh_config_port = None if env.use_ssh_config: ssh_config_port = conf.get('port', None) # port priority order (as in docstring) port = r['port'] or ssh_config_port or env.port or env.default_port return user, host, port def to_dict(host_string): user, host, port = normalize(host_string) return { 'user': user, 'host': host, 'port': port, 'host_string': host_string } def from_dict(arg): return join_host_strings(arg['user'], arg['host'], arg['port']) def denormalize(host_string): """ Strips out default values for the given host string. If the user part is the default user, it is removed; if the port is port 22, it also is removed. 
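For example, assuming ``env.user == 'deploy'`` (an illustrative value), ``denormalize('deploy@example.com:22')`` returns just ``'example.com'``, while ``denormalize('admin@example.com:2202')`` comes back unchanged.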
""" from fabric.state import env r = parse_host_string(host_string) user = '' if r['user'] is not None and r['user'] != env.user: user = r['user'] + '@' port = '' if r['port'] is not None and r['port'] != '22': port = ':' + r['port'] host = r['host'] host = '[%s]' % host if port and host.count(':') > 1 else host return user + host + port def join_host_strings(user, host, port=None): """ Turns user/host/port strings into ``user@host:port`` combined string. This function is not responsible for handling missing user/port strings; for that, see the ``normalize`` function. If ``host`` looks like IPv6 address, it will be enclosed in square brackets If ``port`` is omitted, the returned string will be of the form ``user@host``. """ if port: # Square brackets are necessary for IPv6 host/port separation template = "%s@[%s]:%s" if host.count(':') > 1 else "%s@%s:%s" return template % (user, host, port) else: return "%s@%s" % (user, host) def normalize_to_string(host_string): """ normalize() returns a tuple; this returns another valid host string. """ return join_host_strings(*normalize(host_string)) def connect(user, host, port, cache, seek_gateway=True): """ Create and return a new SSHClient instance connected to given host. :param user: Username to connect as. :param host: Network hostname. :param port: SSH daemon port. :param cache: A ``HostConnectionCache`` instance used to cache/store gateway hosts when gatewaying is enabled. :param seek_gateway: Whether to try setting up a gateway socket for this connection. Used so the actual gateway connection can prevent recursion. """ from fabric.state import env, output # # Initialization # # Init client client = ssh.SSHClient() # Load system hosts file (e.g. /etc/ssh/ssh_known_hosts) known_hosts = env.get('system_known_hosts') if known_hosts: client.load_system_host_keys(known_hosts) # Load known host keys (e.g. ~/.ssh/known_hosts) unless user says not to. if not env.disable_known_hosts: client.load_system_host_keys() # Unless user specified not to, accept/add new, unknown host keys if not env.reject_unknown_hosts: client.set_missing_host_key_policy(ssh.AutoAddPolicy()) # # Connection attempt loop # # Initialize loop variables connected = False password = get_password(user, host, port, login_only=True) tries = 0 sock = None # Loop until successful connect (keep prompting for new password) while not connected: # Attempt connection try: tries += 1 # (Re)connect gateway socket, if needed. # Nuke cached client object if not on initial try. if seek_gateway: sock = get_gateway(host, port, cache, replace=tries > 0) # Set up kwargs (this lets us skip GSS-API kwargs unless explicitly # set; otherwise older Paramiko versions will be cranky.) kwargs = dict( hostname=host, port=int(port), username=user, password=password, pkey=key_from_env(password), key_filename=key_filenames(), timeout=env.timeout, allow_agent=not env.no_agent, look_for_keys=not env.no_keys, sock=sock, ) for suffix in ('auth', 'deleg_creds', 'kex'): name = "gss_" + suffix val = env.get(name, None) if val is not None: kwargs[name] = val # Ready to connect client.connect(**kwargs) connected = True # set a keepalive if desired if env.keepalive: client.get_transport().set_keepalive(env.keepalive) return client # BadHostKeyException corresponds to key mismatch, i.e. what on the # command line results in the big banner error about man-in-the-middle # attacks. except ssh.BadHostKeyException, e: raise NetworkError("Host key for %s did not match pre-existing key! 
Server's key was changed recently, or possible man-in-the-middle attack." % host, e) # Prompt for new password to try on auth failure except ( ssh.AuthenticationException, ssh.PasswordRequiredException, ssh.SSHException ), e: msg = str(e) # If we get SSHException and the exception message indicates # SSH protocol banner read failures, assume it's caused by the # server load and try again. # # If we are using a gateway, we will get a ChannelException if # connection to the downstream host fails. We should retry. if (e.__class__ is ssh.SSHException \ and msg == 'Error reading SSH protocol banner') \ or e.__class__ is ssh.ChannelException: if _tried_enough(tries): raise NetworkError(msg, e) continue # For whatever reason, empty password + no ssh key or agent # results in an SSHException instead of an # AuthenticationException. Since it's difficult to do # otherwise, we must assume empty password + SSHException == # auth exception. # # Conversely: if we get SSHException and there # *was* a password -- it is probably something non auth # related, and should be sent upwards. (This is not true if the # exception message does indicate key parse problems.) # # This also holds true for rejected/unknown host keys: we have to # guess based on other heuristics. if ( e.__class__ is ssh.SSHException and ( password or msg.startswith('Unknown server') or "not found in known_hosts" in msg ) and not is_key_load_error(e) ): raise NetworkError(msg, e) # Otherwise, assume an auth exception, and prompt for new/better # password. # Paramiko doesn't handle prompting for locked private # keys (i.e. keys with a passphrase and not loaded into an agent) # so we have to detect this and tweak our prompt slightly. # (Otherwise, however, the logic flow is the same, because # ssh's connect() method overrides the password argument to be # either the login password OR the private key passphrase. Meh.) # # NOTE: This will come up if you normally use a # passphrase-protected private key with ssh-agent, and enter an # incorrect remote username, because ssh.connect: # * Tries the agent first, which will fail as you gave the wrong # username, so obviously any loaded keys aren't gonna work for a # nonexistent remote account; # * Then tries the on-disk key file, which is passphrased; # * Realizes there's no password to try unlocking that key with, # because you didn't enter a password, because you're using # ssh-agent; # * In this condition (trying a key file, password is None) # ssh raises PasswordRequiredException. text = None if e.__class__ is ssh.PasswordRequiredException \ or is_key_load_error(e): # NOTE: we can't easily say WHICH key's passphrase is needed, # because ssh doesn't provide us with that info, and # env.key_filename may be a list of keys, so we can't know # which one raised the exception. Best not to try. prompt = "[%s] Passphrase for private key" text = prompt % env.host_string password = prompt_for_password(text) # Update env.password, env.passwords if empty set_password(user, host, port, password) # Ctrl-D / Ctrl-C for exit # TODO: this may no longer actually serve its original purpose and may # also hide TypeErrors from paramiko. Double check in v2. 
except (EOFError, TypeError): # Print a newline (in case user was sitting at prompt) print('') sys.exit(0) # Handle DNS error / name lookup failure except socket.gaierror, e: raise NetworkError('Name lookup failed for %s' % host, e) # Handle timeouts and retries, including generic errors # NOTE: In 2.6, socket.error subclasses IOError except socket.error, e: not_timeout = type(e) is not socket.timeout giving_up = _tried_enough(tries) # Baseline error msg for when debug is off msg = "Timed out trying to connect to %s" % host # Expanded for debug on err = msg + " (attempt %s of %s" % (tries, env.connection_attempts) if giving_up: err += ", giving up" err += ")" # Debuggin' if output.debug: sys.stderr.write(err + '\n') # Having said our piece, try again if not giving_up: # Sleep if it wasn't a timeout, so we still get timeout-like # behavior if not_timeout: time.sleep(env.timeout) continue # Override error msg if we were retrying other errors if not_timeout: msg = "Low level socket error connecting to host %s on port %s: %s" % ( host, port, e[1] ) # Here, all attempts failed. Tweak error msg to show # tries. # TODO: find good humanization module, jeez s = "s" if env.connection_attempts > 1 else "" msg += " (tried %s time%s)" % (env.connection_attempts, s) raise NetworkError(msg, e) # Ensure that if we terminated without connecting and we were given an # explicit socket, close it out. finally: if not connected and sock is not None: sock.close() def _password_prompt(prompt, stream): # NOTE: Using encode-to-ascii to prevent (Windows, at least) getpass from # choking if given Unicode. return getpass.getpass(prompt.encode('ascii', 'ignore'), stream) def prompt_for_password(prompt=None, no_colon=False, stream=None): """ Prompts for and returns a new password if required; otherwise, returns None. A trailing colon is appended unless ``no_colon`` is True. If the user supplies an empty password, the user will be re-prompted until they enter a non-empty password. ``prompt_for_password`` autogenerates the user prompt based on the current host being connected to. To override this, specify a string value for ``prompt``. ``stream`` is the stream the prompt will be printed to; if not given, defaults to ``sys.stderr``. """ from fabric.state import env handle_prompt_abort("a connection or sudo password") stream = stream or sys.stderr # Construct prompt default = "[%s] Login password for '%s'" % (env.host_string, env.user) password_prompt = prompt if (prompt is not None) else default if not no_colon: password_prompt += ": " # Get new password value new_password = _password_prompt(password_prompt, stream) # Otherwise, loop until user gives us a non-empty password (to prevent # returning the empty string, and to avoid unnecessary network overhead.) while not new_password: print("Sorry, you can't enter an empty password. Please try again.") new_password = _password_prompt(password_prompt, stream) return new_password def needs_host(func): """ Prompt user for value of ``env.host_string`` when ``env.host_string`` is empty. This decorator is basically a safety net for silly users who forgot to specify the host/host list in one way or another. It should be used to wrap operations which require a network connection. Due to how we execute commands per-host in ``main()``, it's not possible to specify multiple hosts at this point in time, so only a single host will be prompted for. Because this decorator sets ``env.host_string``, it will prompt once (and only once) per command. 
As ``main()`` clears ``env.host_string`` between commands, this decorator will also end up prompting the user once per command (in the case where multiple commands have no hosts set, of course.) """ from fabric.state import env @wraps(func) def host_prompting_wrapper(*args, **kwargs): while not env.get('host_string', False): handle_prompt_abort("the target host connection string") host_string = raw_input("No hosts found. Please specify (single)" " host string for connection: ") env.update(to_dict(host_string)) return func(*args, **kwargs) host_prompting_wrapper.undecorated = func return host_prompting_wrapper def disconnect_all(): """ Disconnect from all currently connected servers. Used at the end of ``fab``'s main loop, and also intended for use by library users. """ from fabric.state import connections, output # Explicitly disconnect from all servers for key in connections.keys(): if output.status: # Here we can't use the py3k print(x, end=" ") # because 2.5 backwards compatibility sys.stdout.write("Disconnecting from %s... " % denormalize(key)) connections[key].close() del connections[key] if output.status: sys.stdout.write("done.\n") fabric-1.14.0/fabric/operations.py000066400000000000000000001503221315011462000170020ustar00rootroot00000000000000""" Functions to be used in fabfiles and other non-core code, such as run()/sudo(). """ from __future__ import with_statement import errno import os import os.path import posixpath import re import subprocess import sys import time from glob import glob from contextlib import closing, contextmanager from fabric.context_managers import (settings, char_buffered, hide, quiet as quiet_manager, warn_only as warn_only_manager) from fabric.io import output_loop, input_loop from fabric.network import needs_host, ssh, ssh_config from fabric.sftp import SFTP from fabric.state import env, connections, output, win32, default_channel from fabric.thread_handling import ThreadHandler from fabric.utils import ( abort, error, handle_prompt_abort, indent, _pty_size, warn, apply_lcwd, RingBuffer, ) def _shell_escape(string): """ Escape double quotes, backticks and dollar signs in given ``string``. For example:: >>> _shell_escape('abc$') 'abc\\\\$' >>> _shell_escape('"') '\\\\"' """ for char in ('"', '$', '`'): string = string.replace(char, '\%s' % char) return string class _AttributeString(str): """ Simple string subclass to allow arbitrary attribute access. """ @property def stdout(self): return str(self) class _AttributeList(list): """ Like _AttributeString, but for lists. """ pass # Can't wait till Python versions supporting 'def func(*args, foo=bar)' become # widespread :( def require(*keys, **kwargs): """ Check for given keys in the shared environment dict and abort if not found. Positional arguments should be strings signifying what env vars should be checked for. If any of the given arguments do not exist, Fabric will abort execution and print the names of the missing keys. The optional keyword argument ``used_for`` may be a string, which will be printed in the error output to inform users why this requirement is in place. ``used_for`` is printed as part of a string similar to:: "Th(is|ese) variable(s) (are|is) used for %s" so format it appropriately. The optional keyword argument ``provided_by`` may be a list of functions or function names or a single function or function name which the user should be able to execute in order to set the key or keys; it will be included in the error output if requirements are not met. 
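A minimal fabfile sketch of the ``provided_by`` mechanic (the task and key names here are hypothetical, not part of Fabric itself):: from fabric.api import env, require def set_hosts(): env.hosts = ['web1.example.com', 'web2.example.com'] def deploy(): require('hosts', provided_by=[set_hosts]) # Safe to proceed: env.hosts exists and is non-empty.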
Note: it is assumed that the keyword arguments apply to all given keys as a group. If you feel the need to specify more than one ``used_for``, for example, you should break your logic into multiple calls to ``require()``. .. versionchanged:: 1.1 Allow iterable ``provided_by`` values instead of just single values. """ # If all keys exist and are non-empty, we're good, so keep going. missing_keys = filter(lambda x: x not in env or (x in env and isinstance(env[x], (dict, list, tuple, set)) and not env[x]), keys) if not missing_keys: return # Pluralization if len(missing_keys) > 1: variable = "variables were" used = "These variables are" else: variable = "variable was" used = "This variable is" # Regardless of kwargs, print what was missing. (Be graceful if used outside # of a command.) if 'command' in env: prefix = "The command '%s' failed because the " % env.command else: prefix = "The " msg = "%sfollowing required environment %s not defined:\n%s" % ( prefix, variable, indent(missing_keys) ) # Print used_for if given if 'used_for' in kwargs: msg += "\n\n%s used for %s" % (used, kwargs['used_for']) # And print provided_by if given if 'provided_by' in kwargs: funcs = kwargs['provided_by'] # non-iterable is given, treat it as a list of this single item if not hasattr(funcs, '__iter__'): funcs = [funcs] if len(funcs) > 1: command = "one of the following commands" else: command = "the following command" to_s = lambda obj: getattr(obj, '__name__', str(obj)) provided_by = [to_s(obj) for obj in funcs] msg += "\n\nTry running %s prior to this one, to fix the problem:\n%s"\ % (command, indent(provided_by)) abort(msg) def prompt(text, key=None, default='', validate=None): """ Prompt user with ``text`` and return the input (like ``raw_input``). A single space character will be appended for convenience, but nothing else. Thus, you may want to end your prompt text with a question mark or a colon, e.g. ``prompt("What hostname?")``. If ``key`` is given, the user's input will be stored as ``env.`` in addition to being returned by `prompt`. If the key already existed in ``env``, its value will be overwritten and a warning printed to the user. If ``default`` is given, it is displayed in square brackets and used if the user enters nothing (i.e. presses Enter without entering any text). ``default`` defaults to the empty string. If non-empty, a space will be appended, so that a call such as ``prompt("What hostname?", default="foo")`` would result in a prompt of ``What hostname? [foo]`` (with a trailing space after the ``[foo]``.) The optional keyword argument ``validate`` may be a callable or a string: * If a callable, it is called with the user's input, and should return the value to be stored on success. On failure, it should raise an exception with an exception message, which will be printed to the user. * If a string, the value passed to ``validate`` is used as a regular expression. It is thus recommended to use raw strings in this case. Note that the regular expression, if it is not fully matching (bounded by ``^`` and ``$``) it will be made so. In other words, the input must fully match the regex. Either way, `prompt` will re-prompt until validation passes (or the user hits ``Ctrl-C``). .. note:: `~fabric.operations.prompt` honors :ref:`env.abort_on_prompts ` and will call `~fabric.utils.abort` instead of prompting if that flag is set to ``True``. If you want to block on user input regardless, try wrapping with `~fabric.context_managers.settings`. 
Examples:: # Simplest form: environment = prompt('Please specify target environment: ') # With default, and storing as env.dish: prompt('Specify favorite dish: ', 'dish', default='spam & eggs') # With validation, i.e. requiring integer input: prompt('Please specify process nice level: ', key='nice', validate=int) # With validation against a regular expression: release = prompt('Please supply a release name', validate=r'^\w+-\d+(\.\d+)?$') # Prompt regardless of the global abort-on-prompts setting: with settings(abort_on_prompts=False): prompt('I seriously need an answer on this! ') """ handle_prompt_abort("a user-specified prompt() call") # Store previous env value for later display, if necessary if key: previous_value = env.get(key) # Set up default display default_str = "" if default != '': default_str = " [%s] " % str(default).strip() else: default_str = " " # Construct full prompt string prompt_str = text.strip() + default_str # Loop until we pass validation value = None while value is None: # Get input value = raw_input(prompt_str) or default # Handle validation if validate: # Callable if callable(validate): # Callable validate() must raise an exception if validation # fails. try: value = validate(value) except Exception, e: # Reset value so we stay in the loop value = None print("Validation failed for the following reason:") print(indent(e.message) + "\n") # String / regex must match and will be empty if validation fails. else: # Need to transform regex into full-matching one if it's not. if not validate.startswith('^'): validate = r'^' + validate if not validate.endswith('$'): validate += r'$' result = re.findall(validate, value) if not result: print("Regular expression validation failed: '%s' does not match '%s'\n" % (value, validate)) # Reset value so we stay in the loop value = None # At this point, value must be valid, so update env if necessary if key: env[key] = value # Print warning if we overwrote some other value if key and previous_value is not None and previous_value != value: warn("overwrote previous env variable '%s'; used to be '%s', is now '%s'." % ( key, previous_value, value )) # And return the value, too, just in case someone finds that useful. return value @needs_host def put(local_path=None, remote_path=None, use_sudo=False, mirror_local_mode=False, mode=None, use_glob=True, temp_dir=""): """ Upload one or more files to a remote host. As with the OpenSSH ``sftp`` program, `.put` will overwrite pre-existing remote files without requesting confirmation. `~fabric.operations.put` returns an iterable containing the absolute file paths of all remote files uploaded. This iterable also exhibits a ``.failed`` attribute containing any local file paths which failed to upload (and may thus be used as a boolean test.) You may also check ``.succeeded`` which is equivalent to ``not .failed``. ``local_path`` may be a relative or absolute local file or directory path, and may contain shell-style wildcards, as understood by the Python ``glob`` module (give ``use_glob=False`` to disable this behavior). Tilde expansion (as implemented by ``os.path.expanduser``) is also performed. ``local_path`` may alternately be a file-like object, such as the result of ``open('path')`` or a ``StringIO`` instance. .. note:: In this case, `~fabric.operations.put` will attempt to read the entire contents of the file-like object by rewinding it using ``seek`` (and will use ``tell`` afterwards to preserve the previous file position). 
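As a brief sketch of the file-like usage just described (the remote path is illustrative):: from StringIO import StringIO put(StringIO("example contents\n"), '/tmp/example.txt')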
``remote_path`` may also be a relative or absolute location, but applied to the remote host. Relative paths are relative to the remote user's home directory, but tilde expansion (e.g. ``~/.ssh/``) will also be performed if necessary. An empty string, in either path argument, will be replaced by the appropriate end's current working directory. While the SFTP protocol (which `put` uses) has no direct ability to upload files to locations not owned by the connecting user, you may specify ``use_sudo=True`` to work around this. When set, this setting causes `put` to upload the local files to a temporary location on the remote end (defaults to remote user's ``$HOME``; this may be overridden via ``temp_dir``), and then use `sudo` to move them to ``remote_path``. In some use cases, it is desirable to force a newly uploaded file to match the mode of its local counterpart (such as when uploading executable scripts). To do this, specify ``mirror_local_mode=True``. Alternately, you may use the ``mode`` kwarg to specify an exact mode, in the same vein as ``os.chmod``, such as an exact octal number (``0755``) or a string representing one (``"0755"``). `~fabric.operations.put` will honor `~fabric.context_managers.cd`, so relative values in ``remote_path`` will be prepended by the current remote working directory, if applicable. Thus, for example, the below snippet would attempt to upload to ``/tmp/files/test.txt`` instead of ``~/files/test.txt``:: with cd('/tmp'): put('/path/to/local/test.txt', 'files') Use of `~fabric.context_managers.lcd` will affect ``local_path`` in the same manner. Examples:: put('bin/project.zip', '/tmp/project.zip') put('*.py', 'cgi-bin/') put('index.html', 'index.html', mode=0755) .. note:: If a file-like object such as StringIO has a ``name`` attribute, that will be used in Fabric's printed output instead of the default ``<file obj>`` .. versionchanged:: 1.0 Now honors the remote working directory as manipulated by `~fabric.context_managers.cd`, and the local working directory as manipulated by `~fabric.context_managers.lcd`. .. versionchanged:: 1.0 Now allows file-like objects in the ``local_path`` argument. .. versionchanged:: 1.0 Directories may be specified in the ``local_path`` argument and will trigger recursive uploads. .. versionchanged:: 1.0 Return value is now an iterable of uploaded remote file paths which also exhibits the ``.failed`` and ``.succeeded`` attributes. .. versionchanged:: 1.5 Allow a ``name`` attribute on file-like objects for log output .. versionchanged:: 1.7 Added ``use_glob`` option to allow disabling of globbing. 
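One further hedged sketch, showing the ``use_sudo`` "bounce" described above (the paths and ``temp_dir`` value are illustrative):: put('nginx.conf', '/etc/nginx/nginx.conf', use_sudo=True, temp_dir='/tmp')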
""" # Handle empty local path local_path = local_path or os.getcwd() # Test whether local_path is a path or a file-like object local_is_path = not (hasattr(local_path, 'read') \ and callable(local_path.read)) ftp = SFTP(env.host_string) with closing(ftp) as ftp: home = ftp.normalize('.') # Empty remote path implies cwd remote_path = remote_path or home # Expand tildes if remote_path.startswith('~'): remote_path = remote_path.replace('~', home, 1) # Honor cd() (assumes Unix style file paths on remote end) if not os.path.isabs(remote_path) and env.get('cwd'): remote_path = env.cwd.rstrip('/') + '/' + remote_path if local_is_path: # Apply lcwd, expand tildes, etc local_path = os.path.expanduser(local_path) local_path = apply_lcwd(local_path, env) if use_glob: # Glob local path names = glob(local_path) else: # Check if file exists first so ValueError gets raised if os.path.exists(local_path): names = [local_path] else: names = [] else: names = [local_path] # Make sure local arg exists if local_is_path and not names: err = "'%s' is not a valid local path or glob." % local_path raise ValueError(err) # Sanity check and wierd cases if ftp.exists(remote_path): if local_is_path and len(names) != 1 and not ftp.isdir(remote_path): raise ValueError("'%s' is not a directory" % remote_path) # Iterate over all given local files remote_paths = [] failed_local_paths = [] for lpath in names: try: if local_is_path and os.path.isdir(lpath): p = ftp.put_dir(lpath, remote_path, use_sudo, mirror_local_mode, mode, temp_dir) remote_paths.extend(p) else: p = ftp.put(lpath, remote_path, use_sudo, mirror_local_mode, mode, local_is_path, temp_dir) remote_paths.append(p) except Exception, e: msg = "put() encountered an exception while uploading '%s'" failure = lpath if local_is_path else "" failed_local_paths.append(failure) error(message=msg % lpath, exception=e) ret = _AttributeList(remote_paths) ret.failed = failed_local_paths ret.succeeded = not ret.failed return ret @needs_host def get(remote_path, local_path=None, use_sudo=False, temp_dir=""): """ Download one or more files from a remote host. `~fabric.operations.get` returns an iterable containing the absolute paths to all local files downloaded, which will be empty if ``local_path`` was a StringIO object (see below for more on using StringIO). This object will also exhibit a ``.failed`` attribute containing any remote file paths which failed to download, and a ``.succeeded`` attribute equivalent to ``not .failed``. ``remote_path`` is the remote file or directory path to download, which may contain shell glob syntax, e.g. ``"/var/log/apache2/*.log"``, and will have tildes replaced by the remote home directory. Relative paths will be considered relative to the remote user's home directory, or the current remote working directory as manipulated by `~fabric.context_managers.cd`. If the remote path points to a directory, that directory will be downloaded recursively. ``local_path`` is the local file path where the downloaded file or files will be stored. If relative, it will honor the local current working directory as manipulated by `~fabric.context_managers.lcd`. It may be interpolated, using standard Python dict-based interpolation, with the following variables: * ``host``: The value of ``env.host_string``, eg ``myhostname`` or ``user@myhostname-222`` (the colon between hostname and port is turned into a dash to maximize filesystem compatibility) * ``dirname``: The directory part of the remote file path, e.g. 
the ``src/projectname`` in ``src/projectname/utils.py``. * ``basename``: The filename part of the remote file path, e.g. the ``utils.py`` in ``src/projectname/utils.py`` * ``path``: The full remote path, e.g. ``src/projectname/utils.py``. While the SFTP protocol (which `get` uses) has no direct ability to download files from locations not owned by the connecting user, you may specify ``use_sudo=True`` to work around this. When set, this setting allows `get` to copy (using sudo) the remote files to a temporary location on the remote end (defaults to remote user's ``$HOME``; this may be overridden via ``temp_dir``), and then download them to ``local_path``. .. note:: When ``remote_path`` is an absolute directory path, only the inner directories will be recreated locally and passed into the above variables. So for example, ``get('/var/log', '%(path)s')`` would start writing out files like ``apache2/access.log``, ``postgresql/8.4/postgresql.log``, etc, in the local working directory. It would **not** write out e.g. ``var/log/apache2/access.log``. Additionally, when downloading a single file, ``%(dirname)s`` and ``%(path)s`` do not make as much sense and will be empty and equivalent to ``%(basename)s``, respectively. Thus a call like ``get('/var/log/apache2/access.log', '%(path)s')`` will save a local file named ``access.log``, not ``var/log/apache2/access.log``. This behavior is intended to be consistent with the command-line ``scp`` program. If left blank, ``local_path`` defaults to ``"%(host)s/%(path)s"`` in order to be safe for multi-host invocations. .. warning:: If your ``local_path`` argument does not contain ``%(host)s`` and your `~fabric.operations.get` call runs against multiple hosts, your local files will be overwritten on each successive run! If ``local_path`` does not make use of the above variables (i.e. if it is a simple, explicit file path) it will act similar to ``scp`` or ``cp``, overwriting pre-existing files if necessary, downloading into a directory if given (e.g. ``get('/path/to/remote_file.txt', 'local_directory')`` will create ``local_directory/remote_file.txt``) and so forth. ``local_path`` may alternately be a file-like object, such as the result of ``open('path', 'w')`` or a ``StringIO`` instance. .. note:: Attempting to `get` a directory into a file-like object is not valid and will result in an error. .. note:: This function will use ``seek`` and ``tell`` to overwrite the entire contents of the file-like object, in order to be consistent with the behavior of `~fabric.operations.put` (which also considers the entire file). However, unlike `~fabric.operations.put`, the file pointer will not be restored to its previous location, as that doesn't make as much sense here and/or may not even be possible. .. note:: If a file-like object such as StringIO has a ``name`` attribute, that will be used in Fabric's printed output instead of the default ``<file obj>`` .. versionchanged:: 1.0 Now honors the remote working directory as manipulated by `~fabric.context_managers.cd`, and the local working directory as manipulated by `~fabric.context_managers.lcd`. .. versionchanged:: 1.0 Now allows file-like objects in the ``local_path`` argument. .. versionchanged:: 1.0 ``local_path`` may now contain interpolated path- and host-related variables. .. versionchanged:: 1.0 Directories may be specified in the ``remote_path`` argument and will trigger recursive downloads. .. 
versionchanged:: 1.0 Return value is now an iterable of downloaded local file paths, which also exhibits the ``.failed`` and ``.succeeded`` attributes. .. versionchanged:: 1.5 Allow a ``name`` attribute on file-like objects for log output """ # Handle empty local path / default kwarg value local_path = local_path or "%(host)s/%(path)s" # Test whether local_path is a path or a file-like object local_is_path = not (hasattr(local_path, 'write') \ and callable(local_path.write)) # Honor lcd() where it makes sense if local_is_path: local_path = apply_lcwd(local_path, env) ftp = SFTP(env.host_string) with closing(ftp) as ftp: home = ftp.normalize('.') # Expand home directory markers (tildes, etc) if remote_path.startswith('~'): remote_path = remote_path.replace('~', home, 1) if local_is_path: local_path = os.path.expanduser(local_path) # Honor cd() (assumes Unix style file paths on remote end) if not os.path.isabs(remote_path): # Honor cwd if it's set (usually by with cd():) if env.get('cwd'): remote_path_escaped = env.cwd.rstrip('/') remote_path_escaped = remote_path_escaped.replace('\\ ', ' ') remote_path = remote_path_escaped + '/' + remote_path # Otherwise, be relative to remote home directory (SFTP server's # '.') else: remote_path = posixpath.join(home, remote_path) # Track final local destination files so we can return a list local_files = [] failed_remote_files = [] try: # Glob remote path if necessary if '*' in remote_path or '?' in remote_path: names = ftp.glob(remote_path) # Handle "file not found" errors (like Paramiko does if we # explicitly try to grab a glob-like filename). if not names: raise IOError(errno.ENOENT, "No such file") else: names = [remote_path] # Handle invalid local-file-object situations if not local_is_path: if len(names) > 1 or ftp.isdir(names[0]): error("[%s] %s is a glob or directory, but local_path is a file object!" % (env.host_string, remote_path)) for remote_path in names: if ftp.isdir(remote_path): result = ftp.get_dir(remote_path, local_path, use_sudo, temp_dir) local_files.extend(result) else: # Perform actual get. If getting to real local file path, # add result (will be true final path value) to # local_files. File-like objects are omitted. result = ftp.get(remote_path, local_path, use_sudo, local_is_path, os.path.basename(remote_path), temp_dir) if local_is_path: local_files.append(result) except Exception, e: failed_remote_files.append(remote_path) msg = "get() encountered an exception while downloading '%s'" error(message=msg % remote_path, exception=e) ret = _AttributeList(local_files if local_is_path else []) ret.failed = failed_remote_files ret.succeeded = not ret.failed return ret def _sudo_prefix_argument(argument, value): if value is None: return "" if str(value).isdigit(): value = "#%s" % value return ' %s "%s"' % (argument, value) def _sudo_prefix(user, group=None): """ Return ``env.sudo_prefix`` with ``user``/``group`` inserted if necessary. """ # Insert env.sudo_prompt into env.sudo_prefix prefix = env.sudo_prefix % env if user is not None or group is not None: return "%s%s%s " % (prefix, _sudo_prefix_argument('-u', user), _sudo_prefix_argument('-g', group)) return prefix def _shell_wrap(command, shell_escape, shell=True, sudo_prefix=None): """ Conditionally wrap given command in env.shell (while honoring sudo.) """ # Honor env.shell, while allowing the 'shell' kwarg to override it (at # least in terms of turning it off.) 
if shell and not env.use_shell: shell = False # Sudo plus space, or empty string if sudo_prefix is None: sudo_prefix = "" else: sudo_prefix += " " # If we're shell wrapping, prefix shell and space. Next, escape the command # if requested, and then quote it. Otherwise, empty string. if shell: shell = env.shell + " " if shell_escape: command = _shell_escape(command) command = '"%s"' % command else: shell = "" # Resulting string should now have correct formatting return sudo_prefix + shell + command def _prefix_commands(command, which): """ Prefixes ``command`` with all prefixes found in ``env.command_prefixes``. ``env.command_prefixes`` is a list of strings which is modified by the `~fabric.context_managers.prefix` context manager. This function also handles a special-case prefix, ``cwd``, used by `~fabric.context_managers.cd`. The ``which`` kwarg should be a string, ``"local"`` or ``"remote"``, which will determine whether ``cwd`` or ``lcwd`` is used. """ # Local prefix list (to hold env.command_prefixes + any special cases) prefixes = list(env.command_prefixes) # Handle current working directory, which gets its own special case due to # being a path string that gets grown/shrunk, instead of just a single # string or lack thereof. # Also place it at the front of the list, in case user is expecting another # prefixed command to be "in" the current working directory. cwd = env.cwd if which == 'remote' else env.lcwd redirect = " >/dev/null" if not win32 else '' if cwd: prefixes.insert(0, 'cd %s%s' % (cwd, redirect)) glue = " && " prefix = (glue.join(prefixes) + glue) if prefixes else "" return prefix + command def _prefix_env_vars(command, local=False): """ Prefixes ``command`` with any shell environment vars, e.g. ``PATH=foo ``. Currently, this only applies the PATH updating implemented in `~fabric.context_managers.path` and environment variables from `~fabric.context_managers.shell_env`. Will switch to using Windows style 'SET' commands when invoked by ``local()`` and on a Windows localhost. """ env_vars = {} # path(): local shell env var update, appending/prepending/replacing $PATH path = env.path if path: if env.path_behavior == 'append': path = '$PATH:\"%s\"' % path elif env.path_behavior == 'prepend': path = '\"%s\":$PATH' % path elif env.path_behavior == 'replace': path = '\"%s\"' % path env_vars['PATH'] = path # shell_env() env_vars.update(env.shell_env) if env_vars: set_cmd, exp_cmd = '', '' if win32 and local: set_cmd = 'SET ' else: exp_cmd = 'export ' exports = ' '.join( '%s%s="%s"' % (set_cmd, k, v if k == 'PATH' else _shell_escape(v)) for k, v in env_vars.iteritems() ) shell_env_str = '%s%s && ' % (exp_cmd, exports) else: shell_env_str = '' return shell_env_str + command def _execute(channel, command, pty=True, combine_stderr=None, invoke_shell=False, stdout=None, stderr=None, timeout=None, capture_buffer_size=None): """ Execute ``command`` over ``channel``. ``pty`` controls whether a pseudo-terminal is created. ``combine_stderr`` controls whether we call ``channel.set_combine_stderr``. By default, the global setting for this behavior (:ref:`env.combine_stderr `) is consulted, but you may specify ``True`` or ``False`` here to override it. ``invoke_shell`` controls whether we use ``exec_command`` or ``invoke_shell`` (plus a handful of other things, such as always forcing a pty.) ``capture_buffer_size`` controls the length of the ring-buffers used to capture stdout/stderr. (This is ignored if ``invoke_shell=True``, since that completely disables capturing overall.) 
Returns a three-tuple of (``stdout``, ``stderr``, ``status``), where ``stdout``/``stderr`` are captured output strings and ``status`` is the program's return code, if applicable. """ # stdout/stderr redirection stdout = stdout or sys.stdout stderr = stderr or sys.stderr # Timeout setting control timeout = env.command_timeout if (timeout is None) else timeout # What to do with Ctrl-C? remote_interrupt = env.remote_interrupt with char_buffered(sys.stdin): # Combine stdout and stderr to get around oddball mixing issues if combine_stderr is None: combine_stderr = env.combine_stderr channel.set_combine_stderr(combine_stderr) # Assume pty use, and allow overriding of this either via kwarg or env # var. (invoke_shell always wants a pty no matter what.) using_pty = True if not invoke_shell and (not pty or not env.always_use_pty): using_pty = False # Request pty with size params (default to 80x24, obtain real # parameters if on POSIX platform) if using_pty: rows, cols = _pty_size() channel.get_pty(width=cols, height=rows) # Use SSH agent forwarding from 'ssh' if enabled by user config_agent = ssh_config().get('forwardagent', 'no').lower() == 'yes' forward = None if env.forward_agent or config_agent: forward = ssh.agent.AgentRequestHandler(channel) # Kick off remote command if invoke_shell: channel.invoke_shell() if command: channel.sendall(command + "\n") else: channel.exec_command(command=command) # Init stdout, stderr capturing. Must use lists instead of strings as # strings are immutable and we're using these as pass-by-reference stdout_buf = RingBuffer(value=[], maxlen=capture_buffer_size) stderr_buf = RingBuffer(value=[], maxlen=capture_buffer_size) if invoke_shell: stdout_buf = stderr_buf = None workers = ( ThreadHandler('out', output_loop, channel, "recv", capture=stdout_buf, stream=stdout, timeout=timeout), ThreadHandler('err', output_loop, channel, "recv_stderr", capture=stderr_buf, stream=stderr, timeout=timeout), ThreadHandler('in', input_loop, channel, using_pty) ) if remote_interrupt is None: remote_interrupt = invoke_shell if remote_interrupt and not using_pty: remote_interrupt = False while True: if channel.exit_status_ready(): break else: # Check for thread exceptions here so we can raise ASAP # (without chance of getting blocked by, or hidden by an # exception within, recv_exit_status()) for worker in workers: worker.raise_if_needed() try: time.sleep(ssh.io_sleep) except KeyboardInterrupt: if not remote_interrupt: raise channel.send('\x03') # Obtain exit code of remote program now that we're done. status = channel.recv_exit_status() # Wait for threads to exit so we aren't left with stale threads for worker in workers: worker.thread.join() worker.raise_if_needed() # Close channel channel.close() # Close any agent forward proxies if forward is not None: forward.close() # Update stdout/stderr with captured values if applicable if not invoke_shell: stdout_buf = ''.join(stdout_buf).strip() stderr_buf = ''.join(stderr_buf).strip() # Tie off "loose" output by printing a newline. Helps to ensure any # following print()s aren't on the same line as a trailing line prefix # or similar. However, don't add an extra newline if we've already # ended up with one, as that adds an entire blank line instead. 
if output.running \ and (output.stdout and stdout_buf and not stdout_buf.endswith("\n")) \ or (output.stderr and stderr_buf and not stderr_buf.endswith("\n")): print("") return stdout_buf, stderr_buf, status @needs_host def open_shell(command=None): """ Invoke a fully interactive shell on the remote end. If ``command`` is given, it will be sent down the pipe before handing control over to the invoking user. This function is most useful for when you need to interact with a heavily shell-based command or series of commands, such as when debugging or when fully interactive recovery is required upon remote program failure. It should be considered an easy way to work an interactive shell session into the middle of a Fabric script and is *not* a drop-in replacement for `~fabric.operations.run`, which is also capable of interacting with the remote end (albeit only while its given command is executing) and has much stronger programmatic abilities such as error handling and stdout/stderr capture. Specifically, `~fabric.operations.open_shell` provides a better interactive experience than `~fabric.operations.run`, but use of a full remote shell prevents Fabric from determining whether programs run within the shell have failed, and pollutes the stdout/stderr stream with shell output such as login banners, prompts and echoed stdin. Thus, this function does not have a return value and will not trigger Fabric's failure handling if any remote programs result in errors. .. versionadded:: 1.0 """ _execute(channel=default_channel(), command=command, pty=True, combine_stderr=True, invoke_shell=True) @contextmanager def _noop(): yield def _run_command(command, shell=True, pty=True, combine_stderr=True, sudo=False, user=None, quiet=False, warn_only=False, stdout=None, stderr=None, group=None, timeout=None, shell_escape=None, capture_buffer_size=None): """ Underpinnings of `run` and `sudo`. See their docstrings for more info. """ manager = _noop if warn_only: manager = warn_only_manager # Quiet's behavior is a superset of warn_only's, so it wins. if quiet: manager = quiet_manager with manager(): # Set up new var so original argument can be displayed verbatim later. given_command = command # Check if shell_escape has been overridden in env if shell_escape is None: shell_escape = env.get('shell_escape', True) # Handle context manager modifications, and shell wrapping wrapped_command = _shell_wrap( _prefix_env_vars(_prefix_commands(command, 'remote')), shell_escape, shell, _sudo_prefix(user, group) if sudo else None ) # Execute info line which = 'sudo' if sudo else 'run' if output.debug: print("[%s] %s: %s" % (env.host_string, which, wrapped_command)) elif output.running: print("[%s] %s: %s" % (env.host_string, which, given_command)) # Actual execution, stdin/stdout/stderr handling, and termination result_stdout, result_stderr, status = _execute( channel=default_channel(), command=wrapped_command, pty=pty, combine_stderr=combine_stderr, invoke_shell=False, stdout=stdout, stderr=stderr, timeout=timeout, capture_buffer_size=capture_buffer_size) # Assemble output string out = _AttributeString(result_stdout) err = _AttributeString(result_stderr) # Error handling out.failed = False out.command = given_command out.real_command = wrapped_command if status not in env.ok_ret_codes: out.failed = True msg = "%s() received nonzero return code %s while executing" % ( which, status ) if env.warn_only: msg += " '%s'!" 
% given_command else: msg += "!\n\nRequested: %s\nExecuted: %s" % ( given_command, wrapped_command ) error(message=msg, stdout=out, stderr=err) # Attach return code to output string so users who have set things to # warn only, can inspect the error code. out.return_code = status # Convenience mirror of .failed out.succeeded = not out.failed # Attach stderr for anyone interested in that. out.stderr = err return out @needs_host def run(command, shell=True, pty=True, combine_stderr=None, quiet=False, warn_only=False, stdout=None, stderr=None, timeout=None, shell_escape=None, capture_buffer_size=None): """ Run a shell command on a remote host. If ``shell`` is True (the default), `run` will execute the given command string via a shell interpreter, the value of which may be controlled by setting ``env.shell`` (defaulting to something similar to ``/bin/bash -l -c "<command>"``.) Any double-quote (``"``) or dollar-sign (``$``) characters in ``command`` will be automatically escaped when ``shell`` is True (unless disabled by setting ``shell_escape=False``). When ``shell=False``, no shell wrapping or escaping will occur. (It's possible to specify ``shell=False, shell_escape=True`` if desired, which will still trigger escaping of dollar signs, etc but will not wrap with a shell program invocation). `run` will return the result of the remote program's stdout as a single (likely multiline) string. This string will exhibit ``failed`` and ``succeeded`` boolean attributes specifying whether the command failed or succeeded, and will also include the return code as the ``return_code`` attribute. Furthermore, it includes a copy of the requested & actual command strings executed, as ``.command`` and ``.real_command``, respectively. To lessen memory use when running extremely verbose programs (and, naturally, when having access to their full output afterwards is not necessary!) you may limit how much of the program's stdout/err is stored by setting ``capture_buffer_size`` to an integer value. .. warning:: Do not set ``capture_buffer_size`` to any value smaller than the length of ``env.sudo_prompt`` or you will likely break the functionality of `sudo`! Ditto any user prompts stored in ``env.prompts``. .. note:: This value is used for each buffer independently, so e.g. ``1024`` may result in storing a total of ``2048`` bytes if there's data in both streams. Any text entered in your local terminal will be forwarded to the remote program as it runs, thus allowing you to interact with password or other prompts naturally. For more on how this works, see :doc:`/usage/interactivity`. You may pass ``pty=False`` to forego creation of a pseudo-terminal on the remote end in case the presence of one causes problems for the command in question. However, this will force Fabric itself to echo any and all input you type while the command is running, including sensitive passwords. (With ``pty=True``, the remote pseudo-terminal will echo for you, and will intelligently handle password-style prompts.) See :ref:`pseudottys` for details. Similarly, if you need to programmatically examine the stderr stream of the remote program (exhibited as the ``stderr`` attribute on this function's return value), you may set ``combine_stderr=False``. Doing so has a high chance of causing garbled output to appear on your terminal (though the resulting strings returned by `~fabric.operations.run` will be properly separated). For more info, please read :ref:`combine_streams`. To ignore non-zero return codes, specify ``warn_only=True``. 
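A hedged illustration of the ``warn_only`` behavior (the command and path are made up for this example):: result = run('test -e /tmp/app.lock', warn_only=True) if result.failed: run('touch /tmp/app.lock')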
To both ignore non-zero return codes *and* force a command to run silently, specify ``quiet=True``. To override which local streams are used to display remote stdout and/or stderr, specify ``stdout`` or ``stderr``. (By default, the regular ``sys.stdout`` and ``sys.stderr`` Python stream objects are used.) For example, ``run("command", stderr=sys.stdout)`` would print the remote standard error to the local standard out, while preserving it as its own distinct attribute on the return value (as per above.) Alternately, you could even provide your own stream objects or loggers, e.g. ``myout = StringIO(); run("command", stdout=myout)``. If you want an exception raised when the remote program takes too long to run, specify ``timeout=N`` where ``N`` is an integer number of seconds, after which to time out. This will cause ``run`` to raise a `~fabric.exceptions.CommandTimeout` exception. If you want to disable Fabric's automatic attempts at escaping quotes, dollar signs etc., specify ``shell_escape=False``. Examples:: run("ls /var/www/") run("ls /home/myuser", shell=False) output = run('ls /var/www/site1') run("take_a_long_time", timeout=5) .. versionadded:: 1.0 The ``succeeded`` and ``stderr`` return value attributes, the ``combine_stderr`` kwarg, and interactive behavior. .. versionchanged:: 1.0 The default value of ``pty`` is now ``True``. .. versionchanged:: 1.0.2 The default value of ``combine_stderr`` is now ``None`` instead of ``True``. However, the default *behavior* is unchanged, as the global setting is still ``True``. .. versionadded:: 1.5 The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs. .. versionadded:: 1.5 The return value attributes ``.command`` and ``.real_command``. .. versionadded:: 1.6 The ``timeout`` argument. .. versionadded:: 1.7 The ``shell_escape`` argument. .. versionadded:: 1.11 The ``capture_buffer_size`` argument. """ return _run_command( command, shell, pty, combine_stderr, quiet=quiet, warn_only=warn_only, stdout=stdout, stderr=stderr, timeout=timeout, shell_escape=shell_escape, capture_buffer_size=capture_buffer_size, ) @needs_host def sudo(command, shell=True, pty=True, combine_stderr=None, user=None, quiet=False, warn_only=False, stdout=None, stderr=None, group=None, timeout=None, shell_escape=None, capture_buffer_size=None): """ Run a shell command on a remote host, with superuser privileges. `sudo` is identical in every way to `run`, except that it will always wrap the given ``command`` in a call to the ``sudo`` program to provide superuser privileges. `sudo` accepts additional ``user`` and ``group`` arguments, which are passed to ``sudo`` and allow you to run as some user and/or group other than root. On most systems, the ``sudo`` program can take a string username/group or an integer userid/groupid (uid/gid); ``user`` and ``group`` may likewise be strings or integers. You may set :ref:`env.sudo_user ` at module level or via `~fabric.context_managers.settings` if you want multiple ``sudo`` calls to have the same ``user`` value. An explicit ``user`` argument will, of course, override this global setting. Examples:: sudo("~/install_script.py") sudo("mkdir /var/www/new_docroot", user="www-data") sudo("ls /home/jdoe", user=1001) result = sudo("ls /tmp/") with settings(sudo_user='mysql'): sudo("whoami") # prints 'mysql' .. versionchanged:: 1.0 See the changed and added notes for `~fabric.operations.run`. .. versionchanged:: 1.5 Now honors :ref:`env.sudo_user `. .. versionadded:: 1.5 The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs. .. 
versionadded:: 1.5 The return value attributes ``.command`` and ``.real_command``. .. versionadded:: 1.7 The ``shell_escape`` argument. .. versionadded:: 1.11 The ``capture_buffer_size`` argument. """ return _run_command( command, shell, pty, combine_stderr, sudo=True, user=user if user else env.sudo_user, group=group, quiet=quiet, warn_only=warn_only, stdout=stdout, stderr=stderr, timeout=timeout, shell_escape=shell_escape, capture_buffer_size=capture_buffer_size, ) def local(command, capture=False, shell=None): """ Run a command on the local system. `local` is simply a convenience wrapper around the use of the builtin Python ``subprocess`` module with ``shell=True`` activated. If you need to do anything special, consider using the ``subprocess`` module directly. ``shell`` is passed directly to `subprocess.Popen <http://docs.python.org/library/subprocess.html#subprocess.Popen>`_'s ``executable`` argument (which determines the local shell to use.) As per the linked documentation, on Unix the default behavior is to use ``/bin/sh``, so this option is useful for setting that value to e.g. ``/bin/bash``. `local` is not currently capable of simultaneously printing and capturing output, as `~fabric.operations.run`/`~fabric.operations.sudo` do. The ``capture`` kwarg allows you to switch between printing and capturing as necessary, and defaults to ``False``. When ``capture=False``, the local subprocess' stdout and stderr streams are hooked up directly to your terminal, though you may use the global :doc:`output controls </usage/output_controls>` ``output.stdout`` and ``output.stderr`` to hide one or both if desired. In this mode, the return value's stdout/stderr values are always empty. When ``capture=True``, you will not see any output from the subprocess in your terminal, but the return value will contain the captured stdout/stderr. In either case, as with `~fabric.operations.run` and `~fabric.operations.sudo`, this return value exhibits the ``return_code``, ``stderr``, ``failed``, ``succeeded``, ``command`` and ``real_command`` attributes. See `run` for details. `~fabric.operations.local` will honor the `~fabric.context_managers.lcd` context manager, allowing you to control its current working directory independently of the remote end (which honors `~fabric.context_managers.cd`). .. versionchanged:: 1.0 Added the ``succeeded`` and ``stderr`` attributes. .. versionchanged:: 1.0 Now honors the `~fabric.context_managers.lcd` context manager. .. versionchanged:: 1.0 Changed the default value of ``capture`` from ``True`` to ``False``. .. versionadded:: 1.9 The return value attributes ``.command`` and ``.real_command``. """ given_command = command # Apply cd(), path() etc with_env = _prefix_env_vars(command, local=True) wrapped_command = _prefix_commands(with_env, 'local') if output.debug: print("[localhost] local: %s" % (wrapped_command)) elif output.running: print("[localhost] local: " + given_command) # Tie in to global output controls as best we can; our capture argument # takes precedence over the output settings. dev_null = None if capture: out_stream = subprocess.PIPE err_stream = subprocess.PIPE else: dev_null = open(os.devnull, 'w+') # Non-captured, hidden streams are discarded. 
out_stream = None if output.stdout else dev_null err_stream = None if output.stderr else dev_null try: cmd_arg = wrapped_command if win32 else [wrapped_command] p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream, stderr=err_stream, executable=shell, close_fds=(not win32)) (stdout, stderr) = p.communicate() finally: if dev_null is not None: dev_null.close() # Handle error condition (deal with stdout being None, too) out = _AttributeString(stdout.strip() if stdout else "") err = _AttributeString(stderr.strip() if stderr else "") out.command = given_command out.real_command = wrapped_command out.failed = False out.return_code = p.returncode out.stderr = err if p.returncode not in env.ok_ret_codes: out.failed = True msg = "local() encountered an error (return code %s) while executing '%s'" % (p.returncode, command) error(message=msg, stdout=out, stderr=err) out.succeeded = not out.failed # If we were capturing, this will be a string; otherwise it will be None. return out @needs_host def reboot(wait=120, command='reboot', use_sudo=True): """ Reboot the remote system. Will temporarily tweak Fabric's reconnection settings (:ref:`timeout` and :ref:`connection-attempts`) to ensure that reconnection does not give up for at least ``wait`` seconds. .. note:: As of Fabric 1.4, the ability to reconnect partway through a session no longer requires use of internal APIs. While we are not officially deprecating this function, adding more features to it will not be a priority. Users who want greater control are encouraged to check out this function's (6 lines long, well commented) source code and write their own adaptation using different timeout/attempt values or additional logic. .. versionadded:: 0.9.2 .. versionchanged:: 1.4 Changed the ``wait`` kwarg to be optional, and refactored to leverage the new reconnection functionality; it may not actually have to wait for ``wait`` seconds before reconnecting. .. versionchanged:: 1.11 Added ``use_sudo`` as a kwarg. Maintained old functionality by setting the default value to True. """ # Shorter timeout for a more granular cycle than the default. timeout = 5 # Use 'wait' as max total wait time attempts = int(round(float(wait) / float(timeout))) # Don't bleed settings, since this is supposed to be self-contained. # User adaptations will probably want to drop the "with settings()" and # just have globally set timeout/attempts values. with settings( hide('running'), timeout=timeout, connection_attempts=attempts ): (sudo if use_sudo else run)(command) # Try to make sure we don't slip in before pre-reboot lockdown time.sleep(5) # This is actually an internal-ish API call, but users can simply drop # it in real fabfile use -- the next run/sudo/put/get/etc call will # automatically trigger a reconnect. # We use it here to force the reconnect while this function is still in # control and has the above timeout settings enabled. connections.connect(env.host_string) # At this point we should be reconnected to the newly rebooted server. 
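# A hedged usage sketch for reboot() (the host string and wait value are # hypothetical); as described in the docstring above, the next remote call # transparently reconnects, so a follow-up run() works as expected: # # with settings(host_string='admin@web1.example.com'): # reboot(wait=180) # run('uptime')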
fabric-1.14.0/fabric/sftp.py000066400000000000000000000310461315011462000155740ustar00rootroot00000000000000from __future__ import with_statement import os import posixpath import stat import re import uuid from fnmatch import filter as fnfilter from fabric.state import output, connections, env from fabric.utils import warn from fabric.context_managers import settings # TODO: use self.sftp.listdir_iter on Paramiko 1.15+ def _format_local(local_path, local_is_path): """Format a path for log output""" if local_is_path: return local_path else: # This allows users to set a name attr on their StringIO objects # just like an open file object would have return getattr(local_path, 'name', '<file obj>') class SFTP(object): """ SFTP helper class, which is also a facade for ssh.SFTPClient. """ def __init__(self, host_string): self.ftp = connections[host_string].open_sftp() # Recall that __getattr__ is the "fallback" attribute getter, and is thus # pretty safe to use for facade-like behavior as we're doing here. def __getattr__(self, attr): return getattr(self.ftp, attr) def isdir(self, path): try: return stat.S_ISDIR(self.ftp.stat(path).st_mode) except IOError: return False def islink(self, path): try: return stat.S_ISLNK(self.ftp.lstat(path).st_mode) except IOError: return False def exists(self, path): try: self.ftp.lstat(path).st_mode except IOError: return False return True def glob(self, path): from fabric.state import win32 dirpart, pattern = os.path.split(path) rlist = self.ftp.listdir(dirpart) names = fnfilter([f for f in rlist if not f[0] == '.'], pattern) ret = [] if len(names): s = '/' ret = [dirpart.rstrip(s) + s + name.lstrip(s) for name in names] if not win32: ret = [posixpath.join(dirpart, name) for name in names] return ret def walk(self, top, topdown=True, onerror=None, followlinks=False): from os.path import join # We may not have read permission for top, in which case we can't get a # list of the files the directory contains. os.path.walk always # suppressed the exception then, rather than blow up for a minor reason # when (say) a thousand readable directories are still left to visit. # That logic is copied here. try: # Note that listdir and error are globals in this module due to # earlier import-*. names = self.ftp.listdir(top) except Exception, err: if onerror is not None: onerror(err) return dirs, nondirs = [], [] for name in names: if self.isdir(join(top, name)): dirs.append(name) else: nondirs.append(name) if topdown: yield top, dirs, nondirs for name in dirs: path = join(top, name) if followlinks or not self.islink(path): for x in self.walk(path, topdown, onerror, followlinks): yield x if not topdown: yield top, dirs, nondirs def mkdir(self, path, use_sudo): from fabric.api import sudo, hide if use_sudo: with hide('everything'): sudo('mkdir "%s"' % path) else: self.ftp.mkdir(path) def get(self, remote_path, local_path, use_sudo, local_is_path, rremote=None, temp_dir=""): from fabric.api import sudo, hide # rremote => relative remote path, so get(/var/log) would result in # this function being called with # remote_path=/var/log/apache2/access.log and # rremote=apache2/access.log rremote = rremote if rremote is not None else remote_path # Handle format string interpolation (e.g. %(dirname)s) path_vars = { 'host': env.host_string.replace(':', '-'), 'basename': os.path.basename(rremote), 'dirname': os.path.dirname(rremote), 'path': rremote } if local_is_path: # Fix for issue #711 and #1348 - escape %'s as well as possible. 
format_re = r'(%%(?!\((?:%s)\)\w))' % '|'.join(path_vars.keys()) escaped_path = re.sub(format_re, r'%\1', local_path) local_path = os.path.abspath(escaped_path % path_vars) # Ensure we give ssh.SFTPCLient a file by prepending and/or # creating local directories as appropriate. dirpath, filepath = os.path.split(local_path) if dirpath and not os.path.exists(dirpath): os.makedirs(dirpath) if os.path.isdir(local_path): local_path = os.path.join(local_path, path_vars['basename']) if output.running: print("[%s] download: %s <- %s" % ( env.host_string, _format_local(local_path, local_is_path), remote_path )) # Warn about overwrites, but keep going if local_is_path and os.path.exists(local_path): msg = "Local file %s already exists and is being overwritten." warn(msg % local_path) # When using sudo, "bounce" the file through a guaranteed-unique file # path in the default remote CWD (which, typically, the login user will # have write permissions on) in order to sudo(cp) it. if use_sudo: target_path = posixpath.join(temp_dir, uuid.uuid4().hex) # Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv command. # (The target path has already been cwd-ified elsewhere.) with settings(hide('everything'), cwd=""): sudo('cp -p "%s" "%s"' % (remote_path, target_path)) # The user should always own the copied file. sudo('chown %s "%s"' % (env.user, target_path)) # Only root and the user has the right to read the file sudo('chmod %o "%s"' % (0400, target_path)) remote_path = target_path try: # File-like objects: reset to file seek 0 (to ensure full overwrite) # and then use Paramiko's getfo() directly getter = self.ftp.get if not local_is_path: local_path.seek(0) getter = self.ftp.getfo getter(remote_path, local_path) finally: # try to remove the temporary file after the download if use_sudo: with settings(hide('everything'), cwd=""): sudo('rm -f "%s"' % remote_path) # Return local_path object for posterity. (If mutated, caller will want # to know.) return local_path def get_dir(self, remote_path, local_path, use_sudo, temp_dir): # Decide what needs to be stripped from remote paths so they're all # relative to the given remote_path if os.path.basename(remote_path): strip = os.path.dirname(remote_path) else: strip = os.path.dirname(os.path.dirname(remote_path)) # Store all paths gotten so we can return them when done result = [] # Use our facsimile of os.walk to find all files within remote_path for context, dirs, files in self.walk(remote_path): # Normalize current directory to be relative # E.g. remote_path of /var/log and current dir of /var/log/apache2 # would be turned into just 'apache2' lcontext = rcontext = context.replace(strip, '', 1).lstrip('/') # Prepend local path to that to arrive at the local mirrored # version of this directory. So if local_path was 'mylogs', we'd # end up with 'mylogs/apache2' lcontext = os.path.join(local_path, lcontext) # Download any files in current directory for f in files: # Construct full and relative remote paths to this file rpath = posixpath.join(context, f) rremote = posixpath.join(rcontext, f) # If local_path isn't using a format string that expands to # include its remote path, we need to add it here. if "%(path)s" not in local_path \ and "%(dirname)s" not in local_path: lpath = os.path.join(lcontext, f) # Otherwise, just passthrough local_path to self.get() else: lpath = local_path # Now we can make a call to self.get() with specific file paths # on both ends. 
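                # Note: 'rpath' is the file's absolute remote location, while
                # 'rremote' is the same path relative to the original
                # remote_path -- the latter is what get() uses when expanding
                # %(path)s / %(dirname)s placeholders in the local path.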
result.append(self.get(rpath, lpath, use_sudo, True, rremote, temp_dir)) return result def put(self, local_path, remote_path, use_sudo, mirror_local_mode, mode, local_is_path, temp_dir): from fabric.api import sudo, hide pre = self.ftp.getcwd() pre = pre if pre else '' if local_is_path and self.isdir(remote_path): basename = os.path.basename(local_path) remote_path = posixpath.join(remote_path, basename) if output.running: print("[%s] put: %s -> %s" % ( env.host_string, _format_local(local_path, local_is_path), posixpath.join(pre, remote_path) )) # When using sudo, "bounce" the file through a guaranteed-unique file # path in the default remote CWD (which, typically, the login user will # have write permissions on) in order to sudo(mv) it later. if use_sudo: target_path = remote_path remote_path = posixpath.join(temp_dir, uuid.uuid4().hex) # Read, ensuring we handle file-like objects correct re: seek pointer putter = self.ftp.put if not local_is_path: old_pointer = local_path.tell() local_path.seek(0) putter = self.ftp.putfo rattrs = putter(local_path, remote_path) if not local_is_path: local_path.seek(old_pointer) # Handle modes if necessary if (local_is_path and mirror_local_mode) or (mode is not None): lmode = os.stat(local_path).st_mode if mirror_local_mode else mode # Cast to octal integer in case of string if isinstance(lmode, basestring): lmode = int(lmode, 8) lmode = lmode & 07777 rmode = rattrs.st_mode # Only bitshift if we actually got an rmode if rmode is not None: rmode = (rmode & 07777) if lmode != rmode: if use_sudo: # Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv # command. (The target path has already been cwd-ified # elsewhere.) with settings(hide('everything'), cwd=""): sudo('chmod %o \"%s\"' % (lmode, remote_path)) else: self.ftp.chmod(remote_path, lmode) if use_sudo: # Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv command. # (The target path has already been cwd-ified elsewhere.) with settings(hide('everything'), cwd=""): sudo("mv \"%s\" \"%s\"" % (remote_path, target_path)) # Revert to original remote_path for return value's sake remote_path = target_path return remote_path def put_dir(self, local_path, remote_path, use_sudo, mirror_local_mode, mode, temp_dir): if os.path.basename(local_path): strip = os.path.dirname(local_path) else: strip = os.path.dirname(os.path.dirname(local_path)) remote_paths = [] for context, dirs, files in os.walk(local_path): rcontext = context.replace(strip, '', 1) # normalize pathname separators with POSIX separator rcontext = rcontext.replace(os.sep, '/') rcontext = rcontext.lstrip('/') rcontext = posixpath.join(remote_path, rcontext) if not self.exists(rcontext): self.mkdir(rcontext, use_sudo) for d in dirs: n = posixpath.join(rcontext, d) if not self.exists(n): self.mkdir(n, use_sudo) for f in files: local_path = os.path.join(context, f) n = posixpath.join(rcontext, f) p = self.put(local_path, n, use_sudo, mirror_local_mode, mode, True, temp_dir) remote_paths.append(p) return remote_paths fabric-1.14.0/fabric/state.py000066400000000000000000000314731315011462000157440ustar00rootroot00000000000000""" Internal shared-state variables such as config settings and host lists. """ import os import sys from optparse import make_option from fabric.network import HostConnectionCache, ssh from fabric.version import get_version from fabric.utils import _AliasDict, _AttributeDict # # Win32 flag # # Impacts a handful of platform specific behaviors. 
Note that Cygwin's Python # is actually close enough to "real" UNIXes that it doesn't need (or want!) to # use PyWin32 -- so we only test for literal Win32 setups (vanilla Python, # ActiveState etc) here. win32 = (sys.platform == 'win32') # # Environment dictionary - support structures # # By default, if the user (including code using Fabric as a library) doesn't # set the username, we obtain the currently running username and use that. def _get_system_username(): """ Obtain name of current system user, which will be default connection user. """ import getpass username = None try: username = getpass.getuser() # getpass.getuser supported on both Unix and Windows systems. # getpass.getuser may call pwd.getpwuid which in turns may raise KeyError # if it cannot find a username for the given UID, e.g. on ep.io # and similar "non VPS" style services. Rather than error out, just keep # the 'default' username to None. Can check for this value later if needed. except KeyError: pass except ImportError: if win32: import win32api import win32security # noqa import win32profile # noqa username = win32api.GetUserName() return username def _rc_path(): """ Return platform-specific default file path for $HOME/.fabricrc. """ rc_file = '.fabricrc' rc_path = '~/' + rc_file expanded_rc_path = os.path.expanduser(rc_path) if expanded_rc_path == rc_path and win32: from win32com.shell.shell import SHGetSpecialFolderPath from win32com.shell.shellcon import CSIDL_PROFILE expanded_rc_path = "%s/%s" % ( SHGetSpecialFolderPath(0, CSIDL_PROFILE), rc_file ) return expanded_rc_path default_port = '22' # hurr durr default_ssh_config_path = os.path.join(os.path.expanduser('~'), '.ssh', 'config') # Options/settings which exist both as environment keys and which can be set on # the command line, are defined here. When used via `fab` they will be added to # the optparse parser, and either way they are added to `env` below (i.e. the # 'dest' value becomes the environment key and the value, the env value). # # Keep in mind that optparse changes hyphens to underscores when automatically # deriving the `dest` name, e.g. `--reject-unknown-hosts` becomes # `reject_unknown_hosts`. # # Furthermore, *always* specify some sort of default to avoid ending up with # optparse.NO_DEFAULT (currently a two-tuple)! In general, None is a better # default than ''. # # User-facing documentation for these are kept in sites/docs/env.rst. env_options = [ make_option('-a', '--no_agent', action='store_true', default=False, help="don't use the running SSH agent" ), make_option('-A', '--forward-agent', action='store_true', default=False, help="forward local agent to remote end" ), make_option('--abort-on-prompts', action='store_true', default=False, help="abort instead of prompting (for password, host, etc)" ), make_option('-c', '--config', dest='rcfile', default=_rc_path(), metavar='PATH', help="specify location of config file to use" ), make_option('--colorize-errors', action='store_true', default=False, help="Color error output", ), make_option('-D', '--disable-known-hosts', action='store_true', default=False, help="do not load user known_hosts file" ), make_option('-e', '--eagerly-disconnect', action='store_true', default=False, help="disconnect from hosts as soon as possible" ), make_option('-f', '--fabfile', default='fabfile', metavar='PATH', help="python module file to import, e.g. 
'../other.py'" ), make_option('-g', '--gateway', default=None, metavar='HOST', help="gateway host to connect through" ), make_option('--gss-auth', action='store_true', default=None, help="Use GSS-API authentication" ), make_option('--gss-deleg', action='store_true', default=None, help="Delegate GSS-API client credentials or not" ), make_option('--gss-kex', action='store_true', default=None, help="Perform GSS-API Key Exchange and user authentication" ), make_option('--hide', metavar='LEVELS', help="comma-separated list of output levels to hide" ), make_option('-H', '--hosts', default=[], help="comma-separated list of hosts to operate on" ), make_option('-i', action='append', dest='key_filename', metavar='PATH', default=None, help="path to SSH private key file. May be repeated." ), make_option('-k', '--no-keys', action='store_true', default=False, help="don't load private key files from ~/.ssh/" ), make_option('--keepalive', dest='keepalive', type=int, default=0, metavar="N", help="enables a keepalive every N seconds" ), make_option('--linewise', action='store_true', default=False, help="print line-by-line instead of byte-by-byte" ), make_option('-n', '--connection-attempts', type='int', metavar='M', dest='connection_attempts', default=1, help="make M attempts to connect before giving up" ), make_option('--no-pty', dest='always_use_pty', action='store_false', default=True, help="do not use pseudo-terminal in run/sudo" ), make_option('-p', '--password', default=None, help="password for use with authentication and/or sudo" ), make_option('-P', '--parallel', dest='parallel', action='store_true', default=False, help="default to parallel execution method" ), make_option('--port', default=default_port, help="SSH connection port" ), make_option('-r', '--reject-unknown-hosts', action='store_true', default=False, help="reject unknown hosts" ), make_option('--sudo-password', default=None, help="password for use with sudo only", ), make_option('--system-known-hosts', default=None, help="load system known_hosts file before reading user known_hosts" ), make_option('-R', '--roles', default=[], help="comma-separated list of roles to operate on" ), make_option('-s', '--shell', default='/bin/bash -l -c', help="specify a new shell, defaults to '/bin/bash -l -c'" ), make_option('--show', metavar='LEVELS', help="comma-separated list of output levels to show" ), make_option('--skip-bad-hosts', action="store_true", default=False, help="skip over hosts that can't be reached" ), make_option('--skip-unknown-tasks', action="store_true", default=False, help="skip over unknown tasks" ), make_option('--ssh-config-path', default=default_ssh_config_path, metavar='PATH', help="Path to SSH config file" ), make_option('-t', '--timeout', type='int', default=10, metavar="N", help="set connection timeout to N seconds" ), make_option('-T', '--command-timeout', dest='command_timeout', type='int', default=None, metavar="N", help="set remote command timeout to N seconds" ), make_option('-u', '--user', default=_get_system_username(), help="username to use when connecting to remote hosts" ), make_option('-w', '--warn-only', action='store_true', default=False, help="warn, instead of abort, when commands fail" ), make_option('-x', '--exclude-hosts', default=[], metavar='HOSTS', help="comma-separated list of hosts to exclude" ), make_option('-z', '--pool-size', dest='pool_size', type='int', metavar='INT', default=0, help="number of concurrent processes to use in parallel mode", ), ] # # Environment dictionary - actual dictionary object 
# # Global environment dict. Currently a catchall for everything: config settings # such as global deep/broad mode, host lists, username etc. # Most default values are specified in `env_options` above, in the interests of # preserving DRY: anything in here is generally not settable via the command # line. env = _AttributeDict({ 'abort_exception': None, 'again_prompt': 'Sorry, try again.', 'all_hosts': [], 'combine_stderr': True, 'colorize_errors': False, 'command': None, 'command_prefixes': [], 'cwd': '', # Must be empty string, not None, for concatenation purposes 'dedupe_hosts': True, 'default_port': default_port, 'eagerly_disconnect': False, 'echo_stdin': True, 'effective_roles': [], 'exclude_hosts': [], 'gateway': None, 'gss_auth': None, 'gss_deleg': None, 'gss_kex': None, 'host': None, 'host_string': None, 'lcwd': '', # Must be empty string, not None, for concatenation purposes 'local_user': _get_system_username(), 'output_prefix': True, 'passwords': {}, 'path': '', 'path_behavior': 'append', 'port': default_port, 'real_fabfile': None, 'remote_interrupt': None, 'roles': [], 'roledefs': {}, 'shell_env': {}, 'skip_bad_hosts': False, 'skip_unknown_tasks': False, 'ssh_config_path': default_ssh_config_path, 'sudo_passwords': {}, 'ok_ret_codes': [0], # a list of return codes that indicate success # -S so sudo accepts passwd via stdin, -p with our known-value prompt for # later detection (thus %s -- gets filled with env.sudo_prompt at runtime) 'sudo_prefix': "sudo -S -p '%(sudo_prompt)s' ", 'sudo_prompt': 'sudo password:', 'sudo_user': None, 'tasks': [], 'prompts': {}, 'use_exceptions_for': {'network': False}, 'use_shell': True, 'use_ssh_config': False, 'user': None, 'version': get_version('short') }) # Fill in exceptions settings exceptions = ['network'] exception_dict = {} for e in exceptions: exception_dict[e] = False env.use_exceptions_for = _AliasDict(exception_dict, aliases={'everything': exceptions}) # Add in option defaults for option in env_options: env[option.dest] = option.default # # Command dictionary # # Keys are the command/function names, values are the callables themselves. # This is filled in when main() runs. commands = {} # # Host connection dict/cache # connections = HostConnectionCache() def _open_session(): transport = connections[env.host_string].get_transport() # Try passing session-open timeout for Paramiko versions which support it # (1.14.3+) try: session = transport.open_session(timeout=env.timeout) # Revert to old call behavior if we seem to have hit arity error. # TODO: consider introspecting the exception to avoid masking other # TypeErrors; but this is highly fragile, especially when taking i18n into # account. except TypeError: # Assume arity error session = transport.open_session() return session def default_channel(): """ Return a channel object based on ``env.host_string``. """ try: chan = _open_session() except ssh.SSHException, err: if str(err) == 'SSH session not active': connections[env.host_string].close() del connections[env.host_string] chan = _open_session() else: raise chan.settimeout(0.1) chan.input_enabled = True return chan # # Output controls # # Keys are "levels" or "groups" of output, values are always boolean, # determining whether output falling into the given group is printed or not # printed. # # By default, everything except 'debug' is printed, as this is what the average # user, and new users, are most likely to expect. # # See docs/usage.rst for details on what these levels mean. 
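# For instance, because 'everything' expands through the nested 'output'
# alias below, a single assignment such as:
#
#     output['everything'] = False
#
# flips warnings, running, user, stdout, stderr and exceptions off in one
# go -- which is what e.g. hide('everything') relies on.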
output = _AliasDict({ 'status': True, 'aborts': True, 'warnings': True, 'running': True, 'stdout': True, 'stderr': True, 'exceptions': False, 'debug': False, 'user': True }, aliases={ 'everything': ['warnings', 'running', 'user', 'output', 'exceptions'], 'output': ['stdout', 'stderr'], 'commands': ['stdout', 'running'] }) fabric-1.14.0/fabric/task_utils.py000066400000000000000000000053431315011462000170030ustar00rootroot00000000000000from fabric.utils import abort, indent from fabric import state # For attribute tomfoolery class _Dict(dict): pass def _crawl(name, mapping): """ ``name`` of ``'a.b.c'`` => ``mapping['a']['b']['c']`` """ key, _, rest = name.partition('.') value = mapping[key] if not rest: return value return _crawl(rest, value) def crawl(name, mapping): try: result = _crawl(name, mapping) # Handle default tasks if isinstance(result, _Dict): if getattr(result, 'default', False): result = result.default # Ensure task modules w/ no default are treated as bad targets else: result = None return result except (KeyError, TypeError): return None def merge(hosts, roles, exclude, roledefs): """ Merge given host and role lists into one list of deduped hosts. """ # Abort if any roles don't exist bad_roles = [x for x in roles if x not in roledefs] if bad_roles: abort("The following specified roles do not exist:\n%s" % ( indent(bad_roles) )) # Coerce strings to one-item lists if isinstance(hosts, basestring): hosts = [hosts] # Look up roles, turn into flat list of hosts role_hosts = [] for role in roles: value = roledefs[role] # Handle dict style roledefs if isinstance(value, dict): value = value['hosts'] # Handle "lazy" roles (callables) if callable(value): value = value() role_hosts += value # Strip whitespace from host strings. cleaned_hosts = [x.strip() for x in list(hosts) + list(role_hosts)] # Return deduped combo of hosts and role_hosts, preserving order within # them (vs using set(), which may lose ordering) and skipping hosts to be # excluded. # But only if the user hasn't indicated they want this behavior disabled. all_hosts = cleaned_hosts if state.env.dedupe_hosts: deduped_hosts = [] for host in cleaned_hosts: if host not in deduped_hosts and host not in exclude: deduped_hosts.append(host) all_hosts = deduped_hosts return all_hosts def parse_kwargs(kwargs): new_kwargs = {} hosts = [] roles = [] exclude_hosts = [] for key, value in kwargs.iteritems(): if key == 'host': hosts = [value] elif key == 'hosts': hosts = value elif key == 'role': roles = [value] elif key == 'roles': roles = value elif key == 'exclude_hosts': exclude_hosts = value else: new_kwargs[key] = value return new_kwargs, hosts, roles, exclude_hosts fabric-1.14.0/fabric/tasks.py000066400000000000000000000400431315011462000157420ustar00rootroot00000000000000from __future__ import with_statement import inspect import sys import textwrap from fabric import state from fabric.utils import abort, warn, error from fabric.network import to_dict, disconnect_all from fabric.context_managers import settings from fabric.job_queue import JobQueue from fabric.task_utils import crawl, merge, parse_kwargs from fabric.exceptions import NetworkError if sys.version_info[:2] == (2, 5): # Python 2.5 inspect.getargspec returns a tuple # instead of ArgSpec namedtuple. 
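    # So: expose that tuple through a minimal object carrying
    # .args/.varargs/.keywords/.defaults attributes (while remaining
    # index-able like the tuple), then monkeypatch inspect.getargspec
    # to return it.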
class ArgSpec(object): def __init__(self, args, varargs, keywords, defaults): self.args = args self.varargs = varargs self.keywords = keywords self.defaults = defaults self._tuple = (args, varargs, keywords, defaults) def __getitem__(self, idx): return self._tuple[idx] def patched_get_argspec(func): return ArgSpec(*inspect._getargspec(func)) inspect._getargspec = inspect.getargspec inspect.getargspec = patched_get_argspec def get_task_details(task): details = [ textwrap.dedent(task.__doc__) if task.__doc__ else 'No docstring provided'] argspec = inspect.getargspec(task) default_args = [] if not argspec.defaults else argspec.defaults num_default_args = len(default_args) args_without_defaults = argspec.args[:len(argspec.args) - num_default_args] args_with_defaults = argspec.args[-1 * num_default_args:] details.append('Arguments: %s' % ( ', '.join( args_without_defaults + [ '%s=%r' % (arg, default) for arg, default in zip(args_with_defaults, default_args) ]) )) return '\n'.join(details) def _get_list(env): def inner(key): return env.get(key, []) return inner class Task(object): """ Abstract base class for objects wishing to be picked up as Fabric tasks. Instances of subclasses will be treated as valid tasks when present in fabfiles loaded by the :doc:`fab ` tool. For details on how to implement and use `~fabric.tasks.Task` subclasses, please see the usage documentation on :ref:`new-style tasks `. .. versionadded:: 1.1 """ name = 'undefined' use_task_objects = True aliases = None is_default = False # TODO: make it so that this wraps other decorators as expected def __init__(self, alias=None, aliases=None, default=False, name=None, *args, **kwargs): if alias is not None: self.aliases = [alias, ] if aliases is not None: self.aliases = aliases if name is not None: self.name = name self.is_default = default def __details__(self): return get_task_details(self.run) def run(self): raise NotImplementedError def get_hosts_and_effective_roles(self, arg_hosts, arg_roles, arg_exclude_hosts, env=None): """ Return a tuple containing the host list the given task should be using and the roles being used. See :ref:`host-lists` for detailed documentation on how host lists are set. .. versionchanged:: 1.9 """ env = env or {'hosts': [], 'roles': [], 'exclude_hosts': []} roledefs = env.get('roledefs', {}) # Command line per-task takes precedence over anything else. if arg_hosts or arg_roles: return merge(arg_hosts, arg_roles, arg_exclude_hosts, roledefs), arg_roles # Decorator-specific hosts/roles go next func_hosts = getattr(self, 'hosts', []) func_roles = getattr(self, 'roles', []) if func_hosts or func_roles: return merge(func_hosts, func_roles, arg_exclude_hosts, roledefs), func_roles # Finally, the env is checked (which might contain globally set lists # from the CLI or from module-level code). This will be the empty list # if these have not been set -- which is fine, this method should # return an empty list if no hosts have been set anywhere. 
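        # Illustration (hypothetical values): for `fab mytask:hosts=web1` on
        # a task decorated with @hosts('web2') while env.hosts == ['web3'],
        # the per-task CLI value wins and only 'web1' is used; drop the CLI
        # argument and 'web2' wins; drop the decorator too and 'web3' is used.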
env_vars = map(_get_list(env), "hosts roles exclude_hosts".split()) env_vars.append(roledefs) return merge(*env_vars), env.get('roles', []) def get_pool_size(self, hosts, default): # Default parallel pool size (calculate per-task in case variables # change) default_pool_size = default or len(hosts) # Allow per-task override # Also cast to int in case somebody gave a string from_task = getattr(self, 'pool_size', None) pool_size = int(from_task or default_pool_size) # But ensure it's never larger than the number of hosts pool_size = min((pool_size, len(hosts))) # Inform user of final pool size for this task if state.output.debug: print("Parallel tasks now using pool size of %d" % pool_size) return pool_size class WrappedCallableTask(Task): """ Wraps a given callable transparently, while marking it as a valid Task. Generally used via `~fabric.decorators.task` and not directly. .. versionadded:: 1.1 .. seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.decorators.task` """ def __init__(self, callable, *args, **kwargs): super(WrappedCallableTask, self).__init__(*args, **kwargs) self.wrapped = callable # Don't use getattr() here -- we want to avoid touching self.name # entirely so the superclass' value remains default. if hasattr(callable, '__name__'): if self.name == 'undefined': self.__name__ = self.name = callable.__name__ else: self.__name__ = self.name if hasattr(callable, '__doc__'): self.__doc__ = callable.__doc__ if hasattr(callable, '__module__'): self.__module__ = callable.__module__ def __call__(self, *args, **kwargs): return self.run(*args, **kwargs) def run(self, *args, **kwargs): return self.wrapped(*args, **kwargs) def __getattr__(self, k): return getattr(self.wrapped, k) def __details__(self): orig = self while 'wrapped' in orig.__dict__: orig = orig.__dict__.get('wrapped') return get_task_details(orig) def requires_parallel(task): """ Returns True if given ``task`` should be run in parallel mode. Specifically: * It's been explicitly marked with ``@parallel``, or: * It's *not* been explicitly marked with ``@serial`` *and* the global parallel option (``env.parallel``) is set to ``True``. """ return ( (state.env.parallel and not getattr(task, 'serial', False)) or getattr(task, 'parallel', False) ) def _parallel_tasks(commands_to_run): return any(map( lambda x: requires_parallel(crawl(x[0], state.commands)), commands_to_run )) def _is_network_error_ignored(): return not state.env.use_exceptions_for['network'] and state.env.skip_bad_hosts def _execute(task, host, my_env, args, kwargs, jobs, queue, multiprocessing): """ Primary single-host work body of execute() """ # Log to stdout if state.output.running and not hasattr(task, 'return_value'): print("[%s] Executing task '%s'" % (host, my_env['command'])) # Create per-run env with connection settings local_env = to_dict(host) local_env.update(my_env) # Set a few more env flags for parallelism if queue is not None: local_env.update({'parallel': True, 'linewise': True}) # Handle parallel execution if queue is not None: # Since queue is only set for parallel name = local_env['host_string'] # Wrap in another callable that: # * expands the env it's given to ensure parallel, linewise, etc are # all set correctly and explicitly. Such changes are naturally # insulted from the parent process. 
# * nukes the connection cache to prevent shared-access problems # * knows how to send the tasks' return value back over a Queue # * captures exceptions raised by the task def inner(args, kwargs, queue, name, env): state.env.update(env) def submit(result): queue.put({'name': name, 'result': result}) try: state.connections.clear() submit(task.run(*args, **kwargs)) except BaseException, e: # We really do want to capture everything # SystemExit implies use of abort(), which prints its own # traceback, host info etc -- so we don't want to double up # on that. For everything else, though, we need to make # clear what host encountered the exception that will # print. if e.__class__ is not SystemExit: if not (isinstance(e, NetworkError) and _is_network_error_ignored()): sys.stderr.write("!!! Parallel execution exception under host %r:\n" % name) submit(e) # Here, anything -- unexpected exceptions, or abort() # driven SystemExits -- will bubble up and terminate the # child process. if not (isinstance(e, NetworkError) and _is_network_error_ignored()): raise # Stuff into Process wrapper kwarg_dict = { 'args': args, 'kwargs': kwargs, 'queue': queue, 'name': name, 'env': local_env, } p = multiprocessing.Process(target=inner, kwargs=kwarg_dict) # Name/id is host string p.name = name # Add to queue jobs.append(p) # Handle serial execution else: with settings(**local_env): return task.run(*args, **kwargs) def _is_task(task): return isinstance(task, Task) def execute(task, *args, **kwargs): """ Execute ``task`` (callable or name), honoring host/role decorators, etc. ``task`` may be an actual callable object, or it may be a registered task name, which is used to look up a callable just as if the name had been given on the command line (including :ref:`namespaced tasks `, e.g. ``"deploy.migrate"``. The task will then be executed once per host in its host list, which is (again) assembled in the same manner as CLI-specified tasks: drawing from :option:`-H`, :ref:`env.hosts `, the `~fabric.decorators.hosts` or `~fabric.decorators.roles` decorators, and so forth. ``host``, ``hosts``, ``role``, ``roles`` and ``exclude_hosts`` kwargs will be stripped out of the final call, and used to set the task's host list, as if they had been specified on the command line like e.g. ``fab taskname:host=hostname``. Any other arguments or keyword arguments will be passed verbatim into ``task`` (the function itself -- not the ``@task`` decorator wrapping your function!) when it is called, so ``execute(mytask, 'arg1', kwarg1='value')`` will (once per host) invoke ``mytask('arg1', kwarg1='value')``. :returns: a dictionary mapping host strings to the given task's return value for that host's execution run. For example, ``execute(foo, hosts=['a', 'b'])`` might return ``{'a': None, 'b': 'bar'}`` if ``foo`` returned nothing on host `a` but returned ``'bar'`` on host `b`. In situations where a task execution fails for a given host but overall progress does not abort (such as when :ref:`env.skip_bad_hosts ` is True) the return value for that host will be the error object or message. .. seealso:: :ref:`The execute usage docs `, for an expanded explanation and some examples. .. versionadded:: 1.3 .. versionchanged:: 1.4 Added the return value mapping; previously this function had no defined return value. 
""" my_env = {'clean_revert': True} results = {} # Obtain task is_callable = callable(task) if not (is_callable or _is_task(task)): # Assume string, set env.command to it my_env['command'] = task task = crawl(task, state.commands) if task is None: msg = "%r is not callable or a valid task name" % (my_env['command'],) if state.env.get('skip_unknown_tasks', False): warn(msg) return else: abort(msg) # Set env.command if we were given a real function or callable task obj else: dunder_name = getattr(task, '__name__', None) my_env['command'] = getattr(task, 'name', dunder_name) # Normalize to Task instance if we ended up with a regular callable if not _is_task(task): task = WrappedCallableTask(task) # Filter out hosts/roles kwargs new_kwargs, hosts, roles, exclude_hosts = parse_kwargs(kwargs) # Set up host list my_env['all_hosts'], my_env['effective_roles'] = task.get_hosts_and_effective_roles(hosts, roles, exclude_hosts, state.env) parallel = requires_parallel(task) if parallel: # Import multiprocessing if needed, erroring out usefully # if it can't. try: import multiprocessing except ImportError: import traceback tb = traceback.format_exc() abort(tb + """ At least one task needs to be run in parallel, but the multiprocessing module cannot be imported (see above traceback.) Please make sure the module is installed or that the above ImportError is fixed.""") else: multiprocessing = None # Get pool size for this task pool_size = task.get_pool_size(my_env['all_hosts'], state.env.pool_size) # Set up job queue in case parallel is needed queue = multiprocessing.Queue() if parallel else None jobs = JobQueue(pool_size, queue) if state.output.debug: jobs._debug = True # Call on host list if my_env['all_hosts']: # Attempt to cycle on hosts, skipping if needed for host in my_env['all_hosts']: try: results[host] = _execute( task, host, my_env, args, new_kwargs, jobs, queue, multiprocessing ) except NetworkError, e: results[host] = e # Backwards compat test re: whether to use an exception or # abort if not state.env.use_exceptions_for['network']: func = warn if state.env.skip_bad_hosts else abort error(e.message, func=func, exception=e.wrapped) else: raise # If requested, clear out connections here and not just at the end. if state.env.eagerly_disconnect: disconnect_all() # If running in parallel, block until job queue is emptied if jobs: err = "One or more hosts failed while executing task '%s'" % ( my_env['command'] ) jobs.close() # Abort if any children did not exit cleanly (fail-fast). # This prevents Fabric from continuing on to any other tasks. # Otherwise, pull in results from the child run. 
ran_jobs = jobs.run() for name, d in ran_jobs.iteritems(): if d['exit_code'] != 0: if isinstance(d['results'], NetworkError) and \ _is_network_error_ignored(): error(d['results'].message, func=warn, exception=d['results'].wrapped) elif isinstance(d['results'], BaseException): error(err, exception=d['results']) else: error(err) results[name] = d['results'] # Or just run once for local-only else: with settings(**my_env): results[''] = task.run(*args, **new_kwargs) # Return what we can from the inner task executions return results fabric-1.14.0/fabric/thread_handling.py000066400000000000000000000013111315011462000177230ustar00rootroot00000000000000import threading import sys class ThreadHandler(object): def __init__(self, name, callable, *args, **kwargs): # Set up exception handling self.exception = None def wrapper(*args, **kwargs): try: callable(*args, **kwargs) except BaseException: self.exception = sys.exc_info() # Kick off thread thread = threading.Thread(None, wrapper, name, args, kwargs) thread.setDaemon(True) thread.start() # Make thread available to instantiator self.thread = thread def raise_if_needed(self): if self.exception: e = self.exception raise e[0], e[1], e[2] fabric-1.14.0/fabric/utils.py000066400000000000000000000341411315011462000157570ustar00rootroot00000000000000""" Internal subroutines for e.g. aborting execution with an error message, or performing indenting on multiline output. """ import os import sys import textwrap from traceback import format_exc def _encode(msg, stream): if isinstance(msg, unicode) and hasattr(stream, 'encoding') and not stream.encoding is None: return msg.encode(stream.encoding) else: return str(msg) def isatty(stream): """Check if a stream is a tty. Not all file-like objects implement the `isatty` method. """ fn = getattr(stream, 'isatty', None) if fn is None: return False return fn() def abort(msg): """ Abort execution, print ``msg`` to stderr and exit with error status (1.) This function currently makes use of `SystemExit`_ in a manner that is similar to `sys.exit`_ (but which skips the automatic printing to stderr, allowing us to more tightly control it via settings). Therefore, it's possible to detect and recover from inner calls to `abort` by using ``except SystemExit`` or similar. .. _sys.exit: http://docs.python.org/library/sys.html#sys.exit .. _SystemExit: http://docs.python.org/library/exceptions.html#exceptions.SystemExit """ from fabric.state import output, env if not env.colorize_errors: red = lambda x: x else: from colors import red if output.aborts: sys.stderr.write(red("\nFatal error: %s\n" % _encode(msg, sys.stderr))) sys.stderr.write(red("\nAborting.\n")) if env.abort_exception: raise env.abort_exception(msg) else: # See issue #1318 for details on the below; it lets us construct a # valid, useful SystemExit while sidestepping the automatic stderr # print (which would otherwise duplicate with the above in a # non-controllable fashion). e = SystemExit(1) e.message = msg raise e def warn(msg): """ Print warning message, but do not abort execution. This function honors Fabric's :doc:`output controls <../../usage/output_controls>` and will print the given ``msg`` to stderr, provided that the ``warnings`` output level (which is active by default) is turned on. 
""" from fabric.state import output, env if not env.colorize_errors: magenta = lambda x: x else: from colors import magenta if output.warnings: msg = _encode(msg, sys.stderr) sys.stderr.write(magenta("\nWarning: %s\n\n" % msg)) def indent(text, spaces=4, strip=False): """ Return ``text`` indented by the given number of spaces. If text is not a string, it is assumed to be a list of lines and will be joined by ``\\n`` prior to indenting. When ``strip`` is ``True``, a minimum amount of whitespace is removed from the left-hand side of the given string (so that relative indents are preserved, but otherwise things are left-stripped). This allows you to effectively "normalize" any previous indentation for some inputs. """ # Normalize list of strings into a string for dedenting. "list" here means # "not a string" meaning "doesn't have splitlines". Meh. if not hasattr(text, 'splitlines'): text = '\n'.join(text) # Dedent if requested if strip: text = textwrap.dedent(text) prefix = ' ' * spaces output = '\n'.join(prefix + line for line in text.splitlines()) # Strip out empty lines before/aft output = output.strip() # Reintroduce first indent (which just got stripped out) output = prefix + output return output def puts(text, show_prefix=None, end="\n", flush=False): """ An alias for ``print`` whose output is managed by Fabric's output controls. In other words, this function simply prints to ``sys.stdout``, but will hide its output if the ``user`` :doc:`output level ` is set to ``False``. If ``show_prefix=False``, `puts` will omit the leading ``[hostname]`` which it tacks on by default. (It will also omit this prefix if ``env.host_string`` is empty.) Newlines may be disabled by setting ``end`` to the empty string (``''``). (This intentionally mirrors Python 3's ``print`` syntax.) You may force output flushing (e.g. to bypass output buffering) by setting ``flush=True``. .. versionadded:: 0.9.2 .. seealso:: `~fabric.utils.fastprint` """ from fabric.state import output, env if show_prefix is None: show_prefix = env.output_prefix if output.user: prefix = "" if env.host_string and show_prefix: prefix = "[%s] " % env.host_string sys.stdout.write(prefix + _encode(text, sys.stdout) + end) if flush: sys.stdout.flush() def fastprint(text, show_prefix=False, end="", flush=True): """ Print ``text`` immediately, without any prefix or line ending. This function is simply an alias of `~fabric.utils.puts` with different default argument values, such that the ``text`` is printed without any embellishment and immediately flushed. It is useful for any situation where you wish to print text which might otherwise get buffered by Python's output buffering (such as within a processor intensive ``for`` loop). Since such use cases typically also require a lack of line endings (such as printing a series of dots to signify progress) it also omits the traditional newline by default. .. note:: Since `~fabric.utils.fastprint` calls `~fabric.utils.puts`, it is likewise subject to the ``user`` :doc:`output level `. .. versionadded:: 0.9.2 .. 
seealso:: `~fabric.utils.puts` """ return puts(text=text, show_prefix=show_prefix, end=end, flush=flush) def handle_prompt_abort(prompt_for): import fabric.state reason = "Needed to prompt for %s (host: %s), but %%s" % ( prompt_for, fabric.state.env.host_string ) # Explicit "don't prompt me bro" if fabric.state.env.abort_on_prompts: abort(reason % "abort-on-prompts was set to True") # Implicit "parallel == stdin/prompts have ambiguous target" if fabric.state.env.parallel: abort(reason % "input would be ambiguous in parallel mode") class _AttributeDict(dict): """ Dictionary subclass enabling attribute lookup/assignment of keys/values. For example:: >>> m = _AttributeDict({'foo': 'bar'}) >>> m.foo 'bar' >>> m.foo = 'not bar' >>> m['foo'] 'not bar' ``_AttributeDict`` objects also provide ``.first()`` which acts like ``.get()`` but accepts multiple keys as arguments, and returns the value of the first hit, e.g.:: >>> m = _AttributeDict({'foo': 'bar', 'biz': 'baz'}) >>> m.first('wrong', 'incorrect', 'foo', 'biz') 'bar' """ def __getattr__(self, key): try: return self[key] except KeyError: # to conform with __getattr__ spec raise AttributeError(key) def __setattr__(self, key, value): self[key] = value def first(self, *names): for name in names: value = self.get(name) if value: return value class _AliasDict(_AttributeDict): """ `_AttributeDict` subclass that allows for "aliasing" of keys to other keys. Upon creation, takes an ``aliases`` mapping, which should map alias names to lists of key names. Aliases do not store their own value, but instead set (override) all mapped keys' values. For example, in the following `_AliasDict`, calling ``mydict['foo'] = True`` will set the values of ``mydict['bar']``, ``mydict['biz']`` and ``mydict['baz']`` all to True:: mydict = _AliasDict( {'biz': True, 'baz': False}, aliases={'foo': ['bar', 'biz', 'baz']} ) Because it is possible for the aliased values to be in a heterogenous state, reading aliases is not supported -- only writing to them is allowed. This also means they will not show up in e.g. ``dict.keys()``. ..note:: Aliases are recursive, so you may refer to an alias within the key list of another alias. Naturally, this means that you can end up with infinite loops if you're not careful. `_AliasDict` provides a special function, `expand_aliases`, which will take a list of keys as an argument and will return that list of keys with any aliases expanded. This function will **not** dedupe, so any aliases which overlap will result in duplicate keys in the resulting list. """ def __init__(self, arg=None, aliases=None): init = super(_AliasDict, self).__init__ if arg is not None: init(arg) else: init() # Can't use super() here because of _AttributeDict's setattr override dict.__setattr__(self, 'aliases', aliases) def __setitem__(self, key, value): # Attr test required to not blow up when deepcopy'd if hasattr(self, 'aliases') and key in self.aliases: for aliased in self.aliases[key]: self[aliased] = value else: return super(_AliasDict, self).__setitem__(key, value) def expand_aliases(self, keys): ret = [] for key in keys: if key in self.aliases: ret.extend(self.expand_aliases(self.aliases[key])) else: ret.append(key) return ret def _pty_size(): """ Obtain (rows, cols) tuple for sizing a pty on the remote end. Defaults to 80x24 (which is also the 'ssh' lib's default) but will detect local (stdout-based) terminal window size on non-Windows platforms. 
""" from fabric.state import win32 if not win32: import fcntl import termios import struct default_rows, default_cols = 24, 80 rows, cols = default_rows, default_cols if not win32 and isatty(sys.stdout): # We want two short unsigned integers (rows, cols) fmt = 'HH' # Create an empty (zeroed) buffer for ioctl to map onto. Yay for C! buffer = struct.pack(fmt, 0, 0) # Call TIOCGWINSZ to get window size of stdout, returns our filled # buffer try: result = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, buffer) # Unpack buffer back into Python data types rows, cols = struct.unpack(fmt, result) # Fall back to defaults if TIOCGWINSZ returns unreasonable values if rows == 0: rows = default_rows if cols == 0: cols = default_cols # Deal with e.g. sys.stdout being monkeypatched, such as in testing. # Or termios not having a TIOCGWINSZ. except AttributeError: pass return rows, cols def error(message, func=None, exception=None, stdout=None, stderr=None): """ Call ``func`` with given error ``message``. If ``func`` is None (the default), the value of ``env.warn_only`` determines whether to call ``abort`` or ``warn``. If ``exception`` is given, it is inspected to get a string message, which is printed alongside the user-generated ``message``. If ``stdout`` and/or ``stderr`` are given, they are assumed to be strings to be printed. """ import fabric.state if func is None: func = fabric.state.env.warn_only and warn or abort # If exception printing is on, append a traceback to the message if fabric.state.output.exceptions or fabric.state.output.debug: exception_message = format_exc() if exception_message: message += "\n\n" + exception_message # Otherwise, if we were given an exception, append its contents. elif exception is not None: # Figure out how to get a string out of the exception; EnvironmentError # subclasses, for example, "are" integers and .strerror is the string. # Others "are" strings themselves. May have to expand this further for # other error types. if hasattr(exception, 'strerror') and exception.strerror is not None: underlying = exception.strerror else: underlying = exception message += "\n\nUnderlying exception:\n" + indent(str(underlying)) if func is abort: if stdout and not fabric.state.output.stdout: message += _format_error_output("Standard output", stdout) if stderr and not fabric.state.output.stderr: message += _format_error_output("Standard error", stderr) return func(message) def _format_error_output(header, body): term_width = _pty_size()[1] header_side_length = (term_width - (len(header) + 2)) / 2 mark = "=" side = mark * header_side_length return "\n\n%s %s %s\n\n%s\n\n%s" % ( side, header, side, body, mark * term_width ) # TODO: replace with collections.deque(maxlen=xxx) in Python 2.6 class RingBuffer(list): def __init__(self, value, maxlen): # Because it's annoying typing this multiple times. self._super = super(RingBuffer, self) # Python 2.6 deque compatible option name! self._maxlen = maxlen return self._super.__init__(value) def _trim(self): if self._maxlen is None: return overage = max(len(self) - self._maxlen, 0) del self[0:overage] def append(self, value): self._super.append(value) self._trim() def extend(self, values): self._super.extend(values) self._trim() def __iadd__(self, other): self.extend(other) return self # Paranoia from here on out. 
def insert(self, index, value): raise ValueError("Can't insert into the middle of a ring buffer!") def __setslice__(self, i, j, sequence): raise ValueError("Can't set a slice of a ring buffer!") def __setitem__(self, key, value): if isinstance(key, slice): raise ValueError("Can't set a slice of a ring buffer!") else: return self._super.__setitem__(key, value) def apply_lcwd(path, env): # Apply CWD if a relative path if not os.path.isabs(path) and env.lcwd: path = os.path.join(env.lcwd, path) return path fabric-1.14.0/fabric/version.py000066400000000000000000000054721315011462000163110ustar00rootroot00000000000000""" Current Fabric version constant plus version pretty-print method. This functionality is contained in its own module to prevent circular import problems with ``__init__.py`` (which is loaded by setup.py during installation, which in turn needs access to this version information.) """ from subprocess import Popen, PIPE from os.path import abspath, dirname VERSION = (1, 14, 0, 'final', 0) def git_sha(): loc = abspath(dirname(__file__)) try: p = Popen( "cd \"%s\" && git log -1 --format=format:%%h" % loc, shell=True, stdout=PIPE, stderr=PIPE ) return p.communicate()[0] # OSError occurs on Unix-derived platforms lacking Popen's configured shell # default, /bin/sh. E.g. Android. except OSError: return None def get_version(form='short'): """ Return a version string for this package, based on `VERSION`. Takes a single argument, ``form``, which should be one of the following strings: * ``branch``: just the major + minor, e.g. "0.9", "1.0". * ``short`` (default): compact, e.g. "0.9rc1", "0.9.0". For package filenames or SCM tag identifiers. * ``normal``: human readable, e.g. "0.9", "0.9.1", "0.9 beta 1". For e.g. documentation site headers. * ``verbose``: like ``normal`` but fully explicit, e.g. "0.9 final". For tag commit messages, or anywhere that it's important to remove ambiguity between a branch and the first final release within that branch. * ``all``: Returns all of the above, as a dict. """ # Setup versions = {} branch = "%s.%s" % (VERSION[0], VERSION[1]) tertiary = VERSION[2] type_ = VERSION[3] final = (type_ == "final") type_num = VERSION[4] firsts = "".join([x[0] for x in type_.split()]) # Branch versions['branch'] = branch # Short v = branch if (tertiary or final): v += "." + str(tertiary) if not final: v += firsts if type_num: v += str(type_num) versions['short'] = v # Normal v = branch if tertiary: v += "." + str(tertiary) if not final: if type_num: v += " " + type_ + " " + str(type_num) else: v += " pre-" + type_ versions['normal'] = v # Verbose v = branch if tertiary: v += "." + str(tertiary) if not final: if type_num: v += " " + type_ + " " + str(type_num) else: v += " pre-" + type_ else: v += " final" versions['verbose'] = v try: return versions[form] except KeyError: if form == 'all': return versions raise TypeError('"%s" is not a valid form specifier.' 
% form) __version__ = get_version('short') if __name__ == "__main__": print(get_version('all')) fabric-1.14.0/integration/000077500000000000000000000000001315011462000153375ustar00rootroot00000000000000fabric-1.14.0/integration/test_contrib.py000066400000000000000000000073351315011462000204200ustar00rootroot00000000000000import os import re from fabric.api import run, local from fabric.contrib import files, project from utils import Integration def tildify(path): home = run("echo ~", quiet=True).stdout.strip() return path.replace('~', home) def expect(path): assert files.exists(tildify(path)) def expect_contains(path, value): assert files.contains(tildify(path), value) def escape(path): return path.replace(' ', r'\ ') class FileCleaner(Integration): def setup(self): self.local = [] self.remote = [] def teardown(self): super(FileCleaner, self).teardown() for created in self.local: os.unlink(created) for created in self.remote: run("rm %s" % escape(created)) class TestTildeExpansion(FileCleaner): def test_append(self): for target in ('~/append_test', '~/append_test with spaces'): self.remote.append(target) files.append(target, ['line']) expect(target) def test_exists(self): for target in ('~/exists_test', '~/exists test with space'): self.remote.append(target) run("touch %s" % escape(target)) expect(target) def test_sed(self): for target in ('~/sed_test', '~/sed test with space'): self.remote.append(target) run("echo 'before' > %s" % escape(target)) files.sed(target, 'before', 'after') expect_contains(target, 'after') def test_upload_template(self): for i, target in enumerate(( '~/upload_template_test', '~/upload template test with space' )): src = "source%s" % i local("touch %s" % src) self.local.append(src) self.remote.append(target) files.upload_template(src, target) expect(target) class TestIsLink(FileCleaner): # TODO: add more of these. meh. def test_is_link_is_true_on_symlink(self): self.remote.extend(['/tmp/foo', '/tmp/bar']) run("touch /tmp/foo") run("ln -s /tmp/foo /tmp/bar") assert files.is_link('/tmp/bar') def test_is_link_is_false_on_non_link(self): self.remote.append('/tmp/biz') run("touch /tmp/biz") assert not files.is_link('/tmp/biz') rsync_sources = ( 'integration/', 'integration/test_contrib.py', 'integration/test_operations.py', 'integration/utils.py' ) class TestRsync(Integration): def rsync(self, id_, **kwargs): remote = '/tmp/rsync-test-%s/' % id_ if files.exists(remote): run("rm -rf %s" % remote) return project.rsync_project( remote_dir=remote, local_dir='integration', ssh_opts='-o StrictHostKeyChecking=no', capture=True, **kwargs ) def test_existing_default_args(self): """ Rsync uses -v by default """ r = self.rsync(1) for x in rsync_sources: assert re.search(r'^%s$' % x, r.stdout, re.M), "'%s' was not found in '%s'" % (x, r.stdout) def test_overriding_default_args(self): """ Use of default_args kwarg can be used to nuke e.g. -v """ r = self.rsync(2, default_opts='-pthrz') for x in rsync_sources: assert not re.search(r'^%s$' % x, r.stdout, re.M), "'%s' was found in '%s'" % (x, r.stdout) class TestUploadTemplate(FileCleaner): def test_allows_pty_disable(self): src = "source_file" target = "remote_file" local("touch %s" % src) self.local.append(src) self.remote.append(target) # Just make sure it doesn't asplode. meh. 
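        # pty=False just needs to propagate through to the underlying
        # run/sudo call without blowing up; no output assertions required.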
files.upload_template(src, target, pty=False) expect(target) fabric-1.14.0/integration/test_operations.py000066400000000000000000000153331315011462000211400ustar00rootroot00000000000000from __future__ import with_statement from StringIO import StringIO import os import posixpath import shutil from fabric.api import ( run, path, put, sudo, env, cd, local, settings, get ) from fabric.contrib.files import exists from utils import Integration def assert_mode(path, mode): remote_mode = run("stat -c \"%%a\" \"%s\"" % path).stdout assert remote_mode == mode, "remote %r != expected %r" % (remote_mode, mode) class TestOperations(Integration): filepath = "/tmp/whocares" dirpath = "/tmp/whatever/bin" not_owned = "/tmp/notmine" def setup(self): super(TestOperations, self).setup() run("mkdir -p %s" % " ".join([self.dirpath, self.not_owned])) def teardown(self): super(TestOperations, self).teardown() # Revert any chown crap from put sudo tests sudo("chown %s ." % env.user) # Nuke to prevent bleed sudo("rm -rf %s" % " ".join([self.dirpath, self.filepath])) sudo("rm -rf %s" % self.not_owned) def test_no_trailing_space_in_shell_path_in_run(self): put(StringIO("#!/bin/bash\necho hi"), "%s/myapp" % self.dirpath, mode="0755") with path(self.dirpath): assert run('myapp').stdout == 'hi' def test_string_put_mode_arg_doesnt_error(self): put(StringIO("#!/bin/bash\necho hi"), self.filepath, mode="0755") assert_mode(self.filepath, "755") def test_int_put_mode_works_ok_too(self): put(StringIO("#!/bin/bash\necho hi"), self.filepath, mode=0755) assert_mode(self.filepath, "755") def _chown(self, target): sudo("chown root %s" % target) def _put_via_sudo(self, source=None, target_suffix='myfile', **kwargs): # Ensure target dir prefix is not owned by our user (so we fail unless # the sudo part of things is working) self._chown(self.not_owned) source = source if source else StringIO("whatever") # Drop temp file into that dir, via use_sudo, + any kwargs return put( source, self.not_owned + '/' + target_suffix, use_sudo=True, **kwargs ) def test_put_with_use_sudo(self): self._put_via_sudo() def test_put_with_dir_and_use_sudo(self): # Test cwd should be root of fabric source tree. Use our own folder as # the source, meh. self._put_via_sudo(source='integration', target_suffix='') def test_put_with_use_sudo_and_custom_temp_dir(self): # TODO: allow dependency injection in sftp.put or w/e, test it in # isolation instead. # For now, just half-ass it by ensuring $HOME isn't writable # temporarily. 
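        # Making $HOME unwritable below forces put() to actually honor the
        # temp_dir kwarg rather than silently succeeding via the default
        # remote CWD.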
self._chown('.') self._put_via_sudo(temp_dir='/tmp') def test_put_with_use_sudo_dir_and_custom_temp_dir(self): self._chown('.') self._put_via_sudo(source='integration', target_suffix='', temp_dir='/tmp') def test_put_use_sudo_and_explicit_mode(self): # Setup target_dir = posixpath.join(self.filepath, 'blah') subdir = "inner" subdir_abs = posixpath.join(target_dir, subdir) filename = "whatever.txt" target_file = posixpath.join(subdir_abs, filename) run("mkdir -p %s" % subdir_abs) self._chown(subdir_abs) local_path = os.path.join('/tmp', filename) with open(local_path, 'w+') as fd: fd.write('stuff\n') # Upload + assert with cd(target_dir): put(local_path, subdir, use_sudo=True, mode='777') assert_mode(target_file, '777') def test_put_file_to_dir_with_use_sudo_and_mirror_mode(self): # Ensure mode of local file, umask varies on eg travis vs various # localhosts source = 'whatever.txt' try: local("touch %s" % source) local("chmod 644 %s" % source) # Target for _put_via_sudo is a directory by default uploaded = self._put_via_sudo( source=source, mirror_local_mode=True ) assert_mode(uploaded[0], '644') finally: local("rm -f %s" % source) def test_put_directory_use_sudo_and_spaces(self): localdir = 'I have spaces' localfile = os.path.join(localdir, 'file.txt') os.mkdir(localdir) with open(localfile, 'w') as fd: fd.write('stuff\n') try: uploaded = self._put_via_sudo(localdir, target_suffix='') # Kinda dumb, put() would've died if it couldn't do it, but. assert exists(uploaded[0]) assert exists(posixpath.dirname(uploaded[0])) finally: shutil.rmtree(localdir) def test_agent_forwarding_functions(self): # When paramiko #399 is present this will hang indefinitely with settings(forward_agent=True): run('ssh-add -L') def test_get_with_use_sudo_unowned_file(self): # Ensure target is not normally readable by us target = self.filepath sudo("echo 'nope' > %s" % target) sudo("chown root:root %s" % target) sudo("chmod 0440 %s" % target) # Pull down with use_sudo, confirm contents local_ = StringIO() get( local_path=local_, remote_path=target, use_sudo=True, ) assert local_.getvalue() == "nope\n" def test_get_with_use_sudo_groupowned_file(self): # Issue #1226: file gotten w/ use_sudo, file normally readable via # group perms (yes - so use_sudo not required - full use case involves # full-directory get() where use_sudo *is* required). Prior to fix, # temp file is chmod 404 which seems to cause perm denied due to group # membership (despite 'other' readability). 
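        # Recreate that scenario: a root-owned file readable only through a
        # group the connecting user belongs to (no 'other' read bit set).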
target = self.filepath sudo("echo 'nope' > %s" % target) # Same group as connected user gid = run("id -g") sudo("chown root:%s %s" % (gid, target)) # Same perms as bug use case (only really need group read) sudo("chmod 0640 %s" % target) # Do eet local_ = StringIO() get( local_path=local_, remote_path=target, use_sudo=True, ) assert local_.getvalue() == "nope\n" def test_get_from_unreadable_dir(self): # Put file in dir as normal user remotepath = "%s/myfile.txt" % self.dirpath run("echo 'foo' > %s" % remotepath) # Make dir unreadable (but still executable - impossible to obtain # file if dir is both unreadable and unexecutable) sudo("chown root:root %s" % self.dirpath) sudo("chmod 711 %s" % self.dirpath) # Try gettin' it local_ = StringIO() get(local_path=local_, remote_path=remotepath) assert local_.getvalue() == 'foo\n' fabric-1.14.0/integration/utils.py000066400000000000000000000006251315011462000170540ustar00rootroot00000000000000import os import sys # Pull in regular tests' utilities mod = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'tests')) sys.path.insert(0, mod) # Clean up del sys.path[0] class Integration(object): def setup(self): # Just so subclasses can super() us w/o fear. Meh. pass def teardown(self): # Just so subclasses can super() us w/o fear. Meh. pass fabric-1.14.0/requirements.txt000066400000000000000000000010231315011462000162740ustar00rootroot00000000000000# These requirements are for DEVELOPMENT ONLY! # You do not need e.g. Sphinx or Fudge just to run the 'fab' tool. # Instead, these are necessary for executing the test suite or developing the # cutting edge (which may have different requirements from released versions.) # Development version of Paramiko, just in case we're in one of those phases. -e git+https://github.com/paramiko/paramiko@1.17#egg=paramiko # Pull in actual "you already have local installed checkouts of Fabric + # Paramiko" dev deps. -r dev-requirements.txt fabric-1.14.0/setup.cfg000066400000000000000000000001231315011462000146310ustar00rootroot00000000000000[flake8] exclude = tests/support,tests/Python26SocketServer.py,sites,fabric/api.py fabric-1.14.0/setup.py000066400000000000000000000043771315011462000145410ustar00rootroot00000000000000#!/usr/bin/env python from __future__ import with_statement import sys from setuptools import setup, find_packages from fabric.version import get_version with open('README.rst') as f: readme = f.read() long_description = """ To find out what's new in this version of Fabric, please see `the changelog `_. You can also install the `development version via ``pip install -e git+https://github.com/fabric/fabric/#egg=fabric``. ---- %s ---- For more information, please see the Fabric website or execute ``fab --help``. 
""" % (readme) if sys.version_info[:2] < (2, 6): install_requires=['paramiko>=1.10,<1.13'] else: install_requires=['paramiko>=1.10,<3.0'] setup( name='Fabric', version=get_version('short'), description='Fabric is a simple, Pythonic tool for remote execution and deployment.', long_description=long_description, author='Jeff Forcier', author_email='jeff@bitprophet.org', url='http://fabfile.org', packages=find_packages(), test_suite='nose.collector', tests_require=['nose<2.0', 'fudge<1.0', 'jinja2<3.0'], install_requires=install_requires, entry_points={ 'console_scripts': [ 'fab = fabric.main:main', ] }, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Environment :: Console', 'Intended Audience :: Developers', 'Intended Audience :: System Administrators', 'License :: OSI Approved :: BSD License', 'Operating System :: MacOS :: MacOS X', 'Operating System :: Unix', 'Operating System :: POSIX', 'Programming Language :: Python', 'Programming Language :: Python :: 2 :: Only', 'Programming Language :: Python :: 2.5', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Topic :: Software Development', 'Topic :: Software Development :: Build Tools', 'Topic :: Software Development :: Libraries', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: System :: Clustering', 'Topic :: System :: Software Distribution', 'Topic :: System :: Systems Administration', ], ) fabric-1.14.0/sites/000077500000000000000000000000001315011462000141435ustar00rootroot00000000000000fabric-1.14.0/sites/_shared_static/000077500000000000000000000000001315011462000171175ustar00rootroot00000000000000fabric-1.14.0/sites/_shared_static/logo.png000066400000000000000000000144011315011462000205650ustar00rootroot00000000000000‰PNG  IHDR–u¬ÑÎ$iCCPICC Profile8…UßoÛT>‰oR¤? 
/ ].´(ÖCá,è«IåD~ìiY_î¡ìG,ØìÿмÜ@†)·Ð­‘S“Ã`¢•cÁ«Ä‡Øá´%ú§l%Ú a®Ð›+%6 ¦GìÕ“X©ò¤¥v-;`ApçåຎtÉwq´”!ÀÊX*€âï‘_¥ÎDT)æ…™¸BroÈ)‡‰æ¤)£ä%šòT‰“á—®ä]áÆ–ÁV’À‚Þí9bÊ—®}¿›b7ü~Tï¥'´»”w… &ŸŽ”Ë…ÐÓr8‹/÷%,l·zäå”N\×%p,ßåBÏ(kt‰M“%,È V¼Òîd«?l¤î›yZô+v)’ˆnY¬–†¨Âk`EÙ~‰èm2^jÕ ›%[»°u:+ Ã;¢M¤õXÓ¯ÏÌu¦F™ã"”&Þ†nWý^ò_Þªt0ZmÙ*?lQ÷=è¦ßA†Œ¢€ÇW>´‚Ë,À,>= Þ«e;Y0f ‚xòƳãÏÞé: Ú‰®QD‹Ïþ#z’i|¬ï†¬Z7¤Š˜TÇÓ߬ɵê'Ø£}Pd%wU½òöâ˜tEÕy;bá´)¼hWRWMž¬J² /lä¤M.C®ÿé.«S®Ë[`¡uë”;’4!„£ ¯h`+çÓ>:-ªÎï5æYò­5*ê±ù8G+h¥×ÀšPwœ.Š‹À¬Õ7cþ\h:0W÷9Î 6™{rΩØÇà5°pfžéBЦÙ JÚx[ä8ãðôÁ¦á«:×p‰æŠóZŽ…n¦q²&ø¨ùSë²Ù s¸Å6åV2¬±äÅÏÓà=°`3åŽçS‘€Š=$U ™!¹5¢Q “ì¼Ö)ÆVZï…Ž'•ÀcÊ›ÚX#v®o‹ª*å¦Á0Z5¬ó–¿B÷KX:£x²IcªÙÏnd•§‚‘Ti`…ƒÛw<ÖÝ_‚¬PÏG+4½$€…†Æ¹«Ät‡C²/=c’—f€í9æg2ßU0Ÿ¡§ï¡d€BwÜàVãG d—Ë$TË&ðQ¬ó%òÚÜrèØ7¬*aôWs×S %,ÞèÊÕ x-ÇòŸ|•Š(P–ˆ2•øT¨+ù*™E¨KD™J|* T€•Š|•Ì" T€%¢L%>*ÀJE¾Jf*ÀQ¦ŸŠÿ –ì9lïóÏIEND®B`‚fabric-1.14.0/sites/docs/000077500000000000000000000000001315011462000150735ustar00rootroot00000000000000fabric-1.14.0/sites/docs/api/000077500000000000000000000000001315011462000156445ustar00rootroot00000000000000fabric-1.14.0/sites/docs/api/contrib/000077500000000000000000000000001315011462000173045ustar00rootroot00000000000000fabric-1.14.0/sites/docs/api/contrib/console.rst000066400000000000000000000001501315011462000214740ustar00rootroot00000000000000Console Output Utilities ======================== .. automodule:: fabric.contrib.console :members: fabric-1.14.0/sites/docs/api/contrib/django.rst000066400000000000000000000001561315011462000213020ustar00rootroot00000000000000================== Django Integration ================== .. automodule:: fabric.contrib.django :members: fabric-1.14.0/sites/docs/api/contrib/files.rst000066400000000000000000000002161315011462000211370ustar00rootroot00000000000000============================= File and Directory Management ============================= .. automodule:: fabric.contrib.files :members: fabric-1.14.0/sites/docs/api/contrib/project.rst000066400000000000000000000001401315011462000214770ustar00rootroot00000000000000============= Project Tools ============= .. automodule:: fabric.contrib.project :members: fabric-1.14.0/sites/docs/api/core/000077500000000000000000000000001315011462000165745ustar00rootroot00000000000000fabric-1.14.0/sites/docs/api/core/colors.rst000066400000000000000000000002061315011462000206250ustar00rootroot00000000000000====================== Color output functions ====================== .. automodule:: fabric.colors :members: :undoc-members: fabric-1.14.0/sites/docs/api/core/context_managers.rst000066400000000000000000000001521315011462000226650ustar00rootroot00000000000000================ Context Managers ================ .. automodule:: fabric.context_managers :members: fabric-1.14.0/sites/docs/api/core/decorators.rst000066400000000000000000000002211315011462000214660ustar00rootroot00000000000000========== Decorators ========== .. automodule:: fabric.decorators :members: hosts, roles, runs_once, serial, parallel, task, with_settings fabric-1.14.0/sites/docs/api/core/docs.rst000066400000000000000000000001551315011462000202570ustar00rootroot00000000000000===================== Documentation helpers ===================== .. automodule:: fabric.docs :members: fabric-1.14.0/sites/docs/api/core/network.rst000066400000000000000000000001361315011462000210170ustar00rootroot00000000000000======= Network ======= .. automodule:: fabric.network .. autofunction:: disconnect_all fabric-1.14.0/sites/docs/api/core/operations.rst000066400000000000000000000001221315011462000215040ustar00rootroot00000000000000========== Operations ========== .. 
automodule:: fabric.operations :members: fabric-1.14.0/sites/docs/api/core/tasks.rst000066400000000000000000000001411315011462000204470ustar00rootroot00000000000000===== Tasks ===== .. automodule:: fabric.tasks :members: Task, WrappedCallableTask, execute fabric-1.14.0/sites/docs/api/core/utils.rst000066400000000000000000000000761315011462000204710ustar00rootroot00000000000000===== Utils ===== .. automodule:: fabric.utils :members: fabric-1.14.0/sites/docs/conf.py000066400000000000000000000016131315011462000163730ustar00rootroot00000000000000# Obtain shared config values import os, sys from os.path import abspath, join, dirname sys.path.append(abspath(join(dirname(__file__), '..'))) sys.path.append(abspath(join(dirname(__file__), '..', '..'))) from shared_conf import * # Enable autodoc, intersphinx extensions.extend(['sphinx.ext.autodoc', 'sphinx.ext.intersphinx']) # Autodoc settings autodoc_default_flags = ['members', 'special-members'] # Default is 'local' building, but reference the public WWW site when building # under RTD. target = join(dirname(__file__), '..', 'www', '_build') if os.environ.get('READTHEDOCS') == 'True': target = 'http://www.fabfile.org/' # Intersphinx connection to stdlib + www site intersphinx_mapping = { 'python': ('http://docs.python.org/2.6', None), 'www': (target, None), } # Sister-site links to WWW html_theme_options['extra_nav_links'] = { "Main website": 'http://www.fabfile.org', } fabric-1.14.0/sites/docs/index.rst000066400000000000000000000041001315011462000167270ustar00rootroot00000000000000================================== Welcome to Fabric's documentation! ================================== This site covers Fabric's usage & API documentation. For basic info on what Fabric is, including its public changelog & how the project is maintained, please see `the main project website `_. Tutorial -------- For new users, and/or for an overview of Fabric's basic functionality, please see the :doc:`tutorial`. The rest of the documentation will assume you're at least passingly familiar with the material contained within. .. toctree:: :hidden: tutorial .. _usage-docs: Usage documentation ------------------- The following list contains all major sections of Fabric's prose (non-API) documentation, which expands upon the concepts outlined in the :doc:`tutorial` and also covers advanced topics. .. toctree:: :maxdepth: 2 :glob: usage/* .. _api_docs: API documentation ----------------- Fabric maintains two sets of API documentation, autogenerated from the source code's docstrings (which are typically very thorough.) .. _core-api: Core API ~~~~~~~~ The **core** API is loosely defined as those functions, classes and methods which form the basic building blocks of Fabric (such as `~fabric.operations.run` and `~fabric.operations.sudo`) upon which everything else (the below "contrib" section, and user fabfiles) builds. .. toctree:: :maxdepth: 1 :glob: api/core/* .. _contrib-api: Contrib API ~~~~~~~~~~~ Fabric's **contrib** package contains commonly useful tools (often merged in from user fabfiles) for tasks such as user I/O, modifying remote files, and so forth. While the core API is likely to remain small and relatively unchanged over time, this contrib section will grow and evolve (while trying to remain backwards-compatible) as more use-cases are solved and added. .. toctree:: :maxdepth: 1 :glob: api/contrib/* Contributing & Running Tests ---------------------------- For advanced users & developers looking to help fix bugs or add new features. .. 
toctree:: :hidden: running_tests fabric-1.14.0/sites/docs/running_tests.rst000066400000000000000000000024531315011462000205330ustar00rootroot00000000000000====================== Running Fabric's Tests ====================== Fabric is maintained with 100% passing tests. Where possible, patches should include tests covering the changes, making things far easier to verify & merge. When developing on Fabric, it works best to establish a `virtualenv`_ to install the dependencies in isolation for running tests. .. _`virtualenv`: https://virtualenv.pypa.io/en/latest/ .. _first-time-setup: First-time Setup ================ * Fork the `repository`_ on GitHub * Clone your new fork (e.g. ``git clone git@github.com:/fabric.git``) * ``cd fabric`` * ``virtualenv env`` * ``. env/bin/activate`` * ``pip install -r requirements.txt`` * ``python setup.py develop`` .. _`repository`: https://github.com/fabric/fabric .. _running-tests: Running Tests ============= Once your virtualenv is activated (``. env/bin/activate``) & you have the latest requirements, running tests is just:: nosetests tests/ You should **always** run tests on ``master`` (or the release branch you're working with) to ensure they're passing before working on your own changes/tests. Alternatively, if you've run ``python setup.py develop`` on your Fabric clone, you can also run:: fab test This adds additional flags which enable running doctests & adds nice coloration. fabric-1.14.0/sites/docs/tutorial.rst000066400000000000000000000370641315011462000175020ustar00rootroot00000000000000===================== Overview and Tutorial ===================== Welcome to Fabric! This document is a whirlwind tour of Fabric's features and a quick guide to its use. Additional documentation (which is linked to throughout) can be found in the :ref:`usage documentation ` -- please make sure to check it out. What is Fabric? =============== As the ``README`` says: .. include:: ../../README.rst :end-before: It provides More specifically, Fabric is: * A tool that lets you execute **arbitrary Python functions** via the **command line**; * A library of subroutines (built on top of a lower-level library) to make executing shell commands over SSH **easy** and **Pythonic**. Naturally, most users combine these two things, using Fabric to write and execute Python functions, or **tasks**, to automate interactions with remote servers. Let's take a look. Hello, ``fab`` ============== This wouldn't be a proper tutorial without "the usual":: def hello(): print("Hello world!") Placed in a Python module file named ``fabfile.py`` in your current working directory, that ``hello`` function can be executed with the ``fab`` tool (installed as part of Fabric) and does just what you'd expect:: $ fab hello Hello world! Done. That's all there is to it. This functionality allows Fabric to be used as a (very) basic build tool even without importing any of its API. .. note:: The ``fab`` tool simply imports your fabfile and executes the function or functions you instruct it to. There's nothing magic about it -- anything you can do in a normal Python script can be done in a fabfile! .. seealso:: :ref:`execution-strategy`, :doc:`/usage/tasks`, :doc:`/usage/fab` Task arguments ============== It's often useful to pass runtime parameters into your tasks, just as you might during regular Python programming. Fabric has basic support for this using a shell-compatible notation: ``:,=,...``. 
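In full, that notation reads ``<task name>:<arg>,<kwarg>=<value>,...`` -- the argument string is attached to the task name with a colon, and individual arguments are separated by commas. As a hypothetical sketch (``deploy`` is not a task defined in this tutorial), a task taking one positional and one keyword argument might be invoked as::

    $ fab deploy:master,verbose=yes
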
It's contrived, but let's extend the above example to say hello to you personally:: def hello(name="world"): print("Hello %s!" % name) By default, calling ``fab hello`` will still behave as it did before; but now we can personalize it:: $ fab hello:name=Jeff Hello Jeff! Done. Those already used to programming in Python might have guessed that this invocation behaves exactly the same way:: $ fab hello:Jeff Hello Jeff! Done. For the time being, your argument values will always show up in Python as strings and may require a bit of string manipulation for complex types such as lists. Future versions may add a typecasting system to make this easier. .. seealso:: :ref:`task-arguments` Local commands ============== As used above, ``fab`` only really saves a couple lines of ``if __name__ == "__main__"`` boilerplate. It's mostly designed for use with Fabric's API, which contains functions (or **operations**) for executing shell commands, transferring files, and so forth. Let's build a hypothetical Web application fabfile. This example scenario is as follows: The Web application is managed via Git on a remote host ``vcshost``. On ``localhost``, we have a local clone of said Web application. When we push changes back to ``vcshost``, we want to be able to immediately install these changes on a remote host ``my_server`` in an automated fashion. We will do this by automating the local and remote Git commands. Fabfiles usually work best at the root of a project:: . |-- __init__.py |-- app.wsgi |-- fabfile.py <-- our fabfile! |-- manage.py `-- my_app |-- __init__.py |-- models.py |-- templates | `-- index.html |-- tests.py |-- urls.py `-- views.py .. note:: We're using a Django application here, but only as an example -- Fabric is not tied to any external codebase, save for its SSH library. For starters, perhaps we want to run our tests and commit to our VCS so we're ready for a deploy:: from fabric.api import local def prepare_deploy(): local("./manage.py test my_app") local("git add -p && git commit") local("git push") The output of which might look a bit like this:: $ fab prepare_deploy [localhost] run: ./manage.py test my_app Creating test database... Creating tables Creating indexes .......................................... ---------------------------------------------------------------------- Ran 42 tests in 9.138s OK Destroying test database... [localhost] run: git add -p && git commit [localhost] run: git push Done. The code itself is straightforward: import a Fabric API function, `~fabric.operations.local`, and use it to run and interact with local shell commands. The rest of Fabric's API is similar -- it's all just Python. .. seealso:: :doc:`api/core/operations`, :ref:`fabfile-discovery` Organize it your way ==================== Because Fabric is "just Python" you're free to organize your fabfile any way you want. For example, it's often useful to start splitting things up into subtasks:: from fabric.api import local def test(): local("./manage.py test my_app") def commit(): local("git add -p && git commit") def push(): local("git push") def prepare_deploy(): test() commit() push() The ``prepare_deploy`` task can be called just as before, but now you can make a more granular call to one of the sub-tasks, if desired. Failure ======= Our base case works fine now, but what happens if our tests fail? Chances are we want to put on the brakes and fix them before deploying. Fabric checks the return value of programs called via operations and will abort if they didn't exit cleanly. 
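Put another way: by default, any `~fabric.operations.local` (or `~fabric.operations.run`/`~fabric.operations.sudo`) call whose command exits nonzero halts the entire run. A minimal standalone sketch (not part of our deploy fabfile)::

    from fabric.api import local

    def boom():
        local("false")               # exits 1; Fabric aborts here...
        local("echo never reached")  # ...so this call never happens
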
Let's see what happens if one of our tests encounters an error:: $ fab prepare_deploy [localhost] run: ./manage.py test my_app Creating test database... Creating tables Creating indexes .............E............................ ====================================================================== ERROR: testSomething (my_project.my_app.tests.MainTests) ---------------------------------------------------------------------- Traceback (most recent call last): [...] ---------------------------------------------------------------------- Ran 42 tests in 9.138s FAILED (errors=1) Destroying test database... Fatal error: local() encountered an error (return code 2) while executing './manage.py test my_app' Aborting. Great! We didn't have to do anything ourselves: Fabric detected the failure and aborted, never running the ``commit`` task. .. seealso:: :ref:`Failure handling (usage documentation) ` Failure handling ---------------- But what if we wanted to be flexible and give the user a choice? A setting (or **environment variable**, usually shortened to **env var**) called :ref:`warn_only` lets you turn aborts into warnings, allowing flexible error handling to occur. Let's flip this setting on for our ``test`` function, and then inspect the result of the `~fabric.operations.local` call ourselves:: from __future__ import with_statement from fabric.api import local, settings, abort from fabric.contrib.console import confirm def test(): with settings(warn_only=True): result = local('./manage.py test my_app', capture=True) if result.failed and not confirm("Tests failed. Continue anyway?"): abort("Aborting at user request.") [...] In adding this new feature we've introduced a number of new things: * The ``__future__`` import required to use ``with:`` in Python 2.5; * Fabric's `contrib.console ` submodule, containing the `~fabric.contrib.console.confirm` function, used for simple yes/no prompts; * The `~fabric.context_managers.settings` context manager, used to apply settings to a specific block of code; * Command-running operations like `~fabric.operations.local` can return objects containing info about their result (such as ``.failed``, or ``.return_code``); * And the `~fabric.utils.abort` function, used to manually abort execution. However, despite the additional complexity, it's still pretty easy to follow, and is now much more flexible. .. seealso:: :doc:`api/core/context_managers`, :ref:`env-vars` Making connections ================== Let's start wrapping up our fabfile by putting in the keystone: a ``deploy`` task that is destined to run on one or more remote server(s), and ensures the code is up to date:: def deploy(): code_dir = '/srv/django/myproject' with cd(code_dir): run("git pull") run("touch app.wsgi") Here again, we introduce a handful of new concepts: * Fabric is just Python -- so we can make liberal use of regular Python code constructs such as variables and string interpolation; * `~fabric.context_managers.cd`, an easy way of prefixing commands with a ``cd /to/some/directory`` call. This is similar to `~fabric.context_managers.lcd` which does the same locally. * `~fabric.operations.run`, which is similar to `~fabric.operations.local` but runs **remotely** instead of locally. We also need to make sure we import the new functions at the top of our file:: from __future__ import with_statement from fabric.api import local, settings, abort, run, cd from fabric.contrib.console import confirm With these changes in place, let's deploy:: $ fab deploy No hosts found. 
Please specify (single) host string for connection: my_server [my_server] run: git pull [my_server] out: Already up-to-date. [my_server] out: [my_server] run: touch app.wsgi Done. We never specified any connection info in our fabfile, so Fabric doesn't know on which host(s) the remote command should be executed. When this happens, Fabric prompts us at runtime. Connection definitions use SSH-like "host strings" (e.g. ``user@host:port``) and will use your local username as a default -- so in this example, we just had to specify the hostname, ``my_server``. Remote interactivity -------------------- ``git pull`` works fine if you've already got a checkout of your source code -- but what if this is the first deploy? It'd be nice to handle that case too and do the initial ``git clone``:: def deploy(): code_dir = '/srv/django/myproject' with settings(warn_only=True): if run("test -d %s" % code_dir).failed: run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir) with cd(code_dir): run("git pull") run("touch app.wsgi") As with our calls to `~fabric.operations.local` above, `~fabric.operations.run` also lets us construct clean Python-level logic based on executed shell commands. However, the interesting part here is the ``git clone`` call: since we're using Git's SSH method of accessing the repository on our Git server, this means our remote `~fabric.operations.run` call will need to authenticate itself. Older versions of Fabric (and similar high level SSH libraries) run remote programs in limbo, unable to be touched from the local end. This is problematic when you have a serious need to enter passwords or otherwise interact with the remote program. Fabric 1.0 and later breaks down this wall and ensures you can always talk to the other side. Let's see what happens when we run our updated ``deploy`` task on a new server with no Git checkout:: $ fab deploy No hosts found. Please specify (single) host string for connection: my_server [my_server] run: test -d /srv/django/myproject Warning: run() encountered an error (return code 1) while executing 'test -d /srv/django/myproject' [my_server] run: git clone user@vcshost:/path/to/repo/.git /srv/django/myproject [my_server] out: Cloning into /srv/django/myproject... [my_server] out: Password: [my_server] out: remote: Counting objects: 6698, done. [my_server] out: remote: Compressing objects: 100% (2237/2237), done. [my_server] out: remote: Total 6698 (delta 4633), reused 6414 (delta 4412) [my_server] out: Receiving objects: 100% (6698/6698), 1.28 MiB, done. [my_server] out: Resolving deltas: 100% (4633/4633), done. [my_server] out: [my_server] run: git pull [my_server] out: Already up-to-date. [my_server] out: [my_server] run: touch app.wsgi Done. Notice the ``Password:`` prompt -- that was our remote ``git`` call on our Web server, asking for the password to the Git server. We were able to type it in and the clone continued normally. .. seealso:: :doc:`/usage/interactivity` .. _defining-connections: Defining connections beforehand ------------------------------- Specifying connection info at runtime gets old real fast, so Fabric provides a handful of ways to do it in your fabfile or on the command line. We won't cover all of them here, but we will show you the most common one: setting the global host list, :ref:`env.hosts `. :doc:`env ` is a global dictionary-like object driving many of Fabric's settings, and can be written to with attributes as well (in fact, `~fabric.context_managers.settings`, seen above, is simply a wrapper for this.) 
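For example, attribute and dictionary access are interchangeable -- a standalone sketch, separate from the deploy fabfile::

    from fabric.api import env

    env.user = 'deploy'             # attribute style...
    assert env['user'] == 'deploy'  # ...and key style see the same value
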
Thus, we can modify it at module level near the top of our fabfile like so:: from __future__ import with_statement from fabric.api import * from fabric.contrib.console import confirm env.hosts = ['my_server'] def test(): do_test_stuff() When ``fab`` loads up our fabfile, our modification of ``env`` will execute, storing our settings change. The end result is exactly as above: our ``deploy`` task will run against the ``my_server`` server. This is also how you can tell Fabric to run on multiple remote systems at once: because ``env.hosts`` is a list, ``fab`` iterates over it, calling the given task once for each connection. .. seealso:: :doc:`usage/env`, :ref:`host-lists` Conclusion ========== Our completed fabfile is still pretty short, as such things go. Here it is in its entirety:: from __future__ import with_statement from fabric.api import * from fabric.contrib.console import confirm env.hosts = ['my_server'] def test(): with settings(warn_only=True): result = local('./manage.py test my_app', capture=True) if result.failed and not confirm("Tests failed. Continue anyway?"): abort("Aborting at user request.") def commit(): local("git add -p && git commit") def push(): local("git push") def prepare_deploy(): test() commit() push() def deploy(): code_dir = '/srv/django/myproject' with settings(warn_only=True): if run("test -d %s" % code_dir).failed: run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir) with cd(code_dir): run("git pull") run("touch app.wsgi") This fabfile makes use of a large portion of Fabric's feature set: * defining fabfile tasks and running them with :doc:`fab `; * calling local shell commands with `~fabric.operations.local`; * modifying env vars with `~fabric.context_managers.settings`; * handling command failures, prompting the user, and manually aborting; * and defining host lists and `~fabric.operations.run`-ning remote commands. However, there's still a lot more we haven't covered here! Please make sure you follow the various "see also" links, and check out the documentation table of contents on :doc:`the main index page `. Thanks for reading! fabric-1.14.0/sites/docs/usage/000077500000000000000000000000001315011462000161775ustar00rootroot00000000000000fabric-1.14.0/sites/docs/usage/env.rst000066400000000000000000000610361315011462000175270ustar00rootroot00000000000000=================================== The environment dictionary, ``env`` =================================== A simple but integral aspect of Fabric is what is known as the "environment": a Python dictionary subclass, which is used as a combination settings registry and shared inter-task data namespace. The environment dict is currently implemented as a global singleton, ``fabric.state.env``, and is included in ``fabric.api`` for convenience. Keys in ``env`` are sometimes referred to as "env variables". Environment as configuration ============================ Most of Fabric's behavior is controllable by modifying ``env`` variables, such as ``env.hosts`` (as seen in :ref:`the tutorial `). Other commonly-modified env vars include: * ``user``: Fabric defaults to your local username when making SSH connections, but you can use ``env.user`` to override this if necessary. The :doc:`execution` documentation also has info on how to specify usernames on a per-host basis. * ``password``: Used to explicitly set your default connection or sudo password if desired. Fabric will prompt you when necessary if this isn't set or doesn't appear to be valid. 
* ``warn_only``: a Boolean setting determining whether Fabric exits when detecting errors on the remote end. See :doc:`execution` for more on this behavior. There are a number of other env variables; for the full list, see :ref:`env-vars` at the bottom of this document. The `~fabric.context_managers.settings` context manager ------------------------------------------------------- In many situations, it's useful to only temporarily modify ``env`` vars so that a given settings change only applies to a block of code. Fabric provides a `~fabric.context_managers.settings` context manager, which takes any number of key/value pairs and will use them to modify ``env`` within its wrapped block. For example, there are many situations where setting ``warn_only`` (see below) is useful. To apply it to a few lines of code, use ``settings(warn_only=True)``, as seen in this simplified version of the ``contrib`` `~fabric.contrib.files.exists` function:: from fabric.api import settings, run def exists(path): with settings(warn_only=True): return run('test -e %s' % path) See the :doc:`../api/core/context_managers` API documentation for details on `~fabric.context_managers.settings` and other, similar tools. Environment as shared state =========================== As mentioned, the ``env`` object is simply a dictionary subclass, so your own fabfile code may store information in it as well. This is sometimes useful for keeping state between multiple tasks within a single execution run. .. note:: This aspect of ``env`` is largely historical: in the past, fabfiles were not pure Python and thus the environment was the only way to communicate between tasks. Nowadays, you may call other tasks or subroutines directly, and even keep module-level shared state if you wish. In future versions, Fabric will become threadsafe, at which point ``env`` may be the only easy/safe way to keep global state. Other considerations ==================== While it subclasses ``dict``, Fabric's ``env`` has been modified so that its values may be read/written by way of attribute access, as seen in some of the above material. In other words, ``env.host_string`` and ``env['host_string']`` are functionally identical. We feel that attribute access can often save a bit of typing and makes the code more readable, so it's the recommended way to interact with ``env``. The fact that it's a dictionary can be useful in other ways, such as with Python's ``dict``-based string interpolation, which is especially handy if you need to insert multiple env vars into a single string. Using "normal" string interpolation might look like this:: print("Executing on %s as %s" % (env.host, env.user)) Using dict-style interpolation is more readable and slightly shorter:: print("Executing on %(host)s as %(user)s" % env) .. _env-vars: Full list of env vars ===================== Below is a list of all predefined (or defined by Fabric itself during execution) environment variables. While many of them may be manipulated directly, it's often best to use `~fabric.context_managers`, either generally via `~fabric.context_managers.settings` or via specific context managers such as `~fabric.context_managers.cd`. Note that many of these may be set via ``fab``'s command-line switches -- see :doc:`fab` for details. Cross-references are provided where appropriate. .. seealso:: :option:`--set` .. _abort-exception: ``abort_exception`` ------------------- **Default:** ``None`` Fabric normally handles aborting by printing an error message to stderr and calling ``sys.exit(1)``. 
This setting allows you to override that behavior (which is what happens when ``env.abort_exception`` is ``None``.) Give it a callable which takes a string (the error message that would have been printed) and returns an exception instance. That exception object is then raised instead of ``SystemExit`` (which is what ``sys.exit`` does.) Much of the time you'll want to simply set this to an exception class, as those fit the above description perfectly (callable, take a string, return an exception instance.) E.g. ``env.abort_exception = MyExceptionClass``. .. _abort-on-prompts: ``abort_on_prompts`` -------------------- **Default:** ``False`` When ``True``, Fabric will run in a non-interactive mode, calling `~fabric.utils.abort` anytime it would normally prompt the user for input (such as password prompts, "What host to connect to?" prompts, fabfile invocation of `~fabric.operations.prompt`, and so forth.) This allows users to ensure a Fabric session will always terminate cleanly instead of blocking on user input forever when unforeseen circumstances arise. .. versionadded:: 1.1 .. seealso:: :option:`--abort-on-prompts` ``all_hosts`` ------------- **Default:** ``[]`` Set by ``fab`` to the full host list for the currently executing command. For informational purposes only. .. seealso:: :doc:`execution` .. _always-use-pty: ``always_use_pty`` ------------------ **Default:** ``True`` When set to ``False``, causes `~fabric.operations.run`/`~fabric.operations.sudo` to act as if they have been called with ``pty=False``. .. seealso:: :option:`--no-pty` .. versionadded:: 1.0 .. _colorize-errors: ``colorize_errors`` ------------------- **Default** ``False`` When set to ``True``, error output to the terminal is colored red and warnings are colored magenta to make them easier to see. .. versionadded:: 1.7 .. _combine-stderr: ``combine_stderr`` ------------------ **Default**: ``True`` Causes the SSH layer to merge a remote program's stdout and stderr streams to avoid becoming meshed together when printed. See :ref:`combine_streams` for details on why this is needed and what its effects are. .. versionadded:: 1.0 ``command`` ----------- **Default:** ``None`` Set by ``fab`` to the currently executing command name (e.g., when executed as ``$ fab task1 task2``, ``env.command`` will be set to ``"task1"`` while ``task1`` is executing, and then to ``"task2"``.) For informational purposes only. .. seealso:: :doc:`execution` ``command_prefixes`` -------------------- **Default:** ``[]`` Modified by `~fabric.context_managers.prefix`, and prepended to commands executed by `~fabric.operations.run`/`~fabric.operations.sudo`. .. versionadded:: 1.0 .. _command-timeout: ``command_timeout`` ------------------- **Default:** ``None`` Remote command timeout, in seconds. .. versionadded:: 1.6 .. seealso:: :option:`--command-timeout` .. _connection-attempts: ``connection_attempts`` ----------------------- **Default:** ``1`` Number of times Fabric will attempt to connect when connecting to a new server. For backwards compatibility reasons, it defaults to only one connection attempt. .. versionadded:: 1.4 .. seealso:: :option:`--connection-attempts`, :ref:`timeout` ``cwd`` ------- **Default:** ``''`` Current working directory. Used to keep state for the `~fabric.context_managers.cd` context manager. .. _dedupe_hosts: ``dedupe_hosts`` ---------------- **Default:** ``True`` Deduplicate merged host lists so any given host string is only represented once (e.g. when using combinations of ``@hosts`` + ``@roles``, or ``-H`` and ``-R``.) 
When set to ``False``, this option relaxes the deduplication, allowing users who explicitly want to run a task multiple times on the same host (say, in parallel, though it works fine serially too) to do so. .. versionadded:: 1.5 .. _disable-known-hosts: ``disable_known_hosts`` ----------------------- **Default:** ``False`` If ``True``, the SSH layer will skip loading the user's known-hosts file. Useful for avoiding exceptions in situations where a "known host" changing its host key is actually valid (e.g. cloud servers such as EC2.) .. seealso:: :option:`--disable-known-hosts <-D>`, :doc:`ssh` .. _eagerly-disconnect: ``eagerly_disconnect`` ---------------------- **Default:** ``False`` If ``True``, causes ``fab`` to close connections after each individual task execution, instead of at the end of the run. This helps prevent a lot of typically-unused network sessions from piling up and causing problems with limits on per-process open files, or network hardware. .. note:: When active, this setting will result in the disconnect messages appearing throughout your output, instead of at the end. This may be improved in future releases. .. _effective_roles: ``effective_roles`` ------------------- **Default:** ``[]`` Set by ``fab`` to the roles list of the currently executing command. For informational purposes only. .. versionadded:: 1.9 .. seealso:: :doc:`execution` .. _exclude-hosts: ``exclude_hosts`` ----------------- **Default:** ``[]`` Specifies a list of host strings to be :ref:`skipped over ` during ``fab`` execution. Typically set via :option:`--exclude-hosts/-x <-x>`. .. versionadded:: 1.1 ``fabfile`` ----------- **Default:** ``fabfile.py`` Filename pattern which ``fab`` searches for when loading fabfiles. To indicate a specific file, use the full path to the file. Obviously, it doesn't make sense to set this in a fabfile, but it may be specified in a ``.fabricrc`` file or on the command line. .. seealso:: :option:`--fabfile <-f>`, :doc:`fab` .. _gateway: ``gateway`` ----------- **Default:** ``None`` Enables SSH-driven gatewaying through the indicated host. The value should be a normal Fabric host string as used in e.g. :ref:`env.host_string `. When this is set, newly created connections will be set to route their SSH traffic through the remote SSH daemon to the final destination. .. versionadded:: 1.5 .. seealso:: :option:`--gateway <-g>` .. _kerberos: ``gss_(auth|deleg|kex)`` ------------------------ **Default:** ``False`` for all. These three options (``gss_auth``, ``gss_deleg``, and ``gss_kex``) are passed verbatim into Paramiko's ``Client.connect`` method, and control Kerberos/GSS-API behavior. For details, see Paramiko's docs: `GSS-API authentication `_, `GSS-API key exchange `_. .. note:: This functionality requires Paramiko ``1.15`` or above! You will get ``TypeError`` about unexpected keyword arguments with Paramiko ``1.14`` or earlier, as it lacks Kerberos support. .. versionadded:: 1.11 .. seealso:: :option:`--gss-auth`, :option:`--gss-deleg`, :option:`--gss-kex` .. _host_string: ``host_string`` --------------- **Default:** ``None`` Defines the current user/host/port which Fabric will connect to when executing `~fabric.operations.run`, `~fabric.operations.put` and so forth. This is set by ``fab`` when iterating over a previously set host list, and may also be manually set when using Fabric as a library. .. seealso:: :doc:`execution` .. 
_forward-agent: ``forward_agent`` -------------------- **Default:** ``False`` If ``True``, enables forwarding of your local SSH agent to the remote end. .. versionadded:: 1.4 .. seealso:: :option:`--forward-agent <-A>` .. _host: ``host`` -------- **Default:** ``None`` Set to the hostname part of ``env.host_string`` by ``fab``. For informational purposes only. .. _hosts: ``hosts`` --------- **Default:** ``[]`` The global host list used when composing per-task host lists. .. seealso:: :option:`--hosts <-H>`, :doc:`execution` .. _keepalive: ``keepalive`` ------------- **Default:** ``0`` (i.e. no keepalive) An integer specifying an SSH keepalive interval to use; basically maps to the SSH config option ``ServerAliveInterval``. Useful if you find connections are timing out due to meddlesome network hardware or what have you. .. seealso:: :option:`--keepalive` .. versionadded:: 1.1 .. _key: ``key`` ---------------- **Default:** ``None`` A string, or file-like object, containing an SSH key; used during connection authentication. .. note:: The most common method for using SSH keys is to set :ref:`key-filename`. .. versionadded:: 1.7 .. _key-filename: ``key_filename`` ---------------- **Default:** ``None`` May be a string or list of strings, referencing file paths to SSH key files to try when connecting. Passed through directly to the SSH layer. May be set/appended to with :option:`-i`. .. seealso:: `Paramiko's documentation for SSHClient.connect() `_ .. _env-linewise: ``linewise`` ------------ **Default:** ``False`` Forces buffering by line instead of by character/byte, typically when running in parallel mode. May be activated via :option:`--linewise`. This option is implied by :ref:`env.parallel ` -- even if ``linewise`` is False, if ``parallel`` is True then linewise behavior will occur. .. seealso:: :ref:`linewise-output` .. versionadded:: 1.3 .. _local-user: ``local_user`` -------------- A read-only value containing the local system username. This is the same value as :ref:`user`'s initial value, but whereas :ref:`user` may be altered by CLI arguments, Python code or specific host strings, :ref:`local-user` will always contain the same value. .. _no_agent: ``no_agent`` ------------ **Default:** ``False`` If ``True``, will tell the SSH layer not to seek out running SSH agents when using key-based authentication. .. versionadded:: 0.9.1 .. seealso:: :option:`--no_agent <-a>` .. _no_keys: ``no_keys`` ------------------ **Default:** ``False`` If ``True``, will tell the SSH layer not to load any private key files from one's ``$HOME/.ssh/`` folder. (Key files explicitly loaded via ``fab -i`` will still be used, of course.) .. versionadded:: 0.9.1 .. seealso:: :option:`-k` .. _output_prefix: ``output_prefix`` ----------------- **Default:** ``True`` By default Fabric prefixes every line of output with either ``[hostname] out:`` or ``[hostname] err:``. Those prefixes may be hidden by setting ``env.output_prefix`` to ``False``. .. versionadded:: 1.0.0 .. _env-parallel: ``parallel`` ------------------- **Default:** ``False`` When ``True``, forces all tasks to run in parallel. Implies :ref:`env.linewise `. .. versionadded:: 1.3 .. seealso:: :option:`--parallel <-P>`, :doc:`parallel` .. _password: ``password`` ------------ **Default:** ``None`` The default password used by the SSH layer when connecting to remote hosts, **and/or** when answering `~fabric.operations.sudo` prompts. .. seealso:: :option:`--initial-password-prompt <-I>`, :ref:`env.passwords `, :ref:`password-management` .. 
_passwords: ``passwords`` ------------- **Default:** ``{}`` This dictionary is largely for internal use, and is filled automatically as a per-host-string password cache. Keys are full :ref:`host strings ` and values are passwords (strings). .. warning:: If you modify or generate this dict manually, **you must use fully qualified host strings** with user and port values. See the link above for details on the host string API. .. seealso:: :ref:`password-management` .. versionadded:: 1.0 .. _env-path: ``path`` -------- **Default:** ``''`` Used to set the ``$PATH`` shell environment variable when executing commands in `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local`. It is recommended to use the `~fabric.context_managers.path` context manager for managing this value instead of setting it directly. .. versionadded:: 1.0 .. _pool-size: ``pool_size`` ------------- **Default:** ``0`` Sets the number of concurrent processes to use when executing tasks in parallel. .. versionadded:: 1.3 .. seealso:: :option:`--pool-size <-z>`, :doc:`parallel` .. _prompts: ``prompts`` ------------- **Default:** ``{}`` The ``prompts`` dictionary allows users to control interactive prompts. If a key in the dictionary is found in a command's standard output stream, Fabric will automatically answer with the corresponding dictionary value. .. versionadded:: 1.9 .. _port: ``port`` -------- **Default:** ``None`` Set to the port part of ``env.host_string`` by ``fab`` when iterating over a host list. May also be used to specify a default port. .. _real-fabfile: ``real_fabfile`` ---------------- **Default:** ``None`` Set by ``fab`` with the path to the fabfile it has loaded up, if it got that far. For informational purposes only. .. seealso:: :doc:`fab` .. _remote-interrupt: ``remote_interrupt`` -------------------- **Default:** ``None`` Controls whether Ctrl-C triggers an interrupt remotely or is captured locally, as follows: * ``None`` (the default): only `~fabric.operations.open_shell` will exhibit remote interrupt behavior, and `~fabric.operations.run`/`~fabric.operations.sudo` will capture interrupts locally. * ``False``: even `~fabric.operations.open_shell` captures locally. * ``True``: all functions will send the interrupt to the remote end. .. versionadded:: 1.6 .. _rcfile: ``rcfile`` ---------- **Default:** ``$HOME/.fabricrc`` Path used when loading Fabric's local settings file. .. seealso:: :option:`--config <-c>`, :doc:`fab` .. _reject-unknown-hosts: ``reject_unknown_hosts`` ------------------------ **Default:** ``False`` If ``True``, the SSH layer will raise an exception when connecting to hosts not listed in the user's known-hosts file. .. seealso:: :option:`--reject-unknown-hosts <-r>`, :doc:`ssh` .. _system-known-hosts: ``system_known_hosts`` ------------------------ **Default:** ``None`` If set, should be the path to a :file:`known_hosts` file. The SSH layer will read this file before reading the user's known-hosts file. .. seealso:: :doc:`ssh` .. _roledefs: ``roledefs`` ------------ **Default:** ``{}`` Dictionary defining role name to host list mappings. .. seealso:: :doc:`execution` .. _roles: ``roles`` --------- **Default:** ``[]`` The global role list used when composing per-task host lists. .. seealso:: :option:`--roles <-R>`, :doc:`execution` .. _shell: ``shell`` --------- **Default:** ``/bin/bash -l -c`` Value used as shell wrapper when executing commands with e.g. `~fabric.operations.run`. Must be able to exist in the form `` ""`` -- e.g. 
the default uses Bash's ``-c`` option which takes a command string as its value. .. seealso:: :option:`--shell <-s>`, :ref:`FAQ on bash as default shell `, :doc:`execution` .. _skip-bad-hosts: ``skip_bad_hosts`` ------------------ **Default:** ``False`` If ``True``, causes ``fab`` (or non-``fab`` use of `~fabric.tasks.execute`) to skip over hosts it can't connect to. .. versionadded:: 1.4 .. seealso:: :option:`--skip-bad-hosts`, :ref:`excluding-hosts`, :doc:`execution` .. _skip-unknown-tasks: ``skip_unknown_tasks`` ---------------------- **Default:** ``False`` If ``True``, causes ``fab`` (or non-``fab`` use of `~fabric.tasks.execute`) to skip over tasks not found, without aborting. .. seealso:: :option:`--skip-unknown-tasks` .. _ssh-config-path: ``ssh_config_path`` ------------------- **Default:** ``$HOME/.ssh/config`` Allows specification of an alternate SSH configuration file path. .. versionadded:: 1.4 .. seealso:: :option:`--ssh-config-path`, :ref:`ssh-config` ``ok_ret_codes`` ------------------------ **Default:** ``[0]`` Return codes in this list are used to determine whether calls to `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.sudo` are considered successful. .. versionadded:: 1.6 .. _sudo_password: ``sudo_password`` ----------------- **Default:** ``None`` The default password to submit to ``sudo`` password prompts. If empty or ``None``, :ref:`env.password ` and/or :ref:`env.passwords ` is used as a fallback. .. seealso:: :ref:`password-management`, :option:`--sudo-password`, :option:`--initial-sudo-password-prompt` .. versionadded:: 1.12 .. _sudo_passwords: ``sudo_passwords`` ------------------ **Default:** ``{}`` Identical to :ref:`passwords`, but used for sudo-only passwords. .. seealso:: :ref:`password-management` .. versionadded:: 1.12 .. _sudo_prefix: ``sudo_prefix`` --------------- **Default:** ``"sudo -S -p '%(sudo_prompt)s' " % env`` The actual ``sudo`` command prefixed onto `~fabric.operations.sudo` calls' command strings. Users who do not have ``sudo`` on their default remote ``$PATH``, or who need to make other changes (such as removing the ``-p`` when passwordless sudo is in effect) may find changing this useful. .. seealso:: The `~fabric.operations.sudo` operation; :ref:`env.sudo_prompt ` .. _sudo_prompt: ``sudo_prompt`` --------------- **Default:** ``"sudo password:"`` Passed to the ``sudo`` program on remote systems so that Fabric may correctly identify its password prompt. .. seealso:: The `~fabric.operations.sudo` operation; :ref:`env.sudo_prefix ` .. _sudo_user: ``sudo_user`` ------------- **Default:** ``None`` Used as a fallback value for `~fabric.operations.sudo`'s ``user`` argument if none is given. Useful in combination with `~fabric.context_managers.settings`. .. seealso:: `~fabric.operations.sudo` .. _env-tasks: ``tasks`` ------------- **Default:** ``[]`` Set by ``fab`` to the full tasks list to be executed for the currently executing command. For informational purposes only. .. seealso:: :doc:`execution` .. _timeout: ``timeout`` ----------- **Default:** ``10`` Network connection timeout, in seconds. .. versionadded:: 1.4 .. seealso:: :option:`--timeout`, :ref:`connection-attempts` ``use_shell`` ------------- **Default:** ``True`` Global setting which acts like the ``shell`` argument to `~fabric.operations.run`/`~fabric.operations.sudo`: if it is set to ``False``, operations will not wrap executed commands in ``env.shell``. .. 
_use-ssh-config: ``use_ssh_config`` ------------------ **Default:** ``False`` Set to ``True`` to cause Fabric to load your local SSH config file. .. versionadded:: 1.4 .. seealso:: :ref:`ssh-config` .. _user: ``user`` -------- **Default:** User's local username The username used by the SSH layer when connecting to remote hosts. May be set globally, and will be used when not otherwise explicitly set in host strings. However, when explicitly given in such a manner, this variable will be temporarily overwritten with the current value -- i.e. it will always display the user currently being connected as. To illustrate this, a fabfile:: from fabric.api import env, run env.user = 'implicit_user' env.hosts = ['host1', 'explicit_user@host2', 'host3'] def print_user(): with hide('running'): run('echo "%(user)s"' % env) and its use:: $ fab print_user [host1] out: implicit_user [explicit_user@host2] out: explicit_user [host3] out: implicit_user Done. Disconnecting from host1... done. Disconnecting from host2... done. Disconnecting from host3... done. As you can see, during execution on ``host2``, ``env.user`` was set to ``"explicit_user"``, but was restored to its previous value (``"implicit_user"``) afterwards. .. note:: ``env.user`` is currently somewhat confusing (it's used for configuration **and** informational purposes) so expect this to change in the future -- the informational aspect will likely be broken out into a separate env variable. .. seealso:: :doc:`execution`, :option:`--user <-u>` ``version`` ----------- **Default:** current Fabric version string Mostly for informational purposes. Modification is not recommended, but probably won't break anything either. .. seealso:: :option:`--version <-V>` .. _warn_only: ``warn_only`` ------------- **Default:** ``False`` Specifies whether or not to warn, instead of abort, when `~fabric.operations` encounter error conditions. .. seealso:: :option:`--warn-only <-w>`, :doc:`execution` fabric-1.14.0/sites/docs/usage/execution.rst000066400000000000000000001047121315011462000207410ustar00rootroot00000000000000=============== Execution model =============== If you've read the :doc:`../tutorial`, you should already be familiar with how Fabric operates in the base case (a single task on a single host.) However, in many situations you'll find yourself wanting to execute multiple tasks and/or on multiple hosts. Perhaps you want to split a big task into smaller reusable parts, or crawl a collection of servers looking for an old user to remove. Such a scenario requires specific rules for when and how tasks are executed. This document explores Fabric's execution model, including the main execution loop, how to define host lists, how connections are made, and so forth. .. _execution-strategy: Execution strategy ================== Fabric defaults to a single, serial execution method, though there is an alternative parallel mode available as of Fabric 1.3 (see :doc:`/usage/parallel`). This default behavior is as follows: * A list of tasks is created. Currently this list is simply the arguments given to :doc:`fab `, preserving the order given. * For each task, a task-specific host list is generated from various sources (see :ref:`host-lists` below for details.) * The task list is walked through in order, and each task is run once per host in its host list. * Tasks with no hosts in their host list are considered local-only, and will always run once and only once. 
Thus, given the following fabfile::

    from fabric.api import run, env

    env.hosts = ['host1', 'host2']

    def taskA():
        run('ls')

    def taskB():
        run('whoami')

and the following invocation::

    $ fab taskA taskB

you will see that Fabric performs the following:

* ``taskA`` executed on ``host1``
* ``taskA`` executed on ``host2``
* ``taskB`` executed on ``host1``
* ``taskB`` executed on ``host2``

While this approach is simplistic, it allows for a straightforward composition of task functions, and (unlike tools which push the multi-host functionality down to the individual function calls) enables shell script-like logic where you may introspect the output or return code of a given command and decide what to do next.

Defining tasks
==============

For details on what constitutes a Fabric task and how to organize them, please see :doc:`/usage/tasks`.

Defining host lists
===================

Unless you're using Fabric as a simple build system (which is possible, but not the primary use-case) having tasks won't do you any good without the ability to specify remote hosts on which to execute them. There are a number of ways to do so, with scopes varying from global to per-task, and it's possible to mix and match as needed.

.. _host-strings:

Hosts
-----

Hosts, in this context, refer to what are also called "host strings": Python strings specifying a username, hostname and port combination, in the form of ``username@hostname:port``. User and/or port (and the associated ``@`` or ``:``) may be omitted, and will be filled by the executing user's local username, and/or port 22, respectively. Thus, ``admin@foo.com:222``, ``deploy@website`` and ``nameserver1`` could all be valid host strings.

IPv6 address notation is also supported, for example ``::1``, ``[::1]:1222``, ``user@2001:db8::1`` or ``user@[2001:db8::1]:1222``. Square brackets are necessary only to separate the address from the port number; if no port number is used, the brackets are optional. Also, if a host string is specified via a command-line argument, it may be necessary to escape the brackets in some shells.

.. note::
    The user/hostname split occurs at the last ``@`` found, so e.g. email address usernames are valid and will be parsed correctly.

During execution, Fabric normalizes the host strings given and then stores each part (username/hostname/port) in the environment dictionary, for both its use and for tasks to reference if the need arises. See :doc:`env` for details.

.. _execution-roles:

Roles
-----

Host strings map to single hosts, but sometimes it's useful to arrange hosts in groups. Perhaps you have a number of Web servers behind a load balancer and want to update all of them, or want to run a task on "all client servers". Roles provide a way of defining strings which correspond to lists of host strings, and can then be specified instead of writing out the entire list every time.

This mapping is defined as a dictionary, ``env.roledefs``, which must be modified by a fabfile in order to be used. A simple example::

    from fabric.api import env

    env.roledefs['webservers'] = ['www1', 'www2', 'www3']

Since ``env.roledefs`` is naturally empty by default, you may also opt to re-assign to it without fear of losing any information (provided you aren't loading other fabfiles which also modify it, of course)::

    from fabric.api import env

    env.roledefs = {
        'web': ['www1', 'www2', 'www3'],
        'dns': ['ns1', 'ns2']
    }

Role definitions are not limited to configuring hosts only; they can also hold additional role-specific settings of your choice.
This is achieved by defining the roles as dicts and host strings under a ``hosts`` key:: from fabric.api import env env.roledefs = { 'web': { 'hosts': ['www1', 'www2', 'www3'], 'foo': 'bar' }, 'dns': { 'hosts': ['ns1', 'ns2'], 'foo': 'baz' } } In addition to list/iterable object types, the values in ``env.roledefs`` (or value of ``hosts`` key in dict style definition) may be callables, and will thus be called when looked up when tasks are run instead of at module load time. (For example, you could connect to remote servers to obtain role definitions, and not worry about causing delays at fabfile load time when calling e.g. ``fab --list``.) Use of roles is not required in any way -- it's simply a convenience in situations where you have common groupings of servers. .. versionchanged:: 0.9.2 Added ability to use callables as ``roledefs`` values. .. _host-lists: How host lists are constructed ------------------------------ There are a number of ways to specify host lists, either globally or per-task, and generally these methods override one another instead of merging together (though this may change in future releases.) Each such method is typically split into two parts, one for hosts and one for roles. Globally, via ``env`` ~~~~~~~~~~~~~~~~~~~~~ The most common method of setting hosts or roles is by modifying two key-value pairs in the environment dictionary, :doc:`env `: ``hosts`` and ``roles``. The value of these variables is checked at runtime, while constructing each tasks's host list. Thus, they may be set at module level, which will take effect when the fabfile is imported:: from fabric.api import env, run env.hosts = ['host1', 'host2'] def mytask(): run('ls /var/www') Such a fabfile, run simply as ``fab mytask``, will run ``mytask`` on ``host1`` followed by ``host2``. Since the env vars are checked for *each* task, this means that if you have the need, you can actually modify ``env`` in one task and it will affect all following tasks:: from fabric.api import env, run def set_hosts(): env.hosts = ['host1', 'host2'] def mytask(): run('ls /var/www') When run as ``fab set_hosts mytask``, ``set_hosts`` is a "local" task -- its own host list is empty -- but ``mytask`` will again run on the two hosts given. .. note:: This technique used to be a common way of creating fake "roles", but is less necessary now that roles are fully implemented. It may still be useful in some situations, however. Alongside ``env.hosts`` is ``env.roles`` (not to be confused with ``env.roledefs``!) which, if given, will be taken as a list of role names to look up in ``env.roledefs``. Globally, via the command line ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to modifying ``env.hosts``, ``env.roles``, and ``env.exclude_hosts`` at the module level, you may define them by passing comma-separated string arguments to the command-line switches :option:`--hosts/-H <-H>` and :option:`--roles/-R <-R>`, e.g.:: $ fab -H host1,host2 mytask Such an invocation is directly equivalent to ``env.hosts = ['host1', 'host2']`` -- the argument parser knows to look for these arguments and will modify ``env`` at parse time. .. note:: It's possible, and in fact common, to use these switches to set only a single host or role. Fabric simply calls ``string.split(',')`` on the given string, so a string with no commas turns into a single-item list. It is important to know that these command-line switches are interpreted **before** your fabfile is loaded: any reassignment to ``env.hosts`` or ``env.roles`` in your fabfile will overwrite them. 
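To see why that ordering matters, consider this sketch of a fabfile which assigns ``env.hosts`` outright (the hostnames are placeholders)::

    from fabric.api import env, run

    # Fabfiles load *after* option parsing, so this replaces any -H list.
    env.hosts = ['host3', 'host4']

    def mytask():
        run('ls /var/www')

Invoked as ``fab -H host1,host2 mytask``, this would run ``mytask`` on ``host3`` and ``host4`` only, silently discarding the command-line hosts.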
If you wish to nondestructively merge the command-line hosts with your fabfile-defined ones, make sure your fabfile uses ``env.hosts.extend()`` instead::

    from fabric.api import env, run

    env.hosts.extend(['host3', 'host4'])

    def mytask():
        run('ls /var/www')

When this fabfile is run as ``fab -H host1,host2 mytask``, ``env.hosts`` will then contain ``['host1', 'host2', 'host3', 'host4']`` at the time that ``mytask`` is executed.

.. note::
    ``env.hosts`` is simply a Python list object -- so you may use ``env.hosts.append()`` or any other such method you wish.

.. _hosts-per-task-cli:

Per-task, via the command line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Globally setting host lists only works if you want all your tasks to run on the same host list all the time. This isn't always true, so Fabric provides a few ways to be more granular and specify host lists which apply to a single task only. The first of these uses task arguments.

As outlined in :doc:`fab`, it's possible to specify per-task arguments via a special command-line syntax. In addition to naming actual arguments to your task function, this may be used to set the ``host``, ``hosts``, ``role`` or ``roles`` "arguments", which are interpreted by Fabric when building host lists (and removed from the arguments passed to the task itself.)

.. note::
    Since commas are already used to separate task arguments from one another, semicolons must be used in the ``hosts`` or ``roles`` arguments to delineate individual host strings or role names. Furthermore, the argument must be quoted to prevent your shell from interpreting the semicolons.

Take the below fabfile, which is the same one we've been using, but which doesn't define any host info at all::

    from fabric.api import run

    def mytask():
        run('ls /var/www')

To specify per-task hosts for ``mytask``, execute it like so::

    $ fab mytask:hosts="host1;host2"

This will override any other host list and ensure ``mytask`` always runs on just those two hosts.

Per-task, via decorators
~~~~~~~~~~~~~~~~~~~~~~~~

If a given task should always run on a predetermined host list, you may wish to specify this in your fabfile itself. This can be done by decorating a task function with the `~fabric.decorators.hosts` or `~fabric.decorators.roles` decorators. These decorators take a variable argument list, like so::

    from fabric.api import hosts, run

    @hosts('host1', 'host2')
    def mytask():
        run('ls /var/www')

They will also take a single iterable argument, e.g.::

    my_hosts = ('host1', 'host2')

    @hosts(my_hosts)
    def mytask():
        # ...

When used, these decorators override any checks of ``env`` for that particular task's host list (though ``env`` is not modified in any way -- it is simply ignored.) Thus, even if the above fabfile had defined ``env.hosts`` or the call to :doc:`fab ` uses :option:`--hosts/-H <-H>`, ``mytask`` would still run on a host list of ``['host1', 'host2']``.

However, decorator host lists do **not** override per-task command-line arguments, as given in the previous section.

Order of precedence
~~~~~~~~~~~~~~~~~~~

We've been pointing out, as we've gone along, which methods of setting host lists trump the others. However, to make things clearer, here's a quick breakdown:

* Per-task, command-line host lists (``fab mytask:host=host1``) override absolutely everything else.
* Per-task, decorator-specified host lists (``@hosts('host1')``) override the ``env`` variables.
* Globally specified host lists set in the fabfile (``env.hosts = ['host1']``) *can* override such lists set on the command-line, but only if you're not careful (or want them to.) * Globally specified host lists set on the command-line (``--hosts=host1``) will initialize the ``env`` variables, but that's it. This logic may change slightly in the future to be more consistent (e.g. having :option:`--hosts <-H>` somehow take precedence over ``env.hosts`` in the same way that command-line per-task lists trump in-code ones) but only in a backwards-incompatible release. .. _combining-host-lists: Combining host lists -------------------- There is no "unionizing" of hosts between the various sources mentioned in :ref:`host-lists`. If ``env.hosts`` is set to ``['host1', 'host2', 'host3']``, and a per-function (e.g. via `~fabric.decorators.hosts`) host list is set to just ``['host2', 'host3']``, that function will **not** execute on ``host1``, because the per-task decorator host list takes precedence. However, for each given source, if both roles **and** hosts are specified, they will be merged together into a single host list. Take, for example, this fabfile where both of the decorators are used:: from fabric.api import env, hosts, roles, run env.roledefs = {'role1': ['b', 'c']} @hosts('a', 'b') @roles('role1') def mytask(): run('ls /var/www') Assuming no command-line hosts or roles are given when ``mytask`` is executed, this fabfile will call ``mytask`` on a host list of ``['a', 'b', 'c']`` -- the union of ``role1`` and the contents of the `~fabric.decorators.hosts` call. .. _deduplication: Host list deduplication ----------------------- By default, to support :ref:`combining-host-lists`, Fabric deduplicates the final host list so any given host string is only present once. However, this prevents explicit/intentional running of a task multiple times on the same target host, which is sometimes useful. To turn off deduplication, set :ref:`env.dedupe_hosts ` to ``False``. .. _excluding-hosts: Excluding specific hosts ------------------------ At times, it is useful to exclude one or more specific hosts, e.g. to override a few bad or otherwise undesirable hosts which are pulled in from a role or an autogenerated host list. .. note:: As of Fabric 1.4, you may wish to use :ref:`skip-bad-hosts` instead, which automatically skips over any unreachable hosts. Host exclusion may be accomplished globally with :option:`--exclude-hosts/-x <-x>`:: $ fab -R myrole -x host2,host5 mytask If ``myrole`` was defined as ``['host1', 'host2', ..., 'host15']``, the above invocation would run with an effective host list of ``['host1', 'host3', 'host4', 'host6', ..., 'host15']``. .. note:: Using this option does not modify ``env.hosts`` -- it only causes the main execution loop to skip the requested hosts. Exclusions may be specified per-task by using an extra ``exclude_hosts`` kwarg, which is implemented similarly to the abovementioned ``hosts`` and ``roles`` per-task kwargs, in that it is stripped from the actual task invocation. This example would have the same result as the global exclude above:: $ fab mytask:roles=myrole,exclude_hosts="host2;host5" Note that the host list is semicolon-separated, just as with the ``hosts`` per-task argument. Combining exclusions ~~~~~~~~~~~~~~~~~~~~ Host exclusion lists, like host lists themselves, are not merged together across the different "levels" they can be declared in. 
For example, a global ``-x`` option will not affect a per-task host list set with a decorator or keyword argument, nor will per-task ``exclude_hosts`` keyword arguments affect a global ``-H`` list. There is one minor exception to this rule, namely that CLI-level keyword arguments (``mytask:exclude_hosts=x,y``) **will** be taken into account when examining host lists set via ``@hosts`` or ``@roles``. Thus a task function decorated with ``@hosts('host1', 'host2')`` executed as ``fab taskname:exclude_hosts=host2`` will only run on ``host1``. As with the host list merging, this functionality is currently limited (partly to keep the implementation simple) and may be expanded in future releases. .. _execute: Intelligently executing tasks with ``execute`` ============================================== .. versionadded:: 1.3 Most of the information here involves "top level" tasks executed via :doc:`fab `, such as the first example where we called ``fab taskA taskB``. However, it's often convenient to wrap up multi-task invocations like this into their own, "meta" tasks. Prior to Fabric 1.3, this had to be done by hand, as outlined in :doc:`/usage/library`. Fabric's design eschews magical behavior, so simply *calling* a task function does **not** take into account decorators such as `~fabric.decorators.roles`. New in Fabric 1.3 is the `~fabric.tasks.execute` helper function, which takes a task object or name as its first argument. Using it is effectively the same as calling the given task from the command line: all the rules given above in :ref:`host-lists` apply. (The ``hosts`` and ``roles`` keyword arguments to `~fabric.tasks.execute` are analogous to :ref:`CLI per-task arguments `, including how they override all other host/role-setting methods.) As an example, here's a fabfile defining two stand-alone tasks for deploying a Web application:: from fabric.api import run, roles env.roledefs = { 'db': ['db1', 'db2'], 'web': ['web1', 'web2', 'web3'], } @roles('db') def migrate(): # Database stuff here. pass @roles('web') def update(): # Code updates here. pass In Fabric <=1.2, the only way to ensure that ``migrate`` runs on the DB servers and that ``update`` runs on the Web servers (short of manual ``env.host_string`` manipulation) was to call both as top level tasks:: $ fab migrate update Fabric >=1.3 can use `~fabric.tasks.execute` to set up a meta-task. Update the ``import`` line like so:: from fabric.api import run, roles, execute and append this to the bottom of the file:: def deploy(): execute(migrate) execute(update) That's all there is to it; the `~fabric.decorators.roles` decorators will be honored as expected, resulting in the following execution sequence: * `migrate` on `db1` * `migrate` on `db2` * `update` on `web1` * `update` on `web2` * `update` on `web3` .. warning:: This technique works because tasks that themselves have no host list (this includes the global host list settings) only run one time. If used inside a "regular" task that is going to run on multiple hosts, calls to `~fabric.tasks.execute` will also run multiple times, resulting in multiplicative numbers of subtask calls -- be careful! If you would like your `execute` calls to only be called once, you may use the `~fabric.decorators.runs_once` decorator. .. seealso:: `~fabric.tasks.execute`, `~fabric.decorators.runs_once` .. 
_leveraging-execute-return-value:

Leveraging ``execute`` to access multi-host results
---------------------------------------------------

In nontrivial Fabric runs, especially parallel ones, you may want to gather up a bunch of per-host result values at the end -- e.g. to present a summary table, perform calculations, etc.

It's not possible to do this in Fabric's default "naive" mode (one where you rely on Fabric looping over host lists on your behalf), but with `.execute` it's pretty easy. Simply switch from calling the actual work-bearing task, to calling a "meta" task which takes control of execution with `.execute`::

    from fabric.api import task, execute, run, runs_once

    @task
    def workhorse():
        return run("get my infos")

    @task
    @runs_once
    def go():
        results = execute(workhorse)
        print results

In the above, ``workhorse`` can do any Fabric stuff at all -- it's literally your old "naive" task -- except that it needs to return something useful.

``go`` is your new entry point (to be invoked as ``fab go``, or whatnot) and its job is to take the ``results`` dictionary from the `.execute` call and do whatever you need with it. Check the API docs for details on the structure of that return value.

.. _dynamic-hosts:

Using ``execute`` with dynamically-set host lists
-------------------------------------------------

A common intermediate-to-advanced use case for Fabric is to parameterize lookup of one's target host list at runtime (when use of :ref:`execution-roles` does not suffice). ``execute`` can make this extremely simple, like so::

    from fabric.api import run, execute, task

    # For example, code talking to an HTTP API, or a database, or ...
    from mylib import external_datastore

    # This is the actual algorithm involved. It does not care about host
    # lists at all.
    def do_work():
        run("something interesting on a host")

    # This is the user-facing task invoked on the command line.
    @task
    def deploy(lookup_param):
        # This is the magic you don't get with @hosts or @roles.
        # Even lazy-loading roles require you to declare available roles
        # beforehand. Here, the sky is the limit.
        host_list = external_datastore.query(lookup_param)
        # Put this dynamically generated host list together with the work to
        # be done.
        execute(do_work, hosts=host_list)

For example, if ``external_datastore`` was a simplistic "look up hosts by tag in a database" service, and you wanted to run a task on all hosts tagged as being related to your application stack, you might call the above like this::

    $ fab deploy:app

But wait! A data migration has gone awry on the DB servers. Let's fix up our migration code in our source repo, and deploy just the DB boxes again::

    $ fab deploy:db

This use case looks similar to Fabric's roles, but has much more potential, and is by no means limited to a single argument. Define the task however you wish, query your external data store in whatever way you need -- it's just Python.

The alternate approach
~~~~~~~~~~~~~~~~~~~~~~

Similar to the above, but using ``fab``'s ability to call multiple tasks in succession instead of an explicit ``execute`` call, is to mutate :ref:`env.hosts ` in a host-list lookup task and then call ``do_work`` in the same session::

    from fabric.api import env, run, task

    from mylib import external_datastore

    # Marked as a publicly visible task, but otherwise unchanged: still just
    # "do the work, let somebody else worry about what hosts to run on".
    @task
    def do_work():
        run("something interesting on a host")

    @task
    def set_hosts(lookup_param):
        # Update env.hosts instead of calling execute()
        env.hosts = external_datastore.query(lookup_param)

Then invoke like so::

    $ fab set_hosts:app do_work

One benefit of this approach over the previous one is that you can replace ``do_work`` with any other "workhorse" task::

    $ fab set_hosts:db snapshot
    $ fab set_hosts:cassandra,cluster2 repair_ring
    $ fab set_hosts:redis,environ=prod status

.. _failures:

Failure handling
================

Once the task list has been constructed, Fabric will start executing the tasks as outlined in :ref:`execution-strategy`, until all tasks have been run on the entirety of their host lists. However, Fabric defaults to a "fail-fast" behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile's Python code encountering an exception, execution will halt immediately.

This is typically the desired behavior, but there are many exceptions to the rule, so Fabric provides ``env.warn_only``, a Boolean setting. It defaults to ``False``, meaning an error condition will result in the program aborting immediately. However, if ``env.warn_only`` is set to ``True`` at the time of failure -- with, say, the `~fabric.context_managers.settings` context manager -- Fabric will emit a warning message but continue executing.

To signal a failure from within a Fabric task, use `~fabric.utils.abort`; it signals an error just as if Fabric had detected one itself, and follows the regular execution model for control flow.

.. _connections:

Connections
===========

``fab`` itself doesn't actually make any connections to remote hosts. Instead, it simply ensures that for each distinct run of a task on one of its hosts, the env var ``env.host_string`` is set to the right value. Users wanting to leverage Fabric as a library may do so manually to achieve similar effects (though as of Fabric 1.3, using `~fabric.tasks.execute` is preferred and more powerful.)

``env.host_string`` is (as the name implies) the "current" host string, and is what Fabric uses to determine what connections to make (or re-use) when network-aware functions are run. Operations like `~fabric.operations.run` or `~fabric.operations.put` use ``env.host_string`` as a lookup key in a shared dictionary which maps host strings to SSH connection objects.

.. note::
    The connections dictionary (currently located at ``fabric.state.connections``) acts as a cache, opting to return previously created connections if possible in order to save some overhead, and creating new ones otherwise.

Lazy connections
----------------

Because connections are driven by the individual operations, Fabric will not actually make connections until they're necessary. Take for example this task which does some local housekeeping prior to interacting with the remote server::

    from fabric.api import *

    @hosts('host1')
    def clean_and_upload():
        local(r'find assets/ -name "*.DS_Store" -exec rm "{}" \;')
        local('tar czf /tmp/assets.tgz assets/')
        put('/tmp/assets.tgz', '/tmp/assets.tgz')
        with cd('/var/www/myapp/'):
            run('tar xzf /tmp/assets.tgz')

What happens, connection-wise, is as follows:

#. The two `~fabric.operations.local` calls will run without making any network connections whatsoever;
#. `~fabric.operations.put` asks the connection cache for a connection to ``host1``;
#.
The connection cache fails to find an existing connection for that host string, and so creates a new SSH connection, returning it to `~fabric.operations.put`; #. `~fabric.operations.put` uploads the file through that connection; #. Finally, the `~fabric.operations.run` call asks the cache for a connection to that same host string, and is given the existing, cached connection for its own use. Extrapolating from this, you can also see that tasks which don't use any network-borne operations will never actually initiate any connections (though they will still be run once for each host in their host list, if any.) Closing connections ------------------- Fabric's connection cache never closes connections itself -- it leaves this up to whatever is using it. The :doc:`fab ` tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not.) Library users will need to ensure they explicitly close all open connections before their program exits. This can be accomplished by calling `~fabric.network.disconnect_all` at the end of your script. .. note:: `~fabric.network.disconnect_all` may be moved to a more public location in the future; we're still working on making the library aspects of Fabric more solidified and organized. Multiple connection attempts and skipping bad hosts --------------------------------------------------- As of Fabric 1.4, multiple attempts may be made to connect to remote servers before aborting with an error: Fabric will try connecting :ref:`env.connection_attempts ` times before giving up, with a timeout of :ref:`env.timeout ` seconds each time. (These currently default to 1 try and 10 seconds, to match previous behavior, but they may be safely changed to whatever you need.) Furthermore, even total failure to connect to a server is no longer an absolute hard stop: set :ref:`env.skip_bad_hosts ` to ``True`` and in most situations (typically initial connections) Fabric will simply warn and continue, instead of aborting. .. versionadded:: 1.4 .. _password-management: Password management =================== Fabric maintains an in-memory password cache of your login and sudo passwords in certain situations; this helps avoid tedious re-entry when multiple systems share the same password [#]_, or if a remote system's ``sudo`` configuration doesn't do its own caching. Pre-filling the password caches ------------------------------- The first layer is a simple default or fallback password value, :ref:`env.password ` (which may also be set at the command line via :option:`--password <-p>` or :option:`--initial-password-prompt <-I>`). This env var stores a single password which (if non-empty) will be tried in the event that the host-specific cache (see below) has no entry for the current :ref:`host string `. :ref:`env.passwords ` (plural!) serves as a per-user/per-host cache, storing the most recently entered password for every unique user/host/port combination (**note** that you must include **all three values** if modifying the structure by hand - see the above link for details). Due to this cache, connections to multiple different users and/or hosts in the same session will only require a single password entry for each. (Previous versions of Fabric used only the single, default password cache and thus required password re-entry every time the previously entered password became invalid.) 
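As an illustration, a fabfile might pre-fill both layers of the cache like so (a sketch only -- the credentials and host strings are placeholders, and keeping real passwords in plain text is generally unwise)::

    from fabric.api import env

    # Single fallback value, used when no host-specific entry matches.
    env.password = 'sekrit'

    # Per-user/per-host entries; note that each key includes all three of
    # user, host and port.
    env.passwords = {
        'admin@host1:22': 'foo',
        'deploy@host2:22': 'bar',
    }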
Auto-filling/updating from user input ------------------------------------- Depending on your configuration and the number of hosts your session will connect to, you may find setting either or both of the above env vars to be useful. However, Fabric will automatically fill them in as necessary without any additional configuration. Specifically, each time a password prompt is presented to the user, the value entered is used to update both the single default password cache, and the cache value for the current value of ``env.host_string``. .. _sudo-passwords: Specifying ``sudo``-only passwords ---------------------------------- In some situations (such as those involving two-factor authentication, or any other situation where submitting a password at login time is not desired or correct) you may want to only cache passwords intended for ``sudo``, instead of reusing the values for both login and ``sudo`` purposes. To do this, you may set :ref:`env.sudo_password ` or populate :ref:`env.sudo_passwords `, which mirror ``env.password`` and ``env.passwords`` (described above). These values will **only** be used in responding to ``sudo`` password prompts, and will never be submitted at connection time. There is also an analogue to the ``--password`` command line flag, named :option:`--sudo-password`, and like :option:`--initial-password-prompt <-I>`, there exists :option:`--initial-sudo-password-prompt`. .. note:: When both types of passwords are filled in (e.g. if ``env.password = "foo"`` and ``env.sudo_password = "bar"``), the ``sudo`` specific passwords will be used. .. note:: Due to backwards compatibility concerns, user-entered ``sudo`` passwords will still be cached into ``env.password``/``env.passwords``; ``env.sudo_password``/``env.sudo_passwords`` are purely for noninteractive use. .. [#] We highly recommend the use of SSH `key-based access `_ instead of relying on homogeneous password setups, as it's significantly more secure. .. _ssh-config: Leveraging native SSH config files ================================== Command-line SSH clients (such as the one provided by `OpenSSH `_) make use of a specific configuration format typically known as ``ssh_config``, and will read from a file in the platform-specific location ``$HOME/.ssh/config`` (or an arbitrary path given to :option:`--ssh-config-path`/:ref:`env.ssh_config_path `.) This file allows specification of various SSH options such as default or per-host usernames, hostname aliases, and toggling other settings (such as whether to use :ref:`agent forwarding `.) Fabric's SSH implementation allows loading a subset of these options from one's actual SSH config file, should it exist. This behavior is not enabled by default (in order to be backwards compatible) but may be turned on by setting :ref:`env.use_ssh_config ` to ``True`` at the top of your fabfile. If enabled, the following SSH config directives will be loaded and honored by Fabric: * ``User`` and ``Port`` will be used to fill in the appropriate connection parameters when not otherwise specified, in the following fashion: * Globally specified ``User``/``Port`` will be used in place of the current defaults (local username and 22, respectively) if the appropriate env vars are not set. * However, if :ref:`env.user `/:ref:`env.port ` *are* set, they override global ``User``/``Port`` values. * User/port values in the host string itself (e.g. ``hostname:222``) will override everything, including any ``ssh_config`` values. 
* ``HostName`` can be used to replace the given hostname, just like with regular ``ssh``. So a ``Host foo`` entry specifying ``HostName example.com`` will allow you to give Fabric the hostname ``'foo'`` and have that expanded into ``'example.com'`` at connection time.
* ``IdentityFile`` will extend (not replace) :ref:`env.key_filename `.
* ``ForwardAgent`` will augment :ref:`env.forward_agent ` in an "OR" manner: if either is set to a positive value, agent forwarding will be enabled.
* ``ProxyCommand`` will trigger use of a proxy command for host connections, just as with regular ``ssh``.

.. note::
    If all you want to do is bounce SSH traffic off a gateway, you may find :ref:`env.gateway ` to be a more efficient connection method (which will also honor more Fabric-level settings) than the typical ``ssh gatewayhost nc %h %p`` method of using ``ProxyCommand`` as a gateway.

.. note::
    If your SSH config file contains ``ProxyCommand`` directives *and* you have set :ref:`env.gateway ` to a non-``None`` value, ``env.gateway`` will take precedence and the ``ProxyCommand`` will be ignored. The rationale is that, if you have a pre-created SSH config file, it is easier to modify ``env.gateway`` (e.g. via `~fabric.context_managers.settings`) than to work around your config file's contents entirely.

fabric-1.14.0/sites/docs/usage/fab.rst000066400000000000000000000430151315011462000174640ustar00rootroot00000000000000
=============================
``fab`` options and arguments
=============================

The most common method for utilizing Fabric is via its command-line tool, ``fab``, which should have been placed on your shell's executable path when Fabric was installed. ``fab`` tries hard to be a good Unix citizen, using a standard style of command-line switches, help output, and so forth.

Basic use
=========

In its simplest form, ``fab`` may be called with no options at all, and with one or more arguments, which should be task names, e.g.::

    $ fab task1 task2

As detailed in :doc:`../tutorial` and :doc:`execution`, this will run ``task1`` followed by ``task2``, assuming that Fabric was able to find a fabfile nearby containing Python functions with those names.

However, it's possible to expand this simple usage into something more flexible, by using the provided options and/or passing arguments to individual tasks.

.. _arbitrary-commands:

Arbitrary remote shell commands
===============================

.. versionadded:: 0.9.2

Fabric leverages a lesser-known command line convention and may be called in the following manner::

    $ fab [options] -- [shell command]

where everything after the ``--`` is turned into a temporary `~fabric.operations.run` call, and is not parsed for ``fab`` options. If you've defined a host list at the module level or on the command line, this usage will act like a one-line anonymous task.

For example, let's say you just wanted to get the kernel info for a bunch of systems; you could do this::

    $ fab -H system1,system2,system3 -- uname -a

which would be literally equivalent to the following fabfile::

    from fabric.api import run

    def anonymous():
        run("uname -a")

as if it were executed thusly::

    $ fab -H system1,system2,system3 anonymous

Most of the time you will want to just write out the task in your fabfile (anything you use once, you're likely to use again) but this feature provides a handy, fast way to quickly dash off an SSH-borne command while leveraging your fabfile's connection settings.

..
_command-line-options: Command-line options ==================== A quick overview of all possible command line options can be found via ``fab --help``. If you're looking for details on a specific option, we go into detail below. .. note:: ``fab`` uses Python's `optparse`_ library, meaning that it honors typical Linux or GNU style short and long options, as well as freely mixing options and arguments. E.g. ``fab task1 -H hostname task2 -i path/to/keyfile`` is just as valid as the more straightforward ``fab -H hostname -i path/to/keyfile task1 task2``. .. _optparse: http://docs.python.org/library/optparse.html .. cmdoption:: -a, --no_agent Sets :ref:`env.no_agent ` to ``True``, forcing our SSH layer not to talk to the SSH agent when trying to unlock private key files. .. versionadded:: 0.9.1 .. cmdoption:: -A, --forward-agent Sets :ref:`env.forward_agent ` to ``True``, enabling agent forwarding. .. versionadded:: 1.4 .. cmdoption:: --abort-on-prompts Sets :ref:`env.abort_on_prompts ` to ``True``, forcing Fabric to abort whenever it would prompt for input. .. versionadded:: 1.1 .. cmdoption:: -c RCFILE, --config=RCFILE Sets :ref:`env.rcfile ` to the given file path, which Fabric will try to load on startup and use to update environment variables. .. cmdoption:: -d COMMAND, --display=COMMAND Prints the entire docstring for the given task, if there is one. Does not currently print out the task's function signature, so descriptive docstrings are a good idea. (They're *always* a good idea, of course -- just moreso here.) .. cmdoption:: --connection-attempts=M, -n M Set number of times to attempt connections. Sets :ref:`env.connection_attempts `. .. seealso:: :ref:`env.connection_attempts `, :ref:`env.timeout ` .. versionadded:: 1.4 .. cmdoption:: -D, --disable-known-hosts Sets :ref:`env.disable_known_hosts ` to ``True``, preventing Fabric from loading the user's SSH :file:`known_hosts` file. .. cmdoption:: -f FABFILE, --fabfile=FABFILE The fabfile name pattern to search for (defaults to ``fabfile.py``), or alternately an explicit file path to load as the fabfile (e.g. ``/path/to/my/fabfile.py``.) .. seealso:: :doc:`fabfiles` .. cmdoption:: -F LIST_FORMAT, --list-format=LIST_FORMAT Allows control over the output format of :option:`--list <-l>`. ``short`` is equivalent to :option:`--shortlist`, ``normal`` is the same as simply omitting this option entirely (i.e. the default), and ``nested`` prints out a nested namespace tree. .. versionadded:: 1.1 .. seealso:: :option:`--shortlist`, :option:`--list <-l>` .. cmdoption:: -g HOST, --gateway=HOST Sets :ref:`env.gateway ` to ``HOST`` host string. .. versionadded:: 1.5 .. cmdoption:: --gss-auth Toggles use of GSS-API authentication. .. seealso:: :ref:`kerberos` .. versionadded:: 1.11 .. cmdoption:: --gss-deleg Toggles whether GSS-API client credentials are delegated. .. seealso:: :ref:`kerberos` .. versionadded:: 1.11 .. cmdoption:: --gss-kex Toggles whether GSS-API key exchange is used. .. seealso:: :ref:`kerberos` .. versionadded:: 1.11 .. cmdoption:: -h, --help Displays a standard help message, with all possible options and a brief overview of what they do, then exits. .. cmdoption:: --hide=LEVELS A comma-separated list of :doc:`output levels ` to hide by default. .. cmdoption:: -H HOSTS, --hosts=HOSTS Sets :ref:`env.hosts ` to the given comma-delimited list of host strings. .. cmdoption:: -x HOSTS, --exclude-hosts=HOSTS Sets :ref:`env.exclude_hosts ` to the given comma-delimited list of host strings to then keep out of the final host list. .. 
versionadded:: 1.1 .. cmdoption:: -i KEY_FILENAME When set to a file path, will load the given file as an SSH identity file (usually a private key.) This option may be repeated multiple times. Sets (or appends to) :ref:`env.key_filename `. .. cmdoption:: -I, --initial-password-prompt Forces a password prompt at the start of the session (after fabfile load and option parsing, but before executing any tasks) in order to pre-fill :ref:`env.password `. This is useful for fire-and-forget runs (especially parallel sessions, in which runtime input is not possible) when setting the password via :option:`--password <-p>` or by setting :ref:`env.password ` in your fabfile, is undesirable. .. note:: The value entered into this prompt will *overwrite* anything supplied via :ref:`env.password ` at module level, or via :option:`--password <-p>`. .. seealso:: :ref:`password-management` .. cmdoption:: --initial-sudo-password-prompt Like :option:`--initial-password-prompt <-I>`, but for prefilling :ref:`sudo_password` instead of :ref:`password`. .. versionadded:: 1.12 .. cmdoption:: -k Sets :ref:`env.no_keys ` to ``True``, forcing the SSH layer to not look for SSH private key files in one's home directory. .. versionadded:: 0.9.1 .. cmdoption:: --keepalive=KEEPALIVE Sets :ref:`env.keepalive ` to the given (integer) value, specifying an SSH keepalive interval. .. versionadded:: 1.1 .. cmdoption:: --linewise Forces output to be buffered line-by-line instead of byte-by-byte. Often useful or required for :ref:`parallel execution `. .. versionadded:: 1.3 .. cmdoption:: -l, --list Imports a fabfile as normal, but then prints a list of all discovered tasks and exits. Will also print the first line of each task's docstring, if it has one, next to it (truncating if necessary.) .. versionchanged:: 0.9.1 Added docstring to output. .. seealso:: :option:`--shortlist`, :option:`--list-format <-F>` .. cmdoption:: -p PASSWORD, --password=PASSWORD Sets :ref:`env.password ` to the given string; it will then be used as the default password when making SSH connections or calling the ``sudo`` program. .. seealso:: :option:`--initial-password-prompt <-I>`, :option:`--sudo-password` .. cmdoption:: -P, --parallel Sets :ref:`env.parallel ` to ``True``, causing tasks to run in parallel. .. versionadded:: 1.3 .. seealso:: :doc:`/usage/parallel` .. cmdoption:: --no-pty Sets :ref:`env.always_use_pty ` to ``False``, causing all `~fabric.operations.run`/`~fabric.operations.sudo` calls to behave as if one had specified ``pty=False``. .. versionadded:: 1.0 .. cmdoption:: -r, --reject-unknown-hosts Sets :ref:`env.reject_unknown_hosts ` to ``True``, causing Fabric to abort when connecting to hosts not found in the user's SSH :file:`known_hosts` file. .. cmdoption:: -R ROLES, --roles=ROLES Sets :ref:`env.roles ` to the given comma-separated list of role names. .. cmdoption:: --set KEY=VALUE,... Allows you to set default values for arbitrary Fabric env vars. Values set this way have a low precedence -- they will not override more specific env vars which are also specified on the command line. E.g.:: fab --set password=foo --password=bar will result in ``env.password = 'bar'``, not ``'foo'`` Multiple ``KEY=VALUE`` pairs may be comma-separated, e.g. ``fab --set var1=val1,var2=val2``. Other than basic string values, you may also set env vars to True by omitting the ``=VALUE`` (e.g. ``fab --set KEY``), and you may set values to the empty string (and thus a False-equivalent value) by keeping the equals sign, but omitting ``VALUE`` (e.g. 
``fab --set KEY=``.) .. versionadded:: 1.4 .. cmdoption:: -s SHELL, --shell=SHELL Sets :ref:`env.shell ` to the given string, overriding the default shell wrapper used to execute remote commands. .. cmdoption:: --shortlist Similar to :option:`--list <-l>`, but without any embellishment, just task names separated by newlines with no indentation or docstrings. .. versionadded:: 0.9.2 .. seealso:: :option:`--list <-l>` .. cmdoption:: --show=LEVELS A comma-separated list of :doc:`output levels ` to be added to those that are shown by default. .. seealso:: `~fabric.operations.run`, `~fabric.operations.sudo` .. cmdoption:: --ssh-config-path Sets :ref:`env.ssh_config_path `. .. versionadded:: 1.4 .. seealso:: :ref:`ssh-config` .. cmdoption:: --skip-bad-hosts Sets :ref:`env.skip_bad_hosts `, causing Fabric to skip unavailable hosts. .. versionadded:: 1.4 .. cmdoption:: --skip-unknown-tasks Sets :ref:`env.skip_unknown_tasks `, causing Fabric to skip unknown tasks. .. seealso:: :ref:`env.skip_unknown_tasks ` .. cmdoption:: --sudo-password Sets :ref:`env.sudo_password `. .. versionadded:: 1.12 .. cmdoption:: --timeout=N, -t N Set connection timeout in seconds. Sets :ref:`env.timeout `. .. seealso:: :ref:`env.timeout `, :ref:`env.connection_attempts ` .. versionadded:: 1.4 .. cmdoption:: --command-timeout=N, -T N Set remote command timeout in seconds. Sets :ref:`env.command_timeout `. .. seealso:: :ref:`env.command_timeout `, .. versionadded:: 1.6 .. cmdoption:: -u USER, --user=USER Sets :ref:`env.user ` to the given string; it will then be used as the default username when making SSH connections. .. cmdoption:: -V, --version Displays Fabric's version number, then exits. .. cmdoption:: -w, --warn-only Sets :ref:`env.warn_only ` to ``True``, causing Fabric to continue execution even when commands encounter error conditions. .. cmdoption:: -z, --pool-size Sets :ref:`env.pool_size `, which specifies how many processes to run concurrently during parallel execution. .. versionadded:: 1.3 .. seealso:: :doc:`/usage/parallel` .. _task-arguments: Per-task arguments ================== The options given in :ref:`command-line-options` apply to the invocation of ``fab`` as a whole; even if the order is mixed around, options still apply to all given tasks equally. Additionally, since tasks are just Python functions, it's often desirable to pass in arguments to them at runtime. Answering both these needs is the concept of "per-task arguments", which is a special syntax you can tack onto the end of any task name: * Use a colon (``:``) to separate the task name from its arguments; * Use commas (``,``) to separate arguments from one another (may be escaped by using a backslash, i.e. ``\,``); * Use equals signs (``=``) for keyword arguments, or omit them for positional arguments. May also be escaped with backslashes. Additionally, since this process involves string parsing, all values will end up as Python strings, so plan accordingly. (We hope to improve upon this in future versions of Fabric, provided an intuitive syntax can be found.) 
For example, a "create a new user" task might be defined like so (omitting most of the actual logic for brevity):: def new_user(username, admin='no', comment="No comment provided"): print("New User (%s): %s" % (username, comment)) pass You can specify just the username:: $ fab new_user:myusername Or treat it as an explicit keyword argument:: $ fab new_user:username=myusername If both args are given, you can again give them as positional args:: $ fab new_user:myusername,yes Or mix and match, just like in Python:: $ fab new_user:myusername,admin=yes The ``print`` call above is useful for illustrating escaped commas, like so:: $ fab new_user:myusername,admin=no,comment='Gary\, new developer (starts Monday)' .. note:: Quoting the backslash-escaped comma is required, as not doing so will cause shell syntax errors. Quotes are also needed whenever an argument involves other shell-related characters such as spaces. All of the above are translated into the expected Python function calls. For example, the last call above would become:: >>> new_user('myusername', admin='yes', comment='Gary, new developer (starts Monday)') Roles and hosts --------------- As mentioned in :ref:`the section on task execution `, there are a handful of per-task keyword arguments (``host``, ``hosts``, ``role`` and ``roles``) which do not actually map to the task functions themselves, but are used for setting per-task host and/or role lists. These special kwargs are **removed** from the args/kwargs sent to the task function itself; this is so that you don't run into TypeErrors if your task doesn't define the kwargs in question. (It also means that if you **do** define arguments with these names, you won't be able to specify them in this manner -- a regrettable but necessary sacrifice.) .. note:: If both the plural and singular forms of these kwargs are given, the value of the plural will win out and the singular will be discarded. When using the plural form of these arguments, one must use semicolons (``;``) since commas are already being used to separate arguments from one another. Furthermore, since your shell is likely to consider semicolons a special character, you'll want to quote the host list string to prevent shell interpretation, e.g.:: $ fab new_user:myusername,hosts="host1;host2" Again, since the ``hosts`` kwarg is removed from the argument list sent to the ``new_user`` task function, the actual Python invocation would be ``new_user('myusername')``, and the function would be executed on a host list of ``['host1', 'host2']``. .. _fabricrc: Settings files ============== Fabric currently honors a simple user settings file, or ``fabricrc`` (think ``bashrc`` but for ``fab``) which should contain one or more key-value pairs, one per line. These lines will be subject to ``string.split('=')``, and thus can currently only be used to specify string settings. Any such key-value pairs will be used to update :doc:`env ` when ``fab`` runs, and is loaded prior to the loading of any fabfile. By default, Fabric looks for ``~/.fabricrc``, and this may be overridden by specifying the :option:`-c` flag to ``fab``. For example, if your typical SSH login username differs from your workstation username, and you don't want to modify ``env.user`` in a project's fabfile (possibly because you expect others to use it as well) you could write a ``fabricrc`` file like so:: user = ssh_user_name Then, when running ``fab``, your fabfile would load up with ``env.user`` set to ``'ssh_user_name'``. 
Other users of that fabfile could do the same, allowing the fabfile itself to be cleanly agnostic regarding the default username. Exit status ============ If ``fab`` executes all commands on all hosts successfully, success (0) is returned. Otherwise, * If an invalid command or option is specified, ``fab`` aborts with an exit status of 1. * If a connection to a host fails, ``fab`` aborts with an exit status of 1. It will not try the next host. * If a local or remote command fails (returns non-zero status), ``fab`` aborts with an exit status of 1. The exit status of the original command can be found in the log. * If a Python exception is thrown, ``fab`` aborts with an exit status of 1. fabric-1.14.0/sites/docs/usage/fabfiles.rst000066400000000000000000000071511315011462000205100ustar00rootroot00000000000000============================ Fabfile construction and use ============================ This document contains miscellaneous sections about fabfiles, both how to best write them, and how to use them once written. .. _fabfile-discovery: Fabfile discovery ================= Fabric is capable of loading Python modules (e.g. ``fabfile.py``) or packages (e.g. a ``fabfile/`` directory containing an ``__init__.py``). By default, it looks for something named (to Python's import machinery) ``fabfile`` - so either ``fabfile/`` or ``fabfile.py``. The fabfile discovery algorithm searches in the invoking user's current working directory or any parent directories. Thus, it is oriented around "project" use, where one keeps e.g. a ``fabfile.py`` at the root of a source code tree. Such a fabfile will then be discovered no matter where in the tree the user invokes ``fab``. The specific name to be searched for may be overridden on the command-line with the :option:`-f` option, or by adding a :ref:`fabricrc ` line which sets the value of ``fabfile``. For example, if you wanted to name your fabfile ``fab_tasks.py``, you could create such a file and then call ``fab -f fab_tasks.py ``, or add ``fabfile = fab_tasks.py`` to ``~/.fabricrc``. If the given fabfile name contains path elements other than a filename (e.g. ``../fabfile.py`` or ``/dir1/dir2/custom_fabfile``) it will be treated as a file path and directly checked for existence without any sort of searching. When in this mode, tilde-expansion will be applied, so one may refer to e.g. ``~/personal_fabfile.py``. .. note:: Fabric does a normal ``import`` (actually an ``__import__``) of your fabfile in order to access its contents -- it does not do any ``eval``-ing or similar. In order for this to work, Fabric temporarily adds the found fabfile's containing folder to the Python load path (and removes it immediately afterwards.) .. versionchanged:: 0.9.2 The ability to load package fabfiles. .. _importing-the-api: Importing Fabric ================ Because Fabric is just Python, you *can* import its components any way you want. However, for the purposes of encapsulation and convenience (and to make life easier for Fabric's packaging script) Fabric's public API is maintained in the ``fabric.api`` module. All of Fabric's :doc:`../api/core/operations`, :doc:`../api/core/context_managers`, :doc:`../api/core/decorators` and :doc:`../api/core/utils` are included in this module as a single, flat namespace. 
This enables a very simple and consistent interface to Fabric within your fabfiles:: from fabric.api import * # call run(), sudo(), etc etc This is not technically best practices (for `a number of reasons`_) and if you're only using a couple of Fab API calls, it *is* probably a good idea to explicitly ``from fabric.api import env, run`` or similar. However, in most nontrivial fabfiles, you'll be using all or most of the API, and the star import:: from fabric.api import * will be a lot easier to write and read than:: from fabric.api import abort, cd, env, get, hide, hosts, local, prompt, \ put, require, roles, run, runs_once, settings, show, sudo, warn so in this case we feel pragmatism overrides best practices. .. _a number of reasons: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#importing Defining tasks and importing callables ====================================== For important information on what exactly Fabric will consider as a task when it loads your fabfile, as well as notes on how best to import other code, please see :doc:`/usage/tasks` in the :doc:`execution` documentation. fabric-1.14.0/sites/docs/usage/interactivity.rst000066400000000000000000000151271315011462000216350ustar00rootroot00000000000000================================ Interaction with remote programs ================================ Fabric's primary operations, `~fabric.operations.run` and `~fabric.operations.sudo`, are capable of sending local input to the remote end, in a manner nearly identical to the ``ssh`` program. For example, programs which display password prompts (e.g. a database dump utility, or changing a user's password) will behave just as if you were interacting with them directly. However, as with ``ssh`` itself, Fabric's implementation of this feature is subject to a handful of limitations which are not always intuitive. This document discusses such issues in detail. .. note:: Readers unfamiliar with the basics of Unix stdout and stderr pipes, and/or terminal devices, may wish to visit the Wikipedia pages for `Unix pipelines `_ and `Pseudo terminals `_ respectively. .. _combine_streams: Combining stdout and stderr =========================== The first issue to be aware of is that of the stdout and stderr streams, and why they are separated or combined as needed. Buffering --------- Fabric 0.9.x and earlier, and Python itself, buffer output on a line-by-line basis: text is not printed to the user until a newline character is found. This works fine in most situations but becomes problematic when one needs to deal with partial-line output such as prompts. .. note:: Line-buffered output can make programs appear to halt or freeze for no reason, as prompts print out text without a newline, waiting for the user to enter their input and press Return. Newer Fabric versions buffer both input and output on a character-by-character basis in order to make interaction with prompts possible. This has the convenient side effect of enabling interaction with complex programs utilizing the "curses" libraries or which otherwise redraw the screen (think ``top``). Crossing the streams -------------------- Unfortunately, printing to stderr and stdout simultaneously (as many programs do) means that when the two streams are printed independently one byte at a time, they can become garbled or meshed together. While this can sometimes be mitigated by line-buffering one of the streams and not the other, it's still a serious issue. 
To solve this problem, Fabric uses a setting in our SSH layer which merges the two streams at a low level and causes output to appear more naturally. This setting is represented in Fabric as the :ref:`combine-stderr` env var and keyword argument, and is ``True`` by default. Due to this default setting, output will appear correctly, but at the cost of an empty ``.stderr`` attribute on the return values of `~fabric.operations.run`/`~fabric.operations.sudo`, as all output will appear to be stdout. Conversely, users requiring a distinct stderr stream at the Python level and who aren't bothered by garbled user-facing output (or who are hiding stdout and stderr from the command in question) may opt to set this to ``False`` as needed. .. _pseudottys: Pseudo-terminals ================ The other main issue to consider when presenting interactive prompts to users is that of echoing the user's own input. Echoes ------ Typical terminal applications or bona fide text terminals (e.g. when using a Unix system without a running GUI) present programs with a terminal device called a tty or pty (for pseudo-terminal). These automatically echo all text typed into them back out to the user (via stdout), as interaction without seeing what you had just typed would be difficult. Terminal devices are also able to conditionally turn off echoing, allowing secure password prompts. However, it's possible for programs to be run without a tty or pty present at all (consider cron jobs, for example) and in this situation, any stdin data being fed to the program won't be echoed. This is desirable for programs being run without any humans around, and it's also Fabric's old default mode of operation. Fabric's approach ----------------- Unfortunately, in the context of executing commands via Fabric, when no pty is present to echo a user's stdin, Fabric must echo it for them. This is sufficient for many applications, but it presents problems for password prompts, which become insecure. In the interests of security and meeting the principle of least surprise (insofar as users are typically expecting things to behave as they would when run in a terminal emulator), Fabric 1.0 and greater force a pty by default. With a pty enabled, Fabric simply allows the remote end to handle echoing or hiding of stdin and does not echo anything itself. .. note:: In addition to allowing normal echo behavior, a pty also means programs that behave differently when attached to a terminal device will then do so. For example, programs that colorize output on terminals but not when run in the background will print colored output. Be wary of this if you inspect the return value of `~fabric.operations.run` or `~fabric.operations.sudo`! For situations requiring the pty behavior turned off, the :option:`--no-pty` command-line argument and :ref:`always-use-pty` env var may be used. Combining the two ================= As a final note, keep in mind that use of pseudo-terminals effectively implies combining stdout and stderr -- in much the same way as the :ref:`combine_stderr ` setting does. This is because a terminal device naturally sends both stdout and stderr to the same place -- the user's display -- thus making it impossible to differentiate between them. However, at the Fabric level, the two groups of settings are distinct from one another and may be combined in various ways. 
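For instance, to capture a distinct stderr stream at the Python level -- a sketch only, using a hypothetical command::

    from fabric.api import run, settings

    def check_stderr():
        # warn_only prevents an abort if the command returns nonzero.
        with settings(warn_only=True):
            result = run('my_command', pty=False, combine_stderr=False)
        # With the streams kept separate, .stderr is actually meaningful.
        print(result.stderr)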
The default is for both to be set to ``True``; the other combinations are as follows: * ``run("cmd", pty=False, combine_stderr=True)``: will cause Fabric to echo all stdin itself, including passwords, as well as potentially altering ``cmd``'s behavior. Useful if ``cmd`` behaves undesirably when run under a pty and you're not concerned about password prompts. * ``run("cmd", pty=False, combine_stderr=False)``: with both settings ``False``, Fabric will echo stdin and won't issue a pty -- and this is highly likely to result in undesired behavior for all but the simplest commands. However, it is also the only way to access a distinct stderr stream, which is occasionally useful. * ``run("cmd", pty=True, combine_stderr=False)``: valid, but won't really make much of a difference, as ``pty=True`` will still result in merged streams. May be useful for avoiding any edge case problems in ``combine_stderr`` (none are presently known). fabric-1.14.0/sites/docs/usage/library.rst000066400000000000000000000053461315011462000204050ustar00rootroot00000000000000=========== Library Use =========== Fabric's primary use case is via fabfiles and the :doc:`fab ` tool, and this is reflected in much of the documentation. However, Fabric's internals are written in such a manner as to be easily used without ``fab`` or fabfiles at all -- this document will show you how. There's really only a couple of considerations one must keep in mind, when compared to writing a fabfile and using ``fab`` to run it: how connections are really made, and how disconnections occur. Connections =========== We've documented how Fabric really connects to its hosts before, but it's currently somewhat buried in the middle of the overall :doc:`execution docs `. Specifically, you'll want to skip over to the :ref:`connections` section and read it real quick. (You should really give that entire document a once-over, but it's not absolutely required.) As that section mentions, the key is simply that `~fabric.operations.run`, `~fabric.operations.sudo` and the other operations only look in one place when connecting: :ref:`env.host_string `. All of the other mechanisms for setting hosts are interpreted by the ``fab`` tool when it runs, and don't matter when running as a library. That said, most use cases where you want to marry a given task ``X`` and a given list of hosts ``Y`` can, as of Fabric 1.3, be handled with the `~fabric.tasks.execute` function via ``execute(X, hosts=Y)``. Please see `~fabric.tasks.execute`'s documentation for details -- manual host string manipulation should be rarely necessary. Disconnecting ============= The other main thing that ``fab`` does for you is to disconnect from all hosts at the end of a session; otherwise, Python will sit around forever waiting for those network resources to be released. Fabric 0.9.4 and newer have a function you can use to do this easily: `~fabric.network.disconnect_all`. Simply make sure your code calls this when it terminates (typically in the ``finally`` clause of an outer ``try: finally`` statement -- lest errors in your code prevent disconnections from happening!) and things ought to work pretty well. If you're on Fabric 0.9.3 or older, you can simply do this (``disconnect_all`` just adds a bit of nice output to this logic):: from fabric.state import connections for key in connections.keys(): connections[key].close() del connections[key] Final note ========== This document is an early draft, and may not cover absolutely every difference between ``fab`` use and library use. 
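As a parting illustration, here is a rough end-to-end sketch of the library workflow described above (the host names and task body are made up for the example)::

    from fabric.api import execute, run
    from fabric.network import disconnect_all

    def uptime():
        run("uptime")

    if __name__ == "__main__":
        try:
            # execute() pairs the task with a host list and handles the
            # env.host_string bookkeeping for each connection.
            execute(uptime, hosts=["web1.example.com", "web2.example.com"])
        finally:
            # Mirror the cleanup the fab tool performs at exit.
            disconnect_all()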
However, the above should highlight the largest stumbling blocks. When in doubt, note that in the Fabric source code, ``fabric/main.py`` contains the bulk of the extra work done by ``fab``, and may serve as a useful reference. fabric-1.14.0/sites/docs/usage/output_controls.rst000066400000000000000000000152321315011462000222170ustar00rootroot00000000000000=============== Managing output =============== The ``fab`` tool is very verbose by default and prints out almost everything it can, including the remote end's stderr and stdout streams, the command strings being executed, and so forth. While this is necessary in many cases in order to know just what's going on, any nontrivial Fabric task will quickly become difficult to follow as it runs. Output levels ============= To aid in organizing task output, Fabric output is grouped into a number of non-overlapping levels or groups, each of which may be turned on or off independently. This provides flexible control over what is displayed to the user. .. note:: All levels, save for ``debug`` and ``exceptions``, are on by default. Standard output levels ---------------------- The standard, atomic output levels/groups are as follows: * **status**: Status messages, i.e. noting when Fabric is done running, if the user used a keyboard interrupt, or when servers are disconnected from. These messages are almost always relevant and rarely verbose. * **aborts**: Abort messages. Like status messages, these should really only be turned off when using Fabric as a library, and possibly not even then. Note that even if this output group is turned off, aborts will still occur -- there just won't be any output about why Fabric aborted! * **warnings**: Warning messages. These are often turned off when one expects a given operation to fail, such as when using ``grep`` to test existence of text in a file. If paired with setting ``env.warn_only`` to True, this can result in fully silent warnings when remote programs fail. As with ``aborts``, this setting does not control actual warning behavior, only whether warning messages are printed or hidden. * **running**: Printouts of commands being executed or files transferred, e.g. ``[myserver] run: ls /var/www``. Also controls printing of tasks being run, e.g. ``[myserver] Executing task 'foo'``. * **stdout**: Local, or remote, stdout, i.e. non-error output from commands. * **stderr**: Local, or remote, stderr, i.e. error-related output from commands. * **user**: User-generated output, i.e. local output printed by fabfile code via use of the `~fabric.utils.fastprint` or `~fabric.utils.puts` functions. .. versionchanged:: 0.9.2 Added "Executing task" lines to the ``running`` output level. .. versionchanged:: 0.9.2 Added the ``user`` output level. Debug output ------------ There are two more atomic output levels for use when troubleshooting: ``debug``, which behaves slightly differently from the rest, and ``exceptions``, whose behavior is included in ``debug`` but may be enabled separately. * **debug**: Turn on debugging (which is off by default.) 
Currently, this is largely used to view the "full" commands being run; take for example this `~fabric.operations.run` call:: run('ls "/home/username/Folder Name With Spaces/"') Normally, the ``running`` line will show exactly what is passed into `~fabric.operations.run`, like so:: [hostname] run: ls "/home/username/Folder Name With Spaces/" With ``debug`` on, and assuming you've left :ref:`shell` set to ``True``, you will see the literal, full string as passed to the remote server:: [hostname] run: /bin/bash -l -c "ls \"/home/username/Folder Name With Spaces\"" Enabling ``debug`` output will also display full Python tracebacks during aborts (as if ``exceptions`` output was enabled). .. note:: Where modifying other pieces of output (such as in the above example where it modifies the 'running' line to show the shell and any escape characters), this setting takes precedence over the others; so if ``running`` is False but ``debug`` is True, you will still be shown the 'running' line in its debugging form. * **exceptions**: Enables display of tracebacks when exceptions occur; intended for use when ``debug`` is set to ``False`` but one is still interested in detailed error info. .. versionchanged:: 1.0 Debug output now includes full Python tracebacks during aborts. .. versionchanged:: 1.11 Added the ``exceptions`` output level. .. _output-aliases: Output level aliases -------------------- In addition to the atomic/standalone levels above, Fabric also provides a couple of convenience aliases which map to multiple other levels. These may be referenced anywhere the other levels are referenced, and will effectively toggle all of the levels they are mapped to. * **output**: Maps to both ``stdout`` and ``stderr``. Useful for when you only care to see the 'running' lines and your own print statements (and warnings). * **everything**: Includes ``warnings``, ``running``, ``user`` and ``output`` (see above.) Thus, when turning off ``everything``, you will only see a bare minimum of output (just ``status`` and ``debug`` if it's on), along with your own print statements. * **commands**: Includes ``stdout`` and ``running``. Good for hiding non-erroring commands entirely, while still displaying any stderr output. .. versionchanged:: 1.4 Added the ``commands`` output alias. Hiding and/or showing output levels =================================== You may toggle any of Fabric's output levels in a number of ways; for examples, please see the API docs linked in each bullet point: * **Direct modification of fabric.state.output**: `fabric.state.output` is a dictionary subclass (similar to :doc:`env `) whose keys are the output level names, and whose values are either True (show that particular type of output) or False (hide it.) `fabric.state.output` is the lowest-level implementation of output levels and is what Fabric's internals reference when deciding whether or not to print their output. * **Context managers**: `~fabric.context_managers.hide` and `~fabric.context_managers.show` are twin context managers that take one or more output level names as strings, and either hide or show them within the wrapped block. As with Fabric's other context managers, the prior values are restored when the block exits. .. seealso:: `~fabric.context_managers.settings`, which can nest calls to `~fabric.context_managers.hide` and/or `~fabric.context_managers.show` inside itself. 
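  For instance, a minimal sketch of nesting both inside `~fabric.context_managers.settings` (the task body is hypothetical)::

      from fabric.api import hide, run, settings, show

      def migrate():
          # Silence the 'running' lines and stdout, but force stderr
          # back on so any failures remain visible.
          with settings(hide('running', 'stdout'), show('stderr')):
              run("./manage.py migrate")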
* **Command-line arguments**: You may use the :option:`--hide` and/or :option:`--show` arguments to :doc:`fab`, which behave exactly like the context managers of the same names (but are, naturally, globally applied) and take comma-separated strings as input. Prefix output ============= By default Fabric prefixes every line of output with either ``[hostname] out:`` or ``[hostname] err:``. Those prefixes may be hidden by setting ``env.output_prefix`` to ``False``.fabric-1.14.0/sites/docs/usage/parallel.rst000066400000000000000000000122111315011462000205220ustar00rootroot00000000000000================== Parallel execution ================== .. _parallel-execution: .. versionadded:: 1.3 By default, Fabric executes all specified tasks **serially** (see :ref:`execution-strategy` for details.) This document describes Fabric's options for running tasks on multiple hosts in **parallel**, via per-task decorators and/or global command-line switches. What it does ============ Because Fabric 1.x is not fully threadsafe (and because in general use, task functions do not typically interact with one another) this functionality is implemented via the Python `multiprocessing `_ module. It creates one new process for each host and task combination, optionally using a (configurable) sliding window to prevent too many processes from running at the same time. For example, imagine a scenario where you want to update Web application code on a number of Web servers, and then reload the servers once the code has been distributed everywhere (to allow for easier rollback if code updates fail.) One could implement this with the following fabfile:: from fabric.api import * def update(): with cd("/srv/django/myapp"): run("git pull") def reload(): sudo("service apache2 reload") and execute it on a set of 3 servers, in serial, like so:: $ fab -H web1,web2,web3 update reload Normally, without any parallel execution options activated, Fabric would run in order: #. ``update`` on ``web1`` #. ``update`` on ``web2`` #. ``update`` on ``web3`` #. ``reload`` on ``web1`` #. ``reload`` on ``web2`` #. ``reload`` on ``web3`` With parallel execution activated (via :option:`-P` -- see below for details), this turns into: #. ``update`` on ``web1``, ``web2``, and ``web3`` #. ``reload`` on ``web1``, ``web2``, and ``web3`` Hopefully the benefits of this are obvious -- if ``update`` took 5 seconds to run and ``reload`` took 2 seconds, serial execution takes (5+2)*3 = 21 seconds to run, while parallel execution takes only a third of the time, (5+2) = 7 seconds on average. How to use it ============= Decorators ---------- Since the minimum "unit" that parallel execution affects is a task, the functionality may be enabled or disabled on a task-by-task basis using the `~fabric.decorators.parallel` and `~fabric.decorators.serial` decorators. For example, this fabfile:: from fabric.api import * @parallel def runs_in_parallel(): pass def runs_serially(): pass when run in this manner:: $ fab -H host1,host2,host3 runs_in_parallel runs_serially will result in the following execution sequence: #. ``runs_in_parallel`` on ``host1``, ``host2``, and ``host3`` #. ``runs_serially`` on ``host1`` #. ``runs_serially`` on ``host2`` #. ``runs_serially`` on ``host3`` Command-line flags ------------------ One may also force all tasks to run in parallel by using the command-line flag :option:`-P` or the env variable :ref:`env.parallel `. However, any task specifically wrapped with `~fabric.decorators.serial` will ignore this setting and continue to run serially.
For example, the following fabfile will result in the same execution sequence as the one above:: from fabric.api import * def runs_in_parallel(): pass @serial def runs_serially(): pass when invoked like so:: $ fab -H host1,host2,host3 -P runs_in_parallel runs_serially As before, ``runs_in_parallel`` will run in parallel, and ``runs_serially`` in sequence. Bubble size =========== With large host lists, a user's local machine can get overwhelmed by running too many concurrent Fabric processes. Because of this, you may opt to use a moving bubble approach that limits Fabric to a specific number of concurrently active processes. By default, no bubble is used and all hosts are run in one concurrent pool. You can override this on a per-task level by specifying the ``pool_size`` keyword argument to `~fabric.decorators.parallel`, or globally via :option:`-z`. For example, to run on 5 hosts at a time:: from fabric.api import * @parallel(pool_size=5) def heavy_task(): # lots of heavy local lifting or lots of IO here Or skip the ``pool_size`` kwarg and instead:: $ fab -P -z 5 heavy_task .. _linewise-output: Linewise vs bytewise output =========================== Fabric's default mode of printing to the terminal is byte-by-byte, in order to support :doc:`/usage/interactivity`. This often gives poor results when running in parallel mode, as the multiple processes may write to your terminal's standard out stream simultaneously. To help offset this problem, Fabric's option for linewise output is automatically enabled whenever parallelism is active. This will cause you to lose most of the benefits of Fabric's remote interactivity features (outlined in the link above), but as those do not map well to parallel invocations, it's typically a fair trade. There's no way to avoid the multiple processes mixing up on a line-by-line basis, but you will at least be able to tell them apart by the host-string line prefix. .. note:: Future versions will add improved logging support to make troubleshooting parallel runs easier. fabric-1.14.0/sites/docs/usage/ssh.rst000066400000000000000000000061471315011462000175360ustar00rootroot00000000000000============ SSH behavior ============ Fabric currently makes use of a pure-Python SSH re-implementation for managing connections, meaning that there are occasionally spots where it is limited by that library's capabilities. Below are areas of note where Fabric will exhibit behavior that isn't consistent with, or as flexible as, the behavior of the ``ssh`` command-line program. Unknown hosts ============= SSH's host key tracking mechanism keeps tabs on all the hosts you attempt to connect to, and maintains a ``~/.ssh/known_hosts`` file with mappings between identifiers (IP address, sometimes with a hostname as well) and SSH keys. (For details on how this works, please see the `OpenSSH documentation `_.) The ``paramiko`` library is capable of loading up your ``known_hosts`` file, and will then compare any host it connects to, with that mapping. Settings are available to determine what happens when an unknown host (a host whose hostname or IP address is not found in ``known_hosts``) is seen: * **Reject**: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown. * **Add**: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does **not** modify your on-disk ``known_hosts`` file!
* **Ask**: not yet implemented at the Fabric level, this is a ``paramiko`` library option which would result in the user being prompted about the unknown key and whether to accept it. Whether to reject or add hosts, as above, is controlled in Fabric via the :ref:`env.reject_unknown_hosts ` option, which is False by default for convenience's sake. We feel this is a valid tradeoff between convenience and security; anyone who feels otherwise can easily modify their fabfiles at module level to set ``env.reject_unknown_hosts = True``. Known hosts with changed keys ============================= The point of SSH's key/fingerprint tracking is so that man-in-the-middle attacks can be detected: if an attacker redirects your SSH traffic to a computer under their control, and pretends to be your original destination server, the host keys will not match. Thus, the default behavior of SSH (and its Python implementation) is to immediately abort the connection when a host previously recorded in ``known_hosts`` suddenly starts sending us a different host key. In some edge cases such as some EC2 deployments, you may want to ignore this potential problem. Our SSH layer, at the time of writing, doesn't give us control over this exact behavior, but we can sidestep it by simply skipping the loading of ``known_hosts`` -- if the host list being compared to is empty, then there's no problem. Set :ref:`env.disable_known_hosts ` to True when you want this behavior; it is False by default, in order to preserve default SSH behavior. .. warning:: Enabling :ref:`env.disable_known_hosts ` will leave you wide open to man-in-the-middle attacks! Please use with caution. fabric-1.14.0/sites/docs/usage/tasks.rst000066400000000000000000000427741315011462000200720ustar00rootroot00000000000000============== Defining tasks ============== As of Fabric 1.1, there are two distinct methods you may use in order to define which objects in your fabfile show up as tasks: * The "new" method starting in 1.1 considers instances of `~fabric.tasks.Task` or its subclasses, and also descends into imported modules to allow building nested namespaces. * The "classic" method from 1.0 and earlier considers all public callable objects (functions, classes, etc.), but only those in the fabfile itself, with no recursing into imported modules. .. note:: These two methods are **mutually exclusive**: if Fabric finds *any* new-style task objects in your fabfile or in modules it imports, it will assume you've committed to this method of task declaration and won't consider any non-`~fabric.tasks.Task` callables. If *no* new-style tasks are found, it reverts to the classic behavior. The rest of this document explores these two methods in detail. .. note:: To see exactly what tasks in your fabfile may be executed via ``fab``, use :option:`fab --list <-l>`. .. _new-style-tasks: New-style tasks =============== Fabric 1.1 introduced the `~fabric.tasks.Task` class to facilitate new features and enable some programming best practices, specifically: * **Object-oriented tasks**. Inheritance and all that comes with it can make for much more sensible code reuse than passing around simple function objects. The classic style of task declaration didn't entirely rule this out, but it also didn't make it terribly easy. * **Namespaces**. Having an explicit method of declaring tasks makes it easier to set up recursive namespaces without e.g.
polluting your task list with the contents of Python's ``os`` module (which would show up as valid "tasks" under the classic methodology.) With the introduction of `~fabric.tasks.Task`, there are two ways to set up new tasks: * Decorate a regular module level function with `@task `, which transparently wraps the function in a `~fabric.tasks.Task` subclass. The function name will be used as the task name when invoking. * Subclass `~fabric.tasks.Task` (`~fabric.tasks.Task` itself is intended to be abstract), define a ``run`` method, and instantiate your subclass at module level. Instances' ``name`` attributes are used as the task name; if omitted the instance's variable name will be used instead. Use of new-style tasks also allows you to set up :ref:`namespaces `. .. _task-decorator: The ``@task`` decorator ----------------------- The quickest way to make use of new-style task features is to wrap basic task functions with `@task `:: from fabric.api import task, run @task def mytask(): run("a command") When this decorator is used, it signals to Fabric that *only* functions wrapped in the decorator are to be loaded up as valid tasks. (When not present, :ref:`classic-style task ` behavior kicks in.) .. _task-decorator-arguments: Arguments ~~~~~~~~~ `@task ` may also be called with arguments to customize its behavior. Any arguments not documented below are passed into the constructor of the ``task_class`` being used, with the function itself as the first argument (see :ref:`task-decorator-and-classes` for details.) * ``task_class``: The `~fabric.tasks.Task` subclass used to wrap the decorated function. Defaults to `~fabric.tasks.WrappedCallableTask`. * ``aliases``: An iterable of string names which will be used as aliases for the wrapped function. See :ref:`task-aliases` for details. * ``alias``: Like ``aliases`` but taking a single string argument instead of an iterable. If both ``alias`` and ``aliases`` are specified, ``aliases`` will take precedence. * ``default``: A boolean value determining whether the decorated task also stands in for its containing module as a task name. See :ref:`default-tasks`. * ``name``: A string setting the name this task appears as to the command-line interface. Useful for task names that would otherwise shadow Python builtins (which is technically legal but frowned upon and bug-prone.) .. _task-aliases: Aliases ~~~~~~~ Here's a quick example of using the ``alias`` keyword argument to facilitate use of both a longer human-readable task name, and a shorter name which is quicker to type:: from fabric.api import task @task(alias='dwm') def deploy_with_migrations(): pass Calling :option:`--list <-l>` on this fabfile would show both the original ``deploy_with_migrations`` and its alias ``dwm``:: $ fab --list Available commands: deploy_with_migrations dwm When more than one alias for the same function is needed, simply swap in the ``aliases`` kwarg, which takes an iterable of strings instead of a single string. .. _default-tasks: Default tasks ~~~~~~~~~~~~~ In a similar manner to :ref:`aliases `, it's sometimes useful to designate a given task within a module as the "default" task, which may be called by referencing *just* the module name. This can save typing and/or allow for neater organization when there's a single "main" task and a number of related tasks or subroutines. 
For example, a ``deploy`` submodule might contain tasks for provisioning new servers, pushing code, migrating databases, and so forth -- but it'd be very convenient to highlight a task as the default "just deploy" action. Such a ``deploy.py`` module might look like this:: from fabric.api import task @task def migrate(): pass @task def push(): pass @task def provision(): pass @task def full_deploy(): if not provisioned: provision() push() migrate() With the following task list (assuming a simple top level ``fabfile.py`` that just imports ``deploy``):: $ fab --list Available commands: deploy.full_deploy deploy.migrate deploy.provision deploy.push Calling ``deploy.full_deploy`` on every deploy could get kind of old, or somebody new to the team might not be sure if that's really the right task to run. Using the ``default`` kwarg to `@task `, we can tag e.g. ``full_deploy`` as the default task:: @task(default=True) def full_deploy(): pass Doing so updates the task list like so:: $ fab --list Available commands: deploy deploy.full_deploy deploy.migrate deploy.provision deploy.push Note that ``full_deploy`` still exists as its own explicit task -- but now ``deploy`` shows up as a sort of top level alias for ``full_deploy``. If multiple tasks within a module have ``default=True`` set, the last one to be loaded (typically the one lowest down in the file) will take precedence. Top-level default tasks ~~~~~~~~~~~~~~~~~~~~~~~ Using ``@task(default=True)`` in the top level fabfile will cause the denoted task to execute when a user invokes ``fab`` without any task names (similar to e.g. ``make``.) When using this shortcut, it is not possible to specify arguments to the task itself -- use a regular invocation of the task if this is necessary. .. _task-subclasses: ``Task`` subclasses ------------------- If you're used to :ref:`classic-style tasks `, an easy way to think about `~fabric.tasks.Task` subclasses is that their ``run`` method is directly equivalent to a classic task; its arguments are the task arguments (other than ``self``) and its body is what gets executed. For example, this new-style task:: class MyTask(Task): name = "deploy" def run(self, environment, domain="whatever.com"): run("git clone foo") sudo("service apache2 restart") instance = MyTask() is exactly equivalent to this function-based task:: @task def deploy(environment, domain="whatever.com"): run("git clone foo") sudo("service apache2 restart") Note how we had to instantiate an instance of our class; that's simply normal Python object-oriented programming at work. While it's a small bit of boilerplate right now -- for example, Fabric doesn't care about the name you give the instantiation, only the instance's ``name`` attribute -- it's well worth the benefit of having the power of classes available. We plan to extend the API in the future to make this experience a bit smoother. .. _task-decorator-and-classes: Using custom subclasses with ``@task`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It's possible to marry custom `~fabric.tasks.Task` subclasses with `@task `. This may be useful in cases where your core execution logic doesn't do anything class/object-specific, but you want to take advantage of class metaprogramming or similar techniques. Specifically, any `~fabric.tasks.Task` subclass which is designed to take in a callable as its first constructor argument (as the built-in `~fabric.tasks.WrappedCallableTask` does) may be specified as the ``task_class`` argument to `@task `. 
Fabric will automatically instantiate a copy of the given class, passing in the wrapped function as the first argument. All other args/kwargs given to the decorator (besides the "special" arguments documented in :ref:`task-decorator-arguments`) are added afterwards. Here's a brief and somewhat contrived example to make this obvious:: from fabric.api import task from fabric.tasks import Task class CustomTask(Task): def __init__(self, func, myarg, *args, **kwargs): super(CustomTask, self).__init__(*args, **kwargs) self.func = func self.myarg = myarg def run(self, *args, **kwargs): return self.func(*args, **kwargs) @task(task_class=CustomTask, myarg='value', alias='at') def actual_task(): pass When this fabfile is loaded, a copy of ``CustomTask`` is instantiated, effectively calling:: task_obj = CustomTask(actual_task, myarg='value') Note how the ``alias`` kwarg is stripped out by the decorator itself and never reaches the class instantiation; this is identical in function to how :ref:`command-line task arguments ` work. .. _namespaces: Namespaces ---------- With :ref:`classic tasks `, fabfiles were limited to a single, flat set of task names with no real way to organize them. In Fabric 1.1 and newer, if you declare tasks the new way (via `@task ` or your own `~fabric.tasks.Task` subclass instances) you may take advantage of **namespacing**: * Any module objects imported into your fabfile will be recursed into, looking for additional task objects. * Within submodules, you may control which objects are "exported" by using the standard Python ``__all__`` module-level variable name (though they should still be valid new-style task objects.) * These tasks will be given new dotted-notation names based on the modules they came from, similar to Python's own import syntax. Let's build up a fabfile package from simple to complex and see how this works. Basic ~~~~~ We start with a single ``__init__.py`` containing a few tasks (the Fabric API import omitted for brevity):: @task def deploy(): ... @task def compress(): ... The output of ``fab --list`` would look something like this:: deploy compress There's just one namespace here: the "root" or global namespace. Looks simple now, but in a real-world fabfile with dozens of tasks, it can get difficult to manage. Importing a submodule ~~~~~~~~~~~~~~~~~~~~~ As mentioned above, Fabric will examine any imported module objects for tasks, regardless of where that module exists on your Python import path. For now we just want to include our own, "nearby" tasks, so we'll make a new submodule in our package for dealing with, say, load balancers -- ``lb.py``:: @task def add_backend(): ... And we'll add this to the top of ``__init__.py``:: import lb Now ``fab --list`` shows us:: deploy compress lb.add_backend Again, with only one task in its own submodule, it looks kind of silly, but the benefits should be pretty obvious. Going deeper ~~~~~~~~~~~~ Namespacing isn't limited to just one level. Let's say we had a larger setup and wanted a namespace for database related tasks, with additional differentiation inside that. We make a sub-package named ``db/`` and inside it, a ``migrations.py`` module:: @task def list(): ... @task def run(): ...
We need to make sure that this module is visible to anybody importing ``db``, so we add it to the sub-package's ``__init__.py``:: import migrations As a final step, we import the sub-package into our root-level ``__init__.py``, so now its first few lines look like this:: import lb import db After all that, our file tree looks like this:: . ├── __init__.py ├── db │   ├── __init__.py │   └── migrations.py └── lb.py and ``fab --list`` shows:: deploy compress lb.add_backend db.migrations.list db.migrations.run We could also have specified (or imported) tasks directly into ``db/__init__.py``, and they would show up under the ``db.`` prefix, as you might expect. Limiting with ``__all__`` ~~~~~~~~~~~~~~~~~~~~~~~~~ You may limit what Fabric "sees" when it examines imported modules, by using the Python convention of a module level ``__all__`` variable (a list of variable names.) If we didn't want the ``db.migrations.run`` task to show up by default for some reason, we could add this to the top of ``db/migrations.py``:: __all__ = ['list'] Note the lack of ``'run'`` there. You could, if needed, import ``run`` directly into some other part of the hierarchy, but otherwise it'll remain hidden. Switching it up ~~~~~~~~~~~~~~~ We've been keeping our fabfile package neatly organized and importing it in a straightforward manner, but the filesystem layout doesn't actually matter here. All Fabric's loader cares about is the names the modules are given when they're imported. For example, if we changed the top of our root ``__init__.py`` to look like this:: import db as database Our task list would change thusly:: deploy compress lb.add_backend database.migrations.list database.migrations.run This applies to any other import -- you could import third party modules into your own task hierarchy, or grab a deeply nested module and make it appear near the top level. Nested list output ~~~~~~~~~~~~~~~~~~ As a final note, we've been using the default Fabric :option:`--list <-l>` output during this section -- it makes it more obvious what the actual task names are. However, you can get a more nested or tree-like view by passing ``nested`` to the :option:`--list-format <-F>` option:: $ fab --list-format=nested --list Available commands (remember to call as module.[...].task): deploy compress lb: add_backend database: migrations: list run While it slightly obfuscates the "real" task names, this view provides a handy way of noting the organization of tasks in large namespaces. .. _classic-tasks: Classic tasks ============= When no new-style `~fabric.tasks.Task`-based tasks are found, Fabric will consider any callable object found in your fabfile, **except** the following: * Callables whose name starts with an underscore (``_``). In other words, Python's usual "private" convention holds true here. * Callables defined within Fabric itself. Fabric's own functions such as `~fabric.operations.run` and `~fabric.operations.sudo` will not show up in your task list. Imports ------- Python's ``import`` statement effectively includes the imported objects in your module's namespace. Since Fabric's fabfiles are just Python modules, this means that imports are also considered as possible classic-style tasks, alongside anything defined in the fabfile itself. .. note:: This only applies to imported *callable objects* -- not modules. Imported modules only come into play if they contain :ref:`new-style tasks `, at which point this section no longer applies.
Because of this, we strongly recommend that you use the ``import module`` form of importing, followed by ``module.callable()``, which will result in a cleaner fabfile API than doing ``from module import callable``. For example, here's a sample fabfile which uses ``urllib.urlopen`` to get some data out of a webservice:: from urllib import urlopen from fabric.api import run def webservice_read(): objects = urlopen('http://my/web/service/?foo=bar').read().split() print(objects) This looks simple enough, and will run without error. However, look what happens if we run :option:`fab --list <-l>` on this fabfile:: $ fab --list Available commands: webservice_read List some directories. urlopen urlopen(url [, data]) -> open file-like object Our fabfile of only one task is showing two "tasks", which is bad enough, and an unsuspecting user might accidentally try to call ``fab urlopen``, which probably won't work very well. Imagine any real-world fabfile, which is likely to be much more complex, and hopefully you can see how this could get messy fast. For reference, here's the recommended way to do it:: import urllib from fabric.api import run def webservice_read(): objects = urllib.urlopen('http://my/web/service/?foo=bar').read().split() print(objects) It's a simple change, but it'll make anyone using your fabfile a bit happier. fabric-1.14.0/sites/shared_conf.py000066400000000000000000000017541315011462000167770ustar00rootroot00000000000000from os.path import join from datetime import datetime import alabaster # Alabaster theme + mini-extension html_theme_path = [alabaster.get_path()] extensions = ['alabaster'] # Paths relative to invoking conf.py - not this shared file html_static_path = [join('..', '_shared_static')] html_theme = 'alabaster' html_theme_options = { 'logo': 'logo.png', 'logo_name': True, 'logo_text_align': 'center', 'description': "Pythonic remote execution", 'github_user': 'fabric', 'github_repo': 'fabric', 'travis_button': True, 'analytics_id': 'UA-18486793-1', 'link': '#3782BE', 'link_hover': '#3782BE', } html_sidebars = { '**': [ 'about.html', 'navigation.html', 'searchbox.html', 'donate.html', ] } # Regular settings project = 'Fabric' year = datetime.now().year copyright = '%d Jeff Forcier' % year master_doc = 'index' templates_path = ['_templates'] exclude_trees = ['_build'] source_suffix = '.rst' default_role = 'obj' fabric-1.14.0/sites/www/000077500000000000000000000000001315011462000147675ustar00rootroot00000000000000fabric-1.14.0/sites/www/changelog.rst000066400000000000000000001402231315011462000174520ustar00rootroot00000000000000========= Changelog ========= * :release:`1.14.0 <2017-08-25>` * :feature:`1475` Honor ``env.timeout`` when opening new remote sessions (as opposed to the initial overall connection, which already honored timeout settings.) Thanks to ``@EugeniuZ`` for the report & ``@jrmsgit`` for the first draft of the patch. .. note:: This feature only works with Paramiko 1.14.3 and above; if your Paramiko version is older, no timeout can be set, and the previous behavior will occur instead. * :release:`1.13.2 <2017-04-24>` * :release:`1.12.2 <2017-04-24>` * :bug:`1542` (via :issue:`1543`) Catch Paramiko-level gateway connection errors (``ChannelError``) when raising ``NetworkError``; this prevents an issue where gateway related issues were being treated as authentication errors. Thanks to Charlie Stanley for catch & patch. 
* :bug:`1555` Multiple simultaneous `~fabric.operations.get` and/or `~fabric.operations.put` with ``use_sudo=True`` and for the same remote host and path could fail unnecessarily. Thanks ``@arnimarj`` for the report and Pierce Lopez for the patch. * :bug:`1427` (via :issue:`1428`) Locate ``.pyc`` files when searching for fabfiles to load; previously we only used the presence of ``.py`` files to determine whether loading should be attempted. Credit: Ray Chen. * :bug:`1294` fix text escaping for `~fabric.contrib.files.contains` and `~fabric.contrib.files.append` which would fail if the text contained e.g. ``>``. Thanks to ``@ecksun`` for report & Pierce Lopez for the patch. * :support:`1065 backported` Fix incorrect SSH config reference in the docs for ``env.keepalive``; it corresponds to ``ServerAliveInterval``, not ``ClientAliveInterval``. Credit: Harry Percival. * :bug:`1574` `~fabric.contrib.project.upload_project` failed for folder in current directory specified without any path separator. Thanks ``@aidanmelen`` for the report and Pierce Lopez for the patch. * :support:`1590 backported` Replace a reference to ``fab`` in a test subprocess, to use the ``python -m `` style instead; this allows ``python setup.py test`` to run the test suite without having Fabric already installed. Thanks to ``@BenSturmfels`` for catch & patch. * :support:`- backported` Backport :issue:`1462` to 1.12.x (was previously only backported to 1.13.x.) * :support:`1416 backported` Add explicit "Python 2 only" note to ``setup.py`` trove classifiers to help signal that fact to various info-gathering tools. Patch courtesy of Gavin Bisesi. * :bug:`1526` Disable use of PTY and shell for a background command execution within `contrib.sed `, preventing a small class of issues on some platforms/environments. Thanks to ``@doflink`` for the report and Pierce Lopez for the final patch. * :support:`1539 backported` Add documentation for :ref:`env.output_prefix `. Thanks ``@jphalip``. * :bug:`1514` Compatibility with Python 2.5 was broken by using the ``format()`` method of a string (only in 1.11+). Report by ``@pedrudehuere``. * :release:`1.13.1 <2016-12-09>` * :bug:`1462` Make a PyCrypto-specific import and method call optional to avoid ``ImportError`` problems under Paramiko 2.x. Thanks to Alex Gaynor for catch & patch! * :release:`1.13.0 <2016-12-09>` * :support:`1461` Update setup requirements to allow Paramiko 2.x, now that it's stable and been out in the wild for some time. Paramiko 1.x still works like it always did; the only change to Paramiko 2 was the backend moving from PyCrypto to Cryptography. .. warning:: If you are upgrading an existing environment, the install dependencies have changed; please see Paramiko's installation docs for details: http://www.paramiko.org/installing.html * :release:`1.12.1 <2016-12-05>` * :release:`1.11.3 <2016-12-05>` * :release:`1.10.5 <2016-12-05>` * :bug:`1470` When using `~fabric.operations.get` with glob expressions, a lack of matches for the glob would result in an empty file named after the glob expression (in addition to raising an error). This has been fixed so the empty file is no longer generated. Thanks to Georgy Kibardin for the catch & initial patch. * :feature:`1495` Update the internals of `~fabric.contrib.files` so its members work with SSH servers running on Windows. Thanks to Hamdi Sahloul for the patch. 
* :support:`1483 backported` (also re: :issue:`1386`, :issue:`1374`, :issue:`1300`) Add :ref:`an FAQ ` about quote problems in remote ``csh`` causing issues with Fabric's shell-wrapping and quote-escaping. Thanks to Michael Radziej for the update. * :support:`1379 backported` (also :issue:`1464`) Clean up a lot of unused imports and similar cruft (many found via ``flake8 --select E4``). Thanks to Mathias Ertl for the original patches. * :bug:`1458` Detect ``known_hosts``-related instances of ``paramiko.SSHException`` and prevent them from being handled like authentication errors (which is the default behavior). This fixes issues with incorrect password prompts or prompt-related exceptions when using ``reject_unknown_hosts`` and encountering missing or bad ``known_hosts`` entries. Thanks to Lukáš Doktor for catch & patch. * :release:`1.12.0 <2016-07-25>` * :release:`1.11.2 <2016-07-25>` * :release:`1.10.4 <2016-07-25>` * :feature:`1491` Implement ``sudo``-specific password caching (:ref:`docs `). This can be used to work around issues where over-eager submission of ``env.password`` at login time causes authentication problems (e.g. during two-factor auth). * :bug:`1447` Fix a relative import in ``fabric.network`` to be correctly/consistently absolute instead. Thanks to ``@bildzeitung`` for catch & patch. * :release:`1.11.1 <2016-04-09>` * :bug:`- (==1.11)` Bumped version to ``1.11.1`` due to apparently accidentally uploading a false ``1.11.0`` to PyPI sometime in the past (PyPI is secure & prevents reusing deleted filenames.) We have no memory of this, but databases don't lie! * :release:`1.11.0 <2016-04-09>` * :release:`1.10.3 <2016-04-09>` * :bug:`1135` (via :issue:`1241`) Modified order of operations in `~fabric.operations.run`/`~fabric.operations.sudo` to apply environment vars before prefixing commands (instead of after). Report by ``@warsamebashir``, patch by Curtis Mattoon. * :feature:`1203` (via :issue:`1240`) Add a ``case_sensitive`` kwarg to `~fabric.contrib.files.contains` (which toggles use of ``egrep -i``). Report by ``@xoul``, patch by Curtis Mattoon. * :feature:`800` Add ``capture_buffer_size`` kwarg to `~fabric.operations.run`/`~fabric.operations.sudo` so users can limit memory usage in situations where subprocesses generate very large amounts of stdout/err. Thanks to Jordan Starcher for the report & Omri Bahumi for an early version of the patchset. * :feature:`1161` Add ``use_sudo`` kwarg to `~fabric.operations.reboot`. Credit: Bryce Verdier. * :support:`943 backported` Tweak ``env.warn_only`` docs to note that it applies to all operations, not just ``run``/``sudo``. Thanks ``@akitada``. * :feature:`932` Add a ``temp_dir`` kwarg to `~fabric.contrib.files.upload_template` which is passed into its inner `~fabric.operations.put` call. Thanks to ``@nburlett`` for the patch. * :support:`1257 backported` Add notes to the usage docs for ``fab`` regarding the program's exit status. Credit: ``@koalaman``. * :feature:`1261` Expose Paramiko's Kerberos functionality as Fabric config vars & command-line options. Thanks to Ramanan Sivaranjan for catch & patch, and to Johannes Löthberg & Michael Bennett for additional testing. * :feature:`1271` Allow users whose fabfiles use `fabric.colors` to disable colorization at runtime by specifying ``FABRIC_DISABLE_COLORS=1`` (or any other non-empty value). Credit: Eric Berg. * :feature:`1326` Make `~fabric.contrib.project.rsync_project` aware of ``env.gateway``, using a ``ProxyCommand`` under the hood. Credit: David Rasch. 
* :support:`1359` Add a more-visible top-level ``CHANGELOG.rst`` pointing users to the actual changelog stored within the Sphinx directory tree. Thanks to Jonathan Vanasco for catch & patch. * :feature:`1388` Expose Jinja's ``keep_trailing_newline`` parameter in `~fabric.contrib.files.upload_template` so users can force template renders to preserve trailing newlines. Thanks to Chen Lei for the patch. * :bug:`1389 major` Gently overhaul SSH port derivation so it's less surprising; previously, any non-default value stored in ``env.port`` was overriding all SSH-config derived values. See the API docs for `~fabric.network.normalize` for details on how it now behaves. Thanks to Harry Weppner for catch & patch. * :support:`1454 backported` Remove use of ``:option:`` directives in the changelog, it's currently broken in modern Sphinx & doesn't seem to have actually functioned on Renaissance-era Sphinx either. * :bug:`1365` (via :issue:`1372`) Classic-style fabfiles (ones not using ``@task``) erroneously included custom exception subclasses when collecting tasks. This is now fixed thanks to ``@mattvonrocketstein``. * :bug:`1348` (via :issue:`1361`) Fix a bug in `~fabric.operations.get` where remote file paths containing Python string formatting escape codes caused an exception. Thanks to ``@natecode`` for the report and Bradley Spink for the fix. * :release:`1.10.2 <2015-06-19>` * :support:`1325` Clarify `~fabric.operations.put` docs re: the ``mode`` argument. Thanks to ``@mjmare`` for the catch. * :bug:`1318` Update functionality added in :issue:`1213` so abort error messages don't get printed twice (once by us, once by ``sys.exit``) but the annotated exception error message is retained. Thanks to Felix Almeida for the report. * :bug:`1305` (also :issue:`1313`) Fix a couple minor issues with the operation of & demo code for the ``JobQueue`` class. Thanks to ``@dioh`` and Horst Gutmann for the report & Cameron Lane for the patch. * :bug:`980` (also :issue:`1312`) Redirect output of ``cd`` to ``/dev/null`` so users enabling bash's ``CDPATH`` (or similar features in other shells) don't have polluted output captures. Thanks to Alex North-Keys for the original report & Steve Ivy for the fix. * :bug:`1289` Fix "NameError: free variable referenced before assignment in enclosing scope". Thanks to ``@SamuelMarks`` for catch & patch. * :bug:`1286` (also :issue:`971`, :issue:`1032`) Recursively unwrap decorators instead of only unwrapping a single decorator level, when obtaining task docstrings. Thanks to Avishai Ish-Shalom for the original report & Max Kovgan for the patch. * :bug:`1273` Fix issue with ssh/config not having a cross-platform default path. Thanks to ``@SamuelMarks`` for catch & patch. * :feature:`1200` Introduced ``exceptions`` output level, so users don't have to deal with the debug output just to see tracebacks. * :support:`1239` Update README to work better under raw docutils so the example code block is highlighted as Python on PyPI (and not just on our Sphinx-driven website). Thanks to Marc Abramowitz. * :release:`1.10.1 <2014-12-19>` * :release:`1.9.2 <2014-12-19>` * :bug:`1201` Don't naively glob all `~fabric.operations.get` targets - only glob actual directories. This avoids incorrectly yielding permission errors in edge cases where a requested file is within a directory lacking the read permission bit. Thanks to Sassa Nf for the original report. 
* :bug:`1019` (also :issue:`1022`, :issue:`1186`) Fix "is a tty" tests in environments where streams (eg ``sys.stdout``) have been replaced with objects lacking a ``.isatty()`` method. Thanks to Miki Tebeka for the original report, Lele Long for a subsequent patch, and Julien Phalip for the final/merged patch. * :support:`1213 backported` Add useful exception message to the implicit ``SystemExit`` raised by Fabric's use of ``sys.exit`` inside the `~fabric.api.abort` function. This allows client code catching ``SystemExit`` to have better introspection into the error. Thanks to Ioannis Panousis. * :bug:`1228` Update the ``CommandTimeout`` class so it has a useful ``str`` instead of appearing blank when caught by Fabric's top level exception handling. Catch & patch from Tomaz Muraus. * :bug:`1180` Fix issue with unicode steam outputs crashing if stream encoding type is None. Thanks to ``@joekiller`` for catch & patch. * :support:`958 backported` Remove the Git SHA portion of our version string generation; it was rarely useful & occasionally caused issues for users with non-Git-based source checkouts. * :support:`1229 backported` Add some missing API doc hyperlink references. Thanks to Tony Narlock. * :bug:`1226` Update `~fabric.operations.get` to ensure that `env.user` has access to tempfiles before changing permissions. Also corrected permissions from 404 to 0400 to match comment. Patch by Curtis Mattoon; original report from Daniel Watkins. * :release:`1.10.0 <2014-09-04>` * :bug:`1188 major` Update `~fabric.operations.local` to close non-pipe file descriptors in the child process so subsequent calls to `~fabric.operations.local` aren't blocked on e.g. already-connected network sockets. Thanks to Tolbkni Kao for catch & patch. * :feature:`700` Added ``use_sudo`` and ``temp_dir`` params to `~fabric.operations.get`. This allows downloading files normally not accessible to the user using ``sudo``. Thanks to Jason Coombs for initial report and to Alex Plugaru for the patch (:issue:`1121`). * :feature:`1098` Add support for dict style roledefs. Thanks to Jonas Lundberg. * :feature:`1090` Add option to skip unknown tasks. Credit goes to Jonas Lundberg. * :feature:`975` Fabric can now be invoked via ``python -m fabric`` in addition to the typical use of the ``fab`` entrypoint. Patch courtesy of Jason Coombs. .. note:: This functionality is only available under Python 2.7. * :release:`1.9.1 <2014-08-06>` * :release:`1.8.5 <2014-08-06>` * :release:`1.7.5 <2014-08-06>` * :bug:`1165` Prevent infinite loop condition when a gateway host is enabled & the same host is in the regular target host list. Thanks to ``@CzBiX`` for catch & patch. * :bug:`1147` Use ``stat`` instead of ``lstat`` when testing directory-ness in the SFTP module. This allows recursive downloads to avoid recursing into symlinks unexpectedly. Thanks to Igor Kalnitsky for the patch. * :bug:`1146` Fix a bug where `~fabric.contrib.files.upload_template` failed to honor ``lcd`` when ``mirror_local_mode`` is ``True``. Thanks to Laszlo Marai for catch & patch. * :bug:`1134` Skip bad hosts when the tasks are executed in parallel. Thanks to Igor Maravić ``@i-maravic``. * :bug:`852` Fix to respect ``template_dir`` for non Jinja2 templates in `~fabric.contrib.files.upload_template`. Thanks to Adam Kowalski for the patch and Alex Plugaru for the initial test case. * :bug:`1096` Encode Unicode text appropriately for its target stream object to avoid issues on non-ASCII systems. Thanks to Toru Uetani for the original patch. 
* :bug:`1059` Update IPv6 support to work with link-local address formats. Fix courtesy of ``@obormot``. * :bug:`1026` Fix a typo preventing quiet operation of `~fabric.contrib.files.is_link`. Caught by ``@dongweiming``. * :bug:`600` Clear out connection caches in full when prepping parallel-execution subprocesses. This avoids corner cases causing hangs/freezes due to client/socket reuse. Thanks to Ruslan Lutsenko for the initial report and Romain Chossart for the suggested fix. * :bug:`1167` Add Jinja to ``test_requires`` in ``setup.py`` for the couple of newish tests that now require it. Thanks to Kubilay Kocak for the catch. * :release:`1.9.0 <2014-06-08>` * :feature:`1078` Add ``.command`` and ``.real_command`` attributes to ``local`` return value. Thanks to Alexander Teves (``@alexanderteves``) and Konrad Hałas (``@konradhalas``). * :feature:`938` Add an env var :ref:`env.effective_roles ` specifying roles used in the currently executing command. Thanks to Piotr Betkier for the patch. * :feature:`1101` Reboot operation now supports custom command. Thanks to Jonas Lejon. * :support:`1106` Fix a misleading/ambiguous example snippet in the ``fab`` usage docs to be clearer. Thanks to ``@zed``. * :release:`1.8.4 <2014-06-08>` * :release:`1.7.4 <2014-06-08>` * :bug:`898` Treat paths that begin with tilde "~" as absolute paths instead of relative. Thanks to Alex Plugaru for the patch and Dan Craig for the suggestion. * :support:`1105 backported` Enhance ``setup.py`` to allow Paramiko 1.13+ under Python 2.6+. Thanks to ``@Arfrever`` for catch & patch. * :release:`1.8.3 <2014-03-21>` * :release:`1.7.3 <2014-03-21>` * :support:`- backported` Modified packaging data to reflect that Fabric requires Paramiko < 1.13 (which dropped Python 2.5 support.) * :feature:`1082` Add ``pty`` passthrough kwarg to `~fabric.contrib.files.upload_template`. * :release:`1.8.2 <2014-02-14>` * :release:`1.7.2 <2014-02-14>` * :bug:`955` Quote directories created as part of ``put``'s recursive directory uploads when ``use_sudo=True`` so directories with shell meta-characters (such as spaces) work correctly. Thanks to John Harris for the catch. * :bug:`917` Correct an issue with ``put(use_sudo=True, mode=xxx)`` where the ``chmod`` was trying to apply to the wrong location. Thanks to Remco (``@nl5887``) for catch & patch. * :bug:`1046` Fix typo preventing use of ProxyCommand in some situations. Thanks to Keith Yang. * :release:`1.8.1 <2013-12-24>` * :release:`1.7.1 <2013-12-24>` * :release:`1.6.4 <2013-12-24>` 956, 957 * :release:`1.5.5 <2013-12-24>` 956, 957 * :bug:`956` Fix pty size detection when running inside Emacs. Thanks to `@akitada` for catch & patch. * :bug:`957` Fix bug preventing use of :ref:`env.gateway ` with targets requiring password authentication. Thanks to Daniel González, `@Bengrunt` and `@adrianbn` for their bug reports. * :feature:`741` Add :ref:`env.prompts ` dictionary, allowing users to set up custom prompt responses (similar to the built-in sudo prompt auto-responder.) Thanks to Nigel Owens and David Halter for the patch. * :bug:`965 major` Tweak IO flushing behavior when in linewise (& thus parallel) mode so interwoven output is less frequent. Thanks to `@akitada` for catch & patch. * :bug:`948` Handle connection failures due to server load and try connecting to hosts a number of times specified in :ref:`env.connection_attempts `.
* :release:`1.8.0 <2013-09-20>` * :feature:`931` Allow overriding of `.abort` behavior via a custom exception-returning callable set as :ref:`env.abort_exception `. Thanks to Chris Rose for the patch. * :support:`984 backported` Make this changelog easier to read! Now with per-release sections, generated automatically from the old timeline source format. * :feature:`910` Added a keyword argument to rsync_project to configure the default options. Thanks to ``@moorepants`` for the patch. * :release:`1.7.0 <2013-07-26>` * :release:`1.6.2 <2013-07-26>` * :feature:`925` Added `contrib.files.is_link <.is_link>`. Thanks to `@jtangas` for the patch. * :feature:`922` Task argument strings are now displayed when using ``fab -d``. Thanks to Kevin Qiu for the patch. * :bug:`912` Leaving ``template_dir`` un-specified when using `.upload_template` in Jinja mode used to cause ``'NoneType' has no attribute 'startswith'`` errors. This has been fixed. Thanks to Erick Yellott for catch & to Erick Yellott + Kevin Williams for patches. * :feature:`924` Add new env var option :ref:`colorize-errors` to enable coloring errors and warnings. Thanks to Aaron Meurer for the patch. * :bug:`593` Non-ASCII character sets in Jinja templates rendered within `.upload_template` would cause ``UnicodeDecodeError`` when uploaded. This has been addressed by encoding as ``utf-8`` prior to upload. Thanks to Sébastien Fievet for the catch. * :feature:`908` Support loading SSH keys from memory. Thanks to Caleb Groom for the patch. * :bug:`171` Added missing cross-references from ``env`` variables documentation to corresponding command-line options. Thanks to Daniel D. Beck for the contribution. * :bug:`884` The password cache feature was not working correctly with password-requiring SSH gateway connections. That's fixed now. Thanks to Marco Nenciarini for the catch. * :feature:`826` Enable sudo extraction of compressed archive via `use_sudo` kwarg in `.upload_project`. Thanks to ``@abec`` for the patch. * :bug:`694 major` Allow users to work around ownership issues in the default remote login directory: add ``temp_dir`` kwarg for explicit specification of which "bounce" folder to use when calling `.put` with ``use_sudo=True``. Thanks to Devin Bayer for the report & Dieter Plaetinck / Jesse Myers for suggesting the workaround. * :bug:`882` Fix a `.get` bug regarding spaces in remote working directory names. Thanks to Chris Rose for catch & patch. * :release:`1.6.1 <2013-05-23>` * :bug:`868` Substantial speedup of parallel tasks by removing an unnecessary blocking timeout in the ``JobQueue`` loop. Thanks to Simo Kinnunen for the patch. * :bug:`328` `.lcd` was no longer being correctly applied to `.upload_template`; this has been fixed. Thanks to Joseph Lawson for the catch. * :feature:`812` Add ``use_glob`` option to `.put` so users trying to upload real filenames containing glob patterns (``*``, ``[`` etc) can disable the default globbing behavior. Thanks to Michael McHugh for the patch. * :bug:`864 major` Allow users to disable Fabric's auto-escaping in `.run`/`.sudo`. Thanks to Christian Long and Michael McHugh for the patch. * :bug:`870` Changes to shell env var escaping highlighted some extraneous and now damaging whitespace in `with path(): <.path>`. This has been removed and a regression test added. * :bug:`871` Use of string mode values in `put(local, remote, mode="NNNN") <.put>` would sometimes cause ``Unsupported operand`` errors. This has been fixed. 
* :bug:`84 major` Fixed problem with missing -r flag in Mac OS X sed version. Thanks to Konrad Hałas for the patch. * :bug:`861` Gracefully handle situations where users give a single string literal to ``env.hosts``. Thanks to Bill Tucker for catch & patch. * :bug:`367` Expand paths with tilde inside (``contrib.files``). Thanks to Konrad Hałas for catch & patch. * :feature:`845 backported` Downstream synchronization option implemented for `~fabric.contrib.project.rsync_project`. Thanks to Antonio Barrero for the patch. * :release:`1.6.0 <2013-03-01>` * :release:`1.5.4 <2013-03-01>` * :bug:`844` Account for SSH config overhaul in Paramiko 1.10 by e.g. updating treatment of ``IdentityFile`` to handle multiple values. **This and related SSH config parsing changes are backwards incompatible**; we are including them in this release because they do fix incorrect, off-spec behavior. * :bug:`843` Ensure string ``pool_size`` values get run through ``int()`` before deriving final result (stdlib ``min()`` has odd behavior here...). Thanks to Chris Kastorff for the catch. * :bug:`839` Fix bug in `~fabric.contrib.project.rsync_project` where IPv6 addresses were not always correctly detected. Thanks to Antonio Barrero for catch & patch. * :bug:`587` Warn instead of aborting when :ref:`env.use_ssh_config ` is True but the configured SSH conf file doesn't exist. This allows multi-user fabfiles to enable SSH config without causing hard stops for users lacking SSH configs. Thanks to Rodrigo Pimentel for the report. * :feature:`821` Add `~fabric.context_managers.remote_tunnel` to allow reverse SSH tunneling (exposing locally-visible network ports to the remote end). Thanks to Giovanni Bajo for the patch. * :feature:`823` Add :ref:`env.remote_interrupt ` which controls whether Ctrl-C is forwarded to the remote end or is captured locally (previously, only the latter behavior was implemented). Thanks to Geert Jansen for the patch. * :release:`1.5.3 <2013-01-28>` * :bug:`806` Force strings given to ``getpass`` during password prompts to be ASCII, to prevent issues on some platforms when Unicode is encountered. Thanks to Alex Louden for the patch. * :bug:`805` Update `~fabric.context_managers.shell_env` to play nice with Windows (7, at least) systems and `~fabric.operations.local`. Thanks to Fernando Macedo for the patch. * :bug:`654` Parallel runs whose sum total of returned data was large (e.g. large return values from the task, or simply a large number of hosts in the host list) were causing frustrating hangs. This has been fixed. * :feature:`402` Attempt to detect stale SSH sessions and reconnect when they arise. Thanks to `@webengineer` for the patch. * :bug:`791` Cast `~fabric.operations.reboot`'s ``wait`` parameter to a numeric type in case the caller submitted a string by mistake. Thanks to Thomas Schreiber for the patch. * :bug:`703 major` Add a ``shell`` kwarg to many methods in `~fabric.contrib.files` to help avoid conflicts with `~fabric.context_managers.cd` and similar. Thanks to `@mikek` for the patch. * :feature:`730` Add :ref:`env.system_known_hosts/--system-known-hosts ` to allow loading a user-specified system-level SSH ``known_hosts`` file. Thanks to Roy Smith for the patch. * :release:`1.5.2 <2013-01-15>` * :feature:`818` Added :ref:`env.eagerly_disconnect ` option to help prevent pile-up of many open connections. * :feature:`706` Added :ref:`env.tasks `, returning list of tasks to be executed by current ``fab`` command.
* :bug:`766` Use the variable name of a new-style ``fabric.tasks.Task`` subclass object when the object name attribute is undefined. Thanks to `@todddeluca` for the patch. * :bug:`604` Fixed wrong treatment of backslashes in put operation when uploading directory tree on Windows. Thanks to Jason Coombs for the catch and `@diresys` & Oliver Janik for the patch. * :bug:`792` The newish `~fabric.context_managers.shell_env` context manager was incorrectly omitted from the ``fabric.api`` import endpoint. This has been remedied. Thanks to Vishal Rana for the catch. * :feature:`735` Add ``ok_ret_codes`` option to ``env`` to allow alternate return codes to be treated as "ok". Thanks to Andy Kraut for the pull request. * :bug:`775` Shell escaping was incorrectly applied to the value of ``$PATH`` updates in our shell environment handling, causing (at the very least) `~fabric.operations.local` binary paths to become inoperable in certain situations. This has been fixed. * :feature:`787` Utilize new Paramiko feature allowing us to skip the use of temporary local files when using file-like objects in `~fabric.operations.get`/`~fabric.operations.put`. * :feature:`249` Allow specification of remote command timeout value by setting :ref:`env.command_timeout `. Thanks to Paul McMillan for suggestion & initial patch. * Added current host string to prompt abort error messages. * :release:`1.5.1 <2012-11-15>` * :bug:`776` Fixed serious-but-non-obvious bug in direct-tcpip driven gatewaying (e.g. that triggered by ``-g`` or ``env.gateway``.) Should work correctly now. * :bug:`771` Sphinx autodoc helper `~fabric.docs.unwrap_tasks` didn't play nice with ``@task(name=xxx)`` in some situations. This has been fixed. * :release:`1.5.0 <2012-11-06>` * :release:`1.4.4 <2012-11-06>` * :feature:`38` (also :issue:`698`) Implement both SSH-level and ``ProxyCommand``-based gatewaying for SSH traffic. (This is distinct from tunneling non-SSH traffic over the SSH connection, which is :issue:`78` and not implemented yet.) * Thanks in no particular order to Erwin Bolwidt, Oskari Saarenmaa, Steven Noonan, Vladimir Lazarenko, Lincoln de Sousa, Valentino Volonghi, Olle Lundberg and Github user `@acrish` for providing the original patches to both Fabric and Paramiko. * :feature:`684 backported` (also :issue:`569`) Update how `~fabric.decorators.task` wraps task functions to preserve additional metadata; this allows decorated functions to play nice with Sphinx autodoc. Thanks to Jaka Hudoklin for catch & patch. * :support:`103` (via :issue:`748`) Long standing Sphinx autodoc issue requiring error-prone duplication of function signatures in our API docs has been fixed. Thanks to Alex Morega for the patch. * :bug:`767 major` Fix (and add test for) regression re: having linewise output automatically activate when parallelism is in effect. Thanks to Alexander Fortin and Dustin McQuay for the bug reports. * :bug:`736 major` Ensure context managers that build env vars play nice with ``contextlib.nested`` by deferring env var reference to entry time, not call time. Thanks to Matthew Tretter for catch & patch. * :feature:`763` Add ``--initial-password-prompt`` to allow prefilling the password cache at the start of a run. Great for sudo-powered parallel runs. * :feature:`665` (and #629) Update `~fabric.contrib.files.upload_template` to have a more useful return value, namely that of its internal `~fabric.operations.put` call. Thanks to Miquel Torres for the catch & Rodrigue Alcazar for the patch.
* :feature:`578` Add ``name`` argument to `~fabric.decorators.task` (:ref:`docs `) to allow overriding of the default "function name is task name" behavior. Thanks to Daniel Simmons for catch & patch. * :feature:`761` Allow advanced users to parameterize ``fabric.main.main()`` to force loading of specific fabfiles. * :bug:`749` Gracefully work around calls to ``fabric.version`` on systems lacking ``/bin/sh`` (which causes an ``OSError`` in ``subprocess.Popen`` calls.) * :feature:`723` Add the ``group=`` argument to `~fabric.operations.sudo`. Thanks to Antti Kaihola for the pull request. * :feature:`725` Updated `~fabric.operations.local` to allow override of which local shell is used. Thanks to Mustafa Khattab. * :bug:`704 major` Fix up a bunch of Python 2.x style ``print`` statements to be forwards compatible. Thanks to Francesco Del Degan for the patch. * :feature:`491` (also :feature:`385`) IPv6 host string support. Thanks to Max Arnold for the patch. * :feature:`699` Allow `name` attribute on file-like objects for get/put. Thanks to Peter Lyons for the pull request. * :bug:`711 major` `~fabric.sftp.get` would fail when filenames had % in their path. Thanks to John Begeman. * :bug:`702 major` `~fabric.operations.require` failed to test for "empty" values in the env keys it checks (e.g. ``require('a-key-whose-value-is-an-empty-list')`` would register a successful result instead of alerting that the value was in fact empty.) This has been fixed, thanks to Rich Schumacher. * :bug:`718` ``isinstance(foo, Bar)`` is used in `~fabric.main` instead of ``type(foo) == Bar`` in order to fix some edge cases. Thanks to Mikhail Korobov. * :bug:`693` Fixed edge case where ``abort`` driven failures within parallel tasks could result in a top level exception (a ``KeyError``) regarding error handling. Thanks to Marcin Kuźmiński for the report. * :support:`681 backported` Fixed outdated docstring for `~fabric.decorators.runs_once` which claimed it would get run multiple times in parallel mode. That behavior was fixed in an earlier release but the docs were not updated. Thanks to Jan Brauer for the catch. * :release:`1.4.3 <2012-07-06>` * :release:`1.3.8 <2012-07-06>` * :feature:`263` Shell environment variable support for `~fabric.operations.run`/`~fabric.operations.sudo` added in the form of the `~fabric.context_managers.shell_env` context manager. Thanks to Oliver Tonnhofer for the original pull request, and to Kamil Kisiel for the final implementation. * :feature:`669` Updates to our Windows compatibility to rely more heavily on cross-platform Python stdlib implementations. Thanks to Alexey Diyan for the patch. * :bug:`671` :ref:`reject-unknown-hosts` sometimes resulted in a password prompt instead of an abort. This has been fixed. Thanks to Roy Smith for the report. * :bug:`659` Update docs to reflect that `~fabric.operations.local` currently honors :ref:`env.path `. Thanks to `@floledermann `_ for the catch. * :bug:`652` Show available commands when aborting on invalid command names. * :support:`651 backported` Added note about nesting ``with`` statements on Python 2.6+. Thanks to Jens Rantil for the patch. * :bug:`649` Don't swallow non-``abort``-driven exceptions in parallel mode. Fabric correctly printed such exceptions, and returned them from `~fabric.tasks.execute`, but did not actually cause the child or parent processes to halt with a nonzero status. This has been fixed.
`~fabric.tasks.execute` now also honors :ref:`env.warn_only ` so users may still opt to call it by hand and inspect the returned exceptions, instead of encountering a hard stop. Thanks to Matt Robenolt for the catch. * :feature:`241` Add the command executed as a ``.command`` attribute to the return value of `~fabric.operations.run`/`~fabric.operations.sudo`. (Also includes a second attribute containing the "real" command executed, including the shell wrapper and any escaping.) * :feature:`646` Allow specification of which local streams to use when `~fabric.operations.run`/`~fabric.operations.sudo` print the remote stdout/stderr, via e.g. ``run("command", stderr=sys.stdout)``. * :support:`645 backported` Update Sphinx docs to work well when run out of a source tarball as opposed to a Git checkout. Thanks again to `@Arfrever` for the catch. * :support:`640 backported` (also :issue:`644`) Update packaging manifest so sdist tarballs include all necessary test & doc files. Thanks to Mike Gilbert and `@Arfrever` for catch & patch. * :feature:`627` Added convenient ``quiet`` and ``warn_only`` keyword arguments to `~fabric.operations.run`/`~fabric.operations.sudo` which are aliases for ``settings(hide('everything'), warn_only=True)`` and ``settings(warn_only=True)``, respectively. (Also added corresponding `context ` `managers `.) Useful for remote program calls which are expected to fail and/or whose output doesn't need to be shown to users. * :feature:`633` Allow users to turn off host list deduping by setting :ref:`env.dedupe_hosts ` to ``False``. This enables running the same task multiple times on a single host, which was previously not possible. * :support:`634 backported` Clarified that `~fabric.context_managers.lcd` does no special handling re: the user's current working directory, and thus relative paths given to it will be relative to ``os.getcwd()``. Thanks to `@techtonik `_ for the catch. * :release:`1.4.2 <2012-05-07>` * :release:`1.3.7 <2012-05-07>` * :bug:`562` Agent forwarding would error out or freeze when multiple uses of the forwarded agent were used per remote invocation (e.g. a single `~fabric.operations.run` command resulting in multiple Git or SVN checkouts.) This has been fixed thanks to Steven McDonald and GitHub user `@lynxis`. * :support:`626 backported` Clarity updates to the tutorial. Thanks to GitHub user `m4z` for the patches. * :bug:`625` `~fabric.context_managers.hide`/`~fabric.context_managers.show` did not correctly restore prior display settings if an exception was raised inside the block. This has been fixed. * :bug:`624` Login password prompts did not always display the username being authenticated for. This has been fixed. Thanks to Nick Zalutskiy for catch & patch. * :bug:`617` Fix the ``clean_revert`` behavior of `~fabric.context_managers.settings` so it doesn't ``KeyError`` for newly created settings keys. Thanks to Chris Streeter for the catch. * :feature:`615` Updated `~fabric.operations.sudo` to honor the new setting :ref:`env.sudo_user ` as a default for its ``user`` kwarg. * :bug:`616` Add port number to the error message displayed upon connection failures. * :bug:`609` (and :issue:`564`) Document and clean up :ref:`env.sudo_prefix ` so it can be more easily modified by users facing uncommon use cases. Thanks to GitHub users `3point2` for the cleanup and `SirScott` for the documentation catch. * :bug:`610` Change detection of ``env.key_filename``'s type (added as part of SSH config support in 1.4) so it supports arbitrary iterables. 
Thanks to Brandon Rhodes for the catch. * :release:`1.4.1 <2012-04-04>` * :release:`1.3.6 <2012-04-04>` * :bug:`608` Add ``capture`` kwarg to `~fabric.contrib.project.rsync_project` to aid in debugging rsync problems. * :bug:`607` Allow `~fabric.operations.local` to display stdout/stderr when it warns/aborts, if it was capturing them. * :bug:`395` Added :ref:`an FAQ entry ` detailing how to handle init scripts which misbehave when a pseudo-tty is allocated. * :bug:`568` `~fabric.tasks.execute` allowed too much of its internal state changes (to variables such as ``env.host_string`` and ``env.parallel``) to persist after execution completed; this caused a number of different incorrect behaviors. `~fabric.tasks.execute` has been overhauled to clean up its own state changes -- while preserving any state changes made by the task being executed. * :bug:`584` `~fabric.contrib.project.upload_project` did not take explicit remote directory location into account when untarring, and now uses `~fabric.context_managers.cd` to address this. Thanks to Ben Burry for the patch. * :bug:`458` `~fabric.decorators.with_settings` did not perfectly match `~fabric.context_managers.settings`, re: ability to inline additional context managers. This has been corrected. Thanks to Rory Geoghegan for the patch. * :bug:`499` `contrib.files.first ` used an outdated function signature in its wrapped `~fabric.contrib.files.exists` call. This has been fixed. Thanks to Massimiliano Torromeo for catch & patch. * :bug:`551` ``--list`` output now detects terminal window size and truncates (or doesn't truncate) accordingly. Thanks to Horacio G. de Oro for the initial pull request. * :bug:`572` Parallel task aborts (as opposed to unhandled exceptions) now correctly print their abort messages instead of tracebacks, and cause the parent process to exit with the correct (nonzero) return code. Thanks to Ian Langworth for the catch. * :bug:`306` Remote paths now use posixpath for a separator. Thanks to Jason Coombs for the patch. * :release:`1.4.0 <2012-02-13>` * :release:`1.3.5 <2012-02-13>` * :release:`1.2.6 <2012-02-13>` * :release:`1.1.8 <2012-02-13>` * :bug:`495` Fixed documentation example showing how to subclass `~fabric.tasks.Task`. Thanks to Brett Haydon for the catch and Mark Merritt for the patch. * :bug:`410` Fixed a bug where using the `~fabric.decorators.task` decorator inside/under another decorator such as `~fabric.decorators.hosts` could cause that task to become invalid when invoked by name (due to how old-style vs new-style tasks are detected.) Thanks to Dan Colish for the initial patch. * :feature:`559` `~fabric.contrib.project.rsync_project` now allows users to append extra SSH-specific arguments to ``rsync``'s ``--rsh`` flag. * :feature:`138` :ref:`env.port ` may now be written to at fabfile module level to set a default nonstandard port number. Previously this value was read-only. * :feature:`3` Fabric can now load a subset of SSH config functionality directly from your local ``~/.ssh/config`` if :ref:`env.use_ssh_config ` is set to ``True``. See :ref:`ssh-config` for details. Thanks to Kirill Pinchuk for the initial patch. * :feature:`12` Added the ability to try connecting multiple times to temporarily-down remote systems, instead of immediately failing. (Default behavior is still to only try once.) See :ref:`env.timeout ` and :ref:`env.connection_attempts ` for controlling both connection timeouts and total number of attempts.
`~fabric.operations.reboot` has also been overhauled (but practically deprecated -- see its updated docs.) * :feature:`474` `~fabric.tasks.execute` now allows you to access the executed task's return values, by itself returning a dictionary whose keys are the host strings executed against. * :bug:`487 major` Overhauled the regular expression escaping performed in `~fabric.contrib.files.append` and `~fabric.contrib.files.contains` to try and handle more corner cases. Thanks to Neilen Marais for the patch. * :support:`532` Reorganized and cleaned up the output of ``fab --help``. * :feature:`8` Added ``--skip-bad-hosts``/:ref:`env.skip_bad_hosts ` option to allow skipping past temporarily down/unreachable hosts. * :feature:`13` Env vars may now be set at runtime via the new ``--set`` command-line flag. * :feature:`506` A new :ref:`output alias `, ``commands``, has been added, which allows hiding remote stdout and local "running command X" output lines. * :feature:`72` SSH agent forwarding support has made it into Fabric's SSH library, and hooks for using it have been added (disabled by default; use ``-A`` or :ref:`env.forward_agent ` to enable.) Thanks to Ben Davis for porting an existing Paramiko patch to `ssh` and providing the necessary tweak to Fabric. * :release:`1.3.4 <2012-01-12>` * :bug:`492` `@parallel ` did not automatically trigger :ref:`linewise output `, as was intended. This has been fixed. Thanks to Brandon Huey for the catch. * :bug:`510` Parallel mode is incompatible with user input, such as password/hostname prompts, and was causing cryptic `Operation not supported by device` errors when such prompts needed to be displayed. This behavior has been updated to cleanly and obviously ``abort`` instead. * :bug:`494` Fixed regression bug affecting some `env` values such as `env.port` under parallel mode. Symptoms included `~fabric.contrib.project.rsync_project` bailing out due to a None port value when run under `@parallel `. Thanks to Rob Terhaar for the report. * :bug:`339` Don't show imported `~fabric.colors` members in ``--list`` output. Thanks to Nick Trew for the report. * :release:`1.3.3 <2011-11-23>` * :release:`1.2.5 <2011-11-23>` * :release:`1.1.7 <2011-11-23>` * :bug:`441` Specifying a task module as a task on the command line no longer blows up but presents the usual "no task by that name" error message instead. Thanks to Mitchell Hashimoto for the catch. * :bug:`475` Allow escaping of equals signs in per-task args/kwargs. * :bug:`450` Improve traceback display when handling ``ImportError`` for dependencies. Thanks to David Wolever for the patches. * :bug:`446` Add QNX to list of secondary-case `~fabric.contrib.files.sed` targets. Thanks to Rodrigo Madruga for the tip. * :bug:`443` `~fabric.contrib.files.exists` didn't expand tildes; now it does. Thanks to Riccardo Magliocchetti for the patch. * :bug:`437` `~fabric.decorators.with_settings` now correctly preserves the wrapped function's docstring and other attributes. Thanks to Eric Buckley for the catch and Luke Plant for the patch. * :bug:`400` Handle corner case of systems where ``pwd.getpwuid`` raises ``KeyError`` for the user's UID instead of returning a valid string. Thanks to Dougal Matthews for the catch. * :bug:`397` Some poorly behaved objects in third party modules triggered exceptions during Fabric's "classic or new-style task?" test. A fix has been added which tries to work around these. 
* :bug:`341` `~fabric.contrib.files.append` incorrectly failed to detect that the line(s) given already existed in files hidden to the remote user, and continued appending every time it ran. This has been fixed. Thanks to Dominique Peretti for the catch and Martin Vilcans for the patch. * :bug:`342` Combining `~fabric.context_managers.cd` with `~fabric.operations.put` and its ``use_sudo`` keyword caused an unrecoverable error. This has been fixed. Thanks to Egor M for the report. * :bug:`482` Parallel mode should imply linewise output; omission of this behavior was an oversight. * :bug:`230` Fix regression re: combo of no fabfile & arbitrary command use. Thanks to Ali Saifee for the catch. * :release:`1.3.2 <2011-11-07>` * :release:`1.2.4 <2011-11-07>` * :release:`1.1.6 <2011-11-07>` * :support:`459 backported` Update our `setup.py` files to note that PyCrypto released 2.4.1, which fixes the setuptools problems. * :support:`467 backported` (also :issue:`468`, :issue:`469`) Handful of documentation clarification tweaks. Thanks to Paul Hoffman for the patches. * :release:`1.3.1 <2011-10-24>` * :bug:`457` Ensured that Fabric fast-fails parallel tasks if any child processes encountered errors. Previously, multi-task invocations would continue to the 2nd, etc task when failures occurred, which does not fit with how Fabric usually behaves. Thanks to Github user ``sdcooke`` for the report and Morgan Goose for the fix. * :release:`1.3.0 <2011-10-23>` * :release:`1.2.3 <2011-10-23>` * :release:`1.1.5 <2011-10-23>` * :release:`1.0.5 <2011-10-23>` * :support:`275` To support an edge use case of the features released in :issue:`19`, and to lay the foundation for :issue:`275`, we have forked Paramiko into the `Python 'ssh' library `_ and changed our dependency to it for Fabric 1.3 and higher. This may have implications for the more uncommon install use cases, and package maintainers, but we hope to iron out any issues as they come up. * :bug:`323` `~fabric.operations.put` forgot how to expand leading tildes in the remote file path. This has been corrected. Thanks to Piet Delport for the catch. * :feature:`21` It is now possible, using the new `~fabric.tasks.execute` API call, to execute task objects (by reference or by name) from within other tasks or in library mode. `~fabric.tasks.execute` honors the other tasks' `~fabric.decorators.hosts`/`~fabric.decorators.roles` decorators, and also supports passing in explicit host and/or role arguments. * :feature:`19` Tasks may now be optionally executed in parallel. Please see the :ref:`parallel execution docs ` for details. Major thanks to Morgan Goose for the initial implementation. * :bug:`182` During display of remote stdout/stderr, Fabric occasionally printed extraneous line prefixes (which in turn sometimes overwrote wrapped text.) This has been fixed. * :bug:`430` Tasks decorated with `~fabric.decorators.runs_once` printed extraneous 'Executing...' status lines on subsequent invocations. This is noisy at best and misleading at worst, and has been corrected. Thanks to Jacob Kaplan-Moss for the report. * :release:`1.2.2 <2011-09-01>` * :release:`1.1.4 <2011-09-01>` * :release:`1.0.4 <2011-09-01>` * :bug:`252` `~fabric.context_managers.settings` would silently fail to set ``env`` values for keys which did not exist outside the context manager block. It now works as expected. Thanks to Will Maier for the catch and suggested solution. * :support:`393 backported` Fixed a typo in an example code snippet in the task docs. Thanks to Hugo Garza for the catch. 
* :bug:`396` ``--shortlist`` broke after the addition of ``--list-format`` and no longer displayed the short list format correctly. This has been fixed. * :bug:`373` Re-added missing functionality preventing :ref:`host exclusion ` from working correctly. * :bug:`303` Updated terminal size detection to correctly skip over non-tty stdout, such as when running ``fab taskname | other_command``. * :release:`1.2.1 <2011-08-21>` * :release:`1.1.3 <2011-08-21>` * :release:`1.0.3 <2011-08-21>` * :bug:`417` :ref:`abort-on-prompts` would incorrectly abort when set to True, even if both password and host were defined. This has been fixed. Thanks to Valerie Ishida for the report. * :support:`416 backported` Updated documentation to reflect move from Redmine to Github. * :bug:`389` Fixed/improved error handling when Paramiko import fails. Thanks to Brian Luft for the catch. * :release:`1.2.0 <2011-07-12>` * :feature:`22` Enhanced `@task ` to add :ref:`aliasing `, :ref:`per-module default tasks `, and :ref:`control over the wrapping task class `. Thanks to Travis Swicegood for the initial work and collaboration. * :bug:`380` Improved unicode support when testing objects for being string-like. Thanks to Jiri Barton for catch & patch. * :support:`382` Experimental overhaul of changelog formatting & process to make supporting multiple lines of development less of a hassle. * :release:`1.1.2 <2011-07-07>` * :release:`1.0.2 <2011-06-24>` fabric-1.14.0/sites/www/conf.py000066400000000000000000000014231315011462000162660ustar00rootroot00000000000000# Obtain shared config values import sys import os from os.path import abspath, join, dirname sys.path.append(abspath(join(dirname(__file__), '..'))) from shared_conf import * # Releases changelog extension extensions.append('releases') releases_github_path = "fabric/fabric" # Intersphinx for referencing API/usage docs extensions.append('sphinx.ext.intersphinx') # Default is 'local' building, but reference the public docs site when building # under RTD. target = join(dirname(__file__), '..', 'docs', '_build') if os.environ.get('READTHEDOCS') == 'True': target = 'http://docs.fabfile.org/en/latest/' intersphinx_mapping = { 'docs': (target, None), } # Sister-site links to API docs html_theme_options['extra_nav_links'] = { "API Docs": 'http://docs.fabfile.org', } fabric-1.14.0/sites/www/contact.rst000066400000000000000000000026551315011462000171640ustar00rootroot00000000000000======= Contact ======= If you've scoured the :ref:`prose ` and :ref:`API ` documentation and still can't find an answer to your question, below are various support resources that should help. We do request that you do at least skim the documentation before posting tickets or mailing list questions, however! Mailing list ------------ The best way to get help with using Fabric is via the `fab-user mailing list `_ (currently hosted at ``nongnu.org``.) The Fabric developers do their best to reply promptly, and the list contains an active community of other Fabric users and contributors as well. Twitter ------- Fabric has an official Twitter account, `@pyfabric `_, which is used for announcements and occasional related news tidbits (e.g. "Hey, check out this neat article on Fabric!"). .. _bugs: Bugs/ticket tracker ------------------- To file new bugs or search existing ones, you may visit Fabric's `Github Issues `_ page. This does require a (free, easy to set up) Github account. .. 
_irc: IRC --- We maintain a semi-official IRC channel at ``#fabric`` on Freenode (``irc://irc.freenode.net``) where the developers and other users may be found. As always with IRC, we can't promise immediate responses, but some folks keep logs of the channel and will try to get back to you when they can. fabric-1.14.0/sites/www/development.rst000066400000000000000000000054021315011462000200440ustar00rootroot00000000000000=========== Development =========== The Fabric development team is headed by `Jeff Forcier `_, aka ``bitprophet``. However, dozens of other developers pitch in by submitting patches and ideas via `GitHub issues and pull requests `_, :ref:`IRC ` or the `mailing list `_. Get the code ============ Please see the :ref:`source-code-checkouts` section of the :doc:`installing` page for details on how to obtain Fabric's source code. Contributing ============ There are a number of ways to get involved with Fabric: * **Use Fabric and send us feedback!** This is both the easiest and arguably the most important way to improve the project -- let us know how you currently use Fabric and how you want to use it. (Please do try to search the `ticket tracker`_ first, though, when submitting feature ideas.) * **Report bugs or submit feature requests.** We follow `contribution-guide.org`_'s guidelines, so please check them out before visiting the `ticket tracker`_. * **Fix bugs or implement features!** Again, follow `contribution-guide.org`_ for details on this process. Regarding the changelog step, our changelog is stored in ``sites/www/changelog.rst``. .. _contribution-guide.org: http://contribution-guide.org .. _ticket tracker: https://github.com/fabric/fabric/issues While we may not always reply promptly, we do try to make time eventually to inspect all contributions and either incorporate them or explain why we don't feel the change is a good fit. Support of older releases ========================= Major and minor releases do not mark the end of the previous line or lines of development: * The two most recent minor release branches will continue to receive critical bugfixes. For example, if 1.1 were the latest minor release, it and 1.0 would get bugfixes, but not 0.9 or earlier; and once 1.2 came out, this window would then only extend back to 1.1. * Depending on the nature of bugs found and the difficulty in backporting them, older release lines may also continue to get bugfixes -- but there's no longer a guarantee of any kind. Thus, if a bug were found in 1.1 that affected 0.9 and could be easily applied, a new 0.9.x version *might* be released. * This policy may change in the future to accommodate more branches, depending on development speed. We hope that this policy will allow us to have a rapid minor release cycle (and thus keep new features coming out frequently) without causing users to feel too much pressure to upgrade right away. At the same time, the backwards compatibility guarantee means that users should still feel comfortable upgrading to the next minor release in order to stay within this sliding support window. fabric-1.14.0/sites/www/faq.rst000066400000000000000000000300061315011462000162670ustar00rootroot00000000000000================================ Frequently Asked Questions (FAQ) ================================ These are some of the most commonly encountered problems or frequently asked questions which we receive from users. 
They aren't intended as a substitute for reading the rest of the documentation, especially the :ref:`usage docs `, so please make sure you check those out if your question is not answered here. Fabric installs but doesn't run! ================================ On systems with old versions of ``setuptools`` (notably OS X Mavericks [10.9] as well as older Linux distribution versions) users frequently have problems running Fabric's binary scripts; this is because these ``setuptools`` are too old to deal with the modern distribution formats Fabric and some of its dependencies may use. One method we've used to recreate this error: * OS X 10.9 using system Python * Pip obtained via e.g. ``sudo easy_install pip`` or ``sudo python get-pip.py`` * ``pip install fabric`` * ``fab [args]`` then results in the following traceback:: Traceback (most recent call last): File "/usr/local/bin/fab", line 5, in from pkg_resources import load_entry_point File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in working_set.require(__requires__) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require needed = self.resolve(parse_requirements(requirements)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve raise DistributionNotFound(req) # XXX put more info here pkg_resources.DistributionNotFound: paramiko>=1.10 The best solution is to obtain a newer ``setuptools`` (which fixes this bug among many others) like so:: $ sudo pip install -U setuptools Uninstalling, then reinstalling Fabric after doing so should fix the issue. Another approach is to tell ``pip`` not to use the ``wheel`` format (make sure you've already uninstalled Fabric and Paramiko beforehand):: $ sudo pip install fabric --no-use-wheel Finally, you may also find success by using a different Python interpreter/ecosystem, such as that provided by `Homebrew `_ (`specific Python doc page `_). How do I dynamically set host lists? ==================================== See :ref:`dynamic-hosts`. How can I run something after my task is done on all hosts? =========================================================== See :ref:`leveraging-execute-return-value`. .. _init-scripts-pty: Init scripts don't work! ======================== Init-style start/stop/restart scripts (e.g. ``/etc/init.d/apache2 start``) sometimes don't like Fabric's allocation of a pseudo-tty, which is active by default. In almost all cases, explicitly calling the command in question with ``pty=False`` works correctly:: sudo("/etc/init.d/apache2 restart", pty=False) If you have no need for interactive behavior and run into this problem frequently, you may want to deactivate pty allocation globally by setting :ref:`env.always_use_pty ` to ``False``. .. _one-shell-per-command: My (``cd``/``workon``/``export``/etc) calls don't seem to work! =============================================================== While Fabric can be used for many shell-script-like tasks, there's a slightly unintuitive catch: each `~fabric.operations.run` or `~fabric.operations.sudo` call has its own distinct shell session. This is required in order for Fabric to reliably figure out, after your command has run, what its standard out/error and return codes were. 
Unfortunately, it means that code like the following doesn't behave as you might assume:: def deploy(): run("cd /path/to/application") run("./update.sh") If that were a shell script, the second `~fabric.operations.run` call would have executed with a current working directory of ``/path/to/application/`` -- but because both commands are run in their own distinct session over SSH, it actually tries to execute ``$HOME/update.sh`` instead (since your remote home directory is the default working directory). A simple workaround is to make use of shell logic operations such as ``&&``, which link multiple expressions together (provided the left hand side executed without error) like so:: def deploy(): run("cd /path/to/application && ./update.sh") Fabric provides a convenient shortcut for this specific use case, in fact: `~fabric.context_managers.cd`. There is also `~fabric.context_managers.prefix` for arbitrary prefix commands. .. note:: You might also get away with an absolute path and skip directory changing altogether:: def deploy(): run("/path/to/application/update.sh") However, this requires that the command in question makes no assumptions about your current working directory! How do I use ``su`` to run commands as another user? ==================================================== This is a special case of :ref:`one-shell-per-command`. As that FAQ explains, commands like ``su`` which are 'stateful' do not work well in Fabric, so workarounds must be used. In the case of running commands as a user distinct from the login user, you have two options: #. Use `~fabric.operations.sudo` with its ``user=`` kwarg, e.g. ``sudo("command", user="otheruser")``. If you want to factor the ``user`` part out of a bunch of commands, use `~fabric.context_managers.settings` to set ``env.sudo_user``:: with settings(sudo_user="otheruser"): sudo("command 1") sudo("command 2") ... #. If your target system cannot use ``sudo`` for some reason, you can still use ``su``, but you need to invoke it in a non-interactive fashion by telling it to run a specific command instead of opening a shell. Typically this is the ``-c`` flag, e.g. ``su otheruser -c "command"``. To run multiple commands in the same ``su -c`` "wrapper", you could e.g. write a wrapper function around `~fabric.operations.run`:: def run_su(command, user="otheruser"): return run('su %s -c "%s"' % (user, command)) Why do I sometimes see ``err: stdin: is not a tty``? ==================================================== This message is typically generated by programs such as ``biff`` or ``mesg`` lurking within your remote user's ``.profile`` or ``.bashrc`` files (or any other such files, including system-wide ones.) Fabric's default mode of operation involves executing the Bash shell in "login mode", which causes these files to be executed. Because Fabric also doesn't bother asking the remote end for a tty by default (as it's not usually necessary) programs fired within your startup files, which expect a tty to be present, will complain -- and thus, stderr output about "stdin is not a tty" or similar. There are multiple ways to deal with this problem: * Find and remove or comment out the offending program call. If the program was not added by you on purpose and is simply a legacy of the operating system, this may be safe to do, and is the simplest approach. * Override ``env.shell`` to remove the ``-l`` flag. This should tell Bash not to load your startup files. 
If you don't depend on the contents of your startup files (such as aliases or whatnot) this may be a good solution. * Pass ``pty=True`` to `run` or `sudo`, which will force allocation of a pseudo-tty on the remote end, and hopefully cause the offending program to be less cranky. .. _faq-daemonize: Why can't I run programs in the background with ``&``? It makes Fabric hang. ============================================================================ Because Fabric executes a shell on the remote end for each invocation of ``run`` or ``sudo`` (:ref:`see also `), backgrounding a process via the shell will not work as expected. Backgrounded processes may still prevent the calling shell from exiting until they stop running, and this in turn prevents Fabric from continuing on with its own execution. The key to fixing this is to ensure that your process' standard pipes are all disassociated from the calling shell, which may be done in a number of ways (listed in order of robustness): * Use a pre-existing daemonization technique if one exists for the program at hand -- for example, calling an init script instead of directly invoking a server binary. * Or leverage a process manager such as ``supervisord``, ``upstart`` or ``systemd`` - such tools let you define what it means to "run" one of your background processes, then issue init-script-like start/stop/restart/status commands. They offer many advantages over classic init scripts as well. * Use ``tmux``, ``screen`` or ``dtach`` to fully detach the process from the running shell; these tools have the benefit of allowing you to reattach to the process later on if needed (though they are more ad-hoc than ``supervisord``-like tools). * You *may* be able to run the program under ``nohup`` or similar "in-shell" tools - however we strongly recommend the prior approaches because ``nohup`` has only worked well for a minority of our users. .. _faq-bash: My remote system doesn't have ``bash`` installed by default, do I need to install ``bash``? =========================================================================================== While Fabric is written with ``bash`` in mind, it's not an absolute requirement. Simply change :ref:`env.shell ` to call your desired shell, and include an argument similar to ``bash``'s ``-c`` argument, which allows us to build shell commands of the form:: /bin/bash -l -c "" where ``/bin/bash -l -c`` is the default value of :ref:`env.shell `. .. note:: The ``-l`` argument specifies a login shell and is not absolutely required, merely convenient in many situations. Some shells lack the option entirely and it may be safely omitted in such cases. A relatively safe baseline is to call ``/bin/sh``, which may call the original ``sh`` binary, or (on some systems) ``csh``, and give it the ``-c`` argument, like so:: from fabric.api import env env.shell = "/bin/sh -c" This has been shown to work on FreeBSD and may work on other systems as well. .. _faq-csh: I use ``csh`` remotely and keep getting errors about ``Unmatched ".``. ====================================================================== If the remote host uses ``csh`` for your login shell, Fabric requires the shell variable ``backslash_quote`` to be set, or else any quote-escaping Fabric does will not work. For example, add the following line to ``~/.cshrc``:: set backslash_quote I'm sometimes incorrectly asked for a passphrase instead of a password.
======================================================================= Due to a bug of sorts in our SSH layer, it's not currently possible for Fabric to always accurately detect the type of authentication needed. We have to try and guess whether we're being asked for a private key passphrase or a remote server password, and in some cases our guess ends up being wrong. The most common such situation is where you, the local user, appear to have an SSH keychain agent running, but the remote server is not able to honor your SSH key, e.g. you haven't yet transferred the public key over or are using an incorrect username. In this situation, Fabric will prompt you with "Please enter passphrase for private key", but the text you enter is actually being sent to the remote end's password authentication. We hope to address this in future releases by modifying a fork of the aforementioned SSH library. Is Fabric thread-safe? ====================== Currently, no, it's not -- the present version of Fabric relies heavily on shared state in order to keep the codebase simple. However, there are definite plans to update its internals so that Fabric may be either threaded or otherwise parallelized so your tasks can run on multiple servers concurrently. fabric-1.14.0/sites/www/index.rst000066400000000000000000000010451315011462000166300ustar00rootroot00000000000000Welcome to Fabric! ================== .. include:: ../../README.rst ---- This website covers project information for Fabric such as the changelog, contribution guidelines, development roadmap, news/blog, and so forth. Detailed usage and API documentation can be found at our code documentation site, `docs.fabfile.org `_. Please see the navigation sidebar to the left to begin. .. toctree:: :hidden: changelog FAQs installing troubleshooting development Roadmap contact fabric-1.14.0/sites/www/installing.rst000066400000000000000000000161631315011462000176740ustar00rootroot00000000000000========== Installing ========== Fabric is best installed via `pip `_ (highly recommended) or `easy_install `_ (older, but still works fine), e.g.:: $ pip install fabric You may also opt to use your operating system's package manager; the package is typically called ``fabric`` or ``python-fabric``. E.g.:: $ sudo apt-get install fabric Advanced users wanting to install a development version may use ``pip`` to grab the latest master branch (as well as the dev version of the Paramiko dependency):: $ pip install -e git+https://github.com/paramiko/paramiko/#egg=paramiko $ pip install -e git+https://github.com/fabric/fabric/#egg=fabric .. warning:: Development installs of Fabric, regardless of whether they involve source checkouts or direct ``pip`` installs, require the development version of Paramiko to be installed beforehand or Fabric's installation may fail. Dependencies ============ In order for Fabric's installation to succeed, you will need three primary pieces of software: * the Python programming language; * the ``setuptools`` packaging/installation library; * and the Python `Paramiko `_ SSH library. Paramiko's dependencies differ significantly between the 1.x and 2.x releases. See the `Paramiko installation docs `_ for more info. and, if using the :ref:`parallel execution mode `: * the `multiprocessing`_ library. If you're using Paramiko 1.12 or above, you will also need an additional dependency for Paramiko: * the `ecdsa `_ library Please read on for important details on these -- there are a few gotchas. Python ------ Fabric requires `Python `_ version 2.5 - 2.7. 
Some caveats and notes about other Python versions: * We are not planning on supporting **Python 2.4** given its age and the number of useful tools in Python 2.5 such as context managers and new modules. That said, the actual amount of 2.5-specific functionality is not prohibitively large, and we would link to -- but not support -- a third-party 2.4-compatible fork. (No such fork exists at this time, to our knowledge.) * Fabric has not yet been tested on **Python 3.x** and is thus likely to be incompatible with that line of development. However, we try to be at least somewhat forward-looking (e.g. using ``print()`` instead of ``print``) and will definitely be porting to 3.x in the future once our dependencies do. setuptools ---------- `Setuptools`_ comes with some Python installations by default; if yours doesn't, you'll need to grab it. In such situations it's typically packaged as ``python-setuptools``, ``py25-setuptools`` or similar. Fabric may drop its setuptools dependency in the future, or include alternative support for the `Distribute`_ project, but for now setuptools is required for installation. .. _setuptools: http://pypi.python.org/pypi/setuptools .. _Distribute: http://pypi.python.org/pypi/distribute ``multiprocessing`` ------------------- An optional dependency, the ``multiprocessing`` library is included in Python's standard library in version 2.6 and higher. If you're using Python 2.5 and want to make use of Fabric's :ref:`parallel execution features ` you'll need to install it manually; the recommended route, as usual, is via ``pip``. Please see the `multiprocessing PyPI page `_ for details. .. warning:: Early versions of Python 2.6 (in our testing, 2.6.0 through 2.6.2) ship with a buggy ``multiprocessing`` module that appears to cause Fabric to hang at the end of sessions involving large numbers of concurrent hosts. If you encounter this problem, either use :ref:`env.pool_size / -z ` to limit the amount of concurrency, or upgrade to Python >=2.6.3. Python 2.5 is unaffected, as it requires the PyPI version of ``multiprocessing``, which is newer than that shipped with Python <2.6.3. Development dependencies ------------------------ If you are interested in doing development work on Fabric (or even just running the test suite), you may also need to install some or all of the following packages: * `git `_ and `Mercurial`_, in order to obtain some of the other dependencies below; * `Nose `_ * `Coverage `_ * `PyLint `_ * `Fudge `_ * `Sphinx `_ For an up-to-date list of exact testing/development requirements, including version numbers, please see the ``requirements.txt`` file included with the source distribution. This file is intended to be used with ``pip``, e.g. ``pip install -r requirements.txt``. .. _Mercurial: http://mercurial.selenic.com/wiki/ .. _downloads: Downloads ========= To obtain a tar.gz or zip archive of the Fabric source code, you may visit `Fabric's PyPI page `_, which offers manual downloads in addition to being the entry point for ``pip`` and ``easy-install``. .. _source-code-checkouts: Source code checkouts ===================== The Fabric developers manage the project's source code with the `Git `_ DVCS. 
To follow Fabric's development via Git instead of downloading official releases, you have the following options: * Clone the canonical repository straight from `the Fabric organization's repository on Github `_, ``git://github.com/fabric/fabric.git`` * Make your own fork of the Github repository by making a Github account, visiting `fabric/fabric `_ and clicking the "fork" button. .. note:: If you've obtained the Fabric source via source control and plan on updating your checkout in the future, we highly suggest using ``python setup.py develop`` instead -- it will use symbolic links instead of file copies, ensuring that imports of the library or use of the command-line tool will always refer to your checkout. For information on the hows and whys of Fabric development, including which branches may be of interest and how you can help out, please see the :doc:`development` page. .. _pypm: ActivePython and PyPM ===================== Windows users who already have ActiveState's `ActivePython `_ distribution installed may find Fabric is best installed with `its package manager, PyPM `_. Below is example output from an installation of Fabric via ``pypm``:: C:\> pypm install fabric The following packages will be installed into "%APPDATA%\Python" (2.7): paramiko-1.7.8 pycrypto-2.4 fabric-1.3.0 Get: [pypm-free.activestate.com] fabric 1.3.0 Get: [pypm-free.activestate.com] paramiko 1.7.8 Get: [pypm-free.activestate.com] pycrypto 2.4 Installing paramiko-1.7.8 Installing pycrypto-2.4 Installing fabric-1.3.0 Fixing script %APPDATA%\Python\Scripts\fab-script.py C:\> fabric-1.14.0/sites/www/roadmap.rst000066400000000000000000000060001315011462000171400ustar00rootroot00000000000000=================== Development roadmap =================== This document outlines Fabric's intended development path. Please make sure you're reading `the latest version `_ of this document! .. warning:: This information is subject to change without warning, and should not be used as a basis for any life- or career-altering decisions! Fabric 1.x ========== Fabric 1.x, while not quite yet end-of-life'd, has reached a tipping point regarding internal tech debt & ability to make improvements without harming backwards compatibility. As such, future 1.x releases (**1.6** onwards) will emphasize small-to-medium features (new features not requiring major overhauls of the internals) and bugfixes. Invoke, Fabric 2.x and Patchwork ================================ While 1.x moves on as above, we are working on a reimagined 2.x version of the tool, and plan to: * Finish and release `the Invoke tool/library `_ (see also :issue:`565` and `this Invoke FAQ `_), which is a revamped and standalone version of Fabric's task running components. * As of early 2015, Invoke is already reasonably mature and has a handful of features lacking in Fabric itself, including but not limited to: * a more explicit and powerful namespacing implementation * "regular" style CLI flags, including powerful tab completion * before/after hooks * explicit context management (no shared state) * significantly more powerful configuration mechanisms * Invoke is already Python 3 compatible, due to being a new codebase with few dependencies. * As Fabric 2 is developed, Invoke will approach a 1.0 release, and will continue to grow & change to suit Fabric's needs while remaining a high quality standalone task runner. * Release Fabric 2.0, a mostly-rewritten Fabric core: * Leverage Invoke for task running, leaving Fabric itself much more library oriented. 
* Implement object-oriented hosts/host lists and all the fun stuff that provides (no more hacky host string and unintuitive env var manipulation.) * No more shared state by default (thanks to Invoke's context design.) * Any other core overhauls difficult to do in a backwards compatible fashion. * Test-driven development (Invoke does this as well.) * Spin off ``fabric.contrib.*`` into a standalone "super-Fabric" (as in, "above Fabric") library, `Patchwork `_. * This lets core "execute commands on hosts" functionality iterate separately from "commonly useful shortcuts using Fabric core". * Lots of preliminary work & prior-art scanning has been done in :issue:`461`. * A public-but-alpha codebase for Patchwork exists as we think about the API, and is currently based on Fabric 1.x. It will likely be Fabric 2.x based by the time it is stable. fabric-1.14.0/sites/www/troubleshooting.rst000066400000000000000000000044251315011462000207550ustar00rootroot00000000000000=============== Troubleshooting =============== Stuck? Having a problem? Here are the steps to try before you submit a bug report. * **Make sure you're on the latest version.** If you're not on the most recent version, your problem may have been solved already! Upgrading is always the best first step. * **Try older versions.** If you're already *on* the latest Fabric, try rolling back a few minor versions (e.g. if on 1.7, try Fabric 1.5 or 1.6) and see if the problem goes away. This will help the devs narrow down when the problem first arose in the commit log. * **Try switching up your Paramiko.** Fabric relies heavily on the Paramiko library for its SSH functionality, so try applying the above two steps to your Paramiko install as well. .. note:: Fabric versions sometimes have different Paramiko dependencies - so to try older Paramikos you may need to downgrade Fabric as well. * **Make sure Fabric is really the problem.** If your problem is in the behavior or output of a remote command, try recreating it without Fabric involved: * Run Fabric with ``--show=debug`` and look for the ``run:`` or ``sudo:`` line about the command in question. Try running that exact command, including any ``/bin/bash`` wrapper, remotely and see what happens. This may find problems related to the bash or sudo wrappers. * Execute the command (both the normal version, and the 'unwrapped' version seen via ``--show=debug``) from your local workstation using ``ssh``, e.g.:: $ ssh -t mytarget "my command" The ``-t`` flag matches Fabric's default behavior of enabling a PTY remotely. This helps identify apps that behave poorly when run in a non-shell-spawned PTY. * **Enable Paramiko-level debug logging.** If your issue is in the lower level Paramiko library, it can help us to see the debug output Paramiko prints. At top level in your fabfile, add the following:: import logging logging.basicConfig(level=logging.DEBUG) This should start printing Paramiko's debug statements to your standard error stream. (Feel free to add more logging kwargs to ``basicConfig()`` such as ``filename='/path/to/a/file'`` if you like.) Then submit this info to anybody helping you on IRC or in your bug report. 
fabric-1.14.0/tasks.py000066400000000000000000000005461315011462000145200ustar00rootroot00000000000000from invocations.docs import docs, www from invocations.packaging import release from invoke import Collection ns = Collection(docs, www, release) ns.configure({ 'packaging': { 'sign': True, 'wheel': True, 'changelog_file': 'sites/www/changelog.rst', 'package': 'fabric', 'version_module': 'version', }, }) fabric-1.14.0/tests/000077500000000000000000000000001315011462000141565ustar00rootroot00000000000000fabric-1.14.0/tests/Python26SocketServer.py000066400000000000000000000530771315011462000205350ustar00rootroot00000000000000"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). 
In this case, you can use a threading server, but you will probably have
to use locks to prevent two requests that come in nearly simultaneously
from applying conflicting changes to the server state.

On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous class
will essentially render the service "deaf" while one request is being
handled -- which may be for a very long time if a client is slow to read
all the data it has requested.  Here a threading or forking server is
appropriate.

In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data.  This can be implemented by using a synchronous server
and doing an explicit fork in the request handler class handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000 Luke Kenneth Casson Leighton

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.

"""

# This file copyright (c) 2001-2015 Python Software Foundation; All Rights Reserved
# Author of the BaseServer patch: Luke Kenneth Casson Leighton

# XXX Warning!
# There is a test suite for this module, but it cannot be run by the
# standard regression test.
# To run it manually, run Lib/test/test_socketserver.py.

__version__ = "0.4"


import socket
import select
import sys
import os
try:
    import threading
except ImportError:
    import dummy_threading as threading

__all__ = ["TCPServer", "UDPServer", "ForkingUDPServer", "ForkingTCPServer",
           "ThreadingUDPServer", "ThreadingTCPServer", "BaseRequestHandler",
           "StreamRequestHandler", "DatagramRequestHandler",
           "ThreadingMixIn", "ForkingMixIn"]
if hasattr(socket, "AF_UNIX"):
    __all__.extend(["UnixStreamServer", "UnixDatagramServer",
                    "ThreadingUnixStreamServer",
                    "ThreadingUnixDatagramServer"])


class BaseServer:

    """Base class for server classes.
Methods for the caller: - __init__(server_address, RequestHandlerClass) - serve_forever(poll_interval=0.5) - shutdown() - handle_request() # if you do not use serve_forever() - fileno() -> int # for select() Methods that may be overridden: - server_bind() - server_activate() - get_request() -> request, client_address - handle_timeout() - verify_request(request, client_address) - server_close() - process_request(request, client_address) - close_request(request) - handle_error() Methods for derived classes: - finish_request(request, client_address) Class variables that may be overridden by derived classes or instances: - timeout - address_family - socket_type - allow_reuse_address Instance variables: - RequestHandlerClass - socket """ timeout = None def __init__(self, server_address, RequestHandlerClass): """Constructor. May be extended, do not override.""" self.server_address = server_address self.RequestHandlerClass = RequestHandlerClass self.__is_shut_down = threading.Event() self.__serving = False def server_activate(self): """Called by constructor to activate the server. May be overridden. """ pass def serve_forever(self, poll_interval=0.5): """Handle one request at a time until shutdown. Polls for shutdown every poll_interval seconds. Ignores self.timeout. If you need to do periodic tasks, do them in another thread. """ self.__serving = True self.__is_shut_down.clear() while self.__serving: # XXX: Consider using another file descriptor or # connecting to the socket to wake this up instead of # polling. Polling reduces our responsiveness to a # shutdown request and wastes cpu at all other times. r, w, e = select.select([self], [], [], poll_interval) if r: self._handle_request_noblock() self.__is_shut_down.set() def shutdown(self): """Stops the serve_forever loop. Blocks until the loop has finished. This must be called while serve_forever() is running in another thread, or it will deadlock. """ self.__serving = False self.__is_shut_down.wait() # The distinction between handling, getting, processing and # finishing a request is fairly arbitrary. Remember: # # - handle_request() is the top-level call. It calls # select, get_request(), verify_request() and process_request() # - get_request() is different for stream or datagram sockets # - process_request() is the place that may fork a new process # or create a new thread to finish the request # - finish_request() instantiates the request handler class; # this constructor will handle the request all by itself def handle_request(self): """Handle one request, possibly blocking. Respects self.timeout. """ # Support people who used socket.settimeout() to escape # handle_request before self.timeout was available. timeout = self.socket.gettimeout() if timeout is None: timeout = self.timeout elif self.timeout is not None: timeout = min(timeout, self.timeout) fd_sets = select.select([self], [], [], timeout) if not fd_sets[0]: self.handle_timeout() return self._handle_request_noblock() def _handle_request_noblock(self): """Handle one request, without blocking. I assume that select.select has returned that the socket is readable before this function was called, so there should be no risk of blocking in get_request(). 
""" try: request, client_address = self.get_request() except socket.error: return if self.verify_request(request, client_address): try: self.process_request(request, client_address) except: self.handle_error(request, client_address) self.close_request(request) def handle_timeout(self): """Called if no new request arrives within self.timeout. Overridden by ForkingMixIn. """ pass def verify_request(self, request, client_address): """Verify the request. May be overridden. Return True if we should proceed with this request. """ return True def process_request(self, request, client_address): """Call finish_request. Overridden by ForkingMixIn and ThreadingMixIn. """ self.finish_request(request, client_address) self.close_request(request) def server_close(self): """Called to clean-up the server. May be overridden. """ pass def finish_request(self, request, client_address): """Finish one request by instantiating RequestHandlerClass.""" self.RequestHandlerClass(request, client_address, self) def close_request(self, request): """Called to clean up an individual request.""" pass def handle_error(self, request, client_address): """Handle an error gracefully. May be overridden. The default is to print a traceback and continue. """ print('-' * 40) print('Exception happened during processing of request from %s' % (client_address,)) import traceback traceback.print_exc() # XXX But this goes to stderr! print('-' * 40) class TCPServer(BaseServer): """Base class for various socket-based server classes. Defaults to synchronous IP stream (i.e., TCP). Methods for the caller: - __init__(server_address, RequestHandlerClass, bind_and_activate=True) - serve_forever(poll_interval=0.5) - shutdown() - handle_request() # if you don't use serve_forever() - fileno() -> int # for select() Methods that may be overridden: - server_bind() - server_activate() - get_request() -> request, client_address - handle_timeout() - verify_request(request, client_address) - process_request(request, client_address) - close_request(request) - handle_error() Methods for derived classes: - finish_request(request, client_address) Class variables that may be overridden by derived classes or instances: - timeout - address_family - socket_type - request_queue_size (only for stream sockets) - allow_reuse_address Instance variables: - server_address - RequestHandlerClass - socket """ address_family = socket.AF_INET socket_type = socket.SOCK_STREAM request_queue_size = 5 allow_reuse_address = False def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True): """Constructor. May be extended, do not override.""" BaseServer.__init__(self, server_address, RequestHandlerClass) self.socket = socket.socket(self.address_family, self.socket_type) if bind_and_activate: self.server_bind() self.server_activate() def server_bind(self): """Called by constructor to bind the socket. May be overridden. """ if self.allow_reuse_address: self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.socket.bind(self.server_address) self.server_address = self.socket.getsockname() def server_activate(self): """Called by constructor to activate the server. May be overridden. """ self.socket.listen(self.request_queue_size) def server_close(self): """Called to clean-up the server. May be overridden. """ self.socket.close() def fileno(self): """Return socket file number. Interface required by select(). """ return self.socket.fileno() def get_request(self): """Get the request and client address from the socket. May be overridden. 
""" return self.socket.accept() def close_request(self, request): """Called to clean up an individual request.""" request.close() class UDPServer(TCPServer): """UDP server class.""" allow_reuse_address = False socket_type = socket.SOCK_DGRAM max_packet_size = 8192 def get_request(self): data, client_addr = self.socket.recvfrom(self.max_packet_size) return (data, self.socket), client_addr def server_activate(self): # No need to call listen() for UDP. pass def close_request(self, request): # No need to close anything. pass class ForkingMixIn: """Mix-in class to handle each request in a new process.""" timeout = 300 active_children = None max_children = 40 def collect_children(self): """Internal routine to wait for children that have exited.""" if self.active_children is None: return while len(self.active_children) >= self.max_children: # XXX: This will wait for any child process, not just ones # spawned by this library. This could confuse other # libraries that expect to be able to wait for their own # children. try: pid, status = os.waitpid(0, 0) except os.error: pid = None if pid not in self.active_children: continue self.active_children.remove(pid) # XXX: This loop runs more system calls than it ought # to. There should be a way to put the active_children into a # process group and then use os.waitpid(-pgid) to wait for any # of that set, but I couldn't find a way to allocate pgids # that couldn't collide. for child in self.active_children: try: pid, status = os.waitpid(child, os.WNOHANG) except os.error: pid = None if not pid: continue try: self.active_children.remove(pid) except ValueError, e: raise ValueError('%s. x=%d and list=%r' % \ (e.message, pid, self.active_children)) def handle_timeout(self): """Wait for zombies after self.timeout seconds of inactivity. May be extended, do not override. """ self.collect_children() def process_request(self, request, client_address): """Fork a new subprocess to process the request.""" self.collect_children() pid = os.fork() if pid: # Parent process if self.active_children is None: self.active_children = [] self.active_children.append(pid) self.close_request(request) return else: # Child process. # This must never return, hence os._exit()! try: self.finish_request(request, client_address) os._exit(0) except: try: self.handle_error(request, client_address) finally: os._exit(1) class ThreadingMixIn: """Mix-in class to handle each request in a new thread.""" # Decides how threads will act upon termination of the # main process daemon_threads = False def process_request_thread(self, request, client_address): """Same as in BaseServer but as a thread. In addition, exception handling is done here. 
""" try: self.finish_request(request, client_address) self.close_request(request) except: self.handle_error(request, client_address) self.close_request(request) def process_request(self, request, client_address): """Start a new thread to process the request.""" t = threading.Thread(target=self.process_request_thread, args=(request, client_address)) if self.daemon_threads: t.setDaemon(1) t.start() class ForkingUDPServer(ForkingMixIn, UDPServer): pass class ForkingTCPServer(ForkingMixIn, TCPServer): pass class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass if hasattr(socket, 'AF_UNIX'): class UnixStreamServer(TCPServer): address_family = socket.AF_UNIX class UnixDatagramServer(UDPServer): address_family = socket.AF_UNIX class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass class BaseRequestHandler: """Base class for request handler classes. This class is instantiated for each request to be handled. The constructor sets the instance variables request, client_address and server, and then calls the handle() method. To implement a specific service, all you need to do is to derive a class which defines a handle() method. The handle() method can find the request as self.request, the client address as self.client_address, and the server (in case it needs access to per-server information) as self.server. Since a separate instance is created for each request, the handle() method can define arbitrary other instance variariables. """ def __init__(self, request, client_address, server): self.request = request self.client_address = client_address self.server = server try: self.setup() self.handle() self.finish() finally: sys.exc_traceback = None # Help garbage collection def setup(self): pass def handle(self): pass def finish(self): pass # The following two classes make it possible to use the same service # class for stream or datagram servers. # Each class sets up these instance variables: # - rfile: a file object from which receives the request is read # - wfile: a file object to which the reply is written # When the handle() method returns, wfile is flushed properly class StreamRequestHandler(BaseRequestHandler): """Define self.rfile and self.wfile for stream sockets.""" # Default buffer sizes for rfile, wfile. # We default rfile to buffered because otherwise it could be # really slow for large data (a getc() call per byte); we make # wfile unbuffered because (a) often after a write() we want to # read and we need to flush the line; (b) big writes to unbuffered # files are typically optimized by stdio even when big reads # aren't. rbufsize = -1 wbufsize = 0 def setup(self): self.connection = self.request self.rfile = self.connection.makefile('rb', self.rbufsize) self.wfile = self.connection.makefile('wb', self.wbufsize) def finish(self): if not self.wfile.closed: self.wfile.flush() self.wfile.close() self.rfile.close() class DatagramRequestHandler(BaseRequestHandler): # XXX Regrettably, I cannot get this working on Linux; # s.recvfrom() doesn't return a meaningful client address. 
"""Define self.rfile and self.wfile for datagram sockets.""" def setup(self): try: from cStringIO import StringIO except ImportError: from StringIO import StringIO self.packet, self.socket = self.request self.rfile = StringIO(self.packet) self.wfile = StringIO() def finish(self): self.socket.sendto(self.wfile.getvalue(), self.client_address) fabric-1.14.0/tests/client.key000066400000000000000000000033171315011462000161520ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: DES-EDE3-CBC,F1AFE040F412E6D1 cIBbwu1/PD9vjtyFn+xbpc2X9Uv9sllCRooLwkOv9rkBxDRItT8D5UiGHGIGIAvj eq9sUze8bXQeXs9zpJwMRH1kjdmCmnmRX0iXcsxSgnioL3aEGLTbXqxkUOnSgj4Y cJ1trT51XVRSBGlRHYPmF1IhYYW/RPZlFUPMJDE5s1moROU29DfnaboTREf8shJ9 A/jHvKoivn4GgM1U6VcwwtijvmgrrB5KzqpRfTLf6Rxe6St3e4WjQusYWVP4BOmz ImQyaATcPwn5iMWPfvXohPQR/ajuoU9jzMM3DqzcrH7Q4VmpSTrmkdG7Ra5GfSE1 O5WEiqNwUkfjAYIjbxo11gVtIH8ddsMuF5odsh2LVXYocHeZzRlZvsip2AePKiKX xMkZItP4xqFBfi0jnqCVkQGUdtRYhHomDUO8U0JtB3BFNT/L+LC+dsrj8G/FaQiD n8an2sDf1CrYXqfz3V3rGzuPDq/CKwPD8HeTpjZUT7bPUNsTNMVx58LiYShRV2uB zUn83diKX12xS+gyS5PfuujwQP93ZQXOP9agKSa2UlY2ojUxtpc1vxiEzcFcU9Zg 2uLEbsRKW1qe2jLDTmRyty14rJmi7ocbjPUuEuw9Aj1v46jzhBXBPE7cWHGm1o2/ /e0lGfLTtm3Q2SponTLTcHTrBvrDBRlDAN5sChhbaoEoUCHjTKo8aj6whDKfAw4Q KNHrOkkXyDyvd90c1loen5u5iaol+l5W+7LG3Sr5uRHMHAsF0MH9cZd/RQXMSY/U sQLWumskx/iSrbjFztW0La0bBCB6vHBYLervC3lrrmvnhfYrNBrZM8eH1hTSZUsT VFeKgm+KVkwEG/uXoI/XOge01b1oOHzKNKGT7Q5ogbV6w67LtOrSeTH0FCjHsN8z 2LCQHWuII4h3b1U/Pg8N5Pz59+qraSrMZAHOROYc19r0HSS5gg7m1yD3IPXO73fI gLO0/44f/KYqVP2+FKgQo9enUSLI5GuMAfhWaTpeOpJNd10egSOB3SaJ7nn20/Pm vSBSL0KsSeXY4/Df43MuHu46PvYzRwKvZB7GJJJPi2XjdFqCxuoCuEqfaZxf1lnI ZhZFmsZE1rd7kgBYyn0VXn1AvrLjaLuvmsOKaFdO4TAbQpE3Pps6AdQ8EpJ62Gei 0yZlXgh2+zZp5lRMfO5JFtr7/pVpIqnRKfaDk1XawWP7i1/0PnVXsR2G6yu6kbEg R/v2LKnp49TUldfNmVW8QHElw/LrCBW08iA+44vlGYdCU8nAW9Sy+y4plW+X32z8 Viw82ISUcoJSHmRfzXOWaj24AftbSOzo2bRmCO+xkBkXFrhTI83Aqbu7TN/yejB8 hDb04AVxzEkBTw/B0pLkJUt5lpcr9fZMvACHsL0gTRc5OPb4/zhG7y9npWgq5Snb ZnUAOi+ndnW8IL4y9YI6U7LBSyMvE7L7+QCnLJxVnO2NxjDCJVDDe6fLR9pRBCCC Sh3X/FNsu1YQzNIOvf75ri1zzqKmv4x6ETmmgs+vMGRl62s8SQcgWFEGAVrAP+uR ocx0chW3BWEQalRat2vBWpj1gyH2aHd8tgamb8XXFLK35iTk2/oCqQ== -----END RSA PRIVATE KEY----- fabric-1.14.0/tests/client.key.pub000066400000000000000000000006141315011462000167340ustar00rootroot00000000000000ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2FxgXlTZGk/JZMacwgMPC6LEd3efYgIdgK0RXGRMNs06aSyeEUwTKqmelNnElsRsUW68Ybosox0LoHGfTUj0gtSOqG+pb0QJQ5yslPBwBlL+WUC65HDzHdBrUf/bFR+rc02i2Ciraan4elvuLW07UfO5ceCOeJSYyNmrhN/vboHr3Pcv2QG717sEy/9pSAVzrriCqYFd6IFg9o6UhuSB7hvW4bzKXDHtz6OeXrC6U/FWxx3rYZg3h9K2SBGXLavqiJSkFgeSzn3geSbyAjTgowaZ8kNq4+Mc1hsAMtLZBKMBZUTuMjHpQR31nWloUUfuz5QhaORk1pJBmE90MqShiw== jforcier@ytram fabric-1.14.0/tests/fake_filesystem.py000066400000000000000000000036231315011462000177060ustar00rootroot00000000000000import os import stat from StringIO import StringIO from types import StringTypes from fabric.network import ssh class FakeFile(StringIO): def __init__(self, value=None, path=None): init = lambda x: StringIO.__init__(self, x) if value is None: init("") ftype = 'dir' size = 4096 else: init(value) ftype = 'file' size = len(value) attr = ssh.SFTPAttributes() attr.st_mode = {'file': stat.S_IFREG, 'dir': stat.S_IFDIR}[ftype] attr.st_size = size attr.filename = os.path.basename(path) self.attributes = attr def __str__(self): return self.getvalue() def write(self, value): StringIO.write(self, value) self.attributes.st_size = len(self.getvalue()) def close(self): """ Always hold fake files open. 
""" pass def __cmp__(self, other): me = str(self) if isinstance(other, StringTypes) else self return cmp(me, other) class FakeFilesystem(dict): def __init__(self, d=None): # Replicate input dictionary using our custom __setitem__ d = d or {} for key, value in d.iteritems(): self[key] = value def __setitem__(self, key, value): if isinstance(value, StringTypes) or value is None: value = FakeFile(value, key) super(FakeFilesystem, self).__setitem__(key, value) def normalize(self, path): """ Normalize relative paths. In our case, the "home" directory is just the root, /. I expect real servers do this as well but with the user's home directory. """ if not path.startswith(os.path.sep): path = os.path.join(os.path.sep, path) return path def __getitem__(self, key): return super(FakeFilesystem, self).__getitem__(self.normalize(key)) fabric-1.14.0/tests/mock_streams.py000066400000000000000000000046601315011462000172250ustar00rootroot00000000000000""" Stand-alone stream mocking decorator for easier imports. """ from functools import wraps import sys from StringIO import StringIO # No need for cStringIO at this time class CarbonCopy(StringIO): """ A StringIO capable of multiplexing its writes to other buffer objects. """ def __init__(self, buffer='', cc=None): """ If ``cc`` is given and is a file-like object or an iterable of same, it/they will be written to whenever this StringIO instance is written to. """ StringIO.__init__(self, buffer) if cc is None: cc = [] elif hasattr(cc, 'write'): cc = [cc] self.cc = cc def write(self, s): StringIO.write(self, s) for writer in self.cc: writer.write(s) def mock_streams(which): """ Replaces a stream with a ``StringIO`` during the test, then restores after. Must specify which stream (stdout, stderr, etc) via string args, e.g.:: @mock_streams('stdout') def func(): pass @mock_streams('stderr') def func(): pass @mock_streams('both') def func() pass If ``'both'`` is specified, not only will both streams be replaced with StringIOs, but a new combined-streams output (another StringIO) will appear at ``sys.stdall``. This StringIO will resemble what a user sees at a terminal, i.e. both streams intermingled. 
""" both = (which == 'both') stdout = (which == 'stdout') or both stderr = (which == 'stderr') or both def mocked_streams_decorator(func): @wraps(func) def inner_wrapper(*args, **kwargs): if both: sys.stdall = StringIO() fake_stdout = CarbonCopy(cc=sys.stdall) fake_stderr = CarbonCopy(cc=sys.stdall) else: fake_stdout, fake_stderr = StringIO(), StringIO() if stdout: my_stdout, sys.stdout = sys.stdout, fake_stdout if stderr: my_stderr, sys.stderr = sys.stderr, fake_stderr try: func(*args, **kwargs) finally: if stdout: sys.stdout = my_stdout if stderr: sys.stderr = my_stderr if both: del sys.stdall return inner_wrapper return mocked_streams_decorator fabric-1.14.0/tests/private.key000066400000000000000000000015631315011462000163470ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICWgIBAAKBgQDTj1bqB4WmayWNPB+8jVSYpZYk80Ujvj680pOTh2bORBjbIAyz oWGW+GUjzKxTiiPvVmxFgx5wdsFvF03v34lEVVhMpouqPAYQ15N37K/ir5XY+9m/ d8ufMCkjeXsQkKqFbAlQcnWMCRnOoPHS3I4vi6hmnDDeeYTSRvfLbW0fhwIBIwKB gBIiOqZYaoqbeD9OS9z2K9KR2atlTxGxOJPXiP4ESqP3NVScWNwyZ3NXHpyrJLa0 EbVtzsQhLn6rF+TzXnOlcipFvjsem3iYzCpuChfGQ6SovTcOjHV9z+hnpXvQ/fon soVRZY65wKnF7IAoUwTmJS9opqgrN6kRgCd3DASAMd1bAkEA96SBVWFt/fJBNJ9H tYnBKZGw0VeHOYmVYbvMSstssn8un+pQpUm9vlG/bp7Oxd/m+b9KWEh2xPfv6zqU avNwHwJBANqzGZa/EpzF4J8pGti7oIAPUIDGMtfIcmqNXVMckrmzQ2vTfqtkEZsA 4rE1IERRyiJQx6EJsz21wJmGV9WJQ5kCQQDwkS0uXqVdFzgHO6S++tjmjYcxwr3g H0CoFYSgbddOT6miqRskOQF3DZVkJT3kyuBgU2zKygz52ukQZMqxCb1fAkASvuTv qfpH87Qq5kQhNKdbbwbmd2NxlNabazPijWuphGTdW0VfJdWfklyS2Kr+iqrs/5wV HhathJt636Eg7oIjAkA8ht3MQ+XSl9yIJIS8gVpbPxSw5OMfw0PjVE7tBdQruiSc nvuQES5C9BMHjF39LZiGH1iLQy7FgdHyoP+eodI7 -----END RSA PRIVATE KEY----- fabric-1.14.0/tests/server.py000066400000000000000000000372771315011462000160560ustar00rootroot00000000000000from __future__ import with_statement import os import re import socket import threading import time import types from functools import wraps from Python26SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer from fabric.operations import _sudo_prefix from fabric.api import env, hide from fabric.thread_handling import ThreadHandler from fabric.network import disconnect_all, ssh from fake_filesystem import FakeFilesystem, FakeFile # # Debugging # import logging logging.basicConfig(filename='/tmp/fab.log', level=logging.DEBUG) logger = logging.getLogger('server.py') # # Constants # HOST = '127.0.0.1' PORT = 2200 USER = 'username' HOME = '/' RESPONSES = { "ls /simple": "some output", "ls /": """AUTHORS FAQ Fabric.egg-info INSTALL LICENSE MANIFEST README build docs fabfile.py fabfile.pyc fabric requirements.txt setup.py tests""", "both_streams": [ "stdout", "stderr" ], } FILES = FakeFilesystem({ '/file.txt': 'contents', '/file2.txt': 'contents2', '/folder/file3.txt': 'contents3', '/empty_folder': None, '/tree/file1.txt': 'x', '/tree/file2.txt': 'y', '/tree/subfolder/file3.txt': 'z', '/etc/apache2/apache2.conf': 'Include other.conf', HOME: None # So $HOME is a directory }) PASSWORDS = { 'root': 'root', USER: 'password' } def _local_file(filename): return os.path.join(os.path.dirname(__file__), filename) SERVER_PRIVKEY = _local_file('private.key') CLIENT_PUBKEY = _local_file('client.key.pub') CLIENT_PRIVKEY = _local_file('client.key') CLIENT_PRIVKEY_PASSPHRASE = "passphrase" def _equalize(lists, fillval=None): """ Pad all given list items in ``lists`` to be the same length. 
""" lists = map(list, lists) upper = max(len(x) for x in lists) for lst in lists: diff = upper - len(lst) if diff: lst.extend([fillval] * diff) return lists class TestServer(ssh.ServerInterface): """ Test server implementing the 'ssh' lib's server interface parent class. Mostly just handles the bare minimum necessary to handle SSH-level things such as honoring authentication types and exec/shell/etc requests. The bulk of the actual server side logic is handled in the ``serve_responses`` function and its ``SSHHandler`` class. """ def __init__(self, passwords, home, pubkeys, files): self.event = threading.Event() self.passwords = passwords self.pubkeys = pubkeys self.files = FakeFilesystem(files) self.home = home self.command = None def check_channel_request(self, kind, chanid): if kind == 'session': return ssh.OPEN_SUCCEEDED return ssh.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED def check_channel_exec_request(self, channel, command): self.command = command self.event.set() return True def check_channel_pty_request(self, *args): return True def check_channel_shell_request(self, channel): self.event.set() return True def check_auth_password(self, username, password): self.username = username passed = self.passwords.get(username) == password return ssh.AUTH_SUCCESSFUL if passed else ssh.AUTH_FAILED def check_auth_publickey(self, username, key): self.username = username return ssh.AUTH_SUCCESSFUL if self.pubkeys else ssh.AUTH_FAILED def get_allowed_auths(self, username): return 'password,publickey' class SSHServer(ThreadingMixIn, TCPServer): """ Threading TCPServer subclass. """ def _socket_info(self, addr_tup): """ Clone of the very top of Paramiko 1.7.6 SSHClient.connect(). We must use this in order to make sure that our address family matches up with the client side (which we cannot control, and which varies depending on individual computers and their network settings). """ hostname, port = addr_tup addr_info = socket.getaddrinfo(hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM) for (family, socktype, proto, canonname, sockaddr) in addr_info: if socktype == socket.SOCK_STREAM: af = family addr = sockaddr break else: # some OS like AIX don't indicate SOCK_STREAM support, so just # guess. :( af, _, _, _, addr = socket.getaddrinfo(hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM) return af, addr def __init__( self, server_address, RequestHandlerClass, bind_and_activate=True ): # Prevent "address already in use" errors when running tests 2x in a # row. self.allow_reuse_address = True # Handle network family/host addr (see docstring for _socket_info) family, addr = self._socket_info(server_address) self.address_family = family TCPServer.__init__(self, addr, RequestHandlerClass, bind_and_activate) class FakeSFTPHandle(ssh.SFTPHandle): """ Extremely basic way to get SFTPHandle working with our fake setup. 
""" def chattr(self, attr): self.readfile.attributes = attr return ssh.SFTP_OK def stat(self): return self.readfile.attributes class PrependList(list): def prepend(self, val): self.insert(0, val) def expand(path): """ '/foo/bar/biz' => ('/', 'foo', 'bar', 'biz') 'relative/path' => ('relative', 'path') """ # Base case if path in ['', os.path.sep]: return [path] ret = PrependList() directory, filename = os.path.split(path) while directory and directory != os.path.sep: ret.prepend(filename) directory, filename = os.path.split(directory) ret.prepend(filename) # Handle absolute vs relative paths ret.prepend(directory if directory == os.path.sep else '') return ret def contains(folder, path): """ contains(('a', 'b', 'c'), ('a', 'b')) => True contains('a', 'b', 'c'), ('f',)) => False """ return False if len(path) >= len(folder) else folder[:len(path)] == path def missing_folders(paths): """ missing_folders(['a/b/c']) => ['a', 'a/b', 'a/b/c'] """ ret = [] pool = set(paths) for path in paths: expanded = expand(path) for i in range(len(expanded)): folder = os.path.join(*expanded[:len(expanded) - i]) if folder and folder not in pool: pool.add(folder) ret.append(folder) return ret def canonicalize(path, home): ret = path if not os.path.isabs(path): ret = os.path.normpath(os.path.join(home, path)) return ret class FakeSFTPServer(ssh.SFTPServerInterface): def __init__(self, server, *args, **kwargs): self.server = server files = self.server.files # Expand such that omitted, implied folders get added explicitly for folder in missing_folders(files.keys()): files[folder] = None self.files = files def canonicalize(self, path): """ Make non-absolute paths relative to $HOME. """ return canonicalize(path, self.server.home) def list_folder(self, path): path = self.files.normalize(path) expanded_files = map(expand, self.files) expanded_path = expand(path) candidates = [x for x in expanded_files if contains(x, expanded_path)] children = [] for candidate in candidates: cut = candidate[:len(expanded_path) + 1] if cut not in children: children.append(cut) results = [self.stat(os.path.join(*x)) for x in children] bad = not results or any(x == ssh.SFTP_NO_SUCH_FILE for x in results) return ssh.SFTP_NO_SUCH_FILE if bad else results def open(self, path, flags, attr): path = self.files.normalize(path) try: fobj = self.files[path] except KeyError: if flags & os.O_WRONLY: # Only allow writes to files in existing directories. if os.path.dirname(path) not in self.files: return ssh.SFTP_NO_SUCH_FILE self.files[path] = fobj = FakeFile("", path) # No write flag means a read, which means they tried to read a # nonexistent file. else: return ssh.SFTP_NO_SUCH_FILE f = FakeSFTPHandle() f.readfile = f.writefile = fobj return f def stat(self, path): path = self.files.normalize(path) try: fobj = self.files[path] except KeyError: return ssh.SFTP_NO_SUCH_FILE return fobj.attributes # Don't care about links right now lstat = stat def chattr(self, path, attr): path = self.files.normalize(path) if path not in self.files: return ssh.SFTP_NO_SUCH_FILE # Attempt to gracefully update instead of overwrite, since things like # chmod will call us with an SFTPAttributes object that only exhibits # e.g. st_mode, and we don't want to lose our filename or size... 
for which in "size uid gid mode atime mtime".split(): attname = "st_" + which incoming = getattr(attr, attname) if incoming is not None: setattr(self.files[path].attributes, attname, incoming) return ssh.SFTP_OK def mkdir(self, path, attr): self.files[path] = None return ssh.SFTP_OK def serve_responses(responses, files, passwords, home, pubkeys, port): """ Return a threading TCP based SocketServer listening on ``port``. Used as a fake SSH server which will respond to commands given in ``responses`` and allow connections for users listed in ``passwords``. ``home`` is used as the remote $HOME (mostly for SFTP purposes). ``pubkeys`` is a Boolean value determining whether the server will allow pubkey auth or not. """ # Define handler class inline so it can access serve_responses' args class SSHHandler(BaseRequestHandler): def handle(self): try: self.init_transport() self.waiting_for_command = False while not self.server.all_done.isSet(): # Don't overwrite channel if we're waiting for a command. if not self.waiting_for_command: self.channel = self.transport.accept(1) if not self.channel: continue self.ssh_server.event.wait(10) if self.ssh_server.command: self.command = self.ssh_server.command # Set self.sudo_prompt, update self.command self.split_sudo_prompt() if self.command in responses: self.stdout, self.stderr, self.status = \ self.response() if self.sudo_prompt and not self.sudo_password(): self.channel.send( "sudo: 3 incorrect password attempts\n" ) break self.respond() else: self.channel.send_stderr( "Sorry, I don't recognize that command.\n" ) self.channel.send_exit_status(1) # Close up shop self.command = self.ssh_server.command = None self.waiting_for_command = False time.sleep(0.5) self.channel.close() else: # If we're here, self.command was False or None, # but we do have a valid Channel object. Thus we're # waiting for the command to show up. self.waiting_for_command = True finally: self.transport.close() def init_transport(self): transport = ssh.Transport(self.request) transport.add_server_key(ssh.RSAKey(filename=SERVER_PRIVKEY)) transport.set_subsystem_handler('sftp', ssh.SFTPServer, sftp_si=FakeSFTPServer) server = TestServer(passwords, home, pubkeys, files) transport.start_server(server=server) self.ssh_server = server self.transport = transport def split_sudo_prompt(self): prefix = re.escape(_sudo_prefix(None, None).rstrip()) + ' +' result = re.findall(r'^(%s)?(.*)$' % prefix, self.command)[0] self.sudo_prompt, self.command = result def response(self): result = responses[self.command] stderr = "" status = 0 sleep = 0 if isinstance(result, types.StringTypes): stdout = result else: size = len(result) if size == 1: stdout = result[0] elif size == 2: stdout, stderr = result elif size == 3: stdout, stderr, status = result elif size == 4: stdout, stderr, status, sleep = result stdout, stderr = _equalize((stdout, stderr)) time.sleep(sleep) return stdout, stderr, status def sudo_password(self): # Give user 3 tries, as is typical passed = False for x in range(3): self.channel.send(env.sudo_prompt) password = self.channel.recv(65535).strip() # Spit back newline to fake the echo of user's # newline self.channel.send('\n') # Test password if password == passwords[self.ssh_server.username]: passed = True break # If here, password was bad. 
self.channel.send("Sorry, try again.\n") return passed def respond(self): for out, err in zip(self.stdout, self.stderr): if out is not None: self.channel.send(out) if err is not None: self.channel.send_stderr(err) self.channel.send_exit_status(self.status) return SSHServer((HOST, port), SSHHandler) def server( responses=RESPONSES, files=FILES, passwords=PASSWORDS, home=HOME, pubkeys=False, port=PORT ): """ Returns a decorator that runs an SSH server during function execution. Direct passthrough to ``serve_responses``. """ def run_server(func): @wraps(func) def inner(*args, **kwargs): # Start server _server = serve_responses(responses, files, passwords, home, pubkeys, port) _server.all_done = threading.Event() worker = ThreadHandler('server', _server.serve_forever) # Execute function try: return func(*args, **kwargs) finally: # Clean up client side connections with hide('status'): disconnect_all() # Stop server _server.all_done.set() _server.shutdown() # Why this is not called in shutdown() is beyond me. _server.server_close() worker.thread.join() # Handle subthread exceptions e = worker.exception if e: raise e[0], e[1], e[2] return inner return run_server fabric-1.14.0/tests/support/000077500000000000000000000000001315011462000156725ustar00rootroot00000000000000fabric-1.14.0/tests/support/__init__.py000066400000000000000000000000001315011462000177710ustar00rootroot00000000000000fabric-1.14.0/tests/support/aborts.py000066400000000000000000000001171315011462000175350ustar00rootroot00000000000000from fabric.api import task, abort @task def kaboom(): abort("It burns!") fabric-1.14.0/tests/support/classbased_task_fabfile.py000066400000000000000000000002051315011462000230370ustar00rootroot00000000000000from fabric import tasks class ClassBasedTask(tasks.Task): def run(self, *args, **kwargs): pass foo = ClassBasedTask() fabric-1.14.0/tests/support/decorated_fabfile.py000066400000000000000000000001231315011462000216420ustar00rootroot00000000000000from fabric.decorators import task @task def foo(): pass def bar(): pass fabric-1.14.0/tests/support/decorated_fabfile_with_classbased_task.py000066400000000000000000000003751315011462000261140ustar00rootroot00000000000000from fabric import tasks from fabric.decorators import task class ClassBasedTask(tasks.Task): def __init__(self): self.name = "foo" self.use_decorated = True def run(self, *args, **kwargs): pass foo = ClassBasedTask() fabric-1.14.0/tests/support/decorated_fabfile_with_modules.py000066400000000000000000000001631315011462000244310ustar00rootroot00000000000000from fabric.decorators import task import module_fabtasks as tasks @task def foo(): pass def bar(): pass fabric-1.14.0/tests/support/decorator_order.py000066400000000000000000000003331315011462000214200ustar00rootroot00000000000000from fabric.api import task, hosts, roles @hosts('whatever') @task def foo(): pass # There must be at least one unmolested new-style task for the decorator order # problem to appear. 
@task def caller(): pass fabric-1.14.0/tests/support/deep.py000066400000000000000000000000211315011462000171520ustar00rootroot00000000000000import submodule fabric-1.14.0/tests/support/default_task_submodule.py000066400000000000000000000001201315011462000227620ustar00rootroot00000000000000from fabric.api import task @task(default=True) def long_task_name(): pass fabric-1.14.0/tests/support/default_tasks.py000066400000000000000000000000521315011462000210720ustar00rootroot00000000000000import default_task_submodule as mymodule fabric-1.14.0/tests/support/docstring.py000066400000000000000000000001301315011462000202320ustar00rootroot00000000000000from fabric.decorators import task @task def foo(): """ Foos! """ pass fabric-1.14.0/tests/support/exceptions_fabfile.py000066400000000000000000000000771315011462000221010ustar00rootroot00000000000000class NotATask(Exception): pass def some_task(): pass fabric-1.14.0/tests/support/explicit_fabfile.py000066400000000000000000000000741315011462000215360ustar00rootroot00000000000000__all__ = ['foo'] def foo(): pass def bar(): pass fabric-1.14.0/tests/support/flat_alias.py000066400000000000000000000001141315011462000203370ustar00rootroot00000000000000from fabric.api import task @task(alias="foo_aliased") def foo(): pass fabric-1.14.0/tests/support/flat_aliases.py000066400000000000000000000001431315011462000206710ustar00rootroot00000000000000from fabric.api import task @task(aliases=["foo_aliased", "foo_aliased_two"]) def foo(): pass fabric-1.14.0/tests/support/implicit_fabfile.py000066400000000000000000000000511315011462000215220ustar00rootroot00000000000000def foo(): pass def bar(): pass fabric-1.14.0/tests/support/mapping.py000066400000000000000000000002371315011462000177010ustar00rootroot00000000000000from fabric.tasks import Task class MappingTask(dict, Task): def run(self): pass mapping_task = MappingTask() mapping_task.name = "mapping_task" fabric-1.14.0/tests/support/module_fabtasks.py000066400000000000000000000001021315011462000214000ustar00rootroot00000000000000def hello(): print("hello") def world(): print("world") fabric-1.14.0/tests/support/nested_alias.py000066400000000000000000000000341315011462000206740ustar00rootroot00000000000000import flat_alias as nested fabric-1.14.0/tests/support/nested_aliases.py000066400000000000000000000000361315011462000212260ustar00rootroot00000000000000import flat_aliases as nested fabric-1.14.0/tests/support/ssh_config000066400000000000000000000002561315011462000177420ustar00rootroot00000000000000Host myhost User neighbor Port 664 IdentityFile neighbor.pub Host myalias HostName otherhost Host * User satan Port 666 IdentityFile foobar.pub fabric-1.14.0/tests/support/submodule/000077500000000000000000000000001315011462000176715ustar00rootroot00000000000000fabric-1.14.0/tests/support/submodule/__init__.py000066400000000000000000000000621315011462000220000ustar00rootroot00000000000000import subsubmodule def classic_task(): pass fabric-1.14.0/tests/support/submodule/subsubmodule/000077500000000000000000000000001315011462000224025ustar00rootroot00000000000000fabric-1.14.0/tests/support/submodule/subsubmodule/__init__.py000066400000000000000000000000741315011462000245140ustar00rootroot00000000000000from fabric.api import task @task def deeptask(): pass fabric-1.14.0/tests/support/testserver_ssh_config000066400000000000000000000001721315011462000222250ustar00rootroot00000000000000Host testserver # TODO: get these pulling from server.py. Meh. 
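    # (These values mirror the HOST, PORT and USER constants in
    # tests/server.py.)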
HostName 127.0.0.1 Port 2200 User username fabric-1.14.0/tests/support/tree/000077500000000000000000000000001315011462000166315ustar00rootroot00000000000000fabric-1.14.0/tests/support/tree/__init__.py000066400000000000000000000001601315011462000207370ustar00rootroot00000000000000from fabric.api import task import system, db @task def deploy(): pass @task def build_docs(): pass fabric-1.14.0/tests/support/tree/db.py000066400000000000000000000000741315011462000175710ustar00rootroot00000000000000from fabric.api import task @task def migrate(): pass fabric-1.14.0/tests/support/tree/system/000077500000000000000000000000001315011462000201555ustar00rootroot00000000000000fabric-1.14.0/tests/support/tree/system/__init__.py000066400000000000000000000001221315011462000222610ustar00rootroot00000000000000from fabric.api import task import debian @task def install_package(): pass fabric-1.14.0/tests/support/tree/system/debian.py000066400000000000000000000000771315011462000217550ustar00rootroot00000000000000from fabric.api import task @task def update_apt(): pass fabric-1.14.0/tests/test_context_managers.py000066400000000000000000000176631315011462000211450ustar00rootroot00000000000000from __future__ import with_statement import os import sys from StringIO import StringIO from nose.tools import eq_, ok_ from fabric.state import env, output from fabric.context_managers import (cd, settings, lcd, hide, shell_env, quiet, warn_only, prefix, path) from fabric.operations import run, local, _prefix_commands from utils import mock_streams, FabricTest from server import server # # cd() # def test_error_handling(): """ cd cleans up after itself even in case of an exception """ class TestException(Exception): pass try: with cd('somewhere'): raise TestException('Houston, we have a problem.') except TestException: pass finally: with cd('else'): eq_(env.cwd, 'else') def test_cwd_with_absolute_paths(): """ cd() should append arg if non-absolute or overwrite otherwise """ existing = '/some/existing/path' additional = 'another' absolute = '/absolute/path' with settings(cwd=existing): with cd(absolute): eq_(env.cwd, absolute) with cd(additional): eq_(env.cwd, existing + '/' + additional) def test_cd_home_dir(): """ cd() should work with home directories """ homepath = "~/somepath" with cd(homepath): eq_(env.cwd, homepath) def test_cd_nested_home_abs_dirs(): """ cd() should work with nested user homedir (starting with ~) paths. 
    It should always take the last path if the new path begins with `/` or `~`
    """
    home_path = "~/somepath"
    abs_path = "/some/random/path"
    relative_path = "some/random/path"

    # 2 nested homedir paths
    with cd(home_path):
        eq_(env.cwd, home_path)
        another_path = home_path + "/another/path"
        with cd(another_path):
            eq_(env.cwd, another_path)

    # first absolute path, then a homedir path
    with cd(abs_path):
        eq_(env.cwd, abs_path)
        with cd(home_path):
            eq_(env.cwd, home_path)

    # first relative path, then a homedir path
    with cd(relative_path):
        eq_(env.cwd, relative_path)
        with cd(home_path):
            eq_(env.cwd, home_path)

    # first home path, then a relative path
    with cd(home_path):
        eq_(env.cwd, home_path)
        with cd(relative_path):
            eq_(env.cwd, home_path + "/" + relative_path)


#
# prefix
#

def test_nested_prefix():
    """
    prefix context managers can be created outside of the with block and nested
    """
    cm1 = prefix('1')
    cm2 = prefix('2')
    with cm1:
        with cm2:
            eq_(env.command_prefixes, ['1', '2'])


#
# cd prefix with dev/null
#

def test_cd_prefix():
    """
    cd prefix should direct output to /dev/null in case of CDPATH
    """
    some_path = "~/somepath"
    with cd(some_path):
        command_out = _prefix_commands('foo', "remote")
        eq_(command_out, 'cd %s >/dev/null && foo' % some_path)


# def test_cd_prefix_on_win32():
#     """
#     cd prefix should NOT direct output to /dev/null on win32
#     """
#     some_path = "~/somepath"
#     import fabric
#     try:
#         fabric.state.win32 = True
#         with cd(some_path):
#             command_out = _prefix_commands('foo', "remote")
#             eq_(command_out, 'cd %s && foo' % some_path)
#     finally:
#         fabric.state.win32 = False


#
# hide/show
#

def test_hide_show_exception_handling():
    """
    hide()/show() should clean up OK if exceptions are raised
    """
    try:
        with hide('stderr'):
            # now it's False, while the default is True
            eq_(output.stderr, False)
            raise Exception
    except Exception:
        # Here it should be True again.
        # If it's False, this means hide() didn't clean up OK.
        eq_(output.stderr, True)


#
# settings()
#

def test_setting_new_env_dict_key_should_work():
    """
    Using settings() with a previously nonexistent key should work correctly
    """
    key = 'thisshouldnevereverexistseriouslynow'
    value = 'a winner is you'
    with settings(**{key: value}):
        ok_(key in env)
    ok_(key not in env)


def test_settings():
    """
    settings() should temporarily override env dict with given key/value pair
    """
    env.testval = "outer value"
    with settings(testval="inner value"):
        eq_(env.testval, "inner value")
    eq_(env.testval, "outer value")


def test_settings_with_multiple_kwargs():
    """
    settings() should temporarily override env dict with given key/value pairS
    """
    env.testval1 = "outer 1"
    env.testval2 = "outer 2"
    with settings(testval1="inner 1", testval2="inner 2"):
        eq_(env.testval1, "inner 1")
        eq_(env.testval2, "inner 2")
    eq_(env.testval1, "outer 1")
    eq_(env.testval2, "outer 2")


def test_settings_with_other_context_managers():
    """
    settings() should take other context managers, and use them with other
    overridden key/value pairs.
""" env.testval1 = "outer 1" prev_lcwd = env.lcwd with settings(lcd("here"), testval1="inner 1"): eq_(env.testval1, "inner 1") ok_(env.lcwd.endswith("here")) # Should be the side-effect of adding cd to settings ok_(env.testval1, "outer 1") eq_(env.lcwd, prev_lcwd) def test_settings_clean_revert(): """ settings(clean_revert=True) should only revert values matching input values """ env.modified = "outer" env.notmodified = "outer" with settings( modified="inner", notmodified="inner", inner_only="only", clean_revert=True ): eq_(env.modified, "inner") eq_(env.notmodified, "inner") eq_(env.inner_only, "only") env.modified = "modified internally" eq_(env.modified, "modified internally") ok_("inner_only" not in env) # # shell_env() # def test_shell_env(): """ shell_env() sets the shell_env attribute in the env dict """ with shell_env(KEY="value"): eq_(env.shell_env['KEY'], 'value') eq_(env.shell_env, {}) class TestQuietAndWarnOnly(FabricTest): @server() @mock_streams('both') def test_quiet_hides_all_output(self): # Sanity test - normally this is not empty run("ls /simple") ok_(sys.stdout.getvalue()) # Reset sys.stdout = StringIO() # Real test with quiet(): run("ls /simple") # Empty output ok_(not sys.stdout.getvalue()) # Reset sys.stdout = StringIO() # Kwarg test run("ls /simple", quiet=True) ok_(not sys.stdout.getvalue()) @server(responses={'barf': [ "this is my stdout", "this is my stderr", 1 ]}) def test_quiet_sets_warn_only_to_true(self): # Sanity test to ensure environment with settings(warn_only=False): with quiet(): eq_(run("barf").return_code, 1) # Kwarg test eq_(run("barf", quiet=True).return_code, 1) @server(responses={'hrm': ["", "", 1]}) @mock_streams('both') def test_warn_only_is_same_as_settings_warn_only(self): with warn_only(): eq_(run("hrm").failed, True) @server() @mock_streams('both') def test_warn_only_does_not_imply_hide_everything(self): with warn_only(): run("ls /simple") assert sys.stdout.getvalue().strip() != "" # path() (distinct from shell_env) class TestPathManager(FabricTest): def setup(self): super(TestPathManager, self).setup() self.real = os.environ.get('PATH') def via_local(self): with hide('everything'): return local("echo $PATH", capture=True) def test_lack_of_path_has_default_local_path(self): """ No use of 'with path' == default local $PATH """ eq_(self.real, self.via_local()) def test_use_of_path_appends_by_default(self): """ 'with path' appends by default """ with path('foo'): eq_(self.via_local(), self.real + ":foo") fabric-1.14.0/tests/test_contrib.py000066400000000000000000000126051315011462000172330ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import with_statement import os from fabric.api import hide, get from fabric.contrib.files import upload_template, contains from fabric.context_managers import lcd from utils import FabricTest, eq_contents from server import server class TestContrib(FabricTest): # Make sure it knows / is a directory. 
# This is in lieu of starting down the "actual honest to god fake operating # system" road...:( @server(responses={'test -d "$(echo /)"': ""}) def test_upload_template_uses_correct_remote_filename(self): """ upload_template() shouldn't munge final remote filename """ template = self.mkfile('template.txt', 'text') with hide('everything'): upload_template(template, '/') assert self.exists_remotely('/template.txt') @server() def test_upload_template_handles_file_destination(self): """ upload_template() should work OK with file and directory destinations """ template = self.mkfile('template.txt', '%(varname)s') local = self.path('result.txt') remote = '/configfile.txt' var = 'foobar' with hide('everything'): upload_template(template, remote, {'varname': var}) get(remote, local) eq_contents(local, var) @server() def test_upload_template_handles_template_dir(self): """ upload_template() should work OK with template dir """ template = self.mkfile('template.txt', '%(varname)s') template_dir = os.path.dirname(template) local = self.path('result.txt') remote = '/configfile.txt' var = 'foobar' with hide('everything'): upload_template( 'template.txt', remote, {'varname': var}, template_dir=template_dir ) get(remote, local) eq_contents(local, var) @server(responses={ 'egrep "text" "/file.txt"': ( "sudo: unable to resolve host fabric", "", 1 )} ) def test_contains_checks_only_succeeded_flag(self): """ contains() should return False on bad grep even if stdout isn't empty """ with hide('everything'): result = contains('/file.txt', 'text', use_sudo=True) assert result == False @server(responses={ r'egrep "Include other\\.conf" "$(echo /etc/apache2/apache2.conf)"': "Include other.conf" }) def test_contains_performs_case_sensitive_search(self): """ contains() should perform a case-sensitive search by default. 
""" with hide('everything'): result = contains('/etc/apache2/apache2.conf', 'Include other.conf', use_sudo=True) assert result == True @server(responses={ r'egrep -i "include Other\\.CONF" "$(echo /etc/apache2/apache2.conf)"': "Include other.conf" }) def test_contains_performs_case_insensitive_search(self): """ contains() should perform a case-insensitive search when passed `case_sensitive=False` """ with hide('everything'): result = contains('/etc/apache2/apache2.conf', 'include Other.CONF', use_sudo=True, case_sensitive=False) assert result == True @server() def test_upload_template_handles_jinja_template(self): """ upload_template() should work OK with Jinja2 template """ template = self.mkfile('template_jinja2.txt', '{{ first_name }}') template_name = os.path.basename(template) template_dir = os.path.dirname(template) local = self.path('result.txt') remote = '/configfile.txt' first_name = u'S\u00E9bastien' with hide('everything'): upload_template(template_name, remote, {'first_name': first_name}, use_jinja=True, template_dir=template_dir) get(remote, local) eq_contents(local, first_name.encode('utf-8')) @server() def test_upload_template_jinja_and_no_template_dir(self): # Crummy doesn't-die test fname = "foo.tpl" try: with hide('everything'): with open(fname, 'w+') as fd: fd.write('whatever') upload_template(fname, '/configfile.txt', {}, use_jinja=True) finally: os.remove(fname) def test_upload_template_obeys_lcd(self): for jinja in (True, False): for mirror in (True, False): self._upload_template_obeys_lcd(jinja=jinja, mirror=mirror) @server() def _upload_template_obeys_lcd(self, jinja, mirror): template_content = {True: '{{ varname }}s', False: '%(varname)s'} template_dir = 'template_dir' template_name = 'template.txt' if not self.exists_locally(self.path(template_dir)): os.mkdir(self.path(template_dir)) self.mkfile( os.path.join(template_dir, template_name), template_content[jinja] ) remote = '/configfile.txt' var = 'foobar' with hide('everything'): with lcd(self.path(template_dir)): upload_template( template_name, remote, {'varname': var}, mirror_local_mode=mirror ) fabric-1.14.0/tests/test_decorators.py000066400000000000000000000152201315011462000177340ustar00rootroot00000000000000from __future__ import with_statement import random import sys from nose.tools import eq_, ok_, assert_true, assert_false, assert_equal import fudge from fudge import Fake, with_fakes, patched_context from fabric import decorators, tasks from fabric.state import env import fabric # for patching fabric.state.xxx from fabric.tasks import _parallel_tasks, requires_parallel, execute from fabric.context_managers import lcd, settings, hide from utils import mock_streams # # Support # def fake_function(*args, **kwargs): """ Returns a ``fudge.Fake`` exhibiting function-like attributes. Passes in all args/kwargs to the ``fudge.Fake`` constructor. However, if ``callable`` or ``expect_call`` kwargs are not given, ``callable`` will be set to True by default. """ # Must define __name__ to be compatible with function wrapping mechanisms # like @wraps(). 
    if 'callable' not in kwargs and 'expect_call' not in kwargs:
        kwargs['callable'] = True
    return Fake(*args, **kwargs).has_attr(__name__='fake')


#
# @task
#

def test_task_returns_an_instance_of_wrappedfunctask_object():
    def foo():
        pass
    task = decorators.task(foo)
    ok_(isinstance(task, tasks.WrappedCallableTask))


def test_task_will_invoke_provided_class():
    def foo():
        pass
    fake = Fake()
    fake.expects("__init__").with_args(foo)
    fudge.clear_calls()
    fudge.clear_expectations()
    foo = decorators.task(foo, task_class=fake)
    fudge.verify()


def test_task_passes_args_to_the_task_class():
    random_vars = ("some text", random.randint(100, 200))
    def foo():
        pass
    fake = Fake()
    fake.expects("__init__").with_args(foo, *random_vars)
    fudge.clear_calls()
    fudge.clear_expectations()
    foo = decorators.task(foo, task_class=fake, *random_vars)
    fudge.verify()


def test_passes_kwargs_to_the_task_class():
    random_vars = {
        "msg": "some text",
        "number": random.randint(100, 200),
    }
    def foo():
        pass
    fake = Fake()
    fake.expects("__init__").with_args(foo, **random_vars)
    fudge.clear_calls()
    fudge.clear_expectations()
    foo = decorators.task(foo, task_class=fake, **random_vars)
    fudge.verify()


def test_integration_tests_for_invoked_decorator_with_no_args():
    r = random.randint(100, 200)

    @decorators.task()
    def foo():
        return r

    eq_(r, foo())


def test_integration_tests_for_decorator():
    r = random.randint(100, 200)

    @decorators.task(task_class=tasks.WrappedCallableTask)
    def foo():
        return r

    eq_(r, foo())


def test_original_non_invoked_style_task():
    r = random.randint(100, 200)

    @decorators.task
    def foo():
        return r

    eq_(r, foo())


#
# @runs_once
#

@with_fakes
def test_runs_once_runs_only_once():
    """
    @runs_once prevents decorated func from running >1 time
    """
    func = fake_function(expect_call=True).times_called(1)
    task = decorators.runs_once(func)
    for i in range(2):
        task()


def test_runs_once_returns_same_value_each_run():
    """
    @runs_once memoizes return value of decorated func
    """
    return_value = "foo"
    task = decorators.runs_once(fake_function().returns(return_value))
    for i in range(2):
        eq_(task(), return_value)


@decorators.runs_once
def single_run():
    pass

def test_runs_once():
    assert_false(hasattr(single_run, 'return_value'))
    single_run()
    assert_true(hasattr(single_run, 'return_value'))
    assert_equal(None, single_run())


#
# @serial / @parallel
#

@decorators.serial
def serial():
    pass

@decorators.serial
@decorators.parallel
def serial2():
    pass

@decorators.parallel
@decorators.serial
def serial3():
    pass

@decorators.parallel
def parallel():
    pass

@decorators.parallel(pool_size=20)
def parallel2():
    pass

fake_tasks = {
    'serial': serial,
    'serial2': serial2,
    'serial3': serial3,
    'parallel': parallel,
    'parallel2': parallel2,
}

def parallel_task_helper(actual_tasks, expected):
    commands_to_run = map(lambda x: [x], actual_tasks)
    with patched_context(fabric.state, 'commands', fake_tasks):
        eq_(_parallel_tasks(commands_to_run), expected)

def test_parallel_tasks():
    for desc, task_names, expected in (
        ("One @serial-decorated task == no parallelism",
            ['serial'], False),
        ("One @parallel-decorated task == parallelism",
            ['parallel'], True),
        ("One @parallel-decorated and one @serial-decorated task == parallelism",
            ['parallel', 'serial'], True),
        ("Tasks decorated with both @serial and @parallel count as @parallel",
            ['serial2', 'serial3'], True)
    ):
        parallel_task_helper.description = desc
        yield parallel_task_helper, task_names, expected
    del parallel_task_helper.description

def test_parallel_wins_vs_serial():
    """
    @parallel takes precedence over @serial when both are used on one task
    """
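    # serial2 (@serial outermost) and serial3 (@parallel outermost) cover
    # both decorator orderings; each should still register as parallel.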
    ok_(requires_parallel(serial2))
    ok_(requires_parallel(serial3))

@mock_streams('stdout')
def test_global_parallel_honors_runs_once():
    """
    fab -P (or env.parallel) should honor @runs_once
    """
    @decorators.runs_once
    def mytask():
        print("yolo")
    with settings(hide('everything'), parallel=True):
        execute(mytask, hosts=['localhost', '127.0.0.1'])
    result = sys.stdout.getvalue()
    eq_(result, "yolo\n")
    assert result != "yolo\nyolo\n"


#
# @roles
#

@decorators.roles('test')
def use_roles():
    pass

def test_roles():
    assert_true(hasattr(use_roles, 'roles'))
    assert_equal(use_roles.roles, ['test'])


#
# @hosts
#

@decorators.hosts('test')
def use_hosts():
    pass

def test_hosts():
    assert_true(hasattr(use_hosts, 'hosts'))
    assert_equal(use_hosts.hosts, ['test'])


#
# @with_settings
#

def test_with_settings_passes_env_vars_into_decorated_function():
    env.value = True
    random_return = random.randint(1000, 2000)
    def some_task():
        return env.value
    decorated_task = decorators.with_settings(value=random_return)(some_task)
    ok_(some_task(), msg="sanity check")
    eq_(random_return, decorated_task())

def test_with_settings_with_other_context_managers():
    """
    with_settings() should take other context managers, and use them with
    other overridden key/value pairs.
    """
    env.testval1 = "outer 1"
    prev_lcwd = env.lcwd

    def some_task():
        eq_(env.testval1, "inner 1")
        ok_(env.lcwd.endswith("here"))  # Should be the side-effect of adding cd to settings

    decorated_task = decorators.with_settings(
        lcd("here"),
        testval1="inner 1"
    )(some_task)
    decorated_task()

    eq_(env.testval1, "outer 1")
    eq_(env.lcwd, prev_lcwd)
fabric-1.14.0/tests/test_io.py000066400000000000000000000017031315011462000161770ustar00rootroot00000000000000from __future__ import with_statement

from nose.tools import eq_

from fabric.io import OutputLooper
from fabric.context_managers import settings


def test_request_prompts():
    """
    Test valid responses from prompts
    """
    def run(txt, prompts):
        with settings(prompts=prompts):
            # try to fulfil the OutputLooper interface, only want to test
            # _get_prompt_response. (str has a method upper)
            ol = OutputLooper(str, 'upper', None, list(txt), None)
            return ol._get_prompt_response()

    prompts = {"prompt2": "response2",
               "prompt1": "response1",
               "prompt": "response"
               }

    eq_(run("this is a prompt for prompt1", prompts),
        ("prompt1", "response1"))
    eq_(run("this is a prompt for prompt2", prompts),
        ("prompt2", "response2"))
    eq_(run("this is a prompt for promptx:", prompts), (None, None))
    eq_(run("prompt for promp", prompts), (None, None))
fabric-1.14.0/tests/test_main.py000066400000000000000000000512421315011462000165170ustar00rootroot00000000000000from __future__ import with_statement

import copy
from functools import partial
from operator import isMappingType
import os.path
import sys

from fudge import Fake, patched_context
from nose.tools import ok_, eq_

from fabric.decorators import hosts, roles, task
from fabric.context_managers import settings
from fabric.main import (parse_arguments, _escape_split, find_fabfile,
    load_fabfile as _load_fabfile, list_commands, _task_names,
    COMMANDS_HEADER, NESTED_REMINDER)
import fabric.state
from fabric.tasks import Task, WrappedCallableTask
from fabric.task_utils import _crawl, crawl, merge

from utils import FabricTest, fabfile, path_prefix, aborts


# Stupid load_fabfile wrapper to hide newly added return value.
# WTB more free time to rewrite all this with objects :) def load_fabfile(*args, **kwargs): return _load_fabfile(*args, **kwargs)[:2] # # Basic CLI stuff # def test_argument_parsing(): for args, output in [ # Basic ('abc', ('abc', [], {}, [], [], [])), # Arg ('ab:c', ('ab', ['c'], {}, [], [], [])), # Kwarg ('a:b=c', ('a', [], {'b':'c'}, [], [], [])), # Arg and kwarg ('a:b=c,d', ('a', ['d'], {'b':'c'}, [], [], [])), # Multiple kwargs ('a:b=c,d=e', ('a', [], {'b':'c','d':'e'}, [], [], [])), # Host ('abc:host=foo', ('abc', [], {}, ['foo'], [], [])), # Hosts with single host ('abc:hosts=foo', ('abc', [], {}, ['foo'], [], [])), # Hosts with multiple hosts # Note: in a real shell, one would need to quote or escape "foo;bar". # But in pure-Python that would get interpreted literally, so we don't. ('abc:hosts=foo;bar', ('abc', [], {}, ['foo', 'bar'], [], [])), # Exclude hosts ('abc:hosts=foo;bar,exclude_hosts=foo', ('abc', [], {}, ['foo', 'bar'], [], ['foo'])), ('abc:hosts=foo;bar,exclude_hosts=foo;bar', ('abc', [], {}, ['foo', 'bar'], [], ['foo','bar'])), # Empty string args ("task:x=y,z=", ('task', [], {'x': 'y', 'z': ''}, [], [], [])), ("task:foo,,x=y", ('task', ['foo', ''], {'x': 'y'}, [], [], [])), ]: yield eq_, parse_arguments([args]), [output] def test_escaped_task_arg_split(): """ Allow backslashes to escape the task argument separator character """ argstr = r"foo,bar\,biz\,baz,what comes after baz?" eq_( _escape_split(',', argstr), ['foo', 'bar,biz,baz', 'what comes after baz?'] ) def test_escaped_task_kwarg_split(): """ Allow backslashes to escape the = in x=y task kwargs """ argstr = r"cmd:arg,escaped\,arg,nota\=kwarg,regular=kwarg,escaped=regular\=kwarg" args = ['arg', 'escaped,arg', 'nota=kwarg'] kwargs = {'regular': 'kwarg', 'escaped': 'regular=kwarg'} eq_( parse_arguments([argstr])[0], ('cmd', args, kwargs, [], [], []), ) # # Host/role decorators # # Allow calling Task.get_hosts as function instead (meh.) def get_hosts_and_effective_roles(command, *args): return WrappedCallableTask(command).get_hosts_and_effective_roles(*args) def eq_hosts(command, expected_hosts, cli_hosts=None, excluded_hosts=None, env=None, func=set): eq_(func(get_hosts_and_effective_roles(command, cli_hosts or [], [], excluded_hosts or [], env)[0]), func(expected_hosts)) def eq_effective_roles(command, expected_effective_roles, cli_roles=None, env=None, func=set): eq_(func(get_hosts_and_effective_roles(command, [], cli_roles or [], [], env)[1]), func(expected_effective_roles)) true_eq_hosts = partial(eq_hosts, func=lambda x: x) def test_hosts_decorator_by_itself(): """ Use of @hosts only """ host_list = ['a', 'b'] @hosts(*host_list) def command(): pass eq_hosts(command, host_list) fake_roles = { 'r1': ['a', 'b'], 'r2': ['b', 'c'] } def test_roles_decorator_by_itself(): """ Use of @roles only """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': fake_roles}) eq_effective_roles(command, ['r1'], env={'roledefs': fake_roles}) def test_roles_decorator_overrides_env_roles(): """ If @roles is used it replaces any env.roles value """ @roles('r1') def command(): pass eq_effective_roles(command, ['r1'], env={'roledefs': fake_roles, 'roles': ['r2']}) def test_cli_roles_override_decorator_roles(): """ If CLI roles are provided they replace roles defined in @roles. 
""" @roles('r1') def command(): pass eq_effective_roles(command, ['r2'], cli_roles=['r2'], env={'roledefs': fake_roles}) def test_hosts_and_roles_together(): """ Use of @roles and @hosts together results in union of both """ @roles('r1', 'r2') @hosts('d') def command(): pass eq_hosts(command, ['a', 'b', 'c', 'd'], env={'roledefs': fake_roles}) eq_effective_roles(command, ['r1', 'r2'], env={'roledefs': fake_roles}) def test_host_role_merge_deduping(): """ Use of @roles and @hosts dedupes when merging """ @roles('r1', 'r2') @hosts('a') def command(): pass # Not ['a', 'a', 'b', 'c'] or etc true_eq_hosts(command, ['a', 'b', 'c'], env={'roledefs': fake_roles}) def test_host_role_merge_deduping_off(): """ Allow turning deduping off """ @roles('r1', 'r2') @hosts('a') def command(): pass with settings(dedupe_hosts=False): true_eq_hosts( command, # 'a' 1x host 1x role # 'b' 1x r1 1x r2 ['a', 'a', 'b', 'b', 'c'], env={'roledefs': fake_roles} ) tuple_roles = { 'r1': ('a', 'b'), 'r2': ('b', 'c'), } def test_roles_as_tuples(): """ Test that a list of roles as a tuple succeeds """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': tuple_roles}) eq_effective_roles(command, ['r1'], env={'roledefs': fake_roles}) def test_hosts_as_tuples(): """ Test that a list of hosts as a tuple succeeds """ def command(): pass eq_hosts(command, ['foo', 'bar'], env={'hosts': ('foo', 'bar')}) def test_hosts_decorator_overrides_env_hosts(): """ If @hosts is used it replaces any env.hosts value """ @hosts('bar') def command(): pass eq_hosts(command, ['bar'], env={'hosts': ['foo']}) def test_hosts_decorator_overrides_env_hosts_with_task_decorator_first(): """ If @hosts is used it replaces any env.hosts value even with @task """ @task @hosts('bar') def command(): pass eq_hosts(command, ['bar'], env={'hosts': ['foo']}) def test_hosts_decorator_overrides_env_hosts_with_task_decorator_last(): @hosts('bar') @task def command(): pass eq_hosts(command, ['bar'], env={'hosts': ['foo']}) def test_hosts_stripped_env_hosts(): """ Make sure hosts defined in env.hosts are cleaned of extra spaces """ def command(): pass myenv = {'hosts': [' foo ', 'bar '], 'roles': [], 'exclude_hosts': []} eq_hosts(command, ['foo', 'bar'], env=myenv) spaced_roles = { 'r1': [' a ', ' b '], 'r2': ['b', 'c'], } def test_roles_stripped_env_hosts(): """ Make sure hosts defined in env.roles are cleaned of extra spaces """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': spaced_roles}) dict_roles = { 'r1': {'hosts': ['a', 'b']}, 'r2': ['b', 'c'], } def test_hosts_in_role_dict(): """ Make sure hosts defined in env.roles are cleaned of extra spaces """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': dict_roles}) def test_hosts_decorator_expands_single_iterable(): """ @hosts(iterable) should behave like @hosts(*iterable) """ host_list = ['foo', 'bar'] @hosts(host_list) def command(): pass eq_(command.hosts, host_list) def test_roles_decorator_expands_single_iterable(): """ @roles(iterable) should behave like @roles(*iterable) """ role_list = ['foo', 'bar'] @roles(role_list) def command(): pass eq_(command.roles, role_list) # # Host exclusion # def dummy(): pass def test_get_hosts_excludes_cli_exclude_hosts_from_cli_hosts(): eq_hosts(dummy, ['bar'], cli_hosts=['foo', 'bar'], excluded_hosts=['foo']) def test_get_hosts_excludes_cli_exclude_hosts_from_decorator_hosts(): @hosts('foo', 'bar') def command(): pass eq_hosts(command, ['bar'], excluded_hosts=['foo']) def 
test_get_hosts_excludes_global_exclude_hosts_from_global_hosts(): fake_env = {'hosts': ['foo', 'bar'], 'exclude_hosts': ['foo']} eq_hosts(dummy, ['bar'], env=fake_env) # # Basic role behavior # @aborts def test_aborts_on_nonexistent_roles(): """ Aborts if any given roles aren't found """ merge([], ['badrole'], [], {}) def test_accepts_non_list_hosts(): """ Coerces given host string to a one-item list """ assert merge('badhosts', [], [], {}) == ['badhosts'] lazy_role = {'r1': lambda: ['a', 'b']} def test_lazy_roles(): """ Roles may be callables returning lists, as well as regular lists """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': lazy_role}) # # Fabfile finding # class TestFindFabfile(FabricTest): """Test Fabric's fabfile discovery mechanism.""" def test_find_fabfile_can_discovery_package(self): """Fabric should be capable of loading a normal package.""" path = self.mkfile("__init__.py", "") name = os.path.dirname(path) assert find_fabfile([name,]) is not None def test_find_fabfile_can_discovery_package_with_pyc_only(self): """ Fabric should be capable of loading a package with __init__.pyc only. """ path = self.mkfile("__init__.pyc", "") name = os.path.dirname(path) assert find_fabfile([name,]) is not None def test_find_fabfile_should_refuse_fake_package(self): """Fabric should refuse to load a non-package directory.""" path = self.mkfile("foo.py", "") name = os.path.dirname(path) assert find_fabfile([name,]) is None # # Fabfile loading # def run_load_fabfile(path, sys_path): # Module-esque object fake_module = Fake().has_attr(__dict__={}) # Fake __import__ importer = Fake(callable=True).returns(fake_module) # Snapshot sys.path for restore orig_path = copy.copy(sys.path) # Update with fake path sys.path = sys_path # Test for side effects load_fabfile(path, importer=importer) eq_(sys.path, sys_path) # Restore sys.path = orig_path def test_load_fabfile_should_not_remove_real_path_elements(): for fabfile_path, sys_dot_path in ( # Directory not in path ('subdir/fabfile.py', ['not_subdir']), ('fabfile.py', ['nope']), # Directory in path, but not at front ('subdir/fabfile.py', ['not_subdir', 'subdir']), ('fabfile.py', ['not_subdir', '']), ('fabfile.py', ['not_subdir', '', 'also_not_subdir']), # Directory in path, and at front already ('subdir/fabfile.py', ['subdir']), ('subdir/fabfile.py', ['subdir', 'not_subdir']), ('fabfile.py', ['', 'some_dir', 'some_other_dir']), ): yield run_load_fabfile, fabfile_path, sys_dot_path # # Namespacing and new-style tasks # class TestTaskAliases(FabricTest): def test_flat_alias(self): f = fabfile("flat_alias.py") with path_prefix(f): docs, funcs = load_fabfile(f) eq_(len(funcs), 2) ok_("foo" in funcs) ok_("foo_aliased" in funcs) def test_nested_alias(self): f = fabfile("nested_alias.py") with path_prefix(f): docs, funcs = load_fabfile(f) ok_("nested" in funcs) eq_(len(funcs["nested"]), 2) ok_("foo" in funcs["nested"]) ok_("foo_aliased" in funcs["nested"]) def test_flat_aliases(self): f = fabfile("flat_aliases.py") with path_prefix(f): docs, funcs = load_fabfile(f) eq_(len(funcs), 3) ok_("foo" in funcs) ok_("foo_aliased" in funcs) ok_("foo_aliased_two" in funcs) def test_nested_aliases(self): f = fabfile("nested_aliases.py") with path_prefix(f): docs, funcs = load_fabfile(f) ok_("nested" in funcs) eq_(len(funcs["nested"]), 3) ok_("foo" in funcs["nested"]) ok_("foo_aliased" in funcs["nested"]) ok_("foo_aliased_two" in funcs["nested"]) class TestNamespaces(FabricTest): def setup(self): # Parent class preserves current env 
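        # (FabricTest.setup() is assumed to snapshot fabric.state.env and its
        # teardown to restore it, so per-test mutations don't leak between
        # tests.)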
        super(TestNamespaces, self).setup()
        # Reset new-style-tasks flag so running tests via Fab itself doesn't
        # muck with it.
        import fabric.state
        if 'new_style_tasks' in fabric.state.env:
            del fabric.state.env['new_style_tasks']

    def test_implicit_discovery(self):
        """
        Default to automatically collecting all tasks in a fabfile module
        """
        implicit = fabfile("implicit_fabfile.py")
        with path_prefix(implicit):
            docs, funcs = load_fabfile(implicit)
            eq_(len(funcs), 2)
            ok_("foo" in funcs)
            ok_("bar" in funcs)

    def test_exception_exclusion(self):
        """
        Exception subclasses should not be considered as tasks
        """
        exceptions = fabfile("exceptions_fabfile.py")
        with path_prefix(exceptions):
            docs, funcs = load_fabfile(exceptions)
            ok_("some_task" in funcs)
            ok_("NotATask" not in funcs)

    def test_explicit_discovery(self):
        """
        If __all__ is present, only collect the tasks it specifies
        """
        explicit = fabfile("explicit_fabfile.py")
        with path_prefix(explicit):
            docs, funcs = load_fabfile(explicit)
            eq_(len(funcs), 1)
            ok_("foo" in funcs)
            ok_("bar" not in funcs)

    def test_should_load_decorated_tasks_only_if_one_is_found(self):
        """
        If any new-style tasks are found, *only* new-style tasks should load
        """
        module = fabfile('decorated_fabfile.py')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            eq_(len(funcs), 1)
            ok_('foo' in funcs)

    def test_class_based_tasks_are_found_with_proper_name(self):
        """
        Wrapped new-style tasks should preserve their function names
        """
        module = fabfile('decorated_fabfile_with_classbased_task.py')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            eq_(len(funcs), 1)
            ok_('foo' in funcs)

    def test_class_based_tasks_are_found_with_variable_name(self):
        """
        A new-style task with an undefined name attribute should use the
        instance variable name.
        """
        module = fabfile('classbased_task_fabfile.py')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            eq_(len(funcs), 1)
            ok_('foo' in funcs)
            eq_(funcs['foo'].name, 'foo')

    def test_recursion_steps_into_nontask_modules(self):
        """
        Recursive loading will continue through modules with no tasks
        """
        module = fabfile('deep')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            eq_(len(funcs), 1)
            ok_('submodule.subsubmodule.deeptask' in _task_names(funcs))

    def test_newstyle_task_presence_skips_classic_task_modules(self):
        """
        Classic-task-only modules shouldn't add tasks if any new-style tasks
        exist
        """
        module = fabfile('deep')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            eq_(len(funcs), 1)
            ok_('submodule.classic_task' not in _task_names(funcs))

    def test_task_decorator_plays_well_with_others(self):
        """
        @task, when inside @hosts/@roles, should not hide the decorated task.
        """
        module = fabfile('decorator_order')
        with path_prefix(module):
            docs, funcs = load_fabfile(module)
            # When broken, crawl() finds None for 'foo' instead.
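            # crawl() resolves dotted names through nested task mappings --
            # e.g. crawl('a.b', {'a': {'b': t}}) returns t, and a miss yields
            # None (see test_crawl below).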
eq_(crawl('foo', funcs), funcs['foo']) # # --list output # def eq_output(docstring, format_, expected): return eq_( "\n".join(list_commands(docstring, format_)), expected ) def list_output(module, format_, expected): module = fabfile(module) with path_prefix(module): docstring, tasks = load_fabfile(module) with patched_context(fabric.state, 'commands', tasks): eq_output(docstring, format_, expected) def test_list_output(): lead = ":\n\n " normal_head = COMMANDS_HEADER + lead nested_head = COMMANDS_HEADER + NESTED_REMINDER + lead for desc, module, format_, expected in ( ("shorthand (& with namespacing)", 'deep', 'short', "submodule.subsubmodule.deeptask"), ("normal (& with namespacing)", 'deep', 'normal', normal_head + "submodule.subsubmodule.deeptask"), ("normal (with docstring)", 'docstring', 'normal', normal_head + "foo Foos!"), ("nested (leaf only)", 'deep', 'nested', nested_head + """submodule: subsubmodule: deeptask"""), ("nested (full)", 'tree', 'nested', nested_head + """build_docs deploy db: migrate system: install_package debian: update_apt"""), ): list_output.description = "--list output: %s" % desc yield list_output, module, format_, expected del list_output.description def name_to_task(name): t = Task() t.name = name return t def strings_to_tasks(d): ret = {} for key, value in d.iteritems(): if isMappingType(value): val = strings_to_tasks(value) else: val = name_to_task(value) ret[key] = val return ret def test_task_names(): for desc, input_, output in ( ('top level (single)', {'a': 5}, ['a']), ('top level (multiple, sorting)', {'a': 5, 'b': 6}, ['a', 'b']), ('just nested', {'a': {'b': 5}}, ['a.b']), ('mixed', {'a': 5, 'b': {'c': 6}}, ['a', 'b.c']), ('top level comes before nested', {'z': 5, 'b': {'c': 6}}, ['z', 'b.c']), ('peers sorted equally', {'z': 5, 'b': {'c': 6}, 'd': {'e': 7}}, ['z', 'b.c', 'd.e']), ( 'complex tree', { 'z': 5, 'b': { 'c': 6, 'd': { 'e': { 'f': '7' } }, 'g': 8 }, 'h': 9, 'w': { 'y': 10 } }, ['h', 'z', 'b.c', 'b.g', 'b.d.e.f', 'w.y'] ), ): eq_.description = "task name flattening: %s" % desc yield eq_, _task_names(strings_to_tasks(input_)), output del eq_.description def test_crawl(): for desc, name, mapping, output in ( ("base case", 'a', {'a': 5}, 5), ("one level", 'a.b', {'a': {'b': 5}}, 5), ("deep", 'a.b.c.d.e', {'a': {'b': {'c': {'d': {'e': 5}}}}}, 5), ("full tree", 'a.b.c', {'a': {'b': {'c': 5}, 'd': 6}, 'z': 7}, 5) ): eq_.description = "crawling dotted names: %s" % desc yield eq_, _crawl(name, mapping), output del eq_.description def test_mapping_task_classes(): """ Task classes implementing the mapping interface shouldn't break --list """ list_output('mapping', 'normal', COMMANDS_HEADER + """:\n mapping_task""") def test_default_task_listings(): """ @task(default=True) should cause task to also load under module's name """ for format_, expected in ( ('short', """mymodule mymodule.long_task_name"""), ('normal', COMMANDS_HEADER + """:\n mymodule mymodule.long_task_name"""), ('nested', COMMANDS_HEADER + NESTED_REMINDER + """:\n mymodule: long_task_name""") ): list_output.description = "Default task --list output: %s" % format_ yield list_output, 'default_tasks', format_, expected del list_output.description def test_default_task_loading(): """ crawl() should return default tasks where found, instead of module objs """ docs, tasks = load_fabfile(fabfile('default_tasks')) ok_(isinstance(crawl('mymodule', tasks), Task)) def test_aliases_appear_in_fab_list(): """ --list should include aliases """ list_output('nested_alias', 'short', """nested.foo 
nested.foo_aliased""") fabric-1.14.0/tests/test_network.py000066400000000000000000000645041315011462000172710ustar00rootroot00000000000000from __future__ import with_statement import sys from nose.tools import ok_, raises from fudge import (Fake, patch_object, with_patched_object, patched_context, with_fakes) from fabric.context_managers import settings, hide, show from fabric.network import (HostConnectionCache, join_host_strings, normalize, denormalize, key_filenames, ssh, NetworkError, connect) from fabric.state import env, output, _get_system_username from fabric.operations import run, sudo, prompt from fabric.tasks import execute from fabric.api import parallel from fabric import utils # for patching from mock_streams import mock_streams from server import (server, RESPONSES, PASSWORDS, CLIENT_PRIVKEY, USER, CLIENT_PRIVKEY_PASSPHRASE) from utils import (FabricTest, aborts, assert_contains, eq_, password_response, patched_input, support) # # Subroutines, e.g. host string normalization # class TestNetwork(FabricTest): def test_host_string_normalization(self): username = _get_system_username() for description, input, output_ in ( ("Sanity check: equal strings remain equal", 'localhost', 'localhost'), ("Empty username is same as get_system_username", 'localhost', username + '@localhost'), ("Empty port is same as port 22", 'localhost', 'localhost:22'), ("Both username and port tested at once, for kicks", 'localhost', username + '@localhost:22'), ): eq_.description = "Host-string normalization: %s" % description yield eq_, normalize(input), normalize(output_) del eq_.description def test_normalization_for_ipv6(self): """ normalize() will accept IPv6 notation and can separate host and port """ username = _get_system_username() for description, input, output_ in ( ("Full IPv6 address", '2001:DB8:0:0:0:0:0:1', (username, '2001:DB8:0:0:0:0:0:1', '22')), ("IPv6 address in short form", '2001:DB8::1', (username, '2001:DB8::1', '22')), ("IPv6 localhost", '::1', (username, '::1', '22')), ("Square brackets are required to separate non-standard port from IPv6 address", '[2001:DB8::1]:1222', (username, '2001:DB8::1', '1222')), ("Username and IPv6 address", 'user@2001:DB8::1', ('user', '2001:DB8::1', '22')), ("Username and IPv6 address with non-standard port", 'user@[2001:DB8::1]:1222', ('user', '2001:DB8::1', '1222')), ): eq_.description = "Host-string IPv6 normalization: %s" % description yield eq_, normalize(input), output_ del eq_.description def test_normalization_without_port(self): """ normalize() and join_host_strings() omit port if omit_port given """ eq_( join_host_strings(*normalize('user@localhost', omit_port=True)), 'user@localhost' ) def test_ipv6_host_strings_join(self): """ join_host_strings() should use square brackets only for IPv6 and if port is given """ eq_( join_host_strings('user', '2001:DB8::1'), 'user@2001:DB8::1' ) eq_( join_host_strings('user', '2001:DB8::1', '1222'), 'user@[2001:DB8::1]:1222' ) eq_( join_host_strings('user', '192.168.0.0', '1222'), 'user@192.168.0.0:1222' ) def test_nonword_character_in_username(self): """ normalize() will accept non-word characters in the username part """ eq_( normalize('user-with-hyphens@someserver.org')[0], 'user-with-hyphens' ) def test_at_symbol_in_username(self): """ normalize() should allow '@' in usernames (i.e. 
last '@' is split char) """ parts = normalize('user@example.com@www.example.com') eq_(parts[0], 'user@example.com') eq_(parts[1], 'www.example.com') def test_normalization_of_empty_input(self): empties = ('', '', '') for description, input in ( ("empty string", ''), ("None", None) ): template = "normalize() returns empty strings for %s input" eq_.description = template % description yield eq_, normalize(input), empties del eq_.description def test_host_string_denormalization(self): username = _get_system_username() for description, string1, string2 in ( ("Sanity check: equal strings remain equal", 'localhost', 'localhost'), ("Empty username is same as get_system_username", 'localhost:22', username + '@localhost:22'), ("Empty port is same as port 22", 'user@localhost', 'user@localhost:22'), ("Both username and port", 'localhost', username + '@localhost:22'), ("IPv6 address", '2001:DB8::1', username + '@[2001:DB8::1]:22'), ): eq_.description = "Host-string denormalization: %s" % description yield eq_, denormalize(string1), denormalize(string2) del eq_.description # # Connection caching # @staticmethod @with_fakes def check_connection_calls(host_strings, num_calls): # Clear Fudge call stack # Patch connect() with Fake obj set to expect num_calls calls patched_connect = patch_object('fabric.network', 'connect', Fake('connect', expect_call=True).times_called(num_calls) ) try: # Make new cache object cache = HostConnectionCache() # Connect to all connection strings for host_string in host_strings: # Obtain connection from cache, potentially calling connect() cache[host_string] finally: # Restore connect() patched_connect.restore() def test_connection_caching(self): for description, host_strings, num_calls in ( ("Two different host names, two connections", ('localhost', 'other-system'), 2), ("Same host twice, one connection", ('localhost', 'localhost'), 1), ("Same host twice, different ports, two connections", ('localhost:22', 'localhost:222'), 2), ("Same host twice, different users, two connections", ('user1@localhost', 'user2@localhost'), 2), ): TestNetwork.check_connection_calls.description = description yield TestNetwork.check_connection_calls, host_strings, num_calls def test_connection_cache_deletion(self): """ HostConnectionCache should delete correctly w/ non-full keys """ hcc = HostConnectionCache() fake = Fake('connect', callable=True) with patched_context('fabric.network', 'connect', fake): for host_string in ('hostname', 'user@hostname', 'user@hostname:222'): # Prime hcc[host_string] # Test ok_(host_string in hcc) # Delete del hcc[host_string] # Test ok_(host_string not in hcc) # # Connection loop flow # @server() def test_saved_authentication_returns_client_object(self): cache = HostConnectionCache() assert isinstance(cache[env.host_string], ssh.SSHClient) @server() @with_fakes def test_prompts_for_password_without_good_authentication(self): env.password = None with password_response(PASSWORDS[env.user], times_called=1): cache = HostConnectionCache() cache[env.host_string] @aborts def test_aborts_on_prompt_with_abort_on_prompt(self): """ abort_on_prompt=True should abort when prompt() is used """ env.abort_on_prompts = True prompt("This will abort") @server() @aborts def test_aborts_on_password_prompt_with_abort_on_prompt(self): """ abort_on_prompt=True should abort when password prompts occur """ env.password = None env.abort_on_prompts = True with password_response(PASSWORDS[env.user], times_called=1): cache = HostConnectionCache() cache[env.host_string] @with_fakes 
    @raises(NetworkError)
    def test_connect_does_not_prompt_password_when_ssh_raises_channel_exception(self):
        def raise_channel_exception_once(*args, **kwargs):
            if raise_channel_exception_once.should_raise_channel_exception:
                raise_channel_exception_once.should_raise_channel_exception = False
                raise ssh.ChannelException(2, 'Connect failed')
        raise_channel_exception_once.should_raise_channel_exception = True

        def generate_fake_client():
            fake_client = Fake('SSHClient', allows_any_call=True, expect_call=True)
            fake_client.provides('connect').calls(raise_channel_exception_once)
            return fake_client

        fake_ssh = Fake('ssh', allows_any_call=True)
        fake_ssh.provides('SSHClient').calls(generate_fake_client)
        # We need the real exceptions here to preserve the inheritance structure
        fake_ssh.SSHException = ssh.SSHException
        fake_ssh.ChannelException = ssh.ChannelException
        patched_connect = patch_object('fabric.network', 'ssh', fake_ssh)
        patched_password = patch_object(
            'fabric.network', 'prompt_for_password',
            Fake('prompt_for_password', callable=True).times_called(0))
        try:
            connect('user', 'localhost', 22, HostConnectionCache())
        finally:
            # Restore ssh
            patched_connect.restore()
            patched_password.restore()

    @mock_streams('stdout')
    @server()
    def test_does_not_abort_with_password_and_host_with_abort_on_prompt(self):
        """
        abort_on_prompt=True should not abort if no prompts are needed
        """
        env.abort_on_prompts = True
        env.password = PASSWORDS[env.user]
        # env.host_string is automatically filled in when using server()
        run("ls /simple")

    @mock_streams('stdout')
    @server()
    def test_trailing_newline_line_drop(self):
        """
        Trailing newlines shouldn't cause last line to be dropped.
        """
        # Multiline output with trailing newline
        cmd = "ls /"
        output_string = RESPONSES[cmd]
        # TODO: fix below lines, duplicates inner workings of tested code
        prefix = "[%s] out: " % env.host_string
        expected = prefix + ('\n' + prefix).join(output_string.split('\n'))
        # Create, tie off thread
        with settings(show('everything'), hide('running')):
            result = run(cmd)
        # Test equivalence of expected, received output
        eq_(expected, sys.stdout.getvalue())
        # Also test that the captured value matches, too.
        eq_(output_string, result)

    @server()
    def test_sudo_prompt_kills_capturing(self):
        """
        Sudo prompts shouldn't screw up output capturing
        """
        cmd = "ls /simple"
        with hide('everything'):
            eq_(sudo(cmd), RESPONSES[cmd])

    @server()
    def test_password_memory_on_user_switch(self):
        """
        Switching users mid-session should not screw up password memory
        """
        def _to_user(user):
            return join_host_strings(user, env.host, env.port)

        user1 = 'root'
        user2 = USER
        with settings(hide('everything'), password=None):
            # Connect as user1 (thus populating both the fallback and
            # user-specific caches)
            with settings(
                password_response(PASSWORDS[user1]),
                host_string=_to_user(user1)
            ):
                run("ls /simple")
            # Connect as user2:
            # * First cxn attempt will use the fallback cache, which contains
            #   user1's password, and thus fail
            # * Second cxn attempt will prompt the user, and succeed due to
            #   the mocked prompt_for_password
            # * ...but will NOT overwrite the fallback cache
            with settings(
                password_response(PASSWORDS[user2]),
                host_string=_to_user(user2)
            ):
                # Just to trigger connection
                run("ls /simple")
            # * Sudo call should use the cached user2 password, NOT the
            #   fallback cache, and thus succeed. (I.e. prompt_for_password
            #   should NOT be called here.)
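            # Hedged sketch of the cache state this relies on (assuming
            # Fabric keys cached passwords by full 'user@host:port' strings):
            #   env.passwords[_to_user(user1)] -> PASSWORDS['root']
            #   env.passwords[_to_user(user2)] -> PASSWORDS[USER]
            # so the sudo() below can authenticate without prompting.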
with settings( password_response('whatever', times_called=0), host_string=_to_user(user2) ): sudo("ls /simple") @mock_streams('stderr') @server() def test_password_prompt_displays_host_string(self): """ Password prompt lines should include the user/host in question """ env.password = None env.no_agent = env.no_keys = True output.everything = False with password_response(PASSWORDS[env.user], silent=False): run("ls /simple") regex = r'^\[%s\] Login password for \'%s\': ' % (env.host_string, env.user) assert_contains(regex, sys.stderr.getvalue()) @mock_streams('stderr') @server(pubkeys=True) def test_passphrase_prompt_displays_host_string(self): """ Passphrase prompt lines should include the user/host in question """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY output.everything = False with password_response(CLIENT_PRIVKEY_PASSPHRASE, silent=False): run("ls /simple") regex = r'^\[%s\] Login password for \'%s\': ' % (env.host_string, env.user) assert_contains(regex, sys.stderr.getvalue()) def test_sudo_prompt_display_passthrough(self): """ Sudo prompt should display (via passthrough) when stdout/stderr shown """ TestNetwork._prompt_display(True) def test_sudo_prompt_display_directly(self): """ Sudo prompt should display (manually) when stdout/stderr hidden """ TestNetwork._prompt_display(False) @staticmethod @mock_streams('both') @server(pubkeys=True, responses={'oneliner': 'result'}) def _prompt_display(display_output): env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY output.output = display_output with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[env.user]), silent=False ): sudo('oneliner') if display_output: expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: sudo password: [%(prefix)s] out: Sorry, try again. [%(prefix)s] out: sudo password: [%(prefix)s] out: result """ % {'prefix': env.host_string, 'user': env.user} else: # Note lack of first sudo prompt (as it's autoresponded to) and of # course the actual result output. expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: Sorry, try again. [%(prefix)s] out: sudo password: """ % { 'prefix': env.host_string, 'user': env.user } eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_consecutive_sudos_should_not_have_blank_line(self): """ Consecutive sudo() calls should not incur a blank line in-between """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): sudo('oneliner') sudo('twoliner') expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: sudo password: [%(prefix)s] out: Sorry, try again. 
[%(prefix)s] out: sudo password: [%(prefix)s] out: result [%(prefix)s] sudo: twoliner [%(prefix)s] out: sudo password: [%(prefix)s] out: result1 [%(prefix)s] out: result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(sys.stdall.getvalue(), expected[1:]) @mock_streams('both') @server(pubkeys=True, responses={'silent': '', 'normal': 'foo'}) def test_silent_commands_should_not_have_blank_line(self): """ Silent commands should not generate an extra trailing blank line After the move to interactive I/O, it was noticed that while run/sudo commands which had non-empty stdout worked normally (consecutive such commands were totally adjacent), those with no stdout (i.e. silent commands like ``test`` or ``mkdir``) resulted in spurious blank lines after the "run:" line. This looks quite ugly in real world scripts. """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response(CLIENT_PRIVKEY_PASSPHRASE, silent=False): run('normal') run('silent') run('normal') with hide('everything'): run('normal') run('silent') expected = """ [%(prefix)s] run: normal [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: foo [%(prefix)s] run: silent [%(prefix)s] run: normal [%(prefix)s] out: foo """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_io_should_print_prefix_if_ouput_prefix_is_true(self): """ run/sudo should print [host_string] if env.output_prefix == True """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): run('oneliner') run('twoliner') expected = """ [%(prefix)s] run: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: result [%(prefix)s] run: twoliner [%(prefix)s] out: result1 [%(prefix)s] out: result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_io_should_not_print_prefix_if_ouput_prefix_is_false(self): """ run/sudo shouldn't print [host_string] if env.output_prefix == False """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): with settings(output_prefix=False): run('oneliner') run('twoliner') expected = """ [%(prefix)s] run: oneliner [%(prefix)s] Login password for '%(user)s': result [%(prefix)s] run: twoliner result1 result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @server() def test_env_host_set_when_host_prompt_used(self): """ Ensure env.host is set during host prompting """ copied_host_string = str(env.host_string) fake = Fake('raw_input', callable=True).returns(copied_host_string) env.host_string = None env.host = None with settings(hide('everything'), patched_input(fake)): run("ls /") # Ensure it did set host_string back to old value eq_(env.host_string, copied_host_string) # Ensure env.host is correct eq_(env.host, normalize(copied_host_string)[1]) def subtask(): run("This should never execute") class TestConnections(FabricTest): @aborts def test_should_abort_when_cannot_connect(self): """ By default, connecting to a nonexistent server should abort. 
""" with hide('everything'): execute(subtask, hosts=['nope.nonexistent.com']) def test_should_warn_when_skip_bad_hosts_is_True(self): """ env.skip_bad_hosts = True => execute() skips current host """ with settings(hide('everything'), skip_bad_hosts=True): execute(subtask, hosts=['nope.nonexistent.com']) @server() def test_host_not_in_known_hosts_exception(self): """ Check reject_unknown_hosts exception """ with settings( hide('everything'), password=None, reject_unknown_hosts=True, disable_known_hosts=True, abort_on_prompts=True, ): try: run("echo foo") except NetworkError as exc: exp = "Server '[127.0.0.1]:2200' not found in known_hosts" assert str(exc) == exp, "%s != %s" % (exc, exp) else: raise AssertionError("Host connected without valid " "fingerprint.") @parallel def parallel_subtask(): run("This should never execute") class TestParallelConnections(FabricTest): @aborts def test_should_abort_when_cannot_connect(self): """ By default, connecting to a nonexistent server should abort. """ with hide('everything'): execute(parallel_subtask, hosts=['nope.nonexistent.com']) def test_should_warn_when_skip_bad_hosts_is_True(self): """ env.skip_bad_hosts = True => execute() skips current host """ with settings(hide('everything'), skip_bad_hosts=True): execute(parallel_subtask, hosts=['nope.nonexistent.com']) class TestSSHConfig(FabricTest): def env_setup(self): super(TestSSHConfig, self).env_setup() env.use_ssh_config = True env.ssh_config_path = support("ssh_config") # Undo the changes FabricTest makes to env for server support env.user = env.local_user env.port = env.default_port def test_global_user_with_default_env(self): """ Global User should override default env.user """ eq_(normalize("localhost")[0], "satan") def test_global_user_with_nondefault_env(self): """ Global User should NOT override nondefault env.user """ with settings(user="foo"): eq_(normalize("localhost")[0], "foo") def test_specific_user_with_default_env(self): """ Host-specific User should override default env.user """ eq_(normalize("myhost")[0], "neighbor") def test_user_vs_host_string_value(self): """ SSH-config derived user should NOT override host-string user value """ eq_(normalize("myuser@localhost")[0], "myuser") eq_(normalize("myuser@myhost")[0], "myuser") def test_global_port_with_default_env(self): """ Global Port should override default env.port """ eq_(normalize("localhost")[2], "666") def test_global_port_with_nondefault_env(self): """ Global Port should NOT override nondefault env.port """ with settings(port="777", use_ssh_config=False): eq_(normalize("localhost")[2], "777") def test_specific_port_with_default_env(self): """ Host-specific Port should override default env.port """ eq_(normalize("myhost")[2], "664") def test_port_vs_host_string_value(self): """ SSH-config derived port should NOT override host-string port value """ eq_(normalize("localhost:123")[2], "123") eq_(normalize("myhost:123")[2], "123") def test_hostname_alias(self): """ Hostname setting overrides host string's host value """ eq_(normalize("localhost")[1], "localhost") eq_(normalize("myalias")[1], "otherhost") @with_patched_object(utils, 'warn', Fake('warn', callable=True, expect_call=True)) def test_warns_with_bad_config_file_path(self): # use_ssh_config is already set in our env_setup() with settings(hide('everything'), ssh_config_path="nope_bad_lol"): normalize('foo') @server() def test_real_connection(self): """ Test-server connection using ssh_config values """ with settings( hide('everything'), 
ssh_config_path=support("testserver_ssh_config"), host_string='testserver', ): ok_(run("ls /simple").succeeded) class TestKeyFilenames(FabricTest): def test_empty_everything(self): """ No env.key_filename and no ssh_config = empty list """ with settings(use_ssh_config=False): with settings(key_filename=""): eq_(key_filenames(), []) with settings(key_filename=[]): eq_(key_filenames(), []) def test_just_env(self): """ Valid env.key_filename and no ssh_config = just env """ with settings(use_ssh_config=False): with settings(key_filename="mykey"): eq_(key_filenames(), ["mykey"]) with settings(key_filename=["foo", "bar"]): eq_(key_filenames(), ["foo", "bar"]) def test_just_ssh_config(self): """ No env.key_filename + valid ssh_config = ssh value """ with settings(use_ssh_config=True, ssh_config_path=support("ssh_config")): for val in ["", []]: with settings(key_filename=val): eq_(key_filenames(), ["foobar.pub"]) def test_both(self): """ Both env.key_filename + valid ssh_config = both show up w/ env var first """ with settings(use_ssh_config=True, ssh_config_path=support("ssh_config")): with settings(key_filename="bizbaz.pub"): eq_(key_filenames(), ["bizbaz.pub", "foobar.pub"]) with settings(key_filename=["bizbaz.pub", "whatever.pub"]): expected = ["bizbaz.pub", "whatever.pub", "foobar.pub"] eq_(key_filenames(), expected) fabric-1.14.0/tests/test_operations.py000066400000000000000000001073761315011462000177700ustar00rootroot00000000000000from __future__ import with_statement import os import re import shutil import sys from contextlib import nested from StringIO import StringIO from nose.tools import ok_, raises from fudge import patched_context, with_fakes, Fake from fudge.inspector import arg as fudge_arg from mock_streams import mock_streams from paramiko.sftp_client import SFTPClient # for patching from fabric.state import env, output from fabric.operations import require, prompt, _sudo_prefix, _shell_wrap, \ _shell_escape from fabric.api import get, put, hide, show, cd, lcd, local, run, sudo, quiet from fabric.context_managers import settings from fabric.exceptions import CommandTimeout from fabric.sftp import SFTP from fabric.decorators import with_settings from utils import (eq_, aborts, assert_contains, eq_contents, with_patched_input, FabricTest) from server import server, FILES # # require() # def test_require_single_existing_key(): """ When given a single existing key, require() throws no exceptions """ # 'version' is one of the default values, so we know it'll be there require('version') def test_require_multiple_existing_keys(): """ When given multiple existing keys, require() throws no exceptions """ require('version', 'sudo_prompt') @aborts def test_require_single_missing_key(): """ When given a single non-existent key, require() aborts """ require('blah') @aborts def test_require_multiple_missing_keys(): """ When given multiple non-existent keys, require() aborts """ require('foo', 'bar') @aborts def test_require_mixed_state_keys(): """ When given mixed-state keys, require() aborts """ require('foo', 'version') @mock_streams('stderr') def test_require_mixed_state_keys_prints_missing_only(): """ When given mixed-state keys, require() prints missing keys only """ try: require('foo', 'version') except SystemExit: err = sys.stderr.getvalue() assert 'version' not in err assert 'foo' in err @aborts def test_require_iterable_provided_by_key(): """ When given a provided_by iterable value, require() aborts """ # 'version' is one of the default values, so we know it'll be there def 
fake_providing_function(): pass require('foo', provided_by=[fake_providing_function]) @aborts def test_require_noniterable_provided_by_key(): """ When given a provided_by noniterable value, require() aborts """ # 'version' is one of the default values, so we know it'll be there def fake_providing_function(): pass require('foo', provided_by=fake_providing_function) @aborts def test_require_key_exists_empty_list(): """ When given a single existing key but the value is an empty list, require() aborts """ # 'hosts' is one of the default values, so we know it'll be there require('hosts') @aborts @with_settings(foo={}) def test_require_key_exists_empty_dict(): """ When given a single existing key but the value is an empty dict, require() aborts """ require('foo') @aborts @with_settings(foo=()) def test_require_key_exists_empty_tuple(): """ When given a single existing key but the value is an empty tuple, require() aborts """ require('foo') @aborts @with_settings(foo=set()) def test_require_key_exists_empty_set(): """ When given a single existing key but the value is an empty set, require() aborts """ require('foo') @with_settings(foo=0, bar=False) def test_require_key_exists_false_primitive_values(): """ When given keys that exist with primitive values that evaluate to False, require() throws no exception """ require('foo', 'bar') @with_settings(foo=['foo'], bar={'bar': 'bar'}, baz=('baz',), qux=set('qux')) def test_require_complex_non_empty_values(): """ When given keys that exist with non-primitive values that are not empty, require() throws no exception """ require('foo', 'bar', 'baz', 'qux') # # prompt() # def p(x): sys.stdout.write(x) @mock_streams('stdout') @with_patched_input(p) def test_prompt_appends_space(): """ prompt() appends a single space when no default is given """ s = "This is my prompt" prompt(s) eq_(sys.stdout.getvalue(), s + ' ') @mock_streams('stdout') @with_patched_input(p) def test_prompt_with_default(): """ prompt() appends given default value plus one space on either side """ s = "This is my prompt" d = "default!" 
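    # Per the assertion below, prompt() renders the default wrapped in
    # square brackets plus a trailing space:
    #   'This is my prompt [default!] '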
prompt(s, default=d) eq_(sys.stdout.getvalue(), "%s [%s] " % (s, d)) # # run()/sudo() # def test_sudo_prefix_with_user(): """ _sudo_prefix() returns prefix plus -u flag for nonempty user """ eq_( _sudo_prefix(user="foo", group=None), "%s -u \"foo\" " % (env.sudo_prefix % env) ) def test_sudo_prefix_without_user(): """ _sudo_prefix() returns standard prefix when user is empty """ eq_(_sudo_prefix(user=None, group=None), env.sudo_prefix % env) def test_sudo_prefix_with_group(): """ _sudo_prefix() returns prefix plus -g flag for nonempty group """ eq_( _sudo_prefix(user=None, group="foo"), "%s -g \"foo\" " % (env.sudo_prefix % env) ) def test_sudo_prefix_with_user_and_group(): """ _sudo_prefix() returns prefix plus -u and -g for nonempty user and group """ eq_( _sudo_prefix(user="foo", group="bar"), "%s -u \"foo\" -g \"bar\" " % (env.sudo_prefix % env) ) @with_settings(use_shell=True) def test_shell_wrap(): prefix = "prefix" command = "command" for description, shell, sudo_prefix, result in ( ("shell=True, sudo_prefix=None", True, None, '%s "%s"' % (env.shell, command)), ("shell=True, sudo_prefix=string", True, prefix, prefix + ' %s "%s"' % (env.shell, command)), ("shell=False, sudo_prefix=None", False, None, command), ("shell=False, sudo_prefix=string", False, prefix, prefix + " " + command), ): eq_.description = "_shell_wrap: %s" % description yield eq_, _shell_wrap(command, shell_escape=True, shell=shell, sudo_prefix=sudo_prefix), result del eq_.description @with_settings(use_shell=True) def test_shell_wrap_escapes_command_if_shell_is_true(): """ _shell_wrap() escapes given command if shell=True """ cmd = "cd \"Application Support\"" eq_( _shell_wrap(cmd, shell_escape=True, shell=True), '%s "%s"' % (env.shell, _shell_escape(cmd)) ) @with_settings(use_shell=True) def test_shell_wrap_does_not_escape_command_if_shell_is_true_and_shell_escape_is_false(): """ _shell_wrap() does no escaping if shell=True and shell_escape=False """ cmd = "cd \"Application Support\"" eq_( _shell_wrap(cmd, shell_escape=False, shell=True), '%s "%s"' % (env.shell, cmd) ) def test_shell_wrap_does_not_escape_command_if_shell_is_false(): """ _shell_wrap() does no escaping if shell=False """ cmd = "cd \"Application Support\"" eq_(_shell_wrap(cmd, shell_escape=True, shell=False), cmd) def test_shell_escape_escapes_doublequotes(): """ _shell_escape() escapes double-quotes """ cmd = "cd \"Application Support\"" eq_(_shell_escape(cmd), 'cd \\"Application Support\\"') def test_shell_escape_escapes_dollar_signs(): """ _shell_escape() escapes dollar signs """ cmd = "cd $HOME" eq_(_shell_escape(cmd), 'cd \$HOME') def test_shell_escape_escapes_backticks(): """ _shell_escape() escapes backticks """ cmd = "touch test.pid && kill `cat test.pid`" eq_(_shell_escape(cmd), "touch test.pid && kill \`cat test.pid\`") class TestCombineStderr(FabricTest): @server() def test_local_none_global_true(self): """ combine_stderr: no kwarg => uses global value (True) """ output.everything = False r = run("both_streams") # Note: the exact way the streams are jumbled here is an implementation # detail of our fake SSH server and may change in the future. 
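        # The fake server interleaves the two streams one character at a
        # time, so 'stdout' + 'stderr' arrive as:
        #   s+s, t+t, d+d, o+e, u+r, t+r -> 'ssttddoeurtr'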
eq_("ssttddoeurtr", r.stdout) eq_(r.stderr, "") @server() def test_local_none_global_false(self): """ combine_stderr: no kwarg => uses global value (False) """ output.everything = False env.combine_stderr = False r = run("both_streams") eq_("stdout", r.stdout) eq_("stderr", r.stderr) @server() def test_local_true_global_false(self): """ combine_stderr: True kwarg => overrides global False value """ output.everything = False env.combine_stderr = False r = run("both_streams", combine_stderr=True) eq_("ssttddoeurtr", r.stdout) eq_(r.stderr, "") @server() def test_local_false_global_true(self): """ combine_stderr: False kwarg => overrides global True value """ output.everything = False env.combine_stderr = True r = run("both_streams", combine_stderr=False) eq_("stdout", r.stdout) eq_("stderr", r.stderr) class TestQuietAndWarnKwargs(FabricTest): @server(responses={'wat': ["", "", 1]}) def test_quiet_implies_warn_only(self): # Would raise an exception if warn_only was False eq_(run("wat", quiet=True).failed, True) @server() @mock_streams('both') def test_quiet_implies_hide_everything(self): run("ls /", quiet=True) eq_(sys.stdout.getvalue(), "") eq_(sys.stderr.getvalue(), "") @server(responses={'hrm': ["", "", 1]}) @mock_streams('both') def test_warn_only_is_same_as_settings_warn_only(self): eq_(run("hrm", warn_only=True).failed, True) @server() @mock_streams('both') def test_warn_only_does_not_imply_hide_everything(self): run("ls /simple", warn_only=True) assert sys.stdout.getvalue() != "" class TestMultipleOKReturnCodes(FabricTest): @server(responses={'no srsly its ok': ['', '', 1]}) def test_expand_to_include_1(self): with settings(quiet(), ok_ret_codes=[0, 1]): eq_(run("no srsly its ok").succeeded, True) slow_server = server(responses={'slow': ['', '', 0, 3]}) slow = lambda x: slow_server(raises(CommandTimeout)(x)) class TestRun(FabricTest): """ @server-using generic run()/sudo() tests """ @slow def test_command_timeout_via_env_var(self): env.command_timeout = 2 # timeout after 2 seconds with hide('everything'): run("slow") @slow def test_command_timeout_via_kwarg(self): with hide('everything'): run("slow", timeout=2) @slow def test_command_timeout_via_env_var_in_sudo(self): env.command_timeout = 2 # timeout after 2 seconds with hide('everything'): sudo("slow") @slow def test_command_timeout_via_kwarg_of_sudo(self): with hide('everything'): sudo("slow", timeout=2) # # get() and put() # class TestFileTransfers(FabricTest): # # get() # @server(files={'/home/user/.bashrc': 'bash!'}, home='/home/user') def test_get_relative_remote_dir_uses_home(self): """ get('relative/path') should use remote $HOME """ with hide('everything'): # Another if-it-doesn't-error-out-it-passed test; meh. 
eq_(get('.bashrc', self.path()), [self.path('.bashrc')]) @server(files={'/top/%a/%(/%()/%(x)/%(no)s/%(host)s/%d': 'yo'}) def test_get_with_format_chars_on_server(self): """ get('*') with format symbols (%) on remote paths should not break """ remote = '*' with hide('everything'): get(remote, self.path()) @server() def test_get_single_file(self): """ get() with a single non-globbed filename """ remote = 'file.txt' local = self.path(remote) with hide('everything'): get(remote, local) eq_contents(local, FILES[remote]) @server(files={'/base/dir with spaces/file': 'stuff!'}) def test_get_file_from_relative_path_with_spaces(self): """ get('file') should work when the remote path contains spaces """ # from nose.tools import set_trace; set_trace() with hide('everything'): with cd('/base/dir with spaces'): eq_(get('file', self.path()), [self.path('file')]) @server() def test_get_sibling_globs(self): """ get() with globbed files, but no directories """ remotes = ['file.txt', 'file2.txt'] with hide('everything'): get('file*.txt', self.tmpdir) for remote in remotes: eq_contents(self.path(remote), FILES[remote]) @server() def test_get_single_file_in_folder(self): """ get() a folder containing one file """ remote = 'folder/file3.txt' with hide('everything'): get('folder', self.tmpdir) eq_contents(self.path(remote), FILES[remote]) @server() def test_get_tree(self): """ Download entire tree """ with hide('everything'): get('tree', self.tmpdir) leaves = filter(lambda x: x[0].startswith('/tree'), FILES.items()) for path, contents in leaves: eq_contents(self.path(path[1:]), contents) @server() def test_get_tree_with_implicit_local_path(self): """ Download entire tree without specifying a local path """ dirname = env.host_string.replace(':', '-') try: with hide('everything'): get('tree') leaves = filter(lambda x: x[0].startswith('/tree'), FILES.items()) for path, contents in leaves: path = os.path.join(dirname, path[1:]) eq_contents(path, contents) os.remove(path) # Cleanup finally: if os.path.exists(dirname): shutil.rmtree(dirname) @server() def test_get_absolute_path_should_save_relative(self): """ get(/x/y) w/ %(path)s should save y, not x/y """ lpath = self.path() ltarget = os.path.join(lpath, "%(path)s") with hide('everything'): get('/tree/subfolder', ltarget) assert self.exists_locally(os.path.join(lpath, 'subfolder')) assert not self.exists_locally(os.path.join(lpath, 'tree/subfolder')) @server() def test_path_formatstr_nonrecursively_is_just_filename(self): """ get(x/y/z) nonrecursively w/ %(path)s should save y, not y/z """ lpath = self.path() ltarget = os.path.join(lpath, "%(path)s") with hide('everything'): get('/tree/subfolder/file3.txt', ltarget) assert self.exists_locally(os.path.join(lpath, 'file3.txt')) @server() @mock_streams('stderr') def _invalid_file_obj_situations(self, remote_path): with settings(hide('running'), warn_only=True): get(remote_path, StringIO()) assert_contains('is a glob or directory', sys.stderr.getvalue()) def test_glob_and_file_object_invalid(self): """ Remote glob and local file object is invalid """ self._invalid_file_obj_situations('/tree/*') def test_directory_and_file_object_invalid(self): """ Remote directory and local file object is invalid """ self._invalid_file_obj_situations('/tree') @server() def test_nonexistent_glob_should_not_create_empty_files(self): path = self.path() with settings(hide('everything'), warn_only=True): get('/nope*.txt', path) assert not self.exists_locally(os.path.join(path, 'nope*.txt')) @server() def 
test_nonexistent_glob_raises_error(self): try: with hide('everything', 'aborts'): get('/nope*.txt', self.path()) except SystemExit as e: assert 'No such file' in e.message else: assert False @server() def test_get_single_file_absolutely(self): """ get() a single file, using absolute file path """ target = '/etc/apache2/apache2.conf' with hide('everything'): get(target, self.tmpdir) eq_contents(self.path(os.path.basename(target)), FILES[target]) @server() def test_get_file_with_nonexistent_target(self): """ Missing target path on single file download => effectively a rename """ local = self.path('otherfile.txt') target = 'file.txt' with hide('everything'): get(target, local) eq_contents(local, FILES[target]) @server() @mock_streams('stderr') def test_get_file_with_existing_file_target(self): """ Clobbering existing local file should overwrite, with warning """ local = self.path('target.txt') target = 'file.txt' with open(local, 'w') as fd: fd.write("foo") with hide('stdout', 'running'): get(target, local) assert "%s already exists" % local in sys.stderr.getvalue() eq_contents(local, FILES[target]) @server() def test_get_file_to_directory(self): """ Directory as target path should result in joined pathname (Yes, this is duplicated in most of the other tests -- but good to have a default in case those tests change how they work later!) """ target = 'file.txt' with hide('everything'): get(target, self.tmpdir) eq_contents(self.path(target), FILES[target]) @server(port=2200) @server(port=2201) def test_get_from_multiple_servers(self): ports = [2200, 2201] hosts = map(lambda x: '127.0.0.1:%s' % x, ports) with settings(all_hosts=hosts): for port in ports: with settings( hide('everything'), host_string='127.0.0.1:%s' % port ): tmp = self.path('') local_path = os.path.join(tmp, "%(host)s", "%(path)s") # Top level file path = 'file.txt' get(path, local_path) assert self.exists_locally(os.path.join( tmp, "127.0.0.1-%s" % port, path )) # Nested file get('tree/subfolder/file3.txt', local_path) assert self.exists_locally(os.path.join( tmp, "127.0.0.1-%s" % port, 'file3.txt' )) @server() def test_get_from_empty_directory_uses_cwd(self): """ get() expands empty remote arg to remote cwd """ with hide('everything'): get('', self.tmpdir) # Spot checks -- though it should've downloaded the entirety of # server.FILES. 
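        # e.g. tree/file1.txt existing locally shows the empty remote arg
        # expanded to the server's cwd (and recursed) rather than to nothing.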
for x in "file.txt file2.txt tree/file1.txt".split(): assert os.path.exists(os.path.join(self.tmpdir, x)) @server() def _get_to_cwd(self, arg): path = 'file.txt' with hide('everything'): get(path, arg) host_dir = os.path.join( os.getcwd(), env.host_string.replace(':', '-'), ) target = os.path.join(host_dir, path) try: assert os.path.exists(target) # Clean up, since we're not using our tmpdir finally: shutil.rmtree(host_dir) def test_get_to_empty_string_uses_default_format_string(self): """ get() expands empty local arg to local cwd + host + file """ self._get_to_cwd('') def test_get_to_None_uses_default_format_string(self): """ get() expands None local arg to local cwd + host + file """ self._get_to_cwd(None) @server() def test_get_should_accept_file_like_objects(self): """ get()'s local_path arg should take file-like objects too """ fake_file = StringIO() target = '/file.txt' with hide('everything'): get(target, fake_file) eq_(fake_file.getvalue(), FILES[target]) @server() def test_get_interpolation_without_host(self): """ local formatting should work w/o use of %(host)s when run on one host """ with hide('everything'): tmp = self.path('') # dirname, basename local_path = tmp + "/%(dirname)s/foo/%(basename)s" get('/folder/file3.txt', local_path) assert self.exists_locally(tmp + "foo/file3.txt") # path local_path = tmp + "bar/%(path)s" get('/folder/file3.txt', local_path) assert self.exists_locally(tmp + "bar/file3.txt") @server() def test_get_returns_list_of_local_paths(self): """ get() should return an iterable of the local files it created. """ d = self.path() with hide('everything'): retval = get('tree', d) files = ['file1.txt', 'file2.txt', 'subfolder/file3.txt'] eq_(map(lambda x: os.path.join(d, 'tree', x), files), retval) @server() def test_get_returns_none_for_stringio(self): """ get() should return None if local_path is a StringIO """ with hide('everything'): eq_([], get('/file.txt', StringIO())) @server() def test_get_return_value_failed_attribute(self): """ get()'s return value should indicate any paths which failed to download. """ with settings(hide('everything'), warn_only=True): retval = get('/doesnt/exist', self.path()) eq_(['/doesnt/exist'], retval.failed) assert not retval.succeeded @server() def test_get_should_not_use_windows_slashes_in_remote_paths(self): """ sftp.glob() should always use Unix-style slashes. 
""" with hide('everything'): path = "/tree/file1.txt" sftp = SFTP(env.host_string) eq_(sftp.glob(path), [path]) @server() @with_fakes def test_get_use_sudo(self): """ get(use_sudo=True) works by copying to a temporary path, downloading it and then removing it at the end """ fake_run = Fake('_run_command', callable=True, expect_call=True).with_matching_args( fudge_arg.startswith('cp -p "/etc/apache2/apache2.conf" "'), True, True, None ).next_call().with_matching_args( fudge_arg.startswith('chown username "'), True, True, None, ).next_call().with_matching_args( fudge_arg.startswith('chmod 400 "'), True, True, None, ).next_call().with_matching_args( fudge_arg.startswith('rm -f "'), True, True, None, ) fake_get = Fake('get', callable=True, expect_call=True) with hide('everything'): with patched_context('fabric.operations', '_run_command', fake_run): with patched_context(SFTPClient, 'get', fake_get): retval = get('/etc/apache2/apache2.conf', self.path(), use_sudo=True) # check that the downloaded file has the same name as the one requested assert retval[0].endswith('apache2.conf') @server() @with_fakes def test_get_use_sudo_temp_dir(self): """ get(use_sudo=True, temp_dir="/tmp") works by copying to /tmp/..., downloading it and then removing it at the end """ fake_run = Fake('_run_command', callable=True, expect_call=True).with_matching_args( fudge_arg.startswith('cp -p "/etc/apache2/apache2.conf" "/tmp/'), True, True, None, ).next_call().with_matching_args( fudge_arg.startswith('chown username "/tmp/'), True, True, None, ).next_call().with_matching_args( fudge_arg.startswith('chmod 400 "/tmp/'), True, True, None, ).next_call().with_matching_args( fudge_arg.startswith('rm -f "/tmp/'), True, True, None, ) fake_get = Fake('get', callable=True, expect_call=True).with_args( fudge_arg.startswith('/tmp/'), fudge_arg.any_value()) with hide('everything'): with patched_context('fabric.operations', '_run_command', fake_run): with patched_context(SFTPClient, 'get', fake_get): retval = get('/etc/apache2/apache2.conf', self.path(), use_sudo=True, temp_dir="/tmp") # check that the downloaded file has the same name as the one requested assert retval[0].endswith('apache2.conf') # # put() # @server() def test_put_file_to_existing_directory(self): """ put() a single file into an existing remote directory """ text = "foo!" local = self.mkfile('foo.txt', text) local2 = self.path('foo2.txt') with hide('everything'): put(local, '/') get('/foo.txt', local2) eq_contents(local2, text) @server() def test_put_to_empty_directory_uses_cwd(self): """ put() expands empty remote arg to remote cwd Not a terribly sharp test -- we just get() with a relative path and are testing to make sure they match up -- but should still suffice. """ text = "foo!" local = self.path('foo.txt') local2 = self.path('foo2.txt') with open(local, 'w') as fd: fd.write(text) with hide('everything'): put(local) get('foo.txt', local2) eq_contents(local2, text) @server() def test_put_from_empty_directory_uses_cwd(self): """ put() expands empty local arg to local cwd """ text = 'foo!' 
# Don't use the current cwd since that's a whole lotta files to upload old_cwd = os.getcwd() os.chdir(self.tmpdir) # Write out file right here with open('file.txt', 'w') as fd: fd.write(text) with hide('everything'): # Put our cwd (which should only contain the file we just created) put('', '/') # Get it back under a new name (noting that when we use a truly # empty put() local call, it makes a directory remotely with the # name of the cwd) remote = os.path.join(os.path.basename(self.tmpdir), 'file.txt') get(remote, 'file2.txt') # Compare for sanity test eq_contents('file2.txt', text) # Restore cwd os.chdir(old_cwd) @server() def test_put_should_accept_file_like_objects(self): """ put()'s local_path arg should take file-like objects too """ local = self.path('whatever') fake_file = StringIO() fake_file.write("testing file-like objects in put()") pointer = fake_file.tell() target = '/new_file.txt' with hide('everything'): put(fake_file, target) get(target, local) eq_contents(local, fake_file.getvalue()) # Sanity test of file pointer eq_(pointer, fake_file.tell()) @server() @raises(ValueError) def test_put_should_raise_exception_for_nonexistent_local_path(self): """ put(nonexistent_file) should raise a ValueError """ put('thisfiledoesnotexist', '/tmp') @server() def test_put_returns_list_of_remote_paths(self): """ put() should return an iterable of the remote files it created. """ p = 'uploaded.txt' f = self.path(p) with open(f, 'w') as fd: fd.write("contents") with hide('everything'): retval = put(f, p) eq_(retval, [p]) @server() def test_put_returns_list_of_remote_paths_with_stringio(self): """ put() should return a one-item iterable when uploading from a StringIO """ f = 'uploaded.txt' with hide('everything'): eq_(put(StringIO('contents'), f), [f]) @server() def test_put_return_value_failed_attribute(self): """ put()'s return value should indicate any paths which failed to upload. """ with settings(hide('everything'), warn_only=True): f = StringIO('contents') retval = put(f, '/nonexistent/directory/structure') eq_([""], retval.failed) assert not retval.succeeded @server() def test_put_sends_all_files_with_glob(self): """ put() should send all items that match a glob. """ paths = ['foo1.txt', 'foo2.txt'] glob = 'foo*.txt' remote_directory = '/' for path in paths: self.mkfile(path, 'foo!') with hide('everything'): retval = put(self.path(glob), remote_directory) eq_(sorted(retval), sorted([remote_directory + path for path in paths])) @server() def test_put_sends_correct_file_with_globbing_off(self): """ put() should send a file with a glob pattern in the path, when globbing disabled. """ text = "globbed!" 
local = self.mkfile('foo[bar].txt', text) local2 = self.path('foo2.txt') with hide('everything'): put(local, '/', use_glob=False) get('/foo[bar].txt', local2) eq_contents(local2, text) @server() @with_fakes def test_put_use_sudo(self): """ put(use_sudo=True) works by uploading a the `local_path` to a temporary path and then moving it to a `remote_path` """ fake_run = Fake('_run_command', callable=True, expect_call=True).with_matching_args( fudge_arg.startswith('mv "'), True, True, None, ) fake_put = Fake('put', callable=True, expect_call=True) local_path = self.mkfile('foobar.txt', "baz") with hide('everything'): with patched_context('fabric.operations', '_run_command', fake_run): with patched_context(SFTPClient, 'put', fake_put): retval = put(local_path, "/", use_sudo=True) # check that the downloaded file has the same name as the one requested assert retval[0].endswith('foobar.txt') @server() @with_fakes def test_put_use_sudo_temp_dir(self): """ put(use_sudo=True, temp_dir='/tmp/') works by uploading a file to /tmp/ and then moving it to a `remote_path` """ # the sha1 hash is the unique filename of the file being downloaded. sha1() fake_run = Fake('_run_command', callable=True, expect_call=True).with_matching_args( fudge_arg.startswith('mv "'), True, True, None, ) fake_put = Fake('put', callable=True, expect_call=True) local_path = self.mkfile('foobar.txt', "baz") with hide('everything'): with patched_context('fabric.operations', '_run_command', fake_run): with patched_context(SFTPClient, 'put', fake_put): retval = put(local_path, "/", use_sudo=True, temp_dir='/tmp/') # check that the downloaded file has the same name as the one requested assert retval[0].endswith('foobar.txt') # # Interactions with cd() # @server() def test_cd_should_apply_to_put(self): """ put() should honor env.cwd for relative remote paths """ f = 'test.txt' d = '/empty_folder' local = self.path(f) with open(local, 'w') as fd: fd.write('test') with nested(cd(d), hide('everything')): put(local, f) assert self.exists_remotely('%s/%s' % (d, f)) @server(files={'/tmp/test.txt': 'test'}) def test_cd_should_apply_to_get(self): """ get() should honor env.cwd for relative remote paths """ local = self.path('test.txt') with nested(cd('/tmp'), hide('everything')): get('test.txt', local) assert os.path.exists(local) @server() def test_cd_should_not_apply_to_absolute_put(self): """ put() should not prepend env.cwd to absolute remote paths """ local = self.path('test.txt') with open(local, 'w') as fd: fd.write('test') with nested(cd('/tmp'), hide('everything')): put(local, '/test.txt') assert not self.exists_remotely('/tmp/test.txt') assert self.exists_remotely('/test.txt') @server(files={'/test.txt': 'test'}) def test_cd_should_not_apply_to_absolute_get(self): """ get() should not prepend env.cwd to absolute remote paths """ local = self.path('test.txt') with nested(cd('/tmp'), hide('everything')): get('/test.txt', local) assert os.path.exists(local) @server() def test_lcd_should_apply_to_put(self): """ lcd() should apply to put()'s local_path argument """ f = 'lcd_put_test.txt' d = 'subdir' local = self.path(d, f) os.makedirs(os.path.dirname(local)) with open(local, 'w') as fd: fd.write("contents") with nested(lcd(self.path(d)), hide('everything')): put(f, '/') assert self.exists_remotely('/%s' % f) @server() def test_lcd_should_apply_to_get(self): """ lcd() should apply to get()'s local_path argument """ d = self.path('subdir') f = 'file.txt' with nested(lcd(d), hide('everything')): get(f, f) assert 
self.exists_locally(os.path.join(d, f)) @server() @mock_streams('stdout') def test_stringio_without_name(self): file_obj = StringIO(u'test data') put(file_obj, '/') assert re.search('', sys.stdout.getvalue()) @server() @mock_streams('stdout') def test_stringio_with_name(self): """If a file object (StringIO) has a name attribute, use that in output""" file_obj = StringIO(u'test data') file_obj.name = 'Test StringIO Object' put(file_obj, '/') assert re.search(file_obj.name, sys.stdout.getvalue()) # # local() # # TODO: figure out how to mock subprocess, if it's even possible. # For now, simply test to make sure local() does not raise exceptions with # various settings enabled/disabled. def test_local_output_and_capture(): for capture in (True, False): for stdout in (True, False): for stderr in (True, False): hides, shows = ['running'], [] if stdout: hides.append('stdout') else: shows.append('stdout') if stderr: hides.append('stderr') else: shows.append('stderr') with nested(hide(*hides), show(*shows)): d = "local(): capture: %r, stdout: %r, stderr: %r" % ( capture, stdout, stderr ) local.description = d yield local, "echo 'foo' >/dev/null", capture del local.description class TestRunSudoReturnValues(FabricTest): @server() def test_returns_command_given(self): """ run("foo").command == foo """ with hide('everything'): eq_(run("ls /").command, "ls /") @server() def test_returns_fully_wrapped_command(self): """ run("foo").real_command involves env.shell + etc """ # FabTest turns use_shell off, we must reactivate it. # Doing so will cause a failure: server's default command list assumes # it's off, we're not testing actual wrapping here so we don't really # care. Just warn_only it. with settings(hide('everything'), warn_only=True, use_shell=True): # Slightly flexible test, we're not testing the actual construction # here, just that this attribute exists. ok_(env.shell in run("ls /").real_command) fabric-1.14.0/tests/test_parallel.py000066400000000000000000000050561315011462000173710ustar00rootroot00000000000000from __future__ import with_statement from fabric.api import run, parallel, env, hide, execute, settings from utils import FabricTest, eq_, aborts, mock_streams from server import server, RESPONSES, USER, HOST, PORT # TODO: move this into test_tasks? meh. 
class OhNoesException(Exception):
    pass


class TestParallel(FabricTest):
    @server()
    @parallel
    def test_parallel(self):
        """
        Want to do a simple call and respond
        """
        env.pool_size = 10
        cmd = "ls /simple"
        with hide('everything'):
            eq_(run(cmd), RESPONSES[cmd])

    @server(port=2200)
    @server(port=2201)
    def test_env_host_no_user_or_port(self):
        """
        Ensure env.host doesn't get user/port parts when parallel
        """
        @parallel
        def _task():
            run("ls /simple")
            assert USER not in env.host
            assert str(PORT) not in env.host

        host_string = '%s@%s:%%s' % (USER, HOST)
        with hide('everything'):
            execute(_task, hosts=[host_string % 2200, host_string % 2201])

    @server(port=2200)
    @server(port=2201)
    @aborts
    def test_parallel_failures_abort(self):
        with hide('everything'):
            host1 = '127.0.0.1:2200'
            host2 = '127.0.0.1:2201'

            @parallel
            def mytask():
                run("ls /")
                if env.host_string == host2:
                    raise OhNoesException

            execute(mytask, hosts=[host1, host2])

    @server(port=2200)
    @server(port=2201)
    @mock_streams('stderr')  # To hide the traceback for now
    def test_parallel_failures_honor_warn_only(self):
        with hide('everything'):
            host1 = '127.0.0.1:2200'
            host2 = '127.0.0.1:2201'

            @parallel
            def mytask():
                run("ls /")
                if env.host_string == host2:
                    raise OhNoesException

            with settings(warn_only=True):
                result = execute(mytask, hosts=[host1, host2])
            eq_(result[host1], None)
            assert isinstance(result[host2], OhNoesException)

    @server(port=2200)
    @server(port=2201)
    def test_parallel_implies_linewise(self):
        host1 = '127.0.0.1:2200'
        host2 = '127.0.0.1:2201'
        assert not env.linewise

        @parallel
        def mytask():
            run("ls /")
            return env.linewise

        with hide('everything'):
            result = execute(mytask, hosts=[host1, host2])
        eq_(result[host1], True)
        eq_(result[host2], True)


fabric-1.14.0/tests/test_project.py

import unittest
import os

import fudge
from fudge.inspector import arg

from fabric.contrib import project


class UploadProjectTestCase(unittest.TestCase):
    """Test case for :func: `fabric.contrib.project.upload_project`."""

    fake_tmp = "testtempfolder"

    def setUp(self):
        fudge.clear_expectations()

        # We need to mock out run, local, and put
        self.fake_run = fudge.Fake('project.run', callable=True)
        self.patched_run = fudge.patch_object(
            project,
            'run',
            self.fake_run)

        self.fake_local = fudge.Fake('local', callable=True)
        self.patched_local = fudge.patch_object(
            project,
            'local',
            self.fake_local)

        self.fake_put = fudge.Fake('put', callable=True)
        self.patched_put = fudge.patch_object(
            project,
            'put',
            self.fake_put)

        # We don't want to create temp folders
        self.fake_mkdtemp = fudge.Fake(
            'mkdtemp',
            expect_call=True).returns(self.fake_tmp)
        self.patched_mkdtemp = fudge.patch_object(
            project,
            'mkdtemp',
            self.fake_mkdtemp)

    def tearDown(self):
        self.patched_run.restore()
        self.patched_local.restore()
        self.patched_put.restore()
        fudge.clear_expectations()

    @fudge.with_fakes
    def test_temp_folder_is_used(self):
        """A unique temp folder is used for creating the archive to upload."""
        # Exercise
        project.upload_project()

    @fudge.with_fakes
    def test_project_is_archived_locally(self):
        """The project should be archived locally before being uploaded."""
        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_local.with_args(arg.startswith("tar -czf")).next_call()

        # Exercise
        project.upload_project()

    @fudge.with_fakes
    def test_current_directory_is_uploaded_by_default(self):
        """By default the project uploaded is the current working directory."""
        cwd_path, cwd_name = os.path.split(os.getcwd())

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_local.with_args(
            arg.endswith("-C %s %s" % (cwd_path, cwd_name))).next_call()

        # Exercise
        project.upload_project()

    @fudge.with_fakes
    def test_path_to_local_project_can_be_specified(self):
        """It should be possible to specify which local folder to upload."""
        project_path = "path/to/my/project"

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_local.with_args(
            arg.endswith("-C path/to/my project")).next_call()

        # Exercise
        project.upload_project(local_dir=project_path)

    @fudge.with_fakes
    def test_path_to_local_project_no_separator(self):
        """Local folder can have no path separator (in current directory)."""
        project_path = "testpath"

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_local.with_args(
            arg.endswith("-C . testpath")).next_call()

        # Exercise
        project.upload_project(local_dir=project_path)

    @fudge.with_fakes
    def test_path_to_local_project_can_end_in_separator(self):
        """A local path ending in a separator should be handled correctly."""
        project_path = "path/to/my"
        base = "project"

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_local.with_args(
            arg.endswith("-C %s %s" % (project_path, base))).next_call()

        # Exercise
        project.upload_project(local_dir="%s/%s/" % (project_path, base))

    @fudge.with_fakes
    def test_default_remote_folder_is_home(self):
        """Project is uploaded to remote home by default."""
        local_dir = "folder"

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_put.with_args(
            "%s/folder.tar.gz" % self.fake_tmp,
            "folder.tar.gz",
            use_sudo=False).next_call()

        # Exercise
        project.upload_project(local_dir=local_dir)

    @fudge.with_fakes
    def test_path_to_remote_folder_can_be_specified(self):
        """It should be possible to specify which local folder to upload to."""
        local_dir = "folder"
        remote_path = "path/to/remote/folder"

        # local() is called more than once so we need an extra next_call()
        # otherwise fudge compares the args to the last call to local()
        self.fake_put.with_args(
            "%s/folder.tar.gz" % self.fake_tmp,
            "%s/folder.tar.gz" % remote_path,
            use_sudo=False).next_call()

        # Exercise
        project.upload_project(local_dir=local_dir, remote_dir=remote_path)


fabric-1.14.0/tests/test_server.py

"""
Tests for the test server itself.

Not intended to be run by the greater test suite, only by specifically
targeting it on the command-line. Rationale: not really testing Fabric itself,
no need to pollute Fab's own test suite. (Yes, if these tests fail, it's
likely that the Fabric tests using the test server may also have issues, but
still.)
"""

from nose.tools import eq_, ok_

from fabric.network import ssh

from server import FakeSFTPServer


__test__ = False


class AttrHolder(object):
    pass


def test_list_folder():
    for desc, file_map, arg, expected in (
        (
            "Single file",
            {'file.txt': 'contents'},
            '',
            ['file.txt']
        ),
        (
            "Single absolute file",
            {'/file.txt': 'contents'},
            '/',
            ['file.txt']
        ),
        (
            "Multiple files",
            {'file1.txt': 'contents', 'file2.txt': 'contents2'},
            '',
            ['file1.txt', 'file2.txt']
        ),
        (
            "Single empty folder",
            {'folder': None},
            '',
            ['folder']
        ),
        (
            "Empty subfolders",
            {'folder': None, 'folder/subfolder': None},
            '',
            ['folder']
        ),
        (
            "Non-empty sub-subfolder",
            {'folder/subfolder/subfolder2/file.txt': 'contents'},
            "folder/subfolder/subfolder2",
            ['file.txt']
        ),
        (
            "Mixed files, folders empty and non-empty, in homedir",
            {
                'file.txt': 'contents',
                'file2.txt': 'contents2',
                'folder/file3.txt': 'contents3',
                'empty_folder': None
            },
            '',
            ['file.txt', 'file2.txt', 'folder', 'empty_folder']
        ),
        (
            "Mixed files, folders empty and non-empty, in subdir",
            {
                'file.txt': 'contents',
                'file2.txt': 'contents2',
                'folder/file3.txt': 'contents3',
                'folder/subfolder/file4.txt': 'contents4',
                'empty_folder': None
            },
            "folder",
            ['file3.txt', 'subfolder']
        ),
    ):
        # Pass in fake server obj. (Can't easily clean up API to be more
        # testable since it's all implementing 'ssh' interface stuff.)
        server = AttrHolder()
        server.files = file_map
        interface = FakeSFTPServer(server)
        results = interface.list_folder(arg)
        # In this particular suite of tests, all results should be a file
        # list, not "no files found"
        ok_(results != ssh.SFTP_NO_SUCH_FILE)
        # Grab filename from SFTPAttribute objects in result
        output = map(lambda x: x.filename, results)
        # Yield test generator
        eq_.description = "list_folder: %s" % desc
        yield eq_, set(expected), set(output)
        del eq_.description


fabric-1.14.0/tests/test_state.py

from nose.tools import eq_

from fabric.state import _AliasDict


def test_dict_aliasing():
    """
    Assigning values to aliases updates aliased keys
    """
    ad = _AliasDict(
        {'bar': False, 'biz': True, 'baz': False},
        aliases={'foo': ['bar', 'biz', 'baz']}
    )
    # Before
    eq_(ad['bar'], False)
    eq_(ad['biz'], True)
    eq_(ad['baz'], False)
    # Change
    ad['foo'] = True
    # After
    eq_(ad['bar'], True)
    eq_(ad['biz'], True)
    eq_(ad['baz'], True)


def test_nested_dict_aliasing():
    """
    Aliases can be nested
    """
    ad = _AliasDict(
        {'bar': False, 'biz': True},
        aliases={'foo': ['bar', 'nested'], 'nested': ['biz']}
    )
    # Before
    eq_(ad['bar'], False)
    eq_(ad['biz'], True)
    # Change
    ad['foo'] = True
    # After
    eq_(ad['bar'], True)
    eq_(ad['biz'], True)


def test_dict_alias_expansion():
    """
    Alias expansion
    """
    ad = _AliasDict(
        {'bar': False, 'biz': True},
        aliases={'foo': ['bar', 'nested'], 'nested': ['biz']}
    )
    eq_(ad.expand_aliases(['foo']), ['bar', 'biz'])


fabric-1.14.0/tests/test_tasks.py

from __future__ import with_statement

from fudge import Fake, patched_context, with_fakes
import unittest
from nose.tools import raises, ok_
import random
import sys

import fabric
from fabric.tasks import WrappedCallableTask, execute, Task, get_task_details
from fabric.main import display_command
from fabric.api import run, env, settings, hosts, roles, hide, parallel, task, runs_once, serial
from fabric.exceptions import NetworkError

from utils import eq_, FabricTest, aborts, mock_streams, support
from server import server


def test_base_task_provides_undefined_name():
    task = Task()
    eq_("undefined", task.name)


@raises(NotImplementedError)
def test_base_task_raises_exception_on_call_to_run():
    task = Task()
    task.run()


class TestWrappedCallableTask(unittest.TestCase):
    def test_passes_unused_args_to_parent(self):
        args = [i for i in range(random.randint(1, 10))]

        def foo():
            pass
        try:
            WrappedCallableTask(foo, *args)
        except TypeError:
            msg = "__init__ raised a TypeError, meaning args weren't handled"
            self.fail(msg)

    def test_passes_unused_kwargs_to_parent(self):
        random_range = range(random.randint(1, 10))
        kwargs = dict([("key_%s" % i, i) for i in random_range])

        def foo():
            pass
        try:
            WrappedCallableTask(foo, **kwargs)
        except TypeError:
            self.fail(
                "__init__ raised a TypeError, meaning kwargs weren't handled")

    def test_allows_any_number_of_args(self):
        args = [i for i in range(random.randint(0, 10))]

        def foo():
            pass
        WrappedCallableTask(foo, *args)

    def test_allows_any_number_of_kwargs(self):
        kwargs = dict([("key%d" % i, i) for i in range(random.randint(0, 10))])

        def foo():
            pass
        WrappedCallableTask(foo, **kwargs)

    def test_run_is_wrapped_callable(self):
        def foo():
            pass
        task = WrappedCallableTask(foo)
        eq_(task.wrapped, foo)

    def test_name_is_the_name_of_the_wrapped_callable(self):
        def foo():
            pass
        foo.__name__ = "random_name_%d" % random.randint(1000, 2000)
        task = WrappedCallableTask(foo)
        eq_(task.name, foo.__name__)

    def test_name_can_be_overridden(self):
        def foo():
            pass
        eq_(WrappedCallableTask(foo).name, 'foo')
        eq_(WrappedCallableTask(foo, name='notfoo').name, 'notfoo')

    def test_reads_double_under_doc_from_callable(self):
        def foo():
            pass
        foo.__doc__ = "Some random __doc__: %d" % random.randint(1000, 2000)
        task = WrappedCallableTask(foo)
        eq_(task.__doc__, foo.__doc__)

    def test_dispatches_to_wrapped_callable_on_run(self):
        random_value = "some random value %d" % random.randint(1000, 2000)

        def foo():
            return random_value
        task = WrappedCallableTask(foo)
        eq_(random_value, task())

    def test_passes_all_regular_args_to_run(self):
        def foo(*args):
            return args
        random_args = tuple(
            [random.randint(1000, 2000) for i in range(random.randint(1, 5))]
        )
        task = WrappedCallableTask(foo)
        eq_(random_args, task(*random_args))

    def test_passes_all_keyword_args_to_run(self):
        def foo(**kwargs):
            return kwargs
        random_kwargs = {}
        for i in range(random.randint(1, 5)):
            random_key = ("foo", "bar", "baz", "foobar", "barfoo")[i]
            random_kwargs[random_key] = random.randint(1000, 2000)
        task = WrappedCallableTask(foo)
        eq_(random_kwargs, task(**random_kwargs))

    def test_calling_the_object_is_the_same_as_run(self):
        random_return = random.randint(1000, 2000)

        def foo():
            return random_return
        task = WrappedCallableTask(foo)
        eq_(task(), task.run())


class TestTask(unittest.TestCase):
    def test_takes_an_alias_kwarg_and_wraps_it_in_aliases_list(self):
        random_alias = "alias_%d" % random.randint(100, 200)
        task = Task(alias=random_alias)
        self.assertTrue(random_alias in task.aliases)

    def test_aliases_are_set_based_on_provided_aliases(self):
        aliases = ["a_%d" % i for i in range(random.randint(1, 10))]
        task = Task(aliases=aliases)
        self.assertTrue(all([a in task.aliases for a in aliases]))

    def test_aliases_are_None_by_default(self):
        task = Task()
        self.assertTrue(task.aliases is None)


# Reminder: decorator syntax, e.g.:
#     @foo
#     def bar():...
#
# is semantically equivalent to:
#     def bar():...
#     bar = foo(bar)
#
# this simplifies testing :)

def test_decorator_incompatibility_on_task():
    from fabric.decorators import task, hosts, runs_once, roles

    def foo():
        return "foo"
    foo = task(foo)

    # since we aren't setting foo to be the newly decorated thing, it's cool
    hosts('me@localhost')(foo)
    runs_once(foo)
    roles('www')(foo)


def test_decorator_closure_hiding():
    """
    @task should not accidentally destroy decorated attributes from @hosts/etc
    """
    from fabric.decorators import task, hosts

    def foo():
        print(env.host_string)
    foo = task(hosts("me@localhost")(foo))
    eq_(["me@localhost"], foo.hosts)


#
# execute()
#

def dict_contains(superset, subset):
    """
    Assert that all key/val pairs in dict 'subset' also exist in 'superset'
    """
    for key, value in subset.iteritems():
        ok_(key in superset)
        eq_(superset[key], value)


class TestExecute(FabricTest):
    @with_fakes
    def test_calls_task_function_objects(self):
        """
        should execute the passed-in function object
        """
        execute(Fake(callable=True, expect_call=True))

    @with_fakes
    def test_should_look_up_task_name(self):
        """
        should also be able to handle task name strings
        """
        name = 'task1'
        commands = {name: Fake(callable=True, expect_call=True)}
        with patched_context(fabric.state, 'commands', commands):
            execute(name)

    @with_fakes
    def test_should_handle_name_of_Task_object(self):
        """
        handle corner case of Task object referred to by name
        """
        name = 'task2'

        class MyTask(Task):
            run = Fake(callable=True, expect_call=True)
        mytask = MyTask()
        mytask.name = name
        commands = {name: mytask}
        with patched_context(fabric.state, 'commands', commands):
            execute(name)

    @aborts
    def test_should_abort_if_task_name_not_found(self):
        """
        should abort if given an invalid task name
        """
        execute('thisisnotavalidtaskname')

    def test_should_not_abort_if_task_name_not_found_with_skip(self):
        """
        should not abort if given an invalid task name and skip_unknown_tasks
        in env
        """
        env.skip_unknown_tasks = True
        execute('thisisnotavalidtaskname')
        del env['skip_unknown_tasks']

    @with_fakes
    def test_should_pass_through_args_kwargs(self):
        """
        should pass in any additional args, kwargs to the given task.
        """
        task = (
            Fake(callable=True, expect_call=True)
            .with_args('foo', biz='baz')
        )
        execute(task, 'foo', biz='baz')

    @with_fakes
    def test_should_honor_hosts_kwarg(self):
        """
        should use hosts kwarg to set run list
        """
        # Make two full copies of a host list
        hostlist = ['a', 'b', 'c']
        hosts = hostlist[:]

        # Side-effect which asserts the value of env.host_string when it runs
        def host_string():
            eq_(env.host_string, hostlist.pop(0))
        task = Fake(callable=True, expect_call=True).calls(host_string)
        with hide('everything'):
            execute(task, hosts=hosts)

    def test_should_honor_hosts_decorator(self):
        """
        should honor @hosts on passed-in task objects
        """
        # Make two full copies of a host list
        hostlist = ['a', 'b', 'c']

        @hosts(*hostlist[:])
        def task():
            eq_(env.host_string, hostlist.pop(0))
        with hide('running'):
            execute(task)

    def test_should_honor_roles_decorator(self):
        """
        should honor @roles on passed-in task objects
        """
        # Make two full copies of a host list
        roledefs = {'role1': ['a', 'b', 'c']}
        role_copy = roledefs['role1'][:]

        @roles('role1')
        def task():
            eq_(env.host_string, role_copy.pop(0))
        with settings(hide('running'), roledefs=roledefs):
            execute(task)

    @with_fakes
    def test_should_set_env_command_to_string_arg(self):
        """
        should set env.command to any string arg, if given
        """
        name = "foo"

        def command():
            eq_(env.command, name)
        task = Fake(callable=True, expect_call=True).calls(command)
        with patched_context(fabric.state, 'commands', {name: task}):
            execute(name)

    @with_fakes
    def test_should_set_env_command_to_name_attr(self):
        """
        should set env.command to TaskSubclass.name if possible
        """
        name = "foo"

        def command():
            eq_(env.command, name)
        task = (
            Fake(callable=True, expect_call=True)
            .has_attr(name=name)
            .calls(command)
        )
        execute(task)

    @with_fakes
    def test_should_set_all_hosts(self):
        """
        should set env.all_hosts to its derived host list
        """
        hosts = ['a', 'b']
        roledefs = {'r1': ['c', 'd']}
        roles = ['r1']
        exclude_hosts = ['a']

        def command():
            eq_(set(env.all_hosts), set(['b', 'c', 'd']))
        task = Fake(callable=True, expect_call=True).calls(command)
        with settings(hide('everything'), roledefs=roledefs):
            execute(
                task, hosts=hosts, roles=roles, exclude_hosts=exclude_hosts
            )

    @mock_streams('stdout')
    def test_should_print_executing_line_per_host(self):
        """
        should print "Executing" line once per host
        """
        def task():
            pass
        execute(task, hosts=['host1', 'host2'])
        eq_(sys.stdout.getvalue(), """[host1] Executing task 'task'
[host2] Executing task 'task'
""")

    @mock_streams('stdout')
    def test_should_not_print_executing_line_for_singletons(self):
        """
        should not print "Executing" line for non-networked tasks
        """
        def task():
            pass
        with settings(hosts=[]):  # protect against really odd test bleed :(
            execute(task)
        eq_(sys.stdout.getvalue(), "")

    def test_should_return_dict_for_base_case(self):
        """
        Non-network-related tasks should return a dict w/ special key
        """
        def task():
            return "foo"
        eq_(execute(task), {'<local-only>': 'foo'})

    @server(port=2200)
    @server(port=2201)
    def test_should_return_dict_for_serial_use_case(self):
        """
        Networked but serial tasks should return per-host-string dict
        """
        ports = [2200, 2201]
        hosts = map(lambda x: '127.0.0.1:%s' % x, ports)

        def task():
            run("ls /simple")
            return "foo"
        with hide('everything'):
            eq_(execute(task, hosts=hosts), {
                '127.0.0.1:2200': 'foo',
                '127.0.0.1:2201': 'foo'
            })

    @server()
    def test_should_preserve_None_for_non_returning_tasks(self):
        """
        Tasks which don't return anything should still show up in the dict
        """
        def local_task():
            pass

        def remote_task():
            with hide('everything'):
                run("ls /simple")
        eq_(execute(local_task), {'<local-only>': None})
        with hide('everything'):
            eq_(
                execute(remote_task, hosts=[env.host_string]),
                {env.host_string: None}
            )

    def test_should_use_sentinel_for_tasks_that_errored(self):
        """
        Tasks which errored but didn't abort should contain an eg NetworkError
        """
        def task():
            run("whoops")
        host_string = 'localhost:1234'
        with settings(hide('everything'), skip_bad_hosts=True):
            retval = execute(task, hosts=[host_string])
        assert isinstance(retval[host_string], NetworkError)

    @server(port=2200)
    @server(port=2201)
    def test_parallel_return_values(self):
        """
        Parallel mode should still return values as in serial mode
        """
        @parallel
        @hosts('127.0.0.1:2200', '127.0.0.1:2201')
        def task():
            run("ls /simple")
            return env.host_string.split(':')[1]
        with hide('everything'):
            retval = execute(task)
        eq_(retval, {'127.0.0.1:2200': '2200', '127.0.0.1:2201': '2201'})

    @with_fakes
    def test_should_work_with_Task_subclasses(self):
        """
        should work for Task subclasses, not just WrappedCallableTask
        """
        class MyTask(Task):
            name = "mytask"
            run = Fake(callable=True, expect_call=True)
        mytask = MyTask()
        execute(mytask)

    @server(port=2200)
    @server(port=2201)
    def test_nested_execution_with_explicit_ports(self):
        """
        nested executions should work with defined ports
        """
        def expect_host_string_port():
            eq_(env.port, '2201')
            return "bar"

        def expect_env_port():
            eq_(env.port, '2202')

        def expect_per_host_config_port():
            eq_(env.port, '664')
            run = execute(expect_default_config_port, hosts=['some_host'])
            return run['some_host']

        def expect_default_config_port():
            # uses `Host *` in ssh_config
            eq_(env.port, '666')
            return "bar"

        def main_task():
            eq_(env.port, '2200')
            execute(expect_host_string_port, hosts=['localhost:2201'])
            with settings(port='2202'):
                execute(expect_env_port, hosts=['localhost'])
            with settings(
                use_ssh_config=True,
                ssh_config_path=support("ssh_config")
            ):
                run = execute(expect_per_host_config_port, hosts='myhost')
            return run['myhost']

        run = execute(main_task, hosts=['localhost:2200'])
        eq_(run['localhost:2200'], 'bar')


class TestExecuteEnvInteractions(FabricTest):
    def set_network(self):
        # Don't update env.host/host_string/etc
        pass

    @server(port=2200)
    @server(port=2201)
    def test_should_not_mutate_its_own_env_vars(self):
        """
        internal env changes should not bleed out, but task env changes should
        """
        # Task that uses a handful of features which involve env vars
        @parallel
        @hosts('username@127.0.0.1:2200', 'username@127.0.0.1:2201')
        def mytask():
            run("ls /simple")
        # Pre-assertions
        assertions = {
            'parallel': False,
            'all_hosts': [],
            'host': None,
            'hosts': [],
            'host_string': None
        }
        for key, value in assertions.items():
            eq_(env[key], value)
        # Run
        with hide('everything'):
            result = execute(mytask)
        eq_(len(result), 2)
        # Post-assertions
        for key, value in assertions.items():
            eq_(env[key], value)

    @server()
    def test_should_allow_task_to_modify_env_vars(self):
        @hosts('username@127.0.0.1:2200')
        def mytask():
            run("ls /simple")
            env.foo = "bar"
        with hide('everything'):
            execute(mytask)
        eq_(env.foo, "bar")
        eq_(env.host_string, None)


class TestTaskDetails(unittest.TestCase):
    def test_old_style_task_with_default_args(self):
        """
        __details__() should print docstr for old style task methods with default args
        """
        def task_old_style(arg1, arg2, arg3=None, arg4='yes'):
            '''Docstring'''
        details = get_task_details(task_old_style)
        eq_("Docstring\n"
            "Arguments: arg1, arg2, arg3=None, arg4='yes'",
            details)

    def test_old_style_task_without_default_args(self):
        """
        __details__() should print docstr for old style task methods without default args
        """
        def task_old_style(arg1, arg2):
            '''Docstring'''
        details = get_task_details(task_old_style)
        eq_("Docstring\n"
            "Arguments: arg1, arg2",
            details)

    def test_old_style_task_without_args(self):
        """
        __details__() should print docstr for old style task methods without args
        """
        def task_old_style():
            '''Docstring'''
        details = get_task_details(task_old_style)
        eq_("Docstring\n"
            "Arguments: ",
            details)

    def test_decorated_task(self):
        """
        __details__() should print docstr for method with any number and order of decorations
        """
        expected = "\n".join([
            "Docstring",
            "Arguments: arg1",
        ])

        @task
        def decorated_task(arg1):
            '''Docstring'''
        actual = decorated_task.__details__()
        eq_(expected, actual)

        @runs_once
        @task
        def decorated_task1(arg1):
            '''Docstring'''
        actual = decorated_task1.__details__()
        eq_(expected, actual)

        @runs_once
        @serial
        @task
        def decorated_task2(arg1):
            '''Docstring'''
        actual = decorated_task2.__details__()
        eq_(expected, actual)

    def test_subclassed_task(self):
        """
        __details__() should print docstr for subclassed task methods with args
        """
        class SpecificTask(Task):
            def run(self, arg1, arg2, arg3):
                '''Docstring'''
        eq_("Docstring\n"
            "Arguments: self, arg1, arg2, arg3",
            SpecificTask().__details__())

    @mock_streams('stdout')
    def test_multiline_docstring_indented_correctly(self):
        """
        display_command() should properly indent docstr for old style task methods
        """
        def mytask(arg1):
            """
            This is a multi line docstring.

            For reals.
            """
        try:
            with patched_context(fabric.state, 'commands', {'mytask': mytask}):
                display_command('mytask')
        except SystemExit:  # ugh
            pass
        eq_(
            sys.stdout.getvalue(),
            """Displaying detailed information for task 'mytask':

    This is a multi line docstring.

    For reals.

    Arguments: arg1

"""
        )


fabric-1.14.0/tests/test_utils.py

from __future__ import with_statement

import sys
from unittest import TestCase

from fudge import Fake, patched_context, with_fakes
from fudge.patcher import with_patched_object
from nose.tools import eq_, raises

from fabric.state import output
from fabric.utils import warn, indent, abort, puts, fastprint, error, RingBuffer
from fabric import utils  # For patching
from fabric.api import local, quiet
from fabric.context_managers import settings, hide
from fabric.colors import magenta, red
from utils import mock_streams, aborts, FabricTest, assert_contains, \
    assert_not_contains


@mock_streams('stderr')
@with_patched_object(output, 'warnings', True)
def test_warn():
    """
    warn() should print 'Warning' plus given text
    """
    warn("Test")
    eq_("\nWarning: Test\n\n", sys.stderr.getvalue())


def test_indent():
    for description, input_, output_ in (
        ("Sanity check: 1 line string", 'Test', '    Test'),
        ("List of strings turns in to strings joined by \\n",
            ["Test", "Test"],
            '    Test\n    Test'),
    ):
        eq_.description = "indent(): %s" % description
        yield eq_, indent(input_), output_
        del eq_.description


def test_indent_with_strip():
    for description, input_, output_ in (
        ("Sanity check: 1 line string",
            indent('Test', strip=True),
            '    Test'),
        ("Check list of strings",
            indent(["Test", "Test"], strip=True),
            '    Test\n    Test'),
        ("Check list of strings",
            indent(["    Test", "    Test"], strip=True),
            '    Test\n    Test'),
    ):
        eq_.description = "indent(strip=True): %s" % description
        yield eq_, input_, output_
        del eq_.description


@aborts
def test_abort():
    """
    abort() should raise SystemExit
    """
    abort("Test")


class TestException(Exception):
    pass


@raises(TestException)
def test_abort_with_exception():
    """
    abort() should raise a provided exception
    """
    with settings(abort_exception=TestException):
        abort("Test")


@mock_streams('stderr')
@with_patched_object(output, 'aborts', True)
def test_abort_message():
    """
    abort() should print 'Fatal error' plus exception value
    """
    try:
        abort("Test")
    except SystemExit:
        pass
    result = sys.stderr.getvalue()
    eq_("\nFatal error: Test\n\nAborting.\n", result)


def test_abort_message_only_printed_once():
    """
    abort()'s SystemExit should not cause a reprint of the error message
    """
    # No good way to test the implicit stderr print which sys.exit/SystemExit
    # perform when they are allowed to bubble all the way to the top. So, we
    # invoke a subprocess and look at its stderr instead.
    with quiet():
        result = local("python -m fabric.__main__ -f tests/support/aborts.py kaboom", capture=True)
    # When error in #1318 is present, this has an extra "It burns!" at end of
    # stderr string.
    eq_(result.stderr, "Fatal error: It burns!\n\nAborting.")


@mock_streams('stderr')
@with_patched_object(output, 'aborts', True)
def test_abort_exception_contains_separate_message_and_code():
    """
    abort()'s SystemExit contains distinct .code/.message attributes.
    """
    # Re #1318 / #1213
    try:
        abort("Test")
    except SystemExit as e:
        eq_(e.message, "Test")
        eq_(e.code, 1)


@mock_streams('stdout')
def test_puts_with_user_output_on():
    """
    puts() should print input to sys.stdout if "user" output level is on
    """
    s = "string!"
    output.user = True
    puts(s, show_prefix=False)
    eq_(sys.stdout.getvalue(), s + "\n")


@mock_streams('stdout')
def test_puts_with_unicode_output():
    """
    puts() should print unicode input
    """
    s = u"string!"
    output.user = True
    puts(s, show_prefix=False)
    eq_(sys.stdout.getvalue(), s + "\n")


@mock_streams('stdout')
def test_puts_with_encoding_type_none_output():
    """
    puts() should print unicode output without a stream encoding
    """
    s = u"string!"
    output.user = True
    sys.stdout.encoding = None
    puts(s, show_prefix=False)
    eq_(sys.stdout.getvalue(), s + "\n")


@mock_streams('stdout')
def test_puts_with_user_output_off():
    """
    puts() shouldn't print input to sys.stdout if "user" output level is off
    """
    output.user = False
    puts("You aren't reading this.")
    eq_(sys.stdout.getvalue(), "")


@mock_streams('stdout')
def test_puts_with_prefix():
    """
    puts() should prefix output with env.host_string if non-empty
    """
    s = "my output"
    h = "localhost"
    with settings(host_string=h):
        puts(s)
    eq_(sys.stdout.getvalue(), "[%s] %s" % (h, s + "\n"))


@mock_streams('stdout')
def test_puts_without_prefix():
    """
    puts() shouldn't prefix output with env.host_string if show_prefix is False
    """
    s = "my output"
    puts(s, show_prefix=False)
    eq_(sys.stdout.getvalue(), "%s" % (s + "\n"))


@with_fakes
def test_fastprint_calls_puts():
    """
    fastprint() is just an alias to puts()
    """
    text = "Some output"
    fake_puts = Fake('puts', expect_call=True).with_args(
        text=text, show_prefix=False, end="", flush=True
    )
    with patched_context(utils, 'puts', fake_puts):
        fastprint(text)


class TestErrorHandling(FabricTest):
    dummy_string = 'test1234!'
    @with_patched_object(utils, 'warn', Fake('warn', callable=True,
        expect_call=True))
    def test_error_warns_if_warn_only_True_and_func_None(self):
        """
        warn_only=True, error(func=None) => calls warn()
        """
        with settings(warn_only=True):
            error('foo')

    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True))
    def test_error_aborts_if_warn_only_False_and_func_None(self):
        """
        warn_only=False, error(func=None) => calls abort()
        """
        with settings(warn_only=False):
            error('foo')

    def test_error_calls_given_func_if_func_not_None(self):
        """
        error(func=callable) => calls callable()
        """
        error('foo', func=Fake(callable=True, expect_call=True))

    @mock_streams('stdout')
    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True).calls(lambda x: sys.stdout.write(x + "\n")))
    def test_error_includes_stdout_if_given_and_hidden(self):
        """
        error() correctly prints stdout if it was previously hidden
        """
        # Mostly to catch regression bug(s)
        stdout = "this is my stdout"
        with hide('stdout'):
            error("error message", func=utils.abort, stdout=stdout)
        assert_contains(stdout, sys.stdout.getvalue())

    @mock_streams('stdout')
    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True).calls(lambda x: sys.stdout.write(x + "\n")))
    @with_patched_object(output, 'exceptions', True)
    @with_patched_object(utils, 'format_exc', Fake('format_exc', callable=True,
        expect_call=True).returns(dummy_string))
    def test_includes_traceback_if_exceptions_logging_is_on(self):
        """
        error() includes traceback in message if exceptions logging is on
        """
        error("error message", func=utils.abort, stdout=error)
        assert_contains(self.dummy_string, sys.stdout.getvalue())

    @mock_streams('stdout')
    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True).calls(lambda x: sys.stdout.write(x + "\n")))
    @with_patched_object(output, 'debug', True)
    @with_patched_object(utils, 'format_exc', Fake('format_exc', callable=True,
        expect_call=True).returns(dummy_string))
    def test_includes_traceback_if_debug_logging_is_on(self):
        """
        error() includes traceback in message if debug logging is on
        (backwards compatibility)
        """
        error("error message", func=utils.abort, stdout=error)
        assert_contains(self.dummy_string, sys.stdout.getvalue())

    @mock_streams('stdout')
    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True).calls(lambda x: sys.stdout.write(x + "\n")))
    @with_patched_object(output, 'exceptions', True)
    @with_patched_object(utils, 'format_exc', Fake('format_exc', callable=True,
        expect_call=True).returns(None))
    def test_doesnt_print_None_when_no_traceback_present(self):
        """
        error() doesn't include None in message if there is no traceback
        """
        error("error message", func=utils.abort, stdout=error)
        assert_not_contains('None', sys.stdout.getvalue())

    @mock_streams('stderr')
    @with_patched_object(utils, 'abort', Fake('abort', callable=True,
        expect_call=True).calls(lambda x: sys.stderr.write(x + "\n")))
    def test_error_includes_stderr_if_given_and_hidden(self):
        """
        error() correctly prints stderr if it was previously hidden
        """
        # Mostly to catch regression bug(s)
        stderr = "this is my stderr"
        with hide('stderr'):
            error("error message", func=utils.abort, stderr=stderr)
        assert_contains(stderr, sys.stderr.getvalue())

    @mock_streams('stderr')
    def test_warnings_print_magenta_if_colorize_on(self):
        with settings(colorize_errors=True):
            error("oh god", func=utils.warn, stderr="oops")
        # can't use assert_contains as ANSI codes contain regex specialchars
        eq_(magenta("\nWarning: oh god\n\n"), sys.stderr.getvalue())

    @mock_streams('stderr')
    @raises(SystemExit)
    def test_errors_print_red_if_colorize_on(self):
        with settings(colorize_errors=True):
            error("oh god", func=utils.abort, stderr="oops")
        # can't use assert_contains as ANSI codes contain regex specialchars
        eq_(red("\nError: oh god\n\n"), sys.stderr.getvalue())


class TestRingBuffer(TestCase):
    def setUp(self):
        self.b = RingBuffer([], maxlen=5)

    def test_append_empty(self):
        self.b.append('x')
        eq_(self.b, ['x'])

    def test_append_full(self):
        self.b.extend("abcde")
        self.b.append('f')
        eq_(self.b, ['b', 'c', 'd', 'e', 'f'])

    def test_extend_empty(self):
        self.b.extend("abc")
        eq_(self.b, ['a', 'b', 'c'])

    def test_extend_overrun(self):
        self.b.extend("abc")
        self.b.extend("defg")
        eq_(self.b, ['c', 'd', 'e', 'f', 'g'])

    def test_extend_full(self):
        self.b.extend("abcde")
        self.b.extend("fgh")
        eq_(self.b, ['d', 'e', 'f', 'g', 'h'])

    def test_plus_equals(self):
        self.b += "abcdefgh"
        eq_(self.b, ['d', 'e', 'f', 'g', 'h'])

    def test_oversized_extend(self):
        self.b.extend("abcdefghijklmn")
        eq_(self.b, ['j', 'k', 'l', 'm', 'n'])

    def test_zero_maxlen_append(self):
        b = RingBuffer([], maxlen=0)
        b.append('a')
        eq_(b, [])

    def test_zero_maxlen_extend(self):
        b = RingBuffer([], maxlen=0)
        b.extend('abcdefghijklmnop')
        eq_(b, [])

    def test_None_maxlen_append(self):
        b = RingBuffer([], maxlen=None)
        b.append('a')
        eq_(b, ['a'])

    def test_None_maxlen_extend(self):
        b = RingBuffer([], maxlen=None)
        b.extend('abcdefghijklmnop')
        eq_(''.join(b), 'abcdefghijklmnop')


fabric-1.14.0/tests/test_version.py

"""
Tests covering Fabric's version number pretty-print functionality.
"""

from nose.tools import eq_

import fabric.version


def test_get_version():
    get_version = fabric.version.get_version
    for tup, short, normal, verbose in [
        ((0, 9, 0, 'final', 0), '0.9.0', '0.9', '0.9 final'),
        ((0, 9, 1, 'final', 0), '0.9.1', '0.9.1', '0.9.1 final'),
        ((0, 9, 0, 'alpha', 1), '0.9a1', '0.9 alpha 1', '0.9 alpha 1'),
        ((0, 9, 1, 'beta', 1), '0.9.1b1', '0.9.1 beta 1', '0.9.1 beta 1'),
        ((0, 9, 0, 'release candidate', 1),
            '0.9rc1', '0.9 release candidate 1', '0.9 release candidate 1'),
        ((1, 0, 0, 'alpha', 0), '1.0a', '1.0 pre-alpha', '1.0 pre-alpha'),
    ]:
        fabric.version.VERSION = tup
        yield eq_, get_version('short'), short
        yield eq_, get_version('normal'), normal
        yield eq_, get_version('verbose'), verbose


fabric-1.14.0/tests/utils.py

from __future__ import with_statement

from contextlib import contextmanager
from fudge.patcher import with_patched_object
from functools import partial
from types import StringTypes
import copy
import getpass
import os
import re
import shutil
import sys
import tempfile

from fudge import Fake, patched_context, clear_expectations
from nose.tools import raises

from fabric.state import env, output
from fabric.sftp import SFTP
from fabric.network import to_dict

from server import PORT, PASSWORDS, USER, HOST
from mock_streams import mock_streams


class FabricTest(object):
    """
    Nose-oriented test runner which wipes state.env and provides file helpers.
    """
    def setup(self):
        # Clear Fudge mock expectations
        clear_expectations()
        # Copy env, output for restoration in teardown
        self.previous_env = copy.deepcopy(env)
        # Deepcopy doesn't work well on AliasDicts; but they're only one layer
        # deep anyways, so...
        self.previous_output = output.items()
        # Allow hooks from subclasses here for setting env vars (so they get
        # purged correctly in teardown())
        self.env_setup()
        # Temporary local file dir
        self.tmpdir = tempfile.mkdtemp()

    def set_network(self):
        env.update(to_dict('%s@%s:%s' % (USER, HOST, PORT)))

    def env_setup(self):
        # Set up default networking for test server
        env.disable_known_hosts = True
        self.set_network()
        env.password = PASSWORDS[USER]
        # Command response mocking is easier without having to account for
        # shell wrapping everywhere.
        env.use_shell = False

    def teardown(self):
        env.clear()  # In case tests set env vars that didn't exist previously
        env.update(self.previous_env)
        output.update(self.previous_output)
        shutil.rmtree(self.tmpdir)
        # Clear Fudge mock expectations...again
        clear_expectations()

    def path(self, *path_parts):
        return os.path.join(self.tmpdir, *path_parts)

    def mkfile(self, path, contents):
        dest = self.path(path)
        with open(dest, 'w') as fd:
            fd.write(contents)
        return dest

    def exists_remotely(self, path):
        return SFTP(env.host_string).exists(path)

    def exists_locally(self, path):
        return os.path.exists(path)


def password_response(password, times_called=None, silent=True):
    """
    Context manager which patches ``getpass.getpass`` to return ``password``.

    ``password`` may be a single string or an iterable of strings:

    * If single string, given password is returned every time ``getpass`` is
      called.
    * If iterable, iterated over for each call to ``getpass``, after which
      ``getpass`` will error.

    If ``times_called`` is given, it is used to add a ``Fake.times_called``
    clause to the mock object, e.g. ``.times_called(1)``. Specifying
    ``times_called`` alongside an iterable ``password`` list is unsupported
    (see Fudge docs on ``Fake.next_call``).

    If ``silent`` is True, no prompt will be printed to ``sys.stderr``.
    """
    fake = Fake('getpass', callable=True)
    # Assume stringtype or iterable, turn into mutable iterable
    if isinstance(password, StringTypes):
        passwords = [password]
    else:
        passwords = list(password)
    # Optional echoing of prompt to mimic real behavior of getpass
    # NOTE: also echo a newline if the prompt isn't a "passthrough" from the
    # server (as it means the server won't be sending its own newline for us).
    echo = lambda x, y: y.write(x + ("\n" if x != " " else ""))
    # Always return first (only?) password right away
    fake = fake.returns(passwords.pop(0))
    if not silent:
        fake = fake.calls(echo)
    # If we had >1, return those afterwards
    for pw in passwords:
        fake = fake.next_call().returns(pw)
        if not silent:
            fake = fake.calls(echo)
    # Passthrough times_called
    if times_called:
        fake = fake.times_called(times_called)
    return patched_context(getpass, 'getpass', fake)


def _assert_contains(needle, haystack, invert):
    matched = re.search(needle, haystack, re.M)
    if (invert and matched) or (not invert and not matched):
        raise AssertionError("r'%s' %sfound in '%s'" % (
            needle,
            "" if invert else "not ",
            haystack
        ))

assert_contains = partial(_assert_contains, invert=False)
assert_not_contains = partial(_assert_contains, invert=True)


def line_prefix(prefix, string):
    """
    Return ``string`` with all lines prefixed by ``prefix``.
    """
    return "\n".join(prefix + x for x in string.splitlines())


def eq_(result, expected, msg=None):
    """
    Shadow of the Nose builtin which presents easier to read multiline output.
    """
    params = {'expected': expected, 'result': result}
    aka = """
--------------------------------- aka -----------------------------------------
Expected:
%(expected)r

Got:
%(result)r
""" % params
    default_msg = """
Expected:
%(expected)s

Got:
%(result)s
""" % params
    if (repr(result) != str(result)) or (repr(expected) != str(expected)):
        default_msg += aka
    assert result == expected, msg or default_msg


def eq_contents(path, text):
    with open(path) as fd:
        eq_(text, fd.read())


def support(path):
    return os.path.join(os.path.dirname(__file__), 'support', path)

fabfile = support


@contextmanager
def path_prefix(module):
    i = 0
    sys.path.insert(i, os.path.dirname(module))
    yield
    sys.path.pop(i)


def aborts(func):
    return raises(SystemExit)(mock_streams('stderr')(func))


def _patched_input(func, fake):
    return func(sys.modules['__builtin__'], 'raw_input', fake)
patched_input = partial(_patched_input, patched_context)
with_patched_input = partial(_patched_input, with_patched_object)
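
# Usage sketch (illustrative only, not part of the original helper module):
# a typical test combines FabricTest's env-wiping setup/teardown with the
# helpers above. This example assumes the in-process test server fixtures
# used throughout the suite (server, PASSWORDS, USER) and fabric.api.run:
#
#     from server import server
#     from fabric.api import run
#
#     class TestPrompting(FabricTest):
#         @server()
#         def test_password_prompt_is_mocked(self):
#             # getpass.getpass is patched to return the canned password once
#             with password_response(PASSWORDS[USER], times_called=1):
#                 run("ls /simple")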