Fabric-1.8.2/000755 000765 000024 00000000000 12277461504 013666 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/AUTHORS000644 000765 000024 00000002152 12257074160 014732 0ustar00jforcierstaff000000 000000 The following list contains individuals who contributed nontrivial code to Fabric's codebase, ordered by date of first contribution. Individuals who submitted bug reports or trivial one-line "you forgot to do X" patches are generally credited in the commit log only. IMPORTANT: as of 2012, this file is historical only and we'll probably stop updating it. The changelog and/or Git history is the canonical source for thanks, credits etc. Christian Vest Hansen Rob Cowie Jeff Forcier Travis Cline Niklas Lindström Kevin Horn Max Battcher Alexander Artemenko Dennis Schoen Erick Dennis Sverre Johansen Michael Stephens Armin Ronacher Curt Micol Patrick McNerthney Steve Steiner Ali Saifee Jorge Vargas Peter Ellis Brian Rosner Xinan Wu Alex Koshelev Mich Matuson Morgan Goose Carl Meyer Erich Heine Travis Swicegood Paul Smith Alex Koshelev Stephen Goss James Murty Thomas Ballinger Rick Harding Kirill Pinchuk Ales Zoulek Casey Banner Roman Imankulov Rodrigue Alcazar Jeremy Avnet Matt Chisholm Mark Merritt Max Arnold Szymon Reichmann David Wolever Jason Coombs Ben Davis Neilen Marais Rory Geoghegan Alexey Diyan Kamil Kisiel Fabric-1.8.2/docs/000755 000765 000024 00000000000 12277461504 014616 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/fabfile/000755 000765 000024 00000000000 12277461504 015256 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/fabric/000755 000765 000024 00000000000 12277461504 015114 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/Fabric.egg-info/000755 000765 000024 00000000000 12277461504 016546 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/INSTALL000644 000765 000024 00000000155 12257074160 014714 0ustar00jforcierstaff000000 000000 For installation help, please see http://fabfile.org/ or (if using a source checkout) docs/installation.rst. Fabric-1.8.2/LICENSE000644 000765 000024 00000002502 12277461064 014673 0ustar00jforcierstaff000000 000000 Copyright (c) 2013, Christian Vest Hansen and Jeffrey E. Forcier All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
Fabric-1.8.2/MANIFEST.in000644 000765 000024 00000000323 12257074160 015416 0ustar00jforcierstaff000000 000000 include AUTHORS include INSTALL include LICENSE include README.rst recursive-include docs * recursive-exclude docs/_build * include requirements.txt recursive-include tests * recursive-exclude tests *.pyc *.pyo Fabric-1.8.2/PKG-INFO000644 000765 000024 00000006406 12277461504 014771 0ustar00jforcierstaff000000 000000 Metadata-Version: 1.0 Name: Fabric Version: 1.8.2 Summary: Fabric is a simple, Pythonic tool for remote execution and deployment. Home-page: http://fabfile.org Author: Jeff Forcier Author-email: jeff@bitprophet.org License: UNKNOWN Description: To find out what's new in this version of Fabric, please see `the changelog `_. You can also install the `in-development version `_ using pip, with `pip install fabric==dev`. ---- .. image:: https://secure.travis-ci.org/fabric/fabric.png?branch=master :target: https://travis-ci.org/fabric/fabric Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. It provides a basic suite of operations for executing local or remote shell commands (normally or via ``sudo``) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution. Typical use involves creating a Python module containing one or more functions, then executing them via the ``fab`` command-line tool. Below is a small but complete "fabfile" containing a single task:: from fabric.api import run def host_type(): run('uname -s') Once a task is defined, it may be run on one or more servers, like so:: $ fab -H localhost,linuxbox host_type [localhost] run: uname -s [localhost] out: Darwin [linuxbox] run: uname -s [linuxbox] out: Linux Done. Disconnecting from localhost... done. Disconnecting from linuxbox... done. In addition to use via the ``fab`` tool, Fabric's components may be imported into other Python code, providing a Pythonic interface to the SSH protocol suite at a higher level than that provided by e.g. the ``Paramiko`` library (which Fabric itself uses.) ---- For more information, please see the Fabric website or execute ``fab --help``. Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Unix Classifier: Operating System :: POSIX Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Software Development Classifier: Topic :: Software Development :: Build Tools Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Clustering Classifier: Topic :: System :: Software Distribution Classifier: Topic :: System :: Systems Administration Fabric-1.8.2/README.rst000644 000765 000024 00000002563 12277451130 015355 0ustar00jforcierstaff000000 000000 .. 
image:: https://secure.travis-ci.org/fabric/fabric.png?branch=master :target: https://travis-ci.org/fabric/fabric Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. It provides a basic suite of operations for executing local or remote shell commands (normally or via ``sudo``) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution. Typical use involves creating a Python module containing one or more functions, then executing them via the ``fab`` command-line tool. Below is a small but complete "fabfile" containing a single task:: from fabric.api import run def host_type(): run('uname -s') Once a task is defined, it may be run on one or more servers, like so:: $ fab -H localhost,linuxbox host_type [localhost] run: uname -s [localhost] out: Darwin [linuxbox] run: uname -s [linuxbox] out: Linux Done. Disconnecting from localhost... done. Disconnecting from linuxbox... done. In addition to use via the ``fab`` tool, Fabric's components may be imported into other Python code, providing a Pythonic interface to the SSH protocol suite at a higher level than that provided by e.g. the ``Paramiko`` library (which Fabric itself uses.) Fabric-1.8.2/requirements.txt000644 000765 000024 00000001016 12277451120 017141 0ustar00jforcierstaff000000 000000 # These requirements are for DEVELOPMENT ONLY! # You do not need e.g. Sphinx or Fudge just to run the 'fab' tool. # Instead, these are necessary for executing the test suite or developing the # cutting edge (which may have different requirements from released versions.) # Development version of Paramiko, just in case we're in one of those phases. -e git+https://github.com/paramiko/paramiko#egg=paramiko # Pull in actual "you already have local installed checkouts of Fabric + # Paramiko" dev deps. -r dev-requirements.txt Fabric-1.8.2/setup.cfg000644 000765 000024 00000000073 12277461504 015507 0ustar00jforcierstaff000000 000000 [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 Fabric-1.8.2/setup.py000644 000765 000024 00000004105 12277451120 015371 0ustar00jforcierstaff000000 000000 #!/usr/bin/env python import sys from setuptools import setup, find_packages from fabric.version import get_version readme = open('README.rst').read() long_description = """ To find out what's new in this version of Fabric, please see `the changelog `_. You can also install the `in-development version `_ using pip, with `pip install fabric==dev`. ---- %s ---- For more information, please see the Fabric website or execute ``fab --help``. 
""" % (get_version('branch'), readme) setup( name='Fabric', version=get_version('short'), description='Fabric is a simple, Pythonic tool for remote execution and deployment.', long_description=long_description, author='Jeff Forcier', author_email='jeff@bitprophet.org', url='http://fabfile.org', packages=find_packages(), test_suite='nose.collector', tests_require=['nose', 'fudge<1.0'], install_requires=['paramiko>=1.10.0'], entry_points={ 'console_scripts': [ 'fab = fabric.main:main', ] }, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Environment :: Console', 'Intended Audience :: Developers', 'Intended Audience :: System Administrators', 'License :: OSI Approved :: BSD License', 'Operating System :: MacOS :: MacOS X', 'Operating System :: Unix', 'Operating System :: POSIX', 'Programming Language :: Python', 'Programming Language :: Python :: 2.5', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Topic :: Software Development', 'Topic :: Software Development :: Build Tools', 'Topic :: Software Development :: Libraries', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: System :: Clustering', 'Topic :: System :: Software Distribution', 'Topic :: System :: Systems Administration', ], ) Fabric-1.8.2/tests/000755 000765 000024 00000000000 12277461504 015030 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/client.key000644 000765 000024 00000003317 12257074160 017020 0ustar00jforcierstaff000000 000000 -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: DES-EDE3-CBC,F1AFE040F412E6D1 cIBbwu1/PD9vjtyFn+xbpc2X9Uv9sllCRooLwkOv9rkBxDRItT8D5UiGHGIGIAvj eq9sUze8bXQeXs9zpJwMRH1kjdmCmnmRX0iXcsxSgnioL3aEGLTbXqxkUOnSgj4Y cJ1trT51XVRSBGlRHYPmF1IhYYW/RPZlFUPMJDE5s1moROU29DfnaboTREf8shJ9 A/jHvKoivn4GgM1U6VcwwtijvmgrrB5KzqpRfTLf6Rxe6St3e4WjQusYWVP4BOmz ImQyaATcPwn5iMWPfvXohPQR/ajuoU9jzMM3DqzcrH7Q4VmpSTrmkdG7Ra5GfSE1 O5WEiqNwUkfjAYIjbxo11gVtIH8ddsMuF5odsh2LVXYocHeZzRlZvsip2AePKiKX xMkZItP4xqFBfi0jnqCVkQGUdtRYhHomDUO8U0JtB3BFNT/L+LC+dsrj8G/FaQiD n8an2sDf1CrYXqfz3V3rGzuPDq/CKwPD8HeTpjZUT7bPUNsTNMVx58LiYShRV2uB zUn83diKX12xS+gyS5PfuujwQP93ZQXOP9agKSa2UlY2ojUxtpc1vxiEzcFcU9Zg 2uLEbsRKW1qe2jLDTmRyty14rJmi7ocbjPUuEuw9Aj1v46jzhBXBPE7cWHGm1o2/ /e0lGfLTtm3Q2SponTLTcHTrBvrDBRlDAN5sChhbaoEoUCHjTKo8aj6whDKfAw4Q KNHrOkkXyDyvd90c1loen5u5iaol+l5W+7LG3Sr5uRHMHAsF0MH9cZd/RQXMSY/U sQLWumskx/iSrbjFztW0La0bBCB6vHBYLervC3lrrmvnhfYrNBrZM8eH1hTSZUsT VFeKgm+KVkwEG/uXoI/XOge01b1oOHzKNKGT7Q5ogbV6w67LtOrSeTH0FCjHsN8z 2LCQHWuII4h3b1U/Pg8N5Pz59+qraSrMZAHOROYc19r0HSS5gg7m1yD3IPXO73fI gLO0/44f/KYqVP2+FKgQo9enUSLI5GuMAfhWaTpeOpJNd10egSOB3SaJ7nn20/Pm vSBSL0KsSeXY4/Df43MuHu46PvYzRwKvZB7GJJJPi2XjdFqCxuoCuEqfaZxf1lnI ZhZFmsZE1rd7kgBYyn0VXn1AvrLjaLuvmsOKaFdO4TAbQpE3Pps6AdQ8EpJ62Gei 0yZlXgh2+zZp5lRMfO5JFtr7/pVpIqnRKfaDk1XawWP7i1/0PnVXsR2G6yu6kbEg R/v2LKnp49TUldfNmVW8QHElw/LrCBW08iA+44vlGYdCU8nAW9Sy+y4plW+X32z8 Viw82ISUcoJSHmRfzXOWaj24AftbSOzo2bRmCO+xkBkXFrhTI83Aqbu7TN/yejB8 hDb04AVxzEkBTw/B0pLkJUt5lpcr9fZMvACHsL0gTRc5OPb4/zhG7y9npWgq5Snb ZnUAOi+ndnW8IL4y9YI6U7LBSyMvE7L7+QCnLJxVnO2NxjDCJVDDe6fLR9pRBCCC Sh3X/FNsu1YQzNIOvf75ri1zzqKmv4x6ETmmgs+vMGRl62s8SQcgWFEGAVrAP+uR ocx0chW3BWEQalRat2vBWpj1gyH2aHd8tgamb8XXFLK35iTk2/oCqQ== -----END RSA PRIVATE KEY----- Fabric-1.8.2/tests/client.key.pub000644 000765 000024 00000000614 12257074160 017602 0ustar00jforcierstaff000000 000000 ssh-rsa 
AAAAB3NzaC1yc2EAAAABIwAAAQEA2FxgXlTZGk/JZMacwgMPC6LEd3efYgIdgK0RXGRMNs06aSyeEUwTKqmelNnElsRsUW68Ybosox0LoHGfTUj0gtSOqG+pb0QJQ5yslPBwBlL+WUC65HDzHdBrUf/bFR+rc02i2Ciraan4elvuLW07UfO5ceCOeJSYyNmrhN/vboHr3Pcv2QG717sEy/9pSAVzrriCqYFd6IFg9o6UhuSB7hvW4bzKXDHtz6OeXrC6U/FWxx3rYZg3h9K2SBGXLavqiJSkFgeSzn3geSbyAjTgowaZ8kNq4+Mc1hsAMtLZBKMBZUTuMjHpQR31nWloUUfuz5QhaORk1pJBmE90MqShiw== jforcier@ytram Fabric-1.8.2/tests/fake_filesystem.py000644 000765 000024 00000003623 12257074160 020554 0ustar00jforcierstaff000000 000000 import os import stat from StringIO import StringIO from types import StringTypes from fabric.network import ssh class FakeFile(StringIO): def __init__(self, value=None, path=None): init = lambda x: StringIO.__init__(self, x) if value is None: init("") ftype = 'dir' size = 4096 else: init(value) ftype = 'file' size = len(value) attr = ssh.SFTPAttributes() attr.st_mode = {'file': stat.S_IFREG, 'dir': stat.S_IFDIR}[ftype] attr.st_size = size attr.filename = os.path.basename(path) self.attributes = attr def __str__(self): return self.getvalue() def write(self, value): StringIO.write(self, value) self.attributes.st_size = len(self.getvalue()) def close(self): """ Always hold fake files open. """ pass def __cmp__(self, other): me = str(self) if isinstance(other, StringTypes) else self return cmp(me, other) class FakeFilesystem(dict): def __init__(self, d=None): # Replicate input dictionary using our custom __setitem__ d = d or {} for key, value in d.iteritems(): self[key] = value def __setitem__(self, key, value): if isinstance(value, StringTypes) or value is None: value = FakeFile(value, key) super(FakeFilesystem, self).__setitem__(key, value) def normalize(self, path): """ Normalize relative paths. In our case, the "home" directory is just the root, /. I expect real servers do this as well but with the user's home directory. """ if not path.startswith(os.path.sep): path = os.path.join(os.path.sep, path) return path def __getitem__(self, key): return super(FakeFilesystem, self).__getitem__(self.normalize(key)) Fabric-1.8.2/tests/mock_streams.py000644 000765 000024 00000004666 12277461502 020103 0ustar00jforcierstaff000000 000000 """ Stand-alone stream mocking decorator for easier imports. """ from functools import wraps import sys from StringIO import StringIO # No need for cStringIO at this time class CarbonCopy(StringIO): """ A StringIO capable of multiplexing its writes to other buffer objects. """ def __init__(self, buffer='', cc=None): """ If ``cc`` is given and is a file-like object or an iterable of same, it/they will be written to whenever this StringIO instance is written to. """ StringIO.__init__(self, buffer) if cc is None: cc = [] elif hasattr(cc, 'write'): cc = [cc] self.cc = cc def write(self, s): StringIO.write(self, s) for writer in self.cc: writer.write(s) def mock_streams(which): """ Replaces a stream with a ``StringIO`` during the test, then restores after. Must specify which stream (stdout, stderr, etc) via string args, e.g.:: @mock_streams('stdout') def func(): pass @mock_streams('stderr') def func(): pass @mock_streams('both') def func() pass If ``'both'`` is specified, not only will both streams be replaced with StringIOs, but a new combined-streams output (another StringIO) will appear at ``sys.stdall``. This StringIO will resemble what a user sees at a terminal, i.e. both streams intermingled. 
""" both = (which == 'both') stdout = (which == 'stdout') or both stderr = (which == 'stderr') or both def mocked_streams_decorator(func): @wraps(func) def inner_wrapper(*args, **kwargs): if both: sys.stdall = StringIO() fake_stdout = CarbonCopy(cc=sys.stdall) fake_stderr = CarbonCopy(cc=sys.stdall) else: fake_stdout, fake_stderr = StringIO(), StringIO() if stdout: my_stdout, sys.stdout = sys.stdout, fake_stdout if stderr: my_stderr, sys.stderr = sys.stderr, fake_stderr try: ret = func(*args, **kwargs) finally: if stdout: sys.stdout = my_stdout if stderr: sys.stderr = my_stderr if both: del sys.stdall return inner_wrapper return mocked_streams_decorator Fabric-1.8.2/tests/private.key000644 000765 000024 00000001563 12257074160 017215 0ustar00jforcierstaff000000 000000 -----BEGIN RSA PRIVATE KEY----- MIICWgIBAAKBgQDTj1bqB4WmayWNPB+8jVSYpZYk80Ujvj680pOTh2bORBjbIAyz oWGW+GUjzKxTiiPvVmxFgx5wdsFvF03v34lEVVhMpouqPAYQ15N37K/ir5XY+9m/ d8ufMCkjeXsQkKqFbAlQcnWMCRnOoPHS3I4vi6hmnDDeeYTSRvfLbW0fhwIBIwKB gBIiOqZYaoqbeD9OS9z2K9KR2atlTxGxOJPXiP4ESqP3NVScWNwyZ3NXHpyrJLa0 EbVtzsQhLn6rF+TzXnOlcipFvjsem3iYzCpuChfGQ6SovTcOjHV9z+hnpXvQ/fon soVRZY65wKnF7IAoUwTmJS9opqgrN6kRgCd3DASAMd1bAkEA96SBVWFt/fJBNJ9H tYnBKZGw0VeHOYmVYbvMSstssn8un+pQpUm9vlG/bp7Oxd/m+b9KWEh2xPfv6zqU avNwHwJBANqzGZa/EpzF4J8pGti7oIAPUIDGMtfIcmqNXVMckrmzQ2vTfqtkEZsA 4rE1IERRyiJQx6EJsz21wJmGV9WJQ5kCQQDwkS0uXqVdFzgHO6S++tjmjYcxwr3g H0CoFYSgbddOT6miqRskOQF3DZVkJT3kyuBgU2zKygz52ukQZMqxCb1fAkASvuTv qfpH87Qq5kQhNKdbbwbmd2NxlNabazPijWuphGTdW0VfJdWfklyS2Kr+iqrs/5wV HhathJt636Eg7oIjAkA8ht3MQ+XSl9yIJIS8gVpbPxSw5OMfw0PjVE7tBdQruiSc nvuQES5C9BMHjF39LZiGH1iLQy7FgdHyoP+eodI7 -----END RSA PRIVATE KEY----- Fabric-1.8.2/tests/Python26SocketServer.py000644 000765 000024 00000052753 12257074160 021403 0ustar00jforcierstaff000000 000000 """Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. 
Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to prevent two requests that arrive nearly simultaneously from applying conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 Luke Kenneth Casson Leighton. Example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database); each entry is processed by a RequestHandlerClass. """ # Author of the BaseServer patch: Luke Kenneth Casson Leighton # XXX Warning! # There is a test suite for this module, but it cannot be run by the # standard regression test. # To run it manually, run Lib/test/test_socketserver.py.
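# A minimal usage sketch of the mix-in pattern described above, combining
# ThreadingMixIn (via ThreadingTCPServer, defined below) with a stream
# request handler. The handler class and port number are illustrative
# examples only, not part of this module:
#
#   class EchoHandler(StreamRequestHandler):
#       def handle(self):
#           # Echo one line back to the client.
#           self.wfile.write(self.rfile.readline())
#
#   server = ThreadingTCPServer(("localhost", 9999), EchoHandler)
#   server.serve_forever()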
__version__ = "0.4" import socket import select import sys import os try: import threading except ImportError: import dummy_threading as threading __all__ = ["TCPServer", "UDPServer", "ForkingUDPServer", "ForkingTCPServer", "ThreadingUDPServer", "ThreadingTCPServer", "BaseRequestHandler", "StreamRequestHandler", "DatagramRequestHandler", "ThreadingMixIn", "ForkingMixIn"] if hasattr(socket, "AF_UNIX"): __all__.extend(["UnixStreamServer", "UnixDatagramServer", "ThreadingUnixStreamServer", "ThreadingUnixDatagramServer"]) class BaseServer: """Base class for server classes. Methods for the caller: - __init__(server_address, RequestHandlerClass) - serve_forever(poll_interval=0.5) - shutdown() - handle_request() # if you do not use serve_forever() - fileno() -> int # for select() Methods that may be overridden: - server_bind() - server_activate() - get_request() -> request, client_address - handle_timeout() - verify_request(request, client_address) - server_close() - process_request(request, client_address) - close_request(request) - handle_error() Methods for derived classes: - finish_request(request, client_address) Class variables that may be overridden by derived classes or instances: - timeout - address_family - socket_type - allow_reuse_address Instance variables: - RequestHandlerClass - socket """ timeout = None def __init__(self, server_address, RequestHandlerClass): """Constructor. May be extended, do not override.""" self.server_address = server_address self.RequestHandlerClass = RequestHandlerClass self.__is_shut_down = threading.Event() self.__serving = False def server_activate(self): """Called by constructor to activate the server. May be overridden. """ pass def serve_forever(self, poll_interval=0.5): """Handle one request at a time until shutdown. Polls for shutdown every poll_interval seconds. Ignores self.timeout. If you need to do periodic tasks, do them in another thread. """ self.__serving = True self.__is_shut_down.clear() while self.__serving: # XXX: Consider using another file descriptor or # connecting to the socket to wake this up instead of # polling. Polling reduces our responsiveness to a # shutdown request and wastes cpu at all other times. r, w, e = select.select([self], [], [], poll_interval) if r: self._handle_request_noblock() self.__is_shut_down.set() def shutdown(self): """Stops the serve_forever loop. Blocks until the loop has finished. This must be called while serve_forever() is running in another thread, or it will deadlock. """ self.__serving = False self.__is_shut_down.wait() # The distinction between handling, getting, processing and # finishing a request is fairly arbitrary. Remember: # # - handle_request() is the top-level call. It calls # select, get_request(), verify_request() and process_request() # - get_request() is different for stream or datagram sockets # - process_request() is the place that may fork a new process # or create a new thread to finish the request # - finish_request() instantiates the request handler class; # this constructor will handle the request all by itself def handle_request(self): """Handle one request, possibly blocking. Respects self.timeout. """ # Support people who used socket.settimeout() to escape # handle_request before self.timeout was available. 
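        # Prefer the socket's own timeout when one is set; otherwise fall back
        # to self.timeout, and when both are set honor whichever is shorter.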
timeout = self.socket.gettimeout() if timeout is None: timeout = self.timeout elif self.timeout is not None: timeout = min(timeout, self.timeout) fd_sets = select.select([self], [], [], timeout) if not fd_sets[0]: self.handle_timeout() return self._handle_request_noblock() def _handle_request_noblock(self): """Handle one request, without blocking. I assume that select.select has returned that the socket is readable before this function was called, so there should be no risk of blocking in get_request(). """ try: request, client_address = self.get_request() except socket.error: return if self.verify_request(request, client_address): try: self.process_request(request, client_address) except: self.handle_error(request, client_address) self.close_request(request) def handle_timeout(self): """Called if no new request arrives within self.timeout. Overridden by ForkingMixIn. """ pass def verify_request(self, request, client_address): """Verify the request. May be overridden. Return True if we should proceed with this request. """ return True def process_request(self, request, client_address): """Call finish_request. Overridden by ForkingMixIn and ThreadingMixIn. """ self.finish_request(request, client_address) self.close_request(request) def server_close(self): """Called to clean-up the server. May be overridden. """ pass def finish_request(self, request, client_address): """Finish one request by instantiating RequestHandlerClass.""" self.RequestHandlerClass(request, client_address, self) def close_request(self, request): """Called to clean up an individual request.""" pass def handle_error(self, request, client_address): """Handle an error gracefully. May be overridden. The default is to print a traceback and continue. """ print('-' * 40) print('Exception happened during processing of request from %s' % (client_address,)) import traceback traceback.print_exc() # XXX But this goes to stderr! print('-' * 40) class TCPServer(BaseServer): """Base class for various socket-based server classes. Defaults to synchronous IP stream (i.e., TCP). Methods for the caller: - __init__(server_address, RequestHandlerClass, bind_and_activate=True) - serve_forever(poll_interval=0.5) - shutdown() - handle_request() # if you don't use serve_forever() - fileno() -> int # for select() Methods that may be overridden: - server_bind() - server_activate() - get_request() -> request, client_address - handle_timeout() - verify_request(request, client_address) - process_request(request, client_address) - close_request(request) - handle_error() Methods for derived classes: - finish_request(request, client_address) Class variables that may be overridden by derived classes or instances: - timeout - address_family - socket_type - request_queue_size (only for stream sockets) - allow_reuse_address Instance variables: - server_address - RequestHandlerClass - socket """ address_family = socket.AF_INET socket_type = socket.SOCK_STREAM request_queue_size = 5 allow_reuse_address = False def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True): """Constructor. May be extended, do not override.""" BaseServer.__init__(self, server_address, RequestHandlerClass) self.socket = socket.socket(self.address_family, self.socket_type) if bind_and_activate: self.server_bind() self.server_activate() def server_bind(self): """Called by constructor to bind the socket. May be overridden. 
""" if self.allow_reuse_address: self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.socket.bind(self.server_address) self.server_address = self.socket.getsockname() def server_activate(self): """Called by constructor to activate the server. May be overridden. """ self.socket.listen(self.request_queue_size) def server_close(self): """Called to clean-up the server. May be overridden. """ self.socket.close() def fileno(self): """Return socket file number. Interface required by select(). """ return self.socket.fileno() def get_request(self): """Get the request and client address from the socket. May be overridden. """ return self.socket.accept() def close_request(self, request): """Called to clean up an individual request.""" request.close() class UDPServer(TCPServer): """UDP server class.""" allow_reuse_address = False socket_type = socket.SOCK_DGRAM max_packet_size = 8192 def get_request(self): data, client_addr = self.socket.recvfrom(self.max_packet_size) return (data, self.socket), client_addr def server_activate(self): # No need to call listen() for UDP. pass def close_request(self, request): # No need to close anything. pass class ForkingMixIn: """Mix-in class to handle each request in a new process.""" timeout = 300 active_children = None max_children = 40 def collect_children(self): """Internal routine to wait for children that have exited.""" if self.active_children is None: return while len(self.active_children) >= self.max_children: # XXX: This will wait for any child process, not just ones # spawned by this library. This could confuse other # libraries that expect to be able to wait for their own # children. try: pid, status = os.waitpid(0, 0) except os.error: pid = None if pid not in self.active_children: continue self.active_children.remove(pid) # XXX: This loop runs more system calls than it ought # to. There should be a way to put the active_children into a # process group and then use os.waitpid(-pgid) to wait for any # of that set, but I couldn't find a way to allocate pgids # that couldn't collide. for child in self.active_children: try: pid, status = os.waitpid(child, os.WNOHANG) except os.error: pid = None if not pid: continue try: self.active_children.remove(pid) except ValueError, e: raise ValueError('%s. x=%d and list=%r' % \ (e.message, pid, self.active_children)) def handle_timeout(self): """Wait for zombies after self.timeout seconds of inactivity. May be extended, do not override. """ self.collect_children() def process_request(self, request, client_address): """Fork a new subprocess to process the request.""" self.collect_children() pid = os.fork() if pid: # Parent process if self.active_children is None: self.active_children = [] self.active_children.append(pid) self.close_request(request) return else: # Child process. # This must never return, hence os._exit()! try: self.finish_request(request, client_address) os._exit(0) except: try: self.handle_error(request, client_address) finally: os._exit(1) class ThreadingMixIn: """Mix-in class to handle each request in a new thread.""" # Decides how threads will act upon termination of the # main process daemon_threads = False def process_request_thread(self, request, client_address): """Same as in BaseServer but as a thread. In addition, exception handling is done here. 
""" try: self.finish_request(request, client_address) self.close_request(request) except: self.handle_error(request, client_address) self.close_request(request) def process_request(self, request, client_address): """Start a new thread to process the request.""" t = threading.Thread(target=self.process_request_thread, args=(request, client_address)) if self.daemon_threads: t.setDaemon(1) t.start() class ForkingUDPServer(ForkingMixIn, UDPServer): pass class ForkingTCPServer(ForkingMixIn, TCPServer): pass class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass if hasattr(socket, 'AF_UNIX'): class UnixStreamServer(TCPServer): address_family = socket.AF_UNIX class UnixDatagramServer(UDPServer): address_family = socket.AF_UNIX class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass class BaseRequestHandler: """Base class for request handler classes. This class is instantiated for each request to be handled. The constructor sets the instance variables request, client_address and server, and then calls the handle() method. To implement a specific service, all you need to do is to derive a class which defines a handle() method. The handle() method can find the request as self.request, the client address as self.client_address, and the server (in case it needs access to per-server information) as self.server. Since a separate instance is created for each request, the handle() method can define arbitrary other instance variariables. """ def __init__(self, request, client_address, server): self.request = request self.client_address = client_address self.server = server try: self.setup() self.handle() self.finish() finally: sys.exc_traceback = None # Help garbage collection def setup(self): pass def handle(self): pass def finish(self): pass # The following two classes make it possible to use the same service # class for stream or datagram servers. # Each class sets up these instance variables: # - rfile: a file object from which receives the request is read # - wfile: a file object to which the reply is written # When the handle() method returns, wfile is flushed properly class StreamRequestHandler(BaseRequestHandler): """Define self.rfile and self.wfile for stream sockets.""" # Default buffer sizes for rfile, wfile. # We default rfile to buffered because otherwise it could be # really slow for large data (a getc() call per byte); we make # wfile unbuffered because (a) often after a write() we want to # read and we need to flush the line; (b) big writes to unbuffered # files are typically optimized by stdio even when big reads # aren't. rbufsize = -1 wbufsize = 0 def setup(self): self.connection = self.request self.rfile = self.connection.makefile('rb', self.rbufsize) self.wfile = self.connection.makefile('wb', self.wbufsize) def finish(self): if not self.wfile.closed: self.wfile.flush() self.wfile.close() self.rfile.close() class DatagramRequestHandler(BaseRequestHandler): # XXX Regrettably, I cannot get this working on Linux; # s.recvfrom() doesn't return a meaningful client address. 
"""Define self.rfile and self.wfile for datagram sockets.""" def setup(self): try: from cStringIO import StringIO except ImportError: from StringIO import StringIO self.packet, self.socket = self.request self.rfile = StringIO(self.packet) self.wfile = StringIO() def finish(self): self.socket.sendto(self.wfile.getvalue(), self.client_address) Fabric-1.8.2/tests/server.py000644 000765 000024 00000037421 12277451120 016710 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import copy import itertools import os import re import socket import stat import sys import threading import time import types from StringIO import StringIO from functools import wraps from Python26SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer from fabric.operations import _sudo_prefix from fabric.api import env, hide from fabric.thread_handling import ThreadHandler from fabric.network import disconnect_all, ssh from fake_filesystem import FakeFilesystem, FakeFile # # Debugging # import logging logging.basicConfig(filename='/tmp/fab.log', level=logging.DEBUG) logger = logging.getLogger('server.py') # # Constants # HOST = '127.0.0.1' PORT = 2200 USER = 'username' HOME = '/' RESPONSES = { "ls /simple": "some output", "ls /": """AUTHORS FAQ Fabric.egg-info INSTALL LICENSE MANIFEST README build docs fabfile.py fabfile.pyc fabric requirements.txt setup.py tests""", "both_streams": [ "stdout", "stderr" ], } FILES = FakeFilesystem({ '/file.txt': 'contents', '/file2.txt': 'contents2', '/folder/file3.txt': 'contents3', '/empty_folder': None, '/tree/file1.txt': 'x', '/tree/file2.txt': 'y', '/tree/subfolder/file3.txt': 'z', '/etc/apache2/apache2.conf': 'Include other.conf', HOME: None # So $HOME is a directory }) PASSWORDS = { 'root': 'root', USER: 'password' } def _local_file(filename): return os.path.join(os.path.dirname(__file__), filename) SERVER_PRIVKEY = _local_file('private.key') CLIENT_PUBKEY = _local_file('client.key.pub') CLIENT_PRIVKEY = _local_file('client.key') CLIENT_PRIVKEY_PASSPHRASE = "passphrase" def _equalize(lists, fillval=None): """ Pad all given list items in ``lists`` to be the same length. """ lists = map(list, lists) upper = max(len(x) for x in lists) for lst in lists: diff = upper - len(lst) if diff: lst.extend([fillval] * diff) return lists class TestServer(ssh.ServerInterface): """ Test server implementing the 'ssh' lib's server interface parent class. Mostly just handles the bare minimum necessary to handle SSH-level things such as honoring authentication types and exec/shell/etc requests. The bulk of the actual server side logic is handled in the ``serve_responses`` function and its ``SSHHandler`` class. 
""" def __init__(self, passwords, home, pubkeys, files): self.event = threading.Event() self.passwords = passwords self.pubkeys = pubkeys self.files = FakeFilesystem(files) self.home = home self.command = None def check_channel_request(self, kind, chanid): if kind == 'session': return ssh.OPEN_SUCCEEDED return ssh.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED def check_channel_exec_request(self, channel, command): self.command = command self.event.set() return True def check_channel_pty_request(self, *args): return True def check_channel_shell_request(self, channel): self.event.set() return True def check_auth_password(self, username, password): self.username = username passed = self.passwords.get(username) == password return ssh.AUTH_SUCCESSFUL if passed else ssh.AUTH_FAILED def check_auth_publickey(self, username, key): self.username = username return ssh.AUTH_SUCCESSFUL if self.pubkeys else ssh.AUTH_FAILED def get_allowed_auths(self, username): return 'password,publickey' class SSHServer(ThreadingMixIn, TCPServer): """ Threading TCPServer subclass. """ def _socket_info(self, addr_tup): """ Clone of the very top of Paramiko 1.7.6 SSHClient.connect(). We must use this in order to make sure that our address family matches up with the client side (which we cannot control, and which varies depending on individual computers and their network settings). """ hostname, port = addr_tup addr_info = socket.getaddrinfo(hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM) for (family, socktype, proto, canonname, sockaddr) in addr_info: if socktype == socket.SOCK_STREAM: af = family addr = sockaddr break else: # some OS like AIX don't indicate SOCK_STREAM support, so just # guess. :( af, _, _, _, addr = socket.getaddrinfo(hostname, port, socket.AF_UNSPEC, socket.SOCK_STREAM) return af, addr def __init__( self, server_address, RequestHandlerClass, bind_and_activate=True ): # Prevent "address already in use" errors when running tests 2x in a # row. self.allow_reuse_address = True # Handle network family/host addr (see docstring for _socket_info) family, addr = self._socket_info(server_address) self.address_family = family TCPServer.__init__(self, addr, RequestHandlerClass, bind_and_activate) class FakeSFTPHandle(ssh.SFTPHandle): """ Extremely basic way to get SFTPHandle working with our fake setup. 
""" def chattr(self, attr): self.readfile.attributes = attr return ssh.SFTP_OK def stat(self): return self.readfile.attributes class PrependList(list): def prepend(self, val): self.insert(0, val) def expand(path): """ '/foo/bar/biz' => ('/', 'foo', 'bar', 'biz') 'relative/path' => ('relative', 'path') """ # Base case if path in ['', os.path.sep]: return [path] ret = PrependList() directory, filename = os.path.split(path) while directory and directory != os.path.sep: ret.prepend(filename) directory, filename = os.path.split(directory) ret.prepend(filename) # Handle absolute vs relative paths ret.prepend(directory if directory == os.path.sep else '') return ret def contains(folder, path): """ contains(('a', 'b', 'c'), ('a', 'b')) => True contains('a', 'b', 'c'), ('f',)) => False """ return False if len(path) >= len(folder) else folder[:len(path)] == path def missing_folders(paths): """ missing_folders(['a/b/c']) => ['a', 'a/b', 'a/b/c'] """ ret = [] pool = set(paths) for path in paths: expanded = expand(path) for i in range(len(expanded)): folder = os.path.join(*expanded[:len(expanded) - i]) if folder and folder not in pool: pool.add(folder) ret.append(folder) return ret def canonicalize(path, home): ret = path if not os.path.isabs(path): ret = os.path.normpath(os.path.join(home, path)) return ret class FakeSFTPServer(ssh.SFTPServerInterface): def __init__(self, server, *args, **kwargs): self.server = server files = self.server.files # Expand such that omitted, implied folders get added explicitly for folder in missing_folders(files.keys()): files[folder] = None self.files = files def canonicalize(self, path): """ Make non-absolute paths relative to $HOME. """ return canonicalize(path, self.server.home) def list_folder(self, path): path = self.files.normalize(path) expanded_files = map(expand, self.files) expanded_path = expand(path) candidates = [x for x in expanded_files if contains(x, expanded_path)] children = [] for candidate in candidates: cut = candidate[:len(expanded_path) + 1] if cut not in children: children.append(cut) results = [self.stat(os.path.join(*x)) for x in children] bad = not results or any(x == ssh.SFTP_NO_SUCH_FILE for x in results) return ssh.SFTP_NO_SUCH_FILE if bad else results def open(self, path, flags, attr): path = self.files.normalize(path) try: fobj = self.files[path] except KeyError: if flags & os.O_WRONLY: # Only allow writes to files in existing directories. if os.path.dirname(path) not in self.files: return ssh.SFTP_NO_SUCH_FILE self.files[path] = fobj = FakeFile("", path) # No write flag means a read, which means they tried to read a # nonexistent file. else: return ssh.SFTP_NO_SUCH_FILE f = FakeSFTPHandle() f.readfile = f.writefile = fobj return f def stat(self, path): path = self.files.normalize(path) try: fobj = self.files[path] except KeyError: return ssh.SFTP_NO_SUCH_FILE return fobj.attributes # Don't care about links right now lstat = stat def chattr(self, path, attr): path = self.files.normalize(path) if path not in self.files: return ssh.SFTP_NO_SUCH_FILE # Attempt to gracefully update instead of overwrite, since things like # chmod will call us with an SFTPAttributes object that only exhibits # e.g. st_mode, and we don't want to lose our filename or size... 
for which in "size uid gid mode atime mtime".split(): attname = "st_" + which incoming = getattr(attr, attname) if incoming is not None: setattr(self.files[path].attributes, attname, incoming) return ssh.SFTP_OK def mkdir(self, path, attr): self.files[path] = None return ssh.SFTP_OK def serve_responses(responses, files, passwords, home, pubkeys, port): """ Return a threading TCP based SocketServer listening on ``port``. Used as a fake SSH server which will respond to commands given in ``responses`` and allow connections for users listed in ``passwords``. ``home`` is used as the remote $HOME (mostly for SFTP purposes). ``pubkeys`` is a Boolean value determining whether the server will allow pubkey auth or not. """ # Define handler class inline so it can access serve_responses' args class SSHHandler(BaseRequestHandler): def handle(self): try: self.init_transport() self.waiting_for_command = False while not self.server.all_done.isSet(): # Don't overwrite channel if we're waiting for a command. if not self.waiting_for_command: self.channel = self.transport.accept(1) if not self.channel: continue self.ssh_server.event.wait(10) if self.ssh_server.command: self.command = self.ssh_server.command # Set self.sudo_prompt, update self.command self.split_sudo_prompt() if self.command in responses: self.stdout, self.stderr, self.status = \ self.response() if self.sudo_prompt and not self.sudo_password(): self.channel.send( "sudo: 3 incorrect password attempts\n" ) break self.respond() else: self.channel.send_stderr( "Sorry, I don't recognize that command.\n" ) self.channel.send_exit_status(1) # Close up shop self.command = self.ssh_server.command = None self.waiting_for_command = False time.sleep(0.5) self.channel.close() else: # If we're here, self.command was False or None, # but we do have a valid Channel object. Thus we're # waiting for the command to show up. self.waiting_for_command = True finally: self.transport.close() def init_transport(self): transport = ssh.Transport(self.request) transport.add_server_key(ssh.RSAKey(filename=SERVER_PRIVKEY)) transport.set_subsystem_handler('sftp', ssh.SFTPServer, sftp_si=FakeSFTPServer) server = TestServer(passwords, home, pubkeys, files) transport.start_server(server=server) self.ssh_server = server self.transport = transport def split_sudo_prompt(self): prefix = re.escape(_sudo_prefix(None, None).rstrip()) + ' +' result = re.findall(r'^(%s)?(.*)$' % prefix, self.command)[0] self.sudo_prompt, self.command = result def response(self): result = responses[self.command] stderr = "" status = 0 sleep = 0 if isinstance(result, types.StringTypes): stdout = result else: size = len(result) if size == 1: stdout = result[0] elif size == 2: stdout, stderr = result elif size == 3: stdout, stderr, status = result elif size == 4: stdout, stderr, status, sleep = result stdout, stderr = _equalize((stdout, stderr)) time.sleep(sleep) return stdout, stderr, status def sudo_password(self): # Give user 3 tries, as is typical passed = False for x in range(3): self.channel.send(env.sudo_prompt) password = self.channel.recv(65535).strip() # Spit back newline to fake the echo of user's # newline self.channel.send('\n') # Test password if password == passwords[self.ssh_server.username]: passed = True break # If here, password was bad. 
self.channel.send("Sorry, try again.\n") return passed def respond(self): for out, err in zip(self.stdout, self.stderr): if out is not None: self.channel.send(out) if err is not None: self.channel.send_stderr(err) self.channel.send_exit_status(self.status) return SSHServer((HOST, port), SSHHandler) def server( responses=RESPONSES, files=FILES, passwords=PASSWORDS, home=HOME, pubkeys=False, port=PORT ): """ Returns a decorator that runs an SSH server during function execution. Direct passthrough to ``serve_responses``. """ def run_server(func): @wraps(func) def inner(*args, **kwargs): # Start server _server = serve_responses(responses, files, passwords, home, pubkeys, port) _server.all_done = threading.Event() worker = ThreadHandler('server', _server.serve_forever) # Execute function try: return func(*args, **kwargs) finally: # Clean up client side connections with hide('status'): disconnect_all() # Stop server _server.all_done.set() _server.shutdown() # Why this is not called in shutdown() is beyond me. _server.server_close() worker.thread.join() # Handle subthread exceptions e = worker.exception if e: raise e[0], e[1], e[2] return inner return run_server Fabric-1.8.2/tests/support/000755 000765 000024 00000000000 12277461504 016544 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/test_context_managers.py000644 000765 000024 00000014002 12257074160 021773 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import os import sys from nose.tools import eq_, ok_ from fabric.state import env, output from fabric.context_managers import (cd, settings, lcd, hide, shell_env, quiet, warn_only, prefix, path) from fabric.operations import run, local from utils import mock_streams, FabricTest from server import server from StringIO import StringIO # # cd() # def test_error_handling(): """ cd cleans up after itself even in case of an exception """ class TestException(Exception): pass try: with cd('somewhere'): raise TestException('Houston, we have a problem.') except TestException: pass finally: with cd('else'): eq_(env.cwd, 'else') def test_cwd_with_absolute_paths(): """ cd() should append arg if non-absolute or overwrite otherwise """ existing = '/some/existing/path' additional = 'another' absolute = '/absolute/path' with settings(cwd=existing): with cd(absolute): eq_(env.cwd, absolute) with cd(additional): eq_(env.cwd, existing + '/' + additional) # # prefix # def test_nested_prefix(): """ prefix context managers can be created outside of the with block and nested """ cm1 = prefix('1') cm2 = prefix('2') with cm1: with cm2: eq_(env.command_prefixes, ['1', '2']) # # hide/show # def test_hide_show_exception_handling(): """ hide()/show() should clean up OK if exceptions are raised """ try: with hide('stderr'): # now it's False, while the default is True eq_(output.stderr, False) raise Exception except Exception: # Here it should be True again. # If it's False, this means hide() didn't clean up OK. 
eq_(output.stderr, True) # # settings() # def test_setting_new_env_dict_key_should_work(): """ Using settings() with a previously nonexistent key should work correctly """ key = 'thisshouldnevereverexistseriouslynow' value = 'a winner is you' with settings(**{key: value}): ok_(key in env) ok_(key not in env) def test_settings(): """ settings() should temporarily override env dict with given key/value pair """ env.testval = "outer value" with settings(testval="inner value"): eq_(env.testval, "inner value") eq_(env.testval, "outer value") def test_settings_with_multiple_kwargs(): """ settings() should temporarily override env dict with given key/value pairS """ env.testval1 = "outer 1" env.testval2 = "outer 2" with settings(testval1="inner 1", testval2="inner 2"): eq_(env.testval1, "inner 1") eq_(env.testval2, "inner 2") eq_(env.testval1, "outer 1") eq_(env.testval2, "outer 2") def test_settings_with_other_context_managers(): """ settings() should take other context managers, and use them with other overridden key/value pairs. """ env.testval1 = "outer 1" prev_lcwd = env.lcwd with settings(lcd("here"), testval1="inner 1"): eq_(env.testval1, "inner 1") ok_(env.lcwd.endswith("here")) # Should be the side-effect of adding cd to settings eq_(env.testval1, "outer 1") eq_(env.lcwd, prev_lcwd) def test_settings_clean_revert(): """ settings(clean_revert=True) should only revert values matching input values """ env.modified = "outer" env.notmodified = "outer" with settings( modified="inner", notmodified="inner", inner_only="only", clean_revert=True ): eq_(env.modified, "inner") eq_(env.notmodified, "inner") eq_(env.inner_only, "only") env.modified = "modified internally" eq_(env.modified, "modified internally") ok_("inner_only" not in env) # # shell_env() # def test_shell_env(): """ shell_env() sets the shell_env attribute in the env dict """ with shell_env(KEY="value"): eq_(env.shell_env['KEY'], 'value') eq_(env.shell_env, {}) class TestQuietAndWarnOnly(FabricTest): @server() @mock_streams('both') def test_quiet_hides_all_output(self): # Sanity test - normally this is not empty run("ls /simple") ok_(sys.stdout.getvalue()) # Reset sys.stdout = StringIO() # Real test with quiet(): run("ls /simple") # Empty output ok_(not sys.stdout.getvalue()) # Reset sys.stdout = StringIO() # Kwarg test run("ls /simple", quiet=True) ok_(not sys.stdout.getvalue()) @server(responses={'barf': [ "this is my stdout", "this is my stderr", 1 ]}) def test_quiet_sets_warn_only_to_true(self): # Sanity test to ensure environment with settings(warn_only=False): with quiet(): eq_(run("barf").return_code, 1) # Kwarg test eq_(run("barf", quiet=True).return_code, 1) @server(responses={'hrm': ["", "", 1]}) @mock_streams('both') def test_warn_only_is_same_as_settings_warn_only(self): with warn_only(): eq_(run("hrm").failed, True) @server() @mock_streams('both') def test_warn_only_does_not_imply_hide_everything(self): with warn_only(): run("ls /simple") assert sys.stdout.getvalue().strip() != "" # path() (distinct from shell_env) class TestPathManager(FabricTest): def setup(self): super(TestPathManager, self).setup() self.real = os.environ.get('PATH') def via_local(self): with hide('everything'): return local("echo $PATH", capture=True) def test_lack_of_path_has_default_local_path(self): """ No use of 'with path' == default local $PATH """ eq_(self.real, self.via_local()) def test_use_of_path_appends_by_default(self): """ 'with path' appends by default """ with path('foo'): eq_(self.via_local(), self.real + ":foo")
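The context-manager tests above lean on two helpers defined earlier in this tree: the ``@server()`` decorator from tests/server.py, which runs the fake SSH server for the duration of a single test, and ``@mock_streams`` from tests/mock_streams.py, which swaps ``sys.stdout``/``sys.stderr`` for StringIO objects. A minimal sketch of how they compose outside the ``FabricTest`` base class (the command string, its canned response, and the explicit host settings are illustrative only; ``FabricTest`` normally supplies the connection defaults)::

    import sys

    from fabric.api import run, settings

    from mock_streams import mock_streams
    from server import server

    @server(responses={'echo hi': 'hi'})  # hypothetical extra response mapping
    @mock_streams('stdout')
    def test_echo():
        # Connect to the fake server started by @server() (see tests/server.py:
        # HOST 127.0.0.1, PORT 2200, user 'username' with password 'password').
        with settings(host_string='username@127.0.0.1:2200', password='password'):
            result = run('echo hi')
        assert result == 'hi'
        assert 'hi' in sys.stdout.getvalue()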
Fabric-1.8.2/tests/test_contrib.py000644 000765 000024 00000005463 12277451120 020102 0ustar00jforcierstaff000000 000000 # -*- coding: utf-8 -*- from __future__ import with_statement import os from fabric.api import hide, get, show from fabric.contrib.files import upload_template, contains from utils import FabricTest, eq_contents from server import server class TestContrib(FabricTest): # Make sure it knows / is a directory. # This is in lieu of starting down the "actual honest to god fake operating # system" road...:( @server(responses={'test -d "$(echo /)"': ""}) def test_upload_template_uses_correct_remote_filename(self): """ upload_template() shouldn't munge final remote filename """ template = self.mkfile('template.txt', 'text') with hide('everything'): upload_template(template, '/') assert self.exists_remotely('/template.txt') @server() def test_upload_template_handles_file_destination(self): """ upload_template() should work OK with file and directory destinations """ template = self.mkfile('template.txt', '%(varname)s') local = self.path('result.txt') remote = '/configfile.txt' var = 'foobar' with hide('everything'): upload_template(template, remote, {'varname': var}) get(remote, local) eq_contents(local, var) @server(responses={ 'egrep "text" "/file.txt"': ( "sudo: unable to resolve host fabric", "", 1 )} ) def test_contains_checks_only_succeeded_flag(self): """ contains() should return False on bad grep even if stdout isn't empty """ with hide('everything'): result = contains('/file.txt', 'text', use_sudo=True) assert result == False @server() def test_upload_template_handles_jinja_template(self): """ upload_template() should work OK with Jinja2 template """ template = self.mkfile('template_jinja2.txt', '{{ first_name }}') template_name = os.path.basename(template) template_dir = os.path.dirname(template) local = self.path('result.txt') remote = '/configfile.txt' first_name = u'S\u00E9bastien' with hide('everything'): upload_template(template_name, remote, {'first_name': first_name}, use_jinja=True, template_dir=template_dir) get(remote, local) eq_contents(local, first_name.encode('utf-8')) @server() def test_upload_template_jinja_and_no_template_dir(self): # Crummy doesn't-die test fname = "foo.tpl" try: with hide('everything'): with open(fname, 'w+') as fd: fd.write('whatever') upload_template(fname, '/configfile.txt', {}, use_jinja=True) finally: os.remove(fname) Fabric-1.8.2/tests/test_decorators.py000644 000765 000024 00000015220 12257074160 020602 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import random import sys from nose.tools import eq_, ok_, assert_true, assert_false, assert_equal import fudge from fudge import Fake, with_fakes, patched_context from fabric import decorators, tasks from fabric.state import env import fabric # for patching fabric.state.xxx from fabric.tasks import _parallel_tasks, requires_parallel, execute from fabric.context_managers import lcd, settings, hide from utils import mock_streams # # Support # def fake_function(*args, **kwargs): """ Returns a ``fudge.Fake`` exhibiting function-like attributes. Passes in all args/kwargs to the ``fudge.Fake`` constructor. However, if ``callable`` or ``expect_call`` kwargs are not given, ``callable`` will be set to True by default. """ # Must define __name__ to be compatible with function wrapping mechanisms # like @wraps(). 
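    # Default to callable=True so the Fake behaves like a plain function
    # unless the caller explicitly configured call expectations.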
if 'callable' not in kwargs and 'expect_call' not in kwargs: kwargs['callable'] = True return Fake(*args, **kwargs).has_attr(__name__='fake') # # @task # def test_task_returns_an_instance_of_wrappedcallabletask_object(): def foo(): pass task = decorators.task(foo) ok_(isinstance(task, tasks.WrappedCallableTask)) def test_task_will_invoke_provided_class(): def foo(): pass fake = Fake() fake.expects("__init__").with_args(foo) fudge.clear_calls() fudge.clear_expectations() foo = decorators.task(foo, task_class=fake) fudge.verify() def test_task_passes_args_to_the_task_class(): random_vars = ("some text", random.randint(100, 200)) def foo(): pass fake = Fake() fake.expects("__init__").with_args(foo, *random_vars) fudge.clear_calls() fudge.clear_expectations() foo = decorators.task(foo, task_class=fake, *random_vars) fudge.verify() def test_passes_kwargs_to_the_task_class(): random_vars = { "msg": "some text", "number": random.randint(100, 200), } def foo(): pass fake = Fake() fake.expects("__init__").with_args(foo, **random_vars) fudge.clear_calls() fudge.clear_expectations() foo = decorators.task(foo, task_class=fake, **random_vars) fudge.verify() def test_integration_tests_for_invoked_decorator_with_no_args(): r = random.randint(100, 200) @decorators.task() def foo(): return r eq_(r, foo()) def test_integration_tests_for_decorator(): r = random.randint(100, 200) @decorators.task(task_class=tasks.WrappedCallableTask) def foo(): return r eq_(r, foo()) def test_original_non_invoked_style_task(): r = random.randint(100, 200) @decorators.task def foo(): return r eq_(r, foo()) # # @runs_once # @with_fakes def test_runs_once_runs_only_once(): """ @runs_once prevents decorated func from running >1 time """ func = fake_function(expect_call=True).times_called(1) task = decorators.runs_once(func) for i in range(2): task() def test_runs_once_returns_same_value_each_run(): """ @runs_once memoizes return value of decorated func """ return_value = "foo" task = decorators.runs_once(fake_function().returns(return_value)) for i in range(2): eq_(task(), return_value) @decorators.runs_once def single_run(): pass def test_runs_once(): assert_false(hasattr(single_run, 'return_value')) single_run() assert_true(hasattr(single_run, 'return_value')) assert_equal(None, single_run()) # # @serial / @parallel # @decorators.serial def serial(): pass @decorators.serial @decorators.parallel def serial2(): pass @decorators.parallel @decorators.serial def serial3(): pass @decorators.parallel def parallel(): pass @decorators.parallel(pool_size=20) def parallel2(): pass fake_tasks = { 'serial': serial, 'serial2': serial2, 'serial3': serial3, 'parallel': parallel, 'parallel2': parallel2, } def parallel_task_helper(actual_tasks, expected): commands_to_run = map(lambda x: [x], actual_tasks) with patched_context(fabric.state, 'commands', fake_tasks): eq_(_parallel_tasks(commands_to_run), expected) def test_parallel_tasks(): for desc, task_names, expected in ( ("One @serial-decorated task == no parallelism", ['serial'], False), ("One @parallel-decorated task == parallelism", ['parallel'], True), ("One @parallel-decorated and one @serial-decorated task == parallelism", ['parallel', 'serial'], True), ("Tasks decorated with both @serial and @parallel count as @parallel", ['serial2', 'serial3'], True) ): parallel_task_helper.description = desc yield parallel_task_helper, task_names, expected del parallel_task_helper.description def test_parallel_wins_vs_serial(): """ @parallel takes precedence over @serial when both are used on one task """
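    # (The assertions below lean on serial2/serial3 as defined above, i.e.
    # roughly:
    #
    #     @decorators.serial
    #     @decorators.parallel
    #     def serial2(): pass  # treated as parallel
    #
    #     @decorators.parallel
    #     @decorators.serial
    #     def serial3(): pass  # treated as parallel
    #
    # requires_parallel() should report True for both, regardless of which
    # decorator sits on top.)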
ok_(requires_parallel(serial2)) ok_(requires_parallel(serial3)) @mock_streams('stdout') def test_global_parallel_honors_runs_once(): """ fab -P (or env.parallel) should honor @runs_once """ @decorators.runs_once def mytask(): print("yolo") # Should appear exactly once, despite multiple hosts with settings(hide('everything'), parallel=True): execute(mytask, hosts=['localhost', '127.0.0.1']) result = sys.stdout.getvalue() eq_(result, "yolo\n") assert result != "yolo\nyolo\n" # # @roles # @decorators.roles('test') def use_roles(): pass def test_roles(): assert_true(hasattr(use_roles, 'roles')) assert_equal(use_roles.roles, ['test']) # # @hosts # @decorators.hosts('test') def use_hosts(): pass def test_hosts(): assert_true(hasattr(use_hosts, 'hosts')) assert_equal(use_hosts.hosts, ['test']) # # @with_settings # def test_with_settings_passes_env_vars_into_decorated_function(): env.value = True random_return = random.randint(1000, 2000) def some_task(): return env.value decorated_task = decorators.with_settings(value=random_return)(some_task) ok_(some_task(), msg="sanity check") eq_(random_return, decorated_task()) def test_with_settings_with_other_context_managers(): """ with_settings() should accept other context managers, applying them alongside the overridden key/value pairs. """ env.testval1 = "outer 1" prev_lcwd = env.lcwd def some_task(): eq_(env.testval1, "inner 1") ok_(env.lcwd.endswith("here")) # Should be the side effect of adding lcd to settings decorated_task = decorators.with_settings( lcd("here"), testval1="inner 1" )(some_task) decorated_task() eq_(env.testval1, "outer 1") eq_(env.lcwd, prev_lcwd) Fabric-1.8.2/tests/test_main.py000644 000765 000024 00000044405 12277451120 017365 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import copy from functools import partial from operator import isMappingType import os import sys from contextlib import contextmanager from fudge import Fake, patched_context, with_fakes from nose.tools import ok_, eq_ from fabric.decorators import hosts, roles, task from fabric.context_managers import settings from fabric.main import (parse_arguments, _escape_split, load_fabfile as _load_fabfile, list_commands, _task_names, COMMANDS_HEADER, NESTED_REMINDER) import fabric.state from fabric.state import _AttributeDict from fabric.tasks import Task, WrappedCallableTask from fabric.task_utils import _crawl, crawl, merge from utils import mock_streams, eq_, FabricTest, fabfile, path_prefix, aborts # Stupid load_fabfile wrapper to hide newly added return value. # WTB more free time to rewrite all this with objects :) def load_fabfile(*args, **kwargs): return _load_fabfile(*args, **kwargs)[:2] # # Basic CLI stuff # def test_argument_parsing(): for args, output in [ # Basic ('abc', ('abc', [], {}, [], [], [])), # Arg ('ab:c', ('ab', ['c'], {}, [], [], [])), # Kwarg ('a:b=c', ('a', [], {'b':'c'}, [], [], [])), # Arg and kwarg ('a:b=c,d', ('a', ['d'], {'b':'c'}, [], [], [])), # Multiple kwargs ('a:b=c,d=e', ('a', [], {'b':'c','d':'e'}, [], [], [])), # Host ('abc:host=foo', ('abc', [], {}, ['foo'], [], [])), # Hosts with single host ('abc:hosts=foo', ('abc', [], {}, ['foo'], [], [])), # Hosts with multiple hosts # Note: in a real shell, one would need to quote or escape "foo;bar". # But in pure-Python that would get interpreted literally, so we don't.
('abc:hosts=foo;bar', ('abc', [], {}, ['foo', 'bar'], [], [])), # Exclude hosts ('abc:hosts=foo;bar,exclude_hosts=foo', ('abc', [], {}, ['foo', 'bar'], [], ['foo'])), ('abc:hosts=foo;bar,exclude_hosts=foo;bar', ('abc', [], {}, ['foo', 'bar'], [], ['foo','bar'])), # Empty string args ("task:x=y,z=", ('task', [], {'x': 'y', 'z': ''}, [], [], [])), ("task:foo,,x=y", ('task', ['foo', ''], {'x': 'y'}, [], [], [])), ]: yield eq_, parse_arguments([args]), [output] def test_escaped_task_arg_split(): """ Allow backslashes to escape the task argument separator character """ argstr = r"foo,bar\,biz\,baz,what comes after baz?" eq_( _escape_split(',', argstr), ['foo', 'bar,biz,baz', 'what comes after baz?'] ) def test_escaped_task_kwarg_split(): """ Allow backslashes to escape the = in x=y task kwargs """ argstr = r"cmd:arg,escaped\,arg,nota\=kwarg,regular=kwarg,escaped=regular\=kwarg" args = ['arg', 'escaped,arg', 'nota=kwarg'] kwargs = {'regular': 'kwarg', 'escaped': 'regular=kwarg'} eq_( parse_arguments([argstr])[0], ('cmd', args, kwargs, [], [], []), ) # # Host/role decorators # # Allow calling Task.get_hosts as function instead (meh.) def get_hosts(command, *args): return WrappedCallableTask(command).get_hosts(*args) def eq_hosts(command, host_list, env=None, func=set): eq_(func(get_hosts(command, [], [], [], env)), func(host_list)) true_eq_hosts = partial(eq_hosts, func=lambda x: x) def test_hosts_decorator_by_itself(): """ Use of @hosts only """ host_list = ['a', 'b'] @hosts(*host_list) def command(): pass eq_hosts(command, host_list) fake_roles = { 'r1': ['a', 'b'], 'r2': ['b', 'c'] } def test_roles_decorator_by_itself(): """ Use of @roles only """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': fake_roles}) def test_hosts_and_roles_together(): """ Use of @roles and @hosts together results in union of both """ @roles('r1', 'r2') @hosts('d') def command(): pass eq_hosts(command, ['a', 'b', 'c', 'd'], env={'roledefs': fake_roles}) def test_host_role_merge_deduping(): """ Use of @roles and @hosts dedupes when merging """ @roles('r1', 'r2') @hosts('a') def command(): pass # Not ['a', 'a', 'b', 'c'] or etc true_eq_hosts(command, ['a', 'b', 'c'], env={'roledefs': fake_roles}) def test_host_role_merge_deduping_off(): """ Allow turning deduping off """ @roles('r1', 'r2') @hosts('a') def command(): pass with settings(dedupe_hosts=False): true_eq_hosts( command, # 'a' 1x host 1x role # 'b' 1x r1 1x r2 ['a', 'a', 'b', 'b', 'c'], env={'roledefs': fake_roles} ) tuple_roles = { 'r1': ('a', 'b'), 'r2': ('b', 'c'), } def test_roles_as_tuples(): """ Test that a list of roles as a tuple succeeds """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': tuple_roles}) def test_hosts_as_tuples(): """ Test that a list of hosts as a tuple succeeds """ def command(): pass eq_hosts(command, ['foo', 'bar'], env={'hosts': ('foo', 'bar')}) def test_hosts_decorator_overrides_env_hosts(): """ If @hosts is used it replaces any env.hosts value """ @hosts('bar') def command(): pass eq_hosts(command, ['bar']) assert 'foo' not in get_hosts(command, [], [], [], {'hosts': ['foo']}) def test_hosts_decorator_overrides_env_hosts_with_task_decorator_first(): """ If @hosts is used it replaces any env.hosts value even with @task """ @task @hosts('bar') def command(): pass eq_hosts(command, ['bar']) assert 'foo' not in get_hosts(command, [], [], {'hosts': ['foo']}) def test_hosts_decorator_overrides_env_hosts_with_task_decorator_last(): @hosts('bar') @task def command(): pass 
eq_hosts(command, ['bar']) assert 'foo' not in get_hosts(command, [], [], {'hosts': ['foo']}) def test_hosts_stripped_env_hosts(): """ Make sure hosts defined in env.hosts are cleaned of extra spaces """ def command(): pass myenv = {'hosts': [' foo ', 'bar '], 'roles': [], 'exclude_hosts': []} eq_hosts(command, ['foo', 'bar'], myenv) spaced_roles = { 'r1': [' a ', ' b '], 'r2': ['b', 'c'], } def test_roles_stripped_env_hosts(): """ Make sure hosts defined in env.roles are cleaned of extra spaces """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], {'roledefs': spaced_roles}) def test_hosts_decorator_expands_single_iterable(): """ @hosts(iterable) should behave like @hosts(*iterable) """ host_list = ['foo', 'bar'] @hosts(host_list) def command(): pass eq_(command.hosts, host_list) def test_roles_decorator_expands_single_iterable(): """ @roles(iterable) should behave like @roles(*iterable) """ role_list = ['foo', 'bar'] @roles(role_list) def command(): pass eq_(command.roles, role_list) # # Host exclusion # def dummy(): pass def test_get_hosts_excludes_cli_exclude_hosts_from_cli_hosts(): assert 'foo' not in get_hosts(dummy, ['foo', 'bar'], [], ['foo']) def test_get_hosts_excludes_cli_exclude_hosts_from_decorator_hosts(): assert 'foo' not in get_hosts(hosts('foo', 'bar')(dummy), [], [], ['foo']) def test_get_hosts_excludes_global_exclude_hosts_from_global_hosts(): fake_env = {'hosts': ['foo', 'bar'], 'exclude_hosts': ['foo']} assert 'foo' not in get_hosts(dummy, [], [], [], fake_env) # # Basic role behavior # @aborts def test_aborts_on_nonexistent_roles(): """ Aborts if any given roles aren't found """ merge([], ['badrole'], [], {}) def test_accepts_non_list_hosts(): """ A bare string given as hosts should be accepted and wrapped in a list """ assert merge('badhosts', [], [], {}) == ['badhosts'] lazy_role = {'r1': lambda: ['a', 'b']} def test_lazy_roles(): """ Roles may be callables returning lists, as well as regular lists """ @roles('r1') def command(): pass eq_hosts(command, ['a', 'b'], env={'roledefs': lazy_role}) # # Fabfile loading # def run_load_fabfile(path, sys_path): # Module-esque object fake_module = Fake().has_attr(__dict__={}) # Fake __import__ importer = Fake(callable=True).returns(fake_module) # Snapshot sys.path for restore orig_path = copy.copy(sys.path) # Update with fake path sys.path = sys_path # Test for side effects load_fabfile(path, importer=importer) eq_(sys.path, sys_path) # Restore sys.path = orig_path def test_load_fabfile_should_not_remove_real_path_elements(): for fabfile_path, sys_dot_path in ( # Directory not in path ('subdir/fabfile.py', ['not_subdir']), ('fabfile.py', ['nope']), # Directory in path, but not at front ('subdir/fabfile.py', ['not_subdir', 'subdir']), ('fabfile.py', ['not_subdir', '']), ('fabfile.py', ['not_subdir', '', 'also_not_subdir']), # Directory in path, and at front already ('subdir/fabfile.py', ['subdir']), ('subdir/fabfile.py', ['subdir', 'not_subdir']), ('fabfile.py', ['', 'some_dir', 'some_other_dir']), ): yield run_load_fabfile, fabfile_path, sys_dot_path # # Namespacing and new-style tasks # class TestTaskAliases(FabricTest): def test_flat_alias(self): f = fabfile("flat_alias.py") with path_prefix(f): docs, funcs = load_fabfile(f) eq_(len(funcs), 2) ok_("foo" in funcs) ok_("foo_aliased" in funcs) def test_nested_alias(self): f = fabfile("nested_alias.py") with path_prefix(f): docs, funcs = load_fabfile(f) ok_("nested" in funcs) eq_(len(funcs["nested"]), 2) ok_("foo" in funcs["nested"]) ok_("foo_aliased" in funcs["nested"]) def
test_flat_aliases(self): f = fabfile("flat_aliases.py") with path_prefix(f): docs, funcs = load_fabfile(f) eq_(len(funcs), 3) ok_("foo" in funcs) ok_("foo_aliased" in funcs) ok_("foo_aliased_two" in funcs) def test_nested_aliases(self): f = fabfile("nested_aliases.py") with path_prefix(f): docs, funcs = load_fabfile(f) ok_("nested" in funcs) eq_(len(funcs["nested"]), 3) ok_("foo" in funcs["nested"]) ok_("foo_aliased" in funcs["nested"]) ok_("foo_aliased_two" in funcs["nested"]) class TestNamespaces(FabricTest): def setup(self): # Parent class preserves current env super(TestNamespaces, self).setup() # Reset new-style-tests flag so running tests via Fab itself doesn't # muck with it. import fabric.state if 'new_style_tasks' in fabric.state.env: del fabric.state.env['new_style_tasks'] def test_implicit_discovery(self): """ Default to automatically collecting all tasks in a fabfile module """ implicit = fabfile("implicit_fabfile.py") with path_prefix(implicit): docs, funcs = load_fabfile(implicit) eq_(len(funcs), 2) ok_("foo" in funcs) ok_("bar" in funcs) def test_explicit_discovery(self): """ If __all__ is present, only collect the tasks it specifies """ explicit = fabfile("explicit_fabfile.py") with path_prefix(explicit): docs, funcs = load_fabfile(explicit) eq_(len(funcs), 1) ok_("foo" in funcs) ok_("bar" not in funcs) def test_should_load_decorated_tasks_only_if_one_is_found(self): """ If any new-style tasks are found, *only* new-style tasks should load """ module = fabfile('decorated_fabfile.py') with path_prefix(module): docs, funcs = load_fabfile(module) eq_(len(funcs), 1) ok_('foo' in funcs) def test_class_based_tasks_are_found_with_proper_name(self): """ Wrapped new-style tasks should preserve their function names """ module = fabfile('decorated_fabfile_with_classbased_task.py') from fabric.state import env with path_prefix(module): docs, funcs = load_fabfile(module) eq_(len(funcs), 1) ok_('foo' in funcs) def test_class_based_tasks_are_found_with_variable_name(self): """ A new-style tasks with undefined name attribute should use the instance variable name. """ module = fabfile('classbased_task_fabfile.py') from fabric.state import env with path_prefix(module): docs, funcs = load_fabfile(module) eq_(len(funcs), 1) ok_('foo' in funcs) eq_(funcs['foo'].name, 'foo') def test_recursion_steps_into_nontask_modules(self): """ Recursive loading will continue through modules with no tasks """ module = fabfile('deep') with path_prefix(module): docs, funcs = load_fabfile(module) eq_(len(funcs), 1) ok_('submodule.subsubmodule.deeptask' in _task_names(funcs)) def test_newstyle_task_presence_skips_classic_task_modules(self): """ Classic-task-only modules shouldn't add tasks if any new-style tasks exist """ module = fabfile('deep') with path_prefix(module): docs, funcs = load_fabfile(module) eq_(len(funcs), 1) ok_('submodule.classic_task' not in _task_names(funcs)) def test_task_decorator_plays_well_with_others(self): """ @task, when inside @hosts/@roles, should not hide the decorated task. """ module = fabfile('decorator_order') with path_prefix(module): docs, funcs = load_fabfile(module) # When broken, crawl() finds None for 'foo' instead. 
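    # (crawl() resolves dotted task names against the nested mapping that
    # load_fabfile() returns; the semantics relied on here are roughly:
    #
    #     mapping = {'ns': {'mytask': some_task}}  # hypothetical names
    #     crawl('ns.mytask', mapping)   # -> some_task
    #     crawl('ns.missing', mapping)  # -> None
    # )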
eq_(crawl('foo', funcs), funcs['foo']) # # --list output # def eq_output(docstring, format_, expected): return eq_( "\n".join(list_commands(docstring, format_)), expected ) def list_output(module, format_, expected): module = fabfile(module) with path_prefix(module): docstring, tasks = load_fabfile(module) with patched_context(fabric.state, 'commands', tasks): eq_output(docstring, format_, expected) def test_list_output(): lead = ":\n\n " normal_head = COMMANDS_HEADER + lead nested_head = COMMANDS_HEADER + NESTED_REMINDER + lead for desc, module, format_, expected in ( ("shorthand (& with namespacing)", 'deep', 'short', "submodule.subsubmodule.deeptask"), ("normal (& with namespacing)", 'deep', 'normal', normal_head + "submodule.subsubmodule.deeptask"), ("normal (with docstring)", 'docstring', 'normal', normal_head + "foo Foos!"), ("nested (leaf only)", 'deep', 'nested', nested_head + """submodule: subsubmodule: deeptask"""), ("nested (full)", 'tree', 'nested', nested_head + """build_docs deploy db: migrate system: install_package debian: update_apt"""), ): list_output.description = "--list output: %s" % desc yield list_output, module, format_, expected del list_output.description def name_to_task(name): t = Task() t.name = name return t def strings_to_tasks(d): ret = {} for key, value in d.iteritems(): if isMappingType(value): val = strings_to_tasks(value) else: val = name_to_task(value) ret[key] = val return ret def test_task_names(): for desc, input_, output in ( ('top level (single)', {'a': 5}, ['a']), ('top level (multiple, sorting)', {'a': 5, 'b': 6}, ['a', 'b']), ('just nested', {'a': {'b': 5}}, ['a.b']), ('mixed', {'a': 5, 'b': {'c': 6}}, ['a', 'b.c']), ('top level comes before nested', {'z': 5, 'b': {'c': 6}}, ['z', 'b.c']), ('peers sorted equally', {'z': 5, 'b': {'c': 6}, 'd': {'e': 7}}, ['z', 'b.c', 'd.e']), ( 'complex tree', { 'z': 5, 'b': { 'c': 6, 'd': { 'e': { 'f': '7' } }, 'g': 8 }, 'h': 9, 'w': { 'y': 10 } }, ['h', 'z', 'b.c', 'b.g', 'b.d.e.f', 'w.y'] ), ): eq_.description = "task name flattening: %s" % desc yield eq_, _task_names(strings_to_tasks(input_)), output del eq_.description def test_crawl(): for desc, name, mapping, output in ( ("base case", 'a', {'a': 5}, 5), ("one level", 'a.b', {'a': {'b': 5}}, 5), ("deep", 'a.b.c.d.e', {'a': {'b': {'c': {'d': {'e': 5}}}}}, 5), ("full tree", 'a.b.c', {'a': {'b': {'c': 5}, 'd': 6}, 'z': 7}, 5) ): eq_.description = "crawling dotted names: %s" % desc yield eq_, _crawl(name, mapping), output del eq_.description def test_mapping_task_classes(): """ Task classes implementing the mapping interface shouldn't break --list """ list_output('mapping', 'normal', COMMANDS_HEADER + """:\n mapping_task""") def test_default_task_listings(): """ @task(default=True) should cause task to also load under module's name """ for format_, expected in ( ('short', """mymodule mymodule.long_task_name"""), ('normal', COMMANDS_HEADER + """:\n mymodule mymodule.long_task_name"""), ('nested', COMMANDS_HEADER + NESTED_REMINDER + """:\n mymodule: long_task_name""") ): list_output.description = "Default task --list output: %s" % format_ yield list_output, 'default_tasks', format_, expected del list_output.description def test_default_task_loading(): """ crawl() should return default tasks where found, instead of module objs """ docs, tasks = load_fabfile(fabfile('default_tasks')) ok_(isinstance(crawl('mymodule', tasks), Task)) def test_aliases_appear_in_fab_list(): """ --list should include aliases """ list_output('nested_alias', 'short', """nested.foo 
nested.foo_aliased""") Fabric-1.8.2/tests/test_network.py000644 000765 000024 00000057131 12277451130 020133 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from datetime import datetime import copy import getpass import sys from nose.tools import with_setup, ok_, raises from fudge import (Fake, clear_calls, clear_expectations, patch_object, verify, with_patched_object, patched_context, with_fakes) from fabric.context_managers import settings, hide, show from fabric.network import (HostConnectionCache, join_host_strings, normalize, denormalize, key_filenames, ssh) from fabric.io import output_loop import fabric.network # So I can call patch_object correctly. Sigh. from fabric.state import env, output, _get_system_username from fabric.operations import run, sudo, prompt from fabric.exceptions import NetworkError from fabric.tasks import execute from fabric import utils # for patching from utils import * from server import (server, PORT, RESPONSES, PASSWORDS, CLIENT_PRIVKEY, USER, CLIENT_PRIVKEY_PASSPHRASE) # # Subroutines, e.g. host string normalization # class TestNetwork(FabricTest): def test_host_string_normalization(self): username = _get_system_username() for description, input, output_ in ( ("Sanity check: equal strings remain equal", 'localhost', 'localhost'), ("Empty username is same as get_system_username", 'localhost', username + '@localhost'), ("Empty port is same as port 22", 'localhost', 'localhost:22'), ("Both username and port tested at once, for kicks", 'localhost', username + '@localhost:22'), ): eq_.description = "Host-string normalization: %s" % description yield eq_, normalize(input), normalize(output_) del eq_.description def test_normalization_for_ipv6(self): """ normalize() will accept IPv6 notation and can separate host and port """ username = _get_system_username() for description, input, output_ in ( ("Full IPv6 address", '2001:DB8:0:0:0:0:0:1', (username, '2001:DB8:0:0:0:0:0:1', '22')), ("IPv6 address in short form", '2001:DB8::1', (username, '2001:DB8::1', '22')), ("IPv6 localhost", '::1', (username, '::1', '22')), ("Square brackets are required to separate non-standard port from IPv6 address", '[2001:DB8::1]:1222', (username, '2001:DB8::1', '1222')), ("Username and IPv6 address", 'user@2001:DB8::1', ('user', '2001:DB8::1', '22')), ("Username and IPv6 address with non-standard port", 'user@[2001:DB8::1]:1222', ('user', '2001:DB8::1', '1222')), ): eq_.description = "Host-string IPv6 normalization: %s" % description yield eq_, normalize(input), output_ del eq_.description def test_normalization_without_port(self): """ normalize() and join_host_strings() omit port if omit_port given """ eq_( join_host_strings(*normalize('user@localhost', omit_port=True)), 'user@localhost' ) def test_ipv6_host_strings_join(self): """ join_host_strings() should use square brackets only for IPv6 and if port is given """ eq_( join_host_strings('user', '2001:DB8::1'), 'user@2001:DB8::1' ) eq_( join_host_strings('user', '2001:DB8::1', '1222'), 'user@[2001:DB8::1]:1222' ) eq_( join_host_strings('user', '192.168.0.0', '1222'), 'user@192.168.0.0:1222' ) def test_nonword_character_in_username(self): """ normalize() will accept non-word characters in the username part """ eq_( normalize('user-with-hyphens@someserver.org')[0], 'user-with-hyphens' ) def test_at_symbol_in_username(self): """ normalize() should allow '@' in usernames (i.e. 
last '@' is split char) """ parts = normalize('user@example.com@www.example.com') eq_(parts[0], 'user@example.com') eq_(parts[1], 'www.example.com') def test_normalization_of_empty_input(self): empties = ('', '', '') for description, input in ( ("empty string", ''), ("None", None) ): template = "normalize() returns empty strings for %s input" eq_.description = template % description yield eq_, normalize(input), empties del eq_.description def test_host_string_denormalization(self): username = _get_system_username() for description, string1, string2 in ( ("Sanity check: equal strings remain equal", 'localhost', 'localhost'), ("Empty username is same as get_system_username", 'localhost:22', username + '@localhost:22'), ("Empty port is same as port 22", 'user@localhost', 'user@localhost:22'), ("Both username and port", 'localhost', username + '@localhost:22'), ("IPv6 address", '2001:DB8::1', username + '@[2001:DB8::1]:22'), ): eq_.description = "Host-string denormalization: %s" % description yield eq_, denormalize(string1), denormalize(string2) del eq_.description # # Connection caching # @staticmethod @with_fakes def check_connection_calls(host_strings, num_calls): # Clear Fudge call stack # Patch connect() with Fake obj set to expect num_calls calls patched_connect = patch_object('fabric.network', 'connect', Fake('connect', expect_call=True).times_called(num_calls) ) try: # Make new cache object cache = HostConnectionCache() # Connect to all connection strings for host_string in host_strings: # Obtain connection from cache, potentially calling connect() cache[host_string] finally: # Restore connect() patched_connect.restore() def test_connection_caching(self): for description, host_strings, num_calls in ( ("Two different host names, two connections", ('localhost', 'other-system'), 2), ("Same host twice, one connection", ('localhost', 'localhost'), 1), ("Same host twice, different ports, two connections", ('localhost:22', 'localhost:222'), 2), ("Same host twice, different users, two connections", ('user1@localhost', 'user2@localhost'), 2), ): TestNetwork.check_connection_calls.description = description yield TestNetwork.check_connection_calls, host_strings, num_calls def test_connection_cache_deletion(self): """ HostConnectionCache should delete correctly w/ non-full keys """ hcc = HostConnectionCache() fake = Fake('connect', callable=True) with patched_context('fabric.network', 'connect', fake): for host_string in ('hostname', 'user@hostname', 'user@hostname:222'): # Prime hcc[host_string] # Test ok_(host_string in hcc) # Delete del hcc[host_string] # Test ok_(host_string not in hcc) # # Connection loop flow # @server() def test_saved_authentication_returns_client_object(self): cache = HostConnectionCache() assert isinstance(cache[env.host_string], ssh.SSHClient) @server() @with_fakes def test_prompts_for_password_without_good_authentication(self): env.password = None with password_response(PASSWORDS[env.user], times_called=1): cache = HostConnectionCache() cache[env.host_string] @aborts def test_aborts_on_prompt_with_abort_on_prompt(self): """ abort_on_prompt=True should abort when prompt() is used """ env.abort_on_prompts = True prompt("This will abort") @server() @aborts def test_aborts_on_password_prompt_with_abort_on_prompt(self): """ abort_on_prompt=True should abort when password prompts occur """ env.password = None env.abort_on_prompts = True with password_response(PASSWORDS[env.user], times_called=1): cache = HostConnectionCache() cache[env.host_string] @mock_streams('stdout') 
@server() def test_does_not_abort_with_password_and_host_with_abort_on_prompt(self): """ abort_on_prompt=True should not abort if no prompts are needed """ env.abort_on_prompts = True env.password = PASSWORDS[env.user] # env.host_string is automatically filled in when using server() run("ls /simple") @mock_streams('stdout') @server() def test_trailing_newline_line_drop(self): """ Trailing newlines shouldn't cause last line to be dropped. """ # Multiline output with trailing newline cmd = "ls /" output_string = RESPONSES[cmd] # TODO: fix below lines, duplicates inner workings of tested code prefix = "[%s] out: " % env.host_string expected = prefix + ('\n' + prefix).join(output_string.split('\n')) # Create, tie off thread with settings(show('everything'), hide('running')): result = run(cmd) # Test equivalence of expected, received output eq_(expected, sys.stdout.getvalue()) # Also test that the captured value matches, too. eq_(output_string, result) @server() def test_sudo_prompt_kills_capturing(self): """ Sudo prompts shouldn't screw up output capturing """ cmd = "ls /simple" with hide('everything'): eq_(sudo(cmd), RESPONSES[cmd]) @server() def test_password_memory_on_user_switch(self): """ Switching users mid-session should not screw up password memory """ def _to_user(user): return join_host_strings(user, env.host, env.port) user1 = 'root' user2 = USER with settings(hide('everything'), password=None): # Connect as user1 (thus populating both the fallback and # user-specific caches) with settings( password_response(PASSWORDS[user1]), host_string=_to_user(user1) ): run("ls /simple") # Connect as user2: * First cxn attempt will use fallback cache, # which contains user1's password, and thus fail * Second cxn # attempt will prompt user, and succeed due to mocked p4p * but # will NOT overwrite fallback cache with settings( password_response(PASSWORDS[user2]), host_string=_to_user(user2) ): # Just to trigger connection run("ls /simple") # * Sudo call should use cached user2 password, NOT fallback cache, # and thus succeed. (I.e. p_f_p should NOT be called here.) 
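    # (Conceptually there are two layers at work: a per-host-string cache
    # (fabric.state.env.passwords, keyed by full user@host:port strings) and
    # the single fallback password (env.password). Roughly:
    #
    #     env.passwords['root@HOST:PORT']    # set after user1's prompt
    #     env.passwords[USER + '@HOST:PORT'] # set after user2's prompt
    #     env.password                       # still user1's -- the fallback
    #
    # HOST/PORT stand in for the test server's address; times_called=0 below
    # asserts that no further prompting happens.)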
with settings( password_response('whatever', times_called=0), host_string=_to_user(user2) ): sudo("ls /simple") @mock_streams('stderr') @server() def test_password_prompt_displays_host_string(self): """ Password prompt lines should include the user/host in question """ env.password = None env.no_agent = env.no_keys = True output.everything = False with password_response(PASSWORDS[env.user], silent=False): run("ls /simple") regex = r'^\[%s\] Login password for \'%s\': ' % (env.host_string, env.user) assert_contains(regex, sys.stderr.getvalue()) @mock_streams('stderr') @server(pubkeys=True) def test_passphrase_prompt_displays_host_string(self): """ Passphrase prompt lines should include the user/host in question """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY output.everything = False with password_response(CLIENT_PRIVKEY_PASSPHRASE, silent=False): run("ls /simple") regex = r'^\[%s\] Login password for \'%s\': ' % (env.host_string, env.user) assert_contains(regex, sys.stderr.getvalue()) def test_sudo_prompt_display_passthrough(self): """ Sudo prompt should display (via passthrough) when stdout/stderr shown """ TestNetwork._prompt_display(True) def test_sudo_prompt_display_directly(self): """ Sudo prompt should display (manually) when stdout/stderr hidden """ TestNetwork._prompt_display(False) @staticmethod @mock_streams('both') @server(pubkeys=True, responses={'oneliner': 'result'}) def _prompt_display(display_output): env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY output.output = display_output with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[env.user]), silent=False ): sudo('oneliner') if display_output: expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: sudo password: [%(prefix)s] out: Sorry, try again. [%(prefix)s] out: sudo password: [%(prefix)s] out: result """ % {'prefix': env.host_string, 'user': env.user} else: # Note lack of first sudo prompt (as it's autoresponded to) and of # course the actual result output. expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: Sorry, try again. [%(prefix)s] out: sudo password: """ % { 'prefix': env.host_string, 'user': env.user } eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_consecutive_sudos_should_not_have_blank_line(self): """ Consecutive sudo() calls should not incur a blank line in-between """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): sudo('oneliner') sudo('twoliner') expected = """ [%(prefix)s] sudo: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: sudo password: [%(prefix)s] out: Sorry, try again. 
[%(prefix)s] out: sudo password: [%(prefix)s] out: result [%(prefix)s] sudo: twoliner [%(prefix)s] out: sudo password: [%(prefix)s] out: result1 [%(prefix)s] out: result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(sys.stdall.getvalue(), expected[1:]) @mock_streams('both') @server(pubkeys=True, responses={'silent': '', 'normal': 'foo'}) def test_silent_commands_should_not_have_blank_line(self): """ Silent commands should not generate an extra trailing blank line After the move to interactive I/O, it was noticed that while run/sudo commands which had non-empty stdout worked normally (consecutive such commands were totally adjacent), those with no stdout (i.e. silent commands like ``test`` or ``mkdir``) resulted in spurious blank lines after the "run:" line. This looks quite ugly in real-world scripts. """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response(CLIENT_PRIVKEY_PASSPHRASE, silent=False): run('normal') run('silent') run('normal') with hide('everything'): run('normal') run('silent') expected = """ [%(prefix)s] run: normal [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: foo [%(prefix)s] run: silent [%(prefix)s] run: normal [%(prefix)s] out: foo """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_io_should_print_prefix_if_output_prefix_is_true(self): """ run/sudo should print [host_string] if env.output_prefix == True """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): run('oneliner') run('twoliner') expected = """ [%(prefix)s] run: oneliner [%(prefix)s] Login password for '%(user)s': [%(prefix)s] out: result [%(prefix)s] run: twoliner [%(prefix)s] out: result1 [%(prefix)s] out: result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @mock_streams('both') @server( pubkeys=True, responses={'oneliner': 'result', 'twoliner': 'result1\nresult2'} ) def test_io_should_not_print_prefix_if_output_prefix_is_false(self): """ run/sudo shouldn't print [host_string] if env.output_prefix == False """ env.password = None env.no_agent = env.no_keys = True env.key_filename = CLIENT_PRIVKEY with password_response( (CLIENT_PRIVKEY_PASSPHRASE, PASSWORDS[USER]), silent=False ): with settings(output_prefix=False): run('oneliner') run('twoliner') expected = """ [%(prefix)s] run: oneliner [%(prefix)s] Login password for '%(user)s': result [%(prefix)s] run: twoliner result1 result2 """ % {'prefix': env.host_string, 'user': env.user} eq_(expected[1:], sys.stdall.getvalue()) @server() def test_env_host_set_when_host_prompt_used(self): """ Ensure env.host is set during host prompting """ copied_host_string = str(env.host_string) fake = Fake('raw_input', callable=True).returns(copied_host_string) env.host_string = None env.host = None with settings(hide('everything'), patched_input(fake)): run("ls /") # Ensure it did set host_string back to old value eq_(env.host_string, copied_host_string) # Ensure env.host is correct eq_(env.host, normalize(copied_host_string)[1]) def subtask(): run("This should never execute") class TestConnections(FabricTest): @aborts def test_should_abort_when_cannot_connect(self): """ By default, connecting to a nonexistent server should abort.
""" with hide('everything'): execute(subtask, hosts=['nope.nonexistent.com']) def test_should_warn_when_skip_bad_hosts_is_True(self): """ env.skip_bad_hosts = True => execute() skips current host """ with settings(hide('everything'), skip_bad_hosts=True): execute(subtask, hosts=['nope.nonexistent.com']) class TestSSHConfig(FabricTest): def env_setup(self): super(TestSSHConfig, self).env_setup() env.use_ssh_config = True env.ssh_config_path = support("ssh_config") # Undo the changes FabricTest makes to env for server support env.user = env.local_user env.port = env.default_port def test_global_user_with_default_env(self): """ Global User should override default env.user """ eq_(normalize("localhost")[0], "satan") def test_global_user_with_nondefault_env(self): """ Global User should NOT override nondefault env.user """ with settings(user="foo"): eq_(normalize("localhost")[0], "foo") def test_specific_user_with_default_env(self): """ Host-specific User should override default env.user """ eq_(normalize("myhost")[0], "neighbor") def test_user_vs_host_string_value(self): """ SSH-config derived user should NOT override host-string user value """ eq_(normalize("myuser@localhost")[0], "myuser") eq_(normalize("myuser@myhost")[0], "myuser") def test_global_port_with_default_env(self): """ Global Port should override default env.port """ eq_(normalize("localhost")[2], "666") def test_global_port_with_nondefault_env(self): """ Global Port should NOT override nondefault env.port """ with settings(port="777"): eq_(normalize("localhost")[2], "777") def test_specific_port_with_default_env(self): """ Host-specific Port should override default env.port """ eq_(normalize("myhost")[2], "664") def test_port_vs_host_string_value(self): """ SSH-config derived port should NOT override host-string port value """ eq_(normalize("localhost:123")[2], "123") eq_(normalize("myhost:123")[2], "123") def test_hostname_alias(self): """ Hostname setting overrides host string's host value """ eq_(normalize("localhost")[1], "localhost") eq_(normalize("myalias")[1], "otherhost") @with_patched_object(utils, 'warn', Fake('warn', callable=True, expect_call=True)) def test_warns_with_bad_config_file_path(self): # use_ssh_config is already set in our env_setup() with settings(hide('everything'), ssh_config_path="nope_bad_lol"): normalize('foo') @server() def test_real_connection(self): """ Test-server connection using ssh_config values """ with settings( hide('everything'), ssh_config_path=support("testserver_ssh_config"), host_string='testserver', ): ok_(run("ls /simple").succeeded) class TestKeyFilenames(FabricTest): def test_empty_everything(self): """ No env.key_filename and no ssh_config = empty list """ with settings(use_ssh_config=False): with settings(key_filename=""): eq_(key_filenames(), []) with settings(key_filename=[]): eq_(key_filenames(), []) def test_just_env(self): """ Valid env.key_filename and no ssh_config = just env """ with settings(use_ssh_config=False): with settings(key_filename="mykey"): eq_(key_filenames(), ["mykey"]) with settings(key_filename=["foo", "bar"]): eq_(key_filenames(), ["foo", "bar"]) def test_just_ssh_config(self): """ No env.key_filename + valid ssh_config = ssh value """ with settings(use_ssh_config=True, ssh_config_path=support("ssh_config")): for val in ["", []]: with settings(key_filename=val): eq_(key_filenames(), ["foobar.pub"]) def test_both(self): """ Both env.key_filename + valid ssh_config = both show up w/ env var first """ with settings(use_ssh_config=True, 
ssh_config_path=support("ssh_config")): with settings(key_filename="bizbaz.pub"): eq_(key_filenames(), ["bizbaz.pub", "foobar.pub"]) with settings(key_filename=["bizbaz.pub", "whatever.pub"]): expected = ["bizbaz.pub", "whatever.pub", "foobar.pub"] eq_(key_filenames(), expected) Fabric-1.8.2/tests/test_operations.py000644 000765 000024 00000074614 12277451130 020632 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import os import shutil import sys import types from contextlib import nested from StringIO import StringIO import unittest import random import types from nose.tools import raises, eq_, ok_ from fudge import with_patched_object from fabric.state import env, output from fabric.operations import require, prompt, _sudo_prefix, _shell_wrap, \ _shell_escape from fabric.api import get, put, hide, show, cd, lcd, local, run, sudo, quiet from fabric.sftp import SFTP from fabric.exceptions import CommandTimeout from fabric.decorators import with_settings from utils import * from server import (server, PORT, RESPONSES, FILES, PASSWORDS, CLIENT_PRIVKEY, USER, CLIENT_PRIVKEY_PASSPHRASE) # # require() # def test_require_single_existing_key(): """ When given a single existing key, require() throws no exceptions """ # 'version' is one of the default values, so we know it'll be there require('version') def test_require_multiple_existing_keys(): """ When given multiple existing keys, require() throws no exceptions """ require('version', 'sudo_prompt') @aborts def test_require_single_missing_key(): """ When given a single non-existent key, require() aborts """ require('blah') @aborts def test_require_multiple_missing_keys(): """ When given multiple non-existent keys, require() aborts """ require('foo', 'bar') @aborts def test_require_mixed_state_keys(): """ When given mixed-state keys, require() aborts """ require('foo', 'version') @mock_streams('stderr') def test_require_mixed_state_keys_prints_missing_only(): """ When given mixed-state keys, require() prints missing keys only """ try: require('foo', 'version') except SystemExit: err = sys.stderr.getvalue() assert 'version' not in err assert 'foo' in err @aborts def test_require_iterable_provided_by_key(): """ When given a provided_by iterable value, require() aborts """ # 'version' is one of the default values, so we know it'll be there def fake_providing_function(): pass require('foo', provided_by=[fake_providing_function]) @aborts def test_require_noniterable_provided_by_key(): """ When given a provided_by noniterable value, require() aborts """ # 'version' is one of the default values, so we know it'll be there def fake_providing_function(): pass require('foo', provided_by=fake_providing_function) @aborts def test_require_key_exists_empty_list(): """ When given a single existing key but the value is an empty list, require() aborts """ # 'hosts' is one of the default values, so we know it'll be there require('hosts') @aborts @with_settings(foo={}) def test_require_key_exists_empty_dict(): """ When given a single existing key but the value is an empty dict, require() aborts """ require('foo') @aborts @with_settings(foo=()) def test_require_key_exists_empty_tuple(): """ When given a single existing key but the value is an empty tuple, require() aborts """ require('foo') @aborts @with_settings(foo=set()) def test_require_key_exists_empty_set(): """ When given a single existing key but the value is an empty set, require() aborts """ require('foo') @with_settings(foo=0, bar=False) def 
test_require_key_exists_false_primitive_values(): """ When given keys that exist with primitive values that evaluate to False, require() throws no exception """ require('foo', 'bar') @with_settings(foo=['foo'], bar={'bar': 'bar'}, baz=('baz',), qux=set('qux')) def test_require_complex_non_empty_values(): """ When given keys that exist with non-primitive values that are not empty, require() throws no exception """ require('foo', 'bar', 'baz', 'qux') # # prompt() # def p(x): sys.stdout.write(x) @mock_streams('stdout') @with_patched_input(p) def test_prompt_appends_space(): """ prompt() appends a single space when no default is given """ s = "This is my prompt" prompt(s) eq_(sys.stdout.getvalue(), s + ' ') @mock_streams('stdout') @with_patched_input(p) def test_prompt_with_default(): """ prompt() appends given default value plus one space on either side """ s = "This is my prompt" d = "default!" prompt(s, default=d) eq_(sys.stdout.getvalue(), "%s [%s] " % (s, d)) # # run()/sudo() # def test_sudo_prefix_with_user(): """ _sudo_prefix() returns prefix plus -u flag for nonempty user """ eq_( _sudo_prefix(user="foo", group=None), "%s -u \"foo\" " % (env.sudo_prefix % env) ) def test_sudo_prefix_without_user(): """ _sudo_prefix() returns standard prefix when user is empty """ eq_(_sudo_prefix(user=None, group=None), env.sudo_prefix % env) def test_sudo_prefix_with_group(): """ _sudo_prefix() returns prefix plus -g flag for nonempty group """ eq_( _sudo_prefix(user=None, group="foo"), "%s -g \"foo\" " % (env.sudo_prefix % env) ) def test_sudo_prefix_with_user_and_group(): """ _sudo_prefix() returns prefix plus -u and -g for nonempty user and group """ eq_( _sudo_prefix(user="foo", group="bar"), "%s -u \"foo\" -g \"bar\" " % (env.sudo_prefix % env) ) @with_settings(use_shell=True) def test_shell_wrap(): prefix = "prefix" command = "command" for description, shell, sudo_prefix, result in ( ("shell=True, sudo_prefix=None", True, None, '%s "%s"' % (env.shell, command)), ("shell=True, sudo_prefix=string", True, prefix, prefix + ' %s "%s"' % (env.shell, command)), ("shell=False, sudo_prefix=None", False, None, command), ("shell=False, sudo_prefix=string", False, prefix, prefix + " " + command), ): eq_.description = "_shell_wrap: %s" % description yield eq_, _shell_wrap(command, shell_escape=True, shell=shell, sudo_prefix=sudo_prefix), result del eq_.description @with_settings(use_shell=True) def test_shell_wrap_escapes_command_if_shell_is_true(): """ _shell_wrap() escapes given command if shell=True """ cmd = "cd \"Application Support\"" eq_( _shell_wrap(cmd, shell_escape=True, shell=True), '%s "%s"' % (env.shell, _shell_escape(cmd)) ) @with_settings(use_shell=True) def test_shell_wrap_does_not_escape_command_if_shell_is_true_and_shell_escape_is_false(): """ _shell_wrap() does no escaping if shell=True and shell_escape=False """ cmd = "cd \"Application Support\"" eq_( _shell_wrap(cmd, shell_escape=False, shell=True), '%s "%s"' % (env.shell, cmd) ) def test_shell_wrap_does_not_escape_command_if_shell_is_false(): """ _shell_wrap() does no escaping if shell=False """ cmd = "cd \"Application Support\"" eq_(_shell_wrap(cmd, shell_escape=True, shell=False), cmd) def test_shell_escape_escapes_doublequotes(): """ _shell_escape() escapes double-quotes """ cmd = "cd \"Application Support\"" eq_(_shell_escape(cmd), 'cd \\"Application Support\\"') def test_shell_escape_escapes_dollar_signs(): """ _shell_escape() escapes dollar signs """ cmd = "cd $HOME" eq_(_shell_escape(cmd), 'cd \$HOME') def 
test_shell_escape_escapes_backticks(): """ _shell_escape() escapes backticks """ cmd = "touch test.pid && kill `cat test.pid`" eq_(_shell_escape(cmd), "touch test.pid && kill \`cat test.pid\`") class TestCombineStderr(FabricTest): @server() def test_local_none_global_true(self): """ combine_stderr: no kwarg => uses global value (True) """ output.everything = False r = run("both_streams") # Note: the exact way the streams are jumbled here is an implementation # detail of our fake SSH server and may change in the future. eq_("ssttddoeurtr", r.stdout) eq_(r.stderr, "") @server() def test_local_none_global_false(self): """ combine_stderr: no kwarg => uses global value (False) """ output.everything = False env.combine_stderr = False r = run("both_streams") eq_("stdout", r.stdout) eq_("stderr", r.stderr) @server() def test_local_true_global_false(self): """ combine_stderr: True kwarg => overrides global False value """ output.everything = False env.combine_stderr = False r = run("both_streams", combine_stderr=True) eq_("ssttddoeurtr", r.stdout) eq_(r.stderr, "") @server() def test_local_false_global_true(self): """ combine_stderr: False kwarg => overrides global True value """ output.everything = False env.combine_stderr = True r = run("both_streams", combine_stderr=False) eq_("stdout", r.stdout) eq_("stderr", r.stderr) class TestQuietAndWarnKwargs(FabricTest): @server(responses={'wat': ["", "", 1]}) def test_quiet_implies_warn_only(self): # Would raise an exception if warn_only was False eq_(run("wat", quiet=True).failed, True) @server() @mock_streams('both') def test_quiet_implies_hide_everything(self): run("ls /", quiet=True) eq_(sys.stdout.getvalue(), "") eq_(sys.stderr.getvalue(), "") @server(responses={'hrm': ["", "", 1]}) @mock_streams('both') def test_warn_only_is_same_as_settings_warn_only(self): eq_(run("hrm", warn_only=True).failed, True) @server() @mock_streams('both') def test_warn_only_does_not_imply_hide_everything(self): run("ls /simple", warn_only=True) assert sys.stdout.getvalue() != "" class TestMultipleOKReturnCodes(FabricTest): @server(responses={'no srsly its ok': ['', '', 1]}) def test_expand_to_include_1(self): with settings(quiet(), ok_ret_codes=[0, 1]): eq_(run("no srsly its ok").succeeded, True) slow_server = server(responses={'slow': ['', '', 0, 3]}) slow = lambda x: slow_server(raises(CommandTimeout)(x)) class TestRun(FabricTest): """ @server-using generic run()/sudo() tests """ @slow def test_command_timeout_via_env_var(self): env.command_timeout = 2 # timeout after 2 seconds with hide('everything'): run("slow") @slow def test_command_timeout_via_kwarg(self): with hide('everything'): run("slow", timeout=2) @slow def test_command_timeout_via_env_var_in_sudo(self): env.command_timeout = 2 # timeout after 2 seconds with hide('everything'): sudo("slow") @slow def test_command_timeout_via_kwarg_of_sudo(self): with hide('everything'): sudo("slow", timeout=2) # # get() and put() # class TestFileTransfers(FabricTest): # # get() # @server(files={'/home/user/.bashrc': 'bash!'}, home='/home/user') def test_get_relative_remote_dir_uses_home(self): """ get('relative/path') should use remote $HOME """ with hide('everything'): # Another if-it-doesn't-error-out-it-passed test; meh. 
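    # (A relative remote path is resolved against the SFTP login directory,
    # so with home='/home/user' the call below behaves roughly like:
    #
    #     get('.bashrc', local_dir)  # fetches /home/user/.bashrc
    #
    # local_dir being whatever self.path() yields for this test run.)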
eq_(get('.bashrc', self.path()), [self.path('.bashrc')]) @server() def test_get_single_file(self): """ get() with a single non-globbed filename """ remote = 'file.txt' local = self.path(remote) with hide('everything'): get(remote, local) eq_contents(local, FILES[remote]) @server(files={'/base/dir with spaces/file': 'stuff!'}) def test_get_file_from_relative_path_with_spaces(self): """ get('file') should work when the remote path contains spaces """ # from nose.tools import set_trace; set_trace() with hide('everything'): with cd('/base/dir with spaces'): eq_(get('file', self.path()), [self.path('file')]) @server() def test_get_sibling_globs(self): """ get() with globbed files, but no directories """ remotes = ['file.txt', 'file2.txt'] with hide('everything'): get('file*.txt', self.tmpdir) for remote in remotes: eq_contents(self.path(remote), FILES[remote]) @server() def test_get_single_file_in_folder(self): """ get() a folder containing one file """ remote = 'folder/file3.txt' with hide('everything'): get('folder', self.tmpdir) eq_contents(self.path(remote), FILES[remote]) @server() def test_get_tree(self): """ Download entire tree """ with hide('everything'): get('tree', self.tmpdir) leaves = filter(lambda x: x[0].startswith('/tree'), FILES.items()) for path, contents in leaves: eq_contents(self.path(path[1:]), contents) @server() def test_get_tree_with_implicit_local_path(self): """ Download entire tree without specifying a local path """ dirname = env.host_string.replace(':', '-') try: with hide('everything'): get('tree') leaves = filter(lambda x: x[0].startswith('/tree'), FILES.items()) for path, contents in leaves: path = os.path.join(dirname, path[1:]) eq_contents(path, contents) os.remove(path) # Cleanup finally: if os.path.exists(dirname): shutil.rmtree(dirname) @server() def test_get_absolute_path_should_save_relative(self): """ get(/x/y) w/ %(path)s should save y, not x/y """ lpath = self.path() ltarget = os.path.join(lpath, "%(path)s") with hide('everything'): get('/tree/subfolder', ltarget) assert self.exists_locally(os.path.join(lpath, 'subfolder')) assert not self.exists_locally(os.path.join(lpath, 'tree/subfolder')) @server() def test_path_formatstr_nonrecursively_is_just_filename(self): """ get(x/y/z) nonrecursively w/ %(path)s should save y, not y/z """ lpath = self.path() ltarget = os.path.join(lpath, "%(path)s") with hide('everything'): get('/tree/subfolder/file3.txt', ltarget) assert self.exists_locally(os.path.join(lpath, 'file3.txt')) @server() @mock_streams('stderr') def _invalid_file_obj_situations(self, remote_path): with settings(hide('running'), warn_only=True): get(remote_path, StringIO()) assert_contains('is a glob or directory', sys.stderr.getvalue()) def test_glob_and_file_object_invalid(self): """ Remote glob and local file object is invalid """ self._invalid_file_obj_situations('/tree/*') def test_directory_and_file_object_invalid(self): """ Remote directory and local file object is invalid """ self._invalid_file_obj_situations('/tree') @server() def test_get_single_file_absolutely(self): """ get() a single file, using absolute file path """ target = '/etc/apache2/apache2.conf' with hide('everything'): get(target, self.tmpdir) eq_contents(self.path(os.path.basename(target)), FILES[target]) @server() def test_get_file_with_nonexistent_target(self): """ Missing target path on single file download => effectively a rename """ local = self.path('otherfile.txt') target = 'file.txt' with hide('everything'): get(target, local) eq_contents(local, FILES[target]) 
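    # (Several get() tests in this class lean on local_path interpolation; a
    # rough sketch of the keys involved, with /tmp/out as a placeholder:
    #
    #     get('/tree/subfolder/file3.txt',
    #         '/tmp/out/%(host)s/%(path)s')
    #     # %(host)s     -> env.host_string with ':' replaced by '-'
    #     # %(basename)s -> file3.txt
    #     # %(dirname)s  -> the remote directory portion
    #     # %(path)s     -> the remote path, relativized
    # )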
@server() @mock_streams('stderr') def test_get_file_with_existing_file_target(self): """ Clobbering existing local file should overwrite, with warning """ local = self.path('target.txt') target = 'file.txt' with open(local, 'w') as fd: fd.write("foo") with hide('stdout', 'running'): get(target, local) assert "%s already exists" % local in sys.stderr.getvalue() eq_contents(local, FILES[target]) @server() def test_get_file_to_directory(self): """ Directory as target path should result in joined pathname (Yes, this is duplicated in most of the other tests -- but good to have a default in case those tests change how they work later!) """ target = 'file.txt' with hide('everything'): get(target, self.tmpdir) eq_contents(self.path(target), FILES[target]) @server(port=2200) @server(port=2201) def test_get_from_multiple_servers(self): ports = [2200, 2201] hosts = map(lambda x: '127.0.0.1:%s' % x, ports) with settings(all_hosts=hosts): for port in ports: with settings( hide('everything'), host_string='127.0.0.1:%s' % port ): tmp = self.path('') local_path = os.path.join(tmp, "%(host)s", "%(path)s") # Top level file path = 'file.txt' get(path, local_path) assert self.exists_locally(os.path.join( tmp, "127.0.0.1-%s" % port, path )) # Nested file get('tree/subfolder/file3.txt', local_path) assert self.exists_locally(os.path.join( tmp, "127.0.0.1-%s" % port, 'file3.txt' )) @server() def test_get_from_empty_directory_uses_cwd(self): """ get() expands empty remote arg to remote cwd """ with hide('everything'): get('', self.tmpdir) # Spot checks -- though it should've downloaded the entirety of # server.FILES. for x in "file.txt file2.txt tree/file1.txt".split(): assert os.path.exists(os.path.join(self.tmpdir, x)) @server() def _get_to_cwd(self, arg): path = 'file.txt' with hide('everything'): get(path, arg) host_dir = os.path.join( os.getcwd(), env.host_string.replace(':', '-'), ) target = os.path.join(host_dir, path) try: assert os.path.exists(target) # Clean up, since we're not using our tmpdir finally: shutil.rmtree(host_dir) def test_get_to_empty_string_uses_default_format_string(self): """ get() expands empty local arg to local cwd + host + file """ self._get_to_cwd('') def test_get_to_None_uses_default_format_string(self): """ get() expands None local arg to local cwd + host + file """ self._get_to_cwd(None) @server() def test_get_should_accept_file_like_objects(self): """ get()'s local_path arg should take file-like objects too """ fake_file = StringIO() target = '/file.txt' with hide('everything'): get(target, fake_file) eq_(fake_file.getvalue(), FILES[target]) @server() def test_get_interpolation_without_host(self): """ local formatting should work w/o use of %(host)s when run on one host """ with hide('everything'): tmp = self.path('') # dirname, basename local_path = tmp + "/%(dirname)s/foo/%(basename)s" get('/folder/file3.txt', local_path) assert self.exists_locally(tmp + "foo/file3.txt") # path local_path = tmp + "bar/%(path)s" get('/folder/file3.txt', local_path) assert self.exists_locally(tmp + "bar/file3.txt") @server() def test_get_returns_list_of_local_paths(self): """ get() should return an iterable of the local files it created. 
""" d = self.path() with hide('everything'): retval = get('tree', d) files = ['file1.txt', 'file2.txt', 'subfolder/file3.txt'] eq_(map(lambda x: os.path.join(d, 'tree', x), files), retval) @server() def test_get_returns_none_for_stringio(self): """ get() should return None if local_path is a StringIO """ with hide('everything'): eq_([], get('/file.txt', StringIO())) @server() def test_get_return_value_failed_attribute(self): """ get()'s return value should indicate any paths which failed to download. """ with settings(hide('everything'), warn_only=True): retval = get('/doesnt/exist', self.path()) eq_(['/doesnt/exist'], retval.failed) assert not retval.succeeded @server() def test_get_should_not_use_windows_slashes_in_remote_paths(self): """ sftp.glob() should always use Unix-style slashes. """ with hide('everything'): path = "/tree/file1.txt" sftp = SFTP(env.host_string) eq_(sftp.glob(path), [path]) # # put() # @server() def test_put_file_to_existing_directory(self): """ put() a single file into an existing remote directory """ text = "foo!" local = self.mkfile('foo.txt', text) local2 = self.path('foo2.txt') with hide('everything'): put(local, '/') get('/foo.txt', local2) eq_contents(local2, text) @server() def test_put_to_empty_directory_uses_cwd(self): """ put() expands empty remote arg to remote cwd Not a terribly sharp test -- we just get() with a relative path and are testing to make sure they match up -- but should still suffice. """ text = "foo!" local = self.path('foo.txt') local2 = self.path('foo2.txt') with open(local, 'w') as fd: fd.write(text) with hide('everything'): put(local) get('foo.txt', local2) eq_contents(local2, text) @server() def test_put_from_empty_directory_uses_cwd(self): """ put() expands empty local arg to local cwd """ text = 'foo!' # Don't use the current cwd since that's a whole lotta files to upload old_cwd = os.getcwd() os.chdir(self.tmpdir) # Write out file right here with open('file.txt', 'w') as fd: fd.write(text) with hide('everything'): # Put our cwd (which should only contain the file we just created) put('', '/') # Get it back under a new name (noting that when we use a truly # empty put() local call, it makes a directory remotely with the # name of the cwd) remote = os.path.join(os.path.basename(self.tmpdir), 'file.txt') get(remote, 'file2.txt') # Compare for sanity test eq_contents('file2.txt', text) # Restore cwd os.chdir(old_cwd) @server() def test_put_should_accept_file_like_objects(self): """ put()'s local_path arg should take file-like objects too """ local = self.path('whatever') fake_file = StringIO() fake_file.write("testing file-like objects in put()") pointer = fake_file.tell() target = '/new_file.txt' with hide('everything'): put(fake_file, target) get(target, local) eq_contents(local, fake_file.getvalue()) # Sanity test of file pointer eq_(pointer, fake_file.tell()) @server() @raises(ValueError) def test_put_should_raise_exception_for_nonexistent_local_path(self): """ put(nonexistent_file) should raise a ValueError """ put('thisfiledoesnotexist', '/tmp') @server() def test_put_returns_list_of_remote_paths(self): """ put() should return an iterable of the remote files it created. 
""" p = 'uploaded.txt' f = self.path(p) with open(f, 'w') as fd: fd.write("contents") with hide('everything'): retval = put(f, p) eq_(retval, [p]) @server() def test_put_returns_list_of_remote_paths_with_stringio(self): """ put() should return a one-item iterable when uploading from a StringIO """ f = 'uploaded.txt' with hide('everything'): eq_(put(StringIO('contents'), f), [f]) @server() def test_put_return_value_failed_attribute(self): """ put()'s return value should indicate any paths which failed to upload. """ with settings(hide('everything'), warn_only=True): f = StringIO('contents') retval = put(f, '/nonexistent/directory/structure') eq_([""], retval.failed) assert not retval.succeeded @server() def test_put_sends_all_files_with_glob(self): """ put() should send all items that match a glob. """ paths = ['foo1.txt', 'foo2.txt'] glob = 'foo*.txt' remote_directory = '/' for path in paths: self.mkfile(path, 'foo!') with hide('everything'): retval = put(self.path(glob), remote_directory) eq_(sorted(retval), sorted([remote_directory + path for path in paths])) @server() def test_put_sends_correct_file_with_globbing_off(self): """ put() should send a file with a glob pattern in the path, when globbing disabled. """ text = "globbed!" local = self.mkfile('foo[bar].txt', text) local2 = self.path('foo2.txt') with hide('everything'): put(local, '/', use_glob=False) get('/foo[bar].txt', local2) eq_contents(local2, text) # # Interactions with cd() # @server() def test_cd_should_apply_to_put(self): """ put() should honor env.cwd for relative remote paths """ f = 'test.txt' d = '/empty_folder' local = self.path(f) with open(local, 'w') as fd: fd.write('test') with nested(cd(d), hide('everything')): put(local, f) assert self.exists_remotely('%s/%s' % (d, f)) @server(files={'/tmp/test.txt': 'test'}) def test_cd_should_apply_to_get(self): """ get() should honor env.cwd for relative remote paths """ local = self.path('test.txt') with nested(cd('/tmp'), hide('everything')): get('test.txt', local) assert os.path.exists(local) @server() def test_cd_should_not_apply_to_absolute_put(self): """ put() should not prepend env.cwd to absolute remote paths """ local = self.path('test.txt') with open(local, 'w') as fd: fd.write('test') with nested(cd('/tmp'), hide('everything')): put(local, '/test.txt') assert not self.exists_remotely('/tmp/test.txt') assert self.exists_remotely('/test.txt') @server(files={'/test.txt': 'test'}) def test_cd_should_not_apply_to_absolute_get(self): """ get() should not prepend env.cwd to absolute remote paths """ local = self.path('test.txt') with nested(cd('/tmp'), hide('everything')): get('/test.txt', local) assert os.path.exists(local) @server() def test_lcd_should_apply_to_put(self): """ lcd() should apply to put()'s local_path argument """ f = 'lcd_put_test.txt' d = 'subdir' local = self.path(d, f) os.makedirs(os.path.dirname(local)) with open(local, 'w') as fd: fd.write("contents") with nested(lcd(self.path(d)), hide('everything')): put(f, '/') assert self.exists_remotely('/%s' % f) @server() def test_lcd_should_apply_to_get(self): """ lcd() should apply to get()'s local_path argument """ d = self.path('subdir') f = 'file.txt' with nested(lcd(d), hide('everything')): get(f, f) assert self.exists_locally(os.path.join(d, f)) @server() @mock_streams('stdout') def test_stringio_without_name(self): file_obj = StringIO(u'test data') put(file_obj, '/') assert re.search('', sys.stdout.getvalue()) @server() @mock_streams('stdout') def test_stringio_with_name(self): """If a file object 
(StringIO) has a name attribute, use that in output""" file_obj = StringIO(u'test data') file_obj.name = 'Test StringIO Object' put(file_obj, '/') assert re.search(file_obj.name, sys.stdout.getvalue()) # # local() # # TODO: figure out how to mock subprocess, if it's even possible. # For now, simply test to make sure local() does not raise exceptions with # various settings enabled/disabled. def test_local_output_and_capture(): for capture in (True, False): for stdout in (True, False): for stderr in (True, False): hides, shows = ['running'], [] if stdout: hides.append('stdout') else: shows.append('stdout') if stderr: hides.append('stderr') else: shows.append('stderr') with nested(hide(*hides), show(*shows)): d = "local(): capture: %r, stdout: %r, stderr: %r" % ( capture, stdout, stderr ) local.description = d yield local, "echo 'foo' >/dev/null", capture del local.description class TestRunSudoReturnValues(FabricTest): @server() def test_returns_command_given(self): """ run("foo").command == foo """ with hide('everything'): eq_(run("ls /").command, "ls /") @server() def test_returns_fully_wrapped_command(self): """ run("foo").real_command involves env.shell + etc """ # FabTest turns use_shell off, we must reactivate it. # Doing so will cause a failure: server's default command list assumes # it's off, we're not testing actual wrapping here so we don't really # care. Just warn_only it. with settings(hide('everything'), warn_only=True, use_shell=True): # Slightly flexible test, we're not testing the actual construction # here, just that this attribute exists. ok_(env.shell in run("ls /").real_command) Fabric-1.8.2/tests/test_parallel.py000644 000765 000024 00000005056 12277451130 020235 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from fabric.api import run, parallel, env, hide, execute, settings from utils import FabricTest, eq_, aborts, mock_streams from server import server, RESPONSES, USER, HOST, PORT # TODO: move this into test_tasks? meh. 
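# For orientation, a minimal sketch (not part of this suite; the task name,
# hosts and pool size below are illustrative assumptions) of the
# @parallel/execute API exercised by the tests that follow:
#
#     from fabric.api import execute, parallel, run
#
#     @parallel(pool_size=2)
#     def uptime():
#         return run("uptime")
#
#     # execute() returns a {host_string: return_value} dict, e.g.:
#     # execute(uptime, hosts=['127.0.0.1:2200', '127.0.0.1:2201'])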
class OhNoesException(Exception): pass class TestParallel(FabricTest): @server() @parallel def test_parallel(self): """ Want to do a simple call and respond """ env.pool_size = 10 cmd = "ls /simple" with hide('everything'): eq_(run(cmd), RESPONSES[cmd]) @server(port=2200) @server(port=2201) def test_env_host_no_user_or_port(self): """ Ensure env.host doesn't get user/port parts when parallel """ @parallel def _task(): run("ls /simple") assert USER not in env.host assert str(PORT) not in env.host host_string = '%s@%s:%%s' % (USER, HOST) with hide('everything'): execute(_task, hosts=[host_string % 2200, host_string % 2201]) @server(port=2200) @server(port=2201) @aborts def test_parallel_failures_abort(self): with hide('everything'): host1 = '127.0.0.1:2200' host2 = '127.0.0.1:2201' @parallel def mytask(): run("ls /") if env.host_string == host2: raise OhNoesException execute(mytask, hosts=[host1, host2]) @server(port=2200) @server(port=2201) @mock_streams('stderr') # To hide the traceback for now def test_parallel_failures_honor_warn_only(self): with hide('everything'): host1 = '127.0.0.1:2200' host2 = '127.0.0.1:2201' @parallel def mytask(): run("ls /") if env.host_string == host2: raise OhNoesException with settings(warn_only=True): result = execute(mytask, hosts=[host1, host2]) eq_(result[host1], None) assert isinstance(result[host2], OhNoesException) @server(port=2200) @server(port=2201) def test_parallel_implies_linewise(self): host1 = '127.0.0.1:2200' host2 = '127.0.0.1:2201' assert not env.linewise @parallel def mytask(): run("ls /") return env.linewise with hide('everything'): result = execute(mytask, hosts=[host1, host2]) eq_(result[host1], True) eq_(result[host2], True) Fabric-1.8.2/tests/test_project.py000644 000765 000024 00000012202 12277451130 020076 0ustar00jforcierstaff000000 000000 import unittest import os import fudge from fudge.inspector import arg from fabric.contrib import project class UploadProjectTestCase(unittest.TestCase): """Test case for :func: `fabric.contrib.project.upload_project`.""" fake_tmp = "testtempfolder" def setUp(self): fudge.clear_expectations() # We need to mock out run, local, and put self.fake_run = fudge.Fake('project.run', callable=True) self.patched_run = fudge.patch_object( project, 'run', self.fake_run ) self.fake_local = fudge.Fake('local', callable=True) self.patched_local = fudge.patch_object( project, 'local', self.fake_local ) self.fake_put = fudge.Fake('put', callable=True) self.patched_put = fudge.patch_object( project, 'put', self.fake_put ) # We don't want to create temp folders self.fake_mkdtemp = fudge.Fake( 'mkdtemp', expect_call=True ).returns(self.fake_tmp) self.patched_mkdtemp = fudge.patch_object( project, 'mkdtemp', self.fake_mkdtemp ) def tearDown(self): self.patched_run.restore() self.patched_local.restore() self.patched_put.restore() fudge.clear_expectations() @fudge.with_fakes def test_temp_folder_is_used(self): """A unique temp folder is used for creating the archive to upload.""" # Exercise project.upload_project() @fudge.with_fakes def test_project_is_archived_locally(self): """The project should be archived locally before being uploaded.""" # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_local.with_args(arg.startswith("tar -czf")).next_call() # Exercise project.upload_project() @fudge.with_fakes def test_current_directory_is_uploaded_by_default(self): """By default the project uploaded is the current working directory.""" 
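# Rough shape of the local() call the fakes in this test case pin down; only
# the asserted "tar -czf" prefix and "-C <cwd_path> <cwd_name>" suffix are
# guaranteed, the middle is an assumption:
#   local("tar -czf <tmpdir>/<cwd_name>.tar.gz ... -C <cwd_path> <cwd_name>")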
cwd_path, cwd_name = os.path.split(os.getcwd()) # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_local.with_args( arg.endswith("-C %s %s" % (cwd_path, cwd_name)) ).next_call() # Exercise project.upload_project() @fudge.with_fakes def test_path_to_local_project_can_be_specified(self): """It should be possible to specify which local folder to upload.""" project_path = "path/to/my/project" # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_local.with_args( arg.endswith("-C %s %s" % os.path.split(project_path)) ).next_call() # Exercise project.upload_project(local_dir=project_path) @fudge.with_fakes def test_path_to_local_project_can_end_in_separator(self): """A local path ending in a separator should be handled correctly.""" project_path = "path/to/my" base = "project" # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_local.with_args( arg.endswith("-C %s %s" % (project_path, base)) ).next_call() # Exercise project.upload_project(local_dir="%s/%s/" % (project_path, base)) @fudge.with_fakes def test_default_remote_folder_is_home(self): """Project is uploaded to remote home by default.""" local_dir = "folder" # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_put.with_args( "%s/folder.tar.gz" % self.fake_tmp, "folder.tar.gz", use_sudo=False ).next_call() # Exercise project.upload_project(local_dir=local_dir) @fudge.with_fakes def test_path_to_remote_folder_can_be_specified(self): """It should be possible to specify which local folder to upload to.""" local_dir = "folder" remote_path = "path/to/remote/folder" # local() is called more than once so we need an extra next_call() # otherwise fudge compares the args to the last call to local() self.fake_put.with_args( "%s/folder.tar.gz" % self.fake_tmp, "%s/folder.tar.gz" % remote_path, use_sudo=False ).next_call() # Exercise project.upload_project(local_dir=local_dir, remote_dir=remote_path) Fabric-1.8.2/tests/test_server.py000644 000765 000024 00000005637 12257074160 017756 0ustar00jforcierstaff000000 000000 """ Tests for the test server itself. Not intended to be run by the greater test suite, only by specifically targeting it on the command-line. Rationale: not really testing Fabric itself, no need to pollute Fab's own test suite. (Yes, if these tests fail, it's likely that the Fabric tests using the test server may also have issues, but still.) 
""" __test__ = False from nose.tools import eq_, ok_ from fabric.network import ssh from server import FakeSFTPServer class AttrHolder(object): pass def test_list_folder(): for desc, file_map, arg, expected in ( ( "Single file", {'file.txt': 'contents'}, '', ['file.txt'] ), ( "Single absolute file", {'/file.txt': 'contents'}, '/', ['file.txt'] ), ( "Multiple files", {'file1.txt': 'contents', 'file2.txt': 'contents2'}, '', ['file1.txt', 'file2.txt'] ), ( "Single empty folder", {'folder': None}, '', ['folder'] ), ( "Empty subfolders", {'folder': None, 'folder/subfolder': None}, '', ['folder'] ), ( "Non-empty sub-subfolder", {'folder/subfolder/subfolder2/file.txt': 'contents'}, "folder/subfolder/subfolder2", ['file.txt'] ), ( "Mixed files, folders empty and non-empty, in homedir", { 'file.txt': 'contents', 'file2.txt': 'contents2', 'folder/file3.txt': 'contents3', 'empty_folder': None }, '', ['file.txt', 'file2.txt', 'folder', 'empty_folder'] ), ( "Mixed files, folders empty and non-empty, in subdir", { 'file.txt': 'contents', 'file2.txt': 'contents2', 'folder/file3.txt': 'contents3', 'folder/subfolder/file4.txt': 'contents4', 'empty_folder': None }, "folder", ['file3.txt', 'subfolder'] ), ): # Pass in fake server obj. (Can't easily clean up API to be more # testable since it's all implementing 'ssh' interface stuff.) server = AttrHolder() server.files = file_map interface = FakeSFTPServer(server) results = interface.list_folder(arg) # In this particular suite of tests, all results should be a file list, # not "no files found" ok_(results != ssh.SFTP_NO_SUCH_FILE) # Grab filename from SFTPAttribute objects in result output = map(lambda x: x.filename, results) # Yield test generator eq_.description = "list_folder: %s" % desc yield eq_, set(expected), set(output) del eq_.description Fabric-1.8.2/tests/test_state.py000644 000765 000024 00000002125 12257074160 017555 0ustar00jforcierstaff000000 000000 from nose.tools import eq_ from fabric.state import _AliasDict def test_dict_aliasing(): """ Assigning values to aliases updates aliased keys """ ad = _AliasDict( {'bar': False, 'biz': True, 'baz': False}, aliases={'foo': ['bar', 'biz', 'baz']} ) # Before eq_(ad['bar'], False) eq_(ad['biz'], True) eq_(ad['baz'], False) # Change ad['foo'] = True # After eq_(ad['bar'], True) eq_(ad['biz'], True) eq_(ad['baz'], True) def test_nested_dict_aliasing(): """ Aliases can be nested """ ad = _AliasDict( {'bar': False, 'biz': True}, aliases={'foo': ['bar', 'nested'], 'nested': ['biz']} ) # Before eq_(ad['bar'], False) eq_(ad['biz'], True) # Change ad['foo'] = True # After eq_(ad['bar'], True) eq_(ad['biz'], True) def test_dict_alias_expansion(): """ Alias expansion """ ad = _AliasDict( {'bar': False, 'biz': True}, aliases={'foo': ['bar', 'nested'], 'nested': ['biz']} ) eq_(ad.expand_aliases(['foo']), ['bar', 'biz']) Fabric-1.8.2/tests/test_tasks.py000644 000765 000024 00000037730 12277451130 017572 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from contextlib import contextmanager from fudge import Fake, patched_context, with_fakes import unittest from nose.tools import eq_, raises, ok_ import random import sys import fabric from fabric.tasks import WrappedCallableTask, execute, Task, get_task_details from fabric.main import display_command from fabric.api import run, env, settings, hosts, roles, hide, parallel, task from fabric.network import from_dict from fabric.exceptions import NetworkError from utils import eq_, FabricTest, aborts, mock_streams from server import server def 
test_base_task_provides_undefined_name(): task = Task() eq_("undefined", task.name) @raises(NotImplementedError) def test_base_task_raises_exception_on_call_to_run(): task = Task() task.run() class TestWrappedCallableTask(unittest.TestCase): def test_passes_unused_args_to_parent(self): args = [i for i in range(random.randint(1, 10))] def foo(): pass try: task = WrappedCallableTask(foo, *args) except TypeError: msg = "__init__ raised a TypeError, meaning args weren't handled" self.fail(msg) def test_passes_unused_kwargs_to_parent(self): random_range = range(random.randint(1, 10)) kwargs = dict([("key_%s" % i, i) for i in random_range]) def foo(): pass try: task = WrappedCallableTask(foo, **kwargs) except TypeError: self.fail( "__init__ raised a TypeError, meaning kwargs weren't handled") def test_allows_any_number_of_args(self): args = [i for i in range(random.randint(0, 10))] def foo(): pass task = WrappedCallableTask(foo, *args) def test_allows_any_number_of_kwargs(self): kwargs = dict([("key%d" % i, i) for i in range(random.randint(0, 10))]) def foo(): pass task = WrappedCallableTask(foo, **kwargs) def test_run_is_wrapped_callable(self): def foo(): pass task = WrappedCallableTask(foo) eq_(task.wrapped, foo) def test_name_is_the_name_of_the_wrapped_callable(self): def foo(): pass foo.__name__ = "random_name_%d" % random.randint(1000, 2000) task = WrappedCallableTask(foo) eq_(task.name, foo.__name__) def test_name_can_be_overridden(self): def foo(): pass eq_(WrappedCallableTask(foo).name, 'foo') eq_(WrappedCallableTask(foo, name='notfoo').name, 'notfoo') def test_reads_double_under_doc_from_callable(self): def foo(): pass foo.__doc__ = "Some random __doc__: %d" % random.randint(1000, 2000) task = WrappedCallableTask(foo) eq_(task.__doc__, foo.__doc__) def test_dispatches_to_wrapped_callable_on_run(self): random_value = "some random value %d" % random.randint(1000, 2000) def foo(): return random_value task = WrappedCallableTask(foo) eq_(random_value, task()) def test_passes_all_regular_args_to_run(self): def foo(*args): return args random_args = tuple( [random.randint(1000, 2000) for i in range(random.randint(1, 5))] ) task = WrappedCallableTask(foo) eq_(random_args, task(*random_args)) def test_passes_all_keyword_args_to_run(self): def foo(**kwargs): return kwargs random_kwargs = {} for i in range(random.randint(1, 5)): random_key = ("foo", "bar", "baz", "foobar", "barfoo")[i] random_kwargs[random_key] = random.randint(1000, 2000) task = WrappedCallableTask(foo) eq_(random_kwargs, task(**random_kwargs)) def test_calling_the_object_is_the_same_as_run(self): random_return = random.randint(1000, 2000) def foo(): return random_return task = WrappedCallableTask(foo) eq_(task(), task.run()) class TestTask(unittest.TestCase): def test_takes_an_alias_kwarg_and_wraps_it_in_aliases_list(self): random_alias = "alias_%d" % random.randint(100, 200) task = Task(alias=random_alias) self.assertTrue(random_alias in task.aliases) def test_aliases_are_set_based_on_provided_aliases(self): aliases = ["a_%d" % i for i in range(random.randint(1, 10))] task = Task(aliases=aliases) self.assertTrue(all([a in task.aliases for a in aliases])) def test_aliases_are_None_by_default(self): task = Task() self.assertTrue(task.aliases is None) # Reminder: decorator syntax, e.g.: # @foo # def bar():... # # is semantically equivalent to: # def bar():... 
# bar = foo(bar) # # this simplifies testing :) def test_decorator_incompatibility_on_task(): from fabric.decorators import task, hosts, runs_once, roles def foo(): return "foo" foo = task(foo) # since we aren't setting foo to be the newly decorated thing, its cool hosts('me@localhost')(foo) runs_once(foo) roles('www')(foo) def test_decorator_closure_hiding(): """ @task should not accidentally destroy decorated attributes from @hosts/etc """ from fabric.decorators import task, hosts def foo(): print(env.host_string) foo = task(hosts("me@localhost")(foo)) eq_(["me@localhost"], foo.hosts) # # execute() # def dict_contains(superset, subset): """ Assert that all key/val pairs in dict 'subset' also exist in 'superset' """ for key, value in subset.iteritems(): ok_(key in superset) eq_(superset[key], value) class TestExecute(FabricTest): @with_fakes def test_calls_task_function_objects(self): """ should execute the passed-in function object """ execute(Fake(callable=True, expect_call=True)) @with_fakes def test_should_look_up_task_name(self): """ should also be able to handle task name strings """ name = 'task1' commands = {name: Fake(callable=True, expect_call=True)} with patched_context(fabric.state, 'commands', commands): execute(name) @with_fakes def test_should_handle_name_of_Task_object(self): """ handle corner case of Task object referrred to by name """ name = 'task2' class MyTask(Task): run = Fake(callable=True, expect_call=True) mytask = MyTask() mytask.name = name commands = {name: mytask} with patched_context(fabric.state, 'commands', commands): execute(name) @aborts def test_should_abort_if_task_name_not_found(self): """ should abort if given an invalid task name """ execute('thisisnotavalidtaskname') @with_fakes def test_should_pass_through_args_kwargs(self): """ should pass in any additional args, kwargs to the given task. 
""" task = ( Fake(callable=True, expect_call=True) .with_args('foo', biz='baz') ) execute(task, 'foo', biz='baz') @with_fakes def test_should_honor_hosts_kwarg(self): """ should use hosts kwarg to set run list """ # Make two full copies of a host list hostlist = ['a', 'b', 'c'] hosts = hostlist[:] # Side-effect which asserts the value of env.host_string when it runs def host_string(): eq_(env.host_string, hostlist.pop(0)) task = Fake(callable=True, expect_call=True).calls(host_string) with hide('everything'): execute(task, hosts=hosts) def test_should_honor_hosts_decorator(self): """ should honor @hosts on passed-in task objects """ # Make two full copies of a host list hostlist = ['a', 'b', 'c'] @hosts(*hostlist[:]) def task(): eq_(env.host_string, hostlist.pop(0)) with hide('running'): execute(task) def test_should_honor_roles_decorator(self): """ should honor @roles on passed-in task objects """ # Make two full copies of a host list roledefs = {'role1': ['a', 'b', 'c']} role_copy = roledefs['role1'][:] @roles('role1') def task(): eq_(env.host_string, role_copy.pop(0)) with settings(hide('running'), roledefs=roledefs): execute(task) @with_fakes def test_should_set_env_command_to_string_arg(self): """ should set env.command to any string arg, if given """ name = "foo" def command(): eq_(env.command, name) task = Fake(callable=True, expect_call=True).calls(command) with patched_context(fabric.state, 'commands', {name: task}): execute(name) @with_fakes def test_should_set_env_command_to_name_attr(self): """ should set env.command to TaskSubclass.name if possible """ name = "foo" def command(): eq_(env.command, name) task = ( Fake(callable=True, expect_call=True) .has_attr(name=name) .calls(command) ) execute(task) @with_fakes def test_should_set_all_hosts(self): """ should set env.all_hosts to its derived host list """ hosts = ['a', 'b'] roledefs = {'r1': ['c', 'd']} roles = ['r1'] exclude_hosts = ['a'] def command(): eq_(set(env.all_hosts), set(['b', 'c', 'd'])) task = Fake(callable=True, expect_call=True).calls(command) with settings(hide('everything'), roledefs=roledefs): execute( task, hosts=hosts, roles=roles, exclude_hosts=exclude_hosts ) @mock_streams('stdout') def test_should_print_executing_line_per_host(self): """ should print "Executing" line once per host """ def task(): pass execute(task, hosts=['host1', 'host2']) eq_(sys.stdout.getvalue(), """[host1] Executing task 'task' [host2] Executing task 'task' """) @mock_streams('stdout') def test_should_not_print_executing_line_for_singletons(self): """ should not print "Executing" line for non-networked tasks """ def task(): pass with settings(hosts=[]): # protect against really odd test bleed :( execute(task) eq_(sys.stdout.getvalue(), "") def test_should_return_dict_for_base_case(self): """ Non-network-related tasks should return a dict w/ special key """ def task(): return "foo" eq_(execute(task), {'': 'foo'}) @server(port=2200) @server(port=2201) def test_should_return_dict_for_serial_use_case(self): """ Networked but serial tasks should return per-host-string dict """ ports = [2200, 2201] hosts = map(lambda x: '127.0.0.1:%s' % x, ports) def task(): run("ls /simple") return "foo" with hide('everything'): eq_(execute(task, hosts=hosts), { '127.0.0.1:2200': 'foo', '127.0.0.1:2201': 'foo' }) @server() def test_should_preserve_None_for_non_returning_tasks(self): """ Tasks which don't return anything should still show up in the dict """ def local_task(): pass def remote_task(): with hide('everything'): run("ls /simple") 
eq_(execute(local_task), {'<local-only>': None}) with hide('everything'): eq_( execute(remote_task, hosts=[env.host_string]), {env.host_string: None} ) def test_should_use_sentinel_for_tasks_that_errored(self): """ Tasks which errored but didn't abort should contain e.g. a NetworkError """ def task(): run("whoops") host_string = 'localhost:1234' with settings(hide('everything'), skip_bad_hosts=True): retval = execute(task, hosts=[host_string]) assert isinstance(retval[host_string], NetworkError) @server(port=2200) @server(port=2201) def test_parallel_return_values(self): """ Parallel mode should still return values as in serial mode """ @parallel @hosts('127.0.0.1:2200', '127.0.0.1:2201') def task(): run("ls /simple") return env.host_string.split(':')[1] with hide('everything'): retval = execute(task) eq_(retval, {'127.0.0.1:2200': '2200', '127.0.0.1:2201': '2201'}) @with_fakes def test_should_work_with_Task_subclasses(self): """ should work for Task subclasses, not just WrappedCallableTask """ class MyTask(Task): name = "mytask" run = Fake(callable=True, expect_call=True) mytask = MyTask() execute(mytask) class TestExecuteEnvInteractions(FabricTest): def set_network(self): # Don't update env.host/host_string/etc pass @server(port=2200) @server(port=2201) def test_should_not_mutate_its_own_env_vars(self): """ internal env changes should not bleed out, but task env changes should """ # Task that uses a handful of features which involve env vars @parallel @hosts('username@127.0.0.1:2200', 'username@127.0.0.1:2201') def mytask(): run("ls /simple") # Pre-assertions assertions = { 'parallel': False, 'all_hosts': [], 'host': None, 'hosts': [], 'host_string': None } for key, value in assertions.items(): eq_(env[key], value) # Run with hide('everything'): result = execute(mytask) eq_(len(result), 2) # Post-assertions for key, value in assertions.items(): eq_(env[key], value) @server() def test_should_allow_task_to_modify_env_vars(self): @hosts('username@127.0.0.1:2200') def mytask(): run("ls /simple") env.foo = "bar" with hide('everything'): execute(mytask) eq_(env.foo, "bar") eq_(env.host_string, None) class TestTaskDetails(unittest.TestCase): def test_old_style_task_with_default_args(self): def task_old_style(arg1, arg2, arg3=None, arg4='yes'): '''Docstring''' details = get_task_details(task_old_style) eq_("Docstring\n" "Arguments: arg1, arg2, arg3=None, arg4='yes'", details) def test_old_style_task_without_default_args(self): def task_old_style(arg1, arg2): '''Docstring''' details = get_task_details(task_old_style) eq_("Docstring\n" "Arguments: arg1, arg2", details) def test_old_style_task_without_args(self): def task_old_style(): '''Docstring''' details = get_task_details(task_old_style) eq_("Docstring\n" "Arguments: ", details) def test_decorated_task(self): @task def decorated_task(arg1): '''Docstring''' eq_("Docstring\n" "Arguments: arg1", decorated_task.__details__()) def test_subclassed_task(self): class SpecificTask(Task): def run(self, arg1, arg2, arg3): '''Docstring''' eq_("Docstring\n" "Arguments: self, arg1, arg2, arg3", SpecificTask().__details__()) @mock_streams('stdout') def test_multiline_docstring_indented_correctly(self): def mytask(arg1): """ This is a multi line docstring. For reals. """ try: with patched_context(fabric.state, 'commands', {'mytask': mytask}): display_command('mytask') except SystemExit: # ugh pass eq_( sys.stdout.getvalue(), """Displaying detailed information for task 'mytask': This is a multi line docstring. For reals.
Arguments: arg1 """ ) Fabric-1.8.2/tests/test_utils.py000644 000765 000024 00000015701 12277461502 017603 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import sys from unittest import TestCase from fudge import Fake, patched_context, with_fakes from fudge.patcher import with_patched_object from nose.tools import eq_, raises from fabric.state import output, env from fabric.utils import warn, indent, abort, puts, fastprint, error, RingBuffer from fabric import utils # For patching from fabric.context_managers import settings, hide from fabric.colors import magenta, red from utils import mock_streams, aborts, FabricTest, assert_contains @mock_streams('stderr') @with_patched_object(output, 'warnings', True) def test_warn(): """ warn() should print 'Warning' plus given text """ warn("Test") eq_("\nWarning: Test\n\n", sys.stderr.getvalue()) def test_indent(): for description, input, output in ( ("Sanity check: 1 line string", 'Test', ' Test'), ("List of strings turns in to strings joined by \\n", ["Test", "Test"], ' Test\n Test'), ): eq_.description = "indent(): %s" % description yield eq_, indent(input), output del eq_.description def test_indent_with_strip(): for description, input, output in ( ("Sanity check: 1 line string", indent('Test', strip=True), ' Test'), ("Check list of strings", indent(["Test", "Test"], strip=True), ' Test\n Test'), ("Check list of strings", indent([" Test", " Test"], strip=True), ' Test\n Test'), ): eq_.description = "indent(strip=True): %s" % description yield eq_, input, output del eq_.description @aborts def test_abort(): """ abort() should raise SystemExit """ abort("Test") class TestException(Exception): pass @raises(TestException) def test_abort_with_exception(): """ abort() should raise a provided exception """ with settings(abort_exception=TestException): abort("Test") @mock_streams('stderr') @with_patched_object(output, 'aborts', True) def test_abort_message(): """ abort() should print 'Fatal error' plus exception value """ try: abort("Test") except SystemExit: pass result = sys.stderr.getvalue() eq_("\nFatal error: Test\n\nAborting.\n", result) @mock_streams('stdout') def test_puts_with_user_output_on(): """ puts() should print input to sys.stdout if "user" output level is on """ s = "string!" 
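# The puts() tests below toggle output.user directly; 'user' is the output
# level gating puts()/fastprint() text, equivalent to using
# hide('user')/show('user').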
output.user = True puts(s, show_prefix=False) eq_(sys.stdout.getvalue(), s + "\n") @mock_streams('stdout') def test_puts_with_user_output_off(): """ puts() shouldn't print input to sys.stdout if "user" output level is off """ output.user = False puts("You aren't reading this.") eq_(sys.stdout.getvalue(), "") @mock_streams('stdout') def test_puts_with_prefix(): """ puts() should prefix output with env.host_string if non-empty """ s = "my output" h = "localhost" with settings(host_string=h): puts(s) eq_(sys.stdout.getvalue(), "[%s] %s" % (h, s + "\n")) @mock_streams('stdout') def test_puts_without_prefix(): """ puts() shouldn't prefix output with env.host_string if show_prefix is False """ s = "my output" h = "localhost" puts(s, show_prefix=False) eq_(sys.stdout.getvalue(), "%s" % (s + "\n")) @with_fakes def test_fastprint_calls_puts(): """ fastprint() is just an alias to puts() """ text = "Some output" fake_puts = Fake('puts', expect_call=True).with_args( text=text, show_prefix=False, end="", flush=True ) with patched_context(utils, 'puts', fake_puts): fastprint(text) class TestErrorHandling(FabricTest): @with_patched_object(utils, 'warn', Fake('warn', callable=True, expect_call=True)) def test_error_warns_if_warn_only_True_and_func_None(self): """ warn_only=True, error(func=None) => calls warn() """ with settings(warn_only=True): error('foo') @with_patched_object(utils, 'abort', Fake('abort', callable=True, expect_call=True)) def test_error_aborts_if_warn_only_False_and_func_None(self): """ warn_only=False, error(func=None) => calls abort() """ with settings(warn_only=False): error('foo') def test_error_calls_given_func_if_func_not_None(self): """ error(func=callable) => calls callable() """ error('foo', func=Fake(callable=True, expect_call=True)) @mock_streams('stdout') @with_patched_object(utils, 'abort', Fake('abort', callable=True, expect_call=True).calls(lambda x: sys.stdout.write(x + "\n"))) def test_error_includes_stdout_if_given_and_hidden(self): """ error() correctly prints stdout if it was previously hidden """ # Mostly to catch regression bug(s) stdout = "this is my stdout" with hide('stdout'): error("error message", func=utils.abort, stdout=stdout) assert_contains(stdout, sys.stdout.getvalue()) @mock_streams('stderr') @with_patched_object(utils, 'abort', Fake('abort', callable=True, expect_call=True).calls(lambda x: sys.stderr.write(x + "\n"))) def test_error_includes_stderr_if_given_and_hidden(self): """ error() correctly prints stderr if it was previously hidden """ # Mostly to catch regression bug(s) stderr = "this is my stderr" with hide('stderr'): error("error message", func=utils.abort, stderr=stderr) assert_contains(stderr, sys.stderr.getvalue()) @mock_streams('stderr') def test_warnings_print_magenta_if_colorize_on(self): with settings(colorize_errors=True): error("oh god", func=utils.warn, stderr="oops") # can't use assert_contains as ANSI codes contain regex specialchars eq_(magenta("\nWarning: oh god\n\n"), sys.stderr.getvalue()) @mock_streams('stderr') @raises(SystemExit) def test_errors_print_red_if_colorize_on(self): with settings(colorize_errors=True): error("oh god", func=utils.abort, stderr="oops") # can't use assert_contains as ANSI codes contain regex specialchars eq_(red("\nFatal error: oh god\n\n"), sys.stderr.getvalue()) class TestRingBuffer(TestCase): def setUp(self): self.b = RingBuffer([], maxlen=5) def test_append_empty(self): self.b.append('x') eq_(self.b, ['x']) def test_append_full(self): self.b.extend("abcde") self.b.append('f') eq_(self.b, ['b', 'c',
'd', 'e', 'f']) def test_extend_empty(self): self.b.extend("abc") eq_(self.b, ['a', 'b', 'c']) def test_extend_overrun(self): self.b.extend("abc") self.b.extend("defg") eq_(self.b, ['c', 'd', 'e', 'f', 'g']) def test_extend_full(self): self.b.extend("abcde") self.b.extend("fgh") eq_(self.b, ['d', 'e', 'f', 'g', 'h']) Fabric-1.8.2/tests/test_version.py000644 000765 000024 00000001770 12257074160 020127 0ustar00jforcierstaff000000 000000 """ Tests covering Fabric's version number pretty-print functionality. """ from nose.tools import eq_ import fabric.version def test_get_version(): get_version = fabric.version.get_version sha = fabric.version.git_sha() sha1 = (" (%s)" % sha) if sha else "" for tup, short, normal, verbose in [ ((0, 9, 0, 'final', 0), '0.9.0', '0.9', '0.9 final'), ((0, 9, 1, 'final', 0), '0.9.1', '0.9.1', '0.9.1 final'), ((0, 9, 0, 'alpha', 1), '0.9a1', '0.9 alpha 1', '0.9 alpha 1'), ((0, 9, 1, 'beta', 1), '0.9.1b1', '0.9.1 beta 1', '0.9.1 beta 1'), ((0, 9, 0, 'release candidate', 1), '0.9rc1', '0.9 release candidate 1', '0.9 release candidate 1'), ((1, 0, 0, 'alpha', 0), '1.0a%s' % sha1, '1.0 pre-alpha%s' % sha1, '1.0 pre-alpha%s' % sha1), ]: fabric.version.VERSION = tup yield eq_, get_version('short'), short yield eq_, get_version('normal'), normal yield eq_, get_version('verbose'), verbose Fabric-1.8.2/tests/utils.py000644 000765 000024 00000013635 12277461502 016550 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from contextlib import contextmanager from copy import deepcopy from fudge.patcher import with_patched_object from functools import partial from types import StringTypes import copy import getpass import os import re import shutil import sys import tempfile from fudge import Fake, patched_context, clear_expectations, with_patched_object from nose.tools import raises from nose import SkipTest from fabric.context_managers import settings from fabric.state import env, output from fabric.sftp import SFTP import fabric.network from fabric.network import normalize, to_dict from server import PORT, PASSWORDS, USER, HOST from mock_streams import mock_streams class FabricTest(object): """ Nose-oriented test runner which wipes state.env and provides file helpers. """ def setup(self): # Clear Fudge mock expectations clear_expectations() # Copy env, output for restoration in teardown self.previous_env = copy.deepcopy(env) # Deepcopy doesn't work well on AliasDicts; but they're only one layer # deep anyways, so... self.previous_output = output.items() # Allow hooks from subclasses here for setting env vars (so they get # purged correctly in teardown()) self.env_setup() # Temporary local file dir self.tmpdir = tempfile.mkdtemp() def set_network(self): env.update(to_dict('%s@%s:%s' % (USER, HOST, PORT))) def env_setup(self): # Set up default networking for test server env.disable_known_hosts = True self.set_network() env.password = PASSWORDS[USER] # Command response mocking is easier without having to account for # shell wrapping everywhere. 
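# (When use_shell is True, run()/sudo() send commands wrapped in env.shell --
# by default something like /bin/bash -l -c "<command>" -- which the mock
# server's canned RESPONSES don't account for; see TestRunSudoReturnValues in
# test_operations.py.)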
env.use_shell = False def teardown(self): env.clear() # In case tests set env vars that didn't exist previously env.update(self.previous_env) output.update(self.previous_output) shutil.rmtree(self.tmpdir) # Clear Fudge mock expectations...again clear_expectations() def path(self, *path_parts): return os.path.join(self.tmpdir, *path_parts) def mkfile(self, path, contents): dest = self.path(path) with open(dest, 'w') as fd: fd.write(contents) return dest def exists_remotely(self, path): return SFTP(env.host_string).exists(path) def exists_locally(self, path): return os.path.exists(path) def password_response(password, times_called=None, silent=True): """ Context manager which patches ``getpass.getpass`` to return ``password``. ``password`` may be a single string or an iterable of strings: * If single string, given password is returned every time ``getpass`` is called. * If iterable, iterated over for each call to ``getpass``, after which ``getpass`` will error. If ``times_called`` is given, it is used to add a ``Fake.times_called`` clause to the mock object, e.g. ``.times_called(1)``. Specifying ``times_called`` alongside an iterable ``password`` list is unsupported (see Fudge docs on ``Fake.next_call``). If ``silent`` is True, no prompt will be printed to ``sys.stderr``. """ fake = Fake('getpass', callable=True) # Assume stringtype or iterable, turn into mutable iterable if isinstance(password, StringTypes): passwords = [password] else: passwords = list(password) # Optional echoing of prompt to mimic real behavior of getpass # NOTE: also echo a newline if the prompt isn't a "passthrough" from the # server (as it means the server won't be sending its own newline for us). echo = lambda x, y: y.write(x + ("\n" if x != " " else "")) # Always return first (only?) password right away fake = fake.returns(passwords.pop(0)) if not silent: fake = fake.calls(echo) # If we had >1, return those afterwards for pw in passwords: fake = fake.next_call().returns(pw) if not silent: fake = fake.calls(echo) # Passthrough times_called if times_called: fake = fake.times_called(times_called) return patched_context(getpass, 'getpass', fake) def _assert_contains(needle, haystack, invert): matched = re.search(needle, haystack, re.M) if (invert and matched) or (not invert and not matched): raise AssertionError("r'%s' %sfound in '%s'" % ( needle, "" if invert else "not ", haystack )) assert_contains = partial(_assert_contains, invert=False) assert_not_contains = partial(_assert_contains, invert=True) def line_prefix(prefix, string): """ Return ``string`` with all lines prefixed by ``prefix``. """ return "\n".join(prefix + x for x in string.splitlines()) def eq_(result, expected, msg=None): """ Shadow of the Nose builtin which presents easier to read multiline output. 
""" params = {'expected': expected, 'result': result} aka = """ --------------------------------- aka ----------------------------------------- Expected: %(expected)r Got: %(result)r """ % params default_msg = """ Expected: %(expected)s Got: %(result)s """ % params if (repr(result) != str(result)) or (repr(expected) != str(expected)): default_msg += aka assert result == expected, msg or default_msg def eq_contents(path, text): with open(path) as fd: eq_(text, fd.read()) def support(path): return os.path.join(os.path.dirname(__file__), 'support', path) fabfile = support @contextmanager def path_prefix(module): i = 0 sys.path.insert(i, os.path.dirname(module)) yield sys.path.pop(i) def aborts(func): return raises(SystemExit)(mock_streams('stderr')(func)) def _patched_input(func, fake): return func(sys.modules['__builtin__'], 'raw_input', fake) patched_input = partial(_patched_input, patched_context) with_patched_input = partial(_patched_input, with_patched_object) Fabric-1.8.2/tests/support/__init__.py000644 000765 000024 00000000000 12257074160 020637 0ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/support/classbased_task_fabfile.py000644 000765 000024 00000000205 12257074160 023705 0ustar00jforcierstaff000000 000000 from fabric import tasks class ClassBasedTask(tasks.Task): def run(self, *args, **kwargs): pass foo = ClassBasedTask() Fabric-1.8.2/tests/support/decorated_fabfile.py000644 000765 000024 00000000123 12257074160 022510 0ustar00jforcierstaff000000 000000 from fabric.decorators import task @task def foo(): pass def bar(): pass Fabric-1.8.2/tests/support/decorated_fabfile_with_classbased_task.py000644 000765 000024 00000000375 12257074160 026762 0ustar00jforcierstaff000000 000000 from fabric import tasks from fabric.decorators import task class ClassBasedTask(tasks.Task): def __init__(self): self.name = "foo" self.use_decorated = True def run(self, *args, **kwargs): pass foo = ClassBasedTask() Fabric-1.8.2/tests/support/decorated_fabfile_with_modules.py000644 000765 000024 00000000163 12257074160 025277 0ustar00jforcierstaff000000 000000 from fabric.decorators import task import module_fabtasks as tasks @task def foo(): pass def bar(): pass Fabric-1.8.2/tests/support/decorator_order.py000644 000765 000024 00000000333 12257074160 022266 0ustar00jforcierstaff000000 000000 from fabric.api import task, hosts, roles @hosts('whatever') @task def foo(): pass # There must be at least one unmolested new-style task for the decorator order # problem to appear. @task def caller(): pass Fabric-1.8.2/tests/support/deep.py000644 000765 000024 00000000021 12257074160 020020 0ustar00jforcierstaff000000 000000 import submodule Fabric-1.8.2/tests/support/default_task_submodule.py000644 000765 000024 00000000120 12257074160 023630 0ustar00jforcierstaff000000 000000 from fabric.api import task @task(default=True) def long_task_name(): pass Fabric-1.8.2/tests/support/default_tasks.py000644 000765 000024 00000000052 12257074160 021740 0ustar00jforcierstaff000000 000000 import default_task_submodule as mymodule Fabric-1.8.2/tests/support/docstring.py000644 000765 000024 00000000130 12257074160 021100 0ustar00jforcierstaff000000 000000 from fabric.decorators import task @task def foo(): """ Foos! 
""" pass Fabric-1.8.2/tests/support/explicit_fabfile.py000644 000765 000024 00000000074 12257074160 022404 0ustar00jforcierstaff000000 000000 __all__ = ['foo'] def foo(): pass def bar(): pass Fabric-1.8.2/tests/support/flat_alias.py000644 000765 000024 00000000114 12257074160 021205 0ustar00jforcierstaff000000 000000 from fabric.api import task @task(alias="foo_aliased") def foo(): pass Fabric-1.8.2/tests/support/flat_aliases.py000644 000765 000024 00000000143 12257074160 021537 0ustar00jforcierstaff000000 000000 from fabric.api import task @task(aliases=["foo_aliased", "foo_aliased_two"]) def foo(): pass Fabric-1.8.2/tests/support/implicit_fabfile.py000644 000765 000024 00000000051 12257074160 022370 0ustar00jforcierstaff000000 000000 def foo(): pass def bar(): pass Fabric-1.8.2/tests/support/mapping.py000644 000765 000024 00000000237 12257074160 020547 0ustar00jforcierstaff000000 000000 from fabric.tasks import Task class MappingTask(dict, Task): def run(self): pass mapping_task = MappingTask() mapping_task.name = "mapping_task" Fabric-1.8.2/tests/support/module_fabtasks.py000644 000765 000024 00000000102 12257074160 022246 0ustar00jforcierstaff000000 000000 def hello(): print("hello") def world(): print("world") Fabric-1.8.2/tests/support/nested_alias.py000644 000765 000024 00000000034 12257074160 021542 0ustar00jforcierstaff000000 000000 import flat_alias as nested Fabric-1.8.2/tests/support/nested_aliases.py000644 000765 000024 00000000036 12257074160 022074 0ustar00jforcierstaff000000 000000 import flat_aliases as nested Fabric-1.8.2/tests/support/ssh_config000644 000765 000024 00000000256 12257074160 020610 0ustar00jforcierstaff000000 000000 Host myhost User neighbor Port 664 IdentityFile neighbor.pub Host myalias HostName otherhost Host * User satan Port 666 IdentityFile foobar.pub Fabric-1.8.2/tests/support/submodule/000755 000765 000024 00000000000 12277461504 020543 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/support/testserver_ssh_config000644 000765 000024 00000000172 12257074160 023073 0ustar00jforcierstaff000000 000000 Host testserver # TODO: get these pulling from server.py. Meh. 
HostName 127.0.0.1 Port 2200 User username Fabric-1.8.2/tests/support/tree/000755 000765 000024 00000000000 12277461504 017503 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/support/tree/__init__.py000644 000765 000024 00000000160 12257074160 021605 0ustar00jforcierstaff000000 000000 from fabric.api import task import system, db @task def deploy(): pass @task def build_docs(): pass Fabric-1.8.2/tests/support/tree/db.py000644 000765 000024 00000000074 12257074160 020437 0ustar00jforcierstaff000000 000000 from fabric.api import task @task def migrate(): pass Fabric-1.8.2/tests/support/tree/system/000755 000765 000024 00000000000 12277461504 021027 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/support/tree/system/__init__.py000644 000765 000024 00000000122 12257074160 023127 0ustar00jforcierstaff000000 000000 from fabric.api import task import debian @task def install_package(): pass Fabric-1.8.2/tests/support/tree/system/debian.py000644 000765 000024 00000000077 12257074160 022623 0ustar00jforcierstaff000000 000000 from fabric.api import task @task def update_apt(): pass Fabric-1.8.2/tests/support/submodule/__init__.py000644 000765 000024 00000000062 12257074160 022646 0ustar00jforcierstaff000000 000000 import subsubmodule def classic_task(): pass Fabric-1.8.2/tests/support/submodule/subsubmodule/000755 000765 000024 00000000000 12277461504 023254 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/tests/support/submodule/subsubmodule/__init__.py000644 000765 000024 00000000074 12257074160 025362 0ustar00jforcierstaff000000 000000 from fabric.api import task @task def deeptask(): pass Fabric-1.8.2/Fabric.egg-info/dependency_links.txt000644 000765 000024 00000000001 12277461504 022614 0ustar00jforcierstaff000000 000000 Fabric-1.8.2/Fabric.egg-info/entry_points.txt000644 000765 000024 00000000052 12277461504 022041 0ustar00jforcierstaff000000 000000 [console_scripts] fab = fabric.main:main Fabric-1.8.2/Fabric.egg-info/PKG-INFO000644 000765 000024 00000006406 12277461504 017651 0ustar00jforcierstaff000000 000000 Metadata-Version: 1.0 Name: Fabric Version: 1.8.2 Summary: Fabric is a simple, Pythonic tool for remote execution and deployment. Home-page: http://fabfile.org Author: Jeff Forcier Author-email: jeff@bitprophet.org License: UNKNOWN Description: To find out what's new in this version of Fabric, please see `the changelog `_. You can also install the `in-development version `_ using pip, with `pip install fabric==dev`. ---- .. image:: https://secure.travis-ci.org/fabric/fabric.png?branch=master :target: https://travis-ci.org/fabric/fabric Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. It provides a basic suite of operations for executing local or remote shell commands (normally or via ``sudo``) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution. Typical use involves creating a Python module containing one or more functions, then executing them via the ``fab`` command-line tool. Below is a small but complete "fabfile" containing a single task:: from fabric.api import run def host_type(): run('uname -s') Once a task is defined, it may be run on one or more servers, like so:: $ fab -H localhost,linuxbox host_type [localhost] run: uname -s [localhost] out: Darwin [linuxbox] run: uname -s [linuxbox] out: Linux Done. Disconnecting from localhost... done. Disconnecting from linuxbox... done. 
In addition to use via the ``fab`` tool, Fabric's components may be imported into other Python code, providing a Pythonic interface to the SSH protocol suite at a higher level than that provided by e.g. the ``Paramiko`` library (which Fabric itself uses.) ---- For more information, please see the Fabric website or execute ``fab --help``. Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Unix Classifier: Operating System :: POSIX Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Software Development Classifier: Topic :: Software Development :: Build Tools Classifier: Topic :: Software Development :: Libraries Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Clustering Classifier: Topic :: System :: Software Distribution Classifier: Topic :: System :: Systems Administration Fabric-1.8.2/Fabric.egg-info/requires.txt000644 000765 000024 00000000020 12277461504 021136 0ustar00jforcierstaff000000 000000 paramiko>=1.10.0Fabric-1.8.2/Fabric.egg-info/SOURCES.txt000644 000765 000024 00000005623 12277461504 020440 0ustar00jforcierstaff000000 000000 AUTHORS INSTALL LICENSE MANIFEST.in README.rst requirements.txt setup.py Fabric.egg-info/PKG-INFO Fabric.egg-info/SOURCES.txt Fabric.egg-info/dependency_links.txt Fabric.egg-info/entry_points.txt Fabric.egg-info/requires.txt Fabric.egg-info/top_level.txt docs/.changelog.rst.swp docs/Makefile docs/changelog.rst docs/conf.py docs/development.rst docs/faq.rst docs/index.rst docs/installation.rst docs/roadmap.rst docs/troubleshooting.rst docs/tutorial.rst docs/_static/rtd.css docs/_templates/layout.html docs/api/contrib/console.rst docs/api/contrib/django.rst docs/api/contrib/files.rst docs/api/contrib/project.rst docs/api/core/colors.rst docs/api/core/context_managers.rst docs/api/core/decorators.rst docs/api/core/docs.rst docs/api/core/network.rst docs/api/core/operations.rst docs/api/core/tasks.rst docs/api/core/utils.rst docs/usage/env.rst docs/usage/execution.rst docs/usage/fab.rst docs/usage/fabfiles.rst docs/usage/interactivity.rst docs/usage/library.rst docs/usage/output_controls.rst docs/usage/parallel.rst docs/usage/ssh.rst docs/usage/tasks.rst fabfile/__init__.py fabfile/docs.py fabfile/tag.py fabfile/utils.py fabric/__init__.py fabric/api.py fabric/auth.py fabric/colors.py fabric/context_managers.py fabric/decorators.py fabric/docs.py fabric/exceptions.py fabric/io.py fabric/job_queue.py fabric/main.py fabric/network.py fabric/operations.py fabric/sftp.py fabric/state.py fabric/task_utils.py fabric/tasks.py fabric/thread_handling.py fabric/utils.py fabric/version.py fabric/contrib/__init__.py fabric/contrib/console.py fabric/contrib/django.py fabric/contrib/files.py fabric/contrib/project.py tests/Python26SocketServer.py tests/client.key tests/client.key.pub tests/fake_filesystem.py tests/mock_streams.py tests/private.key tests/server.py tests/test_context_managers.py tests/test_contrib.py tests/test_decorators.py tests/test_main.py tests/test_network.py tests/test_operations.py tests/test_parallel.py tests/test_project.py tests/test_server.py 
tests/test_state.py tests/test_tasks.py tests/test_utils.py tests/test_version.py tests/utils.py tests/support/__init__.py tests/support/classbased_task_fabfile.py tests/support/decorated_fabfile.py tests/support/decorated_fabfile_with_classbased_task.py tests/support/decorated_fabfile_with_modules.py tests/support/decorator_order.py tests/support/deep.py tests/support/default_task_submodule.py tests/support/default_tasks.py tests/support/docstring.py tests/support/explicit_fabfile.py tests/support/flat_alias.py tests/support/flat_aliases.py tests/support/implicit_fabfile.py tests/support/mapping.py tests/support/module_fabtasks.py tests/support/nested_alias.py tests/support/nested_aliases.py tests/support/ssh_config tests/support/testserver_ssh_config tests/support/submodule/__init__.py tests/support/submodule/subsubmodule/__init__.py tests/support/tree/__init__.py tests/support/tree/db.py tests/support/tree/system/__init__.py tests/support/tree/system/debian.pyFabric-1.8.2/Fabric.egg-info/top_level.txt000644 000765 000024 00000000017 12277461504 021276 0ustar00jforcierstaff000000 000000 fabfile fabric Fabric-1.8.2/fabric/__init__.py000644 000765 000024 00000000074 12257074160 017222 0ustar00jforcierstaff000000 000000 """ See `fabric.api` for the publically importable API. """ Fabric-1.8.2/fabric/api.py000644 000765 000024 00000001442 12277451120 016231 0ustar00jforcierstaff000000 000000 """ Non-init module for doing convenient * imports from. Necessary because if we did this in __init__, one would be unable to import anything else inside the package -- like, say, the version number used in setup.py -- without triggering loads of most of the code. Which doesn't work so well when you're using setup.py to install e.g. ssh! """ from fabric.context_managers import (cd, hide, settings, show, path, prefix, lcd, quiet, warn_only, remote_tunnel, shell_env) from fabric.decorators import (hosts, roles, runs_once, with_settings, task, serial, parallel) from fabric.operations import (require, prompt, put, get, run, sudo, local, reboot, open_shell) from fabric.state import env, output from fabric.utils import abort, warn, puts, fastprint from fabric.tasks import execute Fabric-1.8.2/fabric/auth.py000644 000765 000024 00000001036 12277451120 016420 0ustar00jforcierstaff000000 000000 """ Common authentication subroutines. Primarily for internal use. """ def get_password(user, host, port): from fabric.state import env from fabric.network import join_host_strings host_string = join_host_strings(user, host, port) return env.passwords.get(host_string, env.password) def set_password(user, host, port, password): from fabric.state import env from fabric.network import join_host_strings host_string = join_host_strings(user, host, port) env.password = env.passwords[host_string] = password Fabric-1.8.2/fabric/colors.py000644 000765 000024 00000002175 12257074160 016770 0ustar00jforcierstaff000000 000000 """ .. versionadded:: 0.9.2 Functions for wrapping strings in ANSI color codes. Each function within this module returns the input string ``text``, wrapped with ANSI color codes for the appropriate color. 
For example, to print some text as green on supporting terminals:: from fabric.colors import green print(green("This text is green!")) Because these functions simply return modified strings, you can nest them:: from fabric.colors import red, green print(red("This sentence is red, except for " + \ green("these words, which are green") + ".")) If ``bold`` is set to ``True``, the ANSI flag for bolding will be flipped on for that particular invocation, which usually shows up as a bold or brighter version of the original color on most terminals. """ def _wrap_with(code): def inner(text, bold=False): c = code if bold: c = "1;%s" % c return "\033[%sm%s\033[0m" % (c, text) return inner red = _wrap_with('31') green = _wrap_with('32') yellow = _wrap_with('33') blue = _wrap_with('34') magenta = _wrap_with('35') cyan = _wrap_with('36') white = _wrap_with('37') Fabric-1.8.2/fabric/context_managers.py000644 000765 000024 00000051201 12277451120 021017 0ustar00jforcierstaff000000 000000 """ Context managers for use with the ``with`` statement. .. note:: When using Python 2.5, you will need to start your fabfile with ``from __future__ import with_statement`` in order to make use of the ``with`` statement (which is a regular, non ``__future__`` feature of Python 2.6+.) .. note:: If you are using multiple directly nested ``with`` statements, it can be convenient to use multiple context expressions in one single with statement. Instead of writing:: with cd('/path/to/app'): with prefix('workon myvenv'): run('./manage.py syncdb') run('./manage.py loaddata myfixture') you can write:: with cd('/path/to/app'), prefix('workon myvenv'): run('./manage.py syncdb') run('./manage.py loaddata myfixture') Note that you need Python 2.7+ for this to work. On Python 2.5 or 2.6, you can do the following:: from contextlib import nested with nested(cd('/path/to/app'), prefix('workon myvenv')): ... Finally, note that `~fabric.context_managers.settings` implements ``nested`` itself -- see its API doc for details. """ from contextlib import contextmanager, nested import sys import socket import select from fabric.thread_handling import ThreadHandler from fabric.state import output, win32, connections, env from fabric import state if not win32: import termios import tty def _set_output(groups, which): """ Refactored subroutine used by ``hide`` and ``show``. """ try: # Preserve original values, pull in new given value to use previous = {} for group in output.expand_aliases(groups): previous[group] = output[group] output[group] = which # Yield control yield finally: # Restore original values output.update(previous) def documented_contextmanager(func): wrapper = contextmanager(func) wrapper.undecorated = func return wrapper @documented_contextmanager def show(*groups): """ Context manager for setting the given output ``groups`` to True. ``groups`` must be one or more strings naming the output groups defined in `~fabric.state.output`. The given groups will be set to True for the duration of the enclosed block, and restored to their previous value afterwards. For example, to turn on debug output (which is typically off by default):: def my_task(): with show('debug'): run('ls /var/www') As almost all output groups are displayed by default, `show` is most useful for turning on the normally-hidden ``debug`` group, or when you know or suspect that code calling your own code is trying to hide output with `hide`. 
""" return _set_output(groups, True) @documented_contextmanager def hide(*groups): """ Context manager for setting the given output ``groups`` to False. ``groups`` must be one or more strings naming the output groups defined in `~fabric.state.output`. The given groups will be set to False for the duration of the enclosed block, and restored to their previous value afterwards. For example, to hide the "[hostname] run:" status lines, as well as preventing printout of stdout and stderr, one might use `hide` as follows:: def my_task(): with hide('running', 'stdout', 'stderr'): run('ls /var/www') """ return _set_output(groups, False) @documented_contextmanager def _setenv(variables): """ Context manager temporarily overriding ``env`` with given key/value pairs. A callable that returns a dict can also be passed. This is necessary when new values are being calculated from current values, in order to ensure that the "current" value is current at the time that the context is entered, not when the context manager is initialized. (See Issue #736.) This context manager is used internally by `settings` and is not intended to be used directly. """ if callable(variables): variables = variables() clean_revert = variables.pop('clean_revert', False) previous = {} new = [] for key, value in variables.iteritems(): if key in state.env: previous[key] = state.env[key] else: new.append(key) state.env[key] = value try: yield finally: if clean_revert: for key, value in variables.iteritems(): # If the current env value for this key still matches the # value we set it to beforehand, we are OK to revert it to the # pre-block value. if key in state.env and value == state.env[key]: if key in previous: state.env[key] = previous[key] else: del state.env[key] else: state.env.update(previous) for key in new: del state.env[key] def settings(*args, **kwargs): """ Nest context managers and/or override ``env`` variables. `settings` serves two purposes: * Most usefully, it allows temporary overriding/updating of ``env`` with any provided keyword arguments, e.g. ``with settings(user='foo'):``. Original values, if any, will be restored once the ``with`` block closes. * The keyword argument ``clean_revert`` has special meaning for ``settings`` itself (see below) and will be stripped out before execution. * In addition, it will use `contextlib.nested`_ to nest any given non-keyword arguments, which should be other context managers, e.g. ``with settings(hide('stderr'), show('stdout')):``. .. _contextlib.nested: http://docs.python.org/library/contextlib.html#contextlib.nested These behaviors may be specified at the same time if desired. An example will hopefully illustrate why this is considered useful:: def my_task(): with settings( hide('warnings', 'running', 'stdout', 'stderr'), warn_only=True ): if run('ls /etc/lsb-release'): return 'Ubuntu' elif run('ls /etc/redhat-release'): return 'RedHat' The above task executes a `run` statement, but will warn instead of aborting if the ``ls`` fails, and all output -- including the warning itself -- is prevented from printing to the user. The end result, in this scenario, is a completely silent task that allows the caller to figure out what type of system the remote host is, without incurring the handful of output that would normally occur. Thus, `settings` may be used to set any combination of environment variables in tandem with hiding (or showing) specific levels of output, or in tandem with any other piece of Fabric functionality implemented as a context manager. 
If ``clean_revert`` is set to ``True``, ``settings`` will **not** revert keys which are altered within the nested block, instead only reverting keys whose values remain the same as those given. More examples will make this clear; below is how ``settings`` operates normally:: # Before the block, env.parallel defaults to False, host_string to None with settings(parallel=True, host_string='myhost'): # env.parallel is True # env.host_string is 'myhost' env.host_string = 'otherhost' # env.host_string is now 'otherhost' # Outside the block: # * env.parallel is False again # * env.host_string is None again The internal modification of ``env.host_string`` is nullified -- not always desirable. That's where ``clean_revert`` comes in:: # Before the block, env.parallel defaults to False, host_string to None with settings(parallel=True, host_string='myhost', clean_revert=True): # env.parallel is True # env.host_string is 'myhost' env.host_string = 'otherhost' # env.host_string is now 'otherhost' # Outside the block: # * env.parallel is False again # * env.host_string remains 'otherhost' Brand new keys which did not exist in ``env`` prior to using ``settings`` are also preserved if ``clean_revert`` is active. When ``False``, such keys are removed when the block exits. .. versionadded:: 1.4.1 The ``clean_revert`` kwarg. """ managers = list(args) if kwargs: managers.append(_setenv(kwargs)) return nested(*managers) def cd(path): """ Context manager that keeps directory state when calling remote operations. Any calls to `run`, `sudo`, `get`, or `put` within the wrapped block will implicitly have a string similar to ``"cd && "`` prefixed in order to give the sense that there is actually statefulness involved. .. note:: `cd` only affects *remote* paths -- to modify *local* paths, use `~fabric.context_managers.lcd`. Because use of `cd` affects all such invocations, any code making use of those operations, such as much of the ``contrib`` section, will also be affected by use of `cd`. Like the actual 'cd' shell builtin, `cd` may be called with relative paths (keep in mind that your default starting directory is your remote user's ``$HOME``) and may be nested as well. Below is a "normal" attempt at using the shell 'cd', which doesn't work due to how shell-less SSH connections are implemented -- state is **not** kept between invocations of `run` or `sudo`:: run('cd /var/www') run('ls') The above snippet will list the contents of the remote user's ``$HOME`` instead of ``/var/www``. With `cd`, however, it will work as expected:: with cd('/var/www'): run('ls') # Turns into "cd /var/www && ls" Finally, a demonstration (see inline comments) of nesting:: with cd('/var/www'): run('ls') # cd /var/www && ls with cd('website1'): run('ls') # cd /var/www/website1 && ls .. note:: This context manager is currently implemented by appending to (and, as always, restoring afterwards) the current value of an environment variable, ``env.cwd``. However, this implementation may change in the future, so we do not recommend manually altering ``env.cwd`` -- only the *behavior* of `cd` will have any guarantee of backwards compatibility. .. note:: Space characters will be escaped automatically to make dealing with such directory names easier. .. versionchanged:: 1.0 Applies to `get` and `put` in addition to the command-running operations. .. seealso:: `~fabric.context_managers.lcd` """ return _change_cwd('cwd', path) def lcd(path): """ Context manager for updating local current working directory. 
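A sketch combining `cd` with the file-transfer operations it also affects, per the 1.0 changenote above (all paths are illustrative):

from fabric.api import cd, get, put, run

def rotate_config():
    with cd('/etc/myapp'):                     # hypothetical remote directory
        get('app.conf', '/tmp/app.conf.bak')   # remote side resolves under /etc/myapp
        put('/tmp/app.conf.new', 'app.conf')
        run('cat app.conf')                    # becomes "cd /etc/myapp && cat app.conf"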
This context manager is identical to `~fabric.context_managers.cd`, except that it changes a different env var (`lcwd`, instead of `cwd`) and thus only affects the invocation of `~fabric.operations.local` and the local arguments to `~fabric.operations.get`/`~fabric.operations.put`. Relative path arguments are relative to the local user's current working directory, which will vary depending on where Fabric (or Fabric-using code) was invoked. You can check what this is with `os.getcwd `_. It may be useful to pin things relative to the location of the fabfile in use, which may be found in :ref:`env.real_fabfile ` .. versionadded:: 1.0 """ return _change_cwd('lcwd', path) def _change_cwd(which, path): path = path.replace(' ', '\ ') if state.env.get(which) and not path.startswith('/'): new_cwd = state.env.get(which) + '/' + path else: new_cwd = path return _setenv({which: new_cwd}) def path(path, behavior='append'): """ Append the given ``path`` to the PATH used to execute any wrapped commands. Any calls to `run` or `sudo` within the wrapped block will implicitly have a string similar to ``"PATH=$PATH: "`` prepended before the given command. You may customize the behavior of `path` by specifying the optional ``behavior`` keyword argument, as follows: * ``'append'``: append given path to the current ``$PATH``, e.g. ``PATH=$PATH:``. This is the default behavior. * ``'prepend'``: prepend given path to the current ``$PATH``, e.g. ``PATH=:$PATH``. * ``'replace'``: ignore previous value of ``$PATH`` altogether, e.g. ``PATH=``. .. note:: This context manager is currently implemented by modifying (and, as always, restoring afterwards) the current value of environment variables, ``env.path`` and ``env.path_behavior``. However, this implementation may change in the future, so we do not recommend manually altering them directly. .. versionadded:: 1.0 """ return _setenv({'path': path, 'path_behavior': behavior}) def prefix(command): """ Prefix all wrapped `run`/`sudo` commands with given command plus ``&&``. This is nearly identical to `~fabric.operations.cd`, except that nested invocations append to a list of command strings instead of modifying a single string. Most of the time, you'll want to be using this alongside a shell script which alters shell state, such as ones which export or alter shell environment variables. For example, one of the most common uses of this tool is with the ``workon`` command from `virtualenvwrapper `_:: with prefix('workon myvenv'): run('./manage.py syncdb') In the above snippet, the actual shell command run would be this:: $ workon myvenv && ./manage.py syncdb This context manager is compatible with `~fabric.context_managers.cd`, so if your virtualenv doesn't ``cd`` in its ``postactivate`` script, you could do the following:: with cd('/path/to/app'): with prefix('workon myvenv'): run('./manage.py syncdb') run('./manage.py loaddata myfixture') Which would result in executions like so:: $ cd /path/to/app && workon myvenv && ./manage.py syncdb $ cd /path/to/app && workon myvenv && ./manage.py loaddata myfixture Finally, as alluded to near the beginning, `~fabric.context_managers.prefix` may be nested if desired, e.g.:: with prefix('workon myenv'): run('ls') with prefix('source /some/script'): run('touch a_file') The result:: $ workon myenv && ls $ workon myenv && source /some/script && touch a_file Contrived, but hopefully illustrative. 
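For `path` above, which has no inline usage example of its ``behavior`` argument, a short sketch (the directory and tool name are hypothetical):

from fabric.api import path, run

def use_vendored_tools():
    with path('/opt/tools/bin', behavior='prepend'):
        # Executed with PATH=/opt/tools/bin:$PATH for this block only
        run('mytool --version')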
""" return _setenv(lambda: {'command_prefixes': state.env.command_prefixes + [command]}) @documented_contextmanager def char_buffered(pipe): """ Force local terminal ``pipe`` be character, not line, buffered. Only applies on Unix-based systems; on Windows this is a no-op. """ if win32 or not pipe.isatty(): yield else: old_settings = termios.tcgetattr(pipe) tty.setcbreak(pipe) try: yield finally: termios.tcsetattr(pipe, termios.TCSADRAIN, old_settings) def shell_env(**kw): """ Set shell environment variables for wrapped commands. For example, the below shows how you might set a ZeroMQ related environment variable when installing a Python ZMQ library:: with shell_env(ZMQ_DIR='/home/user/local'): run('pip install pyzmq') As with `~fabric.context_managers.prefix`, this effectively turns the ``run`` command into:: $ export ZMQ_DIR='/home/user/local' && pip install pyzmq Multiple key-value pairs may be given simultaneously. .. note:: If used to affect the behavior of `~fabric.operations.local` when running from a Windows localhost, ``SET`` commands will be used to implement this feature. """ return _setenv({'shell_env': kw}) def _forwarder(chan, sock): # Bidirectionally forward data between a socket and a Paramiko channel. while True: r, w, x = select.select([sock, chan], [], []) if sock in r: data = sock.recv(1024) if len(data) == 0: break chan.send(data) if chan in r: data = chan.recv(1024) if len(data) == 0: break sock.send(data) chan.close() sock.close() @documented_contextmanager def remote_tunnel(remote_port, local_port=None, local_host="localhost", remote_bind_address="127.0.0.1"): """ Create a tunnel forwarding a locally-visible port to the remote target. For example, you can let the remote host access a database that is installed on the client host:: # Map localhost:6379 on the server to localhost:6379 on the client, # so that the remote 'redis-cli' program ends up speaking to the local # redis-server. with remote_tunnel(6379): run("redis-cli -i") The database might be installed on a client only reachable from the client host (as opposed to *on* the client itself):: # Map localhost:6379 on the server to redis.internal:6379 on the client with remote_tunnel(6379, local_host="redis.internal") run("redis-cli -i") ``remote_tunnel`` accepts up to four arguments: * ``remote_port`` (mandatory) is the remote port to listen to. * ``local_port`` (optional) is the local port to connect to; the default is the same port as the remote one. * ``local_host`` (optional) is the locally-reachable computer (DNS name or IP address) to connect to; the default is ``localhost`` (that is, the same computer Fabric is running on). * ``remote_bind_address`` (optional) is the remote IP address to bind to for listening, on the current target. It should be an IP address assigned to an interface on the target (or a DNS name that resolves to such IP). You can use "0.0.0.0" to bind to all interfaces. .. note:: By default, most SSH servers only allow remote tunnels to listen to the localhost interface (127.0.0.1). In these cases, `remote_bind_address` is ignored by the server, and the tunnel will listen only to 127.0.0.1. .. 
versionadded: 1.6 """ if local_port is None: local_port = remote_port sockets = [] channels = [] threads = [] def accept(channel, (src_addr, src_port), (dest_addr, dest_port)): channels.append(channel) sock = socket.socket() sockets.append(sock) try: sock.connect((local_host, local_port)) except Exception, e: print "[%s] rtunnel: cannot connect to %s:%d (from local)" % (env.host_string, local_host, local_port) chan.close() return print "[%s] rtunnel: opened reverse tunnel: %r -> %r -> %r"\ % (env.host_string, channel.origin_addr, channel.getpeername(), (local_host, local_port)) th = ThreadHandler('fwd', _forwarder, channel, sock) threads.append(th) transport = connections[env.host_string].get_transport() transport.request_port_forward(remote_bind_address, remote_port, handler=accept) try: yield finally: for sock, chan, th in zip(sockets, channels, threads): sock.close() chan.close() th.thread.join() th.raise_if_needed() transport.cancel_port_forward(remote_bind_address, remote_port) quiet = lambda: settings(hide('everything'), warn_only=True) quiet.__doc__ = """ Alias to ``settings(hide('everything'), warn_only=True)``. Useful for wrapping remote interrogative commands which you expect to fail occasionally, and/or which you want to silence. Example:: with quiet(): have_build_dir = run("test -e /tmp/build").succeeded When used in a task, the above snippet will not produce any ``run: test -e /tmp/build`` line, nor will any stdout/stderr display, and command failure is ignored. .. seealso:: :ref:`env.warn_only `, `~fabric.context_managers.settings`, `~fabric.context_managers.hide` .. versionadded:: 1.5 """ warn_only = lambda: settings(warn_only=True) warn_only.__doc__ = """ Alias to ``settings(warn_only=True)``. .. seealso:: :ref:`env.warn_only `, `~fabric.context_managers.settings`, `~fabric.context_managers.quiet` """ Fabric-1.8.2/fabric/contrib/000755 000765 000024 00000000000 12277461504 016554 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/fabric/decorators.py000644 000765 000024 00000016562 12257074160 017641 0ustar00jforcierstaff000000 000000 """ Convenience decorators for use in fabfiles. """ from __future__ import with_statement import types from functools import wraps from Crypto import Random from fabric import tasks from .context_managers import settings def task(*args, **kwargs): """ Decorator declaring the wrapped function to be a new-style task. May be invoked as a simple, argument-less decorator (i.e. ``@task``) or with arguments customizing its behavior (e.g. ``@task(alias='myalias')``). Please see the :ref:`new-style task ` documentation for details on how to use this decorator. .. versionchanged:: 1.2 Added the ``alias``, ``aliases``, ``task_class`` and ``default`` keyword arguments. See :ref:`task-decorator-arguments` for details. .. versionchanged:: 1.5 Added the ``name`` keyword argument. .. 
seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.tasks.WrappedCallableTask` """ invoked = bool(not args or kwargs) task_class = kwargs.pop("task_class", tasks.WrappedCallableTask) if not invoked: func, args = args[0], () def wrapper(func): return task_class(func, *args, **kwargs) return wrapper if invoked else wrapper(func) def _wrap_as_new(original, new): if isinstance(original, tasks.Task): return tasks.WrappedCallableTask(new) return new def _list_annotating_decorator(attribute, *values): def attach_list(func): @wraps(func) def inner_decorator(*args, **kwargs): return func(*args, **kwargs) _values = values # Allow for single iterable argument as well as *args if len(_values) == 1 and not isinstance(_values[0], basestring): _values = _values[0] setattr(inner_decorator, attribute, list(_values)) # Don't replace @task new-style task objects with inner_decorator by # itself -- wrap in a new Task object first. inner_decorator = _wrap_as_new(func, inner_decorator) return inner_decorator return attach_list def hosts(*host_list): """ Decorator defining which host or hosts to execute the wrapped function on. For example, the following will ensure that, barring an override on the command line, ``my_func`` will be run on ``host1``, ``host2`` and ``host3``, and with specific users on ``host1`` and ``host3``:: @hosts('user1@host1', 'host2', 'user2@host3') def my_func(): pass `~fabric.decorators.hosts` may be invoked with either an argument list (``@hosts('host1')``, ``@hosts('host1', 'host2')``) or a single, iterable argument (``@hosts(['host1', 'host2'])``). Note that this decorator actually just sets the function's ``.hosts`` attribute, which is then read prior to executing the function. .. versionchanged:: 0.9.2 Allow a single, iterable argument (``@hosts(iterable)``) to be used instead of requiring ``@hosts(*iterable)``. """ return _list_annotating_decorator('hosts', *host_list) def roles(*role_list): """ Decorator defining a list of role names, used to look up host lists. A role is simply defined as a key in `env` whose value is a list of one or more host connection strings. For example, the following will ensure that, barring an override on the command line, ``my_func`` will be executed against the hosts listed in the ``webserver`` and ``dbserver`` roles:: env.roledefs.update({ 'webserver': ['www1', 'www2'], 'dbserver': ['db1'] }) @roles('webserver', 'dbserver') def my_func(): pass As with `~fabric.decorators.hosts`, `~fabric.decorators.roles` may be invoked with either an argument list or a single, iterable argument. Similarly, this decorator uses the same mechanism as `~fabric.decorators.hosts` and simply sets ``.roles``. .. versionchanged:: 0.9.2 Allow a single, iterable argument to be used (same as `~fabric.decorators.hosts`). """ return _list_annotating_decorator('roles', *role_list) def runs_once(func): """ Decorator preventing wrapped function from running more than once. By keeping internal state, this decorator allows you to mark a function such that it will only run once per Python interpreter session, which in typical use means "once per invocation of the ``fab`` program". Any function wrapped with this decorator will silently fail to execute the 2nd, 3rd, ..., Nth time it is called, and will return the value of the original run. .. note:: ``runs_once`` does not work with parallel task execution. 
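A sketch of `runs_once` in practice; note its interaction with the `serial`/`parallel` decorators defined just below (`runs_once` marks the task serial). Task and path names are made up:

from fabric.api import task, runs_once, run

@task
@runs_once
def schema_version():
    # Executes against the first host only; subsequent calls within the
    # same `fab` session return the cached value.
    return run('cat /srv/app/SCHEMA_VERSION')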
""" @wraps(func) def decorated(*args, **kwargs): if not hasattr(decorated, 'return_value'): decorated.return_value = func(*args, **kwargs) return decorated.return_value decorated = _wrap_as_new(func, decorated) # Mark as serial (disables parallelism) and return return serial(decorated) def serial(func): """ Forces the wrapped function to always run sequentially, never in parallel. This decorator takes precedence over the global value of :ref:`env.parallel `. However, if a task is decorated with both `~fabric.decorators.serial` *and* `~fabric.decorators.parallel`, `~fabric.decorators.parallel` wins. .. versionadded:: 1.3 """ if not getattr(func, 'parallel', False): func.serial = True return _wrap_as_new(func, func) def parallel(pool_size=None): """ Forces the wrapped function to run in parallel, instead of sequentially. This decorator takes precedence over the global value of :ref:`env.parallel `. It also takes precedence over `~fabric.decorators.serial` if a task is decorated with both. .. versionadded:: 1.3 """ called_without_args = type(pool_size) == types.FunctionType def real_decorator(func): @wraps(func) def inner(*args, **kwargs): # Required for ssh/PyCrypto to be happy in multiprocessing # (as far as we can tell, this is needed even with the extra such # calls in newer versions of paramiko.) Random.atfork() return func(*args, **kwargs) inner.parallel = True inner.serial = False inner.pool_size = None if called_without_args else pool_size return _wrap_as_new(func, inner) # Allow non-factory-style decorator use (@decorator vs @decorator()) if called_without_args: return real_decorator(pool_size) return real_decorator def with_settings(*arg_settings, **kw_settings): """ Decorator equivalent of ``fabric.context_managers.settings``. Allows you to wrap an entire function as if it was called inside a block with the ``settings`` context manager. This may be useful if you know you want a given setting applied to an entire function body, or wish to retrofit old code without indenting everything. For example, to turn aborts into warnings for an entire task function:: @with_settings(warn_only=True) def foo(): ... .. seealso:: `~fabric.context_managers.settings` .. versionadded:: 1.1 """ def outer(func): @wraps(func) def inner(*args, **kwargs): with settings(*arg_settings, **kw_settings): return func(*args, **kwargs) return _wrap_as_new(func, inner) return outer Fabric-1.8.2/fabric/docs.py000644 000765 000024 00000004730 12257074160 016416 0ustar00jforcierstaff000000 000000 from fabric.tasks import WrappedCallableTask def unwrap_tasks(module, hide_nontasks=False): """ Replace task objects on ``module`` with their wrapped functions instead. Specifically, look for instances of `~fabric.tasks.WrappedCallableTask` and replace them with their ``.wrapped`` attribute (the original decorated function.) This is intended for use with the Sphinx autodoc tool, to be run near the bottom of a project's ``conf.py``. It ensures that the autodoc extension will have full access to the "real" function, in terms of function signature and so forth. Without use of ``unwrap_tasks``, autodoc is unable to access the function signature (though it is able to see e.g. ``__doc__``.) For example, at the bottom of your ``conf.py``:: from fabric.docs import unwrap_tasks import my_package.my_fabfile unwrap_tasks(my_package.my_fabfile) You can go above and beyond, and explicitly **hide** all non-task functions, by saying ``hide_nontasks=True``. This renames all objects failing the "is it a task?" 
check so they appear to be private, which will then cause autodoc to skip over them. ``hide_nontasks`` is thus useful when you have a fabfile mixing in subroutines with real tasks and want to document *just* the real tasks. If you run this within an actual Fabric-code-using session (instead of within a Sphinx ``conf.py``), please seek immediate medical attention. .. versionadded: 1.5 .. seealso:: `~fabric.tasks.WrappedCallableTask`, `~fabric.decorators.task` """ set_tasks = [] for name, obj in vars(module).items(): if isinstance(obj, WrappedCallableTask): setattr(module, obj.name, obj.wrapped) # Handle situation where a task's real name shadows a builtin. # If the builtin comes after the task in vars().items(), the object # we just setattr'd above will get re-hidden :( set_tasks.append(obj.name) # In the same vein, "privately" named wrapped functions whose task # name is public, needs to get renamed so autodoc picks it up. obj.wrapped.func_name = obj.name else: if name in set_tasks: continue has_docstring = getattr(obj, '__doc__', False) if hide_nontasks and has_docstring and not name.startswith('_'): setattr(module, '_%s' % name, obj) delattr(module, name) Fabric-1.8.2/fabric/exceptions.py000644 000765 000024 00000001343 12277451120 017641 0ustar00jforcierstaff000000 000000 """ Custom Fabric exception classes. Most are simply distinct Exception subclasses for purposes of message-passing (though typically still in actual error situations.) """ class NetworkError(Exception): # Must allow for calling with zero args/kwargs, since pickle is apparently # stupid with exceptions and tries to call it as such when passed around in # a multiprocessing.Queue. def __init__(self, message=None, wrapped=None): self.message = message self.wrapped = wrapped def __str__(self): return self.message def __repr__(self): return "%s(%s) => %r" % ( self.__class__.__name__, self.message, self.wrapped ) class CommandTimeout(Exception): pass Fabric-1.8.2/fabric/io.py000644 000765 000024 00000021153 12277461064 016100 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import sys import time import re import socket from select import select from fabric.state import env, output, win32 from fabric.auth import get_password, set_password import fabric.network from fabric.network import ssh, normalize from fabric.utils import RingBuffer from fabric.exceptions import CommandTimeout if win32: import msvcrt def _endswith(char_list, substring): tail = char_list[-1 * len(substring):] substring = list(substring) return tail == substring def _has_newline(bytelist): return '\r' in bytelist or '\n' in bytelist def output_loop(*args, **kwargs): OutputLooper(*args, **kwargs).loop() class OutputLooper(object): def __init__(self, chan, attr, stream, capture, timeout): self.chan = chan self.stream = stream self.capture = capture self.timeout = timeout self.read_func = getattr(chan, attr) self.prefix = "[%s] %s: " % ( env.host_string, "out" if attr == 'recv' else "err" ) self.printing = getattr(output, 'stdout' if (attr == 'recv') else 'stderr') self.linewise = (env.linewise or env.parallel) self.reprompt = False self.read_size = 4096 self.write_buffer = RingBuffer([], maxlen=len(self.prefix)) def _flush(self, text): self.stream.write(text) self.stream.flush() self.write_buffer.extend(text) def loop(self): """ Loop, reading from .(), writing to and buffering to . Will raise `~fabric.exceptions.CommandTimeout` if network timeouts continue to be seen past the defined ``self.timeout`` threshold. 
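The `CommandTimeout` raised by this loop can be caught alongside `NetworkError` when driving Fabric from library code. A sketch, assuming the per-command ``timeout`` argument accepted by `run` in this release (network failures may instead surface as an `abort`, depending on settings):

from fabric.api import run
from fabric.exceptions import CommandTimeout, NetworkError

def careful(cmd):
    try:
        return run(cmd, timeout=30)    # feeds OutputLooper's self.timeout
    except CommandTimeout:
        print("%r produced no data for over 30 seconds" % cmd)
    except NetworkError, e:
        print("connection problem: %s (wrapped: %r)" % (e.message, e.wrapped))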
(Timeouts before then are considered part of normal short-timeout fast network reading; see Fabric issue #733 for background.) """ # Internal capture-buffer-like buffer, used solely for state keeping. # Unlike 'capture', nothing is ever purged from this. _buffer = [] # Initialize loop variables initial_prefix_printed = False seen_cr = False line = [] # Allow prefix to be turned off. if not env.output_prefix: self.prefix = "" start = time.time() while True: # Handle actual read try: bytelist = self.read_func(self.read_size) except socket.timeout: elapsed = time.time() - start if self.timeout is not None and elapsed > self.timeout: raise CommandTimeout continue # Empty byte == EOS if bytelist == '': # If linewise, ensure we flush any leftovers in the buffer. if self.linewise and line: self._flush(self.prefix) self._flush("".join(line)) break # A None capture variable implies that we're in open_shell() if self.capture is None: # Just print directly -- no prefixes, no capturing, nada # And since we know we're using a pty in this mode, just go # straight to stdout. self._flush(bytelist) # Otherwise, we're in run/sudo and need to handle capturing and # prompts. else: # Print to user if self.printing: printable_bytes = bytelist # Small state machine to eat \n after \r if printable_bytes[-1] == "\r": seen_cr = True if printable_bytes[0] == "\n" and seen_cr: printable_bytes = printable_bytes[1:] seen_cr = False while _has_newline(printable_bytes) and printable_bytes != "": # at most 1 split ! cr = re.search("(\r\n|\r|\n)", printable_bytes) if cr is None: break end_of_line = printable_bytes[:cr.start(0)] printable_bytes = printable_bytes[cr.end(0):] if not initial_prefix_printed: self._flush(self.prefix) if _has_newline(end_of_line): end_of_line = '' if self.linewise: self._flush("".join(line) + end_of_line + "\n") line = [] else: self._flush(end_of_line + "\n") initial_prefix_printed = False if self.linewise: line += [printable_bytes] else: if not initial_prefix_printed: self._flush(self.prefix) initial_prefix_printed = True self._flush(printable_bytes) # Now we have handled printing, handle interactivity read_lines = re.split(r"(\r|\n|\r\n)", bytelist) for fragment in read_lines: # Store in capture buffer self.capture += fragment # Store in internal buffer _buffer += fragment # Handle prompts prompt = _endswith(self.capture, env.sudo_prompt) try_again = (_endswith(self.capture, env.again_prompt + '\n') or _endswith(self.capture, env.again_prompt + '\r\n')) if prompt: self.prompt() elif try_again: self.try_again() # Print trailing new line if the last thing we printed was our line # prefix. if self.prefix and "".join(self.write_buffer) == self.prefix: self._flush('\n') def prompt(self): # Obtain cached password, if any password = get_password(*normalize(env.host_string)) # Remove the prompt itself from the capture buffer. This is # backwards compatible with Fabric 0.9.x behavior; the user # will still see the prompt on their screen (no way to avoid # this) but at least it won't clutter up the captured text. del self.capture[-1 * len(env.sudo_prompt):] # If the password we just tried was bad, prompt the user again. if (not password) or self.reprompt: # Print the prompt and/or the "try again" notice if # output is being hidden. In other words, since we need # the user's input, they need to see why we're # prompting them. if not self.printing: self._flush(self.prefix) if self.reprompt: self._flush(env.again_prompt + '\n' + self.prefix) self._flush(env.sudo_prompt) # Prompt for, and store, password. 
Give empty prompt so the # initial display "hides" just after the actually-displayed # prompt from the remote end. self.chan.input_enabled = False password = fabric.network.prompt_for_password( prompt=" ", no_colon=True, stream=self.stream ) self.chan.input_enabled = True # Update env.password, env.passwords if necessary user, host, port = normalize(env.host_string) set_password(user, host, port, password) # Reset reprompt flag self.reprompt = False # Send current password down the pipe self.chan.sendall(password + '\n') def try_again(self): # Remove text from capture buffer self.capture = self.capture[:len(env.again_prompt)] # Set state so we re-prompt the user at the next prompt. self.reprompt = True def input_loop(chan, using_pty): while not chan.exit_status_ready(): if win32: have_char = msvcrt.kbhit() else: r, w, x = select([sys.stdin], [], [], 0.0) have_char = (r and r[0] == sys.stdin) if have_char and chan.input_enabled: # Send all local stdin to remote end's stdin byte = msvcrt.getch() if win32 else sys.stdin.read(1) chan.sendall(byte) # Optionally echo locally, if needed. if not using_pty and env.echo_stdin: # Not using fastprint() here -- it prints as 'user' # output level, don't want it to be accidentally hidden sys.stdout.write(byte) sys.stdout.flush() time.sleep(ssh.io_sleep) Fabric-1.8.2/fabric/job_queue.py000644 000765 000024 00000016771 12277451120 017451 0ustar00jforcierstaff000000 000000 """ Sliding-window-based job/task queue class (& example of use.) May use ``multiprocessing.Process`` or ``threading.Thread`` objects as queue items, though within Fabric itself only ``Process`` objects are used/supported. """ from __future__ import with_statement import time import Queue from fabric.state import env from fabric.network import ssh from fabric.context_managers import settings class JobQueue(object): """ The goal of this class is to make a queue of processes to run, and go through them running X number at any given time. So if the bubble is 5 start with 5 running and move the bubble of running procs along the queue looking something like this: Start ........................... [~~~~~].................... ___[~~~~~]................. _________[~~~~~]........... __________________[~~~~~].. ____________________[~~~~~] ___________________________ End """ def __init__(self, max_running, comms_queue): """ Setup the class to resonable defaults. """ self._queued = [] self._running = [] self._completed = [] self._num_of_jobs = 0 self._max = max_running self._comms_queue = comms_queue self._finished = False self._closed = False self._debug = False def _all_alive(self): """ Simply states if all procs are alive or not. Needed to determine when to stop looping, and pop dead procs off and add live ones. """ if self._running: return all([x.is_alive() for x in self._running]) else: return False def __len__(self): """ Just going to use number of jobs as the JobQueue length. """ return self._num_of_jobs def close(self): """ A sanity check, so that the need to care about new jobs being added in the last throws of the job_queue's run are negated. """ if self._debug: print("job queue closed.") self._closed = True def append(self, process): """ Add the Process() to the queue, so that later it can be checked up on. That is if the JobQueue is still open. If the queue is closed, this will just silently do nothing. To get data back out of this process, give ``process`` access to a ``multiprocessing.Queue`` object, and give it here as ``queue``. 
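To round out the `append` docstring: a worker reporting through the comms queue, matching the ``{'name': ..., 'result': ...}`` shape that `_fill_results` (below) expects. The process name is illustrative:

import multiprocessing
from fabric.job_queue import JobQueue

def _probe(name, queue):
    queue.put({'name': name, 'result': 'ok'})   # shape _fill_results expects

comms = multiprocessing.Queue()
jobs = JobQueue(1, comms)
jobs.append(multiprocessing.Process(
    name='localhost', target=_probe, args=('localhost', comms)))
jobs.close()         # run() raises unless the queue was closed first
print(jobs.run())    # {'localhost': {'exit_code': 0, 'results': 'ok'}}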
Then ``JobQueue.run`` will include the queue's contents in its
        return value.
        """
        if not self._closed:
            self._queued.append(process)
            self._num_of_jobs += 1
            if self._debug:
                print("job queue appended %s." % process.name)

    def run(self):
        """
        This is the workhorse. It will take the initial jobs from the _queue,
        start them, add them to _running, and then go into the main running
        loop.

        This loop will check for done procs, if found, move them out of
        _running into _completed. It also checks for a _running queue with
        open spots, which it will then fill as discovered.

        To end the loop, there have to be no running procs, and no more procs
        to be run in the queue.

        This function returns a dictionary keyed by job name, holding each
        job's exit code and any results it reported.
        """
        def _advance_the_queue():
            """
            Helper function to do the job of popping a new proc off the
            queue, starting it, then adding it to the running queue. This
            will eventually deplete the _queue, which is a condition of
            stopping the running while loop.

            It also sets the env.host_string from the job.name, so that
            fabric knows that this is the host to be making connections on.
            """
            job = self._queued.pop()
            if self._debug:
                print("Popping '%s' off the queue and starting it" % job.name)
            with settings(clean_revert=True, host_string=job.name,
                    host=job.name):
                job.start()
            self._running.append(job)

        # Prep return value so we can start filling it during main loop
        results = {}
        for job in self._queued:
            results[job.name] = dict.fromkeys(('exit_code', 'results'))

        if not self._closed:
            raise Exception("Need to close() before starting.")

        if self._debug:
            print("Job queue starting.")

        while len(self._running) < self._max:
            _advance_the_queue()

        # Main loop!
        while not self._finished:
            while len(self._running) < self._max and self._queued:
                _advance_the_queue()

            if not self._all_alive():
                for id, job in enumerate(self._running):
                    if not job.is_alive():
                        if self._debug:
                            print("Job queue found finished proc: %s." %
                                job.name)
                        done = self._running.pop(id)
                        self._completed.append(done)

                if self._debug:
                    print("Job queue has %d running." % len(self._running))

            if not (self._queued or self._running):
                if self._debug:
                    print("Job queue finished.")

                for job in self._completed:
                    job.join()

                self._finished = True

            # Each loop pass, try pulling results off the queue to keep its
            # size down. At this point, we don't actually care if any results
            # have arrived yet; they will be picked up after the main loop.
            self._fill_results(results)

            time.sleep(ssh.io_sleep)

        # Consume anything left in the results queue. Note that there is no
        # need to block here, as the main loop ensures that all workers will
        # already have finished.
        self._fill_results(results)

        # Attach exit codes now that we're all done & have joined all jobs
        for job in self._completed:
            results[job.name]['exit_code'] = job.exitcode

        return results

    def _fill_results(self, results):
        """
        Attempt to pull data off self._comms_queue and add to 'results' dict.
        If no data is available (i.e. the queue is empty), bail immediately.
        """
        while True:
            try:
                datum = self._comms_queue.get_nowait()
                results[datum['name']]['results'] = datum['result']
            except Queue.Empty:
                break

#### Sample

def try_using(parallel_type):
    """
    This will run the queue through its paces, and show a simple way of using
    the job queue.
    """
    def print_number(number):
        """
        Simple function to give a simple task to execute.
""" print(number) if parallel_type == "multiprocessing": from multiprocessing import Process as Bucket elif parallel_type == "threading": from threading import Thread as Bucket # Make a job_queue with a bubble of len 5, and have it print verbosely jobs = JobQueue(5) jobs._debug = True # Add 20 procs onto the stack for x in range(20): jobs.append(Bucket( target=print_number, args=[x], kwargs={}, )) # Close up the queue and then start it's execution jobs.close() jobs.run() if __name__ == '__main__': try_using("multiprocessing") try_using("threading") Fabric-1.8.2/fabric/main.py000644 000765 000024 00000062075 12277451130 016416 0ustar00jforcierstaff000000 000000 """ This module contains Fab's `main` method plus related subroutines. `main` is executed as the command line ``fab`` program and takes care of parsing options and commands, loading the user settings file, loading a fabfile, and executing the commands given. The other callables defined in this module are internal only. Anything useful to individuals leveraging Fabric as a library, should be kept elsewhere. """ import getpass from operator import isMappingType from optparse import OptionParser import os import sys import types # For checking callables against the API, & easy mocking from fabric import api, state, colors from fabric.contrib import console, files, project from fabric.network import disconnect_all, ssh from fabric.state import env_options from fabric.tasks import Task, execute, get_task_details from fabric.task_utils import _Dict, crawl from fabric.utils import abort, indent, warn, _pty_size # One-time calculation of "all internal callables" to avoid doing this on every # check of a given fabfile callable (in is_classic_task()). _modules = [api, project, files, console, colors] _internals = reduce(lambda x, y: x + filter(callable, vars(y).values()), _modules, [] ) # Module recursion cache class _ModuleCache(object): """ Set-like object operating on modules and storing __name__s internally. """ def __init__(self): self.cache = set() def __contains__(self, value): return value.__name__ in self.cache def add(self, value): return self.cache.add(value.__name__) def clear(self): return self.cache.clear() _seen = _ModuleCache() def load_settings(path): """ Take given file path and return dictionary of any key=value pairs found. Usage docs are in docs/usage/fab.rst, in "Settings files." """ if os.path.exists(path): comments = lambda s: s and not s.startswith("#") settings = filter(comments, open(path, 'r')) return dict((k.strip(), v.strip()) for k, _, v in [s.partition('=') for s in settings]) # Handle nonexistent or empty settings file return {} def _is_package(path): """ Is the given path a Python package? """ return ( os.path.isdir(path) and os.path.exists(os.path.join(path, '__init__.py')) ) def find_fabfile(names=None): """ Attempt to locate a fabfile, either explicitly or by searching parent dirs. Usage docs are in docs/usage/fabfiles.rst, in "Fabfile discovery." """ # Obtain env value if not given specifically if names is None: names = [state.env.fabfile] # Create .py version if necessary if not names[0].endswith('.py'): names += [names[0] + '.py'] # Does the name contain path elements? 
if os.path.dirname(names[0]): # If so, expand home-directory markers and test for existence for name in names: expanded = os.path.expanduser(name) if os.path.exists(expanded): if name.endswith('.py') or _is_package(expanded): return os.path.abspath(expanded) else: # Otherwise, start in cwd and work downwards towards filesystem root path = '.' # Stop before falling off root of filesystem (should be platform # agnostic) while os.path.split(os.path.abspath(path))[1]: for name in names: joined = os.path.join(path, name) if os.path.exists(joined): if name.endswith('.py') or _is_package(joined): return os.path.abspath(joined) path = os.path.join('..', path) # Implicit 'return None' if nothing was found def is_classic_task(tup): """ Takes (name, object) tuple, returns True if it's a non-Fab public callable. """ name, func = tup try: is_classic = ( callable(func) and (func not in _internals) and not name.startswith('_') ) # Handle poorly behaved __eq__ implementations except (ValueError, TypeError): is_classic = False return is_classic def load_fabfile(path, importer=None): """ Import given fabfile path and return (docstring, callables). Specifically, the fabfile's ``__doc__`` attribute (a string) and a dictionary of ``{'name': callable}`` containing all callables which pass the "is a Fabric task" test. """ if importer is None: importer = __import__ # Get directory and fabfile name directory, fabfile = os.path.split(path) # If the directory isn't in the PYTHONPATH, add it so our import will work added_to_path = False index = None if directory not in sys.path: sys.path.insert(0, directory) added_to_path = True # If the directory IS in the PYTHONPATH, move it to the front temporarily, # otherwise other fabfiles -- like Fabric's own -- may scoop the intended # one. else: i = sys.path.index(directory) if i != 0: # Store index for later restoration index = i # Add to front, then remove from original position sys.path.insert(0, directory) del sys.path[i + 1] # Perform the import (trimming off the .py) imported = importer(os.path.splitext(fabfile)[0]) # Remove directory from path if we added it ourselves (just to be neat) if added_to_path: del sys.path[0] # Put back in original index if we moved it if index is not None: sys.path.insert(index + 1, directory) del sys.path[0] # Actually load tasks docstring, new_style, classic, default = load_tasks_from_module(imported) tasks = new_style if state.env.new_style_tasks else classic # Clean up after ourselves _seen.clear() return docstring, tasks, default def load_tasks_from_module(imported): """ Handles loading all of the tasks for a given `imported` module """ # Obey the use of .__all__ if it is present imported_vars = vars(imported) if "__all__" in imported_vars: imported_vars = [(name, imported_vars[name]) for name in \ imported_vars if name in imported_vars["__all__"]] else: imported_vars = imported_vars.items() # Return a two-tuple value. 
First is the documentation, second is a # dictionary of callables only (and don't include Fab operations or # underscored callables) new_style, classic, default = extract_tasks(imported_vars) return imported.__doc__, new_style, classic, default def extract_tasks(imported_vars): """ Handle extracting tasks from a given list of variables """ new_style_tasks = _Dict() classic_tasks = {} default_task = None if 'new_style_tasks' not in state.env: state.env.new_style_tasks = False for tup in imported_vars: name, obj = tup if is_task_object(obj): state.env.new_style_tasks = True # Use instance.name if defined if obj.name and obj.name != 'undefined': new_style_tasks[obj.name] = obj else: obj.name = name new_style_tasks[name] = obj # Handle aliasing if obj.aliases is not None: for alias in obj.aliases: new_style_tasks[alias] = obj # Handle defaults if obj.is_default: default_task = obj elif is_classic_task(tup): classic_tasks[name] = obj elif is_task_module(obj): docs, newstyle, classic, default = load_tasks_from_module(obj) for task_name, task in newstyle.items(): if name not in new_style_tasks: new_style_tasks[name] = _Dict() new_style_tasks[name][task_name] = task if default is not None: new_style_tasks[name].default = default return new_style_tasks, classic_tasks, default_task def is_task_module(a): """ Determine if the provided value is a task module """ #return (type(a) is types.ModuleType and # any(map(is_task_object, vars(a).values()))) if isinstance(a, types.ModuleType) and a not in _seen: # Flag module as seen _seen.add(a) # Signal that we need to check it out return True def is_task_object(a): """ Determine if the provided value is a ``Task`` object. This returning True signals that all tasks within the fabfile module must be Task objects. """ return isinstance(a, Task) and a.use_task_objects def parse_options(): """ Handle command-line options with optparse.OptionParser. Return list of arguments, largely for use in `parse_arguments`. """ # # Initialize # parser = OptionParser( usage=("fab [options] " "[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...")) # # Define options that don't become `env` vars (typically ones which cause # Fabric to do something other than its normal execution, such as # --version) # # Display info about a specific command parser.add_option('-d', '--display', metavar='NAME', help="print detailed info about command NAME" ) # Control behavior of --list LIST_FORMAT_OPTIONS = ('short', 'normal', 'nested') parser.add_option('-F', '--list-format', choices=LIST_FORMAT_OPTIONS, default='normal', metavar='FORMAT', help="formats --list, choices: %s" % ", ".join(LIST_FORMAT_OPTIONS) ) parser.add_option('-I', '--initial-password-prompt', action='store_true', default=False, help="Force password prompt up-front" ) # List Fab commands found in loaded fabfiles/source files parser.add_option('-l', '--list', action='store_true', dest='list_commands', default=False, help="print list of possible commands and exit" ) # Allow setting of arbitrary env vars at runtime. parser.add_option('--set', metavar="KEY=VALUE,...", dest='env_settings', default="", help="comma separated KEY=VALUE pairs to set Fab env vars" ) # Like --list, but text processing friendly parser.add_option('--shortlist', action='store_true', dest='shortlist', default=False, help="alias for -F short --list" ) # Version number (optparse gives you --version but we have to do it # ourselves to get -V too. 
sigh) parser.add_option('-V', '--version', action='store_true', dest='show_version', default=False, help="show program's version number and exit" ) # # Add in options which are also destined to show up as `env` vars. # for option in env_options: parser.add_option(option) # # Finalize # # Return three-tuple of parser + the output from parse_args (opt obj, args) opts, args = parser.parse_args() return parser, opts, args def _is_task(name, value): """ Is the object a task as opposed to e.g. a dict or int? """ return is_classic_task((name, value)) or is_task_object(value) def _sift_tasks(mapping): tasks, collections = [], [] for name, value in mapping.iteritems(): if _is_task(name, value): tasks.append(name) elif isMappingType(value): collections.append(name) tasks = sorted(tasks) collections = sorted(collections) return tasks, collections def _task_names(mapping): """ Flatten & sort task names in a breadth-first fashion. Tasks are always listed before submodules at the same level, but within those two groups, sorting is alphabetical. """ tasks, collections = _sift_tasks(mapping) for collection in collections: module = mapping[collection] if hasattr(module, 'default'): tasks.append(collection) join = lambda x: ".".join((collection, x)) tasks.extend(map(join, _task_names(module))) return tasks def _print_docstring(docstrings, name): if not docstrings: return False docstring = crawl(name, state.commands).__doc__ if isinstance(docstring, basestring): return docstring def _normal_list(docstrings=True): result = [] task_names = _task_names(state.commands) # Want separator between name, description to be straight col max_len = reduce(lambda a, b: max(a, len(b)), task_names, 0) sep = ' ' trail = '...' max_width = _pty_size()[1] - 1 - len(trail) for name in task_names: output = None docstring = _print_docstring(docstrings, name) if docstring: lines = filter(None, docstring.splitlines()) first_line = lines[0].strip() # Truncate it if it's longer than N chars size = max_width - (max_len + len(sep) + len(trail)) if len(first_line) > size: first_line = first_line[:size] + trail output = name.ljust(max_len) + sep + first_line # Or nothing (so just the name) else: output = name result.append(indent(output)) return result def _nested_list(mapping, level=1): result = [] tasks, collections = _sift_tasks(mapping) # Tasks come first result.extend(map(lambda x: indent(x, spaces=level * 4), tasks)) for collection in collections: module = mapping[collection] # Section/module "header" result.append(indent(collection + ":", spaces=level * 4)) # Recurse result.extend(_nested_list(module, level + 1)) return result COMMANDS_HEADER = "Available commands" NESTED_REMINDER = " (remember to call as module.[...].task)" def list_commands(docstring, format_): """ Print all found commands/tasks, then exit. Invoked with ``-l/--list.`` If ``docstring`` is non-empty, it will be printed before the task list. ``format_`` should conform to the options specified in ``LIST_FORMAT_OPTIONS``, e.g. ``"short"``, ``"normal"``. 
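To make the list formats concrete, here is roughly what the three ``--list-format`` variants produce for a hypothetical fabfile defining a top-level ``deploy`` task and a ``db`` submodule containing a ``migrate`` task:

$ fab --shortlist          # alias for -F short --list; bare names only
deploy
db.migrate

$ fab --list               # "normal": name column plus first docstring line
Available commands:

    deploy        Push code and restart services.
    db.migrate    Apply outstanding schema changes.

$ fab -F nested --list
Available commands (remember to call as module.[...].task):

    deploy
    db:
        migrate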
""" # Short-circuit with simple short output if format_ == "short": return _task_names(state.commands) # Otherwise, handle more verbose modes result = [] # Docstring at top, if applicable if docstring: trailer = "\n" if not docstring.endswith("\n") else "" result.append(docstring + trailer) header = COMMANDS_HEADER if format_ == "nested": header += NESTED_REMINDER result.append(header + ":\n") c = _normal_list() if format_ == "normal" else _nested_list(state.commands) result.extend(c) return result def display_command(name): """ Print command function's docstring, then exit. Invoked with -d/--display. """ # Sanity check command = crawl(name, state.commands) if command is None: msg = "Task '%s' does not appear to exist. Valid task names:\n%s" abort(msg % (name, "\n".join(_normal_list(False)))) # Print out nicely presented docstring if found if hasattr(command, '__details__'): task_details = command.__details__() else: task_details = get_task_details(command) if task_details: print("Displaying detailed information for task '%s':" % name) print('') print(indent(task_details, strip=True)) print('') # Or print notice if not else: print("No detailed information available for task '%s':" % name) sys.exit(0) def _escape_split(sep, argstr): """ Allows for escaping of the separator: e.g. task:arg='foo\, bar' It should be noted that the way bash et. al. do command line parsing, those single quotes are required. """ escaped_sep = r'\%s' % sep if escaped_sep not in argstr: return argstr.split(sep) before, _, after = argstr.partition(escaped_sep) startlist = before.split(sep) # a regular split is fine here unfinished = startlist[-1] startlist = startlist[:-1] # recurse because there may be more escaped separators endlist = _escape_split(sep, after) # finish building the escaped value. we use endlist[0] becaue the first # part of the string sent in recursion is the rest of the escaped value. unfinished += sep + endlist[0] return startlist + [unfinished] + endlist[1:] # put together all the parts def parse_arguments(arguments): """ Parse string list into list of tuples: command, args, kwargs, hosts, roles. See docs/usage/fab.rst, section on "per-task arguments" for details. """ cmds = [] for cmd in arguments: args = [] kwargs = {} hosts = [] roles = [] exclude_hosts = [] if ':' in cmd: cmd, argstr = cmd.split(':', 1) for pair in _escape_split(',', argstr): result = _escape_split('=', pair) if len(result) > 1: k, v = result # Catch, interpret host/hosts/role/roles/exclude_hosts # kwargs if k in ['host', 'hosts', 'role', 'roles', 'exclude_hosts']: if k == 'host': hosts = [v.strip()] elif k == 'hosts': hosts = [x.strip() for x in v.split(';')] elif k == 'role': roles = [v.strip()] elif k == 'roles': roles = [x.strip() for x in v.split(';')] elif k == 'exclude_hosts': exclude_hosts = [x.strip() for x in v.split(';')] # Otherwise, record as usual else: kwargs[k] = v else: args.append(result[0]) cmds.append((cmd, args, kwargs, hosts, roles, exclude_hosts)) return cmds def parse_remainder(arguments): """ Merge list of "remainder arguments" into a single command string. """ return ' '.join(arguments) def update_output_levels(show, hide): """ Update state.output values as per given comma-separated list of key names. For example, ``update_output_levels(show='debug,warnings')`` is functionally equivalent to ``state.output['debug'] = True ; state.output['warnings'] = True``. Conversely, anything given to ``hide`` sets the values to ``False``. 
""" if show: for key in show.split(','): state.output[key] = True if hide: for key in hide.split(','): state.output[key] = False def show_commands(docstring, format, code=0): print("\n".join(list_commands(docstring, format))) sys.exit(code) def main(fabfile_locations=None): """ Main command-line execution loop. """ try: # Parse command line options parser, options, arguments = parse_options() # Handle regular args vs -- args arguments = parser.largs remainder_arguments = parser.rargs # Allow setting of arbitrary env keys. # This comes *before* the "specific" env_options so that those may # override these ones. Specific should override generic, if somebody # was silly enough to specify the same key in both places. # E.g. "fab --set shell=foo --shell=bar" should have env.shell set to # 'bar', not 'foo'. for pair in _escape_split(',', options.env_settings): pair = _escape_split('=', pair) # "--set x" => set env.x to True # "--set x=" => set env.x to "" key = pair[0] value = True if len(pair) == 2: value = pair[1] state.env[key] = value # Update env with any overridden option values # NOTE: This needs to remain the first thing that occurs # post-parsing, since so many things hinge on the values in env. for option in env_options: state.env[option.dest] = getattr(options, option.dest) # Handle --hosts, --roles, --exclude-hosts (comma separated string => # list) for key in ['hosts', 'roles', 'exclude_hosts']: if key in state.env and isinstance(state.env[key], basestring): state.env[key] = state.env[key].split(',') # Feed the env.tasks : tasks that are asked to be executed. state.env['tasks'] = arguments # Handle output control level show/hide update_output_levels(show=options.show, hide=options.hide) # Handle version number option if options.show_version: print("Fabric %s" % state.env.version) print("Paramiko %s" % ssh.__version__) sys.exit(0) # Load settings from user settings file, into shared env dict. state.env.update(load_settings(state.env.rcfile)) # Find local fabfile path or abort fabfile = find_fabfile(fabfile_locations) if not fabfile and not remainder_arguments: abort("""Couldn't find any fabfiles! Remember that -f can be used to specify fabfile path, and use -h for help.""") # Store absolute path to fabfile in case anyone needs it state.env.real_fabfile = fabfile # Load fabfile (which calls its module-level code, including # tweaks to env values) and put its commands in the shared commands # dict default = None if fabfile: docstring, callables, default = load_fabfile(fabfile) state.commands.update(callables) # Handle case where we were called bare, i.e. just "fab", and print # a help message. actions = (options.list_commands, options.shortlist, options.display, arguments, remainder_arguments, default) if not any(actions): parser.print_help() sys.exit(1) # Abort if no commands found if not state.commands and not remainder_arguments: abort("Fabfile didn't contain any commands!") # Now that we're settled on a fabfile, inform user. 
if state.output.debug: if fabfile: print("Using fabfile '%s'" % fabfile) else: print("No fabfile loaded -- remainder command only") # Shortlist is now just an alias for the "short" list format; # it overrides use of --list-format if somebody were to specify both if options.shortlist: options.list_format = 'short' options.list_commands = True # List available commands if options.list_commands: show_commands(docstring, options.list_format) # Handle show (command-specific help) option if options.display: display_command(options.display) # If user didn't specify any commands to run, show help if not (arguments or remainder_arguments or default): parser.print_help() sys.exit(0) # Or should it exit with error (1)? # Parse arguments into commands to run (plus args/kwargs/hosts) commands_to_run = parse_arguments(arguments) # Parse remainders into a faux "command" to execute remainder_command = parse_remainder(remainder_arguments) # Figure out if any specified task names are invalid unknown_commands = [] for tup in commands_to_run: if crawl(tup[0], state.commands) is None: unknown_commands.append(tup[0]) # Abort if any unknown commands were specified if unknown_commands: warn("Command(s) not found:\n%s" \ % indent(unknown_commands)) show_commands(None, options.list_format, 1) # Generate remainder command and insert into commands, commands_to_run if remainder_command: r = '' state.commands[r] = lambda: api.run(remainder_command) commands_to_run.append((r, [], {}, [], [], [])) # Ditto for a default, if found if not commands_to_run and default: commands_to_run.append((default.name, [], {}, [], [], [])) # Initial password prompt, if requested if options.initial_password_prompt: prompt = "Initial value for env.password: " state.env.password = getpass.getpass(prompt) if state.output.debug: names = ", ".join(x[0] for x in commands_to_run) print("Commands to run: %s" % names) # At this point all commands must exist, so execute them in order. for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run: execute( name, hosts=arg_hosts, roles=arg_roles, exclude_hosts=arg_exclude_hosts, *args, **kwargs ) # If we got here, no errors occurred, so print a final note. if state.output.status: print("\nDone.") except SystemExit: # a number of internal functions might raise this one. raise except KeyboardInterrupt: if state.output.status: sys.stderr.write("\nStopped.\n") sys.exit(1) except: sys.excepthook(*sys.exc_info()) # we might leave stale threads if we don't explicitly exit() sys.exit(1) finally: disconnect_all() sys.exit(0) Fabric-1.8.2/fabric/network.py000644 000765 000024 00000060045 12277461502 017162 0ustar00jforcierstaff000000 000000 """ Classes and subroutines dealing with network connections and related topics. """ from __future__ import with_statement from functools import wraps import getpass import os import re import time import socket import sys from StringIO import StringIO from fabric.auth import get_password, set_password from fabric.utils import abort, handle_prompt_abort, warn from fabric.exceptions import NetworkError try: import warnings warnings.simplefilter('ignore', DeprecationWarning) import paramiko as ssh except ImportError, e: import traceback traceback.print_exc() msg = """ There was a problem importing our SSH library (see traceback above). Please make sure all dependencies are installed and importable. 
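`main` above pairs `execute` with a ``finally: disconnect_all()``; code driving Fabric as a library should mirror that pattern. A sketch (host name and command are illustrative):

from fabric.api import execute, run
from fabric.network import disconnect_all

def uptime():
    return run('uptime')

try:
    print(execute(uptime, hosts=['web1.example.com']))
finally:
    disconnect_all()    # same cleanup main() performs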
""".rstrip() sys.stderr.write(msg + '\n') sys.exit(1) ipv6_regex = re.compile('^\[?(?P[0-9A-Fa-f:]+)\]?(:(?P\d+))?$') def direct_tcpip(client, host, port): return client.get_transport().open_channel( 'direct-tcpip', (host, int(port)), ('', 0) ) def is_key_load_error(e): return ( e.__class__ is ssh.SSHException and 'Unable to parse key file' in str(e) ) def _tried_enough(tries): from fabric.state import env return tries >= env.connection_attempts def get_gateway(host, port, cache, replace=False): """ Create and return a gateway socket, if one is needed. This function checks ``env`` for gateway or proxy-command settings and returns the necessary socket-like object for use by a final host connection. :param host: Hostname of target server. :param port: Port to connect to on target server. :param cache: A ``HostConnectionCache`` object, in which gateway ``SSHClient`` objects are to be retrieved/cached. :param replace: Whether to forcibly replace a cached gateway client object. :returns: A ``socket.socket``-like object, or ``None`` if none was created. """ from fabric.state import env, output sock = None proxy_command = ssh_config().get('proxycommand', None) if env.gateway: gateway = normalize_to_string(env.gateway) # ensure initial gateway connection if replace or gateway not in cache: if output.debug: print "Creating new gateway connection to %r" % gateway cache[gateway] = connect(*normalize(gateway) + (cache, False)) # now we should have an open gw connection and can ask it for a # direct-tcpip channel to the real target. (bypass cache's own # __getitem__ override to avoid hilarity - this is usually called # within that method.) sock = direct_tcpip(dict.__getitem__(cache, gateway), host, port) elif proxy_command: sock = ssh.ProxyCommand(proxy_command) return sock class HostConnectionCache(dict): """ Dict subclass allowing for caching of host connections/clients. This subclass will intelligently create new client connections when keys are requested, or return previously created connections instead. It also handles creating new socket-like objects when required to implement gateway connections and `ProxyCommand`, and handing them to the inner connection methods. Key values are the same as host specifiers throughout Fabric: optional username + ``@``, mandatory hostname, optional ``:`` + port number. Examples: * ``example.com`` - typical Internet host address. * ``firewall`` - atypical, but still legal, local host address. * ``user@example.com`` - with specific username attached. * ``bob@smith.org:222`` - with specific nonstandard port attached. When the username is not given, ``env.user`` is used. ``env.user`` defaults to the currently running user at startup but may be overwritten by user code or by specifying a command-line flag. Note that differing explicit usernames for the same hostname will result in multiple client connections being made. For example, specifying ``user1@example.com`` will create a connection to ``example.com``, logged in as ``user1``; later specifying ``user2@example.com`` will create a new, 2nd connection as ``user2``. The same applies to ports: specifying two different ports will result in two different connections to the same host being made. If no port is given, 22 is assumed, so ``example.com`` is equivalent to ``example.com:22``. """ def connect(self, key): """ Force a new connection to ``key`` host string. 
""" user, host, port = normalize(key) key = normalize_to_string(key) self[key] = connect(user, host, port, cache=self) def __getitem__(self, key): """ Autoconnect + return connection object """ key = normalize_to_string(key) if key not in self: self.connect(key) return dict.__getitem__(self, key) # # Dict overrides that normalize input keys # def __setitem__(self, key, value): return dict.__setitem__(self, normalize_to_string(key), value) def __delitem__(self, key): return dict.__delitem__(self, normalize_to_string(key)) def __contains__(self, key): return dict.__contains__(self, normalize_to_string(key)) def ssh_config(host_string=None): """ Return ssh configuration dict for current env.host_string host value. Memoizes the loaded SSH config file, but not the specific per-host results. This function performs the necessary "is SSH config enabled?" checks and will simply return an empty dict if not. If SSH config *is* enabled and the value of env.ssh_config_path is not a valid file, it will abort. May give an explicit host string as ``host_string``. """ from fabric.state import env dummy = {} if not env.use_ssh_config: return dummy if '_ssh_config' not in env: try: conf = ssh.SSHConfig() path = os.path.expanduser(env.ssh_config_path) with open(path) as fd: conf.parse(fd) env._ssh_config = conf except IOError: warn("Unable to load SSH config file '%s'" % path) return dummy host = parse_host_string(host_string or env.host_string)['host'] return env._ssh_config.lookup(host) def key_filenames(): """ Returns list of SSH key filenames for the current env.host_string. Takes into account ssh_config and env.key_filename, including normalization to a list. Also performs ``os.path.expanduser`` expansion on any key filenames. """ from fabric.state import env keys = env.key_filename # For ease of use, coerce stringish key filename into list if isinstance(env.key_filename, basestring) or env.key_filename is None: keys = [keys] # Strip out any empty strings (such as the default value...meh) keys = filter(bool, keys) # Honor SSH config conf = ssh_config() if 'identityfile' in conf: # Assume a list here as we require Paramiko 1.10+ keys.extend(conf['identityfile']) return map(os.path.expanduser, keys) def key_from_env(passphrase=None): """ Returns a paramiko-ready key from a text string of a private key """ from fabric.state import env, output if 'key' in env: if output.debug: # NOTE: this may not be the most secure thing; OTOH anybody running # the process must by definition have access to the key value, # so only serious problem is if they're logging the output. sys.stderr.write("Trying to honor in-memory key %r\n" % env.key) for pkey_class in (ssh.rsakey.RSAKey, ssh.dsskey.DSSKey): if output.debug: sys.stderr.write("Trying to load it as %s\n" % pkey_class) try: return pkey_class.from_private_key(StringIO(env.key), passphrase) except Exception, e: # File is valid key, but is encrypted: raise it, this will # cause cxn loop to prompt for passphrase & retry if 'Private key file is encrypted' in e: raise # Otherwise, it probably means it wasn't a valid key of this # type, so try the next one. 
else: pass def parse_host_string(host_string): # Split host_string to user (optional) and host/port user_hostport = host_string.rsplit('@', 1) hostport = user_hostport.pop() user = user_hostport[0] if user_hostport and user_hostport[0] else None # Split host/port string to host and optional port # For IPv6 addresses square brackets are mandatory for host/port separation if hostport.count(':') > 1: # Looks like IPv6 address r = ipv6_regex.match(hostport).groupdict() host = r['host'] or None port = r['port'] or None else: # Hostname or IPv4 address host_port = hostport.rsplit(':', 1) host = host_port.pop(0) or None port = host_port[0] if host_port and host_port[0] else None return {'user': user, 'host': host, 'port': port} def normalize(host_string, omit_port=False): """ Normalizes a given host string, returning explicit host, user, port. If ``omit_port`` is given and is True, only the host and user are returned. This function will process SSH config files if Fabric is configured to do so, and will use them to fill in some default values or swap in hostname aliases. """ from fabric.state import env # Gracefully handle "empty" input by returning empty output if not host_string: return ('', '') if omit_port else ('', '', '') # Parse host string (need this early on to look up host-specific ssh_config # values) r = parse_host_string(host_string) host = r['host'] # Env values (using defaults if somehow earlier defaults were replaced with # empty values) user = env.user or env.local_user port = env.port or env.default_port # SSH config data conf = ssh_config(host_string) # Only use ssh_config values if the env value appears unmodified from # the true defaults. If the user has tweaked them, that new value # takes precedence. if user == env.local_user and 'user' in conf: user = conf['user'] if port == env.default_port and 'port' in conf: port = conf['port'] # Also override host if needed if 'hostname' in conf: host = conf['hostname'] # Merge explicit user/port values with the env/ssh_config derived ones # (Host is already done at this point.) user = r['user'] or user port = r['port'] or port if omit_port: return user, host return user, host, port def to_dict(host_string): user, host, port = normalize(host_string) return { 'user': user, 'host': host, 'port': port, 'host_string': host_string } def from_dict(arg): return join_host_strings(arg['user'], arg['host'], arg['port']) def denormalize(host_string): """ Strips out default values for the given host string. If the user part is the default user, it is removed; if the port is port 22, it also is removed. """ from fabric.state import env r = parse_host_string(host_string) user = '' if r['user'] is not None and r['user'] != env.user: user = r['user'] + '@' port = '' if r['port'] is not None and r['port'] != '22': port = ':' + r['port'] host = r['host'] host = '[%s]' % host if port and host.count(':') > 1 else host return user + host + port def join_host_strings(user, host, port=None): """ Turns user/host/port strings into ``user@host:port`` combined string. This function is not responsible for handling missing user/port strings; for that, see the ``normalize`` function. If ``host`` looks like IPv6 address, it will be enclosed in square brackets If ``port`` is omitted, the returned string will be of the form ``user@host``. 
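
    Hypothetical illustrations (note the square brackets added for an IPv6
    host)::

        >>> join_host_strings('deploy', 'example.com', '2222')
        'deploy@example.com:2222'
        >>> join_host_strings('deploy', '2001:db8::1', '22')
        'deploy@[2001:db8::1]:22'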
""" if port: # Square brackets are necessary for IPv6 host/port separation template = "%s@[%s]:%s" if host.count(':') > 1 else "%s@%s:%s" return template % (user, host, port) else: return "%s@%s" % (user, host) def normalize_to_string(host_string): """ normalize() returns a tuple; this returns another valid host string. """ return join_host_strings(*normalize(host_string)) def connect(user, host, port, cache, seek_gateway=True): """ Create and return a new SSHClient instance connected to given host. :param user: Username to connect as. :param host: Network hostname. :param port: SSH daemon port. :param cache: A ``HostConnectionCache`` instance used to cache/store gateway hosts when gatewaying is enabled. :param seek_gateway: Whether to try setting up a gateway socket for this connection. Used so the actual gateway connection can prevent recursion. """ from state import env, output # # Initialization # # Init client client = ssh.SSHClient() # Load system hosts file (e.g. /etc/ssh/ssh_known_hosts) known_hosts = env.get('system_known_hosts') if known_hosts: client.load_system_host_keys(known_hosts) # Load known host keys (e.g. ~/.ssh/known_hosts) unless user says not to. if not env.disable_known_hosts: client.load_system_host_keys() # Unless user specified not to, accept/add new, unknown host keys if not env.reject_unknown_hosts: client.set_missing_host_key_policy(ssh.AutoAddPolicy()) # # Connection attempt loop # # Initialize loop variables connected = False password = get_password(user, host, port) tries = 0 sock = None # Loop until successful connect (keep prompting for new password) while not connected: # Attempt connection try: tries += 1 # (Re)connect gateway socket, if needed. # Nuke cached client object if not on initial try. if seek_gateway: sock = get_gateway(host, port, cache, replace=tries > 0) # Ready to connect client.connect( hostname=host, port=int(port), username=user, password=password, pkey=key_from_env(password), key_filename=key_filenames(), timeout=env.timeout, allow_agent=not env.no_agent, look_for_keys=not env.no_keys, sock=sock ) connected = True # set a keepalive if desired if env.keepalive: client.get_transport().set_keepalive(env.keepalive) return client # BadHostKeyException corresponds to key mismatch, i.e. what on the # command line results in the big banner error about man-in-the-middle # attacks. except ssh.BadHostKeyException, e: raise NetworkError("Host key for %s did not match pre-existing key! Server's key was changed recently, or possible man-in-the-middle attack." % host, e) # Prompt for new password to try on auth failure except ( ssh.AuthenticationException, ssh.PasswordRequiredException, ssh.SSHException ), e: msg = str(e) # If we get SSHExceptionError and the exception message indicates # SSH protocol banner read failures, assume it's caused by the # server load and try again. if e.__class__ is ssh.SSHException \ and msg == 'Error reading SSH protocol banner': if _tried_enough(tries): raise NetworkError(msg, e) continue # For whatever reason, empty password + no ssh key or agent # results in an SSHException instead of an # AuthenticationException. Since it's difficult to do # otherwise, we must assume empty password + SSHException == # auth exception. # # Conversely: if we get SSHException and there # *was* a password -- it is probably something non auth # related, and should be sent upwards. (This is not true if the # exception message does indicate key parse problems.) 
            #
            # This also holds true for rejected/unknown host keys: we have to
            # guess based on other heuristics.
            if e.__class__ is ssh.SSHException \
                    and (password or msg.startswith('Unknown server')) \
                    and not is_key_load_error(e):
                raise NetworkError(msg, e)

            # Otherwise, assume an auth exception, and prompt for new/better
            # password.

            # Paramiko doesn't handle prompting for locked private
            # keys (i.e. keys with a passphrase and not loaded into an agent)
            # so we have to detect this and tweak our prompt slightly.
            # (Otherwise, however, the logic flow is the same, because
            # ssh's connect() method overrides the password argument to be
            # either the login password OR the private key passphrase. Meh.)
            #
            # NOTE: This will come up if you normally use a
            # passphrase-protected private key with ssh-agent, and enter an
            # incorrect remote username, because ssh.connect:
            # * Tries the agent first, which will fail as you gave the wrong
            # username, so obviously any loaded keys aren't gonna work for a
            # nonexistent remote account;
            # * Then tries the on-disk key file, which is passphrased;
            # * Realizes there's no password to try unlocking that key with,
            # because you didn't enter a password, because you're using
            # ssh-agent;
            # * In this condition (trying a key file, password is None)
            # ssh raises PasswordRequiredException.
            text = None
            if e.__class__ is ssh.PasswordRequiredException \
                    or is_key_load_error(e):
                # NOTE: we can't easily say WHICH key's passphrase is needed,
                # because ssh doesn't provide us with that info, and
                # env.key_filename may be a list of keys, so we can't know
                # which one raised the exception. Best not to try.
                prompt = "[%s] Passphrase for private key"
                text = prompt % env.host_string
            password = prompt_for_password(text)
            # Update env.password, env.passwords if empty
            set_password(user, host, port, password)
        # Ctrl-D / Ctrl-C for exit
        except (EOFError, TypeError):
            # Print a newline (in case user was sitting at prompt)
            print('')
            sys.exit(0)
        # Handle DNS error / name lookup failure
        except socket.gaierror, e:
            raise NetworkError('Name lookup failed for %s' % host, e)
        # Handle timeouts and retries, including generic errors
        # NOTE: In 2.6, socket.error subclasses IOError
        except socket.error, e:
            not_timeout = type(e) is not socket.timeout
            giving_up = _tried_enough(tries)
            # Baseline error msg for when debug is off
            msg = "Timed out trying to connect to %s" % host
            # Expanded for debug on
            err = msg + " (attempt %s of %s" % (tries, env.connection_attempts)
            if giving_up:
                err += ", giving up"
            err += ")"
            # Debuggin'
            if output.debug:
                sys.stderr.write(err + '\n')
            # Having said our piece, try again
            if not giving_up:
                # Sleep if it wasn't a timeout, so we still get timeout-like
                # behavior
                if not_timeout:
                    time.sleep(env.timeout)
                continue
            # Override error msg if we were retrying other errors
            if not_timeout:
                msg = "Low level socket error connecting to host %s on port %s: %s" % (
                    host, port, e[1]
                )
            # Here, all attempts failed. Tweak error msg to show # tries.
            # TODO: find good humanization module, jeez
            s = "s" if env.connection_attempts > 1 else ""
            msg += " (tried %s time%s)" % (env.connection_attempts, s)
            raise NetworkError(msg, e)
        # Ensure that if we terminated without connecting and we were given an
        # explicit socket, close it out.
        finally:
            if not connected and sock is not None:
                sock.close()


def _password_prompt(prompt, stream):
    # NOTE: Using encode-to-ascii to prevent (Windows, at least) getpass from
    # choking if given Unicode.
return getpass.getpass(prompt.encode('ascii', 'ignore'), stream) def prompt_for_password(prompt=None, no_colon=False, stream=None): """ Prompts for and returns a new password if required; otherwise, returns None. A trailing colon is appended unless ``no_colon`` is True. If the user supplies an empty password, the user will be re-prompted until they enter a non-empty password. ``prompt_for_password`` autogenerates the user prompt based on the current host being connected to. To override this, specify a string value for ``prompt``. ``stream`` is the stream the prompt will be printed to; if not given, defaults to ``sys.stderr``. """ from fabric.state import env handle_prompt_abort("a connection or sudo password") stream = stream or sys.stderr # Construct prompt default = "[%s] Login password for '%s'" % (env.host_string, env.user) password_prompt = prompt if (prompt is not None) else default if not no_colon: password_prompt += ": " # Get new password value new_password = _password_prompt(password_prompt, stream) # Otherwise, loop until user gives us a non-empty password (to prevent # returning the empty string, and to avoid unnecessary network overhead.) while not new_password: print("Sorry, you can't enter an empty password. Please try again.") new_password = _password_prompt(password_prompt, stream) return new_password def needs_host(func): """ Prompt user for value of ``env.host_string`` when ``env.host_string`` is empty. This decorator is basically a safety net for silly users who forgot to specify the host/host list in one way or another. It should be used to wrap operations which require a network connection. Due to how we execute commands per-host in ``main()``, it's not possible to specify multiple hosts at this point in time, so only a single host will be prompted for. Because this decorator sets ``env.host_string``, it will prompt once (and only once) per command. As ``main()`` clears ``env.host_string`` between commands, this decorator will also end up prompting the user once per command (in the case where multiple commands have no hosts set, of course.) """ from fabric.state import env @wraps(func) def host_prompting_wrapper(*args, **kwargs): while not env.get('host_string', False): handle_prompt_abort("the target host connection string") host_string = raw_input("No hosts found. Please specify (single)" " host string for connection: ") env.update(to_dict(host_string)) return func(*args, **kwargs) host_prompting_wrapper.undecorated = func return host_prompting_wrapper def disconnect_all(): """ Disconnect from all currently connected servers. Used at the end of ``fab``'s main loop, and also intended for use by library users. """ from fabric.state import connections, output # Explicitly disconnect from all servers for key in connections.keys(): if output.status: # Here we can't use the py3k print(x, end=" ") # because 2.5 backwards compatibility sys.stdout.write("Disconnecting from %s... " % denormalize(key)) connections[key].close() del connections[key] if output.status: sys.stdout.write("done.\n") Fabric-1.8.2/fabric/operations.py000644 000765 000024 00000142213 12277451130 017646 0ustar00jforcierstaff000000 000000 """ Functions to be used in fabfiles and other non-core code, such as run()/sudo(). 
""" from __future__ import with_statement import os import os.path import posixpath import re import subprocess import sys import time from glob import glob from contextlib import closing, contextmanager from fabric.context_managers import (settings, char_buffered, hide, quiet as quiet_manager, warn_only as warn_only_manager) from fabric.io import output_loop, input_loop from fabric.network import needs_host, ssh, ssh_config from fabric.sftp import SFTP from fabric.state import env, connections, output, win32, default_channel from fabric.thread_handling import ThreadHandler from fabric.utils import ( abort, error, handle_prompt_abort, indent, _pty_size, warn, apply_lcwd ) def _shell_escape(string): """ Escape double quotes, backticks and dollar signs in given ``string``. For example:: >>> _shell_escape('abc$') 'abc\\\\$' >>> _shell_escape('"') '\\\\"' """ for char in ('"', '$', '`'): string = string.replace(char, '\%s' % char) return string class _AttributeString(str): """ Simple string subclass to allow arbitrary attribute access. """ @property def stdout(self): return str(self) class _AttributeList(list): """ Like _AttributeString, but for lists. """ pass # Can't wait till Python versions supporting 'def func(*args, foo=bar)' become # widespread :( def require(*keys, **kwargs): """ Check for given keys in the shared environment dict and abort if not found. Positional arguments should be strings signifying what env vars should be checked for. If any of the given arguments do not exist, Fabric will abort execution and print the names of the missing keys. The optional keyword argument ``used_for`` may be a string, which will be printed in the error output to inform users why this requirement is in place. ``used_for`` is printed as part of a string similar to:: "Th(is|ese) variable(s) (are|is) used for %s" so format it appropriately. The optional keyword argument ``provided_by`` may be a list of functions or function names or a single function or function name which the user should be able to execute in order to set the key or keys; it will be included in the error output if requirements are not met. Note: it is assumed that the keyword arguments apply to all given keys as a group. If you feel the need to specify more than one ``used_for``, for example, you should break your logic into multiple calls to ``require()``. .. versionchanged:: 1.1 Allow iterable ``provided_by`` values instead of just single values. """ # If all keys exist and are non-empty, we're good, so keep going. missing_keys = filter(lambda x: x not in env or (x in env and isinstance(env[x], (dict, list, tuple, set)) and not env[x]), keys) if not missing_keys: return # Pluralization if len(missing_keys) > 1: variable = "variables were" used = "These variables are" else: variable = "variable was" used = "This variable is" # Regardless of kwargs, print what was missing. (Be graceful if used outside # of a command.) 
if 'command' in env: prefix = "The command '%s' failed because the " % env.command else: prefix = "The " msg = "%sfollowing required environment %s not defined:\n%s" % ( prefix, variable, indent(missing_keys) ) # Print used_for if given if 'used_for' in kwargs: msg += "\n\n%s used for %s" % (used, kwargs['used_for']) # And print provided_by if given if 'provided_by' in kwargs: funcs = kwargs['provided_by'] # non-iterable is given, treat it as a list of this single item if not hasattr(funcs, '__iter__'): funcs = [funcs] if len(funcs) > 1: command = "one of the following commands" else: command = "the following command" to_s = lambda obj: getattr(obj, '__name__', str(obj)) provided_by = [to_s(obj) for obj in funcs] msg += "\n\nTry running %s prior to this one, to fix the problem:\n%s"\ % (command, indent(provided_by)) abort(msg) def prompt(text, key=None, default='', validate=None): """ Prompt user with ``text`` and return the input (like ``raw_input``). A single space character will be appended for convenience, but nothing else. Thus, you may want to end your prompt text with a question mark or a colon, e.g. ``prompt("What hostname?")``. If ``key`` is given, the user's input will be stored as ``env.`` in addition to being returned by `prompt`. If the key already existed in ``env``, its value will be overwritten and a warning printed to the user. If ``default`` is given, it is displayed in square brackets and used if the user enters nothing (i.e. presses Enter without entering any text). ``default`` defaults to the empty string. If non-empty, a space will be appended, so that a call such as ``prompt("What hostname?", default="foo")`` would result in a prompt of ``What hostname? [foo]`` (with a trailing space after the ``[foo]``.) The optional keyword argument ``validate`` may be a callable or a string: * If a callable, it is called with the user's input, and should return the value to be stored on success. On failure, it should raise an exception with an exception message, which will be printed to the user. * If a string, the value passed to ``validate`` is used as a regular expression. It is thus recommended to use raw strings in this case. Note that the regular expression, if it is not fully matching (bounded by ``^`` and ``$``) it will be made so. In other words, the input must fully match the regex. Either way, `prompt` will re-prompt until validation passes (or the user hits ``Ctrl-C``). .. note:: `~fabric.operations.prompt` honors :ref:`env.abort_on_prompts ` and will call `~fabric.utils.abort` instead of prompting if that flag is set to ``True``. If you want to block on user input regardless, try wrapping with `~fabric.context_managers.settings`. Examples:: # Simplest form: environment = prompt('Please specify target environment: ') # With default, and storing as env.dish: prompt('Specify favorite dish: ', 'dish', default='spam & eggs') # With validation, i.e. requiring integer input: prompt('Please specify process nice level: ', key='nice', validate=int) # With validation against a regular expression: release = prompt('Please supply a release name', validate=r'^\w+-\d+(\.\d+)?$') # Prompt regardless of the global abort-on-prompts setting: with settings(abort_on_prompts=False): prompt('I seriously need an answer on this! 
') """ handle_prompt_abort("a user-specified prompt() call") # Store previous env value for later display, if necessary if key: previous_value = env.get(key) # Set up default display default_str = "" if default != '': default_str = " [%s] " % str(default).strip() else: default_str = " " # Construct full prompt string prompt_str = text.strip() + default_str # Loop until we pass validation value = None while value is None: # Get input value = raw_input(prompt_str) or default # Handle validation if validate: # Callable if callable(validate): # Callable validate() must raise an exception if validation # fails. try: value = validate(value) except Exception, e: # Reset value so we stay in the loop value = None print("Validation failed for the following reason:") print(indent(e.message) + "\n") # String / regex must match and will be empty if validation fails. else: # Need to transform regex into full-matching one if it's not. if not validate.startswith('^'): validate = r'^' + validate if not validate.endswith('$'): validate += r'$' result = re.findall(validate, value) if not result: print("Regular expression validation failed: '%s' does not match '%s'\n" % (value, validate)) # Reset value so we stay in the loop value = None # At this point, value must be valid, so update env if necessary if key: env[key] = value # Print warning if we overwrote some other value if key and previous_value is not None and previous_value != value: warn("overwrote previous env variable '%s'; used to be '%s', is now '%s'." % ( key, previous_value, value )) # And return the value, too, just in case someone finds that useful. return value @needs_host def put(local_path=None, remote_path=None, use_sudo=False, mirror_local_mode=False, mode=None, use_glob=True, temp_dir=""): """ Upload one or more files to a remote host. `~fabric.operations.put` returns an iterable containing the absolute file paths of all remote files uploaded. This iterable also exhibits a ``.failed`` attribute containing any local file paths which failed to upload (and may thus be used as a boolean test.) You may also check ``.succeeded`` which is equivalent to ``not .failed``. ``local_path`` may be a relative or absolute local file or directory path, and may contain shell-style wildcards, as understood by the Python ``glob`` module (give ``use_glob=False`` to disable this behavior). Tilde expansion (as implemented by ``os.path.expanduser``) is also performed. ``local_path`` may alternately be a file-like object, such as the result of ``open('path')`` or a ``StringIO`` instance. .. note:: In this case, `~fabric.operations.put` will attempt to read the entire contents of the file-like object by rewinding it using ``seek`` (and will use ``tell`` afterwards to preserve the previous file position). ``remote_path`` may also be a relative or absolute location, but applied to the remote host. Relative paths are relative to the remote user's home directory, but tilde expansion (e.g. ``~/.ssh/``) will also be performed if necessary. An empty string, in either path argument, will be replaced by the appropriate end's current working directory. While the SFTP protocol (which `put` uses) has no direct ability to upload files to locations not owned by the connecting user, you may specify ``use_sudo=True`` to work around this. When set, this setting causes `put` to upload the local files to a temporary location on the remote end (defaults to remote user's ``$HOME``; this may be overridden via ``temp_dir``), and then use `sudo` to move them to ``remote_path``. 
    In some use cases, it is desirable to force a newly uploaded file to match
    the mode of its local counterpart (such as when uploading executable
    scripts). To do this, specify ``mirror_local_mode=True``.

    Alternately, you may use the ``mode`` kwarg to specify an exact mode, in
    the same vein as ``os.chmod`` or the Unix ``chmod`` command.

    `~fabric.operations.put` will honor `~fabric.context_managers.cd`, so
    relative values in ``remote_path`` will be prepended by the current remote
    working directory, if applicable. Thus, for example, the below snippet
    would attempt to upload to ``/tmp/files/test.txt`` instead of
    ``~/files/test.txt``::

        with cd('/tmp'):
            put('/path/to/local/test.txt', 'files')

    Use of `~fabric.context_managers.lcd` will affect ``local_path`` in the
    same manner.

    Examples::

        put('bin/project.zip', '/tmp/project.zip')
        put('*.py', 'cgi-bin/')
        put('index.html', 'index.html', mode=0755)

    .. note::
        If a file-like object such as StringIO has a ``name`` attribute, that
        will be used in Fabric's printed output instead of the default
        ``<file obj>``

    .. versionchanged:: 1.0
        Now honors the remote working directory as manipulated by
        `~fabric.context_managers.cd`, and the local working directory as
        manipulated by `~fabric.context_managers.lcd`.
    .. versionchanged:: 1.0
        Now allows file-like objects in the ``local_path`` argument.
    .. versionchanged:: 1.0
        Directories may be specified in the ``local_path`` argument and will
        trigger recursive uploads.
    .. versionchanged:: 1.0
        Return value is now an iterable of uploaded remote file paths which
        also exhibits the ``.failed`` and ``.succeeded`` attributes.
    .. versionchanged:: 1.5
        Allow a ``name`` attribute on file-like objects for log output
    .. versionchanged:: 1.7
        Added ``use_glob`` option to allow disabling of globbing.
    """
    # Handle empty local path
    local_path = local_path or os.getcwd()

    # Test whether local_path is a path or a file-like object
    local_is_path = not (hasattr(local_path, 'read') \
        and callable(local_path.read))

    ftp = SFTP(env.host_string)

    with closing(ftp) as ftp:
        home = ftp.normalize('.')

        # Empty remote path implies cwd
        remote_path = remote_path or home

        # Expand tildes
        if remote_path.startswith('~'):
            remote_path = remote_path.replace('~', home, 1)

        # Honor cd() (assumes Unix style file paths on remote end)
        if not os.path.isabs(remote_path) and env.get('cwd'):
            remote_path = env.cwd.rstrip('/') + '/' + remote_path

        if local_is_path:
            # Apply lcwd, expand tildes, etc
            local_path = os.path.expanduser(local_path)
            local_path = apply_lcwd(local_path, env)
            if use_glob:
                # Glob local path
                names = glob(local_path)
            else:
                # Check if file exists first so ValueError gets raised
                if os.path.exists(local_path):
                    names = [local_path]
                else:
                    names = []
        else:
            names = [local_path]

        # Make sure local arg exists
        if local_is_path and not names:
            err = "'%s' is not a valid local path or glob." % local_path
            raise ValueError(err)

        # Sanity check and weird cases
        if ftp.exists(remote_path):
            if local_is_path and len(names) != 1 and not ftp.isdir(remote_path):
                raise ValueError("'%s' is not a directory" % remote_path)

        # Iterate over all given local files
        remote_paths = []
        failed_local_paths = []
        for lpath in names:
            try:
                if local_is_path and os.path.isdir(lpath):
                    p = ftp.put_dir(lpath, remote_path, use_sudo,
                        mirror_local_mode, mode, temp_dir)
                    remote_paths.extend(p)
                else:
                    p = ftp.put(lpath, remote_path, use_sudo, mirror_local_mode,
                        mode, local_is_path, temp_dir)
                    remote_paths.append(p)
            except Exception, e:
                msg = "put() encountered an exception while uploading '%s'"
                failure = lpath if local_is_path else "<StringIO>"
                failed_local_paths.append(failure)
                error(message=msg % lpath, exception=e)

        ret = _AttributeList(remote_paths)
        ret.failed = failed_local_paths
        ret.succeeded = not ret.failed
        return ret


@needs_host
def get(remote_path, local_path=None):
    """
    Download one or more files from a remote host.

    `~fabric.operations.get` returns an iterable containing the absolute paths
    to all local files downloaded, which will be empty if ``local_path`` was a
    StringIO object (see below for more on using StringIO). This object will
    also exhibit a ``.failed`` attribute containing any remote file paths
    which failed to download, and a ``.succeeded`` attribute equivalent to
    ``not .failed``.

    ``remote_path`` is the remote file or directory path to download, which
    may contain shell glob syntax, e.g. ``"/var/log/apache2/*.log"``, and will
    have tildes replaced by the remote home directory. Relative paths will be
    considered relative to the remote user's home directory, or the current
    remote working directory as manipulated by `~fabric.context_managers.cd`.
    If the remote path points to a directory, that directory will be
    downloaded recursively.

    ``local_path`` is the local file path where the downloaded file or files
    will be stored. If relative, it will honor the local current working
    directory as manipulated by `~fabric.context_managers.lcd`. It may be
    interpolated, using standard Python dict-based interpolation, with the
    following variables:

    * ``host``: The value of ``env.host_string``, eg ``myhostname`` or
      ``user@myhostname-222`` (the colon between hostname and port is turned
      into a dash to maximize filesystem compatibility)
    * ``dirname``: The directory part of the remote file path, e.g. the
      ``src/projectname`` in ``src/projectname/utils.py``.
    * ``basename``: The filename part of the remote file path, e.g. the
      ``utils.py`` in ``src/projectname/utils.py``
    * ``path``: The full remote path, e.g. ``src/projectname/utils.py``.

    .. note::
        When ``remote_path`` is an absolute directory path, only the inner
        directories will be recreated locally and passed into the above
        variables. So for example, ``get('/var/log', '%(path)s')`` would start
        writing out files like ``apache2/access.log``,
        ``postgresql/8.4/postgresql.log``, etc, in the local working
        directory. It would **not** write out e.g.
        ``var/log/apache2/access.log``.

        Additionally, when downloading a single file, ``%(dirname)s`` and
        ``%(path)s`` do not make as much sense and will be empty and
        equivalent to ``%(basename)s``, respectively. Thus a call like
        ``get('/var/log/apache2/access.log', '%(path)s')`` will save a local
        file named ``access.log``, not ``var/log/apache2/access.log``.

        This behavior is intended to be consistent with the command-line
        ``scp`` program.

    If left blank, ``local_path`` defaults to ``"%(host)s/%(path)s"`` in order
    to be safe for multi-host invocations.
    .. warning::
        If your ``local_path`` argument does not contain ``%(host)s`` and your
        `~fabric.operations.get` call runs against multiple hosts, your local
        files will be overwritten on each successive run!

    If ``local_path`` does not make use of the above variables (i.e. if it is
    a simple, explicit file path) it will act similar to ``scp`` or ``cp``,
    overwriting pre-existing files if necessary, downloading into a directory
    if given (e.g. ``get('/path/to/remote_file.txt', 'local_directory')`` will
    create ``local_directory/remote_file.txt``) and so forth.

    ``local_path`` may alternately be a file-like object, such as the result
    of ``open('path', 'w')`` or a ``StringIO`` instance.

    .. note::
        Attempting to `get` a directory into a file-like object is not valid
        and will result in an error.

    .. note::
        This function will use ``seek`` and ``tell`` to overwrite the entire
        contents of the file-like object, in order to be consistent with the
        behavior of `~fabric.operations.put` (which also considers the entire
        file). However, unlike `~fabric.operations.put`, the file pointer will
        not be restored to its previous location, as that doesn't make as much
        sense here and/or may not even be possible.

    .. note::
        If a file-like object such as StringIO has a ``name`` attribute, that
        will be used in Fabric's printed output instead of the default
        ``<file obj>``

    .. versionchanged:: 1.0
        Now honors the remote working directory as manipulated by
        `~fabric.context_managers.cd`, and the local working directory as
        manipulated by `~fabric.context_managers.lcd`.
    .. versionchanged:: 1.0
        Now allows file-like objects in the ``local_path`` argument.
    .. versionchanged:: 1.0
        ``local_path`` may now contain interpolated path- and host-related
        variables.
    .. versionchanged:: 1.0
        Directories may be specified in the ``remote_path`` argument and will
        trigger recursive downloads.
    .. versionchanged:: 1.0
        Return value is now an iterable of downloaded local file paths, which
        also exhibits the ``.failed`` and ``.succeeded`` attributes.
    .. versionchanged:: 1.5
        Allow a ``name`` attribute on file-like objects for log output
    """
    # Handle empty local path / default kwarg value
    local_path = local_path or "%(host)s/%(path)s"

    # Test whether local_path is a path or a file-like object
    local_is_path = not (hasattr(local_path, 'write') \
        and callable(local_path.write))

    # Honor lcd() where it makes sense
    if local_is_path:
        local_path = apply_lcwd(local_path, env)

    ftp = SFTP(env.host_string)

    with closing(ftp) as ftp:
        home = ftp.normalize('.')
        # Expand home directory markers (tildes, etc)
        if remote_path.startswith('~'):
            remote_path = remote_path.replace('~', home, 1)
        if local_is_path:
            local_path = os.path.expanduser(local_path)

        # Honor cd() (assumes Unix style file paths on remote end)
        if not os.path.isabs(remote_path):
            # Honor cwd if it's set (usually by with cd():)
            if env.get('cwd'):
                remote_path_escaped = env.cwd.rstrip('/')
                remote_path_escaped = remote_path_escaped.replace('\\ ', ' ')
                remote_path = remote_path_escaped + '/' + remote_path
            # Otherwise, be relative to remote home directory (SFTP server's
            # '.')
            else:
                remote_path = posixpath.join(home, remote_path)

        # Track final local destination files so we can return a list
        local_files = []
        failed_remote_files = []

        try:
            # Glob remote path
            names = ftp.glob(remote_path)

            # Handle invalid local-file-object situations
            if not local_is_path:
                if len(names) > 1 or ftp.isdir(names[0]):
                    error("[%s] %s is a glob or directory, but local_path is a file object!"
% (env.host_string, remote_path)) for remote_path in names: if ftp.isdir(remote_path): result = ftp.get_dir(remote_path, local_path) local_files.extend(result) else: # Perform actual get. If getting to real local file path, # add result (will be true final path value) to # local_files. File-like objects are omitted. result = ftp.get(remote_path, local_path, local_is_path, os.path.basename(remote_path)) if local_is_path: local_files.append(result) except Exception, e: failed_remote_files.append(remote_path) msg = "get() encountered an exception while downloading '%s'" error(message=msg % remote_path, exception=e) ret = _AttributeList(local_files if local_is_path else []) ret.failed = failed_remote_files ret.succeeded = not ret.failed return ret def _sudo_prefix_argument(argument, value): if value is None: return "" if str(value).isdigit(): value = "#%s" % value return ' %s "%s"' % (argument, value) def _sudo_prefix(user, group=None): """ Return ``env.sudo_prefix`` with ``user``/``group`` inserted if necessary. """ # Insert env.sudo_prompt into env.sudo_prefix prefix = env.sudo_prefix % env if user is not None or group is not None: return "%s%s%s " % (prefix, _sudo_prefix_argument('-u', user), _sudo_prefix_argument('-g', group)) return prefix def _shell_wrap(command, shell_escape, shell=True, sudo_prefix=None): """ Conditionally wrap given command in env.shell (while honoring sudo.) """ # Honor env.shell, while allowing the 'shell' kwarg to override it (at # least in terms of turning it off.) if shell and not env.use_shell: shell = False # Sudo plus space, or empty string if sudo_prefix is None: sudo_prefix = "" else: sudo_prefix += " " # If we're shell wrapping, prefix shell and space. Next, escape the command # if requested, and then quote it. Otherwise, empty string. if shell: shell = env.shell + " " if shell_escape: command = _shell_escape(command) command = '"%s"' % command else: shell = "" # Resulting string should now have correct formatting return sudo_prefix + shell + command def _prefix_commands(command, which): """ Prefixes ``command`` with all prefixes found in ``env.command_prefixes``. ``env.command_prefixes`` is a list of strings which is modified by the `~fabric.context_managers.prefix` context manager. This function also handles a special-case prefix, ``cwd``, used by `~fabric.context_managers.cd`. The ``which`` kwarg should be a string, ``"local"`` or ``"remote"``, which will determine whether ``cwd`` or ``lcwd`` is used. """ # Local prefix list (to hold env.command_prefixes + any special cases) prefixes = list(env.command_prefixes) # Handle current working directory, which gets its own special case due to # being a path string that gets grown/shrunk, instead of just a single # string or lack thereof. # Also place it at the front of the list, in case user is expecting another # prefixed command to be "in" the current working directory. cwd = env.cwd if which == 'remote' else env.lcwd if cwd: prefixes.insert(0, 'cd %s' % cwd) glue = " && " prefix = (glue.join(prefixes) + glue) if prefixes else "" return prefix + command def _prefix_env_vars(command, local=False): """ Prefixes ``command`` with any shell environment vars, e.g. ``PATH=foo ``. Currently, this only applies the PATH updating implemented in `~fabric.context_managers.path` and environment variables from `~fabric.context_managers.shell_env`. Will switch to using Windows style 'SET' commands when invoked by ``local()`` and on a Windows localhost. 
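
    As a rough, hypothetical illustration: with ``shell_env(FOO='bar')`` in
    effect, a remote command such as ``ls`` would be sent as something like::

        export FOO="bar" && ls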
""" env_vars = {} # path(): local shell env var update, appending/prepending/replacing $PATH path = env.path if path: if env.path_behavior == 'append': path = '$PATH:\"%s\"' % path elif env.path_behavior == 'prepend': path = '\"%s\":$PATH' % path elif env.path_behavior == 'replace': path = '\"%s\"' % path env_vars['PATH'] = path # shell_env() env_vars.update(env.shell_env) if env_vars: set_cmd, exp_cmd = '', '' if win32 and local: set_cmd = 'SET ' else: exp_cmd = 'export ' exports = ' '.join( '%s%s="%s"' % (set_cmd, k, v if k == 'PATH' else _shell_escape(v)) for k, v in env_vars.iteritems() ) shell_env_str = '%s%s && ' % (exp_cmd, exports) else: shell_env_str = '' return shell_env_str + command def _execute(channel, command, pty=True, combine_stderr=None, invoke_shell=False, stdout=None, stderr=None, timeout=None): """ Execute ``command`` over ``channel``. ``pty`` controls whether a pseudo-terminal is created. ``combine_stderr`` controls whether we call ``channel.set_combine_stderr``. By default, the global setting for this behavior (:ref:`env.combine_stderr `) is consulted, but you may specify ``True`` or ``False`` here to override it. ``invoke_shell`` controls whether we use ``exec_command`` or ``invoke_shell`` (plus a handful of other things, such as always forcing a pty.) Returns a three-tuple of (``stdout``, ``stderr``, ``status``), where ``stdout``/``stderr`` are captured output strings and ``status`` is the program's return code, if applicable. """ # stdout/stderr redirection stdout = stdout or sys.stdout stderr = stderr or sys.stderr # Timeout setting control timeout = env.command_timeout if (timeout is None) else timeout # What to do with CTRl-C? remote_interrupt = env.remote_interrupt with char_buffered(sys.stdin): # Combine stdout and stderr to get around oddball mixing issues if combine_stderr is None: combine_stderr = env.combine_stderr channel.set_combine_stderr(combine_stderr) # Assume pty use, and allow overriding of this either via kwarg or env # var. (invoke_shell always wants a pty no matter what.) using_pty = True if not invoke_shell and (not pty or not env.always_use_pty): using_pty = False # Request pty with size params (default to 80x24, obtain real # parameters if on POSIX platform) if using_pty: rows, cols = _pty_size() channel.get_pty(width=cols, height=rows) # Use SSH agent forwarding from 'ssh' if enabled by user config_agent = ssh_config().get('forwardagent', 'no').lower() == 'yes' forward = None if env.forward_agent or config_agent: forward = ssh.agent.AgentRequestHandler(channel) # Kick off remote command if invoke_shell: channel.invoke_shell() if command: channel.sendall(command + "\n") else: channel.exec_command(command=command) # Init stdout, stderr capturing. 
Must use lists instead of strings as # strings are immutable and we're using these as pass-by-reference stdout_buf, stderr_buf = [], [] if invoke_shell: stdout_buf = stderr_buf = None workers = ( ThreadHandler('out', output_loop, channel, "recv", capture=stdout_buf, stream=stdout, timeout=timeout), ThreadHandler('err', output_loop, channel, "recv_stderr", capture=stderr_buf, stream=stderr, timeout=timeout), ThreadHandler('in', input_loop, channel, using_pty) ) if remote_interrupt is None: remote_interrupt = invoke_shell if remote_interrupt and not using_pty: remote_interrupt = False while True: if channel.exit_status_ready(): break else: # Check for thread exceptions here so we can raise ASAP # (without chance of getting blocked by, or hidden by an # exception within, recv_exit_status()) for worker in workers: worker.raise_if_needed() try: time.sleep(ssh.io_sleep) except KeyboardInterrupt: if not remote_interrupt: raise channel.send('\x03') # Obtain exit code of remote program now that we're done. status = channel.recv_exit_status() # Wait for threads to exit so we aren't left with stale threads for worker in workers: worker.thread.join() worker.raise_if_needed() # Close channel channel.close() # Close any agent forward proxies if forward is not None: forward.close() # Update stdout/stderr with captured values if applicable if not invoke_shell: stdout_buf = ''.join(stdout_buf).strip() stderr_buf = ''.join(stderr_buf).strip() # Tie off "loose" output by printing a newline. Helps to ensure any # following print()s aren't on the same line as a trailing line prefix # or similar. However, don't add an extra newline if we've already # ended up with one, as that adds a entire blank line instead. if output.running \ and (output.stdout and stdout_buf and not stdout_buf.endswith("\n")) \ or (output.stderr and stderr_buf and not stderr_buf.endswith("\n")): print("") return stdout_buf, stderr_buf, status @needs_host def open_shell(command=None): """ Invoke a fully interactive shell on the remote end. If ``command`` is given, it will be sent down the pipe before handing control over to the invoking user. This function is most useful for when you need to interact with a heavily shell-based command or series of commands, such as when debugging or when fully interactive recovery is required upon remote program failure. It should be considered an easy way to work an interactive shell session into the middle of a Fabric script and is *not* a drop-in replacement for `~fabric.operations.run`, which is also capable of interacting with the remote end (albeit only while its given command is executing) and has much stronger programmatic abilities such as error handling and stdout/stderr capture. Specifically, `~fabric.operations.open_shell` provides a better interactive experience than `~fabric.operations.run`, but use of a full remote shell prevents Fabric from determining whether programs run within the shell have failed, and pollutes the stdout/stderr stream with shell output such as login banners, prompts and echoed stdin. Thus, this function does not have a return value and will not trigger Fabric's failure handling if any remote programs result in errors. .. 
versionadded:: 1.0 """ _execute(channel=default_channel(), command=command, pty=True, combine_stderr=True, invoke_shell=True) @contextmanager def _noop(): yield def _run_command(command, shell=True, pty=True, combine_stderr=True, sudo=False, user=None, quiet=False, warn_only=False, stdout=None, stderr=None, group=None, timeout=None, shell_escape=None): """ Underpinnings of `run` and `sudo`. See their docstrings for more info. """ manager = _noop if warn_only: manager = warn_only_manager # Quiet's behavior is a superset of warn_only's, so it wins. if quiet: manager = quiet_manager with manager(): # Set up new var so original argument can be displayed verbatim later. given_command = command # Check if shell_escape has been overridden in env if shell_escape is None: shell_escape = env.get('shell_escape', True) # Handle context manager modifications, and shell wrapping wrapped_command = _shell_wrap( _prefix_commands(_prefix_env_vars(command), 'remote'), shell_escape, shell, _sudo_prefix(user, group) if sudo else None ) # Execute info line which = 'sudo' if sudo else 'run' if output.debug: print("[%s] %s: %s" % (env.host_string, which, wrapped_command)) elif output.running: print("[%s] %s: %s" % (env.host_string, which, given_command)) # Actual execution, stdin/stdout/stderr handling, and termination result_stdout, result_stderr, status = _execute( channel=default_channel(), command=wrapped_command, pty=pty, combine_stderr=combine_stderr, invoke_shell=False, stdout=stdout, stderr=stderr, timeout=timeout) # Assemble output string out = _AttributeString(result_stdout) err = _AttributeString(result_stderr) # Error handling out.failed = False out.command = given_command out.real_command = wrapped_command if status not in env.ok_ret_codes: out.failed = True msg = "%s() received nonzero return code %s while executing" % ( which, status ) if env.warn_only: msg += " '%s'!" % given_command else: msg += "!\n\nRequested: %s\nExecuted: %s" % ( given_command, wrapped_command ) error(message=msg, stdout=out, stderr=err) # Attach return code to output string so users who have set things to # warn only, can inspect the error code. out.return_code = status # Convenience mirror of .failed out.succeeded = not out.failed # Attach stderr for anyone interested in that. out.stderr = err return out @needs_host def run(command, shell=True, pty=True, combine_stderr=None, quiet=False, warn_only=False, stdout=None, stderr=None, timeout=None, shell_escape=None): """ Run a shell command on a remote host. If ``shell`` is True (the default), `run` will execute the given command string via a shell interpreter, the value of which may be controlled by setting ``env.shell`` (defaulting to something similar to ``/bin/bash -l -c ""``.) Any double-quote (``"``) or dollar-sign (``$``) characters in ``command`` will be automatically escaped when ``shell`` is True. `run` will return the result of the remote program's stdout as a single (likely multiline) string. This string will exhibit ``failed`` and ``succeeded`` boolean attributes specifying whether the command failed or succeeded, and will also include the return code as the ``return_code`` attribute. Furthermore, it includes a copy of the requested & actual command strings executed, as ``.command`` and ``.real_command``, respectively. Any text entered in your local terminal will be forwarded to the remote program as it runs, thus allowing you to interact with password or other prompts naturally. For more on how this works, see :doc:`/usage/interactivity`. 
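
    A short, hypothetical sketch of examining the result object::

        result = run('ls /var/www', warn_only=True)
        if result.failed:
            print("Listing failed with code %s" % result.return_code)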
You may pass ``pty=False`` to forego creation of a pseudo-terminal on the remote end in case the presence of one causes problems for the command in question. However, this will force Fabric itself to echo any and all input you type while the command is running, including sensitive passwords. (With ``pty=True``, the remote pseudo-terminal will echo for you, and will intelligently handle password-style prompts.) See :ref:`pseudottys` for details. Similarly, if you need to programmatically examine the stderr stream of the remote program (exhibited as the ``stderr`` attribute on this function's return value), you may set ``combine_stderr=False``. Doing so has a high chance of causing garbled output to appear on your terminal (though the resulting strings returned by `~fabric.operations.run` will be properly separated). For more info, please read :ref:`combine_streams`. To ignore non-zero return codes, specify ``warn_only=True``. To both ignore non-zero return codes *and* force a command to run silently, specify ``quiet=True``. To override which local streams are used to display remote stdout and/or stderr, specify ``stdout`` or ``stderr``. (By default, the regular ``sys.stdout`` and ``sys.stderr`` Python stream objects are used.) For example, ``run("command", stderr=sys.stdout)`` would print the remote standard error to the local standard out, while preserving it as its own distinct attribute on the return value (as per above.) Alternately, you could even provide your own stream objects or loggers, e.g. ``myout = StringIO(); run("command", stdout=myout)``. If you want an exception raised when the remote program takes too long to run, specify ``timeout=N`` where ``N`` is an integer number of seconds, after which to time out. This will cause ``run`` to raise a `~fabric.exceptions.CommandTimeout` exception. If you want to disable Fabric's automatic attempts at escaping quotes, dollar signs etc., specify ``shell_escape=False``. Examples:: run("ls /var/www/") run("ls /home/myuser", shell=False) output = run('ls /var/www/site1') run("take_a_long_time", timeout=5) .. versionadded:: 1.0 The ``succeeded`` and ``stderr`` return value attributes, the ``combine_stderr`` kwarg, and interactive behavior. .. versionchanged:: 1.0 The default value of ``pty`` is now ``True``. .. versionchanged:: 1.0.2 The default value of ``combine_stderr`` is now ``None`` instead of ``True``. However, the default *behavior* is unchanged, as the global setting is still ``True``. .. versionadded:: 1.5 The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs. .. versionadded:: 1.5 The return value attributes ``.command`` and ``.real_command``. .. versionadded:: 1.6 The ``timeout`` argument. .. versionadded:: 1.7 The ``shell_escape`` argument. """ return _run_command(command, shell, pty, combine_stderr, quiet=quiet, warn_only=warn_only, stdout=stdout, stderr=stderr, timeout=timeout, shell_escape=shell_escape) @needs_host def sudo(command, shell=True, pty=True, combine_stderr=None, user=None, quiet=False, warn_only=False, stdout=None, stderr=None, group=None, timeout=None, shell_escape=None): """ Run a shell command on a remote host, with superuser privileges. `sudo` is identical in every way to `run`, except that it will always wrap the given ``command`` in a call to the ``sudo`` program to provide superuser privileges. `sudo` accepts additional ``user`` and ``group`` arguments, which are passed to ``sudo`` and allow you to run as some user and/or group other than root. 
    On most systems, the ``sudo`` program can take a string username/group or
    an integer userid/groupid (uid/gid); ``user`` and ``group`` may likewise
    be strings or integers.

    You may set :ref:`env.sudo_user <sudo_user>` at module level or via
    `~fabric.context_managers.settings` if you want multiple ``sudo`` calls to
    have the same ``user`` value. An explicit ``user`` argument will, of
    course, override this global setting.

    Examples::

        sudo("~/install_script.py")
        sudo("mkdir /var/www/new_docroot", user="www-data")
        sudo("ls /home/jdoe", user=1001)
        result = sudo("ls /tmp/")
        with settings(sudo_user='mysql'):
            sudo("whoami") # prints 'mysql'

    .. versionchanged:: 1.0
        See the changed and added notes for `~fabric.operations.run`.
    .. versionchanged:: 1.5
        Now honors :ref:`env.sudo_user <sudo_user>`.
    .. versionadded:: 1.5
        The ``quiet``, ``warn_only``, ``stdout`` and ``stderr`` kwargs.
    .. versionadded:: 1.5
        The return value attributes ``.command`` and ``.real_command``.
    .. versionadded:: 1.7
        The ``shell_escape`` argument.
    """
    return _run_command(
        command, shell, pty, combine_stderr, sudo=True,
        user=user if user else env.sudo_user, group=group,
        quiet=quiet, warn_only=warn_only, stdout=stdout, stderr=stderr,
        timeout=timeout, shell_escape=shell_escape,
    )


def local(command, capture=False, shell=None):
    """
    Run a command on the local system.

    `local` is simply a convenience wrapper around the use of the builtin
    Python ``subprocess`` module with ``shell=True`` activated. If you need to
    do anything special, consider using the ``subprocess`` module directly.

    ``shell`` is passed directly to `subprocess.Popen
    <http://docs.python.org/library/subprocess.html#subprocess.Popen>`_'s
    ``executable`` argument (which determines the local shell to use.) As per
    the linked documentation, on Unix the default behavior is to use
    ``/bin/sh``, so this option is useful for setting that value to e.g.
    ``/bin/bash``.

    `local` is not currently capable of simultaneously printing and capturing
    output, as `~fabric.operations.run`/`~fabric.operations.sudo` do. The
    ``capture`` kwarg allows you to switch between printing and capturing as
    necessary, and defaults to ``False``.

    When ``capture=False``, the local subprocess' stdout and stderr streams
    are hooked up directly to your terminal, though you may use the global
    :doc:`output controls </usage/output_controls>` ``output.stdout`` and
    ``output.stderr`` to hide one or both if desired. In this mode, the return
    value's stdout/stderr values are always empty.

    When ``capture=True``, you will not see any output from the subprocess in
    your terminal, but the return value will contain the captured
    stdout/stderr.

    In either case, as with `~fabric.operations.run` and
    `~fabric.operations.sudo`, this return value exhibits the ``return_code``,
    ``stderr``, ``failed`` and ``succeeded`` attributes. See `run` for
    details.

    `~fabric.operations.local` will honor the `~fabric.context_managers.lcd`
    context manager, allowing you to control its current working directory
    independently of the remote end (which honors
    `~fabric.context_managers.cd`).

    .. versionchanged:: 1.0
        Added the ``succeeded`` and ``stderr`` attributes.
    .. versionchanged:: 1.0
        Now honors the `~fabric.context_managers.lcd` context manager.
    .. versionchanged:: 1.0
        Changed the default value of ``capture`` from ``True`` to ``False``.
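
    A brief, hypothetical example of capturing local output::

        result = local('git rev-parse --short HEAD', capture=True)
        print(result.stdout)  # result.stderr is also available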
""" given_command = command # Apply cd(), path() etc with_env = _prefix_env_vars(command, local=True) wrapped_command = _prefix_commands(with_env, 'local') if output.debug: print("[localhost] local: %s" % (wrapped_command)) elif output.running: print("[localhost] local: " + given_command) # Tie in to global output controls as best we can; our capture argument # takes precedence over the output settings. dev_null = None if capture: out_stream = subprocess.PIPE err_stream = subprocess.PIPE else: dev_null = open(os.devnull, 'w+') # Non-captured, hidden streams are discarded. out_stream = None if output.stdout else dev_null err_stream = None if output.stderr else dev_null try: cmd_arg = wrapped_command if win32 else [wrapped_command] if shell is not None: p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream, stderr=err_stream, executable=shell) else: p = subprocess.Popen(cmd_arg, shell=True, stdout=out_stream, stderr=err_stream) (stdout, stderr) = p.communicate() finally: if dev_null is not None: dev_null.close() # Handle error condition (deal with stdout being None, too) out = _AttributeString(stdout.strip() if stdout else "") err = _AttributeString(stderr.strip() if stderr else "") out.failed = False out.return_code = p.returncode out.stderr = err if p.returncode not in env.ok_ret_codes: out.failed = True msg = "local() encountered an error (return code %s) while executing '%s'" % (p.returncode, command) error(message=msg, stdout=out, stderr=err) out.succeeded = not out.failed # If we were capturing, this will be a string; otherwise it will be None. return out @needs_host def reboot(wait=120): """ Reboot the remote system. Will temporarily tweak Fabric's reconnection settings (:ref:`timeout` and :ref:`connection-attempts`) to ensure that reconnection does not give up for at least ``wait`` seconds. .. note:: As of Fabric 1.4, the ability to reconnect partway through a session no longer requires use of internal APIs. While we are not officially deprecating this function, adding more features to it will not be a priority. Users who want greater control are encouraged to check out this function's (6 lines long, well commented) source code and write their own adaptation using different timeout/attempt values or additional logic. .. versionadded:: 0.9.2 .. versionchanged:: 1.4 Changed the ``wait`` kwarg to be optional, and refactored to leverage the new reconnection functionality; it may not actually have to wait for ``wait`` seconds before reconnecting. """ # Shorter timeout for a more granular cycle than the default. timeout = 5 # Use 'wait' as max total wait time attempts = int(round(float(wait) / float(timeout))) # Don't bleed settings, since this is supposed to be self-contained. # User adaptations will probably want to drop the "with settings()" and # just have globally set timeout/attempts values. with settings( hide('running'), timeout=timeout, connection_attempts=attempts ): sudo('reboot') # Try to make sure we don't slip in before pre-reboot lockdown time.sleep(5) # This is actually an internal-ish API call, but users can simply drop # it in real fabfile use -- the next run/sudo/put/get/etc call will # automatically trigger a reconnect. # We use it here to force the reconnect while this function is still in # control and has the above timeout settings enabled. connections.connect(env.host_string) # At this point we should be reconnected to the newly rebooted server. 
Fabric-1.8.2/fabric/sftp.py000644 000765 000024 00000026434 12277451130 016445 0ustar00jforcierstaff000000 000000 from __future__ import with_statement import hashlib import os import posixpath import stat import re from fnmatch import filter as fnfilter from fabric.state import output, connections, env from fabric.utils import warn from fabric.context_managers import settings def _format_local(local_path, local_is_path): """Format a path for log output""" if local_is_path: return local_path else: # This allows users to set a name attr on their StringIO objects # just like an open file object would have return getattr(local_path, 'name', '') class SFTP(object): """ SFTP helper class, which is also a facade for ssh.SFTPClient. """ def __init__(self, host_string): self.ftp = connections[host_string].open_sftp() # Recall that __getattr__ is the "fallback" attribute getter, and is thus # pretty safe to use for facade-like behavior as we're doing here. def __getattr__(self, attr): return getattr(self.ftp, attr) def isdir(self, path): try: return stat.S_ISDIR(self.ftp.lstat(path).st_mode) except IOError: return False def islink(self, path): try: return stat.S_ISLNK(self.ftp.lstat(path).st_mode) except IOError: return False def exists(self, path): try: self.ftp.lstat(path).st_mode except IOError: return False return True def glob(self, path): from fabric.state import win32 dirpart, pattern = os.path.split(path) rlist = self.ftp.listdir(dirpart) names = fnfilter([f for f in rlist if not f[0] == '.'], pattern) ret = [path] if len(names): s = '/' ret = [dirpart.rstrip(s) + s + name.lstrip(s) for name in names] if not win32: ret = [posixpath.join(dirpart, name) for name in names] return ret def walk(self, top, topdown=True, onerror=None, followlinks=False): from os.path import join # We may not have read permission for top, in which case we can't get a # list of the files the directory contains. os.path.walk always # suppressed the exception then, rather than blow up for a minor reason # when (say) a thousand readable directories are still left to visit. # That logic is copied here. try: # Note that listdir and error are globals in this module due to # earlier import-*. names = self.ftp.listdir(top) except Exception, err: if onerror is not None: onerror(err) return dirs, nondirs = [], [] for name in names: if self.isdir(join(top, name)): dirs.append(name) else: nondirs.append(name) if topdown: yield top, dirs, nondirs for name in dirs: path = join(top, name) if followlinks or not self.islink(path): for x in self.walk(path, topdown, onerror, followlinks): yield x if not topdown: yield top, dirs, nondirs def mkdir(self, path, use_sudo): from fabric.api import sudo, hide if use_sudo: with hide('everything'): sudo('mkdir "%s"' % path) else: self.ftp.mkdir(path) def get(self, remote_path, local_path, local_is_path, rremote=None): # rremote => relative remote path, so get(/var/log) would result in # this function being called with # remote_path=/var/log/apache2/access.log and # rremote=apache2/access.log rremote = rremote if rremote is not None else remote_path # Handle format string interpolation (e.g. 
%(dirname)s)
        path_vars = {
            'host': env.host_string.replace(':', '-'),
            'basename': os.path.basename(rremote),
            'dirname': os.path.dirname(rremote),
            'path': rremote
        }
        if local_is_path:
            # Naive fix to issue #711
            escaped_path = re.sub(r'(%[^()]*\w)', r'%\1', local_path)
            local_path = os.path.abspath(escaped_path % path_vars)

            # Ensure we give ssh.SFTPClient a file by prepending and/or
            # creating local directories as appropriate.
            dirpath, filepath = os.path.split(local_path)
            if dirpath and not os.path.exists(dirpath):
                os.makedirs(dirpath)
            if os.path.isdir(local_path):
                local_path = os.path.join(local_path, path_vars['basename'])
        if output.running:
            print("[%s] download: %s <- %s" % (
                env.host_string,
                _format_local(local_path, local_is_path),
                remote_path
            ))
        # Warn about overwrites, but keep going
        if local_is_path and os.path.exists(local_path):
            msg = "Local file %s already exists and is being overwritten."
            warn(msg % local_path)
        # File-like objects: reset to file seek 0 (to ensure full overwrite)
        # and then use Paramiko's getfo() directly
        getter = self.ftp.get
        if not local_is_path:
            local_path.seek(0)
            getter = self.ftp.getfo
        getter(remote_path, local_path)
        # Return local_path object for posterity. (If mutated, caller will
        # want to know.)
        return local_path

    def get_dir(self, remote_path, local_path):
        # Decide what needs to be stripped from remote paths so they're all
        # relative to the given remote_path
        if os.path.basename(remote_path):
            strip = os.path.dirname(remote_path)
        else:
            strip = os.path.dirname(os.path.dirname(remote_path))

        # Store all paths gotten so we can return them when done
        result = []

        # Use our facsimile of os.walk to find all files within remote_path
        for context, dirs, files in self.walk(remote_path):
            # Normalize current directory to be relative
            # E.g. remote_path of /var/log and current dir of /var/log/apache2
            # would be turned into just 'apache2'
            lcontext = rcontext = context.replace(strip, '', 1).lstrip('/')
            # Prepend local path to that to arrive at the local mirrored
            # version of this directory. So if local_path was 'mylogs', we'd
            # end up with 'mylogs/apache2'
            lcontext = os.path.join(local_path, lcontext)

            # Download any files in current directory
            for f in files:
                # Construct full and relative remote paths to this file
                rpath = posixpath.join(context, f)
                rremote = posixpath.join(rcontext, f)
                # If local_path isn't using a format string that expands to
                # include its remote path, we need to add it here.
                if "%(path)s" not in local_path \
                        and "%(dirname)s" not in local_path:
                    lpath = os.path.join(lcontext, f)
                # Otherwise, just passthrough local_path to self.get()
                else:
                    lpath = local_path
                # Now we can make a call to self.get() with specific file
                # paths on both ends.
                result.append(self.get(rpath, lpath, True, rremote))
        return result

    def put(self, local_path, remote_path, use_sudo, mirror_local_mode, mode,
            local_is_path, temp_dir):
        from fabric.api import sudo, hide
        pre = self.ftp.getcwd()
        pre = pre if pre else ''
        if local_is_path and self.isdir(remote_path):
            basename = os.path.basename(local_path)
            remote_path = posixpath.join(remote_path, basename)
        if output.running:
            print("[%s] put: %s -> %s" % (
                env.host_string,
                _format_local(local_path, local_is_path),
                posixpath.join(pre, remote_path)
            ))
        # When using sudo, "bounce" the file through a guaranteed-unique file
        # path in the default remote CWD (which, typically, the login user
        # will have write permissions on) in order to sudo(mv) it later.
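        # For example (paths hypothetical): put('app.conf', '/etc/app.conf',
        # use_sudo=True) first writes to <temp_dir>/<sha1 hexdigest>, then the
        # sudo("mv ...") near the end of this method moves it into place.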
        if use_sudo:
            target_path = remote_path
            hasher = hashlib.sha1()
            hasher.update(env.host_string)
            hasher.update(target_path)
            remote_path = posixpath.join(temp_dir, hasher.hexdigest())
        # Read, ensuring we handle file-like objects correctly re: the seek
        # pointer
        putter = self.ftp.put
        if not local_is_path:
            old_pointer = local_path.tell()
            local_path.seek(0)
            putter = self.ftp.putfo
        rattrs = putter(local_path, remote_path)
        if not local_is_path:
            local_path.seek(old_pointer)
        # Handle modes if necessary
        if (local_is_path and mirror_local_mode) or (mode is not None):
            lmode = os.stat(local_path).st_mode if mirror_local_mode else mode
            # Parse octal string into an integer, in case mode was given as a
            # string
            if isinstance(lmode, basestring):
                lmode = int(lmode, 8)
            lmode = lmode & 07777
            rmode = rattrs.st_mode
            # Only bitshift if we actually got an rmode
            if rmode is not None:
                rmode = (rmode & 07777)
            if lmode != rmode:
                if use_sudo:
                    # Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv
                    # command. (The target path has already been cwd-ified
                    # elsewhere.)
                    with settings(hide('everything'), cwd=""):
                        sudo('chmod %o \"%s\"' % (lmode, remote_path))
                else:
                    self.ftp.chmod(remote_path, lmode)
        if use_sudo:
            # Temporarily nuke 'cwd' so sudo() doesn't "cd" its mv command.
            # (The target path has already been cwd-ified elsewhere.)
            with settings(hide('everything'), cwd=""):
                sudo("mv \"%s\" \"%s\"" % (remote_path, target_path))
            # Revert to original remote_path for return value's sake
            remote_path = target_path
        return remote_path

    def put_dir(self, local_path, remote_path, use_sudo, mirror_local_mode,
                mode, temp_dir):
        if os.path.basename(local_path):
            strip = os.path.dirname(local_path)
        else:
            strip = os.path.dirname(os.path.dirname(local_path))

        remote_paths = []

        for context, dirs, files in os.walk(local_path):
            rcontext = context.replace(strip, '', 1)
            # normalize pathname separators with POSIX separator
            rcontext = rcontext.replace(os.sep, '/')
            rcontext = rcontext.lstrip('/')
            rcontext = posixpath.join(remote_path, rcontext)

            if not self.exists(rcontext):
                self.mkdir(rcontext, use_sudo)

            for d in dirs:
                n = posixpath.join(rcontext, d)
                if not self.exists(n):
                    self.mkdir(n, use_sudo)

            for f in files:
                local_path = os.path.join(context, f)
                n = posixpath.join(rcontext, f)
                p = self.put(local_path, n, use_sudo, mirror_local_mode,
                             mode, True, temp_dir)
                remote_paths.append(p)
        return remote_paths
Fabric-1.8.2/fabric/state.py000644 000765 000024 00000026616 12277461502 016613 0ustar00jforcierstaff000000 000000 """
Internal shared-state variables such as config settings and host lists.
"""

import os
import sys
from optparse import make_option

from fabric.network import HostConnectionCache, ssh
from fabric.version import get_version
from fabric.utils import _AliasDict, _AttributeDict


#
# Win32 flag
#

# Impacts a handful of platform specific behaviors. Note that Cygwin's Python
# is actually close enough to "real" UNIXes that it doesn't need (or want!) to
# use PyWin32 -- so we only test for literal Win32 setups (vanilla Python,
# ActiveState etc) here.
win32 = (sys.platform == 'win32')


#
# Environment dictionary - support structures
#

# By default, if the user (including code using Fabric as a library) doesn't
# set the username, we obtain the currently running username and use that.
def _get_system_username():
    """
    Obtain name of current system user, which will be default connection user.
    """
    import getpass
    username = None
    try:
        username = getpass.getuser()
        # getpass.getuser supported on both Unix and Windows systems.
# getpass.getuser may call pwd.getpwuid which in turns may raise KeyError # if it cannot find a username for the given UID, e.g. on ep.io # and similar "non VPS" style services. Rather than error out, just keep # the 'default' username to None. Can check for this value later if needed. except KeyError: pass except ImportError: if win32: import win32api import win32security import win32profile username = win32api.GetUserName() return username def _rc_path(): """ Return platform-specific default file path for $HOME/.fabricrc. """ rc_file = '.fabricrc' rc_path = '~/' + rc_file expanded_rc_path = os.path.expanduser(rc_path) if expanded_rc_path == rc_path and win32: from win32com.shell.shell import SHGetSpecialFolderPath from win32com.shell.shellcon import CSIDL_PROFILE expanded_rc_path = "%s/%s" % ( SHGetSpecialFolderPath(0, CSIDL_PROFILE), rc_file ) return expanded_rc_path default_port = '22' # hurr durr default_ssh_config_path = '~/.ssh/config' # Options/settings which exist both as environment keys and which can be set on # the command line, are defined here. When used via `fab` they will be added to # the optparse parser, and either way they are added to `env` below (i.e. the # 'dest' value becomes the environment key and the value, the env value). # # Keep in mind that optparse changes hyphens to underscores when automatically # deriving the `dest` name, e.g. `--reject-unknown-hosts` becomes # `reject_unknown_hosts`. # # Furthermore, *always* specify some sort of default to avoid ending up with # optparse.NO_DEFAULT (currently a two-tuple)! In general, None is a better # default than ''. # # User-facing documentation for these are kept in docs/env.rst. env_options = [ make_option('-a', '--no_agent', action='store_true', default=False, help="don't use the running SSH agent" ), make_option('-A', '--forward-agent', action='store_true', default=False, help="forward local agent to remote end" ), make_option('--abort-on-prompts', action='store_true', default=False, help="abort instead of prompting (for password, host, etc)" ), make_option('-c', '--config', dest='rcfile', default=_rc_path(), metavar='PATH', help="specify location of config file to use" ), make_option('--colorize-errors', action='store_true', default=False, help="Color error output", ), make_option('-D', '--disable-known-hosts', action='store_true', default=False, help="do not load user known_hosts file" ), make_option('-e', '--eagerly-disconnect', action='store_true', default=False, help="disconnect from hosts as soon as possible" ), make_option('-f', '--fabfile', default='fabfile', metavar='PATH', help="python module file to import, e.g. '../other.py'" ), make_option('-g', '--gateway', default=None, metavar='HOST', help="gateway host to connect through" ), make_option('--hide', metavar='LEVELS', help="comma-separated list of output levels to hide" ), make_option('-H', '--hosts', default=[], help="comma-separated list of hosts to operate on" ), make_option('-i', action='append', dest='key_filename', metavar='PATH', default=None, help="path to SSH private key file. May be repeated." 
), make_option('-k', '--no-keys', action='store_true', default=False, help="don't load private key files from ~/.ssh/" ), make_option('--keepalive', dest='keepalive', type=int, default=0, metavar="N", help="enables a keepalive every N seconds" ), make_option('--linewise', action='store_true', default=False, help="print line-by-line instead of byte-by-byte" ), make_option('-n', '--connection-attempts', type='int', metavar='M', dest='connection_attempts', default=1, help="make M attempts to connect before giving up" ), make_option('--no-pty', dest='always_use_pty', action='store_false', default=True, help="do not use pseudo-terminal in run/sudo" ), make_option('-p', '--password', default=None, help="password for use with authentication and/or sudo" ), make_option('-P', '--parallel', dest='parallel', action='store_true', default=False, help="default to parallel execution method" ), make_option('--port', default=default_port, help="SSH connection port" ), make_option('-r', '--reject-unknown-hosts', action='store_true', default=False, help="reject unknown hosts" ), make_option('--system-known-hosts', default=None, help="load system known_hosts file before reading user known_hosts" ), make_option('-R', '--roles', default=[], help="comma-separated list of roles to operate on" ), make_option('-s', '--shell', default='/bin/bash -l -c', help="specify a new shell, defaults to '/bin/bash -l -c'" ), make_option('--show', metavar='LEVELS', help="comma-separated list of output levels to show" ), make_option('--skip-bad-hosts', action="store_true", default=False, help="skip over hosts that can't be reached" ), make_option('--ssh-config-path', default=default_ssh_config_path, metavar='PATH', help="Path to SSH config file" ), make_option('-t', '--timeout', type='int', default=10, metavar="N", help="set connection timeout to N seconds" ), make_option('-T', '--command-timeout', dest='command_timeout', type='int', default=None, metavar="N", help="set remote command timeout to N seconds" ), make_option('-u', '--user', default=_get_system_username(), help="username to use when connecting to remote hosts" ), make_option('-w', '--warn-only', action='store_true', default=False, help="warn, instead of abort, when commands fail" ), make_option('-x', '--exclude-hosts', default=[], metavar='HOSTS', help="comma-separated list of hosts to exclude" ), make_option('-z', '--pool-size', dest='pool_size', type='int', metavar='INT', default=0, help="number of concurrent processes to use in parallel mode", ), ] # # Environment dictionary - actual dictionary object # # Global environment dict. Currently a catchall for everything: config settings # such as global deep/broad mode, host lists, username etc. # Most default values are specified in `env_options` above, in the interests of # preserving DRY: anything in here is generally not settable via the command # line. 
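# For example, `fab -u deploy` fills in env.user via the option above, whereas
# env.cwd below has no CLI flag and is normally manipulated only through the
# cd() context manager or settings().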
env = _AttributeDict({ 'abort_exception': None, 'again_prompt': 'Sorry, try again.', 'all_hosts': [], 'combine_stderr': True, 'colorize_errors': False, 'command': None, 'command_prefixes': [], 'cwd': '', # Must be empty string, not None, for concatenation purposes 'dedupe_hosts': True, 'default_port': default_port, 'eagerly_disconnect': False, 'echo_stdin': True, 'exclude_hosts': [], 'gateway': None, 'host': None, 'host_string': None, 'lcwd': '', # Must be empty string, not None, for concatenation purposes 'local_user': _get_system_username(), 'output_prefix': True, 'passwords': {}, 'path': '', 'path_behavior': 'append', 'port': default_port, 'real_fabfile': None, 'remote_interrupt': None, 'roles': [], 'roledefs': {}, 'shell_env': {}, 'skip_bad_hosts': False, 'ssh_config_path': default_ssh_config_path, 'ok_ret_codes': [0], # a list of return codes that indicate success # -S so sudo accepts passwd via stdin, -p with our known-value prompt for # later detection (thus %s -- gets filled with env.sudo_prompt at runtime) 'sudo_prefix': "sudo -S -p '%(sudo_prompt)s' ", 'sudo_prompt': 'sudo password:', 'sudo_user': None, 'tasks': [], 'use_exceptions_for': {'network': False}, 'use_shell': True, 'use_ssh_config': False, 'user': None, 'version': get_version('short') }) # Fill in exceptions settings exceptions = ['network'] exception_dict = {} for e in exceptions: exception_dict[e] = False env.use_exceptions_for = _AliasDict(exception_dict, aliases={'everything': exceptions}) # Add in option defaults for option in env_options: env[option.dest] = option.default # # Command dictionary # # Keys are the command/function names, values are the callables themselves. # This is filled in when main() runs. commands = {} # # Host connection dict/cache # connections = HostConnectionCache() def _open_session(): return connections[env.host_string].get_transport().open_session() def default_channel(): """ Return a channel object based on ``env.host_string``. """ try: chan = _open_session() except ssh.SSHException, err: if str(err) == 'SSH session not active': connections[env.host_string].close() del connections[env.host_string] chan = _open_session() else: raise chan.settimeout(0.1) chan.input_enabled = True return chan # # Output controls # # Keys are "levels" or "groups" of output, values are always boolean, # determining whether output falling into the given group is printed or not # printed. # # By default, everything except 'debug' is printed, as this is what the average # user, and new users, are most likely to expect. # # See docs/usage.rst for details on what these levels mean. 
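# For example, setting output['everything'] = False cascades to warnings,
# running, user, stdout and stderr via the 'everything' and nested 'output'
# aliases defined below; this is what hide('everything') relies on.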
output = _AliasDict({ 'status': True, 'aborts': True, 'warnings': True, 'running': True, 'stdout': True, 'stderr': True, 'debug': False, 'user': True }, aliases={ 'everything': ['warnings', 'running', 'user', 'output'], 'output': ['stdout', 'stderr'], 'commands': ['stdout', 'running'] }) Fabric-1.8.2/fabric/task_utils.py000644 000765 000024 00000005167 12277451120 017652 0ustar00jforcierstaff000000 000000 from fabric.utils import abort, indent from fabric import state # For attribute tomfoolery class _Dict(dict): pass def _crawl(name, mapping): """ ``name`` of ``'a.b.c'`` => ``mapping['a']['b']['c']`` """ key, _, rest = name.partition('.') value = mapping[key] if not rest: return value return _crawl(rest, value) def crawl(name, mapping): try: result = _crawl(name, mapping) # Handle default tasks if isinstance(result, _Dict): if getattr(result, 'default', False): result = result.default # Ensure task modules w/ no default are treated as bad targets else: result = None return result except (KeyError, TypeError): return None def merge(hosts, roles, exclude, roledefs): """ Merge given host and role lists into one list of deduped hosts. """ # Abort if any roles don't exist bad_roles = [x for x in roles if x not in roledefs] if bad_roles: abort("The following specified roles do not exist:\n%s" % ( indent(bad_roles) )) # Coerce strings to one-item lists if isinstance(hosts, basestring): hosts = [hosts] # Look up roles, turn into flat list of hosts role_hosts = [] for role in roles: value = roledefs[role] # Handle "lazy" roles (callables) if callable(value): value = value() role_hosts += value # Strip whitespace from host strings. cleaned_hosts = [x.strip() for x in list(hosts) + list(role_hosts)] # Return deduped combo of hosts and role_hosts, preserving order within # them (vs using set(), which may lose ordering) and skipping hosts to be # excluded. # But only if the user hasn't indicated they want this behavior disabled. all_hosts = cleaned_hosts if state.env.dedupe_hosts: deduped_hosts = [] for host in cleaned_hosts: if host not in deduped_hosts and host not in exclude: deduped_hosts.append(host) all_hosts = deduped_hosts return all_hosts def parse_kwargs(kwargs): new_kwargs = {} hosts = [] roles = [] exclude_hosts = [] for key, value in kwargs.iteritems(): if key == 'host': hosts = [value] elif key == 'hosts': hosts = value elif key == 'role': roles = [value] elif key == 'roles': roles = value elif key == 'exclude_hosts': exclude_hosts = value else: new_kwargs[key] = value return new_kwargs, hosts, roles, exclude_hosts Fabric-1.8.2/fabric/tasks.py000644 000765 000024 00000036061 12277451130 016613 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from functools import wraps import inspect import sys import textwrap from fabric import state from fabric.utils import abort, warn, error from fabric.network import to_dict, normalize_to_string, disconnect_all from fabric.context_managers import settings from fabric.job_queue import JobQueue from fabric.task_utils import crawl, merge, parse_kwargs from fabric.exceptions import NetworkError if sys.version_info[:2] == (2, 5): # Python 2.5 inspect.getargspec returns a tuple # instead of ArgSpec namedtuple. 
class ArgSpec(object): def __init__(self, args, varargs, keywords, defaults): self.args = args self.varargs = varargs self.keywords = keywords self.defaults = defaults self._tuple = (args, varargs, keywords, defaults) def __getitem__(self, idx): return self._tuple[idx] def patched_get_argspec(func): return ArgSpec(*inspect._getargspec(func)) inspect._getargspec = inspect.getargspec inspect.getargspec = patched_get_argspec def get_task_details(task): details = [ textwrap.dedent(task.__doc__) if task.__doc__ else 'No docstring provided'] argspec = inspect.getargspec(task) default_args = [] if not argspec.defaults else argspec.defaults num_default_args = len(default_args) args_without_defaults = argspec.args[:len(argspec.args) - num_default_args] args_with_defaults = argspec.args[-1 * num_default_args:] details.append('Arguments: %s' % ( ', '.join( args_without_defaults + [ '%s=%r' % (arg, default) for arg, default in zip(args_with_defaults, default_args) ]) )) return '\n'.join(details) def _get_list(env): def inner(key): return env.get(key, []) return inner class Task(object): """ Abstract base class for objects wishing to be picked up as Fabric tasks. Instances of subclasses will be treated as valid tasks when present in fabfiles loaded by the :doc:`fab ` tool. For details on how to implement and use `~fabric.tasks.Task` subclasses, please see the usage documentation on :ref:`new-style tasks `. .. versionadded:: 1.1 """ name = 'undefined' use_task_objects = True aliases = None is_default = False # TODO: make it so that this wraps other decorators as expected def __init__(self, alias=None, aliases=None, default=False, name=None, *args, **kwargs): if alias is not None: self.aliases = [alias, ] if aliases is not None: self.aliases = aliases if name is not None: self.name = name self.is_default = default def __details__(self): return get_task_details(self.run) def run(self): raise NotImplementedError def get_hosts(self, arg_hosts, arg_roles, arg_exclude_hosts, env=None): """ Return the host list the given task should be using. See :ref:`host-lists` for detailed documentation on how host lists are set. """ env = env or {'hosts': [], 'roles': [], 'exclude_hosts': []} roledefs = env.get('roledefs', {}) # Command line per-task takes precedence over anything else. if arg_hosts or arg_roles: return merge(arg_hosts, arg_roles, arg_exclude_hosts, roledefs) # Decorator-specific hosts/roles go next func_hosts = getattr(self, 'hosts', []) func_roles = getattr(self, 'roles', []) if func_hosts or func_roles: return merge(func_hosts, func_roles, arg_exclude_hosts, roledefs) # Finally, the env is checked (which might contain globally set lists # from the CLI or from module-level code). This will be the empty list # if these have not been set -- which is fine, this method should # return an empty list if no hosts have been set anywhere. 
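        # (E.g. `fab -H web1,web2 mytask` lands here when mytask carries no
        # @hosts/@roles decorators: the -H list was stored in env.hosts and
        # is merged below. Host names are illustrative.)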
        env_vars = map(_get_list(env), "hosts roles exclude_hosts".split())
        env_vars.append(roledefs)
        return merge(*env_vars)

    def get_pool_size(self, hosts, default):
        # Default parallel pool size (calculate per-task in case variables
        # change)
        default_pool_size = default or len(hosts)
        # Allow per-task override
        # Also cast to int in case somebody gave a string
        from_task = getattr(self, 'pool_size', None)
        pool_size = int(from_task or default_pool_size)
        # But ensure it's never larger than the number of hosts
        pool_size = min((pool_size, len(hosts)))
        # Inform user of final pool size for this task
        if state.output.debug:
            print("Parallel tasks now using pool size of %d" % pool_size)
        return pool_size


class WrappedCallableTask(Task):
    """
    Wraps a given callable transparently, while marking it as a valid Task.

    Generally used via `~fabric.decorators.task` and not directly.

    .. versionadded:: 1.1
    .. seealso:: `~fabric.docs.unwrap_tasks`, `~fabric.decorators.task`
    """
    def __init__(self, callable, *args, **kwargs):
        super(WrappedCallableTask, self).__init__(*args, **kwargs)
        self.wrapped = callable
        # Don't use getattr() here -- we want to avoid touching self.name
        # entirely so the superclass' value remains default.
        if hasattr(callable, '__name__'):
            if self.name == 'undefined':
                self.__name__ = self.name = callable.__name__
            else:
                self.__name__ = self.name
        if hasattr(callable, '__doc__'):
            self.__doc__ = callable.__doc__
        if hasattr(callable, '__module__'):
            self.__module__ = callable.__module__

    def __call__(self, *args, **kwargs):
        return self.run(*args, **kwargs)

    def run(self, *args, **kwargs):
        return self.wrapped(*args, **kwargs)

    def __getattr__(self, k):
        return getattr(self.wrapped, k)

    def __details__(self):
        return get_task_details(self.wrapped)


def requires_parallel(task):
    """
    Returns True if given ``task`` should be run in parallel mode.

    Specifically:

    * It's been explicitly marked with ``@parallel``, or:
    * It's *not* been explicitly marked with ``@serial`` *and* the global
      parallel option (``env.parallel``) is set to ``True``.
    """
    return (
        (state.env.parallel and not getattr(task, 'serial', False))
        or getattr(task, 'parallel', False)
    )


def _parallel_tasks(commands_to_run):
    return any(map(
        lambda x: requires_parallel(crawl(x[0], state.commands)),
        commands_to_run
    ))


def _execute(task, host, my_env, args, kwargs, jobs, queue, multiprocessing):
    """
    Primary single-host work body of execute()
    """
    # Log to stdout
    if state.output.running and not hasattr(task, 'return_value'):
        print("[%s] Executing task '%s'" % (host, my_env['command']))
    # Create per-run env with connection settings
    local_env = to_dict(host)
    local_env.update(my_env)
    # Set a few more env flags for parallelism
    if queue is not None:
        local_env.update({'parallel': True, 'linewise': True})
    # Handle parallel execution
    if queue is not None: # Since queue is only set for parallel
        name = local_env['host_string']
        # Wrap in another callable that:
        # * expands the env it's given to ensure parallel, linewise, etc are
        #   all set correctly and explicitly. Such changes are naturally
        #   insulated from the parent process.
# * nukes the connection cache to prevent shared-access problems # * knows how to send the tasks' return value back over a Queue # * captures exceptions raised by the task def inner(args, kwargs, queue, name, env): state.env.update(env) def submit(result): queue.put({'name': name, 'result': result}) try: key = normalize_to_string(state.env.host_string) state.connections.pop(key, "") submit(task.run(*args, **kwargs)) except BaseException, e: # We really do want to capture everything # SystemExit implies use of abort(), which prints its own # traceback, host info etc -- so we don't want to double up # on that. For everything else, though, we need to make # clear what host encountered the exception that will # print. if e.__class__ is not SystemExit: sys.stderr.write("!!! Parallel execution exception under host %r:\n" % name) submit(e) # Here, anything -- unexpected exceptions, or abort() # driven SystemExits -- will bubble up and terminate the # child process. raise # Stuff into Process wrapper kwarg_dict = { 'args': args, 'kwargs': kwargs, 'queue': queue, 'name': name, 'env': local_env, } p = multiprocessing.Process(target=inner, kwargs=kwarg_dict) # Name/id is host string p.name = name # Add to queue jobs.append(p) # Handle serial execution else: with settings(**local_env): return task.run(*args, **kwargs) def _is_task(task): return isinstance(task, Task) def execute(task, *args, **kwargs): """ Execute ``task`` (callable or name), honoring host/role decorators, etc. ``task`` may be an actual callable object, or it may be a registered task name, which is used to look up a callable just as if the name had been given on the command line (including :ref:`namespaced tasks `, e.g. ``"deploy.migrate"``. The task will then be executed once per host in its host list, which is (again) assembled in the same manner as CLI-specified tasks: drawing from :option:`-H`, :ref:`env.hosts `, the `~fabric.decorators.hosts` or `~fabric.decorators.roles` decorators, and so forth. ``host``, ``hosts``, ``role``, ``roles`` and ``exclude_hosts`` kwargs will be stripped out of the final call, and used to set the task's host list, as if they had been specified on the command line like e.g. ``fab taskname:host=hostname``. Any other arguments or keyword arguments will be passed verbatim into ``task`` (the function itself -- not the ``@task`` decorator wrapping your function!) when it is called, so ``execute(mytask, 'arg1', kwarg1='value')`` will (once per host) invoke ``mytask('arg1', kwarg1='value')``. :returns: a dictionary mapping host strings to the given task's return value for that host's execution run. For example, ``execute(foo, hosts=['a', 'b'])`` might return ``{'a': None, 'b': 'bar'}`` if ``foo`` returned nothing on host `a` but returned ``'bar'`` on host `b`. In situations where a task execution fails for a given host but overall progress does not abort (such as when :ref:`env.skip_bad_hosts ` is True) the return value for that host will be the error object or message. .. seealso:: :ref:`The execute usage docs `, for an expanded explanation and some examples. .. versionadded:: 1.3 .. versionchanged:: 1.4 Added the return value mapping; previously this function had no defined return value. 
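    For example, a short sketch (task and host names are illustrative)::

        from fabric.api import execute, run, task

        @task
        def get_uptime():
            return run("uptime")

        # e.g. from another task or from plain Python code:
        results = execute(get_uptime, hosts=["web1", "web2"])
        # results => {'web1': <uptime output>, 'web2': <uptime output>}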
""" my_env = {'clean_revert': True} results = {} # Obtain task is_callable = callable(task) if not (is_callable or _is_task(task)): # Assume string, set env.command to it my_env['command'] = task task = crawl(task, state.commands) if task is None: abort("%r is not callable or a valid task name" % (task,)) # Set env.command if we were given a real function or callable task obj else: dunder_name = getattr(task, '__name__', None) my_env['command'] = getattr(task, 'name', dunder_name) # Normalize to Task instance if we ended up with a regular callable if not _is_task(task): task = WrappedCallableTask(task) # Filter out hosts/roles kwargs new_kwargs, hosts, roles, exclude_hosts = parse_kwargs(kwargs) # Set up host list my_env['all_hosts'] = task.get_hosts(hosts, roles, exclude_hosts, state.env) parallel = requires_parallel(task) if parallel: # Import multiprocessing if needed, erroring out usefully # if it can't. try: import multiprocessing except ImportError: import traceback tb = traceback.format_exc() abort(tb + """ At least one task needs to be run in parallel, but the multiprocessing module cannot be imported (see above traceback.) Please make sure the module is installed or that the above ImportError is fixed.""") else: multiprocessing = None # Get pool size for this task pool_size = task.get_pool_size(my_env['all_hosts'], state.env.pool_size) # Set up job queue in case parallel is needed queue = multiprocessing.Queue() if parallel else None jobs = JobQueue(pool_size, queue) if state.output.debug: jobs._debug = True # Call on host list if my_env['all_hosts']: # Attempt to cycle on hosts, skipping if needed for host in my_env['all_hosts']: try: results[host] = _execute( task, host, my_env, args, new_kwargs, jobs, queue, multiprocessing ) except NetworkError, e: results[host] = e # Backwards compat test re: whether to use an exception or # abort if not state.env.use_exceptions_for['network']: func = warn if state.env.skip_bad_hosts else abort error(e.message, func=func, exception=e.wrapped) else: raise # If requested, clear out connections here and not just at the end. if state.env.eagerly_disconnect: disconnect_all() # If running in parallel, block until job queue is emptied if jobs: err = "One or more hosts failed while executing task '%s'" % ( my_env['command'] ) jobs.close() # Abort if any children did not exit cleanly (fail-fast). # This prevents Fabric from continuing on to any other tasks. # Otherwise, pull in results from the child run. 
ran_jobs = jobs.run() for name, d in ran_jobs.iteritems(): if d['exit_code'] != 0: if isinstance(d['results'], BaseException): error(err, exception=d['results']) else: error(err) results[name] = d['results'] # Or just run once for local-only else: with settings(**my_env): results[''] = task.run(*args, **new_kwargs) # Return what we can from the inner task executions return results Fabric-1.8.2/fabric/thread_handling.py000644 000765 000024 00000001311 12257074160 020571 0ustar00jforcierstaff000000 000000 import threading import sys class ThreadHandler(object): def __init__(self, name, callable, *args, **kwargs): # Set up exception handling self.exception = None def wrapper(*args, **kwargs): try: callable(*args, **kwargs) except BaseException: self.exception = sys.exc_info() # Kick off thread thread = threading.Thread(None, wrapper, name, args, kwargs) thread.setDaemon(True) thread.start() # Make thread available to instantiator self.thread = thread def raise_if_needed(self): if self.exception: e = self.exception raise e[0], e[1], e[2] Fabric-1.8.2/fabric/utils.py000644 000765 000024 00000031677 12277461502 016642 0ustar00jforcierstaff000000 000000 """ Internal subroutines for e.g. aborting execution with an error message, or performing indenting on multiline output. """ import os import sys import textwrap from traceback import format_exc def abort(msg): """ Abort execution, print ``msg`` to stderr and exit with error status (1.) This function currently makes use of `sys.exit`_, which raises `SystemExit`_. Therefore, it's possible to detect and recover from inner calls to `abort` by using ``except SystemExit`` or similar. .. _sys.exit: http://docs.python.org/library/sys.html#sys.exit .. _SystemExit: http://docs.python.org/library/exceptions.html#exceptions.SystemExit """ from fabric.state import output, env if not env.colorize_errors: red = lambda x: x else: from colors import red if output.aborts: sys.stderr.write(red("\nFatal error: %s\n" % str(msg))) sys.stderr.write(red("\nAborting.\n")) if env.abort_exception: raise env.abort_exception(msg) else: sys.exit(1) def warn(msg): """ Print warning message, but do not abort execution. This function honors Fabric's :doc:`output controls <../../usage/output_controls>` and will print the given ``msg`` to stderr, provided that the ``warnings`` output level (which is active by default) is turned on. """ from fabric.state import output, env if not env.colorize_errors: magenta = lambda x: x else: from colors import magenta if output.warnings: sys.stderr.write(magenta("\nWarning: %s\n\n" % msg)) def indent(text, spaces=4, strip=False): """ Return ``text`` indented by the given number of spaces. If text is not a string, it is assumed to be a list of lines and will be joined by ``\\n`` prior to indenting. When ``strip`` is ``True``, a minimum amount of whitespace is removed from the left-hand side of the given string (so that relative indents are preserved, but otherwise things are left-stripped). This allows you to effectively "normalize" any previous indentation for some inputs. """ # Normalize list of strings into a string for dedenting. "list" here means # "not a string" meaning "doesn't have splitlines". Meh. 
if not hasattr(text, 'splitlines'): text = '\n'.join(text) # Dedent if requested if strip: text = textwrap.dedent(text) prefix = ' ' * spaces output = '\n'.join(prefix + line for line in text.splitlines()) # Strip out empty lines before/aft output = output.strip() # Reintroduce first indent (which just got stripped out) output = prefix + output return output def puts(text, show_prefix=None, end="\n", flush=False): """ An alias for ``print`` whose output is managed by Fabric's output controls. In other words, this function simply prints to ``sys.stdout``, but will hide its output if the ``user`` :doc:`output level ` is set to ``False``. If ``show_prefix=False``, `puts` will omit the leading ``[hostname]`` which it tacks on by default. (It will also omit this prefix if ``env.host_string`` is empty.) Newlines may be disabled by setting ``end`` to the empty string (``''``). (This intentionally mirrors Python 3's ``print`` syntax.) You may force output flushing (e.g. to bypass output buffering) by setting ``flush=True``. .. versionadded:: 0.9.2 .. seealso:: `~fabric.utils.fastprint` """ from fabric.state import output, env if show_prefix is None: show_prefix = env.output_prefix if output.user: prefix = "" if env.host_string and show_prefix: prefix = "[%s] " % env.host_string sys.stdout.write(prefix + str(text) + end) if flush: sys.stdout.flush() def fastprint(text, show_prefix=False, end="", flush=True): """ Print ``text`` immediately, without any prefix or line ending. This function is simply an alias of `~fabric.utils.puts` with different default argument values, such that the ``text`` is printed without any embellishment and immediately flushed. It is useful for any situation where you wish to print text which might otherwise get buffered by Python's output buffering (such as within a processor intensive ``for`` loop). Since such use cases typically also require a lack of line endings (such as printing a series of dots to signify progress) it also omits the traditional newline by default. .. note:: Since `~fabric.utils.fastprint` calls `~fabric.utils.puts`, it is likewise subject to the ``user`` :doc:`output level `. .. versionadded:: 0.9.2 .. seealso:: `~fabric.utils.puts` """ return puts(text=text, show_prefix=show_prefix, end=end, flush=flush) def handle_prompt_abort(prompt_for): import fabric.state reason = "Needed to prompt for %s (host: %s), but %%s" % ( prompt_for, fabric.state.env.host_string ) # Explicit "don't prompt me bro" if fabric.state.env.abort_on_prompts: abort(reason % "abort-on-prompts was set to True") # Implicit "parallel == stdin/prompts have ambiguous target" if fabric.state.env.parallel: abort(reason % "input would be ambiguous in parallel mode") class _AttributeDict(dict): """ Dictionary subclass enabling attribute lookup/assignment of keys/values. 
For example:: >>> m = _AttributeDict({'foo': 'bar'}) >>> m.foo 'bar' >>> m.foo = 'not bar' >>> m['foo'] 'not bar' ``_AttributeDict`` objects also provide ``.first()`` which acts like ``.get()`` but accepts multiple keys as arguments, and returns the value of the first hit, e.g.:: >>> m = _AttributeDict({'foo': 'bar', 'biz': 'baz'}) >>> m.first('wrong', 'incorrect', 'foo', 'biz') 'bar' """ def __getattr__(self, key): try: return self[key] except KeyError: # to conform with __getattr__ spec raise AttributeError(key) def __setattr__(self, key, value): self[key] = value def first(self, *names): for name in names: value = self.get(name) if value: return value class _AliasDict(_AttributeDict): """ `_AttributeDict` subclass that allows for "aliasing" of keys to other keys. Upon creation, takes an ``aliases`` mapping, which should map alias names to lists of key names. Aliases do not store their own value, but instead set (override) all mapped keys' values. For example, in the following `_AliasDict`, calling ``mydict['foo'] = True`` will set the values of ``mydict['bar']``, ``mydict['biz']`` and ``mydict['baz']`` all to True:: mydict = _AliasDict( {'biz': True, 'baz': False}, aliases={'foo': ['bar', 'biz', 'baz']} ) Because it is possible for the aliased values to be in a heterogenous state, reading aliases is not supported -- only writing to them is allowed. This also means they will not show up in e.g. ``dict.keys()``. ..note:: Aliases are recursive, so you may refer to an alias within the key list of another alias. Naturally, this means that you can end up with infinite loops if you're not careful. `_AliasDict` provides a special function, `expand_aliases`, which will take a list of keys as an argument and will return that list of keys with any aliases expanded. This function will **not** dedupe, so any aliases which overlap will result in duplicate keys in the resulting list. """ def __init__(self, arg=None, aliases=None): init = super(_AliasDict, self).__init__ if arg is not None: init(arg) else: init() # Can't use super() here because of _AttributeDict's setattr override dict.__setattr__(self, 'aliases', aliases) def __setitem__(self, key, value): # Attr test required to not blow up when deepcopy'd if hasattr(self, 'aliases') and key in self.aliases: for aliased in self.aliases[key]: self[aliased] = value else: return super(_AliasDict, self).__setitem__(key, value) def expand_aliases(self, keys): ret = [] for key in keys: if key in self.aliases: ret.extend(self.expand_aliases(self.aliases[key])) else: ret.append(key) return ret def _pty_size(): """ Obtain (rows, cols) tuple for sizing a pty on the remote end. Defaults to 80x24 (which is also the 'ssh' lib's default) but will detect local (stdout-based) terminal window size on non-Windows platforms. """ from fabric.state import win32 if not win32: import fcntl import termios import struct default_rows, default_cols = 24, 80 rows, cols = default_rows, default_cols if not win32 and sys.stdout.isatty(): # We want two short unsigned integers (rows, cols) fmt = 'HH' # Create an empty (zeroed) buffer for ioctl to map onto. Yay for C! buffer = struct.pack(fmt, 0, 0) # Call TIOCGWINSZ to get window size of stdout, returns our filled # buffer try: result = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, buffer) # Unpack buffer back into Python data types rows, cols = struct.unpack(fmt, result) # Fall back to defaults if TIOCGWINSZ returns unreasonable values if rows == 0: rows = default_rows if cols == 0: cols = default_cols # Deal with e.g. 
sys.stdout being monkeypatched, such as in testing. # Or termios not having a TIOCGWINSZ. except AttributeError: pass return rows, cols def error(message, func=None, exception=None, stdout=None, stderr=None): """ Call ``func`` with given error ``message``. If ``func`` is None (the default), the value of ``env.warn_only`` determines whether to call ``abort`` or ``warn``. If ``exception`` is given, it is inspected to get a string message, which is printed alongside the user-generated ``message``. If ``stdout`` and/or ``stderr`` are given, they are assumed to be strings to be printed. """ import fabric.state if func is None: func = fabric.state.env.warn_only and warn or abort # If debug printing is on, append a traceback to the message if fabric.state.output.debug: message += "\n\n" + format_exc() # Otherwise, if we were given an exception, append its contents. elif exception is not None: # Figure out how to get a string out of the exception; EnvironmentError # subclasses, for example, "are" integers and .strerror is the string. # Others "are" strings themselves. May have to expand this further for # other error types. if hasattr(exception, 'strerror') and exception.strerror is not None: underlying = exception.strerror else: underlying = exception message += "\n\nUnderlying exception:\n" + indent(str(underlying)) if func is abort: if stdout and not fabric.state.output.stdout: message += _format_error_output("Standard output", stdout) if stderr and not fabric.state.output.stderr: message += _format_error_output("Standard error", stderr) return func(message) def _format_error_output(header, body): term_width = _pty_size()[1] header_side_length = (term_width - (len(header) + 2)) / 2 mark = "=" side = mark * header_side_length return "\n\n%s %s %s\n\n%s\n\n%s" % ( side, header, side, body, mark * term_width ) # TODO: replace with collections.deque(maxlen=xxx) in Python 2.6 class RingBuffer(list): def __init__(self, value, maxlen): # Heh. self._super = super(RingBuffer, self) self._maxlen = maxlen return self._super.__init__(value) def _free(self): return self._maxlen - len(self) def append(self, value): if self._free() == 0: del self[0] return self._super.append(value) def extend(self, values): overage = len(values) - self._free() if overage > 0: del self[0:overage] return self._super.extend(values) # Paranoia from here on out. def insert(self, index, value): raise ValueError("Can't insert into the middle of a ring buffer!") def __setslice__(self, i, j, sequence): raise ValueError("Can't set a slice of a ring buffer!") def __setitem__(self, key, value): if isinstance(key, slice): raise ValueError("Can't set a slice of a ring buffer!") else: return self._super.__setitem__(key, value) def apply_lcwd(path, env): # Apply CWD if a relative path if not os.path.isabs(path) and env.lcwd: path = os.path.join(env.lcwd, path) return path Fabric-1.8.2/fabric/version.py000644 000765 000024 00000005651 12277461502 017160 0ustar00jforcierstaff000000 000000 """ Current Fabric version constant plus version pretty-print method. This functionality is contained in its own module to prevent circular import problems with ``__init__.py`` (which is loaded by setup.py during installation, which in turn needs access to this version information.) 
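For example, given the ``VERSION`` tuple defined below,
``get_version('branch')`` returns ``'1.8'`` and ``get_version('short')``
returns ``'1.8.2'``.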
""" from subprocess import Popen, PIPE from os.path import abspath, dirname VERSION = (1, 8, 2, 'final', 0) def git_sha(): loc = abspath(dirname(__file__)) try: p = Popen( "cd \"%s\" && git log -1 --format=format:%%h" % loc, shell=True, stdout=PIPE, stderr=PIPE ) return p.communicate()[0] # OSError occurs on Unix-derived platforms lacking Popen's configured shell # default, /bin/sh. E.g. Android. except OSError: return None def get_version(form='short'): """ Return a version string for this package, based on `VERSION`. Takes a single argument, ``form``, which should be one of the following strings: * ``branch``: just the major + minor, e.g. "0.9", "1.0". * ``short`` (default): compact, e.g. "0.9rc1", "0.9.0". For package filenames or SCM tag identifiers. * ``normal``: human readable, e.g. "0.9", "0.9.1", "0.9 beta 1". For e.g. documentation site headers. * ``verbose``: like ``normal`` but fully explicit, e.g. "0.9 final". For tag commit messages, or anywhere that it's important to remove ambiguity between a branch and the first final release within that branch. * ``all``: Returns all of the above, as a dict. """ # Setup versions = {} branch = "%s.%s" % (VERSION[0], VERSION[1]) tertiary = VERSION[2] type_ = VERSION[3] final = (type_ == "final") type_num = VERSION[4] firsts = "".join([x[0] for x in type_.split()]) sha = git_sha() sha1 = (" (%s)" % sha) if sha else "" # Branch versions['branch'] = branch # Short v = branch if (tertiary or final): v += "." + str(tertiary) if not final: v += firsts if type_num: v += str(type_num) else: v += sha1 versions['short'] = v # Normal v = branch if tertiary: v += "." + str(tertiary) if not final: if type_num: v += " " + type_ + " " + str(type_num) else: v += " pre-" + type_ + sha1 versions['normal'] = v # Verbose v = branch if tertiary: v += "." + str(tertiary) if not final: if type_num: v += " " + type_ + " " + str(type_num) else: v += " pre-" + type_ + sha1 else: v += " final" versions['verbose'] = v try: return versions[form] except KeyError: if form == 'all': return versions raise TypeError('"%s" is not a valid form specifier.' % form) __version__ = get_version('short') if __name__ == "__main__": print(get_version('all')) Fabric-1.8.2/fabric/contrib/__init__.py000644 000765 000024 00000000000 12257074160 020647 0ustar00jforcierstaff000000 000000 Fabric-1.8.2/fabric/contrib/console.py000644 000765 000024 00000002227 12257074160 020567 0ustar00jforcierstaff000000 000000 """ Console/terminal user interface functionality. """ from fabric.api import prompt def confirm(question, default=True): """ Ask user a yes/no question and return their response as True or False. ``question`` should be a simple, grammatically complete question such as "Do you wish to continue?", and will have a string similar to " [Y/n] " appended automatically. This function will *not* append a question mark for you. By default, when the user presses Enter without typing anything, "yes" is assumed. This can be changed by specifying ``default=False``. """ # Set up suffix if default: suffix = "Y/n" else: suffix = "y/N" # Loop till we get something we like while True: response = prompt("%s [%s] " % (question, suffix)).lower() # Default if not response: return default # Yes if response in ['y', 'yes']: return True # No if response in ['n', 'no']: return False # Didn't get empty, yes or no, so complain and loop print("I didn't understand you. 
Please specify '(y)es' or '(n)o'.") Fabric-1.8.2/fabric/contrib/django.py000644 000765 000024 00000006375 12257074160 020377 0ustar00jforcierstaff000000 000000 """ .. versionadded:: 0.9.2 These functions streamline the process of initializing Django's settings module environment variable. Once this is done, your fabfile may import from your Django project, or Django itself, without requiring the use of ``manage.py`` plugins or having to set the environment variable yourself every time you use your fabfile. Currently, these functions only allow Fabric to interact with local-to-your-fabfile Django installations. This is not as limiting as it sounds; for example, you can use Fabric as a remote "build" tool as well as using it locally. Imagine the following fabfile:: from fabric.api import run, local, hosts, cd from fabric.contrib import django django.project('myproject') from myproject.myapp.models import MyModel def print_instances(): for instance in MyModel.objects.all(): print(instance) @hosts('production-server') def print_production_instances(): with cd('/path/to/myproject'): run('fab print_instances') With Fabric installed on both ends, you could execute ``print_production_instances`` locally, which would trigger ``print_instances`` on the production server -- which would then be interacting with your production Django database. As another example, if your local and remote settings are similar, you can use it to obtain e.g. your database settings, and then use those when executing a remote (non-Fabric) command. This would allow you some degree of freedom even if Fabric is only installed locally:: from fabric.api import run from fabric.contrib import django django.settings_module('myproject.settings') from django.conf import settings def dump_production_database(): run('mysqldump -u %s -p=%s %s > /tmp/prod-db.sql' % ( settings.DATABASE_USER, settings.DATABASE_PASSWORD, settings.DATABASE_NAME )) The above snippet will work if run from a local, development environment, again provided your local ``settings.py`` mirrors your remote one in terms of database connection info. """ import os def settings_module(module): """ Set ``DJANGO_SETTINGS_MODULE`` shell environment variable to ``module``. Due to how Django works, imports from Django or a Django project will fail unless the shell environment variable ``DJANGO_SETTINGS_MODULE`` is correctly set (see `the Django settings docs `_.) This function provides a shortcut for doing so; call it near the top of your fabfile or Fabric-using code, after which point any Django imports should work correctly. .. note:: This function sets a **shell** environment variable (via ``os.environ``) and is unrelated to Fabric's own internal "env" variables. """ os.environ['DJANGO_SETTINGS_MODULE'] = module def project(name): """ Sets ``DJANGO_SETTINGS_MODULE`` to ``'.settings'``. This function provides a handy shortcut for the common case where one is using the Django default naming convention for their settings file and location. Uses `settings_module` -- see its documentation for details on why and how to use this functionality. """ settings_module('%s.settings' % name) Fabric-1.8.2/fabric/contrib/files.py000644 000765 000024 00000036560 12277451130 020234 0ustar00jforcierstaff000000 000000 """ Module providing easy API for working with remote files and folders. 
""" from __future__ import with_statement import hashlib import tempfile import re import os from StringIO import StringIO from fabric.api import * from fabric.utils import apply_lcwd def exists(path, use_sudo=False, verbose=False): """ Return True if given path exists on the current remote host. If ``use_sudo`` is True, will use `sudo` instead of `run`. `exists` will, by default, hide all output (including the run line, stdout, stderr and any warning resulting from the file not existing) in order to avoid cluttering output. You may specify ``verbose=True`` to change this behavior. """ func = use_sudo and sudo or run cmd = 'test -e %s' % _expand_path(path) # If verbose, run normally if verbose: with settings(warn_only=True): return not func(cmd).failed # Otherwise, be quiet with settings(hide('everything'), warn_only=True): return not func(cmd).failed def is_link(path, use_sudo=False, verbose=False): """ Return True if the given path is a symlink on the current remote host. If ``use_sudo`` is True, will use `.sudo` instead of `.run`. `.is_link` will, by default, hide all output. Give ``verbose=True`` to change this. """ func = sudo if use_sudo else run cmd = 'test -L "$(echo %s)"' % path args, kwargs = [], {'warn_only': True} if not verbose: opts = [hide('everything')] with settings(*args, **kwargs): return func(cmd).succeeded def first(*args, **kwargs): """ Given one or more file paths, returns first one found, or None if none exist. May specify ``use_sudo`` and ``verbose`` which are passed to `exists`. """ for directory in args: if exists(directory, **kwargs): return directory def upload_template(filename, destination, context=None, use_jinja=False, template_dir=None, use_sudo=False, backup=True, mirror_local_mode=False, mode=None): """ Render and upload a template text file to a remote host. Returns the result of the inner call to `~fabric.operations.put` -- see its documentation for details. ``filename`` should be the path to a text file, which may contain `Python string interpolation formatting `_ and will be rendered with the given context dictionary ``context`` (if given.) Alternately, if ``use_jinja`` is set to True and you have the Jinja2 templating library available, Jinja will be used to render the template instead. Templates will be loaded from the invoking user's current working directory by default, or from ``template_dir`` if given. The resulting rendered file will be uploaded to the remote file path ``destination``. If the destination file already exists, it will be renamed with a ``.bak`` extension unless ``backup=False`` is specified. By default, the file will be copied to ``destination`` as the logged-in user; specify ``use_sudo=True`` to use `sudo` instead. The ``mirror_local_mode`` and ``mode`` kwargs are passed directly to an internal `~fabric.operations.put` call; please see its documentation for details on these two options. .. versionchanged:: 1.1 Added the ``backup``, ``mirror_local_mode`` and ``mode`` kwargs. 
""" func = use_sudo and sudo or run # Normalize destination to be an actual filename, due to using StringIO with settings(hide('everything'), warn_only=True): if func('test -d %s' % _expand_path(destination)).succeeded: sep = "" if destination.endswith('/') else "/" destination += sep + os.path.basename(filename) # Use mode kwarg to implement mirror_local_mode, again due to using # StringIO if mirror_local_mode and mode is None: mode = os.stat(filename).st_mode # To prevent put() from trying to do this # logic itself mirror_local_mode = False # Process template text = None if use_jinja: try: template_dir = template_dir or os.getcwd() template_dir = apply_lcwd(template_dir, env) from jinja2 import Environment, FileSystemLoader jenv = Environment(loader=FileSystemLoader(template_dir)) text = jenv.get_template(filename).render(**context or {}) # Force to a byte representation of Unicode, or str()ification # within Paramiko's SFTP machinery may cause decode issues for # truly non-ASCII characters. text = text.encode('utf-8') except ImportError: import traceback tb = traceback.format_exc() abort(tb + "\nUnable to import Jinja2 -- see above.") else: filename = apply_lcwd(filename, env) with open(os.path.expanduser(filename)) as inputfile: text = inputfile.read() if context: text = text % context # Back up original file if backup and exists(destination): func("cp %s{,.bak}" % _expand_path(destination)) # Upload the file. return put( local_path=StringIO(text), remote_path=destination, use_sudo=use_sudo, mirror_local_mode=mirror_local_mode, mode=mode ) def sed(filename, before, after, limit='', use_sudo=False, backup='.bak', flags='', shell=False): """ Run a search-and-replace on ``filename`` with given regex patterns. Equivalent to ``sed -i -r -e "// s///g" ``. Setting ``backup`` to an empty string will, disable backup file creation. For convenience, ``before`` and ``after`` will automatically escape forward slashes, single quotes and parentheses for you, so you don't need to specify e.g. ``http:\/\/foo\.com``, instead just using ``http://foo\.com`` is fine. If ``use_sudo`` is True, will use `sudo` instead of `run`. The ``shell`` argument will be eventually passed to `run`/`sudo`. It defaults to False in order to avoid problems with many nested levels of quotes and backslashes. However, setting it to True may help when using ``~fabric.operations.cd`` to wrap explicit or implicit ``sudo`` calls. (``cd`` by it's nature is a shell built-in, not a standalone command, so it should be called within a shell.) Other options may be specified with sed-compatible regex flags -- for example, to make the search and replace case insensitive, specify ``flags="i"``. The ``g`` flag is always specified regardless, so you do not need to remember to include it when overriding this parameter. .. versionadded:: 1.1 The ``flags`` parameter. .. versionadded:: 1.6 Added the ``shell`` keyword argument. 
""" func = use_sudo and sudo or run # Characters to be escaped in both for char in "/'": before = before.replace(char, r'\%s' % char) after = after.replace(char, r'\%s' % char) # Characters to be escaped in replacement only (they're useful in regexen # in the 'before' part) for char in "()": after = after.replace(char, r'\%s' % char) if limit: limit = r'/%s/ ' % limit context = { 'script': r"'%ss/%s/%s/%sg'" % (limit, before, after, flags), 'filename': _expand_path(filename), 'backup': backup } # Test the OS because of differences between sed versions with hide('running', 'stdout'): platform = run("uname") if platform in ('NetBSD', 'OpenBSD', 'QNX'): # Attempt to protect against failures/collisions hasher = hashlib.sha1() hasher.update(env.host_string) hasher.update(filename) context['tmp'] = "/tmp/%s" % hasher.hexdigest() # Use temp file to work around lack of -i expr = r"""cp -p %(filename)s %(tmp)s \ && sed -r -e %(script)s %(filename)s > %(tmp)s \ && cp -p %(filename)s %(filename)s%(backup)s \ && mv %(tmp)s %(filename)s""" else: context['extended_regex'] = '-E' if platform == 'Darwin' else '-r' expr = r"sed -i%(backup)s %(extended_regex)s -e %(script)s %(filename)s" command = expr % context return func(command, shell=shell) def uncomment(filename, regex, use_sudo=False, char='#', backup='.bak', shell=False): """ Attempt to uncomment all lines in ``filename`` matching ``regex``. The default comment delimiter is `#` and may be overridden by the ``char`` argument. This function uses the `sed` function, and will accept the same ``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does. `uncomment` will remove a single whitespace character following the comment character, if it exists, but will preserve all preceding whitespace. For example, ``# foo`` would become ``foo`` (the single space is stripped) but `` # foo`` would become `` foo`` (the single space is still stripped, but the preceding 4 spaces are not.) .. versionchanged:: 1.6 Added the ``shell`` keyword argument. """ return sed( filename, before=r'^([[:space:]]*)%s[[:space:]]?' % char, after=r'\1', limit=regex, use_sudo=use_sudo, backup=backup, shell=shell ) def comment(filename, regex, use_sudo=False, char='#', backup='.bak', shell=False): """ Attempt to comment out all lines in ``filename`` matching ``regex``. The default commenting character is `#` and may be overridden by the ``char`` argument. This function uses the `sed` function, and will accept the same ``use_sudo``, ``shell`` and ``backup`` keyword arguments that `sed` does. `comment` will prepend the comment character to the beginning of the line, so that lines end up looking like so:: this line is uncommented #this line is commented # this line is indented and commented In other words, comment characters will not "follow" indentation as they sometimes do when inserted by hand. Neither will they have a trailing space unless you specify e.g. ``char='# '``. .. note:: In order to preserve the line being commented out, this function will wrap your ``regex`` argument in parentheses, so you don't need to. It will ensure that any preceding/trailing ``^`` or ``$`` characters are correctly moved outside the parentheses. For example, calling ``comment(filename, r'^foo$')`` will result in a `sed` call with the "before" regex of ``r'^(foo)$'`` (and the "after" regex, naturally, of ``r'#\\1'``.) .. versionadded:: 1.5 Added the ``shell`` keyword argument. 
""" carot, dollar = '', '' if regex.startswith('^'): carot = '^' regex = regex[1:] if regex.endswith('$'): dollar = '$' regex = regex[:-1] regex = "%s(%s)%s" % (carot, regex, dollar) return sed( filename, before=regex, after=r'%s\1' % char, use_sudo=use_sudo, backup=backup, shell=shell ) def contains(filename, text, exact=False, use_sudo=False, escape=True, shell=False): """ Return True if ``filename`` contains ``text`` (which may be a regex.) By default, this function will consider a partial line match (i.e. where ``text`` only makes up part of the line it's on). Specify ``exact=True`` to change this behavior so that only a line containing exactly ``text`` results in a True return value. This function leverages ``egrep`` on the remote end (so it may not follow Python regular expression syntax perfectly), and skips ``env.shell`` wrapper by default. If ``use_sudo`` is True, will use `sudo` instead of `run`. If ``escape`` is False, no extra regular expression related escaping is performed (this includes overriding ``exact`` so that no ``^``/``$`` is added.) The ``shell`` argument will be eventually passed to ``run/sudo``. See description of the same argumnet in ``~fabric.contrib.sed`` for details. .. versionchanged:: 1.0 Swapped the order of the ``filename`` and ``text`` arguments to be consistent with other functions in this module. .. versionchanged:: 1.4 Updated the regular expression related escaping to try and solve various corner cases. .. versionchanged:: 1.4 Added ``escape`` keyword argument. .. versionadded:: 1.6 Added the ``shell`` keyword argument. """ func = use_sudo and sudo or run if escape: text = _escape_for_regex(text) if exact: text = "^%s$" % text with settings(hide('everything'), warn_only=True): egrep_cmd = 'egrep "%s" %s' % (text, _expand_path(filename)) return func(egrep_cmd, shell=shell).succeeded def append(filename, text, use_sudo=False, partial=False, escape=True, shell=False): """ Append string (or list of strings) ``text`` to ``filename``. When a list is given, each string inside is handled independently (but in the order given.) If ``text`` is already found in ``filename``, the append is not run, and None is returned immediately. Otherwise, the given text is appended to the end of the given ``filename`` via e.g. ``echo '$text' >> $filename``. The test for whether ``text`` already exists defaults to a full line match, e.g. ``^$``, as this seems to be the most sensible approach for the "append lines to a file" use case. You may override this and force partial searching (e.g. ``^``) by specifying ``partial=True``. Because ``text`` is single-quoted, single quotes will be transparently backslash-escaped. This can be disabled with ``escape=False``. If ``use_sudo`` is True, will use `sudo` instead of `run`. The ``shell`` argument will be eventually passed to ``run/sudo``. See description of the same argumnet in ``~fabric.contrib.sed`` for details. .. versionchanged:: 0.9.1 Added the ``partial`` keyword argument. .. versionchanged:: 1.0 Swapped the order of the ``filename`` and ``text`` arguments to be consistent with other functions in this module. .. versionchanged:: 1.0 Changed default value of ``partial`` kwarg to be ``False``. .. versionchanged:: 1.4 Updated the regular expression related escaping to try and solve various corner cases. .. versionadded:: 1.6 Added the ``shell`` keyword argument. 
""" func = use_sudo and sudo or run # Normalize non-list input to be a list if isinstance(text, basestring): text = [text] for line in text: regex = '^' + _escape_for_regex(line) + ('' if partial else '$') if (exists(filename, use_sudo=use_sudo) and line and contains(filename, regex, use_sudo=use_sudo, escape=False, shell=shell)): continue line = line.replace("'", r"'\\''") if escape else line func("echo '%s' >> %s" % (line, _expand_path(filename))) def _escape_for_regex(text): """Escape ``text`` to allow literal matching using egrep""" regex = re.escape(text) # Seems like double escaping is needed for \ regex = regex.replace('\\\\', '\\\\\\') # Triple-escaping seems to be required for $ signs regex = regex.replace(r'\$', r'\\\$') # Whereas single quotes should not be escaped regex = regex.replace(r"\'", "'") return regex def _expand_path(path): return '"$(echo %s)"' % path Fabric-1.8.2/fabric/contrib/project.py000644 000765 000024 00000020046 12277461502 020574 0ustar00jforcierstaff000000 000000 """ Useful non-core functionality, e.g. functions composing multiple operations. """ from __future__ import with_statement from os import getcwd, sep import os.path from datetime import datetime from tempfile import mkdtemp from fabric.network import needs_host, key_filenames, normalize from fabric.operations import local, run, sudo, put from fabric.state import env, output from fabric.context_managers import cd __all__ = ['rsync_project', 'upload_project'] @needs_host def rsync_project( remote_dir, local_dir=None, exclude=(), delete=False, extra_opts='', ssh_opts='', capture=False, upload=True, default_opts='-pthrvz' ): """ Synchronize a remote directory with the current project directory via rsync. Where ``upload_project()`` makes use of ``scp`` to copy one's entire project every time it is invoked, ``rsync_project()`` uses the ``rsync`` command-line utility, which only transfers files newer than those on the remote end. ``rsync_project()`` is thus a simple wrapper around ``rsync``; for details on how ``rsync`` works, please see its manpage. ``rsync`` must be installed on both your local and remote systems in order for this operation to work correctly. This function makes use of Fabric's ``local()`` operation, and returns the output of that function call; thus it will return the stdout, if any, of the resultant ``rsync`` call. ``rsync_project()`` takes the following parameters: * ``remote_dir``: the only required parameter, this is the path to the directory on the remote server. Due to how ``rsync`` is implemented, the exact behavior depends on the value of ``local_dir``: * If ``local_dir`` ends with a trailing slash, the files will be dropped inside of ``remote_dir``. E.g. ``rsync_project("/home/username/project", "foldername/")`` will drop the contents of ``foldername`` inside of ``/home/username/project``. * If ``local_dir`` does **not** end with a trailing slash (and this includes the default scenario, when ``local_dir`` is not specified), ``remote_dir`` is effectively the "parent" directory, and a new directory named after ``local_dir`` will be created inside of it. So ``rsync_project("/home/username", "foldername")`` would create a new directory ``/home/username/foldername`` (if needed) and place the files there. * ``local_dir``: by default, ``rsync_project`` uses your current working directory as the source directory. 
This may be overridden by specifying ``local_dir``, which is a string passed verbatim to ``rsync``, and thus may be a single directory (``"my_directory"``) or multiple directories (``"dir1 dir2"``). See the ``rsync`` documentation for details. * ``exclude``: optional, may be a single string, or an iterable of strings, and is used to pass one or more ``--exclude`` options to ``rsync``. * ``delete``: a boolean controlling whether ``rsync``'s ``--delete`` option is used. If True, instructs ``rsync`` to remove remote files that no longer exist locally. Defaults to False. * ``extra_opts``: an optional, arbitrary string which you may use to pass custom arguments or options to ``rsync``. * ``ssh_opts``: Like ``extra_opts`` but specifically for the SSH options string (rsync's ``--rsh`` flag.) * ``capture``: Sent directly into an inner `~fabric.operations.local` call. * ``upload``: a boolean controlling whether file synchronization is performed up or downstream. Upstream by default. * ``default_opts``: the default rsync options ``-pthrvz``, override if desired (e.g. to remove verbosity, etc). Furthermore, this function transparently honors Fabric's port and SSH key settings. Calling this function when the current host string contains a nonstandard port, or when ``env.key_filename`` is non-empty, will use the specified port and/or SSH key filename(s). For reference, the approximate ``rsync`` command-line call that is constructed by this function is the following:: rsync [--delete] [--exclude exclude[0][, --exclude[1][, ...]]] \\ [default_opts] [extra_opts] <local_dir> <host_string>:<remote_dir> .. versionadded:: 1.4.0 The ``ssh_opts`` keyword argument. .. versionadded:: 1.4.1 The ``capture`` keyword argument. .. versionadded:: 1.8.0 The ``default_opts`` keyword argument. """ # Turn single-string exclude into a one-item list for consistency if not hasattr(exclude, '__iter__'): exclude = (exclude,) # Create --exclude options from exclude list exclude_opts = ' --exclude "%s"' * len(exclude) # Double-backslash-escape exclusions = tuple([str(s).replace('"', '\\\\"') for s in exclude]) # Honor SSH key(s) key_string = "" keys = key_filenames() if keys: key_string = "-i " + " -i ".join(keys) # Port user, host, port = normalize(env.host_string) port_string = "-p %s" % port # RSH rsh_string = "" rsh_parts = [key_string, port_string, ssh_opts] if any(rsh_parts): rsh_string = "--rsh='ssh %s'" % " ".join(rsh_parts) # Set up options part of string options_map = { 'delete': '--delete' if delete else '', 'exclude': exclude_opts % exclusions, 'rsh': rsh_string, 'default': default_opts, 'extra': extra_opts, } options = "%(delete)s%(exclude)s %(default)s %(extra)s %(rsh)s" % options_map # Get local directory if local_dir is None: local_dir = '../' + getcwd().split(sep)[-1] # Create and run final command string if host.count(':') > 1: # Square brackets are mandatory for IPv6 rsync address, # even if port number is not specified remote_prefix = "[%s@%s]" % (user, host) else: remote_prefix = "%s@%s" % (user, host) if upload: cmd = "rsync %s %s %s:%s" % (options, local_dir, remote_prefix, remote_dir) else: cmd = "rsync %s %s:%s %s" % (options, remote_prefix, remote_dir, local_dir) if output.running: print("[%s] rsync_project: %s" % (env.host_string, cmd)) return local(cmd, capture=capture) def upload_project(local_dir=None, remote_dir="", use_sudo=False): """ Upload the current project to a remote system via ``tar``/``gzip``. ``local_dir`` specifies the local project directory to upload, and defaults to the current working directory.
``remote_dir`` specifies the target directory to upload into (meaning that a copy of ``local_dir`` will appear as a subdirectory of ``remote_dir``) and defaults to the remote user's home directory. ``use_sudo`` specifies which method should be used when executing commands remotely. ``sudo`` will be used if use_sudo is True, otherwise ``run`` will be used. This function makes use of the ``tar`` and ``gzip`` programs/libraries, thus it will not work too well on Win32 systems unless one is using Cygwin or something similar. It will attempt to clean up the local and remote tarfiles when it finishes executing, even in the event of a failure. .. versionchanged:: 1.1 Added the ``local_dir`` and ``remote_dir`` kwargs. .. versionchanged:: 1.7 Added the ``use_sudo`` kwarg. """ runner = use_sudo and sudo or run local_dir = local_dir or os.getcwd() # Remove final '/' in local_dir so that basename() works local_dir = local_dir.rstrip(os.sep) local_path, local_name = os.path.split(local_dir) tar_file = "%s.tar.gz" % local_name target_tar = os.path.join(remote_dir, tar_file) tmp_folder = mkdtemp() try: tar_path = os.path.join(tmp_folder, tar_file) local("tar -czf %s -C %s %s" % (tar_path, local_path, local_name)) put(tar_path, target_tar, use_sudo=use_sudo) with cd(remote_dir): try: runner("tar -xzf %s" % tar_file) finally: runner("rm -f %s" % tar_file) finally: local("rm -rf %s" % tmp_folder) Fabric-1.8.2/fabfile/__init__.py000644 000765 000024 00000001512 12277451120 017357 0ustar00jforcierstaff000000 000000 """ Fabric's own fabfile. """ from __future__ import with_statement import nose from fabric.api import abort, local, task import docs import tag from utils import msg @task(default=True) def test(args=None): """ Run all unit tests and doctests. Specify string argument ``args`` for additional args to ``nosetests``. """ default_args = "-sv --with-doctest --nologcapture --with-color" default_args += (" " + args) if args else "" nose.core.run_exit(argv=[''] + default_args.split()) @task def upload(): """ Build, register and upload to PyPI """ with msg("Uploading to PyPI"): local('python setup.py sdist register upload') @task def release(force='no'): """ Tag, push tag to Github, & upload new version to PyPI. """ tag.tag(force=force, push='yes') upload() Fabric-1.8.2/fabfile/docs.py000644 000765 000024 00000001015 12257074160 016551 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from fabric.api import lcd, local, task @task(default=True) def build(clean='no', browse_='no'): """ Generate the Sphinx documentation. """ c = "" if clean.lower() in ['yes', 'y']: c = "clean " b = "" with lcd('docs'): local('make %shtml%s' % (c, b)) if browse_.lower() in ['yes', 'y']: browse() @task def browse(): """ Open the current dev docs in a browser tab. """ local("open docs/_build/html/index.html") Fabric-1.8.2/fabfile/tag.py000644 000765 000024 00000010203 12277451120 016370 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from contextlib import nested from fabric.api import abort, hide, local, settings, task # Need to import this as fabric.version for reload() purposes import fabric.version # But nothing is stopping us from making a convenient binding! 
_version = fabric.version.get_version from utils import msg def _seek_version(cmd, txt): with nested(hide('running'), msg(txt)): cmd = cmd % _version('short') return local(cmd, capture=True) def current_version_is_tagged(): return _seek_version( 'git tag | egrep "^%s$"', "Searching for existing tag" ) def current_version_is_changelogged(filename): return _seek_version( 'egrep "^\* :release:\`%%s " %s' % filename, "Looking for changelog entry" ) def update_code(filename, force): """ Update version data structure in-code and commit that change to git. Normally, if the version file has not been modified, we abort assuming the user quit without saving. Specify ``force=yes`` to override this. """ raw_input("Version update in %r required! Press Enter to load $EDITOR." % filename) with hide('running'): local("$EDITOR %s" % filename) # Try to detect whether user bailed out of the edit with hide('running'): has_diff = local("git diff -- %s" % filename, capture=True) if not has_diff and not force: abort("You seem to have aborted the file edit, so I'm aborting too.") return filename def commits_since_last_tag(): """ Has any work been done since the last tag? """ with hide('running'): return local("git log %s.." % _version('short'), capture=True) @task(default=True) def tag(force='no', push='no'): """ Tag a new release. Normally, if a Git tag exists matching the current version, and no Git commits appear after that tag, we abort assuming the user is making a mistake or forgot to commit their work. To override this -- i.e. to re-tag and re-upload -- specify ``force=yes``. We assume you know what you're doing if you use this. By default we do not push the tag remotely; specify ``push=yes`` to force a ``git push origin <tag>``. """ force = force.lower() in ['y', 'yes'] with settings(warn_only=True): changed = [] # Does the current in-code version exist as a Git tag already? # If so, this means we haven't updated the in-code version specifier # yet, and need to do so. if current_version_is_tagged(): # That is, if any work has been done since. Sanity check! if not commits_since_last_tag() and not force: abort("No work done since last tag!") # Open editor, update version version_file = "fabric/version.py" changed.append(update_code(version_file, force)) # If the tag doesn't exist, the user has already updated version info # and we can just move on. else: print("Version has already been updated, no need to edit...") # Similar process but for the changelog. changelog = "docs/changelog.rst" if not current_version_is_changelogged(changelog): changed.append(update_code(changelog, force)) else: print("Changelog already updated, no need to edit...") # Commit any changes if changed: with msg("Committing updated version and/or changelog"): reload(fabric.version) local("git add %s" % " ".join(changed)) local("git commit -m \"Cut %s\"" % _version('verbose')) local("git push") # At this point, we've incremented the in-code version and just need to # tag it in Git.
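# (A note on the flags built below, for the curious: ``-a`` makes an
# annotated tag, ``-m`` supplies its message, and ``-f`` is added only when
# ``force`` was requested, overwriting any existing tag of the same name.)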
f = 'f' if force else '' with msg("Tagging"): local("git tag -%sam \"Fabric %s\" %s" % ( f, _version('normal'), _version('short') )) # And push to the central server, if we were told to if push.lower() in ['y', 'yes']: with msg("Pushing"): local("git push origin %s" % _version('short')) Fabric-1.8.2/fabfile/utils.py000644 000765 000024 00000000426 12257074160 016766 0ustar00jforcierstaff000000 000000 from __future__ import with_statement from contextlib import contextmanager from fabric.api import hide, puts @contextmanager def msg(txt): puts(txt + "...", end='', flush=True) with hide('everything'): yield puts("done.", show_prefix=False, flush=True)
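# Example usage of ``msg`` (a hypothetical sketch mirroring how the fabfile's
# own tasks use it):
#
#     with msg("Uploading to PyPI"):
#         local('python setup.py sdist upload')
#
# This prints "Uploading to PyPI...", hides the wrapped command's output via
# hide('everything'), and finishes the same line with "done." on exit.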
``capture=False`` and ``output.stdout`` (or ``.stderr``) was also ``False``.* :issue:`301`: Fixed a bug in `~fabric.operations.local`'s behavior when--------Bugfixes This release also includes all applicable changes from the 0.9.5 release... note::=====================================Changes in version 1.0.1 (2011-03-27) Merritt for the catch. trailing slashes in ``local_dir`` affect ``remote_dir``. Thanks to Mark* Clarified the behavior of `~fabric.contrib.project.rsync_project` re: how point users to `~fabric.context_managers.lcd` for modifying local paths.* Updated the API documentation for `~fabric.context_managers.cd` to explicitly-------------Documentation diagnosis and patch. Max Arnold and Paul Oswald for the detailed reports and to Max for the to have been caused by an I/O race condition. This has been fixed. Thanks to and input problems (e.g. sudo prompts incorrectly rejecting passwords) appear* :issue:`352` (also :issue:`320`): Seemingly random issues with output lockup ``mirror_local_mode``. Thanks to Roman Imankulov for catch & patch.* :issue:`337`: Fix logic bug in `~fabric.operations.put` preventing use of Thanks to Matthew Woodcraft and Connor Smith for the catch. default behavior (behaving as if the setting were ``True``) has not changed. all cases. This required changing its default value to ``None``, but the ``combine_stderr`` kwarg so that it correctly overrides the global setting in* :issue:`324`: Update `~fabric.operations.run`/`~fabric.operations.sudo`'s `~fabric.operations.local` on Windows platforms.* :issue:`258`: Bugfix to a previous, incorrectly applied fix regarding--------Bugfixes This release also includes all applicable changes from the 0.9.7 release... note::=====================================Changes in version 1.0.2 (2011-06-24) initial catch & patch.ad$DAhk E y x - v + l  _ C e{Aes$={-j]sjiWEDC-----------------Feature additions patch. returning None. Thanks to Jacob Kaplan-Moss and Travis Swicegood for catch + return value and returns that value on subsequent invocations, instead of* :issue:`221`: `~fabric.decorators.runs_once` now memoizes the wrapped task's intuitive. its ``partial`` kwarg from ``True`` to ``False`` in order to be safer/more* :issue:`172`: `~fabric.contrib.files.append` has changed the default value of patch covering part of this change. ``mirror_local_mode=True`` to get this behavior. Thanks to Paul Smith for a to mirror local file modes. Instead, you'll need to specify* :issue:`121`: `~fabric.operations.put` will no longer automatically attempt is still available by specifying ``capture=True``.) return value as in `~fabric.operations.run`/`~fabric.operations.sudo` (which be more intuitive, at the cost of no longer defaulting to the same rich ``capture`` kwarg, from ``True`` to ``False``. This was changed in order to* :issue:`88`: `~fabric.operations.local` has changed the default value of its brainstorming solutions. changed. Thanks to Juha Mustonen, Erich Heine and Max Arnold for incompatible because the default path/filename for downloaded files has filenames when downloading single or multiple files. This is backwards* :issue:`79`: `~fabric.operations.get` now allows increased control over local remote paths as being relative to the remote home directory. `~fabric.context_managers.cd`. 
Previously they would always treat relative the remote current-working-directory changes applied by* :issue:`61`: `~fabric.operations.put` and `~fabric.operations.get` now honor changes in remote programs which operate differently when attached to a tty. :ref:`combine-stderr` kwarg/env var, may result in significant behavioral changed from ``False`` to ``True``. This, plus the addition of the `~fabric.operations.sudo` have had the default value of their ``pty`` kwargs* As part of the changes made in :issue:`7`, `~fabric.operations.run` and `~fabric.operations.run`. had their order swapped around to more closely match* The ``user`` and ``pty`` kwargs in `~fabric.operations.sudo`'s signature have to, but not entirely contained within, :issue:`159`. addition to forward slashes, in its ``before`` and ``after`` kwargs. Related* `~fabric.contrib.files.sed` now escapes single-quotes and parentheses in line with the rest of the module. ``filename`` as the first argument. These two functions have been brought in other functions in the `contrib.files ` module had previously had the ``filename`` argument in the second position, whereas all* `~fabric.contrib.files.contains` and `~fabric.contrib.files.append`break your 0.9.x based fabfiles!The below changes are **backwards incompatible** and have the potential to------------------------------Backwards-incompatible changes testing of Fabric (and, to an extent, user fabfiles) possible. during runs of our test suite. This makes quick, configurable full-stack Fabric now leverages Paramiko to provide a stub SSH and SFTP server for use* :issue:`185`: Mostly of interest to those contributing to Fabric itself, in the near future. See :ref:`fabfile-discovery` for details. organization of nontrivial fabfiles and paves the way for task namespacing (directories) instead of just modules (single files). This allows for easier* Added functionality for loading fabfiles which are Python packages deserves much of the credit. played a large part in implementing and/or collecting these changes and ticket line-items below for details. Erich Heine (``sophacles`` on IRC) capability, and increased control over local file paths. See the individualad@e.2 l  C  K  q P u ' FG @oby.MycbV `~fabric.contrib.files.upload_template`'s docstring.* Added a link to Python's documentation for string interpolation in folks were still assuming it would run on Python 2.4.* README now makes the Python 2.5+ requirement up front and explicit; some* API, tutorial and usage docs updated with the above new features.---------------------Documentation updates acts as a boolean as well.) a list itself containing any paths which failed to transfer, which naturally `~fabric.operations.run` and friends. (In this case, ``.failed`` is actually also allow access to ``.failed`` and ``.succeeded`` attributes, just like uploaded remotely or downloaded locally, respectively. These return values return values which may be iterated over to access the paths of files* :issue:`274`: `~fabric.operations.put`/`~fabric.operations.get` now have directories. `~fabric.operations.put`/`~fabric.operations.get`'s local working controlling `~fabric.operations.local`'s current working directory and* :issue:`245`: Added the `~fabric.context_managers.lcd` context manager for arguments. 
file-like objects as well as local file paths for their ``local_path``* :issue:`217`: `~fabric.operations.get`/`~fabric.operations.put` now accept terminal's dimensions instead of going with the default.* :issue:`193`: When requesting a remote pseudo-terminal, use the invoking multi-connection scenarios.* :issue:`192`: Added per-user/host password cache to assist in non-newline-terminated printing. `~fabric.utils.puts` allowing for convenient unbuffered, output. Also added `~fabric.utils.fastprint`, an alias to greater control over fabfile-generated (as opposed to Fabric-generated)* :issue:`151`: Added a `~fabric.utils.puts` utility function, which allows ``.return_code`` attributes, and so forth. empty string (if ``capture=False``) which exposes the ``.failed`` and is a string containing the command's stdout (if ``capture=True``) or the object as `~fabric.operations.run`/`~fabric.operations.sudo` do, so that it* :issue:`148`: `~fabric.operations.local` now returns the same "rich" string (stderr) as an attribute on the return value, alongside e.g. ``.failed``. `~fabric.operations.local` now provide access to their standard error* :issue:`55`: `~fabric.operations.run`, `~fabric.operations.sudo` and to Erich Heine and Max Arnold.* :issue:`28`: Allow shell-style globbing in `~fabric.operations.get`. Thanks `~fabric.operations.run`/`~fabric.operations.sudo` ``pty`` argument. command-line flag (:option:`--no-pty`) for global control over the* :issue:`27`: Added environment variable (:ref:`always-use-pty`) and easier management of persistent state across commands.* :issue:`23`: Added `~fabric.context_managers.prefix` context manager for user ``npmap`` for suggestions and patches. uploading of files to privileged locations. Thanks to Erich Heine and IRC* :issue:`2`: Added ``use_sudo`` kwarg to `~fabric.operations.put` to allow with non-fabfile Python scripts hanging after execution finishes. `~fabric.network.disconnect_all`, allowing library users to avoid problems* Refactored SSH disconnection code out of the main ``fab`` loop into also been backported to Fabric's 0.9 series.) which is simply the opposite of the ``.failed`` attribute. (This addition has `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local`* Added convenience ``.succeeded`` attribute to the return values of effective ``$PATH``.* Added `~fabric.context_managers.path` context manager for modifying commands' of a Git clone of the Fabric source code repository. print the Git SHA1 hash of the current checkout, if the user is working off* Prerelease versions of Fabric (starting with the 1.0 prereleases) will nowad _zypgf]76 N    x w )  J p ,  } 2 ( 09nXWs)('}|/GFEY; `~fabric.operations.run`/`~fabric.operations.sudo` return values, from the* :issue:`254`: Backported the ``.stderr`` and ``.succeeded`` attributes on `~fabric.operations.local`.* :issue:`255`: Added ``stderr`` and ``succeeded`` attributes to-----------------Feature additionsThe following changes were implemented in Fabric 0.9.3:=====================================Changes in version 0.9.3 (2010-11-12) allow control over previously automatic single-quote escaping.* :issue:`290`: Added ``escape`` kwarg to `~fabric.contrib.files.append` to docs page.* Mentioned our `Twitter account `_ on the main* Added :doc:`documentation ` for using Fabric as a library.-----------------Feature additionsThe following changes were implemented in Fabric 0.9.4:=====================================Changes in version 0.9.4 (2011-02-18) PyPM and its download URLs. 
Thanks to Sridhar Ratnakumar for the patch.* :issue:`291`: Updated the PyPM-related install docs re: recent changes in the documentation and removed the Python 2.5 check from ``setup.py``.* :issue:`228`: Added description of the PyCrypto + pip + Python 2.5 problem to---------------------Documentation updates Ernst for the catch. previously set in ``env`` no longer raises KeyErrors. Whoops. Thanks to Adam* :issue:`316`: Use of `~fabric.context_managers.settings` with key names not Thanks to Michael Bravo for the report and Rick Harding for the patch.* :issue:`305`: Strip whitespace from hostnames to help prevent user error. catch + patches. OpenBSD and NetBSD `~fabric.contrib.files.sed`. Thanks to Morgan Lefieux for* :issue:`288`: Use temporary files to work around the lack of a ``-i`` flag in proposed solution. Thanks to Antti Kaihola for the catch and Rick Harding for testing the* :issue:`287`: Fix bug in password prompt causing occasional tracebacks. report. last ``@`` found instead of the first. Thanks to Thadeus Burgess for the systems. Fabric's host-string parser now splits username and hostname at the* :issue:`268`: Allow for ``@`` symbols in usernames, which is valid on some troubleshooting. clearing connection cache. Thanks to Jason Gerry for the report &* :issue:`264`: Fix edge case in `~fabric.operations.reboot` by gracefully regexen ending with ``$``. Thanks to Antti Kaihola for the catch.* :issue:`261`: Fix bug in `~fabric.contrib.files.comment` which truncated Heimbuerger and Raymond Cote for catch + suggested fixes. space/quote problems in `~fabric.operations.local`. Thanks to Henrik* :issue:`258`: Modify subprocess call on Windows platforms to avoid ``get_transport()``.* :issue:`37`: Internal refactoring of a Paramiko call from ``_transport`` to--------BugfixesThe following changes were implemented in Fabric 0.9.5:=====================================Changes in version 0.9.5 (2011-03-21) if they were one character per line. This has been fixed. instead of ``types.StringTypes``, causing it to split up Unicode strings as* :issue:`347`: `~fabric.contrib.files.append` incorrectly tested for ``str``--------BugfixesThe following changes were implemented in Fabric 0.9.6:=====================================Changes in version 0.9.6 (2011-04-29)* :issue:`329`: `~fabric.operations.reboot` would have problems reconnecting post-reboot (resulting in a traceback) if ``env.host_string`` was not fully-formed (did not contain user and port specifiers.) This has been fixed.--------BugfixesThe following changes were implemented in Fabric 0.9.7:=====================================Changes in version 0.9.7 (2011-06-23)ad(QMc; D + * )   ~ . : 3 q b  dX fa}.vml|- imported into Fabric under Python 2.6.* Disabled some DeprecationWarnings thrown by Paramiko when that library is at the front of the PYTHONPATH during import. fabfile due to a PYTHONPATH order issue. User fabfiles are now always loaded* Fixed a bug where Fabric's own internal fabfile would pre-empt the user's input. appropriate environment variables (hostname, username, port) based on user* The interactive "what host to connect to?" prompt now correctly updates the--------Bugfixes list of task names. <-l>` may now use :option:`--shortlist` to get a plain, newline-separated themselves performing text manipulation on the output of :option:`--list* :issue:`208`: Users rolling their own shell completion or who otherwise find non-newline-terminated printing. `~fabric.utils.puts` allowing for convenient unbuffered, output. 
Also added `~fabric.utils.fastprint`, an alias to greater control over fabfile-generated (as opposed to Fabric-generated)* :issue:`151`: Added a `~fabric.utils.puts` utility function, which allows e.g. ``@hosts(*iterable)``. will now expand a single, iterable argument instead of requiring one to use* :issue:`144`: `~fabric.decorators.hosts` (and `~fabric.decorators.roles`) with a backslash. Thanks to Erich Heine for the patch.* :issue:`137`: Commas used to separate per-task arguments may now be escaped own output in the future.) (:issue:`101` is still open: we plan to leverage the new module in Fabric's* :issue:`101`: Added `~fabric.colors` module with basic color output support. opted to see debug-level output.* :issue:`52`: Full tracebacks during aborts are now displayed if the user has via ``fab [options] -- [shell command]``. See :ref:`arbitrary-commands`.* :issue:`29`: Added support for arbitrary command-line-driven anonymous tasks been specified. access to the local system username, even if an alternate remote username has* :ref:`env.local_user ` was added, providing easy and permanent integration.* Added `contrib.django ` module with basic Django* Added support for lazy (callable) role definition values in ``env.roledefs``. ``[myserver] Executing task 'foo'``.)* Added output lines informing the user of which tasks are being executed (e.g. :ref:`fabfile-discovery`. (directories) instead of just modules (single files.) See* Added functionality for loading fabfiles which are Python packages Holscher for the patch. the prerequisites from the ``requirements.txt`` installed). Thanks to Eric* ``python setup.py test`` now runs Fabric's test suite (provided you have all restarted. Fabric to issue a reboot command and then reconnect after the system has* The `~fabric.operations.reboot` operation has been added, providing a way for-----------------Feature additionsThe following changes were implemented in Fabric 0.9.2:=====================================Changes in version 0.9.2 (2010-09-06) Turicas for the catch.* :issue:`239`: Fixed typo in execution usage docs. Thanks to Pradeep Gowda and---------------------Documentation updates Thanks to Aaron Levy for the catch + patch.* :issue:`242`: Empty string values in task CLI args now parse correctly. to IRC user ``orkaa`` for the report. here>``) will no longer blow up when invoked with no fabfile present. Thanks* :issue:`230`: Arbitrary or remainder commands (``fab -- `. Thanks to Erich Heine.* :issue:`110`: Alphabetized :ref:`the CLI argument command reference (technically, to the ``setup.py`` ``long_description`` field).* :issue:`95`: Added a link to a given version's changelog on the PyPI page Pearson for providing the necessary perspective. package maintainers should be downloading tarballs from. Thanks to James* :issue:`64`: Updated :ref:`installation docs ` to clarify where alternative to ``screen``. Thanks to Erich Heine for the tip.* :issue:`17`: Added :ref:`note to FAQ ` re: use of ``dtach`` as source-only text file. (Said FAQ was also slightly expanded with new FAQs.) FAQ <../faq>` to the Sphinx documentation instead of storing it as a specific Redmine ticket. The largest of these is the addition of :doc:`the* A number of small to medium documentation tweaks were made which had no---------------------Documentation updates argument to ``env.cwd``'s previous value. 
``env.cwd`` when given an absolute path, instead of naively appending its* :issue:`166`: `~fabric.context_managers.cd` now correctly overwrites Thanks to Carl Meyer again on this one. matching the behavior of `~fabric.operations.run`/`~fabric.operations.sudo`.* :issue:`132`: `~fabric.operations.local` now calls ``strip`` on its stdout, encounter an exception. Thanks to Carl Meyer for catch + patch.* :issue:`130`: Context managers now correctly clean up ``env`` if they instead of displaying help. in a location with no fabfile would complain about the lack of fabfile prints help, even when no fabfile can be found. Previously, calling ``fab``* :issue:`75`: ``fab``, when called with no arguments or (useful) options, now--------Bugfixes :ref:`no_keys`. of the Paramiko ``connect`` call -- specifically :ref:`no_agent` and* :issue:`141`: Added some more CLI args/env vars to allow user configuration docstring as a header, if there is one.* :issue:`112`: ``fab --list`` now prints out the fabfile's module-level and discussion. test looks for exact matches or not. Thanks to Jonas Nockert for the catch allowing control over whether the "don't append if given text already exists"* :issue:`82`: `~fabric.contrib.files.append` now offers a ``partial`` kwarg-----------------Feature additionsThe following changes were implemented in Fabric 0.9.1:=====================================Changes in version 0.9.1 (2010-05-28) Paramiko 1.7.5 while getting some much needed bugfixes in Paramiko 1.7.6. ``setup.py`` to require Paramiko >=1.7.6. This lets us skip the known-buggy* :issue:`86`, :issue:`158`: Removed the bundled Paramiko 1.7.4 and updated the-----------------Packaging updates to include additional techniques and be more holistic.* :issue:`216`: Overhauled the :ref:`process backgrounding FAQ ` possible edge case some Windows 64-bit Python users may encounter.* :issue:`194`: Added a note to :doc:`the install docs ` about a Ted Nyman for the catch.* :issue:`173`: Simple but rather embarrassing typo fix in README. Thanks to* Added a few new items to the :doc:`FAQ `.---------------------Documentation updates code paths (e.g. importing of ``pywin32``). Python installs whose ``sys.platform`` says ``'win32'`` will use Windows-only* :issue:`123`: Removed Cygwin from the "are we on Windows" test; now, only and Morgan Goose. honor the SSH port and identity file settings. Thanks to Mitch Matuson* :issue:`44`, :issue:`63`: Modified `~fabric.contrib.project.rsync_project` toad5`]98S  D  _ - ,   r $ # ?  ~ o ` _  yTS p&XBmDC0~on#=In Fabric 0.9, this dichotomy has been removed, and **"deep mode" is thefunction would run once per host.current host list. We also offered "deep mode", in which your entire taskthat made connections (such as `run` or `sudo`) would run once per host in themode": your tasks, as Python code, ran only once, and any calls to functionsFabric's default mode of use, in prior versions, was what we called "broad~~~~~~~~~~~~~~Execution mode env.foo = 'bar' def my_task(): from fabric.api import envone will now be explicitly importing the object like so:: config.foo = 'bar' def my_task():one might have seen fabfiles like this:::mod:`state` submodule and is importable via ``fabric.api``, so where beforeobject-attribute style access to its data.) 
:data:`env` resides in theotherwise remains mostly the same (it allows both dictionary and(including Fabric config options) **has been renamed** to :data:`env`, butThe ``config`` object previously used to access and set internal state~~~~~~~~~~~~~~~~~~~~~~~~~~~~Environment/config variablesversion or fork, we may provide a link to such a project if one arises.Finally, note that while we will not officially support a 2.4-compatibleservers, we're hoping this doesn't cut out too many folks.Fabric from their workstations, which are typically more up-to-date thanreality of operating system packaging concerns. Given that most users useand 2.6. We feel this is a decent compromise between new features and therecent stable releases of the Python 2.x line, which at time of writing is 2.5With this change we're setting an official policy to support the two mostfunctions available in that version.2.5 or newer**, in order to take advantage of the various new features andpoint during its lifetime. Fabric is once again **only compatible with PythonFabric started out Python 2.5-only, but became largely 2.4 compatible at one~~~~~~~~~~~~~~Python versionSee :doc:`../usage/fabfiles` for more information.Fabric submodules (e.g. ``from fabric.decorators import hosts``.)technically not Python best practices; or you may import directly from theYou may, if you wish, use ``from fabric.api import *``, though this is sudo('mkdir /var/www/newsite') run('ls /var/www') def my_task(): @hosts('a', 'b') from fabric.api import hosts, run, sudosimply needs to import those objects from the new API module ``fabric.api``::The above fabfile uses `hosts`, `run` and `sudo`, and so in Fabric 0.9 one sudo('mkdir /var/www/newsite') run('ls /var/www') def my_task(): @hosts('a', 'b')sample fabfile that worked with 0.1 and earlier::at the top of your fabfile; they are no longer magically available. Here's aYou will need to **explicitly import any and all methods or decorators used**,~~~~~~~Importswill need to be made to one's fabfiles are as follows:You'll want to at least skim the entire document, but the primary changes that-------------Major changesprogram.command-line component has also been redone to behave more like a typical Unix"magical" behavior and make things more simple and Pythonic; the ``fab``completely rewritten and reorganized and an attempt has been made to removeFabric's rewrite between versions 0.1 and 0.9. The codebase has been almostThis document details the various backwards-incompatible changes made during===================================Changes in version 0.9 (2009-11-08) PyPM. Thanks to Sridhar Ratnakumar for the tip.* :issue:`127`: Added :ref:`note to install docs ` re: ActiveState'sad fPi]#" T m ! o n - , ^ - , +  z 7 6 RGFgI/.ZY  v.a`y65fe * Additionally, (and this appeared to be undocumented) the ``default`` will print a warning to the user if it does so. * It will now overwrite pre-existing values in the environment dict, but obviously, as its previous behavior was confusing in a few ways:* In addition to the above changes, `prompt` has been updated to behave more functions to wrap `prompt` and handle "empty" input on their own terms.* `prompt` now considers the empty string to be valid input; this allows other it may still optionally store the entered value in `env` as well.* `prompt`'s primary function is now to return a value to the caller, although latter) and should stay in the documentation only. 
authors, not fab users (even though the former is a subset of the available through the command line -- those topics impact fabfile * Furthermore, there is no longer a listing of Fabric's programming API* The old ``help`` command is now the typical Unix options ``-h``/``--help``. their respective text files distributed with the software. typically offer this. Users can always view the license and warranty info in* The old ``about`` command has been removed; other Unix programs don't print version and exit. you should now use the standard ``--version``/``-V`` command-line options to* Fabric's version header is no longer printed every time the program runs; types already, so we felt it was worth the tradeoff. variables, but the same holds true for e.g. Python's builtin methods and typing. Users will naturally need to be careful not to override these ``host``. This looks cleaner and feels more natural, and requires less* Environment variables such as ``fab_host`` have been renamed to simply e.g. `first`. to the first valid key. This method still exists but has been renamed to took one or more key strings as arguments, and returned the value attached* The old ``config`` object (now :data:`env`) had a ``getAny`` method which of honoring Unix filename conventions, it's now ``~/.fabricrc``.* The Fabric config file location used to be ``~/.fabric``; in the interestsIn no particular order:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Other backwards-incompatible changesFabric at some point if a need becomes apparent.that automatically makes use of ``env`` values may find its way back intoAs with the execution modes, a special string interpolation function or method print("Your current username is %s" % env.user)Fabric 0.9 and up simply reference ``env`` variables as normal:: print("Your current username is $(fab_user)")For example, Fabric 0.1 had to insert the current username like so::refer to ``env.host_string``, ``env.user`` and so forth.per host, you may build strings normally (e.g. with the ``%`` operator) andlonger necessary and has been removed**. Because your tasks are executed oncecurrent state of execution to be represented in one's operations. **This is nonotation, e.g. ``"hostname: $(fab_host)"`` -- had to be used to allow thespecial string formatting technique -- the use of a bash-like dollar signBecause of how Fabric used to run in "broad mode" (see previous section) a~~~~~~~~~~~~~~~~~~~~~~~~~~~"Lazy" string interpolationdeep mode made the most sense as the primary mode of use.internals are refactored and expanded, but for now we've simplified things, andOther modes of execution such as the old "broad mode" may return as Fabric'sreturn the output from the server.statements and so forth, and allows operations like `run` to unambiguouslyFabfiles much more like regular Python code, including the use of ``if``method Fabric uses to perform all operations**. This allows you to treat yourad#Re; p o M f e    z y 2 [  @WFE>vuFtPO<`* Removed the old ``fab shell`` functionality, since the move to "just Python" that needs something Fabric doesn't provide right now! don't hesitate to email the list or the developers if you have a use case The execution model is still subject to change as Fabric evolves. Please .. 
note:: symlink() upload() build() def deploy(): run('ln -s /srv/media/photos /var/www/app/photos') def symlink(): run('tar xzf /tmp/app.tgz') put('app.tgz', '/tmp/app.tgz') def upload(): local('make clean all') def build(): right order, e.g.:: that it "depends on" command B, create a command C which calls A and B in the For example, instead of having command A say need performed for a given overarching task. make "meta" commands that just call whatever chain of "real" commands you decorator doesn't make a lot of sense: instead, it's safest/best to simply* As things currently stand with the execution model, the ``depends`` used, so the decorator has been removed. function, and the function is more versatile in terms of where it may be* Couldn't think of a good reason for `require` to be a decorator *and* a was never actually implemented. It now works as advertised.* It looks like the regex behavior of the ``validate`` argument to `prompt` and then call it multiple times in a normal fashion. command only runs once, you may now use `runs_once` to decorate the function Thus, where you might have used ``invoke`` multiple times to ensure a given function will only execute one time during the lifetime of a ``fab`` run. (living in :mod:`fabric.decorators`). When used to decorate a function, that* ``invoke`` has been turned on its head, and is now the `runs_once` decorator may build on top of the core Fabric operations. is intended to be a new tree of submodules for housing "extra" code which :mod:`fabric.contrib` (specifically, :mod:`fabric.contrib.project`), which* ``rsyncproject`` and ``upload_project`` have been moved into string message, printing it to the user and then calling ``sys.exit(1)``. :mod:`fabric.utils`. It is otherwise the same as before, taking a single* ``abort`` is no longer an "operation" *per se* and has been moved to import mechanisms in order to stitch multiple fabfiles together.* ``load`` has been removed; Fabric is now "just Python", so use Python's "broad mode".* ``local_per_host`` has been removed, as it only applied to the now-removed ``use_shell`` and also applies to both `run` and `sudo`. * Additionally, the global ``sudo_noshell`` option has been renamed to operation. effective behavior remains the same) and has also been extended to the `run` Fabric versions) has been renamed to ``shell`` (defaults to True, so the* The ``noshell`` argument to `sudo` (added late in its life to previous was due to `get` being the old method of getting env vars.)* ``download`` is now `get` in order to match up with `put` (the name mismatch to use; that should generally be enough for most users. ``ssh``-like ``-i`` option, which allows one to specify a private key file client's ``connect`` method. This has been removed in favor of an variable as a method of passing in a Paramiko ``PKey`` object to the SSH* When connecting, Fabric used to use the undocumented ``fab_pkey`` env ``prompt(blah, msg, default=my_callable()``) so it has been removed. seemed to add unnecessary complexity (given that users may call e.g. the default message to the return value if a callable was given. This argument could take a callable as well as a string, and would simply setad9jh \[Z: | 8 y x 8 7 a O N   k j ! 
i h 8 7 fe^]sr$ZYX9ih:9ihLKkjHG* Start fledgling FAQ; extended pty option to run(); related doc tweaks.* Fix race condition in run/sudo.* Handle case where remote host SSH key doesn't match known_hosts.* upload_template() can now use Jinja2 if it's installed and user asks for it.* Update upload_template to correctly handle backing up target directories.* Add 'cd' context manager.* Straighten out how prompt() deals with trailing whitespace* contrib.files.append may now take a list of strings if desired.* Handle exceptions with strerror attributes that are None instead of strings.* Minor cleanup to package init and setup.py.* Add autodocs for fabric.contrib.console.sections; apologies for the overall change in tense.This is closer to being a straight dump of the Git changelog than the previous------------------------------Changes from alpha 3 to beta 1 bugfixes, feedback and/or patches. cluttering up this list! Thanks as always to everyone who contributed* And many, many additional bugfixes and behavioral tweaks too small to merit McNerthney.* Only load host keys when ``env.reject_unknown_keys`` is True. Thanks to Pat* Add CLI argument for setting the shell used in commands (thanks to Steve Steiner)* Add ``pty`` option to `sudo`. Thanks to José Muanis for the tip-off re: get_pty()* Make private key passphrase prompting more obvious to users.* Changed ``--list`` linewrap behavior to truncate instead.* Roles overhauled/fixed (more like hosts now) Ellis)* Test coverage tweaked and grown a small amount (thanks in part to Peter* Output controls (including CLI args, documentation) have been added* Reworked config file mechanisms a bit, added CLI flag for setting it.* Added contrib.console for console UI stuff (so far, just `confirm`)* Added contrib.files with a handful of file-centric subroutines* Lots of updates to the documentation and TODO-------------------------------Changes from alpha 2 to alpha 3* Various internal fabfile tweaks. default. kwarg; local stderr now prints by default instead of being hidden by* Reversed default value of `~fabric.operations.local`'s ``show_stderr`` interactive tasks.* Added 'capture' argument to `~fabric.operations.local` to allow local* Tweaked ``setup.py`` to use ``find_packages``. Thanks to Pat McNerthney.* User/host info no longer cleared out between commands. (e.g. hyphens.)* Parsing of host strings is now more lenient when examining the username env dict, alongside the full ``user@host:port`` host string.* Host information now available in granular form (user, host, port) in the* Added a number of TODO items based on user feedback (thanks!) thanks to Curt Micol.* Various minor tweaks to the (still in-progress) documentation, including onethe door.Sphinx-specific methods of documenting changes once the final release is outprobably occur for the rest of the alphas and betas; we hope to usemanually sifting through and editing the resulting commit messages. This willThe below list was generated by running ``git shortlog 0.9a1..0.9a2`` and then-------------------------------Changes from alpha 1 to alpha 2 set of output controls. For more info, see :doc:`../usage/output_controls`.* The undocumented `fab_quiet` option has been replaced by a much more granular fabric.foo import bar`` calls. amounts to running ``ipython`` and performing a handful of ``from * We may add it back in later as a convenient shortcut to what basically should make vanilla ``python``/``ipython`` usage of Fabric much easier.ad,>e\~1 Z V  = < W t*i |0A~4d=;],+ arise. 
Thanks to `@webengineer` for the patch.* :feature:`402` Attempt to detect stale SSH sessions and reconnect when they host list) were causing frustrating hangs. This has been fixed. large return values from the task, or simply a large number of hosts in the* :bug:`654` Parallel runs whose sum total of returned data was large (e.g. Fernando Macedo for the patch. Windows (7, at least) systems and `~fabric.operations.local`. Thanks to* :bug:`805` Update `~fabric.context_managers.shell_env` to play nice with Thanks to Alex Louden for the patch. ASCII, to prevent issues on some platforms when Unicode is encountered.* :bug:`806` Force strings given to ``getpass`` during password prompts to be* :release:`1.5.3 <2013-01-28>` Jansen for the patch. (previously, only the latter behavior was implemented). Thanks to Geert controls whether Ctrl-C is forwarded to the remote end or is captured locally* :feature:`823` Add :ref:`env.remote_interrupt ` which Thanks to Giovanni Bajo for the patch. SSH tunneling (exposing locally-visible network ports to the remote end).* :feature:`821` Add `~fabric.context_managers.remote_tunnel` to allow reverse report. stops for users lacking SSH configs. Thanks to Rodrigo Pimentel for the This allows multi-user fabfiles to enable SSH config without causing hard ` is True but the configured SSH conf file doesn't exist.* :bug:`587` Warn instead of aborting when :ref:`env.use_ssh_config catch & patch. address were not always correctly detected. Thanks to Antonio Barrero for* :bug:`839` Fix bug in `~fabric.contrib.project.rsync_project` where IPv6 Thanks to Chris Kastorff for the catch. before deriving final result (stdlib ``min()`` has odd behavior here...).* :bug:`843` Ensure string ``pool_size`` values get run through ``int()`` them in this release because they do fix incorrect, off-spec behavior. SSH config parsing changes are backwards incompatible**; we are including treatment of ``IdentityFile`` to handle multiple values. **This and related* :bug:`844` Account for SSH config overhaul in Paramiko 1.10 by e.g. updating* :release:`1.5.4 <2013-03-01>`* :release:`1.6.0 <2013-03-01>` patch. `~fabric.contrib.project.rsync_project`. Thanks to Antonio Barrero for the* :feature:`845 backported` Downstream synchronization option implemented for Konrad Hałas for catch & patch.* :bug:`367` Expand paths with tilde inside (``contrib.files``). Thanks to literal to ``env.hosts``. Thanks to Bill Tucker for catch & patch.* :bug:`861` Gracefully handle situations where users give a single string Thanks to Konrad Hałas for the patch.* :bug:`84 major` Fixed problem with missing -r flag in Mac OS X sed version. fixed. <.put>` would sometimes cause ``Unsupported operand`` errors. This has been* :bug:`871` Use of string mode values in `put(local, remote, mode="NNNN") a regression test added. now damaging whitespace in `with path(): <.path>`. This has been removed and* :bug:`870` Changes to shell env var escaping highlighted some extraneous and `.run`/`.sudo`. Thanks to Christian Long and Michael McHugh for the patch.* :bug:`864 major` Allow users to disable Fabric's auto-escaping in default globbing behavior. Thanks to Michael McHugh for the patch. real filenames containing glob patterns (``*``, ``[`` etc) can disable the* :feature:`812` Add ``use_glob`` option to `.put` so users trying to upload catch. `.upload_template`; this has been fixed. Thanks to Joseph Lawson for the* :bug:`328` `.lcd` was no longer being correctly applied to patch. blocking timeout in the ``JobQueue`` loop. 
Fabric-1.8.2/docs/_static/000755 000765 000024 00000000000 12277461504 016244 5ustar00jforcierstaff000000 000000 
Fabric-1.8.2/docs/_templates/000755 000765 000024 00000000000 12277461504 016753 5ustar00jforcierstaff000000 000000 
Fabric-1.8.2/docs/api/000755 000765 000024 00000000000 12277461504 015367 5ustar00jforcierstaff000000 000000 
Fabric-1.8.2/docs/changelog.rst000644 000765 000024 00000226536 12277461502 017303 0ustar00jforcierstaff000000 000000 :orphan:

=========
Changelog
=========

* :release:`1.8.2 <2014-02-14>`
* :release:`1.7.2 <2014-02-14>`
* :bug:`955` Quote directories created as part of ``put``'s recursive directory uploads when ``use_sudo=True`` so directories with shell meta-characters (such as spaces) work correctly. Thanks to John Harris for the catch.
* :bug:`917` Correct an issue with ``put(use_sudo=True, mode=xxx)`` where the ``chmod`` was trying to apply to the wrong location. Thanks to Remco (``@nl5887``) for catch & patch.
* :bug:`1046` Fix typo preventing use of ProxyCommand in some situations. Thanks to Keith Yang.
* :release:`1.8.1 <2013-12-24>`
* :release:`1.7.1 <2013-12-24>`
* :release:`1.6.4 <2013-12-24>` 956, 957
* :release:`1.5.5 <2013-12-24>` 956, 957
* :bug:`956` Fix pty size detection when running inside Emacs. Thanks to `@akitada` for catch & patch.
* :bug:`957` Fix bug preventing use of :ref:`env.gateway ` with targets requiring password authentication. Thanks to Daniel González, `@Bengrunt` and `@adrianbn` for their bug reports.
* :bug:`948` Handle connection failures due to server load and try connecting to hosts a number of times specified in :ref:`env.connection_attempts `.
* :release:`1.8.0 <2013-09-20>`
* :feature:`931` Allow overriding of `.abort` behavior via a custom exception-returning callable set as :ref:`env.abort_exception `. Thanks to Chris Rose for the patch.
* :support:`984 backported` Make this changelog easier to read! Now with per-release sections, generated automatically from the old timeline source format.
* :feature:`910` Added a keyword argument to rsync_project to configure the default options. Thanks to ``@moorepants`` for the patch.
* :release:`1.7.0 <2013-07-26>`
* :release:`1.6.2 <2013-07-26>`
* :feature:`925` Added `contrib.files.is_link <.is_link>`. Thanks to `@jtangas` for the patch.
* :feature:`922` Task argument strings are now displayed when using :option:`fab -d <-d>`. Thanks to Kevin Qiu for the patch.
* :bug:`912` Leaving ``template_dir`` un-specified when using `.upload_template` in Jinja mode used to cause ``'NoneType' has no attribute 'startswith'`` errors. This has been fixed. Thanks to Erick Yellott for catch & to Erick Yellott + Kevin Williams for patches.
* :feature:`924` Add new env var option :ref:`colorize-errors` to enable coloring errors and warnings. Thanks to Aaron Meurer for the patch.
* :bug:`593` Non-ASCII character sets in Jinja templates rendered within `.upload_template` would cause ``UnicodeDecodeError`` when uploaded. This has been addressed by encoding as ``utf-8`` prior to upload. Thanks to Sébastien Fievet for the catch.
* :feature:`908` Support loading SSH keys from memory. Thanks to Caleb Groom for the patch.
* :bug:`171` Added missing cross-references from ``env`` variables documentation to corresponding command-line options. Thanks to Daniel D. Beck for the contribution.
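.. note::
    The ``env.abort_exception`` hook from :issue:`931` above accepts any
    callable which takes the abort message and returns an exception object;
    an exception class itself therefore qualifies. A minimal sketch, assuming
    a hypothetical ``DeploymentError`` class and task (neither is part of
    Fabric)::

        from fabric.api import env, run

        class DeploymentError(Exception):
            """Assumed custom error; raised instead of SystemExit on abort."""

        env.abort_exception = DeploymentError

        def restart():
            # A hard failure here now raises DeploymentError, which library
            # callers can catch, rather than exiting the process.
            run('service myapp restart')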
* :bug:`884` The password cache feature was not working correctly with password-requiring SSH gateway connections. That's fixed now. Thanks to Marco Nenciarini for the catch.
* :feature:`826` Enable sudo extraction of compressed archive via `use_sudo` kwarg in `.upload_project`. Thanks to ``@abec`` for the patch.
* :bug:`694 major` Allow users to work around ownership issues in the default remote login directory: add ``temp_dir`` kwarg for explicit specification of which "bounce" folder to use when calling `.put` with ``use_sudo=True``. Thanks to Devin Bayer for the report & Dieter Plaetinck / Jesse Myers for suggesting the workaround.
* :bug:`882` Fix a `.get` bug regarding spaces in remote working directory names. Thanks to Chris Rose for catch & patch.
* :release:`1.6.1 <2013-05-23>`
* :bug:`868` Substantial speedup of parallel tasks by removing an unnecessary blocking timeout in the ``JobQueue`` loop. Thanks to Simo Kinnunen for the patch.
* :bug:`328` `.lcd` was no longer being correctly applied to `.upload_template`; this has been fixed. Thanks to Joseph Lawson for the catch.
* :feature:`812` Add ``use_glob`` option to `.put` so users trying to upload real filenames containing glob patterns (``*``, ``[`` etc) can disable the default globbing behavior. Thanks to Michael McHugh for the patch.
* :bug:`864 major` Allow users to disable Fabric's auto-escaping in `.run`/`.sudo`. Thanks to Christian Long and Michael McHugh for the patch.
* :bug:`870` Changes to shell env var escaping highlighted some extraneous and now damaging whitespace in `with path(): <.path>`. This has been removed and a regression test added.
* :bug:`871` Use of string mode values in `put(local, remote, mode="NNNN") <.put>` would sometimes cause ``Unsupported operand`` errors. This has been fixed.
* :bug:`84 major` Fixed problem with missing -r flag in Mac OS X sed version. Thanks to Konrad Hałas for the patch.
* :bug:`861` Gracefully handle situations where users give a single string literal to ``env.hosts``. Thanks to Bill Tucker for catch & patch.
* :bug:`367` Expand paths with tilde inside (``contrib.files``). Thanks to Konrad Hałas for catch & patch.
* :feature:`845 backported` Downstream synchronization option implemented for `~fabric.contrib.project.rsync_project`. Thanks to Antonio Barrero for the patch.
* :release:`1.6.0 <2013-03-01>`
* :release:`1.5.4 <2013-03-01>`
* :bug:`844` Account for SSH config overhaul in Paramiko 1.10 by e.g. updating treatment of ``IdentityFile`` to handle multiple values. **This and related SSH config parsing changes are backwards incompatible**; we are including them in this release because they do fix incorrect, off-spec behavior.
* :bug:`843` Ensure string ``pool_size`` values get run through ``int()`` before deriving final result (stdlib ``min()`` has odd behavior here...). Thanks to Chris Kastorff for the catch.
* :bug:`839` Fix bug in `~fabric.contrib.project.rsync_project` where IPv6 addresses were not always correctly detected. Thanks to Antonio Barrero for catch & patch.
* :bug:`587` Warn instead of aborting when :ref:`env.use_ssh_config ` is True but the configured SSH conf file doesn't exist. This allows multi-user fabfiles to enable SSH config without causing hard stops for users lacking SSH configs. Thanks to Rodrigo Pimentel for the report.
* :feature:`821` Add `~fabric.context_managers.remote_tunnel` to allow reverse SSH tunneling (exposing locally-visible network ports to the remote end). Thanks to Giovanni Bajo for the patch.
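.. note::
    To illustrate :issue:`821` above: `~fabric.context_managers.remote_tunnel`
    listens on a port on the remote host and forwards those connections back
    to the local machine for the duration of the block. A minimal sketch (the
    port number and the ``curl`` command are assumptions for illustration)::

        from fabric.api import run
        from fabric.context_managers import remote_tunnel

        def check_local_service():
            # Inside the block, remote connections to localhost:8080 reach
            # port 8080 on the machine running Fabric.
            with remote_tunnel(8080):
                run('curl --silent http://localhost:8080/status')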
* :feature:`823` Add :ref:`env.remote_interrupt ` which controls whether Ctrl-C is forwarded to the remote end or is captured locally (previously, only the latter behavior was implemented). Thanks to Geert Jansen for the patch.
* :release:`1.5.3 <2013-01-28>`
* :bug:`806` Force strings given to ``getpass`` during password prompts to be ASCII, to prevent issues on some platforms when Unicode is encountered. Thanks to Alex Louden for the patch.
* :bug:`805` Update `~fabric.context_managers.shell_env` to play nice with Windows (7, at least) systems and `~fabric.operations.local`. Thanks to Fernando Macedo for the patch.
* :bug:`654` Parallel runs whose sum total of returned data was large (e.g. large return values from the task, or simply a large number of hosts in the host list) were causing frustrating hangs. This has been fixed.
* :feature:`402` Attempt to detect stale SSH sessions and reconnect when they arise. Thanks to `@webengineer` for the patch.
* :bug:`791` Cast `~fabric.operations.reboot`'s ``wait`` parameter to a numeric type in case the caller submitted a string by mistake. Thanks to Thomas Schreiber for the patch.
* :bug:`703 major` Add a ``shell`` kwarg to many methods in `~fabric.contrib.files` to help avoid conflicts with `~fabric.context_managers.cd` and similar. Thanks to `@mikek` for the patch.
* :feature:`730` Add :ref:`env.system_known_hosts/--system-known-hosts ` to allow loading a user-specified system-level SSH ``known_hosts`` file. Thanks to Roy Smith for the patch.
* :release:`1.5.2 <2013-01-15>`
* :feature:`818` Added :ref:`env.eagerly_disconnect ` option to help prevent pile-up of many open connections.
* :feature:`706` Added :ref:`env.tasks `, returning list of tasks to be executed by current ``fab`` command.
* :bug:`766` Use the variable name of a new-style ``fabric.tasks.Task`` subclass object when the object name attribute is undefined. Thanks to `@todddeluca` for the patch.
* :bug:`604` Fixed wrong treatment of backslashes in put operation when uploading directory tree on Windows. Thanks to Jason Coombs for the catch and `@diresys` & Oliver Janik for the patch.
* :bug:`792` The newish `~fabric.context_managers.shell_env` context manager was incorrectly omitted from the ``fabric.api`` import endpoint. This has been remedied. Thanks to Vishal Rana for the catch.
* :feature:`735` Add ``ok_ret_codes`` option to ``env`` to allow alternate return codes to be treated as "ok". Thanks to Andy Kraut for the pull request.
* :bug:`775` Shell escaping was incorrectly applied to the value of ``$PATH`` updates in our shell environment handling, causing (at the very least) `~fabric.operations.local` binary paths to become inoperable in certain situations. This has been fixed.
* :feature:`787` Utilize new Paramiko feature allowing us to skip the use of temporary local files when using file-like objects in `~fabric.operations.get`/`~fabric.operations.put`.
* :feature:`249` Allow specification of remote command timeout value by setting :ref:`env.command_timeout `. Thanks to Paul McMillan for suggestion & initial patch.
* Added current host string to prompt abort error messages.
* :release:`1.5.1 <2012-11-15>`
* :bug:`776` Fixed serious-but-non-obvious bug in direct-tcpip driven gatewaying (e.g. that triggered by ``-g`` or ``env.gateway``.) Should work correctly now.
* :bug:`771` Sphinx autodoc helper `~fabric.docs.unwrap_tasks` didn't play nice with ``@task(name=xxx)`` in some situations. This has been fixed.
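.. note::
    Regarding ``env.ok_ret_codes`` from :issue:`735` above: it holds the list
    of integer exit statuses Fabric treats as success. A small sketch (the
    ``grep`` command is an assumed example; ``grep`` exits 1 when it finds no
    matches)::

        from fabric.api import env, run

        def count_errors():
            # Accept exit status 1 so "no matches" isn't treated as failure.
            env.ok_ret_codes = [0, 1]
            run('grep -c ERROR /var/log/myapp.log')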
* :release:`1.5.0 <2012-11-06>`
* :release:`1.4.4 <2012-11-06>`
* :feature:`38` (also :issue:`698`) Implement both SSH-level and ``ProxyCommand``-based gatewaying for SSH traffic. (This is distinct from tunneling non-SSH traffic over the SSH connection, which is :issue:`78` and not implemented yet.)

  * Thanks in no particular order to Erwin Bolwidt, Oskari Saarenmaa, Steven Noonan, Vladimir Lazarenko, Lincoln de Sousa, Valentino Volonghi, Olle Lundberg and Github user `@acrish` for providing the original patches to both Fabric and Paramiko.

* :feature:`684 backported` (also :issue:`569`) Update how `~fabric.decorators.task` wraps task functions to preserve additional metadata; this allows decorated functions to play nice with Sphinx autodoc. Thanks to Jaka Hudoklin for catch & patch.
* :support:`103` (via :issue:`748`) Long standing Sphinx autodoc issue requiring error-prone duplication of function signatures in our API docs has been fixed. Thanks to Alex Morega for the patch.
* :bug:`767 major` Fix (and add test for) regression re: having linewise output automatically activate when parallelism is in effect. Thanks to Alexander Fortin and Dustin McQuay for the bug reports.
* :bug:`736 major` Ensure context managers that build env vars play nice with ``contextlib.nested`` by deferring env var reference to entry time, not call time. Thanks to Matthew Tretter for catch & patch.
* :feature:`763` Add :option:`--initial-password-prompt <-I>` to allow prefilling the password cache at the start of a run. Great for sudo-powered parallel runs.
* :feature:`665` (and #629) Update `~fabric.contrib.files.upload_template` to have a more useful return value, namely that of its internal `~fabric.operations.put` call. Thanks to Miquel Torres for the catch & Rodrigue Alcazar for the patch.
* :feature:`578` Add ``name`` argument to `~fabric.decorators.task` (:ref:`docs `) to allow overriding of the default "function name is task name" behavior. Thanks to Daniel Simmons for catch & patch.
* :feature:`761` Allow advanced users to parameterize ``fabric.main.main()`` to force loading of specific fabfiles.
* :bug:`749` Gracefully work around calls to ``fabric.version`` on systems lacking ``/bin/sh`` (which causes an ``OSError`` in ``subprocess.Popen`` calls.)
* :feature:`723` Add the ``group=`` argument to `~fabric.operations.sudo`. Thanks to Antti Kaihola for the pull request.
* :feature:`725` Updated `~fabric.operations.local` to allow override of which local shell is used. Thanks to Mustafa Khattab.
* :bug:`704 major` Fix up a bunch of Python 2.x style ``print`` statements to be forwards compatible. Thanks to Francesco Del Degan for the patch.
* :feature:`491` (also :feature:`385`) IPv6 host string support. Thanks to Max Arnold for the patch.
* :feature:`699` Allow `name` attribute on file-like objects for get/put. Thanks to Peter Lyons for the pull request.
* :bug:`711 major` `~fabric.sftp.get` would fail when filenames had % in their path. Thanks to John Begeman.
* :bug:`702 major` `~fabric.operations.require` failed to test for "empty" values in the env keys it checks (e.g. ``require('a-key-whose-value-is-an-empty-list')`` would register a successful result instead of alerting that the value was in fact empty). This has been fixed, thanks to Rich Schumacher.
* :bug:`718` ``isinstance(foo, Bar)`` is used in `~fabric.main` instead of ``type(foo) == Bar`` in order to fix some edge cases. Thanks to Mikhail Korobov.
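.. note::
    The ``name`` argument from :issue:`578` above is useful when the desired
    task name would shadow a Python builtin or keyword. A quick sketch (the
    task body is an assumed example)::

        from fabric.api import run, task

        @task(name='list')  # invoked on the CLI as `fab list`
        def list_():
            run('ls -1 /srv/releases')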
* :bug:`693` Fixed edge case where ``abort`` driven failures within parallel tasks could result in a top level exception (a ``KeyError``) regarding error handling. Thanks to Marcin Kuźmiński for the report. * :support:`681 backported` Fixed outdated docstring for `~fabric.decorators.runs_once` which claimed it would get run multiple times in parallel mode. That behavior was fixed in an earlier release but the docs were not updated. Thanks to Jan Brauer for the catch. * :release:`1.4.3 <2012-07-06>` * :release:`1.3.8 <2012-07-06>` * :feature:`263` Shell environment variable support for `~fabric.operations.run`/`~fabric.operations.sudo` added in the form of the `~fabric.context_managers.shell_env` context manager. Thanks to Oliver Tonnhofer for the original pull request, and to Kamil Kisiel for the final implementation. * :feature:`669` Updates to our Windows compatibility to rely more heavily on cross-platform Python stdlib implementations. Thanks to Alexey Diyan for the patch. * :bug:`671` :ref:`reject-unknown-hosts` sometimes resulted in a password prompt instead of an abort. This has been fixed. Thanks to Roy Smith for the report. * :bug:`659` Update docs to reflect that `~fabric.operations.local` currently honors :ref:`env.path `. Thanks to `@floledermann `_ for the catch. * :bug:`652` Show available commands when aborting on invalid command names. * :support:`651 backported` Added note about nesting ``with`` statements on Python 2.6+. Thanks to Jens Rantil for the patch. * :bug:`649` Don't swallow non-``abort``-driven exceptions in parallel mode. Fabric correctly printed such exceptions, and returned them from `~fabric.tasks.execute`, but did not actually cause the child or parent processes to halt with a nonzero status. This has been fixed. `~fabric.tasks.execute` now also honors :ref:`env.warn_only ` so users may still opt to call it by hand and inspect the returned exceptions, instead of encountering a hard stop. Thanks to Matt Robenolt for the catch. * :feature:`241` Add the command executed as a ``.command`` attribute to the return value of `~fabric.operations.run`/`~fabric.operations.sudo`. (Also includes a second attribute containing the "real" command executed, including the shell wrapper and any escaping.) * :feature:`646` Allow specification of which local streams to use when `~fabric.operations.run`/`~fabric.operations.sudo` print the remote stdout/stderr, via e.g. ``run("command", stderr=sys.stdout)``. * :support:`645 backported` Update Sphinx docs to work well when run out of a source tarball as opposed to a Git checkout. Thanks again to `@Arfrever` for the catch. * :support:`640 backported` (also :issue:`644`) Update packaging manifest so sdist tarballs include all necessary test & doc files. Thanks to Mike Gilbert and `@Arfrever` for catch & patch. * :feature:`627` Added convenient ``quiet`` and ``warn_only`` keyword arguments to `~fabric.operations.run`/`~fabric.operations.sudo` which are aliases for ``settings(hide('everything'), warn_only=True)`` and ``settings(warn_only=True)``, respectively. (Also added corresponding `context ` `managers `.) Useful for remote program calls which are expected to fail and/or whose output doesn't need to be shown to users. * :feature:`633` Allow users to turn off host list deduping by setting :ref:`env.dedupe_hosts ` to ``False``. This enables running the same task multiple times on a single host, which was previously not possible. 
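For illustration (an assumption-laden sketch, with made-up commands), the ``quiet`` kwarg, ``.command`` attribute and stream redirection described in :feature:`627`, :feature:`241` and :feature:`646` above combine like so::

    import sys
    from fabric.api import run

    def probe():
        # quiet=True is shorthand for settings(hide('everything'), warn_only=True).
        result = run('test -d /opt/myapp', quiet=True)
        if result.failed:
            # .command holds the command string as given.
            print('Not deployed yet; command was: %s' % result.command)
        # Send remote stderr to the local stdout stream instead.
        run('make check', stderr=sys.stdout)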
* :support:`634 backported` Clarified that `~fabric.context_managers.lcd` does no special handling re: the user's current working directory, and thus relative paths given to it will be relative to ``os.getcwd()``. Thanks to `@techtonik `_ for the catch. * :release:`1.4.2 <2012-05-07>` * :release:`1.3.7 <2012-05-07>` * :bug:`562` Agent forwarding would error out or freeze when multiple uses of the forwarded agent were used per remote invocation (e.g. a single `~fabric.operations.run` command resulting in multiple Git or SVN checkouts.) This has been fixed thanks to Steven McDonald and GitHub user `@lynxis`. * :support:`626 backported` Clarity updates to the tutorial. Thanks to GitHub user `m4z` for the patches. * :bug:`625` `~fabric.context_managers.hide`/`~fabric.context_managers.show` did not correctly restore prior display settings if an exception was raised inside the block. This has been fixed. * :bug:`624` Login password prompts did not always display the username being authenticated for. This has been fixed. Thanks to Nick Zalutskiy for catch & patch. * :bug:`617` Fix the ``clean_revert`` behavior of `~fabric.context_managers.settings` so it doesn't ``KeyError`` for newly created settings keys. Thanks to Chris Streeter for the catch. * :feature:`615` Updated `~fabric.operations.sudo` to honor the new setting :ref:`env.sudo_user ` as a default for its ``user`` kwarg. * :bug:`616` Add port number to the error message displayed upon connection failures. * :bug:`609` (and :issue:`564`) Document and clean up :ref:`env.sudo_prefix ` so it can be more easily modified by users facing uncommon use cases. Thanks to GitHub users `3point2` for the cleanup and `SirScott` for the documentation catch. * :bug:`610` Change detection of ``env.key_filename``'s type (added as part of SSH config support in 1.4) so it supports arbitrary iterables. Thanks to Brandon Rhodes for the catch. * :release:`1.4.1 <2012-04-04>` * :release:`1.3.6 <2012-04-04>` * :bug:`608` Add ``capture`` kwarg to `~fabric.contrib.project.rsync_project` to aid in debugging rsync problems. * :bug:`607` Allow `~fabric.operations.local` to display stdout/stderr when it warns/aborts, if it was capturing them. * :bug:`395` Added :ref:`an FAQ entry ` detailing how to handle init scripts which misbehave when a pseudo-tty is allocated. * :bug:`568` `~fabric.tasks.execute` allowed too much of its internal state changes (to variables such as ``env.host_string`` and ``env.parallel``) to persist after execution completed; this caused a number of different incorrect behaviors. `~fabric.tasks.execute` has been overhauled to clean up its own state changes -- while preserving any state changes made by the task being executed. * :bug:`584` `~fabric.contrib.project.upload_project` did not take explicit remote directory location into account when untarring, and now uses `~fabric.context_managers.cd` to address this. Thanks to Ben Burry for the patch. * :bug:`458` `~fabric.decorators.with_settings` did not perfectly match `~fabric.context_managers.settings`, re: ability to inline additional context managers. This has been corrected. Thanks to Rory Geoghegan for the patch. * :bug:`499` `contrib.files.first ` used an outdated function signature in its wrapped `~fabric.contrib.files.exists` call. This has been fixed. Thanks to Massimiliano Torromeo for catch & patch. * :bug:`551` :option:`--list <-l>` output now detects terminal window size and truncates (or doesn't truncate) accordingly. Thanks to Horacio G. de Oro for the initial pull request. 
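To illustrate :feature:`615` above (a sketch only -- the ``postgres`` user and commands are hypothetical)::

    from fabric.api import settings, sudo

    def vacuum():
        # env.sudo_user acts as the default for sudo()'s ``user`` kwarg,
        # so both calls below run as the postgres user.
        with settings(sudo_user='postgres'):
            sudo('psql -c "VACUUM ANALYZE"')
            sudo('psql -c "REINDEX DATABASE mydb"')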
* :bug:`572` Parallel task aborts (as opposed to unhandled exceptions) now correctly print their abort messages instead of tracebacks, and cause the parent process to exit with the correct (nonzero) return code. Thanks to Ian Langworth for the catch.
* :bug:`306` Remote paths now use posixpath for a separator. Thanks to Jason Coombs for the patch.
* :release:`1.4.0 <2012-02-13>`
* :release:`1.3.5 <2012-02-13>`
* :release:`1.2.6 <2012-02-13>`
* :release:`1.1.8 <2012-02-13>`
* :bug:`495` Fixed documentation example showing how to subclass `~fabric.tasks.Task`. Thanks to Brett Haydon for the catch and Mark Merritt for the patch.
* :bug:`410` Fixed a bug where using the `~fabric.decorators.task` decorator inside/under another decorator such as `~fabric.decorators.hosts` could cause that task to become invalid when invoked by name (due to how old-style vs new-style tasks are detected.) Thanks to Dan Colish for the initial patch.
* :feature:`559` `~fabric.contrib.project.rsync_project` now allows users to append extra SSH-specific arguments to ``rsync``'s ``--rsh`` flag.
* :feature:`138` :ref:`env.port ` may now be written to at fabfile module level to set a default nonstandard port number. Previously this value was read-only.
* :feature:`3` Fabric can now load a subset of SSH config functionality directly from your local ``~/.ssh/config`` if :ref:`env.use_ssh_config ` is set to ``True``. See :ref:`ssh-config` for details. Thanks to Kirill Pinchuk for the initial patch.
* :feature:`12` Added the ability to try connecting multiple times to temporarily-down remote systems, instead of immediately failing. (Default behavior is still to only try once.) See :ref:`env.timeout ` and :ref:`env.connection_attempts ` for controlling both connection timeouts and total number of attempts. `~fabric.operations.reboot` has also been overhauled (but practically deprecated -- see its updated docs.)
* :feature:`474` `~fabric.tasks.execute` now allows you to access the executed task's return values, by itself returning a dictionary whose keys are the host strings executed against.
* :bug:`487 major` Overhauled the regular expression escaping performed in `~fabric.contrib.files.append` and `~fabric.contrib.files.contains` to try and handle more corner cases. Thanks to Neilen Marais for the patch.
* :support:`532` Reorganized and cleaned up the output of ``fab --help``.
* :feature:`8` Added :option:`--skip-bad-hosts`/:ref:`env.skip_bad_hosts ` option to allow skipping past temporarily down/unreachable hosts.
* :feature:`13` Env vars may now be set at runtime via the new :option:`--set` command-line flag.
* :feature:`506` A new :ref:`output alias `, ``commands``, has been added, which allows hiding remote stdout and local "running command X" output lines.
* :feature:`72` SSH agent forwarding support has made it into Fabric's SSH library, and hooks for using it have been added (disabled by default; use :option:`-A` or :ref:`env.forward_agent ` to enable.) Thanks to Ben Davis for porting an existing Paramiko patch to `ssh` and providing the necessary tweak to Fabric.
* :release:`1.3.4 <2012-01-12>`
* :bug:`492` `@parallel ` did not automatically trigger :ref:`linewise output `, as was intended. This has been fixed. Thanks to Brandon Huey for the catch.
* :bug:`510` Parallel mode is incompatible with user input, such as password/hostname prompts, and was causing cryptic `Operation not supported by device` errors when such prompts needed to be displayed.
This behavior has been updated to cleanly and obviously ``abort`` instead. * :bug:`494` Fixed regression bug affecting some `env` values such as `env.port` under parallel mode. Symptoms included `~fabric.contrib.project.rsync_project` bailing out due to a None port value when run under `@parallel `. Thanks to Rob Terhaar for the report. * :bug:`339` Don't show imported `~fabric.colors` members in :option:`--list <-l>` output. Thanks to Nick Trew for the report. * :release:`1.3.3 <2011-11-23>` * :release:`1.2.5 <2011-11-23>` * :release:`1.1.7 <2011-11-23>` * :bug:`441` Specifying a task module as a task on the command line no longer blows up but presents the usual "no task by that name" error message instead. Thanks to Mitchell Hashimoto for the catch. * :bug:`475` Allow escaping of equals signs in per-task args/kwargs. * :bug:`450` Improve traceback display when handling ``ImportError`` for dependencies. Thanks to David Wolever for the patches. * :bug:`446` Add QNX to list of secondary-case `~fabric.contrib.files.sed` targets. Thanks to Rodrigo Madruga for the tip. * :bug:`443` `~fabric.contrib.files.exists` didn't expand tildes; now it does. Thanks to Riccardo Magliocchetti for the patch. * :bug:`437` `~fabric.decorators.with_settings` now correctly preserves the wrapped function's docstring and other attributes. Thanks to Eric Buckley for the catch and Luke Plant for the patch. * :bug:`400` Handle corner case of systems where ``pwd.getpwuid`` raises ``KeyError`` for the user's UID instead of returning a valid string. Thanks to Dougal Matthews for the catch. * :bug:`397` Some poorly behaved objects in third party modules triggered exceptions during Fabric's "classic or new-style task?" test. A fix has been added which tries to work around these. * :bug:`341` `~fabric.contrib.files.append` incorrectly failed to detect that the line(s) given already existed in files hidden to the remote user, and continued appending every time it ran. This has been fixed. Thanks to Dominique Peretti for the catch and Martin Vilcans for the patch. * :bug:`342` Combining `~fabric.context_managers.cd` with `~fabric.operations.put` and its ``use_sudo`` keyword caused an unrecoverable error. This has been fixed. Thanks to Egor M for the report. * :bug:`482` Parallel mode should imply linewise output; omission of this behavior was an oversight. * :bug:`230` Fix regression re: combo of no fabfile & arbitrary command use. Thanks to Ali Saifee for the catch. * :release:`1.3.2 <2011-11-07>` * :release:`1.2.4 <2011-11-07>` * :release:`1.1.6 <2011-11-07>` * :support:`459 backported` Update our `setup.py` files to note that PyCrypto released 2.4.1, which fixes the setuptools problems. * :support:`467 backported` (also :issue:`468`, :issue:`469`) Handful of documentation clarification tweaks. Thanks to Paul Hoffman for the patches. * :release:`1.3.1 <2011-10-24>` * :bug:`457` Ensured that Fabric fast-fails parallel tasks if any child processes encountered errors. Previously, multi-task invocations would continue to the 2nd, etc task when failures occurred, which does not fit with how Fabric usually behaves. Thanks to Github user ``sdcooke`` for the report and Morgan Goose for the fix. 
* :release:`1.3.0 <2011-10-23>` * :release:`1.2.3 <2011-10-23>` * :release:`1.1.5 <2011-10-23>` * :release:`1.0.5 <2011-10-23>` * :support:`275` To support an edge use case of the features released in :issue:`19`, and to lay the foundation for :issue:`275`, we have forked Paramiko into the `Python 'ssh' library `_ and changed our dependency to it for Fabric 1.3 and higher. This may have implications for the more uncommon install use cases, and package maintainers, but we hope to iron out any issues as they come up. * :bug:`323` `~fabric.operations.put` forgot how to expand leading tildes in the remote file path. This has been corrected. Thanks to Piet Delport for the catch. * :feature:`21` It is now possible, using the new `~fabric.tasks.execute` API call, to execute task objects (by reference or by name) from within other tasks or in library mode. `~fabric.tasks.execute` honors the other tasks' `~fabric.decorators.hosts`/`~fabric.decorators.roles` decorators, and also supports passing in explicit host and/or role arguments. * :feature:`19` Tasks may now be optionally executed in parallel. Please see the :doc:`parallel execution docs ` for details. Major thanks to Morgan Goose for the initial implementation. * :bug:`182` During display of remote stdout/stderr, Fabric occasionally printed extraneous line prefixes (which in turn sometimes overwrote wrapped text.) This has been fixed. * :bug:`430` Tasks decorated with `~fabric.decorators.runs_once` printed extraneous 'Executing...' status lines on subsequent invocations. This is noisy at best and misleading at worst, and has been corrected. Thanks to Jacob Kaplan-Moss for the report. * :release:`1.2.2 <2011-09-01>` * :release:`1.1.4 <2011-09-01>` * :release:`1.0.4 <2011-09-01>` * :bug:`252` `~fabric.context_managers.settings` would silently fail to set ``env`` values for keys which did not exist outside the context manager block. It now works as expected. Thanks to Will Maier for the catch and suggested solution. * :support:`393 backported` Fixed a typo in an example code snippet in the task docs. Thanks to Hugo Garza for the catch. * :bug:`396` :option:`--shortlist` broke after the addition of :option:`--list-format <-F>` and no longer displayed the short list format correctly. This has been fixed. * :bug:`373` Re-added missing functionality preventing :ref:`host exclusion ` from working correctly. * :bug:`303` Updated terminal size detection to correctly skip over non-tty stdout, such as when running ``fab taskname | other_command``. * :release:`1.2.1 <2011-08-21>` * :release:`1.1.3 <2011-08-21>` * :release:`1.0.3 <2011-08-21>` * :bug:`417` :ref:`abort-on-prompts` would incorrectly abort when set to True, even if both password and host were defined. This has been fixed. Thanks to Valerie Ishida for the report. * :support:`416 backported` Updated documentation to reflect move from Redmine to Github. * :bug:`389` Fixed/improved error handling when Paramiko import fails. Thanks to Brian Luft for the catch. * :release:`1.2.0 <2011-07-12>` * :feature:`22` Enhanced `@task ` to add :ref:`aliasing `, :ref:`per-module default tasks `, and :ref:`control over the wrapping task class `. Thanks to Travis Swicegood for the initial work and collaboration. * :bug:`380` Improved unicode support when testing objects for being string-like. Thanks to Jiri Barton for catch & patch. * :support:`382` Experimental overhaul of changelog formatting & process to make supporting multiple lines of development less of a hassle. 
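To illustrate the `~fabric.tasks.execute` and parallel features introduced above (:feature:`21`, :feature:`19`, plus the 1.4-era return-value dict from :feature:`474`) -- a minimal sketch with hypothetical host names::

    from fabric.api import parallel, run
    from fabric.tasks import execute

    @parallel
    def host_type():
        return run('uname -s')

    def summarize():
        # execute() honors hosts/roles decorators and explicit host lists;
        # as of 1.4 it also returns {host string: task return value}.
        results = execute(host_type, hosts=['web1', 'web2'])
        for host, kernel in sorted(results.items()):
            print('%s -> %s' % (host, kernel))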
* :release:`1.1.2 <2011-07-07>` * :release:`1.0.2 <2011-06-24>` Prehistory ========== The content below this section comes from older versions of Fabric which wrote out changelogs to individual, undated files. They have been concatenated and preserved here for historical reasons, and may not be in strict chronological order. ---- Changes in version 1.1.2 (2011-07-07) ===================================== Bugfixes -------- * :issue:`375`: The logic used to separate tasks from modules when running ``fab --list`` incorrectly considered task classes implementing the mapping interface to be modules, not individual tasks. This has been corrected. Thanks to Vladimir Mihailenco for the catch. Changes in version 1.1.1 (2011-06-29) ===================================== Bugfixes -------- * The public API for `~fabric.tasks.Task` mentioned use of the ``run()`` method, but Fabric's main execution loop had not been updated to look for and call it, forcing users who subclassed `~fabric.tasks.Task` to define ``__call__()`` instead. This was an oversight and has been corrected. .. seealso:: :ref:`task-subclasses` Changes in version 1.1 (2011-06-24) =================================== This page lists all changes made to Fabric in its 1.1.0 release. .. note:: This release also includes all applicable changes from the 1.0.2 release. Highlights ---------- * :issue:`76`: :ref:`New-style tasks ` have been added. With the addition of the `~fabric.decorators.task` decorator and the `~fabric.tasks.Task` class, you can now "opt-in" and explicitly mark task functions as tasks, and Fabric will ignore the rest. The original behavior (now referred to as :ref:`"classic" tasks `) will still take effect if no new-style tasks are found. Major thanks to Travis Swicegood for the original implementation. * :issue:`56`: Namespacing is now possible: Fabric will crawl imported module objects looking for new-style task objects and build a dotted hierarchy (tasks named e.g. ``web.deploy`` or ``db.migrations.run``), allowing for greater organization. See :ref:`namespaces` for details. Thanks again to Travis Swicegood. Feature additions ----------------- * :issue:`10`: `~fabric.contrib.upload_project` now allows control over the local and remote directory paths, and has improved error handling. Thanks to Rodrigue Alcazar for the patch. * As part of :issue:`56` (highlighted above), added :option:`--list-format <-F>` to allow specification of a nested output format from :option:`--list <-l>`. * :issue:`107`: `~fabric.operations.require`'s ``provided_by`` kwarg now accepts iterables in addition to single values. Thanks to Thomas Ballinger for the patch. * :issue:`117`: `~fabric.contrib.files.upload_template` now supports the `~fabric.operations.put` flags ``mirror_local_mode`` and ``mode``. Thanks to Joe Stump for the suggestion and Thomas Ballinger for the patch. * :issue:`154`: `~fabric.contrib.files.sed` now allows customized regex flags to be specified via a new ``flags`` parameter. Thanks to Nick Trew for the suggestion and Morgan Goose for initial implementation. * :issue:`170`: Allow :ref:`exclusion ` of specific hosts from the final run list. Thanks to Casey Banner for the suggestion and patch. * :issue:`189`: Added :option:`--abort-on-prompts`/:ref:`env.abort_on_prompts ` to allow a more non-interactive behavior, aborting/exiting instead of trying to prompt the running user. Thanks to Jeremy Avnet and Matt Chisholm for the initial patch. 
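To illustrate the new-style tasks highlighted above, a minimal sketch (not from the original docs; the commands are placeholders)::

    from fabric.api import run, task

    @task
    def uptime():
        run('uptime')

    # Once any @task-decorated callable exists, plain helpers like this
    # one are no longer exposed as tasks by ``fab --list``.
    def helper():
        pass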
* :issue:`273`: `~fabric.contrib.files.upload_template` now offers control over whether it attempts to create backups of pre-existing destination files. Thanks to Ales Zoulek for the suggestion and initial patch. * :issue:`283`: Added the `~fabric.decorators.with_settings` decorator to allow application of env var settings to an entire function, as an alternative to using the `~fabric.context_managers.settings` context manager. Thanks to Travis Swicegood for the patch. * :issue:`353`: Added :option:`--keepalive`/:ref:`env.keepalive ` to allow specification of an SSH keepalive parameter for troublesome network connections. Thanks to Mark Merritt for catch & patch. Bugfixes -------- * :issue:`115`: An implementation detail causing host lists to lose order when deduped by the ``fab`` execution loop, has been patched to preserve order instead. So e.g. ``fab -H a,b,c`` (or setting ``env.hosts = ['a', 'b', 'c']``) will now always run on ``a``, then ``b``, then ``c``. Previously, there was a chance the order could get mixed up during deduplication. Thanks to Rohit Aggarwal for the report. * :issue:`345`: `~fabric.contrib.files.contains` returned the stdout of its internal ``grep`` command instead of success/failure, causing incorrect behavior when stderr exists and is combined with stdout. This has been corrected. Thanks to Szymon Reichmann for catch and patch. Documentation updates --------------------- * Documentation for task declaration has been moved from :doc:`/usage/execution` into its own docs page, :doc:`/usage/tasks`, as a result of the changes added in :issue:`76` and :issue:`56`. * :issue:`184`: Make the usage of `~fabric.contrib.project.rsync_project`'s ``local_dir`` argument more obvious, regarding its use in the ``rsync`` call. (Specifically, so users know they can pass in multiple, space-joined directory names instead of just one single directory.) Internals --------- * :issue:`307`: A whole pile of minor PEP8 tweaks. Thanks to Markus Gattol for highlighting the ``pep8`` tool and to Rick Harding for the patch. * :issue:`314`: Test utility decorator improvements. Thanks to Rick Harding for initial catch & patch. Changes in version 1.0.2 (2011-06-24) ===================================== .. note:: This release also includes all applicable changes from the 0.9.7 release. Bugfixes -------- * :issue:`258`: Bugfix to a previous, incorrectly applied fix regarding `~fabric.operations.local` on Windows platforms. * :issue:`324`: Update `~fabric.operations.run`/`~fabric.operations.sudo`'s ``combine_stderr`` kwarg so that it correctly overrides the global setting in all cases. This required changing its default value to ``None``, but the default behavior (behaving as if the setting were ``True``) has not changed. Thanks to Matthew Woodcraft and Connor Smith for the catch. * :issue:`337`: Fix logic bug in `~fabric.operations.put` preventing use of ``mirror_local_mode``. Thanks to Roman Imankulov for catch & patch. * :issue:`352` (also :issue:`320`): Seemingly random issues with output lockup and input problems (e.g. sudo prompts incorrectly rejecting passwords) appear to have been caused by an I/O race condition. This has been fixed. Thanks to Max Arnold and Paul Oswald for the detailed reports and to Max for the diagnosis and patch. Documentation ------------- * Updated the API documentation for `~fabric.context_managers.cd` to explicitly point users to `~fabric.context_managers.lcd` for modifying local paths. 
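A hedged sketch of the `~fabric.decorators.with_settings` decorator added in :issue:`283` above (the task body and log path are invented)::

    from fabric.api import run
    from fabric.decorators import with_settings

    @with_settings(warn_only=True)
    def tail_log():
        # The whole task body runs under warn_only=True, with no need
        # for a nested settings() context manager.
        run('tail -n 50 /var/log/myapp.log')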
* Clarified the behavior of `~fabric.contrib.project.rsync_project` re: how trailing slashes in ``local_dir`` affect ``remote_dir``. Thanks to Mark Merritt for the catch. Changes in version 1.0.1 (2011-03-27) ===================================== .. note:: This release also includes all applicable changes from the 0.9.5 release. Bugfixes -------- * :issue:`301`: Fixed a bug in `~fabric.operations.local`'s behavior when ``capture=False`` and ``output.stdout`` (or ``.stderr``) was also ``False``. Thanks to Chris Rose for the catch. * :issue:`310`: Update edge case in `~fabric.operations.put` where using the ``mode`` kwarg alongside ``use_sudo=True`` runs a hidden `~fabric.operations.sudo` command. The ``mode`` kwarg needs to be octal but was being interpolated in the ``sudo`` call as a string/integer. Thanks to Adam Ernst for the catch and suggested fix. * :issue:`311`: `~fabric.contrib.files.append` was supposed to have its ``partial`` kwarg's default flipped from ``True`` to ``False``. However, only the documentation was altered. This has been fixed. Thanks to Adam Ernst for bringing it to our attention. * :issue:`312`: Tweak internal I/O related loops to prevent high CPU usage and poor screen-printing behavior on some systems. Thanks to Kirill Pinchuk for the initial patch. * :issue:`320`: Some users reported problems with dropped input, particularly while entering `~fabric.operations.sudo` passwords. This was fixed via the same change as for :issue:`312`. Documentation ------------- * Added a missing entry for :ref:`env.path ` in the usage documentation. Changes in version 1.0 (2011-03-04) =================================== This page lists all changes made to Fabric in its 1.0.0 release. Highlights ---------- * :issue:`7`: `~fabric.operations.run`/`~fabric.operations.sudo` now allow full interactivity with the remote end. You can interact with remote prompts and similar interfaces, making certain tasks much easier, and freeing you from the need to find noninteractive solutions if you don't want to. See :doc:`/usage/interactivity` for more on these changes. * `~fabric.operations.put` and `~fabric.operations.get` received many updates, including but not limited to: recursion, globbing, inline ``sudo`` capability, and increased control over local file paths. See the individual ticket line-items below for details. Erich Heine (``sophacles`` on IRC) played a large part in implementing and/or collecting these changes and deserves much of the credit. * Added functionality for loading fabfiles which are Python packages (directories) instead of just modules (single files). This allows for easier organization of nontrivial fabfiles and paves the way for task namespacing in the near future. See :ref:`fabfile-discovery` for details. * :issue:`185`: Mostly of interest to those contributing to Fabric itself, Fabric now leverages Paramiko to provide a stub SSH and SFTP server for use during runs of our test suite. This makes quick, configurable full-stack testing of Fabric (and, to an extent, user fabfiles) possible. Backwards-incompatible changes ------------------------------ The below changes are **backwards incompatible** and have the potential to break your 0.9.x based fabfiles! * `~fabric.contrib.files.contains` and `~fabric.contrib.files.append` previously had the ``filename`` argument in the second position, whereas all other functions in the `contrib.files ` module had ``filename`` as the first argument. These two functions have been brought in line with the rest of the module. 
* `~fabric.contrib.files.sed` now escapes single-quotes and parentheses in addition to forward slashes, in its ``before`` and ``after`` kwargs. Related to, but not entirely contained within, :issue:`159`. * The ``user`` and ``pty`` kwargs in `~fabric.operations.sudo`'s signature have had their order swapped around to more closely match `~fabric.operations.run`. * As part of the changes made in :issue:`7`, `~fabric.operations.run` and `~fabric.operations.sudo` have had the default value of their ``pty`` kwargs changed from ``False`` to ``True``. This, plus the addition of the :ref:`combine-stderr` kwarg/env var, may result in significant behavioral changes in remote programs which operate differently when attached to a tty. * :issue:`61`: `~fabric.operations.put` and `~fabric.operations.get` now honor the remote current-working-directory changes applied by `~fabric.context_managers.cd`. Previously they would always treat relative remote paths as being relative to the remote home directory. * :issue:`79`: `~fabric.operations.get` now allows increased control over local filenames when downloading single or multiple files. This is backwards incompatible because the default path/filename for downloaded files has changed. Thanks to Juha Mustonen, Erich Heine and Max Arnold for brainstorming solutions. * :issue:`88`: `~fabric.operations.local` has changed the default value of its ``capture`` kwarg, from ``True`` to ``False``. This was changed in order to be more intuitive, at the cost of no longer defaulting to the same rich return value as in `~fabric.operations.run`/`~fabric.operations.sudo` (which is still available by specifying ``capture=True``.) * :issue:`121`: `~fabric.operations.put` will no longer automatically attempt to mirror local file modes. Instead, you'll need to specify ``mirror_local_mode=True`` to get this behavior. Thanks to Paul Smith for a patch covering part of this change. * :issue:`172`: `~fabric.contrib.files.append` has changed the default value of its ``partial`` kwarg from ``True`` to ``False`` in order to be safer/more intuitive. * :issue:`221`: `~fabric.decorators.runs_once` now memoizes the wrapped task's return value and returns that value on subsequent invocations, instead of returning None. Thanks to Jacob Kaplan-Moss and Travis Swicegood for catch + patch. Feature additions ----------------- * Prerelease versions of Fabric (starting with the 1.0 prereleases) will now print the Git SHA1 hash of the current checkout, if the user is working off of a Git clone of the Fabric source code repository. * Added `~fabric.context_managers.path` context manager for modifying commands' effective ``$PATH``. * Added convenience ``.succeeded`` attribute to the return values of `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local` which is simply the opposite of the ``.failed`` attribute. (This addition has also been backported to Fabric's 0.9 series.) * Refactored SSH disconnection code out of the main ``fab`` loop into `~fabric.network.disconnect_all`, allowing library users to avoid problems with non-fabfile Python scripts hanging after execution finishes. * :issue:`2`: Added ``use_sudo`` kwarg to `~fabric.operations.put` to allow uploading of files to privileged locations. Thanks to Erich Heine and IRC user ``npmap`` for suggestions and patches. * :issue:`23`: Added `~fabric.context_managers.prefix` context manager for easier management of persistent state across commands. 
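For instance, the `~fabric.context_managers.prefix` addition from :issue:`23` just above enables patterns like the following (a sketch; the virtualenv path is hypothetical)::

    from fabric.api import prefix, run

    def update_env():
        # Every run()/sudo() inside the block gets the prefix command
        # prepended, e.g. keeping a virtualenv active across calls.
        with prefix('source /opt/venv/bin/activate'):
            run('pip install -r requirements.txt')
            run('python manage.py migrate')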
* :issue:`27`: Added environment variable (:ref:`always-use-pty`) and command-line flag (:option:`--no-pty`) for global control over the `~fabric.operations.run`/`~fabric.operations.sudo` ``pty`` argument. * :issue:`28`: Allow shell-style globbing in `~fabric.operations.get`. Thanks to Erich Heine and Max Arnold. * :issue:`55`: `~fabric.operations.run`, `~fabric.operations.sudo` and `~fabric.operations.local` now provide access to their standard error (stderr) as an attribute on the return value, alongside e.g. ``.failed``. * :issue:`148`: `~fabric.operations.local` now returns the same "rich" string object as `~fabric.operations.run`/`~fabric.operations.sudo` do, so that it is a string containing the command's stdout (if ``capture=True``) or the empty string (if ``capture=False``) which exposes the ``.failed`` and ``.return_code`` attributes, and so forth. * :issue:`151`: Added a `~fabric.utils.puts` utility function, which allows greater control over fabfile-generated (as opposed to Fabric-generated) output. Also added `~fabric.utils.fastprint`, an alias to `~fabric.utils.puts` allowing for convenient unbuffered, non-newline-terminated printing. * :issue:`192`: Added per-user/host password cache to assist in multi-connection scenarios. * :issue:`193`: When requesting a remote pseudo-terminal, use the invoking terminal's dimensions instead of going with the default. * :issue:`217`: `~fabric.operations.get`/`~fabric.operations.put` now accept file-like objects as well as local file paths for their ``local_path`` arguments. * :issue:`245`: Added the `~fabric.context_managers.lcd` context manager for controlling `~fabric.operations.local`'s current working directory and `~fabric.operations.put`/`~fabric.operations.get`'s local working directories. * :issue:`274`: `~fabric.operations.put`/`~fabric.operations.get` now have return values which may be iterated over to access the paths of files uploaded remotely or downloaded locally, respectively. These return values also allow access to ``.failed`` and ``.succeeded`` attributes, just like `~fabric.operations.run` and friends. (In this case, ``.failed`` is actually a list itself containing any paths which failed to transfer, which naturally acts as a boolean as well.) Documentation updates --------------------- * API, tutorial and usage docs updated with the above new features. * README now makes the Python 2.5+ requirement up front and explicit; some folks were still assuming it would run on Python 2.4. * Added a link to Python's documentation for string interpolation in `~fabric.contrib.files.upload_template`'s docstring. Changes in version 0.9.7 (2011-06-23) ===================================== The following changes were implemented in Fabric 0.9.7: Bugfixes -------- * :issue:`329`: `~fabric.operations.reboot` would have problems reconnecting post-reboot (resulting in a traceback) if ``env.host_string`` was not fully-formed (did not contain user and port specifiers.) This has been fixed. Changes in version 0.9.6 (2011-04-29) ===================================== The following changes were implemented in Fabric 0.9.6: Bugfixes -------- * :issue:`347`: `~fabric.contrib.files.append` incorrectly tested for ``str`` instead of ``types.StringTypes``, causing it to split up Unicode strings as if they were one character per line. This has been fixed. 
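Looking back at the 1.0 feature list above, the `~fabric.context_managers.lcd` (:issue:`245`) and file-like ``local_path`` (:issue:`217`) additions combine like so -- an illustrative sketch with made-up paths::

    from StringIO import StringIO
    from fabric.api import get, lcd, put

    def grab_config():
        # lcd() affects only the local side of get/put ...
        with lcd('/tmp'):
            get('/etc/myapp.conf', 'myapp.conf')  # lands in /tmp/myapp.conf
        # ... and file-like objects are accepted as local_path.
        put(StringIO('maintenance on\n'), '/tmp/flag.txt')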
Changes in version 0.9.5 (2011-03-21) ===================================== The following changes were implemented in Fabric 0.9.5: Bugfixes -------- * :issue:`37`: Internal refactoring of a Paramiko call from ``_transport`` to ``get_transport()``. * :issue:`258`: Modify subprocess call on Windows platforms to avoid space/quote problems in `~fabric.operations.local`. Thanks to Henrik Heimbuerger and Raymond Cote for catch + suggested fixes. * :issue:`261`: Fix bug in `~fabric.contrib.files.comment` which truncated regexen ending with ``$``. Thanks to Antti Kaihola for the catch. * :issue:`264`: Fix edge case in `~fabric.operations.reboot` by gracefully clearing connection cache. Thanks to Jason Gerry for the report & troubleshooting. * :issue:`268`: Allow for ``@`` symbols in usernames, which is valid on some systems. Fabric's host-string parser now splits username and hostname at the last ``@`` found instead of the first. Thanks to Thadeus Burgess for the report. * :issue:`287`: Fix bug in password prompt causing occasional tracebacks. Thanks to Antti Kaihola for the catch and Rick Harding for testing the proposed solution. * :issue:`288`: Use temporary files to work around the lack of a ``-i`` flag in OpenBSD and NetBSD `~fabric.contrib.files.sed`. Thanks to Morgan Lefieux for catch + patches. * :issue:`305`: Strip whitespace from hostnames to help prevent user error. Thanks to Michael Bravo for the report and Rick Harding for the patch. * :issue:`316`: Use of `~fabric.context_managers.settings` with key names not previously set in ``env`` no longer raises KeyErrors. Whoops. Thanks to Adam Ernst for the catch. Documentation updates --------------------- * :issue:`228`: Added description of the PyCrypto + pip + Python 2.5 problem to the documentation and removed the Python 2.5 check from ``setup.py``. * :issue:`291`: Updated the PyPM-related install docs re: recent changes in PyPM and its download URLs. Thanks to Sridhar Ratnakumar for the patch. Changes in version 0.9.4 (2011-02-18) ===================================== The following changes were implemented in Fabric 0.9.4: Feature additions ----------------- * Added :doc:`documentation ` for using Fabric as a library. * Mentioned our `Twitter account `_ on the main docs page. * :issue:`290`: Added ``escape`` kwarg to `~fabric.contrib.files.append` to allow control over previously automatic single-quote escaping. Changes in version 0.9.3 (2010-11-12) ===================================== The following changes were implemented in Fabric 0.9.3: Feature additions ----------------- * :issue:`255`: Added ``stderr`` and ``succeeded`` attributes to `~fabric.operations.local`. * :issue:`254`: Backported the ``.stderr`` and ``.succeeded`` attributes on `~fabric.operations.run`/`~fabric.operations.sudo` return values, from the Git master/pre-1.0 branch. Please see those functions' API docs for details. Bugfixes -------- * :issue:`228`: We discovered that the pip + PyCrypto installation problem was limited to Python 2.5 only, and have updated our ``setup.py`` accordingly. * :issue:`230`: Arbitrary or remainder commands (``fab -- ``) will no longer blow up when invoked with no fabfile present. Thanks to IRC user ``orkaa`` for the report. * :issue:`242`: Empty string values in task CLI args now parse correctly. Thanks to Aaron Levy for the catch + patch. Documentation updates --------------------- * :issue:`239`: Fixed typo in execution usage docs. Thanks to Pradeep Gowda and Turicas for the catch. 
Changes in version 0.9.2 (2010-09-06) ===================================== The following changes were implemented in Fabric 0.9.2: Feature additions ----------------- * The `~fabric.operations.reboot` operation has been added, providing a way for Fabric to issue a reboot command and then reconnect after the system has restarted. * ``python setup.py test`` now runs Fabric's test suite (provided you have all the prerequisites from the ``requirements.txt`` installed). Thanks to Eric Holscher for the patch. * Added functionality for loading fabfiles which are Python packages (directories) instead of just modules (single files.) See :ref:`fabfile-discovery`. * Added output lines informing the user of which tasks are being executed (e.g. ``[myserver] Executing task 'foo'``.) * Added support for lazy (callable) role definition values in ``env.roledefs``. * Added `contrib.django ` module with basic Django integration. * :ref:`env.local_user ` was added, providing easy and permanent access to the local system username, even if an alternate remote username has been specified. * :issue:`29`: Added support for arbitrary command-line-driven anonymous tasks via ``fab [options] -- [shell command]``. See :ref:`arbitrary-commands`. * :issue:`52`: Full tracebacks during aborts are now displayed if the user has opted to see debug-level output. * :issue:`101`: Added `~fabric.colors` module with basic color output support. (:issue:`101` is still open: we plan to leverage the new module in Fabric's own output in the future.) * :issue:`137`: Commas used to separate per-task arguments may now be escaped with a backslash. Thanks to Erich Heine for the patch. * :issue:`144`: `~fabric.decorators.hosts` (and `~fabric.decorators.roles`) will now expand a single, iterable argument instead of requiring one to use e.g. ``@hosts(*iterable)``. * :issue:`151`: Added a `~fabric.utils.puts` utility function, which allows greater control over fabfile-generated (as opposed to Fabric-generated) output. Also added `~fabric.utils.fastprint`, an alias to `~fabric.utils.puts` allowing for convenient unbuffered, non-newline-terminated printing. * :issue:`208`: Users rolling their own shell completion or who otherwise find themselves performing text manipulation on the output of :option:`--list <-l>` may now use :option:`--shortlist` to get a plain, newline-separated list of task names. Bugfixes -------- * The interactive "what host to connect to?" prompt now correctly updates the appropriate environment variables (hostname, username, port) based on user input. * Fixed a bug where Fabric's own internal fabfile would pre-empt the user's fabfile due to a PYTHONPATH order issue. User fabfiles are now always loaded at the front of the PYTHONPATH during import. * Disabled some DeprecationWarnings thrown by Paramiko when that library is imported into Fabric under Python 2.6. * :issue:`44`, :issue:`63`: Modified `~fabric.contrib.project.rsync_project` to honor the SSH port and identity file settings. Thanks to Mitch Matuson and Morgan Goose. * :issue:`123`: Removed Cygwin from the "are we on Windows" test; now, only Python installs whose ``sys.platform`` says ``'win32'`` will use Windows-only code paths (e.g. importing of ``pywin32``). Documentation updates --------------------- * Added a few new items to the :doc:`FAQ `. * :issue:`173`: Simple but rather embarrassing typo fix in README. Thanks to Ted Nyman for the catch. 
* :issue:`194`: Added a note to :doc:`the install docs ` about a possible edge case some Windows 64-bit Python users may encounter. * :issue:`216`: Overhauled the :ref:`process backgrounding FAQ ` to include additional techniques and be more holistic. Packaging updates ----------------- * :issue:`86`, :issue:`158`: Removed the bundled Paramiko 1.7.4 and updated the ``setup.py`` to require Paramiko >=1.7.6. This lets us skip the known-buggy Paramiko 1.7.5 while getting some much needed bugfixes in Paramiko 1.7.6. Changes in version 0.9.1 (2010-05-28) ===================================== The following changes were implemented in Fabric 0.9.1: Feature additions ----------------- * :issue:`82`: `~fabric.contrib.files.append` now offers a ``partial`` kwarg allowing control over whether the "don't append if given text already exists" test looks for exact matches or not. Thanks to Jonas Nockert for the catch and discussion. * :issue:`112`: ``fab --list`` now prints out the fabfile's module-level docstring as a header, if there is one. * :issue:`141`: Added some more CLI args/env vars to allow user configuration of the Paramiko ``connect`` call -- specifically :ref:`no_agent` and :ref:`no_keys`. Bugfixes -------- * :issue:`75`: ``fab``, when called with no arguments or (useful) options, now prints help, even when no fabfile can be found. Previously, calling ``fab`` in a location with no fabfile would complain about the lack of fabfile instead of displaying help. * :issue:`130`: Context managers now correctly clean up ``env`` if they encounter an exception. Thanks to Carl Meyer for catch + patch. * :issue:`132`: `~fabric.operations.local` now calls ``strip`` on its stdout, matching the behavior of `~fabric.operations.run`/`~fabric.operations.sudo`. Thanks to Carl Meyer again on this one. * :issue:`166`: `~fabric.context_managers.cd` now correctly overwrites ``env.cwd`` when given an absolute path, instead of naively appending its argument to ``env.cwd``'s previous value. Documentation updates --------------------- * A number of small to medium documentation tweaks were made which had no specific Redmine ticket. The largest of these is the addition of :doc:`the FAQ <../faq>` to the Sphinx documentation instead of storing it as a source-only text file. (Said FAQ was also slightly expanded with new FAQs.) * :issue:`17`: Added :ref:`note to FAQ ` re: use of ``dtach`` as alternative to ``screen``. Thanks to Erich Heine for the tip. * :issue:`64`: Updated :ref:`installation docs ` to clarify where package maintainers should be downloading tarballs from. Thanks to James Pearson for providing the necessary perspective. * :issue:`95`: Added a link to a given version's changelog on the PyPI page (technically, to the ``setup.py`` ``long_description`` field). * :issue:`110`: Alphabetized :ref:`the CLI argument command reference `. Thanks to Erich Heine. * :issue:`120`: Tweaked documentation, help strings to make it more obvious that fabfiles are simply Python modules. * :issue:`127`: Added :ref:`note to install docs ` re: ActiveState's PyPM. Thanks to Sridhar Ratnakumar for the tip. Changes in version 0.9 (2009-11-08) =================================== This document details the various backwards-incompatible changes made during Fabric's rewrite between versions 0.1 and 0.9. 
The codebase has been almost completely rewritten and reorganized and an attempt has been made to remove "magical" behavior and make things more simple and Pythonic; the ``fab`` command-line component has also been redone to behave more like a typical Unix program. Major changes ------------- You'll want to at least skim the entire document, but the primary changes that will need to be made to one's fabfiles are as follows: Imports ~~~~~~~ You will need to **explicitly import any and all methods or decorators used**, at the top of your fabfile; they are no longer magically available. Here's a sample fabfile that worked with 0.1 and earlier:: @hosts('a', 'b') def my_task(): run('ls /var/www') sudo('mkdir /var/www/newsite') The above fabfile uses `hosts`, `run` and `sudo`, and so in Fabric 0.9 one simply needs to import those objects from the new API module ``fabric.api``:: from fabric.api import hosts, run, sudo @hosts('a', 'b') def my_task(): run('ls /var/www') sudo('mkdir /var/www/newsite') You may, if you wish, use ``from fabric.api import *``, though this is technically not Python best practices; or you may import directly from the Fabric submodules (e.g. ``from fabric.decorators import hosts``.) See :doc:`../usage/fabfiles` for more information. Python version ~~~~~~~~~~~~~~ Fabric started out Python 2.5-only, but became largely 2.4 compatible at one point during its lifetime. Fabric is once again **only compatible with Python 2.5 or newer**, in order to take advantage of the various new features and functions available in that version. With this change we're setting an official policy to support the two most recent stable releases of the Python 2.x line, which at time of writing is 2.5 and 2.6. We feel this is a decent compromise between new features and the reality of operating system packaging concerns. Given that most users use Fabric from their workstations, which are typically more up-to-date than servers, we're hoping this doesn't cut out too many folks. Finally, note that while we will not officially support a 2.4-compatible version or fork, we may provide a link to such a project if one arises. Environment/config variables ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``config`` object previously used to access and set internal state (including Fabric config options) **has been renamed** to :data:`env`, but otherwise remains mostly the same (it allows both dictionary and object-attribute style access to its data.) :data:`env` resides in the :mod:`state` submodule and is importable via ``fabric.api``, so where before one might have seen fabfiles like this:: def my_task(): config.foo = 'bar' one will now be explicitly importing the object like so:: from fabric.api import env def my_task(): env.foo = 'bar' Execution mode ~~~~~~~~~~~~~~ Fabric's default mode of use, in prior versions, was what we called "broad mode": your tasks, as Python code, ran only once, and any calls to functions that made connections (such as `run` or `sudo`) would run once per host in the current host list. We also offered "deep mode", in which your entire task function would run once per host. In Fabric 0.9, this dichotomy has been removed, and **"deep mode" is the method Fabric uses to perform all operations**. This allows you to treat your Fabfiles much more like regular Python code, including the use of ``if`` statements and so forth, and allows operations like `run` to unambiguously return the output from the server. 
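For example (a sketch under the new execution model; the paths are hypothetical), a task may branch directly on `run`'s return value::

    from fabric.api import run, sudo

    def setup_www():
        # The whole task runs once per host, so ordinary Python control
        # flow can act on run()'s output directly.
        if 'www' not in run('ls /var'):
            sudo('mkdir /var/www')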
Other modes of execution such as the old "broad mode" may return as Fabric's internals are refactored and expanded, but for now we've simplified things, and deep mode made the most sense as the primary mode of use. "Lazy" string interpolation ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Because of how Fabric used to run in "broad mode" (see previous section) a special string formatting technique -- the use of a bash-like dollar sign notation, e.g. ``"hostname: $(fab_host)"`` -- had to be used to allow the current state of execution to be represented in one's operations. **This is no longer necessary and has been removed**. Because your tasks are executed once per host, you may build strings normally (e.g. with the ``%`` operator) and refer to ``env.host_string``, ``env.user`` and so forth. For example, Fabric 0.1 had to insert the current username like so:: print("Your current username is $(fab_user)") Fabric 0.9 and up simply reference ``env`` variables as normal:: print("Your current username is %s" % env.user) As with the execution modes, a special string interpolation function or method that automatically makes use of ``env`` values may find its way back into Fabric at some point if a need becomes apparent. Other backwards-incompatible changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In no particular order: * The Fabric config file location used to be ``~/.fabric``; in the interests of honoring Unix filename conventions, it's now ``~/.fabricrc``. * The old ``config`` object (now :data:`env`) had a ``getAny`` method which took one or more key strings as arguments, and returned the value attached to the first valid key. This method still exists but has been renamed to `first`. * Environment variables such as ``fab_host`` have been renamed to simply e.g. ``host``. This looks cleaner and feels more natural, and requires less typing. Users will naturally need to be careful not to override these variables, but the same holds true for e.g. Python's builtin methods and types already, so we felt it was worth the tradeoff. * Fabric's version header is no longer printed every time the program runs; you should now use the standard ``--version``/``-V`` command-line options to print version and exit. * The old ``about`` command has been removed; other Unix programs don't typically offer this. Users can always view the license and warranty info in their respective text files distributed with the software. * The old ``help`` command is now the typical Unix options ``-h``/``--help``. * Furthermore, there is no longer a listing of Fabric's programming API available through the command line -- those topics impact fabfile authors, not fab users (even though the former is a subset of the latter) and should stay in the documentation only. * `prompt`'s primary function is now to return a value to the caller, although it may still optionally store the entered value in `env` as well. * `prompt` now considers the empty string to be valid input; this allows other functions to wrap `prompt` and handle "empty" input on their own terms. * In addition to the above changes, `prompt` has been updated to behave more obviously, as its previous behavior was confusing in a few ways: * It will now overwrite pre-existing values in the environment dict, but will print a warning to the user if it does so. * Additionally, (and this appeared to be undocumented) the ``default`` argument could take a callable as well as a string, and would simply set the default message to the return value if a callable was given. 
This seemed to add unnecessary complexity (given that users may call e.g. ``prompt(blah, msg, default=my_callable())``) so it has been removed.
* When connecting, Fabric used to use the undocumented ``fab_pkey`` env variable as a method of passing in a Paramiko ``PKey`` object to the SSH client's ``connect`` method. This has been removed in favor of an ``ssh``-like ``-i`` option, which allows one to specify a private key file to use; that should generally be enough for most users.
* ``download`` is now `get` in order to match up with `put` (the name mismatch was due to `get` being the old method of getting env vars.)
* The ``noshell`` argument to `sudo` (added late in its life to previous Fabric versions) has been renamed to ``shell`` (defaults to True, so the effective behavior remains the same) and has also been extended to the `run` operation.
* Additionally, the global ``sudo_noshell`` option has been renamed to ``use_shell`` and also applies to both `run` and `sudo`.
* ``local_per_host`` has been removed, as it only applied to the now-removed "broad mode".
* ``load`` has been removed; Fabric is now "just Python", so use Python's import mechanisms in order to stitch multiple fabfiles together.
* ``abort`` is no longer an "operation" *per se* and has been moved to :mod:`fabric.utils`. It is otherwise the same as before, taking a single string message, printing it to the user and then calling ``sys.exit(1)``.
* ``rsyncproject`` and ``upload_project`` have been moved into :mod:`fabric.contrib` (specifically, :mod:`fabric.contrib.project`), which is intended to be a new tree of submodules for housing "extra" code which may build on top of the core Fabric operations.
* ``invoke`` has been turned on its head, and is now the `runs_once` decorator (living in :mod:`fabric.decorators`). When used to decorate a function, that function will only execute one time during the lifetime of a ``fab`` run. Thus, where you might have used ``invoke`` multiple times to ensure a given command only runs once, you may now use `runs_once` to decorate the function and then call it multiple times in a normal fashion.
* It looks like the regex behavior of the ``validate`` argument to `prompt` was never actually implemented. It now works as advertised.
* Couldn't think of a good reason for `require` to be a decorator *and* a function, and the function is more versatile in terms of where it may be used, so the decorator has been removed.
* As things currently stand with the execution model, the ``depends`` decorator doesn't make a lot of sense: instead, it's safest/best to simply make "meta" commands that just call whatever chain of "real" commands you need performed for a given overarching task. For example, instead of having command A say that it "depends on" command B, create a command C which calls A and B in the right order, e.g.::

      def build():
          local('make clean all')

      def upload():
          put('app.tgz', '/tmp/app.tgz')
          run('tar xzf /tmp/app.tgz')

      def symlink():
          run('ln -s /srv/media/photos /var/www/app/photos')

      def deploy():
          build()
          upload()
          symlink()

  .. note::
      The execution model is still subject to change as Fabric evolves. Please don't hesitate to email the list or the developers if you have a use case that needs something Fabric doesn't provide right now!

* Removed the old ``fab shell`` functionality, since the move to "just Python" should make vanilla ``python``/``ipython`` usage of Fabric much easier.
* We may add it back in later as a convenient shortcut to what basically amounts to running ``ipython`` and performing a handful of ``from fabric.foo import bar`` calls. * The undocumented `fab_quiet` option has been replaced by a much more granular set of output controls. For more info, see :doc:`../usage/output_controls`. Changes from alpha 1 to alpha 2 ------------------------------- The below list was generated by running ``git shortlog 0.9a1..0.9a2`` and then manually sifting through and editing the resulting commit messages. This will probably occur for the rest of the alphas and betas; we hope to use Sphinx-specific methods of documenting changes once the final release is out the door. * Various minor tweaks to the (still in-progress) documentation, including one thanks to Curt Micol. * Added a number of TODO items based on user feedback (thanks!) * Host information now available in granular form (user, host, port) in the env dict, alongside the full ``user@host:port`` host string. * Parsing of host strings is now more lenient when examining the username (e.g. hyphens.) * User/host info no longer cleared out between commands. * Tweaked ``setup.py`` to use ``find_packages``. Thanks to Pat McNerthney. * Added 'capture' argument to `~fabric.operations.local` to allow local interactive tasks. * Reversed default value of `~fabric.operations.local`'s ``show_stderr`` kwarg; local stderr now prints by default instead of being hidden by default. * Various internal fabfile tweaks. Changes from alpha 2 to alpha 3 ------------------------------- * Lots of updates to the documentation and TODO * Added contrib.files with a handful of file-centric subroutines * Added contrib.console for console UI stuff (so far, just `confirm`) * Reworked config file mechanisms a bit, added CLI flag for setting it. * Output controls (including CLI args, documentation) have been added * Test coverage tweaked and grown a small amount (thanks in part to Peter Ellis) * Roles overhauled/fixed (more like hosts now) * Changed ``--list`` linewrap behavior to truncate instead. * Make private key passphrase prompting more obvious to users. * Add ``pty`` option to `sudo`. Thanks to José Muanis for the tip-off re: get_pty() * Add CLI argument for setting the shell used in commands (thanks to Steve Steiner) * Only load host keys when ``env.reject_unknown_keys`` is True. Thanks to Pat McNerthney. * And many, many additional bugfixes and behavioral tweaks too small to merit cluttering up this list! Thanks as always to everyone who contributed bugfixes, feedback and/or patches. Changes from alpha 3 to beta 1 ------------------------------ This is closer to being a straight dump of the Git changelog than the previous sections; apologies for the overall change in tense. * Add autodocs for fabric.contrib.console. * Minor cleanup to package init and setup.py. * Handle exceptions with strerror attributes that are None instead of strings. * contrib.files.append may now take a list of strings if desired. * Straighten out how prompt() deals with trailing whitespace * Add 'cd' context manager. * Update upload_template to correctly handle backing up target directories. * upload_template() can now use Jinja2 if it's installed and user asks for it. * Handle case where remote host SSH key doesn't match known_hosts. * Fix race condition in run/sudo. * Start fledgling FAQ; extended pty option to run(); related doc tweaks. * Bring local() in line with run()/sudo() in terms of .failed attribute. * Add dollar-sign backslash escaping to run/sudo. 
* Add FAQ question re: backgrounding processes. * Extend some of put()'s niceties to get(), plus docstring/comment updates * Add debug output of chosen fabfile for troubleshooting fabfile discovery. * Fix Python path bug which sometimes caused Fabric's internal fabfile to pre-empt user's fabfile during load phase. * Gracefully handle "display" for tasks with no docstring. * Fix edge case that comes up during some auth/prompt situations. * Handle carriage returns in output_thread correctly. Thanks to Brian Rosner. Changes from beta 1 to release candidate 1 ------------------------------------------ As with the previous changelog, this is also mostly a dump of the Git log. We promise that future changelogs will be more verbose :) * Near-total overhaul and expansion of documentation (this is the big one!) Other mentions of documentation in this list are items deserving their own mention, e.g. FAQ updates. * Add FAQ question re: passphrase/password prompt * Vendorized Paramiko: it is now included in our distribution and is no longer an external dependency, at least until upstream fixes a nasty 1.7.5 bug. * Fix #34: switch upload_template to use mkstemp (also removes Python 2.5.2+ dependency -- now works on 2.5.0 and up) * Fix #62 by escaping backticks. * Replace "ls" with "test" in exists() * Fixes #50. Thanks to Alex Koshelev for the patch. * ``local``'s return value now exhibits ``.return_code``. * Abort on bad role names instead of blowing up. * Turn off DeprecationWarning when importing paramiko. * Attempted fix re #32 (dropped output) * Update role/host initialization logic (was missing some edge cases) * Add note to install docs re: PyCrypto on win32. * Add FAQ item re: changing env.shell. * Rest of TODO migrated to tickets. * ``fab test`` (when in source tree) now uses doctests. * Add note to compatibility page re: fab_quiet. * Update local() to honor context_managers.cd() Changes from release candidate 1 to final release ------------------------------------------------- * Fixed the `~fabric.contrib.files.sed` docstring to accurately reflect which ``sed`` options it uses. * Various changes to internal fabfile, version mechanisms, and other non-user-facing things. Fabric-1.8.2/docs/conf.py000644 000765 000024 00000017405 12277451130 016116 0ustar00jforcierstaff000000 000000 # -*- coding: utf-8 -*- # # Fabric documentation build configuration file, created by # sphinx-quickstart on Sat Apr 25 14:53:36 2009. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. from __future__ import with_statement import os import sys import types from datetime import datetime # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
extensions = ['sphinx.ext.autodoc', 'releases'] # 'releases' (changelog) settings releases_issue_uri = "https://github.com/fabric/fabric/issues/%s" releases_release_uri = "https://github.com/fabric/fabric/tree/%s" # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Fabric' year = datetime.now().year copyright = u'%d, Christian Vest Hansen and Jeffrey E. Forcier' % year # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # Add this checkout's local Fabric module to sys.path. Allows use of # fabric.version in here, and ensures that the autodoc stuff also works. sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..'))) from fabric.version import get_version # Get version info # # Branch-only name version = get_version('branch') # The full human readable version, including alpha/beta/rc tags. release = get_version('normal') # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. default_role = 'obj' # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' html_style = 'rtd.css' html_context = {} from fabric.api import local, hide, settings with settings(hide('everything'), warn_only=True): get_tags = 'git tag | sort -r | egrep "(1\.[^0]+)\.."' tag_result = local(get_tags, True) if tag_result.succeeded: html_context['fabric_tags'] = tag_result.split() # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. 
#html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'Fabricdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Fabric.tex', u'Fabric Documentation', u'Jeff Forcier', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True # Restore decorated functions so that autodoc inspects the right arguments def unwrap_decorated_functions(): from fabric import operations, context_managers for module in [context_managers, operations]: for name, obj in vars(module).iteritems(): if ( # Only function objects - just in case some real object showed # up that had .undecorated isinstance(obj, types.FunctionType) # Has our .undecorated 'cache' of the real object and hasattr(obj, 'undecorated') ): setattr(module, name, obj.undecorated) unwrap_decorated_functions() Fabric-1.8.2/docs/development.rst000644 000765 000024 00000022316 12277451120 017667 0ustar00jforcierstaff000000 000000 =========== Development =========== The Fabric development team is headed by `Jeff Forcier `_, aka ``bitprophet``. 
However, dozens of other developers pitch in by submitting patches and ideas via `GitHub issues and pull requests `_, :ref:`IRC ` or the `mailing list `_. Get the code ============ Please see the :ref:`source-code-checkouts` section of the :doc:`installation` page for details on how to obtain Fabric's source code. Contributing ============ There are a number of ways to get involved with Fabric: * **Use Fabric and send us feedback!** This is both the easiest and arguably the most important way to improve the project -- let us know how you currently use Fabric and how you want to use it. (Please do try to search the `ticket tracker `_ first, though, when submitting feature ideas.) * **Report bugs.** Pretty much a special case of the previous item: if you think you've found a bug in Fabric, check on the `ticket tracker `_ to see if anyone's reported it yet, and if not -- file a bug! If possible, try to make sure you can replicate it repeatedly, and let us know the circumstances (what version of Fabric you're using, what platform you're on, and what exactly you were doing when the bug cropped up.) * **Submit patches or new features.** Make a `Github `_ account, `create a fork `_ of `the main Fabric repository `_, and `submit a pull request `_. While we may not always reply promptly, we do try to make time eventually to inspect all contributions and either incorporate them or explain why we don't feel the change is a good fit. .. include:: ../CONTRIBUTING.rst Coding style ------------ Fabric tries hard to honor `PEP-8`_, especially (but not limited to!) the following: * Keep all lines under 80 characters. This goes for the ReST documentation as well as code itself. * Exceptions are made for situations where breaking a long string (such as a string being ``print``-ed from source code, or an especially long URL link in documentation) would be kind of a pain. * Typical Python 4-space (soft-tab) indents. No tabs! No 8 space indents! (No 2- or 3-space indents, for that matter!) * ``CamelCase`` class names, but ``lowercase_underscore_separated`` everything else. .. _PEP-8: http://www.python.org/dev/peps/pep-0008/ Communication ------------- If a ticket-tracker ticket exists for a given issue, **please** keep all communication in that ticket's comments -- for example, when submitting patches via Github, it's easier for us if you leave a note in the ticket **instead of** sending a Github pull request. The core devs receive emails for just about any ticket-tracker activity, so additional notices via Github or other means only serve to slow things down. Branching/Repository Layout =========================== While Fabric's development methodology isn't set in stone yet, the following items detail how we currently organize the Git repository and expect to perform merges and so forth. This will be chiefly of interest to those who wish to follow a specific Git branch instead of released versions, or to any contributors. * We use a combined 'release and feature branches' methodology, where every minor release (e.g. 0.9, 1.0, 1.1, 1.2 etc; see :ref:`releases` below for details on versioning) gets a release branch for bugfixes, and big feature development is performed in a central ``master`` branch and/or in feature-specific feature branches (e.g. a branch for reworking the internals to be threadsafe, or one for overhauling task dependencies, etc.) * Releases each get their own release branch, e.g. ``0.9``, ``1.0``, ``1.1`` etc, and from these the actual releases are tagged, e.g. ``0.9.3`` or ``1.0.0``. 
* New feature work is typically done in feature branches, whose naming convention is ``-``. For example, ticket #61, which concerned adding ``cd`` support to ``get`` and ``put``, was developed in a branch named ``61-add-cd-to-get-put``. * These branches are not intended for public use, and may be cleaned out of the repositories periodically. Ideally, no one feature will be in development long enough for its branch to become used in production! * Completed feature work is merged into the ``master`` branch, and once enough new features are done, a new release branch is created and optionally used to create prerelease versions for testing -- or simply released as-is. * While we try our best not to commit broken code or change APIs without warning, as with many other open-source projects we can only have a guarantee of stability in the release branches. Only follow ``master`` (or, even worse, feature branches!) if you're willing to deal with a little pain. * Conversely, because we try to keep release branches relatively stable, you may find it easier to use Fabric from a source checkout of a release branch instead of manually upgrading to new released versions. This can provide a decent middle ground between stability and the ability to get bugfixes or backported features easily. * The core developers will take care of performing merging/branching on the official repositories. Since Git is Git, contributors may of course do whatever they wish in their own clones/forks. * Bugfixes are to be performed on release branches and then merged into ``master`` so that ``master`` is always up-to-date (or nearly so; while it's not mandatory to merge after every bugfix, doing so at least daily is a good idea.) * Feature branches should periodically merge in changes from ``master`` so that when it comes time for them to merge back into ``master`` things aren't quite as painful. .. _releases: Releases ======== Fabric tries to follow open-source standards and conventions in its release tagging, including typical version numbers such as 2.0, 1.2.5, or 1.2b1. Each release will be marked as a tag in the Git repositories, and are broken down as follows: Major ----- Major releases update the first number, e.g. going from 0.9 to 1.0, and indicate that the software has reached some very large milestone. For example, the 1.0 release signified a commitment to a medium to long term API and some significant backwards incompatible (compared to the 0.9 series) features. Version 2.0 might indicate a rewrite using a new underlying network technology or an overhaul to be more object-oriented. Major releases will often be backwards-incompatible with the previous line of development, though this is not a requirement, just a usual happenstance. Users should expect to have to make at least some changes to their fabfiles when switching between major versions. Minor ----- Minor releases, such as moving from 1.0 to 1.1, typically mean that one or more new, large features has been added. They are also sometimes used to mark off the fact that a lot of bug fixes or small feature modifications have occurred since the previous minor release. (And, naturally, some of them will involve both at the same time.) These releases are guaranteed to be backwards-compatible with all other releases containing the same major version number, so a fabfile that works with 1.0 should also work fine with 1.1 or even 1.9. 
Bugfix/tertiary --------------- The third and final part of version numbers, such as the '3' in 1.0.3, generally indicate a release containing one or more bugfixes, although minor feature modifications may (rarely) occur. This third number is sometimes omitted for the first major or minor release in a series, e.g. 1.2 or 2.0, and in these cases it can be considered an implicit zero (e.g. 2.0.0). .. note:: The 0.9 series of development included more significant feature work than is typically found in tertiary releases; from 1.0 onwards a more traditional approach, as per the above, is used. Support of older releases ========================= Major and minor releases do not mark the end of the previous line or lines of development: * The two most recent minor release branches will continue to receive critical bugfixes. For example, if 1.1 were the latest minor release, it and 1.0 would get bugfixes, but not 0.9 or earlier; and once 1.2 came out, this window would then only extend back to 1.1. * Depending on the nature of bugs found and the difficulty in backporting them, older release lines may also continue to get bugfixes -- but there's no longer a guarantee of any kind. Thus, if a bug were found in 1.1 that affected 0.9 and could be easily applied, a new 0.9.x version *might* be released. * This policy may change in the future to accommodate more branches, depending on development speed. We hope that this policy will allow us to have a rapid minor release cycle (and thus keep new features coming out frequently) without causing users to feel too much pressure to upgrade right away. At the same time, the backwards compatibility guarantee means that users should still feel comfortable upgrading to the next minor release in order to stay within this sliding support window. Fabric-1.8.2/docs/faq.rst000644 000765 000024 00000023053 12277451120 016113 0ustar00jforcierstaff000000 000000 ================================ Frequently Asked Questions (FAQ) ================================ These are some of the most commonly encountered problems or frequently asked questions which we receive from users. They aren't intended as a substitute for reading the rest of the documentation, especially the :ref:`usage docs `, so please make sure you check those out if your question is not answered here. How do I dynamically set host lists? ==================================== See :ref:`dynamic-hosts`. How can I run something after my task is done on all hosts? =========================================================== See :ref:`leveraging-execute-return-value`. .. _init-scripts-pty: Init scripts don't work! ======================== Init-style start/stop/restart scripts (e.g. ``/etc/init.d/apache2 start``) sometimes don't like Fabric's allocation of a pseudo-tty, which is active by default. In almost all cases, explicitly calling the command in question with ``pty=False`` works correctly:: sudo("/etc/init.d/apache2 restart", pty=False) If you have no need for interactive behavior and run into this problem frequently, you may want to deactivate pty allocation globally by setting :ref:`env.always_use_pty ` to ``False``. .. _one-shell-per-command: My (``cd``/``workon``/``export``/etc) calls don't seem to work! =============================================================== While Fabric can be used for many shell-script-like tasks, there's a slightly unintuitive catch: each `~fabric.operations.run` or `~fabric.operations.sudo` call has its own distinct shell session. 
This is required in order for Fabric to reliably figure out, after your command has run, what its standard out/error and return codes were. Unfortunately, it means that code like the following doesn't behave as you might assume::

    def deploy():
        run("cd /path/to/application")
        run("./update.sh")

If that were a shell script, the second `~fabric.operations.run` call would have executed with a current working directory of ``/path/to/application/`` -- but because both commands are run in their own distinct session over SSH, it actually tries to execute ``$HOME/update.sh`` instead (since your remote home directory is the default working directory).

A simple workaround is to make use of shell logic operators such as ``&&``, which link multiple expressions together (provided the left hand side executed without error) like so::

    def deploy():
        run("cd /path/to/application && ./update.sh")

Fabric provides a convenient shortcut for this specific use case, in fact: `~fabric.context_managers.cd`. There is also `~fabric.context_managers.prefix` for arbitrary prefix commands.
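For instance, the two context managers nest cleanly; here's a minimal sketch (the application path and the virtualenv activation command are hypothetical, purely for illustration)::

    from __future__ import with_statement
    from fabric.api import cd, prefix, run

    def update():
        # Both context managers simply prepend to the eventual shell command,
        # so the run() call below executes, roughly:
        #   cd /path/to/application && source venv/bin/activate && ./update.sh
        with cd('/path/to/application'):
            with prefix('source venv/bin/activate'):
                run('./update.sh')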
.. note::
    You might also get away with an absolute path and skip directory changing altogether::

        def deploy():
            run("/path/to/application/update.sh")

    However, this requires that the command in question makes no assumptions about your current working directory!

How do I use ``su`` to run commands as another user?
====================================================

This is a special case of :ref:`one-shell-per-command`. As that FAQ explains, commands like ``su`` which are 'stateful' do not work well in Fabric, so workarounds must be used.

In the case of running commands as a user distinct from the login user, you have two options:

#. Use `~fabric.operations.sudo` with its ``user=`` kwarg, e.g. ``sudo("command", user="otheruser")``. If you want to factor the ``user`` part out of a bunch of commands, use `~fabric.context_managers.settings` to set ``env.sudo_user``::

       with settings(sudo_user="otheruser"):
           sudo("command 1")
           sudo("command 2")
           ...

#. If your target system cannot use ``sudo`` for some reason, you can still use ``su``, but you need to invoke it in a non-interactive fashion by telling it to run a specific command instead of opening a shell. Typically this is the ``-c`` flag, e.g. ``su otheruser -c "command"``. To run multiple commands in the same ``su -c`` "wrapper", you could e.g. write a wrapper function around `~fabric.operations.run`::

       def run_su(command, user="otheruser"):
           return run('su %s -c "%s"' % (user, command))

Why do I sometimes see ``err: stdin: is not a tty``?
====================================================

This message is typically generated by programs such as ``biff`` or ``mesg`` lurking within your remote user's ``.profile`` or ``.bashrc`` files (or any other such files, including system-wide ones.) Fabric's default mode of operation involves executing the Bash shell in "login mode", which causes these files to be executed.

Because Fabric also doesn't bother asking the remote end for a tty by default (as it's not usually necessary), programs fired within your startup files, which expect a tty to be present, will complain -- and thus, stderr output about "stdin is not a tty" or similar.

There are multiple ways to deal with this problem:

* Find and remove or comment out the offending program call. If the program was not added by you on purpose and is simply a legacy of the operating system, this may be safe to do, and is the simplest approach.
* Override ``env.shell`` to remove the ``-l`` flag. This should tell Bash not to load your startup files. If you don't depend on the contents of your startup files (such as aliases or whatnot), this may be a good solution.
* Pass ``pty=True`` to `run` or `sudo`, which will force allocation of a pseudo-tty on the remote end, and hopefully cause the offending program to be less cranky.

.. _faq-daemonize:

Why can't I run programs in the background with ``&``? It makes Fabric hang.
============================================================================

Because Fabric executes a shell on the remote end for each invocation of ``run`` or ``sudo`` (:ref:`see also <one-shell-per-command>`), backgrounding a process via the shell will not work as expected. Backgrounded processes may still prevent the calling shell from exiting until they stop running, and this in turn prevents Fabric from continuing on with its own execution.

The key to fixing this is to ensure that your process' standard pipes are all disassociated from the calling shell, which may be done in a number of ways (listed in order of robustness):

* Use a pre-existing daemonization technique if one exists for the program at hand -- for example, calling an init script instead of directly invoking a server binary.
* Or leverage a process manager such as ``supervisord``, ``upstart`` or ``systemd`` -- such tools let you define what it means to "run" one of your background processes, then issue init-script-like start/stop/restart/status commands. They offer many advantages over classic init scripts as well.
* Use ``tmux``, ``screen`` or ``dtach`` to fully detach the process from the running shell; these tools have the benefit of allowing you to reattach to the process later on if needed (though they are more ad-hoc than ``supervisord``-like tools).
* Run the program under ``nohup`` or similar "in-shell" tools -- note that this approach has seen limited success for most users.

.. _faq-bash:

My remote system doesn't have ``bash`` installed by default, do I need to install ``bash``?
===========================================================================================

While Fabric is written with ``bash`` in mind, it's not an absolute requirement. Simply change :ref:`env.shell ` to call your desired shell, and include an argument similar to ``bash``'s ``-c`` argument, which allows us to build shell commands of the form::

    /bin/bash -l -c "<command string>"

where ``/bin/bash -l -c`` is the default value of :ref:`env.shell `.

.. note::
    The ``-l`` argument specifies a login shell and is not absolutely required, merely convenient in many situations. Some shells lack the option entirely, and it may be safely omitted in such cases.

A relatively safe baseline is to call ``/bin/sh``, which may call the original ``sh`` binary or (on some systems) ``csh``, and give it the ``-c`` argument, like so::

    from fabric.api import env
    env.shell = "/bin/sh -c"

This has been shown to work on FreeBSD and may work on other systems as well.

I'm sometimes incorrectly asked for a passphrase instead of a password.
=======================================================================

Due to a bug of sorts in our SSH layer, it's not currently possible for Fabric to always accurately detect the type of authentication needed. We have to try and guess whether we're being asked for a private key passphrase or a remote server password, and in some cases our guess ends up being wrong.

The most common such situation is where you, the local user, appear to have an SSH keychain agent running, but the remote server is not able to honor your SSH key, e.g.
you haven't yet transferred the public key over or are using an incorrect username. In this situation, Fabric will prompt you with "Please enter passphrase for private key", but the text you enter is actually being sent to the remote end's password authentication.

We hope to address this in future releases by modifying a fork of the aforementioned SSH library.

Is Fabric thread-safe?
======================

Currently, no, it's not -- the present version of Fabric relies heavily on shared state in order to keep the codebase simple. However, there are definite plans to update its internals so that Fabric may be either threaded or otherwise parallelized, so your tasks can run on multiple servers concurrently.

Fabric-1.8.2/docs/index.rst000644 000765 000024 00000013235 12277451120 016454 0ustar00jforcierstaff000000 000000 ======
Fabric
======

About
=====

.. include:: ../README.rst

Installation
============

Stable releases of Fabric are best installed via ``pip`` or ``easy_install``; or you may download TGZ or ZIP source archives from a couple of official locations. Detailed instructions and links may be found on the :doc:`installation` page.

We recommend using the latest stable version of Fabric; releases are made often to prevent any large gaps in functionality between the latest stable release and the development version. However, if you want to live on the edge, you can pull down the source code from our Git repository, or fork us on Github. The :doc:`installation` page has details for how to access the source code.

.. warning::
    If you install Fabric from Git, you will need to install its dependency Paramiko from Git as well. See :doc:`the installation docs <installation>` for details.

Development
===========

Any hackers interested in improving Fabric (or even users interested in how Fabric is put together or released) please see the :doc:`development` page. It contains comprehensive info on contributing, repository layout, our release strategy, and more.

.. _documentation-index:

Documentation
=============

Please note that all documentation is currently written with Python 2.5 users in mind, but with an eye for eventual Python 3.x compatibility. This leads to the following patterns that may throw off readers used to Python 2.4 or who have already upgraded to Python 2.6/2.7:

* ``from __future__ import with_statement``: a "future import" required to use the ``with`` statement in Python 2.5 -- a feature you'll be using frequently. Python 2.6+ users don't need to do this.
* ``<true_value> if <expression> else <false_value>``: Python's relatively new ternary statement, available in 2.5 and newer. Python 2.4 and older used to fake this with ``<expression> and <true_value> or <false_value>`` (which isn't quite the same thing and has some logical loopholes.)
* ``print(<stuff>)`` instead of ``print <stuff>``: We use the ``print`` statement's optional parentheses where possible, in order to be more compatible with Python 3.x (in which ``print`` becomes a function.)

.. toctree::
    :hidden:

    tutorial
    installation
    development
    faq
    troubleshooting
    roadmap

Tutorial
--------

For new users, and/or for an overview of Fabric's basic functionality, please see the :doc:`tutorial`. The rest of the documentation will assume you're at least passingly familiar with the material contained within.

.. _usage-docs:

Usage documentation
-------------------

The following list contains all major sections of Fabric's prose (non-API) documentation, which expands upon the concepts outlined in the :doc:`tutorial` and also covers advanced topics.

.. toctree::
    :maxdepth: 2
    :glob:

    usage/*

..
_faq: FAQ --- Some frequently encountered questions, coupled with answers/solutions/excuses, may be found on the :doc:`faq` page. Troubleshooting --------------- Before asking for help or filing a bug, make sure you've read our :doc:`document on troubleshooting `. .. _api_docs: API documentation ----------------- Fabric maintains two sets of API documentation, autogenerated from the source code's docstrings (which are typically very thorough.) .. _core-api: Core API ~~~~~~~~ The **core** API is loosely defined as those functions, classes and methods which form the basic building blocks of Fabric (such as `~fabric.operations.run` and `~fabric.operations.sudo`) upon which everything else (the below "contrib" section, and user fabfiles) builds. .. toctree:: :maxdepth: 1 :glob: api/core/* .. _contrib-api: Contrib API ~~~~~~~~~~~ Fabric's **contrib** package contains commonly useful tools (often merged in from user fabfiles) for tasks such as user I/O, modifying remote files, and so forth. While the core API is likely to remain small and relatively unchanged over time, this contrib section will grow and evolve (while trying to remain backwards-compatible) as more use-cases are solved and added. .. toctree:: :maxdepth: 1 :glob: api/contrib/* Changelog --------- Please see :doc:`the changelog `. Roadmap ------- Please see :doc:`the roadmap `. Getting help ============ If you've scoured the :ref:`prose ` and :ref:`API ` documentation and still can't find an answer to your question, below are various support resources that should help. We do request that you do at least skim the documentation before posting tickets or mailing list questions, however! Mailing list ------------ The best way to get help with using Fabric is via the `fab-user mailing list `_ (currently hosted at ``nongnu.org``.) The Fabric developers do their best to reply promptly, and the list contains an active community of other Fabric users and contributors as well. Twitter ------- Fabric has an official Twitter account, `@pyfabric `_, which is used for announcements and occasional related news tidbits (e.g. "Hey, check out this neat article on Fabric!"). .. _bugs: Bugs/ticket tracker ------------------- To file new bugs or search existing ones, you may visit Fabric's `Github Issues `_ page. This does require a (free, easy to set up) Github account. .. _irc: IRC --- We maintain a semi-official IRC channel at ``#fabric`` on Freenode (``irc://irc.freenode.net``) where the developers and other users may be found. As always with IRC, we can't promise immediate responses, but some folks keep logs of the channel and will try to get back to you when they can. Fabric-1.8.2/docs/installation.rst000644 000765 000024 00000022622 12277461064 020056 0ustar00jforcierstaff000000 000000 ============ Installation ============ Fabric is best installed via `pip `_ (highly recommended) or `easy_install `_ (older, but still works fine), e.g.:: $ pip install fabric You may also opt to use your operating system's package manager; the package is typically called ``fabric`` or ``python-fabric``. E.g.:: $ sudo apt-get install fabric Advanced users wanting to install a development version may use ``pip`` to grab the latest master branch (as well as the dev version of the Paramiko dependency):: $ pip install paramiko==dev $ pip install fabric==dev Or, to install an editable version for debugging/hacking, execute ``pip install -e .`` (or ``python setup.py install``) inside a :ref:`downloaded ` or :ref:`cloned ` copy of the source code. .. 
warning::
    Any development installs of Fabric (whether via ``==dev`` or ``install -e``) require the development version of Paramiko to be installed beforehand, or Fabric's installation may fail.

Dependencies
============

In order for Fabric's installation to succeed, you will need four primary pieces of software:

* the Python programming language;
* the ``setuptools`` packaging/installation library;
* the Python ``paramiko`` SSH2 library;
* and ``paramiko``'s dependency, the PyCrypto cryptography library.

and, if using the :doc:`parallel execution mode `:

* the `multiprocessing`_ library.

Please read on for important details on each dependency -- there are a few gotchas.

Python
------

Fabric requires `Python `_ version 2.5, 2.6 or 2.7. Some caveats and notes about other Python versions:

* We are not planning on supporting **Python 2.4** given its age and the number of useful tools in Python 2.5, such as context managers and new modules. That said, the actual amount of 2.5-specific functionality is not prohibitively large, and we would link to -- but not support -- a third-party 2.4-compatible fork. (No such fork exists at this time, to our knowledge.)
* Fabric has not yet been tested on **Python 3.x** and is thus likely to be incompatible with that line of development. However, we try to be at least somewhat forward-looking (e.g. using ``print()`` instead of ``print``) and will definitely be porting to 3.x in the future once our dependencies do.

setuptools
----------

`Setuptools`_ comes with some Python installations by default; if yours doesn't, you'll need to grab it. In such situations it's typically packaged as ``python-setuptools``, ``py25-setuptools`` or similar. Fabric may drop its setuptools dependency in the future, or include alternative support for the `Distribute`_ project, but for now setuptools is required for installation.

.. _setuptools: http://pypi.python.org/pypi/setuptools
.. _Distribute: http://pypi.python.org/pypi/distribute

PyCrypto
--------

`PyCrypto `_ provides the low-level (C-based) encryption algorithms used to run SSH, and is thus required by our SSH library. There are a couple of gotchas associated with installing PyCrypto: its compatibility with Python's package tools, and the fact that it is a C-based extension.

.. _pycrypto-and-pip:

Package tools
~~~~~~~~~~~~~

We strongly recommend using ``pip`` to install Fabric as it is newer and generally better than ``easy_install``. However, a combination of bugs in specific versions of Python, ``pip`` and PyCrypto can prevent installation of PyCrypto. Specifically:

* Python = 2.5.x
* PyCrypto >= 2.1 (which is required to run Fabric >= 1.3)
* ``pip`` < 0.8.1

When all three criteria are met, you may encounter ``No such file or directory`` IOErrors when trying to ``pip install Fabric`` or ``pip install PyCrypto``. The fix is simply to make sure at least one of the above criteria is not met, by doing the following (in order of preference):

* Upgrade to ``pip`` 0.8.1 or above, e.g. by running ``pip install -U pip``.
* Upgrade to Python 2.6 or above.
* Downgrade to Fabric 1.2.x, which does not require PyCrypto >= 2.1, and install PyCrypto 2.0.1 (the oldest version on PyPI which works with Fabric 1.2.)

C extension
~~~~~~~~~~~

Unless you are installing from a precompiled source such as a Debian apt repository or RedHat RPM, or using :ref:`pypm <pypm>`, you will also need the ability to build Python C-based modules from source in order to install PyCrypto.
Users on **Unix-based platforms** such as Ubuntu or Mac OS X will need the traditional C build toolchain installed (e.g. Developer Tools / XCode Tools on the Mac, or the ``build-essential`` package on Ubuntu or Debian Linux -- basically, anything with ``gcc``, ``make`` and so forth) as well as the Python development libraries, often named ``python-dev`` or similar. For **Windows** users we recommend using :ref:`pypm`, installing a C development environment such as `Cygwin `_ or obtaining a precompiled Win32 PyCrypto package from `voidspace's Python modules page `_. .. note:: Some Windows users whose Python is 64-bit have found that the PyCrypto dependency ``winrandom`` may not install properly, leading to ImportErrors. In this scenario, you'll probably need to compile ``winrandom`` yourself via e.g. MS Visual Studio. See :issue:`194` for info. ``multiprocessing`` ------------------- An optional dependency, the ``multiprocessing`` library is included in Python's standard library in version 2.6 and higher. If you're using Python 2.5 and want to make use of Fabric's :doc:`parallel execution features ` you'll need to install it manually; the recommended route, as usual, is via ``pip``. Please see the `multiprocessing PyPI page `_ for details. .. warning:: Early versions of Python 2.6 (in our testing, 2.6.0 through 2.6.2) ship with a buggy ``multiprocessing`` module that appears to cause Fabric to hang at the end of sessions involving large numbers of concurrent hosts. If you encounter this problem, either use :ref:`env.pool_size / -z ` to limit the amount of concurrency, or upgrade to Python >=2.6.3. Python 2.5 is unaffected, as it requires the PyPI version of ``multiprocessing``, which is newer than that shipped with Python <2.6.3. Development dependencies ------------------------ If you are interested in doing development work on Fabric (or even just running the test suite), you may also need to install some or all of the following packages: * `git `_ and `Mercurial`_, in order to obtain some of the other dependencies below; * `Nose `_ * `Coverage `_ * `PyLint `_ * `Fudge `_ * `Sphinx `_ For an up-to-date list of exact testing/development requirements, including version numbers, please see the ``requirements.txt`` file included with the source distribution. This file is intended to be used with ``pip``, e.g. ``pip install -r requirements.txt``. .. _Mercurial: http://mercurial.selenic.com/wiki/ .. _downloads: Downloads ========= To obtain a tar.gz or zip archive of the Fabric source code, you may visit `Fabric's PyPI page `_, which offers manual downloads in addition to being the entry point for ``pip`` and ``easy-install``. .. _source-code-checkouts: Source code checkouts ===================== The Fabric developers manage the project's source code with the `Git `_ DVCS. To follow Fabric's development via Git instead of downloading official releases, you have the following options: * Clone the canonical repository straight from `the Fabric organization's repository on Github `_, ``git://github.com/fabric/fabric.git`` * Make your own fork of the Github repository by making a Github account, visiting `fabric/fabric `_ and clicking the "fork" button. .. note:: If you've obtained the Fabric source via source control and plan on updating your checkout in the future, we highly suggest using ``python setup.py develop`` instead -- it will use symbolic links instead of file copies, ensuring that imports of the library or use of the command-line tool will always refer to your checkout. 
For information on the hows and whys of Fabric development, including which branches may be of interest and how you can help out, please see the :doc:`development` page. .. _pypm: ActivePython and PyPM ===================== Windows users who already have ActiveState's `ActivePython `_ distribution installed may find Fabric is best installed with `its package manager, PyPM `_. Below is example output from an installation of Fabric via ``pypm``:: C:\> pypm install fabric The following packages will be installed into "%APPDATA%\Python" (2.7): paramiko-1.7.8 pycrypto-2.4 fabric-1.3.0 Get: [pypm-free.activestate.com] fabric 1.3.0 Get: [pypm-free.activestate.com] paramiko 1.7.8 Get: [pypm-free.activestate.com] pycrypto 2.4 Installing paramiko-1.7.8 Installing pycrypto-2.4 Installing fabric-1.3.0 Fixing script %APPDATA%\Python\Scripts\fab-script.py C:\> Fabric-1.8.2/docs/Makefile000644 000765 000024 00000005661 12257074160 016262 0ustar00jforcierstaff000000 000000 # Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf _build/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) _build/dirhtml @echo @echo "Build finished. The HTML pages are in _build/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in _build/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) _build/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in _build/qthelp, like this:" @echo "# qcollectiongenerator _build/qthelp/Fabric.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile _build/qthelp/Fabric.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo @echo "Build finished; the LaTeX files are in _build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes @echo @echo "The overview file is in _build/changes." 
linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in _build/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in _build/doctest/output.txt." Fabric-1.8.2/docs/roadmap.rst000644 000765 000024 00000005037 12277461064 017001 0ustar00jforcierstaff000000 000000 =================== Development roadmap =================== This document outlines Fabric's intended development path. Please make sure you're reading `the latest version `_ of this document! .. warning:: This information is subject to change without warning, and should not be used as a basis for any life- or career-altering decisions! Fabric 1.x ========== Fabric 1.x, while not end-of-life'd, has reached a tipping point regarding internal tech debt & ability to make significant improvements without harming backwards compatibility. As such, future 1.x releases (**1.6** onwards) will emphasize small-to-medium features (new features not requiring major overhauls of the internals) and bugfixes. Invoke, Fabric 2.x and Patchwork ================================ While 1.x moves on as above, we are working on a reimagined 2.x version of the tool, and plan to: * Finish and release `the Invoke tool/library `_ (see also :issue:`565`), which is a revamped and standalone version of Fabric's task running components. * Initially it will be basic, matching Fabric's current functionality, but with a cleaner base to build on. * Said cleaner base then gives us a jumping-off point for new task-oriented features such as before/after hooks / call chains, task collections, improved namespacing and so forth. * Start putting together Fabric 2.0, a partly/mostly rewritten Fabric core: * Leverage Invoke for task running, which will leave Fabric itself much more library oriented. * Implement object-oriented hosts/host lists and all the fun stuff that provides (e.g. no more hacky host string and unintuitive env var manipulation.) * No (or optional & non-default) shared state. * Any other core overhauls difficult to do in a backwards compatible fashion. * `Current issue list `_ * Spin off ``fabric.contrib.*`` into a standalone "super-Fabric" (as in, "above Fabric") library, `Patchwork `_. * This lets core "execute commands on hosts" functionality iterate separately from "commonly useful shortcuts using Fabric core". * Lots of preliminary work & prior-art scanning has been done in :issue:`461`. * A public-but-alpha codebase for Patchwork exists as we think about the API, and is currently based on Fabric 1.x. It will likely be Fabric 2.x based by the time it is stable. Fabric-1.8.2/docs/troubleshooting.rst000644 000765 000024 00000004425 12277451120 020575 0ustar00jforcierstaff000000 000000 =============== Troubleshooting =============== Stuck? Having a problem? Here are the steps to try before you submit a bug report. * **Make sure you're on the latest version.** If you're not on the most recent version, your problem may have been solved already! Upgrading is always the best first step. * **Try older versions.** If you're already *on* the latest Fabric, try rolling back a few minor versions (e.g. if on 1.7, try Fabric 1.5 or 1.6) and see if the problem goes away. This will help the devs narrow down when the problem first arose in the commit log. 
* **Try switching up your Paramiko.** Fabric relies heavily on the Paramiko library for its SSH functionality, so try applying the above two steps to your Paramiko install as well. .. note:: Fabric versions sometimes have different Paramiko dependencies - so to try older Paramikos you may need to downgrade Fabric as well. * **Make sure Fabric is really the problem.** If your problem is in the behavior or output of a remote command, try recreating it without Fabric involved: * Run Fabric with ``--show=debug`` and look for the ``run:`` or ``sudo:`` line about the command in question. Try running that exact command, including any ``/bin/bash`` wrapper, remotely and see what happens. This may find problems related to the bash or sudo wrappers. * Execute the command (both the normal version, and the 'unwrapped' version seen via ``--show=debug``) from your local workstation using ``ssh``, e.g.:: $ ssh -t mytarget "my command" The ``-t`` flag matches Fabric's default behavior of enabling a PTY remotely. This helps identify apps that behave poorly when run in a non-shell-spawned PTY. * **Enable Paramiko-level debug logging.** If your issue is in the lower level Paramiko library, it can help us to see the debug output Paramiko prints. At top level in your fabfile, add the following:: import logging logging.basicConfig(level=logging.DEBUG) This should start printing Paramiko's debug statements to your standard error stream. (Feel free to add more logging kwargs to ``basicConfig()`` such as ``filename='/path/to/a/file'`` if you like.) Then submit this info to anybody helping you on IRC or in your bug report. Fabric-1.8.2/docs/tutorial.rst000644 000765 000024 00000037077 12257074160 017225 0ustar00jforcierstaff000000 000000 ===================== Overview and Tutorial ===================== Welcome to Fabric! This document is a whirlwind tour of Fabric's features and a quick guide to its use. Additional documentation (which is linked to throughout) can be found in the :ref:`usage documentation ` -- please make sure to check it out. What is Fabric? =============== As the ``README`` says: .. include:: ../README.rst :end-before: It provides More specifically, Fabric is: * A tool that lets you execute **arbitrary Python functions** via the **command line**; * A library of subroutines (built on top of a lower-level library) to make executing shell commands over SSH **easy** and **Pythonic**. Naturally, most users combine these two things, using Fabric to write and execute Python functions, or **tasks**, to automate interactions with remote servers. Let's take a look. Hello, ``fab`` ============== This wouldn't be a proper tutorial without "the usual":: def hello(): print("Hello world!") Placed in a Python module file named ``fabfile.py`` in your current working directory, that ``hello`` function can be executed with the ``fab`` tool (installed as part of Fabric) and does just what you'd expect:: $ fab hello Hello world! Done. That's all there is to it. This functionality allows Fabric to be used as a (very) basic build tool even without importing any of its API. .. note:: The ``fab`` tool simply imports your fabfile and executes the function or functions you instruct it to. There's nothing magic about it -- anything you can do in a normal Python script can be done in a fabfile! .. seealso:: :ref:`execution-strategy`, :doc:`/usage/tasks`, :doc:`/usage/fab` Task arguments ============== It's often useful to pass runtime parameters into your tasks, just as you might during regular Python programming. 
Fabric has basic support for this using a shell-compatible notation: ``<task name>:<arg>,<kwarg>=<value>,...``. It's contrived, but let's extend the above example to say hello to you personally::

    def hello(name="world"):
        print("Hello %s!" % name)

By default, calling ``fab hello`` will still behave as it did before; but now we can personalize it::

    $ fab hello:name=Jeff
    Hello Jeff!

    Done.

Those already used to programming in Python might have guessed that this invocation behaves exactly the same way::

    $ fab hello:Jeff
    Hello Jeff!

    Done.

For the time being, your argument values will always show up in Python as strings and may require a bit of string manipulation for complex types such as lists. Future versions may add a typecasting system to make this easier.

.. seealso:: :ref:`task-arguments`
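Since everything arrives as a string, explicit conversion is a useful habit. Here's a hedged sketch (this task and its arguments are hypothetical, not part of the tutorial's example project) of one common idiom::

    def greet(name="world", shout="no"):
        # All task arguments come in as strings; convert by hand as needed.
        shout = shout.lower() in ("yes", "y", "true", "1")
        greeting = "Hello %s!" % name
        print(greeting.upper() if shout else greeting)

Invoked as ``fab greet:Jeff,shout=yes``, this would print ``HELLO JEFF!``.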
Local commands
==============

As used above, ``fab`` only really saves a couple lines of ``if __name__ == "__main__"`` boilerplate. It's mostly designed for use with Fabric's API, which contains functions (or **operations**) for executing shell commands, transferring files, and so forth.

Let's build a hypothetical Web application fabfile. This example scenario is as follows: The Web application is managed via Git on a remote host ``vcshost``. On ``localhost``, we have a local clone of said Web application. When we push changes back to ``vcshost``, we want to be able to immediately install these changes on a remote host ``my_server`` in an automated fashion. We will do this by automating the local and remote Git commands.

Fabfiles usually work best at the root of a project::

    .
    |-- __init__.py
    |-- app.wsgi
    |-- fabfile.py <-- our fabfile!
    |-- manage.py
    `-- my_app
        |-- __init__.py
        |-- models.py
        |-- templates
        |   `-- index.html
        |-- tests.py
        |-- urls.py
        `-- views.py

.. note::
    We're using a Django application here, but only as an example -- Fabric is not tied to any external codebase, save for its SSH library.

For starters, perhaps we want to run our tests and commit to our VCS so we're ready for a deploy::

    from fabric.api import local

    def prepare_deploy():
        local("./manage.py test my_app")
        local("git add -p && git commit")
        local("git push")

The output of which might look a bit like this::

    $ fab prepare_deploy
    [localhost] run: ./manage.py test my_app
    Creating test database...
    Creating tables
    Creating indexes
    ..........................................
    ----------------------------------------------------------------------
    Ran 42 tests in 9.138s

    OK
    Destroying test database...
    [localhost] run: git add -p && git commit
    [localhost] run: git push

    Done.

The code itself is straightforward: import a Fabric API function, `~fabric.operations.local`, and use it to run and interact with local shell commands. The rest of Fabric's API is similar -- it's all just Python.

.. seealso:: :doc:`api/core/operations`, :ref:`fabfile-discovery`

Organize it your way
====================

Because Fabric is "just Python" you're free to organize your fabfile any way you want. For example, it's often useful to start splitting things up into subtasks::

    from fabric.api import local

    def test():
        local("./manage.py test my_app")

    def commit():
        local("git add -p && git commit")

    def push():
        local("git push")

    def prepare_deploy():
        test()
        commit()
        push()

The ``prepare_deploy`` task can be called just as before, but now you can make a more granular call to one of the sub-tasks, if desired.

Failure
=======

Our base case works fine now, but what happens if our tests fail? Chances are we want to put on the brakes and fix them before deploying.

Fabric checks the return value of programs called via operations and will abort if they didn't exit cleanly. Let's see what happens if one of our tests encounters an error::

    $ fab prepare_deploy
    [localhost] run: ./manage.py test my_app
    Creating test database...
    Creating tables
    Creating indexes
    .............E............................
    ======================================================================
    ERROR: testSomething (my_project.my_app.tests.MainTests)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
    [...]
    ----------------------------------------------------------------------
    Ran 42 tests in 9.138s

    FAILED (errors=1)
    Destroying test database...

    Fatal error: local() encountered an error (return code 2) while executing './manage.py test my_app'

    Aborting.

Great! We didn't have to do anything ourselves: Fabric detected the failure and aborted, never running the ``commit`` task.

.. seealso:: :ref:`Failure handling (usage documentation) `

Failure handling
----------------

But what if we wanted to be flexible and give the user a choice? A setting (or **environment variable**, usually shortened to **env var**) called :ref:`warn_only` lets you turn aborts into warnings, allowing flexible error handling to occur. Let's flip this setting on for our ``test`` function, and then inspect the result of the `~fabric.operations.local` call ourselves::

    from __future__ import with_statement
    from fabric.api import local, settings, abort
    from fabric.contrib.console import confirm

    def test():
        with settings(warn_only=True):
            result = local('./manage.py test my_app', capture=True)
        if result.failed and not confirm("Tests failed. Continue anyway?"):
            abort("Aborting at user request.")

    [...]

In adding this new feature we've introduced a number of new things:

* The ``__future__`` import required to use ``with:`` in Python 2.5;
* Fabric's `contrib.console ` submodule, containing the `~fabric.contrib.console.confirm` function, used for simple yes/no prompts;
* The `~fabric.context_managers.settings` context manager, used to apply settings to a specific block of code;
* Command-running operations like `~fabric.operations.local` can return objects containing info about their result (such as ``.failed``, or ``.return_code``);
* And the `~fabric.utils.abort` function, used to manually abort execution.

However, despite the additional complexity, it's still pretty easy to follow, and is now much more flexible.

.. seealso:: :doc:`api/core/context_managers`, :ref:`env-vars`

Making connections
==================

Let's start wrapping up our fabfile by putting in the keystone: a ``deploy`` task that is destined to run on one or more remote server(s), and ensures the code is up to date::

    def deploy():
        code_dir = '/srv/django/myproject'
        with cd(code_dir):
            run("git pull")
            run("touch app.wsgi")

Here again, we introduce a handful of new concepts:

* Fabric is just Python -- so we can make liberal use of regular Python code constructs such as variables and string interpolation;
* `~fabric.context_managers.cd`, an easy way of prefixing commands with a ``cd /to/some/directory`` call. This is similar to `~fabric.context_managers.lcd` which does the same locally (see the short sketch just after this list).
* `~fabric.operations.run`, which is similar to `~fabric.operations.local` but runs **remotely** instead of locally.
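To make the remote/local split concrete, here's a brief, hedged sketch (the ``docs`` directory and its ``make html`` build command are hypothetical, not part of this tutorial's project)::

    from __future__ import with_statement
    from fabric.api import lcd, local

    def build_docs():
        # lcd() changes the working directory for local() calls only, just
        # as cd() does for run() and sudo() on the remote end.
        with lcd('docs'):
            local('make html')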
We also need to make sure we import the new functions at the top of our file:: from __future__ import with_statement from fabric.api import local, settings, abort, run, cd from fabric.contrib.console import confirm With these changes in place, let's deploy:: $ fab deploy No hosts found. Please specify (single) host string for connection: my_server [my_server] run: git pull [my_server] out: Already up-to-date. [my_server] out: [my_server] run: touch app.wsgi Done. We never specified any connection info in our fabfile, so Fabric doesn't know on which host(s) the remote command should be executed. When this happens, Fabric prompts us at runtime. Connection definitions use SSH-like "host strings" (e.g. ``user@host:port``) and will use your local username as a default -- so in this example, we just had to specify the hostname, ``my_server``. Remote interactivity -------------------- ``git pull`` works fine if you've already got a checkout of your source code -- but what if this is the first deploy? It'd be nice to handle that case too and do the initial ``git clone``:: def deploy(): code_dir = '/srv/django/myproject' with settings(warn_only=True): if run("test -d %s" % code_dir).failed: run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir) with cd(code_dir): run("git pull") run("touch app.wsgi") As with our calls to `~fabric.operations.local` above, `~fabric.operations.run` also lets us construct clean Python-level logic based on executed shell commands. However, the interesting part here is the ``git clone`` call: since we're using Git's SSH method of accessing the repository on our Git server, this means our remote `~fabric.operations.run` call will need to authenticate itself. Older versions of Fabric (and similar high level SSH libraries) run remote programs in limbo, unable to be touched from the local end. This is problematic when you have a serious need to enter passwords or otherwise interact with the remote program. Fabric 1.0 and later breaks down this wall and ensures you can always talk to the other side. Let's see what happens when we run our updated ``deploy`` task on a new server with no Git checkout:: $ fab deploy No hosts found. Please specify (single) host string for connection: my_server [my_server] run: test -d /srv/django/myproject Warning: run() encountered an error (return code 1) while executing 'test -d /srv/django/myproject' [my_server] run: git clone user@vcshost:/path/to/repo/.git /srv/django/myproject [my_server] out: Cloning into /srv/django/myproject... [my_server] out: Password: [my_server] out: remote: Counting objects: 6698, done. [my_server] out: remote: Compressing objects: 100% (2237/2237), done. [my_server] out: remote: Total 6698 (delta 4633), reused 6414 (delta 4412) [my_server] out: Receiving objects: 100% (6698/6698), 1.28 MiB, done. [my_server] out: Resolving deltas: 100% (4633/4633), done. [my_server] out: [my_server] run: git pull [my_server] out: Already up-to-date. [my_server] out: [my_server] run: touch app.wsgi Done. Notice the ``Password:`` prompt -- that was our remote ``git`` call on our Web server, asking for the password to the Git server. We were able to type it in and the clone continued normally. .. seealso:: :doc:`/usage/interactivity` .. _defining-connections: Defining connections beforehand ------------------------------- Specifying connection info at runtime gets old real fast, so Fabric provides a handful of ways to do it in your fabfile or on the command line. 
We won't cover all of them here, but we will show you the most common one: setting the global host list, :ref:`env.hosts `. :doc:`env ` is a global dictionary-like object driving many of Fabric's settings, and can be written to with attributes as well (in fact, `~fabric.context_managers.settings`, seen above, is simply a wrapper for this.) Thus, we can modify it at module level near the top of our fabfile like so:: from __future__ import with_statement from fabric.api import * from fabric.contrib.console import confirm env.hosts = ['my_server'] def test(): do_test_stuff() When ``fab`` loads up our fabfile, our modification of ``env`` will execute, storing our settings change. The end result is exactly as above: our ``deploy`` task will run against the ``my_server`` server. This is also how you can tell Fabric to run on multiple remote systems at once: because ``env.hosts`` is a list, ``fab`` iterates over it, calling the given task once for each connection. .. seealso:: :doc:`usage/env`, :ref:`host-lists` Conclusion ========== Our completed fabfile is still pretty short, as such things go. Here it is in its entirety:: from __future__ import with_statement from fabric.api import * from fabric.contrib.console import confirm env.hosts = ['my_server'] def test(): with settings(warn_only=True): result = local('./manage.py test my_app', capture=True) if result.failed and not confirm("Tests failed. Continue anyway?"): abort("Aborting at user request.") def commit(): local("git add -p && git commit") def push(): local("git push") def prepare_deploy(): test() commit() push() def deploy(): code_dir = '/srv/django/myproject' with settings(warn_only=True): if run("test -d %s" % code_dir).failed: run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir) with cd(code_dir): run("git pull") run("touch app.wsgi") This fabfile makes use of a large portion of Fabric's feature set: * defining fabfile tasks and running them with :doc:`fab `; * calling local shell commands with `~fabric.operations.local`; * modifying env vars with `~fabric.context_managers.settings`; * handling command failures, prompting the user, and manually aborting; * and defining host lists and `~fabric.operations.run`-ning remote commands. However, there's still a lot more we haven't covered here! Please make sure you follow the various "see also" links, and check out the documentation table of contents on :ref:`the main index page `. Thanks for reading! Fabric-1.8.2/docs/usage/000755 000765 000024 00000000000 12277461504 015722 5ustar00jforcierstaff000000 000000 Fabric-1.8.2/docs/usage/env.rst000644 000765 000024 00000053672 12277461502 017257 0ustar00jforcierstaff000000 000000 =================================== The environment dictionary, ``env`` =================================== A simple but integral aspect of Fabric is what is known as the "environment": a Python dictionary subclass, which is used as a combination settings registry and shared inter-task data namespace. The environment dict is currently implemented as a global singleton, ``fabric.state.env``, and is included in ``fabric.api`` for convenience. Keys in ``env`` are sometimes referred to as "env variables". Environment as configuration ============================ Most of Fabric's behavior is controllable by modifying ``env`` variables, such as ``env.hosts`` (as seen in :ref:`the tutorial `). 
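For instance, a fabfile will often tweak a couple of these near the top of the module (a small illustration; the host names are invented)::

    from fabric.api import env

    env.hosts = ['web1', 'web2']
    env.use_ssh_config = True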
Other commonly-modified env vars include: * ``user``: Fabric defaults to your local username when making SSH connections, but you can use ``env.user`` to override this if necessary. The :doc:`execution` documentation also has info on how to specify usernames on a per-host basis. * ``password``: Used to explicitly set your default connection or sudo password if desired. Fabric will prompt you when necessary if this isn't set or doesn't appear to be valid. * ``warn_only``: a Boolean setting determining whether Fabric exits when detecting errors on the remote end. See :doc:`execution` for more on this behavior. There are a number of other env variables; for the full list, see :ref:`env-vars` at the bottom of this document. The `~fabric.context_managers.settings` context manager ------------------------------------------------------- In many situations, it's useful to only temporarily modify ``env`` vars so that a given settings change only applies to a block of code. Fabric provides a `~fabric.context_managers.settings` context manager, which takes any number of key/value pairs and will use them to modify ``env`` within its wrapped block. For example, there are many situations where setting ``warn_only`` (see below) is useful. To apply it to a few lines of code, use ``settings(warn_only=True)``, as seen in this simplified version of the ``contrib`` `~fabric.contrib.files.exists` function:: from fabric.api import settings, run def exists(path): with settings(warn_only=True): return run('test -e %s' % path) See the :doc:`../api/core/context_managers` API documentation for details on `~fabric.context_managers.settings` and other, similar tools. Environment as shared state =========================== As mentioned, the ``env`` object is simply a dictionary subclass, so your own fabfile code may store information in it as well. This is sometimes useful for keeping state between multiple tasks within a single execution run. .. note:: This aspect of ``env`` is largely historical: in the past, fabfiles were not pure Python and thus the environment was the only way to communicate between tasks. Nowadays, you may call other tasks or subroutines directly, and even keep module-level shared state if you wish. In future versions, Fabric will become threadsafe, at which point ``env`` may be the only easy/safe way to keep global state. Other considerations ==================== While it subclasses ``dict``, Fabric's ``env`` has been modified so that its values may be read/written by way of attribute access, as seen in some of the above material. In other words, ``env.host_string`` and ``env['host_string']`` are functionally identical. We feel that attribute access can often save a bit of typing and makes the code more readable, so it's the recommended way to interact with ``env``. The fact that it's a dictionary can be useful in other ways, such as with Python's ``dict``-based string interpolation, which is especially handy if you need to insert multiple env vars into a single string. Using "normal" string interpolation might look like this:: print("Executing on %s as %s" % (env.host, env.user)) Using dict-style interpolation is more readable and slightly shorter:: print("Executing on %(host)s as %(user)s" % env) .. _env-vars: Full list of env vars ===================== Below is a list of all predefined (or defined by Fabric itself during execution) environment variables. 
While many of them may be manipulated directly, it's often best to use `~fabric.context_managers`, either generally via `~fabric.context_managers.settings` or via specific context managers such as `~fabric.context_managers.cd`. Note that many of these may be set via ``fab``'s command-line switches -- see :doc:`fab` for details. Cross-references are provided where appropriate. .. seealso:: :option:`--set` .. _abort-exception: ``abort_exception`` ------------------- **Default:** ``None`` Fabric normally handles aborting by printing an error message to stderr and calling ``sys.exit(1)``. This setting allows you to override that behavior (which is what happens when ``env.abort_exception`` is ``None``.) Give it a callable which takes a string (the error message that would have been printed) and returns an exception instance. That exception object is then raised instead of ``SystemExit`` (which is what ``sys.exit`` does.) Much of the time you'll want to simply set this to an exception class, as those fit the above description perfectly (callable, take a string, return an exception instance.) E.g. ``env.abort_exception = MyExceptionClass``. .. _abort-on-prompts: ``abort_on_prompts`` -------------------- **Default:** ``False`` When ``True``, Fabric will run in a non-interactive mode, calling `~fabric.utils.abort` anytime it would normally prompt the user for input (such as password prompts, "What host to connect to?" prompts, fabfile invocation of `~fabric.operations.prompt`, and so forth.) This allows users to ensure a Fabric session will always terminate cleanly instead of blocking on user input forever when unforeseen circumstances arise. .. versionadded:: 1.1 .. seealso:: :option:`--abort-on-prompts` ``all_hosts`` ------------- **Default:** ``None`` Set by ``fab`` to the full host list for the currently executing command. For informational purposes only. .. seealso:: :doc:`execution` .. _always-use-pty: ``always_use_pty`` ------------------ **Default:** ``True`` When set to ``False``, causes `~fabric.operations.run`/`~fabric.operations.sudo` to act as if they have been called with ``pty=False``. .. seealso:: :option:`--no-pty` .. versionadded:: 1.0 .. _colorize-errors: ``colorize_errors`` ------------------- **Default** ``False`` When set to ``True``, error output to the terminal is colored red and warnings are colored magenta to make them easier to see. .. versionadded:: 1.7 .. _combine-stderr: ``combine_stderr`` ------------------ **Default**: ``True`` Causes the SSH layer to merge a remote program's stdout and stderr streams to avoid becoming meshed together when printed. See :ref:`combine_streams` for details on why this is needed and what its effects are. .. versionadded:: 1.0 ``command`` ----------- **Default:** ``None`` Set by ``fab`` to the currently executing command name (e.g., when executed as ``$ fab task1 task2``, ``env.command`` will be set to ``"task1"`` while ``task1`` is executing, and then to ``"task2"``.) For informational purposes only. .. seealso:: :doc:`execution` ``command_prefixes`` -------------------- **Default:** ``[]`` Modified by `~fabric.context_managers.prefix`, and prepended to commands executed by `~fabric.operations.run`/`~fabric.operations.sudo`. .. versionadded:: 1.0 .. _command-timeout: ``command_timeout`` ------------------- **Default:** ``None`` Remote command timeout, in seconds. .. versionadded:: 1.6 .. seealso:: :option:`--command-timeout` .. 
_connection-attempts: ``connection_attempts`` ----------------------- **Default:** ``1`` Number of times Fabric will attempt to connect when connecting to a new server. For backwards compatibility reasons, it defaults to only one connection attempt. .. versionadded:: 1.4 .. seealso:: :option:`--connection-attempts`, :ref:`timeout` ``cwd`` ------- **Default:** ``''`` Current working directory. Used to keep state for the `~fabric.context_managers.cd` context manager. .. _dedupe_hosts: ``dedupe_hosts`` ---------------- **Default:** ``True`` Deduplicate merged host lists so any given host string is only represented once (e.g. when using combinations of ``@hosts`` + ``@roles``, or ``-H`` and ``-R``.) When set to ``False``, this option relaxes the deduplication, allowing users who explicitly want to run a task multiple times on the same host (say, in parallel, though it works fine serially too) to do so. .. versionadded:: 1.5 .. _disable-known-hosts: ``disable_known_hosts`` ----------------------- **Default:** ``False`` If ``True``, the SSH layer will skip loading the user's known-hosts file. Useful for avoiding exceptions in situations where a "known host" changing its host key is actually valid (e.g. cloud servers such as EC2.) .. seealso:: :option:`--disable-known-hosts <-D>`, :doc:`ssh` .. _eagerly-disconnect: ``eagerly_disconnect`` ---------------------- **Default:** ``False`` If ``True``, causes ``fab`` to close connections after each individual task execution, instead of at the end of the run. This helps prevent a lot of typically-unused network sessions from piling up and causing problems with limits on per-process open files, or network hardware. .. note:: When active, this setting will result in the disconnect messages appearing throughout your output, instead of at the end. This may be improved in future releases. .. _exclude-hosts: ``exclude_hosts`` ----------------- **Default:** ``[]`` Specifies a list of host strings to be :ref:`skipped over ` during ``fab`` execution. Typically set via :option:`--exclude-hosts/-x <-x>`. .. versionadded:: 1.1 ``fabfile`` ----------- **Default:** ``fabfile.py`` Filename pattern which ``fab`` searches for when loading fabfiles. To indicate a specific file, use the full path to the file. Obviously, it doesn't make sense to set this in a fabfile, but it may be specified in a ``.fabricrc`` file or on the command line. .. seealso:: :option:`--fabfile <-f>`, :doc:`fab` .. _gateway: ``gateway`` ----------- **Default:** ``None`` Enables SSH-driven gatewaying through the indicated host. The value should be a normal Fabric host string as used in e.g. :ref:`env.host_string `. When this is set, newly created connections will be set to route their SSH traffic through the remote SSH daemon to the final destination. .. versionadded:: 1.5 .. seealso:: :option:`--gateway <-g>` .. _host_string: ``host_string`` --------------- **Default:** ``None`` Defines the current user/host/port which Fabric will connect to when executing `~fabric.operations.run`, `~fabric.operations.put` and so forth. This is set by ``fab`` when iterating over a previously set host list, and may also be manually set when using Fabric as a library. .. seealso:: :doc:`execution` .. _forward-agent: ``forward_agent`` -------------------- **Default:** ``False`` If ``True``, enables forwarding of your local SSH agent to the remote end. .. versionadded:: 1.4 .. seealso:: :option:`--forward-agent <-A>` ``host`` -------- **Default:** ``None`` Set to the hostname part of ``env.host_string`` by ``fab``. 
For informational purposes only. .. _hosts: ``hosts`` --------- **Default:** ``[]`` The global host list used when composing per-task host lists. .. seealso:: :option:`--hosts <-H>`, :doc:`execution` .. _keepalive: ``keepalive`` ------------- **Default:** ``0`` (i.e. no keepalive) An integer specifying an SSH keepalive interval to use; basically maps to the SSH config option ``ClientAliveInterval``. Useful if you find connections are timing out due to meddlesome network hardware or what have you. .. seealso:: :option:`--keepalive` .. versionadded:: 1.1 .. _key: ``key`` ---------------- **Default:** ``None`` A string, or file-like object, containing an SSH key; used during connection authentication. .. note:: The most common method for using SSH keys is to set :ref:`key-filename`. .. versionadded:: 1.7 .. _key-filename: ``key_filename`` ---------------- **Default:** ``None`` May be a string or list of strings, referencing file paths to SSH key files to try when connecting. Passed through directly to the SSH layer. May be set/appended to with :option:`-i`. .. seealso:: `Paramiko's documentation for SSHClient.connect() `_ .. _env-linewise: ``linewise`` ------------ **Default:** ``False`` Forces buffering by line instead of by character/byte, typically when running in parallel mode. May be activated via :option:`--linewise`. This option is implied by :ref:`env.parallel ` -- even if ``linewise`` is False, if ``parallel`` is True then linewise behavior will occur. .. seealso:: :ref:`linewise-output` .. versionadded:: 1.3 .. _local-user: ``local_user`` -------------- A read-only value containing the local system username. This is the same value as :ref:`user`'s initial value, but whereas :ref:`user` may be altered by CLI arguments, Python code or specific host strings, :ref:`local-user` will always contain the same value. .. _no_agent: ``no_agent`` ------------ **Default:** ``False`` If ``True``, will tell the SSH layer not to seek out running SSH agents when using key-based authentication. .. versionadded:: 0.9.1 .. seealso:: :option:`--no_agent <-a>` .. _no_keys: ``no_keys`` ------------------ **Default:** ``False`` If ``True``, will tell the SSH layer not to load any private key files from one's ``$HOME/.ssh/`` folder. (Key files explicitly loaded via ``fab -i`` will still be used, of course.) .. versionadded:: 0.9.1 .. seealso:: :option:`-k` .. _env-parallel: ``parallel`` ------------------- **Default:** ``False`` When ``True``, forces all tasks to run in parallel. Implies :ref:`env.linewise `. .. versionadded:: 1.3 .. seealso:: :option:`--parallel <-P>`, :doc:`parallel` .. _password: ``password`` ------------ **Default:** ``None`` The default password used by the SSH layer when connecting to remote hosts, **and/or** when answering `~fabric.operations.sudo` prompts. .. seealso:: :option:`--initial-password-prompt <-I>`, :ref:`env.passwords `, :ref:`password-management` .. _passwords: ``passwords`` ------------- **Default:** ``{}`` This dictionary is largely for internal use, and is filled automatically as a per-host-string password cache. Keys are full :ref:`host strings ` and values are passwords (strings). .. seealso:: :ref:`password-management` .. versionadded:: 1.0 .. _env-path: ``path`` -------- **Default:** ``''`` Used to set the ``$PATH`` shell environment variable when executing commands in `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local`. 
It is recommended to use the `~fabric.context_managers.path` context manager for managing this value instead of setting it directly. .. versionadded:: 1.0 .. _pool-size: ``pool_size`` ------------- **Default:** ``0`` Sets the number of concurrent processes to use when executing tasks in parallel. .. versionadded:: 1.3 .. seealso:: :option:`--pool-size <-z>`, :doc:`parallel` .. _port: ``port`` -------- **Default:** ``None`` Set to the port part of ``env.host_string`` by ``fab`` when iterating over a host list. May also be used to specify a default port. .. _real-fabfile: ``real_fabfile`` ---------------- **Default:** ``None`` Set by ``fab`` with the path to the fabfile it has loaded up, if it got that far. For informational purposes only. .. seealso:: :doc:`fab` .. _remote-interrupt: ``remote_interrupt`` -------------------- **Default:** ``None`` Controls whether Ctrl-C triggers an interrupt remotely or is captured locally, as follows: * ``None`` (the default): only `~fabric.operations.open_shell` will exhibit remote interrupt behavior, and `~fabric.operations.run`/`~fabric.operations.sudo` will capture interrupts locally. * ``False``: even `~fabric.operations.open_shell` captures locally. * ``True``: all functions will send the interrupt to the remote end. .. versionadded:: 1.6 .. _rcfile: ``rcfile`` ---------- **Default:** ``$HOME/.fabricrc`` Path used when loading Fabric's local settings file. .. seealso:: :option:`--config <-c>`, :doc:`fab` .. _reject-unknown-hosts: ``reject_unknown_hosts`` ------------------------ **Default:** ``False`` If ``True``, the SSH layer will raise an exception when connecting to hosts not listed in the user's known-hosts file. .. seealso:: :option:`--reject-unknown-hosts <-r>`, :doc:`ssh` .. _system-known-hosts: ``system_known_hosts`` ------------------------ **Default:** ``None`` If set, should be the path to a :file:`known_hosts` file. The SSH layer will read this file before reading the user's known-hosts file. .. seealso:: :doc:`ssh` ``roledefs`` ------------ **Default:** ``{}`` Dictionary defining role name to host list mappings. .. seealso:: :doc:`execution` .. _roles: ``roles`` --------- **Default:** ``[]`` The global role list used when composing per-task host lists. .. seealso:: :option:`--roles <-R>`, :doc:`execution` .. _shell: ``shell`` --------- **Default:** ``/bin/bash -l -c`` Value used as shell wrapper when executing commands with e.g. `~fabric.operations.run`. Must be able to exist in the form ``<shell> "<command>"`` -- e.g. the default uses Bash's ``-c`` option which takes a command string as its value. .. seealso:: :option:`--shell <-s>`, :ref:`FAQ on bash as default shell `, :doc:`execution` .. _skip-bad-hosts: ``skip_bad_hosts`` ------------------ **Default:** ``False`` If ``True``, causes ``fab`` (or non-``fab`` use of `~fabric.tasks.execute`) to skip over hosts it can't connect to. .. versionadded:: 1.4 .. seealso:: :option:`--skip-bad-hosts`, :ref:`excluding-hosts`, :doc:`execution` .. _ssh-config-path: ``ssh_config_path`` ------------------- **Default:** ``$HOME/.ssh/config`` Allows specification of an alternate SSH configuration file path. .. versionadded:: 1.4 .. seealso:: :option:`--ssh-config-path`, :ref:`ssh-config` ``ok_ret_codes`` ------------------------ **Default:** ``[0]`` Return codes in this list are used to determine whether calls to `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local` are considered successful. .. versionadded:: 1.6 ..
_sudo_prefix: ``sudo_prefix`` --------------- **Default:** ``"sudo -S -p '%(sudo_prompt)s' " % env`` The actual ``sudo`` command prefixed onto `~fabric.operations.sudo` calls' command strings. Users who do not have ``sudo`` on their default remote ``$PATH``, or who need to make other changes (such as removing the ``-p`` when passwordless sudo is in effect) may find changing this useful. .. seealso:: The `~fabric.operations.sudo` operation; :ref:`env.sudo_prompt ` .. _sudo_prompt: ``sudo_prompt`` --------------- **Default:** ``"sudo password:"`` Passed to the ``sudo`` program on remote systems so that Fabric may correctly identify its password prompt. .. seealso:: The `~fabric.operations.sudo` operation; :ref:`env.sudo_prefix ` .. _sudo_user: ``sudo_user`` ------------- **Default:** ``None`` Used as a fallback value for `~fabric.operations.sudo`'s ``user`` argument if none is given. Useful in combination with `~fabric.context_managers.settings`. .. seealso:: `~fabric.operations.sudo` .. _env-tasks: ``tasks`` ------------- **Default:** ``[]`` Set by ``fab`` to the full tasks list to be executed for the currently executing command. For informational purposes only. .. seealso:: :doc:`execution` .. _timeout: ``timeout`` ----------- **Default:** ``10`` Network connection timeout, in seconds. .. versionadded:: 1.4 .. seealso:: :option:`--timeout`, :ref:`connection-attempts` ``use_shell`` ------------- **Default:** ``True`` Global setting which acts like the ``use_shell`` argument to `~fabric.operations.run`/`~fabric.operations.sudo`: if it is set to ``False``, operations will not wrap executed commands in ``env.shell``. .. _use-ssh-config: ``use_ssh_config`` ------------------ **Default:** ``False`` Set to ``True`` to cause Fabric to load your local SSH config file. .. versionadded:: 1.4 .. seealso:: :ref:`ssh-config` .. _user: ``user`` -------- **Default:** User's local username The username used by the SSH layer when connecting to remote hosts. May be set globally, and will be used when not otherwise explicitly set in host strings. However, when explicitly given in such a manner, this variable will be temporarily overwritten with the current value -- i.e. it will always display the user currently being connected as. To illustrate this, a fabfile:: from fabric.api import env, hide, run env.user = 'implicit_user' env.hosts = ['host1', 'explicit_user@host2', 'host3'] def print_user(): with hide('running'): run('echo "%(user)s"' % env) and its use:: $ fab print_user [host1] out: implicit_user [explicit_user@host2] out: explicit_user [host3] out: implicit_user Done. Disconnecting from host1... done. Disconnecting from host2... done. Disconnecting from host3... done. As you can see, during execution on ``host2``, ``env.user`` was set to ``"explicit_user"``, but was restored to its previous value (``"implicit_user"``) afterwards. .. note:: ``env.user`` is currently somewhat confusing (it's used for configuration **and** informational purposes) so expect this to change in the future -- the informational aspect will likely be broken out into a separate env variable. .. seealso:: :doc:`execution`, :option:`--user <-u>` ``version`` ----------- **Default:** current Fabric version string Mostly for informational purposes. Modification is not recommended, but probably won't break anything either. .. seealso:: :option:`--version <-V>` ..
_warn_only: ``warn_only`` ------------- **Default:** ``False`` Specifies whether or not to warn, instead of abort, when `~fabric.operations.run`/`~fabric.operations.sudo`/`~fabric.operations.local` encounter error conditions. .. seealso:: :option:`--warn-only <-w>`, :doc:`execution` Fabric-1.8.2/docs/usage/execution.rst000644 000765 000024 00000100275 12257074160 020460 0ustar00jforcierstaff000000 000000 =============== Execution model =============== If you've read the :doc:`../tutorial`, you should already be familiar with how Fabric operates in the base case (a single task on a single host.) However, in many situations you'll find yourself wanting to execute multiple tasks and/or on multiple hosts. Perhaps you want to split a big task into smaller reusable parts, or crawl a collection of servers looking for an old user to remove. Such a scenario requires specific rules for when and how tasks are executed. This document explores Fabric's execution model, including the main execution loop, how to define host lists, how connections are made, and so forth. .. _execution-strategy: Execution strategy ================== Fabric defaults to a single, serial execution method, though there is an alternative parallel mode available as of Fabric 1.3 (see :doc:`/usage/parallel`). This default behavior is as follows: * A list of tasks is created. Currently this list is simply the arguments given to :doc:`fab `, preserving the order given. * For each task, a task-specific host list is generated from various sources (see :ref:`host-lists` below for details.) * The task list is walked through in order, and each task is run once per host in its host list. * Tasks with no hosts in their host list are considered local-only, and will always run once and only once. Thus, given the following fabfile:: from fabric.api import run, env env.hosts = ['host1', 'host2'] def taskA(): run('ls') def taskB(): run('whoami') and the following invocation:: $ fab taskA taskB you will see that Fabric performs the following: * ``taskA`` executed on ``host1`` * ``taskA`` executed on ``host2`` * ``taskB`` executed on ``host1`` * ``taskB`` executed on ``host2`` While this approach is simplistic, it allows for a straightforward composition of task functions, and (unlike tools which push the multi-host functionality down to the individual function calls) enables shell script-like logic where you may introspect the output or return code of a given command and decide what to do next. Defining tasks ============== For details on what constitutes a Fabric task and how to organize them, please see :doc:`/usage/tasks`. Defining host lists =================== Unless you're using Fabric as a simple build system (which is possible, but not the primary use-case) having tasks won't do you any good without the ability to specify remote hosts on which to execute them. There are a number of ways to do so, with scopes varying from global to per-task, and it's possible to mix and match as needed. .. _host-strings: Hosts ----- Hosts, in this context, refer to what are also called "host strings": Python strings specifying a username, hostname and port combination, in the form of ``username@hostname:port``. User and/or port (and the associated ``@`` or ``:``) may be omitted, and will be filled by the executing user's local username, and/or port 22, respectively. Thus, ``admin@foo.com:222``, ``deploy@website`` and ``nameserver1`` could all be valid host strings.
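All three styles may be mixed freely within a single host list. For instance, a trivial fabfile reusing the example strings above::

    from fabric.api import env, run

    env.hosts = ['admin@foo.com:222', 'deploy@website', 'nameserver1']

    def uptime():
        run('uptime')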
IPv6 address notation is also supported, for example ``::1``, ``[::1]:1222``, ``user@2001:db8::1`` or ``user@[2001:db8::1]:1222``. Square brackets are necessary only to separate the address from the port number. If no port number is used, the brackets are optional. Also, if a host string is specified via command-line argument, it may be necessary to escape the brackets in some shells. .. note:: The user/hostname split occurs at the last ``@`` found, so e.g. email address usernames are valid and will be parsed correctly. During execution, Fabric normalizes the host strings given and then stores each part (username/hostname/port) in the environment dictionary, for both its use and for tasks to reference if the need arises. See :doc:`env` for details. .. _execution-roles: Roles ----- Host strings map to single hosts, but sometimes it's useful to arrange hosts in groups. Perhaps you have a number of Web servers behind a load balancer and want to update all of them, or want to run a task on "all client servers". Roles provide a way of defining strings which correspond to lists of host strings, and can then be specified instead of writing out the entire list every time. This mapping is defined as a dictionary, ``env.roledefs``, which must be modified by a fabfile in order to be used. A simple example:: from fabric.api import env env.roledefs['webservers'] = ['www1', 'www2', 'www3'] Since ``env.roledefs`` is naturally empty by default, you may also opt to re-assign to it without fear of losing any information (provided you aren't loading other fabfiles which also modify it, of course):: from fabric.api import env env.roledefs = { 'web': ['www1', 'www2', 'www3'], 'dns': ['ns1', 'ns2'] } In addition to list/iterable object types, the values in ``env.roledefs`` may be callables, and will thus be called at lookup time (i.e. when tasks are run) instead of at module load time. (For example, you could connect to remote servers to obtain role definitions, and not worry about causing delays at fabfile load time when calling e.g. ``fab --list``.) Use of roles is not required in any way -- it's simply a convenience in situations where you have common groupings of servers. .. versionchanged:: 0.9.2 Added ability to use callables as ``roledefs`` values. .. _host-lists: How host lists are constructed ------------------------------ There are a number of ways to specify host lists, either globally or per-task, and generally these methods override one another instead of merging together (though this may change in future releases.) Each such method is typically split into two parts, one for hosts and one for roles. Globally, via ``env`` ~~~~~~~~~~~~~~~~~~~~~ The most common method of setting hosts or roles is by modifying two key-value pairs in the environment dictionary, :doc:`env `: ``hosts`` and ``roles``. The value of these variables is checked at runtime, while constructing each task's host list. Thus, they may be set at module level, which will take effect when the fabfile is imported:: from fabric.api import env, run env.hosts = ['host1', 'host2'] def mytask(): run('ls /var/www') Such a fabfile, run simply as ``fab mytask``, will run ``mytask`` on ``host1`` followed by ``host2``.
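Roles plug into the same mechanism; ``env.roles`` (covered in more detail below) may likewise be set at module level. A sketch reusing the ``web`` role from earlier::

    from fabric.api import env, run

    env.roledefs = {'web': ['www1', 'www2', 'www3']}
    env.roles = ['web']

    def mytask():
        run('ls /var/www')

Run as ``fab mytask``, this executes the task once per host in the ``web`` role.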
Since the env vars are checked for *each* task, this means that if you have the need, you can actually modify ``env`` in one task and it will affect all following tasks:: from fabric.api import env, run def set_hosts(): env.hosts = ['host1', 'host2'] def mytask(): run('ls /var/www') When run as ``fab set_hosts mytask``, ``set_hosts`` is a "local" task -- its own host list is empty -- but ``mytask`` will again run on the two hosts given. .. note:: This technique used to be a common way of creating fake "roles", but is less necessary now that roles are fully implemented. It may still be useful in some situations, however. Alongside ``env.hosts`` is ``env.roles`` (not to be confused with ``env.roledefs``!) which, if given, will be taken as a list of role names to look up in ``env.roledefs``. Globally, via the command line ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to modifying ``env.hosts``, ``env.roles``, and ``env.exclude_hosts`` at the module level, you may define them by passing comma-separated string arguments to the command-line switches :option:`--hosts/-H <-H>` and :option:`--roles/-R <-R>`, e.g.:: $ fab -H host1,host2 mytask Such an invocation is directly equivalent to ``env.hosts = ['host1', 'host2']`` -- the argument parser knows to look for these arguments and will modify ``env`` at parse time. .. note:: It's possible, and in fact common, to use these switches to set only a single host or role. Fabric simply calls ``string.split(',')`` on the given string, so a string with no commas turns into a single-item list. It is important to know that these command-line switches are interpreted **before** your fabfile is loaded: any reassignment to ``env.hosts`` or ``env.roles`` in your fabfile will overwrite them. If you wish to nondestructively merge the command-line hosts with your fabfile-defined ones, make sure your fabfile uses ``env.hosts.extend()`` instead:: from fabric.api import env, run env.hosts.extend(['host3', 'host4']) def mytask(): run('ls /var/www') When this fabfile is run as ``fab -H host1,host2 mytask``, ``env.hosts`` will then contain ``['host1', 'host2', 'host3', 'host4']`` at the time that ``mytask`` is executed. .. note:: ``env.hosts`` is simply a Python list object -- so you may use ``env.hosts.append()`` or any other such method you wish. .. _hosts-per-task-cli: Per-task, via the command line ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Globally setting host lists only works if you want all your tasks to run on the same host list all the time. This isn't always true, so Fabric provides a few ways to be more granular and specify host lists which apply to a single task only. The first of these uses task arguments. As outlined in :doc:`fab`, it's possible to specify per-task arguments via a special command-line syntax. In addition to naming actual arguments to your task function, this may be used to set the ``host``, ``hosts``, ``role`` or ``roles`` "arguments", which are interpreted by Fabric when building host lists (and removed from the arguments passed to the task itself.) .. note:: Since commas are already used to separate task arguments from one another, semicolons must be used in the ``hosts`` or ``roles`` arguments to delineate individual host strings or role names. Furthermore, the argument must be quoted to prevent your shell from interpreting the semicolons. 
Take the below fabfile, which is the same one we've been using, but which doesn't define any host info at all:: from fabric.api import run def mytask(): run('ls /var/www') To specify per-task hosts for ``mytask``, execute it like so:: $ fab mytask:hosts="host1;host2" This will override any other host list and ensure ``mytask`` always runs on just those two hosts. Per-task, via decorators ~~~~~~~~~~~~~~~~~~~~~~~~ If a given task should always run on a predetermined host list, you may wish to specify this in your fabfile itself. This can be done by decorating a task function with the `~fabric.decorators.hosts` or `~fabric.decorators.roles` decorators. These decorators take a variable argument list, like so:: from fabric.api import hosts, run @hosts('host1', 'host2') def mytask(): run('ls /var/www') They will also take a single iterable argument, e.g.:: my_hosts = ('host1', 'host2') @hosts(my_hosts) def mytask(): # ... When used, these decorators override any checks of ``env`` for that particular task's host list (though ``env`` is not modified in any way -- it is simply ignored.) Thus, even if the above fabfile had defined ``env.hosts`` or the call to :doc:`fab ` uses :option:`--hosts/-H <-H>`, ``mytask`` would still run on a host list of ``['host1', 'host2']``. However, decorator host lists do **not** override per-task command-line arguments, as given in the previous section. Order of precedence ~~~~~~~~~~~~~~~~~~~ We've been pointing out which methods of setting host lists trump the others, as we've gone along. However, to make things clearer, here's a quick breakdown: * Per-task, command-line host lists (``fab mytask:host=host1``) override absolutely everything else. * Per-task, decorator-specified host lists (``@hosts('host1')``) override the ``env`` variables. * Globally specified host lists set in the fabfile (``env.hosts = ['host1']``) *can* override such lists set on the command-line, but only if you're not careful (or want them to.) * Globally specified host lists set on the command-line (``--hosts=host1``) will initialize the ``env`` variables, but that's it. This logic may change slightly in the future to be more consistent (e.g. having :option:`--hosts <-H>` somehow take precedence over ``env.hosts`` in the same way that command-line per-task lists trump in-code ones) but only in a backwards-incompatible release. .. _combining-host-lists: Combining host lists -------------------- There is no "unionizing" of hosts between the various sources mentioned in :ref:`host-lists`. If ``env.hosts`` is set to ``['host1', 'host2', 'host3']``, and a per-function (e.g. via `~fabric.decorators.hosts`) host list is set to just ``['host2', 'host3']``, that function will **not** execute on ``host1``, because the per-task decorator host list takes precedence. However, for each given source, if both roles **and** hosts are specified, they will be merged together into a single host list. Take, for example, this fabfile where both of the decorators are used:: from fabric.api import env, hosts, roles, run env.roledefs = {'role1': ['b', 'c']} @hosts('a', 'b') @roles('role1') def mytask(): run('ls /var/www') Assuming no command-line hosts or roles are given when ``mytask`` is executed, this fabfile will call ``mytask`` on a host list of ``['a', 'b', 'c']`` -- the union of ``role1`` and the contents of the `~fabric.decorators.hosts` call. ..
_deduplication: Host list deduplication ----------------------- By default, to support :ref:`combining-host-lists`, Fabric deduplicates the final host list so any given host string is only present once. However, this prevents explicit/intentional running of a task multiple times on the same target host, which is sometimes useful. To turn off deduplication, set :ref:`env.dedupe_hosts ` to ``False``. .. _excluding-hosts: Excluding specific hosts ------------------------ At times, it is useful to exclude one or more specific hosts, e.g. to override a few bad or otherwise undesirable hosts which are pulled in from a role or an autogenerated host list. .. note:: As of Fabric 1.4, you may wish to use :ref:`skip-bad-hosts` instead, which automatically skips over any unreachable hosts. Host exclusion may be accomplished globally with :option:`--exclude-hosts/-x <-x>`:: $ fab -R myrole -x host2,host5 mytask If ``myrole`` was defined as ``['host1', 'host2', ..., 'host15']``, the above invocation would run with an effective host list of ``['host1', 'host3', 'host4', 'host6', ..., 'host15']``. .. note:: Using this option does not modify ``env.hosts`` -- it only causes the main execution loop to skip the requested hosts. Exclusions may be specified per-task by using an extra ``exclude_hosts`` kwarg, which is implemented similarly to the abovementioned ``hosts`` and ``roles`` per-task kwargs, in that it is stripped from the actual task invocation. This example would have the same result as the global exclude above:: $ fab mytask:roles=myrole,exclude_hosts="host2;host5" Note that the host list is semicolon-separated, just as with the ``hosts`` per-task argument. Combining exclusions ~~~~~~~~~~~~~~~~~~~~ Host exclusion lists, like host lists themselves, are not merged together across the different "levels" they can be declared in. For example, a global ``-x`` option will not affect a per-task host list set with a decorator or keyword argument, nor will per-task ``exclude_hosts`` keyword arguments affect a global ``-H`` list. There is one minor exception to this rule, namely that CLI-level keyword arguments (``mytask:exclude_hosts=x,y``) **will** be taken into account when examining host lists set via ``@hosts`` or ``@roles``. Thus a task function decorated with ``@hosts('host1', 'host2')`` executed as ``fab taskname:exclude_hosts=host2`` will only run on ``host1``. As with the host list merging, this functionality is currently limited (partly to keep the implementation simple) and may be expanded in future releases. .. _execute: Intelligently executing tasks with ``execute`` ============================================== .. versionadded:: 1.3 Most of the information here involves "top level" tasks executed via :doc:`fab `, such as the first example where we called ``fab taskA taskB``. However, it's often convenient to wrap up multi-task invocations like this into their own, "meta" tasks. Prior to Fabric 1.3, this had to be done by hand, as outlined in :doc:`/usage/library`. Fabric's design eschews magical behavior, so simply *calling* a task function does **not** take into account decorators such as `~fabric.decorators.roles`. New in Fabric 1.3 is the `~fabric.tasks.execute` helper function, which takes a task object or name as its first argument. Using it is effectively the same as calling the given task from the command line: all the rules given above in :ref:`host-lists` apply. 
(The ``hosts`` and ``roles`` keyword arguments to `~fabric.tasks.execute` are analogous to :ref:`CLI per-task arguments `, including how they override all other host/role-setting methods.) As an example, here's a fabfile defining two stand-alone tasks for deploying a Web application:: from fabric.api import env, run, roles env.roledefs = { 'db': ['db1', 'db2'], 'web': ['web1', 'web2', 'web3'], } @roles('db') def migrate(): # Database stuff here. pass @roles('web') def update(): # Code updates here. pass In Fabric <=1.2, the only way to ensure that ``migrate`` runs on the DB servers and that ``update`` runs on the Web servers (short of manual ``env.host_string`` manipulation) was to call both as top level tasks:: $ fab migrate update Fabric >=1.3 can use `~fabric.tasks.execute` to set up a meta-task. Update the ``import`` line like so:: from fabric.api import env, run, roles, execute and append this to the bottom of the file:: def deploy(): execute(migrate) execute(update) That's all there is to it; the `~fabric.decorators.roles` decorators will be honored as expected, resulting in the following execution sequence: * `migrate` on `db1` * `migrate` on `db2` * `update` on `web1` * `update` on `web2` * `update` on `web3` .. warning:: This technique works because tasks that themselves have no host list (this includes the global host list settings) only run one time. If used inside a "regular" task that is going to run on multiple hosts, calls to `~fabric.tasks.execute` will also run multiple times, resulting in multiplicative numbers of subtask calls -- be careful! If you would like your `execute` calls to only be called once, you may use the `~fabric.decorators.runs_once` decorator. .. seealso:: `~fabric.tasks.execute`, `~fabric.decorators.runs_once` .. _leveraging-execute-return-value: Leveraging ``execute`` to access multi-host results --------------------------------------------------- In nontrivial Fabric runs, especially parallel ones, you may want to gather up a bunch of per-host result values at the end - e.g. to present a summary table, perform calculations, etc. It's not possible to do this in Fabric's default "naive" mode (one where you rely on Fabric looping over host lists on your behalf), but with `.execute` it's pretty easy. Simply switch from calling the actual work-bearing task, to calling a "meta" task which takes control of execution with `.execute`:: from fabric.api import task, execute, run, runs_once @task def workhorse(): return run("get my infos") @task @runs_once def go(): results = execute(workhorse) print results In the above, ``workhorse`` can do any Fabric stuff at all -- it's literally your old "naive" task -- except that it needs to return something useful. ``go`` is your new entry point (to be invoked as ``fab go``, or whatnot) and its job is to take the ``results`` dictionary from the `.execute` call and do whatever you need with it. Check the API docs for details on the structure of that return value. .. _dynamic-hosts: Using ``execute`` with dynamically-set host lists ------------------------------------------------- A common intermediate-to-advanced use case for Fabric is to parameterize lookup of one's target host list at runtime (when use of :ref:`execution-roles` does not suffice). ``execute`` can make this extremely simple, like so:: from fabric.api import run, execute, task # For example, code talking to an HTTP API, or a database, or ... from mylib import external_datastore # This is the actual algorithm involved. It does not care about host # lists at all.
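# (Illustrative assumption: external_datastore.query() takes a tag such as
# 'app' or 'db' and returns a list of Fabric host strings, e.g. ['db1', 'db2'].)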
def do_work(): run("something interesting on a host") # This is the user-facing task invoked on the command line. @task def deploy(lookup_param): # This is the magic you don't get with @hosts or @roles. # Even lazy-loading roles require you to declare available roles # beforehand. Here, the sky is the limit. host_list = external_datastore.query(lookup_param) # Put this dynamically generated host list together with the work to be # done. execute(do_work, hosts=host_list) For example, if ``external_datastore`` was a simplistic "look up hosts by tag in a database" service, and you wanted to run a task on all hosts tagged as being related to your application stack, you might call the above like this:: $ fab deploy:app But wait! A data migration has gone awry on the DB servers. Let's fix up our migration code in our source repo, and deploy just the DB boxes again:: $ fab deploy:db This use case looks similar to Fabric's roles, but has much more potential, and is by no means limited to a single argument. Define the task however you wish, query your external data store in whatever way you need -- it's just Python. The alternate approach ~~~~~~~~~~~~~~~~~~~~~~ Similar to the above, but using ``fab``'s ability to call multiple tasks in succession instead of an explicit ``execute`` call, is to mutate :ref:`env.hosts ` in a host-list lookup task and then call ``do_work`` in the same session:: from fabric.api import env, run, task from mylib import external_datastore # Marked as a publicly visible task, but otherwise unchanged: still just # "do the work, let somebody else worry about what hosts to run on". @task def do_work(): run("something interesting on a host") @task def set_hosts(lookup_param): # Update env.hosts instead of calling execute() env.hosts = external_datastore.query(lookup_param) Then invoke like so:: $ fab set_hosts:app do_work One benefit of this approach over the previous one is that you can replace ``do_work`` with any other "workhorse" task:: $ fab set_hosts:db snapshot $ fab set_hosts:cassandra,cluster2 repair_ring $ fab set_hosts:redis,environ=prod status .. _failures: Failure handling ================ Once the task list has been constructed, Fabric will start executing them as outlined in :ref:`execution-strategy`, until all tasks have been run on the entirety of their host lists. However, Fabric defaults to a "fail-fast" behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile's Python code encountering an exception, execution will halt immediately. This is typically the desired behavior, but there are many exceptions to the rule, so Fabric provides ``env.warn_only``, a Boolean setting. It defaults to ``False``, meaning an error condition will result in the program aborting immediately. However, if ``env.warn_only`` is set to ``True`` at the time of failure -- with, say, the `~fabric.context_managers.settings` context manager -- Fabric will emit a warning message but continue executing. .. _connections: Connections =========== ``fab`` itself doesn't actually make any connections to remote hosts. Instead, it simply ensures that for each distinct run of a task on one of its hosts, the env var ``env.host_string`` is set to the right value. Users wanting to leverage Fabric as a library may do so manually to achieve similar effects (though as of Fabric 1.3, using `~fabric.tasks.execute` is preferred and more powerful.)
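For library-style use, the minimal pattern looks something like the following sketch (the host string is invented, and as noted above `~fabric.tasks.execute` is usually the better tool)::

    from fabric.api import env, run
    from fabric.network import disconnect_all

    env.host_string = 'deploy@web1'
    try:
        run('uname -s')
    finally:
        # Library code must clean up after itself; see "Closing connections" below.
        disconnect_all()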
``env.host_string`` is (as the name implies) the "current" host string, and is what Fabric uses to determine what connections to make (or re-use) when network-aware functions are run. Operations like `~fabric.operations.run` or `~fabric.operations.put` use ``env.host_string`` as a lookup key in a shared dictionary which maps host strings to SSH connection objects. .. note:: The connections dictionary (currently located at ``fabric.state.connections``) acts as a cache, opting to return previously created connections if possible in order to save some overhead, and creating new ones otherwise. Lazy connections ---------------- Because connections are driven by the individual operations, Fabric will not actually make connections until they're necessary. Take for example this task which does some local housekeeping prior to interacting with the remote server:: from fabric.api import * @hosts('host1') def clean_and_upload(): local(r'find assets/ -name "*.DS_Store" -exec rm {} \;') local('tar czf /tmp/assets.tgz assets/') put('/tmp/assets.tgz', '/tmp/assets.tgz') with cd('/var/www/myapp/'): run('tar xzf /tmp/assets.tgz') What happens, connection-wise, is as follows: #. The two `~fabric.operations.local` calls will run without making any network connections whatsoever; #. `~fabric.operations.put` asks the connection cache for a connection to ``host1``; #. The connection cache fails to find an existing connection for that host string, and so creates a new SSH connection, returning it to `~fabric.operations.put`; #. `~fabric.operations.put` uploads the file through that connection; #. Finally, the `~fabric.operations.run` call asks the cache for a connection to that same host string, and is given the existing, cached connection for its own use. Extrapolating from this, you can also see that tasks which don't use any network-borne operations will never actually initiate any connections (though they will still be run once for each host in their host list, if any.) Closing connections ------------------- Fabric's connection cache never closes connections itself -- it leaves this up to whatever is using it. The :doc:`fab ` tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not.) Library users will need to ensure they explicitly close all open connections before their program exits. This can be accomplished by calling `~fabric.network.disconnect_all` at the end of your script. .. note:: `~fabric.network.disconnect_all` may be moved to a more public location in the future; we're still working on making the library aspects of Fabric more solidified and organized. Multiple connection attempts and skipping bad hosts --------------------------------------------------- As of Fabric 1.4, multiple attempts may be made to connect to remote servers before aborting with an error: Fabric will try connecting :ref:`env.connection_attempts ` times before giving up, with a timeout of :ref:`env.timeout ` seconds each time. (These currently default to 1 try and 10 seconds, to match previous behavior, but they may be safely changed to whatever you need.) Furthermore, even total failure to connect to a server is no longer an absolute hard stop: set :ref:`env.skip_bad_hosts ` to ``True`` and in most situations (typically initial connections) Fabric will simply warn and continue, instead of aborting. .. versionadded:: 1.4 ..
_password-management: Password management =================== Fabric maintains an in-memory, two-tier password cache to help remember your login and sudo passwords in certain situations; this helps avoid tedious re-entry when multiple systems share the same password [#]_, or if a remote system's ``sudo`` configuration doesn't do its own caching. The first layer is a simple default or fallback password cache, :ref:`env.password ` (which may also be set at the command line via :option:`--password <-p>` or :option:`--initial-password-prompt <-I>`). This env var stores a single password which (if non-empty) will be tried in the event that the host-specific cache (see below) has no entry for the current :ref:`host string `. :ref:`env.passwords ` (plural!) serves as a per-user/per-host cache, storing the most recently entered password for every unique user/host/port combination. Due to this cache, connections to multiple different users and/or hosts in the same session will only require a single password entry for each. (Previous versions of Fabric used only the single, default password cache and thus required password re-entry every time the previously entered password became invalid.) Depending on your configuration and the number of hosts your session will connect to, you may find setting either or both of these env vars to be useful. However, Fabric will automatically fill them in as necessary without any additional configuration. Specifically, each time a password prompt is presented to the user, the value entered is used to update both the single default password cache, and the cache value for the current value of ``env.host_string``. .. [#] We highly recommend the use of SSH `key-based access `_ instead of relying on homogeneous password setups, as it's significantly more secure. .. _ssh-config: Leveraging native SSH config files ================================== Command-line SSH clients (such as the one provided by `OpenSSH `_) make use of a specific configuration format typically known as ``ssh_config``, and will read from a file in the platform-specific location ``$HOME/.ssh/config`` (or an arbitrary path given to :option:`--ssh-config-path`/:ref:`env.ssh_config_path `.) This file allows specification of various SSH options such as default or per-host usernames, hostname aliases, and toggling other settings (such as whether to use :ref:`agent forwarding `.) Fabric's SSH implementation allows loading a subset of these options from one's actual SSH config file, should it exist. This behavior is not enabled by default (in order to be backwards compatible) but may be turned on by setting :ref:`env.use_ssh_config ` to ``True`` at the top of your fabfile. If enabled, the following SSH config directives will be loaded and honored by Fabric: * ``User`` and ``Port`` will be used to fill in the appropriate connection parameters when not otherwise specified, in the following fashion: * Globally specified ``User``/``Port`` will be used in place of the current defaults (local username and 22, respectively) if the appropriate env vars are not set. * However, if :ref:`env.user `/:ref:`env.port ` *are* set, they override global ``User``/``Port`` values. * User/port values in the host string itself (e.g. ``hostname:222``) will override everything, including any ``ssh_config`` values. * ``HostName`` can be used to replace the given hostname, just like with regular ``ssh``. 
So a ``Host foo`` entry specifying ``HostName example.com`` will allow you to give Fabric the hostname ``'foo'`` and have that expanded into ``'example.com'`` at connection time. * ``IdentityFile`` will extend (not replace) :ref:`env.key_filename `. * ``ForwardAgent`` will augment :ref:`env.forward_agent ` in an "OR" manner: if either is set to a positive value, agent forwarding will be enabled. * ``ProxyCommand`` will trigger use of a proxy command for host connections, just as with regular ``ssh``. .. note:: If all you want to do is bounce SSH traffic off a gateway, you may find :ref:`env.gateway ` to be a more efficient connection method (which will also honor more Fabric-level settings) than the typical ``ssh gatewayhost nc %h %p`` method of using ``ProxyCommand`` as a gateway. .. note:: If your SSH config file contains ``ProxyCommand`` directives *and* you have set :ref:`env.gateway ` to a non-``None`` value, ``env.gateway`` will take precedence and the ``ProxyCommand`` will be ignored. If one has a pre-created SSH config file, rationale states it will be easier for you to modify ``env.gateway`` (e.g. via `~fabric.context_managers.settings`) than to work around your conf file's contents entirely. Fabric-1.8.2/docs/usage/fab.rst000644 000765 000024 00000037766 12277451120 017220 0ustar00jforcierstaff000000 000000 ============================= ``fab`` options and arguments ============================= The most common method for utilizing Fabric is via its command-line tool, ``fab``, which should have been placed on your shell's executable path when Fabric was installed. ``fab`` tries hard to be a good Unix citizen, using a standard style of command-line switches, help output, and so forth. Basic use ========= In its most simple form, ``fab`` may be called with no options at all, and with one or more arguments, which should be task names, e.g.:: $ fab task1 task2 As detailed in :doc:`../tutorial` and :doc:`execution`, this will run ``task1`` followed by ``task2``, assuming that Fabric was able to find a fabfile nearby containing Python functions with those names. However, it's possible to expand this simple usage into something more flexible, by using the provided options and/or passing arguments to individual tasks. .. _arbitrary-commands: Arbitrary remote shell commands =============================== .. versionadded:: 0.9.2 Fabric leverages a lesser-known command line convention and may be called in the following manner:: $ fab [options] -- [shell command] where everything after the ``--`` is turned into a temporary `~fabric.operations.run` call, and is not parsed for ``fab`` options. If you've defined a host list at the module level or on the command line, this usage will act like a one-line anonymous task. For example, let's say you just wanted to get the kernel info for a bunch of systems; you could do this:: $ fab -H system1,system2,system3 -- uname -a which would be literally equivalent to the following fabfile:: from fabric.api import run def anonymous(): run("uname -a") as if it were executed thusly:: $ fab -H system1,system2,system3 anonymous Most of the time you will want to just write out the task in your fabfile (anything you use once, you're likely to use again) but this feature provides a handy, fast way to quickly dash off an SSH-borne command while leveraging your fabfile's connection settings. .. _command-line-options: Command-line options ==================== A quick overview of all possible command line options can be found via ``fab --help``. 
If you're looking for details on a specific option, we go into detail below. .. note:: ``fab`` uses Python's `optparse`_ library, meaning that it honors typical Linux or GNU style short and long options, as well as freely mixing options and arguments. E.g. ``fab task1 -H hostname task2 -i path/to/keyfile`` is just as valid as the more straightforward ``fab -H hostname -i path/to/keyfile task1 task2``. .. _optparse: http://docs.python.org/library/optparse.html .. cmdoption:: -a, --no_agent Sets :ref:`env.no_agent ` to ``True``, forcing our SSH layer not to talk to the SSH agent when trying to unlock private key files. .. versionadded:: 0.9.1 .. cmdoption:: -A, --forward-agent Sets :ref:`env.forward_agent ` to ``True``, enabling agent forwarding. .. versionadded:: 1.4 .. cmdoption:: --abort-on-prompts Sets :ref:`env.abort_on_prompts ` to ``True``, forcing Fabric to abort whenever it would prompt for input. .. versionadded:: 1.1 .. cmdoption:: -c RCFILE, --config=RCFILE Sets :ref:`env.rcfile ` to the given file path, which Fabric will try to load on startup and use to update environment variables. .. cmdoption:: -d COMMAND, --display=COMMAND Prints the entire docstring for the given task, if there is one. Does not currently print out the task's function signature, so descriptive docstrings are a good idea. (They're *always* a good idea, of course -- just moreso here.) .. cmdoption:: --connection-attempts=M, -n M Set number of times to attempt connections. Sets :ref:`env.connection_attempts `. .. seealso:: :ref:`env.connection_attempts `, :ref:`env.timeout ` .. versionadded:: 1.4 .. cmdoption:: -D, --disable-known-hosts Sets :ref:`env.disable_known_hosts ` to ``True``, preventing Fabric from loading the user's SSH :file:`known_hosts` file. .. cmdoption:: -f FABFILE, --fabfile=FABFILE The fabfile name pattern to search for (defaults to ``fabfile.py``), or alternately an explicit file path to load as the fabfile (e.g. ``/path/to/my/fabfile.py``.) .. seealso:: :doc:`fabfiles` .. cmdoption:: -F LIST_FORMAT, --list-format=LIST_FORMAT Allows control over the output format of :option:`--list <-l>`. ``short`` is equivalent to :option:`--shortlist`, ``normal`` is the same as simply omitting this option entirely (i.e. the default), and ``nested`` prints out a nested namespace tree. .. versionadded:: 1.1 .. seealso:: :option:`--shortlist`, :option:`--list <-l>` .. cmdoption:: -g HOST, --gateway=HOST Sets :ref:`env.gateway ` to ``HOST`` host string. .. versionadded:: 1.5 .. cmdoption:: -h, --help Displays a standard help message, with all possible options and a brief overview of what they do, then exits. .. cmdoption:: --hide=LEVELS A comma-separated list of :doc:`output levels ` to hide by default. .. cmdoption:: -H HOSTS, --hosts=HOSTS Sets :ref:`env.hosts ` to the given comma-delimited list of host strings. .. cmdoption:: -x HOSTS, --exclude-hosts=HOSTS Sets :ref:`env.exclude_hosts ` to the given comma-delimited list of host strings to then keep out of the final host list. .. versionadded:: 1.1 .. cmdoption:: -i KEY_FILENAME When set to a file path, will load the given file as an SSH identity file (usually a private key.) This option may be repeated multiple times. Sets (or appends to) :ref:`env.key_filename `. .. cmdoption:: -I, --initial-password-prompt Forces a password prompt at the start of the session (after fabfile load and option parsing, but before executing any tasks) in order to pre-fill :ref:`env.password `. 
This is useful for fire-and-forget runs (especially parallel sessions, in which runtime input is not possible) when setting the password via :option:`--password <-p>` or by setting :ref:`env.password ` in your fabfile, is undesirable. .. note:: The value entered into this prompt will *overwrite* anything supplied via :ref:`env.password ` at module level, or via :option:`--password <-p>`. .. seealso:: :ref:`password-management` .. cmdoption:: -k Sets :ref:`env.no_keys ` to ``True``, forcing the SSH layer to not look for SSH private key files in one's home directory. .. versionadded:: 0.9.1 .. cmdoption:: --keepalive=KEEPALIVE Sets :ref:`env.keepalive ` to the given (integer) value, specifying an SSH keepalive interval. .. versionadded:: 1.1 .. cmdoption:: --linewise Forces output to be buffered line-by-line instead of byte-by-byte. Often useful or required for :ref:`parallel execution `. .. versionadded:: 1.3 .. cmdoption:: -l, --list Imports a fabfile as normal, but then prints a list of all discovered tasks and exits. Will also print the first line of each task's docstring, if it has one, next to it (truncating if necessary.) .. versionchanged:: 0.9.1 Added docstring to output. .. seealso:: :option:`--shortlist`, :option:`--list-format <-F>` .. cmdoption:: -p PASSWORD, --password=PASSWORD Sets :ref:`env.password ` to the given string; it will then be used as the default password when making SSH connections or calling the ``sudo`` program. .. seealso:: :option:`--initial-password-prompt <-I>` .. cmdoption:: -P, --parallel Sets :ref:`env.parallel ` to ``True``, causing tasks to run in parallel. .. versionadded:: 1.3 .. seealso:: :doc:`/usage/parallel` .. cmdoption:: --no-pty Sets :ref:`env.always_use_pty ` to ``False``, causing all `~fabric.operations.run`/`~fabric.operations.sudo` calls to behave as if one had specified ``pty=False``. .. versionadded:: 1.0 .. cmdoption:: -r, --reject-unknown-hosts Sets :ref:`env.reject_unknown_hosts ` to ``True``, causing Fabric to abort when connecting to hosts not found in the user's SSH :file:`known_hosts` file. .. cmdoption:: -R ROLES, --roles=ROLES Sets :ref:`env.roles ` to the given comma-separated list of role names. .. cmdoption:: --set KEY=VALUE,... Allows you to set default values for arbitrary Fabric env vars. Values set this way have a low precedence -- they will not override more specific env vars which are also specified on the command line. E.g.:: fab --set password=foo --password=bar will result in ``env.password = 'bar'``, not ``'foo'`` Multiple ``KEY=VALUE`` pairs may be comma-separated, e.g. ``fab --set var1=val1,var2=val2``. Other than basic string values, you may also set env vars to True by omitting the ``=VALUE`` (e.g. ``fab --set KEY``), and you may set values to the empty string (and thus a False-equivalent value) by keeping the equals sign, but omitting ``VALUE`` (e.g. ``fab --set KEY=``.) .. versionadded:: 1.4 .. cmdoption:: -s SHELL, --shell=SHELL Sets :ref:`env.shell ` to the given string, overriding the default shell wrapper used to execute remote commands. .. cmdoption:: --shortlist Similar to :option:`--list <-l>`, but without any embellishment, just task names separated by newlines with no indentation or docstrings. .. versionadded:: 0.9.2 .. seealso:: :option:`--list <-l>` .. cmdoption:: --show=LEVELS A comma-separated list of :doc:`output levels ` to be added to those that are shown by default. .. seealso:: `~fabric.operations.run`, `~fabric.operations.sudo` .. 
cmdoption:: --ssh-config-path Sets :ref:`env.ssh_config_path `. .. versionadded:: 1.4 .. seealso:: :ref:`ssh-config` .. cmdoption:: --skip-bad-hosts Sets :ref:`env.skip_bad_hosts `, causing Fabric to skip unavailable hosts. .. versionadded:: 1.4 .. cmdoption:: --timeout=N, -t N Set connection timeout in seconds. Sets :ref:`env.timeout `. .. seealso:: :ref:`env.timeout `, :ref:`env.connection_attempts ` .. versionadded:: 1.4 .. cmdoption:: --command-timeout=N, -T N Set remote command timeout in seconds. Sets :ref:`env.command_timeout `. .. seealso:: :ref:`env.command_timeout `, .. versionadded:: 1.6 .. cmdoption:: -u USER, --user=USER Sets :ref:`env.user ` to the given string; it will then be used as the default username when making SSH connections. .. cmdoption:: -V, --version Displays Fabric's version number, then exits. .. cmdoption:: -w, --warn-only Sets :ref:`env.warn_only ` to ``True``, causing Fabric to continue execution even when commands encounter error conditions. .. cmdoption:: -z, --pool-size Sets :ref:`env.pool_size `, which specifies how many processes to run concurrently during parallel execution. .. versionadded:: 1.3 .. seealso:: :doc:`/usage/parallel` .. _task-arguments: Per-task arguments ================== The options given in :ref:`command-line-options` apply to the invocation of ``fab`` as a whole; even if the order is mixed around, options still apply to all given tasks equally. Additionally, since tasks are just Python functions, it's often desirable to pass in arguments to them at runtime. Answering both these needs is the concept of "per-task arguments", which is a special syntax you can tack onto the end of any task name: * Use a colon (``:``) to separate the task name from its arguments; * Use commas (``,``) to separate arguments from one another (may be escaped by using a backslash, i.e. ``\,``); * Use equals signs (``=``) for keyword arguments, or omit them for positional arguments. May also be escaped with backslashes. Additionally, since this process involves string parsing, all values will end up as Python strings, so plan accordingly. (We hope to improve upon this in future versions of Fabric, provided an intuitive syntax can be found.) For example, a "create a new user" task might be defined like so (omitting most of the actual logic for brevity):: def new_user(username, admin='no', comment="No comment provided"): log_action("New User (%s): %s" % (username, comment)) pass You can specify just the username:: $ fab new_user:myusername Or treat it as an explicit keyword argument:: $ fab new_user:username=myusername If both args are given, you can again give them as positional args:: $ fab new_user:myusername,yes Or mix and match, just like in Python:: $ fab new_user:myusername,admin=yes The ``log_action`` call above is useful for illustrating escaped commas, like so:: $ fab new_user:myusername,admin=no,comment='Gary\, new developer (starts Monday)' .. note:: Quoting the backslash-escaped comma is required, as not doing so will cause shell syntax errors. Quotes are also needed whenever an argument involves other shell-related characters such as spaces. All of the above are translated into the expected Python function calls. 
For example, the last call above would become:: >>> new_user('myusername', admin='yes', comment='Gary, new developer (starts Monday)') Roles and hosts --------------- As mentioned in :ref:`the section on task execution `, there are a handful of per-task keyword arguments (``host``, ``hosts``, ``role`` and ``roles``) which do not actually map to the task functions themselves, but are used for setting per-task host and/or role lists. These special kwargs are **removed** from the args/kwargs sent to the task function itself; this is so that you don't run into TypeErrors if your task doesn't define the kwargs in question. (It also means that if you **do** define arguments with these names, you won't be able to specify them in this manner -- a regrettable but necessary sacrifice.) .. note:: If both the plural and singular forms of these kwargs are given, the value of the plural will win out and the singular will be discarded. When using the plural form of these arguments, one must use semicolons (``;``) since commas are already being used to separate arguments from one another. Furthermore, since your shell is likely to consider semicolons a special character, you'll want to quote the host list string to prevent shell interpretation, e.g.:: $ fab new_user:myusername,hosts="host1;host2" Again, since the ``hosts`` kwarg is removed from the argument list sent to the ``new_user`` task function, the actual Python invocation would be ``new_user('myusername')``, and the function would be executed on a host list of ``['host1', 'host2']``. .. _fabricrc: Settings files ============== Fabric currently honors a simple user settings file, or ``fabricrc`` (think ``bashrc`` but for ``fab``) which should contain one or more key-value pairs, one per line. These lines will be subject to ``string.split('=')``, and thus can currently only be used to specify string settings. Any such key-value pairs will be used to update :doc:`env ` when ``fab`` runs, and is loaded prior to the loading of any fabfile. By default, Fabric looks for ``~/.fabricrc``, and this may be overridden by specifying the :option:`-c` flag to ``fab``. For example, if your typical SSH login username differs from your workstation username, and you don't want to modify ``env.user`` in a project's fabfile (possibly because you expect others to use it as well) you could write a ``fabricrc`` file like so:: user = ssh_user_name Then, when running ``fab``, your fabfile would load up with ``env.user`` set to ``'ssh_user_name'``. Other users of that fabfile could do the same, allowing the fabfile itself to be cleanly agnostic regarding the default username. Fabric-1.8.2/docs/usage/fabfiles.rst000644 000765 000024 00000007151 12277451120 020224 0ustar00jforcierstaff000000 000000 ============================ Fabfile construction and use ============================ This document contains miscellaneous sections about fabfiles, both how to best write them, and how to use them once written. .. _fabfile-discovery: Fabfile discovery ================= Fabric is capable of loading Python modules (e.g. ``fabfile.py``) or packages (e.g. a ``fabfile/`` directory containing an ``__init__.py``). By default, it looks for something named (to Python's import machinery) ``fabfile`` - so either ``fabfile/`` or ``fabfile.py``. The fabfile discovery algorithm searches in the invoking user's current working directory or any parent directories. Thus, it is oriented around "project" use, where one keeps e.g. a ``fabfile.py`` at the root of a source code tree. 
Such a fabfile will then be discovered no matter where in the tree the user invokes ``fab``. The specific name to be searched for may be overridden on the command-line with the :option:`-f` option, or by adding a :ref:`fabricrc ` line which sets the value of ``fabfile``. For example, if you wanted to name your fabfile ``fab_tasks.py``, you could create such a file and then call ``fab -f fab_tasks.py ``, or add ``fabfile = fab_tasks.py`` to ``~/.fabricrc``. If the given fabfile name contains path elements other than a filename (e.g. ``../fabfile.py`` or ``/dir1/dir2/custom_fabfile``) it will be treated as a file path and directly checked for existence without any sort of searching. When in this mode, tilde-expansion will be applied, so one may refer to e.g. ``~/personal_fabfile.py``. .. note:: Fabric does a normal ``import`` (actually an ``__import__``) of your fabfile in order to access its contents -- it does not do any ``eval``-ing or similar. In order for this to work, Fabric temporarily adds the found fabfile's containing folder to the Python load path (and removes it immediately afterwards.) .. versionchanged:: 0.9.2 The ability to load package fabfiles. .. _importing-the-api: Importing Fabric ================ Because Fabric is just Python, you *can* import its components any way you want. However, for the purposes of encapsulation and convenience (and to make life easier for Fabric's packaging script) Fabric's public API is maintained in the ``fabric.api`` module. All of Fabric's :doc:`../api/core/operations`, :doc:`../api/core/context_managers`, :doc:`../api/core/decorators` and :doc:`../api/core/utils` are included in this module as a single, flat namespace. This enables a very simple and consistent interface to Fabric within your fabfiles:: from fabric.api import * # call run(), sudo(), etc etc This is not technically best practices (for `a number of reasons`_) and if you're only using a couple of Fab API calls, it *is* probably a good idea to explicitly ``from fabric.api import env, run`` or similar. However, in most nontrivial fabfiles, you'll be using all or most of the API, and the star import:: from fabric.api import * will be a lot easier to write and read than:: from fabric.api import abort, cd, env, get, hide, hosts, local, prompt, \ put, require, roles, run, runs_once, settings, show, sudo, warn so in this case we feel pragmatism overrides best practices. .. _a number of reasons: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#importing Defining tasks and importing callables ====================================== For important information on what exactly Fabric will consider as a task when it loads your fabfile, as well as notes on how best to import other code, please see :doc:`/usage/tasks` in the :doc:`execution` documentation. Fabric-1.8.2/docs/usage/interactivity.rst000644 000765 000024 00000015127 12257074160 021354 0ustar00jforcierstaff000000 000000 ================================ Interaction with remote programs ================================ Fabric's primary operations, `~fabric.operations.run` and `~fabric.operations.sudo`, are capable of sending local input to the remote end, in a manner nearly identical to the ``ssh`` program. For example, programs which display password prompts (e.g. a database dump utility, or changing a user's password) will behave just as if you were interacting with them directly. 
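For example, a task along these lines (a sketch; the account name is a
placeholder) needs no special handling -- your keystrokes, including hidden
password input, are forwarded to the remote program as you type::

    from fabric.api import sudo

    def reset_password():
        # passwd prompts twice for the new password; Fabric relays your
        # (non-echoed) input to the remote process as it is entered.
        sudo('passwd someuser')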
However, as with ``ssh`` itself, Fabric's implementation of this feature is subject to a handful of limitations which are not always intuitive. This document discusses such issues in detail. .. note:: Readers unfamiliar with the basics of Unix stdout and stderr pipes, and/or terminal devices, may wish to visit the Wikipedia pages for `Unix pipelines `_ and `Pseudo terminals `_ respectively. .. _combine_streams: Combining stdout and stderr =========================== The first issue to be aware of is that of the stdout and stderr streams, and why they are separated or combined as needed. Buffering --------- Fabric 0.9.x and earlier, and Python itself, buffer output on a line-by-line basis: text is not printed to the user until a newline character is found. This works fine in most situations but becomes problematic when one needs to deal with partial-line output such as prompts. .. note:: Line-buffered output can make programs appear to halt or freeze for no reason, as prompts print out text without a newline, waiting for the user to enter their input and press Return. Newer Fabric versions buffer both input and output on a character-by-character basis in order to make interaction with prompts possible. This has the convenient side effect of enabling interaction with complex programs utilizing the "curses" libraries or which otherwise redraw the screen (think ``top``). Crossing the streams -------------------- Unfortunately, printing to stderr and stdout simultaneously (as many programs do) means that when the two streams are printed independently one byte at a time, they can become garbled or meshed together. While this can sometimes be mitigated by line-buffering one of the streams and not the other, it's still a serious issue. To solve this problem, Fabric uses a setting in our SSH layer which merges the two streams at a low level and causes output to appear more naturally. This setting is represented in Fabric as the :ref:`combine-stderr` env var and keyword argument, and is ``True`` by default. Due to this default setting, output will appear correctly, but at the cost of an empty ``.stderr`` attribute on the return values of `~fabric.operations.run`/`~fabric.operations.sudo`, as all output will appear to be stdout. Conversely, users requiring a distinct stderr stream at the Python level and who aren't bothered by garbled user-facing output (or who are hiding stdout and stderr from the command in question) may opt to set this to ``False`` as needed. .. _pseudottys: Pseudo-terminals ================ The other main issue to consider when presenting interactive prompts to users is that of echoing the user's own input. Echoes ------ Typical terminal applications or bona fide text terminals (e.g. when using a Unix system without a running GUI) present programs with a terminal device called a tty or pty (for pseudo-terminal). These automatically echo all text typed into them back out to the user (via stdout), as interaction without seeing what you had just typed would be difficult. Terminal devices are also able to conditionally turn off echoing, allowing secure password prompts. However, it's possible for programs to be run without a tty or pty present at all (consider cron jobs, for example) and in this situation, any stdin data being fed to the program won't be echoed. This is desirable for programs being run without any humans around, and it's also Fabric's old default mode of operation. 
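A quick, contrived way to observe this distinction from the remote side is to
call ``tty`` with and without a pty (a sketch)::

    from fabric.api import run, settings

    def has_tty():
        run('tty')  # default pty=True: prints a device, e.g. /dev/pts/0
        with settings(warn_only=True):
            # Without a pty, tty prints "not a tty" and exits nonzero,
            # hence the warn_only guard.
            run('tty', pty=False)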
Fabric's approach ----------------- Unfortunately, in the context of executing commands via Fabric, when no pty is present to echo a user's stdin, Fabric must echo it for them. This is sufficient for many applications, but it presents problems for password prompts, which become insecure. In the interests of security and meeting the principle of least surprise (insofar as users are typically expecting things to behave as they would when run in a terminal emulator), Fabric 1.0 and greater force a pty by default. With a pty enabled, Fabric simply allows the remote end to handle echoing or hiding of stdin and does not echo anything itself. .. note:: In addition to allowing normal echo behavior, a pty also means programs that behave differently when attached to a terminal device will then do so. For example, programs that colorize output on terminals but not when run in the background will print colored output. Be wary of this if you inspect the return value of `~fabric.operations.run` or `~fabric.operations.sudo`! For situations requiring the pty behavior turned off, the :option:`--no-pty` command-line argument and :ref:`always-use-pty` env var may be used. Combining the two ================= As a final note, keep in mind that use of pseudo-terminals effectively implies combining stdout and stderr -- in much the same way as the :ref:`combine_stderr ` setting does. This is because a terminal device naturally sends both stdout and stderr to the same place -- the user's display -- thus making it impossible to differentiate between them. However, at the Fabric level, the two groups of settings are distinct from one another and may be combined in various ways. The default is for both to be set to ``True``; the other combinations are as follows: * ``run("cmd", pty=False, combine_stderr=True)``: will cause Fabric to echo all stdin itself, including passwords, as well as potentially altering ``cmd``'s behavior. Useful if ``cmd`` behaves undesirably when run under a pty and you're not concerned about password prompts. * ``run("cmd", pty=False, combine_stderr=False)``: with both settings ``False``, Fabric will echo stdin and won't issue a pty -- and this is highly likely to result in undesired behavior for all but the simplest commands. However, it is also the only way to access a distinct stderr stream, which is occasionally useful. * ``run("cmd", pty=True, combine_stderr=False)``: valid, but won't really make much of a difference, as ``pty=True`` will still result in merged streams. May be useful for avoiding any edge case problems in ``combine_stderr`` (none are presently known). Fabric-1.8.2/docs/usage/library.rst000644 000765 000024 00000005346 12257074160 020124 0ustar00jforcierstaff000000 000000 =========== Library Use =========== Fabric's primary use case is via fabfiles and the :doc:`fab ` tool, and this is reflected in much of the documentation. However, Fabric's internals are written in such a manner as to be easily used without ``fab`` or fabfiles at all -- this document will show you how. There's really only a couple of considerations one must keep in mind, when compared to writing a fabfile and using ``fab`` to run it: how connections are really made, and how disconnections occur. Connections =========== We've documented how Fabric really connects to its hosts before, but it's currently somewhat buried in the middle of the overall :doc:`execution docs `. Specifically, you'll want to skip over to the :ref:`connections` section and read it real quick. 
(You should really give that entire document a once-over, but it's not absolutely required.) As that section mentions, the key is simply that `~fabric.operations.run`, `~fabric.operations.sudo` and the other operations only look in one place when connecting: :ref:`env.host_string `. All of the other mechanisms for setting hosts are interpreted by the ``fab`` tool when it runs, and don't matter when running as a library. That said, most use cases where you want to marry a given task ``X`` and a given list of hosts ``Y`` can, as of Fabric 1.3, be handled with the `~fabric.tasks.execute` function via ``execute(X, hosts=Y)``. Please see `~fabric.tasks.execute`'s documentation for details -- manual host string manipulation should be rarely necessary. Disconnecting ============= The other main thing that ``fab`` does for you is to disconnect from all hosts at the end of a session; otherwise, Python will sit around forever waiting for those network resources to be released. Fabric 0.9.4 and newer have a function you can use to do this easily: `~fabric.network.disconnect_all`. Simply make sure your code calls this when it terminates (typically in the ``finally`` clause of an outer ``try: finally`` statement -- lest errors in your code prevent disconnections from happening!) and things ought to work pretty well. If you're on Fabric 0.9.3 or older, you can simply do this (``disconnect_all`` just adds a bit of nice output to this logic):: from fabric.state import connections for key in connections.keys(): connections[key].close() del connections[key] Final note ========== This document is an early draft, and may not cover absolutely every difference between ``fab`` use and library use. However, the above should highlight the largest stumbling blocks. When in doubt, note that in the Fabric source code, ``fabric/main.py`` contains the bulk of the extra work done by ``fab``, and may serve as a useful reference. Fabric-1.8.2/docs/usage/output_controls.rst000644 000765 000024 00000014027 12257074160 021737 0ustar00jforcierstaff000000 000000 =============== Managing output =============== The ``fab`` tool is very verbose by default and prints out almost everything it can, including the remote end's stderr and stdout streams, the command strings being executed, and so forth. While this is necessary in many cases in order to know just what's going on, any nontrivial Fabric task will quickly become difficult to follow as it runs. Output levels ============= To aid in organizing task output, Fabric output is grouped into a number of non-overlapping levels or groups, each of which may be turned on or off independently. This provides flexible control over what is displayed to the user. .. note:: All levels, save for ``debug``, are on by default. Standard output levels ---------------------- The standard, atomic output levels/groups are as follows: * **status**: Status messages, i.e. noting when Fabric is done running, if the user used a keyboard interrupt, or when servers are disconnected from. These messages are almost always relevant and rarely verbose. * **aborts**: Abort messages. Like status messages, these should really only be turned off when using Fabric as a library, and possibly not even then. Note that even if this output group is turned off, aborts will still occur -- there just won't be any output about why Fabric aborted! * **warnings**: Warning messages. These are often turned off when one expects a given operation to fail, such as when using ``grep`` to test existence of text in a file. 
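For instance, an existence check along these lines (a sketch; the path and
pattern are placeholders) is a common candidate for hiding warnings::

    from fabric.api import hide, run, settings

    def has_line():
        with settings(hide('warnings'), warn_only=True):
            result = run('grep -q some_pattern /etc/some_file')
        return result.succeeded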
If paired with setting ``env.warn_only`` to True, this can result in fully silent warnings when remote programs fail. As with ``aborts``, this setting does not control actual warning behavior, only whether warning messages are printed or hidden. * **running**: Printouts of commands being executed or files transferred, e.g. ``[myserver] run: ls /var/www``. Also controls printing of tasks being run, e.g. ``[myserver] Executing task 'foo'``. * **stdout**: Local, or remote, stdout, i.e. non-error output from commands. * **stderr**: Local, or remote, stderr, i.e. error-related output from commands. * **user**: User-generated output, i.e. local output printed by fabfile code via use of the `~fabric.utils.fastprint` or `~fabric.utils.puts` functions. .. versionchanged:: 0.9.2 Added "Executing task" lines to the ``running`` output level. .. versionchanged:: 0.9.2 Added the ``user`` output level. Debug output ------------ There is a final atomic output level, ``debug``, which behaves slightly differently from the rest: * **debug**: Turn on debugging (which is off by default.) Currently, this is largely used to view the "full" commands being run; take for example this `~fabric.operations.run` call:: run('ls "/home/username/Folder Name With Spaces/"') Normally, the ``running`` line will show exactly what is passed into `~fabric.operations.run`, like so:: [hostname] run: ls "/home/username/Folder Name With Spaces/" With ``debug`` on, and assuming you've left :ref:`shell` set to ``True``, you will see the literal, full string as passed to the remote server:: [hostname] run: /bin/bash -l -c "ls \"/home/username/Folder Name With Spaces\"" Enabling ``debug`` output will also display full Python tracebacks during aborts. .. note:: Where modifying other pieces of output (such as in the above example where it modifies the 'running' line to show the shell and any escape characters), this setting takes precedence over the others; so if ``running`` is False but ``debug`` is True, you will still be shown the 'running' line in its debugging form. .. versionchanged:: 1.0 Debug output now includes full Python tracebacks during aborts. .. _output-aliases: Output level aliases -------------------- In addition to the atomic/standalone levels above, Fabric also provides a couple of convenience aliases which map to multiple other levels. These may be referenced anywhere the other levels are referenced, and will effectively toggle all of the levels they are mapped to. * **output**: Maps to both ``stdout`` and ``stderr``. Useful for when you only care to see the 'running' lines and your own print statements (and warnings). * **everything**: Includes ``warnings``, ``running``, ``user`` and ``output`` (see above.) Thus, when turning off ``everything``, you will only see a bare minimum of output (just ``status`` and ``debug`` if it's on), along with your own print statements. * **commands**: Includes ``stdout`` and ``running``. Good for hiding non-erroring commands entirely, while still displaying any stderr output. .. versionchanged:: 1.4 Added the ``commands`` output alias. Hiding and/or showing output levels =================================== You may toggle any of Fabric's output levels in a number of ways; for examples, please see the API docs linked in each bullet point: * **Direct modification of fabric.state.output**: `fabric.state.output` is a dictionary subclass (similar to :doc:`env `) whose keys are the output level names, and whose values are either True (show that particular type of output) or False (hide it.) 
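A minimal sketch of such direct toggling::

      from fabric.state import output

      output['running'] = False  # hide "[host] run: ..." lines
      output['stdout'] = False   # hide remote stdout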
`fabric.state.output` is the lowest-level implementation of output levels and is what Fabric's internals reference when deciding whether or not to print their output. * **Context managers**: `~fabric.context_managers.hide` and `~fabric.context_managers.show` are twin context managers that take one or more output level names as strings, and either hide or show them within the wrapped block. As with Fabric's other context managers, the prior values are restored when the block exits. .. seealso:: `~fabric.context_managers.settings`, which can nest calls to `~fabric.context_managers.hide` and/or `~fabric.context_managers.show` inside itself. * **Command-line arguments**: You may use the :option:`--hide` and/or :option:`--show` arguments to :doc:`fab`, which behave exactly like the context managers of the same names (but are, naturally, globally applied) and take comma-separated strings as input. Fabric-1.8.2/docs/usage/parallel.rst000644 000765 000024 00000012160 12257074160 020244 0ustar00jforcierstaff000000 000000 ================== Parallel execution ================== .. versionadded:: 1.3 By default, Fabric executes all specified tasks **serially** (see :ref:`execution-strategy` for details.) This document describes Fabric's options for running tasks on multiple hosts in **parallel**, via per-task decorators and/or global command-line switches. What it does ============ Because Fabric 1.x is not fully threadsafe (and because in general use, task functions do not typically interact with one another) this functionality is implemented via the Python `multiprocessing `_ module. It creates one new process for each host and task combination, optionally using a (configurable) sliding window to prevent too many processes from running at the same time. For example, imagine a scenario where you want to update Web application code on a number of Web servers, and then reload the servers once the code has been distributed everywhere (to allow for easier rollback if code updates fail.) One could implement this with the following fabfile:: from fabric.api import * def update(): with cd("/srv/django/myapp"): run("git pull") def reload(): sudo("service apache2 reload") and execute it on a set of 3 servers, in serial, like so:: $ fab -H web1,web2,web3 update reload Normally, without any parallel execution options activated, Fabric would run in order: #. ``update`` on ``web1`` #. ``update`` on ``web2`` #. ``update`` on ``web3`` #. ``reload`` on ``web1`` #. ``reload`` on ``web2`` #. ``reload`` on ``web3`` With parallel execution activated (via :option:`-P` -- see below for details), this turns into: #. ``update`` on ``web1``, ``web2``, and ``web3`` #. ``reload`` on ``web1``, ``web2``, and ``web3`` Hopefully the benefits of this are obvious -- if ``update`` took 5 seconds to run and ``reload`` took 2 seconds, serial execution takes (5+2)*3 = 21 seconds to run, while parallel execution takes only a third of the time, (5+2) = 7 seconds on average. How to use it ============= Decorators ---------- Since the minimum "unit" that parallel execution affects is a task, the functionality may be enabled or disabled on a task-by-task basis using the `~fabric.decorators.parallel` and `~fabric.decorators.serial` decorators. For example, this fabfile:: from fabric.api import * @parallel def runs_in_parallel(): pass def runs_serially(): pass when run in this manner:: $ fab -H host1,host2,host3 runs_in_parallel runs_serially will result in the following execution sequence: #. 
``runs_in_parallel`` on ``host1``, ``host2``, and ``host3`` #. ``runs_serially`` on ``host1`` #. ``runs_serially`` on ``host2`` #. ``runs_serially`` on ``host3`` Command-line flags ------------------ One may also force all tasks to run in parallel by using the command-line flag :option:`-P` or the env variable :ref:`env.parallel `. However, any task specifically wrapped with `~fabric.decorators.serial` will ignore this setting and continue to run serially. For example, the following fabfile will result in the same execution sequence as the one above:: from fabric.api import * def runs_in_parallel(): pass @serial def runs_serially(): pass when invoked like so:: $ fab -H host1,host2,host3 -P runs_in_parallel runs_serially As before, ``runs_in_parallel`` will run in parallel, and ``runs_serially`` in sequence. Bubble size =========== With large host lists, a user's local machine can get overwhelmed by running too many concurrent Fabric processes. Because of this, you may opt to use a moving bubble approach that limits Fabric to a specific number of concurrently active processes. By default, no bubble is used and all hosts are run in one concurrent pool. You can override this on a per-task level by specifying the ``pool_size`` keyword argument to `~fabric.decorators.parallel`, or globally via :option:`-z`. For example, to run on 5 hosts at a time:: from fabric.api import * @parallel(pool_size=5) def heavy_task(): # lots of heavy local lifting or lots of IO here Or skip the ``pool_size`` kwarg and instead:: $ fab -P -z 5 heavy_task .. _linewise-output: Linewise vs bytewise output =========================== Fabric's default mode of printing to the terminal is byte-by-byte, in order to support :doc:`/usage/interactivity`. This often gives poor results when running in parallel mode, as the multiple processes may write to your terminal's standard out stream simultaneously. To help offset this problem, Fabric's option for linewise output is automatically enabled whenever parallelism is active. This will cause you to lose most of the benefits outlined in the above link Fabric's remote interactivity features, but as those do not map well to parallel invocations, it's typically a fair trade. There's no way to avoid the multiple processes mixing up on a line-by-line basis, but you will at least be able to tell them apart by the host-string line prefix. .. note:: Future versions will add improved logging support to make troubleshooting parallel runs easier. Fabric-1.8.2/docs/usage/ssh.rst000644 000765 000024 00000006147 12257074160 017255 0ustar00jforcierstaff000000 000000 ============ SSH behavior ============ Fabric currently makes use of a pure-Python SSH re-implementation for managing connections, meaning that there are occasionally spots where it is limited by that library's capabilities. Below are areas of note where Fabric will exhibit behavior that isn't consistent with, or as flexible as, the behavior of the ``ssh`` command-line program. Unknown hosts ============= SSH's host key tracking mechanism keeps tabs on all the hosts you attempt to connect to, and maintains a ``~/.ssh/known_hosts`` file with mappings between identifiers (IP address, sometimes with a hostname as well) and SSH keys. (For details on how this works, please see the `OpenSSH documentation `_.) The ``paramiko`` library is capable of loading up your ``known_hosts`` file, and will then compare any host it connects to, with that mapping. 
Settings are available to determine what happens when an unknown host (a host whose username or IP is not found in ``known_hosts``) is seen: * **Reject**: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown. * **Add**: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does **not** modify your on-disk ``known_hosts`` file! * **Ask**: not yet implemented at the Fabric level, this is a ``paramiko`` library option which would result in the user being prompted about the unknown key and whether to accept it. Whether to reject or add hosts, as above, is controlled in Fabric via the :ref:`env.reject_unknown_hosts ` option, which is False by default for convenience's sake. We feel this is a valid tradeoff between convenience and security; anyone who feels otherwise can easily modify their fabfiles at module level to set ``env.reject_unknown_hosts = True``. Known hosts with changed keys ============================= The point of SSH's key/fingerprint tracking is so that man-in-the-middle attacks can be detected: if an attacker redirects your SSH traffic to a computer under his control, and pretends to be your original destination server, the host keys will not match. Thus, the default behavior of SSH (and its Python implementation) is to immediately abort the connection when a host previously recorded in ``known_hosts`` suddenly starts sending us a different host key. In some edge cases such as some EC2 deployments, you may want to ignore this potential problem. Our SSH layer, at the time of writing, doesn't give us control over this exact behavior, but we can sidestep it by simply skipping the loading of ``known_hosts`` -- if the host list being compared to is empty, then there's no problem. Set :ref:`env.disable_known_hosts ` to True when you want this behavior; it is False by default, in order to preserve default SSH behavior. .. warning:: Enabling :ref:`env.disable_known_hosts ` will leave you wide open to man-in-the-middle attacks! Please use with caution. Fabric-1.8.2/docs/usage/tasks.rst000644 000765 000024 00000042774 12257074160 017613 0ustar00jforcierstaff000000 000000 ============== Defining tasks ============== As of Fabric 1.1, there are two distinct methods you may use in order to define which objects in your fabfile show up as tasks: * The "new" method starting in 1.1 considers instances of `~fabric.tasks.Task` or its subclasses, and also descends into imported modules to allow building nested namespaces. * The "classic" method from 1.0 and earlier considers all public callable objects (functions, classes etc) and only considers the objects in the fabfile itself with no recursing into imported module. .. note:: These two methods are **mutually exclusive**: if Fabric finds *any* new-style task objects in your fabfile or in modules it imports, it will assume you've committed to this method of task declaration and won't consider any non-`~fabric.tasks.Task` callables. If *no* new-style tasks are found, it reverts to the classic behavior. The rest of this document explores these two methods in detail. .. note:: To see exactly what tasks in your fabfile may be executed via ``fab``, use :option:`fab --list <-l>`. .. 
_new-style-tasks: New-style tasks =============== Fabric 1.1 introduced the `~fabric.tasks.Task` class to facilitate new features and enable some programming best practices, specifically: * **Object-oriented tasks**. Inheritance and all that comes with it can make for much more sensible code reuse than passing around simple function objects. The classic style of task declaration didn't entirely rule this out, but it also didn't make it terribly easy. * **Namespaces**. Having an explicit method of declaring tasks makes it easier to set up recursive namespaces without e.g. polluting your task list with the contents of Python's ``os`` module (which would show up as valid "tasks" under the classic methodology.) With the introduction of `~fabric.tasks.Task`, there are two ways to set up new tasks: * Decorate a regular module level function with `@task `, which transparently wraps the function in a `~fabric.tasks.Task` subclass. The function name will be used as the task name when invoking. * Subclass `~fabric.tasks.Task` (`~fabric.tasks.Task` itself is intended to be abstract), define a ``run`` method, and instantiate your subclass at module level. Instances' ``name`` attributes are used as the task name; if omitted the instance's variable name will be used instead. Use of new-style tasks also allows you to set up :ref:`namespaces `. .. _task-decorator: The ``@task`` decorator ----------------------- The quickest way to make use of new-style task features is to wrap basic task functions with `@task `:: from fabric.api import task, run @task def mytask(): run("a command") When this decorator is used, it signals to Fabric that *only* functions wrapped in the decorator are to be loaded up as valid tasks. (When not present, :ref:`classic-style task ` behavior kicks in.) .. _task-decorator-arguments: Arguments ~~~~~~~~~ `@task ` may also be called with arguments to customize its behavior. Any arguments not documented below are passed into the constructor of the ``task_class`` being used, with the function itself as the first argument (see :ref:`task-decorator-and-classes` for details.) * ``task_class``: The `~fabric.tasks.Task` subclass used to wrap the decorated function. Defaults to `~fabric.tasks.WrappedCallableTask`. * ``aliases``: An iterable of string names which will be used as aliases for the wrapped function. See :ref:`task-aliases` for details. * ``alias``: Like ``aliases`` but taking a single string argument instead of an iterable. If both ``alias`` and ``aliases`` are specified, ``aliases`` will take precedence. * ``default``: A boolean value determining whether the decorated task also stands in for its containing module as a task name. See :ref:`default-tasks`. * ``name``: A string setting the name this task appears as to the command-line interface. Useful for task names that would otherwise shadow Python builtins (which is technically legal but frowned upon and bug-prone.) .. 
_task-aliases: Aliases ~~~~~~~ Here's a quick example of using the ``alias`` keyword argument to facilitate use of both a longer human-readable task name, and a shorter name which is quicker to type:: from fabric.api import task @task(alias='dwm') def deploy_with_migrations(): pass Calling :option:`--list <-l>` on this fabfile would show both the original ``deploy_with_migrations`` and its alias ``dwm``:: $ fab --list Available commands: deploy_with_migrations dwm When more than one alias for the same function is needed, simply swap in the ``aliases`` kwarg, which takes an iterable of strings instead of a single string. .. _default-tasks: Default tasks ~~~~~~~~~~~~~ In a similar manner to :ref:`aliases `, it's sometimes useful to designate a given task within a module as the "default" task, which may be called by referencing *just* the module name. This can save typing and/or allow for neater organization when there's a single "main" task and a number of related tasks or subroutines. For example, a ``deploy`` submodule might contain tasks for provisioning new servers, pushing code, migrating databases, and so forth -- but it'd be very convenient to highlight a task as the default "just deploy" action. Such a ``deploy.py`` module might look like this:: from fabric.api import task @task def migrate(): pass @task def push(): pass @task def provision(): pass @task def full_deploy(): if not provisioned: provision() push() migrate() With the following task list (assuming a simple top level ``fabfile.py`` that just imports ``deploy``):: $ fab --list Available commands: deploy.full_deploy deploy.migrate deploy.provision deploy.push Calling ``deploy.full_deploy`` on every deploy could get kind of old, or somebody new to the team might not be sure if that's really the right task to run. Using the ``default`` kwarg to `@task `, we can tag e.g. ``full_deploy`` as the default task:: @task(default=True) def full_deploy(): pass Doing so updates the task list like so:: $ fab --list Available commands: deploy deploy.full_deploy deploy.migrate deploy.provision deploy.push Note that ``full_deploy`` still exists as its own explicit task -- but now ``deploy`` shows up as a sort of top level alias for ``full_deploy``. If multiple tasks within a module have ``default=True`` set, the last one to be loaded (typically the one lowest down in the file) will take precedence. Top-level default tasks ~~~~~~~~~~~~~~~~~~~~~~~ Using ``@task(default=True)`` in the top level fabfile will cause the denoted task to execute when a user invokes ``fab`` without any task names (similar to e.g. ``make``.) When using this shortcut, it is not possible to specify arguments to the task itself -- use a regular invocation of the task if this is necessary. .. _task-subclasses: ``Task`` subclasses ------------------- If you're used to :ref:`classic-style tasks `, an easy way to think about `~fabric.tasks.Task` subclasses is that their ``run`` method is directly equivalent to a classic task; its arguments are the task arguments (other than ``self``) and its body is what gets executed. 
For example, this new-style task:: class MyTask(Task): name = "deploy" def run(self, environment, domain="whatever.com"): run("git clone foo") sudo("service apache2 restart") instance = MyTask() is exactly equivalent to this function-based task:: @task def deploy(environment, domain="whatever.com"): run("git clone foo") sudo("service apache2 restart") Note how we had to instantiate an instance of our class; that's simply normal Python object-oriented programming at work. While it's a small bit of boilerplate right now -- for example, Fabric doesn't care about the name you give the instantiation, only the instance's ``name`` attribute -- it's well worth the benefit of having the power of classes available. We plan to extend the API in the future to make this experience a bit smoother. .. _task-decorator-and-classes: Using custom subclasses with ``@task`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It's possible to marry custom `~fabric.tasks.Task` subclasses with `@task `. This may be useful in cases where your core execution logic doesn't do anything class/object-specific, but you want to take advantage of class metaprogramming or similar techniques. Specifically, any `~fabric.tasks.Task` subclass which is designed to take in a callable as its first constructor argument (as the built-in `~fabric.tasks.WrappedCallableTask` does) may be specified as the ``task_class`` argument to `@task `. Fabric will automatically instantiate a copy of the given class, passing in the wrapped function as the first argument. All other args/kwargs given to the decorator (besides the "special" arguments documented in :ref:`task-decorator-arguments`) are added afterwards. Here's a brief and somewhat contrived example to make this obvious:: from fabric.api import task from fabric.tasks import Task class CustomTask(Task): def __init__(self, func, myarg, *args, **kwargs): super(CustomTask, self).__init__(*args, **kwargs) self.func = func self.myarg = myarg def run(self, *args, **kwargs): return self.func(*args, **kwargs) @task(task_class=CustomTask, myarg='value', alias='at') def actual_task(): pass When this fabfile is loaded, a copy of ``CustomTask`` is instantiated, effectively calling:: task_obj = CustomTask(actual_task, myarg='value') Note how the ``alias`` kwarg is stripped out by the decorator itself and never reaches the class instantiation; this is identical in function to how :ref:`command-line task arguments ` work. .. _namespaces: Namespaces ---------- With :ref:`classic tasks `, fabfiles were limited to a single, flat set of task names with no real way to organize them. In Fabric 1.1 and newer, if you declare tasks the new way (via `@task ` or your own `~fabric.tasks.Task` subclass instances) you may take advantage of **namespacing**: * Any module objects imported into your fabfile will be recursed into, looking for additional task objects. * Within submodules, you may control which objects are "exported" by using the standard Python ``__all__`` module-level variable name (thought they should still be valid new-style task objects.) * These tasks will be given new dotted-notation names based on the modules they came from, similar to Python's own import syntax. Let's build up a fabfile package from simple to complex and see how this works. Basic ~~~~~ We start with a single `__init__.py` containing a few tasks (the Fabric API import omitted for brevity):: @task def deploy(): ... @task def compress(): ... 
The output of ``fab --list`` would look something like this:: deploy compress There's just one namespace here: the "root" or global namespace. Looks simple now, but in a real-world fabfile with dozens of tasks, it can get difficult to manage. Importing a submodule ~~~~~~~~~~~~~~~~~~~~~ As mentioned above, Fabric will examine any imported module objects for tasks, regardless of where that module exists on your Python import path. For now we just want to include our own, "nearby" tasks, so we'll make a new submodule in our package for dealing with, say, load balancers -- ``lb.py``:: @task def add_backend(): ... And we'll add this to the top of ``__init__.py``:: import lb Now ``fab --list`` shows us:: deploy compress lb.add_backend Again, with only one task in its own submodule, it looks kind of silly, but the benefits should be pretty obvious. Going deeper ~~~~~~~~~~~~ Namespacing isn't limited to just one level. Let's say we had a larger setup and wanted a namespace for database related tasks, with additional differentiation inside that. We make a sub-package named ``db/`` and inside it, a ``migrations.py`` module:: @task def list(): ... @task def run(): ... We need to make sure that this module is visible to anybody importing ``db``, so we add it to the sub-package's ``__init__.py``:: import migrations As a final step, we import the sub-package into our root-level ``__init__.py``, so now its first few lines look like this:: import lb import db After all that, our file tree looks like this:: . ├── __init__.py ├── db │   ├── __init__.py │   └── migrations.py └── lb.py and ``fab --list`` shows:: deploy compress lb.add_backend db.migrations.list db.migrations.run We could also have specified (or imported) tasks directly into ``db/__init__.py``, and they would show up as ``db.`` as you might expect. Limiting with ``__all__`` ~~~~~~~~~~~~~~~~~~~~~~~~~ You may limit what Fabric "sees" when it examines imported modules, by using the Python convention of a module level ``__all__`` variable (a list of variable names.) If we didn't want the ``db.migrations.run`` task to show up by default for some reason, we could add this to the top of ``db/migrations.py``:: __all__ = ['list'] Note the lack of ``'run'`` there. You could, if needed, import ``run`` directly into some other part of the hierarchy, but otherwise it'll remain hidden. Switching it up ~~~~~~~~~~~~~~~ We've been keeping our fabfile package neatly organized and importing it in a straightforward manner, but the filesystem layout doesn't actually matter here. All Fabric's loader cares about is the names the modules are given when they're imported. For example, if we changed the top of our root ``__init__.py`` to look like this:: import db as database Our task list would change thusly:: deploy compress lb.add_backend database.migrations.list database.migrations.run This applies to any other import -- you could import third party modules into your own task hierarchy, or grab a deeply nested module and make it appear near the top level. Nested list output ~~~~~~~~~~~~~~~~~~ As a final note, we've been using the default Fabric :option:`--list <-l>` output during this section -- it makes it more obvious what the actual task names are. 
Nested list output
~~~~~~~~~~~~~~~~~~

As a final note, we've been using the default Fabric :option:`--list <-l>` output during this section -- it makes it more obvious what the actual task names are. However, you can get a more nested or tree-like view by passing ``nested`` to the :option:`--list-format <-F>` option::

    $ fab --list-format=nested --list
    Available commands (remember to call as module.[...].task):

        deploy
        compress
        lb:
            add_backend
        database:
            migrations:
                list
                run

While it slightly obfuscates the "real" task names, this view provides a handy way of noting the organization of tasks in large namespaces.

.. _classic-tasks:

Classic tasks
=============

When no new-style `~fabric.tasks.Task`-based tasks are found, Fabric will consider any callable object found in your fabfile to be a task, **except** the following:

* Callables whose name starts with an underscore (``_``). In other words, Python's usual "private" convention holds true here.
* Callables defined within Fabric itself. Fabric's own functions such as `~fabric.operations.run` and `~fabric.operations.sudo` will not show up in your task list.

Imports
-------

Python's ``import`` statement effectively includes the imported objects in your module's namespace. Since Fabric's fabfiles are just Python modules, this means that imports are also considered as possible classic-style tasks, alongside anything defined in the fabfile itself.

.. note::
    This only applies to imported *callable objects* -- not modules. Imported modules only come into play if they contain :ref:`new-style tasks <new-style-tasks>`, at which point this section no longer applies.

Because of this, we strongly recommend that you use the ``import module`` form of importing, followed by ``module.callable()``, which will result in a cleaner fabfile API than doing ``from module import callable``.

For example, here's a sample fabfile which uses ``urllib.urlopen`` to get some data out of a webservice::

    from urllib import urlopen

    from fabric.api import run

    def webservice_read():
        objects = urlopen('http://my/web/service/?foo=bar').read().split()
        print(objects)

This looks simple enough, and will run without error. However, look what happens if we run :option:`fab --list <-l>` on this fabfile::

    $ fab --list
    Available commands:

        webservice_read
        urlopen           urlopen(url [, data]) -> open file-like object

Our fabfile of only one task is showing two "tasks", which is bad enough, and an unsuspecting user might accidentally try to call ``fab urlopen``, which probably won't work very well. Imagine any real-world fabfile, which is likely to be much more complex, and hopefully you can see how this could get messy fast.

For reference, here's the recommended way to do it::

    import urllib

    from fabric.api import run

    def webservice_read():
        objects = urllib.urlopen('http://my/web/service/?foo=bar').read().split()
        print(objects)

It's a simple change, but it'll make anyone using your fabfile a bit happier.
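
Relatedly, the underscore rule mentioned above doubles as an escape hatch for helpers you *don't* want exposed as classic tasks. A minimal sketch (the function name and commands are illustrative only)::

    from fabric.api import run, sudo

    def _restart_apache():
        # Leading underscore: hidden from fab --list
        sudo("service apache2 restart")

    def deploy():
        # No underscore: shows up as a classic task
        run("git pull")
        _restart_apache()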
Fabric-1.8.2/docs/api/contrib/000755 000765 000024 00000000000 12277461504 017027 5ustar00jforcierstaff000000 000000
Fabric-1.8.2/docs/api/core/000755 000765 000024 00000000000 12277461504 016317 5ustar00jforcierstaff000000 000000
Fabric-1.8.2/docs/api/core/colors.rst000644 000765 000024 00000000206 12257074160 020344 0ustar00jforcierstaff000000 000000 ====================== Color output functions ====================== .. automodule:: fabric.colors :members: :undoc-members:
Fabric-1.8.2/docs/api/core/context_managers.rst000644 000765 000024 00000000152 12257074160 022404 0ustar00jforcierstaff000000 000000 ================ Context Managers ================ .. automodule:: fabric.context_managers :members:
Fabric-1.8.2/docs/api/core/decorators.rst000644 000765 000024 00000000221 12257074160 021205 0ustar00jforcierstaff000000 000000 ========== Decorators ========== .. automodule:: fabric.decorators :members: hosts, roles, runs_once, serial, parallel, task, with_settings
Fabric-1.8.2/docs/api/core/docs.rst000644 000765 000024 00000000155 12257074160 017776 0ustar00jforcierstaff000000 000000 ===================== Documentation helpers ===================== .. automodule:: fabric.docs :members:
Fabric-1.8.2/docs/api/core/network.rst000644 000765 000024 00000000136 12257074160 020536 0ustar00jforcierstaff000000 000000 ======= Network ======= .. automodule:: fabric.network .. autofunction:: disconnect_all
Fabric-1.8.2/docs/api/core/operations.rst000644 000765 000024 00000000122 12257074160 021223 0ustar00jforcierstaff000000 000000 ========== Operations ========== .. automodule:: fabric.operations :members:
Fabric-1.8.2/docs/api/core/tasks.rst000644 000765 000024 00000000141 12257074160 020166 0ustar00jforcierstaff000000 000000 ===== Tasks ===== .. automodule:: fabric.tasks :members: Task, WrappedCallableTask, execute
Fabric-1.8.2/docs/api/core/utils.rst000644 000765 000024 00000000076 12257074160 020210 0ustar00jforcierstaff000000 000000 ===== Utils ===== .. automodule:: fabric.utils :members:
Fabric-1.8.2/docs/api/contrib/console.rst000644 000765 000024 00000000150 12257074160 021213 0ustar00jforcierstaff000000 000000 Console Output Utilities ======================== .. automodule:: fabric.contrib.console :members:
Fabric-1.8.2/docs/api/contrib/django.rst000644 000765 000024 00000000156 12257074160 021021 0ustar00jforcierstaff000000 000000 ================== Django Integration ================== .. automodule:: fabric.contrib.django :members:
Fabric-1.8.2/docs/api/contrib/files.rst000644 000765 000024 00000000216 12257074160 020656 0ustar00jforcierstaff000000 000000 ============================= File and Directory Management ============================= .. automodule:: fabric.contrib.files :members:
Fabric-1.8.2/docs/api/contrib/project.rst000644 000765 000024 00000000140 12257074160 021216 0ustar00jforcierstaff000000 000000 ============= Project Tools ============= .. automodule:: fabric.contrib.project :members:
Fabric-1.8.2/docs/_templates/layout.html000644 000765 000024 00000001604 12257074160 021153 0ustar00jforcierstaff000000 000000 {% extends "!layout.html" %} {%- block extrahead %} {{ super() }} {% endblock %} {% block sidebarlogo %} {{ super() }} {% if fabric_tags %}
Project Versions
{% endif %} {% endblock %} Fabric-1.8.2/docs/_static/rtd.css000644 000765 000024 00000035634 12257074160 017556 0ustar00jforcierstaff000000 000000 /* * rtd.css * ~~~~~~~~~~~~~~~ * * Sphinx stylesheet -- sphinxdoc theme. Originally created by * Armin Ronacher for Werkzeug. * * Customized for ReadTheDocs by Eric Pierce & Eric Holscher * * :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ /* RTD colors * light blue: #e8ecef * medium blue: #8ca1af * dark blue: #465158 * dark grey: #444444 * * white hover: #d1d9df; * medium blue hover: #697983; * green highlight: #8ecc4c * light blue (project bar): #e8ecef */ @import url("basic.css"); /* PAGE LAYOUT -------------------------------------------------------------- */ body { font: 100%/1.5 "ff-meta-web-pro-1","ff-meta-web-pro-2",Arial,"Helvetica Neue",sans-serif; text-align: center; color: black; background-color: #465158; padding: 0; margin: 0; } div.document { text-align: left; background-color: #e8ecef; } div.bodywrapper { background-color: #ffffff; border-left: 1px solid #ccc; border-bottom: 1px solid #ccc; margin: 0 0 0 16em; } div.body { margin: 0; padding: 0.5em 1.3em; max-width: 55em; min-width: 20em; } div.related { font-size: 1em; background-color: #465158; } div.documentwrapper { float: left; width: 100%; background-color: #e8ecef; } /* HEADINGS --------------------------------------------------------------- */ h1 { margin: 0; padding: 0.7em 0 0.3em 0; font-size: 1.5em; line-height: 1.15; color: #111; clear: both; } h2 { margin: 2em 0 0.2em 0; font-size: 1.35em; padding: 0; color: #465158; } h3 { margin: 1em 0 -0.3em 0; font-size: 1.2em; color: #6c818f; } div.body h1 a, div.body h2 a, div.body h3 a, div.body h4 a, div.body h5 a, div.body h6 a { color: black; } h1 a.anchor, h2 a.anchor, h3 a.anchor, h4 a.anchor, h5 a.anchor, h6 a.anchor { display: none; margin: 0 0 0 0.3em; padding: 0 0.2em 0 0.2em; color: #aaa !important; } h1:hover a.anchor, h2:hover a.anchor, h3:hover a.anchor, h4:hover a.anchor, h5:hover a.anchor, h6:hover a.anchor { display: inline; } h1 a.anchor:hover, h2 a.anchor:hover, h3 a.anchor:hover, h4 a.anchor:hover, h5 a.anchor:hover, h6 a.anchor:hover { color: #777; background-color: #eee; } /* LINKS ------------------------------------------------------------------ */ /* Normal links get a pseudo-underline */ a { color: #444; text-decoration: none; border-bottom: 1px solid #ccc; } /* Links in sidebar, TOC, index trees and tables have no underline */ .sphinxsidebar a, .toctree-wrapper a, .indextable a, #indices-and-tables a { color: #444; text-decoration: none; border-bottom: none; } /* Most links get an underline-effect when hovered */ a:hover, div.toctree-wrapper a:hover, .indextable a:hover, #indices-and-tables a:hover { color: #111; text-decoration: none; border-bottom: 1px solid #111; } /* Footer links */ div.footer a { color: #86989B; text-decoration: none; border: none; } div.footer a:hover { color: #a6b8bb; text-decoration: underline; border: none; } /* Permalink anchor (subtle grey with a red hover) */ div.body a.headerlink { color: #ccc; font-size: 1em; margin-left: 6px; padding: 0 4px 0 4px; text-decoration: none; border: none; } div.body a.headerlink:hover { color: #c60f0f; border: none; } /* NAVIGATION BAR --------------------------------------------------------- */ div.related ul { height: 2.5em; } div.related ul li { margin: 0; padding: 0.65em 0; float: left; display: block; color: white; /* For the >> separators */ font-size: 0.8em; } div.related 
ul li.right { float: right; margin-right: 5px; color: transparent; /* Hide the | separators */ } /* "Breadcrumb" links in nav bar */ div.related ul li a { background-color: inherit; font-weight: bold; margin: 6px 0 6px 4px; line-height: 1.75em; color: #ffffff; padding: 0.4em 0.8em; border: none; border-radius: 3px; } /* previous / next / modules / index links look more like buttons */ div.related ul li.right a { margin: 0.375em 0; background-color: #697983; text-shadow: 0 1px rgba(0, 0, 0, 0.5); border-radius: 3px; -webkit-border-radius: 3px; -moz-border-radius: 3px; } /* All navbar links light up as buttons when hovered */ div.related ul li a:hover { background-color: #8ca1af; color: #ffffff; text-decoration: none; border-radius: 3px; -webkit-border-radius: 3px; -moz-border-radius: 3px; } /* Take extra precautions for tt within links */ a tt, div.related ul li a tt { background: inherit !important; color: inherit !important; } /* SIDEBAR ---------------------------------------------------------------- */ div.sphinxsidebarwrapper { padding: 0; } div.sphinxsidebar { margin: 0; margin-left: -100%; float: left; top: 3em; left: 0; padding: 0 1em; width: 14em; font-size: 1em; text-align: left; background-color: #e8ecef; } div.sphinxsidebar img { max-width: 12em; } div.sphinxsidebar h3, div.sphinxsidebar h4 { margin: 1.2em 0 0.3em 0; font-size: 1em; padding: 0; color: #222222; font-family: "ff-meta-web-pro-1", "ff-meta-web-pro-2", "Arial", "Helvetica Neue", sans-serif; } div.sphinxsidebar h3 a { color: #444444; } div.sphinxsidebar ul, div.sphinxsidebar p { margin-top: 0; padding-left: 0; line-height: 130%; background-color: #e8ecef; } /* No bullets for nested lists, but a little extra indentation */ div.sphinxsidebar ul ul { list-style-type: none; margin-left: 1.5em; padding: 0; } /* A little top/bottom padding to prevent adjacent links' borders * from overlapping each other */ div.sphinxsidebar ul li { padding: 1px 0; } /* A little left-padding to make these align with the ULs */ div.sphinxsidebar p.topless { padding: 0 0 0 1em; } /* Make these into hidden one-liners */ div.sphinxsidebar ul li, div.sphinxsidebar p.topless { white-space: nowrap; overflow: hidden; } /* ...which become visible when hovered */ div.sphinxsidebar ul li:hover, div.sphinxsidebar p.topless:hover { overflow: visible; } /* Search text box and "Go" button */ #searchbox { margin-top: 2em; margin-bottom: 1em; background: #ddd; padding: 0.5em; border-radius: 6px; -moz-border-radius: 6px; -webkit-border-radius: 6px; } #searchbox h3 { margin-top: 0; } /* Make search box and button abut and have a border */ input, div.sphinxsidebar input { border: 1px solid #999; float: left; } /* Search textbox */ input[type="text"] { margin: 0; padding: 0 3px; height: 20px; width: 144px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; -moz-border-radius-topleft: 3px; -moz-border-radius-bottomleft: 3px; -webkit-border-top-left-radius: 3px; -webkit-border-bottom-left-radius: 3px; } /* Search button */ input[type="submit"] { margin: 0 0 0 -1px; /* -1px prevents a double-border with textbox */ height: 22px; color: #444; background-color: #e8ecef; padding: 1px 4px; font-weight: bold; border-top-right-radius: 3px; border-bottom-right-radius: 3px; -moz-border-radius-topright: 3px; -moz-border-radius-bottomright: 3px; -webkit-border-top-right-radius: 3px; -webkit-border-bottom-right-radius: 3px; } input[type="submit"]:hover { color: #ffffff; background-color: #8ecc4c; } div.sphinxsidebar p.searchtip { clear: both;
padding: 0.5em 0 0 0; background: #ddd; color: #666; font-size: 0.9em; } /* Sidebar links are unusual */ div.sphinxsidebar li a, div.sphinxsidebar p a { background: #e8ecef; /* In case links overlap main content */ border-radius: 3px; -moz-border-radius: 3px; -webkit-border-radius: 3px; border: 1px solid transparent; /* To prevent things jumping around on hover */ padding: 0 5px 0 5px; } div.sphinxsidebar li a:hover, div.sphinxsidebar p a:hover { color: #111; text-decoration: none; border: 1px solid #888; } /* Tweak any link appearing in a heading */ div.sphinxsidebar h3 a { } /* OTHER STUFF ------------------------------------------------------------ */ cite, code, tt { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.95em; letter-spacing: 0.01em; } tt { background-color: #f2f2f2; color: #444; } tt.descname, tt.descclassname, tt.xref { border: 0; } hr { border: 1px solid #abc; margin: 2em; } pre, #_fontwidthtest { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; margin: 1em 2em; font-size: 0.95em; letter-spacing: 0.015em; line-height: 120%; padding: 0.5em; border: 1px solid #ccc; background-color: #eee; border-radius: 6px; -moz-border-radius: 6px; -webkit-border-radius: 6px; } pre a { color: inherit; text-decoration: underline; } td.linenos pre { padding: 0.5em 0; } div.quotebar { background-color: #f8f8f8; max-width: 250px; float: right; padding: 2px 7px; border: 1px solid #ccc; } div.topic { background-color: #f8f8f8; } table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; } table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; } /* ADMONITIONS AND WARNINGS ------------------------------------------------- */ /* Shared by admonitions and warnings */ div.admonition, div.warning { font-size: 0.9em; margin: 2em; padding: 0; /* border-radius: 6px; -moz-border-radius: 6px; -webkit-border-radius: 6px; */ } div.admonition p, div.warning p { margin: 0.5em 1em 0.5em 1em; padding: 0; } div.admonition pre, div.warning pre { margin: 0.4em 1em 0.4em 1em; } div.admonition p.admonition-title, div.warning p.admonition-title { margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; font-weight: bold; font-size: 1.1em; text-shadow: 0 1px rgba(0, 0, 0, 0.5); } div.admonition ul, div.admonition ol, div.warning ul, div.warning ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; } /* Admonitions only */ div.admonition { border: 1px solid #609060; background-color: #e9ffe9; } div.admonition p.admonition-title { background-color: #70A070; border-bottom: 1px solid #609060; } /* Warnings only */ div.warning { border: 1px solid #900000; background-color: #ffe9e9; } div.warning p.admonition-title { background-color: #b04040; border-bottom: 1px solid #900000; } div.versioninfo { margin: 1em 0 0 0; border: 1px solid #ccc; background-color: #DDEAF0; padding: 8px; line-height: 1.3em; font-size: 0.9em; } .viewcode-back { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; } dl { margin: 1em 0 2.5em 0; } /* Highlight target when you click an internal link */ dt:target { background: #ffe080; } /* Don't highlight whole divs */ div.highlight { background: transparent; } /* But do highlight spans (so search results can be highlighted) */ span.highlight { background: #ffe080; } div.footer { background-color: #465158; color: #eeeeee; padding: 0 2em 2em 2em; clear: both; font-size: 0.8em; 
text-align: center; } p { margin: 0.8em 0 0.5em 0; } .section p img { margin: 1em 2em; } /* MOBILE LAYOUT -------------------------------------------------------------- */ @media screen and (max-width: 600px) { h1, h2, h3, h4, h5 { position: relative; } ul { padding-left: 1.75em; } div.bodywrapper a.headerlink, #indices-and-tables h1 a { color: #e6e6e6; font-size: 80%; float: right; line-height: 1.8; position: absolute; right: -0.7em; visibility: inherit; } div.bodywrapper h1 a.headerlink, #indices-and-tables h1 a { line-height: 1.5; } pre { font-size: 0.7em; overflow: auto; word-wrap: break-word; white-space: pre-wrap; } div.related ul { height: 2.5em; padding: 0; text-align: left; } div.related ul li { clear: both; color: #465158; padding: 0.2em 0; } div.related ul li:last-child { border-bottom: 1px dotted #8ca1af; padding-bottom: 0.4em; margin-bottom: 1em; width: 100%; } div.related ul li a { color: #465158; padding-right: 0; } div.related ul li a:hover { background: inherit; color: inherit; } div.related ul li.right { clear: none; padding: 0.65em 0; margin-bottom: 0.5em; } div.related ul li.right a { color: #fff; padding-right: 0.8em; } div.related ul li.right a:hover { background-color: #8ca1af; } div.body { clear: both; min-width: 0; word-wrap: break-word; } div.bodywrapper { margin: 0 0 0 0; } div.sphinxsidebar { float: none; margin: 0; width: auto; } div.sphinxsidebar input[type="text"] { height: 2em; line-height: 2em; width: 70%; } div.sphinxsidebar input[type="submit"] { height: 2em; margin-left: 0.5em; width: 20%; } div.sphinxsidebar p.searchtip { background: inherit; margin-bottom: 1em; } div.sphinxsidebar ul li, div.sphinxsidebar p.topless { white-space: normal; } .bodywrapper img { display: block; margin-left: auto; margin-right: auto; max-width: 100%; } div.documentwrapper { float: none; } div.admonition, div.warning, pre, blockquote { margin-left: 0em; margin-right: 0em; } .body p img { margin: 0; } #searchbox { background: transparent; } .related:not(:first-child) li { display: none; } .related:not(:first-child) li.right { display: block; } div.footer { padding: 1em; } .rtd_doc_footer .badge { float: none; margin: 1em auto; position: static; } .rtd_doc_footer .badge.revsys-inline { margin-right: auto; margin-bottom: 2em; } table.indextable { display: block; width: auto; } .indextable tr { display: block; } .indextable td { display: block; padding: 0; width: auto !important; } .indextable td dt { margin: 1em 0; } ul.search { margin-left: 0.25em; } ul.search li div.context { font-size: 90%; line-height: 1.1; margin-bottom: 1em; margin-left: 0; } }