ansible-1.5.4/RELEASES.txt

Ansible Releases at a Glance
============================

1.6   "The Cradle Will Rock" - NEXT
1.5.4 "Love Walks In" -------- 04-01-2014
1.5.3 "Love Walks In" -------- 03-13-2014
1.5.2 "Love Walks In" -------- 03-11-2014
1.5.1 "Love Walks In" -------- 03-10-2014
1.5   "Love Walks In" -------- 02-28-2014
1.4.5 "Could This Be Magic?" - 02-12-2014
1.4.4 "Could This Be Magic?" - 01-06-2014
1.4.3 "Could This Be Magic?" - 12-20-2013
1.4.2 "Could This Be Magic?" - 12-18-2013
1.4.1 "Could This Be Magic?" - 11-27-2013
1.4   "Could This Be Magic?" - 11-21-2013
1.3.4 "Top of the World" ----- 10-29-2013
1.3.3 "Top of the World" ----- 10-09-2013
1.3.2 "Top of the World" ----- 09-19-2013
1.3.1 "Top of the World" ----- 09-16-2013
1.3   "Top of the World" ----- 09-13-2013
1.2.3 "Hear About It Later" -- 08-21-2013
1.2.2 "Hear About It Later" -- 07-05-2013
1.2.1 "Hear About It Later" -- 07-04-2013
1.2   "Right Now" ------------ 06-10-2013
1.1   "Mean Street" ---------- 04-02-2013
1.0   "Eruption" ------------- 02-01-2013
0.9   "Dreams" --------------- 11-30-2012
0.8   "Cathedral" ------------ 10-19-2012
0.7   "Panama" --------------- 09-06-2012
0.6   "Cabo" ----------------- 08-06-2012
0.5   "Amsterdam" ------------ 07-04-2012
0.4   "Unchained" ------------ 05-23-2012
0.3   "Baluchitherium" ------- 04-23-2012
0.0.2 Untitled
0.0.1 Untitled

ansible-1.5.4/setup.py

#!/usr/bin/env python

import os
import sys

from glob import glob

sys.path.insert(0, os.path.abspath('lib'))
from ansible import __version__, __author__
from distutils.core import setup

# find library modules
from ansible.constants import DEFAULT_MODULE_PATH
module_paths = DEFAULT_MODULE_PATH.split(os.pathsep)

# always install in /usr/share/ansible if specified
# otherwise use the first module path listed
if '/usr/share/ansible' in module_paths:
    install_path = '/usr/share/ansible'
else:
    install_path = module_paths[0]

dirs = os.listdir("./library/")
data_files = []
for i in dirs:
    data_files.append((os.path.join(install_path, i), glob('./library/' + i + '/*')))

setup(name='ansible',
      version=__version__,
      description='Radically simple IT automation',
      author=__author__,
      author_email='michael@ansible.com',
      url='http://ansible.com/',
      license='GPLv3',
      install_requires=['paramiko', 'jinja2', "PyYAML"],
      package_dir={'ansible': 'lib/ansible'},
      packages=[
          'ansible',
          'ansible.utils',
          'ansible.inventory',
          'ansible.inventory.vars_plugins',
          'ansible.playbook',
          'ansible.runner',
          'ansible.runner.action_plugins',
          'ansible.runner.lookup_plugins',
          'ansible.runner.connection_plugins',
          'ansible.runner.filter_plugins',
          'ansible.callback_plugins',
          'ansible.module_utils',
      ],
      scripts=[
          'bin/ansible',
          'bin/ansible-playbook',
          'bin/ansible-pull',
          'bin/ansible-doc',
          'bin/ansible-galaxy',
          'bin/ansible-vault',
      ],
      data_files=data_files)

ansible-1.5.4/CONTRIBUTING.md

Ansible Community Information
=============================

The purpose of the Ansible community is to unite developers, system administrators, operations, and IT managers to share and build great automation solutions. This document contains all sorts of information about how to contribute and interact with Ansible. Welcome!
Ways to Interact
================

There are a lot of ways to join and be a part of the Ansible community, such as:

Sharing Ansible with Others
---------------------------

You can help share Ansible with others by telling friends and colleagues, writing a blog post, or presenting at user groups (like DevOps groups or the local LUG or BUG). You are also welcome to share slides on speakerdeck; sign up for a free account and tag it “Ansible”. On Twitter, you can also share things with #ansible and may wish to follow [@Ansible](https://twitter.com/ansible).

Sharing Content and Tips
------------------------

Join the [Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) and you can share playbooks you may have written and other interesting implementation stories. Put your Ansible content up on places like github to share with others.

Sharing A Feature Idea
----------------------

If you have an idea for a new feature, you can open a new ticket at [github.com/ansible/ansible](https://github.com/ansible/ansible), though in general we like to talk about feature ideas first and bring lots of people into the discussion. Consider stopping by the [Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join)) or #ansible on irc.freenode.net. There is an overview of more mailing lists later in this document.

Helping with Documentation
--------------------------

Ansible documentation is a community project too! If you would like to help with the documentation, whether correcting a typo, improving a section, or maybe even documenting a new feature, submit a github pull request to the code that lives in the “docsite/latest/rst” subdirectory of the project. Docs are in restructured text format. If you aren’t comfortable with restructured text, you can also open a ticket on github about any errors you spot or sections you would like to see added. For more information on creating pull requests, please refer to the [github help guide](https://help.github.com/articles/using-pull-requests).

Contributing Code (Features or Bugfixes)
----------------------------------------

The Ansible project keeps its source on github at [github.com/ansible/ansible](http://github.com/ansible/ansible) and takes contributions through [github pull requests](https://help.github.com/articles/using-pull-requests).

It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission. This especially helps avoid duplicate work, or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development.)

When submitting patches, be sure to run the unit tests first with “make tests”, and always use “git rebase” rather than “git merge” (aliasing git pull to git pull --rebase is a great idea) to avoid merge commits in your submissions. We will require resubmission of pull requests that contain merge commits.

We’ll then review your contributions and engage with you about questions and so on. Please be advised we have a very large and active community, so it may take a while to get your contributions in! Patches should be made against the 'devel' branch.

Contributions can be for new features like modules, or to fix bugs you or others have found.
If you are interested in writing new modules to be included in the core Ansible distribution, please refer to the [Module Developers documentation on our website](http://docs.ansible.com/developing_modules.html).

Ansible's aesthetic encourages simple, readable code and consistent, conservatively extending, backwards-compatible improvements. Code developed for Ansible needs to support Python 2.6+, while code in modules must run under Python 2.4 or higher. Please also use a 4-space indent and no tabs.

Tip: To easily run from a checkout, source "./hacking/env-setup" and that's it -- no install required. You're now live!

Reporting A Bug
---------------

Bugs should be reported to [github.com/ansible/ansible](http://github.com/ansible/ansible) after signing up for a free github account. Before reporting a bug, please use the bug/issue search to see if the issue has already been reported.

When filing a bug, please use the [issue template](https://raw2.github.com/ansible/ansible/devel/examples/issues/ISSUE_TEMPLATE.md) to provide all relevant information.

Do not use the issue tracker for "how do I do this" type questions. These are great candidates for IRC or the mailing list instead, where things are likely to be more of a discussion.

To be respectful of reviewers' time and to allow us to help everyone efficiently, please provide minimal, well-reduced, and well-commented examples rather than sharing your entire production playbook. Include playbook snippets and output where possible.

Content in the GitHub bug tracker can be indented four spaces to preserve formatting. For multiple-file content, we encourage use of gist.github.com. Online pastebin content can expire.

If you are not sure whether something is a bug yet, you are welcome to ask about it on the mailing list or IRC first. As we are a very high volume project, if you determine that you do have a bug, please be sure to open the issue yourself to ensure we have a record of it. Don’t rely on someone else in the community to file the bug report for you.

Online Resources
================

Documentation
-------------

The main ansible documentation can be found at [docs.ansible.com](http://docs.ansible.com). As mentioned above, this is an open source project, so we accept contributions to the documentation. You can also find some best practices examples that we recommend reading at [ansible-examples](http://github.com/ansible/ansible-examples).

Mailing lists
-------------

Ansible has several mailing lists. Your first post to the mailing list will be moderated (to reduce spam), so please allow a day or less for your first post.

[ansible-announce](https://groups.google.com/forum/#!forum/ansible-announce) is for release announcements and major news. It is a low traffic read-only list and you should only get a few emails a month.

[ansible-project](https://groups.google.com/forum/#!forum/ansible-project) is the main list, and is used for sharing cool projects you may have built, talking about Ansible ideas, and for users to ask questions or to help other users.

[ansible-devel](https://groups.google.com/forum/#!forum/ansible-devel) is a technical list for developers working on Ansible and Ansible modules. Join here to discuss how to build modules, prospective feature implementations, or technical challenges.

To subscribe to a group from a non-google account, you can email the subscription address, for example ansible-devel+subscribe@googlegroups.com.

IRC
---

Ansible has a general purpose IRC channel available at #ansible on irc.freenode.net.
Use this channel for all types of conversations, including sharing tips, coordinating development work, or getting help from other users.

Miscellaneous Information
=========================

Staff
-----

Ansible, Inc. is a company supporting Ansible and building additional solutions based on Ansible. We also do services and support for those that are interested. Our most important task, however, is enabling all the great things that happen in the Ansible community, including organizing software releases of Ansible. For more information about any of these things, contact info@ansible.com.

On IRC, you can find us as mdehaan, jimi_c, Tybstar, and others. On the mailing list, we post with an @ansible.com address.

Community Code of Conduct
-------------------------

Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please treat others as you expect to be treated, keep discussions positive, and avoid discrimination or engaging in controversial debates (except vi vs emacs is cool). Posts to mailing lists should remain focused around Ansible and IT automation. Abuse of these community guidelines will not be tolerated and may result in banning from community resources.

Contributors License Agreement
------------------------------

By contributing you agree that these contributions are your own (or approved by your employer) and you grant a full, complete, irrevocable copyright license to all users and developers of the project, present and future, pursuant to the license of the project.

ansible-1.5.4/library/utilities/set_fact

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2013 Dag Wieers
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
author: Dag Wieers
module: set_fact
short_description: Set host facts from a task
description:
  - This module allows setting new variables. Variables are set on a host-by-host basis just like facts discovered by the setup module.
  - These variables will survive between plays.
options:
  key_value:
    description:
      - The C(set_fact) module takes key=value pairs as variables to set in the playbook scope. Or alternatively, accepts complex arguments using the C(args:) statement.
    required: true
    default: null
version_added: "1.2"
'''

EXAMPLES = '''
# Example setting host facts using key=value pairs
- set_fact: one_fact="something" other_fact="{{ local_var * 2 }}"

# Example setting host facts using complex arguments
- set_fact:
    one_fact: something
    other_fact: "{{ local_var * 2 }}"
'''
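Since facts set this way persist for the remainder of the run, a value computed in one play can be read back in a later play against the same hosts. A minimal sketch of that pattern (the group name, fact name, and path below are illustrative, not taken from the module docs):

---
# First play: compute a per-host fact once.
- hosts: webservers
  tasks:
    - set_fact: deploy_root="/srv/app-{{ ansible_hostname }}"

# Second play: the fact set above is still available, host by host.
- hosts: webservers
  tasks:
    - debug: var=deploy_root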
ansible-1.5.4/library/utilities/debug

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2012 Dag Wieers
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: debug
short_description: Print statements during execution
description:
  - This module prints statements during execution and can be useful for debugging variables or expressions without necessarily halting the playbook. Useful for debugging together with the 'when:' directive.
version_added: "0.8"
options:
  msg:
    description:
      - The customized message that is printed. If omitted, prints a generic message.
    required: false
    default: "Hello world!"
  var:
    description:
      - A variable name to debug. Mutually exclusive with the 'msg' option.
author: Dag Wieers, Michael DeHaan
'''

EXAMPLES = '''
# Examples that print the system uuid and the default gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

- debug: msg="System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}"
  when: ansible_default_ipv4.gateway is defined

- shell: /usr/bin/uptime
  register: result

- debug: var=result
'''

ansible-1.5.4/library/utilities/pause

# -*- mode: python -*-

DOCUMENTATION = '''
---
module: pause
short_description: Pause playbook execution
description:
  - Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional. The default behavior is to pause with a prompt.
  - "You can use C(ctrl+c) if you wish to advance a pause earlier than it is set to expire or if you need to abort a playbook run entirely. To continue early: press C(ctrl+c) and then C(c). To abort a playbook: press C(ctrl+c) and then C(a)."
  - "The pause module integrates into async/parallelized playbooks without any special considerations (see also: Rolling Updates). When using pauses with the C(serial) playbook parameter (as in rolling updates) you are only prompted once for the current group of hosts."
version_added: "0.8"
options:
  minutes:
    description:
      - Number of minutes to pause for.
    required: false
    default: null
  seconds:
    description:
      - Number of seconds to pause for.
    required: false
    default: null
  prompt:
    description:
      - Optional text to use for the prompt message.
    required: false
    default: null
author: Tim Bielawa
'''

EXAMPLES = '''
# Pause for 5 minutes to build app cache.
- pause: minutes=5

# Pause until you can verify updates to an application were successful.
- pause:

# A helpful reminder of what to look out for post-update.
- pause: prompt="Make sure org.foo.FooOverload exception is not present"
'''
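The note above about C(serial) is easiest to see in playbook form: during a rolling update the prompt fires once per batch of hosts, not once per host. A sketch of that pattern (the group and service names are illustrative):

---
# Restart two web hosts at a time; the operator is prompted once per
# batch before the play moves on to the next pair.
- hosts: webservers
  serial: 2
  tasks:
    - service: name=myapp state=restarted
    - pause: prompt="Spot-check the restarted pair, then press enter to continue"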
ansible-1.5.4/library/utilities/assert

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2012 Dag Wieers
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: assert
short_description: Fail with custom message
description:
  - This module asserts that a given expression is true and can be a simpler alternative to the 'fail' module in some cases.
version_added: "1.5"
options:
  that:
    description:
      - "A string expression of the same form that can be passed to the 'when' statement"
    required: true
author: Michael DeHaan
'''

EXAMPLES = '''
- assert: ansible_os_family != "RedHat"

- assert: "'foo' in some_command_result.stdout"
'''

ansible-1.5.4/library/utilities/include_vars

# -*- mode: python -*-

# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
author: Benno Joy
module: include_vars
short_description: Load variables from files, dynamically within a task.
description:
  - Loads variables from a YAML file dynamically during task runtime. It can work with conditionals, or use host-specific variables to determine the path name to load from.
options:
  free-form:
    description:
      - The file name from which variables should be loaded. If called from a role, it will look for the file in the vars/ subdirectory of the role; otherwise the path is relative to the playbook. An absolute path can also be provided.
    required: true
version_added: "1.4"
'''

EXAMPLES = """
# Conditionally decide to load in variables when x is 0, otherwise do not.
- include_vars: contingency_plan.yml
  when: x == 0

# Load a variable file based on the OS type, or a default if not found.
- include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_os_distribution }}.yml"
    - "default.yml"
"""

ansible-1.5.4/library/utilities/accelerate

#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, James Cammarata
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: accelerate short_description: Enable accelerated mode on remote node description: - This modules launches an ephemeral I(accelerate) daemon on the remote node which Ansible can use to communicate with nodes at high speed. - The daemon listens on a configurable port for a configurable amount of time. - Fireball mode is AES encrypted version_added: "1.3" options: port: description: - TCP port for the socket connection required: false default: 5099 aliases: [] timeout: description: - The number of seconds the socket will wait for data. If none is received when the timeout value is reached, the connection will be closed. required: false default: 300 aliases: [] minutes: description: - The I(accelerate) listener daemon is started on nodes and will stay around for this number of minutes before turning itself off. required: false default: 30 ipv6: description: - The listener daemon on the remote host will bind to the ipv6 localhost socket if this parameter is set to true. required: false default: false notes: - See the advanced playbooks chapter for more about using accelerated mode. requirements: [ "python-keyczar" ] author: James Cammarata ''' EXAMPLES = ''' # To use accelerate mode, simply add "accelerate: true" to your play. The initial # key exchange and starting up of the daemon will occur over SSH, but all commands and # subsequent actions will be conducted over the raw socket connection using AES encryption - hosts: devservers accelerate: true tasks: - command: /usr/bin/anything ''' import base64 import getpass import json import os import os.path import pwd import signal import socket import struct import sys import syslog import tempfile import time import traceback import SocketServer from datetime import datetime from threading import Thread syslog.openlog('ansible-%s' % os.path.basename(__file__)) PIDFILE = os.path.expanduser("~/.accelerate.pid") # the chunk size to read and send, assuming mtu 1500 and # leaving room for base64 (+33%) encoding and header (100 bytes) # 4 * (975/3) + 100 = 1400 # which leaves room for the TCP/IP header CHUNK_SIZE=10240 # FIXME: this all should be moved to module_common, as it's # pretty much a copy from the callbacks/util code DEBUG_LEVEL=0 def log(msg, cap=0): global DEBUG_LEVEL if DEBUG_LEVEL >= cap: syslog.syslog(syslog.LOG_NOTICE|syslog.LOG_DAEMON, msg) def vv(msg): log(msg, cap=2) def vvv(msg): log(msg, cap=3) def vvvv(msg): log(msg, cap=4) if os.path.exists(PIDFILE): try: data = int(open(PIDFILE).read()) try: os.kill(data, signal.SIGKILL) except OSError: pass except ValueError: pass os.unlink(PIDFILE) HAS_KEYCZAR = False try: from keyczar.keys import AesKey HAS_KEYCZAR = True except ImportError: pass # NOTE: this shares a fair amount of code in common with async_wrapper, if async_wrapper were a new module we could move # this into utils.module_common and probably should anyway def daemonize_self(module, password, port, minutes): # daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012 try: pid = os.fork() if pid > 0: vvv("exiting pid %s" % pid) # exit first parent module.exit_json(msg="daemonized accelerate on port %s for %s minutes with pid %s" % (port, minutes, str(pid))) except OSError, e: log("fork #1 failed: %d (%s)" % (e.errno, e.strerror)) sys.exit(1) # decouple from parent environment os.chdir("/") os.setsid() os.umask(022) # do second fork try: pid = os.fork() if pid > 0: 
log("daemon pid %s, writing %s" % (pid, PIDFILE)) pid_file = open(PIDFILE, "w") pid_file.write("%s" % pid) pid_file.close() vvv("pidfile written") sys.exit(0) except OSError, e: log("fork #2 failed: %d (%s)" % (e.errno, e.strerror)) sys.exit(1) dev_null = file('/dev/null','rw') os.dup2(dev_null.fileno(), sys.stdin.fileno()) os.dup2(dev_null.fileno(), sys.stdout.fileno()) os.dup2(dev_null.fileno(), sys.stderr.fileno()) log("daemonizing successful") class ThreadWithReturnValue(Thread): def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, Verbose=None): Thread.__init__(self, group, target, name, args, kwargs, Verbose) self._return = None def run(self): if self._Thread__target is not None: self._return = self._Thread__target(*self._Thread__args, **self._Thread__kwargs) def join(self,timeout=None): Thread.join(self, timeout=timeout) return self._return class ThreadedTCPServer(SocketServer.ThreadingTCPServer): def __init__(self, server_address, RequestHandlerClass, module, password, timeout): self.module = module self.key = AesKey.Read(password) self.allow_reuse_address = True self.timeout = timeout SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass) class ThreadedTCPV6Server(SocketServer.ThreadingTCPServer): def __init__(self, server_address, RequestHandlerClass, module, password, timeout): self.module = module self.address_family = socket.AF_INET6 self.key = AesKey.Read(password) self.allow_reuse_address = True self.timeout = timeout SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass) class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): def send_data(self, data): packed_len = struct.pack('!Q', len(data)) return self.request.sendall(packed_len + data) def recv_data(self): header_len = 8 # size of a packed unsigned long long data = "" vvvv("in recv_data(), waiting for the header") while len(data) < header_len: d = self.request.recv(header_len - len(data)) if not d: vvv("received nothing, bailing out") return None data += d vvvv("in recv_data(), got the header, unpacking") data_len = struct.unpack('!Q',data[:header_len])[0] data = data[header_len:] vvvv("data received so far (expecting %d): %d" % (data_len,len(data))) while len(data) < data_len: d = self.request.recv(data_len - len(data)) if not d: vvv("received nothing, bailing out") return None data += d vvvv("data received so far (expecting %d): %d" % (data_len,len(data))) vvvv("received all of the data, returning") return data def handle(self): try: while True: vvvv("waiting for data") data = self.recv_data() if not data: vvvv("received nothing back from recv_data(), breaking out") break try: vvvv("got data, decrypting") data = self.server.key.Decrypt(data) vvvv("decryption done") except: vv("bad decrypt, skipping...") data2 = json.dumps(dict(rc=1)) data2 = self.server.key.Encrypt(data2) self.send_data(data2) return vvvv("loading json from the data") data = json.loads(data) mode = data['mode'] response = {} last_pong = datetime.now() if mode == 'command': vvvv("received a command request, running it") twrv = ThreadWithReturnValue(target=self.command, args=(data,)) twrv.start() response = None while twrv.is_alive(): if (datetime.now() - last_pong).seconds >= 15: last_pong = datetime.now() vvvv("command still running, sending keepalive packet") data2 = json.dumps(dict(pong=True)) data2 = self.server.key.Encrypt(data2) self.send_data(data2) time.sleep(0.1) response = twrv._return vvvv("thread is done, response from join was %s" % response) elif 
mode == 'put': vvvv("received a put request, putting it") response = self.put(data) elif mode == 'fetch': vvvv("received a fetch request, getting it") response = self.fetch(data) elif mode == 'validate_user': vvvv("received a request to validate the user id") response = self.validate_user(data) vvvv("response result is %s" % str(response)) data2 = json.dumps(response) data2 = self.server.key.Encrypt(data2) vvvv("sending the response back to the controller") self.send_data(data2) vvvv("done sending the response") if mode == 'validate_user' and response.get('rc') == 1: vvvv("detected a uid mismatch, shutting down") self.server.shutdown() except: tb = traceback.format_exc() log("encountered an unhandled exception in the handle() function") log("error was:\n%s" % tb) data2 = json.dumps(dict(rc=1, failed=True, msg="unhandled error in the handle() function")) data2 = self.server.key.Encrypt(data2) self.send_data(data2) def validate_user(self, data): if 'username' not in data: return dict(failed=True, msg='No username specified') vvvv("validating we're running as %s" % data['username']) # get the current uid c_uid = os.getuid() try: # the target uid t_uid = pwd.getpwnam(data['username']).pw_uid except: vvvv("could not find user %s" % data['username']) return dict(failed=True, msg='could not find user %s' % data['username']) # and return rc=0 for success, rc=1 for failure if c_uid == t_uid: return dict(rc=0) else: return dict(rc=1) def command(self, data): if 'cmd' not in data: return dict(failed=True, msg='internal error: cmd is required') if 'tmp_path' not in data: return dict(failed=True, msg='internal error: tmp_path is required') if 'executable' not in data: return dict(failed=True, msg='internal error: executable is required') vvvv("executing: %s" % data['cmd']) rc, stdout, stderr = self.server.module.run_command(data['cmd'], executable=data['executable'], close_fds=True) if stdout is None: stdout = '' if stderr is None: stderr = '' vvvv("got stdout: %s" % stdout) vvvv("got stderr: %s" % stderr) return dict(rc=rc, stdout=stdout, stderr=stderr) def fetch(self, data): if 'in_path' not in data: return dict(failed=True, msg='internal error: in_path is required') try: fd = file(data['in_path'], 'rb') fstat = os.stat(data['in_path']) vvv("FETCH file is %d bytes" % fstat.st_size) while fd.tell() < fstat.st_size: data = fd.read(CHUNK_SIZE) last = False if fd.tell() >= fstat.st_size: last = True data = dict(data=base64.b64encode(data), last=last) data = json.dumps(data) data = self.server.key.Encrypt(data) if self.send_data(data): return dict(failed=True, stderr="failed to send data") response = self.recv_data() if not response: log("failed to get a response, aborting") return dict(failed=True, stderr="Failed to get a response from %s" % self.host) response = self.server.key.Decrypt(response) response = json.loads(response) if response.get('failed',False): log("got a failed response from the master") return dict(failed=True, stderr="Master reported failure, aborting transfer") except Exception, e: fd.close() tb = traceback.format_exc() log("failed to fetch the file: %s" % tb) return dict(failed=True, stderr="Could not fetch the file: %s" % str(e)) fd.close() return dict() def put(self, data): if 'data' not in data: return dict(failed=True, msg='internal error: data is required') if 'out_path' not in data: return dict(failed=True, msg='internal error: out_path is required') final_path = None if 'user' in data and data.get('user') != getpass.getuser(): vv("the target user doesn't match this user, we'll 
move the file into place via sudo") tmp_path = os.path.expanduser('~/.ansible/tmp/') if not os.path.exists(tmp_path): try: os.makedirs(tmp_path, 0700) except: return dict(failed=True, msg='could not create a temporary directory at %s' % tmp_path) (fd,out_path) = tempfile.mkstemp(prefix='ansible.', dir=tmp_path) out_fd = os.fdopen(fd, 'w', 0) final_path = data['out_path'] else: out_path = data['out_path'] out_fd = open(out_path, 'w') try: bytes=0 while True: out = base64.b64decode(data['data']) bytes += len(out) out_fd.write(out) response = json.dumps(dict()) response = self.server.key.Encrypt(response) self.send_data(response) if data['last']: break data = self.recv_data() if not data: raise "" data = self.server.key.Decrypt(data) data = json.loads(data) except: out_fd.close() tb = traceback.format_exc() log("failed to put the file: %s" % tb) return dict(failed=True, stdout="Could not write the file") vvvv("wrote %d bytes" % bytes) out_fd.close() if final_path: vvv("moving %s to %s" % (out_path, final_path)) self.server.module.atomic_move(out_path, final_path) return dict() def daemonize(module, password, port, timeout, minutes, ipv6): try: daemonize_self(module, password, port, minutes) def catcher(signum, _): module.exit_json(msg='timer expired') signal.signal(signal.SIGALRM, catcher) signal.setitimer(signal.ITIMER_REAL, 60 * minutes) tries = 5 while tries > 0: try: if ipv6: server = ThreadedTCPV6Server(("::", port), ThreadedTCPRequestHandler, module, password, timeout) else: server = ThreadedTCPServer(("0.0.0.0", port), ThreadedTCPRequestHandler, module, password, timeout) server.allow_reuse_address = True break except: vv("Failed to create the TCP server (tries left = %d)" % tries) tries -= 1 time.sleep(0.2) if tries == 0: vv("Maximum number of attempts to create the TCP server reached, bailing out") raise Exception("max # of attempts to serve reached") vv("serving!") server.serve_forever(poll_interval=0.1) except Exception, e: tb = traceback.format_exc() log("exception caught, exiting accelerated mode: %s\n%s" % (e, tb)) sys.exit(0) def main(): global DEBUG_LEVEL module = AnsibleModule( argument_spec = dict( port=dict(required=False, default=5099), ipv6=dict(required=False, default=False, type='bool'), timeout=dict(required=False, default=300), password=dict(required=True), minutes=dict(required=False, default=30), debug=dict(required=False, default=0, type='int') ), supports_check_mode=True ) password = base64.b64decode(module.params['password']) port = int(module.params['port']) timeout = int(module.params['timeout']) minutes = int(module.params['minutes']) debug = int(module.params['debug']) ipv6 = module.params['ipv6'] if not HAS_KEYCZAR: module.fail_json(msg="keyczar is not installed (on the remote side)") DEBUG_LEVEL=debug daemonize(module, password, port, timeout, minutes, ipv6) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/utilities/wait_for0000664000000000000000000001520612316627017017063 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Jeroen Hoekx # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

import socket
import datetime
import time
import sys
import re

DOCUMENTATION = '''
---
module: wait_for
short_description: Waits for a condition before continuing.
description:
  - Waiting for a port to become available is useful for when services are not immediately available after their init scripts return - which is true of certain Java application servers. It is also useful when starting guests with the M(virt) module and needing to pause until they are ready. This module can also be used to wait for a file to be available on the filesystem, or, with a regex match, for a string to be present in a file.
version_added: "0.7"
options:
  host:
    description:
      - hostname or IP address to wait for
    required: false
    default: "127.0.0.1"
    aliases: []
  timeout:
    description:
      - maximum number of seconds to wait for
    required: false
    default: 300
  delay:
    description:
      - number of seconds to wait before starting to poll
    required: false
    default: 0
  port:
    description:
      - port number to poll
    required: false
  state:
    description:
      - either C(present), C(started), or C(stopped)
      - When checking a port, C(started) will ensure the port is open, C(stopped) will check that it is closed
      - When checking for a file or a search string, C(present) or C(started) will ensure that the file or string is present before continuing
    choices: [ "present", "started", "stopped" ]
    default: "started"
  path:
    version_added: "1.4"
    required: false
    description:
      - path to a file on the filesystem that must exist before continuing
  search_regex:
    version_added: "1.4"
    required: false
    description:
      - can be used with the path option to match a string in the file before continuing. Defaults to a multiline regex.
notes: []
requirements: []
author: Jeroen Hoekx, John Jarvis
'''

EXAMPLES = '''
# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
- wait_for: port=8000 delay=10

# wait until the file /tmp/foo is present before continuing
- wait_for: path=/tmp/foo

# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
'''

def main():

    module = AnsibleModule(
        argument_spec = dict(
            host=dict(default='127.0.0.1'),
            timeout=dict(default=300),
            connect_timeout=dict(default=5),
            delay=dict(default=0),
            port=dict(default=None),
            path=dict(default=None),
            search_regex=dict(default=None),
            state=dict(default='started', choices=['started', 'stopped', 'present']),
        ),
    )

    params = module.params

    host = params['host']
    timeout = int(params['timeout'])
    connect_timeout = int(params['connect_timeout'])
    delay = int(params['delay'])
    if params['port']:
        port = int(params['port'])
    else:
        port = None
    state = params['state']
    path = params['path']
    search_regex = params['search_regex']

    if port and path:
        module.fail_json(msg="port and path parameter can not both be passed to wait_for")
    if path and state == 'stopped':
        module.fail_json(msg="state=stopped should only be used for checking a port in the wait_for module")

    start = datetime.datetime.now()

    if delay:
        time.sleep(delay)

    if state == 'stopped':
        ### first wait for the stop condition
        end = start + datetime.timedelta(seconds=timeout)

        while datetime.datetime.now() < end:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(connect_timeout)
            try:
                s.connect( (host, port) )
                s.shutdown(socket.SHUT_RDWR)
                s.close()
                time.sleep(1)
            except:
                break
        else:
            elapsed = datetime.datetime.now() - start
            module.fail_json(msg="Timeout when waiting for %s:%s to stop." % (host, port), elapsed=elapsed.seconds)

    elif state in ['started', 'present']:
        ### wait for start condition
        end = start + datetime.timedelta(seconds=timeout)
        while datetime.datetime.now() < end:
            if path:
                try:
                    f = open(path)
                    try:
                        if search_regex:
                            if re.search(search_regex, f.read(), re.MULTILINE):
                                break
                            else:
                                time.sleep(1)
                        else:
                            break
                    finally:
                        f.close()
                except IOError:
                    time.sleep(1)
                    pass
            elif port:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(connect_timeout)
                try:
                    s.connect( (host, port) )
                    s.shutdown(socket.SHUT_RDWR)
                    s.close()
                    break
                except:
                    time.sleep(1)
                    pass
        else:
            elapsed = datetime.datetime.now() - start
            if port:
                module.fail_json(msg="Timeout when waiting for %s:%s" % (host, port), elapsed=elapsed.seconds)
            elif path:
                if search_regex:
                    module.fail_json(msg="Timeout when waiting for search string %s in %s" % (search_regex, path), elapsed=elapsed.seconds)
                else:
                    module.fail_json(msg="Timeout when waiting for file %s" % (path), elapsed=elapsed.seconds)

    elapsed = datetime.datetime.now() - start
    module.exit_json(state=state, port=port, search_regex=search_regex, path=path, elapsed=elapsed.seconds)

# import module snippets
from ansible.module_utils.basic import *
main()
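The delay, port, and search_regex options compose naturally for a common restart pattern: give the service a moment to begin coming up, wait for its socket, then confirm a log line before moving on. A sketch under assumed names (the port, log path, and "started" marker are illustrative):

# Give the service 15 seconds before polling, then wait up to 5 minutes
# for the port to open.
- wait_for: port=8080 delay=15 timeout=300

# Then wait until its log reports that startup has finished.
- wait_for: path=/var/log/myapp/current.log search_regex=started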
ansible-1.5.4/library/utilities/fail

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2012 Dag Wieers
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: fail
short_description: Fail with custom message
description:
  - This module fails the run with a custom message. It can be useful for bailing out when a certain condition is met using C(when).
version_added: "0.8"
options:
  msg:
    description:
      - The customized message used for failing execution. If omitted, fail will simply bail out with a generic message.
    required: false
    default: "'Failed as requested from task'"
author: Dag Wieers
'''

EXAMPLES = '''
# Example playbook using fail and when together
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"
'''

ansible-1.5.4/library/utilities/fireball

#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: fireball
short_description: Enable fireball mode on remote node
description:
  - This module launches an ephemeral I(fireball) ZeroMQ message bus daemon on the remote node which Ansible can use to communicate with nodes at high speed.
  - The daemon listens on a configurable port for a configurable amount of time.
  - Starting a new fireball as a given user terminates any existing user fireballs.
  - Fireball mode is AES encrypted.
version_added: "0.9"
options:
  port:
    description:
      - TCP port for ZeroMQ
    required: false
    default: 5099
    aliases: []
  minutes:
    description:
      - The I(fireball) listener daemon is started on nodes and will stay around for this number of minutes before turning itself off.
    required: false
    default: 30
notes:
  - See the advanced playbooks chapter for more about using fireball mode.
requirements: [ "zmq", "keyczar" ] author: Michael DeHaan ''' EXAMPLES = ''' # This example playbook has two plays: the first launches 'fireball' mode on all hosts via SSH, and # the second actually starts using it for subsequent management over the fireball connection - hosts: devservers gather_facts: false connection: ssh sudo: yes tasks: - action: fireball - hosts: devservers connection: fireball tasks: - command: /usr/bin/anything ''' import os import sys import shutil import time import base64 import syslog import signal import time import signal import traceback syslog.openlog('ansible-%s' % os.path.basename(__file__)) PIDFILE = os.path.expanduser("~/.fireball.pid") def log(msg): syslog.syslog(syslog.LOG_NOTICE, msg) if os.path.exists(PIDFILE): try: data = int(open(PIDFILE).read()) try: os.kill(data, signal.SIGKILL) except OSError: pass except ValueError: pass os.unlink(PIDFILE) HAS_ZMQ = False try: import zmq HAS_ZMQ = True except ImportError: pass HAS_KEYCZAR = False try: from keyczar.keys import AesKey HAS_KEYCZAR = True except ImportError: pass # NOTE: this shares a fair amount of code in common with async_wrapper, if async_wrapper were a new module we could move # this into utils.module_common and probably should anyway def daemonize_self(module, password, port, minutes): # daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012 try: pid = os.fork() if pid > 0: log("exiting pid %s" % pid) # exit first parent module.exit_json(msg="daemonized fireball on port %s for %s minutes" % (port, minutes)) except OSError, e: log("fork #1 failed: %d (%s)" % (e.errno, e.strerror)) sys.exit(1) # decouple from parent environment os.chdir("/") os.setsid() os.umask(022) # do second fork try: pid = os.fork() if pid > 0: log("daemon pid %s, writing %s" % (pid, PIDFILE)) pid_file = open(PIDFILE, "w") pid_file.write("%s" % pid) pid_file.close() log("pidfile written") sys.exit(0) except OSError, e: log("fork #2 failed: %d (%s)" % (e.errno, e.strerror)) sys.exit(1) dev_null = file('/dev/null','rw') os.dup2(dev_null.fileno(), sys.stdin.fileno()) os.dup2(dev_null.fileno(), sys.stdout.fileno()) os.dup2(dev_null.fileno(), sys.stderr.fileno()) log("daemonizing successful (%s,%s)" % (password, port)) def command(module, data): if 'cmd' not in data: return dict(failed=True, msg='internal error: cmd is required') if 'tmp_path' not in data: return dict(failed=True, msg='internal error: tmp_path is required') if 'executable' not in data: return dict(failed=True, msg='internal error: executable is required') log("executing: %s" % data['cmd']) rc, stdout, stderr = module.run_command(data['cmd'], executable=data['executable'], close_fds=True) if stdout is None: stdout = '' if stderr is None: stderr = '' log("got stdout: %s" % stdout) return dict(rc=rc, stdout=stdout, stderr=stderr) def fetch(data): if 'in_path' not in data: return dict(failed=True, msg='internal error: in_path is required') # FIXME: should probably support chunked file transfer for binary files # at some point. For now, just base64 encodes the file # so don't use it to move ISOs, use rsync. fh = open(data['in_path']) data = base64.b64encode(fh.read()) return dict(data=data) def put(data): if 'data' not in data: return dict(failed=True, msg='internal error: data is required') if 'out_path' not in data: return dict(failed=True, msg='internal error: out_path is required') # FIXME: should probably support chunked file transfer for binary files # at some point. 
For now, just base64 encodes the file # so don't use it to move ISOs, use rsync. fh = open(data['out_path'], 'w') fh.write(base64.b64decode(data['data'])) fh.close() return dict() def serve(module, password, port, minutes): log("serving") context = zmq.Context() socket = context.socket(zmq.REP) addr = "tcp://*:%s" % port log("zmq serving on %s" % addr) socket.bind(addr) # password isn't so much a password but a serialized AesKey object that we xferred over SSH # password as a variable in ansible is never logged though, so it serves well key = AesKey.Read(password) while True: data = socket.recv() try: data = key.Decrypt(data) except: continue data = json.loads(data) mode = data['mode'] response = {} if mode == 'command': response = command(module, data) elif mode == 'put': response = put(data) elif mode == 'fetch': response = fetch(data) data2 = json.dumps(response) data2 = key.Encrypt(data2) socket.send(data2) def daemonize(module, password, port, minutes): try: daemonize_self(module, password, port, minutes) def catcher(signum, _): module.exit_json(msg='timer expired') signal.signal(signal.SIGALRM, catcher) signal.setitimer(signal.ITIMER_REAL, 60 * minutes) serve(module, password, port, minutes) except Exception, e: tb = traceback.format_exc() log("exception caught, exiting fireball mode: %s\n%s" % (e, tb)) sys.exit(0) def main(): module = AnsibleModule( argument_spec = dict( port=dict(required=False, default=5099), password=dict(required=True), minutes=dict(required=False, default=30), ), supports_check_mode=True ) password = base64.b64decode(module.params['password']) port = module.params['port'] minutes = int(module.params['minutes']) if not HAS_ZMQ: module.fail_json(msg="zmq is not installed") if not HAS_KEYCZAR: module.fail_json(msg="keyczar is not installed") daemonize(module, password, port, minutes) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/monitoring/0000775000000000000000000000000012316627017015474 5ustar rootrootansible-1.5.4/library/monitoring/pingdom0000664000000000000000000000663612316627017017067 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' module: pingdom short_description: Pause/unpause Pingdom alerts description: - This module will let you pause/unpause Pingdom alerts version_added: "1.2" author: Justin Johns requirements: - "This pingdom python library: https://github.com/mbabineau/pingdom-python" options: state: description: - Define whether or not the check should be running or paused. required: true default: null choices: [ "running", "paused" ] aliases: [] checkid: description: - Pingdom ID of the check. required: true default: null choices: [] aliases: [] uid: description: - Pingdom user ID. required: true default: null choices: [] aliases: [] passwd: description: - Pingdom user password. required: true default: null choices: [] aliases: [] key: description: - Pingdom API key. required: true default: null choices: [] aliases: [] notes: - This module does not yet have support to add/remove checks. ''' EXAMPLES = ''' # Pause the check with the ID of 12345. - pingdom: uid=example@example.com passwd=password123 key=apipassword123 checkid=12345 state=paused # Unpause the check with the ID of 12345. 
- pingdom: uid=example@example.com passwd=password123 key=apipassword123 checkid=12345 state=running
'''

try:
    import pingdom
    HAS_PINGDOM = True
except:
    HAS_PINGDOM = False


def pause(checkid, uid, passwd, key):

    c = pingdom.PingdomConnection(uid, passwd, key)
    c.modify_check(checkid, paused=True)
    check = c.get_check(checkid)
    name = check.name
    result = check.status
    #if result != "paused":             # api output buggy - accept raw exception for now
    #    return (True, name, result)
    return (False, name, result)


def unpause(checkid, uid, passwd, key):

    c = pingdom.PingdomConnection(uid, passwd, key)
    c.modify_check(checkid, paused=False)
    check = c.get_check(checkid)
    name = check.name
    result = check.status
    #if result != "up":                 # api output buggy - accept raw exception for now
    #    return (True, name, result)
    return (False, name, result)


def main():

    module = AnsibleModule(
        argument_spec=dict(
            state=dict(required=True, choices=['running', 'paused', 'started', 'stopped']),
            checkid=dict(required=True),
            uid=dict(required=True),
            passwd=dict(required=True),
            key=dict(required=True)
        )
    )

    if not HAS_PINGDOM:
        module.fail_json(msg="Missing required pingdom module (check docs)")

    checkid = module.params['checkid']
    state = module.params['state']
    uid = module.params['uid']
    passwd = module.params['passwd']
    key = module.params['key']

    if (state == "paused" or state == "stopped"):
        (rc, name, result) = pause(checkid, uid, passwd, key)

    if (state == "running" or state == "started"):
        (rc, name, result) = unpause(checkid, uid, passwd, key)

    if rc != 0:
        module.fail_json(checkid=checkid, name=name, status=result)

    module.exit_json(checkid=checkid, name=name, status=result)

# import module snippets
from ansible.module_utils.basic import *
main()

ansible-1.5.4/library/monitoring/airbrake_deployment

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2013 Bruce Pennypacker
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: airbrake_deployment
version_added: "1.2"
author: Bruce Pennypacker
short_description: Notify airbrake about app deployments
description:
  - Notify airbrake about app deployments (see http://help.airbrake.io/kb/api-2/deploy-tracking)
options:
  token:
    description:
      - API token.
    required: true
  environment:
    description:
      - The airbrake environment name, typically 'production', 'staging', etc.
    required: true
  user:
    description:
      - The username of the person doing the deployment
    required: false
  repo:
    description:
      - URL of the project repository
    required: false
  revision:
    description:
      - A hash, number, tag, or other identifier showing what revision was deployed
    required: false
  url:
    description:
      - Optional URL to submit the notification to. Use to send notifications to Airbrake-compliant tools like Errbit.
    required: false
    default: "https://airbrake.io/deploys"
  validate_certs:
    description:
      - If C(no), SSL certificates for the target url will not be validated.
This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] # informational: requirements for nodes requirements: [ urllib, urllib2 ] ''' EXAMPLES = ''' - airbrake_deployment: token=AAAAAA environment='staging' user='ansible' revision=4.2 ''' # =========================================== # Module execution. # def main(): module = AnsibleModule( argument_spec=dict( token=dict(required=True), environment=dict(required=True), user=dict(required=False), repo=dict(required=False), revision=dict(required=False), url=dict(required=False, default='https://api.airbrake.io/deploys.txt'), validate_certs=dict(default='yes', type='bool'), ), supports_check_mode=True ) # build list of params params = {} if module.params["environment"]: params["deploy[rails_env]"] = module.params["environment"] if module.params["user"]: params["deploy[local_username]"] = module.params["user"] if module.params["repo"]: params["deploy[scm_repository]"] = module.params["repo"] if module.params["revision"]: params["deploy[scm_revision]"] = module.params["revision"] params["api_key"] = module.params["token"] url = module.params.get('url') # If we're in check mode, just exit pretending like we succeeded if module.check_mode: module.exit_json(changed=True) # Send the data to airbrake data = urllib.urlencode(params) response, info = fetch_url(module, url, data=data) if info['status'] == 200: module.exit_json(changed=True) else: module.fail_json(msg="HTTP result code: %d connecting to %s" % (info['status'], url)) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/monitoring/newrelic_deployment0000664000000000000000000001075712316627017021501 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright 2013 Matt Coddington # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: newrelic_deployment version_added: "1.2" author: Matt Coddington short_description: Notify newrelic about app deployments description: - Notify newrelic about app deployments (see http://newrelic.github.io/newrelic_api/NewRelicApi/Deployment.html) options: token: description: - API token. 
required: true app_name: description: - (one of app_name or application_id are required) The value of app_name in the newrelic.yml file used by the application required: false application_id: description: - (one of app_name or application_id are required) The application id, found in the URL when viewing the application in RPM required: false changelog: description: - A list of changes for this deployment required: false description: description: - Text annotation for the deployment - notes for you required: false revision: description: - A revision number (e.g., git commit SHA) required: false user: description: - The name of the user/process that triggered this deployment required: false appname: description: - Name of the application required: false environment: description: - The environment for this deployment required: false validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] version_added: 1.5.1 # informational: requirements for nodes requirements: [ urllib, urllib2 ] ''' EXAMPLES = ''' - newrelic_deployment: token=AAAAAA app_name=myapp user='ansible deployment' revision=1.0 ''' # =========================================== # Module execution. # def main(): module = AnsibleModule( argument_spec=dict( token=dict(required=True), app_name=dict(required=False), application_id=dict(required=False), changelog=dict(required=False), description=dict(required=False), revision=dict(required=False), user=dict(required=False), appname=dict(required=False), environment=dict(required=False), validate_certs = dict(default='yes', type='bool'), ), supports_check_mode=True ) # build list of params params = {} if module.params["app_name"] and module.params["application_id"]: module.fail_json(msg="only one of 'app_name' or 'application_id' can be set") if module.params["app_name"]: params["app_name"] = module.params["app_name"] elif module.params["application_id"]: params["application_id"] = module.params["application_id"] else: module.fail_json(msg="you must set one of 'app_name' or 'application_id'") for item in [ "changelog", "description", "revision", "user", "appname", "environment" ]: if module.params[item]: params[item] = module.params[item] # If we're in check mode, just exit pretending like we succeeded if module.check_mode: module.exit_json(changed=True) # Send the data to NewRelic url = "https://rpm.newrelic.com/deployments.xml" data = urllib.urlencode(params) headers = { 'x-api-key': module.params["token"], } response, info = fetch_url(module, url, data=data, headers=headers) if info['status'] in (200, 201): module.exit_json(changed=True) else: module.fail_json(msg="unable to update newrelic: %s" % info['msg']) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/monitoring/pagerduty0000664000000000000000000001206112316627017017423 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' module: pagerduty short_description: Create PagerDuty maintenance windows description: - This module will let you create PagerDuty maintenance windows version_added: "1.2" author: Justin Johns requirements: - PagerDuty API access options: state: description: - Create a maintenance window or get a list of ongoing windows. required: true default: null choices: [ "running", "started", "ongoing" ] aliases: [] name: description: - PagerDuty unique subdomain. 
        required: true
        default: null
        choices: []
        aliases: []
    user:
        description:
            - PagerDuty user ID.
        required: true
        default: null
        choices: []
        aliases: []
    passwd:
        description:
            - PagerDuty user password.
        required: true
        default: null
        choices: []
        aliases: []
    service:
        description:
            - PagerDuty service ID.
        required: false
        default: null
        choices: []
        aliases: []
    hours:
        description:
            - Length of maintenance window in hours.
        required: false
        default: 1
        choices: []
        aliases: []
    desc:
        description:
            - Short description of maintenance window.
        required: false
        default: Created by Ansible
        choices: []
        aliases: []
    validate_certs:
        description:
            - If C(no), SSL certificates will not be validated. This should only be
              used on personally controlled sites using self-signed certificates.
        required: false
        default: 'yes'
        choices: ['yes', 'no']
        version_added: 1.5.1
notes:
    - This module does not yet have support to end maintenance windows.
'''

EXAMPLES='''
# List ongoing maintenance windows.
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=ongoing

# Create a 1 hour maintenance window for service FOO123.
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=running service=FOO123

# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=running service=FOO123 hours=4 desc=deployment
'''

import json
import datetime
import base64

def ongoing(module, name, user, passwd):

    url = "https://" + name + ".pagerduty.com/api/v1/maintenance_windows/ongoing"
    auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '')
    headers = {"Authorization": "Basic %s" % auth}

    response, info = fetch_url(module, url, headers=headers)
    if info['status'] != 200:
        module.fail_json(msg="failed to lookup the ongoing window: %s" % info['msg'])

    return False, response.read()

def create(module, name, user, passwd, service, hours, desc):

    now = datetime.datetime.utcnow()
    later = now + datetime.timedelta(hours=int(hours))
    start = now.strftime("%Y-%m-%dT%H:%M:%SZ")
    end = later.strftime("%Y-%m-%dT%H:%M:%SZ")

    url = "https://" + name + ".pagerduty.com/api/v1/maintenance_windows"
    auth = base64.encodestring('%s:%s' % (user, passwd)).replace('\n', '')
    headers = {
        'Authorization': 'Basic %s' % auth,
        'Content-Type' : 'application/json',
    }
    data = json.dumps({'maintenance_window': {'start_time': start, 'end_time': end, 'description': desc, 'service_ids': [service]}})

    response, info = fetch_url(module, url, data=data, headers=headers, method='POST')
    if info['status'] != 200:
        module.fail_json(msg="failed to create the window: %s" % info['msg'])

    return False, response.read()

def main():

    module = AnsibleModule(
        argument_spec=dict(
        state=dict(required=True, choices=['running', 'started', 'ongoing']),
        name=dict(required=True),
        user=dict(required=True),
        passwd=dict(required=True),
        service=dict(required=False),
        hours=dict(default='1', required=False),
        desc=dict(default='Created by Ansible', required=False),
        validate_certs = dict(default='yes', type='bool'),
        )
    )

    state = module.params['state']
    name = module.params['name']
    user = module.params['user']
    passwd = module.params['passwd']
    service = module.params['service']
    hours = module.params['hours']
    desc = module.params['desc']

    if state == "running" or state == "started":
        if not service:
            module.fail_json(msg="service not specified")
        (rc, out) = create(module, name, user, passwd, service, hours, desc)

    if state == "ongoing":
        (rc, out) = ongoing(module, name, user, passwd)

    if rc != 0:
        module.fail_json(msg="failed", result=out)

    module.exit_json(msg="success", result=out)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

main()
ansible-1.5.4/library/monitoring/boundary_meter0000664000000000000000000001772112316627017020446 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

"""
Ansible module to add boundary meters.

(c) 2013, curtis

This file is part of Ansible

Ansible is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

Ansible is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
"""

import json
import datetime
import base64
import os

DOCUMENTATION = '''
module: boundary_meter
short_description: Manage boundary meters
description:
    - This module manages boundary meters
version_added: "1.3"
author: curtis@serverascode.com
requirements:
    - Boundary API access
    - bprobe is required to send data, but not to register a meter
    - Python urllib2
options:
    name:
        description:
            - meter name
        required: true
    state:
        description:
            - Whether to create or remove the client from boundary
        required: false
        default: present
        choices: ["present", "absent"]
    apiid:
        description:
            - Organization's boundary API ID
        required: true
    apikey:
        description:
            - Organization's boundary API key
        required: true
    validate_certs:
        description:
            - If C(no), SSL certificates will not be validated. This should only be
              used on personally controlled sites using self-signed certificates.
        required: false
        default: 'yes'
        choices: ['yes', 'no']
        version_added: 1.5.1
notes:
    - This module does not yet support boundary tags.
'''

EXAMPLES='''
- name: Create meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=present name={{ inventory_hostname }}

- name: Delete meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=absent name={{ inventory_hostname }}
'''

api_host = "api.boundary.com"
config_directory = "/etc/bprobe"
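
# Note on authentication (added for clarity): the Boundary API is driven with
# an HTTP Basic-style header built from the base64-encoded API key, i.e. the
# requests below carry roughly (key value illustrative):
#
#   Authorization: Basic <base64(apikey)>
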
def auth_encode(apikey):
    # base64-encode the API key and strip any trailing newline
    auth = base64.standard_b64encode(apikey)
    auth = auth.replace("\n", "")
    return auth

def build_url(name, apiid, action, meter_id=None, cert_type=None):
    if action == "create":
        return 'https://%s/%s/meters' % (api_host, apiid)
    elif action == "search":
        return "https://%s/%s/meters?name=%s" % (api_host, apiid, name)
    elif action == "certificates":
        return "https://%s/%s/meters/%s/%s.pem" % (api_host, apiid, meter_id, cert_type)
    elif action == "tags":
        return "https://%s/%s/meters/%s/tags" % (api_host, apiid, meter_id)
    elif action == "delete":
        return "https://%s/%s/meters/%s" % (api_host, apiid, meter_id)

def http_request(module, name, apiid, apikey, action, data=None, meter_id=None, cert_type=None):

    if meter_id is None:
        url = build_url(name, apiid, action)
    else:
        if cert_type is None:
            url = build_url(name, apiid, action, meter_id)
        else:
            url = build_url(name, apiid, action, meter_id, cert_type)

    headers = dict()
    headers["Authorization"] = "Basic %s" % auth_encode(apikey)
    headers["Content-Type"] = "application/json"

    return fetch_url(module, url, data=data, headers=headers)

def create_meter(module, name, apiid, apikey):

    meters = search_meter(module, name, apiid, apikey)

    if len(meters) > 0:
        # If the meter already exists, do nothing
        module.exit_json(status="Meter " + name + " already exists", changed=False)
    else:
        # If it doesn't exist, create it
        body = '{"name":"' + name + '"}'
        response, info = http_request(module, name, apiid, apikey, data=body, action="create")
        if info['status'] != 200:
            module.fail_json(msg="Failed to connect to api host to create meter")

        # If the config directory doesn't exist, create it
        if not os.path.exists(config_directory):
            try:
                os.makedirs(config_directory)
            except:
                module.fail_json(msg="Could not create " + config_directory)

        # Download both cert files from the api host
        types = ['key', 'cert']
        for cert_type in types:
            try:
                # If we can't open the file it's not there, so we should download it
                cert_file = open('%s/%s.pem' % (config_directory, cert_type))
            except IOError:
                # Now download the file...
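                # (the open() above is only an existence probe; an IOError
                # means the cert is missing locally and must be fetched)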
                rc = download_request(module, name, apiid, apikey, cert_type)
                if rc == False:
                    module.fail_json(msg="Download request for " + cert_type + ".pem failed")

    return 0, "Meter " + name + " created"

def search_meter(module, name, apiid, apikey):

    response, info = http_request(module, name, apiid, apikey, action="search")

    if info['status'] != 200:
        module.fail_json(msg="Failed to connect to api host to search for meter")

    # Return meters
    return json.loads(response.read())

def get_meter_id(module, name, apiid, apikey):
    # In order to delete the meter we need its id
    meters = search_meter(module, name, apiid, apikey)

    if len(meters) > 0:
        return meters[0]['id']
    else:
        return None

def delete_meter(module, name, apiid, apikey):

    meter_id = get_meter_id(module, name, apiid, apikey)

    if meter_id is None:
        return 1, "Meter does not exist, so can't delete it"
    else:
        # the original referenced an undefined 'action' variable here
        action = "delete"
        response, info = http_request(module, name, apiid, apikey, action, meter_id)
        if info['status'] != 200:
            module.fail_json(msg="Failed to delete meter")

        # Each new meter gets a new key.pem and ca.pem file, so they should be deleted
        types = ['cert', 'key']
        for cert_type in types:
            try:
                cert_file = '%s/%s.pem' % (config_directory, cert_type)
                os.remove(cert_file)
            except OSError, e:
                module.fail_json(msg="Failed to remove " + cert_type + ".pem file")

    return 0, "Meter " + name + " deleted"

def download_request(module, name, apiid, apikey, cert_type):

    meter_id = get_meter_id(module, name, apiid, apikey)

    if meter_id is not None:
        action = "certificates"
        response, info = http_request(module, name, apiid, apikey, action, meter_id, cert_type)
        if info['status'] != 200:
            module.fail_json(msg="Failed to connect to api host to download certificate")

        # write the certificate we just fetched (the original gated this on an
        # undefined 'result' variable; the status check above already covers it)
        try:
            cert_file_path = '%s/%s.pem' % (config_directory, cert_type)
            body = response.read()
            cert_file = open(cert_file_path, 'w')
            cert_file.write(body)
            cert_file.close()
            os.chmod(cert_file_path, 0o600)
        except:
            module.fail_json(msg="Could not write to certificate file")

        return True
    else:
        module.fail_json(msg="Could not get meter id")

def main():

    module = AnsibleModule(
        argument_spec=dict(
        state=dict(required=True, choices=['present', 'absent']),
        name=dict(required=False),
        apikey=dict(required=True),
        apiid=dict(required=True),
        validate_certs = dict(default='yes', type='bool'),
        )
    )

    state = module.params['state']
    name = module.params['name']
    # these must match the argument_spec keys; the original read the
    # nonexistent 'api_key'/'api_id' params
    apikey = module.params['apikey']
    apiid = module.params['apiid']

    if state == "present":
        (rc, result) = create_meter(module, name, apiid, apikey)

    if state == "absent":
        (rc, result) = delete_meter(module, name, apiid, apikey)

    if rc != 0:
        module.fail_json(msg=result)

    module.exit_json(status=result, changed=True)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

main()
ansible-1.5.4/library/monitoring/monit0000664000000000000000000001313512316627017016550 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Darryl Stoflet
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#

DOCUMENTATION = '''
---
module: monit
short_description: Manage the state of a program monitored via Monit
description:
     - Manage the state of a program monitored via I(Monit)
version_added: "1.2"
options:
  name:
    description:
      - The name of the I(monit) program/process to manage
    required: true
    default: null
  state:
    description:
      - The state of the service
    required: true
    default: null
    choices: [ "present", "started", "stopped", "restarted", "monitored", "unmonitored", "reloaded" ]
requirements: [ ]
author: Darryl Stoflet
'''

EXAMPLES = '''
# Manage the state of program "httpd" to be in "started" state.
- monit: name=httpd state=started
'''

import pipes

def main():
    arg_spec = dict(
        name=dict(required=True),
        state=dict(required=True, choices=['present', 'started', 'restarted', 'stopped', 'monitored', 'unmonitored', 'reloaded'])
    )

    module = AnsibleModule(argument_spec=arg_spec, supports_check_mode=True)

    name = module.params['name']
    state = module.params['state']

    MONIT = module.get_bin_path('monit', True)

    if state == 'reloaded':
        if module.check_mode:
            module.exit_json(changed=True)
        rc, out, err = module.run_command('%s reload' % MONIT)
        # surface reload failures instead of silently reporting a change
        if rc != 0:
            module.fail_json(msg='monit reload failed', stdout=out, stderr=err)
        module.exit_json(changed=True, name=name, state=state)

    rc, out, err = module.run_command('%s summary | grep "Process \'%s\'"' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
    present = name in out

    if not present and not state == 'present':
        module.fail_json(msg='%s process not presently configured with monit' % name, name=name, state=state)

    if state == 'present':
        if not present:
            if module.check_mode:
                module.exit_json(changed=True)
            module.run_command('%s reload' % MONIT, check_rc=True)
            rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
            if name in out:
                module.exit_json(changed=True, name=name, state=state)
            else:
                module.fail_json(msg=out, name=name, state=state)

        module.exit_json(changed=False, name=name, state=state)

    rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
    running = 'running' in out.lower()

    # 'started' and 'monitored' are both satisfied by a running process
    if running and (state == 'started' or state == 'monitored'):
        module.exit_json(changed=False, name=name, state=state)

    if running and state == 'stopped':
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s stop %s' % (MONIT, name))
        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'not monitored' in out.lower() or 'stop pending' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)

    if running and state == 'unmonitored':
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s unmonitor %s' % (MONIT, name))
        # FIXME: DRY FOLKS!
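        # re-check the summary so we only report a change once monit agrees
        # the process is no longer monitored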
        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'not monitored' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)

    elif state == 'restarted':
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s restart %s' % (MONIT, name))
        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'initializing' in out.lower() or 'restart pending' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)

    elif not running and state == 'started':
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s start %s' % (MONIT, name))
        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'initializing' in out.lower() or 'start pending' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)

    elif not running and state == 'monitored':
        if module.check_mode:
            module.exit_json(changed=True)
        module.run_command('%s monitor %s' % (MONIT, name))
        rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
        if 'initializing' in out.lower() or 'start pending' in out.lower():
            module.exit_json(changed=True, name=name, state=state)
        module.fail_json(msg=out)

    module.exit_json(changed=False, name=name, state=state)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/monitoring/nagios0000664000000000000000000007347212316627017016704 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This file is largely copied from the Nagios module included in the
# Func project. Original copyright follows:
#
# func-nagios - Schedule downtime and enables/disable notifications
# Copyright 2011, Red Hat, Inc.
# Tim Bielawa
#
# This software may be freely redistributed under the terms of the GNU
# general public license version 2.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: nagios
short_description: Perform common tasks in Nagios related to downtime and notifications.
description:
  - "The M(nagios) module has two basic functions: scheduling downtime and toggling alerts for services or hosts."
  - All actions require the I(host) parameter to be given explicitly. In playbooks you can use the C({{inventory_hostname}}) variable to refer to the host the playbook is currently running on.
  - You can specify multiple services at once by separating them with commas, e.g., C(services=httpd,nfs,puppet).
  - When specifying what service to handle there is a special service value, I(host), which will handle alerts/downtime for the I(host itself), e.g., C(service=host). This keyword may not be given with other services at the same time. I(Setting alerts/downtime for a host does not affect alerts/downtime for any of the services running on it.) To schedule downtime for all services on particular host use keyword "all", e.g., C(service=all).
  - When using the M(nagios) module you will need to specify your Nagios server using the C(delegate_to) parameter.
version_added: "0.7"
options:
  action:
    description:
      - Action to take.
    required: true
    default: null
    choices: [ "downtime", "enable_alerts", "disable_alerts", "silence", "unsilence",
               "silence_nagios", "unsilence_nagios", "command" ]
  host:
    description:
      - Host to operate on in Nagios.
    required: false
    default: null
  cmdfile:
    description:
      - Path to the nagios I(command file) (FIFO pipe).
        Only required if auto-detection fails.
    required: false
    default: auto-detected
  author:
    description:
      - Author to leave downtime comments as.
        Only usable with the C(downtime) action.
    required: false
    default: Ansible
  minutes:
    description:
      - Minutes to schedule downtime for.
      - Only usable with the C(downtime) action.
    required: false
    default: 30
  services:
    description:
      - What to manage downtime/alerts for. Separate multiple services with commas.
        C(service) is an alias for C(services).
        B(Required) option when using the C(downtime), C(enable_alerts), and C(disable_alerts) actions.
    aliases: [ "service" ]
    required: true
    default: null
  command:
    description:
      - The raw command to send to nagios, which
        should not include the submitted time header or the line-feed.
        B(Required) option when using the C(command) action.
    required: true
    default: null

author: Tim Bielawa
requirements: [ "Nagios" ]
'''

EXAMPLES = '''
# set 30 minutes of apache downtime
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}

# schedule an hour of HOST downtime
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}

# schedule downtime for ALL services on HOST
- nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }}

# schedule downtime for a few services
- nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }}

# enable SMART disk alerts
- nagios: action=enable_alerts service=smart host={{ inventory_hostname }}

# "two services at once: disable httpd and nfs alerts"
- nagios: action=disable_alerts service=httpd,nfs host={{ inventory_hostname }}

# disable HOST alerts
- nagios: action=disable_alerts service=host host={{ inventory_hostname }}

# silence ALL alerts
- nagios: action=silence host={{ inventory_hostname }}

# unsilence all alerts
- nagios: action=unsilence host={{ inventory_hostname }}

# SHUT UP NAGIOS
- nagios: action=silence_nagios

# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios

# command something
- nagios: action=command command='DISABLE_FAILURE_PREDICTION'
'''

import ConfigParser
import types
import time
import os.path

######################################################################

def which_cmdfile():
    locations = [
        # rhel
        '/etc/nagios/nagios.cfg',
        # debian
        '/etc/nagios3/nagios.cfg',
        # older debian
        '/etc/nagios2/nagios.cfg',
        # bsd, solaris
        '/usr/local/etc/nagios/nagios.cfg',
        # groundwork it monitoring
        '/usr/local/groundwork/nagios/etc/nagios.cfg',
        # open monitoring distribution
        '/omd/sites/oppy/tmp/nagios/nagios.cfg',
        # ???
        '/usr/local/nagios/etc/nagios.cfg',
        '/usr/local/nagios/nagios.cfg',
        '/opt/nagios/etc/nagios.cfg',
        '/opt/nagios/nagios.cfg',
        # icinga on debian/ubuntu
        '/etc/icinga/icinga.cfg',
        # icinga installed from source (default location)
        '/usr/local/icinga/etc/icinga.cfg',
        ]

    for path in locations:
        if os.path.exists(path):
            for line in open(path):
                if line.startswith('command_file'):
                    return line.split('=')[1].strip()

    return None

######################################################################

def main():
    ACTION_CHOICES = [
        'downtime',
        'silence',
        'unsilence',
        'enable_alerts',
        'disable_alerts',
        'silence_nagios',
        'unsilence_nagios',
        'command',
        ]

    module = AnsibleModule(
        argument_spec=dict(
            action=dict(required=True, default=None, choices=ACTION_CHOICES),
            author=dict(default='Ansible'),
            host=dict(required=False, default=None),
            minutes=dict(default=30),
            cmdfile=dict(default=which_cmdfile()),
            services=dict(default=None, aliases=['service']),
            command=dict(required=False, default=None),
            )
        )

    action = module.params['action']
    host = module.params['host']
    minutes = module.params['minutes']
    services = module.params['services']
    cmdfile = module.params['cmdfile']
    command = module.params['command']

    ##################################################################
    # Required args per action:
    # downtime = (minutes, service, host)
    # (un)silence = (host)
    # (enable/disable)_alerts = (service, host)
    # command = command
    #
    # AnsibleModule will verify most stuff, we need to verify
    # 'minutes' and 'service' manually.
    ##################################################################

    if action not in ['command', 'silence_nagios', 'unsilence_nagios']:
        if not host:
            module.fail_json(msg='no host specified for action requiring one')

    ######################################################################
    if action == 'downtime':
        # Make sure there's an actual service selected
        if not services:
            module.fail_json(msg='no service selected to set downtime for')
        # Make sure minutes is a number
        try:
            m = int(minutes)
            if not isinstance(m, types.IntType):
                module.fail_json(msg='minutes must be a number')
        except Exception:
            module.fail_json(msg='invalid entry for minutes')

    ##################################################################
    if action in ['enable_alerts', 'disable_alerts']:
        if not services:
            module.fail_json(msg='a service is required when setting alerts')

    if action in ['command']:
        if not command:
            module.fail_json(msg='no command passed for command action')

    ##################################################################
    if not cmdfile:
        # fail_json only accepts keyword arguments; the original passed
        # the message positionally
        module.fail_json(msg='unable to locate nagios.cfg')

    ##################################################################
    ansible_nagios = Nagios(module, **module.params)
    if module.check_mode:
        module.exit_json(changed=True)
    else:
        ansible_nagios.act()
    ##################################################################

######################################################################
class Nagios(object):
    """
    Perform common tasks in Nagios related to downtime and
    notifications.

    The complete set of external commands Nagios handles is documented
    on their website:

    http://old.nagios.org/developerinfo/externalcommands/commandlist.php

    Note that in the case of `schedule_svc_downtime`,
    `enable_svc_notifications`, and `disable_svc_notifications`, the
    service argument should be passed as a list.
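
    Illustrative use from within this module (values hypothetical):

        n = Nagios(module, action='downtime', host='web1', services='httpd',
                   minutes=15, author='Ansible', command=None,
                   cmdfile='/var/spool/nagios/cmd/nagios.cmd')
        n.act()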
""" def __init__(self, module, **kwargs): self.module = module self.action = kwargs['action'] self.author = kwargs['author'] self.host = kwargs['host'] self.minutes = int(kwargs['minutes']) self.cmdfile = kwargs['cmdfile'] self.command = kwargs['command'] if (kwargs['services'] is None) or (kwargs['services'] == 'host') or (kwargs['services'] == 'all'): self.services = kwargs['services'] else: self.services = kwargs['services'].split(',') self.command_results = [] def _now(self): """ The time in seconds since 12:00:00AM Jan 1, 1970 """ return int(time.time()) def _write_command(self, cmd): """ Write the given command to the Nagios command file """ try: fp = open(self.cmdfile, 'w') fp.write(cmd) fp.flush() fp.close() self.command_results.append(cmd.strip()) except IOError: self.module.fail_json(msg='unable to write to nagios command file', cmdfile=self.cmdfile) def _fmt_dt_str(self, cmd, host, duration, author=None, comment="Scheduling downtime", start=None, svc=None, fixed=1, trigger=0): """ Format an external-command downtime string. cmd - Nagios command ID host - Host schedule downtime on duration - Minutes to schedule downtime for author - Name to file the downtime as comment - Reason for running this command (upgrade, reboot, etc) start - Start of downtime in seconds since 12:00AM Jan 1 1970 Default is to use the entry time (now) svc - Service to schedule downtime for, omit when for host downtime fixed - Start now if 1, start when a problem is detected if 0 trigger - Optional ID of event to start downtime from. Leave as 0 for fixed downtime. Syntax: [submitted] COMMAND;;[] ;;;;;; """ entry_time = self._now() if start is None: start = entry_time hdr = "[%s] %s;%s;" % (entry_time, cmd, host) duration_s = (duration * 60) end = start + duration_s if not author: author = self.author if svc is not None: dt_args = [svc, str(start), str(end), str(fixed), str(trigger), str(duration_s), author, comment] else: # Downtime for a host if no svc specified dt_args = [str(start), str(end), str(fixed), str(trigger), str(duration_s), author, comment] dt_arg_str = ";".join(dt_args) dt_str = hdr + dt_arg_str + "\n" return dt_str def _fmt_notif_str(self, cmd, host=None, svc=None): """ Format an external-command notification string. cmd - Nagios command ID. host - Host to en/disable notifications on.. A value is not required for global downtime svc - Service to schedule downtime for. A value is not required for host downtime. Syntax: [submitted] COMMAND;[;] """ entry_time = self._now() notif_str = "[%s] %s" % (entry_time, cmd) if host is not None: notif_str += ";%s" % host if svc is not None: notif_str += ";%s" % svc notif_str += "\n" return notif_str def schedule_svc_downtime(self, host, services=[], minutes=30): """ This command is used to schedule downtime for a particular service. During the specified downtime, Nagios will not send notifications out about the service. Syntax: SCHEDULE_SVC_DOWNTIME;; ;;;;;; """ cmd = "SCHEDULE_SVC_DOWNTIME" for service in services: dt_cmd_str = self._fmt_dt_str(cmd, host, minutes, svc=service) self._write_command(dt_cmd_str) def schedule_host_downtime(self, host, minutes=30): """ This command is used to schedule downtime for a particular host. During the specified downtime, Nagios will not send notifications out about the host. 
Syntax: SCHEDULE_HOST_DOWNTIME;;;; ;;;; """ cmd = "SCHEDULE_HOST_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, host, minutes) self._write_command(dt_cmd_str) def schedule_host_svc_downtime(self, host, minutes=30): """ This command is used to schedule downtime for all services associated with a particular host. During the specified downtime, Nagios will not send notifications out about the host. SCHEDULE_HOST_SVC_DOWNTIME;;;; ;;;; """ cmd = "SCHEDULE_HOST_SVC_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, host, minutes) self._write_command(dt_cmd_str) def schedule_hostgroup_host_downtime(self, hostgroup, minutes=30): """ This command is used to schedule downtime for all hosts in a particular hostgroup. During the specified downtime, Nagios will not send notifications out about the hosts. Syntax: SCHEDULE_HOSTGROUP_HOST_DOWNTIME;;; ;;;;; """ cmd = "SCHEDULE_HOSTGROUP_HOST_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, hostgroup, minutes) self._write_command(dt_cmd_str) def schedule_hostgroup_svc_downtime(self, hostgroup, minutes=30): """ This command is used to schedule downtime for all services in a particular hostgroup. During the specified downtime, Nagios will not send notifications out about the services. Note that scheduling downtime for services does not automatically schedule downtime for the hosts those services are associated with. Syntax: SCHEDULE_HOSTGROUP_SVC_DOWNTIME;;; ;;;;; """ cmd = "SCHEDULE_HOSTGROUP_SVC_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, hostgroup, minutes) self._write_command(dt_cmd_str) def schedule_servicegroup_host_downtime(self, servicegroup, minutes=30): """ This command is used to schedule downtime for all hosts in a particular servicegroup. During the specified downtime, Nagios will not send notifications out about the hosts. Syntax: SCHEDULE_SERVICEGROUP_HOST_DOWNTIME;; ;;;;;; """ cmd = "SCHEDULE_SERVICEGROUP_HOST_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, servicegroup, minutes) self._write_command(dt_cmd_str) def schedule_servicegroup_svc_downtime(self, servicegroup, minutes=30): """ This command is used to schedule downtime for all services in a particular servicegroup. During the specified downtime, Nagios will not send notifications out about the services. Note that scheduling downtime for services does not automatically schedule downtime for the hosts those services are associated with. Syntax: SCHEDULE_SERVICEGROUP_SVC_DOWNTIME;; ;;;;;; """ cmd = "SCHEDULE_SERVICEGROUP_SVC_DOWNTIME" dt_cmd_str = self._fmt_dt_str(cmd, servicegroup, minutes) self._write_command(dt_cmd_str) def disable_host_svc_notifications(self, host): """ This command is used to prevent notifications from being sent out for all services on the specified host. Note that this command does not disable notifications from being sent out about the host. Syntax: DISABLE_HOST_SVC_NOTIFICATIONS; """ cmd = "DISABLE_HOST_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, host) self._write_command(notif_str) def disable_host_notifications(self, host): """ This command is used to prevent notifications from being sent out for the specified host. Note that this command does not disable notifications for services associated with this host. Syntax: DISABLE_HOST_NOTIFICATIONS; """ cmd = "DISABLE_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, host) self._write_command(notif_str) def disable_svc_notifications(self, host, services=[]): """ This command is used to prevent notifications from being sent out for the specified service. 
Note that this command does not disable notifications from being sent out about the host. Syntax: DISABLE_SVC_NOTIFICATIONS;; """ cmd = "DISABLE_SVC_NOTIFICATIONS" for service in services: notif_str = self._fmt_notif_str(cmd, host, svc=service) self._write_command(notif_str) def disable_servicegroup_host_notifications(self, servicegroup): """ This command is used to prevent notifications from being sent out for all hosts in the specified servicegroup. Note that this command does not disable notifications for services associated with hosts in this service group. Syntax: DISABLE_SERVICEGROUP_HOST_NOTIFICATIONS; """ cmd = "DISABLE_SERVICEGROUP_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, servicegroup) self._write_command(notif_str) def disable_servicegroup_svc_notifications(self, servicegroup): """ This command is used to prevent notifications from being sent out for all services in the specified servicegroup. Note that this does not prevent notifications from being sent out about the hosts in this servicegroup. Syntax: DISABLE_SERVICEGROUP_SVC_NOTIFICATIONS; """ cmd = "DISABLE_SERVICEGROUP_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, servicegroup) self._write_command(notif_str) def disable_hostgroup_host_notifications(self, hostgroup): """ Disables notifications for all hosts in a particular hostgroup. Note that this does not disable notifications for the services associated with the hosts in the hostgroup - see the DISABLE_HOSTGROUP_SVC_NOTIFICATIONS command for that. Syntax: DISABLE_HOSTGROUP_HOST_NOTIFICATIONS; """ cmd = "DISABLE_HOSTGROUP_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, hostgroup) self._write_command(notif_str) def disable_hostgroup_svc_notifications(self, hostgroup): """ Disables notifications for all services associated with hosts in a particular hostgroup. Note that this does not disable notifications for the hosts in the hostgroup - see the DISABLE_HOSTGROUP_HOST_NOTIFICATIONS command for that. Syntax: DISABLE_HOSTGROUP_SVC_NOTIFICATIONS; """ cmd = "DISABLE_HOSTGROUP_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, hostgroup) self._write_command(notif_str) def enable_host_notifications(self, host): """ Enables notifications for a particular host. Note that this command does not enable notifications for services associated with this host. Syntax: ENABLE_HOST_NOTIFICATIONS; """ cmd = "ENABLE_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, host) self._write_command(notif_str) def enable_host_svc_notifications(self, host): """ Enables notifications for all services on the specified host. Note that this does not enable notifications for the host. Syntax: ENABLE_HOST_SVC_NOTIFICATIONS; """ cmd = "ENABLE_HOST_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, host) nagios_return = self._write_command(notif_str) if nagios_return: return notif_str else: return "Fail: could not write to the command file" def enable_svc_notifications(self, host, services=[]): """ Enables notifications for a particular service. Note that this does not enable notifications for the host. 
Syntax: ENABLE_SVC_NOTIFICATIONS;; """ cmd = "ENABLE_SVC_NOTIFICATIONS" nagios_return = True return_str_list = [] for service in services: notif_str = self._fmt_notif_str(cmd, host, svc=service) nagios_return = self._write_command(notif_str) and nagios_return return_str_list.append(notif_str) if nagios_return: return return_str_list else: return "Fail: could not write to the command file" def enable_hostgroup_host_notifications(self, hostgroup): """ Enables notifications for all hosts in a particular hostgroup. Note that this command does not enable notifications for services associated with the hosts in this hostgroup. Syntax: ENABLE_HOSTGROUP_HOST_NOTIFICATIONS; """ cmd = "ENABLE_HOSTGROUP_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, hostgroup) nagios_return = self._write_command(notif_str) if nagios_return: return notif_str else: return "Fail: could not write to the command file" def enable_hostgroup_svc_notifications(self, hostgroup): """ Enables notifications for all services that are associated with hosts in a particular hostgroup. Note that this does not enable notifications for the hosts in this hostgroup. Syntax: ENABLE_HOSTGROUP_SVC_NOTIFICATIONS; """ cmd = "ENABLE_HOSTGROUP_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, hostgroup) nagios_return = self._write_command(notif_str) if nagios_return: return notif_str else: return "Fail: could not write to the command file" def enable_servicegroup_host_notifications(self, servicegroup): """ Enables notifications for all hosts that have services that are members of a particular servicegroup. Note that this command does not enable notifications for services associated with the hosts in this servicegroup. Syntax: ENABLE_SERVICEGROUP_HOST_NOTIFICATIONS; """ cmd = "ENABLE_SERVICEGROUP_HOST_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, servicegroup) nagios_return = self._write_command(notif_str) if nagios_return: return notif_str else: return "Fail: could not write to the command file" def enable_servicegroup_svc_notifications(self, servicegroup): """ Enables notifications for all services that are members of a particular servicegroup. Note that this does not enable notifications for the hosts in this servicegroup. Syntax: ENABLE_SERVICEGROUP_SVC_NOTIFICATIONS; """ cmd = "ENABLE_SERVICEGROUP_SVC_NOTIFICATIONS" notif_str = self._fmt_notif_str(cmd, servicegroup) nagios_return = self._write_command(notif_str) if nagios_return: return notif_str else: return "Fail: could not write to the command file" def silence_host(self, host): """ This command is used to prevent notifications from being sent out for the host and all services on the specified host. This is equivalent to calling disable_host_svc_notifications and disable_host_notifications. Syntax: DISABLE_HOST_SVC_NOTIFICATIONS; Syntax: DISABLE_HOST_NOTIFICATIONS; """ cmd = [ "DISABLE_HOST_SVC_NOTIFICATIONS", "DISABLE_HOST_NOTIFICATIONS" ] nagios_return = True return_str_list = [] for c in cmd: notif_str = self._fmt_notif_str(c, host) nagios_return = self._write_command(notif_str) and nagios_return return_str_list.append(notif_str) if nagios_return: return return_str_list else: return "Fail: could not write to the command file" def unsilence_host(self, host): """ This command is used to enable notifications for the host and all services on the specified host. This is equivalent to calling enable_host_svc_notifications and enable_host_notifications. 
Syntax: ENABLE_HOST_SVC_NOTIFICATIONS; Syntax: ENABLE_HOST_NOTIFICATIONS; """ cmd = [ "ENABLE_HOST_SVC_NOTIFICATIONS", "ENABLE_HOST_NOTIFICATIONS" ] nagios_return = True return_str_list = [] for c in cmd: notif_str = self._fmt_notif_str(c, host) nagios_return = self._write_command(notif_str) and nagios_return return_str_list.append(notif_str) if nagios_return: return return_str_list else: return "Fail: could not write to the command file" def silence_nagios(self): """ This command is used to disable notifications for all hosts and services in nagios. This is a 'SHUT UP, NAGIOS' command """ cmd = 'DISABLE_NOTIFICATIONS' self._write_command(self._fmt_notif_str(cmd)) def unsilence_nagios(self): """ This command is used to enable notifications for all hosts and services in nagios. This is a 'OK, NAGIOS, GO'' command """ cmd = 'ENABLE_NOTIFICATIONS' self._write_command(self._fmt_notif_str(cmd)) def nagios_cmd(self, cmd): """ This sends an arbitrary command to nagios It prepends the submitted time and appends a \n You just have to provide the properly formatted command """ pre = '[%s]' % int(time.time()) post = '\n' cmdstr = '%s %s %s' % (pre, cmd, post) self._write_command(cmdstr) def act(self): """ Figure out what you want to do from ansible, and then do the needful (at the earliest). """ # host or service downtime? if self.action == 'downtime': if self.services == 'host': self.schedule_host_downtime(self.host, self.minutes) elif self.services == 'all': self.schedule_host_svc_downtime(self.host, self.minutes) else: self.schedule_svc_downtime(self.host, services=self.services, minutes=self.minutes) # toggle the host AND service alerts elif self.action == 'silence': self.silence_host(self.host) elif self.action == 'unsilence': self.unsilence_host(self.host) # toggle host/svc alerts elif self.action == 'enable_alerts': if self.services == 'host': self.enable_host_notifications(self.host) else: self.enable_svc_notifications(self.host, services=self.services) elif self.action == 'disable_alerts': if self.services == 'host': self.disable_host_notifications(self.host) else: self.disable_svc_notifications(self.host, services=self.services) elif self.action == 'silence_nagios': self.silence_nagios() elif self.action == 'unsilence_nagios': self.unsilence_nagios() elif self.action == 'command': self.nagios_cmd(self.command) # wtf? else: self.module.fail_json(msg="unknown action specified: '%s'" % \ self.action) self.module.exit_json(nagios_commands=self.command_results, changed=True) ###################################################################### # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/monitoring/datadog_event0000664000000000000000000001111512316627017020222 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Author: Artūras 'arturaz' Šlajus # # This module is proudly sponsored by iGeolise (www.igeolise.com) and # Tiny Lab Productions (www.tinylabproductions.com). DOCUMENTATION = ''' --- module: datadog_event short_description: Posts events to DataDog service description: - "Allows to post events to DataDog (www.datadoghq.com) service." - "Uses http://docs.datadoghq.com/api/#events API." 
version_added: "1.3"
author: Artūras 'arturaz' Šlajus
notes: []
requirements: [urllib2]
options:
    api_key:
        description: ["Your DataDog API key."]
        required: true
        default: null
    title:
        description: ["The event title."]
        required: true
        default: null
    text:
        description: ["The body of the event."]
        required: true
        default: null
    date_happened:
        description:
            - POSIX timestamp of the event.
            - Default value is now.
        required: false
        default: now
    priority:
        description: ["The priority of the event."]
        required: false
        default: normal
        choices: [normal, low]
    tags:
        description: ["Comma separated list of tags to apply to the event."]
        required: false
        default: null
    alert_type:
        description: ["Type of alert."]
        required: false
        default: info
        choices: ['error', 'warning', 'info', 'success']
    aggregation_key:
        description: ["An arbitrary string to use for aggregation."]
        required: false
        default: null
    validate_certs:
        description:
            - If C(no), SSL certificates will not be validated. This should only be
              used on personally controlled sites using self-signed certificates.
        required: false
        default: 'yes'
        choices: ['yes', 'no']
        version_added: 1.5.1
'''

EXAMPLES = '''
# Post an event with low priority
datadog_event: title="Testing from ansible" text="Test!" priority="low"
               api_key="6873258723457823548234234234"
# Post an event with several tags
datadog_event: title="Testing from ansible" text="Test!"
               api_key="6873258723457823548234234234"
               tags=aa,bb,cc
'''

import socket

def main():
    module = AnsibleModule(
        argument_spec=dict(
            api_key=dict(required=True),
            title=dict(required=True),
            text=dict(required=True),
            date_happened=dict(required=False, default=None, type='int'),
            priority=dict(
                required=False, default='normal', choices=['normal', 'low']
            ),
            tags=dict(required=False, default=None),
            alert_type=dict(
                required=False, default='info',
                choices=['error', 'warning', 'info', 'success']
            ),
            aggregation_key=dict(required=False, default=None),
            source_type_name=dict(
                required=False, default='my apps',
                choices=['nagios', 'hudson', 'jenkins', 'user', 'my apps',
                         'feed', 'chef', 'puppet', 'git', 'bitbucket', 'fabric',
                         'capistrano']
            ),
            validate_certs = dict(default='yes', type='bool'),
        )
    )

    post_event(module)

def post_event(module):
    uri = "https://app.datadoghq.com/api/v1/events?api_key=%s" % module.params['api_key']

    body = dict(
        title=module.params['title'],
        text=module.params['text'],
        priority=module.params['priority'],
        alert_type=module.params['alert_type']
    )
    if module.params['date_happened'] != None:
        body['date_happened'] = module.params['date_happened']
    if module.params['tags'] != None:
        body['tags'] = module.params['tags'].split(",")
    if module.params['aggregation_key'] != None:
        body['aggregation_key'] = module.params['aggregation_key']
    if module.params['source_type_name'] != None:
        body['source_type_name'] = module.params['source_type_name']

    json_body = module.jsonify(body)
    headers = {"Content-Type": "application/json"}

    (response, info) = fetch_url(module, uri, data=json_body, headers=headers)
    if info['status'] == 200:
        response_body = response.read()
        response_json = module.from_json(response_body)
        if response_json['status'] == 'ok':
            module.exit_json(changed=True)
        else:
            # report the raw API response body; the original passed the
            # response object itself, which is not serializable
            module.fail_json(msg=response_body)
    else:
        module.fail_json(**info)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

main()
ansible-1.5.4/library/notification/0000775000000000000000000000000012316627017015775 5ustar rootroot
ansible-1.5.4/library/notification/jabber0000664000000000000000000000730212316627017017147 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

DOCUMENTATION = '''
---
version_added: "1.2"
module: jabber
short_description: Send a message to jabber user or chat room
description:
   - Send a message to jabber
options:
  user:
    description: User as which to connect
    required: true
  password:
    description: password for user to connect
    required: true
  to:
    description: user ID or name of the room; when using a room, use a slash to indicate your nick.
    required: true
  msg:
    description:
      - The message body.
    required: true
    default: null
  host:
    description: host to connect, overrides user info
    required: false
  port:
    description: port to connect to, overrides default
    required: false
    default: 5222
  encoding:
    description: message encoding
    required: false

# informational: requirements for nodes
requirements: [ xmpp ]

author: Brian Coca
'''

EXAMPLES = '''
# send a message to a user
- jabber: user=mybot@example.net password=secret to=friend@example.net msg="Ansible task finished"

# send a message to a room
- jabber: user=mybot@example.net password=secret to=mychaps@conference.example.net/ansiblebot msg="Ansible task finished"

# send a message, specifying the host and port
- jabber: user=mybot@example.net host=talk.example.net port=5223 password=secret to=mychaps@example.net msg="Ansible task finished"
'''

import os
import re
import time

HAS_XMPP = True
try:
    import xmpp
except ImportError:
    HAS_XMPP = False

def main():

    module = AnsibleModule(
        argument_spec=dict(
            user=dict(required=True),
            password=dict(required=True),
            to=dict(required=True),
            msg=dict(required=True),
            host=dict(required=False),
            port=dict(required=False,default=5222),
            encoding=dict(required=False),
        ),
        supports_check_mode=True
    )

    if not HAS_XMPP:
        module.fail_json(msg="xmpp is not installed")

    jid = xmpp.JID(module.params['user'])
    user = jid.getNode()
    server = jid.getDomain()
    port = module.params['port']
    password = module.params['password']
    try:
        to, nick = module.params['to'].split('/', 1)
    except ValueError:
        to, nick = module.params['to'], None

    if module.params['host']:
        host = module.params['host']
    else:
        host = server

    if module.params['encoding']:
        # the original referenced the undefined name 'params' here
        xmpp.simplexml.ENCODING = module.params['encoding']

    msg = xmpp.protocol.Message(body=module.params['msg'])

    try:
        conn=xmpp.Client(server)
        if not conn.connect(server=(host,port)):
            module.fail_json(rc=1, msg='Failed to connect to server: %s' % (server))
        if not conn.auth(user,password,'Ansible'):
            module.fail_json(rc=1, msg='Failed to authorize %s on: %s' % (user,server))
        # some old servers require this, also the sleep following send
        conn.sendInitPresence(requestRoster=0)

        if nick: # sending to room instead of user, need to join
            msg.setType('groupchat')
            msg.setTag('x', namespace='http://jabber.org/protocol/muc#user')
            conn.send(xmpp.Presence(to=module.params['to']))
            time.sleep(1)
        else:
            msg.setType('chat')

        msg.setTo(to)
        if not module.check_mode:
            conn.send(msg)
        time.sleep(1)
        conn.disconnect()
    except Exception, e:
        module.fail_json(msg="unable to send msg: %s" % e)

    module.exit_json(changed=False, to=to, user=user, msg=msg.getBody())

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/notification/mqtt0000664000000000000000000001207212316627017016707 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Jan-Piet Mens
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#

DOCUMENTATION = '''
---
module: mqtt
short_description: Publish a message on an MQTT topic for the IoT
version_added: "1.2"
description:
   - Publish a message on an MQTT topic.
options:
  server:
    description:
      - MQTT broker address/name
    required: false
    default: localhost
  port:
    description:
      - MQTT broker port number
    required: false
    default: 1883
  username:
    description:
      - Username to authenticate against the broker.
    required: false
  password:
    description:
      - Password for C(username) to authenticate against the broker.
    required: false
  client_id:
    description:
      - MQTT client identifier
    required: false
    default: hostname + pid
  topic:
    description:
      - MQTT topic name
    required: true
    default: null
  payload:
    description:
      - Payload. The special string C("None") may be used to send a NULL
        (i.e. empty) payload which is useful to simply notify with the
        I(topic) or to clear previously retained messages.
    required: true
    default: null
  qos:
    description:
      - QoS (Quality of Service)
    required: false
    default: 0
    choices: [ "0", "1", "2" ]
  retain:
    description:
      - Setting this flag causes the broker to retain (i.e. keep) the message
        so that applications that subsequently subscribe to the topic can
        receive the last retained message immediately.
    required: false
    default: False

# informational: requirements for nodes
requirements: [ mosquitto ]
notes:
 - This module requires a connection to an MQTT broker such as Mosquitto
   U(http://mosquitto.org) and the C(mosquitto) Python module (U(http://mosquitto.org/python)).
author: Jan-Piet Mens
'''

EXAMPLES = '''
- local_action: mqtt
              topic=service/ansible/{{ ansible_hostname }}
              payload="Hello at {{ ansible_date_time.iso8601 }}"
              qos=0
              retain=false
              client_id=ans001
'''

# ===========================================
# MQTT module support methods.
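#
# For orientation, the happy path in publish() below boils down to the
# following mosquitto-client calls (values illustrative):
#
#   mqttc = mosquitto.Mosquitto("some-client-id", clean_session=True)
#   mqttc.connect("localhost", 1883, 5)
#   mqttc.publish("ansible/test", "hello", 0, False)
#   mqttc.loop()
#   mqttc.disconnect()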
#

HAS_MOSQUITTO = True
try:
    import socket
    import mosquitto
except ImportError:
    HAS_MOSQUITTO = False
import os

def publish(module, topic, payload, server='localhost', port='1883', qos='0',
            client_id='', retain=False, username=None, password=None):
    '''Open connection to MQTT broker and publish the topic'''

    mqttc = mosquitto.Mosquitto(client_id, clean_session=True)

    if username is not None and password is not None:
        mqttc.username_pw_set(username, password)

    rc = mqttc.connect(server, int(port), 5)
    if rc != 0:
        module.fail_json(msg="unable to connect to MQTT broker")

    mqttc.publish(topic, payload, int(qos), retain)
    rc = mqttc.loop()
    if rc != 0:
        module.fail_json(msg="unable to send to MQTT broker")

    mqttc.disconnect()

# ===========================================
# Main
#

def main():

    module = AnsibleModule(
        argument_spec=dict(
            server = dict(default = 'localhost'),
            port = dict(default = 1883),
            topic = dict(required = True),
            payload = dict(required = True),
            client_id = dict(default = None),
            qos = dict(default="0", choices=["0", "1", "2"]),
            retain = dict(default=False, type='bool'),
            username = dict(default = None),
            password = dict(default = None),
        ),
        supports_check_mode=True
    )

    # this check must come after AnsibleModule() is created; the original
    # referenced 'module' before it was defined
    if not HAS_MOSQUITTO:
        module.fail_json(msg="mosquitto is not installed")

    server = module.params["server"]
    port = module.params["port"]
    topic = module.params["topic"]
    payload = module.params["payload"]
    client_id = module.params["client_id"]
    qos = module.params["qos"]
    retain = module.params["retain"]
    username = module.params["username"]
    password = module.params["password"]

    if client_id is None:
        client_id = "%s_%s" % (socket.getfqdn(), os.getpid())

    if payload and payload == 'None':
        payload = None

    try:
        publish(module, topic, payload, server, port, qos, client_id, retain, username, password)
    except Exception, e:
        module.fail_json(msg="unable to publish to MQTT broker %s" % (e))

    module.exit_json(changed=False, topic=topic)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/notification/grove0000664000000000000000000000530012316627017017040 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

DOCUMENTATION = '''
---
module: grove
version_added: 1.4
short_description: Sends a notification to a grove.io channel
description:
  - The M(grove) module sends a message for a service to a Grove.io channel.
options:
  channel_token:
    description:
      - Token of the channel to post to.
    required: true
  service:
    description:
      - Name of the service (displayed as the "user" in the message)
    required: false
    default: ansible
  message:
    description:
      - Message content
    required: true
  url:
    description:
      - Service URL for the web client
    required: false
  icon_url:
    description:
      - Icon for the service
    required: false
  validate_certs:
    description:
      - If C(no), SSL certificates will not be validated. This should only be
        used on personally controlled sites using self-signed certificates.
    required: false
    default: 'yes'
    choices: ['yes', 'no']
    version_added: 1.5.1
author: Jonas Pfenniger
'''

EXAMPLES = '''
- grove: >
    channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
    service=my-app
    message=deployed {{ target }}
'''

BASE_URL = 'https://grove.io/api/notice/%s/'

# ==============================================================
# do_notify_grove

def do_notify_grove(module, channel_token, service, message, url=None, icon_url=None):
    my_url = BASE_URL % (channel_token,)

    my_data = dict(service=service, message=message)
    if url is not None:
        my_data['url'] = url
    if icon_url is not None:
        my_data['icon_url'] = icon_url

    data = urllib.urlencode(my_data)
    response, info = fetch_url(module, my_url, data=data)
    if info['status'] != 200:
        module.fail_json(msg="failed to send notification: %s" % info['msg'])

# ==============================================================
# main

def main():
    module = AnsibleModule(
        argument_spec = dict(
            channel_token = dict(type='str', required=True),
            message = dict(type='str', required=True),
            service = dict(type='str', default='ansible'),
            url = dict(type='str', default=None),
            icon_url = dict(type='str', default=None),
            validate_certs = dict(default='yes', type='bool'),
        )
    )

    channel_token = module.params['channel_token']
    service = module.params['service']
    message = module.params['message']
    url = module.params['url']
    icon_url = module.params['icon_url']

    do_notify_grove(module, channel_token, service, message, url, icon_url)

    # Mission complete
    module.exit_json(msg="OK")

# import module snippets
# (the urls import is needed for fetch_url/urllib above)
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
main()
ansible-1.5.4/library/notification/irc0000664000000000000000000001176612316627017016504 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Jan-Piet Mens
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#

DOCUMENTATION = '''
---
module: irc
version_added: "1.2"
short_description: Send a message to an IRC channel
description:
   - Send a message to an IRC channel. This is a very simplistic implementation.
options:
  server:
    description:
      - IRC server name/address
    required: false
    default: localhost
  port:
    description:
      - IRC server port number
    required: false
    default: 6667
  nick:
    description:
      - Nickname
    required: false
    default: ansible
  msg:
    description:
      - The message body.
    required: true
    default: null
  color:
    description:
      - Text color for the message. Default is black.
    required: false
    default: black
    choices: [ "yellow", "red", "green", "blue", "black" ]
  channel:
    description:
      - Channel name
    required: true
  passwd:
    description:
      - Server password
    required: false
  timeout:
    description:
      - Timeout to use while waiting for successful registration and join
        messages, this is to prevent an endless loop
    default: 30
    version_added: 1.5

# informational: requirements for nodes
requirements: [ socket ]
author: Jan-Piet Mens, Matt Martz
'''

EXAMPLES = '''
- irc: server=irc.example.net channel="#t1" msg="Hello world"

- local_action: irc port=6669
                channel="#t1"
                msg="All finished at {{ ansible_date_time.iso8601 }}"
                color=red
                nick=ansibleIRC
'''

# ===========================================
# IRC module support methods.
#

import re
import socket
# the original did 'from time import sleep' but the code below calls
# time.time() and time.sleep(), so import the module itself
import time

def send_msg(channel, msg, server='localhost', port='6667',
        nick="ansible", color='black', passwd=False, timeout=30):
    '''send message to IRC'''

    colornumbers = {
        'black': "01",
        'red': "04",
        'green': "09",
        'yellow': "08",
        'blue': "12",
    }

    try:
        colornumber = colornumbers[color]
    except KeyError:
        colornumber = "01"  # black

    message = "\x03" + colornumber + msg

    irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    irc.connect((server, int(port)))
    if passwd:
        irc.send('PASS %s\r\n' % passwd)
    irc.send('NICK %s\r\n' % nick)
    irc.send('USER %s %s %s :ansible IRC\r\n' % (nick, nick, nick))
    motd = ''
    start = time.time()
    while 1:
        motd += irc.recv(1024)
        if re.search('^:\S+ 00[1-4] %s :' % nick, motd, flags=re.M):
            break
        elif time.time() - start > timeout:
            raise Exception('Timeout waiting for IRC server welcome response')
        time.sleep(0.5)

    irc.send('JOIN %s\r\n' % channel)
    join = ''
    start = time.time()
    while 1:
        join += irc.recv(1024)
        if re.search('^:\S+ 366 %s %s :' % (nick, channel), join, flags=re.M):
            break
        elif time.time() - start > timeout:
            raise Exception('Timeout waiting for IRC JOIN response')
        time.sleep(0.5)

    irc.send('PRIVMSG %s :%s\r\n' % (channel, message))
    time.sleep(1)
    irc.send('PART %s\r\n' % channel)
    irc.send('QUIT\r\n')
    time.sleep(1)
    irc.close()

# ===========================================
# Main
#

def main():
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(default='localhost'),
            port=dict(default=6667),
            nick=dict(default='ansible'),
            msg=dict(required=True),
            color=dict(default="black", choices=["yellow", "red", "green", "blue", "black"]),
            channel=dict(required=True),
            passwd=dict(),
            timeout=dict(type='int', default=30)
        ),
        supports_check_mode=True
    )

    server = module.params["server"]
    port = module.params["port"]
    nick = module.params["nick"]
    msg = module.params["msg"]
    color = module.params["color"]
    channel = module.params["channel"]
    passwd = module.params["passwd"]
    timeout = module.params["timeout"]

    try:
        send_msg(channel, msg, server, port, nick, color, passwd, timeout)
    except Exception, e:
        module.fail_json(msg="unable to send to IRC: %s" % e)

    module.exit_json(changed=False, channel=channel, nick=nick, msg=msg)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/notification/flowdock0000664000000000000000000001365112316627017017536 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2013 Matt Coddington
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: flowdock version_added: "1.2" author: Matt Coddington short_description: Send a message to a flowdock description: - Send a message to a flowdock team inbox or chat using the push API (see https://www.flowdock.com/api/team-inbox and https://www.flowdock.com/api/chat) options: token: description: - API token. required: true type: description: - Whether to post to 'inbox' or 'chat' required: true choices: [ "inbox", "chat" ] msg: description: - Content of the message required: true tags: description: - tags of the message, separated by commas required: false external_user_name: description: - (chat only - required) Name of the "user" sending the message required: false from_address: description: - (inbox only - required) Email address of the message sender required: false source: description: - (inbox only - required) Human readable identifier of the application that uses the Flowdock API required: false subject: description: - (inbox only - required) Subject line of the message required: false from_name: description: - (inbox only) Name of the message sender required: false reply_to: description: - (inbox only) Email address for replies required: false project: description: - (inbox only) Human readable identifier for more detailed message categorization required: false link: description: - (inbox only) Link associated with the message. This will be used to link the message subject in Team Inbox. required: false validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] version_added: 1.5.1 # informational: requirements for nodes requirements: [ urllib, urllib2 ] ''' EXAMPLES = ''' - flowdock: type=inbox token=AAAAAA from_address=user@example.com source='my cool app' msg='test from ansible' subject='test subject' - flowdock: type=chat token=AAAAAA external_user_name=testuser msg='test from ansible' tags=tag1,tag2,tag3 ''' # =========================================== # Module execution. 
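# A small illustrative sketch, added alongside the original code: the two
# request shapes that the DOCUMENTATION above describes, issued with plain
# urllib/urllib2. The token and field values are made-up examples.
#
#   import urllib
#   import urllib2
#
#   token = 'AAAAAA'  # hypothetical API token
#   # 'inbox' type: from_address, source and subject are required
#   inbox = urllib.urlencode({'content': 'test from ansible',
#                             'from_address': 'user@example.com',
#                             'source': 'my cool app',
#                             'subject': 'test subject'})
#   urllib2.urlopen('https://api.flowdock.com/v1/messages/team_inbox/%s' % token, inbox)
#
#   # 'chat' type: external_user_name is required
#   chat = urllib.urlencode({'content': 'test from ansible',
#                            'external_user_name': 'testuser'})
#   urllib2.urlopen('https://api.flowdock.com/v1/messages/chat/%s' % token, chat)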
#

def main():

    module = AnsibleModule(
        argument_spec=dict(
            token=dict(required=True),
            msg=dict(required=True),
            type=dict(required=True, choices=["inbox","chat"]),
            external_user_name=dict(required=False),
            from_address=dict(required=False),
            source=dict(required=False),
            subject=dict(required=False),
            from_name=dict(required=False),
            reply_to=dict(required=False),
            project=dict(required=False),
            tags=dict(required=False),
            link=dict(required=False),
            validate_certs = dict(default='yes', type='bool'),
        ),
        supports_check_mode=True
    )

    type = module.params["type"]
    token = module.params["token"]
    if type == 'inbox':
        url = "https://api.flowdock.com/v1/messages/team_inbox/%s" % (token)
    else:
        url = "https://api.flowdock.com/v1/messages/chat/%s" % (token)

    params = {}

    # required params
    params['content'] = module.params["msg"]

    # required params for the 'chat' type
    if module.params['external_user_name']:
        if type == 'inbox':
            module.fail_json(msg="external_user_name is not valid for the 'inbox' type")
        else:
            params['external_user_name'] = module.params["external_user_name"]
    elif type == 'chat':
        module.fail_json(msg="external_user_name is required for the 'chat' type")

    # required params for the 'inbox' type
    for item in [ 'from_address', 'source', 'subject' ]:
        if module.params[item]:
            if type == 'chat':
                module.fail_json(msg="%s is not valid for the 'chat' type" % item)
            else:
                params[item] = module.params[item]
        elif type == 'inbox':
            module.fail_json(msg="%s is required for the 'inbox' type" % item)

    # optional params
    if module.params["tags"]:
        params['tags'] = module.params["tags"]

    # optional params for the 'inbox' type
    for item in [ 'from_name', 'reply_to', 'project', 'link' ]:
        if module.params[item]:
            if type == 'chat':
                module.fail_json(msg="%s is not valid for the 'chat' type" % item)
            else:
                params[item] = module.params[item]

    # If we're in check mode, just exit pretending like we succeeded
    if module.check_mode:
        module.exit_json(changed=False)

    # Send the data to Flowdock
    data = urllib.urlencode(params)
    response, info = fetch_url(module, url, data=data)
    if info['status'] != 200:
        module.fail_json(msg="unable to send msg: %s" % info['msg'])

    module.exit_json(changed=True, msg=module.params["msg"])

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
main()
ansible-1.5.4/library/notification/campfire0000664000000000000000000001021212316627017017502 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

DOCUMENTATION = '''
---
module: campfire
version_added: "1.2"
short_description: Send a message to Campfire
description:
   - Send a message to Campfire.
   - Messages with newlines will result in a "Paste" message being sent.
options:
  subscription:
    description:
      - The subscription name to use.
    required: true
  token:
    description:
      - API token.
    required: true
  room:
    description:
      - Room number to which the message should be sent.
    required: true
  msg:
    description:
      - The message body.
    required: true
  notify:
    description:
      - Send a notification sound before the message.
    required: false
    choices: ["56k", "bueller", "crickets", "dangerzone", "deeper", "drama",
              "greatjob", "horn", "horror", "inconceivable", "live", "loggins",
              "noooo", "nyan", "ohmy", "ohyeah", "pushit", "rimshot", "sax",
              "secret", "tada", "tmyk", "trombone", "vuvuzela", "yeah", "yodel"]

# informational: requirements for nodes
requirements: [ urllib2, cgi ]
author: Adam Garside

'''

EXAMPLES = '''
- campfire: subscription=foo token=12345 room=123 msg="Task completed."
- campfire: subscription=foo token=12345 room=123 notify=loggins msg="Task completed ... with feeling." ''' def main(): try: import urllib2 except ImportError: module.fail_json(msg="urllib2 is required") try: import cgi except ImportError: module.fail_json(msg="cgi is required") module = AnsibleModule( argument_spec=dict( subscription=dict(required=True), token=dict(required=True), room=dict(required=True), msg=dict(required=True), notify=dict(required=False, choices=["56k", "bueller", "crickets", "dangerzone", "deeper", "drama", "greatjob", "horn", "horror", "inconceivable", "live", "loggins", "noooo", "nyan", "ohmy", "ohyeah", "pushit", "rimshot", "sax", "secret", "tada", "tmyk", "trombone", "vuvuzela", "yeah", "yodel"]), ), supports_check_mode=False ) subscription = module.params["subscription"] token = module.params["token"] room = module.params["room"] msg = module.params["msg"] notify = module.params["notify"] URI = "https://%s.campfirenow.com" % subscription NSTR = "SoundMessage%s" MSTR = "%s" AGENT = "Ansible/1.2" try: # Setup basic auth using token as the username pm = urllib2.HTTPPasswordMgrWithDefaultRealm() pm.add_password(None, URI, token, 'X') # Setup Handler and define the opener for the request handler = urllib2.HTTPBasicAuthHandler(pm) opener = urllib2.build_opener(handler) target_url = '%s/room/%s/speak.xml' % (URI, room) # Send some audible notification if requested if notify: req = urllib2.Request(target_url, NSTR % cgi.escape(notify)) req.add_header('Content-Type', 'application/xml') req.add_header('User-agent', AGENT) response = opener.open(req) # Send the message req = urllib2.Request(target_url, MSTR % cgi.escape(msg)) req.add_header('Content-Type', 'application/xml') req.add_header('User-agent', AGENT) response = opener.open(req) except urllib2.HTTPError, e: if not (200 <= e.code < 300): module.fail_json(msg="unable to send msg: '%s', campfire api" " returned error code: '%s'" % (msg, e.code)) except Exception, e: module.fail_json(msg="unable to send msg: %s" % msg) module.exit_json(changed=True, room=room, msg=msg, notify=notify) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/notification/mail0000664000000000000000000002022112316627017016637 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright 2012 Dag Wieers # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = """ --- author: Dag Wieers module: mail short_description: Send an email description: - This module is useful for sending emails from playbooks. - One may wonder why automate sending emails? In complex environments there are from time to time processes that cannot be automated, either because you lack the authority to make it so, or because not everyone agrees to a common approach. 
- If you cannot automate a specific step, but the step is non-blocking, sending out an email to the responsible party to make him perform his part of the bargain is an elegant way to put the responsibility in someone else's lap. - Of course sending out a mail can be equally useful as a way to notify one or more people in a team that a specific action has been (successfully) taken. version_added: "0.8" options: from: description: - The email-address the mail is sent from. May contain address and phrase. default: root required: false to: description: - The email-address(es) the mail is being sent to. This is a comma-separated list, which may contain address and phrase portions. default: root required: false cc: description: - The email-address(es) the mail is being copied to. This is a comma-separated list, which may contain address and phrase portions. required: false bcc: description: - The email-address(es) the mail is being 'blind' copied to. This is a comma-separated list, which may contain address and phrase portions. required: false subject: description: - The subject of the email being sent. aliases: [ msg ] required: true body: description: - The body of the email being sent. default: $subject required: false host: description: - The mail server default: 'localhost' required: false port: description: - The mail server port default: '25' required: false version_added: "1.0" attach: description: - A space-separated list of pathnames of files to attach to the message. Attached files will have their content-type set to C(application/octet-stream). default: null required: false version_added: "1.0" headers: description: - A vertical-bar-separated list of headers which should be added to the message. Each individual header is specified as C(header=value) (see example below). default: null required: false version_added: "1.0" charset: description: - The character set of email being sent default: 'us-ascii' requred: false """ EXAMPLES = ''' # Example playbook sending mail to root - local_action: mail msg='System {{ ansible_hostname }} has been successfully provisioned.' # Send e-mail to a bunch of users, attaching files - local_action: mail host='127.0.0.1' port=2025 subject="Ansible-report" body="Hello, this is an e-mail. 
             I hope you like it ;-)"
       from="jane@example.net (Jane Jolie)"
       to="John Doe <j.d@example.org>, Suzie Something <sugar@example.net>"
       cc="Charlie Root <root@localhost>"
       attach="/etc/group /tmp/pavatar2.png"
       headers=Reply-To=john@example.com|X-Special="Something or other"
       charset=utf8
'''

import os
import sys
import smtplib

try:
    from email import encoders
    import email.utils
    from email.utils import parseaddr, formataddr
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
except ImportError:
    from email import Encoders as encoders
    import email.Utils
    from email.Utils import parseaddr, formataddr
    from email.MIMEBase import MIMEBase
    from email.MIMEMultipart import MIMEMultipart
    from email.MIMEText import MIMEText

def main():

    module = AnsibleModule(
        argument_spec = dict(
            host = dict(default='localhost'),
            port = dict(default='25'),
            sender = dict(default='root', aliases=['from']),
            to = dict(default='root', aliases=['recipients']),
            cc = dict(default=None),
            bcc = dict(default=None),
            subject = dict(required=True, aliases=['msg']),
            body = dict(default=None),
            attach = dict(default=None),
            headers = dict(default=None),
            charset = dict(default='us-ascii')
        )
    )

    host = module.params.get('host')
    port = module.params.get('port')
    sender = module.params.get('sender')
    recipients = module.params.get('to')
    copies = module.params.get('cc')
    blindcopies = module.params.get('bcc')
    subject = module.params.get('subject')
    body = module.params.get('body')
    attach_files = module.params.get('attach')
    headers = module.params.get('headers')
    charset = module.params.get('charset')

    sender_phrase, sender_addr = parseaddr(sender)

    if not body:
        body = subject

    try:
        smtp = smtplib.SMTP(host, port=int(port))
    except Exception, e:
        module.fail_json(rc=1, msg='Failed to send mail to server %s on port %s: %s' % (host, port, e))

    msg = MIMEMultipart()
    msg['Subject'] = subject
    msg['From'] = formataddr((sender_phrase, sender_addr))
    msg.preamble = "Multipart message"

    if headers is not None:
        for hdr in [x.strip() for x in headers.split('|')]:
            try:
                h_key, h_val = hdr.split('=')
                msg.add_header(h_key, h_val)
            except:
                pass

    if 'X-Mailer' not in msg:
        msg.add_header('X-Mailer', "Ansible")

    to_list = []
    cc_list = []
    addr_list = []

    if recipients is not None:
        for addr in [x.strip() for x in recipients.split(',')]:
            to_list.append( formataddr( parseaddr(addr)) )
            addr_list.append( parseaddr(addr)[1] )    # address only, w/o phrase
    if copies is not None:
        for addr in [x.strip() for x in copies.split(',')]:
            cc_list.append( formataddr( parseaddr(addr)) )
            addr_list.append( parseaddr(addr)[1] )    # address only, w/o phrase
    if blindcopies is not None:
        for addr in [x.strip() for x in blindcopies.split(',')]:
            addr_list.append( parseaddr(addr)[1] )

    if len(to_list) > 0:
        msg['To'] = ", ".join(to_list)
    if len(cc_list) > 0:
        msg['Cc'] = ", ".join(cc_list)

    part = MIMEText(body + "\n\n", _charset=charset)
    msg.attach(part)

    if attach_files is not None:
        for file in attach_files.split():
            try:
                fp = open(file, 'rb')

                part = MIMEBase('application', 'octet-stream')
                part.set_payload(fp.read())
                fp.close()

                encoders.encode_base64(part)

                part.add_header('Content-disposition', 'attachment', filename=os.path.basename(file))
                msg.attach(part)
            except Exception, e:
                module.fail_json(rc=1, msg="Failed to send mail: can't attach file %s: %s" % (file, e))
                sys.exit()

    composed = msg.as_string()

    try:
        smtp.sendmail(sender_addr, set(addr_list), composed)
    except Exception, e:
        module.fail_json(rc=1, msg='Failed to send mail to %s: %s' % (", ".join(addr_list), e))

    smtp.quit()

    module.exit_json(changed=False)
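# An added worked example (not original code) of how main() above decomposes
# the pipe-separated `headers` option; the sample value is hypothetical:
#
#   headers = 'Reply-To=john@example.com|X-Special=Something or other'
#   for hdr in [x.strip() for x in headers.split('|')]:
#       h_key, h_val = hdr.split('=')
#       print '%s: %s' % (h_key, h_val)
#
# which prints:
#
#   Reply-To: john@example.com
#   X-Special: Something or other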
# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/notification/hipchat0000664000000000000000000000735112316627017017346 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

DOCUMENTATION = '''
---
module: hipchat
version_added: "1.2"
short_description: Send a message to hipchat
description:
   - Send a message to hipchat
options:
  token:
    description:
      - API token.
    required: true
  room:
    description:
      - ID or name of the room.
    required: true
  from:
    description:
      - Name the message will appear to be sent from. Maximum of 15
        characters; longer names will be shortened.
    required: false
    default: Ansible
  msg:
    description:
      - The message body.
    required: true
    default: null
  color:
    description:
      - Background color for the message. Default is yellow.
    required: false
    default: yellow
    choices: [ "yellow", "red", "green", "purple", "gray", "random" ]
  msg_format:
    description:
      - Message format, html or text. Default is text.
    required: false
    default: text
    choices: [ "text", "html" ]
  notify:
    description:
      - Whether to notify the room (change the tab color, play a sound, etc.)
    required: false
    default: 'yes'
    choices: [ "yes", "no" ]
  validate_certs:
    description:
      - If C(no), SSL certificates will not be validated. This should only be used
        on personally controlled sites using self-signed certificates.
    required: false
    default: 'yes'
    choices: ['yes', 'no']
    version_added: 1.5.1

# informational: requirements for nodes
requirements: [ urllib, urllib2 ]
author: WAKAYAMA Shirou
'''

EXAMPLES = '''
- hipchat: token=AAAAAA room=notify msg="Ansible task finished"
'''

# ===========================================
# HipChat module specific support methods.
#

MSG_URI = "https://api.hipchat.com/v1/rooms/message?"

def send_msg(module, token, room, msg_from, msg, msg_format='text',
             color='yellow', notify=False):
    '''sending message to hipchat'''

    params = {}
    params['room_id'] = room
    params['from'] = msg_from[:15]  # max length is 15
    params['message'] = msg
    params['message_format'] = msg_format
    params['color'] = color

    if notify:
        params['notify'] = 1
    else:
        params['notify'] = 0

    url = MSG_URI + "auth_token=%s" % (token)
    data = urllib.urlencode(params)
    response, info = fetch_url(module, url, data=data)
    if info['status'] == 200:
        return response.read()
    else:
        module.fail_json(msg="failed to send message, return status=%s" % str(info['status']))


# ===========================================
# Module execution.
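# An illustrative trace, added for clarity (not original code): a
# hypothetical call
#
#   send_msg(module, 'AAAAAA', 'notify', 'Ansible', 'Ansible task finished')
#
# POSTs a form-encoded body containing roughly
#
#   room_id=notify&from=Ansible&message=Ansible+task+finished
#   &message_format=text&color=yellow&notify=0
#
# to https://api.hipchat.com/v1/rooms/message?auth_token=AAAAAA
# (field order depends on dict iteration order).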
def main():

    module = AnsibleModule(
        argument_spec=dict(
            token=dict(required=True),
            room=dict(required=True),
            msg=dict(required=True),
            msg_from=dict(default="Ansible", aliases=['from']),
            color=dict(default="yellow", choices=["yellow", "red", "green",
                                                  "purple", "gray", "random"]),
            msg_format=dict(default="text", choices=["text", "html"]),
            notify=dict(default=True, type='bool'),
            validate_certs = dict(default='yes', type='bool'),
        ),
        supports_check_mode=True
    )

    token = module.params["token"]
    room = module.params["room"]
    msg = module.params["msg"]
    msg_from = module.params["msg_from"]
    color = module.params["color"]
    msg_format = module.params["msg_format"]
    notify = module.params["notify"]

    try:
        send_msg(module, token, room, msg_from, msg, msg_format, color, notify)
    except Exception, e:
        module.fail_json(msg="unable to send msg: %s" % e)

    changed = True
    module.exit_json(changed=changed, room=room, msg_from=msg_from, msg=msg)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
main()
ansible-1.5.4/library/notification/osx_say0000664000000000000000000000401512316627017017405 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: osx_say
version_added: "1.2"
short_description: Makes an OS X computer speak.
description:
   - Makes an OS X computer speak! Amuse your friends, annoy your coworkers!
notes:
   - If you like this module, you may also be interested in the osx_say
     callback in the plugins/ directory of the source checkout.
options: msg: description: What to say required: true voice: description: What voice to use required: false requirements: [ say ] author: Michael DeHaan ''' EXAMPLES = ''' - local_action: osx_say msg="{{inventory_hostname}} is all done" voice=Zarvox ''' DEFAULT_VOICE='Trinoids' def say(module, msg, voice): module.run_command(["/usr/bin/say", msg, "--voice=%s" % (voice)], check_rc=True) def main(): module = AnsibleModule( argument_spec=dict( msg=dict(required=True), voice=dict(required=False, default=DEFAULT_VOICE), ), supports_check_mode=False ) if not os.path.exists("/usr/bin/say"): module.fail_json(msg="/usr/bin/say is not installed") msg = module.params['msg'] voice = module.params['voice'] say(module, msg, voice) module.exit_json(msg=msg, changed=False) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/internal/0000775000000000000000000000000012316627017015123 5ustar rootrootansible-1.5.4/library/internal/async_wrapper0000664000000000000000000001401612316627017017725 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan , and others # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # try: import json except ImportError: import simplejson as json import shlex import os import subprocess import sys import datetime import traceback import signal import time import syslog def daemonize_self(): # daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012 # logger.info("cobblerd started") try: pid = os.fork() if pid > 0: # exit first parent sys.exit(0) except OSError, e: print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror) sys.exit(1) # decouple from parent environment os.chdir("/") os.setsid() os.umask(022) # do second fork try: pid = os.fork() if pid > 0: # print "Daemon PID %d" % pid sys.exit(0) except OSError, e: print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror) sys.exit(1) dev_null = file('/dev/null','rw') os.dup2(dev_null.fileno(), sys.stdin.fileno()) os.dup2(dev_null.fileno(), sys.stdout.fileno()) os.dup2(dev_null.fileno(), sys.stderr.fileno()) if len(sys.argv) < 3: print json.dumps({ "failed" : True, "msg" : "usage: async_wrapper . Humans, do not call directly!" 
}) sys.exit(1) jid = sys.argv[1] time_limit = sys.argv[2] wrapped_module = sys.argv[3] argsfile = sys.argv[4] cmd = "%s %s" % (wrapped_module, argsfile) syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % " ".join(sys.argv[1:])) # setup logging directory logdir = os.path.expanduser("~/.ansible_async") log_path = os.path.join(logdir, jid) if not os.path.exists(logdir): try: os.makedirs(logdir) except: print json.dumps({ "failed" : 1, "msg" : "could not create: %s" % logdir }) def _run_command(wrapped_cmd, jid, log_path): logfile = open(log_path, "w") logfile.write(json.dumps({ "started" : 1, "ansible_job_id" : jid })) logfile.close() logfile = open(log_path, "w") result = {} outdata = '' try: cmd = shlex.split(wrapped_cmd) script = subprocess.Popen(cmd, shell=False, stdin=None, stdout=logfile, stderr=logfile) script.communicate() outdata = file(log_path).read() result = json.loads(outdata) except (OSError, IOError), e: result = { "failed": 1, "cmd" : wrapped_cmd, "msg": str(e), } result['ansible_job_id'] = jid logfile.write(json.dumps(result)) except: result = { "failed" : 1, "cmd" : wrapped_cmd, "data" : outdata, # temporary debug only "msg" : traceback.format_exc() } result['ansible_job_id'] = jid logfile.write(json.dumps(result)) logfile.close() # immediately exit this process, leaving an orphaned process # running which immediately forks a supervisory timing process #import logging #import logging.handlers #logger = logging.getLogger("ansible_async") #logger.setLevel(logging.WARNING) #logger.addHandler( logging.handlers.SysLogHandler("/dev/log") ) def debug(msg): #logger.warning(msg) pass try: pid = os.fork() if pid: # Notify the overlord that the async process started # we need to not return immmediately such that the launched command has an attempt # to initialize PRIOR to ansible trying to clean up the launch directory (and argsfile) # this probably could be done with some IPC later. 
Modules should always read # the argsfile at the very first start of their execution anyway time.sleep(1) debug("Return async_wrapper task started.") print json.dumps({ "started" : 1, "ansible_job_id" : jid, "results_file" : log_path }) sys.stdout.flush() sys.exit(0) else: # The actual wrapper process # Daemonize, so we keep on running daemonize_self() # we are now daemonized, create a supervisory process debug("Starting module and watcher") sub_pid = os.fork() if sub_pid: # the parent stops the process after the time limit remaining = int(time_limit) # set the child process group id to kill all children os.setpgid(sub_pid, sub_pid) debug("Start watching %s (%s)"%(sub_pid, remaining)) time.sleep(5) while os.waitpid(sub_pid, os.WNOHANG) == (0, 0): debug("%s still running (%s)"%(sub_pid, remaining)) time.sleep(5) remaining = remaining - 5 if remaining <= 0: debug("Now killing %s"%(sub_pid)) os.killpg(sub_pid, signal.SIGKILL) debug("Sent kill to group %s"%sub_pid) time.sleep(1) sys.exit(0) debug("Done in kid B.") os._exit(0) else: # the child process runs the actual module debug("Start module (%s)"%os.getpid()) _run_command(cmd, jid, log_path) debug("Module complete (%s)"%os.getpid()) sys.exit(0) except Exception, err: debug("error: %s"%(err)) raise err ansible-1.5.4/library/internal/async_status0000664000000000000000000000566312316627017017600 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan , and others # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: async_status short_description: Obtain status of asynchronous task description: - "This module gets the status of an asynchronous task." version_added: "0.5" options: jid: description: - Job or task identifier required: true default: null aliases: [] mode: description: - if C(status), obtain the status; if C(cleanup), clean up the async job cache located in C(~/.ansible_async/) for the specified job I(jid). required: false choices: [ "status", "cleanup" ] default: "status" notes: - See also U(http://docs.ansible.com/playbooks_async.html) requirements: [] author: Michael DeHaan ''' import datetime import traceback def main(): module = AnsibleModule(argument_spec=dict( jid=dict(required=True), mode=dict(default='status', choices=['status','cleanup']), )) mode = module.params['mode'] jid = module.params['jid'] # setup logging directory logdir = os.path.expanduser("~/.ansible_async") log_path = os.path.join(logdir, jid) if not os.path.exists(log_path): module.fail_json(msg="could not find job", ansible_job_id=jid) if mode == 'cleanup': os.unlink(log_path) module.exit_json(ansible_job_id=jid, erased=log_path) # NOT in cleanup mode, assume regular status mode # no remote kill mode currently exists, but probably should # consider log_path + ".pid" file and also unlink that above data = file(log_path).read() try: data = json.loads(data) except Exception, e: if data == '': # file not written yet? 
That means it is running module.exit_json(results_file=log_path, ansible_job_id=jid, started=1) else: module.fail_json(ansible_job_id=jid, results_file=log_path, msg="Could not parse job output: %s" % data) if not 'started' in data: data['finished'] = 1 data['ansible_job_id'] = jid # Fix error: TypeError: exit_json() keywords must be strings data = dict([(str(k), v) for k, v in data.iteritems()]) module.exit_json(**data) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/0000775000000000000000000000000012316627017015053 5ustar rootrootansible-1.5.4/library/database/mysql_user0000664000000000000000000004032112316627017017201 0ustar rootroot#!/usr/bin/python # (c) 2012, Mark Theunissen # Sponsored by Four Kitchens http://fourkitchens.com. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: mysql_user short_description: Adds or removes a user from a MySQL database. description: - Adds or removes a user from a MySQL database. version_added: "0.6" options: name: description: - name of the user (role) to add or remove required: true default: null password: description: - set the user's password required: false default: null host: description: - the 'host' part of the MySQL username required: false default: localhost login_user: description: - The username used to authenticate with required: false default: null login_password: description: - The password used to authenticate with required: false default: null login_host: description: - Host running the database required: false default: localhost login_port: description: - Port of the MySQL server required: false default: 3306 version_added: '1.4' login_unix_socket: description: - The path to a Unix domain socket for local connections required: false default: null priv: description: - "MySQL privileges string in the format: C(db.table:priv1,priv2)" required: false default: null append_privs: description: - Append the privileges defined by priv to the existing ones for this user instead of overwriting existing ones. required: false choices: [ "yes", "no" ] default: "no" version_added: "1.4" state: description: - Whether the user should exist. When C(absent), removes the user. required: false default: present choices: [ "present", "absent" ] check_implicit_admin: description: - Check if mysql allows login as root/nopassword before trying supplied credentials. required: false default: false version_added: "1.3" notes: - Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install python-mysqldb. - Both C(login_password) and C(login_username) are required when you are passing credentials. If none are present, the module will attempt to read the credentials from C(~/.my.cnf), and finally fall back to using the MySQL default login of 'root' with no password. - "MySQL server installs with default login_user of 'root' and no password. 
To secure this user as part of an idempotent playbook, you must create at least two tasks: the first must change the root user's password, without providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file." requirements: [ "ConfigParser", "MySQLdb" ] author: Mark Theunissen ''' EXAMPLES = """ # Create database user with name 'bob' and password '12345' with all database privileges - mysql_user: name=bob password=12345 priv=*.*:ALL state=present # Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION' - mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present # Ensure no user named 'sally' exists, also passing in the auth credentials. - mysql_user: login_user=root login_password=123456 name=sally state=absent # Example privileges string format mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanotherdb.*:ALL # Example using login_unix_socket to connect to server - mysql_user: name=root password=abc123 login_unix_socket=/var/run/mysqld/mysqld.sock # Example .my.cnf file for setting the root password # Note: don't use quotes around the password, because the mysql_user module # will include them in the password but the mysql client will not [client] user=root password=n<_665{vS43y """ import ConfigParser import getpass import tempfile try: import MySQLdb except ImportError: mysqldb_found = False else: mysqldb_found = True # =========================================== # MySQL module specific support methods. # def user_exists(cursor, user, host): cursor.execute("SELECT count(*) FROM user WHERE user = %s AND host = %s", (user,host)) count = cursor.fetchone() return count[0] > 0 def user_add(cursor, user, host, password, new_priv): cursor.execute("CREATE USER %s@%s IDENTIFIED BY %s", (user,host,password)) if new_priv is not None: for db_table, priv in new_priv.iteritems(): privileges_grant(cursor, user,host,db_table,priv) return True def user_mod(cursor, user, host, password, new_priv, append_privs): changed = False grant_option = False # Handle passwords. if password is not None: cursor.execute("SELECT password FROM user WHERE user = %s AND host = %s", (user,host)) current_pass_hash = cursor.fetchone() cursor.execute("SELECT PASSWORD(%s)", (password,)) new_pass_hash = cursor.fetchone() if current_pass_hash[0] != new_pass_hash[0]: cursor.execute("SET PASSWORD FOR %s@%s = PASSWORD(%s)", (user,host,password)) changed = True # Handle privileges. if new_priv is not None: curr_priv = privileges_get(cursor, user,host) # If the user has privileges on a db.table that doesn't appear at all in # the new specification, then revoke all privileges on it. for db_table, priv in curr_priv.iteritems(): # If the user has the GRANT OPTION on a db.table, revoke it first. if "GRANT" in priv: grant_option = True if db_table not in new_priv: if user != "root" and "PROXY" not in priv and not append_privs: privileges_revoke(cursor, user,host,db_table,grant_option) changed = True # If the user doesn't currently have any privileges on a db.table, then # we can perform a straight grant operation. for db_table, priv in new_priv.iteritems(): if db_table not in curr_priv: privileges_grant(cursor, user,host,db_table,priv) changed = True # If the db.table specification exists in both the user's current privileges # and in the new privileges, then we need to see if there's a difference. 
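    # (Added worked example: with curr_priv = {'`mydb`.*': ['SELECT']} and
    # new_priv = {'`mydb`.*': ['SELECT', 'INSERT']}, the symmetric
    # difference computed below is set(['INSERT']), so that db.table is
    # revoked and re-granted with the new privilege list.)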
db_table_intersect = set(new_priv.keys()) & set(curr_priv.keys()) for db_table in db_table_intersect: priv_diff = set(new_priv[db_table]) ^ set(curr_priv[db_table]) if (len(priv_diff) > 0): privileges_revoke(cursor, user,host,db_table,grant_option) privileges_grant(cursor, user,host,db_table,new_priv[db_table]) changed = True return changed def user_delete(cursor, user, host): cursor.execute("DROP USER %s@%s", (user,host)) return True def privileges_get(cursor, user,host): """ MySQL doesn't have a better method of getting privileges aside from the SHOW GRANTS query syntax, which requires us to then parse the returned string. Here's an example of the string that is returned from MySQL: GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass'; This function makes the query and returns a dictionary containing the results. The dictionary format is the same as that returned by privileges_unpack() below. """ output = {} cursor.execute("SHOW GRANTS FOR %s@%s", (user,host)) grants = cursor.fetchall() def pick(x): if x == 'ALL PRIVILEGES': return 'ALL' else: return x for grant in grants: res = re.match("GRANT (.+) ON (.+) TO '.+'@'.+'( IDENTIFIED BY PASSWORD '.+')? ?(.*)", grant[0]) if res is None: module.fail_json(msg="unable to parse the MySQL grant string") privileges = res.group(1).split(", ") privileges = [ pick(x) for x in privileges] if "WITH GRANT OPTION" in res.group(4): privileges.append('GRANT') db = res.group(2) output[db] = privileges return output def privileges_unpack(priv): """ Take a privileges string, typically passed as a parameter, and unserialize it into a dictionary, the same format as privileges_get() above. We have this custom format to avoid using YAML/JSON strings inside YAML playbooks. Example of a privileges string: mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanother.*:ALL The privilege USAGE stands for no privileges, so we add that in on *.* if it's not specified in the string, as MySQL will always provide this by default. 
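
    An added illustration (traced by hand, not a verified doctest) of the
    resulting mapping:

        'mydb.*:INSERT,UPDATE'  ->  {'`mydb`.*': ['INSERT', 'UPDATE'],
                                     '*.*': ['USAGE']}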
""" output = {} for item in priv.split('/'): pieces = item.split(':') if pieces[0].find('.') != -1: pieces[0] = pieces[0].split('.') for idx, piece in enumerate(pieces): if pieces[0][idx] != "*": pieces[0][idx] = "`" + pieces[0][idx] + "`" pieces[0] = '.'.join(pieces[0]) output[pieces[0]] = pieces[1].upper().split(',') if '*.*' not in output: output['*.*'] = ['USAGE'] return output def privileges_revoke(cursor, user,host,db_table,grant_option): if grant_option: query = "REVOKE GRANT OPTION ON %s FROM '%s'@'%s'" % (db_table,user,host) cursor.execute(query) query = "REVOKE ALL PRIVILEGES ON %s FROM '%s'@'%s'" % (db_table,user,host) cursor.execute(query) def privileges_grant(cursor, user,host,db_table,priv): priv_string = ",".join(filter(lambda x: x != 'GRANT', priv)) query = "GRANT %s ON %s TO '%s'@'%s'" % (priv_string,db_table,user,host) if 'GRANT' in priv: query = query + " WITH GRANT OPTION" cursor.execute(query) def strip_quotes(s): """ Remove surrounding single or double quotes >>> print strip_quotes('hello') hello >>> print strip_quotes('"hello"') hello >>> print strip_quotes("'hello'") hello >>> print strip_quotes("'hello") 'hello """ single_quote = "'" double_quote = '"' if s.startswith(single_quote) and s.endswith(single_quote): s = s.strip(single_quote) elif s.startswith(double_quote) and s.endswith(double_quote): s = s.strip(double_quote) return s def config_get(config, section, option): """ Calls ConfigParser.get and strips quotes See: http://dev.mysql.com/doc/refman/5.0/en/option-files.html """ return strip_quotes(config.get(section, option)) def _safe_cnf_load(config, path): data = {'user':'', 'password':''} # read in user/pass f = open(path, 'r') for line in f.readlines(): line = line.strip() if line.startswith('user='): data['user'] = line.split('=', 1)[1].strip() if line.startswith('password=') or line.startswith('pass='): data['password'] = line.split('=', 1)[1].strip() f.close() # write out a new cnf file with only user/pass fh, newpath = tempfile.mkstemp(prefix=path + '.') f = open(newpath, 'wb') f.write('[client]\n') f.write('user=%s\n' % data['user']) f.write('password=%s\n' % data['password']) f.close() config.readfp(open(newpath)) os.remove(newpath) return config def load_mycnf(): config = ConfigParser.RawConfigParser() mycnf = os.path.expanduser('~/.my.cnf') if not os.path.exists(mycnf): return False try: config.readfp(open(mycnf)) except (IOError): return False except: config = _safe_cnf_load(config, mycnf) # We support two forms of passwords in .my.cnf, both pass= and password=, # as these are both supported by MySQL. try: passwd = config_get(config, 'client', 'password') except (ConfigParser.NoOptionError): try: passwd = config_get(config, 'client', 'pass') except (ConfigParser.NoOptionError): return False # If .my.cnf doesn't specify a user, default to user login name try: user = config_get(config, 'client', 'user') except (ConfigParser.NoOptionError): user = getpass.getuser() creds = dict(user=user,passwd=passwd) return creds def connect(module, login_user, login_password): if module.params["login_unix_socket"]: db_connection = MySQLdb.connect(host=module.params["login_host"], unix_socket=module.params["login_unix_socket"], user=login_user, passwd=login_password, db="mysql") else: db_connection = MySQLdb.connect(host=module.params["login_host"], port=int(module.params["login_port"]), user=login_user, passwd=login_password, db="mysql") return db_connection.cursor() # =========================================== # Module execution. 
# def main(): module = AnsibleModule( argument_spec = dict( login_user=dict(default=None), login_password=dict(default=None), login_host=dict(default="localhost"), login_port=dict(default="3306"), login_unix_socket=dict(default=None), user=dict(required=True, aliases=['name']), password=dict(default=None), host=dict(default="localhost"), state=dict(default="present", choices=["absent", "present"]), priv=dict(default=None), append_privs=dict(type="bool", default="no"), check_implicit_admin=dict(default=False), ) ) user = module.params["user"] password = module.params["password"] host = module.params["host"] state = module.params["state"] priv = module.params["priv"] check_implicit_admin = module.params['check_implicit_admin'] append_privs = module.boolean(module.params["append_privs"]) if not mysqldb_found: module.fail_json(msg="the python mysqldb module is required") if priv is not None: try: priv = privileges_unpack(priv) except: module.fail_json(msg="invalid privileges string") # Either the caller passes both a username and password with which to connect to # mysql, or they pass neither and allow this module to read the credentials from # ~/.my.cnf. login_password = module.params["login_password"] login_user = module.params["login_user"] if login_user is None and login_password is None: mycnf_creds = load_mycnf() if mycnf_creds is False: login_user = "root" login_password = "" else: login_user = mycnf_creds["user"] login_password = mycnf_creds["passwd"] elif login_password is None or login_user is None: module.fail_json(msg="when supplying login arguments, both login_user and login_password must be provided") cursor = None try: if check_implicit_admin: try: cursor = connect(module, 'root', '') except: pass if not cursor: cursor = connect(module, login_user, login_password) except Exception, e: module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials") if state == "present": if user_exists(cursor, user, host): changed = user_mod(cursor, user, host, password, priv, append_privs) else: if password is None: module.fail_json(msg="password parameter required when adding a user") changed = user_add(cursor, user, host, password, priv) elif state == "absent": if user_exists(cursor, user, host): changed = user_delete(cursor, user, host) else: changed = False module.exit_json(changed=changed, user=user) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/mongodb_user0000664000000000000000000001677212316627017017476 0ustar rootroot#!/usr/bin/python # (c) 2012, Elliott Foster # Sponsored by Four Kitchens http://fourkitchens.com. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: mongodb_user short_description: Adds or removes a user from a MongoDB database. description: - Adds or removes a user from a MongoDB database. 
version_added: "1.1"
options:
    login_user:
        description:
            - The username used to authenticate with
        required: false
        default: null
    login_password:
        description:
            - The password used to authenticate with
        required: false
        default: null
    login_host:
        description:
            - The host running the database
        required: false
        default: localhost
    login_port:
        description:
            - The port to connect to
        required: false
        default: 27017
    database:
        description:
            - The name of the database to add/remove the user from
        required: true
    user:
        description:
            - The name of the user to add or remove
        required: true
        default: null
    password:
        description:
            - The password to use for the user
        required: false
        default: null
    roles:
        version_added: "1.3"
        description:
            - "The database user roles. Valid values are one or more of the following: 'read', 'readWrite', 'dbAdmin', 'userAdmin', 'clusterAdmin', 'readAnyDatabase', 'readWriteAnyDatabase', 'userAdminAnyDatabase', 'dbAdminAnyDatabase'"
            - This param requires mongodb 2.4+ and pymongo 2.5+
        required: false
        default: "readWrite"
    state:
        description:
            - The database user state
        required: false
        default: present
        choices: [ "present", "absent" ]
notes:
    - Requires the pymongo Python package on the remote host, version 2.4.2+. This
      can be installed using pip or the OS package manager. See
      http://api.mongodb.org/python/current/installation.html
requirements: [ "pymongo" ]
author: Elliott Foster
'''

EXAMPLES = '''
# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user: database=burgers name=bob password=12345 state=present

# Delete 'burgers' database user with name 'bob'.
- mongodb_user: database=burgers name=bob state=absent

# Define more users with various specific roles (if not defined, no roles are assigned, and the user will be added in the pre-MongoDB-2.2 style)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present
'''

import ConfigParser

try:
    from pymongo.errors import ConnectionFailure
    from pymongo.errors import OperationFailure
    from pymongo import MongoClient
except ImportError:
    try:  # for older PyMongo 2.2
        from pymongo import Connection as MongoClient
    except ImportError:
        pymongo_found = False
    else:
        pymongo_found = True
else:
    pymongo_found = True

# =========================================
# MongoDB module specific support methods.
#

def user_add(module, client, db_name, user, password, roles):
    try:
        db = client[db_name]
        if roles is None:
            db.add_user(user, password, False)
        else:
            try:
                db.add_user(user, password, None, roles=roles)
            except:
                module.fail_json(msg='problem adding user; you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param')
    except OperationFailure:
        return False

    return True

def user_remove(client, db_name, user):
    try:
        db = client[db_name]
        db.remove_user(user)
    except OperationFailure:
        return False

    return True

def load_mongocnf():
    config = ConfigParser.RawConfigParser()
    mongocnf = os.path.expanduser('~/.mongodb.cnf')
    if not os.path.exists(mongocnf):
        return False

    try:
        config.readfp(open(mongocnf))
        creds = dict(
            user=config.get('client', 'user'),
            password=config.get('client', 'pass')
        )
    except (ConfigParser.NoOptionError, IOError):
        return False

    return creds

# =========================================
# Module execution.
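# A bare-bones sketch, added for illustration, of the pymongo calls that
# user_add()/user_remove() above wrap; the host, names and passwords are
# placeholder values:
#
#   client = MongoClient('localhost', 27017)
#   db = client['burgers']
#   db.add_user('bob', '12345', None, roles=['readWrite'])  # mongodb 2.4+/pymongo 2.5+
#   db.add_user('bob', '12345', False)                      # older, pre-roles style
#   db.remove_user('bob')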
# def main(): module = AnsibleModule( argument_spec = dict( login_user=dict(default=None), login_password=dict(default=None), login_host=dict(default='localhost'), login_port=dict(default='27017'), database=dict(required=True, aliases=['db']), user=dict(required=True, aliases=['name']), password=dict(aliases=['pass']), roles=dict(default=None, type='list'), state=dict(default='present', choices=['absent', 'present']), ) ) if not pymongo_found: module.fail_json(msg='the python pymongo module is required') login_user = module.params['login_user'] login_password = module.params['login_password'] login_host = module.params['login_host'] login_port = module.params['login_port'] db_name = module.params['database'] user = module.params['user'] password = module.params['password'] roles = module.params['roles'] state = module.params['state'] try: client = MongoClient(login_host, int(login_port)) if login_user is None and login_password is None: mongocnf_creds = load_mongocnf() if mongocnf_creds is not False: login_user = mongocnf_creds['user'] login_password = mongocnf_creds['password'] elif login_password is None and login_user is not None: module.fail_json(msg='when supplying login arguments, both login_user and login_password must be provided') if login_user is not None and login_password is not None: client.admin.authenticate(login_user, login_password) except ConnectionFailure, e: module.fail_json(msg='unable to connect to database, check login_user and login_password are correct') if state == 'present': if password is None: module.fail_json(msg='password parameter required when adding a user') if user_add(module, client, db_name, user, password, roles) is not True: module.fail_json(msg='Unable to add or update user, check login_user and login_password are correct and that this user has access to the admin collection') elif state == 'absent': if user_remove(client, db_name, user) is not True: module.fail_json(msg='Unable to remove user, check login_user and login_password are correct and that this user has access to the admin collection') module.exit_json(changed=True, user=user) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/mysql_db0000664000000000000000000002517212316627017016617 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Mark Theunissen # Sponsored by Four Kitchens http://fourkitchens.com. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: mysql_db short_description: Add or remove MySQL databases from a remote host. description: - Add or remove MySQL databases from a remote host. 
version_added: "0.6" options: name: description: - name of the database to add or remove required: true default: null aliases: [ db ] login_user: description: - The username used to authenticate with required: false default: null login_password: description: - The password used to authenticate with required: false default: null login_host: description: - Host running the database required: false default: localhost login_port: description: - Port of the MySQL server required: false default: 3306 login_unix_socket: description: - The path to a Unix domain socket for local connections required: false default: null state: description: - The database state required: false default: present choices: [ "present", "absent", "dump", "import" ] collation: description: - Collation mode required: false default: null encoding: description: - Encoding mode required: false default: null target: description: - Location, on the remote host, of the dump file to read from or write to. Uncompressed SQL files (C(.sql)) as well as bzip2 (C(.bz2)) and gzip (C(.gz)) compressed files are supported. required: false notes: - Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install python-mysqldb. (See M(apt).) - Both I(login_password) and I(login_user) are required when you are passing credentials. If none are present, the module will attempt to read the credentials from C(~/.my.cnf), and finally fall back to using the MySQL default login of C(root) with no password. requirements: [ ConfigParser ] author: Mark Theunissen ''' EXAMPLES = ''' # Create a new database with name 'bobdata' - mysql_db: name=bobdata state=present # Copy database dump file to remote host and restore it to database 'my_db' - copy: src=dump.sql.bz2 dest=/tmp - mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2 ''' import ConfigParser import os import pipes try: import MySQLdb except ImportError: mysqldb_found = False else: mysqldb_found = True # =========================================== # MySQL module specific support methods. 
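# For reference, an added note (not original code): for a hypothetical
# gzip-compressed target, db_dump() below assembles a shell pipeline along
# these lines:
#
#   mysqldump --quick --user=root --password='secret' \
#       --host=localhost --port=3306 bobdata | gzip > /tmp/bobdata.sql.gz
#
# and db_import() reverses it:
#
#   gunzip < /tmp/bobdata.sql.gz | mysql --user=root --password='secret' \
#       --host=localhost --port=3306 -D bobdata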
# def db_exists(cursor, db): res = cursor.execute("SHOW DATABASES LIKE %s", (db,)) return bool(res) def db_delete(cursor, db): query = "DROP DATABASE `%s`" % db cursor.execute(query) return True def db_dump(module, host, user, password, db_name, target, port, socket=None): cmd = module.get_bin_path('mysqldump', True) cmd += " --quick --user=%s --password='%s'" % (pipes.quote(user), pipes.quote(password)) if socket is not None: cmd += " --socket=%s" % pipes.quote(socket) else: cmd += " --host=%s --port=%s" % (pipes.quote(host), pipes.quote(port)) cmd += " %s" % pipes.quote(db_name) if os.path.splitext(target)[-1] == '.gz': cmd = cmd + ' | gzip > ' + pipes.quote(target) elif os.path.splitext(target)[-1] == '.bz2': cmd = cmd + ' | bzip2 > ' + pipes.quote(target) else: cmd += " > %s" % pipes.quote(target) rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True) return rc, stdout, stderr def db_import(module, host, user, password, db_name, target, port, socket=None): cmd = module.get_bin_path('mysql', True) cmd += " --user=%s --password='%s'" % (pipes.quote(user), pipes.quote(password)) if socket is not None: cmd += " --socket=%s" % pipes.quote(socket) else: cmd += " --host=%s --port=%s" % (pipes.quote(host), pipes.quote(port)) cmd += " -D %s" % pipes.quote(db_name) if os.path.splitext(target)[-1] == '.gz': cmd = 'gunzip < ' + pipes.quote(target) + ' | ' + cmd elif os.path.splitext(target)[-1] == '.bz2': cmd = 'bunzip2 < ' + pipes.quote(target) + ' | ' + cmd else: cmd += " < %s" % pipes.quote(target) rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True) return rc, stdout, stderr def db_create(cursor, db, encoding, collation): if encoding: encoding = " CHARACTER SET %s" % encoding if collation: collation = " COLLATE %s" % collation query = "CREATE DATABASE `%s`%s%s" % (db, encoding, collation) res = cursor.execute(query) return True def strip_quotes(s): """ Remove surrounding single or double quotes >>> print strip_quotes('hello') hello >>> print strip_quotes('"hello"') hello >>> print strip_quotes("'hello'") hello >>> print strip_quotes("'hello") 'hello """ single_quote = "'" double_quote = '"' if s.startswith(single_quote) and s.endswith(single_quote): s = s.strip(single_quote) elif s.startswith(double_quote) and s.endswith(double_quote): s = s.strip(double_quote) return s def config_get(config, section, option): """ Calls ConfigParser.get and strips quotes See: http://dev.mysql.com/doc/refman/5.0/en/option-files.html """ return strip_quotes(config.get(section, option)) def load_mycnf(): config = ConfigParser.RawConfigParser() mycnf = os.path.expanduser('~/.my.cnf') if not os.path.exists(mycnf): return False try: config.readfp(open(mycnf)) except (IOError): return False # We support two forms of passwords in .my.cnf, both pass= and password=, # as these are both supported by MySQL. try: passwd = config_get(config, 'client', 'password') except (ConfigParser.NoOptionError): try: passwd = config_get(config, 'client', 'pass') except (ConfigParser.NoOptionError): return False try: creds = dict(user=config_get(config, 'client', 'user'),passwd=passwd) except (ConfigParser.NoOptionError): return False return creds # =========================================== # Module execution. 
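# A compact sketch, added for clarity (not original code), of the MySQLdb
# calls main() below performs for state=present; the credentials are
# placeholders:
#
#   import MySQLdb
#   conn = MySQLdb.connect(host='localhost', port=3306, user='root',
#                          passwd='secret', db='mysql')
#   cursor = conn.cursor()
#   if not db_exists(cursor, 'bobdata'):
#       db_create(cursor, 'bobdata', '', '')  # no explicit encoding/collation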
# def main(): module = AnsibleModule( argument_spec = dict( login_user=dict(default=None), login_password=dict(default=None), login_host=dict(default="localhost"), login_port=dict(default="3306"), login_unix_socket=dict(default=None), db=dict(required=True, aliases=['name']), encoding=dict(default=""), collation=dict(default=""), target=dict(default=None), state=dict(default="present", choices=["absent", "present","dump", "import"]), ) ) if not mysqldb_found: module.fail_json(msg="the python mysqldb module is required") db = module.params["db"] encoding = module.params["encoding"] collation = module.params["collation"] state = module.params["state"] target = module.params["target"] # Either the caller passes both a username and password with which to connect to # mysql, or they pass neither and allow this module to read the credentials from # ~/.my.cnf. login_password = module.params["login_password"] login_user = module.params["login_user"] if login_user is None and login_password is None: mycnf_creds = load_mycnf() if mycnf_creds is False: login_user = "root" login_password = "" else: login_user = mycnf_creds["user"] login_password = mycnf_creds["passwd"] elif login_password is None or login_user is None: module.fail_json(msg="when supplying login arguments, both login_user and login_password must be provided") login_host = module.params["login_host"] if state in ['dump','import']: if target is None: module.fail_json(msg="with state=%s target is required" % (state)) connect_to_db = db else: connect_to_db = 'mysql' try: if module.params["login_unix_socket"]: db_connection = MySQLdb.connect(host=module.params["login_host"], unix_socket=module.params["login_unix_socket"], user=login_user, passwd=login_password, db=connect_to_db) else: db_connection = MySQLdb.connect(host=module.params["login_host"], port=int(module.params["login_port"]), user=login_user, passwd=login_password, db=connect_to_db) cursor = db_connection.cursor() except Exception, e: module.fail_json(msg="unable to connect, check login_user and login_password are correct, or alternatively check ~/.my.cnf contains credentials") changed = False if db_exists(cursor, db): if state == "absent": changed = db_delete(cursor, db) elif state == "dump": rc, stdout, stderr = db_dump(module, login_host, login_user, login_password, db, target, port=module.params['login_port'], socket=module.params['login_unix_socket']) if rc != 0: module.fail_json(msg="%s" % stderr) else: module.exit_json(changed=True, db=db, msg=stdout) elif state == "import": rc, stdout, stderr = db_import(module, login_host, login_user, login_password, db, target, port=module.params['login_port'], socket=module.params['login_unix_socket']) if rc != 0: module.fail_json(msg="%s" % stderr) else: module.exit_json(changed=True, db=db, msg=stdout) else: if state == "present": changed = db_create(cursor, db, encoding, collation) module.exit_json(changed=changed, db=db) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/redis0000664000000000000000000002137412316627017016113 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: redis
short_description: Various redis commands, slave and flush
description:
   - Unified utility to interact with redis instances. 'slave' sets a redis
     instance in slave or master mode; 'flush' flushes the entire instance or
     a specified db.
version_added: "1.3"
options:
  command:
    description:
      - The selected redis command
    required: true
    default: null
    choices: [ "slave", "flush" ]
  login_password:
    description:
      - The password used to authenticate with (usually not used)
    required: false
    default: null
  login_host:
    description:
      - The host running the database
    required: false
    default: localhost
  login_port:
    description:
      - The port to connect to
    required: false
    default: 6379
  master_host:
    description:
      - The host of the master instance [slave command]
    required: false
    default: null
  master_port:
    description:
      - The port of the master instance [slave command]
    required: false
    default: null
  slave_mode:
    description:
      - The mode of the redis instance [slave command]
    required: false
    default: slave
    choices: [ "master", "slave" ]
  db:
    description:
      - The database to flush (used in db mode) [flush command]
    required: false
    default: null
  flush_mode:
    description:
      - Type of flush (all the dbs in a redis instance or a specific one)
        [flush command]
    required: false
    default: all
    choices: [ "all", "db" ]
notes:
   - Requires the redis-py Python package on the remote host. You can
     install it with pip (pip install redis) or with a package manager.
     https://github.com/andymccurdy/redis-py
   - If the redis master instance that we are making a slave of is password
     protected, this needs to be set in redis.conf via the masterauth variable.
requirements: [ redis ]
author: Xabier Larrakoetxea
'''

EXAMPLES = '''
# Set local redis instance to be slave of melee.island on port 6377
- redis: command=slave master_host=melee.island master_port=6377

# Deactivate slave mode
- redis: command=slave slave_mode=master

# Flush all the redis databases
- redis: command=flush flush_mode=all

# Flush only one db in a redis instance
- redis: command=flush db=1 flush_mode=db
'''

try:
    import redis
except ImportError:
    redis_found = False
else:
    redis_found = True

# ===========================================
# Redis module specific support methods.
#

def set_slave_mode(client, master_host, master_port):
    try:
        return client.slaveof(master_host, master_port)
    except Exception:
        return False

def set_master_mode(client):
    try:
        return client.slaveof()
    except Exception:
        return False

def flush(client, db=None):
    try:
        if type(db) != int:
            return client.flushall()
        else:
            # The passed client has been connected to the database already
            return client.flushdb()
    except Exception:
        return False

# ===========================================
# Module execution.
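# Illustrative sketch (not part of the original module): the helpers above
# rely on redis-py's slaveof() method, whose two call forms map onto the two
# slave_mode values (the host and port below are placeholders):
#
#   import redis
#   client = redis.StrictRedis(host='127.0.0.1', port=6379)
#   client.slaveof('10.0.0.1', 6379)   # replicate from 10.0.0.1:6379 (slave)
#   client.slaveof()                   # no arguments: promote back to master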
#

def main():
    module = AnsibleModule(
        argument_spec = dict(
            command=dict(default=None, choices=['slave', 'flush']),
            login_password=dict(default=None),
            login_host=dict(default='localhost'),
            login_port=dict(default='6379'),
            master_host=dict(default=None),
            master_port=dict(default=None),
            slave_mode=dict(default='slave', choices=['master', 'slave']),
            db=dict(default=None),
            flush_mode=dict(default='all', choices=['all', 'db']),
        ),
        supports_check_mode = True
    )

    if not redis_found:
        module.fail_json(msg="python redis module is required")

    login_password = module.params['login_password']
    login_host = module.params['login_host']
    login_port = int(module.params['login_port'])
    command = module.params['command']

    # Slave Command section -----------
    if command == "slave":
        master_host = module.params['master_host']
        master_port = module.params['master_port']
        try:
            master_port = int(module.params['master_port'])
        except Exception:
            pass
        mode = module.params['slave_mode']

        # Check if we have all the data
        if mode == "slave":  # Only need data if we want to be slave
            if not master_host:
                module.fail_json(
                    msg='In slave mode master host must be provided')

            if not master_port:
                module.fail_json(
                    msg='In slave mode master port must be provided')

        # Connect and check
        r = redis.StrictRedis(host=login_host,
                              port=login_port,
                              password=login_password)
        try:
            r.ping()
        except Exception, e:
            module.fail_json(msg="unable to connect to database: %s" % e)

        # Check if we are already in the mode that we want
        info = r.info()
        if mode == "master" and info["role"] == "master":
            module.exit_json(changed=False, mode=mode)

        elif mode == "slave" and\
             info["role"] == "slave" and\
             info["master_host"] == master_host and\
             info["master_port"] == master_port:
            status = {
                'status': mode,
                'master_host': master_host,
                'master_port': master_port,
            }
            module.exit_json(changed=False, mode=status)
        else:
            # Do the work
            # (check check_mode before running commands so the commands
            # aren't evaluated if not necessary)
            if mode == "slave":
                if module.check_mode or\
                   set_slave_mode(r, master_host, master_port):
                    info = r.info()
                    status = {
                        'status': mode,
                        'master_host': master_host,
                        'master_port': master_port,
                    }
                    module.exit_json(changed=True, mode=status)
                else:
                    module.fail_json(msg='Unable to set slave mode')

            else:
                if module.check_mode or set_master_mode(r):
                    module.exit_json(changed=True, mode=mode)
                else:
                    module.fail_json(msg='Unable to set master mode')

    # flush Command section -----------
    elif command == "flush":
        try:
            db = int(module.params['db'])
        except Exception:
            db = 0
        mode = module.params['flush_mode']

        # Check if we have all the data
        if mode == "db":
            if type(db) != int:
                module.fail_json(
                    msg="In db mode the db number must be provided")

        # Connect and check
        r = redis.StrictRedis(host=login_host,
                              port=login_port,
                              password=login_password,
                              db=db)
        try:
            r.ping()
        except Exception, e:
            module.fail_json(msg="unable to connect to database: %s" % e)

        # Do the work
        # (check check_mode before running commands so the commands aren't
        # evaluated if not necessary)
        if mode == "all":
            if module.check_mode or flush(r):
                module.exit_json(changed=True, flushed=True)
            else:  # Flush never fails :)
                module.fail_json(msg="Unable to flush all databases")

        else:
            if module.check_mode or flush(r, db):
                module.exit_json(changed=True, flushed=True, db=db)
            else:  # Flush never fails :)
                module.fail_json(msg="Unable to flush '%d' database" % db)

    else:
        module.fail_json(msg='A valid command must be provided')

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/database/postgresql_db0000664000000000000000000002373112316627017017654 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: postgresql_db
short_description: Add or remove PostgreSQL databases from a remote host.
description:
   - Add or remove PostgreSQL databases from a remote host.
version_added: "0.6"
options:
  name:
    description:
      - name of the database to add or remove
    required: true
    default: null
  login_user:
    description:
      - The username used to authenticate with
    required: false
    default: null
  login_password:
    description:
      - The password used to authenticate with
    required: false
    default: null
  login_host:
    description:
      - Host running the database
    required: false
    default: localhost
  owner:
    description:
      - Name of the role to set as owner of the database
    required: false
    default: null
  port:
    description:
      - Database port to connect to.
    required: false
    default: 5432
  template:
    description:
      - Template used to create the database
    required: false
    default: null
  encoding:
    description:
      - Encoding of the database
    required: false
    default: null
  lc_collate:
    description:
      - Collation order (LC_COLLATE) to use in the database. Must match
        collation order of template database unless C(template0) is used
        as template.
    required: false
    default: null
  lc_ctype:
    description:
      - Character classification (LC_CTYPE) to use in the database (e.g.
        lower, upper, ...) Must match LC_CTYPE of template database unless
        C(template0) is used as template.
    required: false
    default: null
  state:
    description:
      - The database state
    required: false
    default: present
    choices: [ "present", "absent" ]
notes:
   - The default authentication assumes that you are either logging in as or
     sudo'ing to the C(postgres) account on the host.
   - This module uses I(psycopg2), a Python PostgreSQL database adapter. You
     must ensure that psycopg2 is installed on the host before using this
     module. If the remote host is the PostgreSQL server (which is the default
     case), then PostgreSQL must also be installed on the remote host. For
     Ubuntu-based systems, install the C(postgresql), C(libpq-dev), and
     C(python-psycopg2) packages on the remote host before using this module.
requirements: [ psycopg2 ]
author: Lorin Hochstein
'''

EXAMPLES = '''
# Create a new database with name "acme"
- postgresql_db: name=acme

# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme encoding='UTF-8' lc_collate='de_DE.UTF-8' lc_ctype='de_DE.UTF-8' template='template0'
'''

try:
    import psycopg2
    import psycopg2.extras
except ImportError:
    postgresqldb_found = False
else:
    postgresqldb_found = True

class NotSupportedError(Exception):
    pass

# ===========================================
# PostgreSQL module specific support methods.
#

def set_owner(cursor, db, owner):
    query = "ALTER DATABASE \"%s\" OWNER TO \"%s\"" % (db, owner)
    cursor.execute(query)
    return True

def get_encoding_id(cursor, encoding):
    query = "SELECT pg_char_to_encoding(%(encoding)s) AS encoding_id;"
    cursor.execute(query, {'encoding': encoding})
    return cursor.fetchone()['encoding_id']

def get_db_info(cursor, db):
    query = """
    SELECT rolname AS owner,
           pg_encoding_to_char(encoding) AS encoding, encoding AS encoding_id,
           datcollate AS lc_collate, datctype AS lc_ctype
    FROM pg_database JOIN pg_roles ON pg_roles.oid = pg_database.datdba
    WHERE datname = %(db)s
    """
    cursor.execute(query, {'db': db})
    return cursor.fetchone()

def db_exists(cursor, db):
    query = "SELECT * FROM pg_database WHERE datname=%(db)s"
    cursor.execute(query, {'db': db})
    return cursor.rowcount == 1

def db_delete(cursor, db):
    if db_exists(cursor, db):
        query = "DROP DATABASE \"%s\"" % db
        cursor.execute(query)
        return True
    else:
        return False

def db_create(cursor, db, owner, template, encoding, lc_collate, lc_ctype):
    if not db_exists(cursor, db):
        if owner:
            owner = " OWNER \"%s\"" % owner
        if template:
            template = " TEMPLATE \"%s\"" % template
        if encoding:
            encoding = " ENCODING '%s'" % encoding
        if lc_collate:
            lc_collate = " LC_COLLATE '%s'" % lc_collate
        if lc_ctype:
            lc_ctype = " LC_CTYPE '%s'" % lc_ctype
        query = 'CREATE DATABASE "%s"%s%s%s%s%s' % (db, owner,
                                                    template, encoding,
                                                    lc_collate, lc_ctype)
        cursor.execute(query)
        return True
    else:
        db_info = get_db_info(cursor, db)
        if (encoding and
            get_encoding_id(cursor, encoding) != db_info['encoding_id']):
            raise NotSupportedError(
                'Changing database encoding is not supported. '
                'Current encoding: %s' % db_info['encoding']
            )
        elif lc_collate and lc_collate != db_info['lc_collate']:
            raise NotSupportedError(
                'Changing LC_COLLATE is not supported. '
                'Current LC_COLLATE: %s' % db_info['lc_collate']
            )
        elif lc_ctype and lc_ctype != db_info['lc_ctype']:
            raise NotSupportedError(
                'Changing LC_CTYPE is not supported. '
                'Current LC_CTYPE: %s' % db_info['lc_ctype']
            )
        elif owner and owner != db_info['owner']:
            return set_owner(cursor, db, owner)
        else:
            return False

def db_matches(cursor, db, owner, template, encoding, lc_collate, lc_ctype):
    if not db_exists(cursor, db):
        return False
    else:
        db_info = get_db_info(cursor, db)
        if (encoding and
            get_encoding_id(cursor, encoding) != db_info['encoding_id']):
            return False
        elif lc_collate and lc_collate != db_info['lc_collate']:
            return False
        elif lc_ctype and lc_ctype != db_info['lc_ctype']:
            return False
        elif owner and owner != db_info['owner']:
            return False
        else:
            return True

# ===========================================
# Module execution.
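# Illustrative note (not part of the original module): db_create() above
# concatenates the optional clauses into a single statement, so a
# hypothetical call such as
#
#   db_create(cursor, 'acme', 'django', 'template0',
#             'UTF-8', 'de_DE.UTF-8', 'de_DE.UTF-8')
#
# would execute roughly:
#
#   CREATE DATABASE "acme" OWNER "django" TEMPLATE "template0"
#       ENCODING 'UTF-8' LC_COLLATE 'de_DE.UTF-8' LC_CTYPE 'de_DE.UTF-8'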
# def main(): module = AnsibleModule( argument_spec=dict( login_user=dict(default="postgres"), login_password=dict(default=""), login_host=dict(default=""), port=dict(default="5432"), db=dict(required=True, aliases=['name']), owner=dict(default=""), template=dict(default=""), encoding=dict(default=""), lc_collate=dict(default=""), lc_ctype=dict(default=""), state=dict(default="present", choices=["absent", "present"]), ), supports_check_mode = True ) if not postgresqldb_found: module.fail_json(msg="the python psycopg2 module is required") db = module.params["db"] port = module.params["port"] owner = module.params["owner"] template = module.params["template"] encoding = module.params["encoding"] lc_collate = module.params["lc_collate"] lc_ctype = module.params["lc_ctype"] state = module.params["state"] changed = False # To use defaults values, keyword arguments must be absent, so # check which values are empty and don't include in the **kw # dictionary params_map = { "login_host":"host", "login_user":"user", "login_password":"password", "port":"port" } kw = dict( (params_map[k], v) for (k, v) in module.params.iteritems() if k in params_map and v != '' ) try: db_connection = psycopg2.connect(database="template1", **kw) # Enable autocommit so we can create databases if psycopg2.__version__ >= '2.4.2': db_connection.autocommit = True else: db_connection.set_isolation_level(psycopg2 .extensions .ISOLATION_LEVEL_AUTOCOMMIT) cursor = db_connection.cursor( cursor_factory=psycopg2.extras.DictCursor) except Exception, e: module.fail_json(msg="unable to connect to database: %s" % e) try: if module.check_mode: if state == "absent": changed = not db_exists(cursor, db) elif state == "present": changed = not db_matches(cursor, db, owner, template, encoding, lc_collate, lc_ctype) module.exit_json(changed=changed,db=db) if state == "absent": changed = db_delete(cursor, db) elif state == "present": changed = db_create(cursor, db, owner, template, encoding, lc_collate, lc_ctype) except NotSupportedError, e: module.fail_json(msg=str(e)) except Exception, e: module.fail_json(msg="Database query failed: %s" % e) module.exit_json(changed=changed, db=db) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/riak0000664000000000000000000001744312316627017015735 0ustar rootroot#!/usr/bin/env python # -*- coding: utf-8 -*- # (c) 2013, James Martin , Drew Kerrigan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: riak short_description: This module handles some common Riak operations description: - This module can be used to join nodes to a cluster, check the status of the cluster. version_added: "1.2" options: command: description: - The command you would like to perform against the cluster. 
    required: false
    default: null
    aliases: []
    choices: ['ping', 'kv_test', 'join', 'plan', 'commit']
  config_dir:
    description:
      - The path to the riak configuration directory
    required: false
    default: /etc/riak
    aliases: []
  http_conn:
    description:
      - The ip address and port that is listening for Riak HTTP queries
    required: false
    default: 127.0.0.1:8098
    aliases: []
  target_node:
    description:
      - The target node for certain operations (join, ping)
    required: false
    default: riak@127.0.0.1
    aliases: []
  wait_for_handoffs:
    description:
      - Number of seconds to wait for handoffs to complete.
    required: false
    default: null
    aliases: []
    type: 'int'
  wait_for_ring:
    description:
      - Number of seconds to wait for all nodes to agree on the ring.
    required: false
    default: null
    aliases: []
    type: 'int'
  wait_for_service:
    description:
      - Waits for a riak service to come online before continuing.
    required: false
    default: null
    aliases: []
    choices: ['kv']
  validate_certs:
    description:
      - If C(no), SSL certificates will not be validated. This should only be
        used on personally controlled sites using self-signed certificates.
    required: false
    default: 'yes'
    choices: ['yes', 'no']
    version_added: 1.5.1
'''

EXAMPLES = '''
# Joins a Riak node to another node
- riak: command=join target_node=riak@10.1.1.1

# Wait for handoffs to finish. Use with async and poll.
- riak: wait_for_handoffs=yes

# Wait for the riak_kv service to start up
- riak: wait_for_service=kv
'''

import urllib2
import time
import socket
import sys
try:
    import json
except ImportError:
    import simplejson as json


def ring_check(module, riak_admin_bin):
    cmd = '%s ringready' % riak_admin_bin
    rc, out, err = module.run_command(cmd)
    if rc == 0 and 'TRUE All nodes agree on the ring' in out:
        return True
    else:
        return False

def main():

    module = AnsibleModule(
        argument_spec=dict(
            command=dict(required=False, default=None, choices=[
                'ping', 'kv_test', 'join', 'plan', 'commit']),
            config_dir=dict(default='/etc/riak'),
            http_conn=dict(required=False, default='127.0.0.1:8098'),
            target_node=dict(default='riak@127.0.0.1', required=False),
            wait_for_handoffs=dict(default=False, type='int'),
            wait_for_ring=dict(default=False, type='int'),
            wait_for_service=dict(
                required=False, default=None, choices=['kv']),
            validate_certs = dict(default='yes', type='bool'))
    )

    command = module.params.get('command')
    config_dir = module.params.get('config_dir')
    http_conn = module.params.get('http_conn')
    target_node = module.params.get('target_node')
    wait_for_handoffs = module.params.get('wait_for_handoffs')
    wait_for_ring = module.params.get('wait_for_ring')
    wait_for_service = module.params.get('wait_for_service')
    validate_certs = module.params.get('validate_certs')

    # make sure riak commands are on the path
    riak_bin = module.get_bin_path('riak')
    riak_admin_bin = module.get_bin_path('riak-admin')

    timeout = time.time() + 120
    while True:
        if time.time() > timeout:
            module.fail_json(msg='Timeout, could not fetch Riak stats.')
        (response, info) = fetch_url(module, 'http://%s/stats' % (http_conn), force=True, timeout=5)
        if info['status'] == 200:
            stats_raw = response.read()
            break
        time.sleep(5)

    # here we attempt to load those stats
    try:
        stats = json.loads(stats_raw)
    except:
        module.fail_json(msg='Could not parse Riak stats.')

    node_name = stats['nodename']
    nodes = stats['ring_members']
    ring_size = stats['ring_creation_size']
    rc, out, err = module.run_command([riak_bin, 'version'])
    version = out.strip()

    result = dict(node_name=node_name,
                  nodes=nodes,
                  ring_size=ring_size,
                  version=version)

    if command == 'ping':
        cmd = '%s ping %s' % (riak_bin, target_node)
        rc, out, err = module.run_command(cmd)
        if rc == 0:
            result['ping'] = out
        else:
            module.fail_json(msg=out)

    elif command == 'kv_test':
        cmd = '%s test' % riak_admin_bin
        rc, out, err = module.run_command(cmd)
        if rc == 0:
            result['kv_test'] = out
        else:
            module.fail_json(msg=out)

    elif command == 'join':
        if nodes.count(node_name) == 1 and len(nodes) > 1:
            result['join'] = 'Node is already in cluster or staged to be in cluster.'
        else:
            cmd = '%s cluster join %s' % (riak_admin_bin, target_node)
            rc, out, err = module.run_command(cmd)
            if rc == 0:
                result['join'] = out
                result['changed'] = True
            else:
                module.fail_json(msg=out)

    elif command == 'plan':
        cmd = '%s cluster plan' % riak_admin_bin
        rc, out, err = module.run_command(cmd)
        if rc == 0:
            result['plan'] = out
            if 'Staged Changes' in out:
                result['changed'] = True
        else:
            module.fail_json(msg=out)

    elif command == 'commit':
        cmd = '%s cluster commit' % riak_admin_bin
        rc, out, err = module.run_command(cmd)
        if rc == 0:
            result['commit'] = out
            result['changed'] = True
        else:
            module.fail_json(msg=out)

    # This could take a while; recommend running in async mode
    if wait_for_handoffs:
        timeout = time.time() + wait_for_handoffs
        while True:
            cmd = '%s transfers' % riak_admin_bin
            rc, out, err = module.run_command(cmd)
            if 'No transfers active' in out:
                result['handoffs'] = 'No transfers active.'
                break
            time.sleep(10)
            if time.time() > timeout:
                module.fail_json(msg='Timeout waiting for handoffs.')

    if wait_for_service:
        cmd = [riak_admin_bin, 'wait_for_service', 'riak_%s' % wait_for_service, node_name]
        rc, out, err = module.run_command(cmd)
        result['service'] = out

    if wait_for_ring:
        timeout = time.time() + wait_for_ring
        while True:
            if ring_check(module, riak_admin_bin):
                break
            time.sleep(10)
            if time.time() > timeout:
                module.fail_json(msg='Timeout waiting for nodes to agree on ring.')

    result['ring_ready'] = ring_check(module, riak_admin_bin)

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

main()
ansible-1.5.4/library/database/postgresql_user0000664000000000000000000004376712316627017020250 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: postgresql_user
short_description: Adds or removes a user (role) from a PostgreSQL database.
description:
   - Add or remove PostgreSQL users (roles) from a remote host and, optionally,
     grant the users access to an existing database or tables.
   - The fundamental function of the module is to create, or delete, roles from
     a PostgreSQL cluster. Privilege assignment, or removal, is an optional
     step, which works on one database at a time. This allows the module to be
     called several times in the same playbook to modify the permissions on
     different databases, or to grant permissions to already existing users.
   - A user cannot be removed until all the privileges have been stripped from the user.
     In such a situation, if the module tries to remove the user, it will fail.
     To avoid this, the fail_on_user option signals the module to try to remove
     the user, but keep going if that is not possible; the module will report
     whether changes happened and, separately, whether the user was removed.
version_added: "0.6"
options:
  name:
    description:
      - name of the user (role) to add or remove
    required: true
    default: null
  password:
    description:
      - set the user's password, before 1.4 this was required.
      - "When passing an encrypted password it must be generated with the format
        C('str[\"md5\"] + md5[ password + username ]'), resulting in a total of
        35 characters. An easy way to do this is: C(echo \"md5`echo -n
        \"verysecretpasswordJOE\" | md5`\")."
    required: false
    default: null
  db:
    description:
      - name of database where permissions will be granted
    required: false
    default: null
  fail_on_user:
    description:
      - if C(yes), fail when user can't be removed. Otherwise just log and continue
    required: false
    default: 'yes'
    choices: [ "yes", "no" ]
  port:
    description:
      - Database port to connect to.
    required: false
    default: 5432
  login_user:
    description:
      - User (role) used to authenticate with PostgreSQL
    required: false
    default: postgres
  login_password:
    description:
      - Password used to authenticate with PostgreSQL
    required: false
    default: null
  login_host:
    description:
      - Host running PostgreSQL.
    required: false
    default: localhost
  priv:
    description:
      - "PostgreSQL privileges string in the format: C(table:priv1,priv2)"
    required: false
    default: null
  role_attr_flags:
    description:
      - "PostgreSQL role attributes string in the format: CREATEDB,CREATEROLE,SUPERUSER"
    required: false
    default: null
    choices: [ "[NO]SUPERUSER", "[NO]CREATEROLE", "[NO]CREATEUSER", "[NO]CREATEDB",
               "[NO]INHERIT", "[NO]LOGIN", "[NO]REPLICATION" ]
  state:
    description:
      - The user (role) state
    required: false
    default: present
    choices: [ "present", "absent" ]
  encrypted:
    description:
      - denotes if the password is already encrypted. boolean.
    required: false
    default: false
    version_added: '1.4'
  expires:
    description:
      - sets the user's password expiration.
    required: false
    default: null
    version_added: '1.4'
notes:
   - The default authentication assumes that you are either logging in as or
     sudo'ing to the postgres account on the host.
   - This module uses psycopg2, a Python PostgreSQL database adapter. You must
     ensure that psycopg2 is installed on the host before using this module. If
     the remote host is the PostgreSQL server (which is the default case), then
     PostgreSQL must also be installed on the remote host. For Ubuntu-based
     systems, install the postgresql, libpq-dev, and python-psycopg2 packages
     on the remote host before using this module.
   - If you specify PUBLIC as the user, then the privilege changes will apply
     to all users. You may not specify password or role_attr_flags when the
     PUBLIC user is specified.
requirements: [ psycopg2 ]
author: Lorin Hochstein
'''

EXAMPLES = '''
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL

# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER

# Remove test user privileges from acme
- postgresql_user: db=acme name=test priv=ALL/products:ALL state=absent fail_on_user=no

# Remove test user from test database and the cluster
- postgresql_user: db=test name=test priv=ALL state=absent

# Example privileges string format
# INSERT,UPDATE/table:SELECT/anothertable:ALL

# Remove an existing user's password
- postgresql_user: db=test user=test password=NULL
'''

import re

try:
    import psycopg2
except ImportError:
    postgresqldb_found = False
else:
    postgresqldb_found = True

# ===========================================
# PostgreSQL module specific support methods.
#

def user_exists(cursor, user):
    # The PUBLIC user is a special case that is always there
    if user == 'PUBLIC':
        return True
    query = "SELECT rolname FROM pg_roles WHERE rolname=%(user)s"
    cursor.execute(query, {'user': user})
    return cursor.rowcount > 0

def user_add(cursor, user, password, role_attr_flags, encrypted, expires):
    """Create a new database user (role)."""
    query_password_data = dict()
    query = 'CREATE USER "%(user)s"' % {"user": user}
    if password is not None:
        query = query + " WITH %(crypt)s" % {"crypt": encrypted}
        query = query + " PASSWORD %(password)s"
        query_password_data.update(password=password)
    if expires is not None:
        query = query + " VALID UNTIL '%(expires)s'" % {"expires": expires}
    query = query + " " + role_attr_flags
    cursor.execute(query, query_password_data)
    return True

def user_alter(cursor, module, user, password, role_attr_flags, encrypted, expires):
    """Change user password and/or attributes. Return True if changed, False otherwise."""
    changed = False

    if user == 'PUBLIC':
        if password is not None:
            module.fail_json(msg="cannot change the password for PUBLIC user")
        elif role_attr_flags != '':
            module.fail_json(msg="cannot change the role_attr_flags for PUBLIC user")
        else:
            return False

    # Handle passwords.
    if password is not None or role_attr_flags is not None:
        # Select password and all flag-like columns in order to verify changes.
        query_password_data = dict()
        select = "SELECT * FROM pg_authid where rolname=%(user)s"
        cursor.execute(select, {"user": user})
        # Grab current role attributes.
        current_role_attrs = cursor.fetchone()

        alter = 'ALTER USER "%(user)s"' % {"user": user}
        if password is not None:
            query_password_data.update(password=password)
            alter = alter + " WITH %(crypt)s" % {"crypt": encrypted}
            alter = alter + " PASSWORD %(password)s"
            alter = alter + " %(flags)s" % {'flags': role_attr_flags}
        elif role_attr_flags:
            alter = alter + ' WITH ' + role_attr_flags
        if expires is not None:
            # The key must match the %(expires)s placeholder above; the
            # original misspelling "exipres" raised a KeyError here.
            alter = alter + " VALID UNTIL '%(expires)s'" % {"expires": expires}

        try:
            cursor.execute(alter, query_password_data)
        except psycopg2.InternalError, e:
            if e.pgcode == '25006':
                # Handle errors due to read-only transactions indicated by pgcode 25006
                # ERROR:  cannot execute ALTER ROLE in a read-only transaction
                changed = False
                module.fail_json(msg=e.pgerror)
                return changed
            else:
                raise psycopg2.InternalError, e

        # Grab new role attributes.
        cursor.execute(select, {"user": user})
        new_role_attrs = cursor.fetchone()

        # Detect any differences between current_ and new_role_attrs.
for i in range(len(current_role_attrs)): if current_role_attrs[i] != new_role_attrs[i]: changed = True return changed def user_delete(cursor, user): """Try to remove a user. Returns True if successful otherwise False""" cursor.execute("SAVEPOINT ansible_pgsql_user_delete") try: cursor.execute("DROP USER \"%s\"" % user) except: cursor.execute("ROLLBACK TO SAVEPOINT ansible_pgsql_user_delete") cursor.execute("RELEASE SAVEPOINT ansible_pgsql_user_delete") return False cursor.execute("RELEASE SAVEPOINT ansible_pgsql_user_delete") return True def has_table_privilege(cursor, user, table, priv): query = 'SELECT has_table_privilege(%s, %s, %s)' cursor.execute(query, (user, table, priv)) return cursor.fetchone()[0] def get_table_privileges(cursor, user, table): if '.' in table: schema, table = table.split('.', 1) else: schema = 'public' query = '''SELECT privilege_type FROM information_schema.role_table_grants WHERE grantee=%s AND table_name=%s AND table_schema=%s''' cursor.execute(query, (user, table, schema)) return set([x[0] for x in cursor.fetchall()]) def quote_pg_identifier(identifier): """ quote postgresql identifiers involving zero or more namespaces """ if '"' in identifier: # the user has supplied their own quoting. we have to hope they're # doing it right. Maybe they have an unfortunately named table # containing a period in the name, such as: "public"."users.2013" return identifier tokens = identifier.strip().split(".") quoted_tokens = [] for token in tokens: quoted_tokens.append('"%s"' % (token, )) return ".".join(quoted_tokens) def grant_table_privilege(cursor, user, table, priv): prev_priv = get_table_privileges(cursor, user, table) query = 'GRANT %s ON TABLE %s TO %s' % ( priv, quote_pg_identifier(table), quote_pg_identifier(user), ) cursor.execute(query) curr_priv = get_table_privileges(cursor, user, table) return len(curr_priv) > len(prev_priv) def revoke_table_privilege(cursor, user, table, priv): prev_priv = get_table_privileges(cursor, user, table) query = 'REVOKE %s ON TABLE %s FROM %s' % ( priv, quote_pg_identifier(table), quote_pg_identifier(user), ) cursor.execute(query) curr_priv = get_table_privileges(cursor, user, table) return len(curr_priv) < len(prev_priv) def get_database_privileges(cursor, user, db): priv_map = { 'C':'CREATE', 'T':'TEMPORARY', 'c':'CONNECT', } query = 'SELECT datacl FROM pg_database WHERE datname = %s' cursor.execute(query, (db,)) datacl = cursor.fetchone()[0] if datacl is None: return [] r = re.search('%s=(C?T?c?)/[a-z]+\,?' 
% user, datacl) if r is None: return [] o = [] for v in r.group(1): o.append(priv_map[v]) return o def has_database_privilege(cursor, user, db, priv): query = 'SELECT has_database_privilege(%s, %s, %s)' cursor.execute(query, (user, db, priv)) return cursor.fetchone()[0] def grant_database_privilege(cursor, user, db, priv): prev_priv = get_database_privileges(cursor, user, db) if user == "PUBLIC": query = 'GRANT %s ON DATABASE \"%s\" TO PUBLIC' % (priv, db) else: query = 'GRANT %s ON DATABASE \"%s\" TO \"%s\"' % (priv, db, user) cursor.execute(query) curr_priv = get_database_privileges(cursor, user, db) return len(curr_priv) > len(prev_priv) def revoke_database_privilege(cursor, user, db, priv): prev_priv = get_database_privileges(cursor, user, db) if user == "PUBLIC": query = 'REVOKE %s ON DATABASE \"%s\" FROM PUBLIC' % (priv, db) else: query = 'REVOKE %s ON DATABASE \"%s\" FROM \"%s\"' % (priv, db, user) cursor.execute(query) curr_priv = get_database_privileges(cursor, user, db) return len(curr_priv) < len(prev_priv) def revoke_privileges(cursor, user, privs): if privs is None: return False changed = False for type_ in privs: revoke_func = { 'table':revoke_table_privilege, 'database':revoke_database_privilege }[type_] for name, privileges in privs[type_].iteritems(): for privilege in privileges: changed = revoke_func(cursor, user, name, privilege)\ or changed return changed def grant_privileges(cursor, user, privs): if privs is None: return False changed = False for type_ in privs: grant_func = { 'table':grant_table_privilege, 'database':grant_database_privilege }[type_] for name, privileges in privs[type_].iteritems(): for privilege in privileges: changed = grant_func(cursor, user, name, privilege)\ or changed return changed def parse_role_attrs(role_attr_flags): """ Parse role attributes string for user creation. Format: attributes[,attributes,...] Where: attributes := CREATEDB,CREATEROLE,NOSUPERUSER,... """ if ',' not in role_attr_flags: return role_attr_flags flag_set = role_attr_flags.split(",") o_flags = " ".join(flag_set) return o_flags def parse_privs(privs, db): """ Parse privilege string to determine permissions for database db. Format: privileges[/privileges/...] Where: privileges := DATABASE_PRIVILEGES[,DATABASE_PRIVILEGES,...] | TABLE_NAME:TABLE_PRIVILEGES[,TABLE_PRIVILEGES,...] """ if privs is None: return privs o_privs = { 'database':{}, 'table':{} } for token in privs.split('/'): if ':' not in token: type_ = 'database' name = db priv_set = set(x.strip() for x in token.split(',')) else: type_ = 'table' name, privileges = token.split(':', 1) priv_set = set(x.strip() for x in privileges.split(',')) o_privs[type_][name] = priv_set return o_privs # =========================================== # Module execution. 
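# Illustrative note (not part of the original module): parse_privs() above
# splits the priv string on '/' into per-object grants; tokens without a
# colon are database privileges, tokens with one are table privileges. A
# hypothetical call such as
#
#   parse_privs('CONNECT/products:SELECT,INSERT', 'acme')
#
# would return:
#
#   {'database': {'acme': set(['CONNECT'])},
#    'table': {'products': set(['SELECT', 'INSERT'])}}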
# def main(): module = AnsibleModule( argument_spec=dict( login_user=dict(default="postgres"), login_password=dict(default=""), login_host=dict(default=""), user=dict(required=True, aliases=['name']), password=dict(default=None), state=dict(default="present", choices=["absent", "present"]), priv=dict(default=None), db=dict(default=''), port=dict(default='5432'), fail_on_user=dict(type='bool', choices=BOOLEANS, default='yes'), role_attr_flags=dict(default=''), encrypted=dict(type='bool', choices=BOOLEANS, default='no'), expires=dict(default=None) ), supports_check_mode = True ) user = module.params["user"] password = module.params["password"] state = module.params["state"] fail_on_user = module.params["fail_on_user"] db = module.params["db"] if db == '' and module.params["priv"] is not None: module.fail_json(msg="privileges require a database to be specified") privs = parse_privs(module.params["priv"], db) port = module.params["port"] role_attr_flags = parse_role_attrs(module.params["role_attr_flags"]) if module.params["encrypted"]: encrypted = "ENCRYPTED" else: encrypted = "UNENCRYPTED" expires = module.params["expires"] if not postgresqldb_found: module.fail_json(msg="the python psycopg2 module is required") # To use defaults values, keyword arguments must be absent, so # check which values are empty and don't include in the **kw # dictionary params_map = { "login_host":"host", "login_user":"user", "login_password":"password", "port":"port", "db":"database" } kw = dict( (params_map[k], v) for (k, v) in module.params.iteritems() if k in params_map and v != "" ) try: db_connection = psycopg2.connect(**kw) cursor = db_connection.cursor() except Exception, e: module.fail_json(msg="unable to connect to database: %s" % e) kw = dict(user=user) changed = False user_removed = False if state == "present": if user_exists(cursor, user): changed = user_alter(cursor, module, user, password, role_attr_flags, encrypted, expires) else: changed = user_add(cursor, user, password, role_attr_flags, encrypted, expires) changed = grant_privileges(cursor, user, privs) or changed else: if user_exists(cursor, user): if module.check_mode: changed = True kw['user_removed'] = True else: changed = revoke_privileges(cursor, user, privs) user_removed = user_delete(cursor, user) changed = changed or user_removed if fail_on_user and not user_removed: msg = "unable to remove user" module.fail_json(msg=msg) kw['user_removed'] = user_removed if changed: if module.check_mode: db_connection.rollback() else: db_connection.commit() kw['changed'] = changed module.exit_json(**kw) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/database/postgresql_privs0000664000000000000000000005326712316627017020441 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
DOCUMENTATION = """ --- module: postgresql_privs version_added: "1.2" short_description: Grant or revoke privileges on PostgreSQL database objects. description: - Grant or revoke privileges on PostgreSQL database objects. - This module is basically a wrapper around most of the functionality of PostgreSQL's GRANT and REVOKE statements with detection of changes (GRANT/REVOKE I(privs) ON I(type) I(objs) TO/FROM I(roles)) options: database: description: - Name of database to connect to. - 'Alias: I(db)' required: yes state: description: - If C(present), the specified privileges are granted, if C(absent) they are revoked. required: no default: present choices: [present, absent] privs: description: - Comma separated list of privileges to grant/revoke. - 'Alias: I(priv)' required: no type: description: - Type of database object to set privileges on. required: no default: table choices: [table, sequence, function, database, schema, language, tablespace, group] objs: description: - Comma separated list of database objects to set privileges on. - If I(type) is C(table) or C(sequence), the special value C(ALL_IN_SCHEMA) can be provided instead to specify all database objects of type I(type) in the schema specified via I(schema). (This also works with PostgreSQL < 9.0.) - If I(type) is C(database), this parameter can be omitted, in which case privileges are set for the database specified via I(database). - 'If I(type) is I(function), colons (":") in object names will be replaced with commas (needed to specify function signatures, see examples)' - 'Alias: I(obj)' required: no schema: description: - Schema that contains the database objects specified via I(objs). - May only be provided if I(type) is C(table), C(sequence) or C(function). Defaults to C(public) in these cases. required: no roles: description: - Comma separated list of role (user/group) names to set permissions for. - The special value C(PUBLIC) can be provided instead to set permissions for the implicitly defined PUBLIC group. - 'Alias: I(role)' required: yes grant_option: description: - Whether C(role) may grant/revoke the specified privileges/group memberships to others. - Set to C(no) to revoke GRANT OPTION, leave unspecified to make no changes. - I(grant_option) only has an effect if I(state) is C(present). - 'Alias: I(admin_option)' required: no choices: ['yes', 'no'] host: description: - Database host address. If unspecified, connect via Unix socket. - 'Alias: I(login_host)' default: null required: no port: description: - Database port to connect to. required: no default: 5432 login: description: - The username to authenticate with. - 'Alias: I(login_user)' default: postgres password: description: - The password to authenticate with. - 'Alias: I(login_password))' default: null required: no notes: - Default authentication assumes that postgresql_privs is run by the C(postgres) user on the remote host. (Ansible's C(user) or C(sudo-user)). - This module requires Python package I(psycopg2) to be installed on the remote host. In the default case of the remote host also being the PostgreSQL server, PostgreSQL has to be installed there as well, obviously. For Debian/Ubuntu-based systems, install packages I(postgresql) and I(python-psycopg2). - Parameters that accept comma separated lists (I(privs), I(objs), I(roles)) have singular alias names (I(priv), I(obj), I(role)). - To revoke only C(GRANT OPTION) for a specific object, set I(state) to C(present) and I(grant_option) to C(no) (see examples). 
- Note that when revoking privileges from a role R, this role may still have access via privileges granted to any role R is a member of including C(PUBLIC). - Note that when revoking privileges from a role R, you do so as the user specified via I(login). If R has been granted the same privileges by another user also, R can still access database objects via these privileges. - When revoking privileges, C(RESTRICT) is assumed (see PostgreSQL docs). requirements: [psycopg2] author: Bernhard Weitzhofer """ EXAMPLES = """ # On database "library": # GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors # TO librarian, reader WITH GRANT OPTION - postgresql_privs: > database=library state=present privs=SELECT,INSERT,UPDATE type=table objs=books,authors schema=public roles=librarian,reader grant_option=yes # Same as above leveraging default values: - postgresql_privs: > db=library privs=SELECT,INSERT,UPDATE objs=books,authors roles=librarian,reader grant_option=yes # REVOKE GRANT OPTION FOR INSERT ON TABLE books FROM reader # Note that role "reader" will be *granted* INSERT privilege itself if this # isn't already the case (since state=present). - postgresql_privs: > db=library state=present priv=INSERT obj=books role=reader grant_option=no # REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM reader # "public" is the default schema. This also works for PostgreSQL 8.x. - postgresql_privs: > db=library state=absent privs=INSERT,UPDATE objs=ALL_IN_SCHEMA role=reader # GRANT ALL PRIVILEGES ON SCHEMA public, math TO librarian - postgresql_privs: > db=library privs=ALL type=schema objs=public,math role=librarian # GRANT ALL PRIVILEGES ON FUNCTION math.add(int, int) TO librarian, reader # Note the separation of arguments with colons. - postgresql_privs: > db=library privs=ALL type=function obj=add(int:int) schema=math roles=librarian,reader # GRANT librarian, reader TO alice, bob WITH ADMIN OPTION # Note that group role memberships apply cluster-wide and therefore are not # restricted to database "library" here. 
- postgresql_privs: >
    db=library
    type=group
    objs=librarian,reader
    roles=alice,bob
    admin_option=yes

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# Note that here "db=postgres" specifies the database to connect to, not the
# database to grant privileges on (which is specified via the "objs" param)
- postgresql_privs: >
    db=postgres
    privs=ALL
    type=database
    obj=library
    role=librarian

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# If objs is omitted for type "database", it defaults to the database
# to which the connection is established
- postgresql_privs: >
    db=library
    privs=ALL
    type=database
    role=librarian
"""

try:
    import psycopg2
    import psycopg2.extensions
except ImportError:
    psycopg2 = None


class Error(Exception):
    pass


# We don't have functools.partial in Python < 2.5
def partial(f, *args, **kwargs):
    """Partial function application"""
    def g(*g_args, **g_kwargs):
        new_kwargs = kwargs.copy()
        new_kwargs.update(g_kwargs)
        # Apply the merged keyword arguments; the original passed only the
        # call-time g_kwargs and silently dropped the bound kwargs.
        return f(*(args + g_args), **new_kwargs)
    g.f = f
    g.args = args
    g.kwargs = kwargs
    return g


class Connection(object):
    """Wrapper around a psycopg2 connection with some convenience methods"""

    def __init__(self, params):
        self.database = params.database
        # To use defaults values, keyword arguments must be absent, so
        # check which values are empty and don't include in the **kw
        # dictionary
        params_map = {
            "host": "host",
            "login": "user",
            "password": "password",
            "port": "port",
            "database": "database",
        }
        kw = dict((params_map[k], getattr(params, k))
                  for k in params_map if getattr(params, k) != '')
        self.connection = psycopg2.connect(**kw)
        self.cursor = self.connection.cursor()

    def commit(self):
        self.connection.commit()

    def rollback(self):
        self.connection.rollback()

    @property
    def encoding(self):
        """Connection encoding in Python-compatible form"""
        return psycopg2.extensions.encodings[self.connection.encoding]

    ### Methods for querying database objects

    # PostgreSQL < 9.0 doesn't support "ALL TABLES IN SCHEMA schema"-like
    # phrases in GRANT or REVOKE statements, therefore alternative methods are
    # provided here.

    def schema_exists(self, schema):
        query = """SELECT count(*)
                   FROM pg_catalog.pg_namespace WHERE nspname = %s"""
        self.cursor.execute(query, (schema,))
        return self.cursor.fetchone()[0] > 0

    def get_all_tables_in_schema(self, schema):
        if not self.schema_exists(schema):
            raise Error('Schema "%s" does not exist.' % schema)
        query = """SELECT relname
                   FROM pg_catalog.pg_class c
                   JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
                   WHERE nspname = %s AND relkind = 'r'"""
        self.cursor.execute(query, (schema,))
        return [t[0] for t in self.cursor.fetchall()]

    def get_all_sequences_in_schema(self, schema):
        if not self.schema_exists(schema):
            raise Error('Schema "%s" does not exist.' % schema)
        query = """SELECT relname
                   FROM pg_catalog.pg_class c
                   JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
                   WHERE nspname = %s AND relkind = 'S'"""
        self.cursor.execute(query, (schema,))
        return [t[0] for t in self.cursor.fetchall()]

    ### Methods for getting access control lists and group membership info

    # To determine whether anything has changed after granting/revoking
    # privileges, we compare the access control lists of the specified database
    # objects before and afterwards. Python's list/string comparison should
    # suffice for change detection, we should not actually have to parse ACLs.
    # The same should apply to group membership information.
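    # Illustrative note (not part of the original module): the ACL columns
    # compared by the methods below hold PostgreSQL aclitem arrays, so a
    # relacl value might look like (the contents here are hypothetical):
    #
    #   {postgres=arwdDxt/postgres,reader=r/postgres}
    #
    # Granting SELECT to another role appends an entry, which changes the
    # string and is therefore detected without parsing the ACL itself.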
def get_table_acls(self, schema, tables): query = """SELECT relacl FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE nspname = %s AND relkind = 'r' AND relname = ANY (%s) ORDER BY relname""" self.cursor.execute(query, (schema, tables)) return [t[0] for t in self.cursor.fetchall()] def get_sequence_acls(self, schema, sequences): query = """SELECT relacl FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE nspname = %s AND relkind = 'S' AND relname = ANY (%s) ORDER BY relname""" self.cursor.execute(query, (schema, sequences)) return [t[0] for t in self.cursor.fetchall()] def get_function_acls(self, schema, function_signatures): funcnames = [f.split('(', 1)[0] for f in function_signatures] query = """SELECT proacl FROM pg_catalog.pg_proc p JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace WHERE nspname = %s AND proname = ANY (%s) ORDER BY proname, proargtypes""" self.cursor.execute(query, (schema, funcnames)) return [t[0] for t in self.cursor.fetchall()] def get_schema_acls(self, schemas): query = """SELECT nspacl FROM pg_catalog.pg_namespace WHERE nspname = ANY (%s) ORDER BY nspname""" self.cursor.execute(query, (schemas,)) return [t[0] for t in self.cursor.fetchall()] def get_language_acls(self, languages): query = """SELECT lanacl FROM pg_catalog.pg_language WHERE lanname = ANY (%s) ORDER BY lanname""" self.cursor.execute(query, (languages,)) return [t[0] for t in self.cursor.fetchall()] def get_tablespace_acls(self, tablespaces): query = """SELECT spcacl FROM pg_catalog.pg_tablespace WHERE spcname = ANY (%s) ORDER BY spcname""" self.cursor.execute(query, (tablespaces,)) return [t[0] for t in self.cursor.fetchall()] def get_database_acls(self, databases): query = """SELECT datacl FROM pg_catalog.pg_database WHERE datname = ANY (%s) ORDER BY datname""" self.cursor.execute(query, (databases,)) return [t[0] for t in self.cursor.fetchall()] def get_group_memberships(self, groups): query = """SELECT roleid, grantor, member, admin_option FROM pg_catalog.pg_auth_members am JOIN pg_catalog.pg_roles r ON r.oid = am.roleid WHERE r.rolname = ANY(%s) ORDER BY roleid, grantor, member""" self.cursor.execute(query, (groups,)) return self.cursor.fetchall() ### Manipulating privileges def manipulate_privs(self, obj_type, privs, objs, roles, state, grant_option, schema_qualifier=None): """Manipulate database object privileges. :param obj_type: Type of database object to grant/revoke privileges for. :param privs: Either a list of privileges to grant/revoke or None if type is "group". :param objs: List of database objects to grant/revoke privileges for. :param roles: Either a list of role names or "PUBLIC" for the implicitly defined "PUBLIC" group :param state: "present" to grant privileges, "absent" to revoke. :param grant_option: Only for state "present": If True, set grant/admin option. If False, revoke it. If None, don't change grant option. :param schema_qualifier: Some object types ("TABLE", "SEQUENCE", "FUNCTION") must be qualified by schema. Ignored for other Types. 
""" # get_status: function to get current status if obj_type == 'table': get_status = partial(self.get_table_acls, schema_qualifier) elif obj_type == 'sequence': get_status = partial(self.get_sequence_acls, schema_qualifier) elif obj_type == 'function': get_status = partial(self.get_function_acls, schema_qualifier) elif obj_type == 'schema': get_status = self.get_schema_acls elif obj_type == 'language': get_status = self.get_language_acls elif obj_type == 'tablespace': get_status = self.get_tablespace_acls elif obj_type == 'database': get_status = self.get_database_acls elif obj_type == 'group': get_status = self.get_group_memberships else: raise Error('Unsupported database object type "%s".' % obj_type) # Return False (nothing has changed) if there are no objs to work on. if not objs: return False # obj_ids: quoted db object identifiers (sometimes schema-qualified) if obj_type == 'function': obj_ids = [] for obj in objs: try: f, args = obj.split('(', 1) except: raise Error('Illegal function signature: "%s".' % obj) obj_ids.append('"%s"."%s"(%s' % (schema_qualifier, f, args)) elif obj_type in ['table', 'sequence']: obj_ids = ['"%s"."%s"' % (schema_qualifier, o) for o in objs] else: obj_ids = ['"%s"' % o for o in objs] # set_what: SQL-fragment specifying what to set for the target roless: # Either group membership or privileges on objects of a certain type. if obj_type == 'group': set_what = ','.join(obj_ids) else: set_what = '%s ON %s %s' % (','.join(privs), obj_type, ','.join(obj_ids)) # for_whom: SQL-fragment specifying for whom to set the above if roles == 'PUBLIC': for_whom = 'PUBLIC' else: for_whom = ','.join(['"%s"' % r for r in roles]) status_before = get_status(objs) if state == 'present': if grant_option: if obj_type == 'group': query = 'GRANT %s TO %s WITH ADMIN OPTION' else: query = 'GRANT %s TO %s WITH GRANT OPTION' else: query = 'GRANT %s TO %s' self.cursor.execute(query % (set_what, for_whom)) # Only revoke GRANT/ADMIN OPTION if grant_option actually is False. if grant_option == False: if obj_type == 'group': query = 'REVOKE ADMIN OPTION FOR %s FROM %s' else: query = 'REVOKE GRANT OPTION FOR %s FROM %s' self.cursor.execute(query % (set_what, for_whom)) else: query = 'REVOKE %s FROM %s' self.cursor.execute(query % (set_what, for_whom)) status_after = get_status(objs) return status_before != status_after def main(): module = AnsibleModule( argument_spec = dict( database=dict(required=True, aliases=['db']), state=dict(default='present', choices=['present', 'absent']), privs=dict(required=False, aliases=['priv']), type=dict(default='table', choices=['table', 'sequence', 'function', 'database', 'schema', 'language', 'tablespace', 'group']), objs=dict(required=False, aliases=['obj']), schema=dict(required=False), roles=dict(required=True, aliases=['role']), grant_option=dict(required=False, type='bool', aliases=['admin_option']), host=dict(default='', aliases=['login_host']), port=dict(type='int', default=5432), login=dict(default='postgres', aliases=['login_user']), password=dict(default='', aliases=['login_password']) ), supports_check_mode = True ) # Create type object as namespace for module params p = type('Params', (), module.params) # param "schema": default, allowed depends on param "type" if p.type in ['table', 'sequence', 'function']: p.schema = p.schema or 'public' elif p.schema: module.fail_json(msg='Argument "schema" is not allowed ' 'for type "%s".' 
                            % p.type)

    # param "objs": default, required depends on param "type"
    if p.type == 'database':
        p.objs = p.objs or p.database
    elif not p.objs:
        module.fail_json(msg='Argument "objs" is required '
                             'for type "%s".' % p.type)

    # param "privs": allowed, required depends on param "type"
    if p.type == 'group':
        if p.privs:
            module.fail_json(msg='Argument "privs" is not allowed '
                                 'for type "group".')
    elif not p.privs:
        module.fail_json(msg='Argument "privs" is required '
                             'for type "%s".' % p.type)

    # Connect to Database
    if not psycopg2:
        module.fail_json(msg='Python module "psycopg2" must be installed.')
    try:
        conn = Connection(p)
    except psycopg2.Error, e:
        module.fail_json(msg='Could not connect to database: %s' % e)

    try:
        # privs
        if p.privs:
            privs = p.privs.split(',')
        else:
            privs = None

        # objs:
        if p.type == 'table' and p.objs == 'ALL_IN_SCHEMA':
            objs = conn.get_all_tables_in_schema(p.schema)
        elif p.type == 'sequence' and p.objs == 'ALL_IN_SCHEMA':
            objs = conn.get_all_sequences_in_schema(p.schema)
        else:
            objs = p.objs.split(',')

        # function signatures are encoded using ':' to separate args
        if p.type == 'function':
            objs = [obj.replace(':', ',') for obj in objs]

        # roles
        if p.roles == 'PUBLIC':
            roles = 'PUBLIC'
        else:
            roles = p.roles.split(',')

        changed = conn.manipulate_privs(
            obj_type = p.type,
            privs = privs,
            objs = objs,
            roles = roles,
            state = p.state,
            grant_option = p.grant_option,
            schema_qualifier=p.schema
        )

    except Error, e:
        conn.rollback()
        module.fail_json(msg=e.message)

    except psycopg2.Error, e:
        conn.rollback()
        # psycopg2 errors come in connection encoding, reencode
        msg = e.message.decode(conn.encoding).encode(errors='replace')
        module.fail_json(msg=msg)

    if module.check_mode:
        conn.rollback()
    else:
        conn.commit()
    module.exit_json(changed=changed)


# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/database/mysql_replication0000664000000000000000000003007312316627017020537 0ustar rootroot
#!/usr/bin/python
# -*- coding: utf-8 -*-

"""
Ansible module to manage mysql replication

(c) 2013, Balazs Pocze
Certain parts are taken from Mark Theunissen's mysqldb module

This file is part of Ansible

Ansible is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

Ansible is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
"""

DOCUMENTATION = '''
---
module: mysql_replication

short_description: Manage MySQL replication
description:
    - Manages MySQL server replication, including getting slave or master
      status, changing the master host, and starting or stopping the slave
      thread.
version_added: "1.3"
options:
    mode:
        description:
            - module operating mode. Could be getslave (SHOW SLAVE STATUS),
              getmaster (SHOW MASTER STATUS), changemaster (CHANGE MASTER TO),
              startslave (START SLAVE), stopslave (STOP SLAVE)
        required: False
        choices:
            - getslave
            - getmaster
            - changemaster
            - stopslave
            - startslave
        default: getslave
    login_user:
        description:
            - username used to connect to the mysql host; if defined,
              login_password is also needed.
        required: False
    login_password:
        description:
            - password used to connect to the mysql host; if defined,
              login_user is also needed.
required: False login_host: description: - mysql host to connect required: False login_unix_socket: description: - unix socket to connect mysql server master_host: description: - same as mysql variable master_user: description: - same as mysql variable master_password: description: - same as mysql variable master_port: description: - same as mysql variable master_connect_retry: description: - same as mysql variable master_log_file: description: - same as mysql variable master_log_pos: description: - same as mysql variable relay_log_file: description: - same as mysql variable relay_log_pos: description: - same as mysql variable master_ssl: description: - same as mysql variable possible values: 0,1 master_ssl_ca: description: - same as mysql variable master_ssl_capath: description: - same as mysql variable master_ssl_cert: description: - same as mysql variable master_ssl_key: description: - same as mysql variable master_ssl_cipher: description: - same as mysql variable ''' EXAMPLES = ''' # Stop mysql slave thread - mysql_replication: mode=stopslave # Get master binlog file name and binlog position - mysql_replication: mode=getmaster # Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578 - mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578 ''' import ConfigParser import os import warnings try: import MySQLdb except ImportError: mysqldb_found = False else: mysqldb_found = True def get_master_status(cursor): cursor.execute("SHOW MASTER STATUS") masterstatus = cursor.fetchone() return masterstatus def get_slave_status(cursor): cursor.execute("SHOW SLAVE STATUS") slavestatus = cursor.fetchone() return slavestatus def stop_slave(cursor): try: cursor.execute("STOP SLAVE") stopped = True except: stopped = False return stopped def start_slave(cursor): try: cursor.execute("START SLAVE") started = True except: started = False return started def changemaster(cursor, chm): SQLPARAM = ",".join(chm) cursor.execute("CHANGE MASTER TO " + SQLPARAM) def strip_quotes(s): """ Remove surrounding single or double quotes >>> print strip_quotes('hello') hello >>> print strip_quotes('"hello"') hello >>> print strip_quotes("'hello'") hello >>> print strip_quotes("'hello") 'hello """ single_quote = "'" double_quote = '"' if s.startswith(single_quote) and s.endswith(single_quote): s = s.strip(single_quote) elif s.startswith(double_quote) and s.endswith(double_quote): s = s.strip(double_quote) return s def config_get(config, section, option): """ Calls ConfigParser.get and strips quotes See: http://dev.mysql.com/doc/refman/5.0/en/option-files.html """ return strip_quotes(config.get(section, option)) def load_mycnf(): config = ConfigParser.RawConfigParser() mycnf = os.path.expanduser('~/.my.cnf') if not os.path.exists(mycnf): return False try: config.readfp(open(mycnf)) except (IOError): return False # We support two forms of passwords in .my.cnf, both pass= and password=, # as these are both supported by MySQL. 
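    # For illustration only -- a minimal ~/.my.cnf [client] section that this
    # lookup can consume might look like the following (user and password are
    # hypothetical values, not defaults of this module):
    #
    #   [client]
    #   user=replication
    #   password=secret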
    try:
        passwd = config_get(config, 'client', 'password')
    except (ConfigParser.NoOptionError):
        try:
            passwd = config_get(config, 'client', 'pass')
        except (ConfigParser.NoOptionError):
            return False

    # If .my.cnf doesn't specify a user, default to user login name
    try:
        user = config_get(config, 'client', 'user')
    except (ConfigParser.NoOptionError):
        # getpass is only needed for this fallback, so import it lazily here
        import getpass
        user = getpass.getuser()
    creds = dict(user=user, passwd=passwd)
    return creds


def main():
    module = AnsibleModule(
        argument_spec = dict(
            login_user=dict(default=None),
            login_password=dict(default=None),
            login_host=dict(default="localhost"),
            login_unix_socket=dict(default=None),
            mode=dict(default="getslave", choices=["getmaster", "getslave", "changemaster", "stopslave", "startslave"]),
            master_host=dict(default=None),
            master_user=dict(default=None),
            master_password=dict(default=None),
            master_port=dict(default=None),
            master_connect_retry=dict(default=None),
            master_log_file=dict(default=None),
            master_log_pos=dict(default=None),
            relay_log_file=dict(default=None),
            relay_log_pos=dict(default=None),
            master_ssl=dict(default=None, choices=['0', '1']),
            master_ssl_ca=dict(default=None),
            master_ssl_capath=dict(default=None),
            master_ssl_cert=dict(default=None),
            master_ssl_key=dict(default=None),
            master_ssl_cipher=dict(default=None),
        )
    )

    user = module.params["login_user"]
    password = module.params["login_password"]
    host = module.params["login_host"]
    mode = module.params["mode"]
    master_host = module.params["master_host"]
    master_user = module.params["master_user"]
    master_password = module.params["master_password"]
    master_port = module.params["master_port"]
    master_connect_retry = module.params["master_connect_retry"]
    master_log_file = module.params["master_log_file"]
    master_log_pos = module.params["master_log_pos"]
    relay_log_file = module.params["relay_log_file"]
    relay_log_pos = module.params["relay_log_pos"]
    master_ssl = module.params["master_ssl"]
    master_ssl_ca = module.params["master_ssl_ca"]
    master_ssl_capath = module.params["master_ssl_capath"]
    master_ssl_cert = module.params["master_ssl_cert"]
    master_ssl_key = module.params["master_ssl_key"]
    master_ssl_cipher = module.params["master_ssl_cipher"]

    if not mysqldb_found:
        module.fail_json(msg="the python mysqldb module is required")
    else:
        warnings.filterwarnings('error', category=MySQLdb.Warning)

    # Either the caller passes both a username and password with which to connect to
    # mysql, or they pass neither and allow this module to read the credentials from
    # ~/.my.cnf.
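    # Resolution order, as implemented below: explicit login_user/login_password
    # parameters win; if neither is given, fall back to ~/.my.cnf; if that file
    # is missing or unusable, default to root with an empty password. Supplying
    # only one of the two parameters is treated as an error.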
    login_password = module.params["login_password"]
    login_user = module.params["login_user"]

    if login_user is None and login_password is None:
        mycnf_creds = load_mycnf()
        if mycnf_creds is False:
            login_user = "root"
            login_password = ""
        else:
            login_user = mycnf_creds["user"]
            login_password = mycnf_creds["passwd"]
    elif login_password is None or login_user is None:
        module.fail_json(msg="when supplying login arguments, both login_user and login_password must be provided")

    try:
        if module.params["login_unix_socket"]:
            db_connection = MySQLdb.connect(host=module.params["login_host"], unix_socket=module.params["login_unix_socket"], user=login_user, passwd=login_password, db="mysql")
        else:
            db_connection = MySQLdb.connect(host=module.params["login_host"], user=login_user, passwd=login_password, db="mysql")
    except Exception, e:
        module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials")
    try:
        cursor = db_connection.cursor(cursorclass=MySQLdb.cursors.DictCursor)
    except Exception, e:
        module.fail_json(msg="Trouble getting DictCursor from db_connection: %s" % e)

    if mode == "getmaster":
        masterstatus = get_master_status(cursor)
        try:
            module.exit_json( **masterstatus )
        except TypeError:
            module.fail_json(msg="Server is not configured as mysql master")

    elif mode == "getslave":
        slavestatus = get_slave_status(cursor)
        try:
            module.exit_json( **slavestatus )
        except TypeError:
            module.fail_json(msg="Server is not configured as mysql slave")

    elif mode == "changemaster":
        chm = []
        if master_host:
            chm.append("MASTER_HOST='" + master_host + "'")
        if master_user:
            chm.append("MASTER_USER='" + master_user + "'")
        if master_password:
            chm.append("MASTER_PASSWORD='" + master_password + "'")
        if master_port:
            chm.append("MASTER_PORT=" + master_port)
        if master_connect_retry:
            chm.append("MASTER_CONNECT_RETRY='" + master_connect_retry + "'")
        if master_log_file:
            chm.append("MASTER_LOG_FILE='" + master_log_file + "'")
        if master_log_pos:
            chm.append("MASTER_LOG_POS=" + master_log_pos)
        if relay_log_file:
            chm.append("RELAY_LOG_FILE='" + relay_log_file + "'")
        if relay_log_pos:
            chm.append("RELAY_LOG_POS=" + relay_log_pos)
        if master_ssl:
            chm.append("MASTER_SSL=" + master_ssl)
        if master_ssl_ca:
            chm.append("MASTER_SSL_CA='" + master_ssl_ca + "'")
        if master_ssl_capath:
            chm.append("MASTER_SSL_CAPATH='" + master_ssl_capath + "'")
        if master_ssl_cert:
            chm.append("MASTER_SSL_CERT='" + master_ssl_cert + "'")
        if master_ssl_key:
            chm.append("MASTER_SSL_KEY='" + master_ssl_key + "'")
        if master_ssl_cipher:
            chm.append("MASTER_SSL_CIPHER='" + master_ssl_cipher + "'")
        changemaster(cursor, chm)
        module.exit_json(changed=True)

    elif mode == "startslave":
        started = start_slave(cursor)
        if started is True:
            module.exit_json(msg="Slave started", changed=True)
        else:
            module.exit_json(msg="Slave already started (Or cannot be started)", changed=False)

    elif mode == "stopslave":
        stopped = stop_slave(cursor)
        if stopped is True:
            module.exit_json(msg="Slave stopped", changed=True)
        else:
            module.exit_json(msg="Slave already stopped", changed=False)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/database/mysql_variables0000664000000000000000000001534612316627017020204 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

"""
Ansible module to manage mysql variables

(c) 2013, Balazs Pocze
Certain parts are taken from Mark Theunissen's mysqldb module

This file is part of Ansible

Ansible is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Ansible is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Ansible. If not, see . """ DOCUMENTATION = ''' --- module: mysql_variables short_description: Manage MySQL global variables description: - Query / Set MySQL variables version_added: 1.3 options: variable: description: - Variable name to operate required: True value: description: - If set, then sets variable value to this required: False login_user: description: - username to connect mysql host, if defined login_password also needed. required: False login_password: description: - password to connect mysql host, if defined login_user also needed. required: False login_host: description: - mysql host to connect required: False login_unix_socket: description: - unix socket to connect mysql server ''' EXAMPLES = ''' # Check for sync_binary_log setting - mysql_variables: variable=sync_binary_log # Set read_only variable to 1 - mysql_variables: variable=read_only value=1 ''' import ConfigParser import os import warnings try: import MySQLdb except ImportError: mysqldb_found = False else: mysqldb_found = True def getvariable(cursor, mysqlvar): cursor.execute("SHOW VARIABLES LIKE '" + mysqlvar + "'") mysqlvar_val = cursor.fetchall() return mysqlvar_val def setvariable(cursor, mysqlvar, value): try: cursor.execute("SET GLOBAL " + mysqlvar + "=" + value) cursor.fetchall() result = True except Exception, e: result = str(e) return result def strip_quotes(s): """ Remove surrounding single or double quotes >>> print strip_quotes('hello') hello >>> print strip_quotes('"hello"') hello >>> print strip_quotes("'hello'") hello >>> print strip_quotes("'hello") 'hello """ single_quote = "'" double_quote = '"' if s.startswith(single_quote) and s.endswith(single_quote): s = s.strip(single_quote) elif s.startswith(double_quote) and s.endswith(double_quote): s = s.strip(double_quote) return s def config_get(config, section, option): """ Calls ConfigParser.get and strips quotes See: http://dev.mysql.com/doc/refman/5.0/en/option-files.html """ return strip_quotes(config.get(section, option)) def load_mycnf(): config = ConfigParser.RawConfigParser() mycnf = os.path.expanduser('~/.my.cnf') if not os.path.exists(mycnf): return False try: config.readfp(open(mycnf)) except (IOError): return False # We support two forms of passwords in .my.cnf, both pass= and password=, # as these are both supported by MySQL. 
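    # The lookup below prefers password= over pass=; whichever key is found
    # first in the [client] section wins, and surrounding quotes are stripped
    # by config_get().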
    try:
        passwd = config_get(config, 'client', 'password')
    except (ConfigParser.NoOptionError):
        try:
            passwd = config_get(config, 'client', 'pass')
        except (ConfigParser.NoOptionError):
            return False

    # If .my.cnf doesn't specify a user, default to user login name
    try:
        user = config_get(config, 'client', 'user')
    except (ConfigParser.NoOptionError):
        # getpass is only needed for this fallback, so import it lazily here
        import getpass
        user = getpass.getuser()
    creds = dict(user=user, passwd=passwd)
    return creds


def main():
    module = AnsibleModule(
        argument_spec = dict(
            login_user=dict(default=None),
            login_password=dict(default=None),
            login_host=dict(default="localhost"),
            login_unix_socket=dict(default=None),
            variable=dict(default=None),
            value=dict(default=None)
        )
    )

    user = module.params["login_user"]
    password = module.params["login_password"]
    host = module.params["login_host"]
    mysqlvar = module.params["variable"]
    value = module.params["value"]

    if not mysqldb_found:
        module.fail_json(msg="the python mysqldb module is required")
    else:
        warnings.filterwarnings('error', category=MySQLdb.Warning)

    # Either the caller passes both a username and password with which to connect to
    # mysql, or they pass neither and allow this module to read the credentials from
    # ~/.my.cnf.
    login_password = module.params["login_password"]
    login_user = module.params["login_user"]

    if login_user is None and login_password is None:
        mycnf_creds = load_mycnf()
        if mycnf_creds is False:
            login_user = "root"
            login_password = ""
        else:
            login_user = mycnf_creds["user"]
            login_password = mycnf_creds["passwd"]
    elif login_password is None or login_user is None:
        module.fail_json(msg="when supplying login arguments, both login_user and login_password must be provided")

    try:
        if module.params["login_unix_socket"]:
            db_connection = MySQLdb.connect(host=module.params["login_host"], unix_socket=module.params["login_unix_socket"], user=login_user, passwd=login_password, db="mysql")
        else:
            db_connection = MySQLdb.connect(host=module.params["login_host"], user=login_user, passwd=login_password, db="mysql")
        cursor = db_connection.cursor()
    except Exception, e:
        module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials")

    if mysqlvar is None:
        module.fail_json(msg="Cannot run without variable to operate with")
    mysqlvar_val = getvariable(cursor, mysqlvar)
    if value is None:
        module.exit_json(msg=mysqlvar_val)
    else:
        if len(mysqlvar_val) < 1:
            module.fail_json(msg="Variable not available", changed=False)
        if value == mysqlvar_val[0][1]:
            module.exit_json(msg="Variable already set to requested value", changed=False)
        result = setvariable(cursor, mysqlvar, value)
        if result is True:
            module.exit_json(msg="Variable change succeeded", changed=True)
        else:
            module.fail_json(msg=result, changed=False)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/commands/0000775000000000000000000000000012316627017015110 5ustar rootrootansible-1.5.4/library/commands/command0000664000000000000000000001545012316627017016456 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan , and others
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import sys import datetime import traceback import re import shlex import os DOCUMENTATION = ''' --- module: command version_added: historical short_description: Executes a command on a remote node description: - The M(command) module takes the command name followed by a list of space-delimited arguments. - The given command will be executed on all selected nodes. It will not be processed through the shell, so variables like C($HOME) and operations like C("<"), C(">"), C("|"), and C("&") will not work (use the M(shell) module if you need these features). options: free_form: description: - the command module takes a free form command to run required: true default: null aliases: [] creates: description: - a filename, when it already exists, this step will B(not) be run. required: no default: null removes: description: - a filename, when it does not exist, this step will B(not) be run. version_added: "0.8" required: no default: null chdir: description: - cd into this directory before running the command version_added: "0.6" required: false default: null executable: description: - change the shell used to execute the command. Should be an absolute path to the executable. required: false default: null version_added: "0.9" notes: - If you want to run a command through the shell (say you are using C(<), C(>), C(|), etc), you actually want the M(shell) module instead. The M(command) module is much more secure as it's not affected by the user's environment. - " C(creates), C(removes), and C(chdir) can be specified after the command. For instance, if you only want to run a command if a certain file does not exist, use this." author: Michael DeHaan ''' EXAMPLES = ''' # Example from Ansible Playbooks - command: /sbin/shutdown -t now # Run the command if the specified file does not exist - command: /usr/bin/make_database.sh arg1 arg2 creates=/path/to/database ''' def main(): # the command module is the one ansible module that does not take key=value args # hence don't copy this one if you are looking to build others! module = CommandModule(argument_spec=dict()) shell = module.params['shell'] chdir = module.params['chdir'] executable = module.params['executable'] args = module.params['args'] creates = module.params['creates'] removes = module.params['removes'] if args.strip() == '': module.fail_json(rc=256, msg="no command given") if chdir: os.chdir(chdir) if creates: # do not run the command if the line contains creates=filename # and the filename already exists. This allows idempotence # of command executions. v = os.path.expanduser(creates) if os.path.exists(v): module.exit_json( cmd=args, stdout="skipped, since %s exists" % v, skipped=True, changed=False, stderr=False, rc=0 ) if removes: # do not run the command if the line contains removes=filename # and the filename does not exist. This allows idempotence # of command executions. 
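        # For example (the script and path here are hypothetical), a task like
        #   command: /usr/bin/compact_db.sh removes=/path/to/database
        # only runs while /path/to/database exists, and is skipped once it is gone.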
v = os.path.expanduser(removes) if not os.path.exists(v): module.exit_json( cmd=args, stdout="skipped, since %s does not exist" % v, skipped=True, changed=False, stderr=False, rc=0 ) if not shell: args = shlex.split(args) startd = datetime.datetime.now() rc, out, err = module.run_command(args, executable=executable, use_unsafe_shell=shell) endd = datetime.datetime.now() delta = endd - startd if out is None: out = '' if err is None: err = '' module.exit_json( cmd = args, stdout = out.rstrip("\r\n"), stderr = err.rstrip("\r\n"), rc = rc, start = str(startd), end = str(endd), delta = str(delta), changed = True ) # import module snippets from ansible.module_utils.basic import * # only the command module should ever need to do this # everything else should be simple key=value class CommandModule(AnsibleModule): def _handle_aliases(self): return {} def _check_invalid_arguments(self): pass def _load_params(self): ''' read the input and return a dictionary and the arguments string ''' args = MODULE_ARGS params = {} params['chdir'] = None params['creates'] = None params['removes'] = None params['shell'] = False params['executable'] = None if args.find("#USE_SHELL") != -1: args = args.replace("#USE_SHELL", "") params['shell'] = True r = re.compile(r'(^|\s)(creates|removes|chdir|executable|NO_LOG)=(?P[\'"])?(.*?)(?(quote)(?> somelog.txt ''' ansible-1.5.4/library/cloud/0000775000000000000000000000000012316627017014415 5ustar rootrootansible-1.5.4/library/cloud/virt0000664000000000000000000003227712316627017015337 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- """ Virt management features Copyright 2007, 2012 Red Hat, Inc Michael DeHaan Seth Vidal This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program. If not, see . """ DOCUMENTATION = ''' --- module: virt short_description: Manages virtual machines supported by libvirt description: - Manages virtual machines supported by I(libvirt). version_added: "0.2" options: name: description: - name of the guest VM being managed. Note that VM must be previously defined with xml. required: true default: null aliases: [] state: description: - Note that there may be some lag for state requests like C(shutdown) since these refer only to VM states. After starting a guest, it may not be immediately accessible. required: false choices: [ "running", "shutdown" ] default: "no" command: description: - in addition to state management, various non-idempotent commands are available. 
See examples required: false choices: ["create","status", "start", "stop", "pause", "unpause", "shutdown", "undefine", "destroy", "get_xml", "autostart", "freemem", "list_vms", "info", "nodeinfo", "virttype", "define"] uri: description: - libvirt connection uri required: false defaults: qemu:///system xml: description: - XML document used with the define command required: false default: null requirements: [ "libvirt" ] author: Michael DeHaan, Seth Vidal ''' EXAMPLES = ''' # a playbook task line: - virt: name=alpha state=running # /usr/bin/ansible invocations ansible host -m virt -a "name=alpha command=status" ansible host -m virt -a "name=alpha command=get_xml" ansible host -m virt -a "name=alpha command=create uri=lxc:///" # a playbook example of defining and launching an LXC guest tasks: - name: define vm virt: name=foo command=define xml="{{ lookup('template', 'container-template.xml.j2') }}" uri=lxc:/// - name: start vm virt: name=foo state=running uri=lxc:/// ''' VIRT_FAILED = 1 VIRT_SUCCESS = 0 VIRT_UNAVAILABLE=2 import sys try: import libvirt except ImportError: print "failed=True msg='libvirt python module unavailable'" sys.exit(1) ALL_COMMANDS = [] VM_COMMANDS = ['create','status', 'start', 'stop', 'pause', 'unpause', 'shutdown', 'undefine', 'destroy', 'get_xml', 'autostart', 'define'] HOST_COMMANDS = ['freemem', 'list_vms', 'info', 'nodeinfo', 'virttype'] ALL_COMMANDS.extend(VM_COMMANDS) ALL_COMMANDS.extend(HOST_COMMANDS) VIRT_STATE_NAME_MAP = { 0 : "running", 1 : "running", 2 : "running", 3 : "paused", 4 : "shutdown", 5 : "shutdown", 6 : "crashed" } class VMNotFound(Exception): pass class LibvirtConnection(object): def __init__(self, uri, module): self.module = module cmd = "uname -r" rc, stdout, stderr = self.module.run_command(cmd) if stdout.find("xen") != -1: conn = libvirt.open(None) else: conn = libvirt.open(uri) if not conn: raise Exception("hypervisor connection failure") self.conn = conn def find_vm(self, vmid): """ Extra bonus feature: vmid = -1 returns a list of everything """ conn = self.conn vms = [] # this block of code borrowed from virt-manager: # get working domain's name ids = conn.listDomainsID() for id in ids: vm = conn.lookupByID(id) vms.append(vm) # get defined domain names = conn.listDefinedDomains() for name in names: vm = conn.lookupByName(name) vms.append(vm) if vmid == -1: return vms for vm in vms: if vm.name() == vmid: return vm raise VMNotFound("virtual machine %s not found" % vmid) def shutdown(self, vmid): return self.find_vm(vmid).shutdown() def pause(self, vmid): return self.suspend(self.conn,vmid) def unpause(self, vmid): return self.resume(self.conn,vmid) def suspend(self, vmid): return self.find_vm(vmid).suspend() def resume(self, vmid): return self.find_vm(vmid).resume() def create(self, vmid): return self.find_vm(vmid).create() def destroy(self, vmid): return self.find_vm(vmid).destroy() def undefine(self, vmid): return self.find_vm(vmid).undefine() def get_status2(self, vm): state = vm.info()[0] return VIRT_STATE_NAME_MAP.get(state,"unknown") def get_status(self, vmid): state = self.find_vm(vmid).info()[0] return VIRT_STATE_NAME_MAP.get(state,"unknown") def nodeinfo(self): return self.conn.getInfo() def get_type(self): return self.conn.getType() def get_maxVcpus(self, vmid): vm = self.conn.lookupByName(vmid) return vm.maxVcpus() def get_maxMemory(self, vmid): vm = self.conn.lookupByName(vmid) return vm.maxMemory() def getFreeMemory(self): return self.conn.getFreeMemory() def get_autostart(self, vmid): vm = self.conn.lookupByName(vmid) return 
vm.autostart() def set_autostart(self, vmid, val): vm = self.conn.lookupByName(vmid) return vm.setAutostart(val) def define_from_xml(self, xml): return self.conn.defineXML(xml) class Virt(object): def __init__(self, uri, module): self.module = module self.uri = uri def __get_conn(self): self.conn = LibvirtConnection(self.uri, self.module) return self.conn def get_vm(self, vmid): self.__get_conn() return self.conn.find_vm(vmid) def state(self): vms = self.list_vms() state = [] for vm in vms: state_blurb = self.conn.get_status(vm) state.append("%s %s" % (vm,state_blurb)) return state def info(self): vms = self.list_vms() info = dict() for vm in vms: data = self.conn.find_vm(vm).info() # libvirt returns maxMem, memory, and cpuTime as long()'s, which # xmlrpclib tries to convert to regular int's during serialization. # This throws exceptions, so convert them to strings here and # assume the other end of the xmlrpc connection can figure things # out or doesn't care. info[vm] = { "state" : VIRT_STATE_NAME_MAP.get(data[0],"unknown"), "maxMem" : str(data[1]), "memory" : str(data[2]), "nrVirtCpu" : data[3], "cpuTime" : str(data[4]), } info[vm]["autostart"] = self.conn.get_autostart(vm) return info def nodeinfo(self): self.__get_conn() info = dict() data = self.conn.nodeinfo() info = { "cpumodel" : str(data[0]), "phymemory" : str(data[1]), "cpus" : str(data[2]), "cpumhz" : str(data[3]), "numanodes" : str(data[4]), "sockets" : str(data[5]), "cpucores" : str(data[6]), "cputhreads" : str(data[7]) } return info def list_vms(self, state=None): self.conn = self.__get_conn() vms = self.conn.find_vm(-1) results = [] for x in vms: try: if state: vmstate = self.conn.get_status2(x) if vmstate == state: results.append(x.name()) else: results.append(x.name()) except: pass return results def virttype(self): return self.__get_conn().get_type() def autostart(self, vmid): self.conn = self.__get_conn() return self.conn.set_autostart(vmid, True) def freemem(self): self.conn = self.__get_conn() return self.conn.getFreeMemory() def shutdown(self, vmid): """ Make the machine with the given vmid stop running. Whatever that takes. """ self.__get_conn() self.conn.shutdown(vmid) return 0 def pause(self, vmid): """ Pause the machine with the given vmid. """ self.__get_conn() return self.conn.suspend(vmid) def unpause(self, vmid): """ Unpause the machine with the given vmid. """ self.__get_conn() return self.conn.resume(vmid) def create(self, vmid): """ Start the machine via the given vmid """ self.__get_conn() return self.conn.create(vmid) def start(self, vmid): """ Start the machine via the given id/name """ self.__get_conn() return self.conn.create(vmid) def destroy(self, vmid): """ Pull the virtual power from the virtual domain, giving it virtually no time to virtually shut down. """ self.__get_conn() return self.conn.destroy(vmid) def undefine(self, vmid): """ Stop a domain, and then wipe it from the face of the earth. (delete disk/config file) """ self.__get_conn() return self.conn.undefine(vmid) def status(self, vmid): """ Return a state suitable for server consumption. Aka, codes.py values, not XM output. 
""" self.__get_conn() return self.conn.get_status(vmid) def get_xml(self, vmid): """ Receive a Vm id as input Return an xml describing vm config returned by a libvirt call """ conn = libvirt.openReadOnly(None) if not conn: return (-1,'Failed to open connection to the hypervisor') try: domV = conn.lookupByName(vmid) except: return (-1,'Failed to find the main domain') return domV.XMLDesc(0) def get_maxVcpus(self, vmid): """ Gets the max number of VCPUs on a guest """ self.__get_conn() return self.conn.get_maxVcpus(vmid) def get_max_memory(self, vmid): """ Gets the max memory on a guest """ self.__get_conn() return self.conn.get_MaxMemory(vmid) def define(self, xml): """ Define a guest with the given xml """ self.__get_conn() return self.conn.define_from_xml(xml) def core(module): state = module.params.get('state', None) guest = module.params.get('name', None) command = module.params.get('command', None) uri = module.params.get('uri', None) xml = module.params.get('xml', None) v = Virt(uri, module) res = {} if state and command=='list_vms': res = v.list_vms(state=state) if type(res) != dict: res = { command: res } return VIRT_SUCCESS, res if state: if not guest: module.fail_json(msg = "state change requires a guest specified") res['changed'] = False if state == 'running': if v.status(guest) is not 'running': res['changed'] = True res['msg'] = v.start(guest) elif state == 'shutdown': if v.status(guest) is not 'shutdown': res['changed'] = True res['msg'] = v.shutdown(guest) else: module.fail_json(msg="unexpected state") return VIRT_SUCCESS, res if command: if command in VM_COMMANDS: if not guest: module.fail_json(msg = "%s requires 1 argument: guest" % command) if command == 'define': if not xml: module.fail_json(msg = "define requires xml argument") try: v.get_vm(guest) except VMNotFound: v.define(xml) res = {'changed': True, 'created': guest} return VIRT_SUCCESS, res res = getattr(v, command)(guest) if type(res) != dict: res = { command: res } return VIRT_SUCCESS, res elif hasattr(v, command): res = getattr(v, command)() if type(res) != dict: res = { command: res } return VIRT_SUCCESS, res else: module.fail_json(msg="Command %s not recognized" % basecmd) module.fail_json(msg="expected state or command parameter to be specified") def main(): module = AnsibleModule(argument_spec=dict( name = dict(aliases=['guest']), state = dict(choices=['running', 'shutdown']), command = dict(choices=ALL_COMMANDS), uri = dict(default='qemu:///system'), xml = dict(), )) rc = VIRT_SUCCESS try: rc, result = core(module) except Exception, e: module.fail_json(msg=str(e)) if rc != 0: # something went wrong emit the msg module.fail_json(rc=rc, msg=result) else: module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/keystone_user0000664000000000000000000002775112316627017017253 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # Based on Jimmy Tang's implementation DOCUMENTATION = ''' --- module: keystone_user version_added: "1.2" short_description: Manage OpenStack Identity (keystone) users, tenants and roles description: - Manage users,tenants, roles from OpenStack. 
options: login_user: description: - login username to authenticate to keystone required: false default: admin login_password: description: - Password of login user required: false default: 'yes' login_tenant_name: description: - The tenant login_user belongs to required: false default: None token: description: - The token to be uses in case the password is not specified required: false default: None endpoint: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' user: description: - The name of the user that has to added/removed from OpenStack required: false default: None password: description: - The password to be assigned to the user required: false default: None tenant: description: - The tenant name that has be added/removed required: false default: None description: description: - A description for the tenant required: false default: None email: description: - An email address for the user required: false default: None role: description: - The name of the role to be assigned or created required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present requirements: [ python-keystoneclient ] author: Lorin Hochstein ''' EXAMPLES = ''' # Create a tenant - keystone_user: tenant=demo tenant_description="Default Tenant" # Create a user - keystone_user: user=john tenant=demo password=secrete # Apply the admin role to the john user in the demo tenant - keystone_user: role=admin user=john tenant=demo ''' try: from keystoneclient.v2_0 import client except ImportError: keystoneclient_found = False else: keystoneclient_found = True def authenticate(endpoint, token, login_user, login_password, login_tenant_name): """Return a keystone client object""" if token: return client.Client(endpoint=endpoint, token=token) else: return client.Client(auth_url=endpoint, username=login_user, password=login_password, tenant_name=login_tenant_name) def tenant_exists(keystone, tenant): """ Return True if tenant already exists""" return tenant in [x.name for x in keystone.tenants.list()] def user_exists(keystone, user): """" Return True if user already exists""" return user in [x.name for x in keystone.users.list()] def get_tenant(keystone, name): """ Retrieve a tenant by name""" tenants = [x for x in keystone.tenants.list() if x.name == name] count = len(tenants) if count == 0: raise KeyError("No keystone tenants with name %s" % name) elif count > 1: raise ValueError("%d tenants with name %s" % (count, name)) else: return tenants[0] def get_user(keystone, name): """ Retrieve a user by name""" users = [x for x in keystone.users.list() if x.name == name] count = len(users) if count == 0: raise KeyError("No keystone users with name %s" % name) elif count > 1: raise ValueError("%d users with name %s" % (count, name)) else: return users[0] def get_role(keystone, name): """ Retrieve a role by name""" roles = [x for x in keystone.roles.list() if x.name == name] count = len(roles) if count == 0: raise KeyError("No keystone roles with name %s" % name) elif count > 1: raise ValueError("%d roles with name %s" % (count, name)) else: return roles[0] def get_tenant_id(keystone, name): return get_tenant(keystone, name).id def get_user_id(keystone, name): return get_user(keystone, name).id def ensure_tenant_exists(keystone, tenant_name, tenant_description, check_mode): """ Ensure that a tenant exists. Return (True, id) if a new tenant was created, (False, None) if it already existed. 
""" # Check if tenant already exists try: tenant = get_tenant(keystone, tenant_name) except KeyError: # Tenant doesn't exist yet pass else: if tenant.description == tenant_description: return (False, tenant.id) else: # We need to update the tenant description if check_mode: return (True, tenant.id) else: tenant.update(description=tenant_description) return (True, tenant.id) # We now know we will have to create a new tenant if check_mode: return (True, None) ks_tenant = keystone.tenants.create(tenant_name=tenant_name, description=tenant_description, enabled=True) return (True, ks_tenant.id) def ensure_tenant_absent(keystone, tenant, check_mode): """ Ensure that a tenant does not exist Return True if the tenant was removed, False if it didn't exist in the first place """ if not tenant_exists(keystone, tenant): return False # We now know we will have to delete the tenant if check_mode: return True def ensure_user_exists(keystone, user_name, password, email, tenant_name, check_mode): """ Check if user exists Return (True, id) if a new user was created, (False, id) user alrady exists """ # Check if tenant already exists try: user = get_user(keystone, user_name) except KeyError: # Tenant doesn't exist yet pass else: # User does exist, we're done return (False, user.id) # We now know we will have to create a new user if check_mode: return (True, None) tenant = get_tenant(keystone, tenant_name) user = keystone.users.create(name=user_name, password=password, email=email, tenant_id=tenant.id) return (True, user.id) def ensure_role_exists(keystone, user_name, tenant_name, role_name, check_mode): """ Check if role exists Return (True, id) if a new role was created or if the role was newly assigned to the user for the tenant. (False, id) if the role already exists and was already assigned to the user ofr the tenant. 
""" # Check if the user has the role in the tenant user = get_user(keystone, user_name) tenant = get_tenant(keystone, tenant_name) roles = [x for x in keystone.roles.roles_for_user(user, tenant) if x.name == role_name] count = len(roles) if count == 1: # If the role is in there, we are done role = roles[0] return (False, role.id) elif count > 1: # Too many roles with the same name, throw an error raise ValueError("%d roles with name %s" % (count, role_name)) # At this point, we know we will need to make changes if check_mode: return (True, None) # Get the role if it exists try: role = get_role(keystone, role_name) except KeyError: # Role doesn't exist yet role = keystone.roles.create(role_name) # Associate the role with the user in the admin keystone.roles.add_user_role(user, role, tenant) return (True, role.id) def ensure_user_absent(keystone, user, check_mode): raise NotImplementedError("Not yet implemented") def ensure_role_absent(keystone, uesr, tenant, role, check_mode): raise NotImplementedError("Not yet implemented") def main(): module = AnsibleModule( argument_spec=dict( user=dict(required=False), password=dict(required=False), tenant=dict(required=False), tenant_description=dict(required=False), email=dict(required=False), role=dict(required=False), state=dict(default='present', choices=['present', 'absent']), endpoint=dict(required=False, default="http://127.0.0.1:35357/v2.0"), token=dict(required=False), login_user=dict(required=False), login_password=dict(required=False), login_tenant_name=dict(required=False) ), supports_check_mode=True, mutually_exclusive=[['token', 'login_user'], ['token', 'login_password'], ['token', 'login_tenant_name']] ) if not keystoneclient_found: module.fail_json(msg="the python-keystoneclient module is required") user = module.params['user'] password = module.params['password'] tenant = module.params['tenant'] tenant_description = module.params['tenant_description'] email = module.params['email'] role = module.params['role'] state = module.params['state'] endpoint = module.params['endpoint'] token = module.params['token'] login_user = module.params['login_user'] login_password = module.params['login_password'] login_tenant_name = module.params['login_tenant_name'] keystone = authenticate(endpoint, token, login_user, login_password, login_tenant_name) check_mode = module.check_mode try: d = dispatch(keystone, user, password, tenant, tenant_description, email, role, state, endpoint, token, login_user, login_password, check_mode) except Exception, e: if check_mode: # If we have a failure in check mode module.exit_json(changed=True, msg="exception: %s" % e.message) else: module.fail_json(msg=e.message) else: module.exit_json(**d) def dispatch(keystone, user=None, password=None, tenant=None, tenant_description=None, email=None, role=None, state="present", endpoint=None, token=None, login_user=None, login_password=None, check_mode=False): """ Dispatch to the appropriate method. 
Returns a dict that will be passed to exit_json tenant user role state ------ ---- ---- -------- X present ensure_tenant_exists X absent ensure_tenant_absent X X present ensure_user_exists X X absent ensure_user_absent X X X present ensure_role_exists X X X absent ensure_role_absent """ changed = False id = None if tenant and not user and not role and state == "present": changed, id = ensure_tenant_exists(keystone, tenant, tenant_description, check_mode) elif tenant and not user and not role and state == "absent": changed = ensure_tenant_absent(keystone, tenant, check_mode) elif tenant and user and not role and state == "present": changed, id = ensure_user_exists(keystone, user, password, email, tenant, check_mode) elif tenant and user and not role and state == "absent": changed = ensure_user_absent(keystone, user, check_mode) elif tenant and user and role and state == "present": changed, id = ensure_role_exists(keystone, user, tenant, role, check_mode) elif tenant and user and role and state == "absent": changed = ensure_role_absent(keystone, user, tenant, role, check_mode) else: # Should never reach here raise ValueError("Code should never reach here") return dict(changed=changed, id=id) # import module snippets from ansible.module_utils.basic import * if __name__ == '__main__': main() ansible-1.5.4/library/cloud/gce_pd0000664000000000000000000002165012316627017015565 0ustar rootroot#!/usr/bin/python # Copyright 2013 Google Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: gce_pd version_added: "1.4" short_description: utilize GCE persistent disk resources description: - This module can create and destroy unformatted GCE persistent disks U(https://developers.google.com/compute/docs/disks#persistentdisks). It also supports attaching and detaching disks from running instances but does not support creating boot disks from images or snapshots. The 'gce' module supports creating instances with boot disks. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py. 
options: detach_only: description: - do not destroy the disk, merely detach it from an instance required: false default: "no" choices: ["yes", "no"] aliases: [] instance_name: description: - instance name if you wish to attach or detach the disk required: false default: null aliases: [] mode: description: - GCE mount mode of disk, READ_ONLY (default) or READ_WRITE required: false default: "READ_ONLY" choices: ["READ_WRITE", "READ_ONLY"] aliases: [] name: description: - name of the disk required: true default: null aliases: [] size_gb: description: - whole integer size of disk (in GB) to create, default is 10 GB required: false default: 10 aliases: [] state: description: - desired state of the persistent disk required: false default: "present" choices: ["active", "present", "absent", "deleted"] aliases: [] zone: description: - zone in which to create the disk required: false default: "us-central1-b" aliases: [] requirements: [ "libcloud" ] author: Eric Johnson ''' EXAMPLES = ''' # Simple attachment action to an existing instance - local_action: module: gce_pd instance_name: notlocalhost size_gb: 5 name: pd ''' import sys USER_AGENT_PRODUCT="Ansible-gce_pd" USER_AGENT_VERSION="v1beta15" try: from libcloud.compute.types import Provider from libcloud.compute.providers import get_driver from libcloud.common.google import GoogleBaseError, QuotaExceededError, \ ResourceExistsError, ResourceNotFoundError, ResourceInUseError _ = Provider.GCE except ImportError: print("failed=True " + \ "msg='libcloud with GCE support is required for this module.'") sys.exit(1) # Load in the libcloud secrets file try: import secrets except ImportError: secrets = None ARGS = getattr(secrets, 'GCE_PARAMS', ()) KWARGS = getattr(secrets, 'GCE_KEYWORD_PARAMS', {}) if not ARGS or not 'project' in KWARGS: print("failed=True " + \ "msg='Missing GCE connection parameters in libcloud secrets file.'") sys.exit(1) def unexpected_error_msg(error): msg='Unexpected response: HTTP return_code[' msg+='%s], API error code[%s] and message: %s' % ( error.http_code, error.code, str(error.value)) return msg def main(): module = AnsibleModule( argument_spec = dict( detach_only = dict(choice=BOOLEANS), instance_name = dict(), mode = dict(default='READ_ONLY', choices=['READ_WRITE', 'READ_ONLY']), name = dict(required=True), size_gb = dict(default=10), state = dict(default='present'), zone = dict(default='us-central1-b'), ) ) detach_only = module.params.get('detach_only') instance_name = module.params.get('instance_name') mode = module.params.get('mode') name = module.params.get('name') size_gb = module.params.get('size_gb') state = module.params.get('state') zone = module.params.get('zone') if detach_only and not instance_name: module.fail_json( msg='Must specify an instance name when detaching a disk', changed=False) try: gce = get_driver(Provider.GCE)(*ARGS, datacenter=zone, **KWARGS) gce.connection.user_agent_append("%s/%s" % ( USER_AGENT_PRODUCT, USER_AGENT_VERSION)) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) disk = inst = None changed = is_attached = False json_output = { 'name': name, 'zone': zone, 'state': state } if detach_only: json_output['detach_only'] = True json_output['detached_from_instance'] = instance_name if instance_name: # user wants to attach/detach from an existing instance try: inst = gce.ex_get_node(instance_name, zone) # is the disk attached? 
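            # libcloud's GCE driver reports attached disks in node.extra['disks'],
            # a list of dicts; the loop below only inspects each entry's
            # 'deviceName' and 'mode' fields and does not mutate the instance.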
for d in inst.extra['disks']: if d['deviceName'] == name: is_attached = True json_output['attached_mode'] = d['mode'] json_output['attached_to_instance'] = inst.name except: pass # find disk if it already exists try: disk = gce.ex_get_volume(name) json_output['size_gb'] = int(disk.size) except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) # user wants a disk to exist. If "instance_name" is supplied the user # also wants it attached if state in ['active', 'present']: if not size_gb: module.fail_json(msg="Must supply a size_gb", changed=False) try: size_gb = int(round(float(size_gb))) if size_gb < 1: raise Exception except: module.fail_json(msg="Must supply a size_gb larger than 1 GB", changed=False) if instance_name and inst is None: module.fail_json(msg='Instance %s does not exist in zone %s' % ( instance_name, zone), changed=False) if not disk: try: disk = gce.create_volume(size_gb, name, location=zone) except ResourceExistsError: pass except QuotaExceededError: module.fail_json(msg='Requested disk size exceeds quota', changed=False) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) json_output['size_gb'] = size_gb changed = True if inst and not is_attached: try: gce.attach_volume(inst, disk, device=name, ex_mode=mode) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) json_output['attached_to_instance'] = inst.name json_output['attached_mode'] = mode changed = True # user wants to delete a disk (or perhaps just detach it). if state in ['absent', 'deleted'] and disk: if inst and is_attached: try: gce.detach_volume(disk, ex_node=inst) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) changed = True if not detach_only: try: gce.destroy_volume(disk) except ResourceInUseError, e: module.fail_json(msg=str(e.value), changed=False) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) changed = True json_output['changed'] = changed print json.dumps(json_output) sys.exit(0) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/ec2_vpc0000664000000000000000000004664012316627017015673 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2_vpc short_description: configure AWS virtual private clouds description: - Create or terminates AWS virtual private clouds. This module has a dependency on python-boto. version_added: "1.4" options: cidr_block: description: - "The cidr block representing the VPC, e.g. 10.0.0.0/16" required: false, unless state=present instance_tenancy: description: - "The supported tenancy options for instances launched into the VPC." 
required: false default: "default" choices: [ "default", "dedicated" ] dns_support: description: - toggles the "Enable DNS resolution" flag required: false default: "yes" choices: [ "yes", "no" ] dns_hostnames: description: - toggles the "Enable DNS hostname support for instances" flag required: false default: "yes" choices: [ "yes", "no" ] subnets: description: - "A dictionary array of subnets to add of the form: { cidr: ..., az: ... }. Where az is the desired availability zone of the subnet, but it is not required. All VPC subnets not in this list will be removed." required: false default: null aliases: [] vpc_id: description: - A VPC id to terminate when state=absent required: false default: null aliases: [] internet_gateway: description: - Toggle whether there should be an Internet gateway attached to the VPC required: false default: "no" choices: [ "yes", "no" ] aliases: [] route_tables: description: - "A dictionary array of route tables to add of the form: { subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},] }. Where the subnets list is those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword for the gw of igw specifies that you should the route should go through the internet gateway attached to the VPC. gw also accepts instance-ids in addition igw. This module is currently unable to affect the 'main' route table due to some limitations in boto, so you must explicitly define the associated subnets or they will be attached to the main table implicitly." required: false default: null aliases: [] wait: description: - wait for the VPC to be in state 'available' before returning required: false default: "no" choices: [ "yes", "no" ] aliases: [] wait_timeout: description: - how long before wait gives up, in seconds default: 300 aliases: [] state: description: - Create or terminate the VPC required: true default: present aliases: [] region: description: - region in which the resource exists. required: false default: null aliases: ['aws_region', 'ec2_region'] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key' ] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] author: Carson Gee ''' EXAMPLES = ''' # Note: None of these examples set aws_access_key, aws_secret_key, or region. # It is assumed that their matching environment variables are set. # Basic creation example: local_action: module: ec2_vpc state: present cidr_block: 172.23.0.0/16 region: us-west-2 # Full creation example with subnets and optional availability zones. # The absence or presense of subnets deletes or creates them respectively. 
local_action: module: ec2_vpc state: present cidr_block: 172.22.0.0/16 subnets: - cidr: 172.22.1.0/24 az: us-west-2c - cidr: 172.22.2.0/24 az: us-west-2b - cidr: 172.22.3.0/24 az: us-west-2a internet_gateway: True route_tables: - subnets: - 172.22.2.0/24 - 172.22.3.0/24 routes: - dest: 0.0.0.0/0 gw: igw - subnets: - 172.22.1.0/24 routes: - dest: 0.0.0.0/0 gw: igw region: us-west-2 register: vpc # Removal of a VPC by id local_action: module: ec2_vpc state: absent vpc_id: vpc-aaaaaaa region: us-west-2 If you have added elements not managed by this module, e.g. instances, NATs, etc then the delete will fail until those dependencies are removed. ''' import sys import time try: import boto.ec2 import boto.vpc from boto.exception import EC2ResponseError except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def get_vpc_info(vpc): """ Retrieves vpc information from an instance ID and returns it as a dictionary """ return({ 'id': vpc.id, 'cidr_block': vpc.cidr_block, 'dhcp_options_id': vpc.dhcp_options_id, 'region': vpc.region.name, 'state': vpc.state, }) def create_vpc(module, vpc_conn): """ Creates a new VPC module : AnsibleModule object vpc_conn: authenticated VPCConnection connection object Returns: A dictionary with information about the VPC and subnets that were launched """ id = module.params.get('id') cidr_block = module.params.get('cidr_block') instance_tenancy = module.params.get('instance_tenancy') dns_support = module.params.get('dns_support') dns_hostnames = module.params.get('dns_hostnames') subnets = module.params.get('subnets') internet_gateway = module.params.get('internet_gateway') route_tables = module.params.get('route_tables') wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) changed = False # Check for existing VPC by cidr_block or id if id != None: filter_dict = {'vpc-id':id, 'state': 'available',} previous_vpcs = vpc_conn.get_all_vpcs(None, filter_dict) else: filter_dict = {'cidr': cidr_block, 'state': 'available'} previous_vpcs = vpc_conn.get_all_vpcs(None, filter_dict) if len(previous_vpcs) > 1: module.fail_json(msg='EC2 returned more than one VPC, aborting') if len(previous_vpcs) == 1: changed = False vpc = previous_vpcs[0] else: changed = True try: vpc = vpc_conn.create_vpc(cidr_block, instance_tenancy) # wait here until the vpc is available pending = True wait_timeout = time.time() + wait_timeout while wait and wait_timeout > time.time() and pending: pvpc = vpc_conn.get_all_vpcs(vpc.id) if hasattr(pvpc, 'state'): if pvpc.state == "available": pending = False elif hasattr(pvpc[0], 'state'): if pvpc[0].state == "available": pending = False time.sleep(5) if wait and wait_timeout <= time.time(): # waiting took too long module.fail_json(msg = "wait for vpc availability timeout on %s" % time.asctime()) except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) # Done with base VPC, now change to attributes and features. # boto doesn't appear to have a way to determine the existing # value of the dns attributes, so we just set them. # It also must be done one at a time. 
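    # (The underlying EC2 ModifyVpcAttribute API accepts only one attribute
    # per request, which is why the two calls below cannot be combined.)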
vpc_conn.modify_vpc_attribute(vpc.id, enable_dns_support=dns_support) vpc_conn.modify_vpc_attribute(vpc.id, enable_dns_hostnames=dns_hostnames) # Process all subnet properties if not isinstance(subnets, list): module.fail_json(msg='subnets needs to be a list of cidr blocks') current_subnets = vpc_conn.get_all_subnets(filters={ 'vpc_id': vpc.id }) # First add all new subnets for subnet in subnets: add_subnet = True for csn in current_subnets: if subnet['cidr'] == csn.cidr_block: add_subnet = False if add_subnet: try: vpc_conn.create_subnet(vpc.id, subnet['cidr'], subnet.get('az', None)) changed = True except EC2ResponseError, e: module.fail_json(msg='Unable to create subnet {0}, error: {1}'.format(subnet['cidr'], e)) # Now delete all absent subnets for csubnet in current_subnets: delete_subnet = True for subnet in subnets: if csubnet.cidr_block == subnet['cidr']: delete_subnet = False if delete_subnet: try: vpc_conn.delete_subnet(csubnet.id) changed = True except EC2ResponseError, e: module.fail_json(msg='Unable to delete subnet {0}, error: {1}'.format(csubnet.cidr_block, e)) # Handle Internet gateway (create/delete igw) igw = None igws = vpc_conn.get_all_internet_gateways(filters={'attachment.vpc-id': vpc.id}) if len(igws) > 1: module.fail_json(msg='EC2 returned more than one Internet Gateway for id %s, aborting' % vpc.id) if internet_gateway: if len(igws) != 1: try: igw = vpc_conn.create_internet_gateway() vpc_conn.attach_internet_gateway(igw.id, vpc.id) changed = True except EC2ResponseError, e: module.fail_json(msg='Unable to create Internet Gateway, error: {0}'.format(e)) else: # Set igw variable to the current igw instance for use in route tables. igw = igws[0] else: if len(igws) > 0: try: vpc_conn.detach_internet_gateway(igws[0].id, vpc.id) vpc_conn.delete_internet_gateway(igws[0].id) changed = True except EC2ResponseError, e: module.fail_json(msg='Unable to delete Internet Gateway, error: {0}'.format(e)) # Handle route tables - this may be worth splitting into a # different module but should work fine here. The strategy to stay # indempotent is to basically build all the route tables as # defined, track the route table ids, and then run through the # remote list of route tables and delete any that we didn't # create. This shouldn't interupt traffic in theory, but is the # only way to really work with route tables over time that I can # think of without using painful aws ids. Hopefully boto will add # the replace-route-table API to make this smoother and # allow control of the 'main' routing table. 
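    # A route_tables value matching the documented format would look like the
    # following (the CIDRs and instance id are illustrative only):
    #
    #   [{'subnets': ['172.22.2.0/24'],
    #     'routes': [{'dest': '0.0.0.0/0', 'gw': 'igw'}]},
    #    {'subnets': ['172.22.1.0/24'],
    #     'routes': [{'dest': '10.0.0.0/8', 'gw': 'i-0123456'}]}]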
if not isinstance(route_tables, list): module.fail_json(msg='route tables need to be a list of dictionaries') # Work through each route table and update/create to match dictionary array all_route_tables = [] for rt in route_tables: try: new_rt = vpc_conn.create_route_table(vpc.id) for route in rt['routes']: r_gateway = route['gw'] if r_gateway == 'igw': if not internet_gateway: module.fail_json( msg='You asked for an Internet Gateway ' \ '(igw) route, but you have no Internet Gateway' ) r_gateway = igw.id vpc_conn.create_route(new_rt.id, route['dest'], r_gateway) # Associate with subnets for sn in rt['subnets']: rsn = vpc_conn.get_all_subnets(filters={'cidr': sn}) if len(rsn) != 1: module.fail_json( msg='The subnet {0} to associate with route_table {1} ' \ 'does not exist, aborting'.format(sn, rt) ) rsn = rsn[0] # Disassociate then associate since we don't have replace old_rt = vpc_conn.get_all_route_tables( filters={'association.subnet_id': rsn.id} ) if len(old_rt) == 1: old_rt = old_rt[0] association_id = None for a in old_rt.associations: if a.subnet_id == rsn.id: association_id = a.id vpc_conn.disassociate_route_table(association_id) vpc_conn.associate_route_table(new_rt.id, rsn.id) all_route_tables.append(new_rt) except EC2ResponseError, e: module.fail_json( msg='Unable to create and associate route table {0}, error: ' \ '{1}'.format(rt, e) ) # Now that we are good to go on our new route tables, delete the # old ones except the 'main' route table as boto can't set the main # table yet. all_rts = vpc_conn.get_all_route_tables(filters={'vpc-id': vpc.id}) for rt in all_rts: delete_rt = True for newrt in all_route_tables: if newrt.id == rt.id: delete_rt = False if delete_rt: rta = rt.associations is_main = False for a in rta: if a.main: is_main = True try: if not is_main: vpc_conn.delete_route_table(rt.id) except EC2ResponseError, e: module.fail_json(msg='Unable to delete old route table {0}, error: {1}'.format(rt.id, e)) vpc_dict = get_vpc_info(vpc) created_vpc_id = vpc.id returned_subnets = [] current_subnets = vpc_conn.get_all_subnets(filters={ 'vpc_id': vpc.id }) for sn in current_subnets: returned_subnets.append({ 'cidr': sn.cidr_block, 'az': sn.availability_zone, 'id': sn.id, }) return (vpc_dict, created_vpc_id, returned_subnets, changed) def terminate_vpc(module, vpc_conn, vpc_id=None, cidr=None): """ Terminates a VPC module: Ansible module object vpc_conn: authenticated VPCConnection connection object vpc_id: a vpc id to terminate cidr: The cidr block of the VPC - can be used in lieu of an ID Returns a dictionary of VPC information about the VPC terminated. If the VPC to be terminated is available "changed" will be set to True. 
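Concretely, the return value is the tuple (changed, vpc_dict, terminated_vpc_id); for example, terminate_vpc(module, vpc_conn, vpc_id='vpc-aaaaaaa') yields (True, {...}, 'vpc-aaaaaaa') once an available VPC has been deleted.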
""" vpc_dict = {} terminated_vpc_id = '' changed = False if vpc_id == None and cidr == None: module.fail_json( msg='You must either specify a vpc id or a cidr '\ 'block to terminate a VPC, aborting' ) if vpc_id is not None: vpc_rs = vpc_conn.get_all_vpcs(vpc_id) else: vpc_rs = vpc_conn.get_all_vpcs(filters={'cidr': cidr}) if len(vpc_rs) > 1: module.fail_json( msg='EC2 returned more than one VPC for id {0} ' \ 'or cidr {1}, aborting'.format(vpc_id,vidr) ) if len(vpc_rs) == 1: vpc = vpc_rs[0] if vpc.state == 'available': terminated_vpc_id=vpc.id vpc_dict=get_vpc_info(vpc) try: subnets = vpc_conn.get_all_subnets(filters={'vpc_id': vpc.id}) for sn in subnets: vpc_conn.delete_subnet(sn.id) igws = vpc_conn.get_all_internet_gateways( filters={'attachment.vpc-id': vpc.id} ) for igw in igws: vpc_conn.detach_internet_gateway(igw.id, vpc.id) vpc_conn.delete_internet_gateway(igw.id) rts = vpc_conn.get_all_route_tables(filters={'vpc_id': vpc.id}) for rt in rts: rta = rt.associations is_main = False for a in rta: if a.main: is_main = True if not is_main: vpc_conn.delete_route_table(rt.id) vpc_conn.delete_vpc(vpc.id) except EC2ResponseError, e: module.fail_json( msg='Unable to delete VPC {0}, error: {1}'.format(vpc.id, e) ) changed = True return (changed, vpc_dict, terminated_vpc_id) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( cidr_block = dict(), instance_tenancy = dict(choices=['default', 'dedicated'], default='default'), wait = dict(choices=BOOLEANS, default=False), wait_timeout = dict(default=300), dns_support = dict(choices=BOOLEANS, default=True), dns_hostnames = dict(choices=BOOLEANS, default=True), subnets = dict(type='list'), vpc_id = dict(), internet_gateway = dict(choices=BOOLEANS, default=False), route_tables = dict(type='list'), state = dict(choices=['present', 'absent'], default='present'), ) ) module = AnsibleModule( argument_spec=argument_spec, ) state = module.params.get('state') ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) # If we have a region specified, connect to its endpoint. if region: try: vpc_conn = boto.vpc.connect_to_region( region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key ) except boto.exception.NoAuthHandlerFound, e: module.fail_json(msg = str(e)) else: module.fail_json(msg="region must be specified") if module.params.get('state') == 'absent': vpc_id = module.params.get('vpc_id') cidr = module.params.get('cidr_block') if vpc_id == None and cidr == None: module.fail_json( msg='You must either specify a vpc id or a cidr '\ 'block to terminate a VPC, aborting' ) (changed, vpc_dict, new_vpc_id) = terminate_vpc(module, vpc_conn, vpc_id, cidr) subnets_changed = None elif module.params.get('state') == 'present': # Changed is always set to true when provisioning a new VPC (vpc_dict, new_vpc_id, subnets_changed, changed) = create_vpc(module, vpc_conn) module.exit_json(changed=changed, vpc_id=new_vpc_id, vpc=vpc_dict, subnets=subnets_changed) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/ovirt0000775000000000000000000003354212316627017015515 0ustar rootroot#!/usr/bin/python # (c) 2013, Vincent Van der Kussen # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ovirt author: Vincent Van der Kussen short_description: oVirt/RHEV platform management description: - allows you to create new instances, either from scratch or an image, in addition to deleting or stopping instances on the oVirt/RHEV platform version_added: "1.4" options: user: description: - the user to authenticate with default: null required: true aliases: [] url: description: - the url of the oVirt instance default: null required: true aliases: [] instance_name: description: - the name of the instance to use default: null required: true aliases: [ vmname ] password: description: - password of the user to authenticate with default: null required: true aliases: [] image: description: - template to use for the instance default: null required: false aliases: [] resource_type: description: - whether you want to deploy an image or create an instance from scratch. default: null required: false aliases: [] choices: [ 'new', 'template' ] zone: description: - deploy the image to this oVirt cluster default: null required: false aliases: [] instance_disksize: description: - size of the instance's disk in GB default: null required: false aliases: [ vm_disksize] instance_cpus: description: - the instance's number of cpu's default: 1 required: false aliases: [ vmcpus ] instance_nic: description: - name of the network interface in oVirt/RHEV default: null required: false aliases: [ vmnic ] instance_network: description: - the logical network the machine should belong to default: rhevm required: false aliases: [ vmnetwork ] instance_mem: description: - the instance's amount of memory in MB default: null required: false aliases: [ vmmem ] instance_type: description: - define if the instance is a server or desktop default: server required: false aliases: [ vmtype ] choices: [ 'server', 'desktop' ] disk_alloc: description: - define if disk is thin or preallocated default: thin required: false aliases: [] choices: [ 'thin', 'preallocated' ] disk_int: description: - interface type of the disk default: virtio required: false aliases: [] choices: [ 'virtio', 'ide' ] instance_os: description: - type of Operating System default: null required: false aliases: [ vmos ] instance_cores: description: - define the instance's number of cores default: 1 required: false aliases: [ vmcores ] sdomain: description: - the Storage Domain where you want to create the instance's disk on. default: null required: false aliases: [] region: description: - the oVirt/RHEV datacenter where you want to deploy to default: null required: false aliases: [] state: description: - create, terminate or remove instances default: 'present' required: false aliases: [] choices: ['present', 'absent', 'shutdown', 'started', 'restarted'] requirements: [ "ovirt-engine-sdk" ] ''' EXAMPLES = ''' # Basic example provisioning from image. 
action: ovirt > user=admin@internal url=https://ovirt.example.com instance_name=ansiblevm04 password=secret image=centos_64 zone=cluster01 resource_type=template # Full example to create new instance from scratch action: ovirt > instance_name=testansible resource_type=new instance_type=server user=admin@internal password=secret url=https://ovirt.example.com instance_disksize=10 zone=cluster01 region=datacenter1 instance_cpus=1 instance_nic=nic1 instance_network=rhevm instance_mem=1000 disk_alloc=thin sdomain=FIBER01 instance_cores=1 instance_os=rhel_6x64 disk_int=virtio # stopping an instance action: ovirt > instance_name=testansible state=shutdown user=admin@internal password=secret url=https://ovirt.example.com # starting an instance action: ovirt > instance_name=testansible state=started user=admin@internal password=secret url=https://ovirt.example.com ''' import sys import time try: from ovirtsdk.api import API from ovirtsdk.xml import params except ImportError: print "failed=True msg='ovirtsdk required for this module'" sys.exit(1) # ------------------------------------------------------------------- # # create connection with API # def conn(url, user, password): api = API(url=url, username=user, password=password, insecure=True) try: value = api.test() except: print "error connecting to the oVirt API" sys.exit(1) return api # ------------------------------------------------------------------- # # Create VM from scratch def create_vm(conn, vmtype, vmname, zone, vmdisk_size, vmcpus, vmnic, vmnetwork, vmmem, vmdisk_alloc, sdomain, vmcores, vmos, vmdisk_int): if vmdisk_alloc == 'thin': # define VM params vmparams = params.VM(name=vmname,cluster=conn.clusters.get(name=zone),os=params.OperatingSystem(type_=vmos),template=conn.templates.get(name="Blank"),memory=1024 * 1024 * int(vmmem),cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))), type_=vmtype) # define disk params vmdisk= params.Disk(size=1024 * 1024 * 1024 * int(vmdisk_size), wipe_after_delete=True, sparse=True, interface=vmdisk_int, type_="System", format='cow', storage_domains=params.StorageDomains(storage_domain=[conn.storagedomains.get(name=sdomain)])) # define network parameters network_net = params.Network(name=vmnetwork) nic_net1 = params.NIC(name=vmnic, network=network_net, interface='virtio') elif vmdisk_alloc == 'preallocated': # define VM params vmparams = params.VM(name=vmname,cluster=conn.clusters.get(name=zone),os=params.OperatingSystem(type_=vmos),template=conn.templates.get(name="Blank"),memory=1024 * 1024 * int(vmmem),cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))) ,type_=vmtype) # define disk params vmdisk= params.Disk(size=1024 * 1024 * 1024 * int(vmdisk_size), wipe_after_delete=True, sparse=False, interface=vmdisk_int, type_="System", format='raw', storage_domains=params.StorageDomains(storage_domain=[conn.storagedomains.get(name=sdomain)])) # define network parameters network_net = params.Network(name=vmnetwork) nic_net1 = params.NIC(name=vmnic, network=network_net, interface='virtio') try: conn.vms.add(vmparams) except: print "Error creating VM with specified parameters" sys.exit(1) vm = conn.vms.get(name=vmname) try: vm.disks.add(vmdisk) except: print "Error attaching disk" try: vm.nics.add(nic_net1) except: print "Error adding nic" # create an instance from a template def create_vm_template(conn, vmname, image, zone): vmparams = params.VM(name=vmname, cluster=conn.clusters.get(name=zone), template=conn.templates.get(name=image),disks=params.Disks(clone=True)) try: conn.vms.add(vmparams) except: print
'error adding template %s' % image sys.exit(1) # start instance def vm_start(conn, vmname): vm = conn.vms.get(name=vmname) vm.start() # Stop instance def vm_stop(conn, vmname): vm = conn.vms.get(name=vmname) vm.stop() # restart instance def vm_restart(conn, vmname): state = vm_status(conn, vmname) vm = conn.vms.get(name=vmname) vm.stop() while conn.vms.get(name=vmname).get_status().get_state() != 'down': time.sleep(5) vm.start() # remove an instance def vm_remove(conn, vmname): vm = conn.vms.get(name=vmname) vm.delete() # ------------------------------------------------------------------- # # VM statuses # # Get the VMs status def vm_status(conn, vmname): status = conn.vms.get(name=vmname).status.state print "vm status is : %s" % status return status # Get VM object and return its name if the object exists def get_vm(conn, vmname): vm = conn.vms.get(name=vmname) if vm is None: name = "empty" print "vmname: %s" % name else: name = vm.get_name() print "vmname: %s" % name return name # ------------------------------------------------------------------- # # Hypervisor operations # # not available yet # ------------------------------------------------------------------- # # Main def main(): module = AnsibleModule( argument_spec = dict( state = dict(default='present', choices=['present', 'absent', 'shutdown', 'started', 'restart']), #name = dict(required=True), user = dict(required=True), url = dict(required=True), instance_name = dict(required=True, aliases=['vmname']), password = dict(required=True), image = dict(), resource_type = dict(choices=['new', 'template']), zone = dict(), instance_disksize = dict(aliases=['vm_disksize']), instance_cpus = dict(default=1, aliases=['vmcpus']), instance_nic = dict(aliases=['vmnic']), instance_network = dict(default='rhevm', aliases=['vmnetwork']), instance_mem = dict(aliases=['vmmem']), instance_type = dict(default='server', aliases=['vmtype'], choices=['server', 'desktop']), disk_alloc = dict(default='thin', choices=['thin', 'preallocated']), disk_int = dict(default='virtio', choices=['virtio', 'ide']), instance_os = dict(aliases=['vmos']), instance_cores = dict(default=1, aliases=['vmcores']), sdomain = dict(), region = dict(), ) ) state = module.params['state'] user = module.params['user'] url = module.params['url'] vmname = module.params['instance_name'] password = module.params['password'] image = module.params['image'] # name of the image to deploy resource_type = module.params['resource_type'] # template or from scratch zone = module.params['zone'] # oVirt cluster vmdisk_size = module.params['instance_disksize'] # disksize vmcpus = module.params['instance_cpus'] # number of cpu vmnic = module.params['instance_nic'] # network interface vmnetwork = module.params['instance_network'] # logical network vmmem = module.params['instance_mem'] # mem size vmdisk_alloc = module.params['disk_alloc'] # thin, preallocated vmdisk_int = module.params['disk_int'] # disk interface virtio or ide vmos = module.params['instance_os'] # Operating System vmtype = module.params['instance_type'] # server or desktop vmcores = module.params['instance_cores'] # number of cores sdomain = module.params['sdomain'] # storage domain to store disk on region = module.params['region'] # oVirt Datacenter #initialize connection c = conn(url+"/api", user, password) if state == 'present': if get_vm(c, vmname) == "empty": if resource_type == 'template': create_vm_template(c, vmname, image, zone) module.exit_json(changed=True, msg="deployed VM %s from template %s" % (vmname,image)) elif
resource_type == 'new': # FIXME: refactor, use keyword args. create_vm(c, vmtype, vmname, zone, vmdisk_size, vmcpus, vmnic, vmnetwork, vmmem, vmdisk_alloc, sdomain, vmcores, vmos, vmdisk_int) module.exit_json(changed=True, msg="deployed VM %s from scratch" % vmname) else: module.exit_json(changed=False, msg="You did not specify a resource type") else: module.exit_json(changed=False, msg="VM %s already exists" % vmname) if state == 'started': if vm_status(c, vmname) == 'up': module.exit_json(changed=False, msg="VM %s is already running" % vmname) else: vm_start(c, vmname) module.exit_json(changed=True, msg="VM %s started" % vmname) if state == 'shutdown': if vm_status(c, vmname) == 'down': module.exit_json(changed=False, msg="VM %s is already shutdown" % vmname) else: vm_stop(c, vmname) module.exit_json(changed=True, msg="VM %s is shutting down" % vmname) if state == 'restart': if vm_status(c, vmname) == 'up': vm_restart(c, vmname) module.exit_json(changed=True, msg="VM %s is restarted" % vmname) else: module.exit_json(changed=False, msg="VM %s is not running" % vmname) if state == 'absent': if get_vm(c, vmname) == "empty": module.exit_json(changed=False, msg="VM %s does not exist" % vmname) else: vm_remove(c, vmname) module.exit_json(changed=True, msg="VM %s removed" % vmname) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/ec2_group0000664000000000000000000001756112316627017016237 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- DOCUMENTATION = ''' --- module: ec2_group version_added: "1.3" short_description: maintain an ec2 VPC security group. description: - maintains ec2 security groups. This module has a dependency on python-boto >= 2.5 options: name: description: - Name of the security group. required: true description: description: - Description of the security group. required: true vpc_id: description: - ID of the VPC to create the group in. required: false rules: description: - List of firewall rules to enforce in this group (see example). required: true region: description: - the EC2 region to use required: false default: null aliases: [] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints) required: false default: null aliases: [] ec2_secret_key: description: - EC2 secret key required: false default: null aliases: ['aws_secret_key'] ec2_access_key: description: - EC2 access key required: false default: null aliases: ['aws_access_key'] state: version_added: "1.4" description: - create or delete security group required: false default: 'present' aliases: [] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. 
required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] ''' EXAMPLES = ''' - name: example ec2 group local_action: module: ec2_group name: example description: an example EC2 group vpc_id: 12345 region: eu-west-1 ec2_secret_key: SECRET ec2_access_key: ACCESS rules: - proto: tcp from_port: 80 to_port: 80 cidr_ip: 0.0.0.0/0 - proto: tcp from_port: 22 to_port: 22 cidr_ip: 10.0.0.0/8 - proto: udp from_port: 10050 to_port: 10050 cidr_ip: 10.0.0.0/8 - proto: udp from_port: 10051 to_port: 10051 group_id: sg-12345678 - proto: all # the containing group name may be specified here group_name: example ''' import sys try: import boto.ec2 except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def addRulesToLookup(rules, prefix, dict): for rule in rules: for grant in rule.grants: dict["%s-%s-%s-%s-%s-%s" % (prefix, rule.ip_protocol, rule.from_port, rule.to_port, grant.group_id, grant.cidr_ip)] = rule def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( name=dict(required=True), description=dict(required=True), vpc_id=dict(), rules=dict(), state = dict(default='present', choices=['present', 'absent']), ) ) module = AnsibleModule( argument_spec=argument_spec, supports_check_mode=True, ) name = module.params['name'] description = module.params['description'] vpc_id = module.params['vpc_id'] rules = module.params['rules'] state = module.params.get('state') changed = False ec2 = ec2_connect(module) # find the group if present group = None groups = {} for curGroup in ec2.get_all_security_groups(): groups[curGroup.id] = curGroup groups[curGroup.name] = curGroup if curGroup.name == name and (vpc_id is None or curGroup.vpc_id == vpc_id): group = curGroup # Ensure requested group is absent if state == 'absent': if group: '''found a match, delete it''' try: group.delete() except Exception, e: module.fail_json(msg="Unable to delete security group '%s' - %s" % (group, e)) else: group = None changed = True else: '''no match found, no changes required''' # Ensure requested group is present elif state == 'present': if group: '''existing group found''' # check the group parameters are correct group_in_use = False rs = ec2.get_all_instances() for r in rs: for i in r.instances: group_in_use |= reduce(lambda x, y: x | (y.name == name), i.groups, False) if group.description != description: if group_in_use: module.fail_json(msg="Group description does not match, but it is in use so cannot be changed.") # if the group doesn't exist, create it now else: '''no match found, create it''' if not module.check_mode: group = ec2.create_security_group(name, description, vpc_id=vpc_id) changed = True else: module.fail_json(msg="Unsupported state requested: %s" % state) # create a lookup for all existing rules on the group if group: groupRules = {} addRulesToLookup(group.rules, 'in', groupRules) # Now, go through all provided rules and ensure they are there.
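# Each existing rule was keyed by addRulesToLookup() above as 'in-<proto>-<from_port>-<to_port>-<group_id>-<cidr_ip>'; requested rules that already exist are removed from groupRules below, so whatever remains afterwards is revoked as defunct.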
if rules: for rule in rules: group_id = None group_name = None ip = None if 'group_id' in rule and 'cidr_ip' in rule: module.fail_json(msg="Specify group_id OR cidr_ip, not both") elif 'group_name' in rule and 'cidr_ip' in rule: module.fail_json(msg="Specify group_name OR cidr_ip, not both") elif 'group_id' in rule and 'group_name' in rule: module.fail_json(msg="Specify group_id OR group_name, not both") elif 'group_id' in rule: group_id = rule['group_id'] elif 'group_name' in rule: group_name = rule['group_name'] if group_name in groups: group_id = groups[group_name].id elif group_name == name: group_id = group.id groups[group_id] = group groups[group_name] = group elif 'cidr_ip' in rule: ip = rule['cidr_ip'] if rule['proto'] == 'all': rule['proto'] = -1 rule['from_port'] = None rule['to_port'] = None # If rule already exists, don't later delete it ruleId = "%s-%s-%s-%s-%s-%s" % ('in', rule['proto'], rule['from_port'], rule['to_port'], group_id, ip) if ruleId in groupRules: del groupRules[ruleId] # Otherwise, add new rule else: grantGroup = None if group_id: grantGroup = groups[group_id] if not module.check_mode: group.authorize(rule['proto'], rule['from_port'], rule['to_port'], ip, grantGroup) changed = True # Finally, remove anything left in the groupRules -- these will be defunct rules for rule in groupRules.itervalues(): for grant in rule.grants: grantGroup = None if grant.group_id: grantGroup = groups[grant.group_id] if not module.check_mode: group.revoke(rule.ip_protocol, rule.from_port, rule.to_port, grant.cidr_ip, grantGroup) changed = True if group: module.exit_json(changed=changed, group_id=group.id) else: module.exit_json(changed=changed, group_id=None) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/quantum_network0000664000000000000000000002410212316627017017602 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . try: try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='quantumclient (or neutronclient) and keystone client are required'") DOCUMENTATION = ''' --- module: quantum_network version_added: "1.4" short_description: Creates/Removes networks from OpenStack description: - Add or Remove network from OpenStack. 
options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' tenant_name: description: - The name of the tenant for whom the network is created required: false default: None auth_url: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present name: description: - Name to be assigned to the network required: true default: None provider_network_type: description: - The type of the network to be created, gre, vlan, local. Available types depend on the plugin. The Quantum service decides if not specified. required: false default: None provider_physical_network: description: - The physical network which would realize the virtual network for flat and vlan networks. required: false default: None provider_segmentation_id: description: - The id that has to be assigned to the network, in case of vlan networks that would be vlan id and for gre the tunnel id required: false default: None router_external: description: - If 'yes', specifies that the virtual network is an external network (public). required: false default: false shared: description: - Whether this network is shared or not required: false default: false admin_state_up: description: - Whether the state should be marked as up or down required: false default: true requirements: ["quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Create a GRE backed Quantum network with tunnel id 1 for tenant1 - quantum_network: name=t1network tenant_name=tenant1 state=present provider_network_type=gre provider_segmentation_id=1 login_username=admin login_password=admin login_tenant_name=admin # Create an external network - quantum_network: name=external_network state=present provider_network_type=local router_external=yes login_username=admin login_password=admin login_tenant_name=admin ''' _os_keystone = None _os_tenant_id = None def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to keystone: %s" %e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s " %e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s " %e.message) return neutron def _set_tenant_id(module): global _os_tenant_id if not module.params['tenant_name']: tenant_name = module.params['login_tenant_name'] else: tenant_name = module.params['tenant_name'] for tenant in _os_keystone.tenants.list(): if tenant.name == tenant_name: _os_tenant_id =
tenant.id break if not _os_tenant_id: module.fail_json(msg = "The tenant id cannot be found, please check the parameters") def _get_net_id(neutron, module): kwargs = { 'tenant_id': _os_tenant_id, 'name': module.params['name'], } try: networks = neutron.list_networks(**kwargs) except Exception, e: module.fail_json(msg = "Error in listing neutron networks: %s" % e.message) if not networks['networks']: return None return networks['networks'][0]['id'] def _create_network(module, neutron): neutron.format = 'json' network = { 'name': module.params.get('name'), 'tenant_id': _os_tenant_id, 'provider:network_type': module.params.get('provider_network_type'), 'provider:physical_network': module.params.get('provider_physical_network'), 'provider:segmentation_id': module.params.get('provider_segmentation_id'), 'router:external': module.params.get('router_external'), 'shared': module.params.get('shared'), 'admin_state_up': module.params.get('admin_state_up'), } if module.params['provider_network_type'] == 'local': network.pop('provider:physical_network', None) network.pop('provider:segmentation_id', None) if module.params['provider_network_type'] == 'flat': network.pop('provider:segmentation_id', None) if module.params['provider_network_type'] == 'gre': network.pop('provider:physical_network', None) if module.params['provider_network_type'] is None: network.pop('provider:network_type', None) network.pop('provider:physical_network', None) network.pop('provider:segmentation_id', None) try: net = neutron.create_network({'network':network}) except Exception, e: module.fail_json(msg = "Error in creating network: %s" % e.message) return net['network']['id'] def _delete_network(module, net_id, neutron): try: id = neutron.delete_network(net_id) except Exception, e: module.fail_json(msg = "Error in deleting the network: %s" % e.message) return True def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required=True), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), name = dict(required=True), tenant_name = dict(default=None), provider_network_type = dict(default=None, choices=['local', 'vlan', 'flat', 'gre']), provider_physical_network = dict(default=None), provider_segmentation_id = dict(default=None), router_external = dict(default=False, type='bool'), shared = dict(default=False, type='bool'), admin_state_up = dict(default=True, type='bool'), state = dict(default='present', choices=['absent', 'present']) ), ) if module.params['provider_network_type'] in ['vlan', 'flat']: if not module.params['provider_physical_network']: module.fail_json(msg = "for vlan and flat networks, variable provider_physical_network should be set.") if module.params['provider_network_type'] in ['vlan', 'gre']: if not module.params['provider_segmentation_id']: module.fail_json(msg = "for vlan & gre networks, variable provider_segmentation_id should be set.") neutron = _get_neutron_client(module, module.params) _set_tenant_id(module) if module.params['state'] == 'present': network_id = _get_net_id(neutron, module) if not network_id: network_id = _create_network(module, neutron) module.exit_json(changed = True, result = "Created", id = network_id) else: module.exit_json(changed = False, result = "Success", id = network_id) if module.params['state'] == 'absent': network_id = _get_net_id(neutron, module) if not network_id: module.exit_json(changed = False, result = "Success") else:
_delete_network(module, network_id, neutron) module.exit_json(changed = True, result = "Deleted") # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/nova_compute0000664000000000000000000002407712316627017017051 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # (c) 2013, John Dewey # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see <http://www.gnu.org/licenses/>. try: from novaclient.v1_1 import client as nova_client from novaclient import exceptions import time except ImportError: print("failed=True msg='novaclient is required for this module'") DOCUMENTATION = ''' --- module: nova_compute version_added: "1.2" short_description: Create/Delete VMs from OpenStack description: - Create or Remove virtual machines from OpenStack. options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present name: description: - Name that has to be given to the instance required: true default: None image_id: description: - The id of the image that has to be cloned required: true default: None flavor_id: description: - The id of the flavor in which the new VM has to be created required: false default: 1 key_name: description: - The key pair name to be used when creating a VM required: false default: None security_groups: description: - The name of the security group to which the VM should be added required: false default: None nics: description: - A list of network id's to which the VM's interface should be attached required: false default: None meta: description: - A list of key value pairs that should be provided as a metadata to the new VM required: false default: None wait: description: - If the module should wait for the VM to be created.
required: false default: 'yes' wait_for: description: - The amount of time the module should wait for the VM to get into active state required: false default: 180 requirements: ["novaclient"] ''' EXAMPLES = ''' # Creates a new VM and attaches to a network and passes metadata to the instance - nova_compute: state: present login_username: admin login_password: admin login_tenant_name: admin name: vm1 image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529 key_name: ansible_key wait_for: 200 flavor_id: 4 nics: - net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723 meta: hostname: test1 group: uge_master ''' def _delete_server(module, nova): name = None server_list = None try: server_list = nova.servers.list(True, {'name': module.params['name']}) if server_list: server = [x for x in server_list if x.name == module.params['name']] nova.servers.delete(server.pop()) except Exception, e: module.fail_json( msg = "Error in deleting vm: %s" % e.message) if module.params['wait'] == 'no': module.exit_json(changed = True, result = "deleted") expire = time.time() + int(module.params['wait_for']) while time.time() < expire: name = nova.servers.list(True, {'name': module.params['name']}) if not name: module.exit_json(changed = True, result = "deleted") time.sleep(5) module.fail_json(msg = "Timed out waiting for server to get deleted, please check manually") def _create_server(module, nova): bootargs = [module.params['name'], module.params['image_id'], module.params['flavor_id']] bootkwargs = { 'nics' : module.params['nics'], 'meta' : module.params['meta'], 'key_name': module.params['key_name'], 'security_groups': module.params['security_groups'].split(','), } if not module.params['key_name']: del bootkwargs['key_name'] try: server = nova.servers.create(*bootargs, **bootkwargs) server = nova.servers.get(server.id) except Exception, e: module.fail_json( msg = "Error in creating instance: %s " % e.message) if module.params['wait'] == 'yes': expire = time.time() + int(module.params['wait_for']) while time.time() < expire: try: server = nova.servers.get(server.id) except Exception, e: module.fail_json( msg = "Error in getting info from instance: %s " % e.message) if server.status == 'ACTIVE': private = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if 'OS-EXT-IPS:type' in x and x['OS-EXT-IPS:type'] == 'fixed'] public = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if 'OS-EXT-IPS:type' in x and x['OS-EXT-IPS:type'] == 'floating'] module.exit_json(changed = True, id = server.id, private_ip=''.join(private), public_ip=''.join(public), status = server.status, info = server._info) if server.status == 'ERROR': module.fail_json(msg = "Error in creating the server, please check logs") time.sleep(2) module.fail_json(msg = "Timeout waiting for the server to come up.. Please check manually") if server.status == 'ERROR': module.fail_json(msg = "Error in creating the server.. 
Please check manually") private = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if x['OS-EXT-IPS:type'] == 'fixed'] public = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if x['OS-EXT-IPS:type'] == 'floating'] module.exit_json(changed = True, id = info['id'], private_ip=''.join(private), public_ip=''.join(public), status = server.status, info = server._info) def _get_server_state(module, nova): server = None try: servers = nova.servers.list(True, {'name': module.params['name']}) if servers: server = [x for x in servers if x.name == module.params['name']][0] except Exception, e: module.fail_json(msg = "Error in getting the server list: %s" % e.message) if server and module.params['state'] == 'present': if server.status != 'ACTIVE': module.fail_json( msg="The VM is available but not Active. state:" + server.status) private = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if 'OS-EXT-IPS:type' in x and x['OS-EXT-IPS:type'] == 'fixed'] public = [ x['addr'] for x in getattr(server, 'addresses').itervalues().next() if 'OS-EXT-IPS:type' in x and x['OS-EXT-IPS:type'] == 'floating'] module.exit_json(changed = False, id = server.id, public_ip = ''.join(public), private_ip = ''.join(private), info = server._info) if server and module.params['state'] == 'absent': return True if module.params['state'] == 'absent': module.exit_json(changed = False, result = "not present") return True def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required='True'), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), name = dict(required=True), image_id = dict(default=None), flavor_id = dict(default=1), key_name = dict(default=None), security_groups = dict(default='default'), nics = dict(default=None), meta = dict(default=None), wait = dict(default='yes', choices=['yes', 'no']), wait_for = dict(default=180), state = dict(default='present', choices=['absent', 'present']) ), ) nova = nova_client.Client(module.params['login_username'], module.params['login_password'], module.params['login_tenant_name'], module.params['auth_url'], service_type='compute') try: nova.authenticate() except exceptions.Unauthorized, e: module.fail_json(msg = "Invalid OpenStack Nova credentials.: %s" % e.message) except exceptions.AuthorizationFailure, e: module.fail_json(msg = "Unable to authorize user: %s" % e.message) if module.params['state'] == 'present': if not module.params['image_id']: module.fail_json( msg = "Parameter 'image_id' is required if state == 'present'") else: _get_server_state(module, nova) _create_server(module, nova) if module.params['state'] == 'absent': _get_server_state(module, nova) _delete_server(module, nova) # this is magic, see lib/ansible/module.params['common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/gce0000664000000000000000000003266612316627017015113 0ustar rootroot#!/usr/bin/python # Copyright 2013 Google Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: gce version_added: "1.4" short_description: create or terminate GCE instances description: - Creates or terminates Google Compute Engine (GCE) instances. See U(https://cloud.google.com/products/compute-engine) for an overview. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py. options: image: description: - image string to use for the instance required: false default: "debian-7" aliases: [] instance_names: description: - a comma-separated list of instance names to create or destroy required: false default: null aliases: [] machine_type: description: - machine type to use for the instance, use 'n1-standard-1' by default required: false default: "n1-standard-1" aliases: [] metadata: description: - a hash/dictionary of custom data for the instance; '{"key":"value",...}' required: false default: null aliases: [] name: description: - identifier when working with a single instance required: false aliases: [] network: description: - name of the network, 'default' will be used if not specified required: false default: "default" aliases: [] persistent_boot_disk: description: - if set, create the instance with a persistent boot disk required: false default: "false" aliases: [] state: description: - desired state of the resource required: false default: "present" choices: ["active", "present", "absent", "deleted"] aliases: [] tags: description: - a comma-separated list of tags to associate with the instance required: false default: null aliases: [] zone: description: - the GCE zone to use required: true default: "us-central1-a" choices: ["us-central1-a", "us-central1-b", "us-central2-a", "europe-west1-a", "europe-west1-b"] aliases: [] requirements: [ "libcloud" ] author: Eric Johnson ''' EXAMPLES = ''' # Basic provisioning example. Create a single Debian 7 instance in the # us-central1-a Zone of n1-standard-1 machine type. 
- local_action: module: gce name: test-instance zone: us-central1-a machine_type: n1-standard-1 image: debian-7 # Example using defaults and with metadata to create a single 'foo' instance - local_action: module: gce name: foo metadata: '{"db":"postgres", "group":"qa", "id":500}' # Launch instances from a control node, runs some tasks on the new instances, # and then terminate them - name: Create a sandbox instance hosts: localhost vars: names: foo,bar machine_type: n1-standard-1 image: debian-6 zone: us-central1-a tasks: - name: Launch instances local_action: gce instance_names={{names}} machine_type={{machine_type}} image={{image}} zone={{zone}} register: gce - name: Wait for SSH to come up local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started with_items: {{gce.instance_data}} - name: Configure instance(s) hosts: launched sudo: True roles: - my_awesome_role - my_awesome_tasks - name: Terminate instances hosts: localhost connection: local tasks: - name: Terminate instances that were previously launched local_action: module: gce state: 'absent' instance_names: {{gce.instance_names}} ''' import sys USER_AGENT_PRODUCT="Ansible-gce" USER_AGENT_VERSION="v1beta15" try: from libcloud.compute.types import Provider from libcloud.compute.providers import get_driver from libcloud.common.google import GoogleBaseError, QuotaExceededError, \ ResourceExistsError, ResourceInUseError, ResourceNotFoundError _ = Provider.GCE except ImportError: print("failed=True " + \ "msg='libcloud with GCE support (0.13.3+) required for this module'") sys.exit(1) try: from ast import literal_eval except ImportError: print("failed=True " + \ "msg='GCE module requires python's 'ast' module, python v2.6+'") sys.exit(1) # Load in the libcloud secrets file try: import secrets except ImportError: secrets = None ARGS = getattr(secrets, 'GCE_PARAMS', ()) KWARGS = getattr(secrets, 'GCE_KEYWORD_PARAMS', {}) if not ARGS or not 'project' in KWARGS: print("failed=True " + \ "msg='Missing GCE connection parametres in libcloud secrets file.'") sys.exit(1) def unexpected_error_msg(error): """Create an error string based on passed in error.""" msg='Unexpected response: HTTP return_code[' msg+='%s], API error code[%s] and message: %s' % ( error.http_code, error.code, str(error.value)) return msg def get_instance_info(inst): """Retrieves instance information from an instance object and returns it as a dictionary. """ metadata = {} if 'metadata' in inst.extra and 'items' in inst.extra['metadata']: for md in inst.extra['metadata']['items']: metadata[md['key']] = md['value'] try: netname = inst.extra['networkInterfaces'][0]['network'].split('/')[-1] except: netname = None return({ 'image': not inst.image is None and inst.image.split('/')[-1] or None, 'machine_type': inst.size, 'metadata': metadata, 'name': inst.name, 'network': netname, 'private_ip': inst.private_ips[0], 'public_ip': inst.public_ips[0], 'status': ('status' in inst.extra) and inst.extra['status'] or None, 'tags': ('tags' in inst.extra) and inst.extra['tags'] or [], 'zone': ('zone' in inst.extra) and inst.extra['zone'].name or None, }) def create_instances(module, gce, instance_names): """Creates new instances. Attributes other than instance_names are picked up from 'module' module : AnsibleModule object gce: authenticated GCE libcloud driver instance_names: python list of instance names to create Returns: A list of dictionaries with instance information about the instances that were launched. 
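Concretely, the return value is the tuple (changed, instance_json_data, instance_names).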
""" image = module.params.get('image') machine_type = module.params.get('machine_type') metadata = module.params.get('metadata') network = module.params.get('network') persistent_boot_disk = module.params.get('persistent_boot_disk') state = module.params.get('state') tags = module.params.get('tags') zone = module.params.get('zone') new_instances = [] changed = False lc_image = gce.ex_get_image(image) lc_network = gce.ex_get_network(network) lc_machine_type = gce.ex_get_size(machine_type) lc_zone = gce.ex_get_zone(zone) # Try to convert the user's metadata value into the format expected # by GCE. First try to ensure user has proper quoting of a # dictionary-like syntax using 'literal_eval', then convert the python # dict into a python list of 'key' / 'value' dicts. Should end up # with: # [ {'key': key1, 'value': value1}, {'key': key2, 'value': value2}, ...] if metadata: try: md = literal_eval(metadata) if not isinstance(md, dict): raise ValueError('metadata must be a dict') except ValueError, e: print("failed=True msg='bad metadata: %s'" % str(e)) sys.exit(1) except SyntaxError, e: print("failed=True msg='bad metadata syntax'") sys.exit(1) items = [] for k,v in md.items(): items.append({"key": k,"value": v}) metadata = {'items': items} # These variables all have default values but check just in case if not lc_image or not lc_network or not lc_machine_type or not lc_zone: module.fail_json(msg='Missing required create instance variable', changed=False) for name in instance_names: pd = None if persistent_boot_disk: try: pd = gce.create_volume(None, "%s" % name, image=lc_image) except ResourceExistsError: pd = gce.ex_get_volume("%s" % name, lc_zone) inst = None try: inst = gce.create_node(name, lc_machine_type, lc_image, location=lc_zone, ex_network=network, ex_tags=tags, ex_metadata=metadata, ex_boot_disk=pd) changed = True except ResourceExistsError: inst = gce.ex_get_node(name, lc_zone) except GoogleBaseError, e: module.fail_json(msg='Unexpected error attempting to create ' + \ 'instance %s, error: %s' % (name, e.value)) if inst: new_instances.append(inst) instance_names = [] instance_json_data = [] for inst in new_instances: d = get_instance_info(inst) instance_names.append(d['name']) instance_json_data.append(d) return (changed, instance_json_data, instance_names) def terminate_instances(module, gce, instance_names, zone_name): """Terminates a list of instances. module: Ansible module object gce: authenticated GCE connection object instance_names: a list of instance names to terminate zone_name: the zone where the instances reside prior to termination Returns a dictionary of instance names that were terminated. 
""" changed = False terminated_instance_names = [] for name in instance_names: inst = None try: inst = gce.ex_get_node(name, zone_name) except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) if inst: gce.destroy_node(inst) terminated_instance_names.append(inst.name) changed = True return (changed, terminated_instance_names) def main(): module = AnsibleModule( argument_spec = dict( image = dict(default='debian-7'), instance_names = dict(), machine_type = dict(default='n1-standard-1'), metadata = dict(), name = dict(), network = dict(default='default'), persistent_boot_disk = dict(type='bool', choices=BOOLEANS, default=False), state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'), tags = dict(type='list'), zone = dict(choices=['us-central1-a', 'us-central1-b', 'us-central2-a', 'europe-west1-a', 'europe-west1-b'], default='us-central1-a'), ) ) image = module.params.get('image') instance_names = module.params.get('instance_names') machine_type = module.params.get('machine_type') metadata = module.params.get('metadata') name = module.params.get('name') network = module.params.get('network') persistent_boot_disk = module.params.get('persistent_boot_disk') state = module.params.get('state') tags = module.params.get('tags') zone = module.params.get('zone') changed = False try: gce = get_driver(Provider.GCE)(*ARGS, datacenter=zone, **KWARGS) gce.connection.user_agent_append("%s/%s" % ( USER_AGENT_PRODUCT, USER_AGENT_VERSION)) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) inames = [] if isinstance(instance_names, list): inames = instance_names elif isinstance(instance_names, str): inames = instance_names.split(',') if name: inames.append(name) if not inames: module.fail_json(msg='Must specify a "name" or "instance_names"', changed=False) if not zone: module.fail_json(msg='Must specify a "zone"', changed=False) json_output = {'zone': zone} if state in ['absent', 'deleted']: json_output['state'] = 'absent' (changed, terminated_instance_names) = terminate_instances(module, gce, inames, zone) # based on what user specified, return the same variable, although # value could be different if an instance could not be destroyed if instance_names: json_output['instance_names'] = terminated_instance_names elif name: json_output['name'] = name elif state in ['active', 'present']: json_output['state'] = 'present' (changed, instance_data,instance_name_list) = create_instances( module, gce, inames) json_output['instance_data'] = instance_data if instance_names: json_output['instance_names'] = instance_name_list elif name: json_output['name'] = name json_output['changed'] = changed print json.dumps(json_output) sys.exit(0) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/rax_files_objects0000664000000000000000000004655712316627017020046 0ustar rootroot#!/usr/bin/python -tt # (c) 2013, Paul Durivage # # This file is part of Ansible. # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_files_objects short_description: Upload, download, and delete objects in Rackspace Cloud Files description: - Upload, download, and delete objects in Rackspace Cloud Files version_added: "1.5" options: api_key: description: - Rackspace API key (overrides I(credentials)) default: null clear_meta: description: - Optionally clear existing metadata when applying metadata to existing objects. Selecting this option is only appropriate when setting type=meta choices: ["yes", "no"] default: "no" container: description: - The container to use for file object operations. required: true default: null credentials: description: - File to find the Rackspace credentials in (ignored if I(api_key) and I(username) are provided) default: null aliases: ['creds_file'] dest: description: - The destination of a "get" operation; i.e. a local directory, "/home/user/myfolder". Used to specify the destination of an operation on a remote object; i.e. a file name, "file1", or a comma-separated list of remote objects, "file1,file2,file17" expires: description: - Used to set an expiration on a file or folder uploaded to Cloud Files. Requires an integer, specifying expiration in seconds default: null meta: description: - A hash of items to set as metadata values on an uploaded file or folder default: null method: description: - The method of operation to be performed. For example, put to upload files to Cloud Files, get to download files from Cloud Files or delete to delete remote objects in Cloud Files choices: ["get", "put", "delete"] default: "get" region: description: - Region in which to work. Maps to a Rackspace Cloud region, i.e. DFW, ORD, IAD, SYD, LON default: DFW src: description: - Source from which to upload files. Used to specify a remote object as a source for an operation, i.e. a file name, "file1", or a comma-separated list of remote objects, "file1,file2,file17". src and dest are mutually exclusive on remote-only object operations default: null structure: description: - Used to specify whether to maintain nested directory structure when downloading objects from Cloud Files. Setting to false downloads the contents of a container to a single, flat directory choices: ["yes", "no"] default: "yes" type: description: - Type of object to do work on - Metadata object or a file object choices: ["file", "meta"] default: "file" username: description: - Rackspace username (overrides I(credentials)) default: null requirements: [ "pyrax" ] author: Paul Durivage notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
''' EXAMPLES = ''' - name: "Test Cloud Files Objects" hosts: local gather_facts: False tasks: - name: "Get objects from test container" rax_files_objects: container=testcont dest=~/Downloads/testcont - name: "Get single object from test container" rax_files_objects: container=testcont src=file1 dest=~/Downloads/testcont - name: "Get several objects from test container" rax_files_objects: container=testcont src=file1,file2,file3 dest=~/Downloads/testcont - name: "Delete one object in test container" rax_files_objects: container=testcont method=delete dest=file1 - name: "Delete several objects in test container" rax_files_objects: container=testcont method=delete dest=file2,file3,file4 - name: "Delete all objects in test container" rax_files_objects: container=testcont method=delete - name: "Upload all files to test container" rax_files_objects: container=testcont method=put src=~/Downloads/onehundred - name: "Upload one file to test container" rax_files_objects: container=testcont method=put src=~/Downloads/testcont/file1 - name: "Upload one file to test container with metadata" rax_files_objects: container: testcont src: ~/Downloads/testcont/file2 method: put meta: testkey: testdata who_uploaded_this: someuser@example.com - name: "Upload one file to test container with TTL of 60 seconds" rax_files_objects: container=testcont method=put src=~/Downloads/testcont/file3 expires=60 - name: "Attempt to get remote object that does not exist" rax_files_objects: container=testcont method=get src=FileThatDoesNotExist.jpg dest=~/Downloads/testcont ignore_errors: yes - name: "Attempt to delete remote object that does not exist" rax_files_objects: container=testcont method=delete dest=FileThatDoesNotExist.jpg ignore_errors: yes - name: "Test Cloud Files Objects Metadata" hosts: local gather_facts: false tasks: - name: "Get metadata on one object" rax_files_objects: container=testcont type=meta dest=file2 - name: "Get metadata on several objects" rax_files_objects: container=testcont type=meta src=file2,file1 - name: "Set metadata on an object" rax_files_objects: container: testcont type: meta dest: file17 method: put meta: key1: value1 key2: value2 clear_meta: true - name: "Verify metadata is set" rax_files_objects: container=testcont type=meta src=file17 - name: "Delete metadata" rax_files_objects: container: testcont type: meta dest: file17 method: delete meta: key1: '' key2: '' - name: "Get metadata on all objects" rax_files_objects: container=testcont type=meta ''' import os import sys import time try: import pyrax except ImportError, e: print("failed=True msg='pyrax is required for this module'") sys.exit(1) EXIT_DICT = dict(success=False) META_PREFIX = 'x-object-meta-' def _get_container(module, cf, container): try: return cf.get_container(container) except pyrax.exc.NoSuchContainer, e: module.fail_json(msg=e.message) def upload(module, cf, container, src, dest, meta, expires): """ Uploads a single object or a folder to Cloud Files Optionally sets metadata, a TTL value (expires), or Content-Disposition and Content-Encoding headers.
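When "src" is a directory the whole tree is uploaded via cf.upload_folder() and "dest" must not be set; otherwise the single file "src" is uploaded, stored as "dest" when given.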
""" c = _get_container(module, cf, container) num_objs_before = len(c.get_object_names()) if not src: module.fail_json(msg='src must be specified when uploading') src = os.path.abspath(os.path.expanduser(src)) is_dir = os.path.isdir(src) if not is_dir and not os.path.isfile(src) or not os.path.exists(src): module.fail_json(msg='src must be a file or a directory') if dest and is_dir: module.fail_json(msg='dest cannot be set when whole ' 'directories are uploaded') cont_obj = None if dest and not is_dir: try: cont_obj = c.upload_file(src, obj_name=dest, ttl=expires) except Exception, e: module.fail_json(msg=e.message) elif is_dir: try: id, total_bytes = cf.upload_folder(src, container=c.name, ttl=expires) except Exception, e: module.fail_json(msg=e.message) while True: bytes = cf.get_uploaded(id) if bytes == total_bytes: break time.sleep(1) else: try: cont_obj = c.upload_file(src, ttl=expires) except Exception, e: module.fail_json(msg=e.message) num_objs_after = len(c.get_object_names()) if not meta: meta = dict() meta_result = dict() if meta: if cont_obj: meta_result = cont_obj.set_metadata(meta) else: def _set_meta(objs, meta): """ Sets metadata on a list of objects specified by name """ for obj in objs: try: result = c.get_object(obj).set_metadata(meta) except Exception, e: module.fail_json(msg=e.message) else: meta_result[obj] = result return meta_result def _walker(objs, path, filenames): """ Callback func for os.path.walk """ prefix = '' if path != src: prefix = path.split(src)[-1].lstrip('/') filenames = [os.path.join(prefix, name) for name in filenames if not os.path.isdir(name)] objs += filenames _objs = [] os.path.walk(src, _walker, _objs) meta_result = _set_meta(_objs, meta) EXIT_DICT['success'] = True EXIT_DICT['container'] = c.name EXIT_DICT['msg'] = "Uploaded %s to container: %s" % (src, c.name) if cont_obj or locals().get('bytes'): EXIT_DICT['changed'] = True if meta_result: EXIT_DICT['meta'] = dict(updated=True) if cont_obj: EXIT_DICT['bytes'] = cont_obj.total_bytes EXIT_DICT['etag'] = cont_obj.etag else: EXIT_DICT['bytes'] = total_bytes module.exit_json(**EXIT_DICT) def download(module, cf, container, src, dest, structure): """ Download objects from Cloud Files to a local path specified by "dest". Optionally disable maintaining a directory structure by by passing a false value to "structure". 
""" # Looking for an explicit destination if not dest: module.fail_json(msg='dest is a required argument when ' 'downloading from Cloud Files') # Attempt to fetch the container by name c = _get_container(module, cf, container) # Accept a single object name or a comma-separated list of objs # If not specified, get the entire container if src: objs = src.split(',') objs = map(str.strip, objs) else: objs = c.get_object_names() dest = os.path.abspath(os.path.expanduser(dest)) is_dir = os.path.isdir(dest) if not is_dir: module.fail_json(msg='dest must be a directory') results = [] for obj in objs: try: c.download_object(obj, dest, structure=structure) except Exception, e: module.fail_json(msg=e.message) else: results.append(obj) len_results = len(results) len_objs = len(objs) EXIT_DICT['container'] = c.name EXIT_DICT['requested_downloaded'] = results if results: EXIT_DICT['changed'] = True if len_results == len_objs: EXIT_DICT['success'] = True EXIT_DICT['msg'] = "%s objects downloaded to %s" % (len_results, dest) else: EXIT_DICT['msg'] = "Error: only %s of %s objects were " \ "downloaded" % (len_results, len_objs) module.exit_json(**EXIT_DICT) def delete(module, cf, container, src, dest): """ Delete specific objects by proving a single file name or a comma-separated list to src OR dest (but not both). Ommitting file name(s) assumes the entire container is to be deleted. """ objs = None if src and dest: module.fail_json(msg="Error: ambiguous instructions; files to be deleted " "have been specified on both src and dest args") elif dest: objs = dest else: objs = src c = _get_container(module, cf, container) if objs: objs = objs.split(',') objs = map(str.strip, objs) else: objs = c.get_object_names() num_objs = len(objs) results = [] for obj in objs: try: result = c.delete_object(obj) except Exception, e: module.fail_json(msg=e.message) else: results.append(result) num_deleted = results.count(True) EXIT_DICT['container'] = c.name EXIT_DICT['deleted'] = num_deleted EXIT_DICT['requested_deleted'] = objs if num_deleted: EXIT_DICT['changed'] = True if num_objs == num_deleted: EXIT_DICT['success'] = True EXIT_DICT['msg'] = "%s objects deleted" % num_deleted else: EXIT_DICT['msg'] = ("Error: only %s of %s objects " "deleted" % (num_deleted, num_objs)) module.exit_json(**EXIT_DICT) def get_meta(module, cf, container, src, dest): """ Get metadata for a single file, comma-separated list, or entire container """ c = _get_container(module, cf, container) objs = None if src and dest: module.fail_json(msg="Error: ambiguous instructions; files to be deleted " "have been specified on both src and dest args") elif dest: objs = dest else: objs = src if objs: objs = objs.split(',') objs = map(str.strip, objs) else: objs = c.get_object_names() results = dict() for obj in objs: try: meta = c.get_object(obj).get_metadata() except Exception, e: module.fail_json(msg=e.message) else: results[obj] = dict() for k, v in meta.items(): meta_key = k.split(META_PREFIX)[-1] results[obj][meta_key] = v EXIT_DICT['container'] = c.name if results: EXIT_DICT['meta_results'] = results EXIT_DICT['success'] = True module.exit_json(**EXIT_DICT) def put_meta(module, cf, container, src, dest, meta, clear_meta): """ Set metadata on a container, single file, or comma-separated list. Passing a true value to clear_meta clears the metadata stored in Cloud Files before setting the new metadata to the value of "meta". 
""" objs = None if src and dest: module.fail_json(msg="Error: ambiguous instructions; files to set meta" " have been specified on both src and dest args") elif dest: objs = dest else: objs = src objs = objs.split(',') objs = map(str.strip, objs) c = _get_container(module, cf, container) results = [] for obj in objs: try: result = c.get_object(obj).set_metadata(meta, clear=clear_meta) except Exception, e: module.fail_json(msg=e.message) else: results.append(result) EXIT_DICT['container'] = c.name EXIT_DICT['success'] = True if results: EXIT_DICT['changed'] = True EXIT_DICT['num_changed'] = True module.exit_json(**EXIT_DICT) def delete_meta(module, cf, container, src, dest, meta): """ Removes metadata keys and values specified in meta, if any. Deletes on all objects specified by src or dest (but not both), if any; otherwise it deletes keys on all objects in the container """ objs = None if src and dest: module.fail_json(msg="Error: ambiguous instructions; meta keys to be " "deleted have been specified on both src and dest" " args") elif dest: objs = dest else: objs = src objs = objs.split(',') objs = map(str.strip, objs) c = _get_container(module, cf, container) results = [] # Num of metadata keys removed, not objects affected for obj in objs: if meta: for k, v in meta.items(): try: result = c.get_object(obj).remove_metadata_key(k) except Exception, e: module.fail_json(msg=e.message) else: results.append(result) else: try: o = c.get_object(obj) except pyrax.exc.NoSuchObject, e: module.fail_json(msg=e.message) for k, v in o.get_metadata().items(): try: result = o.remove_metadata_key(k) except Exception, e: module.fail_json(msg=e.message) results.append(result) EXIT_DICT['container'] = c.name EXIT_DICT['success'] = True if results: EXIT_DICT['changed'] = True EXIT_DICT['num_deleted'] = len(results) module.exit_json(**EXIT_DICT) def cloudfiles(module, container, src, dest, method, typ, meta, clear_meta, structure, expires): """ Dispatch from here to work with metadata or file objects """ cf = pyrax.cloudfiles if typ == "file": if method == 'put': upload(module, cf, container, src, dest, meta, expires) elif method == 'get': download(module, cf, container, src, dest, structure) elif method == 'delete': delete(module, cf, container, src, dest) else: if method == 'get': get_meta(module, cf, container, src, dest) if method == 'put': put_meta(module, cf, container, src, dest, meta, clear_meta) if method == 'delete': delete_meta(module, cf, container, src, dest, meta) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( container=dict(required=True), src=dict(), dest=dict(), method=dict(default='get', choices=['put', 'get', 'delete']), type=dict(default='file', choices=['file', 'meta']), meta=dict(type='dict', default=dict()), clear_meta=dict(choices=BOOLEANS, default=False, type='bool'), structure=dict(choices=BOOLEANS, default=True, type='bool'), expires=dict(type='int'), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together() ) container = module.params.get('container') src = module.params.get('src') dest = module.params.get('dest') method = module.params.get('method') typ = module.params.get('type') meta = module.params.get('meta') clear_meta = module.params.get('clear_meta') structure = module.params.get('structure') expires = module.params.get('expires') if clear_meta and not typ == 'meta': module.fail_json(msg='clear_meta can only be used when setting metadata') setup_rax_module(module, pyrax) cloudfiles(module, container, src, 
dest, method, typ, meta, clear_meta, structure, expires) from ansible.module_utils.basic import * from ansible.module_utils.rax import * main()ansible-1.5.4/library/cloud/rax_facts0000664000000000000000000001342412316627017016316 0ustar rootroot#!/usr/bin/python -tt # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_facts short_description: Gather facts for Rackspace Cloud Servers description: - Gather facts for Rackspace Cloud Servers. version_added: "1.4" options: api_key: description: - Rackspace API key (overrides I(credentials)) aliases: - password auth_endpoint: description: - The URI of the authentication service default: https://identity.api.rackspacecloud.com/v2.0/ version_added: 1.5 credentials: description: - File to find the Rackspace credentials in (ignored if I(api_key) and I(username) are provided) default: null aliases: - creds_file env: description: - Environment as configured in ~/.pyrax.cfg, see https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration version_added: 1.5 identity_type: description: - Authentication machanism to use, such as rackspace or keystone default: rackspace version_added: 1.5 region: description: - Region to create an instance in default: DFW tenant_id: description: - The tenant ID used for authentication version_added: 1.5 tenant_name: description: - The tenant name used for authentication version_added: 1.5 username: description: - Rackspace username (overrides I(credentials)) verify_ssl: description: - Whether or not to require SSL validation of API endpoints version_added: 1.5 address: description: - Server IP address to retrieve facts for, will match any IP assigned to the server id: description: - Server ID to retrieve facts for name: description: - Server name to retrieve facts for default: null requirements: [ "pyrax" ] author: Matt Martz notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
''' EXAMPLES = ''' - name: Gather info about servers hosts: all gather_facts: False tasks: - name: Get facts about servers local_action: module: rax_facts credentials: ~/.raxpub name: "{{ inventory_hostname }}" region: DFW - name: Map some facts set_fact: ansible_ssh_host: "{{ rax_accessipv4 }}" ''' import sys import os from types import NoneType try: import pyrax except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) def rax_slugify(value): return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_')) def pyrax_object_to_dict(obj): instance = {} for key in dir(obj): value = getattr(obj, key) if (isinstance(value, NON_CALLABLES) and not key.startswith('_')): key = rax_slugify(key) instance[key] = value return instance def rax_facts(module, address, name, server_id): changed = False cs = pyrax.cloudservers ansible_facts = {} search_opts = {} if name: search_opts = dict(name='^%s$' % name) try: servers = cs.servers.list(search_opts=search_opts) except Exception, e: module.fail_json(msg='%s' % e.message) elif address: servers = [] try: for server in cs.servers.list(): for addresses in server.networks.values(): if address in addresses: servers.append(server) break except Exception, e: module.fail_json(msg='%s' % e.message) elif server_id: servers = [] try: servers.append(cs.servers.get(server_id)) except Exception, e: pass if len(servers) > 1: module.fail_json(msg='Multiple servers found matching provided ' 'search parameters') elif len(servers) == 1: ansible_facts = pyrax_object_to_dict(servers[0]) module.exit_json(changed=changed, ansible_facts=ansible_facts) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( address=dict(), id=dict(), name=dict(), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), mutually_exclusive=[['address', 'id', 'name']], required_one_of=[['address', 'id', 'name']], ) address = module.params.get('address') server_id = module.params.get('id') name = module.params.get('name') setup_rax_module(module, pyrax) rax_facts(module, address, name, server_id) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/rax_queue0000664000000000000000000001041412316627017016336 0ustar rootroot#!/usr/bin/python -tt # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_queue short_description: create / delete a queue in Rackspace Public Cloud description: - creates / deletes a Rackspace Public Cloud queue. 
version_added: "1.5" options: api_key: description: - Rackspace API key (overrides C(credentials)) credentials: description: - File to find the Rackspace credentials in (ignored if C(api_key) and C(username) are provided) default: null aliases: ['creds_file'] name: description: - Name to give the queue default: null region: description: - Region to create the load balancer in default: DFW state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present username: description: - Rackspace username (overrides C(credentials)) requirements: [ "pyrax" ] author: Christopher H. Laco, Matt Martz notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) ''' EXAMPLES = ''' - name: Build a Queue gather_facts: False hosts: local connection: local tasks: - name: Queue create request local_action: module: rax_queue credentials: ~/.raxpub client_id: unique-client-name name: my-queue region: DFW state: present register: my_queue ''' import sys import os try: import pyrax except ImportError: print("failed=True msg='pyrax is required for this module'") sys.exit(1) def cloud_queue(module, state, name): for arg in (state, name): if not arg: module.fail_json(msg='%s is required for rax_queue' % arg) changed = False queues = [] instance = {} cq = pyrax.queues for queue in cq.list(): if name != queue.name: continue queues.append(queue) if len(queues) > 1: module.fail_json(msg='Multiple Queues were matched by name') if state == 'present': if not queues: try: queue = cq.create(name) changed = True except Exception, e: module.fail_json(msg='%s' % e.message) else: queue = queues[0] instance = dict(name=queue.name) result = dict(changed=changed, queue=instance) module.exit_json(**result) elif state == 'absent': if queues: queue = queues[0] try: queue.delete() changed = True except Exception, e: module.fail_json(msg='%s' % e.message) module.exit_json(changed=changed, queue=instance) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( name=dict(), state=dict(default='present', choices=['present', 'absent']), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together() ) name = module.params.get('name') state = module.params.get('state') setup_rax_module(module, pyrax) cloud_queue(module, state, name) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/quantum_router0000664000000000000000000001560012316627017017434 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this software. If not, see . try: try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='quantumclient (or neutronclient) and keystone client are required'") DOCUMENTATION = ''' --- module: quantum_router version_added: "1.2" short_description: Create or Remove router from openstack description: - Create or Delete routers from OpenStack options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present name: description: - Name to be give to the router required: true default: None tenant_name: description: - Name of the tenant for which the router has to be created, if none router would be created for the login tenant. required: false default: None admin_state_up: description: - desired admin state of the created router . required: false default: true requirements: ["quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Creates a router for tenant admin - quantum_router: state=present login_username=admin login_password=admin login_tenant_name=admin name=router1" ''' _os_keystone = None _os_tenant_id = None def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to the keystone: %s " % e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s " % e.message) return neutron def _set_tenant_id(module): global _os_tenant_id if not module.params['tenant_name']: login_tenant_name = module.params['login_tenant_name'] else: login_tenant_name = module.params['tenant_name'] for tenant in _os_keystone.tenants.list(): if tenant.name == login_tenant_name: _os_tenant_id = tenant.id break if not _os_tenant_id: module.fail_json(msg = "The tenant id cannot be found, please check the paramters") def _get_router_id(module, neutron): kwargs = { 'name': module.params['name'], 'tenant_id': _os_tenant_id, } try: routers = neutron.list_routers(**kwargs) except Exception, e: module.fail_json(msg = "Error in getting the router list: %s " % e.message) if not routers['routers']: return None return routers['routers'][0]['id'] def 
_create_router(module, neutron):
    router = {
        'name': module.params['name'],
        'tenant_id': _os_tenant_id,
        'admin_state_up': module.params['admin_state_up'],
    }
    try:
        new_router = neutron.create_router(dict(router=router))
    except Exception, e:
        module.fail_json(msg="Error in creating router: %s" % e.message)
    return new_router['router']['id']

def _delete_router(module, neutron, router_id):
    try:
        neutron.delete_router(router_id)
    except:
        module.fail_json(msg="Error in deleting the router")
    return True

def main():
    module = AnsibleModule(
        argument_spec = dict(
            login_username = dict(default='admin'),
            login_password = dict(required=True),
            login_tenant_name = dict(required=True),
            auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
            region_name = dict(default=None),
            name = dict(required=True),
            tenant_name = dict(default=None),
            state = dict(default='present', choices=['absent', 'present']),
            admin_state_up = dict(type='bool', default=True),
        ),
    )

    neutron = _get_neutron_client(module, module.params)
    _set_tenant_id(module)

    if module.params['state'] == 'present':
        router_id = _get_router_id(module, neutron)
        if not router_id:
            router_id = _create_router(module, neutron)
            module.exit_json(changed=True, result="Created", id=router_id)
        else:
            module.exit_json(changed=False, result="success", id=router_id)
    else:
        router_id = _get_router_id(module, neutron)
        if not router_id:
            module.exit_json(changed=False, result="success")
        else:
            _delete_router(module, neutron, router_id)
            module.exit_json(changed=True, result="deleted")

# this is magic, see lib/ansible/module_common.py
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/cloud/docker#!/usr/bin/python

# (c) 2013, Cove Schneider
# (c) 2014, Joshua Conner
# (c) 2014, Pavel Antonov
#
# This file is part of Ansible,
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

######################################################################

DOCUMENTATION = '''
---
module: docker
version_added: "1.4"
short_description: manage docker containers
description:
  - Manage the life cycle of docker containers.
options:
  count:
    description:
      - Set number of containers to run
    required: False
    default: 1
    aliases: []
  image:
    description:
      - Set container image to use
    required: true
    default: null
    aliases: []
  command:
    description:
      - Set command to run in a container on startup
    required: false
    default: null
    aliases: []
  name:
    description:
      - Set name for container (used to find single container or to provide links)
    required: false
    default: null
    aliases: []
    version_added: "1.5"
  ports:
    description:
      - Set private to public port mapping specification using docker CLI-style syntax [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp]
    required: false
    default: null
    aliases: []
    version_added: "1.5"
  expose:
    description:
      - Set container ports to expose for port mappings or links. (If the port is already exposed using EXPOSE in a Dockerfile, you don't need to expose it again.)
required: false default: null aliases: [] version_added: "1.5" publish_all_ports: description: - Publish all exposed ports to the host interfaces required: false default: false aliases: [] version_added: "1.5" volumes: description: - Set volume(s) to mount on the container required: false default: null aliases: [] volumes_from: description: - Set shared volume(s) from another container required: false default: null aliases: [] links: description: - Link container(s) to other container(s) (e.g. links=redis,postgresql:db) required: false default: null aliases: [] version_added: "1.5" memory_limit: description: - Set RAM allocated to container required: false default: null aliases: [] default: 256MB docker_url: description: - URL of docker host to issue commands to required: false default: unix://var/run/docker.sock aliases: [] username: description: - Set remote API username required: false default: null aliases: [] password: description: - Set remote API password required: false default: null aliases: [] hostname: description: - Set container hostname required: false default: null aliases: [] env: description: - Set environment variables (e.g. env="PASSWORD=sEcRe7,WORKERS=4") required: false default: null aliases: [] dns: description: - Set custom DNS servers for the container required: false default: null aliases: [] detach: description: - Enable detached mode on start up, leaves container running in background required: false default: true aliases: [] state: description: - Set the state of the container required: false default: present choices: [ "present", "stopped", "absent", "killed", "restarted" ] aliases: [] privileged: description: - Set whether the container should run in privileged mode required: false default: false aliases: [] lxc_conf: description: - LXC config parameters, e.g. 
lxc.aa_profile:unconfined required: false default: aliases: [] name: description: - Set the name of the container (cannot use with count) required: false default: null aliases: [] version_added: "1.5" author: Cove Schneider, Joshua Conner, Pavel Antonov requirements: [ "docker-py >= 0.3.0" ] ''' EXAMPLES = ''' Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to 8080 on the host: - hosts: web sudo: yes tasks: - name: run tomcat servers docker: image=centos command="service tomcat6 start" ports=8080 The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was mapped to using docker_containers: - hosts: web sudo: yes tasks: - name: run tomcat servers docker: image=centos command="service tomcat6 start" ports=8080 count=5 - name: Display IP address and port mappings for containers debug: msg={{inventory_hostname}}:{{item['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}} with_items: docker_containers Just as in the previous example, but iterates over the list of docker containers with a sequence: - hosts: web sudo: yes vars: start_containers_count: 5 tasks: - name: run tomcat servers docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}} - name: Display IP address and port mappings for containers debug: msg="{{inventory_hostname}}:{{docker_containers[{{item}}]['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}" with_sequence: start=0 end={{start_containers_count - 1}} Stop, remove all of the running tomcat containers and list the exit code from the stopped containers: - hosts: web sudo: yes tasks: - name: stop tomcat servers docker: image=centos command="service tomcat6 start" state=absent - name: Display return codes from stopped containers debug: msg="Returned {{inventory_hostname}}:{{item}}" with_items: docker_containers Create a named container: - hosts: web sudo: yes tasks: - name: run tomcat server docker: image=centos name=tomcat command="service tomcat6 start" ports=8080 Create multiple named containers: - hosts: web sudo: yes tasks: - name: run tomcat servers docker: image=centos name={{item}} command="service tomcat6 start" ports=8080 with_items: - crookshank - snowbell - heathcliff - felix - sylvester Create containers named in a sequence: - hosts: web sudo: yes tasks: - name: run tomcat servers docker: image=centos name={{item}} command="service tomcat6 start" ports=8080 with_sequence: start=1 end=5 format=tomcat_%d.example.com Create two linked containers: - hosts: web sudo: yes tasks: - name: ensure redis container is running docker: image=crosbymichael/redis name=redis - name: ensure redis_ambassador container is running docker: image=svendowideit/ambassador ports=6379:6379 links=redis:redis name=redis_ambassador_ansible Create containers with options specified as key-value pairs and lists: - hosts: web sudo: yes tasks: - docker: image: namespace/image_name links: - postgresql:db - redis:redis Create containers with options specified as strings and lists as comma-separated strings: - hosts: web sudo: yes tasks: docker: image=namespace/image_name links=postgresql:db,redis:redis ''' HAS_DOCKER_PY = True import sys from urlparse import urlparse try: import docker.client from requests.exceptions import * except ImportError, e: HAS_DOCKER_PY = False def _human_to_bytes(number): suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'] if isinstance(number, int): return number if number[-1] == suffixes[0] and number[-2].isdigit(): 
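# bare-byte form, e.g. '256B'; note this strips the suffix but returns the
# remaining digits as a str rather than an int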
return number[:-1] i = 1 for each in suffixes[1:]: if number[-len(each):] == suffixes[i]: return int(number[:-len(each)]) * (1024 ** i) i = i + 1 print "failed=True msg='Could not convert %s to integer'" % (number) sys.exit(1) def _ansible_facts(container_list): return {"docker_containers": container_list} def _docker_id_quirk(inspect): # XXX: some quirk in docker if 'ID' in inspect: inspect['Id'] = inspect['ID'] del inspect['ID'] return inspect class DockerManager: counters = {'created':0, 'started':0, 'stopped':0, 'killed':0, 'removed':0, 'restarted':0, 'pull':0} def __init__(self, module): self.module = module self.binds = None self.volumes = None if self.module.params.get('volumes'): self.binds = {} self.volumes = {} vols = self.parse_list_from_param('volumes') for vol in vols: parts = vol.split(":") # host mount (e.g. /mnt:/tmp, bind mounts host's /tmp to /mnt in the container) if len(parts) == 2: self.volumes[parts[1]] = {} self.binds[parts[0]] = parts[1] # docker mount (e.g. /www, mounts a docker volume /www on the container at the same location) else: self.volumes[parts[0]] = {} self.lxc_conf = None if self.module.params.get('lxc_conf'): self.lxc_conf = [] options = self.parse_list_from_param('lxc_conf') for option in options: parts = option.split(':') self.lxc_conf.append({"Key": parts[0], "Value": parts[1]}) self.exposed_ports = None if self.module.params.get('expose'): expose = self.parse_list_from_param('expose') self.exposed_ports = self.get_exposed_ports(expose) self.port_bindings = None if self.module.params.get('ports'): ports = self.parse_list_from_param('ports') self.port_bindings = self.get_port_bindings(ports) self.links = None if self.module.params.get('links'): links = self.parse_list_from_param('links') self.links = dict(map(lambda x: x.split(':'), links)) self.env = None if self.module.params.get('env'): env = self.parse_list_from_param('env') self.env = dict(map(lambda x: x.split("="), env)) # connect to docker server docker_url = urlparse(module.params.get('docker_url')) self.client = docker.Client(base_url=docker_url.geturl()) def parse_list_from_param(self, param_name, delimiter=','): """ Get a list from a module parameter, whether it's specified as a delimiter-separated string or is already in list form. """ param_list = self.module.params.get(param_name) if not isinstance(param_list, list): param_list = param_list.split(delimiter) return param_list def get_exposed_ports(self, expose_list): """ Parse the ports and protocols (TCP/UDP) to expose in the docker-py `create_container` call from the docker CLI-style syntax. """ if expose_list: exposed = [] for port in expose_list: if port.endswith('/tcp') or port.endswith('/udp'): port_with_proto = tuple(port.split('/')) else: # assume tcp protocol if not specified port_with_proto = (port, 'tcp') exposed.append(port_with_proto) return exposed else: return None def get_port_bindings(self, ports): """ Parse the `ports` string into a port bindings dict for the `start_container` call. """ binds = {} for port in ports: parts = port.split(':') container_port = parts[-1] if '/' not in container_port: container_port = int(parts[-1]) p_len = len(parts) if p_len == 1: # Bind `container_port` of the container to a dynamically # allocated TCP port on all available interfaces of the host # machine. bind = ('0.0.0.0',) elif p_len == 2: # Bind `container_port` of the container to port `parts[0]` on # all available interfaces of the host machine. 
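# e.g. (illustrative) ports='8080:80' gives parts=['8080', '80'], so
# binds[80] = ('0.0.0.0', 8080) -- host port 8080 mapped to container port 80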
bind = ('0.0.0.0', int(parts[0])) elif p_len == 3: # Bind `container_port` of the container to port `parts[1]` on # IP `parts[0]` of the host machine. If `parts[1]` empty bind # to a dynamically allocacted port of IP `parts[0]`. bind = (parts[0], int(parts[1])) if parts[1] else (parts[0],) if container_port in binds: old_bind = binds[container_port] if isinstance(old_bind, list): # append to list if it already exists old_bind.append(bind) else: # otherwise create list that contains the old and new binds binds[container_port] = [binds[container_port], bind] else: binds[container_port] = bind return binds def get_split_image_tag(self, image): if '/' in image: image = image.split('/')[1] tag = None if image.find(':') > 0: return image.split(':') else: return image, tag def get_summary_counters_msg(self): msg = "" for k, v in self.counters.iteritems(): msg = msg + "%s %d " % (k, v) return msg def increment_counter(self, name): self.counters[name] = self.counters[name] + 1 def has_changed(self): for k, v in self.counters.iteritems(): if v > 0: return True return False def get_inspect_containers(self, containers): inspect = [] for i in containers: details = self.client.inspect_container(i['Id']) details = _docker_id_quirk(details) inspect.append(details) return inspect def get_deployed_containers(self): # determine which images/commands are running already containers = self.client.containers(all=True) image = self.module.params.get('image') command = self.module.params.get('command') if command: command = command.strip() name = self.module.params.get('name') if name and not name.startswith('/'): name = '/' + name deployed = [] # if we weren't given a tag with the image, we need to only compare on the image name, as that # docker will give us back the full image name including a tag in the container list if one exists. 
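# e.g. (illustrative) image='centos:6' splits into name 'centos' and tag '6';
# a bare 'centos' comes back with tag None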
        image, tag = self.get_split_image_tag(image)

        for i in containers:
            running_image, running_tag = self.get_split_image_tag(i['Image'])
            running_command = i['Command'].strip()

            if (name and name in i['Names']) or \
               (not name and running_image == image and
                    (not tag or tag == running_tag) and
                    (not command or running_command == command)):
                details = self.client.inspect_container(i['Id'])
                details = _docker_id_quirk(details)
                deployed.append(details)

        return deployed

    def get_running_containers(self):
        running = []
        for i in self.get_deployed_containers():
            if i['State']['Running'] == True and i['State']['Ghost'] == False:
                running.append(i)

        return running

    def create_containers(self, count=1):
        params = {'image': self.module.params.get('image'),
                  'command': self.module.params.get('command'),
                  'ports': self.exposed_ports,
                  'volumes': self.volumes,
                  'volumes_from': self.module.params.get('volumes_from'),
                  'mem_limit': _human_to_bytes(self.module.params.get('memory_limit')),
                  'environment': self.env,
                  'dns': self.module.params.get('dns'),
                  'hostname': self.module.params.get('hostname'),
                  'detach': self.module.params.get('detach'),
                  'name': self.module.params.get('name'),
                  }

        def do_create(count, params):
            results = []
            for _ in range(count):
                result = self.client.create_container(**params)
                self.increment_counter('created')
                results.append(result)

            return results

        try:
            containers = do_create(count, params)
        except:
            self.client.pull(params['image'])
            self.increment_counter('pull')
            containers = do_create(count, params)

        return containers

    def start_containers(self, containers):
        params = {
            'lxc_conf': self.lxc_conf,
            'binds': self.binds,
            'port_bindings': self.port_bindings,
            'publish_all_ports': self.module.params.get('publish_all_ports'),
            'privileged': self.module.params.get('privileged'),
            'links': self.links,
        }

        for i in containers:
            self.client.start(i['Id'], **params)
            self.increment_counter('started')

    def stop_containers(self, containers):
        for i in containers:
            self.client.stop(i['Id'])
            self.increment_counter('stopped')

        return [self.client.wait(i['Id']) for i in containers]

    def remove_containers(self, containers):
        for i in containers:
            self.client.remove_container(i['Id'])
            self.increment_counter('removed')

    def kill_containers(self, containers):
        for i in containers:
            self.client.kill(i['Id'])
            self.increment_counter('killed')

    def restart_containers(self, containers):
        for i in containers:
            self.client.restart(i['Id'])
            self.increment_counter('restarted')


def check_dependencies(module):
    """
    Ensure `docker-py` >= 0.3.0 is installed, and call module.fail_json with a
    helpful error message if it isn't.
    """
    if not HAS_DOCKER_PY:
        module.fail_json(msg="`docker-py` doesn't seem to be installed, but is required for the Ansible Docker module.")
    else:
        HAS_NEW_ENOUGH_DOCKER_PY = False
        if hasattr(docker, '__version__'):
            # a '__version__' attribute was added to the module, but not until
            # after 0.3.0 was pushed to pip. If it's there, use it.
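            # (caveat: this is a plain lexicographic string comparison, so a
            # hypothetical '0.10.0' would compare as older than '0.3.0')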
if docker.__version__ >= '0.3.0': HAS_NEW_ENOUGH_DOCKER_PY = True else: # HACK: if '__version__' isn't there, we check for the existence of # `_get_raw_response_socket` in the docker.Client class, which was # added in 0.3.0 if hasattr(docker.Client, '_get_raw_response_socket'): HAS_NEW_ENOUGH_DOCKER_PY = True if not HAS_NEW_ENOUGH_DOCKER_PY: module.fail_json(msg="The Ansible Docker module requires `docker-py` >= 0.3.0.") def main(): module = AnsibleModule( argument_spec = dict( count = dict(default=1), image = dict(required=True), command = dict(required=False, default=None), expose = dict(required=False, default=None), ports = dict(required=False, default=None), publish_all_ports = dict(default=False, type='bool'), volumes = dict(default=None), volumes_from = dict(default=None), links = dict(default=None), memory_limit = dict(default=0), memory_swap = dict(default=0), docker_url = dict(default='unix://var/run/docker.sock'), user = dict(default=None), password = dict(), email = dict(), hostname = dict(default=None), env = dict(), dns = dict(), detach = dict(default=True, type='bool'), state = dict(default='present', choices=['absent', 'present', 'stopped', 'killed', 'restarted']), debug = dict(default=False, type='bool'), privileged = dict(default=False, type='bool'), lxc_conf = dict(default=None), name = dict(default=None) ) ) check_dependencies(module) try: manager = DockerManager(module) state = module.params.get('state') count = int(module.params.get('count')) name = module.params.get('name') if count < 0: module.fail_json(msg="Count must be greater than zero") if count > 1 and name: module.fail_json(msg="Count and name must not be used together") running_containers = manager.get_running_containers() running_count = len(running_containers) delta = count - running_count deployed_containers = manager.get_deployed_containers() facts = None failed = False changed = False # start/stop containers if state == "present": # make sure a container with `name` is running if name and "/" + name not in map(lambda x: x.get('Name'), running_containers): containers = manager.create_containers(1) manager.start_containers(containers) # start more containers if we don't have enough elif delta > 0: containers = manager.create_containers(delta) manager.start_containers(containers) # stop containers if we have too many elif delta < 0: containers_to_stop = running_containers[0:abs(delta)] containers = manager.stop_containers(containers_to_stop) manager.remove_containers(containers_to_stop) facts = manager.get_running_containers() # stop and remove containers elif state == "absent": facts = manager.stop_containers(deployed_containers) manager.remove_containers(deployed_containers) # stop containers elif state == "stopped": facts = manager.stop_containers(running_containers) # kill containers elif state == "killed": manager.kill_containers(running_containers) # restart containers elif state == "restarted": manager.restart_containers(running_containers) facts = manager.get_inspect_containers(running_containers) msg = "%s container(s) running image %s with command %s" % \ (manager.get_summary_counters_msg(), module.params.get('image'), module.params.get('command')) changed = manager.has_changed() module.exit_json(failed=failed, changed=changed, msg=msg, ansible_facts=_ansible_facts(facts)) except docker.client.APIError, e: changed = manager.has_changed() module.exit_json(failed=True, changed=changed, msg="Docker API error: " + e.explanation) except RequestException, e: changed = manager.has_changed() 
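# a RequestException from the requests library generally indicates the Docker
# daemon was unreachable; its repr is surfaced in msg below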
module.exit_json(failed=True, changed=changed, msg=repr(e)) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/digital_ocean0000664000000000000000000003255712316627017017136 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: digital_ocean short_description: Create/delete a droplet/SSH_key in DigitalOcean description: - Create/delete a droplet in DigitalOcean and optionally waits for it to be 'running', or deploy an SSH key. version_added: "1.3" options: command: description: - Which target you want to operate on. default: droplet choices: ['droplet', 'ssh'] state: description: - Indicate desired state of the target. default: present choices: ['present', 'active', 'absent', 'deleted'] client_id: description: - Digital Ocean manager id. api_key: description: - Digital Ocean api key. id: description: - Numeric, the droplet id you want to operate on. name: description: - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key. unique_name: description: - Bool, require unique hostnames. By default, digital ocean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence. version_added: "1.4" default: "no" choices: [ "yes", "no" ] size_id: description: - Numeric, this is the id of the size you would like the droplet created at. image_id: description: - Numeric, this is the id of the image you would like the droplet created with. region_id: description: - "Numeric, this is the id of the region you would like your server" ssh_key_ids: description: - Optional, comma separated list of ssh_key_ids that you would like to be added to the server virtio: description: - "Bool, turn on virtio driver in droplet for improved network and storage I/O" version_added: "1.4" default: "yes" choices: [ "yes", "no" ] private_networking: description: - "Bool, add an additional, private network interface to droplet for inter-droplet communication" version_added: "1.4" default: "no" choices: [ "yes", "no" ] wait: description: - Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned. default: "yes" choices: [ "yes", "no" ] wait_timeout: description: - How long before wait gives up, in seconds. default: 300 ssh_pub_key: description: - The public SSH key you want to add to your account. notes: - Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY. requirements: [ dopy ] ''' EXAMPLES = ''' # Ensure a SSH key is present # If a key matches this name, will return the ssh key id and changed = False # If no existing key matches this name, a new key is created, the ssh key id is returned and changed = False - digital_ocean: > state=present command=ssh name=my_ssh_key ssh_pub_key='ssh-rsa AAAA...' 
client_id=XXX api_key=XXX # Create a new Droplet # Will return the droplet details including the droplet id (used for idempotence) - digital_ocean: > state=present command=droplet name=mydroplet client_id=XXX api_key=XXX size_id=1 region_id=2 image_id=3 wait_timeout=500 register: my_droplet - debug: msg="ID is {{ my_droplet.droplet.id }}" - debug: msg="IP is {{ my_droplet.droplet.ip_address }}" # Ensure a droplet is present # If droplet id already exist, will return the droplet details and changed = False # If no droplet matches the id, a new droplet will be created and the droplet details (including the new id) are returned, changed = True. - digital_ocean: > state=present command=droplet id=123 name=mydroplet client_id=XXX api_key=XXX size_id=1 region_id=2 image_id=3 wait_timeout=500 # Create a droplet with ssh key # The ssh key id can be passed as argument at the creation of a droplet (see ssh_key_ids). # Several keys can be added to ssh_key_ids as id1,id2,id3 # The keys are used to connect as root to the droplet. - digital_ocean: > state=present ssh_key_ids=id1,id2 name=mydroplet client_id=XXX api_key=XXX size_id=1 region_id=2 image_id=3 ''' import sys import os import time try: import dopy from dopy.manager import DoError, DoManager except ImportError, e: print "failed=True msg='dopy >= 0.2.2 required for this module'" sys.exit(1) if dopy.__version__ < '0.2.2': print "failed=True msg='dopy >= 0.2.2 required for this module'" sys.exit(1) class TimeoutError(DoError): def __init__(self, msg, id): super(TimeoutError, self).__init__(msg) self.id = id class JsonfyMixIn(object): def to_json(self): return self.__dict__ class Droplet(JsonfyMixIn): manager = None def __init__(self, droplet_json): self.status = 'new' self.__dict__.update(droplet_json) def is_powered_on(self): return self.status == 'active' def update_attr(self, attrs=None): if attrs: for k, v in attrs.iteritems(): setattr(self, k, v) else: json = self.manager.show_droplet(self.id) if json['ip_address']: self.update_attr(json) def power_on(self): assert self.status == 'off', 'Can only power on a closed one.' json = self.manager.power_on_droplet(self.id) self.update_attr(json) def ensure_powered_on(self, wait=True, wait_timeout=300): if self.is_powered_on(): return if self.status == 'off': # powered off self.power_on() if wait: end_time = time.time() + wait_timeout while time.time() < end_time: time.sleep(min(20, end_time - time.time())) self.update_attr() if self.is_powered_on(): if not self.ip_address: raise TimeoutError('No ip is found.', self.id) return raise TimeoutError('Wait for droplet running timeout', self.id) def destroy(self): return self.manager.destroy_droplet(self.id, scrub_data=True) @classmethod def setup(cls, client_id, api_key): cls.manager = DoManager(client_id, api_key) @classmethod def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False): json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking) droplet = cls(json) return droplet @classmethod def find(cls, id=None, name=None): if not id and not name: return False droplets = cls.list_all() # Check first by id. digital ocean requires that it be unique for droplet in droplets: if droplet.id == id: return droplet # Failing that, check by hostname. 
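# e.g. (illustrative) Droplet.find(name='mydroplet') returns the first droplet
# whose name matches exactly, or False if none do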
for droplet in droplets: if droplet.name == name: return droplet return False @classmethod def list_all(cls): json = cls.manager.all_active_droplets() return map(cls, json) class SSH(JsonfyMixIn): manager = None def __init__(self, ssh_key_json): self.__dict__.update(ssh_key_json) update_attr = __init__ def destroy(self): self.manager.destroy_ssh_key(self.id) return True @classmethod def setup(cls, client_id, api_key): cls.manager = DoManager(client_id, api_key) @classmethod def find(cls, name): if not name: return False keys = cls.list_all() for key in keys: if key.name == name: return key return False @classmethod def list_all(cls): json = cls.manager.all_ssh_keys() return map(cls, json) @classmethod def add(cls, name, key_pub): json = cls.manager.new_ssh_key(name, key_pub) return cls(json) def core(module): def getkeyordie(k): v = module.params[k] if v is None: module.fail_json(msg='Unable to load %s' % k) return v try: # params['client_id'] will be None even if client_id is not passed in client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID'] api_key = module.params['api_key'] or os.environ['DO_API_KEY'] except KeyError, e: module.fail_json(msg='Unable to load %s' % e.message) changed = True command = module.params['command'] state = module.params['state'] if command == 'droplet': Droplet.setup(client_id, api_key) if state in ('active', 'present'): # First, try to find a droplet by id. droplet = Droplet.find(id=module.params['id']) # If we couldn't find the droplet and the user is allowing unique # hostnames, then check to see if a droplet with the specified # hostname already exists. if not droplet and module.params['unique_name']: droplet = Droplet.find(name=getkeyordie('name')) # If both of those attempts failed, then create a new droplet. if not droplet: droplet = Droplet.add( name=getkeyordie('name'), size_id=getkeyordie('size_id'), image_id=getkeyordie('image_id'), region_id=getkeyordie('region_id'), ssh_key_ids=module.params['ssh_key_ids'], virtio=module.params['virtio'], private_networking=module.params['private_networking'] ) if droplet.is_powered_on(): changed = False droplet.ensure_powered_on( wait=getkeyordie('wait'), wait_timeout=getkeyordie('wait_timeout') ) module.exit_json(changed=changed, droplet=droplet.to_json()) elif state in ('absent', 'deleted'): # First, try to find a droplet by id. droplet = Droplet.find(id=getkeyordie('id')) # If we couldn't find the droplet and the user is allowing unique # hostnames, then check to see if a droplet with the specified # hostname already exists. if not droplet and module.params['unique_name']: droplet = Droplet.find(name=getkeyordie('name')) if not droplet: module.exit_json(changed=False, msg='The droplet is not found.') event_json = droplet.destroy() module.exit_json(changed=True, event_id=event_json['event_id']) elif command == 'ssh': SSH.setup(client_id, api_key) name = getkeyordie('name') if state in ('active', 'present'): key = SSH.find(name) if key: module.exit_json(changed=False, ssh_key=key.to_json()) key = SSH.add(name, getkeyordie('ssh_pub_key')) module.exit_json(changed=True, ssh_key=key.to_json()) elif state in ('absent', 'deleted'): key = SSH.find(name) if not key: module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' 
% name) key.destroy() module.exit_json(changed=True) def main(): module = AnsibleModule( argument_spec = dict( command = dict(choices=['droplet', 'ssh'], default='droplet'), state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'), client_id = dict(aliases=['CLIENT_ID'], no_log=True), api_key = dict(aliases=['API_KEY'], no_log=True), name = dict(type='str'), size_id = dict(type='int'), image_id = dict(type='int'), region_id = dict(type='int'), ssh_key_ids = dict(default=''), virtio = dict(type='bool', choices=BOOLEANS, default='yes'), private_networking = dict(type='bool', choices=BOOLEANS, default='no'), id = dict(aliases=['droplet_id'], type='int'), unique_name = dict(type='bool', default='no'), wait = dict(type='bool', default=True), wait_timeout = dict(default=300, type='int'), ssh_pub_key = dict(type='str'), ), required_together = ( ['size_id', 'image_id', 'region_id'], ), mutually_exclusive = ( ['size_id', 'ssh_pub_key'], ['image_id', 'ssh_pub_key'], ['region_id', 'ssh_pub_key'], ), required_one_of = ( ['id', 'name'], ), ) try: core(module) except TimeoutError, e: module.fail_json(msg=str(e), id=e.id) except (DoError, Exception), e: module.fail_json(msg=str(e)) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/rax0000664000000000000000000006004712316627017015141 0ustar rootroot#!/usr/bin/python -tt # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax short_description: create / delete an instance in Rackspace Public Cloud description: - creates / deletes a Rackspace Public Cloud instance and optionally waits for it to be 'running'. version_added: "1.2" options: api_key: description: - Rackspace API key (overrides I(credentials)) aliases: - password auth_endpoint: description: - The URI of the authentication service default: https://identity.api.rackspacecloud.com/v2.0/ version_added: 1.5 credentials: description: - File to find the Rackspace credentials in (ignored if I(api_key) and I(username) are provided) default: null aliases: - creds_file env: description: - Environment as configured in ~/.pyrax.cfg, see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration) version_added: 1.5 identity_type: description: - Authentication machanism to use, such as rackspace or keystone default: rackspace version_added: 1.5 region: description: - Region to create an instance in default: DFW tenant_id: description: - The tenant ID used for authentication version_added: 1.5 tenant_name: description: - The tenant name used for authentication version_added: 1.5 username: description: - Rackspace username (overrides I(credentials)) verify_ssl: description: - Whether or not to require SSL validation of API endpoints version_added: 1.5 auto_increment: description: - Whether or not to increment a single number with the name of the created servers. 
Only applicable when used with the I(group) attribute or meta key. default: yes version_added: 1.5 count: description: - number of instances to launch default: 1 version_added: 1.4 count_offset: description: - number count to start at default: 1 version_added: 1.4 disk_config: description: - Disk partitioning strategy choices: ['auto', 'manual'] version_added: '1.4' default: auto exact_count: description: - Explicitly ensure an exact count of instances, used with state=active/present default: no version_added: 1.4 files: description: - Files to insert into the instance. remotefilename:localcontent default: null flavor: description: - flavor to use for the instance default: null group: description: - host group to assign to server, is also used for idempotent operations to ensure a specific number of instances version_added: 1.4 image: description: - image to use for the instance. Can be an C(id), C(human_id) or C(name) default: null instance_ids: description: - list of instance ids, currently only used when state='absent' to remove instances version_added: 1.4 key_name: description: - key pair to use on the instance default: null aliases: ['keypair'] meta: description: - A hash of metadata to associate with the instance default: null name: description: - Name to give the instance default: null networks: description: - The network to attach to the instances. If specified, you must include ALL networks including the public and private interfaces. Can be C(id) or C(label). default: ['public', 'private'] version_added: 1.4 state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present wait: description: - wait for the instance to be in state 'running' before returning default: "no" choices: [ "yes", "no" ] wait_timeout: description: - how long before wait gives up, in seconds default: 300 requirements: [ "pyrax" ] author: Jesse Keating, Matt Martz notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
'''

EXAMPLES = '''
- name: Build a Cloud Server
  hosts: local
  gather_facts: False
  tasks:
    - name: Server build request
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: rax-test1
        flavor: 5
        image: b11d9567-e412-4255-96b9-bd63ab23bcfe
        files:
          /root/.ssh/authorized_keys: /home/localuser/.ssh/id_rsa.pub
          /root/test.txt: /home/localuser/test.txt
        wait: yes
        state: present
        networks:
          - private
          - public
      register: rax

- name: Build an exact count of cloud servers with incremented names
  hosts: local
  gather_facts: False
  tasks:
    - name: Server build requests
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: test%03d.example.org
        flavor: performance1-1
        image: ubuntu-1204-lts-precise-pangolin
        state: present
        count: 10
        count_offset: 10
        exact_count: yes
        group: test
        wait: yes
      register: rax
'''

import sys
import time
import os
import re
from uuid import UUID
from types import NoneType

try:
    import pyrax
except ImportError:
    print("failed=True msg='pyrax is required for this module'")
    sys.exit(1)

ACTIVE_STATUSES = ('ACTIVE', 'BUILD', 'HARD_REBOOT', 'MIGRATING', 'PASSWORD',
                   'REBOOT', 'REBUILD', 'RESCUE', 'RESIZE', 'REVERT_RESIZE')
FINAL_STATUSES = ('ACTIVE', 'ERROR')
NON_CALLABLES = (basestring, bool, dict, int, list, NoneType)
PUBLIC_NET_ID = "00000000-0000-0000-0000-000000000000"
SERVICE_NET_ID = "11111111-1111-1111-1111-111111111111"


def rax_slugify(value):
    return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_'))


def pyrax_object_to_dict(obj):
    instance = {}
    for key in dir(obj):
        value = getattr(obj, key)
        if (isinstance(value, NON_CALLABLES) and not key.startswith('_')):
            key = rax_slugify(key)
            instance[key] = value
    for attr in ['id', 'accessIPv4', 'name', 'status']:
        instance[attr] = instance.get(rax_slugify(attr))
    return instance


def create(module, names, flavor, image, meta, key_name, files,
           wait, wait_timeout, disk_config, group, nics):
    cs = pyrax.cloudservers
    changed = False

    # Handle the file contents
    for rpath in files.keys():
        lpath = os.path.expanduser(files[rpath])
        try:
            fileobj = open(lpath, 'r')
            files[rpath] = fileobj.read()
        except Exception, e:
            module.fail_json(msg='Failed to load %s' % lpath)
    try:
        servers = []
        for name in names:
            servers.append(cs.servers.create(name=name, image=image,
                                             flavor=flavor, meta=meta,
                                             key_name=key_name, files=files,
                                             nics=nics,
                                             disk_config=disk_config))
    except Exception, e:
        module.fail_json(msg='%s' % e.message)
    else:
        changed = True

    if wait:
        end_time = time.time() + wait_timeout
        infinite = wait_timeout == 0
        while infinite or time.time() < end_time:
            for server in servers:
                try:
                    server.get()
                except:
                    # treat servers we can no longer poll as failed
                    server.status = 'ERROR'
            if not filter(lambda s: s.status not in FINAL_STATUSES, servers):
                break
            time.sleep(5)

    success = []
    error = []
    timeout = []
    for server in servers:
        try:
            server.get()
        except:
            server.status = 'ERROR'
        instance = pyrax_object_to_dict(server)
        if server.status == 'ACTIVE' or not wait:
            success.append(instance)
        elif server.status == 'ERROR':
            error.append(instance)
        elif wait:
            timeout.append(instance)

    results = {
        'changed': changed,
        'action': 'create',
        'instances': success + error + timeout,
        'success': success,
        'error': error,
        'timeout': timeout,
        'instance_ids': {
            'instances': [i['id'] for i in success + error + timeout],
            'success': [i['id'] for i in success],
            'error': [i['id'] for i in error],
            'timeout': [i['id'] for i in timeout]
        }
    }

    if timeout:
        results['msg'] = 'Timeout waiting for all servers to build'
    elif error:
        results['msg'] = 'Failed to build all servers'

    if 'msg' in results:
        module.fail_json(**results)
    else:
        module.exit_json(**results)


def delete(module, instance_ids, wait, wait_timeout):
    cs = pyrax.cloudservers

    changed = False
    instances = {}
    servers = []

    for instance_id in instance_ids:
        servers.append(cs.servers.get(instance_id))

    for server in servers:
        try:
            server.delete()
        except Exception, e:
            module.fail_json(msg=e.message)
        else:
            changed = True

        instance = pyrax_object_to_dict(server)
        instances[instance['id']] = instance

    # If requested, wait for server deletion
    if wait:
        end_time = time.time() + wait_timeout
        infinite = wait_timeout == 0
        while infinite or time.time() < end_time:
            for server in servers:
                instance_id = server.id
                try:
                    server.get()
                except:
                    # the server can no longer be looked up, so it is gone
                    instances[instance_id]['status'] = 'DELETED'
            if not filter(lambda s: s['status'] not in ('', 'DELETED', 'ERROR'),
                          instances.values()):
                break
            time.sleep(5)

    timeout = filter(lambda s: s['status'] not in ('', 'DELETED', 'ERROR'),
                     instances.values())
    error = filter(lambda s: s['status'] in ('ERROR',), instances.values())
    success = filter(lambda s: s['status'] in ('', 'DELETED'),
                     instances.values())

    results = {
        'changed': changed,
        'action': 'delete',
        'instances': success + error + timeout,
        'success': success,
        'error': error,
        'timeout': timeout,
        'instance_ids': {
            'instances': [i['id'] for i in success + error + timeout],
            'success': [i['id'] for i in success],
            'error': [i['id'] for i in error],
            'timeout': [i['id'] for i in timeout]
        }
    }

    if timeout:
        results['msg'] = 'Timeout waiting for all servers to delete'
    elif error:
        results['msg'] = 'Failed to delete all servers'

    if 'msg' in results:
        module.fail_json(**results)
    else:
        module.exit_json(**results)


def cloudservers(module, state, name, flavor, image, meta, key_name, files,
                 wait, wait_timeout, disk_config, count, group,
                 instance_ids, exact_count, networks, count_offset,
                 auto_increment):
    cs = pyrax.cloudservers
    cnw = pyrax.cloud_networks
    servers = []

    # Add the group meta key
    if group and 'group' not in meta:
        meta['group'] = group
    elif 'group' in meta and group is None:
        group = meta['group']

    # When using state=absent with group, the absent block won't match the
    # names properly.
Use the exact_count functionality to decrease the count # to the desired level was_absent = False if group is not None and state == 'absent': exact_count = True state = 'present' was_absent = True # Check if the provided image is a UUID and if not, search for an # appropriate image using human_id and name if image: try: UUID(image) except ValueError: try: image = cs.images.find(human_id=image) except(cs.exceptions.NotFound, cs.exceptions.NoUniqueMatch): try: image = cs.images.find(name=image) except (cs.exceptions.NotFound, cs.exceptions.NoUniqueMatch): module.fail_json(msg='No matching image found (%s)' % image) image = pyrax.utils.get_id(image) # Check if the provided network is a UUID and if not, search for an # appropriate network using label nics = [] if networks: for network in networks: try: UUID(network) except ValueError: if network.lower() == 'public': nics.extend(cnw.get_server_networks(PUBLIC_NET_ID)) elif network.lower() == 'private': nics.extend(cnw.get_server_networks(SERVICE_NET_ID)) else: try: network_obj = cnw.find_network_by_label(network) except (pyrax.exceptions.NetworkNotFound, pyrax.exceptions.NetworkLabelNotUnique): module.fail_json(msg='No matching network found (%s)' % network) else: nics.extend(cnw.get_server_networks(network_obj)) else: nics.extend(cnw.get_server_networks(network)) # act on the state if state == 'present': for arg, value in dict(name=name, flavor=flavor, image=image).iteritems(): if not value: module.fail_json(msg='%s is required for the "rax" module' % arg) # Idempotent ensurance of a specific count of servers if exact_count is not False: # See if we can find servers that match our options if group is None: module.fail_json(msg='"group" must be provided when using ' '"exact_count"') else: if auto_increment: numbers = set() try: name % 0 except TypeError, e: if e.message.startswith('not all'): name = '%s%%d' % name else: module.fail_json(msg=e.message) pattern = re.sub(r'%\d*[sd]', r'(\d+)', name) for server in cs.servers.list(): if server.metadata.get('group') == group: servers.append(server) match = re.search(pattern, server.name) if match: number = int(match.group(1)) numbers.add(number) number_range = xrange(count_offset, count_offset + count) available_numbers = list(set(number_range) .difference(numbers)) else: for server in cs.servers.list(): if server.metadata.get('group') == group: servers.append(server) # If state was absent but the count was changed, # assume we only wanted to remove that number of instances if was_absent: diff = len(servers) - count if diff < 0: count = 0 else: count = diff if len(servers) > count: state = 'absent' del servers[:count] instance_ids = [] for server in servers: instance_ids.append(server.id) delete(module, instance_ids, wait, wait_timeout) elif len(servers) < count: if auto_increment: names = [] name_slice = count - len(servers) numbers_to_use = available_numbers[:name_slice] for number in numbers_to_use: names.append(name % number) else: names = [name] * (count - len(servers)) else: module.exit_json(changed=False, action=None, instances=[], success=[], error=[], timeout=[], instance_ids={'instances': [], 'success': [], 'error': [], 'timeout': []}) else: if group is not None: if auto_increment: numbers = set() try: name % 0 except TypeError, e: if e.message.startswith('not all'): name = '%s%%d' % name else: module.fail_json(msg=e.message) pattern = re.sub(r'%\d*[sd]', r'(\d+)', name) for server in cs.servers.list(): if server.metadata.get('group') == group: servers.append(server) match = 
re.search(pattern, server.name) if match: number = int(match.group(1)) numbers.add(number) number_range = xrange(count_offset, count_offset + count + len(numbers)) available_numbers = list(set(number_range) .difference(numbers)) names = [] numbers_to_use = available_numbers[:count] for number in numbers_to_use: names.append(name % number) else: names = [name] * count else: search_opts = { 'name': '^%s$' % name, 'image': image, 'flavor': flavor } servers = [] for server in cs.servers.list(search_opts=search_opts): if server.metadata != meta: continue servers.append(server) if len(servers) >= count: instances = [] for server in servers: instances.append(pyrax_object_to_dict(server)) instance_ids = [i['id'] for i in instances] module.exit_json(changed=False, action=None, instances=instances, success=[], error=[], timeout=[], instance_ids={'instances': instance_ids, 'success': [], 'error': [], 'timeout': []}) names = [name] * (count - len(servers)) create(module, names, flavor, image, meta, key_name, files, wait, wait_timeout, disk_config, group, nics) elif state == 'absent': if instance_ids is None: for arg, value in dict(name=name, flavor=flavor, image=image).iteritems(): if not value: module.fail_json(msg='%s is required for the "rax" ' 'module' % arg) search_opts = { 'name': '^%s$' % name, 'image': image, 'flavor': flavor } for server in cs.servers.list(search_opts=search_opts): if meta != server.metadata: continue servers.append(server) instance_ids = [] for server in servers: if len(instance_ids) < count: instance_ids.append(server.id) else: break if not instance_ids: module.exit_json(changed=False, action=None, instances=[], success=[], error=[], timeout=[], instance_ids={'instances': [], 'success': [], 'error': [], 'timeout': []}) delete(module, instance_ids, wait, wait_timeout) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( auto_increment=dict(choices=BOOLEANS, default=True, type='bool'), count=dict(default=1, type='int'), count_offset=dict(default=1, type='int'), disk_config=dict(choices=['auto', 'manual']), exact_count=dict(choices=BOOLEANS, default=False, type='bool'), files=dict(type='dict', default={}), flavor=dict(), group=dict(), image=dict(), instance_ids=dict(type='list'), key_name=dict(aliases=['keypair']), meta=dict(type='dict', default={}), name=dict(), networks=dict(type='list', default=['public', 'private']), service=dict(), state=dict(default='present', choices=['present', 'absent']), wait=dict(choices=BOOLEANS, default=False, type='bool'), wait_timeout=dict(default=300), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) service = module.params.get('service') if service is not None: module.fail_json(msg='The "service" attribute has been deprecated, ' 'please remove "service: cloudservers" from your ' 'playbook pertaining to the "rax" module') auto_increment = module.params.get('auto_increment') count = module.params.get('count') count_offset = module.params.get('count_offset') disk_config = module.params.get('disk_config') if disk_config: disk_config = disk_config.upper() exact_count = module.params.get('exact_count', False) files = module.params.get('files') flavor = module.params.get('flavor') group = module.params.get('group') image = module.params.get('image') instance_ids = module.params.get('instance_ids') key_name = module.params.get('key_name') meta = module.params.get('meta') name = module.params.get('name') networks = module.params.get('networks') state = 
module.params.get('state') wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) setup_rax_module(module, pyrax) cloudservers(module, state, name, flavor, image, meta, key_name, files, wait, wait_timeout, disk_config, count, group, instance_ids, exact_count, networks, count_offset, auto_increment) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/s30000664000000000000000000004367112316627017014700 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: s3 short_description: idempotent S3 module putting a file into S3. description: - This module allows the user to dictate the presence of a given file in an S3 bucket. If or once the key (file) exists in the bucket, it returns a time-expired download URL. This module has a dependency on python-boto. version_added: "1.1" options: bucket: description: - Bucket name. required: true default: null aliases: [] object: description: - Keyname of the object inside the bucket. Can be used to create "virtual directories", see examples. required: false default: null aliases: [] version_added: "1.3" src: description: - The source file path when performing a PUT operation. required: false default: null aliases: [] version_added: "1.3" dest: description: - The destination file path when downloading an object/key with a GET operation. required: false aliases: [] version_added: "1.3" overwrite: description: - Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. required: false default: true version_added: "1.2" mode: description: - Switches the module behaviour between put (upload), get (download), geturl (return download url (Ansible 1.3+), getstr (download object as string (1.3+)), create (bucket) and delete (bucket). required: true default: null aliases: [] expiration: description: - Time limit (in seconds) for the URL generated and returned by S3/Walrus when performing a mode=put or mode=geturl operation. required: false default: 600 aliases: [] s3_url: description: - S3 URL endpoint. If not specified then the S3_URL environment variable is used, if that variable is defined. default: null aliases: [ S3_URL ] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: null aliases: ['ec2_secret_key', 'secret_key'] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. 
    required: false
    default: null
    aliases: [ 'ec2_access_key', 'access_key' ]
requirements: [ "boto" ]
author: Lester Wade, Ralph Tice
'''

EXAMPLES = '''
# Simple PUT operation
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put
# Simple GET operation
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get
# GET/download and overwrite local file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get
# GET/download and do not overwrite local file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get force=false
# PUT/upload and overwrite remote file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put
# PUT/upload and do not overwrite remote file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put force=false
# Download an object as a string to use elsewhere in your playbook
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=getstr
# Create an empty bucket
- s3: bucket=mybucket mode=create
# Create a bucket with key as directory
- s3: bucket=mybucket object=/my/directory/path mode=create
# Delete a bucket and all contents
- s3: bucket=mybucket mode=delete
'''

import sys
import os
import urlparse
import hashlib

try:
    import boto
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def key_check(module, s3, bucket, obj):
    try:
        bucket = s3.lookup(bucket)
        key_check = bucket.get_key(obj)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))
    if key_check:
        return True
    else:
        return False

def keysum(module, s3, bucket, obj):
    bucket = s3.lookup(bucket)
    key_check = bucket.get_key(obj)
    if key_check:
        md5_remote = key_check.etag[1:-1]
        etag_multipart = md5_remote.find('-')!=-1 #Check for multipart, etag is not md5
        if etag_multipart is True:
            module.fail_json(msg="Files uploaded with multipart of s3 are not supported with checksum, unable to compute checksum.")
    return md5_remote

def bucket_check(module, s3, bucket):
    try:
        result = s3.lookup(bucket)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))
    if result:
        return True
    else:
        return False

def create_bucket(module, s3, bucket):
    try:
        bucket = s3.create_bucket(bucket)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))
    if bucket:
        return True

def delete_bucket(module, s3, bucket):
    try:
        bucket = s3.lookup(bucket)
        bucket_contents = bucket.list()
        bucket.delete_keys([key.name for key in bucket_contents])
        bucket.delete()
        return True
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))

def delete_key(module, s3, bucket, obj):
    try:
        bucket = s3.lookup(bucket)
        bucket.delete_key(obj)
        module.exit_json(msg="Object deleted from bucket %s"%bucket, changed=True)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))

def create_dirkey(module, s3, bucket, obj):
    try:
        bucket = s3.lookup(bucket)
        key = bucket.new_key(obj)
        key.set_contents_from_string('')
        module.exit_json(msg="Virtual directory %s created in bucket %s" % (obj, bucket.name), changed=True)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))

def upload_file_check(src):
    if os.path.exists(src):
        file_exists = True
    else:
        file_exists = False
    if os.path.isdir(src):
        module.fail_json(msg="Specifying a directory is not a valid source for upload.", failed=True)
    return file_exists

def path_check(path):
    if os.path.exists(path):
        return True
    else:
        return False

def upload_s3file(module, s3, bucket, obj, src, expiry):
    try:
        bucket = s3.lookup(bucket)
        key = bucket.new_key(obj)
        key.set_contents_from_filename(src)
        url = key.generate_url(expiry)
        module.exit_json(msg="PUT operation complete", url=url, changed=True)
    except s3.provider.storage_copy_error, e:
        module.fail_json(msg= str(e))

def download_s3file(module, s3, bucket, obj, dest):
    try:
        bucket = s3.lookup(bucket)
        key = bucket.lookup(obj)
        key.get_contents_to_filename(dest)
        module.exit_json(msg="GET operation complete", changed=True)
    except s3.provider.storage_copy_error, e:
        module.fail_json(msg= str(e))

def download_s3str(module, s3, bucket, obj):
    try:
        bucket = s3.lookup(bucket)
        key = bucket.lookup(obj)
        contents = key.get_contents_as_string()
        module.exit_json(msg="GET operation complete", contents=contents, changed=True)
    except s3.provider.storage_copy_error, e:
        module.fail_json(msg= str(e))

def get_download_url(module, s3, bucket, obj, expiry, changed=True):
    try:
        bucket = s3.lookup(bucket)
        key = bucket.lookup(obj)
        url = key.generate_url(expiry)
        module.exit_json(msg="Download url:", url=url, expiry=expiry, changed=changed)
    except s3.provider.storage_response_error, e:
        module.fail_json(msg= str(e))

def is_walrus(s3_url):
    """ Return True if it's Walrus endpoint, not S3

    We assume anything other than *.amazonaws.com is Walrus"""
    if s3_url is not None:
        o = urlparse.urlparse(s3_url)
        return not o.hostname.endswith('amazonaws.com')
    else:
        return False

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
            bucket         = dict(required=True),
            object         = dict(),
            src            = dict(),
            dest           = dict(default=None),
            mode           = dict(choices=['get', 'put', 'delete', 'create', 'geturl', 'getstr'], required=True),
            expiry         = dict(default=600, aliases=['expiration']),
            s3_url         = dict(aliases=['S3_URL']),
            overwrite      = dict(aliases=['force'], default=True, type='bool'),
        )
    )
    module = AnsibleModule(argument_spec=argument_spec)

    bucket = module.params.get('bucket')
    obj = module.params.get('object')
    src = module.params.get('src')
    if module.params.get('dest'):
        dest = os.path.expanduser(module.params.get('dest'))
    mode = module.params.get('mode')
    expiry = int(module.params['expiry'])
    s3_url = module.params.get('s3_url')
    overwrite = module.params.get('overwrite')

    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)

    if module.params.get('object'):
        obj = os.path.expanduser(module.params['object'])

    # allow eucarc environment variables to be used if ansible vars aren't set
    if not s3_url and 'S3_URL' in os.environ:
        s3_url = os.environ['S3_URL']

    # If we have an S3_URL env var set, this is likely to be Walrus, so change connection method
    if is_walrus(s3_url):
        try:
            walrus = urlparse.urlparse(s3_url).hostname
            s3 = boto.connect_walrus(walrus, aws_access_key, aws_secret_key)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))
    else:
        try:
            s3 = boto.connect_s3(aws_access_key, aws_secret_key)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))

    # If our mode is a GET operation (download), go through the procedure as appropriate ...
    if mode == 'get':

        # First, we check to see if the bucket exists, we get "bucket" returned.
        bucketrtn = bucket_check(module, s3, bucket)
        if bucketrtn is False:
            module.fail_json(msg="Target bucket cannot be found", failed=True)

        # Next, we check to see if the key in the bucket exists. If it exists, it also returns key_matches md5sum check.
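        # The checks that follow boil down to this decision table:
        #
        #   dest exists | checksums match | overwrite | outcome
        #   ------------+-----------------+-----------+----------------------------
        #   no          | -               | -         | download
        #   yes         | yes             | yes       | download again
        #   yes         | yes             | no        | exit, changed=False
        #   yes         | no              | yes       | download
        #   yes         | no              | no        | fail with checksum warning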
keyrtn = key_check(module, s3, bucket, obj) if keyrtn is False: module.fail_json(msg="Target key cannot be found", failed=True) # If the destination path doesn't exist, no need to md5um etag check, so just download. pathrtn = path_check(dest) if pathrtn is False: download_s3file(module, s3, bucket, obj, dest) # Compare the remote MD5 sum of the object with the local dest md5sum, if it already exists. if pathrtn is True: md5_remote = keysum(module, s3, bucket, obj) md5_local = hashlib.md5(open(dest, 'rb').read()).hexdigest() if md5_local == md5_remote: sum_matches = True if overwrite is True: download_s3file(module, s3, bucket, obj, dest) else: module.exit_json(msg="Local and remote object are identical, ignoring. Use overwrite parameter to force.", changed=False) else: sum_matches = False if overwrite is True: download_s3file(module, s3, bucket, obj, dest) else: module.fail_json(msg="WARNING: Checksums do not match. Use overwrite parameter to force download.", failed=True) # Firstly, if key_matches is TRUE and overwrite is not enabled, we EXIT with a helpful message. if sum_matches is True and overwrite is False: module.exit_json(msg="Local and remote object are identical, ignoring. Use overwrite parameter to force.", changed=False) # At this point explicitly define the overwrite condition. if sum_matches is True and pathrtn is True and overwrite is True: download_s3file(module, s3, bucket, obj, dest) # If sum does not match but the destination exists, we # if our mode is a PUT operation (upload), go through the procedure as appropriate ... if mode == 'put': # Use this snippet to debug through conditionals: # module.exit_json(msg="Bucket return %s"%bucketrtn) # sys.exit(0) # Lets check the src path. pathrtn = path_check(src) if pathrtn is False: module.fail_json(msg="Local object for PUT does not exist", failed=True) # Lets check to see if bucket exists to get ground truth. bucketrtn = bucket_check(module, s3, bucket) if bucketrtn is True: keyrtn = key_check(module, s3, bucket, obj) # Lets check key state. Does it exist and if it does, compute the etag md5sum. if bucketrtn is True and keyrtn is True: md5_remote = keysum(module, s3, bucket, obj) md5_local = hashlib.md5(open(src, 'rb').read()).hexdigest() if md5_local == md5_remote: sum_matches = True if overwrite is True: upload_s3file(module, s3, bucket, obj, src, expiry) else: get_download_url(module, s3, bucket, obj, expiry, changed=False) else: sum_matches = False if overwrite is True: upload_s3file(module, s3, bucket, obj, src, expiry) else: module.exit_json(msg="WARNING: Checksums do not match. Use overwrite parameter to force upload.", failed=True) # If neither exist (based on bucket existence), we can create both. if bucketrtn is False and pathrtn is True: create_bucket(module, s3, bucket) upload_s3file(module, s3, bucket, obj, src, expiry) # If bucket exists but key doesn't, just upload. if bucketrtn is True and pathrtn is True and keyrtn is False: upload_s3file(module, s3, bucket, obj, src, expiry) # Support for deleting an object if we have both params. 
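    # Note that in this version mode=delete removes the bucket itself together
    # with every key in it (see delete_bucket above); the delete_key helper is
    # defined but is not reachable from any mode.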
if mode == 'delete': if bucket: bucketrtn = bucket_check(module, s3, bucket) if bucketrtn is True: deletertn = delete_bucket(module, s3, bucket) if deletertn is True: module.exit_json(msg="Bucket %s and all keys have been deleted."%bucket, changed=True) else: module.fail_json(msg="Bucket does not exist.", changed=False) else: module.fail_json(msg="Bucket parameter is required.", failed=True) # Need to research how to create directories without "populating" a key, so this should just do bucket creation for now. # WE SHOULD ENABLE SOME WAY OF CREATING AN EMPTY KEY TO CREATE "DIRECTORY" STRUCTURE, AWS CONSOLE DOES THIS. if mode == 'create': if bucket and not obj: bucketrtn = bucket_check(module, s3, bucket) if bucketrtn is True: module.exit_json(msg="Bucket already exists.", changed=False) else: module.exit_json(msg="Bucket created succesfully", changed=create_bucket(module, s3, bucket)) if bucket and obj: bucketrtn = bucket_check(module, s3, bucket) if obj.endswith('/'): dirobj = obj else: dirobj = obj + "/" if bucketrtn is True: keyrtn = key_check(module, s3, bucket, dirobj) if keyrtn is True: module.exit_json(msg="Bucket %s and key %s already exists."% (bucket, obj), changed=False) else: create_dirkey(module, s3, bucket, dirobj) if bucketrtn is False: created = create_bucket(module, s3, bucket) create_dirkey(module, s3, bucket, dirobj) # Support for grabbing the time-expired URL for an object in S3/Walrus. if mode == 'geturl': if bucket and obj: bucketrtn = bucket_check(module, s3, bucket) if bucketrtn is False: module.fail_json(msg="Bucket %s does not exist."%bucket, failed=True) else: keyrtn = key_check(module, s3, bucket, obj) if keyrtn is True: get_download_url(module, s3, bucket, obj, expiry) else: module.fail_json(msg="Key %s does not exist."%obj, failed=True) else: module.fail_json(msg="Bucket and Object parameters must be set", failed=True) if mode == 'getstr': if bucket and obj: bucketrtn = bucket_check(module, s3, bucket) if bucketrtn is False: module.fail_json(msg="Bucket %s does not exist."%bucket, failed=True) else: keyrtn = key_check(module, s3, bucket, obj) if keyrtn is True: download_s3str(module, s3, bucket, obj) else: module.fail_json(msg="Key %s does not exist."%obj, failed=True) module.exit_json(failed=False) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/ec2_elb0000664000000000000000000003164612316627017015645 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = """ --- module: ec2_elb short_description: De-registers or registers instances from EC2 ELBs description: - This module de-registers or registers an AWS EC2 instance from the ELBs that it belongs to. - Returns fact "ec2_elbs" which is a list of elbs attached to the instance if state=absent is passed as an argument. - Will be marked changed when called only if there are ELBs found to operate on. 
version_added: "1.2" requirements: [ "boto" ] author: John Jarvis options: state: description: - register or deregister the instance required: true choices: ['present', 'absent'] instance_id: description: - EC2 Instance ID required: true ec2_elbs: description: - List of ELB names, required for registration. The ec2_elbs fact should be used if there was a previous de-register. required: false default: None aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key' ] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false aliases: ['aws_region', 'ec2_region'] enable_availability_zone: description: - Whether to enable the availability zone of the instance on the target ELB if the availability zone has not already been enabled. If set to no, the task will fail if the availability zone is not enabled on the ELB. required: false default: yes choices: [ "yes", "no" ] wait: description: - Wait for instance registration or deregistration to complete successfully before returning. required: false default: yes choices: [ "yes", "no" ] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" """ EXAMPLES = """ # basic pre_task and post_task example pre_tasks: - name: Gathering ec2 facts ec2_facts: - name: Instance De-register local_action: ec2_elb args: instance_id: "{{ ansible_ec2_instance_id }}" state: 'absent' roles: - myrole post_tasks: - name: Instance Register local_action: ec2_elb args: instance_id: "{{ ansible_ec2_instance_id }}" ec2_elbs: "{{ item }}" state: 'present' with_items: ec2_elbs """ import time import sys import os try: import boto import boto.ec2 import boto.ec2.elb from boto.regioninfo import RegionInfo except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) class ElbManager: """Handles EC2 instance ELB registration and de-registration""" def __init__(self, module, instance_id=None, ec2_elbs=None, aws_access_key=None, aws_secret_key=None, region=None): self.aws_access_key = aws_access_key self.aws_secret_key = aws_secret_key self.module = module self.instance_id = instance_id self.region = region self.lbs = self._get_instance_lbs(ec2_elbs) self.changed = False def deregister(self, wait): """De-register the instance from all ELBs and wait for the ELB to report it out-of-service""" for lb in self.lbs: initial_state = self._get_instance_health(lb) if wait else None if initial_state and initial_state.state == 'InService': lb.deregister_instances([self.instance_id]) else: return if wait: self._await_elb_instance_state(lb, 'OutOfService', initial_state) else: # We cannot assume no change was made if we don't wait # to find out self.changed = True def register(self, wait, enable_availability_zone): """Register the instance for all ELBs and wait for the ELB to report the instance in-service""" for lb in self.lbs: if wait: tries = 1 while True: initial_state = self._get_instance_health(lb) if initial_state: break time.sleep(1) tries += 1 # FIXME: this should be configurable, but since it 
# didn't wait at all before, this is at least better
                    if tries > 10:
                        self.module.fail_json(msg='failed to find the initial state of the load balancer')

            if enable_availability_zone:
                self._enable_availability_zone(lb)

            lb.register_instances([self.instance_id])

            if wait:
                self._await_elb_instance_state(lb, 'InService', initial_state)
            else:
                # We cannot assume no change was made if we don't wait
                # to find out
                self.changed = True

    def exists(self, lbtest):
        """ Verify that the named ELB actually exists """

        found = False
        for lb in self.lbs:
            if lb.name == lbtest:
                found=True
                break
        return found

    def _enable_availability_zone(self, lb):
        """Enable the current instance's availability zone in the provided lb.
        Returns True if the zone was enabled or False if no change was made.
        lb: load balancer"""
        instance = self._get_instance()
        if instance.placement in lb.availability_zones:
            return False

        lb.enable_zones(zones=instance.placement)

        # If successful, the new zone will have been added to
        # lb.availability_zones
        return instance.placement in lb.availability_zones

    def _await_elb_instance_state(self, lb, awaited_state, initial_state):
        """Wait for an ELB to change state
        lb: load balancer
        awaited_state : state to poll for (string)"""

        while True:
            instance_state = self._get_instance_health(lb)

            if not instance_state:
                msg = ("The instance %s could not be put in service on %s."
                       " Reason: Invalid Instance")
                self.module.fail_json(msg=msg % (self.instance_id, lb))

            if instance_state.state == awaited_state:
                # Check the current state against the initial state, and only
                # set changed if they are different.
                if (initial_state is None) or (instance_state.state != initial_state.state):
                    self.changed = True
                break
            elif self._is_instance_state_pending(instance_state):
                # If it's pending, we'll skip further checks and continue waiting
                pass
            elif (awaited_state == 'InService'
                  and instance_state.reason_code == "Instance"):
                # If the reason_code for the instance being out of service is
                # "Instance" this indicates a failure state, e.g. the instance
                # has failed a health check or the ELB does not have the
                # instance's availability zone enabled. The exact reason why is
                # described in InstanceState.description.
                msg = ("The instance %s could not be put in service on %s."
                       " Reason: %s")
                self.module.fail_json(msg=msg % (self.instance_id,
                                                 lb,
                                                 instance_state.description))
            time.sleep(1)

    def _is_instance_state_pending(self, instance_state):
        """
        Determines whether the instance_state is "pending", meaning there is
        an operation under way to bring it in service.
        """
        # This is messy, because AWS provides no way to distinguish between
        # an instance that is OutOfService because it's pending vs. OutOfService
        # because it's failing health checks. So we're forced to analyze the
        # description, which is likely to be brittle.
        return (instance_state and 'pending' in instance_state.description)

    def _get_instance_health(self, lb):
        """
        Check instance health, should return status object or None under
        certain error conditions.
""" try: status = lb.get_instance_health([self.instance_id])[0] except boto.exception.BotoServerError, e: if e.error_code == 'InvalidInstance': return None else: raise return status def _get_instance_lbs(self, ec2_elbs=None): """Returns a list of ELBs attached to self.instance_id ec2_elbs: an optional list of elb names that will be used for elb lookup instead of returning what elbs are attached to self.instance_id""" try: endpoint="elasticloadbalancing.%s.amazonaws.com" % self.region connect_region = RegionInfo(name=self.region, endpoint=endpoint) elb = boto.ec2.elb.ELBConnection(self.aws_access_key, self.aws_secret_key, region=connect_region) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) elbs = elb.get_all_load_balancers() if ec2_elbs: lbs = sorted(lb for lb in elbs if lb.name in ec2_elbs) else: lbs = [] for lb in elbs: for info in lb.instances: if self.instance_id == info.id: lbs.append(lb) return lbs def _get_instance(self): """Returns a boto.ec2.InstanceObject for self.instance_id""" try: endpoint = "ec2.%s.amazonaws.com" % self.region connect_region = RegionInfo(name=self.region, endpoint=endpoint) ec2_conn = boto.ec2.EC2Connection(self.aws_access_key, self.aws_secret_key, region=connect_region) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) return ec2_conn.get_only_instances(instance_ids=[self.instance_id])[0] def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( state={'required': True, 'choices': ['present', 'absent']}, instance_id={'required': True}, ec2_elbs={'default': None, 'required': False, 'type':'list'}, enable_availability_zone={'default': True, 'required': False, 'choices': BOOLEANS, 'type': 'bool'}, wait={'required': False, 'choices': BOOLEANS, 'default': True, 'type': 'bool'} ) ) module = AnsibleModule( argument_spec=argument_spec, ) # def get_ec2_creds(module): # return ec2_url, ec2_access_key, ec2_secret_key, region ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) ec2_elbs = module.params['ec2_elbs'] region = module.params['region'] wait = module.params['wait'] enable_availability_zone = module.params['enable_availability_zone'] if module.params['state'] == 'present' and 'ec2_elbs' not in module.params: module.fail_json(msg="ELBs are required for registration") instance_id = module.params['instance_id'] elb_man = ElbManager(module, instance_id, ec2_elbs, aws_access_key, aws_secret_key, region=region) if ec2_elbs is not None: for elb in ec2_elbs: if not elb_man.exists(elb): msg="ELB %s does not exist" % elb module.fail_json(msg=msg) if module.params['state'] == 'present': elb_man.register(wait, enable_availability_zone) elif module.params['state'] == 'absent': elb_man.deregister(wait) ansible_facts = {'ec2_elbs': [lb.name for lb in elb_man.lbs]} ec2_facts_result = dict(changed=elb_man.changed, ansible_facts=ansible_facts) module.exit_json(**ec2_facts_result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/rds0000664000000000000000000006305112316627017015135 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rds version_added: "1.3" short_description: create, delete, or modify an Amazon rds instance description: - Creates, deletes, or modifies rds instances. When creating an instance it can be either a new instance or a read-only replica of an existing instance. This module has a dependency on python-boto >= 2.5. The 'promote' command requires boto >= 2.18.0. options: command: description: - Specifies the action to take. required: true default: null aliases: [] choices: [ 'create', 'replicate', 'delete', 'facts', 'modify' , 'promote', 'snapshot', 'restore' ] instance_name: description: - Database instance identifier. required: true default: null aliases: [] source_instance: description: - Name of the database to replicate. Used only when command=replicate. required: false default: null aliases: [] db_engine: description: - The type of database. Used only when command=create. required: false default: null aliases: [] choices: [ 'MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres'] size: description: - Size in gigabytes of the initial storage for the DB instance. Used only when command=create or command=modify. required: false default: null aliases: [] instance_type: description: - The instance type of the database. Must be specified when command=create. Optional when command=replicate, command=modify or command=restore. If not specified then the replica inherits the same instance type as the source instance. required: false default: null aliases: [] choices: [ 'db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge' ] username: description: - Master database username. Used only when command=create. required: false default: null aliases: [] password: description: - Password for the master database username. Used only when command=create or command=modify. required: false default: null aliases: [] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: true default: null aliases: [ 'aws_region', 'ec2_region' ] db_name: description: - Name of a database to create within the instance. If not specified then no database is created. Used only when command=create. required: false default: null aliases: [] engine_version: description: - Version number of the database engine to use. Used only when command=create. If not specified then the current Amazon RDS default engine version is used. required: false default: null aliases: [] parameter_group: description: - Name of the DB parameter group to associate with this instance. If omitted then the RDS default DBParameterGroup will be used. Used only when command=create or command=modify. required: false default: null aliases: [] license_model: description: - The license model for this DB instance. Used only when command=create or command=restore. required: false default: null aliases: [] choices: [ 'license-included', 'bring-your-own-license', 'general-public-license' ] multi_zone: description: - Specifies if this is a Multi-availability-zone deployment. 
Can not be used in conjunction with zone parameter. Used only when command=create or command=modify. choices: [ "yes", "no" ] required: false default: null aliases: [] iops: description: - Specifies the number of IOPS for the instance. Used only when command=create or command=modify. Must be an integer greater than 1000. required: false default: null aliases: [] security_groups: description: - Comma separated list of one or more security groups. Used only when command=create or command=modify. required: false default: null aliases: [] vpc_security_groups: description: - Comma separated list of one or more vpc security groups. Used only when command=create or command=modify. required: false default: null aliases: [] port: description: - Port number that the DB instance uses for connections. Defaults to 3306 for mysql, 1521 for Oracle, 1443 for SQL Server. Used only when command=create or command=replicate. required: false default: null aliases: [] upgrade: description: - Indicates that minor version upgrades should be applied automatically. Used only when command=create or command=replicate. required: false default: no choices: [ "yes", "no" ] aliases: [] option_group: description: - The name of the option group to use. If not specified then the default option group is used. Used only when command=create. required: false default: null aliases: [] maint_window: description: - "Maintenance window in format of ddd:hh24:mi-ddd:hh24:mi. (Example: Mon:22:00-Mon:23:15) If not specified then a random maintenance window is assigned. Used only when command=create or command=modify." required: false default: null aliases: [] backup_window: description: - Backup window in format of hh24:mi-hh24:mi. If not specified then a random backup window is assigned. Used only when command=create or command=modify. required: false default: null aliases: [] backup_retention: description: - "Number of days backups are retained. Set to 0 to disable backups. Default is 1 day. Valid range: 0-35. Used only when command=create or command=modify." required: false default: null aliases: [] zone: description: - availability zone in which to launch the instance. Used only when command=create, command=replicate or command=restore. required: false default: null aliases: ['aws_zone', 'ec2_zone'] subnet: description: - VPC subnet group. If specified then a VPC instance is created. Used only when command=create. required: false default: null aliases: [] snapshot: description: - Name of snapshot to take. When command=delete, if no snapshot name is provided then no snapshot is taken. Used only when command=delete or command=snapshot. required: false default: null aliases: [] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: null aliases: [ 'ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: null aliases: [ 'ec2_access_key', 'access_key' ] wait: description: - When command=create, replicate, modify or restore then wait for the database to enter the 'available' state. When command=delete wait for the database to be terminated. required: false default: "no" choices: [ "yes", "no" ] aliases: [] wait_timeout: description: - how long before wait gives up, in seconds default: 300 aliases: [] apply_immediately: description: - Used only when command=modify. 
If enabled, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window. default: no choices: [ "yes", "no" ] aliases: [] new_instance_name: description: - Name to rename an instance to. Used only when command=modify. required: false default: null aliases: [] version_added: 1.5 requirements: [ "boto" ] author: Bruce Pennypacker ''' EXAMPLES = ''' # Basic mysql provisioning example - rds: > command=create instance_name=new_database db_engine=MySQL size=10 instance_type=db.m1.small username=mysql_admin password=1nsecure # Create a read-only replica and wait for it to become available - rds: > command=replicate instance_name=new_database_replica source_instance=new_database wait=yes wait_timeout=600 # Delete an instance, but create a snapshot before doing so - rds: > command=delete instance_name=new_database snapshot=new_database_snapshot # Get facts about an instance - rds: > command=facts instance_name=new_database register: new_database_facts # Rename an instance and wait for the change to take effect - rds: > command=modify instance_name=new_database new_instance_name=renamed_database wait=yes ''' import sys import time try: import boto.rds except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def get_current_resource(conn, resource, command): # There will be exceptions but we want the calling code to handle them if command == 'snapshot': return conn.get_all_dbsnapshots(snapshot_id=resource)[0] else: return conn.get_all_dbinstances(resource)[0] def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( command = dict(choices=['create', 'replicate', 'delete', 'facts', 'modify', 'promote', 'snapshot', 'restore'], required=True), instance_name = dict(required=True), source_instance = dict(required=False), db_engine = dict(choices=['MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres'], required=False), size = dict(required=False), instance_type = dict(aliases=['type'], choices=['db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge'], required=False), username = dict(required=False), password = dict(no_log=True, required=False), db_name = dict(required=False), engine_version = dict(required=False), parameter_group = dict(required=False), license_model = dict(choices=['license-included', 'bring-your-own-license', 'general-public-license'], required=False), multi_zone = dict(type='bool', default=False), iops = dict(required=False), security_groups = dict(required=False), vpc_security_groups = dict(required=False), port = dict(required=False), upgrade = dict(type='bool', default=False), option_group = dict(required=False), maint_window = dict(required=False), backup_window = dict(required=False), backup_retention = dict(required=False), zone = dict(aliases=['aws_zone', 'ec2_zone'], required=False), subnet = dict(required=False), wait = dict(type='bool', default=False), wait_timeout = dict(default=300), snapshot = dict(required=False), apply_immediately = dict(type='bool', default=False), new_instance_name = dict(required=False), ) ) module = AnsibleModule( argument_spec=argument_spec, ) command = module.params.get('command') instance_name = module.params.get('instance_name') source_instance = module.params.get('source_instance') db_engine = module.params.get('db_engine') size = module.params.get('size') instance_type = 
module.params.get('instance_type')
    username = module.params.get('username')
    password = module.params.get('password')
    db_name = module.params.get('db_name')
    engine_version = module.params.get('engine_version')
    parameter_group = module.params.get('parameter_group')
    license_model = module.params.get('license_model')
    multi_zone = module.params.get('multi_zone')
    iops = module.params.get('iops')
    security_groups = module.params.get('security_groups')
    vpc_security_groups = module.params.get('vpc_security_groups')
    port = module.params.get('port')
    upgrade = module.params.get('upgrade')
    option_group = module.params.get('option_group')
    maint_window = module.params.get('maint_window')
    subnet = module.params.get('subnet')
    backup_window = module.params.get('backup_window')
    backup_retention = module.params.get('backup_retention')
    region = module.params.get('region')
    zone = module.params.get('zone')
    aws_secret_key = module.params.get('aws_secret_key')
    aws_access_key = module.params.get('aws_access_key')
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))
    snapshot = module.params.get('snapshot')
    apply_immediately = module.params.get('apply_immediately')
    new_instance_name = module.params.get('new_instance_name')

    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)
    if not region:
        module.fail_json(msg = str("region not specified and unable to determine region from EC2_REGION."))

    # connect to the rds endpoint
    try:
        conn = boto.rds.connect_to_region(region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key)
    except boto.exception.BotoServerError, e:
        module.fail_json(msg = e.error_message)

    def invalid_security_group_type(subnet):
        if subnet:
            return 'security_groups'
        else:
            return 'vpc_security_groups'

    # Validate parameters for each command
    if command == 'create':
        required_vars = [ 'instance_name', 'db_engine', 'size', 'instance_type', 'username', 'password' ]
        invalid_vars = [ 'source_instance', 'snapshot', 'apply_immediately', 'new_instance_name' ] + [invalid_security_group_type(subnet)]

    elif command == 'replicate':
        required_vars = [ 'instance_name', 'source_instance' ]
        invalid_vars = [ 'db_engine', 'size', 'username', 'password', 'db_name', 'engine_version', 'parameter_group', 'license_model', 'multi_zone', 'iops', 'vpc_security_groups', 'security_groups', 'option_group', 'maint_window', 'backup_window', 'backup_retention', 'subnet', 'snapshot', 'apply_immediately', 'new_instance_name' ]

    elif command == 'delete':
        required_vars = [ 'instance_name' ]
        invalid_vars = [ 'db_engine', 'size', 'instance_type', 'username', 'password', 'db_name', 'engine_version', 'parameter_group', 'license_model', 'multi_zone', 'iops', 'vpc_security_groups', 'security_groups', 'option_group', 'maint_window', 'backup_window', 'backup_retention', 'port', 'upgrade', 'subnet', 'zone', 'source_instance', 'apply_immediately', 'new_instance_name' ]

    elif command == 'facts':
        required_vars = [ 'instance_name' ]
        invalid_vars = [ 'db_engine', 'size', 'instance_type', 'username', 'password', 'db_name', 'engine_version', 'parameter_group', 'license_model', 'multi_zone', 'iops', 'vpc_security_groups', 'security_groups', 'option_group', 'maint_window', 'backup_window', 'backup_retention', 'port', 'upgrade', 'subnet', 'zone', 'wait', 'source_instance', 'apply_immediately', 'new_instance_name' ]

    elif command == 'modify':
        required_vars = [ 'instance_name' ]
        invalid_vars = [ 'db_engine', 'username', 'db_name', 'engine_version', 'license_model', 'option_group', 'port', 'upgrade', 'subnet', 'zone', 'source_instance']

    elif command == 'promote':
        required_vars = [ 'instance_name' ]
        invalid_vars = [ 'db_engine', 'size', 'username', 'password', 'db_name', 'engine_version', 'parameter_group', 'license_model', 'multi_zone', 'iops', 'vpc_security_groups', 'security_groups', 'option_group', 'maint_window', 'subnet', 'source_instance', 'snapshot', 'apply_immediately', 'new_instance_name' ]

    elif command == 'snapshot':
        required_vars = [ 'instance_name', 'snapshot']
        invalid_vars = [ 'db_engine', 'size', 'username', 'password', 'db_name', 'engine_version', 'parameter_group', 'license_model', 'multi_zone', 'iops', 'vpc_security_groups', 'security_groups', 'option_group', 'maint_window', 'subnet', 'source_instance', 'apply_immediately', 'new_instance_name' ]

    elif command == 'restore':
        required_vars = [ 'instance_name', 'snapshot', 'instance_type' ]
        invalid_vars = [ 'db_engine', 'db_name', 'username', 'password', 'engine_version', 'option_group', 'source_instance', 'apply_immediately', 'new_instance_name', 'vpc_security_groups', 'security_groups' ]

    for v in required_vars:
        if not module.params.get(v):
            module.fail_json(msg = str("Parameter %s required for %s command" % (v, command)))

    for v in invalid_vars:
        if module.params.get(v):
            module.fail_json(msg = str("Parameter %s invalid for %s command" % (v, command)))

    # Package up the optional parameters
    params = {}

    if db_engine:
        params["engine"] = db_engine

    if port:
        params["port"] = port

    if db_name:
        params["db_name"] = db_name

    if parameter_group:
        params["param_group"] = parameter_group

    if zone:
        params["availability_zone"] = zone

    if maint_window:
        params["preferred_maintenance_window"] = maint_window

    if backup_window:
        params["preferred_backup_window"] = backup_window

    if backup_retention:
        params["backup_retention_period"] = backup_retention

    if multi_zone:
        params["multi_az"] = multi_zone

    if engine_version:
        params["engine_version"] = engine_version

    if upgrade:
        params["auto_minor_version_upgrade"] = upgrade

    if subnet:
        params["db_subnet_group_name"] = subnet

    if license_model:
        params["license_model"] = license_model

    if option_group:
        params["option_group_name"] = option_group

    if iops:
        params["iops"] = iops

    if security_groups:
        params["security_groups"] = security_groups.split(',')

    if vpc_security_groups:
        params["vpc_security_groups"] = vpc_security_groups.split(',')

    if new_instance_name:
        params["new_instance_id"] = new_instance_name

    # password changes are applied via command=modify
    if password and command == 'modify':
        params["master_password"] = password

    changed = True

    if command in ['create', 'restore', 'facts']:
        try:
            result = conn.get_all_dbinstances(instance_name)[0]
            changed = False
        except boto.exception.BotoServerError, e:
            try:
                if command == 'create':
                    result = conn.create_dbinstance(instance_name, size, instance_type, username, password, **params)
                if command == 'restore':
                    result = conn.restore_dbinstance_from_dbsnapshot(snapshot, instance_name, instance_type, **params)
                if command == 'facts':
                    module.fail_json(msg = "DB Instance %s does not exist" % instance_name)
            except boto.exception.BotoServerError, e:
                module.fail_json(msg = e.error_message)

    if command == 'snapshot':
        try:
            result = conn.get_all_dbsnapshots(snapshot)[0]
            changed = False
        except boto.exception.BotoServerError, e:
            try:
                result = conn.create_dbsnapshot(snapshot, instance_name)
            except boto.exception.BotoServerError, e:
                module.fail_json(msg = e.error_message)

    if command == 'delete':
        try:
            result = conn.get_all_dbinstances(instance_name)[0]
            if result.status == 'deleting':
                module.exit_json(changed=False)
        except boto.exception.BotoServerError, e:
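            # the lookup raising here means the instance is already gone,
            # so there is nothing to delete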
module.exit_json(changed=False) try: if snapshot: params["skip_final_snapshot"] = False params["final_snapshot_id"] = snapshot else: params["skip_final_snapshot"] = True result = conn.delete_dbinstance(instance_name, **params) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) if command == 'replicate': try: if instance_type: params["instance_class"] = instance_type result = conn.create_dbinstance_read_replica(instance_name, source_instance, **params) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) if command == 'modify': try: params["apply_immediately"] = apply_immediately result = conn.modify_dbinstance(instance_name, **params) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) if apply_immediately: if new_instance_name: # Wait until the new instance name is valid found = 0 while found == 0: instances = conn.get_all_dbinstances() for i in instances: if i.id == new_instance_name: instance_name = new_instance_name found = 1 if found == 0: time.sleep(5) else: # Wait for a few seconds since it takes a while for AWS # to change the instance from 'available' to 'modifying' time.sleep(5) if command == 'promote': try: result = conn.promote_read_replica(instance_name, **params) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) # If we're not waiting for a delete to complete then we're all done # so just return if command == 'delete' and not wait: module.exit_json(changed=True) try: resource = get_current_resource(conn, result.id, command) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) # Wait for the resource to be available if requested if wait: try: wait_timeout = time.time() + wait_timeout time.sleep(5) while wait_timeout > time.time() and resource.status != 'available': time.sleep(5) if wait_timeout <= time.time(): module.fail_json(msg = "Timeout waiting for resource %s" % resource.id) resource = get_current_resource(conn, result.id, command) except boto.exception.BotoServerError, e: # If we're waiting for an instance to be deleted then # get_all_dbinstances will eventually throw a # DBInstanceNotFound error. 
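# Any other BotoServerError while polling (throttling, transient API failures) is unexpected and is surfaced as a module failure below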
if command == 'delete' and e.error_code == 'DBInstanceNotFound': module.exit_json(changed=True) else: module.fail_json(msg = e.error_message) # If we got here then pack up all the instance details to send # back to ansible if command == 'snapshot': d = { 'id' : resource.id, 'create_time' : resource.snapshot_create_time, 'status' : resource.status, 'availability_zone' : resource.availability_zone, 'instance_id' : resource.instance_id, 'instance_created' : resource.instance_create_time, } try: d["snapshot_type"] = resource.snapshot_type d["iops"] = resource.iops except AttributeError, e: pass # needs boto >= 2.21.0 return module.exit_json(changed=changed, snapshot=d) d = { 'id' : resource.id, 'create_time' : resource.create_time, 'status' : resource.status, 'availability_zone' : resource.availability_zone, 'backup_retention' : resource.backup_retention_period, 'backup_window' : resource.preferred_backup_window, 'maintenance_window' : resource.preferred_maintenance_window, 'multi_zone' : resource.multi_az, 'instance_type' : resource.instance_class, 'username' : resource.master_username, 'iops' : resource.iops } # Endpoint exists only if the instance is available if resource.status == 'available' and command != 'snapshot': d["endpoint"] = resource.endpoint[0] d["port"] = resource.endpoint[1] else: d["endpoint"] = None d["port"] = None # ReadReplicaSourceDBInstanceIdentifier may or may not exist try: d["replication_source"] = resource.ReadReplicaSourceDBInstanceIdentifier except Exception, e: d["replication_source"] = None module.exit_json(changed=changed, instance=d) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/glance_image0000664000000000000000000002107512316627017016740 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: glance_image version_added: "1.2" short_description: Add/Delete images from glance description: - Add or Remove images from the glance repository. 
options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present name: description: - Name that has to be given to the image required: true default: None disk_format: description: - The format of the disk that is getting uploaded required: false default: qcow2 container_format: description: - The format of the container required: false default: bare owner: description: - The owner of the image required: false default: None min_disk: description: - The minimum disk space required to deploy this image required: false default: None min_ram: description: - The minimum ram required to deploy this image required: false default: None is_public: description: - Whether the image can be accessed publicly required: false default: 'yes' copy_from: description: - A URL from which the image can be downloaded, mutually exclusive with the file parameter required: false default: None timeout: description: - The time to wait for the image process to complete in seconds required: false default: 180 file: description: - The path to the file which has to be uploaded, mutually exclusive with copy_from required: false default: None requirements: ["glanceclient", "keystoneclient"] ''' EXAMPLES = ''' # Upload an image from an HTTP URL - glance_image: login_username=admin login_password=passme login_tenant_name=admin name=cirros container_format=bare disk_format=qcow2 state=present copy_from=http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img ''' import time try: import glanceclient from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='glanceclient and keystoneclient are required'") def _get_ksclient(module, kwargs): try: client = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to keystone: %s" % e.message) return client def _get_endpoint(module, client): try: endpoint = client.service_catalog.url_for(service_type='image', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting endpoint for glance: %s" % e.message) return endpoint def _get_glance_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, } try: client = glanceclient.Client('1', endpoint, **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to glance: %s" % e.message) return client def _glance_image_present(module, params, client): try: for image in client.images.list(): if image.name == params['name']: return image.id return None except Exception, e: module.fail_json(msg = "Error in fetching image list: %s" % e.message) def _glance_image_create(module, params, client): kwargs = { 'name': params.get('name'), 'disk_format': params.get('disk_format'), 'container_format': params.get('container_format'), 'owner':
params.get('owner'), 'is_public': params.get('is_public'), 'copy_from': params.get('copy_from'), } try: timeout = float(params.get('timeout')) expire = time.time() + timeout image = client.images.create(**kwargs) if not params['copy_from']: image.update(data=open(params['file'], 'rb')) while time.time() < expire: image = client.images.get(image.id) if image.status == 'active': break time.sleep(5) except Exception, e: module.fail_json(msg = "Error in creating image: %s" % e.message) if image.status == 'active': module.exit_json(changed = True, result = image.status, id=image.id) else: module.fail_json(msg = "The module timed out waiting for the image, please check manually; last status: " + image.status) def _glance_delete_image(module, params, client): try: for image in client.images.list(): if image.name == params['name']: client.images.delete(image) except Exception, e: module.fail_json(msg = "Error in deleting image: %s" % e.message) module.exit_json(changed = True, result = "Deleted") def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required=True), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), name = dict(required=True), disk_format = dict(default='qcow2', choices=['aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso']), container_format = dict(default='bare', choices=['aki', 'ari', 'bare', 'ovf']), owner = dict(default=None), min_disk = dict(default=None), min_ram = dict(default=None), is_public = dict(default=True), copy_from = dict(default=None), timeout = dict(default=180), file = dict(default=None), state = dict(default='present', choices=['absent', 'present']) ), mutually_exclusive = [['file', 'copy_from']], ) if module.params['state'] == 'present': if not module.params['file'] and not module.params['copy_from']: module.fail_json(msg = "Either file or copy_from variable should be set to create the image") client = _get_glance_client(module, module.params) id = _glance_image_present(module, module.params, client) if not id: _glance_image_create(module, module.params, client) module.exit_json(changed = False, id = id, result = "success") if module.params['state'] == 'absent': client = _get_glance_client(module, module.params) id = _glance_image_present(module, module.params, client) if not id: module.exit_json(changed = False, result = "success") else: _glance_delete_image(module, module.params, client) # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/quantum_floating_ip0000664000000000000000000002277712316627017020414 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see .
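# NOTE: the imports below deliberately try neutronclient first and fall back to the older quantumclient package name, so the module keeps working on both sides of the OpenStack quantum -> neutron rename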
try: from novaclient.v1_1 import client as nova_client try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient import time except ImportError: print("failed=True msg='novaclient, keystoneclient and quantumclient (or neutronclient) are required'") DOCUMENTATION = ''' --- module: quantum_floating_ip version_added: "1.2" short_description: Add/Remove floating IP from an instance description: - Add or remove a floating IP to/from an instance options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present network_name: description: - Name of the network from which the IP has to be assigned to the VM. Please make sure the network is an external network required: true default: None instance_name: description: - The name of the instance to which the IP address should be assigned required: true default: None internal_network_name: description: - The name of the network of the port to associate with the floating ip. Necessary when the VM has multiple networks. required: false default: None requirements: ["novaclient", "quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Assign a floating ip to the instance from an external network - quantum_floating_ip: state=present login_username=admin login_password=admin login_tenant_name=admin network_name=external_network instance_name=vm1 internal_network_name=internal_network ''' def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to keystone: %s" % e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s" % e.message) return neutron def _get_server_state(module, nova): server_info = None server = None try: for server in nova.servers.list(): if server: info = server._info if info['name'] == module.params['instance_name']: if info['status'] != 'ACTIVE' and module.params['state'] == 'present': module.fail_json( msg="The VM is available but not Active.
state:" + info['status']) server_info = info break except Exception, e: module.fail_json(msg = "Error in getting the server list: %s" % e.message) return server_info, server def _get_port_info(neutron, module, instance_id, internal_network_name=None): if internal_network_name: kwargs = { 'name': internal_network_name, } networks = neutron.list_networks(**kwargs) subnet_id = networks['networks'][0]['subnets'][0] kwargs = { 'device_id': instance_id, } try: ports = neutron.list_ports(**kwargs) except Exception, e: module.fail_json( msg = "Error in listing ports: %s" % e.message) if subnet_id: port = next(port for port in ports['ports'] if port['fixed_ips'][0]['subnet_id'] == subnet_id) port_id = port['id'] fixed_ip_address = port['fixed_ips'][0]['ip_address'] else: port_id = ports['ports'][0]['id'] fixed_ip_address = ports['ports'][0]['fixed_ips'][0]['ip_address'] if not ports['ports']: return None, None return fixed_ip_address, port_id def _get_floating_ip(module, neutron, fixed_ip_address): kwargs = { 'fixed_ip_address': fixed_ip_address } try: ips = neutron.list_floatingips(**kwargs) except Exception, e: module.fail_json(msg = "error in fetching the floatingips's %s" % e.message) if not ips['floatingips']: return None, None return ips['floatingips'][0]['id'], ips['floatingips'][0]['floating_ip_address'] def _create_floating_ip(neutron, module, port_id, net_id): kwargs = { 'port_id': port_id, 'floating_network_id': net_id } try: result = neutron.create_floatingip({'floatingip': kwargs}) except Exception, e: module.fail_json(msg="There was an error in updating the floating ip address: %s" % e.message) module.exit_json(changed=True, result=result, public_ip=result['floatingip']['floating_ip_address']) def _get_net_id(neutron, module): kwargs = { 'name': module.params['network_name'], } try: networks = neutron.list_networks(**kwargs) except Exception, e: module.fail_json("Error in listing neutron networks: %s" % e.message) if not networks['networks']: return None return networks['networks'][0]['id'] def _update_floating_ip(neutron, module, port_id, floating_ip_id): kwargs = { 'port_id': port_id } try: result = neutron.update_floatingip(floating_ip_id, {'floatingip': kwargs}) except Exception, e: module.fail_json(msg="There was an error in updating the floating ip address: %s" % e.message) module.exit_json(changed=True, result=result) def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required='True'), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), network_name = dict(required=True), instance_name = dict(required=True), state = dict(default='present', choices=['absent', 'present']), internal_network_name = dict(default=None), ), ) try: nova = nova_client.Client(module.params['login_username'], module.params['login_password'], module.params['login_tenant_name'], module.params['auth_url'], service_type='compute') neutron = _get_neutron_client(module, module.params) except Exception, e: module.fail_json(msg="Error in authenticating to nova: %s" % e.message) server_info, server_obj = _get_server_state(module, nova) if not server_info: module.fail_json(msg="The instance name provided cannot be found") fixed_ip, port_id = _get_port_info(neutron, module, server_info['id'], module.params['internal_network_name']) if not port_id: module.fail_json(msg="Cannot find a port for this instance, maybe fixed ip is not assigned") floating_id, floating_ip = 
_get_floating_ip(module, neutron, fixed_ip) if module.params['state'] == 'present': if floating_ip: module.exit_json(changed = False, public_ip=floating_ip) net_id = _get_net_id(neutron, module) if not net_id: module.fail_json(msg = "cannot find the network specified, please check") _create_floating_ip(neutron, module, port_id, net_id) if module.params['state'] == 'absent': if floating_ip: _update_floating_ip(neutron, module, None, floating_id) module.exit_json(changed=False) # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/cloudformation0000664000000000000000000002461212316627017017372 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: cloudformation short_description: create an AWS CloudFormation stack description: - Launches an AWS CloudFormation stack and waits for it to complete. version_added: "1.1" options: stack_name: description: - name of the cloudformation stack required: true default: null aliases: [] disable_rollback: description: - If a stack fails to form, rollback will remove the stack required: false default: "no" choices: [ "yes", "no" ] aliases: [] template_parameters: description: - a list of hashes of all the template variables for the stack required: false default: {} aliases: [] state: description: - If state is "present", stack will be created. If state is "present" and if the stack exists and the template has changed, it will be updated. If state is "absent", stack will be removed. required: true default: null aliases: [] template: description: - the path of the cloudformation template required: true default: null aliases: [] tags: description: - Dictionary of tags to associate with stack and its resources during stack creation. Cannot be updated later. Requires at least Boto version 2.6.0. required: false default: null aliases: [] version_added: "1.4" aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: null aliases: [ 'ec2_secret_key', 'secret_key' ] version_added: "1.5" aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: null aliases: [ 'ec2_access_key', 'access_key' ] version_added: "1.5" region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false aliases: ['aws_region', 'ec2_region'] version_added: "1.5" requirements: [ "boto" ] author: James S.
Martin ''' EXAMPLES = ''' # Basic task example tasks: - name: launch ansible cloudformation example action: cloudformation > stack_name="ansible-cloudformation" state=present region=us-east-1 disable_rollback=yes template=files/cloudformation-example.json args: template_parameters: KeyName: jmartin DiskType: ephemeral InstanceType: m1.small ClusterSize: 3 tags: Stack: ansible-cloudformation ''' import json import time try: import boto import boto.cloudformation.connection except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) class Region: def __init__(self, region): '''connects boto to the region specified in the cloudformation template''' self.name = region self.endpoint = 'cloudformation.%s.amazonaws.com' % region def boto_exception(err): '''generic error message handler''' if hasattr(err, 'error_message'): error = err.error_message elif hasattr(err, 'message'): error = err.message else: error = '%s: %s' % (Exception, err) return error def boto_version_required(version_tuple): parts = boto.Version.split('.') boto_version = [] try: for part in parts: boto_version.append(int(part)) except: boto_version.append(-1) return tuple(boto_version) >= tuple(version_tuple) def stack_operation(cfn, stack_name, operation): '''gets the status of a stack while it is created/updated/deleted''' existed = [] result = {} operation_complete = False while operation_complete == False: try: stack = cfn.describe_stacks(stack_name)[0] existed.append('yes') except: if 'yes' in existed: result = dict(changed=True, output='Stack Deleted', events=map(str, list(stack.describe_events()))) else: result = dict(changed= True, output='Stack Not Found') break if '%s_COMPLETE' % operation == stack.stack_status: result = dict(changed=True, events = map(str, list(stack.describe_events())), output = 'Stack %s complete' % operation) break if 'ROLLBACK_COMPLETE' == stack.stack_status or '%s_ROLLBACK_COMPLETE' % operation == stack.stack_status: result = dict(changed=True, failed=True, events = map(str, list(stack.describe_events())), output = 'Problem with %s. 
Rollback complete' % operation) break elif '%s_FAILED' % operation == stack.stack_status: result = dict(changed=True, failed=True, events = map(str, list(stack.describe_events())), output = 'Stack %s failed' % operation) break else: time.sleep(5) return result def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( stack_name=dict(required=True), template_parameters=dict(required=False, type='dict', default={}), state=dict(default='present', choices=['present', 'absent']), template=dict(default=None, required=True), disable_rollback=dict(default=False), tags=dict(default=None) ) ) module = AnsibleModule( argument_spec=argument_spec, ) state = module.params['state'] stack_name = module.params['stack_name'] template_body = open(module.params['template'], 'r').read() disable_rollback = module.params['disable_rollback'] template_parameters = module.params['template_parameters'] tags = module.params['tags'] ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) kwargs = dict() if tags is not None: if not boto_version_required((2,6,0)): module.fail_json(msg='Module parameter "tags" requires at least Boto version 2.6.0') kwargs['tags'] = tags # convert the template parameters ansible passes into a tuple for boto template_parameters_tup = [(k, v) for k, v in template_parameters.items()] stack_outputs = {} try: cf_region = Region(region) cfn = boto.cloudformation.connection.CloudFormationConnection( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key, region=cf_region, ) except boto.exception.NoAuthHandlerFound, e: module.fail_json(msg=str(e)) update = False result = {} operation = None # if state is present we are going to ensure that the stack is either # created or updated if state == 'present': try: cfn.create_stack(stack_name, parameters=template_parameters_tup, template_body=template_body, disable_rollback=disable_rollback, capabilities=['CAPABILITY_IAM'], **kwargs) operation = 'CREATE' except Exception, err: error_msg = boto_exception(err) if 'AlreadyExistsException' in error_msg: update = True else: module.fail_json(msg=error_msg) if not update: result = stack_operation(cfn, stack_name, operation) # if the state is present and the stack already exists, we try to update it # AWS will tell us if the stack template and parameters are the same and # don't need to be updated. if update: try: cfn.update_stack(stack_name, parameters=template_parameters_tup, template_body=template_body, disable_rollback=disable_rollback, capabilities=['CAPABILITY_IAM']) operation = 'UPDATE' except Exception, err: error_msg = boto_exception(err) if 'No updates are to be performed.' in error_msg: result = dict(changed=False, output='Stack is already up-to-date.') else: module.fail_json(msg=error_msg) if operation == 'UPDATE': result = stack_operation(cfn, stack_name, operation) # check the status of the stack while we are creating/updating it. # and get the outputs of the stack if state == 'present' or update: stack = cfn.describe_stacks(stack_name)[0] for output in stack.outputs: stack_outputs[output.key] = output.value result['stack_outputs'] = stack_outputs # absent state is different because of the way delete_stack works. 
# problem is it doesn't give an error if the stack isn't found # so must describe the stack first if state == 'absent': try: cfn.describe_stacks(stack_name) operation = 'DELETE' except Exception, err: error_msg = boto_exception(err) if 'Stack:%s does not exist' % stack_name in error_msg: result = dict(changed=False, output='Stack not found.') else: module.fail_json(msg=error_msg) if operation == 'DELETE': cfn.delete_stack(stack_name) result = stack_operation(cfn, stack_name, operation) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/quantum_floating_ip_associate0000664000000000000000000002002712316627017022441 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . try: from novaclient.v1_1 import client as nova_client try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient import time except ImportError: print "failed=True msg='novaclient, keystoneclient and quantumclient (or neutronclient) are required'" DOCUMENTATION = ''' --- module: quantum_floating_ip_associate version_added: "1.2" short_description: Associate or disassociate a particular floating IP with an instance description: - Associates or disassociates a specific floating IP with a particular instance options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - password of login user required: true default: 'yes' login_tenant_name: description: - the tenant name of the login user required: true default: true auth_url: description: - the keystone url for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - name of the region required: false default: None state: description: - indicates the desired state of the resource choices: ['present', 'absent'] default: present instance_name: description: - name of the instance to which the public IP should be assigned required: true default: None ip_address: description: - floating ip that should be assigned to the instance required: true default: None requirements: ["quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Associate a specific floating IP with an Instance - quantum_floating_ip_associate: state=present login_username=admin login_password=admin login_tenant_name=admin ip_address=1.1.1.1 instance_name=vm1 ''' def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to keystone: %s" % e.message) global _os_keystone _os_keystone = kclient return kclient def
_get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s" % e.message) return neutron def _get_server_state(module, nova): server_info = None server = None try: for server in nova.servers.list(): if server: info = server._info if info['name'] == module.params['instance_name']: if info['status'] != 'ACTIVE' and module.params['state'] == 'present': module.fail_json(msg="The VM is available but not Active. state:" + info['status']) server_info = info break except Exception, e: module.fail_json(msg = "Error in getting the server list: %s" % e.message) return server_info, server def _get_port_id(neutron, module, instance_id): kwargs = dict(device_id = instance_id) try: ports = neutron.list_ports(**kwargs) except Exception, e: module.fail_json( msg = "Error in listing ports: %s" % e.message) if not ports['ports']: return None return ports['ports'][0]['id'] def _get_floating_ip_id(module, neutron): kwargs = { 'floating_ip_address': module.params['ip_address'] } try: ips = neutron.list_floatingips(**kwargs) except Exception, e: module.fail_json(msg = "Error in fetching the floating IPs: %s" % e.message) if not ips['floatingips']: module.fail_json(msg = "Could not find the IP specified in the parameter, please check") ip = ips['floatingips'][0]['id'] if not ips['floatingips'][0]['port_id']: state = "detached" else: state = "attached" return state, ip def _update_floating_ip(neutron, module, port_id, floating_ip_id): kwargs = { 'port_id': port_id } try: result = neutron.update_floatingip(floating_ip_id, {'floatingip': kwargs}) except Exception, e: module.fail_json(msg = "There was an error in updating the floating ip address: %s" % e.message) module.exit_json(changed = True, result = result, public_ip=module.params['ip_address']) def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required=True), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), ip_address = dict(required=True), instance_name = dict(required=True), state = dict(default='present', choices=['absent', 'present']) ), ) try: nova = nova_client.Client(module.params['login_username'], module.params['login_password'], module.params['login_tenant_name'], module.params['auth_url'], service_type='compute') except Exception, e: module.fail_json( msg = "Error in authenticating to nova: %s" % e.message) neutron = _get_neutron_client(module, module.params) state, floating_ip_id = _get_floating_ip_id(module, neutron) if module.params['state'] == 'present': if state == 'attached': module.exit_json(changed = False, result = 'attached', public_ip=module.params['ip_address']) server_info, server_obj = _get_server_state(module, nova) if not server_info: module.fail_json(msg = "The instance name provided cannot be found") port_id = _get_port_id(neutron, module, server_info['id']) if not port_id: module.fail_json(msg = "Cannot find a port for this instance, maybe fixed ip is not assigned")
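# Association is just an update of the floating IP's port_id to point at the instance's port; _update_floating_ip exits the module itself on success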
_update_floating_ip(neutron, module, port_id, floating_ip_id) if module.params['state'] == 'absent': if state == 'detached': module.exit_json(changed = False, result = 'detached') if state == 'attached': _update_floating_ip(neutron, module, None, floating_ip_id) module.exit_json(changed = True, result = "detached") # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/ec2_facts0000664000000000000000000001416712316627017016200 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2_facts short_description: Gathers facts about remote hosts within ec2 (aws) version_added: "1.0" options: validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] version_added: 1.5.1 description: - This module fetches data from the metadata servers in ec2 (aws). The Eucalyptus cloud provides a similar service and this module should work with this cloud provider as well. notes: - Parameters to filter on ec2_facts may be added later.
author: "Silviu Dicu " ''' EXAMPLES = ''' # Conditional example - name: Gather facts action: ec2_facts - name: Conditional action: debug msg="This instance is a t1.micro" when: ansible_ec2_instance_type == "t1.micro" ''' import socket import re socket.setdefaulttimeout(5) class Ec2Metadata(object): ec2_metadata_uri = 'http://169.254.169.254/latest/meta-data/' ec2_sshdata_uri = 'http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key' ec2_userdata_uri = 'http://169.254.169.254/latest/user-data/' AWS_REGIONS = ('ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2', 'eu-west-1', 'sa-east-1', 'us-east-1', 'us-west-1', 'us-west-2') def __init__(self, module, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None): self.module = module self.uri_meta = ec2_metadata_uri or self.ec2_metadata_uri self.uri_user = ec2_userdata_uri or self.ec2_userdata_uri self.uri_ssh = ec2_sshdata_uri or self.ec2_sshdata_uri self._data = {} self._prefix = 'ansible_ec2_%s' def _fetch(self, url): (response, info) = fetch_url(self.module, url, force=True) if response: data = response.read() else: data = None return data def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']): new_fields = {} for key, value in fields.iteritems(): split_fields = key[len(uri):].split('/') if len(split_fields) > 1 and split_fields[1]: new_key = "-".join(split_fields) new_fields[self._prefix % new_key] = value else: new_key = "".join(split_fields) new_fields[self._prefix % new_key] = value for pattern in filter_patterns: for key in new_fields.keys(): match = re.search(pattern, key) if match: new_fields.pop(key) return new_fields def fetch(self, uri, recurse=True): raw_subfields = self._fetch(uri) if not raw_subfields: return subfields = raw_subfields.split('\n') for field in subfields: if field.endswith('/') and recurse: self.fetch(uri + field) if uri.endswith('/'): new_uri = uri + field else: new_uri = uri + '/' + field if new_uri not in self._data and not new_uri.endswith('/'): content = self._fetch(new_uri) if field == 'security-groups': sg_fields = ",".join(content.split('\n')) self._data['%s' % (new_uri)] = sg_fields else: self._data['%s' % (new_uri)] = content def fix_invalid_varnames(self, data): """Change ':'' and '-' to '_' to ensure valid template variable names""" for (key, value) in data.items(): if ':' in key or '-' in key: newkey = key.replace(':','_').replace('-','_') data[newkey] = value def add_ec2_region(self, data): """Use the 'ansible_ec2_placement_availability_zone' key/value pair to add 'ansible_ec2_placement_region' key/value pair with the EC2 region name. """ # Only add a 'ansible_ec2_placement_region' key if the # 'ansible_ec2_placement_availability_zone' exists. zone = data.get('ansible_ec2_placement_availability_zone') if zone is not None: # Use the zone name as the region name unless the zone # name starts with a known AWS region name. 
region = zone for r in self.AWS_REGIONS: if zone.startswith(r): region = r break data['ansible_ec2_placement_region'] = region def run(self): self.fetch(self.uri_meta) # populate _data data = self._mangle_fields(self._data, self.uri_meta) data[self._prefix % 'user-data'] = self._fetch(self.uri_user) data[self._prefix % 'public-key'] = self._fetch(self.uri_ssh) self.fix_invalid_varnames(data) self.add_ec2_region(data) return data def main(): argument_spec = url_argument_spec() module = AnsibleModule( argument_spec = argument_spec, supports_check_mode = True, ) ec2_facts = Ec2Metadata(module).run() ec2_facts_result = dict(changed=False, ansible_facts=ec2_facts) module.exit_json(**ec2_facts_result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/cloud/quantum_router_gateway0000664000000000000000000001651612316627017021164 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . try: try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='quantumclient (or neutronclient) and keystoneclient are required'") DOCUMENTATION = ''' --- module: quantum_router_gateway version_added: "1.2" short_description: set/unset a gateway interface for the router with the specified external network description: - Creates/Removes a gateway interface from the router, used to associate an external network with a router to route external traffic. options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone URL for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present router_name: description: - Name of the router to which the gateway should be attached. required: true default: None network_name: description: - Name of the external network which should be attached to the router.
required: true default: None requirements: ["quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Attach an external network with a router to allow flow of external traffic - quantum_router_gateway: state=present login_username=admin login_password=admin login_tenant_name=admin router_name=external_router network_name=external_network ''' _os_keystone = None def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to the keystone: %s " % e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s " % e.message) return neutron def _get_router_id(module, neutron): kwargs = { 'name': module.params['router_name'], } try: routers = neutron.list_routers(**kwargs) except Exception, e: module.fail_json(msg = "Error in getting the router list: %s " % e.message) if not routers['routers']: return None return routers['routers'][0]['id'] def _get_net_id(neutron, module): kwargs = { 'name': module.params['network_name'], 'router:external': True } try: networks = neutron.list_networks(**kwargs) except Exception, e: module.fail_json("Error in listing neutron networks: %s" % e.message) if not networks['networks']: return None return networks['networks'][0]['id'] def _get_port_id(neutron, module, router_id, network_id): kwargs = { 'device_id': router_id, 'network_id': network_id, } try: ports = neutron.list_ports(**kwargs) except Exception, e: module.fail_json( msg = "Error in listing ports: %s" % e.message) if not ports['ports']: return None return ports['ports'][0]['id'] def _add_gateway_router(neutron, module, router_id, network_id): kwargs = { 'network_id': network_id } try: neutron.add_gateway_router(router_id, kwargs) except Exception, e: module.fail_json(msg = "Error in adding gateway to router: %s" % e.message) return True def _remove_gateway_router(neutron, module, router_id): try: neutron.remove_gateway_router(router_id) except Exception, e: module.fail_json(msg = "Error in removing gateway to router: %s" % e.message) return True def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required='True'), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), router_name = dict(required=True), network_name = dict(required=True), state = dict(default='present', choices=['absent', 'present']), ), ) neutron = _get_neutron_client(module, module.params) router_id = _get_router_id(module, neutron) if not router_id: module.fail_json(msg="failed to get the router id, please check the router name") network_id = _get_net_id(neutron, module) if not network_id: module.fail_json(msg="failed to get the network id, please check the network name and 
make sure it is external") if module.params['state'] == 'present': port_id = _get_port_id(neutron, module, router_id, network_id) if not port_id: _add_gateway_router(neutron, module, router_id, network_id) module.exit_json(changed=True, result="created") module.exit_json(changed=False, result="success") if module.params['state'] == 'absent': port_id = _get_port_id(neutron, module, router_id, network_id) if not port_id: module.exit_json(changed=False, result="Success") _remove_gateway_router(neutron, module, router_id) module.exit_json(changed=True, result="Deleted") # this is magic, see lib/ansible/module.params['common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/gce_lb0000664000000000000000000003146612316627017015565 0ustar rootroot#!/usr/bin/python # Copyright 2013 Google Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: gce_lb version_added: "1.5" short_description: create/destroy GCE load-balancer resources description: - This module can create and destroy Google Compute Engine C(loadbalancer) and C(httphealthcheck) resources. The primary LB resource is the C(load_balancer) resource and the health check parameters are all prefixed with I(httphealthcheck). The full documentation for Google Compute Engine load balancing is at U(https://developers.google.com/compute/docs/load-balancing/). However, the ansible module simplifies the configuration by following the libcloud model. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py. 
options: httphealthcheck_name: description: - the name identifier for the HTTP health check required: false default: null httphealthcheck_port: description: - the TCP port to use for HTTP health checking required: false default: 80 httphealthcheck_path: description: - the url path to use for HTTP health checking required: false default: "/" httphealthcheck_interval: description: - the duration in seconds between each health check request required: false default: 5 httphealthcheck_timeout: description: - the timeout in seconds before a request is considered a failed check required: false default: 5 httphealthcheck_unhealthy_count: description: - number of consecutive failed checks before marking a node unhealthy required: false default: 2 httphealthcheck_healthy_count: description: - number of consecutive successful checks before marking a node healthy required: false default: 2 httphealthcheck_host: description: - host header to pass through on HTTP check requests required: false default: null name: description: - name of the load-balancer resource required: false default: null protocol: description: - the protocol used for the load-balancer packet forwarding, tcp or udp required: false default: "tcp" choices: ['tcp', 'udp'] region: description: - the GCE region where the load-balancer is defined required: false choices: ["us-central1", "us-central2", "europe-west1"] external_ip: description: - the external static IPv4 (or auto-assigned) address for the LB required: false default: null port_range: description: - the port (range) to forward, e.g. 80 or 8000-8888 defaults to all ports required: false default: null members: description: - a list of zone/nodename pairs, e.g ['us-central1-a/www-a', ...] required: false aliases: ['nodes'] state: description: - desired state of the LB default: "present" choices: ["active", "present", "absent", "deleted"] aliases: [] requirements: [ "libcloud" ] author: Eric Johnson ''' EXAMPLES = ''' # Simple example of creating a new LB, adding members, and a health check - local_action: module: gce_lb name: testlb region: us-central1 members: ["us-central1-a/www-a", "us-central1-b/www-b"] httphealthcheck_name: hc httphealthcheck_port: 80 httphealthcheck_path: "/up" ''' import sys USER_AGENT_PRODUCT="Ansible-gce_lb" USER_AGENT_VERSION="v1beta15" try: from libcloud.compute.types import Provider from libcloud.compute.providers import get_driver from libcloud.loadbalancer.types import Provider as Provider_lb from libcloud.loadbalancer.providers import get_driver as get_driver_lb from libcloud.common.google import GoogleBaseError, QuotaExceededError, \ ResourceExistsError, ResourceNotFoundError _ = Provider.GCE except ImportError: print("failed=True " + \ "msg='libcloud with GCE support required for this module.'") sys.exit(1) # Load in the libcloud secrets file try: import secrets except ImportError: secrets = None ARGS = getattr(secrets, 'GCE_PARAMS', ()) KWARGS = getattr(secrets, 'GCE_KEYWORD_PARAMS', {}) if not ARGS or not 'project' in KWARGS: print("failed=True msg='Missing GCE connection " + \ "parameters in libcloud secrets file.'") sys.exit(1) def unexpected_error_msg(error): """Format error string based on passed in error.""" msg='Unexpected response: HTTP return_code[' msg+='%s], API error code[%s] and message: %s' % ( error.http_code, error.code, str(error.value)) return msg def main(): module = AnsibleModule( argument_spec = dict( httphealthcheck_name = dict(), httphealthcheck_port = dict(default=80), httphealthcheck_path = dict(default='/'), 
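# the health check defaults below mirror the documented values above: 5s interval and timeout, and two consecutive checks to mark a node unhealthy or healthy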
httphealthcheck_interval = dict(default=5), httphealthcheck_timeout = dict(default=5), httphealthcheck_unhealthy_count = dict(default=2), httphealthcheck_healthy_count = dict(default=2), httphealthcheck_host = dict(), name = dict(), protocol = dict(default='tcp'), region = dict(), external_ip = dict(), port_range = dict(), members = dict(type='list'), state = dict(default='present'), ) ) httphealthcheck_name = module.params.get('httphealthcheck_name') httphealthcheck_port = module.params.get('httphealthcheck_port') httphealthcheck_path = module.params.get('httphealthcheck_path') httphealthcheck_interval = module.params.get('httphealthcheck_interval') httphealthcheck_timeout = module.params.get('httphealthcheck_timeout') httphealthcheck_unhealthy_count = \ module.params.get('httphealthcheck_unhealthy_count') httphealthcheck_healthy_count = \ module.params.get('httphealthcheck_healthy_count') httphealthcheck_host = module.params.get('httphealthcheck_host') name = module.params.get('name') protocol = module.params.get('protocol') region = module.params.get('region') external_ip = module.params.get('external_ip') port_range = module.params.get('port_range') members = module.params.get('members') state = module.params.get('state') try: gce = get_driver(Provider.GCE)(*ARGS, **KWARGS) gce.connection.user_agent_append("%s/%s" % ( USER_AGENT_PRODUCT, USER_AGENT_VERSION)) gcelb = get_driver_lb(Provider_lb.GCE)(gce_driver=gce) gcelb.connection.user_agent_append("%s/%s" % ( USER_AGENT_PRODUCT, USER_AGENT_VERSION)) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) changed = False json_output = {'name': name, 'state': state} if not name and not httphealthcheck_name: module.fail_json(msg='Nothing to do, please specify a "name" ' + \ 'or "httphealthcheck_name" parameter', changed=False) if state in ['active', 'present']: # first, create the httphealthcheck if requested hc = None if httphealthcheck_name: json_output['httphealthcheck_name'] = httphealthcheck_name try: hc = gcelb.ex_create_healthcheck(httphealthcheck_name, host=httphealthcheck_host, path=httphealthcheck_path, port=httphealthcheck_port, interval=httphealthcheck_interval, timeout=httphealthcheck_timeout, unhealthy_threshold=httphealthcheck_unhealthy_count, healthy_threshold=httphealthcheck_healthy_count) changed = True except ResourceExistsError: hc = gce.ex_get_healthcheck(httphealthcheck_name) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) if hc is not None: json_output['httphealthcheck_host'] = hc.extra['host'] json_output['httphealthcheck_path'] = hc.path json_output['httphealthcheck_port'] = hc.port json_output['httphealthcheck_interval'] = hc.interval json_output['httphealthcheck_timeout'] = hc.timeout json_output['httphealthcheck_unhealthy_count'] = \ hc.unhealthy_threshold json_output['httphealthcheck_healthy_count'] = \ hc.healthy_threshold # create the forwarding rule (and target pool under the hood) lb = None if name: if not region: module.fail_json(msg='Missing required region name', changed=False) nodes = [] output_nodes = [] json_output['name'] = name # members is a python list of 'zone/inst' strings if members: for node in members: try: zone, node_name = node.split('/') nodes.append(gce.ex_get_node(node_name, zone)) output_nodes.append(node) except: # skip nodes that are badly formatted or don't exist pass try: if hc is not None: lb = gcelb.create_balancer(name, port_range, protocol, None, nodes, ex_region=region, ex_healthchecks=[hc], ex_address=external_ip) 
else: lb = gcelb.create_balancer(name, port_range, protocol, None, nodes, ex_region=region, ex_address=external_ip) changed = True except ResourceExistsError: lb = gcelb.get_balancer(name) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) if lb is not None: json_output['members'] = output_nodes json_output['protocol'] = protocol json_output['region'] = region json_output['external_ip'] = lb.ip json_output['port_range'] = lb.port hc_names = [] if 'healthchecks' in lb.extra: for hc in lb.extra['healthchecks']: hc_names.append(hc.name) json_output['httphealthchecks'] = hc_names if state in ['absent', 'deleted']: # first, delete the load balancer (forwarding rule and target pool) # if specified. if name: json_output['name'] = name try: lb = gcelb.get_balancer(name) gcelb.destroy_balancer(lb) changed = True except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) # destroy the health check if specified if httphealthcheck_name: json_output['httphealthcheck_name'] = httphealthcheck_name try: hc = gce.ex_get_healthcheck(httphealthcheck_name) gce.ex_destroy_healthcheck(hc) changed = True except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) json_output['changed'] = changed print json.dumps(json_output) sys.exit(0) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/rax_dns_record0000664000000000000000000001637212316627017017345 0ustar rootroot#!/usr/bin/python -tt # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_dns_record short_description: Manage DNS records on Rackspace Cloud DNS description: - Manage DNS records on Rackspace Cloud DNS version_added: 1.5 options: api_key: description: - Rackspace API key (overrides C(credentials)) comment: description: - Brief description of the domain. Maximum length of 160 characters credentials: description: - File to find the Rackspace credentials in (ignored if C(api_key) and C(username) are provided) default: null aliases: ['creds_file'] data: description: - IP address for A/AAAA record, FQDN for CNAME/MX/NS, or text data for SRV/TXT required: True domain: description: - Domain name to create the record in required: True name: description: - FQDN record name to create required: True priority: description: - Required for MX and SRV records, but forbidden for other record types. If specified, must be an integer from 0 to 65535. 
state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present ttl: description: - Time to live of domain in seconds default: 3600 type: description: - DNS record type choices: ['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SRV', 'TXT'] default: A username: description: - Rackspace username (overrides C(credentials)) requirements: [ "pyrax" ] author: Matt Martz notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) ''' EXAMPLES = ''' - name: Create record hosts: all gather_facts: False tasks: - name: Record create request local_action: module: rax_dns_record credentials: ~/.raxpub domain: example.org name: www.example.org data: 127.0.0.1 type: A register: rax_dns_record ''' import sys import os from types import NoneType try: import pyrax except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) def to_dict(obj): instance = {} for key in dir(obj): value = getattr(obj, key) if (isinstance(value, NON_CALLABLES) and not key.startswith('_')): instance[key] = value return instance def rax_dns_record(module, comment, data, domain, name, priority, record_type, state, ttl): changed = False dns = pyrax.cloud_dns if state == 'present': if not priority and record_type in ['MX', 'SRV']: module.fail_json(msg='A "priority" attribute is required for ' 'creating a MX or SRV record') try: domain = dns.find(name=domain) except Exception, e: module.fail_json(msg='%s' % e.message) try: record = domain.find_record(record_type, name=name) except pyrax.exceptions.DomainRecordNotUnique, e: module.fail_json(msg='%s' % e.message) except pyrax.exceptions.DomainRecordNotFound, e: try: record_data = { 'type': record_type, 'name': name, 'data': data, 'ttl': ttl } if comment: record_data.update(dict(comment=comment)) if priority: record_data.update(dict(priority=priority)) record = domain.add_records([record_data])[0] changed = True except Exception, e: module.fail_json(msg='%s' % e.message) update = {} if comment != getattr(record, 'comment', None): update['comment'] = comment if ttl != getattr(record, 'ttl', None): update['ttl'] = ttl if priority != getattr(record, 'priority', None): update['priority'] = priority if data != getattr(record, 'data', None): update['data'] = data if update: try: record.update(**update) changed = True record.get() except Exception, e: module.fail_json(msg='%s' % e.message) elif state == 'absent': try: domain = dns.find(name=domain) except Exception, e: module.fail_json(msg='%s' % e.message) try: record = domain.find_record(record_type, name=name, data=data) except pyrax.exceptions.DomainRecordNotFound, e: record = {} pass except pyrax.exceptions.DomainRecordNotUnique, e: module.fail_json(msg='%s' % e.message) if record: try: record.delete() changed = True except Exception, e: module.fail_json(msg='%s' % e.message) module.exit_json(changed=changed, record=to_dict(record)) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( comment=dict(), data=dict(required=True), domain=dict(required=True), name=dict(required=True), 
priority=dict(type='int'), state=dict(default='present', choices=['present', 'absent']), ttl=dict(type='int', default=3600), type=dict(default='A', choices=['A', 'AAAA', 'CNAME', 'MX', 'NS', 'SRV', 'TXT']) ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) comment = module.params.get('comment') data = module.params.get('data') domain = module.params.get('domain') name = module.params.get('name') priority = module.params.get('priority') state = module.params.get('state') ttl = module.params.get('ttl') record_type = module.params.get('type') setup_rax_module(module, pyrax) rax_dns_record(module, comment, data, domain, name, priority, record_type, state, ttl) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/ec2_snapshot0000664000000000000000000001157712316627017016743 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2_snapshot short_description: creates a snapshot from an existing volume description: - creates an EC2 snapshot from an existing EBS volume version_added: "1.5" options: ec2_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['aws_secret_key', 'secret_key' ] ec2_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['aws_access_key', 'access_key' ] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used required: false default: null aliases: [] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. 
    required: false
    default: null
    aliases: ['aws_region', 'ec2_region']
  volume_id:
    description:
      - volume from which to take the snapshot
    required: false
    default: null
    aliases: []
  description:
    description:
      - description to be applied to the snapshot
    required: false
    default: null
    aliases: []
  instance_id:
    description:
      - instance that has the required volume to snapshot mounted
    required: false
    default: null
    aliases: []
  device_name:
    description:
      - device name of a mounted volume to be snapshotted
    required: false
    default: null
    aliases: []
requirements: [ "boto" ]
author: Will Thames
'''

EXAMPLES = '''
# Simple snapshot of volume using volume_id
- local_action:
    module: ec2_snapshot
    volume_id: vol-abcdef12
    description: snapshot of /data from DB123 taken 2013/11/28 12:18:32

# Snapshot of volume mounted on device_name attached to instance_id
- local_action:
    module: ec2_snapshot
    instance_id: i-12345678
    device_name: /dev/sdb1
    description: snapshot of /data from DB123 taken 2013/11/28 12:18:32
'''

import sys
import time

try:
    import boto.ec2
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            volume_id = dict(),
            description = dict(),
            instance_id = dict(),
            device_name = dict(),
            region = dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS),
            ec2_url = dict(),
            ec2_secret_key = dict(aliases=['aws_secret_key', 'secret_key'], no_log=True),
            ec2_access_key = dict(aliases=['aws_access_key', 'access_key']),
        )
    )

    volume_id = module.params.get('volume_id')
    description = module.params.get('description')
    instance_id = module.params.get('instance_id')
    device_name = module.params.get('device_name')

    if not volume_id and not instance_id or volume_id and instance_id:
        module.fail_json(msg='One and only one of volume_id or instance_id must be specified')
    if instance_id and not device_name or device_name and not instance_id:
        module.fail_json(msg='Instance ID and device name must both be specified')

    ec2 = ec2_connect(module)

    if instance_id:
        try:
            volumes = ec2.get_all_volumes(filters={'attachment.instance-id': instance_id, 'attachment.device': device_name})
            if not volumes:
                module.fail_json(msg="Could not find volume with name %s attached to instance %s" % (device_name, instance_id))
            volume_id = volumes[0].id
        except boto.exception.BotoServerError, e:
            module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))

    try:
        snapshot = ec2.create_snapshot(volume_id, description=description)
    except boto.exception.BotoServerError, e:
        module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))

    module.exit_json(changed=True, snapshot_id=snapshot.id)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *

main()
ansible-1.5.4/library/cloud/rax_network0000664000000000000000000001063412316627017016707 0ustar rootroot#!/usr/bin/python -tt
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
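# --- Editorial sketch (illustrative, not part of the original source) ----
# The rax_network module below is a thin wrapper around pyrax's
# cloud_networks API. A minimal standalone sketch of its "present" path,
# assuming pyrax is already authenticated elsewhere (the helper name,
# label, and CIDR are illustrative, not from the original):
def _sketch_ensure_network(label='my-net', cidr='192.168.3.0/24'):
    import pyrax
    from pyrax import exc
    try:
        # Reuse an existing network with this label if one exists
        return pyrax.cloud_networks.find_network_by_label(label)
    except exc.NetworkNotFound:
        # Otherwise create it, exactly as the module's 'present' branch does
        return pyrax.cloud_networks.create(label, cidr=cidr)
# --------------------------------------------------------------------------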
DOCUMENTATION = ''' --- module: rax_network short_description: create / delete an isolated network in Rackspace Public Cloud description: - creates / deletes a Rackspace Public Cloud isolated network. version_added: "1.4" options: state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present credentials: description: - File to find the Rackspace credentials in (ignored if C(api_key) and C(username) are provided) default: null aliases: ['creds_file'] api_key: description: - Rackspace API key (overrides C(credentials)) username: description: - Rackspace username (overrides C(credentials)) label: description: - Label (name) to give the network default: null cidr: description: - cidr of the network being created default: null region: description: - Region to create the network in default: DFW requirements: [ "pyrax" ] author: Christopher H. Laco, Jesse Keating notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS) points to a credentials file appropriate for pyrax - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) ''' EXAMPLES = ''' - name: Build an Isolated Network gather_facts: False tasks: - name: Network create request local_action: module: rax_network credentials: ~/.raxpub label: my-net cidr: 192.168.3.0/24 state: present ''' import sys import os try: import pyrax import pyrax.utils from pyrax import exc except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) def cloud_network(module, state, label, cidr): for arg in (state, label, cidr): if not arg: module.fail_json(msg='%s is required for cloud_networks' % arg) changed = False network = None networks = [] if state == 'present': try: network = pyrax.cloud_networks.find_network_by_label(label) except exc.NetworkNotFound: try: network = pyrax.cloud_networks.create(label, cidr=cidr) changed = True except Exception, e: module.fail_json(msg='%s' % e.message) except Exception, e: module.fail_json(msg='%s' % e.message) elif state == 'absent': try: network = pyrax.cloud_networks.find_network_by_label(label) network.delete() changed = True except exc.NetworkNotFound: pass except Exception, e: module.fail_json(msg='%s' % e.message) if network: instance = dict(id=network.id, label=network.label, cidr=network.cidr) networks.append(instance) module.exit_json(changed=changed, networks=networks) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( state=dict(default='present', choices=['present', 'absent']), label=dict(), cidr=dict() ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) state = module.params.get('state') label = module.params.get('label') cidr = module.params.get('cidr') setup_rax_module(module, pyrax) cloud_network(module, state, label, cidr) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/gce_net0000664000000000000000000002250712316627017015752 0ustar rootroot#!/usr/bin/python # Copyright 2013 Google Inc. 
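# --- Editorial sketch (illustrative, not part of the original source) ----
# The gce_net module below drives Apache Libcloud's GCE driver. Its core
# "ensure the network exists" step reduces to the following, assuming a
# connected driver object is passed in (helper name, network name, and
# range are illustrative):
def _sketch_ensure_gce_network(gce, name='privatenet', ipv4_range='10.240.16.0/24'):
    from libcloud.common.google import ResourceNotFoundError
    try:
        # Return the network if it already exists
        return gce.ex_get_network(name)
    except ResourceNotFoundError:
        # Create it from the CIDR range otherwise
        return gce.ex_create_network(name, ipv4_range)
# --------------------------------------------------------------------------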
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: gce_net
version_added: "1.5"
short_description: create/destroy GCE networks and firewall rules
description:
    - This module can create and destroy Google Compute Engine networks and
      firewall rules U(https://developers.google.com/compute/docs/networking).
      The I(name) parameter is reserved for referencing a network while the
      I(fwname) parameter is used to reference firewall rules.
      IPv4 address ranges must be specified using the CIDR
      U(http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) format.
      Full install/configuration instructions for the gce* modules can
      be found in the comments of ansible/test/gce_tests.py.
options:
  allowed:
    description:
      - the protocol:ports to allow ('tcp:80' or 'tcp:80,443' or 'tcp:80-800')
    required: false
    default: null
    aliases: []
  ipv4_range:
    description:
      - the IPv4 address range in CIDR notation for the network
    required: false
    aliases: ['cidr']
  fwname:
    description:
      - name of the firewall rule
    required: false
    default: null
    aliases: ['fwrule']
  name:
    description:
      - name of the network
    required: false
    default: null
    aliases: []
  src_range:
    description:
      - the source IPv4 address range in CIDR notation
    required: false
    default: null
    aliases: ['src_cidr']
  src_tags:
    description:
      - the source instance tags for creating a firewall rule
    required: false
    default: null
    aliases: []
  state:
    description:
      - desired state of the network or firewall rule
    required: false
    default: "present"
    choices: ["active", "present", "absent", "deleted"]
    aliases: []
requirements: [ "libcloud" ]
author: Eric Johnson
'''

EXAMPLES = '''
# Simple example of creating a new network
- local_action:
    module: gce_net
    name: privatenet
    ipv4_range: '10.240.16.0/24'

# Simple example of creating a new firewall rule
- local_action:
    module: gce_net
    name: privatenet
    allowed: tcp:80,8080
    src_tags: ["web", "proxy"]
'''

import sys

USER_AGENT_PRODUCT="Ansible-gce_net"
USER_AGENT_VERSION="v1beta15"

try:
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver
    from libcloud.common.google import GoogleBaseError, QuotaExceededError, \
        ResourceExistsError, ResourceNotFoundError
    _ = Provider.GCE
except ImportError:
    print("failed=True " + \
        "msg='libcloud with GCE support required for this module.'")
    sys.exit(1)

# Load in the libcloud secrets file
try:
    import secrets
except ImportError:
    secrets = None
ARGS = getattr(secrets, 'GCE_PARAMS', ())
KWARGS = getattr(secrets, 'GCE_KEYWORD_PARAMS', {})

if not ARGS or 'project' not in KWARGS:
    print("failed=True msg='Missing GCE connection " + \
        "parameters in libcloud secrets file.'")
    sys.exit(1)

def unexpected_error_msg(error):
    """Format error string based on passed in error."""
    msg='Unexpected response: HTTP return_code['
    msg+='%s], API error code[%s] and message: %s' % (
        error.http_code, error.code, str(error.value))
    return msg

def format_allowed(allowed):
    """Format the 'allowed' value so that it is GCE
compatible.""" if allowed.count(":") == 0: protocol = allowed ports = [] elif allowed.count(":") == 1: protocol, ports = allowed.split(":") else: return [] if ports.count(","): ports = ports.split(",") else: ports = [ports] return_val = {"IPProtocol": protocol} if ports: return_val["ports"] = ports return [return_val] def main(): module = AnsibleModule( argument_spec = dict( allowed = dict(), ipv4_range = dict(), fwname = dict(), name = dict(), src_range = dict(), src_tags = dict(type='list'), state = dict(default='present'), ) ) allowed = module.params.get('allowed') ipv4_range = module.params.get('ipv4_range') fwname = module.params.get('fwname') name = module.params.get('name') src_range = module.params.get('src_range') src_tags = module.params.get('src_tags') state = module.params.get('state') try: gce = get_driver(Provider.GCE)(*ARGS, **KWARGS) gce.connection.user_agent_append("%s/%s" % ( USER_AGENT_PRODUCT, USER_AGENT_VERSION)) except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) changed = False json_output = {'state': state} if state in ['active', 'present']: network = None try: network = gce.ex_get_network(name) json_output['name'] = name json_output['ipv4_range'] = network.cidr except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) # user wants to create a new network that doesn't yet exist if name and not network: if not ipv4_range: module.fail_json(msg="Missing required 'ipv4_range' parameter", changed=False) try: network = gce.ex_create_network(name, ipv4_range) json_output['name'] = name json_output['ipv4_range'] = ipv4_range changed = True except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) if fwname: # user creating a firewall rule if not allowed and not src_range and not src_tags: if changed and network: module.fail_json( msg="Network created, but missing required " + \ "firewall rule parameter(s)", changed=True) module.fail_json( msg="Missing required firewall rule parameter(s)", changed=False) allowed_list = format_allowed(allowed) try: gce.ex_create_firewall(fwname, allowed_list, network=name, source_ranges=src_range, source_tags=src_tags) changed = True except ResourceExistsError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) json_output['fwname'] = fwname json_output['allowed'] = allowed json_output['src_range'] = src_range json_output['src_tags'] = src_tags if state in ['absent', 'deleted']: if fwname: json_output['fwname'] = fwname fw = None try: fw = gce.ex_get_firewall(fwname) except ResourceNotFoundError: pass except Exception, e: module.fail_json(msg=unexpected_error_msg(e), changed=False) if fw: gce.ex_destroy_firewall(fw) changed = True if name: json_output['name'] = name network = None try: network = gce.ex_get_network(name) # json_output['d1'] = 'found network name %s' % name except ResourceNotFoundError: # json_output['d2'] = 'not found network name %s' % name pass except Exception, e: # json_output['d3'] = 'error with %s' % name module.fail_json(msg=unexpected_error_msg(e), changed=False) if network: # json_output['d4'] = 'deleting %s' % name gce.ex_destroy_network(network) # json_output['d5'] = 'deleted %s' % name changed = True json_output['changed'] = changed print json.dumps(json_output) sys.exit(0) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/ec2_tag0000664000000000000000000001370112316627017015646 0ustar rootroot#!/usr/bin/python # This 
file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2_tag short_description: create and remove tag(s) to ec2 resources. description: - Creates and removes tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto. version_added: "1.3" options: resource: description: - The EC2 resource id. required: true default: null aliases: [] state: description: - Whether the tags should be present or absent on the resource. required: false default: present choices: ['present', 'absent'] aliases: [] region: description: - region in which the resource exists. required: false default: null aliases: ['aws_region', 'ec2_region'] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key' ] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. required: false default: null aliases: [] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] author: Lester Wade ''' EXAMPLES = ''' # Basic example of adding tag(s) tasks: - name: tag a resource local_action: ec2_tag resource=vol-XXXXXX region=eu-west-1 state=present args: tags: Name: ubervol env: prod # Playbook example of adding tag(s) to spawned instances tasks: - name: launch some instances local_action: ec2 keypair={{ keypair }} group={{ security_group }} instance_type={{ instance_type }} image={{ image_id }} wait=true region=eu-west-1 register: ec2 - name: tag my launched instances local_action: ec2_tag resource={{ item.id }} region=eu-west-1 state=present with_items: ec2.instances args: tags: Name: webserver env: prod ''' # Note: this module needs to be made idempotent. Possible solution is to use resource tags with the volumes. # if state=present and it doesn't exist, create, tag and attach. # Check for state by looking for volume attachment with tag (and against block device mapping?). # Would personally like to revisit this in May when Eucalyptus also has tagging support (3.3). 
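# --- Editorial sketch (illustrative, not part of the original source) ----
# The reconciliation below works on set differences between the requested
# tags and the tags already present on the resource. The additions step,
# in isolation (the helper name is illustrative):
def _sketch_tags_to_add(requested, existing):
    # Key/value pairs that were requested but are not on the resource yet
    return dict(set(requested.items()) - set(existing.items()))
# e.g. _sketch_tags_to_add({'Name': 'x', 'env': 'prod'}, {'env': 'prod'})
# returns {'Name': 'x'}
# --------------------------------------------------------------------------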
import sys
import time

try:
    import boto.ec2
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
            resource = dict(required=True),
            tags = dict(required=True),
            state = dict(default='present', choices=['present', 'absent']),
        )
    )
    module = AnsibleModule(argument_spec=argument_spec)

    resource = module.params.get('resource')
    tags = module.params['tags']
    state = module.params.get('state')

    ec2 = ec2_connect(module)

    # We need a comparison here so that we can accurately report back changed status.
    # Need to expand the gettags return format and compare with "tags" and then tag or detag as appropriate.
    filters = {'resource-id' : resource}
    gettags = ec2.get_all_tags(filters=filters)

    dictadd = {}
    dictremove = {}
    baddict = {}
    tagdict = {}
    for tag in gettags:
        tagdict[tag.name] = tag.value

    if state == 'present':
        if set(tags.items()).issubset(set(tagdict.items())):
            module.exit_json(msg="Tags already exist in %s." % resource, changed=False)
        else:
            for (key, value) in set(tags.items()):
                if (key, value) not in set(tagdict.items()):
                    dictadd[key] = value
        tagger = ec2.create_tags(resource, dictadd)
        gettags = ec2.get_all_tags(filters=filters)
        module.exit_json(msg="Tags %s created for resource %s." % (dictadd, resource), changed=True)

    if state == 'absent':
        for (key, value) in set(tags.items()):
            if (key, value) not in set(tagdict.items()):
                baddict[key] = value
                if set(baddict) == set(tags):
                    module.exit_json(msg="Nothing to remove here. Move along.", changed=False)
        for (key, value) in set(tags.items()):
            if (key, value) in set(tagdict.items()):
                dictremove[key] = value
        tagger = ec2.delete_tags(resource, dictremove)
        gettags = ec2.get_all_tags(filters=filters)
        module.exit_json(msg="Tags %s removed for resource %s." % (dictremove, resource), changed=True)

#    print json.dumps({
#        "current_resource_tags": gettags,
#    })
    sys.exit(0)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *

main()
ansible-1.5.4/library/cloud/docker_image0000664000000000000000000001551012316627017016753 0ustar rootroot#!/usr/bin/env python
#
# (c) 2014, Pavel Antonov
#
# This file is part of Ansible
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.
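# --- Editorial sketch (illustrative, not part of the original source) ----
# The docker_image module below recovers the built image id by scanning
# docker's JSON build stream for the "Successfully built <id>" line. That
# parsing step in isolation, assuming chunks as yielded by docker-py's
# Client.build(..., stream=True) (the helper name is illustrative):
import json
import re

def _sketch_image_id_from_stream(chunks):
    for chunk in chunks:
        payload = json.loads(chunk)
        # Build errors arrive under 'error'; progress lines under 'stream'
        match = re.search(r'Successfully built ([0-9a-f]+)',
                          payload.get('stream', ''))
        if match:
            return match.group(1)
    return None
# --------------------------------------------------------------------------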
###################################################################### DOCUMENTATION = ''' --- module: docker_image author: Pavel Antonov version_added: "1.5" short_description: manage docker images description: - Create, check and remove docker images options: path: description: - Path to directory with Dockerfile required: false default: null aliases: [] name: description: - Image name to work with required: true default: null aliases: [] tag: description: - Image tag to work with required: false default: "" aliases: [] nocache: description: - Do not use cache with building required: false default: false aliases: [] docker_url: description: - URL of docker host to issue commands to required: false default: unix://var/run/docker.sock aliases: [] state: description: - Set the state of the image required: false default: present choices: [ "present", "absent", "build" ] aliases: [] timeout: description: - Set image operation timeout required: false default: 600 aliases: [] requirements: [ "docker-py" ] ''' EXAMPLES = ''' Build docker image if required. Path should contains Dockerfile to build image: - hosts: web sudo: yes tasks: - name: check or build image docker_image: path="/path/to/build/dir" name="my/app" state=present Build new version of image: - hosts: web sudo: yes tasks: - name: check or build image docker_image: path="/path/to/build/dir" name="my/app" state=build Remove image from local docker storage: - hosts: web sudo: yes tasks: - name: run tomcat servers docker_image: name="my/app" state=absent ''' try: import sys import re import json import docker.client from requests.exceptions import * from urlparse import urlparse except ImportError, e: print "failed=True msg='failed to import python module: %s'" % e sys.exit(1) class DockerImageManager: def __init__(self, module): self.module = module self.path = self.module.params.get('path') self.name = self.module.params.get('name') self.tag = self.module.params.get('tag') self.nocache = self.module.params.get('nocache') docker_url = urlparse(module.params.get('docker_url')) self.client = docker.Client(base_url=docker_url.geturl(), timeout=module.params.get('timeout')) self.changed = False self.log = [] self.error_msg = None def get_log(self, as_string=True): return "".join(self.log) if as_string else self.log def build(self): stream = self.client.build(self.path, tag=':'.join([self.name, self.tag]), nocache=self.nocache, rm=True, stream=True) success_search = r'Successfully built ([0-9a-f]+)' image_id = None self.changed = True for chunk in stream: chunk_json = json.loads(chunk) if 'error' in chunk_json: self.error_msg = chunk_json['error'] return None if 'stream' in chunk_json: output = chunk_json['stream'] self.log.append(output) match = re.search(success_search, output) if match: image_id = match.group(1) return image_id def has_changed(self): return self.changed def get_images(self): filtered_images = [] images = self.client.images() for i in images: # Docker-py version >= 0.3 (Docker API >= 1.8) if 'RepoTags' in i: repotag = '%s:%s' % (getattr(self, 'name', ''), getattr(self, 'tag', 'latest')) if not self.name or repotag in i['RepoTags']: filtered_images.append(i) # Docker-py version < 0.3 (Docker API < 1.8) elif (not self.name or self.name == i['Repository']) and (not self.tag or self.tag == i['Tag']): filtered_images.append(i) return filtered_images def remove_images(self): images = self.get_images() for i in images: try: self.client.remove_image(i['Id']) self.changed = True except docker.APIError as e: # image can be removed 
by docker if not used
                pass

def main():
    module = AnsibleModule(
        argument_spec = dict(
            path            = dict(required=False, default=None),
            name            = dict(required=True),
            tag             = dict(required=False, default=""),
            nocache         = dict(default=False, type='bool'),
            state           = dict(default='present', choices=['absent', 'present', 'build']),
            docker_url      = dict(default='unix://var/run/docker.sock'),
            timeout         = dict(default=600, type='int'),
        )
    )

    try:
        manager = DockerImageManager(module)
        state = module.params.get('state')
        failed = False
        image_id = None
        msg = ''
        do_build = False

        # build image if not exists
        if state == "present":
            images = manager.get_images()
            if len(images) == 0:
                do_build = True
        # build image
        elif state == "build":
            do_build = True
        # remove image or images
        elif state == "absent":
            manager.remove_images()

        if do_build:
            image_id = manager.build()
            if image_id:
                msg = "Image built: %s" % image_id
            else:
                failed = True
                msg = "Error: %s\nLog:%s" % (manager.error_msg, manager.get_log())

        module.exit_json(failed=failed, changed=manager.has_changed(), msg=msg, image_id=image_id)

    except docker.client.APIError as e:
        module.exit_json(failed=True, changed=manager.has_changed(), msg="Docker API error: " + e.explanation)

    except RequestException as e:
        module.exit_json(failed=True, changed=manager.has_changed(), msg=repr(e))

# import module snippets
from ansible.module_utils.basic import *

main()
ansible-1.5.4/library/cloud/rax_keypair0000664000000000000000000001340112316627017016655 0ustar rootroot#!/usr/bin/python -tt
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
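# --- Editorial sketch (illustrative, not part of the original source) ----
# The rax_keypair module below uses the pyrax.cloudservers keypair manager.
# Its find-or-create logic in isolation, assuming pyrax is authenticated
# elsewhere (the helper and keypair names are illustrative):
def _sketch_ensure_keypair(name='my_keypair', public_key=None):
    import pyrax
    cs = pyrax.cloudservers
    try:
        # An existing keypair wins; pyrax cannot update one in place
        return cs.keypairs.find(name=name)
    except cs.exceptions.NotFound:
        # Passing public_key=None asks the API to generate one
        return cs.keypairs.create(name, public_key)
# --------------------------------------------------------------------------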
DOCUMENTATION = '''
---
module: rax_keypair
short_description: Create a keypair for use with Rackspace Cloud Servers
description:
     - Create a keypair for use with Rackspace Cloud Servers
version_added: 1.5
options:
  api_key:
    description:
      - Rackspace API key (overrides I(credentials))
    aliases:
      - password
  auth_endpoint:
    description:
      - The URI of the authentication service
    default: https://identity.api.rackspacecloud.com/v2.0/
    version_added: 1.5
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if I(api_key) and
        I(username) are provided)
    default: null
    aliases:
      - creds_file
  env:
    description:
      - Environment as configured in ~/.pyrax.cfg,
        see https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration
    version_added: 1.5
  identity_type:
    description:
      - Authentication mechanism to use, such as rackspace or keystone
    default: rackspace
    version_added: 1.5
  region:
    description:
      - Region to create an instance in
    default: DFW
  tenant_id:
    description:
      - The tenant ID used for authentication
    version_added: 1.5
  tenant_name:
    description:
      - The tenant name used for authentication
    version_added: 1.5
  username:
    description:
      - Rackspace username (overrides I(credentials))
  verify_ssl:
    description:
      - Whether or not to require SSL validation of API endpoints
    version_added: 1.5
  name:
    description:
      - Name of keypair
    required: true
  public_key:
    description:
      - Public Key string to upload
    default: null
  state:
    description:
      - Indicate desired state of the resource
    choices: ['present', 'absent']
    default: present
requirements: [ "pyrax" ]
author: Matt Martz
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
  - Keypairs cannot be manipulated, only created and deleted. To "update" a
    keypair you must first delete and then recreate.
''' EXAMPLES = ''' - name: Create a keypair hosts: local gather_facts: False tasks: - name: keypair request local_action: module: rax_keypair credentials: ~/.raxpub name: my_keypair region: DFW register: keypair - name: Create local public key local_action: module: copy content: "{{ keypair.keypair.public_key }}" dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}.pub" - name: Create local private key local_action: module: copy content: "{{ keypair.keypair.private_key }}" dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}" ''' import sys from types import NoneType try: import pyrax except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) def to_dict(obj): instance = {} for key in dir(obj): value = getattr(obj, key) if (isinstance(value, NON_CALLABLES) and not key.startswith('_')): instance[key] = value return instance def rax_keypair(module, name, public_key, state): changed = False cs = pyrax.cloudservers keypair = {} if state == 'present': try: keypair = cs.keypairs.find(name=name) except cs.exceptions.NotFound: try: keypair = cs.keypairs.create(name, public_key) changed = True except Exception, e: module.fail_json(msg='%s' % e.message) except Exception, e: module.fail_json(msg='%s' % e.message) elif state == 'absent': try: keypair = cs.keypairs.find(name=name) except: pass if keypair: try: keypair.delete() changed = True except Exception, e: module.fail_json(msg='%s' % e.message) module.exit_json(changed=changed, keypair=to_dict(keypair)) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( name=dict(), public_key=dict(), state=dict(default='present', choices=['absent', 'present']), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) name = module.params.get('name') public_key = module.params.get('public_key') state = module.params.get('state') setup_rax_module(module, pyrax) rax_keypair(module, name, public_key, state) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/ec2_key0000664000000000000000000001142012316627017015657 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- DOCUMENTATION = ''' --- module: ec2_key version_added: "1.5" short_description: maintain an ec2 key pair. description: - maintains ec2 key pairs. This module has a dependency on python-boto >= 2.5 options: name: description: - Name of the key pair. required: true key_material: description: - Public key material. required: false region: description: - the EC2 region to use required: false default: null aliases: [] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints) required: false default: null aliases: [] ec2_secret_key: description: - EC2 secret key required: false default: null aliases: ['aws_secret_key', 'secret_key'] ec2_access_key: description: - EC2 access key required: false default: null aliases: ['aws_access_key', 'access_key'] state: description: - create or delete keypair required: false default: 'present' aliases: [] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. 
    required: false
    default: "yes"
    choices: ["yes", "no"]
    aliases: []
    version_added: "1.5"
requirements: [ "boto" ]
author: Vincent Viallet
'''

EXAMPLES = '''
# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Creates a new ec2 key pair named `example` if not present, returns generated
# private key
- name: example ec2 key
  local_action:
    module: ec2_key
    name: example

# Creates a new ec2 key pair named `example` if not present using provided key
# material
- name: example2 ec2 key
  local_action:
    module: ec2_key
    name: example2
    key_material: 'ssh-rsa AAAAxyz...== me@example.com'
    state: present

# Creates a new ec2 key pair named `example` if not present using provided key
# material
- name: example3 ec2 key
  local_action:
    module: ec2_key
    name: example3
    key_material: "{{ item }}"
  with_file: /path/to/public_key.id_rsa.pub

# Removes ec2 key pair by name
- name: remove example key
  local_action:
    module: ec2_key
    name: example
    state: absent
'''

import sys

try:
    import boto.ec2
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
            name=dict(required=True),
            key_material=dict(required=False),
            state = dict(default='present', choices=['present', 'absent']),
        )
    )
    module = AnsibleModule(
        argument_spec=argument_spec,
        supports_check_mode=True,
    )

    name = module.params['name']
    state = module.params.get('state')
    key_material = module.params.get('key_material')

    changed = False

    ec2 = ec2_connect(module)

    # find the key if present
    key = ec2.get_key_pair(name)

    # Ensure requested key is absent
    if state == 'absent':
        if key:
            '''found a match, delete it'''
            try:
                key.delete()
            except Exception, e:
                module.fail_json(msg="Unable to delete key pair '%s' - %s" % (key, e))
            else:
                key = None
                changed = True
        else:
            '''no match found, no changes required'''

    # Ensure requested key is present
    elif state == 'present':
        if key:
            '''existing key found'''
            # Should check if the fingerprint is the same - but lack of info
            # and different fingerprint provided (pub or private) depending if
            # the key has been created or imported.
            pass

        # if the key doesn't exist, create it now
        else:
            '''no match found, create it'''
            if not module.check_mode:
                if key_material:
                    '''We are providing the key, need to import'''
                    key = ec2.import_key_pair(name, key_material)
                else:
                    '''
                    No material provided, let AWS handle the key creation and
                    retrieve the private key
                    '''
                    key = ec2.create_key_pair(name)
            changed = True

    if key:
        data = {
            'name': key.name,
            'fingerprint': key.fingerprint
        }
        if key.material:
            data.update({'private_key': key.material})

        module.exit_json(changed=changed, key=data)
    else:
        module.exit_json(changed=changed, key=None)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *

main()
ansible-1.5.4/library/cloud/rax_files0000664000000000000000000003004012316627017016311 0ustar rootroot#!/usr/bin/python -tt
# (c) 2013, Paul Durivage
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_files short_description: Manipulate Rackspace Cloud Files Containers description: - Manipulate Rackspace Cloud Files Containers version_added: "1.5" options: api_key: description: - Rackspace API key (overrides I(credentials)) clear_meta: description: - Optionally clear existing metadata when applying metadata to existing containers. Selecting this option is only appropriate when setting type=meta choices: ["yes", "no"] default: "no" container: description: - The container to use for container or metadata operations. required: true credentials: description: - File to find the Rackspace credentials in (ignored if I(api_key) and I(username) are provided) default: null aliases: ['creds_file'] meta: description: - A hash of items to set as metadata values on a container private: description: - Used to set a container as private, removing it from the CDN. B(Warning!) Private containers, if previously made public, can have live objects available until the TTL on cached objects expires public: description: - Used to set a container as public, available via the Cloud Files CDN region: description: - Region to create an instance in default: DFW ttl: description: - In seconds, set a container-wide TTL for all objects cached on CDN edge nodes. Setting a TTL is only appropriate for containers that are public type: description: - Type of object to do work on, i.e. metadata object or a container object choices: ["file", "meta"] default: "file" username: description: - Rackspace username (overrides I(credentials)) web_error: description: - Sets an object to be presented as the HTTP error page when accessed by the CDN URL web_index: description: - Sets an object to be presented as the HTTP index page when accessed by the CDN URL requirements: [ "pyrax" ] author: Paul Durivage notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
''' EXAMPLES = ''' - name: "Test Cloud Files Containers" hosts: local gather_facts: no tasks: - name: "List all containers" rax_files: state=list - name: "Create container called 'mycontainer'" rax_files: container=mycontainer - name: "Create container 'mycontainer2' with metadata" rax_files: container: mycontainer2 meta: key: value file_for: someuser@example.com - name: "Set a container's web index page" rax_files: container=mycontainer web_index=index.html - name: "Set a container's web error page" rax_files: container=mycontainer web_error=error.html - name: "Make container public" rax_files: container=mycontainer public=yes - name: "Make container public with a 24 hour TTL" rax_files: container=mycontainer public=yes ttl=86400 - name: "Make container private" rax_files: container=mycontainer private=yes - name: "Test Cloud Files Containers Metadata Storage" hosts: local gather_facts: no tasks: - name: "Get mycontainer2 metadata" rax_files: container: mycontainer2 type: meta - name: "Set mycontainer2 metadata" rax_files: container: mycontainer2 type: meta meta: uploaded_by: someuser@example.com - name: "Remove mycontainer2 metadata" rax_files: container: "mycontainer2" type: meta state: absent meta: key: "" file_for: "" ''' from ansible import __version__ try: import pyrax except ImportError, e: print("failed=True msg='pyrax is required for this module'") sys.exit(1) EXIT_DICT = dict(success=True) META_PREFIX = 'x-container-meta-' USER_AGENT = "Ansible/%s via pyrax" % __version__ def _get_container(module, cf, container): try: return cf.get_container(container) except pyrax.exc.NoSuchContainer, e: module.fail_json(msg=e.message) def _fetch_meta(module, container): EXIT_DICT['meta'] = dict() try: for k, v in container.get_metadata().items(): split_key = k.split(META_PREFIX)[-1] EXIT_DICT['meta'][split_key] = v except Exception, e: module.fail_json(msg=e.message) def meta(cf, module, container_, state, meta_, clear_meta): c = _get_container(module, cf, container_) if meta_ and state == 'present': try: meta_set = c.set_metadata(meta_, clear=clear_meta) except Exception, e: module.fail_json(msg=e.message) elif meta_ and state == 'absent': remove_results = [] for k, v in meta_.items(): c.remove_metadata_key(k) remove_results.append(k) EXIT_DICT['deleted_meta_keys'] = remove_results elif state == 'absent': remove_results = [] for k, v in c.get_metadata().items(): c.remove_metadata_key(k) remove_results.append(k) EXIT_DICT['deleted_meta_keys'] = remove_results _fetch_meta(module, c) _locals = locals().keys() EXIT_DICT['container'] = c.name if 'meta_set' in _locals or 'remove_results' in _locals: EXIT_DICT['changed'] = True module.exit_json(**EXIT_DICT) def container(cf, module, container_, state, meta_, clear_meta, ttl, public, private, web_index, web_error): if public and private: module.fail_json(msg='container cannot be simultaneously ' 'set to public and private') if state == 'absent' and (meta_ or clear_meta or public or private or web_index or web_error): module.fail_json(msg='state cannot be omitted when setting/removing ' 'attributes on a container') if state == 'list': # We don't care if attributes are specified, let's list containers EXIT_DICT['containers'] = cf.list_containers() module.exit_json(**EXIT_DICT) try: c = cf.get_container(container_) except pyrax.exc.NoSuchContainer, e: # Make the container if state=present, otherwise bomb out if state == 'present': try: c = cf.create_container(container_) except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['created'] = 
True else: module.fail_json(msg=e.message) else: # Successfully grabbed a container object # Delete if state is absent if state == 'absent': try: cont_deleted = c.delete() except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['deleted'] = True if meta_: try: meta_set = c.set_metadata(meta_, clear=clear_meta) except Exception, e: module.fail_json(msg=e.message) finally: _fetch_meta(module, c) if ttl: try: c.cdn_ttl = ttl except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['ttl'] = c.cdn_ttl if public: try: cont_public = c.make_public() except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['container_urls'] = dict(url=c.cdn_uri, ssl_url=c.cdn_ssl_uri, streaming_url=c.cdn_streaming_uri, ios_uri=c.cdn_ios_uri) if private: try: cont_private = c.make_private() except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['set_private'] = True if web_index: try: cont_web_index = c.set_web_index_page(web_index) except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['set_index'] = True finally: _fetch_meta(module, c) if web_error: try: cont_err_index = c.set_web_error_page(web_error) except Exception, e: module.fail_json(msg=e.message) else: EXIT_DICT['set_error'] = True finally: _fetch_meta(module, c) EXIT_DICT['container'] = c.name EXIT_DICT['objs_in_container'] = c.object_count EXIT_DICT['total_bytes'] = c.total_bytes _locals = locals().keys() if ('cont_created' in _locals or 'cont_deleted' in _locals or 'meta_set' in _locals or 'cont_public' in _locals or 'cont_private' in _locals or 'cont_web_index' in _locals or 'cont_err_index' in _locals): EXIT_DICT['changed'] = True module.exit_json(**EXIT_DICT) def cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, private, web_index, web_error): """ Dispatch from here to work with metadata or file objects """ cf = pyrax.cloudfiles cf.user_agent = USER_AGENT if typ == "container": container(cf, module, container_, state, meta_, clear_meta, ttl, public, private, web_index, web_error) else: meta(cf, module, container_, state, meta_, clear_meta) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( container=dict(), state=dict(choices=['present', 'absent', 'list'], default='present'), meta=dict(type='dict', default=dict()), clear_meta=dict(choices=BOOLEANS, default=False, type='bool'), type=dict(choices=['container', 'meta'], default='container'), ttl=dict(type='int'), public=dict(choices=BOOLEANS, default=False, type='bool'), private=dict(choices=BOOLEANS, default=False, type='bool'), web_index=dict(), web_error=dict() ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together() ) container_ = module.params.get('container') state = module.params.get('state') meta_ = module.params.get('meta') clear_meta = module.params.get('clear_meta') typ = module.params.get('type') ttl = module.params.get('ttl') public = module.params.get('public') private = module.params.get('private') web_index = module.params.get('web_index') web_error = module.params.get('web_error') if state in ['present', 'absent'] and not container_: module.fail_json(msg='please specify a container name') if clear_meta and not typ == 'meta': module.fail_json(msg='clear_meta can only be used when setting metadata') setup_rax_module(module, pyrax) cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public, private, web_index, web_error) from ansible.module_utils.basic import * from ansible.module_utils.rax import * main() 
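# --- Editorial sketch (illustrative, not part of the original source) ----
# For reference, the container handling in rax_files above boils down to
# these pyrax cloudfiles calls, assuming pyrax is authenticated elsewhere
# (the helper, container name, and metadata are illustrative):
def _sketch_ensure_container(name='mycontainer', metadata=None):
    import pyrax
    cf = pyrax.cloudfiles
    try:
        cont = cf.get_container(name)
    except pyrax.exc.NoSuchContainer:
        cont = cf.create_container(name)
    if metadata:
        # clear=False merges with any metadata already on the container
        cont.set_metadata(metadata, clear=False)
    return cont
# --------------------------------------------------------------------------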
ansible-1.5.4/library/cloud/rax_clb0000664000000000000000000002473312316627017015763 0ustar rootroot#!/usr/bin/python -tt # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_clb short_description: create / delete a load balancer in Rackspace Public Cloud description: - creates / deletes a Rackspace Public Cloud load balancer. version_added: "1.4" options: algorithm: description: - algorithm for the balancer being created choices: ['RANDOM', 'LEAST_CONNECTIONS', 'ROUND_ROBIN', 'WEIGHTED_LEAST_CONNECTIONS', 'WEIGHTED_ROUND_ROBIN'] default: LEAST_CONNECTIONS api_key: description: - Rackspace API key (overrides C(credentials)) credentials: description: - File to find the Rackspace credentials in (ignored if C(api_key) and C(username) are provided) default: null aliases: ['creds_file'] meta: description: - A hash of metadata to associate with the instance default: null name: description: - Name to give the load balancer default: null port: description: - Port for the balancer being created default: 80 protocol: description: - Protocol for the balancer being created choices: ['DNS_TCP', 'DNS_UDP' ,'FTP', 'HTTP', 'HTTPS', 'IMAPS', 'IMAPv4', 'LDAP', 'LDAPS', 'MYSQL', 'POP3', 'POP3S', 'SMTP', 'TCP', 'TCP_CLIENT_FIRST', 'UDP', 'UDP_STREAM', 'SFTP'] default: HTTP region: description: - Region to create the load balancer in default: DFW state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present timeout: description: - timeout for communication between the balancer and the node default: 30 type: description: - type of interface for the balancer being created choices: ['PUBLIC', 'SERVICENET'] default: PUBLIC username: description: - Rackspace username (overrides C(credentials)) vip_id: description: - Virtual IP ID to use when creating the load balancer for purposes of sharing an IP with another load balancer of another protocol version_added: 1.5 wait: description: - wait for the balancer to be in state 'running' before returning default: "no" choices: [ "yes", "no" ] wait_timeout: description: - how long before wait gives up, in seconds default: 300 requirements: [ "pyrax" ] author: Christopher H. Laco, Matt Martz notes: - The following environment variables can be used, C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION). - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) points to a credentials file appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating) - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...) 
''' EXAMPLES = ''' - name: Build a Load Balancer gather_facts: False hosts: local connection: local tasks: - name: Load Balancer create request local_action: module: rax_clb credentials: ~/.raxpub name: my-lb port: 8080 protocol: HTTP type: SERVICENET timeout: 30 region: DFW wait: yes state: present meta: app: my-cool-app register: my_lb ''' import sys from types import NoneType try: import pyrax except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) ALGORITHMS = ['RANDOM', 'LEAST_CONNECTIONS', 'ROUND_ROBIN', 'WEIGHTED_LEAST_CONNECTIONS', 'WEIGHTED_ROUND_ROBIN'] PROTOCOLS = ['DNS_TCP', 'DNS_UDP', 'FTP', 'HTTP', 'HTTPS', 'IMAPS', 'IMAPv4', 'LDAP', 'LDAPS', 'MYSQL', 'POP3', 'POP3S', 'SMTP', 'TCP', 'TCP_CLIENT_FIRST', 'UDP', 'UDP_STREAM', 'SFTP'] def node_to_dict(obj): node = obj.to_dict() node['id'] = obj.id return node def to_dict(obj): instance = {} for key in dir(obj): value = getattr(obj, key) if key == 'virtual_ips': instance[key] = [] for vip in value: vip_dict = {} for vip_key, vip_value in vars(vip).iteritems(): if isinstance(vip_value, NON_CALLABLES): vip_dict[vip_key] = vip_value instance[key].append(vip_dict) elif key == 'nodes': instance[key] = [] for node in value: instance[key].append(node_to_dict(node)) elif (isinstance(value, NON_CALLABLES) and not key.startswith('_')): instance[key] = value return instance def cloud_load_balancer(module, state, name, meta, algorithm, port, protocol, vip_type, timeout, wait, wait_timeout, vip_id): for arg in (state, name, port, protocol, vip_type): if not arg: module.fail_json(msg='%s is required for rax_clb' % arg) if int(timeout) < 30: module.fail_json(msg='"timeout" must be greater than or equal to 30') changed = False balancers = [] clb = pyrax.cloud_loadbalancers for balancer in clb.list(): if name != balancer.name and name != balancer.id: continue balancers.append(balancer) if len(balancers) > 1: module.fail_json(msg='Multiple Load Balancers were matched by name, ' 'try using the Load Balancer ID instead') if state == 'present': if isinstance(meta, dict): metadata = [dict(key=k, value=v) for k, v in meta.items()] if not balancers: try: virtual_ips = [clb.VirtualIP(type=vip_type, id=vip_id)] balancer = clb.create(name, metadata=metadata, port=port, algorithm=algorithm, protocol=protocol, timeout=timeout, virtual_ips=virtual_ips) changed = True except Exception, e: module.fail_json(msg='%s' % e.message) else: balancer = balancers[0] setattr(balancer, 'metadata', [dict(key=k, value=v) for k, v in balancer.get_metadata().items()]) atts = { 'name': name, 'algorithm': algorithm, 'port': port, 'protocol': protocol, 'timeout': timeout } for att, value in atts.iteritems(): current = getattr(balancer, att) if current != value: changed = True if changed: balancer.update(**atts) if balancer.metadata != metadata: balancer.set_metadata(meta) changed = True virtual_ips = [clb.VirtualIP(type=vip_type)] current_vip_types = set([v.type for v in balancer.virtual_ips]) vip_types = set([v.type for v in virtual_ips]) if current_vip_types != vip_types: module.fail_json(msg='Load balancer Virtual IP type cannot ' 'be changed') if wait: attempts = wait_timeout / 5 pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts) balancer.get() instance = to_dict(balancer) result = dict(changed=changed, balancer=instance) if balancer.status == 'ERROR': result['msg'] = '%s failed to build' % balancer.id elif wait and balancer.status not in ('ACTIVE', 'ERROR'): 
result['msg'] = 'Timeout waiting on %s' % balancer.id if 'msg' in result: module.fail_json(**result) else: module.exit_json(**result) elif state == 'absent': if balancers: balancer = balancers[0] try: balancer.delete() changed = True except Exception, e: module.fail_json(msg='%s' % e.message) instance = to_dict(balancer) if wait: attempts = wait_timeout / 5 pyrax.utils.wait_until(balancer, 'status', ('DELETED'), interval=5, attempts=attempts) else: instance = {} module.exit_json(changed=changed, balancer=instance) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( algorithm=dict(choices=ALGORITHMS, default='LEAST_CONNECTIONS'), meta=dict(type='dict', default={}), name=dict(), port=dict(type='int', default=80), protocol=dict(choices=PROTOCOLS, default='HTTP'), state=dict(default='present', choices=['present', 'absent']), timeout=dict(type='int', default=30), type=dict(choices=['PUBLIC', 'SERVICENET'], default='PUBLIC'), vip_id=dict(), wait=dict(type='bool'), wait_timeout=dict(default=300), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) algorithm = module.params.get('algorithm') meta = module.params.get('meta') name = module.params.get('name') port = module.params.get('port') protocol = module.params.get('protocol') state = module.params.get('state') timeout = int(module.params.get('timeout')) vip_id = module.params.get('vip_id') vip_type = module.params.get('type') wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) setup_rax_module(module, pyrax) cloud_load_balancer(module, state, name, meta, algorithm, port, protocol, vip_type, timeout, wait, wait_timeout, vip_id) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/gc_storage0000664000000000000000000003766312316627017016474 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: gc_storage version_added: "1.4" short_description: This module manages objects/buckets in Google Cloud Storage. description: - This module allows users to manage their objects/buckets in Google Cloud Storage. It allows upload and download operations and can set some canned permissions. It also allows retrieval of URLs for objects for use in playbooks, and retrieval of string contents of objects. This module requires setting the default project in GCS prior to playbook usage. See U(https://developers.google.com/storage/docs/reference/v1/apiversion1) for information about setting the default project. options: bucket: description: - Bucket name. required: true default: null aliases: [] object: description: - Keyname of the object inside the bucket. Can be also be used to create "virtual directories" (see examples). required: false default: null aliases: [] src: description: - The source file path when performing a PUT operation. 
required: false default: null aliases: [] dest: description: - The destination file path when downloading an object/key with a GET operation. required: false aliases: [] force: description: - Forces an overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. required: false default: true aliases: [ 'overwrite' ] permission: description: - This option let's the user set the canned permissions on the object/bucket that are created. The permissions that can be set are 'private', 'public-read', 'authenticated-read'. required: false default: private expiration: description: - Time limit (in seconds) for the URL generated and returned by GCA when performing a mode=put or mode=get_url operation. This url is only avaialbe when public-read is the acl for the object. required: false default: null aliases: [] mode: description: - Switches the module behaviour between upload, download, get_url (return download url) , get_str (download object as string), create (bucket) and delete (bucket). required: true default: null aliases: [] choices: [ 'get', 'put', 'get_url', 'get_str', 'delete', 'create' ] gcs_secret_key: description: - GCS secret key. If not set then the value of the GCS_SECRET_KEY environment variable is used. required: true default: null gcs_access_key: description: - GCS access key. If not set then the value of the GCS_ACCESS_KEY environment variable is used. required: true default: null requirements: [ "boto 2.9+" ] author: benno@ansible.com Note. Most of the code has been taken from the S3 module. ''' EXAMPLES = ''' # upload some content - gc_storage: bucket=mybucket object=key.txt src=/usr/local/myfile.txt mode=put permission=public-read # download some content - gc_storage: bucket=mybucket object=key.txt dest=/usr/local/myfile.txt mode=get # Download an object as a string to use else where in your playbook - gc_storage: bucket=mybucket object=key.txt mode=get_str # Create an empty bucket - gc_storage: bucket=mybucket mode=create # Create a bucket with key as directory - gc_storage: bucket=mybucket object=/my/directory/path mode=create # Delete a bucket and all contents - gc_storage: bucket=mybucket mode=delete ''' import sys import os import urlparse import hashlib try: import boto except ImportError: print "failed=True msg='boto 2.9+ required for this module'" sys.exit(1) def grant_check(module, gs, obj): try: acp = obj.get_acl() if module.params.get('permission') == 'public-read': grant = [ x for x in acp.entries.entry_list if x.scope.type == 'AllUsers'] if not grant: obj.set_acl('public-read') module.exit_json(changed=True, result="The objects permission as been set to public-read") if module.params.get('permission') == 'authenticated-read': grant = [ x for x in acp.entries.entry_list if x.scope.type == 'AllAuthenticatedUsers'] if not grant: obj.set_acl('authenticated-read') module.exit_json(changed=True, result="The objects permission as been set to authenticated-read") except gs.provider.storage_response_error, e: module.fail_json(msg= str(e)) return True def key_check(module, gs, bucket, obj): try: bucket = gs.lookup(bucket) key_check = bucket.get_key(obj) except gs.provider.storage_response_error, e: module.fail_json(msg= str(e)) if key_check: grant_check(module, gs, key_check) return True else: return False def keysum(module, gs, bucket, obj): bucket = gs.lookup(bucket) key_check = bucket.get_key(obj) if key_check: md5_remote = key_check.etag[1:-1] etag_multipart = md5_remote.find('-')!=-1 #Check for multipart, etag is not md5 
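        # A '-' in the etag marks a multipart upload: the etag is then a
        # digest computed over the per-part checksums, not an MD5 of the
        # whole object, so it cannot be compared against a locally computed
        # md5 hexdigest.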
        if etag_multipart is True:
            module.fail_json(msg="Files uploaded with multipart to gs are not supported with checksum; unable to compute checksum.")
        return md5_remote

def bucket_check(module, gs, bucket):
    try:
        result = gs.lookup(bucket)
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))
    if result:
        grant_check(module, gs, result)
        return True
    else:
        return False

def create_bucket(module, gs, bucket):
    try:
        bucket = gs.create_bucket(bucket)
        bucket.set_acl(module.params.get('permission'))
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))
    if bucket:
        return True

def delete_bucket(module, gs, bucket):
    try:
        bucket = gs.lookup(bucket)
        bucket_contents = bucket.list()
        for key in bucket_contents:
            bucket.delete_key(key.name)
        bucket.delete()
        return True
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))

def delete_key(module, gs, bucket, obj):
    try:
        bucket = gs.lookup(bucket)
        bucket.delete_key(obj)
        module.exit_json(msg="Object deleted from bucket", changed=True)
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))

def create_dirkey(module, gs, bucket, obj):
    try:
        bucket = gs.lookup(bucket)
        key = bucket.new_key(obj)
        key.set_contents_from_string('')
        module.exit_json(msg="Virtual directory %s created in bucket %s" % (obj, bucket.name), changed=True)
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))

def upload_file_check(src):
    if os.path.exists(src):
        file_exists = True
    else:
        file_exists = False
    if os.path.isdir(src):
        module.fail_json(msg="Specifying a directory is not a valid source for upload.", failed=True)
    return file_exists

def path_check(path):
    if os.path.exists(path):
        return True
    else:
        return False

def upload_gsfile(module, gs, bucket, obj, src, expiry):
    try:
        bucket = gs.lookup(bucket)
        key = bucket.new_key(obj)
        key.set_contents_from_filename(src)
        key.set_acl(module.params.get('permission'))
        url = key.generate_url(expiry)
        module.exit_json(msg="PUT operation complete", url=url, changed=True)
    except gs.provider.storage_copy_error, e:
        module.fail_json(msg=str(e))

def download_gsfile(module, gs, bucket, obj, dest):
    try:
        bucket = gs.lookup(bucket)
        key = bucket.lookup(obj)
        key.get_contents_to_filename(dest)
        module.exit_json(msg="GET operation complete", changed=True)
    except gs.provider.storage_copy_error, e:
        module.fail_json(msg=str(e))

def download_gsstr(module, gs, bucket, obj):
    try:
        bucket = gs.lookup(bucket)
        key = bucket.lookup(obj)
        contents = key.get_contents_as_string()
        module.exit_json(msg="GET operation complete", contents=contents, changed=True)
    except gs.provider.storage_copy_error, e:
        module.fail_json(msg=str(e))

def get_download_url(module, gs, bucket, obj, expiry):
    try:
        bucket = gs.lookup(bucket)
        key = bucket.lookup(obj)
        url = key.generate_url(expiry)
        module.exit_json(msg="Download url:", url=url, expiration=expiry, changed=True)
    except gs.provider.storage_response_error, e:
        module.fail_json(msg=str(e))

def handle_get(module, gs, bucket, obj, overwrite, dest):
    md5_remote = keysum(module, gs, bucket, obj)
    md5_local = hashlib.md5(open(dest, 'rb').read()).hexdigest()
    if md5_local == md5_remote:
        module.exit_json(changed=False)
    if md5_local != md5_remote and not overwrite:
        module.exit_json(msg="WARNING: Checksums do not match. Use overwrite parameter to force download.", failed=True)
    else:
        download_gsfile(module, gs, bucket, obj, dest)

def handle_put(module, gs, bucket, obj, overwrite, src, expiration):
    # Let's check to see if the bucket exists to get ground truth.
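    # The decision table implemented below: no bucket -> create it and
    # upload; bucket present but key missing -> upload; both present ->
    # compare MD5 checksums and re-upload only when they differ and the
    # overwrite flag is set.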
    bucket_rc = bucket_check(module, gs, bucket)
    key_rc = key_check(module, gs, bucket, obj)

    # Let's check the key state: does it exist, and if it does, compute the etag md5sum.
    if bucket_rc and key_rc:
        md5_remote = keysum(module, gs, bucket, obj)
        md5_local = hashlib.md5(open(src, 'rb').read()).hexdigest()
        if md5_local == md5_remote:
            module.exit_json(msg="Local and remote object are identical", changed=False)
        if md5_local != md5_remote and not overwrite:
            module.exit_json(msg="WARNING: Checksums do not match. Use overwrite parameter to force upload.", failed=True)
        else:
            upload_gsfile(module, gs, bucket, obj, src, expiration)

    if not bucket_rc:
        create_bucket(module, gs, bucket)
        upload_gsfile(module, gs, bucket, obj, src, expiration)

    # If the bucket exists but the key doesn't, just upload.
    if bucket_rc and not key_rc:
        upload_gsfile(module, gs, bucket, obj, src, expiration)

def handle_delete(module, gs, bucket, obj):
    if bucket and not obj:
        if bucket_check(module, gs, bucket):
            module.exit_json(msg="Bucket %s and all keys have been deleted." % bucket, changed=delete_bucket(module, gs, bucket))
        else:
            module.exit_json(msg="Bucket does not exist.", changed=False)
    if bucket and obj:
        if bucket_check(module, gs, bucket):
            if key_check(module, gs, bucket, obj):
                module.exit_json(msg="Object has been deleted.", changed=delete_key(module, gs, bucket, obj))
            else:
                module.exit_json(msg="Object does not exist.", changed=False)
        else:
            module.exit_json(msg="Bucket does not exist.", changed=False)
    else:
        module.fail_json(msg="Bucket or Bucket & object parameter is required.", failed=True)

def handle_create(module, gs, bucket, obj):
    if bucket and not obj:
        if bucket_check(module, gs, bucket):
            module.exit_json(msg="Bucket already exists.", changed=False)
        else:
            module.exit_json(msg="Bucket created successfully", changed=create_bucket(module, gs, bucket))
    if bucket and obj:
        if obj.endswith('/'):
            dirobj = obj
        else:
            dirobj = obj + "/"
        if bucket_check(module, gs, bucket):
            if key_check(module, gs, bucket, dirobj):
                module.exit_json(msg="Bucket %s and key %s already exist." % (bucket, obj), changed=False)
            else:
                create_dirkey(module, gs, bucket, dirobj)
        else:
            create_bucket(module, gs, bucket)
            create_dirkey(module, gs, bucket, dirobj)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            bucket        = dict(required=True),
            object        = dict(default=None),
            src           = dict(default=None),
            dest          = dict(default=None),
            expiration    = dict(default=600, aliases=['expiry']),
            mode          = dict(choices=['get', 'put', 'delete', 'create', 'get_url', 'get_str'], required=True),
            permission    = dict(choices=['private', 'public-read', 'authenticated-read'], default='private'),
            gs_secret_key = dict(no_log=True, required=True),
            gs_access_key = dict(required=True),
            overwrite     = dict(default=True, type='bool', aliases=['force']),
        ),
    )

    bucket        = module.params.get('bucket')
    obj           = module.params.get('object')
    src           = module.params.get('src')
    dest          = module.params.get('dest')
    if dest:
        dest = os.path.expanduser(dest)
    mode          = module.params.get('mode')
    expiry        = module.params.get('expiration')
    gs_secret_key = module.params.get('gs_secret_key')
    gs_access_key = module.params.get('gs_access_key')
    overwrite     = module.params.get('overwrite')

    if mode == 'put':
        if not src or not obj:
            module.fail_json(msg="When using PUT, src, bucket, object are mandatory parameters")
    if mode == 'get':
        if not dest or not obj:
            module.fail_json(msg="When using GET, dest, bucket, object are mandatory parameters")
    if obj:
        obj = os.path.expanduser(module.params['object'])

    try:
        gs = boto.connect_gs(gs_access_key, gs_secret_key)
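        # connect_gs is boto's Google Cloud Storage counterpart of
        # connect_s3; when no usable key pair can be found it raises
        # NoAuthHandlerFound, which is turned into fail_json just below.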
except boto.exception.NoAuthHandlerFound, e: module.fail_json(msg = str(e)) if mode == 'get': if not bucket_check(module, gs, bucket) or not key_check(module, gs, bucket, obj): module.fail_json(msg="Target bucket/key cannot be found", failed=True) if not path_check(dest): download_gsfile(module, gs, bucket, obj, dest) else: handle_get(module, gs, bucket, obj, overwrite, dest) if mode == 'put': if not path_check(src): module.fail_json(msg="Local object for PUT does not exist", failed=True) handle_put(module, gs, bucket, obj, overwrite, src, expiry) # Support for deleting an object if we have both params. if mode == 'delete': handle_delete(module, gs, bucket, obj) if mode == 'create': handle_create(module, gs, bucket, obj) if mode == 'get_url': if bucket and obj: if bucket_check(module, gs, bucket) and key_check(module, gs, bucket, obj): get_download_url(module, gs, bucket, obj, expiry) else: module.fail_json(msg="Key/Bucket does not exist", failed=True) else: module.fail_json(msg="Bucket and Object parameters must be set", failed=True) # --------------------------- Get the String contents of an Object ------------------------- if mode == 'get_str': if bucket and obj: if bucket_check(module, gs, bucket) and key_check(module, gs, bucket, obj): download_gsstr(module, gs, bucket, obj) else: module.fail_json(msg="Key/Bucket does not exist", failed=True) else: module.fail_json(msg="Bucket and Object parameters must be set", failed=True) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/ec2_eip0000664000000000000000000002061712316627017015654 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' --- module: ec2_eip short_description: associate an EC2 elastic IP with an instance. description: - This module associates AWS EC2 elastic IP addresses with instances version_added: 1.4 options: instance_id: description: - The EC2 instance id required: false public_ip: description: - The elastic IP address to associate with the instance. - If absent, allocate a new address required: false state: description: - If present, associate the IP with the instance. - If absent, disassociate the IP with the instance. required: false choices: ['present', 'absent'] default: present ec2_url: description: - URL to use to connect to EC2-compatible cloud (by default the module will use EC2 endpoints) required: false default: null aliases: [ EC2_URL ] ec2_access_key: description: - EC2 access key. If not specified then the EC2_ACCESS_KEY environment variable is used. required: false default: null aliases: [ EC2_ACCESS_KEY ] ec2_secret_key: description: - EC2 secret key. If not specified then the EC2_SECRET_KEY environment variable is used. required: false default: null aliases: [ EC2_SECRET_KEY ] region: description: - the EC2 region to use required: false default: null aliases: [ ec2_region ] in_vpc: description: - allocate an EIP inside a VPC or not required: false default: false version_added: "1.4" validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] author: Lorin Hochstein notes: - This module will return C(public_ip) on success, which will contain the public IP address associated with the instance. - There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable via the new address. 
     Use wait_for and pause to delay further playbook execution until the instance is reachable, if necessary.
'''

EXAMPLES = '''
- name: associate an elastic IP with an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119

- name: disassociate an elastic IP from an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119 state=absent

- name: allocate a new elastic IP and associate it with an instance
  ec2_eip: instance_id=i-1212f003

- name: allocate a new elastic IP without associating it with anything
  ec2_eip:
  register: eip

- name: output the IP
  debug: msg="Allocated IP is {{ eip.public_ip }}"

- name: provision new instances with ec2
  ec2: keypair=mykey instance_type=c1.medium image=emi-40603AD1 wait=yes group=webserver count=3
  register: ec2

- name: associate new elastic IPs with each of the instances
  ec2_eip: "instance_id={{ item }}"
  with_items: ec2.instance_ids

- name: allocate a new elastic IP inside a VPC in us-west-2
  ec2_eip: region=us-west-2 in_vpc=yes
  register: eip

- name: output the IP
  debug: msg="Allocated IP inside a VPC is {{ eip.public_ip }}"
'''

try:
    import boto.ec2
except ImportError:
    boto_found = False
else:
    boto_found = True


def associate_ip_and_instance(ec2, address, instance_id, module):
    if ip_is_associated_with_instance(ec2, address.public_ip, instance_id, module):
        module.exit_json(changed=False, public_ip=address.public_ip)

    # If we're in check mode, nothing else to do
    if module.check_mode:
        module.exit_json(changed=True)

    try:
        if address.domain == "vpc":
            res = ec2.associate_address(instance_id, allocation_id=address.allocation_id)
        else:
            res = ec2.associate_address(instance_id, public_ip=address.public_ip)
    except boto.exception.EC2ResponseError, e:
        module.fail_json(msg=str(e))

    if res:
        module.exit_json(changed=True, public_ip=address.public_ip)
    else:
        module.fail_json(msg="association failed")


def disassociate_ip_and_instance(ec2, address, instance_id, module):
    if not ip_is_associated_with_instance(ec2, address.public_ip, instance_id, module):
        module.exit_json(changed=False, public_ip=address.public_ip)

    # If we're in check mode, nothing else to do
    if module.check_mode:
        module.exit_json(changed=True)

    try:
        if address.domain == "vpc":
            res = ec2.disassociate_address(association_id=address.association_id)
        else:
            res = ec2.disassociate_address(public_ip=address.public_ip)
    except boto.exception.EC2ResponseError, e:
        module.fail_json(msg=str(e))

    if res:
        module.exit_json(changed=True)
    else:
        module.fail_json(msg="disassociation failed")


def find_address(ec2, public_ip, module):
    """ Find an existing Elastic IP address """
    try:
        addresses = ec2.get_all_addresses([public_ip])
    except boto.exception.EC2ResponseError, e:
        module.fail_json(msg=str(e.message))

    return addresses[0]


def ip_is_associated_with_instance(ec2, public_ip, instance_id, module):
    """ Check if the elastic IP is currently associated with the instance """
    address = find_address(ec2, public_ip, module)
    if address:
        return address.instance_id == instance_id
    else:
        return False


def allocate_address(ec2, domain, module):
    """ Allocate a new elastic IP address and return it """
    # If we're in check mode, nothing else to do
    if module.check_mode:
        module.exit_json(changed=True)

    address = ec2.allocate_address(domain=domain)
    return address


def release_address(ec2, public_ip, module):
    """ Release a previously allocated elastic IP address """

    address = find_address(ec2, public_ip, module)

    # If we're in check mode, nothing else to do
    if module.check_mode:
        module.exit_json(changed=True)

    res = address.release()
    if res:
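        # boto's Address.release() hands back the API's truthy status flag;
        # anything else is reported as a failed release below.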
module.exit_json(changed=True) else: module.fail_json(msg="release failed") def find_instance(ec2, instance_id, module): """ Attempt to find the EC2 instance and return it """ try: reservations = ec2.get_all_reservations(instance_ids=[instance_id]) except boto.exception.EC2ResponseError, e: module.fail_json(msg=str(e)) if len(reservations) == 1: instances = reservations[0].instances if len(instances) == 1: return instances[0] module.fail_json(msg="could not find instance" + instance_id) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( instance_id = dict(required=False), public_ip = dict(required=False, aliases= ['ip']), state = dict(required=False, default='present', choices=['present', 'absent']), in_vpc = dict(required=False, choices=BOOLEANS, default=False), ) ) module = AnsibleModule( argument_spec=argument_spec, supports_check_mode=True ) if not boto_found: module.fail_json(msg="boto is required") ec2 = ec2_connect(module) instance_id = module.params.get('instance_id') public_ip = module.params.get('public_ip') state = module.params.get('state') in_vpc = module.params.get('in_vpc') domain = "vpc" if in_vpc else None if state == 'present': if public_ip is None: if instance_id is None: address = allocate_address(ec2, domain, module) module.exit_json(changed=True, public_ip=address.public_ip) else: # Determine if the instance is inside a VPC or not instance = find_instance(ec2, instance_id, module) if instance.vpc_id != None: domain = "vpc" address = allocate_address(ec2, domain, module) else: address = find_address(ec2, public_ip, module) associate_ip_and_instance(ec2, address, instance_id, module) else: if instance_id is None: release_address(ec2, public_ip, module) else: address = find_address(ec2, public_ip, module) disassociate_ip_and_instance(ec2, address, instance_id, module) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * if __name__ == '__main__': main() ansible-1.5.4/library/cloud/ec20000664000000000000000000011042212316627017015011 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2 short_description: create, terminate, start or stop an instance in ec2, return instanceid description: - Creates or terminates ec2 instances. When created optionally waits for it to be 'running'. This module has a dependency on python-boto >= 2.5 version_added: "0.9" options: key_name: description: - key pair to use on the instance required: false default: null aliases: ['keypair'] id: description: - identifier for this instance or set of instances, so that the module will be idempotent with respect to EC2 instances. This identifier is valid for at least 24 hours after the termination of the instance, and should not be reused for another call later on. For details, see the description of client token at U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). 
required: false default: null aliases: [] group: description: - security group (or list of groups) to use with the instance required: false default: null aliases: [ 'groups' ] group_id: version_added: "1.1" description: - security group id (or list of ids) to use with the instance required: false default: null aliases: [] region: version_added: "1.2" description: - The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false default: null aliases: [ 'aws_region', 'ec2_region' ] zone: version_added: "1.2" description: - AWS availability zone in which to launch the instance required: false default: null aliases: [ 'aws_zone', 'ec2_zone' ] instance_type: description: - instance type to use for the instance required: true default: null aliases: [] image: description: - I(emi) (or I(ami)) to use for the instance required: true default: null aliases: [] kernel: description: - kernel I(eki) to use for the instance required: false default: null aliases: [] ramdisk: description: - ramdisk I(eri) to use for the instance required: false default: null aliases: [] wait: description: - wait for the instance to be in state 'running' before returning required: false default: "no" choices: [ "yes", "no" ] aliases: [] wait_timeout: description: - how long before wait gives up, in seconds default: 300 aliases: [] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used required: false default: null aliases: [] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: null aliases: [ 'ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: null aliases: [ 'ec2_access_key', 'access_key' ] count: description: - number of instances to launch required: False default: 1 aliases: [] monitoring: version_added: "1.1" description: - enable detailed monitoring (CloudWatch) for instance required: false default: null aliases: [] user_data: version_added: "0.9" description: - opaque blob of data which is made available to the ec2 instance required: false default: null aliases: [] instance_tags: version_added: "1.0" description: - a hash/dictionary of tags to add to the new instance; '{"key":"value"}' and '{"key":"value","key":"value"}' required: false default: null aliases: [] placement_group: version_added: "1.3" description: - placement group for the instance when using EC2 Clustered Compute required: false default: null aliases: [] vpc_subnet_id: version_added: "1.1" description: - the subnet ID in which to launch the instance (VPC) required: false default: null aliases: [] assign_public_ip: version_added: "1.4" description: - when provisioning within vpc, assign a public IP address. Boto library must be 2.13.0+ required: false default: null aliases: [] private_ip: version_added: "1.2" description: - the private ip address to assign the instance (from the vpc subnet) required: false defualt: null aliases: [] instance_profile_name: version_added: "1.3" description: - Name of the IAM instance profile to use. 
Boto library must be 2.5.0+ required: false default: null aliases: [] instance_ids: version_added: "1.3" description: - list of instance ids, currently only used when state='absent' required: false default: null aliases: [] state: version_added: "1.3" description: - create or terminate instances required: false default: 'present' aliases: [] volumes: version_added: "1.5" description: - a list of volume dicts, each containing device name and optionally ephemeral id or snapshot id. Size and type (and number of iops for io device type) must be specified for a new volume or a root volume, and may be passed for a snapshot volume. For any volume, a volume size less than 1 will be interpreted as a request not to create the volume. required: false default: null aliases: [] exact_count: version_added: "1.5" description: - An integer value which indicates how many instances that match the 'count_tag' parameter should be running. Instances are either created or terminated based on this value. required: false default: null aliases: [] count_tag: version_added: "1.5" description: - Used with 'exact_count' to determine how many nodes based on a specific tag criteria should be running. This can be expressed in multiple ways and is shown in the EXAMPLES section. For instance, one can request 25 servers that are tagged with "class=webserver". required: false default: null aliases: [] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] author: Seth Vidal, Tim Gerla, Lester Wade ''' EXAMPLES = ''' # Note: None of these examples set aws_access_key, aws_secret_key, or region. # It is assumed that their matching environment variables are set. 
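# A minimal idempotency sketch, not one of the stock examples; the image id
# and token value are placeholders. Passing 'id' as a client token means that
# re-running the same play launches only whatever is missing from the count,
# instead of a whole new fleet:
#
# - local_action:
#     module: ec2
#     id: web-fleet-launch-1
#     key_name: mykey
#     instance_type: m1.small
#     image: ami-xxxxxxxx
#     count: 3
#     wait: yes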
# Basic provisioning example - local_action: module: ec2 key_name: mykey instance_type: c1.medium image: emi-40603AD1 wait: yes group: webserver count: 3 # Advanced example with tagging and CloudWatch - local_action: module: ec2 key_name: mykey group: databases instance_type: m1.large image: ami-6e649707 wait: yes wait_timeout: 500 count: 5 instance_tags: db: postgres monitoring: yes # Single instance with additional IOPS volume from snapshot local_action: module: ec2 key_name: mykey group: webserver instance_type: m1.large image: ami-6e649707 wait: yes wait_timeout: 500 volumes: - device_name: /dev/sdb snapshot: snap-abcdef12 device_type: io1 iops: 1000 volume_size: 100 monitoring: yes # Multiple groups example local_action: module: ec2 key_name: mykey group: ['databases', 'internal-services', 'sshable', 'and-so-forth'] instance_type: m1.large image: ami-6e649707 wait: yes wait_timeout: 500 count: 5 instance_tags: db: postgres monitoring: yes # Multiple instances with additional volume from snapshot local_action: module: ec2 key_name: mykey group: webserver instance_type: m1.large image: ami-6e649707 wait: yes wait_timeout: 500 count: 5 volumes: - device_name: /dev/sdb snapshot: snap-abcdef12 volume_size: 10 monitoring: yes # VPC example - local_action: module: ec2 key_name: mykey group_id: sg-1dc53f72 instance_type: m1.small image: ami-6e649707 wait: yes vpc_subnet_id: subnet-29e63245 assign_public_ip: yes # Launch instances, runs some tasks # and then terminate them - name: Create a sandbox instance hosts: localhost gather_facts: False vars: key_name: my_keypair instance_type: m1.small security_group: my_securitygroup image: my_ami_id region: us-east-1 tasks: - name: Launch instance local_action: ec2 key_name={{ keypair }} group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} register: ec2 - name: Add new instance to host group local_action: add_host hostname={{ item.public_ip }} groupname=launched with_items: ec2.instances - name: Wait for SSH to come up local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started with_items: ec2.instances - name: Configure instance(s) hosts: launched sudo: True gather_facts: True roles: - my_awesome_role - my_awesome_test - name: Terminate instances hosts: localhost connection: local tasks: - name: Terminate instances that were previously launched local_action: module: ec2 state: 'absent' instance_ids: '{{ ec2.instance_ids }}' # Start a few existing instances, run some tasks # and stop the instances - name: Start sandbox instances hosts: localhost gather_facts: false connection: local vars: instance_ids: - 'i-xxxxxx' - 'i-xxxxxx' - 'i-xxxxxx' region: us-east-1 tasks: - name: Start the sandbox instances local_action: module: ec2 instance_ids: '{{ instance_ids }}' region: '{{ region }}' state: running wait: True role: - do_neat_stuff - do_more_neat_stuff - name: Stop sandbox instances hosts: localhost gather_facts: false connection: local vars: instance_ids: - 'i-xxxxxx' - 'i-xxxxxx' - 'i-xxxxxx' region: us-east-1 tasks: - name: Stop the sanbox instances local_action: module: ec2 instance_ids: '{{ instance_ids }}' region: '{{ region }}' state: stopped wait: True # # Enforce that 5 instances with a tag "foo" are running # - local_action: module: ec2 key_name: mykey instance_type: c1.medium image: emi-40603AD1 wait: yes group: webserver instance_tags: foo: bar exact_count: 5 count_tag: foo # # Enforce that 5 running instances named "database" with a "dbtype" of 
"postgres" # - local_action: module: ec2 key_name: mykey instance_type: c1.medium image: emi-40603AD1 wait: yes group: webserver instance_tags: Name: database dbtype: postgres exact_count: 5 count_tag: Name: database dbtype: postgres # # count_tag complex argument examples # # instances with tag foo count_tag: foo: # instances with tag foo=bar count_tag: foo: bar # instances with tags foo=bar & baz count_tag: foo: bar baz: # instances with tags foo & bar & baz=bang count_tag: - foo - bar - baz: bang ''' import sys import time from ast import literal_eval try: import boto.ec2 from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping from boto.exception import EC2ResponseError except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def find_running_instances_by_count_tag(module, ec2, count_tag): # get reservations for instances that match tag(s) and are running reservations = get_reservations(module, ec2, tags=count_tag, state="running") instances = [] for res in reservations: if hasattr(res, 'instances'): for inst in res.instances: instances.append(inst) return reservations, instances def _set_none_to_blank(dictionary): result = dictionary for k in result.iterkeys(): if type(result[k]) == dict: result[k] = _set_non_to_blank(result[k]) elif not result[k]: result[k] = "" return result def get_reservations(module, ec2, tags=None, state=None): # TODO: filters do not work with tags that have underscores filters = dict() if tags is not None: if type(tags) is str: try: tags = literal_eval(tags) except: pass # if string, we only care that a tag of that name exists if type(tags) is str: filters.update({"tag-key": tags}) # if list, append each item to filters if type(tags) is list: for x in tags: if type(x) is dict: x = _set_none_to_blank(x) filters.update(dict(("tag:"+tn, tv) for (tn,tv) in x.iteritems())) else: filters.update({"tag-key": x}) # if dict, add the key and value to the filter if type(tags) is dict: tags = _set_none_to_blank(tags) filters.update(dict(("tag:"+tn, tv) for (tn,tv) in tags.iteritems())) if state: # http://stackoverflow.com/questions/437511/what-are-the-valid-instancestates-for-the-amazon-ec2-api filters.update({'instance-state-name': state}) results = ec2.get_all_instances(filters=filters) return results def get_instance_info(inst): """ Retrieves instance information from an instance ID and returns it as a dictionary """ instance_info = {'id': inst.id, 'ami_launch_index': inst.ami_launch_index, 'private_ip': inst.private_ip_address, 'private_dns_name': inst.private_dns_name, 'public_ip': inst.ip_address, 'dns_name': inst.dns_name, 'public_dns_name': inst.public_dns_name, 'state_code': inst.state_code, 'architecture': inst.architecture, 'image_id': inst.image_id, 'key_name': inst.key_name, 'placement': inst.placement, 'region': inst.placement[:-1], 'kernel': inst.kernel, 'ramdisk': inst.ramdisk, 'launch_time': inst.launch_time, 'instance_type': inst.instance_type, 'root_device_type': inst.root_device_type, 'root_device_name': inst.root_device_name, 'state': inst.state, 'hypervisor': inst.hypervisor} try: instance_info['virtualization_type'] = getattr(inst,'virtualization_type') except AttributeError: instance_info['virtualization_type'] = None return instance_info def boto_supports_associate_public_ip_address(ec2): """ Check if Boto library has associate_public_ip_address in the NetworkInterfaceSpecification class. 
Added in Boto 2.13.0 ec2: authenticated ec2 connection object Returns: True if Boto library accepts associate_public_ip_address argument, else false """ try: network_interface = boto.ec2.networkinterface.NetworkInterfaceSpecification() getattr(network_interface, "associate_public_ip_address") return True except AttributeError: return False def boto_supports_profile_name_arg(ec2): """ Check if Boto library has instance_profile_name argument. instance_profile_name has been added in Boto 2.5.0 ec2: authenticated ec2 connection object Returns: True if Boto library accept instance_profile_name argument, else false """ run_instances_method = getattr(ec2, 'run_instances') return 'instance_profile_name' in run_instances_method.func_code.co_varnames def create_block_device(module, ec2, volume): # Not aware of a way to determine this programatically # http://aws.amazon.com/about-aws/whats-new/2013/10/09/ebs-provisioned-iops-maximum-iops-gb-ratio-increased-to-30-1/ MAX_IOPS_TO_SIZE_RATIO = 30 if 'snapshot' not in volume and 'ephemeral' not in volume: if 'volume_size' not in volume: module.fail_json(msg = 'Size must be specified when creating a new volume or modifying the root volume') if 'snapshot' in volume: if 'device_type' in volume and volume.get('device_type') == 'io1' and 'iops' not in volume: module.fail_json(msg = 'io1 volumes must have an iops value set') if 'iops' in volume: snapshot = ec2.get_all_snapshots(snapshot_ids=[volume['snapshot']])[0] size = volume.get('volume_size', snapshot.volume_size) if int(volume['iops']) > MAX_IOPS_TO_SIZE_RATIO * size: module.fail_json(msg = 'IOPS must be at most %d times greater than size' % MAX_IOPS_TO_SIZE_RATIO) if 'ephemeral' in volume: if 'snapshot' in volume: module.fail_json(msg = 'Cannot set both ephemeral and snapshot') return BlockDeviceType(snapshot_id=volume.get('snapshot'), ephemeral_name=volume.get('ephemeral'), size=volume.get('volume_size'), volume_type=volume.get('device_type'), delete_on_termination=volume.get('delete_on_termination', False), iops=volume.get('iops')) def enforce_count(module, ec2): exact_count = module.params.get('exact_count') count_tag = module.params.get('count_tag') reservations, instances = find_running_instances_by_count_tag(module, ec2, count_tag) changed = None checkmode = False instance_dict_array = None changed_instance_ids = None if len(instances) == exact_count: changed = False elif len(instances) < exact_count: changed = True to_create = exact_count - len(instances) if not checkmode: (instance_dict_array, changed_instance_ids, changed) \ = create_instances(module, ec2, override_count=to_create) for inst in instance_dict_array: instances.append(inst) elif len(instances) > exact_count: changed = True to_remove = len(instances) - exact_count if not checkmode: all_instance_ids = sorted([ x.id for x in instances ]) remove_ids = all_instance_ids[0:to_remove] instances = [ x for x in instances if x.id not in remove_ids] (changed, instance_dict_array, changed_instance_ids) \ = terminate_instances(module, ec2, remove_ids) terminated_list = [] for inst in instance_dict_array: inst['state'] = "terminated" terminated_list.append(inst) instance_dict_array = terminated_list # ensure all instances are dictionaries all_instances = [] for inst in instances: if type(inst) is not dict: inst = get_instance_info(inst) all_instances.append(inst) return (all_instances, instance_dict_array, changed_instance_ids, changed) def create_instances(module, ec2, override_count=None): """ Creates new instances module : AnsibleModule object 
ec2: authenticated ec2 connection object Returns: A list of dictionaries with instance information about the instances that were launched """ key_name = module.params.get('key_name') id = module.params.get('id') group_name = module.params.get('group') group_id = module.params.get('group_id') zone = module.params.get('zone') instance_type = module.params.get('instance_type') image = module.params.get('image') if override_count: count = override_count else: count = module.params.get('count') monitoring = module.params.get('monitoring') kernel = module.params.get('kernel') ramdisk = module.params.get('ramdisk') wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) placement_group = module.params.get('placement_group') user_data = module.params.get('user_data') instance_tags = module.params.get('instance_tags') vpc_subnet_id = module.params.get('vpc_subnet_id') assign_public_ip = module.boolean(module.params.get('assign_public_ip')) private_ip = module.params.get('private_ip') instance_profile_name = module.params.get('instance_profile_name') volumes = module.params.get('volumes') exact_count = module.params.get('exact_count') count_tag = module.params.get('count_tag') # group_id and group_name are exclusive of each other if group_id and group_name: module.fail_json(msg = str("Use only one type of parameter (group_name) or (group_id)")) sys.exit(1) try: # Here we try to lookup the group id from the security group name - if group is set. if group_name: grp_details = ec2.get_all_security_groups() if type(group_name) == list: group_id = [ str(grp.id) for grp in grp_details if str(grp.name) in group_name ] elif type(group_name) == str: for grp in grp_details: if str(group_name) in str(grp): group_id = [str(grp.id)] group_name = [group_name] # Now we try to lookup the group id testing if group exists. elif group_id: #wrap the group_id in a list if it's not one already if type(group_id) == str: group_id = [group_id] grp_details = ec2.get_all_security_groups(group_ids=group_id) grp_item = grp_details[0] group_name = [grp_item.name] except boto.exception.NoAuthHandlerFound, e: module.fail_json(msg = str(e)) # Lookup any instances that much our run id. running_instances = [] count_remaining = int(count) if id != None: filter_dict = {'client-token':id, 'instance-state-name' : 'running'} previous_reservations = ec2.get_all_instances(None, filter_dict) for res in previous_reservations: for prev_instance in res.instances: running_instances.append(prev_instance) count_remaining = count_remaining - len(running_instances) # Both min_count and max_count equal count parameter. This means the launch request is explicit (we want count, or fail) in how many instances we want. 
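    # count_remaining was already reduced by the instances found above via
    # the client-token filter, so a repeated run with the same 'id' only
    # launches the shortfall (or nothing at all).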
if count_remaining == 0: changed = False else: changed = True try: params = {'image_id': image, 'key_name': key_name, 'client_token': id, 'min_count': count_remaining, 'max_count': count_remaining, 'monitoring_enabled': monitoring, 'placement': zone, 'placement_group': placement_group, 'instance_type': instance_type, 'kernel_id': kernel, 'ramdisk_id': ramdisk, 'private_ip_address': private_ip, 'user_data': user_data} if boto_supports_profile_name_arg(ec2): params['instance_profile_name'] = instance_profile_name else: if instance_profile_name is not None: module.fail_json( msg="instance_profile_name parameter requires Boto version 2.5.0 or higher") if assign_public_ip: if not boto_supports_associate_public_ip_address(ec2): module.fail_json( msg="assign_public_ip parameter requires Boto version 2.13.0 or higher.") elif not vpc_subnet_id: module.fail_json( msg="assign_public_ip only available with vpc_subnet_id") else: interface = boto.ec2.networkinterface.NetworkInterfaceSpecification( subnet_id=vpc_subnet_id, groups=group_id, associate_public_ip_address=assign_public_ip) interfaces = boto.ec2.networkinterface.NetworkInterfaceCollection(interface) params['network_interfaces'] = interfaces else: params['subnet_id'] = vpc_subnet_id if vpc_subnet_id: params['security_group_ids'] = group_id else: params['security_groups'] = group_name if volumes: bdm = BlockDeviceMapping() for volume in volumes: if 'device_name' not in volume: module.fail_json(msg = 'Device name must be set for volume') # Minimum volume size is 1GB. We'll use volume size explicitly set to 0 # to be a signal not to create this volume if 'volume_size' not in volume or int(volume['volume_size']) > 0: bdm[volume['device_name']] = create_block_device(module, ec2, volume) params['block_device_map'] = bdm res = ec2.run_instances(**params) except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) instids = [ i.id for i in res.instances ] while True: try: res.connection.get_all_instances(instids) break except boto.exception.EC2ResponseError, e: if "InvalidInstanceID.NotFound" in str(e): # there's a race between start and get an instance continue else: module.fail_json(msg = str(e)) if instance_tags: try: ec2.create_tags(instids, instance_tags) except boto.exception.EC2ResponseError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) # wait here until the instances are up this_res = [] num_running = 0 wait_timeout = time.time() + wait_timeout while wait_timeout > time.time() and num_running < len(instids): res_list = res.connection.get_all_instances(instids) if len(res_list) > 0: this_res = res_list[0] num_running = len([ i for i in this_res.instances if i.state=='running' ]) else: # got a bad response of some sort, possibly due to # stale/cached data. 
Wait a second and then try again time.sleep(1) continue if wait and num_running < len(instids): time.sleep(5) else: break if wait and wait_timeout <= time.time(): # waiting took too long module.fail_json(msg = "wait for instances running timeout on %s" % time.asctime()) for inst in this_res.instances: running_instances.append(inst) instance_dict_array = [] created_instance_ids = [] for inst in running_instances: d = get_instance_info(inst) created_instance_ids.append(inst.id) instance_dict_array.append(d) return (instance_dict_array, created_instance_ids, changed) def terminate_instances(module, ec2, instance_ids): """ Terminates a list of instances module: Ansible module object ec2: authenticated ec2 connection object termination_list: a list of instances to terminate in the form of [ {id: }, ..] Returns a dictionary of instance information about the instances terminated. If the instance to be terminated is running "changed" will be set to False. """ # Whether to wait for termination to complete before returning wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) changed = False instance_dict_array = [] if not isinstance(instance_ids, list) or len(instance_ids) < 1: module.fail_json(msg='instance_ids should be a list of instances, aborting') terminated_instance_ids = [] for res in ec2.get_all_instances(instance_ids): for inst in res.instances: if inst.state == 'running': terminated_instance_ids.append(inst.id) instance_dict_array.append(get_instance_info(inst)) try: ec2.terminate_instances([inst.id]) except EC2ResponseError, e: module.fail_json(msg='Unable to terminate instance {0}, error: {1}'.format(inst.id, e)) changed = True # wait here until the instances are 'terminated' if wait: num_terminated = 0 wait_timeout = time.time() + wait_timeout while wait_timeout > time.time() and num_terminated < len(terminated_instance_ids): response = ec2.get_all_instances( \ instance_ids=terminated_instance_ids, \ filters={'instance-state-name':'terminated'}) try: num_terminated = len(response.pop().instances) except Exception, e: # got a bad response of some sort, possibly due to # stale/cached data. Wait a second and then try again time.sleep(1) continue if num_terminated < len(terminated_instance_ids): time.sleep(5) # waiting took too long if wait_timeout < time.time() and num_terminated < len(terminated_instance_ids): module.fail_json(msg = "wait for instance termination timeout on %s" % time.asctime()) return (changed, instance_dict_array, terminated_instance_ids) def startstop_instances(module, ec2, instance_ids): """ Starts or stops a list of existing instances module: Ansible module object ec2: authenticated ec2 connection object instance_ids: The list of instances to start in the form of [ {id: }, ..] Returns a dictionary of instance information about the instances started. If the instance was not able to change state, "changed" will be set to False. 
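    Returns a tuple of (changed, instance_dict_array, ids of the instances
    observed reaching the desired state).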
""" wait = module.params.get('wait') wait_timeout = int(module.params.get('wait_timeout')) changed = False instance_dict_array = [] if not isinstance(instance_ids, list) or len(instance_ids) < 1: module.fail_json(msg='instance_ids should be a list of instances, aborting') dest_state = module.params.get('state') dest_state_ec2 = 'stopped' if dest_state == 'stopped' else 'running' # Check that our instances are not in the state we want to take them to # and change them to our desired state running_instances_array = [] for res in ec2.get_all_instances(instance_ids): for inst in res.instances: if not inst.state == dest_state_ec2: instance_dict_array.append(get_instance_info(inst)) try: if dest_state == 'running': inst.start() else: inst.stop() except EC2ResponseError, e: module.fail_json(msg='Unable to change state for instance {0}, error: {1}'.format(inst.id, e)) changed = True ## Wait for all the instances to finish starting or stopping instids = [ i.id for i in res.instances ] this_res = [] num_running = 0 wait_timeout = time.time() + wait_timeout while wait_timeout > time.time() and num_running < len(instids): res_list = res.connection.get_all_instances(instids) if len(res_list) > 0: this_res = res_list[0] num_running = len([ i for i in this_res.instances if i.state == dest_state_ec2 ]) else: # got a bad response of some sort, possibly due to # stale/cached data. Wait a second and then try again time.sleep(1) continue if wait and num_running < len(instids): time.sleep(5) else: break if wait and wait_timeout <= time.time(): # waiting took too long module.fail_json(msg = "wait for instances running timeout on %s" % time.asctime()) for inst in this_res.instances: running_instances_array.append(inst.id) return (changed, instance_dict_array, running_instances_array) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( key_name = dict(aliases = ['keypair']), id = dict(), group = dict(type='list'), group_id = dict(type='list'), zone = dict(aliases=['aws_zone', 'ec2_zone']), instance_type = dict(aliases=['type']), image = dict(), kernel = dict(), count = dict(default='1'), monitoring = dict(type='bool', default=False), ramdisk = dict(), wait = dict(type='bool', default=False), wait_timeout = dict(default=300), placement_group = dict(), user_data = dict(), instance_tags = dict(type='dict'), vpc_subnet_id = dict(), assign_public_ip = dict(type='bool', default=False), private_ip = dict(), instance_profile_name = dict(), instance_ids = dict(type='list'), state = dict(default='present'), exact_count = dict(type='int', default=None), count_tag = dict(), volumes = dict(type='list'), ) ) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive = [ ['exact_count', 'count'], ['exact_count', 'state'], ['exact_count', 'instance_ids'] ], ) ec2 = ec2_connect(module) tagged_instances = [] if module.params.get('state') == 'absent': instance_ids = module.params.get('instance_ids') if not isinstance(instance_ids, list): module.fail_json(msg='termination_list needs to be a list of instances to terminate') (changed, instance_dict_array, new_instance_ids) = terminate_instances(module, ec2, instance_ids) elif module.params.get('state') == 'running' or module.params.get('state') == 'stopped': instance_ids = module.params.get('instance_ids') if not isinstance(instance_ids, list): module.fail_json(msg='running list needs to be a list of instances to run: %s' % instance_ids) (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids) elif 
module.params.get('state') == 'present':

        # Changed is always set to true when provisioning new instances
        if not module.params.get('image'):
            module.fail_json(msg='image parameter is required for new instance')

        if module.params.get('exact_count') is None:
            (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2)
        else:
            (tagged_instances, instance_dict_array, new_instance_ids, changed) = enforce_count(module, ec2)

    module.exit_json(changed=changed, instance_ids=new_instance_ids, instances=instance_dict_array, tagged_instances=tagged_instances)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *

main()
ansible-1.5.4/library/cloud/rax_dns0000664000000000000000000001267312316627017016007 0ustar rootroot#!/usr/bin/python -tt
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: rax_dns
short_description: Manage domains on Rackspace Cloud DNS
description:
  - Manage domains on Rackspace Cloud DNS
version_added: 1.5
options:
  api_key:
    description:
      - Rackspace API key (overrides C(credentials))
  comment:
    description:
      - Brief description of the domain. Maximum length of 160 characters
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if C(api_key) and
        C(username) are provided)
    default: null
    aliases: ['creds_file']
  email:
    description:
      - Email address of the domain administrator
  name:
    description:
      - Domain name to create
  state:
    description:
      - Indicate desired state of the resource
    choices: ['present', 'absent']
    default: present
  ttl:
    description:
      - Time to live of domain in seconds
    default: 3600
  username:
    description:
      - Rackspace username (overrides C(credentials))
requirements: [ "pyrax" ]
author: Matt Martz
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
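# An illustrative sketch only (values are placeholders, and the update
# behaviour is read from the rax_dns() function below): re-running with a
# changed ttl, comment, or email patches just those fields on the existing
# domain rather than recreating it:
#
#   - local_action:
#       module: rax_dns
#       name: example.org
#       email: hostmaster@example.org
#       ttl: 7200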
''' EXAMPLES = ''' - name: Create domain hosts: all gather_facts: False tasks: - name: Domain create request local_action: module: rax_dns credentials: ~/.raxpub name: example.org email: admin@example.org register: rax_dns ''' import sys import os from types import NoneType try: import pyrax except ImportError: print("failed=True msg='pyrax required for this module'") sys.exit(1) NON_CALLABLES = (basestring, bool, dict, int, list, NoneType) def to_dict(obj): instance = {} for key in dir(obj): value = getattr(obj, key) if (isinstance(value, NON_CALLABLES) and not key.startswith('_')): instance[key] = value return instance def rax_dns(module, comment, email, name, state, ttl): changed = False dns = pyrax.cloud_dns if state == 'present': if not email: module.fail_json(msg='An "email" attribute is required for ' 'creating a domain') try: domain = dns.find(name=name) except pyrax.exceptions.NoUniqueMatch, e: module.fail_json(msg='%s' % e.message) except pyrax.exceptions.NotFound: try: domain = dns.create(name=name, emailAddress=email, ttl=ttl, comment=comment) changed = True except Exception, e: module.fail_json(msg='%s' % e.message) update = {} if comment != getattr(domain, 'comment', None): update['comment'] = comment if ttl != getattr(domain, 'ttl', None): update['ttl'] = ttl if email != getattr(domain, 'emailAddress', None): update['emailAddress'] = email if update: try: domain.update(**update) changed = True domain.get() except Exception, e: module.fail_json('%s' % e.message) elif state == 'absent': try: domain = dns.find(name=name) except pyrax.exceptions.NotFound: domain = {} pass except Exception, e: module.fail_json(msg='%s' % e.message) if domain: try: domain.delete() changed = True except Exception, e: module.fail_json(msg='%s' % e.message) module.exit_json(changed=changed, domain=to_dict(domain)) def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( comment=dict(), email=dict(), name=dict(), state=dict(default='present', choices=['present', 'absent']), ttl=dict(type='int', default=3600), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) comment = module.params.get('comment') email = module.params.get('email') name = module.params.get('name') state = module.params.get('state') ttl = module.params.get('ttl') setup_rax_module(module, pyrax) rax_dns(module, comment, email, name, state, ttl) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/ec2_elb_lb0000664000000000000000000004421412316627017016315 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = """ --- module: ec2_elb_lb description: Creates or destroys Amazon ELB. short_description: Creates or destroys Amazon ELB. - Returns information about the load balancer. - Will be marked changed when called only if state is changed. 
version_added: "1.5" requirements: [ "boto" ] author: Jim Dalton options: state: description: - Create or destroy the ELB required: true name: description: - The name of the ELB required: true listeners: description: - List of ports/protocols for this ELB to listen on (see example) required: false purge_listeners: description: - Purge existing listeners on ELB that are not found in listeners required: false default: true zones: description: - List of availability zones to enable on this ELB required: false purge_zones: description: - Purge existing availability zones on ELB that are not found in zones required: false default: false health_check: description: - An associative array of health check configuration settigs (see example) require: false default: None aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['ec2_secret_key', 'secret_key'] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key'] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false aliases: ['aws_region', 'ec2_region'] validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" """ EXAMPLES = """ # Note: None of these examples set aws_access_key, aws_secret_key, or region. # It is assumed that their matching environment variables are set. # Basic provisioning example - local_action: module: ec2_elb_lb name: "test-please-delete" state: present zones: - us-east-1a - us-east-1d listeners: - protocol: http # options are http, https, ssl, tcp load_balancer_port: 80 instance_port: 80 - protocol: https load_balancer_port: 443 instance_protocol: http # optional, defaults to value of protocol setting instance_port: 80 # ssl certificate required for https or ssl ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert" # Configure a health check - local_action: module: ec2_elb_lb name: "test-please-delete" state: present zones: - us-east-1d listeners: - protocol: http load_balancer_port: 80 instance_port: 80 health_check: ping_protocol: http # options are http, https, ssl, tcp ping_port: 80 ping_path: "/index.html" # not required for tcp or ssl response_timeout: 5 # seconds interval: 30 # seconds unhealthy_threshold: 2 healthy_threshold: 10 # Ensure ELB is gone - local_action: module: ec2_elb_lb name: "test-please-delete" state: absent # Normally, this module will purge any listeners that exist on the ELB # but aren't specified in the listeners parameter. If purge_listeners is # false it leaves them alone - local_action: module: ec2_elb_lb name: "test-please-delete" state: present zones: - us-east-1a - us-east-1d listeners: - protocol: http load_balancer_port: 80 instance_port: 80 purge_listeners: no # Normally, this module will leave availability zones that are enabled # on the ELB alone. 
If purge_zones is true, then any extreneous zones # will be removed - local_action: module: ec2_elb_lb name: "test-please-delete" state: present zones: - us-east-1a - us-east-1d listeners: - protocol: http load_balancer_port: 80 instance_port: 80 purge_zones: yes """ import sys import os try: import boto import boto.ec2.elb from boto.ec2.elb.healthcheck import HealthCheck from boto.regioninfo import RegionInfo except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) class ElbManager(object): """Handles ELB creation and destruction""" def __init__(self, module, name, listeners=None, purge_listeners=None, zones=None, purge_zones=None, health_check=None, aws_access_key=None, aws_secret_key=None, region=None): self.module = module self.name = name self.listeners = listeners self.purge_listeners = purge_listeners self.zones = zones self.purge_zones = purge_zones self.health_check = health_check self.aws_access_key = aws_access_key self.aws_secret_key = aws_secret_key self.region = region self.changed = False self.status = 'gone' self.elb_conn = self._get_elb_connection() self.elb = self._get_elb() def ensure_ok(self): """Create the ELB""" if not self.elb: # Zones and listeners will be added at creation self._create_elb() else: self._set_zones() self._set_elb_listeners() self._set_health_check() def ensure_gone(self): """Destroy the ELB""" if self.elb: self._delete_elb() def get_info(self): if not self.elb: info = { 'name': self.name, 'status': self.status } else: info = { 'name': self.elb.name, 'dns_name': self.elb.dns_name, 'zones': self.elb.availability_zones, 'status': self.status } if self.elb.health_check: info['health_check'] = { 'target': self.elb.health_check.target, 'interval': self.elb.health_check.interval, 'timeout': self.elb.health_check.timeout, 'healthy_threshold': self.elb.health_check.healthy_threshold, 'unhealthy_threshold': self.elb.health_check.unhealthy_threshold, } if self.elb.listeners: info['listeners'] = [l.get_complex_tuple() for l in self.elb.listeners] elif self.status == 'created': # When creating a new ELB, listeners don't show in the # immediately returned result, so just include the # ones that were added info['listeners'] = [self._listener_as_tuple(l) for l in self.listeners] else: info['listeners'] = [] return info def _get_elb(self): elbs = self.elb_conn.get_all_load_balancers() for elb in elbs: if self.name == elb.name: self.status = 'ok' return elb def _get_elb_connection(self): try: endpoint = "elasticloadbalancing.%s.amazonaws.com" % self.region connect_region = RegionInfo(name=self.region, endpoint=endpoint) return boto.ec2.elb.ELBConnection(self.aws_access_key, self.aws_secret_key, region=connect_region) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=str(e)) def _delete_elb(self): # True if succeeds, exception raised if not result = self.elb_conn.delete_load_balancer(name=self.name) if result: self.changed = True self.status = 'deleted' def _create_elb(self): listeners = [self._listener_as_tuple(l) for l in self.listeners] self.elb = self.elb_conn.create_load_balancer(name=self.name, zones=self.zones, complex_listeners=listeners) if self.elb: self.changed = True self.status = 'created' def _create_elb_listeners(self, listeners): """Takes a list of listener tuples and creates them""" # True if succeeds, exception raised if not self.changed = self.elb_conn.create_load_balancer_listeners(self.name, complex_listeners=listeners) def _delete_elb_listeners(self, listeners): """Takes a list of listener 
tuples and deletes them from the elb""" ports = [l[0] for l in listeners] # True if succeeds, exception raised if not self.changed = self.elb_conn.delete_load_balancer_listeners(self.name, ports) def _set_elb_listeners(self): """ Creates listeners specified by self.listeners; overwrites existing listeners on these ports; removes extraneous listeners """ listeners_to_add = [] listeners_to_remove = [] listeners_to_keep = [] # Check for any listeners we need to create or overwrite for listener in self.listeners: listener_as_tuple = self._listener_as_tuple(listener) # First we loop through existing listeners to see if one is # already specified for this port existing_listener_found = None for existing_listener in self.elb.listeners: # Since ELB allows only one listener on each incoming port, a # single match on the incoming port is all we're looking for if existing_listener[0] == listener['load_balancer_port']: existing_listener_found = existing_listener.get_complex_tuple() break if existing_listener_found: # Does it match exactly? if listener_as_tuple != existing_listener_found: # The ports are the same but something else is different, # so we'll remove the existing one and add the new one listeners_to_remove.append(existing_listener_found) listeners_to_add.append(listener_as_tuple) else: # We already have this listener, so we're going to keep it listeners_to_keep.append(existing_listener_found) else: # We didn't find an existing listener, so just add the new one listeners_to_add.append(listener_as_tuple) # Check for any extraneous listeners we need to remove, if desired if self.purge_listeners: for existing_listener in self.elb.listeners: existing_listener_tuple = existing_listener.get_complex_tuple() if existing_listener_tuple in listeners_to_remove: # Already queued for removal continue if existing_listener_tuple in listeners_to_keep: # Keep this one around continue # Since we're not already removing it and we don't need to keep # it, let's get rid of it listeners_to_remove.append(existing_listener_tuple) if listeners_to_remove: self._delete_elb_listeners(listeners_to_remove) if listeners_to_add: self._create_elb_listeners(listeners_to_add) def _listener_as_tuple(self, listener): """Formats listener as a 4- or 5-tuple, in the order specified by the ELB API""" # N.B. string manipulations on protocols below (str(), upper()) are to # ensure format matches output from ELB API listener_list = [ listener['load_balancer_port'], listener['instance_port'], str(listener['protocol'].upper()), ] # Instance protocol is not required by ELB API; it defaults to match # load balancer protocol.
We'll mimic that behavior here if 'instance_protocol' in listener: listener_list.append(str(listener['instance_protocol'].upper())) else: listener_list.append(str(listener['protocol'].upper())) if 'ssl_certificate_id' in listener: listener_list.append(str(listener['ssl_certificate_id'])) return tuple(listener_list) def _enable_zones(self, zones): self.elb_conn.enable_availability_zones(self.name, zones) self.changed = True def _disable_zones(self, zones): self.elb_conn.disable_availability_zones(self.name, zones) self.changed = True def _set_zones(self): """Determine which zones need to be enabled or disabled on the ELB""" if self.purge_zones: zones_to_disable = list(set(self.elb.availability_zones) - set(self.zones)) zones_to_enable = list(set(self.zones) - set(self.elb.availability_zones)) else: zones_to_disable = None zones_to_enable = list(set(self.zones) - set(self.elb.availability_zones)) if zones_to_enable: self._enable_zones(zones_to_enable) # N.B. This must come second, in case it would have removed all zones if zones_to_disable: self._disable_zones(zones_to_disable) def _set_health_check(self): """Set health check values on ELB as needed""" if self.health_check: # This just makes it easier to compare each of the attributes # and look for changes. Keys are attributes of the current # health_check; values are desired values of new health_check health_check_config = { "target": self._get_health_check_target(), "timeout": self.health_check['response_timeout'], "interval": self.health_check['interval'], "unhealthy_threshold": self.health_check['unhealthy_threshold'], "healthy_threshold": self.health_check['healthy_threshold'], } update_health_check = False # The health_check attribute is *not* set on newly created # ELBs! So we have to create our own. 
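# The loop below compares each configured health check attribute against the
# desired value and only calls configure_health_check when a field actually
# changed, which keeps repeated playbook runs idempotent.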
if not self.elb.health_check: self.elb.health_check = HealthCheck() for attr, desired_value in health_check_config.iteritems(): if getattr(self.elb.health_check, attr) != desired_value: setattr(self.elb.health_check, attr, desired_value) update_health_check = True if update_health_check: self.elb.configure_health_check(self.elb.health_check) self.changed = True def _get_health_check_target(self): """Compose target string from healthcheck parameters""" protocol = self.health_check['ping_protocol'].upper() path = "" if protocol in ['HTTP', 'HTTPS'] and 'ping_path' in self.health_check: path = self.health_check['ping_path'] return "%s:%s%s" % (protocol, self.health_check['ping_port'], path) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( state={'required': True, 'choices': ['present', 'absent']}, name={'required': True}, listeners={'default': None, 'required': False, 'type': 'list'}, purge_listeners={'default': True, 'required': False, 'choices': BOOLEANS, 'type': 'bool'}, zones={'default': None, 'required': False, 'type': 'list'}, purge_zones={'default': False, 'required': False, 'choices': BOOLEANS, 'type': 'bool'}, health_check={'default': None, 'required': False, 'type': 'dict'}, ) ) module = AnsibleModule( argument_spec=argument_spec, ) # def get_ec2_creds(module): # return ec2_url, ec2_access_key, ec2_secret_key, region ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) name = module.params['name'] state = module.params['state'] listeners = module.params['listeners'] purge_listeners = module.params['purge_listeners'] zones = module.params['zones'] purge_zones = module.params['purge_zones'] health_check = module.params['health_check'] if state == 'present' and not listeners: module.fail_json(msg="At least one port is required for ELB creation") if state == 'present' and not zones: module.fail_json(msg="At least one availability zone is required for ELB creation") elb_man = ElbManager(module, name, listeners, purge_listeners, zones, purge_zones, health_check, aws_access_key, aws_secret_key, region=region) if state == 'present': elb_man.ensure_ok() elif state == 'absent': elb_man.ensure_gone() ansible_facts = {'ec2_elb': 'info'} ec2_facts_result = dict(changed=elb_man.changed, elb=elb_man.get_info(), ansible_facts=ansible_facts) module.exit_json(**ec2_facts_result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/elasticache0000664000000000000000000004644212316627017016617 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = """ --- module: elasticache short_description: Manage cache clusters in Amazon Elasticache. description: - Manage cache clusters in Amazon Elasticache. - Returns information about the specified cache cluster. 
version_added: "1.4" requirements: [ "boto" ] author: Jim Dalton options: state: description: - C(absent) or C(present) are idempotent actions that will create or destroy a cache cluster as needed. C(rebooted) will reboot the cluster, resulting in a momentary outage. choices: ['present', 'absent', 'rebooted'] required: true name: description: - The cache cluster identifier required: true engine: description: - Name of the cache engine to be used (memcached or redis) required: false default: memcached cache_engine_version: description: - The version number of the cache engine required: false default: 1.4.14 node_type: description: - The compute and memory capacity of the nodes in the cache cluster required: false default: cache.m1.small num_nodes: description: - The initial number of cache nodes that the cache cluster will have required: false cache_port: description: - The port number on which each of the cache nodes will accept connections required: false default: 11211 cache_security_groups: description: - A list of cache security group names to associate with this cache cluster required: false default: ['default'] zone: description: - The EC2 Availability Zone in which the cache cluster will be created required: false default: None wait: description: - Wait for cache cluster result before returning required: false default: yes choices: [ "yes", "no" ] hard_modify: description: - Whether to destroy and recreate an existing cache cluster if necessary in order to modify its state required: false default: no choices: [ "yes", "no" ] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: None aliases: ['ec2_secret_key', 'secret_key'] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key'] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false aliases: ['aws_region', 'ec2_region'] """ EXAMPLES = """ # Note: None of these examples set aws_access_key, aws_secret_key, or region. # It is assumed that their matching environment variables are set. 
# Basic example - local_action: module: elasticache name: "test-please-delete" state: present engine: memcached cache_engine_version: 1.4.14 node_type: cache.m1.small num_nodes: 1 cache_port: 11211 cache_security_groups: - default zone: us-east-1d # Ensure cache cluster is gone - local_action: module: elasticache name: "test-please-delete" state: absent # Reboot cache cluster - local_action: module: elasticache name: "test-please-delete" state: rebooted """ import sys import os import time try: import boto from boto.elasticache.layer1 import ElastiCacheConnection from boto.regioninfo import RegionInfo except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) class ElastiCacheManager(object): """Handles elasticache creation and destruction""" EXIST_STATUSES = ['available', 'creating', 'rebooting', 'modifying'] def __init__(self, module, name, engine, cache_engine_version, node_type, num_nodes, cache_port, cache_security_groups, zone, wait, hard_modify, aws_access_key, aws_secret_key, region): self.module = module self.name = name self.engine = engine self.cache_engine_version = cache_engine_version self.node_type = node_type self.num_nodes = num_nodes self.cache_port = cache_port self.cache_security_groups = cache_security_groups self.zone = zone self.wait = wait self.hard_modify = hard_modify self.aws_access_key = aws_access_key self.aws_secret_key = aws_secret_key self.region = region self.changed = False self.data = None self.status = 'gone' self.conn = self._get_elasticache_connection() self._refresh_data() def ensure_present(self): """Ensure cache cluster exists or create it if not""" if self.exists(): self.sync() else: self.create() def ensure_absent(self): """Ensure cache cluster is gone or delete it if not""" self.delete() def ensure_rebooted(self): """Ensure cache cluster is rebooted""" self.reboot() def exists(self): """Check if cache cluster exists""" return self.status in self.EXIST_STATUSES def create(self): """Create an ElastiCache cluster""" if self.status == 'available': return if self.status in ['creating', 'rebooting', 'modifying']: if self.wait: self._wait_for_status('available') return if self.status == 'deleting': if self.wait: self._wait_for_status('gone') else: msg = "'%s' is currently deleting. Cannot create." self.module.fail_json(msg=msg % self.name) try: response = self.conn.create_cache_cluster(cache_cluster_id=self.name, num_cache_nodes=self.num_nodes, cache_node_type=self.node_type, engine=self.engine, engine_version=self.cache_engine_version, cache_security_group_names=self.cache_security_groups, preferred_availability_zone=self.zone, port=self.cache_port) except boto.exception.BotoServerError, e: self.module.fail_json(msg=e.message) cache_cluster_data = response['CreateCacheClusterResponse']['CreateCacheClusterResult']['CacheCluster'] self._refresh_data(cache_cluster_data) self.changed = True if self.wait: self._wait_for_status('available') return True def delete(self): """Destroy an ElastiCache cluster""" if self.status == 'gone': return if self.status == 'deleting': if self.wait: self._wait_for_status('gone') return if self.status in ['creating', 'rebooting', 'modifying']: if self.wait: self._wait_for_status('available') else: msg = "'%s' is currently %s. Cannot delete."
self.module.fail_json(msg=msg % (self.name, self.status)) try: response = self.conn.delete_cache_cluster(cache_cluster_id=self.name) except boto.exception.BotoServerError, e: self.module.fail_json(msg=e.message) cache_cluster_data = response['DeleteCacheClusterResponse']['DeleteCacheClusterResult']['CacheCluster'] self._refresh_data(cache_cluster_data) self.changed = True if self.wait: self._wait_for_status('gone') def sync(self): """Sync settings to cluster if required""" if not self.exists(): msg = "'%s' is %s. Cannot sync." self.module.fail_json(msg=msg % (self.name, self.status)) if self.status in ['creating', 'rebooting', 'modifying']: if self.wait: self._wait_for_status('available') else: # Cluster can only be synced if available. If we can't wait # for this, then just be done. return if self._requires_destroy_and_create(): if not self.hard_modify: msg = "'%s' requires destructive modification. 'hard_modify' must be set to true to proceed." self.module.fail_json(msg=msg % self.name) if not self.wait: msg = "'%s' requires destructive modification. 'wait' must be set to true." self.module.fail_json(msg=msg % self.name) self.delete() self.create() return if self._requires_modification(): self.modify() def modify(self): """Modify the cache cluster. Note it's only possible to modify a few select options.""" nodes_to_remove = self._get_nodes_to_remove() try: response = self.conn.modify_cache_cluster(cache_cluster_id=self.name, num_cache_nodes=self.num_nodes, cache_node_ids_to_remove=nodes_to_remove, cache_security_group_names=self.cache_security_groups, apply_immediately=True, engine_version=self.cache_engine_version) except boto.exception.BotoServerError, e: self.module.fail_json(msg=e.message) cache_cluster_data = response['ModifyCacheClusterResponse']['ModifyCacheClusterResult']['CacheCluster'] self._refresh_data(cache_cluster_data) self.changed = True if self.wait: self._wait_for_status('available') def reboot(self): """Reboot the cache cluster""" if not self.exists(): msg = "'%s' is %s. Cannot reboot." self.module.fail_json(msg=msg % (self.name, self.status)) if self.status == 'rebooting': return if self.status in ['creating', 'modifying']: if self.wait: self._wait_for_status('available') else: msg = "'%s' is currently %s. Cannot reboot." self.module.fail_json(msg=msg % (self.name, self.status)) # Collect ALL nodes for reboot cache_node_ids = [cn['CacheNodeId'] for cn in self.data['CacheNodes']] try: response = self.conn.reboot_cache_cluster(cache_cluster_id=self.name, cache_node_ids_to_reboot=cache_node_ids) except boto.exception.BotoServerError, e: self.module.fail_json(msg=e.message) cache_cluster_data = response['RebootCacheClusterResponse']['RebootCacheClusterResult']['CacheCluster'] self._refresh_data(cache_cluster_data) self.changed = True if self.wait: self._wait_for_status('available') def get_info(self): """Return basic info about the cache cluster""" info = { 'name': self.name, 'status': self.status } if self.data: info['data'] = self.data return info def _wait_for_status(self, awaited_status): """Wait for status to change from present status to awaited_status""" status_map = { 'creating': 'available', 'rebooting': 'available', 'modifying': 'available', 'deleting': 'gone' } if status_map[self.status] != awaited_status: msg = "Invalid awaited status. '%s' cannot transition to '%s'" self.module.fail_json(msg=msg % (self.status, awaited_status)) if awaited_status not in set(status_map.values()): msg = "'%s' is not a valid awaited status." 
self.module.fail_json(msg=msg % awaited_status) while True: time.sleep(1) self._refresh_data() if self.status == awaited_status: break def _requires_modification(self): """Check if cluster requires (nondestructive) modification""" # Check modifiable data attributes modifiable_data = { 'NumCacheNodes': self.num_nodes, 'EngineVersion': self.cache_engine_version } for key, value in modifiable_data.iteritems(): if self.data[key] != value: return True # Check security groups cache_security_groups = [] for sg in self.data['CacheSecurityGroups']: cache_security_groups.append(sg['CacheSecurityGroupName']) if set(cache_security_groups) - set(self.cache_security_groups): return True return False def _requires_destroy_and_create(self): """ Check whether a destroy and create is required to synchronize cluster. """ unmodifiable_data = { 'node_type': self.data['CacheNodeType'], 'engine': self.data['Engine'], 'cache_port': self._get_port() } # Only check for modifications if zone is specified if self.zone is not None: unmodifiable_data['zone'] = self.data['PreferredAvailabilityZone'] for key, value in unmodifiable_data.iteritems(): if getattr(self, key) != value: return True return False def _get_elasticache_connection(self): """Get an elasticache connection""" try: endpoint = "elasticache.%s.amazonaws.com" % self.region connect_region = RegionInfo(name=self.region, endpoint=endpoint) return ElastiCacheConnection(aws_access_key_id=self.aws_access_key, aws_secret_access_key=self.aws_secret_key, region=connect_region) except boto.exception.NoAuthHandlerFound, e: self.module.fail_json(msg=e.message) def _get_port(self): """Get the port. Where this information is retrieved from is engine dependent.""" if self.data['Engine'] == 'memcached': return self.data['ConfigurationEndpoint']['Port'] elif self.data['Engine'] == 'redis': # Redis only supports a single node (presently) so just use # the first and only return self.data['CacheNodes'][0]['Endpoint']['Port'] def _refresh_data(self, cache_cluster_data=None): """Refresh data about this cache cluster""" if cache_cluster_data is None: try: response = self.conn.describe_cache_clusters(cache_cluster_id=self.name, show_cache_node_info=True) except boto.exception.BotoServerError: self.data = None self.status = 'gone' return cache_cluster_data = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters'][0] self.data = cache_cluster_data self.status = self.data['CacheClusterStatus'] # The documentation for elasticache lies -- status on rebooting is set # to 'rebooting cache cluster nodes' instead of 'rebooting'. Fix it # here to make status checks etc. more sane. if self.status == 'rebooting cache cluster nodes': self.status = 'rebooting' def _get_nodes_to_remove(self): """If there are nodes to remove, it figures out which need to be removed""" num_nodes_to_remove = self.data['NumCacheNodes'] - self.num_nodes if num_nodes_to_remove <= 0: return None if not self.hard_modify: msg = "'%s' requires removal of cache nodes. 'hard_modify' must be set to true to proceed." 
self.module.fail_json(msg=msg % self.name) cache_node_ids = [cn['CacheNodeId'] for cn in self.data['CacheNodes']] return cache_node_ids[-num_nodes_to_remove:] def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( state={'required': True, 'choices': ['present', 'absent', 'rebooted']}, name={'required': True}, engine={'required': False, 'default': 'memcached'}, cache_engine_version={'required': False, 'default': '1.4.14'}, node_type={'required': False, 'default': 'cache.m1.small'}, num_nodes={'required': False, 'default': None, 'type': 'int'}, cache_port={'required': False, 'default': 11211, 'type': 'int'}, cache_security_groups={'required': False, 'default': ['default'], 'type': 'list'}, zone={'required': False, 'default': None}, wait={'required': False, 'choices': BOOLEANS, 'default': True}, hard_modify={'required': False, 'choices': BOOLEANS, 'default': False} ) ) module = AnsibleModule( argument_spec=argument_spec, ) ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) name = module.params['name'] state = module.params['state'] engine = module.params['engine'] cache_engine_version = module.params['cache_engine_version'] node_type = module.params['node_type'] num_nodes = module.params['num_nodes'] cache_port = module.params['cache_port'] cache_security_groups = module.params['cache_security_groups'] zone = module.params['zone'] wait = module.params['wait'] hard_modify = module.params['hard_modify'] if state == 'present' and not num_nodes: module.fail_json(msg="'num_nodes' is a required parameter. Please specify num_nodes > 0") if not region: module.fail_json(msg=str("Either region or EC2_REGION environment variable must be set.")) elasticache_manager = ElastiCacheManager(module, name, engine, cache_engine_version, node_type, num_nodes, cache_port, cache_security_groups, zone, wait, hard_modify, aws_access_key, aws_secret_key, region) if state == 'present': elasticache_manager.ensure_present() elif state == 'absent': elasticache_manager.ensure_absent() elif state == 'rebooted': elasticache_manager.ensure_rebooted() facts_result = dict(changed=elasticache_manager.changed, elasticache=elasticache_manager.get_info()) module.exit_json(**facts_result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/quantum_subnet0000664000000000000000000002340412316627017017415 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . 
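# Overview of the client setup implemented below (a sketch only, mirroring
# _get_ksclient/_get_endpoint/_get_neutron_client; the credentials shown are
# placeholders): authenticate against keystone, resolve the "network" service
# endpoint from the catalog, then build a v2.0 neutron/quantum client from
# the resulting token and endpoint:
#
#   ks = ksclient.Client(username='admin', password='secret',
#                        tenant_name='admin',
#                        auth_url='http://127.0.0.1:35357/v2.0/')
#   endpoint = ks.service_catalog.url_for(service_type='network',
#                                         endpoint_type='publicURL')
#   neutron = client.Client('2.0', token=ks.auth_token, endpoint_url=endpoint)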
try: try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='quantumclient (or neutronclient) and keystoneclient are required'") DOCUMENTATION = ''' --- module: quantum_subnet version_added: "1.2" short_description: Add/Remove subnet from a network description: - Add or remove a subnet to/from an OpenStack network options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: True login_tenant_name: description: - The tenant name of the login user required: true default: True auth_url: description: - The keystone URL for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present network_name: description: - Name of the network to which the subnet should be attached required: true default: None cidr: description: - The CIDR representation of the subnet that should be created required: true default: None tenant_name: description: - The name of the tenant for whom the subnet should be created required: false default: None ip_version: description: - The IP version of the subnet 4 or 6 required: false default: 4 enable_dhcp: description: - Whether DHCP should be enabled for this subnet. required: false default: true gateway_ip: description: - The ip that would be assigned to the gateway for this subnet required: false default: None dns_nameservers: description: - DNS nameservers for this subnet, comma-separated required: false default: None allocation_pool_start: description: - From the subnet pool the starting address from which the IP should be allocated required: false default: None allocation_pool_end: description: - From the subnet pool the last IP that should be assigned to the virtual machines required: false default: None requirements: ["quantumclient", "neutronclient", "keystoneclient"] ''' EXAMPLES = ''' # Create a subnet for a tenant with the specified CIDR - quantum_subnet: state=present login_username=admin login_password=admin login_tenant_name=admin tenant_name=tenant1 network_name=network1 name=net1subnet cidr=192.168.0.0/24 ''' _os_keystone = None _os_tenant_id = None _os_network_id = None def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to the keystone: %s" %e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = " Error in connecting to neutron: %s" % e.message) return neutron def _set_tenant_id(module): global
_os_tenant_id if not module.params['tenant_name']: tenant_name = module.params['login_tenant_name'] else: tenant_name = module.params['tenant_name'] for tenant in _os_keystone.tenants.list(): if tenant.name == tenant_name: _os_tenant_id = tenant.id break if not _os_tenant_id: module.fail_json(msg = "The tenant id cannot be found, please check the parameters") def _get_net_id(neutron, module): kwargs = { 'tenant_id': _os_tenant_id, 'name': module.params['network_name'], } try: networks = neutron.list_networks(**kwargs) except Exception, e: module.fail_json(msg = "Error in listing neutron networks: %s" % e.message) if not networks['networks']: return None return networks['networks'][0]['id'] def _get_subnet_id(module, neutron): global _os_network_id subnet_id = None _os_network_id = _get_net_id(neutron, module) if not _os_network_id: module.fail_json(msg = "network id of network not found.") else: kwargs = { 'tenant_id': _os_tenant_id, 'name': module.params['name'], } try: subnets = neutron.list_subnets(**kwargs) except Exception, e: module.fail_json( msg = " Error in getting the subnet list:%s " % e.message) if not subnets['subnets']: return None return subnets['subnets'][0]['id'] def _create_subnet(module, neutron): neutron.format = 'json' subnet = { 'name': module.params['name'], 'ip_version': module.params['ip_version'], 'enable_dhcp': module.params['enable_dhcp'], 'tenant_id': _os_tenant_id, 'gateway_ip': module.params['gateway_ip'], 'dns_nameservers': module.params['dns_nameservers'], 'network_id': _os_network_id, 'cidr': module.params['cidr'], } if module.params['allocation_pool_start'] and module.params['allocation_pool_end']: allocation_pools = [ { 'start' : module.params['allocation_pool_start'], 'end' : module.params['allocation_pool_end'] } ] subnet.update({'allocation_pools': allocation_pools}) if not module.params['gateway_ip']: subnet.pop('gateway_ip') if module.params['dns_nameservers']: subnet['dns_nameservers'] = module.params['dns_nameservers'].split(',') else: subnet.pop('dns_nameservers') try: new_subnet = neutron.create_subnet(dict(subnet=subnet)) except Exception, e: module.fail_json(msg = "Failure in creating subnet: %s" % e.message) return new_subnet['subnet']['id'] def _delete_subnet(module, neutron, subnet_id): try: neutron.delete_subnet(subnet_id) except Exception, e: module.fail_json( msg = "Error in deleting subnet: %s" % e.message) return True def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = dict(required=True), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), name = dict(required=True), network_name = dict(required=True), cidr = dict(required=True), tenant_name = dict(default=None), state = dict(default='present', choices=['absent', 'present']), ip_version = dict(default='4', choices=['4', '6']), enable_dhcp = dict(default='true', choices=BOOLEANS), gateway_ip = dict(default=None), dns_nameservers = dict(default=None), allocation_pool_start = dict(default=None), allocation_pool_end = dict(default=None), ), ) neutron = _get_neutron_client(module, module.params) _set_tenant_id(module) if module.params['state'] == 'present': subnet_id = _get_subnet_id(module, neutron) if not subnet_id: subnet_id = _create_subnet(module, neutron) module.exit_json(changed = True, result = "Created" , id = subnet_id) else: module.exit_json(changed = False, result = "success" , id = subnet_id) else: subnet_id = _get_subnet_id(module, neutron)
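# state=absent from here on: a subnet that no longer exists is treated as
# already deleted (changed=False); otherwise it is removed and changed=True
# is reported.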
if not subnet_id: module.exit_json(changed = False, result = "success") else: _delete_subnet(module, neutron, subnet_id) module.exit_json(changed = True, result = "deleted") # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/quantum_router_interface0000664000000000000000000002063412316627017021457 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see <http://www.gnu.org/licenses/>. try: try: from neutronclient.neutron import client except ImportError: from quantumclient.quantum import client from keystoneclient.v2_0 import client as ksclient except ImportError: print("failed=True msg='quantumclient (or neutronclient) and keystone client are required'") DOCUMENTATION = ''' --- module: quantum_router_interface version_added: "1.2" short_description: Attach/Detach a subnet's interface to a router description: - Attach/Detach a subnet interface to a router, to provide a gateway for the subnet. options: login_username: description: - login username to authenticate to keystone required: true default: admin login_password: description: - Password of login user required: true default: 'yes' login_tenant_name: description: - The tenant name of the login user required: true default: 'yes' auth_url: description: - The keystone URL for authentication required: false default: 'http://127.0.0.1:35357/v2.0/' region_name: description: - Name of the region required: false default: None state: description: - Indicate desired state of the resource choices: ['present', 'absent'] default: present router_name: description: - Name of the router to which the subnet's interface should be attached. required: true default: None subnet_name: description: - Name of the subnet whose interface should be attached to the router. required: true default: None tenant_name: description: - Name of the tenant whose subnet has to be attached.
required: false default: None requirements: ["quantumclient", "keystoneclient"] ''' EXAMPLES = ''' # Attach tenant1's subnet to the external router - quantum_router_interface: state=present login_username=admin login_password=admin login_tenant_name=admin tenant_name=tenant1 router_name=external_route subnet_name=t1subnet ''' _os_keystone = None _os_tenant_id = None def _get_ksclient(module, kwargs): try: kclient = ksclient.Client(username=kwargs.get('login_username'), password=kwargs.get('login_password'), tenant_name=kwargs.get('login_tenant_name'), auth_url=kwargs.get('auth_url')) except Exception, e: module.fail_json(msg = "Error authenticating to the keystone: %s " % e.message) global _os_keystone _os_keystone = kclient return kclient def _get_endpoint(module, ksclient): try: endpoint = ksclient.service_catalog.url_for(service_type='network', endpoint_type='publicURL') except Exception, e: module.fail_json(msg = "Error getting network endpoint: %s" % e.message) return endpoint def _get_neutron_client(module, kwargs): _ksclient = _get_ksclient(module, kwargs) token = _ksclient.auth_token endpoint = _get_endpoint(module, _ksclient) kwargs = { 'token': token, 'endpoint_url': endpoint } try: neutron = client.Client('2.0', **kwargs) except Exception, e: module.fail_json(msg = "Error in connecting to neutron: %s " % e.message) return neutron def _set_tenant_id(module): global _os_tenant_id if not module.params['tenant_name']: login_tenant_name = module.params['login_tenant_name'] else: login_tenant_name = module.params['tenant_name'] for tenant in _os_keystone.tenants.list(): if tenant.name == login_tenant_name: _os_tenant_id = tenant.id break if not _os_tenant_id: module.fail_json(msg = "The tenant id cannot be found, please check the paramters") def _get_router_id(module, neutron): kwargs = { 'name': module.params['router_name'], } try: routers = neutron.list_routers(**kwargs) except Exception, e: module.fail_json(msg = "Error in getting the router list: %s " % e.message) if not routers['routers']: return None return routers['routers'][0]['id'] def _get_subnet_id(module, neutron): subnet_id = None kwargs = { 'tenant_id': _os_tenant_id, 'name': module.params['subnet_name'], } try: subnets = neutron.list_subnets(**kwargs) except Exception, e: module.fail_json( msg = " Error in getting the subnet list:%s " % e.message) if not subnets['subnets']: return None return subnets['subnets'][0]['id'] def _get_port_id(neutron, module, router_id, subnet_id): kwargs = { 'tenant_id': _os_tenant_id, 'device_id': router_id, } try: ports = neutron.list_ports(**kwargs) except Exception, e: module.fail_json( msg = "Error in listing ports: %s" % e.message) if not ports['ports']: return None for port in ports['ports']: for subnet in port['fixed_ips']: if subnet['subnet_id'] == subnet_id: return port['id'] return None def _add_interface_router(neutron, module, router_id, subnet_id): kwargs = { 'subnet_id': subnet_id } try: neutron.add_interface_router(router_id, kwargs) except Exception, e: module.fail_json(msg = "Error in adding interface to router: %s" % e.message) return True def _remove_interface_router(neutron, module, router_id, subnet_id): kwargs = { 'subnet_id': subnet_id } try: neutron.remove_interface_router(router_id, kwargs) except Exception, e: module.fail_json(msg="Error in removing interface from router: %s" % e.message) return True def main(): module = AnsibleModule( argument_spec = dict( login_username = dict(default='admin'), login_password = dict(required=True), login_tenant_name = 
dict(required=True), auth_url = dict(default='http://127.0.0.1:35357/v2.0/'), region_name = dict(default=None), router_name = dict(required=True), subnet_name = dict(required=True), tenant_name = dict(default=None), state = dict(default='present', choices=['absent', 'present']), ), ) neutron = _get_neutron_client(module, module.params) _set_tenant_id(module) router_id = _get_router_id(module, neutron) if not router_id: module.fail_json(msg="failed to get the router id, please check the router name") subnet_id = _get_subnet_id(module, neutron) if not subnet_id: module.fail_json(msg="failed to get the subnet id, please check the subnet name") if module.params['state'] == 'present': port_id = _get_port_id(neutron, module, router_id, subnet_id) if not port_id: _add_interface_router(neutron, module, router_id, subnet_id) # look the freshly attached interface up so the returned id is not None port_id = _get_port_id(neutron, module, router_id, subnet_id) module.exit_json(changed=True, result="created", id=port_id) module.exit_json(changed=False, result="success", id=port_id) if module.params['state'] == 'absent': port_id = _get_port_id(neutron, module, router_id, subnet_id) if not port_id: module.exit_json(changed = False, result = "Success") _remove_interface_router(neutron, module, router_id, subnet_id) module.exit_json(changed=True, result="Deleted") # this is magic, see lib/ansible/module_common.py from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/route530000664000000000000000000002004512316627017015647 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: route53 version_added: "1.3" short_description: add or delete entries in Amazon's Route 53 DNS service description: - Creates and deletes DNS records in Amazon's Route 53 service options: command: description: - Specifies the action to take. required: true default: null aliases: [] choices: [ 'get', 'create', 'delete' ] zone: description: - The DNS zone to modify required: true default: null aliases: [] record: description: - The full DNS record to create or delete required: true default: null aliases: [] ttl: description: - The TTL to give the new record required: false default: 3600 (one hour) aliases: [] type: description: - The type of DNS record to create required: true default: null aliases: [] choices: [ 'A', 'CNAME', 'MX', 'AAAA', 'TXT', 'PTR', 'SRV', 'SPF', 'NS' ] value: description: - The new value when creating a DNS record. Multiple comma-spaced values are allowed. When deleting a record all values for the record must be specified or Route53 will not delete it. required: false default: null aliases: [] aws_secret_key: description: - AWS secret key. required: false default: null aliases: ['ec2_secret_key', 'secret_key'] aws_access_key: description: - AWS access key.
required: false default: null aliases: ['ec2_access_key', 'access_key'] overwrite: description: - Whether an existing record should be overwritten on create if values do not match required: false default: null aliases: [] requirements: [ "boto" ] author: Bruce Pennypacker ''' EXAMPLES = ''' # Add new.foo.com as an A record with 3 IPs - route53: > command=create zone=foo.com record=new.foo.com type=A ttl=7200 value=1.1.1.1,2.2.2.2,3.3.3.3 # Retrieve the details for new.foo.com - route53: > command=get zone=foo.com record=new.foo.com type=A register: rec # Delete new.foo.com A record using the results from the get command - route53: > command=delete zone=foo.com record={{ rec.set.record }} type={{ rec.set.type }} value={{ rec.set.value }} # Add an AAAA record. Note that because there are colons in the value # that the entire parameter list must be quoted: - route53: > command=create zone=foo.com record=localhost.foo.com type=AAAA ttl=7200 value="::1" # Add a TXT record. Note that TXT and SPF records must be surrounded # by quotes when sent to Route 53: - route53: > command=create zone=foo.com record=localhost.foo.com type=TXT ttl=7200 value="\"bar\"" ''' import sys import time try: import boto from boto import route53 from boto.route53.record import ResourceRecordSets except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def commit(changes): """Commit changes, but retry PriorRequestNotComplete errors.""" retry = 10 while True: try: retry -= 1 return changes.commit() except boto.route53.exception.DNSServerError, e: code = e.body.split("<Code>")[1] code = code.split("</Code>")[0] if code != 'PriorRequestNotComplete' or retry < 0: raise e time.sleep(0.5) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( command = dict(choices=['get', 'create', 'delete'], required=True), zone = dict(required=True), record = dict(required=True), ttl = dict(required=False, default=3600), type = dict(choices=['A', 'CNAME', 'MX', 'AAAA', 'TXT', 'PTR', 'SRV', 'SPF', 'NS'], required=True), value = dict(required=False), overwrite = dict(required=False, type='bool') ) ) module = AnsibleModule(argument_spec=argument_spec) command_in = module.params.get('command') zone_in = module.params.get('zone') ttl_in = module.params.get('ttl') record_in = module.params.get('record') type_in = module.params.get('type') value_in = module.params.get('value') ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module) value_list = () if type(value_in) is str: if value_in: value_list = sorted(value_in.split(',')) elif type(value_in) is list: value_list = sorted(value_in) if zone_in[-1:] != '.': zone_in += "." if record_in[-1:] != '.': record_in += "."
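# Zone and record names were normalized above to the fully-qualified,
# trailing-dot form that Route 53 itself returns, so the equality checks
# against get_all_rrsets() results below match reliably.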
if command_in == 'create' or command_in == 'delete': if not value_in: module.fail_json(msg = "parameter 'value' required for create/delete") # connect to the route53 endpoint try: conn = boto.route53.connection.Route53Connection(aws_access_key, aws_secret_key) except boto.exception.BotoServerError, e: module.fail_json(msg = e.error_message) # Get all the existing hosted zones and save their IDs zones = {} results = conn.get_all_hosted_zones() for r53zone in results['ListHostedZonesResponse']['HostedZones']: zone_id = r53zone['Id'].replace('/hostedzone/', '') zones[r53zone['Name']] = zone_id # Verify that the requested zone is already defined in Route53 if zone_in not in zones: errmsg = "Zone %s does not exist in Route53" % zone_in module.fail_json(msg = errmsg) record = {} found_record = False sets = conn.get_all_rrsets(zones[zone_in]) for rset in sets: if rset.type == type_in and rset.name == record_in: found_record = True record['zone'] = zone_in record['type'] = rset.type record['record'] = rset.name record['ttl'] = rset.ttl record['value'] = ','.join(sorted(rset.resource_records)) record['values'] = sorted(rset.resource_records) if value_list == sorted(rset.resource_records) and record['ttl'] == ttl_in and command_in == 'create': module.exit_json(changed=False) if command_in == 'get': module.exit_json(changed=False, set=record) if command_in == 'delete' and not found_record: module.exit_json(changed=False) changes = ResourceRecordSets(conn, zones[zone_in]) if command_in == 'create' and found_record: if not module.params['overwrite']: module.fail_json(msg = "Record already exists with different value. Set 'overwrite' to replace it") else: change = changes.add_change("DELETE", record_in, type_in, record['ttl']) for v in record['values']: change.add_value(v) if command_in == 'create' or command_in == 'delete': change = changes.add_change(command_in.upper(), record_in, type_in, ttl_in) for v in value_list: change.add_value(v) try: result = commit(changes) except boto.route53.exception.DNSServerError, e: txt = e.body.split("<Message>")[1] txt = txt.split("</Message>")[0] module.fail_json(msg = txt) module.exit_json(changed=True) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/ec2_vol0000664000000000000000000002054412316627017015676 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: ec2_vol short_description: create and attach a volume, return volume id and device map description: - creates an EBS volume and optionally attaches it to an instance. If both an instance ID and a device name are given and the instance has a device at the device name, then no volume is created and no attachment is made. This module has a dependency on python-boto. version_added: "1.1" options: aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
required: false default: None aliases: ['ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: None aliases: ['ec2_access_key', 'access_key' ] ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used required: false default: null aliases: [] instance: description: - instance ID if you wish to attach the volume. required: false default: null aliases: [] volume_size: description: - size of volume (in GB) to create. required: true default: null aliases: [] iops: description: - the provisioned IOPs you want to associate with this volume (integer). required: false default: 100 aliases: [] version_added: "1.3" device_name: description: - device id to override device mapping. Assumes /dev/sdf for Linux/UNIX and /dev/xvdf for Windows. required: false default: null aliases: [] region: description: - The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false default: null aliases: ['aws_region', 'ec2_region'] zone: description: - zone in which to create the volume, if unset uses the zone the instance is in (if set) required: false default: null aliases: ['aws_zone', 'ec2_zone'] snapshot: description: - snapshot ID on which to base the volume required: false default: null validate_certs: description: - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. required: false default: "yes" choices: ["yes", "no"] aliases: [] version_added: "1.5" requirements: [ "boto" ] author: Lester Wade ''' EXAMPLES = ''' # Simple attachment action - local_action: module: ec2_vol instance: XXXXXX volume_size: 5 device_name: sdd # Example using custom iops params - local_action: module: ec2_vol instance: XXXXXX volume_size: 5 iops: 200 device_name: sdd # Example using snapshot id - local_action: module: ec2_vol instance: XXXXXX snapshot: "{{ snapshot }}" # Playbook example combined with instance launch - local_action: module: ec2 keypair: "{{ keypair }}" image: "{{ image }}" wait: yes count: 3 register: ec2 - local_action: module: ec2_vol instance: "{{ item.id }} " volume_size: 5 with_items: ec2.instances register: ec2_vol ''' # Note: this module needs to be made idempotent. Possible solution is to use resource tags with the volumes. # if state=present and it doesn't exist, create, tag and attach. # Check for state by looking for volume attachment with tag (and against block device mapping?). # Would personally like to revisit this in May when Eucalyptus also has tagging support (3.3). 
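# A rough sketch of the tag-based idempotency idea from the note above
# (illustrative only, not wired into this module; the 'Name' tag key is an
# assumption):
#
#   volumes = ec2.get_all_volumes(filters={'tag:Name': name})
#   if volumes:
#       volume = volumes[0]   # reuse the existing volume, report changed=False
#   else:
#       volume = ec2.create_volume(volume_size, zone)
#       ec2.create_tags([volume.id], {'Name': name})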
import sys import time try: import boto.ec2 except ImportError: print "failed=True msg='boto required for this module'" sys.exit(1) def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( instance = dict(), volume_size = dict(required=True), iops = dict(), device_name = dict(), zone = dict(aliases=['availability_zone', 'aws_zone', 'ec2_zone']), snapshot = dict(), ) ) module = AnsibleModule(argument_spec=argument_spec) instance = module.params.get('instance') volume_size = module.params.get('volume_size') iops = module.params.get('iops') device_name = module.params.get('device_name') zone = module.params.get('zone') snapshot = module.params.get('snapshot') ec2 = ec2_connect(module) # Here we need to get the zone info for the instance. This covers situation where # instance is specified but zone isn't. # Useful for playbooks chaining instance launch with volume create + attach and where the # zone doesn't matter to the user. if instance: reservation = ec2.get_all_instances(instance_ids=instance) inst = reservation[0].instances[0] zone = inst.placement # Check if there is a volume already mounted there. if device_name: if device_name in inst.block_device_mapping: module.exit_json(msg="Volume mapping for %s already exists on instance %s" % (device_name, instance), volume_id=inst.block_device_mapping[device_name].volume_id, device=device_name, changed=False) # If custom iops is defined we use volume_type "io1" rather than the default of "standard" if iops: volume_type = 'io1' else: volume_type = 'standard' # If no instance supplied, try volume creation based on module parameters. try: volume = ec2.create_volume(volume_size, zone, snapshot, volume_type, iops) while volume.status != 'available': time.sleep(3) volume.update() except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) # Attach the created volume. if device_name and instance: try: attach = volume.attach(inst.id, device_name) while volume.attachment_state() != 'attached': time.sleep(3) volume.update() except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) # If device_name isn't set, make a choice based on best practices here: # http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html # In future this needs to be more dynamic but combining block device mapping best practices # (bounds for devices, as above) with instance.block_device_mapping data would be tricky. 
For me ;) # Use password data attribute to tell whether the instance is Windows or Linux if device_name is None and instance: try: if not ec2.get_password_data(inst.id): device_name = '/dev/sdf' attach = volume.attach(inst.id, device_name) while volume.attachment_state() != 'attached': time.sleep(3) volume.update() else: device_name = '/dev/xvdf' attach = volume.attach(inst.id, device_name) while volume.attachment_state() != 'attached': time.sleep(3) volume.update() except boto.exception.BotoServerError, e: module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message)) print json.dumps({ "volume_id": volume.id, "device": device_name }) sys.exit(0) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.ec2 import * main() ansible-1.5.4/library/cloud/ec2_ami0000664000000000000000000002273212316627017015645 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ec2_ami version_added: "1.3" short_description: create or destroy an image in ec2, return imageid description: - Creates or deletes ec2 images. This module has a dependency on python-boto >= 2.5 options: ec2_url: description: - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used required: false default: null aliases: [] aws_secret_key: description: - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. required: false default: null aliases: [ 'ec2_secret_key', 'secret_key' ] aws_access_key: description: - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. required: false default: null aliases: ['ec2_access_key', 'access_key' ] instance_id: description: - instance id of the image to create required: false default: null aliases: [] name: description: - The name of the new image to create required: false default: null aliases: [] wait: description: - wait for the AMI to be in state 'available' before returning. required: false default: "no" choices: [ "yes", "no" ] aliases: [] wait_timeout: description: - how long before wait gives up, in seconds default: 300 aliases: [] state: description: - create or deregister/delete image required: false default: 'present' aliases: [] region: description: - The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used. required: false default: null aliases: [ 'aws_region', 'ec2_region' ] description: description: - An optional human-readable string describing the contents and purpose of the AMI. required: false default: null aliases: [] no_reboot: description: - An optional flag indicating that the bundling process should not attempt to shutdown the instance before bundling. 
    If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance. The default choice is "no".
    required: false
    default: no
    choices: [ "yes", "no" ]
    aliases: []
  image_id:
    description:
      - Image ID to be deregistered.
    required: false
    default: null
    aliases: []
  delete_snapshot:
    description:
      - Whether or not to delete the snapshot while deregistering the AMI.
    required: false
    default: null
    aliases: []
  validate_certs:
    description:
      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
    required: false
    default: "yes"
    choices: ["yes", "no"]
    aliases: []
    version_added: "1.5"
requirements: [ "boto" ]
author: Evan Duffield
'''

# Thank you to iAcquire for sponsoring development of this module.
#
# See http://alestic.com/2011/06/ec2-ami-security for more information about ensuring the security of your AMI.

EXAMPLES = '''
# Basic AMI Creation
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    instance_id: i-xxxxxx
    wait: yes
    name: newtest
  register: instance

# Basic AMI Creation, without waiting
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    instance_id: i-xxxxxx
    wait: no
    name: newtest
  register: instance

# Deregister/Delete AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: True
    state: absent

# Deregister AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: False
    state: absent
'''

import sys
import time

try:
    import boto
    import boto.ec2
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def create_image(module, ec2):
    """
    Creates new AMI

    module : AnsibleModule object
    ec2: authenticated ec2 connection object
    """

    instance_id = module.params.get('instance_id')
    name = module.params.get('name')
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))
    description = module.params.get('description')
    no_reboot = module.params.get('no_reboot')

    try:
        params = {'instance_id': instance_id,
                  'name': name,
                  'description': description,
                  'no_reboot': no_reboot}

        image_id = ec2.create_image(**params)
    except boto.exception.BotoServerError, e:
        module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))

    # Wait until the image is recognized. The EC2 API has eventual consistency,
    # such that a successful CreateImage API call doesn't guarantee the success
    # of a subsequent DescribeImages API call using the new image id returned.
    for i in range(wait_timeout):
        try:
            img = ec2.get_image(image_id)
            break
        except boto.exception.EC2ResponseError, e:
            if 'InvalidAMIID.NotFound' in e.error_code and wait:
                time.sleep(1)
            else:
                module.fail_json(msg="Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help.")
    else:
        module.fail_json(msg="timed out waiting for image to be recognized")

    # wait here until the image is created
    wait_timeout = time.time() + wait_timeout
    while wait and wait_timeout > time.time() and (img is None or img.state != 'available'):
        img = ec2.get_image(image_id)
        time.sleep(3)
    if wait and wait_timeout <= time.time():
        # waiting took too long
        module.fail_json(msg = "timed out waiting for image to be created")

    module.exit_json(msg="AMI creation operation complete", image_id=image_id, state=img.state, changed=True)


def deregister_image(module, ec2):
    """
    Deregisters AMI
    """

    image_id = module.params.get('image_id')
    delete_snapshot = module.params.get('delete_snapshot')
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))

    img = ec2.get_image(image_id)
    if img is None:
        module.fail_json(msg = "Image %s does not exist" % image_id, changed=False)

    try:
        params = {'image_id': image_id,
                  'delete_snapshot': delete_snapshot}

        res = ec2.deregister_image(**params)
    except boto.exception.BotoServerError, e:
        module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))

    # wait here until the image is gone
    img = ec2.get_image(image_id)
    wait_timeout = time.time() + wait_timeout
    while wait and wait_timeout > time.time() and img is not None:
        img = ec2.get_image(image_id)
        time.sleep(3)
    if wait and wait_timeout <= time.time():
        # waiting took too long
        module.fail_json(msg = "timed out waiting for image to be deregistered/deleted")

    module.exit_json(msg="AMI deregister/delete operation complete", changed=True)
    sys.exit(0)


def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
            instance_id = dict(),
            image_id = dict(),
            delete_snapshot = dict(),
            name = dict(),
            wait = dict(type="bool", default=False),
            wait_timeout = dict(default=900),
            description = dict(default=""),
            no_reboot = dict(default=False, type="bool"),
            state = dict(default='present'),
        )
    )
    module = AnsibleModule(argument_spec=argument_spec)

    ec2 = ec2_connect(module)

    if module.params.get('state') == 'absent':
        if not module.params.get('image_id'):
            module.fail_json(msg='image_id is required to deregister/delete an image')
        deregister_image(module, ec2)

    elif module.params.get('state') == 'present':
        # Changed is always set to true when provisioning new AMI
        if not module.params.get('instance_id'):
            module.fail_json(msg='instance_id parameter is required for new image')
        if not module.params.get('name'):
            module.fail_json(msg='name parameter is required for new image')
        create_image(module, ec2)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
main()
ansible-1.5.4/library/cloud/linode0000664000000000000000000004264712316627017015617 0ustar rootroot#!/usr/bin/python

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
DOCUMENTATION = ''' --- module: linode short_description: create / delete / stop / restart an instance in Linode Public Cloud description: - creates / deletes a Linode Public Cloud instance and optionally waits for it to be 'running'. version_added: "1.3" options: state: description: - Indicate desired state of the resource choices: ['present', 'active', 'started', 'absent', 'deleted', 'stopped', 'restarted'] default: present api_key: description: - Linode API key default: null name: description: - Name to give the instance (alphanumeric, dashes, underscore) - To keep sanity on the Linode Web Console, name is prepended with LinodeID_ default: null type: string linode_id: description: - Unique ID of a linode server aliases: lid default: null type: integer plan: description: - plan to use for the instance (Linode plan) default: null type: integer payment_term: description: - payment term to use for the instance (payment term in months) default: 1 type: integer choices: [1, 12, 24] password: description: - root password to apply to a new server (auto generated if missing) default: null type: string ssh_pub_key: description: - SSH public key applied to root user default: null type: string swap: description: - swap size in MB default: 512 type: integer distribution: description: - distribution to use for the instance (Linode Distribution) default: null type: integer datacenter: description: - datacenter to create an instance in (Linode Datacenter) default: null type: integer wait: description: - wait for the instance to be in state 'running' before returning default: "no" choices: [ "yes", "no" ] wait_timeout: description: - how long before wait gives up, in seconds default: 300 requirements: [ "linode-python" ] author: Vincent Viallet notes: - LINODE_API_KEY env variable can be used instead ''' EXAMPLES = ''' # Create a server - local_action: module: linode api_key: 'longStringFromLinodeApi' name: linode-test1 plan: 1 datacenter: 2 distribution: 99 password: 'superSecureRootPassword' ssh_pub_key: 'ssh-rsa qwerty' swap: 768 wait: yes wait_timeout: 600 state: present # Ensure a running server (create if missing) - local_action: module: linode api_key: 'longStringFromLinodeApi' name: linode-test1 linode_id: 12345678 plan: 1 datacenter: 2 distribution: 99 password: 'superSecureRootPassword' ssh_pub_key: 'ssh-rsa qwerty' swap: 768 wait: yes wait_timeout: 600 state: present # Delete a server - local_action: module: linode api_key: 'longStringFromLinodeApi' name: linode-test1 linode_id: 12345678 state: absent # Stop a server - local_action: module: linode api_key: 'longStringFromLinodeApi' name: linode-test1 linode_id: 12345678 state: stopped # Reboot a server - local_action: module: linode api_key: 'longStringFromLinodeApi' name: linode-test1 linode_id: 12345678 state: restarted ''' import sys import time import os try: # linode module raise warning due to ssl - silently ignore them ... 
import warnings warnings.simplefilter("ignore") from linode import api as linode_api except ImportError: print("failed=True msg='linode-python required for this module'") sys.exit(1) def randompass(): ''' Generate a long random password that comply to Linode requirements ''' # Linode API currently requires the following: # It must contain at least two of these four character classes: # lower case letters - upper case letters - numbers - punctuation # we play it safe :) import random import string # as of python 2.4, this reseeds the PRNG from urandom random.seed() lower = ''.join(random.choice(string.ascii_lowercase) for x in range(6)) upper = ''.join(random.choice(string.ascii_uppercase) for x in range(6)) number = ''.join(random.choice(string.digits) for x in range(6)) punct = ''.join(random.choice(string.punctuation) for x in range(6)) p = lower + upper + number + punct return ''.join(random.sample(p, len(p))) def getInstanceDetails(api, server): ''' Return the details of an instance, populating IPs, etc. ''' instance = {'id': server['LINODEID'], 'name': server['LABEL'], 'public': [], 'private': []} # Populate with ips for ip in api.linode_ip_list(LinodeId=server['LINODEID']): if ip['ISPUBLIC'] and 'ipv4' not in instance: instance['ipv4'] = ip['IPADDRESS'] instance['fqdn'] = ip['RDNS_NAME'] if ip['ISPUBLIC']: instance['public'].append({'ipv4': ip['IPADDRESS'], 'fqdn': ip['RDNS_NAME'], 'ip_id': ip['IPADDRESSID']}) else: instance['private'].append({'ipv4': ip['IPADDRESS'], 'fqdn': ip['RDNS_NAME'], 'ip_id': ip['IPADDRESSID']}) return instance def linodeServers(module, api, state, name, plan, distribution, datacenter, linode_id, payment_term, password, ssh_pub_key, swap, wait, wait_timeout): instances = [] changed = False new_server = False servers = [] disks = [] configs = [] jobs = [] # See if we can match an existing server details with the provided linode_id if linode_id: # For the moment we only consider linode_id as criteria for match # Later we can use more (size, name, etc.) and update existing servers = api.linode_list(LinodeId=linode_id) # Attempt to fetch details about disks and configs only if servers are # found with linode_id if servers: disks = api.linode_disk_list(LinodeId=linode_id) configs = api.linode_config_list(LinodeId=linode_id) # Act on the state if state in ('active', 'present', 'started'): # TODO: validate all the plan / distribution / datacenter are valid # Multi step process/validation: # - need linode_id (entity) # - need disk_id for linode_id - create disk from distrib # - need config_id for linode_id - create config (need kernel) # Any create step triggers a job that need to be waited for. 
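        # Note: the JobIDs collected in `jobs` below are recorded but never polled
        # individually; readiness is ultimately determined by watching the Linode's
        # STATUS field in the wait loop further down.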
if not servers: for arg in ('name', 'plan', 'distribution', 'datacenter'): if not eval(arg): module.fail_json(msg='%s is required for active state' % arg) # Create linode entity new_server = True try: res = api.linode_create(DatacenterID=datacenter, PlanID=plan, PaymentTerm=payment_term) linode_id = res['LinodeID'] # Update linode Label to match name api.linode_update(LinodeId=linode_id, Label='%s_%s' % (linode_id, name)) # Save server servers = api.linode_list(LinodeId=linode_id) except Exception, e: module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE']) if not disks: for arg in ('name', 'linode_id', 'distribution'): if not eval(arg): module.fail_json(msg='%s is required for active state' % arg) # Create disks (1 from distrib, 1 for SWAP) new_server = True try: if not password: # Password is required on creation, if not provided generate one password = randompass() if not swap: swap = 512 # Create data disk size = servers[0]['TOTALHD'] - swap if ssh_pub_key: res = api.linode_disk_createfromdistribution( LinodeId=linode_id, DistributionID=distribution, rootPass=password, rootSSHKey=ssh_pub_key, Label='%s data disk (lid: %s)' % (name, linode_id), Size=size) else: res = api.linode_disk_createfromdistribution( LinodeId=linode_id, DistributionID=distribution, rootPass=password, Label='%s data disk (lid: %s)' % (name, linode_id), Size=size) jobs.append(res['JobID']) # Create SWAP disk res = api.linode_disk_create(LinodeId=linode_id, Type='swap', Label='%s swap disk (lid: %s)' % (name, linode_id), Size=swap) jobs.append(res['JobID']) except Exception, e: # TODO: destroy linode ? module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE']) if not configs: for arg in ('name', 'linode_id', 'distribution'): if not eval(arg): module.fail_json(msg='%s is required for active state' % arg) # Check architecture for distrib in api.avail_distributions(): if distrib['DISTRIBUTIONID'] != distribution: continue arch = '32' if distrib['IS64BIT']: arch = '64' break # Get latest kernel matching arch for kernel in api.avail_kernels(): if not kernel['LABEL'].startswith('Latest %s' % arch): continue kernel_id = kernel['KERNELID'] break # Get disk list disks_id = [] for disk in api.linode_disk_list(LinodeId=linode_id): if disk['TYPE'] == 'ext3': disks_id.insert(0, str(disk['DISKID'])) continue disks_id.append(str(disk['DISKID'])) # Trick to get the 9 items in the list while len(disks_id) < 9: disks_id.append('') disks_list = ','.join(disks_id) # Create config new_server = True try: api.linode_config_create(LinodeId=linode_id, KernelId=kernel_id, Disklist=disks_list, Label='%s config' % name) configs = api.linode_config_list(LinodeId=linode_id) except Exception, e: module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE']) # Start / Ensure servers are running for server in servers: # Refresh server state server = api.linode_list(LinodeId=server['LINODEID'])[0] # Ensure existing servers are up and running, boot if necessary if server['STATUS'] != 1: res = api.linode_boot(LinodeId=linode_id) jobs.append(res['JobID']) changed = True # wait here until the instances are up wait_timeout = time.time() + wait_timeout while wait and wait_timeout > time.time(): # refresh the server details server = api.linode_list(LinodeId=server['LINODEID'])[0] # status: # -2: Boot failed # 1: Running if server['STATUS'] in (-2, 1): break time.sleep(5) if wait and wait_timeout <= time.time(): # waiting took too long module.fail_json(msg = 'Timeout waiting on %s (lid: %s)' % (server['LABEL'], server['LINODEID'])) # Get a fresh copy of the 
server details
            server = api.linode_list(LinodeId=server['LINODEID'])[0]

            if server['STATUS'] == -2:
                module.fail_json(msg = '%s (lid: %s) failed to boot' %
                                 (server['LABEL'], server['LINODEID']))
            # From now on we know the task is a success
            # Build instance report
            instance = getInstanceDetails(api, server)
            # depending on wait flag select the status
            if wait:
                instance['status'] = 'Running'
            else:
                instance['status'] = 'Starting'

            # Return the root password if this is a new box and no SSH key
            # has been provided
            if new_server and not ssh_pub_key:
                instance['password'] = password
            instances.append(instance)

    elif state in ('stopped',):
        for arg in ('name', 'linode_id'):
            if not eval(arg):
                module.fail_json(msg='%s is required for %s state' % (arg, state))

        if not servers:
            module.fail_json(msg = 'Server %s (lid: %s) not found' % (name, linode_id))

        for server in servers:
            instance = getInstanceDetails(api, server)
            if server['STATUS'] != 2:
                try:
                    res = api.linode_shutdown(LinodeId=linode_id)
                except Exception, e:
                    module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE'])
                instance['status'] = 'Stopping'
                changed = True
            else:
                instance['status'] = 'Stopped'
            instances.append(instance)

    elif state in ('restarted',):
        for arg in ('name', 'linode_id'):
            if not eval(arg):
                module.fail_json(msg='%s is required for %s state' % (arg, state))

        if not servers:
            module.fail_json(msg = 'Server %s (lid: %s) not found' % (name, linode_id))

        for server in servers:
            instance = getInstanceDetails(api, server)
            try:
                res = api.linode_reboot(LinodeId=server['LINODEID'])
            except Exception, e:
                module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE'])
            instance['status'] = 'Restarting'
            changed = True
            instances.append(instance)

    elif state in ('absent', 'deleted'):
        for server in servers:
            instance = getInstanceDetails(api, server)
            try:
                api.linode_delete(LinodeId=server['LINODEID'], skipChecks=True)
            except Exception, e:
                module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE'])
            instance['status'] = 'Deleting'
            changed = True
            instances.append(instance)

    # Ease parsing if only 1 instance
    if len(instances) == 1:
        module.exit_json(changed=changed, instance=instances[0])
    module.exit_json(changed=changed, instances=instances)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default='present', choices=['active', 'present', 'started',
                                                     'deleted', 'absent', 'stopped',
                                                     'restarted']),
            api_key = dict(),
            name = dict(type='str'),
            plan = dict(type='int'),
            distribution = dict(type='int'),
            datacenter = dict(type='int'),
            linode_id = dict(type='int', aliases=['lid']),
            payment_term = dict(type='int', default=1, choices=[1, 12, 24]),
            password = dict(type='str'),
            ssh_pub_key = dict(type='str'),
            swap = dict(type='int', default=512),
            wait = dict(type='bool', default=True),
            wait_timeout = dict(default=300),
        )
    )

    state = module.params.get('state')
    api_key = module.params.get('api_key')
    name = module.params.get('name')
    plan = module.params.get('plan')
    distribution = module.params.get('distribution')
    datacenter = module.params.get('datacenter')
    linode_id = module.params.get('linode_id')
    payment_term = module.params.get('payment_term')
    password = module.params.get('password')
    ssh_pub_key = module.params.get('ssh_pub_key')
    swap = module.params.get('swap')
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))

    # Setup the api_key
    if not api_key:
        try:
            api_key = os.environ['LINODE_API_KEY']
        except KeyError, e:
            module.fail_json(msg = 'Unable to load %s' % e.message)

    # setup the auth
    try:
        api = linode_api.Api(api_key)
        api.test_echo()
    except Exception, e:
module.fail_json(msg = '%s' % e.value[0]['ERRORMESSAGE']) linodeServers(module, api, state, name, plan, distribution, datacenter, linode_id, payment_term, password, ssh_pub_key, swap, wait, wait_timeout) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/cloud/rax_clb_nodes0000664000000000000000000002347112316627017017151 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rax_clb_nodes short_description: add, modify and remove nodes from a Rackspace Cloud Load Balancer description: - Adds, modifies and removes nodes from a Rackspace Cloud Load Balancer version_added: "1.4" options: address: required: false description: - IP address or domain name of the node api_key: required: false description: - Rackspace API key (overrides C(credentials)) condition: required: false choices: [ "enabled", "disabled", "draining" ] description: - Condition for the node, which determines its role within the load balancer credentials: required: false description: - File to find the Rackspace credentials in (ignored if C(api_key) and C(username) are provided) load_balancer_id: required: true type: integer description: - Load balancer id node_id: required: false type: integer description: - Node id port: required: false type: integer description: - Port number of the load balanced service on the node region: required: false description: - Region to authenticate in state: required: false default: "present" choices: [ "present", "absent" ] description: - Indicate desired state of the node type: required: false choices: [ "primary", "secondary" ] description: - Type of node username: required: false description: - Rackspace username (overrides C(credentials)) virtualenv: required: false description: - Path to a virtualenv that should be activated before doing anything. The virtualenv has to already exist. Useful if installing pyrax globally is not an option. wait: required: false default: "no" choices: [ "yes", "no" ] description: - Wait for the load balancer to become active before returning wait_timeout: required: false type: integer default: 30 description: - How long to wait before giving up and returning an error weight: required: false description: - Weight of node requirements: [ "pyrax" ] author: Lukasz Kawczynski notes: - "The following environment variables can be used: C(RAX_USERNAME), C(RAX_API_KEY), C(RAX_CREDENTIALS) and C(RAX_REGION)." 
''' EXAMPLES = ''' # Add a new node to the load balancer - local_action: module: rax_clb_nodes load_balancer_id: 71 address: 10.2.2.3 port: 80 condition: enabled type: primary wait: yes credentials: /path/to/credentials # Drain connections from a node - local_action: module: rax_clb_nodes load_balancer_id: 71 node_id: 410 condition: draining wait: yes credentials: /path/to/credentials # Remove a node from the load balancer - local_action: module: rax_clb_nodes load_balancer_id: 71 node_id: 410 state: absent wait: yes credentials: /path/to/credentials ''' import os import sys try: import pyrax except ImportError: print("failed=True msg='pyrax is required for this module'") sys.exit(1) def _activate_virtualenv(path): path = os.path.expanduser(path) activate_this = os.path.join(path, 'bin', 'activate_this.py') execfile(activate_this, dict(__file__=activate_this)) def _get_node(lb, node_id): """Return a node with the given `node_id`""" for node in lb.nodes: if node.id == node_id: return node return None def _is_primary(node): """Return True if node is primary and enabled""" return (node.type.lower() == 'primary' and node.condition.lower() == 'enabled') def _get_primary_nodes(lb): """Return a list of primary and enabled nodes""" nodes = [] for node in lb.nodes: if _is_primary(node): nodes.append(node) return nodes def _node_to_dict(node): """Return a dictionary containing node details""" if not node: return {} return { 'address': node.address, 'condition': node.condition, 'id': node.id, 'port': node.port, 'type': node.type, 'weight': node.weight, } def main(): argument_spec = rax_argument_spec() argument_spec.update( dict( address=dict(), condition=dict(choices=['enabled', 'disabled', 'draining']), load_balancer_id=dict(required=True, type='int'), node_id=dict(type='int'), port=dict(type='int'), state=dict(default='present', choices=['present', 'absent']), type=dict(choices=['primary', 'secondary']), virtualenv=dict(), wait=dict(default=False, type='bool'), wait_timeout=dict(default=30, type='int'), weight=dict(type='int'), ) ) module = AnsibleModule( argument_spec=argument_spec, required_together=rax_required_together(), ) address = module.params['address'] condition = (module.params['condition'] and module.params['condition'].upper()) load_balancer_id = module.params['load_balancer_id'] node_id = module.params['node_id'] port = module.params['port'] state = module.params['state'] typ = module.params['type'] and module.params['type'].upper() virtualenv = module.params['virtualenv'] wait = module.params['wait'] wait_timeout = module.params['wait_timeout'] or 1 weight = module.params['weight'] if virtualenv: try: _activate_virtualenv(virtualenv) except IOError, e: module.fail_json(msg='Failed to activate virtualenv %s (%s)' % ( virtualenv, e)) setup_rax_module(module, pyrax) if not pyrax.cloud_loadbalancers: module.fail_json(msg='Failed to instantiate load balancer client ' '(possibly incorrect region)') try: lb = pyrax.cloud_loadbalancers.get(load_balancer_id) except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) if node_id: node = _get_node(lb, node_id) else: node = None result = _node_to_dict(node) if state == 'absent': if not node: # Removing a non-existent node module.exit_json(changed=False, state=state) # The API detects this as well but currently pyrax does not return a # meaningful error message if _is_primary(node) and len(_get_primary_nodes(lb)) == 1: module.fail_json( msg='At least one primary node has to be enabled') try: lb.delete_node(node) result = {} except 
pyrax.exc.NotFound: module.exit_json(changed=False, state=state) except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) else: # present if not node: if node_id: # Updating a non-existent node msg = 'Node %d not found' % node_id if lb.nodes: msg += (' (available nodes: %s)' % ', '.join([str(x.id) for x in lb.nodes])) module.fail_json(msg=msg) else: # Creating a new node try: node = pyrax.cloudloadbalancers.Node( address=address, port=port, condition=condition, weight=weight, type=typ) resp, body = lb.add_nodes([node]) result.update(body['nodes'][0]) except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) else: # Updating an existing node immutable = { 'address': address, 'port': port, } mutable = { 'condition': condition, 'type': typ, 'weight': weight, } for name, value in immutable.items(): if value: module.fail_json( msg='Attribute %s cannot be modified' % name) for name, value in mutable.items(): if value is None or value == getattr(node, name): mutable.pop(name) if not mutable: module.exit_json(changed=False, state=state, node=result) try: # The diff has to be set explicitly to update node's weight and # type; this should probably be fixed in pyrax lb.update_node(node, diff=mutable) result.update(mutable) except pyrax.exc.PyraxException, e: module.fail_json(msg='%s' % e.message) if wait: pyrax.utils.wait_until(lb, "status", "ACTIVE", interval=1, attempts=wait_timeout) if lb.status != 'ACTIVE': module.fail_json( msg='Load balancer not active after %ds (current status: %s)' % (wait_timeout, lb.status.lower())) kwargs = {'node': result} if result else {} module.exit_json(changed=True, state=state, **kwargs) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.rax import * ### invoke the module main() ansible-1.5.4/library/cloud/nova_keypair0000664000000000000000000001174512316627017017037 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # (c) 2013, Benno Joy # (c) 2013, John Dewey # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . try: from novaclient.v1_1 import client from novaclient import exceptions import time except ImportError: print("failed=True msg='novaclient is required for this module to work'") DOCUMENTATION = ''' --- module: nova_keypair version_added: "1.2" short_description: Add/Delete key pair from nova description: - Add or Remove key pair from nova . 
options:
  login_username:
    description:
      - login username to authenticate to keystone
    required: true
    default: admin
  login_password:
    description:
      - Password of login user
    required: true
    default: null
  login_tenant_name:
    description:
      - The tenant name of the login user
    required: true
    default: null
  auth_url:
    description:
      - The keystone URL for authentication
    required: false
    default: 'http://127.0.0.1:35357/v2.0/'
  region_name:
    description:
      - Name of the region
    required: false
    default: null
  state:
    description:
      - Indicate desired state of the resource
    choices: ['present', 'absent']
    default: present
  name:
    description:
      - Name that has to be given to the key pair
    required: true
    default: null
  public_key:
    description:
      - The public key that would be uploaded to nova and injected into VMs upon creation
    required: false
    default: null
requirements: ["novaclient"]
'''

EXAMPLES = '''
# Creates a key pair with the running user's public key
- nova_keypair: state=present login_username=admin login_password=admin login_tenant_name=admin name=ansible_key public_key={{ lookup('file','~/.ssh/id_rsa.pub') }}

# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin login_tenant_name=admin name=ansible_key
'''

def main():
    module = AnsibleModule(
        argument_spec = dict(
            login_username = dict(default='admin'),
            login_password = dict(required=True),
            login_tenant_name = dict(required=True),
            auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
            region_name = dict(default=None),
            name = dict(required=True),
            public_key = dict(default=None),
            state = dict(default='present', choices=['absent', 'present'])
        ),
    )

    nova = client.Client(module.params['login_username'],
                         module.params['login_password'],
                         module.params['login_tenant_name'],
                         module.params['auth_url'],
                         service_type='compute')
    try:
        nova.authenticate()
    except exceptions.Unauthorized, e:
        module.fail_json(msg = "Invalid OpenStack Nova credentials: %s" % e.message)
    except exceptions.AuthorizationFailure, e:
        module.fail_json(msg = "Unable to authorize user: %s" % e.message)

    if module.params['state'] == 'present':
        for key in nova.keypairs.list():
            if key.name == module.params['name']:
                module.exit_json(changed = False, result = "Key present")
        try:
            key = nova.keypairs.create(module.params['name'], module.params['public_key'])
        except Exception, e:
            module.fail_json(msg = "Error in creating the keypair: %s" % e.message)
        if not module.params['public_key']:
            module.exit_json(changed = True, key = key.private_key)
        module.exit_json(changed = True, key = None)
    if module.params['state'] == 'absent':
        for key in nova.keypairs.list():
            if key.name == module.params['name']:
                try:
                    nova.keypairs.delete(module.params['name'])
                except Exception, e:
                    module.fail_json(msg = "The keypair deletion has failed: %s" % e.message)
                module.exit_json(changed = True, result = "deleted")
        module.exit_json(changed = False, result = "not present")

# this is magic, see lib/ansible/module_common.py
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/messaging/0000775000000000000000000000000012316627017015264 5ustar rootrootansible-1.5.4/library/messaging/rabbitmq_policy0000664000000000000000000001051412316627017020370 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, John Dewey
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the
License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: rabbitmq_policy short_description: Manage the state of policies in RabbitMQ. description: - Manage the state of a virtual host in RabbitMQ. version_added: "1.5" author: John Dewey options: name: description: - The name of the policy to manage. required: true default: null vhost: description: - The name of the vhost to apply to. required: false default: / pattern: description: - A regex of queues to apply the policy to. required: true default: null tags: description: - A dict or string describing the policy. required: true default: null priority: description: - The priority of the policy. required: false default: 0 node: description: - Erlang node name of the rabbit we wish to configure. required: false default: rabbit state: description: - The state of the policy. default: present choices: [present, absent] ''' EXAMPLES = ''' - name: ensure the default vhost contains the HA policy via a dict rabbitmq_policy: name=HA pattern='.*' args: tags: "ha-mode": all - name: ensure the default vhost contains the HA policy rabbitmq_policy: name=HA pattern='.*' tags="ha-mode=all" ''' class RabbitMqPolicy(object): def __init__(self, module, name): self._module = module self._name = name self._vhost = module.params['vhost'] self._pattern = module.params['pattern'] self._tags = module.params['tags'] self._priority = module.params['priority'] self._node = module.params['node'] self._rabbitmqctl = module.get_bin_path('rabbitmqctl', True) def _exec(self, args, run_in_check_mode=False): if not self._module.check_mode or (self._module.check_mode and run_in_check_mode): cmd = [self._rabbitmqctl, '-q', '-n', self._node] args.insert(1, '-p') args.insert(2, self._vhost) rc, out, err = self._module.run_command(cmd + args, check_rc=True) return out.splitlines() return list() def list(self): policies = self._exec(['list_policies'], True) for policy in policies: policy_name = policy.split('\t')[1] if policy_name == self._name: return True return False def set(self): import json args = ['set_policy'] args.append(self._name) args.append(self._pattern) args.append(json.dumps(self._tags)) args.append('--priority') args.append(self._priority) return self._exec(args) def clear(self): return self._exec(['clear_policy', self._name]) def main(): arg_spec = dict( name=dict(required=True), vhost=dict(default='/'), pattern=dict(required=True), tags=dict(type='dict', required=True), priority=dict(default='0'), node=dict(default='rabbit'), state=dict(default='present', choices=['present', 'absent']), ) module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) name = module.params['name'] state = module.params['state'] rabbitmq_policy = RabbitMqPolicy(module, name) changed = False if rabbitmq_policy.list(): if state == 'absent': rabbitmq_policy.clear() changed = True else: changed = False elif state == 'present': rabbitmq_policy.set() changed = True module.exit_json(changed=changed, name=name, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/messaging/rabbitmq_parameter0000664000000000000000000001073112316627017021052 0ustar 
rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Chatham Financial # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rabbitmq_parameter short_description: Adds or removes parameters to RabbitMQ description: - Manage dynamic, cluster-wide parameters for RabbitMQ version_added: "1.1" author: Chris Hoffman options: component: description: - Name of the component of which the parameter is being set required: true default: null name: description: - Name of the parameter being set required: true default: null value: description: - Value of the parameter, as a JSON term required: false default: null vhost: description: - vhost to apply access privileges. required: false default: / node: description: - erlang node name of the rabbit we wish to configure required: false default: rabbit state: description: - Specify if user is to be added or removed required: false default: present choices: [ 'present', 'absent'] ''' EXAMPLES = """ # Set the federation parameter 'local_username' to a value of 'guest' (in quotes) - rabbitmq_parameter: component=federation name=local-username value='"guest"' state=present """ class RabbitMqParameter(object): def __init__(self, module, component, name, value, vhost, node): self.module = module self.component = component self.name = name self.value = value self.vhost = vhost self.node = node self._value = None self._rabbitmqctl = module.get_bin_path('rabbitmqctl', True) def _exec(self, args, run_in_check_mode=False): if not self.module.check_mode or (self.module.check_mode and run_in_check_mode): cmd = [self._rabbitmqctl, '-q', '-n', self.node] rc, out, err = self.module.run_command(cmd + args, check_rc=True) return out.splitlines() return list() def get(self): parameters = self._exec(['list_parameters', '-p', self.vhost], True) for param_item in parameters: component, name, value = param_item.split('\t') if component == self.component and name == self.name: self._value = value return True return False def set(self): self._exec(['set_parameter', '-p', self.vhost, self.component, self.name, self.value]) def delete(self): self._exec(['clear_parameter', '-p', self.vhost, self.component, self.name]) def has_modifications(self): return self.value != self._value def main(): arg_spec = dict( component=dict(required=True), name=dict(required=True), value=dict(default=None), vhost=dict(default='/'), state=dict(default='present', choices=['present', 'absent']), node=dict(default='rabbit') ) module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) component = module.params['component'] name = module.params['name'] value = module.params['value'] vhost = module.params['vhost'] state = module.params['state'] node = module.params['node'] rabbitmq_parameter = RabbitMqParameter(module, component, name, value, vhost, node) changed = False if rabbitmq_parameter.get(): if state == 'absent': rabbitmq_parameter.delete() changed = True else: if 
rabbitmq_parameter.has_modifications(): rabbitmq_parameter.set() changed = True elif state == 'present': rabbitmq_parameter.set() changed = True module.exit_json(changed=changed, component=component, name=name, vhost=vhost, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/messaging/rabbitmq_vhost0000664000000000000000000001010012316627017020223 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Chatham Financial # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: rabbitmq_vhost short_description: Manage the state of a virtual host in RabbitMQ description: - Manage the state of a virtual host in RabbitMQ version_added: "1.1" author: Chris Hoffman options: name: description: - The name of the vhost to manage required: true default: null aliases: [vhost] node: description: - erlang node name of the rabbit we wish to configure required: false default: rabbit tracing: description: - Enable/disable tracing for a vhost default: "no" choices: [ "yes", "no" ] aliases: [trace] state: description: - The state of vhost default: present choices: [present, absent] ''' EXAMPLES = ''' # Ensure that the vhost /test exists. 
- rabbitmq_vhost: name=/test state=present ''' class RabbitMqVhost(object): def __init__(self, module, name, tracing, node): self.module = module self.name = name self.tracing = tracing self.node = node self._tracing = False self._rabbitmqctl = module.get_bin_path('rabbitmqctl', True) def _exec(self, args, run_in_check_mode=False): if not self.module.check_mode or (self.module.check_mode and run_in_check_mode): cmd = [self._rabbitmqctl, '-q', '-n', self.node] rc, out, err = self.module.run_command(cmd + args, check_rc=True) return out.splitlines() return list() def get(self): vhosts = self._exec(['list_vhosts', 'name', 'tracing'], True) for vhost in vhosts: name, tracing = vhost.split('\t') if name == self.name: self._tracing = self.module.boolean(tracing) return True return False def add(self): return self._exec(['add_vhost', self.name]) def delete(self): return self._exec(['delete_vhost', self.name]) def set_tracing(self): if self.tracing != self._tracing: if self.tracing: self._enable_tracing() else: self._disable_tracing() return True return False def _enable_tracing(self): return self._exec(['trace_on', '-p', self.name]) def _disable_tracing(self): return self._exec(['trace_off', '-p', self.name]) def main(): arg_spec = dict( name=dict(required=True, aliases=['vhost']), tracing=dict(default='off', aliases=['trace'], type='bool'), state=dict(default='present', choices=['present', 'absent']), node=dict(default='rabbit'), ) module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) name = module.params['name'] tracing = module.params['tracing'] state = module.params['state'] node = module.params['node'] rabbitmq_vhost = RabbitMqVhost(module, name, tracing, node) changed = False if rabbitmq_vhost.get(): if state == 'absent': rabbitmq_vhost.delete() changed = True else: if rabbitmq_vhost.set_tracing(): changed = True elif state == 'present': rabbitmq_vhost.add() rabbitmq_vhost.set_tracing() changed = True module.exit_json(changed=changed, name=name, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/messaging/rabbitmq_user0000664000000000000000000001666312316627017020062 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Chatham Financial # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: rabbitmq_user short_description: Adds or removes users to RabbitMQ description: - Add or remove users to RabbitMQ and assign permissions version_added: "1.1" author: Chris Hoffman options: user: description: - Name of user to add required: true default: null aliases: [username, name] password: description: - Password of user to add. - To change the password of an existing user, you must also specify C(force=yes). required: false default: null tags: description: - User tags specified as comma delimited required: false default: null vhost: description: - vhost to apply access privileges. 
    required: false
    default: /
  node:
    description:
      - erlang node name of the rabbit we wish to configure
    required: false
    default: rabbit
  configure_priv:
    description:
      - Regular expression to restrict configure actions on a resource for the specified vhost.
      - By default all actions are restricted.
    required: false
    default: ^$
  write_priv:
    description:
      - Regular expression to restrict write actions on a resource for the specified vhost.
      - By default all actions are restricted.
    required: false
    default: ^$
  read_priv:
    description:
      - Regular expression to restrict read actions on a resource for the specified vhost.
      - By default all actions are restricted.
    required: false
    default: ^$
  force:
    description:
      - Deletes and recreates the user.
    required: false
    default: "no"
    choices: [ "yes", "no" ]
  state:
    description:
      - Specify if user is to be added or removed
    required: false
    default: present
    choices: [present, absent]
'''

EXAMPLES = '''
# Add user to server and assign full access control
- rabbitmq_user: user=joe password=changeme vhost=/ configure_priv=.* read_priv=.* write_priv=.* state=present
'''

class RabbitMqUser(object):
    def __init__(self, module, username, password, tags, vhost, configure_priv, write_priv, read_priv, node):
        self.module = module
        self.username = username
        self.password = password
        self.node = node
        if tags is None:
            self.tags = list()
        else:
            self.tags = tags.split(',')

        permissions = dict(
            vhost=vhost,
            configure_priv=configure_priv,
            write_priv=write_priv,
            read_priv=read_priv
        )
        self.permissions = permissions

        self._tags = None
        self._permissions = None
        self._rabbitmqctl = module.get_bin_path('rabbitmqctl', True)

    def _exec(self, args, run_in_check_mode=False):
        if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):
            cmd = [self._rabbitmqctl, '-q', '-n', self.node]
            rc, out, err = self.module.run_command(cmd + args, check_rc=True)
            return out.splitlines()
        return list()

    def get(self):
        users = self._exec(['list_users'], True)

        for user_tag in users:
            user, tags = user_tag.split('\t')

            if user == self.username:
                for c in ['[',']',' ']:
                    tags = tags.replace(c, '')

                if tags != '':
                    self._tags = tags.split(',')
                else:
                    self._tags = list()

                self._permissions = self._get_permissions()
                return True
        return False

    def _get_permissions(self):
        perms_out = self._exec(['list_user_permissions', self.username], True)

        for perm in perms_out:
            vhost, configure_priv, write_priv, read_priv = perm.split('\t')
            if vhost == self.permissions['vhost']:
                return dict(vhost=vhost, configure_priv=configure_priv,
                            write_priv=write_priv, read_priv=read_priv)
        return dict()

    def add(self):
        self._exec(['add_user', self.username, self.password])

    def delete(self):
        self._exec(['delete_user', self.username])

    def set_tags(self):
        self._exec(['set_user_tags', self.username] + self.tags)

    def set_permissions(self):
        cmd = ['set_permissions']
        cmd.append('-p')
        cmd.append(self.permissions['vhost'])
        cmd.append(self.username)
        cmd.append(self.permissions['configure_priv'])
        cmd.append(self.permissions['write_priv'])
        cmd.append(self.permissions['read_priv'])
        self._exec(cmd)

    def has_tags_modifications(self):
        return set(self.tags) != set(self._tags)

    def has_permissions_modifications(self):
        return self._permissions != self.permissions

def main():
    arg_spec = dict(
        user=dict(required=True, aliases=['username', 'name']),
        password=dict(default=None),
        tags=dict(default=None),
        vhost=dict(default='/'),
        configure_priv=dict(default='^$'),
        write_priv=dict(default='^$'),
        read_priv=dict(default='^$'),
        force=dict(default='no', type='bool'),
state=dict(default='present', choices=['present', 'absent']), node=dict(default='rabbit') ) module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) username = module.params['user'] password = module.params['password'] tags = module.params['tags'] vhost = module.params['vhost'] configure_priv = module.params['configure_priv'] write_priv = module.params['write_priv'] read_priv = module.params['read_priv'] force = module.params['force'] state = module.params['state'] node = module.params['node'] rabbitmq_user = RabbitMqUser(module, username, password, tags, vhost, configure_priv, write_priv, read_priv, node) changed = False if rabbitmq_user.get(): if state == 'absent': rabbitmq_user.delete() changed = True else: if force: rabbitmq_user.delete() rabbitmq_user.add() rabbitmq_user.get() changed = True if rabbitmq_user.has_tags_modifications(): rabbitmq_user.set_tags() changed = True if rabbitmq_user.has_permissions_modifications(): rabbitmq_user.set_permissions() changed = True elif state == 'present': rabbitmq_user.add() rabbitmq_user.set_tags() rabbitmq_user.set_permissions() changed = True module.exit_json(changed=changed, user=username, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/messaging/rabbitmq_plugin0000664000000000000000000000754112316627017020375 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Chatham Financial # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
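# Implementation note (expected I/O, inferred rather than taken from RabbitMQ docs):
# get_all() below shells out to `rabbitmq-plugins list -E -m`, which should print one
# explicitly-enabled plugin name per line, e.g.
#
#   rabbitmq_management
#   rabbitmq_shovel
#
# That list is then diffed against the requested names to decide which plugins to
# enable or disable.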
DOCUMENTATION = ''' --- module: rabbitmq_plugin short_description: Adds or removes plugins to RabbitMQ description: - Enables or disables RabbitMQ plugins version_added: "1.1" author: Chris Hoffman options: names: description: - Comma-separated list of plugin names required: true default: null aliases: [name] new_only: description: - Only enable missing plugins - Does not disable plugins that are not in the names list required: false default: "no" choices: [ "yes", "no" ] state: description: - Specify if plugins are to be enabled or disabled required: false default: enabled choices: [enabled, disabled] prefix: description: - Specify a custom install prefix to a Rabbit required: false version_added: "1.3" default: null ''' EXAMPLES = ''' # Enables the rabbitmq_management plugin - rabbitmq_plugin: names=rabbitmq_management state=enabled ''' class RabbitMqPlugins(object): def __init__(self, module): self.module = module if module.params['prefix']: self._rabbitmq_plugins = module.params['prefix'] + "/sbin/rabbitmq-plugins" else: self._rabbitmq_plugins = module.get_bin_path('rabbitmq-plugins', True) def _exec(self, args, run_in_check_mode=False): if not self.module.check_mode or (self.module.check_mode and run_in_check_mode): cmd = [self._rabbitmq_plugins] rc, out, err = self.module.run_command(cmd + args, check_rc=True) return out.splitlines() return list() def get_all(self): return self._exec(['list', '-E', '-m'], True) def enable(self, name): self._exec(['enable', name]) def disable(self, name): self._exec(['disable', name]) def main(): arg_spec = dict( names=dict(required=True, aliases=['name']), new_only=dict(default='no', type='bool'), state=dict(default='enabled', choices=['enabled', 'disabled']), prefix=dict(required=False, default=None) ) module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) names = module.params['names'].split(',') new_only = module.params['new_only'] state = module.params['state'] rabbitmq_plugins = RabbitMqPlugins(module) enabled_plugins = rabbitmq_plugins.get_all() enabled = [] disabled = [] if state == 'enabled': if not new_only: for plugin in enabled_plugins: if plugin not in names: rabbitmq_plugins.disable(plugin) disabled.append(plugin) for name in names: if name not in enabled_plugins: rabbitmq_plugins.enable(name) enabled.append(name) else: for plugin in enabled_plugins: if plugin in names: rabbitmq_plugins.disable(plugin) disabled.append(plugin) changed = len(enabled) > 0 or len(disabled) > 0 module.exit_json(changed=changed, enabled=enabled, disabled=disabled) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/0000775000000000000000000000000012316627017017235 5ustar rootrootansible-1.5.4/library/net_infrastructure/bigip_pool_member0000664000000000000000000003030612316627017022634 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Matt Hite # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
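# Implementation note: the import of bigsuds is attempted up front and only recorded in
# the bigsuds_found flag, presumably so the missing dependency can later be reported
# through module.fail_json() once AnsibleModule is available, instead of printing raw
# JSON the way some other modules in this tree do.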
DOCUMENTATION = ''' --- module: bigip_pool_member short_description: "Manages F5 BIG-IP LTM pool members" description: - "Manages F5 BIG-IP LTM pool members via iControl SOAP API" version_added: "1.4" author: Matt Hite notes: - "Requires BIG-IP software version >= 11" - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)" - "Best run as a local_action in your playbook" - "Supersedes bigip_pool for managing pool members" requirements: - bigsuds options: server: description: - BIG-IP host required: true default: null choices: [] aliases: [] user: description: - BIG-IP username required: true default: null choices: [] aliases: [] password: description: - BIG-IP password required: true default: null choices: [] aliases: [] state: description: - Pool member state required: true default: present choices: ['present', 'absent'] aliases: [] pool: description: - Pool name. This pool must exist. required: true default: null choices: [] aliases: [] partition: description: - Partition required: false default: 'Common' choices: [] aliases: [] host: description: - Pool member IP required: true default: null choices: [] aliases: ['address', 'name'] port: description: - Pool member port required: true default: null choices: [] aliases: [] connection_limit: description: - Pool member connection limit. Setting this to 0 disables the limit. required: false default: null choices: [] aliases: [] description: description: - Pool member description required: false default: null choices: [] aliases: [] rate_limit: description: - Pool member rate limit (connections-per-second). Setting this to 0 disables the limit. required: false default: null choices: [] aliases: [] ratio: description: - Pool member ratio weight. Valid values range from 1 through 100. New pool members -- unless overridden with this value -- default to 1. required: false default: null choices: [] aliases: [] ''' EXAMPLES = ''' ## playbook task examples: --- # file bigip-test.yml # ... - hosts: bigip-test tasks: - name: Add pool member local_action: > bigip_pool_member server=lb.mydomain.com user=admin password=mysecret state=present pool=matthite-pool partition=matthite host="{{ ansible_default_ipv4["address"] }}" port=80 description="web server" connection_limit=100 rate_limit=50 ratio=2 - name: Modify pool member ratio and description local_action: > bigip_pool_member server=lb.mydomain.com user=admin password=mysecret state=present pool=matthite-pool partition=matthite host="{{ ansible_default_ipv4["address"] }}" port=80 ratio=1 description="nginx server" - name: Remove pool member from pool local_action: > bigip_pool_member server=lb.mydomain.com user=admin password=mysecret state=absent pool=matthite-pool partition=matthite host="{{ ansible_default_ipv4["address"] }}" port=80 ''' try: import bigsuds except ImportError: bigsuds_found = False else: bigsuds_found = True # =========================================== # bigip_pool_member module specific support methods. 
# def bigip_api(bigip, user, password): api = bigsuds.BIGIP(hostname=bigip, username=user, password=password) return api def pool_exists(api, pool): # hack to determine if pool exists result = False try: api.LocalLB.Pool.get_object_status(pool_names=[pool]) result = True except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def member_exists(api, pool, address, port): # hack to determine if member exists result = False try: members = [{'address': address, 'port': port}] api.LocalLB.Pool.get_member_object_status(pool_names=[pool], members=[members]) result = True except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def delete_node_address(api, address): result = False try: api.LocalLB.NodeAddressV2.delete_node_address(nodes=[address]) result = True except bigsuds.OperationFailed, e: if "is referenced by a member of pool" in str(e): result = False else: # genuine exception raise return result def remove_pool_member(api, pool, address, port): members = [{'address': address, 'port': port}] api.LocalLB.Pool.remove_member_v2(pool_names=[pool], members=[members]) def add_pool_member(api, pool, address, port): members = [{'address': address, 'port': port}] api.LocalLB.Pool.add_member_v2(pool_names=[pool], members=[members]) def get_connection_limit(api, pool, address, port): members = [{'address': address, 'port': port}] result = api.LocalLB.Pool.get_member_connection_limit(pool_names=[pool], members=[members])[0][0] return result def set_connection_limit(api, pool, address, port, limit): members = [{'address': address, 'port': port}] api.LocalLB.Pool.set_member_connection_limit(pool_names=[pool], members=[members], limits=[[limit]]) def get_description(api, pool, address, port): members = [{'address': address, 'port': port}] result = api.LocalLB.Pool.get_member_description(pool_names=[pool], members=[members])[0][0] return result def set_description(api, pool, address, port, description): members = [{'address': address, 'port': port}] api.LocalLB.Pool.set_member_description(pool_names=[pool], members=[members], descriptions=[[description]]) def get_rate_limit(api, pool, address, port): members = [{'address': address, 'port': port}] result = api.LocalLB.Pool.get_member_rate_limit(pool_names=[pool], members=[members])[0][0] return result def set_rate_limit(api, pool, address, port, limit): members = [{'address': address, 'port': port}] api.LocalLB.Pool.set_member_rate_limit(pool_names=[pool], members=[members], limits=[[limit]]) def get_ratio(api, pool, address, port): members = [{'address': address, 'port': port}] result = api.LocalLB.Pool.get_member_ratio(pool_names=[pool], members=[members])[0][0] return result def set_ratio(api, pool, address, port, ratio): members = [{'address': address, 'port': port}] api.LocalLB.Pool.set_member_ratio(pool_names=[pool], members=[members], ratios=[[ratio]]) def main(): module = AnsibleModule( argument_spec = dict( server = dict(type='str', required=True), user = dict(type='str', required=True), password = dict(type='str', required=True), state = dict(type='str', default='present', choices=['present', 'absent']), pool = dict(type='str', required=True), partition = dict(type='str', default='Common'), host = dict(type='str', required=True, aliases=['address', 'name']), port = dict(type='int', required=True), connection_limit = dict(type='int'), description = dict(type='str'), rate_limit = dict(type='int'), ratio = 
dict(type='int') ), supports_check_mode=True ) if not bigsuds_found: module.fail_json(msg="the python bigsuds module is required") server = module.params['server'] user = module.params['user'] password = module.params['password'] state = module.params['state'] partition = module.params['partition'] pool = "/%s/%s" % (partition, module.params['pool']) connection_limit = module.params['connection_limit'] description = module.params['description'] rate_limit = module.params['rate_limit'] ratio = module.params['ratio'] host = module.params['host'] address = "/%s/%s" % (partition, host) port = module.params['port'] # sanity check user supplied values if (host and not port) or (port and not host): module.fail_json(msg="both host and port must be supplied") if port < 1 or port > 65535: module.fail_json(msg="valid ports must be in range 1 - 65535") try: api = bigip_api(server, user, password) if not pool_exists(api, pool): module.fail_json(msg="pool %s does not exist" % pool) result = {'changed': False} # default if state == 'absent': if member_exists(api, pool, address, port): if not module.check_mode: remove_pool_member(api, pool, address, port) deleted = delete_node_address(api, address) result = {'changed': True, 'deleted': deleted} else: result = {'changed': True} elif state == 'present': if not member_exists(api, pool, address, port): if not module.check_mode: add_pool_member(api, pool, address, port) if connection_limit is not None: set_connection_limit(api, pool, address, port, connection_limit) if description is not None: set_description(api, pool, address, port, description) if rate_limit is not None: set_rate_limit(api, pool, address, port, rate_limit) if ratio is not None: set_ratio(api, pool, address, port, ratio) result = {'changed': True} else: # pool member exists -- potentially modify attributes if connection_limit is not None and connection_limit != get_connection_limit(api, pool, address, port): if not module.check_mode: set_connection_limit(api, pool, address, port, connection_limit) result = {'changed': True} if description is not None and description != get_description(api, pool, address, port): if not module.check_mode: set_description(api, pool, address, port, description) result = {'changed': True} if rate_limit is not None and rate_limit != get_rate_limit(api, pool, address, port): if not module.check_mode: set_rate_limit(api, pool, address, port, rate_limit) result = {'changed': True} if ratio is not None and ratio != get_ratio(api, pool, address, port): if not module.check_mode: set_ratio(api, pool, address, port, ratio) result = {'changed': True} except Exception, e: module.fail_json(msg="received exception: %s" % e) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/bigip_monitor_tcp0000664000000000000000000003772412316627017022674 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, serge van Ginderachter # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: bigip_monitor_tcp short_description: "Manages F5 BIG-IP LTM tcp monitors" description: - "Manages F5 BIG-IP LTM tcp monitors via iControl SOAP API" version_added: "1.4" author: Serge van Ginderachter notes: - "Requires BIG-IP software version >= 11" - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)" - "Best run as a local_action in your playbook" - "Monitor API documentation: https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx" requirements: - bigsuds options: server: description: - BIG-IP host required: true default: null user: description: - BIG-IP username required: true default: null password: description: - BIG-IP password required: true default: null state: description: - Monitor state required: false default: 'present' choices: ['present', 'absent'] name: description: - Monitor name required: true default: null aliases: ['monitor'] partition: description: - Partition for the monitor required: false default: 'Common' type: description: - The template type of this monitor template required: false default: 'tcp' choices: [ 'tcp', 'tcp_echo', 'tcp_half_open'] parent: description: - The parent template of this monitor template required: false default: 'tcp' choices: [ 'tcp', 'tcp_echo', 'tcp_half_open'] parent_partition: description: - Partition for the parent monitor required: false default: 'Common' send: description: - The send string for the monitor call required: true default: none receive: description: - The receive string for the monitor call required: true default: none ip: description: - IP address part of the ipport definition. The default API setting is "0.0.0.0". required: false default: none port: description: - Port address part of the ipport definition. The default API setting is 0. required: false default: none interval: description: - The interval specifying how frequently the monitor instance of this template will run. By default, this interval is used for up and down states. The default API setting is 5. required: false default: none timeout: description: - The number of seconds in which the node or service must respond to the monitor request. If the target responds within the set time period, it is considered up. If the target does not respond within the set time period, it is considered down. You can change this number to any number you want, however, it should be 3 times the interval number of seconds plus 1 second. The default API setting is 16. required: false default: none time_until_up: description: - Specifies the amount of time in seconds after the first successful response before a node will be marked up. A value of 0 will cause a node to be marked up immediately after a valid response is received from the node. The default API setting is 0. 
required: false default: none ''' EXAMPLES = ''' - name: BIGIP F5 | Create TCP Monitor local_action: module: bigip_monitor_tcp state: present server: "{{ f5server }}" user: "{{ f5user }}" password: "{{ f5password }}" name: "{{ item.monitorname }}" type: tcp send: "{{ item.send }}" receive: "{{ item.receive }}" with_items: f5monitors-tcp - name: BIGIP F5 | Create TCP half open Monitor local_action: module: bigip_monitor_tcp state: present server: "{{ f5server }}" user: "{{ f5user }}" password: "{{ f5password }}" name: "{{ item.monitorname }}" type: tcp send: "{{ item.send }}" receive: "{{ item.receive }}" with_items: f5monitors-halftcp - name: BIGIP F5 | Remove TCP Monitor local_action: module: bigip_monitor_tcp state: absent server: "{{ f5server }}" user: "{{ f5user }}" password: "{{ f5password }}" name: "{{ monitorname }}" with_flattened: - f5monitors-tcp - f5monitors-halftcp ''' try: import bigsuds except ImportError: bigsuds_found = False else: bigsuds_found = True TEMPLATE_TYPE = DEFAULT_TEMPLATE_TYPE = 'TTYPE_TCP' TEMPLATE_TYPE_CHOICES = ['tcp', 'tcp_echo', 'tcp_half_open'] DEFAULT_PARENT = DEFAULT_TEMPLATE_TYPE_CHOICE = DEFAULT_TEMPLATE_TYPE.replace('TTYPE_', '').lower() # =========================================== # bigip_monitor module generic methods. # these should be re-useable for other monitor types # def bigip_api(bigip, user, password): api = bigsuds.BIGIP(hostname=bigip, username=user, password=password) return api def check_monitor_exists(module, api, monitor, parent): # hack to determine if monitor exists result = False try: ttype = api.LocalLB.Monitor.get_template_type(template_names=[monitor])[0] parent2 = api.LocalLB.Monitor.get_parent_template(template_names=[monitor])[0] if ttype == TEMPLATE_TYPE and parent == parent2: result = True else: module.fail_json(msg='Monitor already exists, but has a different type (%s) or parent(%s)' % (ttype, parent)) except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def create_monitor(api, monitor, template_attributes): try: api.LocalLB.Monitor.create_template(templates=[{'template_name': monitor, 'template_type': TEMPLATE_TYPE}], template_attributes=[template_attributes]) except bigsuds.OperationFailed, e: if "already exists" in str(e): return False else: # genuine exception raise return True def delete_monitor(api, monitor): try: api.LocalLB.Monitor.delete_template(template_names=[monitor]) except bigsuds.OperationFailed, e: # maybe it was deleted since we checked if "was not found" in str(e): return False else: # genuine exception raise return True def check_string_property(api, monitor, str_property): return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0] def set_string_property(api, monitor, str_property): api.LocalLB.Monitor.set_template_string_property(template_names=[monitor], values=[str_property]) def check_integer_property(api, monitor, int_property): return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0] def set_integer_property(api, monitor, int_property): api.LocalLB.Monitor.set_template_int_property(template_names=[monitor], values=[int_property]) def update_monitor_properties(api, module, monitor, template_string_properties, template_integer_properties): changed = False for str_property in template_string_properties: if str_property['value'] is not None and not check_string_property(api, monitor, str_property): if not module.check_mode: 
set_string_property(api, monitor, str_property) changed = True for int_property in template_integer_properties: if int_property['value'] is not None and not check_integer_property(api, monitor, int_property): if not module.check_mode: set_integer_property(api, monitor, int_property) changed = True return changed def get_ipport(api, monitor): return api.LocalLB.Monitor.get_template_destination(template_names=[monitor])[0] def set_ipport(api, monitor, ipport): try: api.LocalLB.Monitor.set_template_destination(template_names=[monitor], destinations=[ipport]) return True, "" except bigsuds.OperationFailed, e: if "Cannot modify the address type of monitor" in str(e): return False, "Cannot modify the address type of monitor if already assigned to a pool." else: # genuine exception raise # =========================================== # main loop # # writing a module for other monitor types should # only need an updated main() (and monitor specific functions) def main(): # begin monitor specific stuff module = AnsibleModule( argument_spec = dict( server = dict(required=True), user = dict(required=True), password = dict(required=True), partition = dict(default='Common'), state = dict(default='present', choices=['present', 'absent']), name = dict(required=True), type = dict(default=DEFAULT_TEMPLATE_TYPE_CHOICE, choices=TEMPLATE_TYPE_CHOICES), parent = dict(default=DEFAULT_PARENT), parent_partition = dict(default='Common'), send = dict(required=False), receive = dict(required=False), ip = dict(required=False), port = dict(required=False, type='int'), interval = dict(required=False, type='int'), timeout = dict(required=False, type='int'), time_until_up = dict(required=False, type='int', default=0) ), supports_check_mode=True ) server = module.params['server'] user = module.params['user'] password = module.params['password'] partition = module.params['partition'] parent_partition = module.params['parent_partition'] state = module.params['state'] name = module.params['name'] type = 'TTYPE_' + module.params['type'].upper() parent = "/%s/%s" % (parent_partition, module.params['parent']) monitor = "/%s/%s" % (partition, name) send = module.params['send'] receive = module.params['receive'] ip = module.params['ip'] port = module.params['port'] interval = module.params['interval'] timeout = module.params['timeout'] time_until_up = module.params['time_until_up'] # tcp monitor has multiple types, so overrule global TEMPLATE_TYPE # (the global declaration is needed so the helper functions above see # the selected type instead of the module-level default) global TEMPLATE_TYPE TEMPLATE_TYPE = type # end monitor specific stuff if not bigsuds_found: module.fail_json(msg="the python bigsuds module is required") api = bigip_api(server, user, password) monitor_exists = check_monitor_exists(module, api, monitor, parent) # ipport is a special setting if monitor_exists: # make sure to not update current settings if not asked cur_ipport = get_ipport(api, monitor) if ip is None: ip = cur_ipport['ipport']['address'] if port is None: port = cur_ipport['ipport']['port'] else: # use API defaults if not defined to create it if interval is None: interval = 5 if timeout is None: timeout = 16 if ip is None: ip = '0.0.0.0' if port is None: port = 0 if send is None: send = '' if receive is None: receive = '' # define and set address type if ip == '0.0.0.0' and port == 0: address_type = 'ATYPE_STAR_ADDRESS_STAR_PORT' elif ip == '0.0.0.0' and port != 0: address_type = 'ATYPE_STAR_ADDRESS_EXPLICIT_PORT' elif ip != '0.0.0.0' and port != 0: address_type = 'ATYPE_EXPLICIT_ADDRESS_EXPLICIT_PORT' else: address_type = 'ATYPE_UNSET' ipport = {'address_type': address_type, 'ipport': 
{'address': ip, 'port': port}} template_attributes = {'parent_template': parent, 'interval': interval, 'timeout': timeout, 'dest_ipport': ipport, 'is_read_only': False, 'is_directly_usable': True} # monitor specific stuff if type == 'TTYPE_TCP': template_string_properties = [{'type': 'STYPE_SEND', 'value': send}, {'type': 'STYPE_RECEIVE', 'value': receive}] else: template_string_properties = [] template_integer_properties = [{'type': 'ITYPE_INTERVAL', 'value': interval}, {'type': 'ITYPE_TIMEOUT', 'value': timeout}, {'type': 'ITYPE_TIME_UNTIL_UP', 'value': time_until_up}] # main logic, monitor generic try: result = {'changed': False} # default if state == 'absent': if monitor_exists: if not module.check_mode: # possible race condition if same task # on other node deleted it first result['changed'] |= delete_monitor(api, monitor) else: result['changed'] |= True else: # state present ## check for monitor itself if not monitor_exists: # create it if not module.check_mode: # again, check changed status here b/c race conditions # if other task already created it result['changed'] |= create_monitor(api, monitor, template_attributes) else: result['changed'] |= True ## check for monitor parameters # whether it already existed, or was just created, now update # the update functions need to check for check mode but # cannot update settings if it doesn't exist which happens in check mode if monitor_exists and not module.check_mode: result['changed'] |= update_monitor_properties(api, module, monitor, template_string_properties, template_integer_properties) # else assume nothing changed # we just have to update the ipport if monitor already exists and it's different if monitor_exists and cur_ipport != ipport: set_ipport(api, monitor, ipport) result['changed'] |= True #else: monitor doesn't exist (check mode) or ipport is already ok except Exception, e: module.fail_json(msg="received exception: %s" % e) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/arista_lag0000664000000000000000000002644012316627017021274 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright (C) 2013, Arista Networks # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
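# The netdev CLI wrapped by this module replies with a JSON envelope; the
# get/create/update/delete methods below check its 'status' field and unpack
# 'result'. A minimal sketch of the expected shape (field values are
# illustrative, inferred from the parsing code below):
#
#   {"status": 200,
#    "result": {"interface_id": "Port-Channel1",
#               "links": "Ethernet1,Ethernet2",
#               "minimum_links": 1,
#               "lacp": "active"}}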
# DOCUMENTATION = ''' --- module: arista_lag version_added: "1.3" author: Peter Sprygada short_description: Manage port channel (lag) interfaces requirements: - Arista EOS 4.10 - Netdev extension for EOS description: - Manage port channel interface resources on Arista EOS network devices options: interface_id: description: - the full name of the interface required: true state: description: - describe the desired state of the interface related to the config required: false default: 'present' choices: [ 'present', 'absent' ] logging: description: - enables or disables the syslog facility for this module required: false default: false choices: [ 'true', 'false', 'yes', 'no' ] links: description: - array of physical interface links to include in this lag required: false minimum_links: description: - the minimum number of physical interfaces that must be operationally up to consider the lag operationally up required: false lacp: description: - enables the use of the LACP protocol for managing link bundles required: false default: 'active' choices: [ 'active', 'passive', 'off' ] notes: - Requires EOS 4.10 or later - The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI) - See http://eos.aristanetworks.com for details ''' EXAMPLES = ''' Example playbook entries using the arista_lag module to manage resource state. Note that interface names must be the full interface name, not shortcut names (i.e. Ethernet1, not Et1) tasks: - name: create lag interface action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true - name: add member links action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2,Ethernet3 logging=true - name: remove member links action: arista_lag interface_id=Port-Channel1 links=Ethernet2,Ethernet3 logging=true - name: remove lag interface action: arista_lag interface_id=Port-Channel1 state=absent logging=true ''' import syslog import json class AristaLag(object): """ This is the base class managing port-channel (lag) interface resources in Arista EOS network devices. This class provides an implementation for creating, updating and deleting port-channel interfaces. Note: The netdev extension for EOS must be installed in order for this module to work properly. The following commands are implemented in this module: * netdev lag list * netdev lag show * netdev lag edit * netdev lag delete """ attributes = ['links', 'minimum_links', 'lacp'] def __init__(self, module): self.module = module self.interface_id = module.params['interface_id'] self.state = module.params['state'] self.links = module.params['links'] self.minimum_links = module.params['minimum_links'] self.lacp = module.params['lacp'] self.logging = module.params['logging'] @property def changed(self): """ The changed property provides a boolean response if the currently loaded resource has changed from the resource running in EOS. Returns True if the object is not in sync Returns False if the object is in sync. """ return len(self.updates()) > 0 def log(self, entry): """ This method is responsible for sending log messages to the local syslog. """ if self.logging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, entry) def run_command(self, cmd): """ Calls the Ansible module run_command method. 
This method will directly return the results of the run_command method """ self.log("Command: %s" % cmd) return self.module.run_command(cmd.split()) def get(self): """ This method will return a dictionary with the attributes of the lag interface resource specified in interface_id. The lag interface resource has the following structure: { "interface_id": , "links": , "minimum_links": , "lacp": [active* | passive | off] } If the lag interface specified by interface_id does not exist in the system, this method will return None. """ cmd = "netdev lag show %s" % self.interface_id (rc, out, err) = self.run_command(cmd) obj = json.loads(out) if obj.get('status') != 200: return None return obj['result'] def create(self): """ Creates a lag interface resource in the current running configuration. If the lag interface already exists, the function will return successfully. This function implements the following commands: * netdev lag create {interface_id} [attributes] Returns the lag interface resource if the create method was successful Returns an error message if there was a problem creating the lag interface """ attribs = [] for attrib in self.attributes: if getattr(self, attrib): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) cmd = "netdev lag create %s " % self.interface_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 201: rc = int(resp['status']) err = resp['message'] out = None else: out = self.get() return (rc, out, err) def update(self): """ Updates an existing lag resource in the current running configuration. If the lag resource does not exist, this method will return an error. This method implements the following commands: * netdev lag edit {interface_id} [attributes] Returns an updated lag interface resource if the update method was successful """ attribs = list() for attrib in self.updates(): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) cmd = "netdev lag edit %s " % self.interface_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) return (2, None, "No attributes have been modified") def delete(self): """ Deletes an existing lag interface resource from the current running configuration. A nonexistent lag interface will return successful for this operation. This method implements the following commands: * netdev lag delete {interface_id} Returns nothing if the delete was successful Returns error message if there was a problem deleting the resource """ cmd = "netdev lag delete %s" % self.interface_id (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = resp['status'] err = resp['message'] out = None return (rc, out, err) def updates(self): """ This method will check the current lag interface resource in the running configuration and return a list of attributes that are not in sync with the current resource. """ obj = self.get() update = lambda a, z: a != z updates = list() for attrib in self.attributes: if update(obj[attrib], getattr(self, attrib)): updates.append(attrib) return updates def exists(self): """ Returns True if the current lag interface resource exists and returns False if it does not. This method only checks for the existence of the interface as specified in interface_id. 
""" (rc, out, err) = self.run_command("netdev lag list") collection = json.loads(out) return self.interface_id in collection.get('result') def main(): module = AnsibleModule( argument_spec = dict( interface_id=dict(default=None, type='str'), state=dict(default='present', choices=['present', 'absent'], type='str'), links=dict(default=None, type='str'), lacp=dict(default=None, choices=['active', 'passive', 'off'], type='str'), minimum_links=dict(default=None, type='int'), logging=dict(default=False, type='bool') ), supports_check_mode = True ) obj = AristaLag(module) rc = None result = dict() if obj.state == 'absent': if obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.delete() if rc !=0: module.fail_json(msg=err, rc=rc) elif obj.state == 'present': if not obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.create() result['results'] = out else: if module.check_mode: module.exit_json(changed=obj.changed) (rc, out, err) = obj.update() result['results'] = out if rc is not None and rc != 0: module.fail_json(msg=err, rc=rc) if rc is None: result['changed'] = False else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/openvswitch_port0000664000000000000000000000756412316627017022611 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . 
DOCUMENTATION = ''' --- module: openvswitch_port version_added: 1.4 short_description: Manage Open vSwitch ports requirements: [ ovs-vsctl ] description: - Manage Open vSwitch ports options: bridge: required: true description: - Name of bridge to manage port: required: true description: - Name of port to manage on the bridge state: required: false default: "present" choices: [ present, absent ] description: - Whether the port should exist timeout: required: false default: 5 description: - How long to wait for ovs-vswitchd to respond ''' EXAMPLES = ''' # Creates port eth2 on bridge br-ex - openvswitch_port: bridge=br-ex port=eth2 state=present ''' class OVSPort(object): def __init__(self, module): self.module = module self.bridge = module.params['bridge'] self.port = module.params['port'] self.state = module.params['state'] self.timeout = module.params['timeout'] def _vsctl(self, command): '''Run ovs-vsctl command''' return self.module.run_command(['ovs-vsctl', '-t', str(self.timeout)] + command) def exists(self): '''Check if the port already exists''' rc, out, err = self._vsctl(['list-ports', self.bridge]) if rc != 0: raise Exception(err) return any(port.rstrip() == self.port for port in out.split('\n')) def add(self): '''Add the port''' rc, _, err = self._vsctl(['add-port', self.bridge, self.port]) if rc != 0: raise Exception(err) def delete(self): '''Remove the port''' rc, _, err = self._vsctl(['del-port', self.bridge, self.port]) if rc != 0: raise Exception(err) def check(self): '''Run check mode''' try: if self.state == 'absent' and self.exists(): changed = True elif self.state == 'present' and not self.exists(): changed = True else: changed = False except Exception, e: self.module.fail_json(msg=str(e)) self.module.exit_json(changed=changed) def run(self): '''Make the necessary changes''' changed = False try: if self.state == 'absent': if self.exists(): self.delete() changed = True elif self.state == 'present': if not self.exists(): self.add() changed = True except Exception, e: self.module.fail_json(msg=str(e)) self.module.exit_json(changed=changed) def main(): module = AnsibleModule( argument_spec={ 'bridge': {'required': True}, 'port': {'required': True}, 'state': {'default': 'present', 'choices': ['present', 'absent']}, 'timeout': {'default': 5, 'type': 'int'} }, supports_check_mode=True, ) port = OVSPort(module) if module.check_mode: port.check() else: port.run() # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/dnsmadeeasy0000664000000000000000000002672012316627017021464 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: dnsmadeeasy version_added: "1.3" short_description: Interface with dnsmadeeasy.com (a DNS hosting service). description: - "Manages DNS records via the v2 REST API of the DNS Made Easy service. 
It handles records only; there is no manipulation of domains or monitor/account support yet. See: U(http://www.dnsmadeeasy.com/services/rest-api/)" options: account_key: description: - Account API Key. required: true default: null account_secret: description: - Account Secret Key. required: true default: null domain: description: - Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNS Made Easy (e.g. "839989") for faster resolution. required: true default: null record_name: description: - Record name to get/create/delete/update. If record_name is not specified, all records for the domain will be returned in "result" regardless of the state argument. required: false default: null record_type: description: - Record type. required: false choices: [ 'A', 'AAAA', 'CNAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT' ] default: null record_value: description: - "Record value. HTTPRED: <redirection URL>, MX: <priority> <target name>, NS: <name server>, PTR: <target name>, SRV: <priority> <weight> <port> <target name>, TXT: <text value>" - "If record_value is not specified, no changes will be made and the record will be returned in 'result' (in other words, this module can be used to fetch a record's current id, type, and ttl)" required: false default: null record_ttl: description: - record's "Time to live". Number of seconds the record remains cached in DNS servers. required: false default: 1800 state: description: - whether the record should exist or not required: true choices: [ 'present', 'absent' ] default: null validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] version_added: 1.5.1 notes: - The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone set. Be sure you are within a few seconds of actual time by using NTP. - This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be registered and used in your playbooks. requirements: [ urllib, urllib2, hashlib, hmac ] author: Brice Burgess ''' EXAMPLES = ''' # fetch my.com domain records - dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present register: response # create / ensure the presence of a record - dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_type="A" record_value="127.0.0.1" # update the previously created record - dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_value="192.168.0.1" # fetch a specific record - dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" register: response # delete a record / ensure it is absent - dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=absent record_name="test" ''' # ============================================ # DNSMadeEasy module specific support methods. # IMPORT_ERROR = None try: import json from time import strftime, gmtime import hashlib import hmac except ImportError, e: IMPORT_ERROR = str(e) class DME2: def __init__(self, apikey, secret, domain, module): self.module = module self.api = apikey self.secret = secret self.baseurl = 'https://api.dnsmadeeasy.com/V2.0/' self.domain = str(domain) self.domain_map = None # ["domain_name"] => ID self.record_map = None # ["record_name"] => ID self.records = None # ["record_ID"] => # Lookup the domain ID if passed as a domain name vs. 
ID if not self.domain.isdigit(): self.domain = self.getDomainByName(self.domain)['id'] self.record_url = 'dns/managed/' + str(self.domain) + '/records' def _headers(self): currTime = self._get_date() hashstring = self._create_hash(currTime) headers = {'x-dnsme-apiKey': self.api, 'x-dnsme-hmac': hashstring, 'x-dnsme-requestDate': currTime, 'content-type': 'application/json'} return headers def _get_date(self): return strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime()) def _create_hash(self, rightnow): return hmac.new(self.secret.encode(), rightnow.encode(), hashlib.sha1).hexdigest() def query(self, resource, method, data=None): url = self.baseurl + resource if data and not isinstance(data, basestring): data = urllib.urlencode(data) response, info = fetch_url(self.module, url, data=data, method=method, headers=self._headers()) if info['status'] not in (200, 201, 204): self.module.fail_json(msg="%s returned %s, with body: %s" % (url, info['status'], info['msg'])) try: return json.load(response) except Exception, e: return {} def getDomain(self, domain_id): if not self.domain_map: self._instMap('domain') return self.domains.get(domain_id, False) def getDomainByName(self, domain_name): if not self.domain_map: self._instMap('domain') return self.getDomain(self.domain_map.get(domain_name, 0)) def getDomains(self): return self.query('dns/managed', 'GET')['data'] def getRecord(self, record_id): if not self.record_map: self._instMap('record') return self.records.get(record_id, False) def getRecordByName(self, record_name): if not self.record_map: self._instMap('record') return self.getRecord(self.record_map.get(record_name, 0)) def getRecords(self): return self.query(self.record_url, 'GET')['data'] def _instMap(self, type): #@TODO cache this call so it's executed only once per ansible execution map = {} results = {} # iterate over e.g. self.getDomains() || self.getRecords() for result in getattr(self, 'get' + type.title() + 's')(): map[result['name']] = result['id'] results[result['id']] = result # e.g. self.domain_map || self.record_map setattr(self, type + '_map', map) setattr(self, type + 's', results) # e.g. self.domains || self.records def prepareRecord(self, data): return json.dumps(data, separators=(',', ':')) def createRecord(self, data): #@TODO update the cache w/ resultant record + id when implemented return self.query(self.record_url, 'POST', data) def updateRecord(self, record_id, data): #@TODO update the cache w/ resultant record + id when implemented return self.query(self.record_url + '/' + str(record_id), 'PUT', data) def deleteRecord(self, record_id): #@TODO remove record from the cache when implemented return self.query(self.record_url + '/' + str(record_id), 'DELETE') # =========================================== # Module execution. 
# def main(): module = AnsibleModule( argument_spec=dict( account_key=dict(required=True), account_secret=dict(required=True, no_log=True), domain=dict(required=True), state=dict(required=True, choices=['present', 'absent']), record_name=dict(required=False), record_type=dict(required=False, choices=[ 'A', 'AAAA', 'CNAME', 'HTTPRED', 'MX', 'NS', 'PTR', 'SRV', 'TXT']), record_value=dict(required=False), record_ttl=dict(required=False, default=1800, type='int'), validate_certs = dict(default='yes', type='bool'), ), required_together=( ['record_value', 'record_ttl', 'record_type'] ) ) if IMPORT_ERROR: module.fail_json(msg="Import Error: " + IMPORT_ERROR) DME = DME2(module.params["account_key"], module.params[ "account_secret"], module.params["domain"], module) state = module.params["state"] record_name = module.params["record_name"] # Follow Keyword Controlled Behavior if not record_name: domain_records = DME.getRecords() if not domain_records: module.fail_json( msg="The requested domain name is not accessible with this api_key; try using its ID if known.") module.exit_json(changed=False, result=domain_records) # Fetch existing record + Build new one current_record = DME.getRecordByName(record_name) new_record = {'name': record_name} for i in ["record_value", "record_type", "record_ttl"]: if module.params[i]: new_record[i[len("record_"):]] = module.params[i] # Compare new record against existing one changed = False if current_record: for i in new_record: if str(current_record[i]) != str(new_record[i]): changed = True new_record['id'] = str(current_record['id']) # Follow Keyword Controlled Behavior if state == 'present': # return the record if no value is specified if not "value" in new_record: if not current_record: module.fail_json( msg="A record with name '%s' does not exist for domain '%s.'" % (record_name, module.params['domain'])) module.exit_json(changed=False, result=current_record) # create record as it does not exist if not current_record: record = DME.createRecord(DME.prepareRecord(new_record)) module.exit_json(changed=True, result=record) # update the record if changed: DME.updateRecord( current_record['id'], DME.prepareRecord(new_record)) module.exit_json(changed=True, result=new_record) # return the record (no changes) module.exit_json(changed=False, result=current_record) elif state == 'absent': # delete the record if it exists if current_record: DME.deleteRecord(current_record['id']) module.exit_json(changed=True) # record does not exist, return w/o change. module.exit_json(changed=False) else: module.fail_json( msg="'%s' is an unknown value for the state argument" % state) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/net_infrastructure/bigip_monitor_http0000664000000000000000000003601612316627017023066 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, serge van Ginderachter # based on Matt Hite's bigip_pool module # (c) 2013, Matt Hite # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: bigip_monitor_http short_description: "Manages F5 BIG-IP LTM http monitors" description: - "Manages F5 BIG-IP LTM monitors via iControl SOAP API" version_added: "1.4" author: Serge van Ginderachter notes: - "Requires BIG-IP software version >= 11" - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)" - "Best run as a local_action in your playbook" - "Monitor API documentation: https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx" requirements: - bigsuds options: server: description: - BIG-IP host required: true default: null user: description: - BIG-IP username required: true default: null password: description: - BIG-IP password required: true default: null state: description: - Monitor state required: false default: 'present' choices: ['present', 'absent'] name: description: - Monitor name required: true default: null aliases: ['monitor'] partition: description: - Partition for the monitor required: false default: 'Common' parent: description: - The parent template of this monitor template required: false default: 'http' parent_partition: description: - Partition for the parent monitor required: false default: 'Common' send: description: - The send string for the monitor call required: true default: none receive: description: - The receive string for the monitor call required: true default: none receive_disable: description: - The receive disable string for the monitor call required: true default: none ip: description: - IP address part of the ipport definition. The default API setting is "0.0.0.0". required: false default: none port: description: - Port address part of the ipport definition. The default API setting is 0. required: false default: none interval: description: - The interval specifying how frequently the monitor instance of this template will run. By default, this interval is used for up and down states. The default API setting is 5. required: false default: none timeout: description: - The number of seconds in which the node or service must respond to the monitor request. If the target responds within the set time period, it is considered up. If the target does not respond within the set time period, it is considered down. You can change this number to any number you want, however, it should be 3 times the interval number of seconds plus 1 second. The default API setting is 16. required: false default: none time_until_up: description: - Specifies the amount of time in seconds after the first successful response before a node will be marked up. A value of 0 will cause a node to be marked up immediately after a valid response is received from the node. The default API setting is 0. 
required: false default: none ''' EXAMPLES = ''' - name: BIGIP F5 | Create HTTP Monitor local_action: module: bigip_monitor_http state: present server: "{{ f5server }}" user: "{{ f5user }}" password: "{{ f5password }}" name: "{{ item.monitorname }}" send: "{{ item.send }}" receive: "{{ item.receive }}" with_items: f5monitors - name: BIGIP F5 | Remove HTTP Monitor local_action: module: bigip_monitor_http state: absent server: "{{ f5server }}" user: "{{ f5user }}" password: "{{ f5password }}" name: "{{ monitorname }}" ''' try: import bigsuds except ImportError: bigsuds_found = False else: bigsuds_found = True TEMPLATE_TYPE = 'TTYPE_HTTP' DEFAULT_PARENT_TYPE = 'http' # =========================================== # bigip_monitor module generic methods. # these should be re-useable for other monitor types # def bigip_api(bigip, user, password): api = bigsuds.BIGIP(hostname=bigip, username=user, password=password) return api def check_monitor_exists(module, api, monitor, parent): # hack to determine if monitor exists result = False try: ttype = api.LocalLB.Monitor.get_template_type(template_names=[monitor])[0] parent2 = api.LocalLB.Monitor.get_parent_template(template_names=[monitor])[0] if ttype == TEMPLATE_TYPE and parent == parent2: result = True else: module.fail_json(msg='Monitor already exists, but has a different type (%s) or parent(%s)' % (ttype, parent)) except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def create_monitor(api, monitor, template_attributes): try: api.LocalLB.Monitor.create_template(templates=[{'template_name': monitor, 'template_type': TEMPLATE_TYPE}], template_attributes=[template_attributes]) except bigsuds.OperationFailed, e: if "already exists" in str(e): return False else: # genuine exception raise return True def delete_monitor(api, monitor): try: api.LocalLB.Monitor.delete_template(template_names=[monitor]) except bigsuds.OperationFailed, e: # maybe it was deleted since we checked if "was not found" in str(e): return False else: # genuine exception raise return True def check_string_property(api, monitor, str_property): return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0] def set_string_property(api, monitor, str_property): api.LocalLB.Monitor.set_template_string_property(template_names=[monitor], values=[str_property]) def check_integer_property(api, monitor, int_property): return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0] def set_integer_property(api, monitor, int_property): api.LocalLB.Monitor.set_template_int_property(template_names=[monitor], values=[int_property]) def update_monitor_properties(api, module, monitor, template_string_properties, template_integer_properties): changed = False for str_property in template_string_properties: if str_property['value'] is not None and not check_string_property(api, monitor, str_property): if not module.check_mode: set_string_property(api, monitor, str_property) changed = True for int_property in template_integer_properties: if int_property['value'] is not None and not check_integer_property(api, monitor, int_property): if not module.check_mode: set_integer_property(api, monitor, int_property) changed = True return changed def get_ipport(api, monitor): return api.LocalLB.Monitor.get_template_destination(template_names=[monitor])[0] def set_ipport(api, monitor, ipport): try: 
api.LocalLB.Monitor.set_template_destination(template_names=[monitor], destinations=[ipport]) return True, "" except bigsuds.OperationFailed, e: if "Cannot modify the address type of monitor" in str(e): return False, "Cannot modify the address type of monitor if already assigned to a pool." else: # genuine exception raise # =========================================== # main loop # # writing a module for other monitor types should # only need an updated main() (and monitor specific functions) def main(): # begin monitor specific stuff module = AnsibleModule( argument_spec = dict( server = dict(required=True), user = dict(required=True), password = dict(required=True), partition = dict(default='Common'), state = dict(default='present', choices=['present', 'absent']), name = dict(required=True), parent = dict(default=DEFAULT_PARENT_TYPE), parent_partition = dict(default='Common'), send = dict(required=False), receive = dict(required=False), receive_disable = dict(required=False), ip = dict(required=False), port = dict(required=False, type='int'), interval = dict(required=False, type='int'), timeout = dict(required=False, type='int'), time_until_up = dict(required=False, type='int', default=0) ), supports_check_mode=True ) server = module.params['server'] user = module.params['user'] password = module.params['password'] partition = module.params['partition'] parent_partition = module.params['parent_partition'] state = module.params['state'] name = module.params['name'] parent = "/%s/%s" % (parent_partition, module.params['parent']) monitor = "/%s/%s" % (partition, name) send = module.params['send'] receive = module.params['receive'] receive_disable = module.params['receive_disable'] ip = module.params['ip'] port = module.params['port'] interval = module.params['interval'] timeout = module.params['timeout'] time_until_up = module.params['time_until_up'] # end monitor specific stuff if not bigsuds_found: module.fail_json(msg="the python bigsuds module is required") api = bigip_api(server, user, password) monitor_exists = check_monitor_exists(module, api, monitor, parent) # ipport is a special setting if monitor_exists: # make sure to not update current settings if not asked cur_ipport = get_ipport(api, monitor) if ip is None: ip = cur_ipport['ipport']['address'] if port is None: port = cur_ipport['ipport']['port'] else: # use API defaults if not defined to create it if interval is None: interval = 5 if timeout is None: timeout = 16 if ip is None: ip = '0.0.0.0' if port is None: port = 0 if send is None: send = '' if receive is None: receive = '' if receive_disable is None: receive_disable = '' # define and set address type if ip == '0.0.0.0' and port == 0: address_type = 'ATYPE_STAR_ADDRESS_STAR_PORT' elif ip == '0.0.0.0' and port != 0: address_type = 'ATYPE_STAR_ADDRESS_EXPLICIT_PORT' elif ip != '0.0.0.0' and port != 0: address_type = 'ATYPE_EXPLICIT_ADDRESS_EXPLICIT_PORT' else: address_type = 'ATYPE_UNSET' ipport = {'address_type': address_type, 'ipport': {'address': ip, 'port': port}} template_attributes = {'parent_template': parent, 'interval': interval, 'timeout': timeout, 'dest_ipport': ipport, 'is_read_only': False, 'is_directly_usable': True} # monitor specific stuff template_string_properties = [{'type': 'STYPE_SEND', 'value': send}, {'type': 'STYPE_RECEIVE', 'value': receive}, {'type': 'STYPE_RECEIVE_DRAIN', 'value': receive_disable}] template_integer_properties = [{'type': 'ITYPE_INTERVAL', 'value': interval}, {'type': 'ITYPE_TIMEOUT', 'value': timeout}, {'type': 
'ITYPE_TIME_UNTIL_UP', 'value': time_until_up}] # main logic, monitor generic try: result = {'changed': False} # default if state == 'absent': if monitor_exists: if not module.check_mode: # possible race condition if same task # on other node deleted it first result['changed'] |= delete_monitor(api, monitor) else: result['changed'] |= True else: # state present ## check for monitor itself if not monitor_exists: # create it if not module.check_mode: # again, check changed status here b/c race conditions # if other task already created it result['changed'] |= create_monitor(api, monitor, template_attributes) else: result['changed'] |= True ## check for monitor parameters # whether it already existed, or was just created, now update # the update functions need to check for check mode but # cannot update settings if it doesn't exist which happens in check mode result['changed'] |= update_monitor_properties(api, module, monitor, template_string_properties, template_integer_properties) # we just have to update the ipport if monitor already exists and it's different if monitor_exists and cur_ipport != ipport: set_ipport(api, monitor, ipport) result['changed'] |= True #else: monitor doesn't exist (check mode) or ipport is already ok except Exception, e: module.fail_json(msg="received exception: %s" % e) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/openvswitch_bridge0000664000000000000000000000727212316627017023055 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . 
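# exists() below leans on the documented exit status of `ovs-vsctl br-exists`
# rather than parsing output; a sketch of that contract (bridge name
# illustrative, see ovs-vsctl(8)):
#
#   ovs-vsctl -t 5 br-exists br-int
#   # rc == 0 -> bridge exists, rc == 2 -> bridge does not exist,
#   # anything else -> treated as an error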
DOCUMENTATION = ''' --- module: openvswitch_bridge version_added: 1.4 short_description: Manage Open vSwitch bridges requirements: [ ovs-vsctl ] description: - Manage Open vSwitch bridges options: bridge: required: true description: - Name of bridge to manage state: required: false default: "present" choices: [ present, absent ] description: - Whether the bridge should exist timeout: required: false default: 5 description: - How long to wait for ovs-vswitchd to respond ''' EXAMPLES = ''' # Create a bridge named br-int - openvswitch_bridge: bridge=br-int state=present ''' class OVSBridge(object): def __init__(self, module): self.module = module self.bridge = module.params['bridge'] self.state = module.params['state'] self.timeout = module.params['timeout'] def _vsctl(self, command): '''Run ovs-vsctl command''' return self.module.run_command(['ovs-vsctl', '-t', str(self.timeout)] + command) def exists(self): '''Check if the bridge already exists''' rc, _, err = self._vsctl(['br-exists', self.bridge]) if rc == 0: # See ovs-vsctl(8) for status codes return True if rc == 2: return False raise Exception(err) def add(self): '''Create the bridge''' rc, _, err = self._vsctl(['add-br', self.bridge]) if rc != 0: raise Exception(err) def delete(self): '''Delete the bridge''' rc, _, err = self._vsctl(['del-br', self.bridge]) if rc != 0: raise Exception(err) def check(self): '''Run check mode''' try: if self.state == 'absent' and self.exists(): changed = True elif self.state == 'present' and not self.exists(): changed = True else: changed = False except Exception, e: self.module.fail_json(msg=str(e)) self.module.exit_json(changed=changed) def run(self): '''Make the necessary changes''' changed = False try: if self.state == 'absent': if self.exists(): self.delete() changed = True elif self.state == 'present': if not self.exists(): self.add() changed = True except Exception, e: self.module.fail_json(msg=str(e)) self.module.exit_json(changed=changed) def main(): module = AnsibleModule( argument_spec={ 'bridge': {'required': True}, 'state': {'default': 'present', 'choices': ['present', 'absent']}, 'timeout': {'default': 5, 'type': 'int'} }, supports_check_mode=True, ) br = OVSBridge(module) if module.check_mode: br.check() else: br.run() # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/arista_interface0000664000000000000000000002173412316627017022472 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright (C) 2013, Arista Networks # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
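# ===========================================
# editor's note: a standalone sketch of the br-exists status-code
# dispatch that OVSBridge.exists() above relies on. per ovs-vsctl(8),
# exit status 0 means the bridge exists and 2 means it does not;
# anything else is treated as an error. assumes ovs-vsctl is on PATH;
# illustrative only.
import subprocess

def bridge_exists(bridge, timeout=5):
    proc = subprocess.Popen(['ovs-vsctl', '-t', str(timeout),
                             'br-exists', bridge],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    _, err = proc.communicate()
    if proc.returncode == 0:
        return True
    if proc.returncode == 2:
        return False
    raise Exception(err)
# ===========================================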
# DOCUMENTATION = ''' --- module: arista_interface version_added: "1.3" author: Peter Sprygada short_description: Manage physical Ethernet interfaces requirements: - Arista EOS 4.10 - Netdev extension for EOS description: - Manage physical Ethernet interface resources on Arista EOS network devices options: interface_id: description: - the full name of the interface required: true logging: description: - enables or disables the syslog facility for this module required: false default: false choices: [ 'true', 'false', 'yes', 'no' ] admin: description: - controls the operational state of the interface required: false choices: [ 'up', 'down' ] description: description: - a single line text string describing the interface required: false mtu: description: - configures the maximum transmission unit for the interface required: false default: 1500 speed: description: - sets the interface speed setting required: false default: 'auto' choices: [ 'auto', '100m', '1g', '10g' ] duplex: description: - sets the interface duplex setting required: false default: 'auto' choices: [ 'auto', 'half', 'full' ] notes: - Requires EOS 4.10 or later - The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI) - See http://eos.aristanetworks.com for details ''' EXAMPLES = ''' Example playbook entries using the arista_interface module to manage resource state. Note that interface names must be the full interface name, not shortcut names (ie Ethernet1, not Et1) tasks: - name: enable interface Ethernet 1 action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true - name: set mtu on Ethernet 1 action: arista_interface interface_id=Ethernet1 mtu=1600 speed=10g duplex=full logging=true - name: reset changes to Ethernet 1 action: arista_interface interface_id=Ethernet1 admin=down mtu=1500 speed=10g duplex=full logging=true ''' import syslog import json class AristaInterface(object): """ This is the base class for managing physical Ethernet interface resources in EOS network devices. This class acts as a wrapper around the netdev extension in EOS. You must have the netdev extension installed in order for this module to work properly. The following commands are implemented in this module: * netdev interface list * netdev interface show * netdev interface edit * netdev interface delete This module only allows for the management of physical Ethernet interfaces. """ attributes = ['interface_id', 'admin', 'description', 'mtu', 'speed', 'duplex'] def __init__(self, module): self.module = module self.interface_id = module.params['interface_id'] self.admin = module.params['admin'] self.description = module.params['description'] self.mtu = module.params['mtu'] self.speed = module.params['speed'] self.duplex = module.params['duplex'] self.logging = module.params['logging'] @property def changed(self): """ The changed property provides a boolean response if the currently loaded resource has changed from the resource running in EOS. Returns True if the object is not in sync Returns False if the object is in sync. """ return len(self.updates()) > 0 def log(self, entry): """ This method is responsible for sending log messages to the local syslog. """ if self.logging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, entry) def run_command(self, cmd): """ Calls the Ansible module run_command method.
This method will directly return the results of the run_command method """ self.log(cmd) return self.module.run_command(cmd.split()) def get(self): """ This method will return a dictionary with the attributes of the physical ethernet interface resource specified in interface_id. The physcial ethernet interface resource has the following stucture: { "interface_id": , "description": , "admin": [up | down], "mtu": , "speed": [auto | 100m | 1g | 10g] "duplex": [auto | half | full] } If the physical ethernet interface specified by interface_id does not exist in the system, this method will return None. """ cmd = "netdev interface show %s" % self.interface_id (rc, out, err) = self.run_command(cmd) obj = json.loads(out) if obj.get('status') != 200: return None return obj['result'] def update(self): """ Updates an existing physical ethernet resource in the current running configuration. If the physical ethernet resource does not exist, this method will return an error. This method implements the following commands: * netdev interface edit {interface_id} [attributes] Returns an updated physical ethernet interafce resoure if the update method was successful """ attribs = list() for attrib in self.updates(): attribs.append("--%s" % attrib) attribs.append(str(getattr(self, attrib))) if attribs: cmd = "netdev interface edit %s " % self.interface_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) return (0, None, "No attributes have been modified") def updates(self): """ This method will check the current phy resource in the running configuration and return a list of attribute that are not in sync with the current resource from the running configuration. """ obj = self.get() update = lambda a, z: a != z updates = list() for attrib in self.attributes: value = getattr(self, attrib) if update(obj[attrib], value) and value is not None: updates.append(attrib) self.log("updates: %s" % updates) return updates def main(): module = AnsibleModule( argument_spec = dict( interface_id=dict(default=None, type='str'), admin=dict(default=None, choices=['up', 'down'], type='str'), description=dict(default=None, type='str'), mtu=dict(default=None, type='int'), speed=dict(default=None, choices=['auto', '100m', '1g', '10g']), duplex=dict(default=None, choices=['auto', 'half', 'full']), logging=dict(default=False, type='bool') ), supports_check_mode = True ) obj = AristaInterface(module) rc = None result = dict() if module.check_mode: module.exit_json(changed=obj.changed) else: if obj.changed: (rc, out, err) = obj.update() result['results'] = out if rc is not None and rc != 0: module.fail_json(msg=err, rc=rc) if rc is None: result['changed'] = False else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/bigip_node0000664000000000000000000002156612316627017021271 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Matt Hite # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
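# ===========================================
# editor's note: arista_interface above -- and the other arista_*
# modules later in this tree -- compute idempotency the same way:
# compare each desired attribute against the resource read back from
# the device and collect the names that differ. a generic sketch of
# that diff (names here are illustrative, not part of the modules):

def pending_updates(desired, current, attributes):
    # return attribute names whose desired value is set and differs
    # from the current (device) value
    return [a for a in attributes
            if desired.get(a) is not None and desired.get(a) != current.get(a)]

# e.g. pending_updates({'mtu': 1600, 'speed': None},
#                      {'mtu': 1500, 'speed': 'auto'},
#                      ['mtu', 'speed']) returns ['mtu']
# ===========================================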
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: bigip_node short_description: "Manages F5 BIG-IP LTM nodes" description: - "Manages F5 BIG-IP LTM nodes via iControl SOAP API" version_added: "1.4" author: Matt Hite notes: - "Requires BIG-IP software version >= 11" - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)" - "Best run as a local_action in your playbook" requirements: - bigsuds options: server: description: - BIG-IP host required: true default: null choices: [] aliases: [] user: description: - BIG-IP username required: true default: null choices: [] aliases: [] password: description: - BIG-IP password required: true default: null choices: [] aliases: [] state: description: - Pool member state required: true default: present choices: ['present', 'absent'] aliases: [] partition: description: - Partition required: false default: 'Common' choices: [] aliases: [] name: description: - "Node name" required: false default: null choices: [] host: description: - "Node IP. Required when state=present and node does not exist. Error when state=absent." required: true default: null choices: [] aliases: ['address', 'ip'] description: description: - "Node description." required: false default: null choices: [] ''' EXAMPLES = ''' ## playbook task examples: --- # file bigip-test.yml # ... - hosts: bigip-test tasks: - name: Add node local_action: > bigip_node server=lb.mydomain.com user=admin password=mysecret state=present partition=matthite host="{{ ansible_default_ipv4["address"] }}" name="{{ ansible_default_ipv4["address"] }}" # Note that the BIG-IP automatically names the node using the # IP address specified in previous play's host parameter. # Future plays referencing this node no longer use the host # parameter but instead use the name parameter. # Alternatively, you could have specified a name with the # name parameter when state=present. 
- name: Modify node description local_action: > bigip_node server=lb.mydomain.com user=admin password=mysecret state=present partition=matthite name="{{ ansible_default_ipv4["address"] }}" description="Our best server yet" - name: Delete node local_action: > bigip_node server=lb.mydomain.com user=admin password=mysecret state=absent partition=matthite name="{{ ansible_default_ipv4["address"] }}" ''' try: import bigsuds except ImportError: bigsuds_found = False else: bigsuds_found = True # ========================== # bigip_node module specific # def bigip_api(bigip, user, password): api = bigsuds.BIGIP(hostname=bigip, username=user, password=password) return api def node_exists(api, address): # hack to determine if node exists result = False try: api.LocalLB.NodeAddressV2.get_object_status(nodes=[address]) result = True except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def create_node_address(api, address, name): try: api.LocalLB.NodeAddressV2.create(nodes=[name], addresses=[address], limits=[0]) result = True desc = "" except bigsuds.OperationFailed, e: if "already exists" in str(e): result = False desc = "referenced name or IP already in use" else: # genuine exception raise return (result, desc) def get_node_address(api, name): return api.LocalLB.NodeAddressV2.get_address(nodes=[name])[0] def delete_node_address(api, address): try: api.LocalLB.NodeAddressV2.delete_node_address(nodes=[address]) result = True desc = "" except bigsuds.OperationFailed, e: if "is referenced by a member of pool" in str(e): result = False desc = "node referenced by pool" else: # genuine exception raise return (result, desc) def set_node_description(api, name, description): api.LocalLB.NodeAddressV2.set_description(nodes=[name], descriptions=[description]) def get_node_description(api, name): return api.LocalLB.NodeAddressV2.get_description(nodes=[name])[0] def main(): module = AnsibleModule( argument_spec = dict( server = dict(type='str', required=True), user = dict(type='str', required=True), password = dict(type='str', required=True), state = dict(type='str', default='present', choices=['present', 'absent']), partition = dict(type='str', default='Common'), name = dict(type='str', required=True), host = dict(type='str', aliases=['address', 'ip']), description = dict(type='str') ), supports_check_mode=True ) if not bigsuds_found: module.fail_json(msg="the python bigsuds module is required") server = module.params['server'] user = module.params['user'] password = module.params['password'] state = module.params['state'] partition = module.params['partition'] host = module.params['host'] name = module.params['name'] address = "/%s/%s" % (partition, name) description = module.params['description'] if state == 'absent' and host is not None: module.fail_json(msg="host parameter invalid when state=absent") try: api = bigip_api(server, user, password) result = {'changed': False} # default if state == 'absent': if node_exists(api, address): if not module.check_mode: deleted, desc = delete_node_address(api, address) if not deleted: module.fail_json(msg="unable to delete: %s" % desc) else: result = {'changed': True} else: # check-mode return value result = {'changed': True} elif state == 'present': if not node_exists(api, address): if host is None: module.fail_json(msg="host parameter required when " \ "state=present and node does not exist") if not module.check_mode: created, desc = create_node_address(api, address=host, name=address) if not 
created: module.fail_json(msg="unable to create: %s" % desc) else: result = {'changed': True} if description is not None: set_node_description(api, address, description) result = {'changed': True} else: # check-mode return value result = {'changed': True} else: # node exists -- potentially modify attributes if host is not None: if get_node_address(api, address) != host: module.fail_json(msg="Changing the node address is " \ "not supported by the API; " \ "delete and recreate the node.") if description is not None: if get_node_description(api, address) != description: if not module.check_mode: set_node_description(api, address, description) result = {'changed': True} except Exception, e: module.fail_json(msg="received exception: %s" % e) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/bigip_pool0000664000000000000000000004500112316627017021303 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Matt Hite # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: bigip_pool short_description: "Manages F5 BIG-IP LTM pools" description: - "Manages F5 BIG-IP LTM pools via iControl SOAP API" version_added: "1.2" author: Matt Hite notes: - "Requires BIG-IP software version >= 11" - "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)" - "Best run as a local_action in your playbook" requirements: - bigsuds options: server: description: - BIG-IP host required: true default: null choices: [] aliases: [] user: description: - BIG-IP username required: true default: null choices: [] aliases: [] password: description: - BIG-IP password required: true default: null choices: [] aliases: [] state: description: - Pool/pool member state required: false default: present choices: ['present', 'absent'] aliases: [] name: description: - Pool name required: true default: null choices: [] aliases: ['pool'] partition: description: - Partition of pool/pool member required: false default: 'Common' choices: [] aliases: [] lb_method: description: - Load balancing method version_added: "1.3" required: False default: 'round_robin' choices: ['round_robin', 'ratio_member', 'least_connection_member', 'observed_member', 'predictive_member', 'ratio_node_address', 'least_connection_node_address', 'fastest_node_address', 'observed_node_address', 'predictive_node_address', 'dynamic_ratio', 'fastest_app_response', 'least_sessions', 'dynamic_ratio_member', 'l3_addr', 'unknown', 'weighted_least_connection_member', 'weighted_least_connection_node_address', 'ratio_session', 'ratio_least_connection_member', 'ratio_least_connection_node_address'] aliases: [] monitor_type: description: - Monitor rule type when monitors > 1 version_added: "1.3" required: False default: null choices: ['and_list', 'm_of_n'] aliases: [] quorum: description: - Monitor quorum value when monitor_type is m_of_n version_added: "1.3" required: False 
default: null choices: [] aliases: [] monitors: description: - Monitor template name list. Always use the full path to the monitor. version_added: "1.3" required: False default: null choices: [] aliases: [] slow_ramp_time: description: - Sets the ramp-up time (in seconds) to gradually ramp up the load on newly added or freshly detected up pool members version_added: "1.3" required: False default: null choices: [] aliases: [] service_down_action: description: - Sets the action to take when node goes down in pool version_added: "1.3" required: False default: null choices: ['none', 'reset', 'drop', 'reselect'] aliases: [] host: description: - "Pool member IP" required: False default: null choices: [] aliases: ['address'] port: description: - "Pool member port" required: False default: null choices: [] aliases: [] ''' EXAMPLES = ''' ## playbook task examples: --- # file bigip-test.yml # ... - hosts: localhost tasks: - name: Create pool local_action: > bigip_pool server=lb.mydomain.com user=admin password=mysecret state=present name=matthite-pool partition=matthite lb_method=least_connection_member slow_ramp_time=120 - name: Modify load balancer method local_action: > bigip_pool server=lb.mydomain.com user=admin password=mysecret state=present name=matthite-pool partition=matthite lb_method=round_robin - hosts: bigip-test tasks: - name: Add pool member local_action: > bigip_pool server=lb.mydomain.com user=admin password=mysecret state=present name=matthite-pool partition=matthite host="{{ ansible_default_ipv4["address"] }}" port=80 - name: Remove pool member from pool local_action: > bigip_pool server=lb.mydomain.com user=admin password=mysecret state=absent name=matthite-pool partition=matthite host="{{ ansible_default_ipv4["address"] }}" port=80 - hosts: localhost tasks: - name: Delete pool local_action: > bigip_pool server=lb.mydomain.com user=admin password=mysecret state=absent name=matthite-pool partition=matthite ''' try: import bigsuds except ImportError: bigsuds_found = False else: bigsuds_found = True # =========================================== # bigip_pool module specific support methods. 
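# ===========================================
# editor's note: a sketch of the monitor_type / quorum / monitors
# rules that main() below enforces -- one monitor implies the
# 'single' rule type with quorum 0, several monitors require an
# explicit monitor_type, and m_of_n additionally requires a quorum.
# hypothetical helper, shown only to make the validation explicit.

def normalize_monitor_rule(monitors, monitor_type, quorum):
    if not monitors:
        if monitor_type or quorum is not None:
            raise ValueError("monitor_type and quorum require monitors")
        return None
    if len(monitors) == 1:
        return ('single', 0, monitors)
    if not monitor_type:
        raise ValueError("monitor_type required for monitors > 1")
    if monitor_type == 'm_of_n' and not quorum:
        raise ValueError("quorum value required for monitor_type m_of_n")
    if monitor_type != 'm_of_n':
        quorum = 0
    return (monitor_type, quorum, monitors)
# ===========================================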
# def bigip_api(bigip, user, password): api = bigsuds.BIGIP(hostname=bigip, username=user, password=password) return api def pool_exists(api, pool): # hack to determine if pool exists result = False try: api.LocalLB.Pool.get_object_status(pool_names=[pool]) result = True except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def create_pool(api, pool, lb_method): # create requires lb_method but we don't want to default # to a value on subsequent runs if not lb_method: lb_method = 'round_robin' lb_method = "LB_METHOD_%s" % lb_method.strip().upper() api.LocalLB.Pool.create_v2(pool_names=[pool], lb_methods=[lb_method], members=[[]]) def remove_pool(api, pool): api.LocalLB.Pool.delete_pool(pool_names=[pool]) def get_lb_method(api, pool): lb_method = api.LocalLB.Pool.get_lb_method(pool_names=[pool])[0] lb_method = lb_method.strip().replace('LB_METHOD_', '').lower() return lb_method def set_lb_method(api, pool, lb_method): lb_method = "LB_METHOD_%s" % lb_method.strip().upper() api.LocalLB.Pool.set_lb_method(pool_names=[pool], lb_methods=[lb_method]) def get_monitors(api, pool): result = api.LocalLB.Pool.get_monitor_association(pool_names=[pool])[0]['monitor_rule'] monitor_type = result['type'].split("MONITOR_RULE_TYPE_")[-1].lower() quorum = result['quorum'] monitor_templates = result['monitor_templates'] return (monitor_type, quorum, monitor_templates) def set_monitors(api, pool, monitor_type, quorum, monitor_templates): monitor_type = "MONITOR_RULE_TYPE_%s" % monitor_type.strip().upper() monitor_rule = {'type': monitor_type, 'quorum': quorum, 'monitor_templates': monitor_templates} monitor_association = {'pool_name': pool, 'monitor_rule': monitor_rule} api.LocalLB.Pool.set_monitor_association(monitor_associations=[monitor_association]) def get_slow_ramp_time(api, pool): result = api.LocalLB.Pool.get_slow_ramp_time(pool_names=[pool])[0] return result def set_slow_ramp_time(api, pool, seconds): api.LocalLB.Pool.set_slow_ramp_time(pool_names=[pool], values=[seconds]) def get_action_on_service_down(api, pool): result = api.LocalLB.Pool.get_action_on_service_down(pool_names=[pool])[0] result = result.split("SERVICE_DOWN_ACTION_")[-1].lower() return result def set_action_on_service_down(api, pool, action): action = "SERVICE_DOWN_ACTION_%s" % action.strip().upper() api.LocalLB.Pool.set_action_on_service_down(pool_names=[pool], actions=[action]) def member_exists(api, pool, address, port): # hack to determine if member exists result = False try: members = [{'address': address, 'port': port}] api.LocalLB.Pool.get_member_object_status(pool_names=[pool], members=[members]) result = True except bigsuds.OperationFailed, e: if "was not found" in str(e): result = False else: # genuine exception raise return result def delete_node_address(api, address): result = False try: api.LocalLB.NodeAddressV2.delete_node_address(nodes=[address]) result = True except bigsuds.OperationFailed, e: if "is referenced by a member of pool" in str(e): result = False else: # genuine exception raise return result def remove_pool_member(api, pool, address, port): members = [{'address': address, 'port': port}] api.LocalLB.Pool.remove_member_v2(pool_names=[pool], members=[members]) def add_pool_member(api, pool, address, port): members = [{'address': address, 'port': port}] api.LocalLB.Pool.add_member_v2(pool_names=[pool], members=[members]) def main(): lb_method_choices = ['round_robin', 'ratio_member', 'least_connection_member', 'observed_member', 
'predictive_member', 'ratio_node_address', 'least_connection_node_address', 'fastest_node_address', 'observed_node_address', 'predictive_node_address', 'dynamic_ratio', 'fastest_app_response', 'least_sessions', 'dynamic_ratio_member', 'l3_addr', 'unknown', 'weighted_least_connection_member', 'weighted_least_connection_node_address', 'ratio_session', 'ratio_least_connection_member', 'ratio_least_connection_node_address'] monitor_type_choices = ['and_list', 'm_of_n'] service_down_choices = ['none', 'reset', 'drop', 'reselect'] module = AnsibleModule( argument_spec = dict( server = dict(type='str', required=True), user = dict(type='str', required=True), password = dict(type='str', required=True), state = dict(type='str', default='present', choices=['present', 'absent']), name = dict(type='str', required=True, aliases=['pool']), partition = dict(type='str', default='Common'), lb_method = dict(type='str', choices=lb_method_choices), monitor_type = dict(type='str', choices=monitor_type_choices), quorum = dict(type='int'), monitors = dict(type='list'), slow_ramp_time = dict(type='int'), service_down_action = dict(type='str', choices=service_down_choices), host = dict(type='str', aliases=['address']), port = dict(type='int') ), supports_check_mode=True ) if not bigsuds_found: module.fail_json(msg="the python bigsuds module is required") server = module.params['server'] user = module.params['user'] password = module.params['password'] state = module.params['state'] name = module.params['name'] partition = module.params['partition'] pool = "/%s/%s" % (partition, name) lb_method = module.params['lb_method'] if lb_method: lb_method = lb_method.lower() monitor_type = module.params['monitor_type'] if monitor_type: monitor_type = monitor_type.lower() quorum = module.params['quorum'] monitors = module.params['monitors'] if monitors: monitors = [] for monitor in module.params['monitors']: if "/" not in monitor: monitors.append("/%s/%s" % (partition, monitor)) else: monitors.append(monitor) slow_ramp_time = module.params['slow_ramp_time'] service_down_action = module.params['service_down_action'] if service_down_action: service_down_action = service_down_action.lower() host = module.params['host'] address = "/%s/%s" % (partition, host) port = module.params['port'] # sanity check user supplied values if (host and not port) or (port and not host): module.fail_json(msg="both host and port must be supplied") if 1 > port > 65535: module.fail_json(msg="valid ports must be in range 1 - 65535") if monitors: if len(monitors) == 1: # set default required values for single monitor quorum = 0 monitor_type = 'single' elif len(monitors) > 1: if not monitor_type: module.fail_json(msg="monitor_type required for monitors > 1") if monitor_type == 'm_of_n' and not quorum: module.fail_json(msg="quorum value required for monitor_type m_of_n") if monitor_type != 'm_of_n': quorum = 0 elif monitor_type: # no monitors specified but monitor_type exists module.fail_json(msg="monitor_type require monitors parameter") elif quorum is not None: # no monitors specified but quorum exists module.fail_json(msg="quorum requires monitors parameter") try: api = bigip_api(server, user, password) result = {'changed': False} # default if state == 'absent': if host and port and pool: # member removal takes precedent if pool_exists(api, pool) and member_exists(api, pool, address, port): if not module.check_mode: remove_pool_member(api, pool, address, port) deleted = delete_node_address(api, address) result = {'changed': True, 'deleted': deleted} 
else: result = {'changed': True} elif pool_exists(api, pool): # no host/port supplied, must be pool removal if not module.check_mode: # hack to handle concurrent runs of module # pool might be gone before we actually remove it try: remove_pool(api, pool) result = {'changed': True} except bigsuds.OperationFailed, e: if "was not found" in str(e): result = {'changed': False} else: # genuine exception raise else: # check-mode return value result = {'changed': True} elif state == 'present': update = False if not pool_exists(api, pool): # pool does not exist -- need to create it if not module.check_mode: # a bit of a hack to handle concurrent runs of this module. # even though we've checked the pool doesn't exist, # it may exist by the time we run create_pool(). # this catches the exception and does something smart # about it! try: create_pool(api, pool, lb_method) result = {'changed': True} except bigsuds.OperationFailed, e: if "already exists" in str(e): update = True else: # genuine exception raise else: if monitors: set_monitors(api, pool, monitor_type, quorum, monitors) if slow_ramp_time: set_slow_ramp_time(api, pool, slow_ramp_time) if service_down_action: set_action_on_service_down(api, pool, service_down_action) if host and port: add_pool_member(api, pool, address, port) else: # check-mode return value result = {'changed': True} else: # pool exists -- potentially modify attributes update = True if update: if lb_method and lb_method != get_lb_method(api, pool): if not module.check_mode: set_lb_method(api, pool, lb_method) result = {'changed': True} if monitors: t_monitor_type, t_quorum, t_monitor_templates = get_monitors(api, pool) if (t_monitor_type != monitor_type) or (t_quorum != quorum) or (set(t_monitor_templates) != set(monitors)): if not module.check_mode: set_monitors(api, pool, monitor_type, quorum, monitors) result = {'changed': True} if slow_ramp_time and slow_ramp_time != get_slow_ramp_time(api, pool): if not module.check_mode: set_slow_ramp_time(api, pool, slow_ramp_time) result = {'changed': True} if service_down_action and service_down_action != get_action_on_service_down(api, pool): if not module.check_mode: set_action_on_service_down(api, pool, service_down_action) result = {'changed': True} if (host and port) and not member_exists(api, pool, address, port): if not module.check_mode: add_pool_member(api, pool, address, port) result = {'changed': True} except Exception, e: module.fail_json(msg="received exception: %s" % e) module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/arista_vlan0000664000000000000000000002444712316627017021476 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright (C) 2013, Arista Networks # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
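# ===========================================
# editor's note: bigip_pool above round-trips load-balancing method
# names between the module's lowercase values and the iControl
# LB_METHOD_* constants. a self-contained sketch of that mapping:

def to_icontrol(lb_method):
    return "LB_METHOD_%s" % lb_method.strip().upper()

def from_icontrol(constant):
    return constant.strip().replace('LB_METHOD_', '').lower()

assert to_icontrol('round_robin') == 'LB_METHOD_ROUND_ROBIN'
assert from_icontrol('LB_METHOD_LEAST_CONNECTION_MEMBER') == 'least_connection_member'
# ===========================================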
# DOCUMENTATION = ''' --- module: arista_vlan version_added: "1.3" author: Peter Sprygada short_description: Manage VLAN resources requirements: - Arista EOS 4.10 - Netdev extension for EOS description: - Manage VLAN resources on Arista EOS network devices. This module requires the Netdev EOS extension to be installed in EOS. For detailed instructions for installing and using the Netdev module please see [link] options: vlan_id: description: - the vlan id required: true state: description: - describe the desired state of the vlan related to the config required: false default: 'present' choices: [ 'present', 'absent' ] logging: description: - enables or disables the syslog facility for this module required: false choices: [ 'true', 'false', 'yes', 'no' ] name: description: - a descriptive name for the vlan required: false notes: - Requires EOS 4.10 or later - The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI) - See http://eos.aristanetworks.com for details ''' EXAMPLES = ''' Example playbook entries using the arista_vlan module to manage resource state. tasks: - name: create vlan 999 action: arista_vlan vlan_id=999 logging=true - name: create / edit vlan 999 action: arista_vlan vlan_id=999 name=test logging=true - name: remove vlan 999 action: arista_vlan vlan_id=999 state=absent logging=true ''' import syslog import json class AristaVlan(object): """ This is the base class for managing VLAN resources in EOS network devices. This class provides basic CRUD functions for VLAN resources. This class acts as a wrapper around the netdev extension in EOS. You must have the netdev extension installed in order for this module to work properly. The following commands are implemented in this module: * netdev vlan create * netdev vlan list * netdev vlan show * netdev vlan edit * netdev vlan delete """ attributes = ['name'] def __init__(self, module): self.module = module self.vlan_id = module.params['vlan_id'] self.name = module.params['name'] self.state = module.params['state'] self.logging = module.boolean(module.params['logging']) @property def changed(self): """ The changed property provides a boolean response if the currently loaded resouces has changed from the resource running in EOS. Returns True if the object is not in sync Returns False if the object is in sync. """ return len(self.updates()) > 0 def log(self, entry): """ This method is responsible for sending log messages to the local syslog. """ if self.logging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_INFO, entry) def run_command(self, cmd): """ Calls the Ansible module run_command method. This method will also send a message to syslog with the command name """ self.log("Command: %s" % cmd) return self.module.run_command(cmd.split()) def delete(self): """ Deletes an existing VLAN resource from the current running configuration. A nonexistent VLAN will return successful for this operation. This method implements the following commands: * netdev vlan delete {vlan_id} Returns nothing if the delete was successful Returns error message if there was a problem deleting the vlan """ cmd = "netdev vlan delete %s" % self.vlan_id (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = resp['status'] err = resp['message'] out = None return (rc, out, err) def create(self): """ Creates a VLAN resource in the current running configuration. 
If the VLAN already exists, the function will return successfully. This function implements the following commands: * netdev vlan create {vlan_id} [--name ] Returns the VLAN resource if the create function was successful Returns an error message if there was a problem creating the vlan """ attribs = [] for attrib in self.attributes: if getattr(self, attrib): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) cmd = "netdev vlan create %s " % self.vlan_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 201: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) def update(self): """ Updates an existing VLAN resource in the current running configuration. If the VLAN resource does not exist, this method will return an error. This method implements the following commands: * netdev vlan edit {vlan_id} [--name ] Returns an updated VLAN resource if the update method was successful """ attribs = list() for attrib in self.updates(): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) if attribs: cmd = "netdev vlan edit %s " % self.vlan_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) return (0, None, "No attributes have been modified") def updates(self): """ This method will check the current VLAN resource in the running configuration and return a list of attributes that are not in sync with the current resource from the running configuration. """ obj = self.get() update = lambda a, z: a != z updates = list() for attrib in self.attributes: value = getattr(self, attrib) if update(obj[attrib], value) and value is not None: updates.append(attrib) self.log("updates: %s" % updates) return updates def exists(self): """ Returns True if the current VLAN resource exists and returns False if it does not. This method only checks for the existence of the VLAN ID. """ (rc, out, err) = self.run_command("netdev vlan list") collection = json.loads(out) return str(self.vlan_id) in collection.get('result') def get(self): """ This method will return a dictionary with the attributes of the VLAN resource identified in vlan_id.
The VLAN resource has the following stucture: { "vlan_id": , "name": } If the VLAN ID specified by vlan_id does not exist in the system, this method will return None """ cmd = "netdev vlan show %s" % self.vlan_id (rc, out, err) = self.run_command(cmd) obj = json.loads(out) if obj.get('status') != 200: return None return obj['result'] def main(): module = AnsibleModule( argument_spec = dict( vlan_id=dict(default=None, required=True, type='int'), name=dict(default=None, type='str'), state=dict(default='present', choices=['present', 'absent']), logging=dict(default=False, type='bool') ), supports_check_mode = True ) obj = AristaVlan(module) rc = None result = dict() if obj.state == 'absent': if obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.delete() if rc !=0: module.fail_json(msg=err, rc=rc) elif obj.state == 'present': if not obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.create() result['results'] = out else: if obj.changed: if module.check_mode: module.exit_json(changed=obj.changed) (rc, out, err) = obj.update() result['results'] = out if rc is not None and rc != 0: module.fail_json(msg=err, rc=rc) if rc is None: result['changed'] = False else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/arista_l2interface0000664000000000000000000003004112316627017022717 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright (C) 2013, Arista Networks # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
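# ===========================================
# editor's note: the netdev CLI calls wrapped by the arista_* modules
# all answer with a JSON document carrying a 'status' code (200 for
# reads/edits/deletes, 201 for creates), an error 'message', and a
# 'result' payload. a sketch of the shared unwrapping logic (the
# function name is illustrative):
import json

def unwrap_netdev(out, ok_status=200):
    # return (rc, result, err) from a raw netdev JSON response
    resp = json.loads(out)
    if resp.get('status') != ok_status:
        return (int(resp['status']), None, resp.get('message'))
    return (0, resp.get('result'), None)

# e.g. unwrap_netdev('{"status": 200, "result": {"vlan_id": 999}}')
#      returns (0, {u'vlan_id': 999}, None)
# ===========================================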
# DOCUMENTATION = ''' --- module: arista_l2interface version_added: "1.2" author: Peter Sprygada short_description: Manage layer 2 interfaces requirements: - Arista EOS 4.10 - Netdev extension for EOS description: - Manage layer 2 interface resources on Arista EOS network devices options: interface_id: description: - the full name of the interface required: true state: description: - describe the desired state of the interface related to the config required: false default: 'present' choices: [ 'present', 'absent' ] logging: description: - enables or disables the syslog facility for this module required: false default: false choices: [ 'true', 'false', 'yes', 'no' ] vlan_tagging: description: - specifies whether or not vlan tagging should be enabled for this interface required: false default: true choices: [ 'enable', 'disable' ] tagged_vlans: description: - specifies the list of vlans that should be allowed to transit this interface required: false untagged_vlan: description: - specifies the vlan that untagged traffic should be placed in for transit across a vlan tagged link required: false default: 'default' notes: - Requires EOS 4.10 or later - The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI) - See http://eos.aristanetworks.com for details ''' EXAMPLES = ''' Example playbook entries using the arista_l2interface module to manage resource state. Note that interface names must be the full interface name not shortcut names (ie Ethernet, not Et1) tasks: - name: create switchport ethernet1 access port action: arista_l2interface interface_id=Ethernet1 logging=true - name: create switchport ethernet2 trunk port action: arista_l2interface interface_id=Ethernet2 vlan_tagging=enable logging=true - name: add vlans to red and blue switchport ethernet2 action: arista_l2interface interface_id=Ethernet2 tagged_vlans=red,blue logging=true - name: set untagged vlan for Ethernet1 action: arista_l2interface interface_id=Ethernet1 untagged_vlan=red logging=true - name: convert access to trunk action: arista_l2interface interface_id=Ethernet1 vlan_tagging=enable tagged_vlans=red,blue logging=true - name: convert trunk to access action: arista_l2interface interface_id=Ethernet2 vlan_tagging=disable untagged_vlan=blue logging=true - name: delete switchport ethernet1 action: arista_l2interface interface_id=Ethernet1 state=absent logging=true ''' import syslog import json class AristaL2Interface(object): """ This is the base class managing layer 2 interfaces (switchport) resources in Arista EOS network devices. This class provides an implementation for creating, updating and deleting layer 2 interfaces. Note: The netdev extension for EOS must be installed in order of this module to work properly. The following commands are implemented in this module: * netdev l2interface list * netdev l2interface show * netdev l2interface edit * netdev l2interface delete """ attributes= ['vlan_tagging', 'tagged_vlans', 'untagged_vlan'] def __init__(self, module): self.module = module self.interface_id = module.params['interface_id'] self.state = module.params['state'] self.vlan_tagging = module.params['vlan_tagging'] self.tagged_vlans = module.params['tagged_vlans'] self.untagged_vlan = module.params['untagged_vlan'] self.logging = module.params['logging'] @property def changed(self): """ The changed property provides a boolean response if the currently loaded resouces has changed from the resource running in EOS. 
Returns True if the object is not in sync Returns False if the object is in sync. """ return len(self.updates()) > 0 def log(self, entry): """ This method is responsible for sending log messages to the local syslog. """ if self.logging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, entry) def run_command(self, cmd): """ Calls the Ansible module run_command method. This method will directly return the results of the run_command method """ self.log("Command: %s" % cmd) return self.module.run_command(cmd.split()) def get(self): """ This method will return a dictionary with the attributes of the layer 2 interface resource specified in interface_id. The layer 2 interface resource has the following stucture: { "interface_id": , "vlan_tagging": [enable* | disable], "tagged_vlans": , "untagged_vlan": } If the layer 2 interface specified by interface_id does not exist in the system, this method will return None. """ cmd = "netdev l2interface show %s" % self.interface_id (rc, out, err) = self.run_command(cmd) obj = json.loads(out) if obj.get('status') != 200: return None return obj['result'] def create(self): """ Creates a layer 2 interface resource in the current running configuration. If the layer 2 interface already exists, the function will return successfully. This function implements the following commands: * netdev l2interface create {interface_id} [attributes] Returns the layer 2 interface resource if the create method was successful Returns an error message if there as a problem creating the layer 2 interface """ attribs = [] for attrib in self.attributes: if getattr(self, attrib): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) cmd = "netdev l2interface create %s " % self.interface_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 201: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) def update(self): """ Updates an existing VLAN resource in the current running configuration. If the VLAN resource does not exist, this method will return an error. This method implements the following commands: * netdev l2interface edit {interface_id} [attributes] Returns an updated layer 2 interafce resoure if the update method was successful """ attribs = list() for attrib in self.updates(): attribs.append("--%s" % attrib) attribs.append(getattr(self, attrib)) cmd = "netdev l2interface edit %s " % self.interface_id cmd += " ".join(attribs) (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = int(resp['status']) err = resp['message'] out = None else: out = resp['result'] return (rc, out, err) return (0, None, "No attributes have been modified") def delete(self): """ Deletes an existing layer 2 interface resource from the current running configuration. A nonexistent layer 2 interface will return successful for this operation. 
This method implements the following commands: * netdev l2interface delete {interface_id} Returns nothing if the delete was successful Returns error message if there was a problem deleting the resource """ cmd = "netdev l2interface delete %s" % self.interface_id (rc, out, err) = self.run_command(cmd) resp = json.loads(out) if resp.get('status') != 200: rc = resp['status'] err = resp['message'] out = None return (rc, out, err) def updates(self): """ This method will check the current layer 2 interface resource in the running configuration and return a list of attributes that are not in sync with the current resource. """ obj = self.get() update = lambda a, z: a != z updates = list() for attrib in self.attributes: value = getattr(self, attrib) if update(obj[attrib], value) and value is not None: updates.append(attrib) self.log("Updates: %s" % updates) return updates def exists(self): """ Returns True if the current layer 2 interface resource exists and returns False if it does not. This method only checks for the existence of the interface as specified in interface_id. """ (rc, out, err) = self.run_command("netdev l2interface list") collection = json.loads(out) return self.interface_id in collection.get('result') def main(): module = AnsibleModule( argument_spec = dict( interface_id=dict(default=None, type='str'), state=dict(default='present', choices=['present', 'absent'], type='str'), vlan_tagging=dict(default=None, choices=['enable', 'disable']), tagged_vlans=dict(default=None, type='str'), untagged_vlan=dict(default=None, type='str'), logging=dict(default=False, type='bool') ), supports_check_mode = True ) obj = AristaL2Interface(module) rc = None result = dict() if obj.state == 'absent': if obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.delete() if rc !=0: module.fail_json(msg=err, rc=rc) elif obj.state == 'present': if not obj.exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.create() result['results'] = out else: if obj.changed: if module.check_mode: module.exit_json(changed=obj.changed) (rc, out, err) = obj.update() result['results'] = out if rc is not None and rc != 0: module.fail_json(msg=err, rc=rc) if rc is None: result['changed'] = False else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/net_infrastructure/netscaler0000664000000000000000000001167512316627017021152 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- """ Ansible module to manage Citrix NetScaler entities (c) 2013, Nandor Sivok This file is part of Ansible Ansible is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Ansible is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Ansible. If not, see . """ DOCUMENTATION = ''' --- module: netscaler version_added: "1.1" short_description: Manages Citrix NetScaler entities description: - Manages Citrix NetScaler server and service entities. 
options: nsc_host: description: - hostname or ip of your netscaler required: true default: null aliases: [] nsc_protocol: description: - protocol used to access netscaler required: false default: https aliases: [] user: description: - username required: true default: null aliases: [] password: description: - password required: true default: null aliases: [] action: description: - the action you want to perform on the entity required: false default: enable choices: ["enable", "disable"] aliases: [] name: description: - name of the entity required: true default: hostname aliases: [] type: description: - type of the entity required: false default: server choices: ["server", "service"] aliases: [] validate_certs: description: - If C(no), SSL certificates for the target url will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] requirements: [ "urllib", "urllib2" ] author: Nandor Sivok ''' EXAMPLES = ''' # Disable the server ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass action=disable" # Enable the server ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass action=enable" # Disable the service local:8080 ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass name=local:8080 type=service action=disable" ''' import urllib import json import base64 import socket class netscaler(object): _nitro_base_url = '/nitro/v1/' def __init__(self, module): self.module = module def http_request(self, api_endpoint, data_json={}): request_url = self._nsc_protocol + '://' + self._nsc_host + self._nitro_base_url + api_endpoint data_json = urllib.urlencode(data_json) if not len(data_json): data_json = None auth = base64.encodestring('%s:%s' % (self._nsc_user, self._nsc_pass)).replace('\n', '').strip() headers = { 'Authorization': 'Basic %s' % auth, 'Content-Type' : 'application/x-www-form-urlencoded', } response, info = fetch_url(self.module, request_url, data=data_json, headers=headers) return json.loads(response.read()) def prepare_request(self, action): resp = self.http_request( 'config', { "object": { "params": {"action": action}, self._type: {"name": self._name} } } ) return resp def core(module): n = netscaler(module) n._nsc_host = module.params.get('nsc_host') n._nsc_user = module.params.get('user') n._nsc_pass = module.params.get('password') n._nsc_protocol = module.params.get('nsc_protocol') n._name = module.params.get('name') n._type = module.params.get('type') action = module.params.get('action') r = n.prepare_request(action) return r['errorcode'], r def main(): module = AnsibleModule( argument_spec = dict( nsc_host = dict(required=True), nsc_protocol = dict(default='https'), user = dict(required=True), password = dict(required=True), action = dict(default='enable', choices=['enable','disable']), name = dict(default=socket.gethostname()), type = dict(default='server', choices=['service', 'server']), validate_certs=dict(default='yes', type='bool'), ) ) rc = 0 try: rc, result = core(module) except Exception, e: module.fail_json(msg=str(e)) if rc != 0: module.fail_json(rc=rc, msg=result) else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/inventory/0000775000000000000000000000000012316627017015344 5ustar rootrootansible-1.5.4/library/inventory/group_by0000664000000000000000000000123112316627017017112 0ustar 
rootroot# -*- mode: python -*- DOCUMENTATION = ''' --- module: group_by short_description: Create Ansible groups based on facts description: - Use facts to create ad-hoc groups that can be used later in a playbook. version_added: "0.9" options: key: description: - The variables whose values will be used as groups required: true author: Jeroen Hoekx notes: - Spaces in group names are converted to dashes '-'. ''' EXAMPLES = ''' # Create groups based on the machine architecture - group_by: key=machine_{{ ansible_machine }} # Create groups like 'kvm-host' - group_by: key=virt_{{ ansible_virtualization_type }}_{{ ansible_virtualization_role }} ''' ansible-1.5.4/library/inventory/add_host0000664000000000000000000000220712316627017017055 0ustar rootroot# -*- mode: python -*- DOCUMENTATION = ''' --- module: add_host short_description: add a host (and alternatively a group) to the ansible-playbook in-memory inventory description: - Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables so you can define the new hosts more fully. version_added: "0.9" options: name: aliases: [ 'hostname', 'host' ] description: - The hostname/ip of the host to add to the inventory, can include a colon and a port number. required: true groups: aliases: [ 'groupname', 'group' ] description: - The groups to add the hostname to, comma separated. required: false author: Seth Vidal ''' EXAMPLES = ''' # add host to group 'just_created' with variable foo=42 - add_host: name={{ ip_from_ec2 }} groups=just_created foo=42 # add a host with a non-standard port local to your machines - add_host: name={{ new_ip }}:{{ new_port }} # add a host alias that we reach through a tunnel - add_host: hostname={{ new_ip }} ansible_ssh_host={{ inventory_hostname }} ansible_ssh_port={{ new_port }} ''' ansible-1.5.4/library/files/0000775000000000000000000000000012316627017014411 5ustar rootrootansible-1.5.4/library/files/xattr0000664000000000000000000001405112316627017015477 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: xattr version_added: "1.3" short_description: set/retrieve extended attributes description: - Manages filesystem user defined extended attributes, requires that they are enabled on the target filesystem and that the setfattr/getfattr utilities are present. options: name: required: true default: None aliases: ['path'] description: - The full path of the file/object to get the facts of key: required: false default: None description: - The name of a specific Extended attribute key to set/retrieve value: required: false default: None description: - The value to set the named name/key to, it automatically sets the C(state) to 'set' state: required: false default: get choices: [ 'read', 'present', 'all', 'keys', 'absent' ] description: - defines which state you want to do. 
C(read) retrieves the current value for a C(key) (default) C(present) sets C(name) to C(value), default if value is set C(all) dumps all data C(keys) retrieves all keys C(absent) deletes the key follow: required: false default: yes choices: [ 'yes', 'no' ] description: - if yes, dereferences symlinks and sets/gets attributes on symlink target, otherwise acts on symlink itself. author: Brian Coca ''' EXAMPLES = ''' # Obtain the extended attributes of /etc/foo.conf - xattr: name=/etc/foo.conf # Sets the key 'foo' to value 'bar' - xattr: path=/etc/foo.conf key=user.foo value=bar # Removes the key 'foo' - xattr: name=/etc/foo.conf key=user.foo state=absent ''' import operator def get_xattr_keys(module,path,follow): cmd = [ module.get_bin_path('getfattr', True) ] # prevents warning and not sure why it's not default cmd.append('--absolute-names') if not follow: cmd.append('-h') cmd.append(path) return _run_xattr(module,cmd) def get_xattr(module,path,key,follow): cmd = [ module.get_bin_path('getfattr', True) ] # prevents warning and not sure why it's not default cmd.append('--absolute-names') if not follow: cmd.append('-h') if key is None: cmd.append('-d') else: cmd.append('-n %s' % key) cmd.append(path) return _run_xattr(module,cmd,False) def set_xattr(module,path,key,value,follow): cmd = [ module.get_bin_path('setfattr', True) ] if not follow: cmd.append('-h') cmd.append('-n %s' % key) cmd.append('-v %s' % value) cmd.append(path) return _run_xattr(module,cmd) def rm_xattr(module,path,key,follow): cmd = [ module.get_bin_path('setfattr', True) ] if not follow: cmd.append('-h') cmd.append('-x %s' % key) cmd.append(path) return _run_xattr(module,cmd,False) def _run_xattr(module,cmd,check_rc=True): try: (rc, out, err) = module.run_command(' '.join(cmd), check_rc=check_rc) except Exception, e: module.fail_json(msg="%s!" 
% e.strerror) result = {} for line in out.splitlines(): if re.match("^#", line) or line == "": pass elif re.search('=', line): (key, val) = line.split("=", 1) result[key] = val.strip('"') else: result[line] = '' return result def main(): module = AnsibleModule( argument_spec = dict( name = dict(required=True, aliases=['path']), key = dict(required=False, default=None), value = dict(required=False, default=None), state = dict(required=False, default='read', choices=[ 'read', 'present', 'all', 'keys', 'absent' ], type='str'), follow = dict(required=False, type='bool', default=True), ), supports_check_mode=True, ) path = module.params.get('name') key = module.params.get('key') value = module.params.get('value') state = module.params.get('state') follow = module.params.get('follow') if not os.path.exists(path): module.fail_json(msg="path not found or not accessible!") changed=False msg = "" res = {} if key is None and state in ['present','absent']: module.fail_json(msg="%s needs a key parameter" % state) # All xattr must begin in user namespace if key is not None and not re.match('^user\.',key): key = 'user.%s' % key if (state == 'present' or value is not None): current=get_xattr(module,path,key,follow) if current is None or not key in current or value != current[key]: if not module.check_mode: res = set_xattr(module,path,key,value,follow) changed=True res=current msg="%s set to %s" % (key, value) elif state == 'absent': current=get_xattr(module,path,key,follow) if current is not None and key in current: if not module.check_mode: res = rm_xattr(module,path,key,follow) changed=True res=current msg="%s removed" % (key) elif state == 'keys': res=get_xattr_keys(module,path,follow) msg="returning all keys" elif state == 'all': res=get_xattr(module,path,None,follow) msg="dumping all" else: res=get_xattr(module,path,key,follow) msg="returning %s" % key module.exit_json(changed=changed, msg=msg, xattr=res) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/copy0000664000000000000000000002132612316627017015312 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. import os import time DOCUMENTATION = ''' --- module: copy version_added: "historical" short_description: Copies files to remote locations. description: - The M(copy) module copies a file on the local box to remote locations. options: src: description: - Local path to a file to copy to the remote server; can be absolute or relative. If path is a directory, it is copied recursively. In this case, if path ends with "/", only the contents of that directory are copied to the destination. Otherwise, if it does not end with "/", the directory itself with all contents is copied. This behavior is similar to rsync.
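# (illustrative note, not in the original docs: the trailing-slash rule above mirrors rsync, e.g.)
#   copy: src=/srv/conf/ dest=/etc/app    # hypothetical paths; copies the contents of conf/ into /etc/app
#   copy: src=/srv/conf dest=/etc/app     # creates /etc/app/conf and copies the directory itself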
required: false default: null aliases: [] content: version_added: "1.1" description: - When used instead of 'src', sets the contents of a file directly to the specified value. required: false default: null dest: description: - Remote absolute path where the file should be copied to. If src is a directory, this must be a directory too. required: true default: null backup: description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. version_added: "0.7" required: false choices: [ "yes", "no" ] default: "no" force: description: - The default is C(yes), which will replace the remote file when contents are different than the source. If C(no), the file will only be transferred if the destination does not exist. version_added: "1.1" required: false choices: [ "yes", "no" ] default: "yes" aliases: [ "thirsty" ] validate: description: - The validation command to run before copying into place. The path to the file to validate is passed in via '%s' which must be present as in the visudo example below. required: false default: "" version_added: "1.2" directory_mode: description: - When doing a recursive copy set the mode for the directories. If this is not set, the system defaults will be used. required: false version_added: "1.5" others: description: - all arguments accepted by the M(file) module also work here required: false author: Michael DeHaan notes: - The "copy" module's recursive copy facility does not scale to lots (>hundreds) of files. As an alternative, see the M(synchronize) module, which is a wrapper around rsync. ''' EXAMPLES = ''' # Example from Ansible Playbooks - copy: src=/srv/myfiles/foo.conf dest=/etc/foo.conf owner=foo group=foo mode=0644 # Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version - copy: src=/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes # Copy a new "sudoers" file into place, after passing validation with visudo - copy: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s' ''' def split_pre_existing_dir(dirname): ''' Return the first pre-existing directory and a list of the new directories that will be created.
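(An illustrative case, assuming only /srv already exists: split_pre_existing_dir('/srv/a/b') returns ('/srv', ['a', 'b']), i.e. the deepest existing directory plus the components that still have to be created.)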
''' head, tail = os.path.split(dirname) if not os.path.exists(head): (pre_existing_dir, new_directory_list) = split_pre_existing_dir(head) else: return (head, [ tail ]) new_directory_list.append(tail) return (pre_existing_dir, new_directory_list) def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed): ''' Walk the new directories list and make sure that permissions are as we would expect ''' if len(new_directory_list) > 0: working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0)) directory_args['path'] = working_dir changed = module.set_directory_attributes_if_different(directory_args, changed) changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed) return changed def main(): module = AnsibleModule( # not checking because of daisy chain to file module argument_spec = dict( src = dict(required=False), original_basename = dict(required=False), # used to handle 'dest is a directory' via template, a slight hack content = dict(required=False, no_log=True), dest = dict(required=True), backup = dict(default=False, type='bool'), force = dict(default=True, aliases=['thirsty'], type='bool'), validate = dict(required=False, type='str'), directory_mode = dict(required=False) ), add_file_common_args=True, ) src = os.path.expanduser(module.params['src']) dest = os.path.expanduser(module.params['dest']) backup = module.params['backup'] force = module.params['force'] original_basename = module.params.get('original_basename',None) validate = module.params.get('validate',None) if not os.path.exists(src): module.fail_json(msg="Source %s failed to transfer" % (src)) if not os.access(src, os.R_OK): module.fail_json(msg="Source %s not readable" % (src)) md5sum_src = module.md5(src) md5sum_dest = None changed = False # Special handling for recursive copy - create intermediate dirs if original_basename and dest.endswith("/"): dest = os.path.join(dest, original_basename) dirname = os.path.dirname(dest) if not os.path.exists(dirname): (pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname) os.makedirs(dirname) directory_args = module.load_file_common_arguments(module.params) directory_mode = module.params["directory_mode"] if directory_mode is not None: directory_args['mode'] = directory_mode else: directory_args['mode'] = None adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed) if os.path.exists(dest): if not force: module.exit_json(msg="file already exists", src=src, dest=dest, changed=False) if (os.path.isdir(dest)): basename = os.path.basename(src) if original_basename: basename = original_basename dest = os.path.join(dest, basename) if os.access(dest, os.R_OK): md5sum_dest = module.md5(dest) else: if not os.path.exists(os.path.dirname(dest)): module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest))) if not os.access(os.path.dirname(dest), os.W_OK): module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest))) backup_file = None if md5sum_src != md5sum_dest or os.path.islink(dest): try: if backup: if os.path.exists(dest): backup_file = module.backup_local(dest) # allow for conversion from symlink.
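# (added note: presumably the link is unlinked and replaced with an empty regular file here so that the atomic_move() below installs a plain file at dest instead of acting through the old symlink)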
if os.path.islink(dest): os.unlink(dest) open(dest, 'w').close() if validate: (rc,out,err) = module.run_command(validate % src) if rc != 0: module.fail_json(msg="failed to validate: rc:%s error:%s" % (rc,err)) module.atomic_move(src, dest) except IOError: module.fail_json(msg="failed to copy: %s to %s" % (src, dest)) changed = True else: changed = False res_args = dict( dest = dest, src = src, md5sum = md5sum_src, changed = changed ) if backup_file: res_args['backup_file'] = backup_file module.params['dest'] = dest file_args = module.load_file_common_arguments(module.params) res_args['changed'] = module.set_file_attributes_if_different(file_args, res_args['changed']) module.exit_json(**res_args) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/fetch0000664000000000000000000000442412316627017015431 0ustar rootroot# this is a virtual module that is entirely implemented server side DOCUMENTATION = ''' --- module: fetch short_description: Fetches a file from remote nodes description: - This module works like M(copy), but in reverse. It is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be present, so a missing remote file won't be an error unless fail_on_missing is set to 'yes'. version_added: "0.2" options: src: description: - The file on the remote system to fetch. This I(must) be a file, not a directory. Recursive fetching may be supported in a later release. required: true default: null aliases: [] dest: description: - A directory to save the file into. For example, if the I(dest) directory is C(/backup), a I(src) file named C(/etc/profile) on host C(host.example.com) would be saved into C(/backup/host.example.com/etc/profile) required: true default: null fail_on_missing: version_added: "1.1" description: - Makes the task fail when the source file is missing. required: false choices: [ "yes", "no" ] default: "no" validate_md5: version_added: "1.4" description: - Verify that the source and destination md5sums match after the files are fetched. required: false choices: [ "yes", "no" ] default: "yes" flat: version_added: "1.2" description: Allows you to override the default behavior of prepending hostname/path/to/file to the destination. If dest ends with '/', it will use the basename of the source file, similar to the copy module. Obviously this is only handy if the filenames are unique. requirements: [] author: Michael DeHaan ''' EXAMPLES = ''' # Store file into /tmp/fetched/host.example.com/tmp/somefile - fetch: src=/tmp/somefile dest=/tmp/fetched # Specifying a path directly - fetch: src=/tmp/somefile dest=/tmp/prefix-{{ ansible_hostname }} flat=yes # Specifying a destination path - fetch: src=/tmp/uniquefile dest=/tmp/special/ flat=yes # Storing in a path relative to the playbook - fetch: src=/tmp/uniquefile dest=special/prefix-{{ ansible_hostname }} flat=yes ''' ansible-1.5.4/library/files/stat0000664000000000000000000001027512316627017015314 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version.
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: stat version_added: "1.3" short_description: retrieve file or file system status description: - Retrieves facts for a file similar to the linux/unix 'stat' command. options: path: description: - The full path of the file/object to get the facts of required: true default: null aliases: [] follow: description: - Whether to follow symlinks required: false default: no aliases: [] get_md5: description: - Whether to return the md5 sum of the file required: false default: yes aliases: [] author: Bruce Pennypacker ''' EXAMPLES = ''' # Obtain the stats of /etc/foo.conf, and check that the file still belongs # to 'root'. Fail otherwise. - stat: path=/etc/foo.conf register: st - fail: msg="Whoops! file ownership has changed" when: st.stat.pw_name != 'root' # Determine if a path exists and is a directory. Note we need to test # both that p.stat.isdir actually exists, and also that it's set to true. - stat: path=/path/to/something register: p - debug: msg="Path exists and is a directory" when: p.stat.isdir is defined and p.stat.isdir == true # Don't do md5 checksum - stat: path=/path/to/myhugefile get_md5=no ''' import os import sys import errno import stat from stat import * import pwd def main(): module = AnsibleModule( argument_spec = dict( path = dict(required=True), follow = dict(default='no', type='bool'), get_md5 = dict(default='yes', type='bool') ), supports_check_mode = True ) path = module.params.get('path') path = os.path.expanduser(path) follow = module.params.get('follow') get_md5 = module.params.get('get_md5') try: if follow: st = os.stat(path) else: st = os.lstat(path) except OSError, e: if e.errno == errno.ENOENT: d = { 'exists' : False } module.exit_json(changed=False, stat=d) module.fail_json(msg = e.strerror) mode = st.st_mode # back to ansible d = { 'exists' : True, 'mode' : "%04o" % S_IMODE(mode), 'isdir' : S_ISDIR(mode), 'ischr' : S_ISCHR(mode), 'isblk' : S_ISBLK(mode), 'isreg' : S_ISREG(mode), 'isfifo' : S_ISFIFO(mode), 'islnk' : S_ISLNK(mode), 'issock' : S_ISSOCK(mode), 'uid' : st.st_uid, 'gid' : st.st_gid, 'size' : st.st_size, 'inode' : st.st_ino, 'dev' : st.st_dev, 'nlink' : st.st_nlink, 'atime' : st.st_atime, 'mtime' : st.st_mtime, 'ctime' : st.st_ctime, 'wusr' : bool(mode & stat.S_IWUSR), 'rusr' : bool(mode & stat.S_IRUSR), 'xusr' : bool(mode & stat.S_IXUSR), 'wgrp' : bool(mode & stat.S_IWGRP), 'rgrp' : bool(mode & stat.S_IRGRP), 'xgrp' : bool(mode & stat.S_IXGRP), 'woth' : bool(mode & stat.S_IWOTH), 'roth' : bool(mode & stat.S_IROTH), 'xoth' : bool(mode & stat.S_IXOTH), 'isuid' : bool(mode & stat.S_ISUID), 'isgid' : bool(mode & stat.S_ISGID), } if S_ISLNK(mode): d['lnk_source'] = os.path.realpath(path) if S_ISREG(mode) and get_md5: d['md5'] = module.md5(path) try: pw = pwd.getpwuid(st.st_uid) d['pw_name'] = pw.pw_name except: pass module.exit_json(changed=False, stat=d) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/lineinfile0000664000000000000000000003075612316627017016455 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Daniel Hokka Zakrisson # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under
the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. import re import os import tempfile DOCUMENTATION = """ --- module: lineinfile author: Daniel Hokka Zakrisson short_description: Ensure a particular line is in a file, or replace an existing line using a back-referenced regular expression. description: - This module will search a file for a line, and ensure that it is present or absent. - This is primarily useful when you want to change a single line in a file only. For other cases, see the M(copy) or M(template) modules. version_added: "0.7" options: dest: required: true aliases: [ name, destfile ] description: - The file to modify. regexp: required: false description: - The regular expression to look for in every line of the file. For C(state=present), the pattern to replace if found; only the last line found will be replaced. For C(state=absent), the pattern of the line to remove. Uses Python regular expressions; see U(http://docs.python.org/2/library/re.html). state: required: false choices: [ present, absent ] default: "present" aliases: [] description: - Whether the line should be there or not. line: required: false description: - Required for C(state=present). The line to insert/replace into the file. If C(backrefs) is set, may contain backreferences that will get expanded with the C(regexp) capture groups if the regexp matches. The backreferences should be double escaped (see examples). backrefs: required: false default: "no" choices: [ "yes", "no" ] version_added: "1.1" description: - Used with C(state=present). If set, line can contain backreferences (both positional and named) that will get populated if the C(regexp) matches. This flag changes the operation of the module slightly; C(insertbefore) and C(insertafter) will be ignored, and if the C(regexp) doesn't match anywhere in the file, the file will be left unchanged. If the C(regexp) does match, the last matching line will be replaced by the expanded line parameter. insertafter: required: false default: EOF description: - Used with C(state=present). If specified, the line will be inserted after the specified regular expression. A special value is available: C(EOF) for inserting the line at the end of the file. May not be used with C(backrefs). choices: [ 'EOF', '*regex*' ] insertbefore: required: false version_added: "1.1" description: - Used with C(state=present). If specified, the line will be inserted before the specified regular expression. A special value is available: C(BOF) for inserting the line at the beginning of the file. May not be used with C(backrefs). choices: [ 'BOF', '*regex*' ] create: required: false choices: [ "yes", "no" ] default: "no" description: - Used with C(state=present). If specified, the file will be created if it does not already exist. By default it will fail if the file is missing. backup: required: false default: "no" choices: [ "yes", "no" ] description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
validate: required: false description: - validation to run before copying into place default: None version_added: "1.4" others: description: - All arguments accepted by the M(file) module also work here. required: false """ EXAMPLES = r""" - lineinfile: dest=/etc/selinux/config regexp=^SELINUX= line=SELINUX=disabled - lineinfile: dest=/etc/sudoers state=absent regexp="^%wheel" - lineinfile: dest=/etc/hosts regexp='^127\.0\.0\.1' line='127.0.0.1 localhost' owner=root group=root mode=0644 - lineinfile: dest=/etc/httpd/conf/httpd.conf regexp="^Listen " insertafter="^#Listen " line="Listen 8080" - lineinfile: dest=/etc/services regexp="^# port for http" insertbefore="^www.*80/tcp" line="# port for http by default" # Add a line to a file if it does not exist, without passing regexp - lineinfile: dest=/tmp/testfile line="192.168.1.99 foo.lab.net foo" # Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs. - lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'" - lineinfile: dest=/opt/jboss-as/bin/standalone.conf regexp='^(.*)Xms(\d+)m(.*)$' line='\1Xms${xms}m\3' backrefs=yes # Validate the sudoers file before saving - lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s' """ def write_changes(module,lines,dest): tmpfd, tmpfile = tempfile.mkstemp() f = os.fdopen(tmpfd,'wb') f.writelines(lines) f.close() validate = module.params.get('validate', None) valid = not validate if validate: (rc, out, err) = module.run_command(validate % tmpfile) valid = rc == 0 if rc != 0: module.fail_json(msg='failed to validate: ' 'rc:%s error:%s' % (rc,err)) if valid: module.atomic_move(tmpfile, dest) def check_file_attrs(module, changed, message): file_args = module.load_file_common_arguments(module.params) if module.set_file_attributes_if_different(file_args, False): if changed: message += " and " changed = True message += "ownership, perms or SE linux context changed" return message, changed def present(module, dest, regexp, line, insertafter, insertbefore, create, backup, backrefs): if not os.path.exists(dest): if not create: module.fail_json(rc=257, msg='Destination %s does not exist !' % dest) destpath = os.path.dirname(dest) if not os.path.exists(destpath): os.makedirs(destpath) lines = [] else: f = open(dest, 'rb') lines = f.readlines() f.close() msg = "" if regexp is not None: mre = re.compile(regexp) if insertafter not in (None, 'BOF', 'EOF'): insre = re.compile(insertafter) elif insertbefore not in (None, 'BOF'): insre = re.compile(insertbefore) else: insre = None # index[0] is the line num where regexp has been found # index[1] is the line num where insertafter/insertbefore has been found index = [-1, -1] m = None for lineno, cur_line in enumerate(lines): if regexp is not None: match_found = mre.search(cur_line) else: match_found = line == cur_line.rstrip('\r\n') if match_found: index[0] = lineno m = match_found elif insre is not None and insre.search(cur_line): if insertafter: # + 1 for the next line index[1] = lineno + 1 if insertbefore: # + 1 for the previous line index[1] = lineno msg = '' changed = False # Regexp matched a line in the file if index[0] != -1: if backrefs: new_line = m.expand(line) else: # Don't do backref expansion if not asked.
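# (added note, illustrative values: with backrefs=yes, regexp='^(.*)Xms(\d+)m(.*)$' and line='\1Xms1024m\3', m.expand() substitutes the captured groups into the replacement; with backrefs=no the line below is written verbatim)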
new_line = line if lines[index[0]] != new_line + os.linesep: lines[index[0]] = new_line + os.linesep msg = 'line replaced' changed = True elif backrefs: # Do absolutely nothing, since it's not safe generating the line # without the regexp matching to populate the backrefs. pass # Add it to the beginning of the file elif insertbefore == 'BOF' or insertafter == 'BOF': lines.insert(0, line + os.linesep) msg = 'line added' changed = True # Add it to the end of the file if requested or # if insertafter=/insertbefore didn't match anything # (so default behaviour is to add at the end) elif insertafter == 'EOF': lines.append(line + os.linesep) msg = 'line added' changed = True # Do nothing if insert* didn't match elif index[1] == -1: pass # insert* matched, but not the regexp else: lines.insert(index[1], line + os.linesep) msg = 'line added' changed = True backupdest = "" if changed and not module.check_mode: if backup and os.path.exists(dest): backupdest = module.backup_local(dest) write_changes(module, lines, dest) msg, changed = check_file_attrs(module, changed, msg) module.exit_json(changed=changed, msg=msg, backup=backupdest) def absent(module, dest, regexp, line, backup): if not os.path.exists(dest): module.exit_json(changed=False, msg="file not present") msg = "" f = open(dest, 'rb') lines = f.readlines() f.close() if regexp is not None: cre = re.compile(regexp) found = [] def matcher(cur_line): if regexp is not None: match_found = cre.search(cur_line) else: match_found = line == cur_line.rstrip('\r\n') if match_found: found.append(cur_line) return not match_found lines = filter(matcher, lines) changed = len(found) > 0 backupdest = "" if changed and not module.check_mode: if backup: backupdest = module.backup_local(dest) write_changes(module, lines, dest) if changed: msg = "%s line(s) removed" % len(found) msg, changed = check_file_attrs(module, changed, msg) module.exit_json(changed=changed, found=len(found), msg=msg, backup=backupdest) def main(): module = AnsibleModule( argument_spec=dict( dest=dict(required=True, aliases=['name', 'destfile']), state=dict(default='present', choices=['absent', 'present']), regexp=dict(default=None), line=dict(aliases=['value']), insertafter=dict(default=None), insertbefore=dict(default=None), backrefs=dict(default=False, type='bool'), create=dict(default=False, type='bool'), backup=dict(default=False, type='bool'), validate=dict(default=None, type='str'), ), mutually_exclusive=[['insertbefore', 'insertafter']], add_file_common_args=True, supports_check_mode=True ) params = module.params create = module.params['create'] backup = module.params['backup'] backrefs = module.params['backrefs'] dest = os.path.expanduser(params['dest']) if os.path.isdir(dest): module.fail_json(rc=256, msg='Destination %s is a directory !' % dest) if params['state'] == 'present': if backrefs and params['regexp'] is None: module.fail_json(msg='regexp= is required with backrefs=true') if params.get('line', None) is None: module.fail_json(msg='line= is required with state=present') # Deal with the insertafter default value manually, to avoid errors # because of the mutually_exclusive mechanism. ins_bef, ins_aft = params['insertbefore'], params['insertafter'] if ins_bef is None and ins_aft is None: ins_aft = 'EOF' # Replace the newline character with an actual newline. Don't replace # escaped \\n, hence sub and not str.replace. 
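# (added note: e.g. a YAML double-quoted line "alpha\nbeta" arrives here holding a real newline and is written as two physical lines, while a doubly escaped '\\n' survives as a literal backslash-n)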
line = re.sub(r'\n', os.linesep, params['line']) present(module, dest, params['regexp'], line, ins_aft, ins_bef, create, backup, backrefs) else: if params['regexp'] is None and params.get('line', None) is None: module.fail_json(msg='one of line= or regexp= is required with state=absent') absent(module, dest, params['regexp'], params.get('line', None), backup) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/unarchive0000664000000000000000000001574712316627017016326 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # (c) 2013, Dylan Martin # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: unarchive version_added: 1.4 short_description: Copies an archive to a remote location and unpacks it description: - The M(unarchive) module copies an archive file from the local machine to a remote machine and unpacks it. options: src: description: - Local path to archive file to copy to the remote server; can be absolute or relative. required: true default: null dest: description: - Remote absolute path where the archive should be unpacked required: true default: null copy: description: - Should the file be copied from the local to the remote machine? required: false choices: [ "yes", "no" ] default: "yes" author: Dylan Martin todo: - detect changed/unchanged for .zip files - handle common unarchive args, like preserve owner/timestamp etc... notes: - requires C(tar)/C(unzip) command on target host - can handle I(gzip), I(bzip2) and I(xz) compressed as well as uncompressed tar files - detects type of archive automatically - uses tar's C(--diff) arg to calculate if changed or not. If this arg is not supported, it will always unpack the archive - does not detect if a .zip file is different from destination - always unzips - existing files/directories in the destination which are not in the archive are not touched.
This is the same behavior as a normal archive extraction - existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if the archive should be unpacked or not ''' EXAMPLES = ''' # Example from Ansible Playbooks - unarchive: src=foo.tgz dest=/var/lib/foo ''' import os # class to handle .zip files class ZipFile(object): def __init__(self, src, dest, module): self.src = src self.dest = dest self.module = module def is_unarchived(self): return dict(unarchived=False) def unarchive(self): cmd = 'unzip -o "%s" -d "%s"' % (self.src, self.dest) rc, out, err = self.module.run_command(cmd) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): cmd = 'unzip -l "%s"' % self.src rc, out, err = self.module.run_command(cmd) if rc == 0: return True return False # class to handle gzipped tar files class TgzFile(object): def __init__(self, src, dest, module): self.src = src self.dest = dest self.module = module self.zipflag = 'z' def is_unarchived(self): dirof = os.path.dirname(self.dest) destbase = os.path.basename(self.dest) cmd = 'tar -v -C "%s" --diff -%sf "%s"' % (self.dest, self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) unarchived = (rc == 0) return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd) def unarchive(self): cmd = 'tar -C "%s" -x%sf "%s"' % (self.dest, self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) return dict(cmd=cmd, rc=rc, out=out, err=err) def can_handle_archive(self): cmd = 'tar -t%sf "%s"' % (self.zipflag, self.src) rc, out, err = self.module.run_command(cmd) if rc == 0: if len(out.splitlines(True)) > 0: return True return False # class to handle tar files that aren't compressed class TarFile(TgzFile): def __init__(self, src, dest, module): self.src = src self.dest = dest self.module = module self.zipflag = '' # class to handle bzip2 compressed tar files class TarBzip(TgzFile): def __init__(self, src, dest, module): self.src = src self.dest = dest self.module = module self.zipflag = 'j' # class to handle xz compressed tar files class TarXz(TgzFile): def __init__(self, src, dest, module): self.src = src self.dest = dest self.module = module self.zipflag = 'J' # try handlers in order and return the one that works or bail if none work def pick_handler(src, dest, module): handlers = [TgzFile, ZipFile, TarFile, TarBzip, TarXz] for handler in handlers: obj = handler(src, dest, module) if obj.can_handle_archive(): return obj raise RuntimeError('Failed to find handler to unarchive "%s"' % src) def main(): module = AnsibleModule( # not checking because of daisy chain to file module argument_spec = dict( src = dict(required=True), original_basename = dict(required=False), # used to handle 'dest is a directory' via template, a slight hack dest = dict(required=True), copy = dict(default=True, type='bool'), ), add_file_common_args=True, ) src = os.path.expanduser(module.params['src']) dest = os.path.expanduser(module.params['dest']) copy = module.params['copy'] # did tar file arrive? if not os.path.exists(src): if copy: module.fail_json(msg="Source '%s' failed to transfer" % src) else: module.fail_json(msg="Source '%s' does not exist" % src) if not os.access(src, os.R_OK): module.fail_json(msg="Source '%s' not readable" % src) # is dest OK to receive tar file? 
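# (added note: only the parent directory of dest is validated below; dest itself is handed straight to tar -C, which requires it to exist, or unzip -d, which creates it)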
if not os.path.exists(os.path.dirname(dest)): module.fail_json(msg="Destination directory '%s' does not exist" % (os.path.dirname(dest))) if not os.access(os.path.dirname(dest), os.W_OK): module.fail_json(msg="Destination '%s' not writable" % (os.path.dirname(dest))) handler = pick_handler(src, dest, module) res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src) # do we need to do unpack? res_args['check_results'] = handler.is_unarchived() if res_args['check_results']['unarchived']: res_args['changed'] = False module.exit_json(**res_args) # do the unpack try: results = handler.unarchive() except IOError: module.fail_json(msg="failed to unpack %s to %s" % (src, dest)) if results.get('rc', 0) != 0: module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **results) res_args['changed'] = True module.exit_json(**res_args) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/template0000664000000000000000000000563312316627017016156 0ustar rootroot# this is a virtual module that is entirely implemented server side DOCUMENTATION = ''' --- module: template version_added: historical short_description: Templates a file out to a remote server. description: - Templates are processed by the Jinja2 templating language (U(http://jinja.pocoo.org/docs/)) - documentation on the template formatting can be found in the Template Designer Documentation (U(http://jinja.pocoo.org/docs/templates/)). - "Six additional variables can be used in templates: C(ansible_managed) (configurable via the C(defaults) section of C(ansible.cfg)) contains a string which can be used to describe the template name, host, modification time of the template file and the owner uid, C(template_host) contains the node name of the template's machine, C(template_uid) the owner, C(template_path) the absolute path of the template, C(template_fullpath) is the absolute path of the template, and C(template_run_date) is the date that the template was rendered. Note that including a string that uses a date in the template will result in the template being marked 'changed' each time." options: src: description: - Path of a Jinja2 formatted template on the local server. This can be a relative or absolute path. required: true default: null aliases: [] dest: description: - Location to render the template to on the remote machine. required: true default: null backup: description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. required: false choices: [ "yes", "no" ] default: "no" validate: description: - validation to run before copying into place required: false default: "" version_added: "1.2" others: description: - all arguments accepted by the M(file) module also work here, as well as the M(copy) module (except the 'content' parameter). required: false notes: - "Since Ansible version 0.9, templates are loaded with C(trim_blocks=True)." - "Also, you can override jinja2 settings by adding a special header to the template file, i.e. C(#jinja2:variable_start_string:'[%' , variable_end_string:'%]') which changes the variable interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things that look like, but should not be, Jinja2. raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively evaluated."
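# (illustrative sketch of the override header mentioned above, placed on the first line of a hypothetical template:)
#   #jinja2:variable_start_string:'[%' , variable_end_string:'%]'
#   server_name [% inventory_hostname %];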
requirements: [] author: Michael DeHaan ''' EXAMPLES = ''' # Example from Ansible Playbooks - template: src=/mytemplates/foo.j2 dest=/etc/file.conf owner=bin group=wheel mode=0644 # Copy a new "sudoers" file into place, after passing validation with visudo - action: template src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s' ''' ansible-1.5.4/library/files/synchronize0000664000000000000000000002435512316627017016710 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012-2013, Timothy Appnel # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: synchronize version_added: "1.4" short_description: Uses rsync to make synchronizing file paths in your playbooks quick and easy. description: - This is a wrapper around rsync. Of course you could just use the command action to call rsync yourself, but you also have to add a fair number of boilerplate options and host facts. You still may need to call rsync directly via C(command) or C(shell) depending on your use case. The synchronize action is meant to do common things with C(rsync) easily. It does not provide access to the full power of rsync, but does make most invocations easier to follow. options: src: description: - Path on the source machine that will be synchronized to the destination; The path can be absolute or relative. required: true dest: description: - Path on the destination machine that will be synchronized from the source; The path can be absolute or relative. required: true dest_port: description: - Port number for ssh on the destination host. The ansible_ssh_port inventory var takes precedence over this value. default: 22 version_added: "1.5" mode: description: - Specify the direction of the synchronization. In push mode the localhost or delegate is the source; In pull mode the remote host in context is the source. required: false choices: [ 'push', 'pull' ] default: 'push' archive: description: - Mirrors the rsync archive flag, enables recursive, links, perms, times, owner, group flags and -D. choices: [ 'yes', 'no' ] default: 'yes' required: false existing_only: description: - Skip creating new files on receiver. choices: [ 'yes', 'no' ] default: 'no' required: false version_added: "1.5" delete: description: - Delete files that don't exist (after transfer, not before) in the C(src) path. choices: [ 'yes', 'no' ] default: 'no' required: false dirs: description: - Transfer directories without recursing choices: [ 'yes', 'no' ] default: 'no' required: false recursive: description: - Recurse into directories. choices: [ 'yes', 'no' ] default: the value of the archive option required: false links: description: - Copy symlinks as symlinks. choices: [ 'yes', 'no' ] default: the value of the archive option required: false copy_links: description: - When copying symlinks, the item that they point to (the referent) is copied, rather than the symlink. choices: [ 'yes', 'no' ] default: 'no' required: false perms: description: - Preserve permissions.
choices: [ 'yes', 'no' ] default: the value of the archive option required: false times: description: - Preserve modification times choices: [ 'yes', 'no' ] default: the value of the archive option required: false owner: description: - Preserve owner (super user only) choices: [ 'yes', 'no' ] default: the value of the archive option required: false group: description: - Preserve group choices: [ 'yes', 'no' ] default: the value of the archive option required: false rsync_path: description: - Specify the rsync command to run on the remote machine. See C(--rsync-path) on the rsync man page. required: false rsync_timeout: description: - Specify a --timeout for the rsync command in seconds. default: 10 required: false notes: - Inspect the verbose output to validate the destination user/host/path are what was expected. - The remote user for the dest path will always be the remote_user, not the sudo_user. - Expect that dest=~/x will be ~/x even if using sudo. - To exclude files and directories from being synchronized, you may add C(.rsync-filter) files to the source directory. author: Timothy Appnel ''' EXAMPLES = ''' # Synchronization of src on the control machine to dest on the remote hosts synchronize: src=some/relative/path dest=/some/absolute/path # Synchronization without any --archive options enabled synchronize: src=some/relative/path dest=/some/absolute/path archive=no # Synchronization with --archive options enabled except for --recursive synchronize: src=some/relative/path dest=/some/absolute/path recursive=no # Synchronization without --archive options enabled except use --links synchronize: src=some/relative/path dest=/some/absolute/path archive=no links=yes # Synchronization of two paths both on the control machine local_action: synchronize src=some/relative/path dest=/some/absolute/path # Synchronization of src on the inventory host to the dest on the localhost in pull mode synchronize: mode=pull src=some/relative/path dest=/some/absolute/path # Synchronization of src on delegate host to dest on the current inventory host synchronize: > src=some/relative/path dest=/some/absolute/path delegate_to: delegate.host # Synchronize and delete files in dest on the remote host that are not found in src of localhost. 
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes # Synchronize using an alternate rsync command synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync" # Example .rsync-filter file in the source directory - var # exclude any path whose last part is 'var' - /var # exclude any path starting with 'var' starting at the source directory + /var/conf # include /var/conf even though it was previously excluded ''' def main(): module = AnsibleModule( argument_spec = dict( src = dict(required=True), dest = dict(required=True), dest_port = dict(default=22, type='int'), delete = dict(default='no', type='bool'), private_key = dict(default=None), rsync_path = dict(default=None), archive = dict(default='yes', type='bool'), existing_only = dict(default='no', type='bool'), dirs = dict(default='no', type='bool'), recursive = dict(type='bool'), links = dict(type='bool'), copy_links = dict(type='bool'), perms = dict(type='bool'), times = dict(type='bool'), owner = dict(type='bool'), group = dict(type='bool'), rsync_timeout = dict(type='int', default=10) ), supports_check_mode = True ) source = module.params['src'] dest = module.params['dest'] dest_port = module.params['dest_port'] delete = module.params['delete'] private_key = module.params['private_key'] rsync_path = module.params['rsync_path'] rsync = module.params.get('local_rsync_path', 'rsync') rsync_timeout = module.params['rsync_timeout'] archive = module.params['archive'] existing_only = module.params['existing_only'] dirs = module.params['dirs'] # the default of these params depends on the value of archive recursive = module.params['recursive'] links = module.params['links'] copy_links = module.params['copy_links'] perms = module.params['perms'] times = module.params['times'] owner = module.params['owner'] group = module.params['group'] cmd = '%s --delay-updates -FF --compress --timeout=%s' % (rsync, rsync_timeout) if module.check_mode: cmd = cmd + ' --dry-run' if delete: cmd = cmd + ' --delete-after' if existing_only: cmd = cmd + ' --existing' if archive: cmd = cmd + ' --archive' if recursive is False: cmd = cmd + ' --no-recursive' if links is False: cmd = cmd + ' --no-links' if copy_links is True: cmd = cmd + ' --copy-links' if perms is False: cmd = cmd + ' --no-perms' if times is False: cmd = cmd + ' --no-times' if owner is False: cmd = cmd + ' --no-owner' if group is False: cmd = cmd + ' --no-group' else: if recursive is True: cmd = cmd + ' --recursive' if links is True: cmd = cmd + ' --links' if copy_links is True: cmd = cmd + ' --copy-links' if perms is True: cmd = cmd + ' --perms' if times is True: cmd = cmd + ' --times' if owner is True: cmd = cmd + ' --owner' if group is True: cmd = cmd + ' --group' if dirs: cmd = cmd + ' --dirs' if private_key is None: private_key = '' else: private_key = '-i '+ private_key if dest_port != 22: cmd += " --rsh '%s %s -o %s -o Port=%s'" % ('ssh', private_key, 'StrictHostKeyChecking=no', dest_port) else: cmd += " --rsh '%s %s -o %s'" % ('ssh', private_key, 'StrictHostKeyChecking=no') # need ssh param if rsync_path: cmd = cmd + " --rsync-path '%s'" %(rsync_path) changed_marker = '<<CHANGED>>' cmd = cmd + " --out-format='" + changed_marker + "%i %n%L'" # expand the paths if '@' not in source: source = os.path.expanduser(source) if '@' not in dest: dest = os.path.expanduser(dest) cmd = ' '.join([cmd, source, dest]) cmdstr = cmd (rc, out, err) = module.run_command(cmd) if rc: return module.fail_json(msg=err, rc=rc, cmd=cmdstr) else: changed = changed_marker in out
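# (added note: --out-format above prefixes every line rsync actually transfers or changes with the marker, so any occurrence of the marker in stdout means a change was made; the marker is stripped back out of msg below)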
return module.exit_json(changed=changed, msg=out.replace(changed_marker,''), rc=rc, cmd=cmdstr) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/file0000664000000000000000000003147712316627017015263 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. import shutil import stat import grp import pwd try: import selinux HAVE_SELINUX=True except ImportError: HAVE_SELINUX=False DOCUMENTATION = ''' --- module: file version_added: "historical" short_description: Sets attributes of files description: - Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support the same options as the M(file) module - including M(copy), M(template), and M(assemble). options: path: description: - 'path to the file being managed. Aliases: I(dest), I(name)' required: true default: [] aliases: ['dest', 'name'] state: description: - If C(directory), all immediate subdirectories will be created if they do not exist. If C(file), the file will NOT be created if it does not exist, see the M(copy) or M(template) module if you want that behavior. If C(link), the symbolic link will be created or changed. Use C(hard) for hardlinks. If C(absent), directories will be recursively deleted, and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will be created if the C(dest) does not exist, while an existing file or directory will receive updated file access and modification times (similar to the way `touch` works from the command line). required: false default: file choices: [ file, link, directory, hard, touch, absent ] mode: required: false default: null choices: [] description: - mode the file or directory should be, such as 0644 as would be fed to I(chmod) owner: required: false default: null choices: [] description: - name of the user that should own the file/directory, as would be fed to I(chown) group: required: false default: null choices: [] description: - name of the group that should own the file/directory, as would be fed to I(chown) src: required: false default: null choices: [] description: - path of the file to link to (applies only to C(state=link)). Will accept absolute, relative and nonexisting paths. Relative paths are not expanded. seuser: required: false default: null choices: [] description: - user part of SELinux file context. Will default to system policy, if applicable. If set to C(_default), it will use the C(user) portion of the policy if available serole: required: false default: null choices: [] description: - role part of SELinux file context, C(_default) feature works as for I(seuser). setype: required: false default: null choices: [] description: - type part of SELinux file context, C(_default) feature works as for I(seuser).
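# (illustrative, not in the original docs: e.g. setype=httpd_sys_content_t would restore the usual web-content type on a docroot directory)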
selevel: required: false default: "s0" choices: [] description: - level part of the SELinux file context. This is the MLS/MCS attribute, sometimes known as the C(range). C(_default) feature works as for I(seuser). recurse: required: false default: "no" choices: [ "yes", "no" ] version_added: "1.1" description: - recursively set the specified file attributes (applies only to state=directory) force: required: false default: "no" choices: [ "yes", "no" ] description: - 'force the creation of the symlinks in two cases: the source file does not exist (but will appear later); the destination exists and is a file (so, we need to unlink the "path" file and create symlink to the "src" file in place of it).' notes: - See also M(copy), M(template), M(assemble) requirements: [ ] author: Michael DeHaan ''' EXAMPLES = ''' - file: path=/etc/foo.conf owner=foo group=foo mode=0644 - file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link ''' def main(): # FIXME: pass this around, should not use global global module module = AnsibleModule( argument_spec = dict( state = dict(choices=['file','directory','link','hard','touch','absent'], default=None), path = dict(aliases=['dest', 'name'], required=True), original_basename = dict(required=False), # Internal use only, for recursive ops recurse = dict(default='no', type='bool'), force = dict(required=False,default=False,type='bool'), diff_peek = dict(default=None), validate = dict(required=False, default=None), ), add_file_common_args=True, supports_check_mode=True ) params = module.params state = params['state'] force = params['force'] params['path'] = path = os.path.expanduser(params['path']) # short-circuit for diff_peek if params.get('diff_peek', None) is not None: appears_binary = False try: f = open(path) b = f.read(8192) f.close() if b.find("\x00") != -1: appears_binary = True except: pass module.exit_json(path=path, changed=False, appears_binary=appears_binary) prev_state = 'absent' if os.path.lexists(path): if os.path.islink(path): prev_state = 'link' elif os.path.isdir(path): prev_state = 'directory' elif os.stat(path).st_nlink > 1: prev_state = 'hard' else: # could be many other things, but defaulting to file prev_state = 'file' if prev_state is not None and state is None: # set state to current type of file state = prev_state elif state is None: # set default state to file state = 'file' # source is both the source of a symlink or an informational passing of the src for a template module # or copy module, even if this module never uses it, it is needed to key off some things src = params.get('src', None) if src: src = os.path.expanduser(src) if src is not None and os.path.isdir(path) and state not in ["link", "absent"]: if params['original_basename']: basename = params['original_basename'] else: basename = os.path.basename(src) params['path'] = path = os.path.join(path, basename) file_args = module.load_file_common_arguments(params) if state in ['link','hard'] and (src is None or path is None): module.fail_json(msg='src and dest are required for creating links') elif path is None: module.fail_json(msg='path is required') changed = False recurse = params['recurse'] if recurse and state == 'file' and prev_state == 'directory': state = 'directory' if prev_state != 'absent' and state == 'absent': try: if prev_state == 'directory': if os.path.islink(path): if module.check_mode: module.exit_json(changed=True) os.unlink(path) else: try: if module.check_mode: module.exit_json(changed=True) shutil.rmtree(path, ignore_errors=False) 
except Exception, e: module.fail_json(msg="rmtree failed: %s" % str(e)) else: if module.check_mode: module.exit_json(changed=True) os.unlink(path) except Exception, e: module.fail_json(path=path, msg=str(e)) module.exit_json(path=path, changed=True) if prev_state != 'absent' and prev_state != state: if not (force and (prev_state == 'file' or prev_state == 'hard' or prev_state == 'directory') and state == 'link') and state != 'touch': module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, src)) if prev_state == 'absent' and state == 'absent': module.exit_json(path=path, changed=False) if state == 'file': if prev_state != 'file': module.fail_json(path=path, msg='file (%s) does not exist, use copy or template module to create' % path) changed = module.set_file_attributes_if_different(file_args, changed) module.exit_json(path=path, changed=changed) elif state == 'directory': if prev_state == 'absent': if module.check_mode: module.exit_json(changed=True) os.makedirs(path) changed = True changed = module.set_directory_attributes_if_different(file_args, changed) if recurse: for root,dirs,files in os.walk( file_args['path'] ): for dir in dirs: dirname=os.path.join(root,dir) tmp_file_args = file_args.copy() tmp_file_args['path']=dirname changed = module.set_directory_attributes_if_different(tmp_file_args, changed) for file in files: filename=os.path.join(root,file) tmp_file_args = file_args.copy() tmp_file_args['path']=filename changed = module.set_file_attributes_if_different(tmp_file_args, changed) module.exit_json(path=path, changed=changed) elif state in ['link','hard']: if state == 'hard': if os.path.isabs(src): abs_src = src else: module.fail_json(msg="absolute paths are required") if not os.path.exists(abs_src) and not force: module.fail_json(path=path, src=src, msg='src file does not exist') if prev_state == 'absent': changed = True elif prev_state == 'link': old_src = os.readlink(path) if old_src != src: changed = True elif prev_state == 'hard': if not (state == 'hard' and os.stat(path).st_ino == os.stat(src).st_ino): if not force: module.fail_json(dest=path, src=src, msg='Cannot link, different hard link exists at destination') changed = True elif prev_state == 'file': if not force: module.fail_json(dest=path, src=src, msg='Cannot link, file exists at destination') changed = True elif prev_state == 'directory': if not force: module.fail_json(dest=path, src=src, msg='Cannot link, directory exists at destination') changed = True else: module.fail_json(dest=path, src=src, msg='unexpected position reached') if changed and not module.check_mode: if prev_state != 'absent': try: os.unlink(path) except OSError, e: module.fail_json(path=path, msg='Error while removing existing target: %s' % str(e)) try: if state == 'hard': os.link(src,path) else: os.symlink(src, path) except OSError, e: module.fail_json(path=path, msg='Error while linking: %s' % str(e)) changed = module.set_file_attributes_if_different(file_args, changed) module.exit_json(dest=path, src=src, changed=changed) elif state == 'touch': if module.check_mode: module.exit_json(path=path, skipped=True) if prev_state not in ['file', 'directory', 'absent']: module.fail_json(msg='Cannot touch other than files and directories') if prev_state != 'absent': try: os.utime(path, None) except OSError, e: module.fail_json(path=path, msg='Error while touching existing target: %s' % str(e)) else: try: open(path, 'w').close() except OSError, e: module.fail_json(path=path, msg='Error, could not touch target: 
%s' % str(e)) module.set_file_attributes_if_different(file_args, True) module.exit_json(dest=path, changed=True) else: module.fail_json(path=path, msg='unexpected position reached') # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/acl0000664000000000000000000002230012316627017015070 0ustar rootroot#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: acl version_added: "1.4" short_description: Sets and retrieves file ACL information. description: - Sets and retrieves file ACL information. options: name: required: true default: null description: - The full path of the file or object. aliases: ['path'] state: required: false default: query choices: [ 'query', 'present', 'absent' ] description: - defines whether the ACL should be present or not. The C(query) state gets the current ACL without changing it, for use in 'register' operations. follow: required: false default: yes choices: [ 'yes', 'no' ] description: - whether to follow symlinks on the path if a symlink is encountered. default: version_added: "1.5" required: false default: no choices: [ 'yes', 'no' ] description: - if the target is a directory, setting this to yes will make it the default acl for entities created inside the directory. It causes an error if name is a file. entity: version_added: "1.5" required: false description: - actual user or group that the ACL applies to when matching entity types user or group are selected. etype: version_added: "1.5" required: false default: null choices: [ 'user', 'group', 'mask', 'other' ] description: - the entity type of the ACL to apply, see setfacl documentation for more info. permissions: version_added: "1.5" required: false default: null description: - Permissions to apply/remove can be any combination of r, w and x (read, write and execute respectively) entry: required: false default: null description: - DEPRECATED. The acl to set or remove. This must always be quoted in the form of '<etype>:<qualifier>:<perms>'. The qualifier may be empty for some types, but the type and perms are always required. '-' can be used as a placeholder when you do not care about permissions. This is now superseded by the entity, etype and permissions fields. author: Brian Coca notes: - The "acl" module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries are installed. '''
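# Editorial illustration (added; not part of the original module): how a
# deprecated 'entry' string maps onto the separate fields, as normalized by
# split_entry() below. A minimal sketch assuming the ':'-separated layout the
# code expects; 'joe' and 'admins' are hypothetical names:
#
#   'user:joe:rw-'             -> default=False,     etype='user',  entity='joe',    permissions='rw-'
#   'default:group:admins:r-x' -> default='default', etype='group', entity='admins', permissions='r-x'
#   'mask::rwx'                -> default=False,     etype='mask',  entity='',       permissions='rwx'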
EXAMPLES = ''' # Grant user Joe read access to a file - acl: name=/etc/foo.conf entity=joe etype=user permissions="r" state=present # Removes the acl for Joe on a specific file - acl: name=/etc/foo.conf entity=joe etype=user state=absent # Sets default acl for joe on foo.d - acl: name=/etc/foo.d entity=joe etype=user permissions=rw default=yes state=present # Same as previous but using entry shorthand - acl: name=/etc/foo.d entry="default:user:joe:rw-" state=present # Obtain the acl for a specific file - acl: name=/etc/foo.conf register: acl_info ''' def split_entry(entry): ''' splits entry and ensures normalized return''' a = entry.split(':') a.reverse() if len(a) == 3: a.append(False) try: p,e,t,d = a except ValueError, e: raise e if t.startswith("u"): t = "user" elif t.startswith("g"): t = "group" elif t.startswith("m"): t = "mask" elif t.startswith("o"): t = "other" else: t = None perms = ['-','-','-'] for char in p: if char == 'r': perms[0] = 'r' if char == 'w': perms[1] = 'w' if char == 'x': perms[2] = 'x' p = ''.join(perms) return [d,t,e,p] def get_acls(module,path,follow): cmd = [ module.get_bin_path('getfacl', True) ] if not follow: cmd.append('-h') # prevents absolute path warnings and removes headers cmd.append('--omit-header') cmd.append('--absolute-names') cmd.append(path) return _run_acl(module,cmd) def set_acl(module,path,entry,follow,default): cmd = [ module.get_bin_path('setfacl', True) ] if not follow: cmd.append('-h') if default: cmd.append('-d') cmd.append('-m "%s"' % entry) cmd.append(path) return _run_acl(module,cmd) def rm_acl(module,path,entry,follow,default): cmd = [ module.get_bin_path('setfacl', True) ] if not follow: cmd.append('-h') if default: cmd.append('-k') entry = entry[0:entry.rfind(':')] cmd.append('-x "%s"' % entry) cmd.append(path) return _run_acl(module,cmd,False) def _run_acl(module,cmd,check_rc=True): try: (rc, out, err) = module.run_command(' '.join(cmd), check_rc=check_rc) except Exception, e: module.fail_json(msg=e.strerror) # trim last line as it is always empty ret = out.splitlines() return ret[0:len(ret)-1] def main(): module = AnsibleModule( argument_spec = dict( name = dict(required=True,aliases=['path'], type='str'), entry = dict(required=False, type='str'), entity = dict(required=False, type='str', default=''), etype = dict(required=False, choices=['other', 'user', 'group', 'mask'], type='str'), permissions = dict(required=False, type='str'), state = dict(required=False, default='query', choices=[ 'query', 'present', 'absent' ], type='str'), follow = dict(required=False, type='bool', default=True), default= dict(required=False, type='bool', default=False), ), supports_check_mode=True, ) path = module.params.get('name') entry = module.params.get('entry') entity = module.params.get('entity') etype = module.params.get('etype') permissions = module.params.get('permissions') state = module.params.get('state') follow = module.params.get('follow') default = module.params.get('default') if not os.path.exists(path): module.fail_json(msg="path not found or not accessible!") if state in ['present','absent']: if not entry and not etype: module.fail_json(msg="%s requires either etype and permissions or entry to be set" % state) if entry: if etype or entity or permissions: module.fail_json(msg="entry and another incompatible field (entity, etype or permissions) are also set") if entry.count(":") not in [2,3]: module.fail_json(msg="Invalid entry: '%s', it requires 3 or 4 sections divided by ':'" % 
entry) default, etype, entity, permissions = split_entry(entry) changed=False msg = "" currentacls = get_acls(module,path,follow) if (state == 'present'): matched = False for oldentry in currentacls: if oldentry.count(":") == 0: continue old_default, old_type, old_entity, old_permissions = split_entry(oldentry) if old_default == default: if old_type == etype: if etype in ['user', 'group']: if old_entity == entity: matched = True if not old_permissions == permissions: changed = True break else: matched = True if not old_permissions == permissions: changed = True break break if not matched: changed=True if changed and not module.check_mode: set_acl(module,path,':'.join([etype, str(entity), permissions]),follow,default) msg="%s is present" % ':'.join([etype, str(entity), permissions]) elif state == 'absent': for oldentry in currentacls: if oldentry.count(":") == 0: continue old_default, old_type, old_entity, old_permissions = split_entry(oldentry) if old_default == default: if old_type == etype: if etype in ['user', 'group']: if old_entity == entity: changed=True break else: changed=True break if changed and not module.check_mode: rm_acl(module,path,':'.join([etype, entity, '---']),follow,default) msg="%s is absent" % ':'.join([etype, entity, '---']) else: msg="current acl" if changed: currentacls = get_acls(module,path,follow) module.exit_json(changed=changed, msg=msg, acl=currentacls) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/files/ini_file0000664000000000000000000001357712316627017016127 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Jan-Piet Mens # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: ini_file short_description: Tweak settings in INI files description: - Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole with, say, M(template) or M(assemble). Adds missing sections if they don't exist. - Comments are discarded when the source file is read, and therefore will not show up in the destination file. version_added: "0.9" options: dest: description: - Path to the INI-style file; this file is created if required required: true default: null section: description: - Section name in INI file. This is added if C(state=present) automatically when a single value is being set. required: true default: null option: description: - if set (required for changing a I(value)), this is the name of the option. - May be omitted if adding/removing a whole I(section). required: false default: null value: description: - the string value to be associated with an I(option). May be omitted when removing an I(option). required: false default: null backup: description: - Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. 
required: false default: "no" choices: [ "yes", "no" ] others: description: - all arguments accepted by the M(file) module also work here required: false notes: - While it is possible to add an I(option) without specifying a I(value), this makes no sense. - A section named C(default) cannot be added by the module, but if it exists, individual options within the section can be updated. (This is a limitation of Python's I(ConfigParser).) Either use M(template) to create a base INI file with a C([default]) section, or use M(lineinfile) to add the missing line. requirements: [ ConfigParser ] author: Jan-Piet Mens ''' EXAMPLES = ''' # Ensure "fav=lemonade" is in section "[drinks]" in the specified file - ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes - ini_file: dest=/etc/anotherconf section=drinks option=temperature value=cold backup=yes ''' import ConfigParser # ============================================================== # do_ini def do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False): changed = False cp = ConfigParser.ConfigParser() try: f = open(filename) cp.readfp(f) except IOError: pass if state == 'absent': if option is None and value is None: if cp.has_section(section): cp.remove_section(section) changed = True else: if option is not None: try: if cp.get(section, option): cp.remove_option(section, option) changed = True except: pass if state == 'present': # DEFAULT section is always there by DEFAULT, so never try to add it. if not cp.has_section(section) and section.upper() != 'DEFAULT': cp.add_section(section) changed = True if option is not None and value is not None: try: oldvalue = cp.get(section, option) if str(value) != str(oldvalue): cp.set(section, option, value) changed = True except ConfigParser.NoSectionError: cp.set(section, option, value) changed = True except ConfigParser.NoOptionError: cp.set(section, option, value) changed = True if changed: if backup: module.backup_local(filename) try: f = open(filename, 'w') cp.write(f) except: module.fail_json(msg="Can't create %s" % filename) return changed # ============================================================== # main def main(): module = AnsibleModule( argument_spec = dict( dest = dict(required=True), section = dict(required=True), option = dict(required=False), value = dict(required=False), backup = dict(default='no', type='bool'), state = dict(default='present', choices=['present', 'absent']) ), add_file_common_args = True ) info = dict() dest = os.path.expanduser(module.params['dest']) section = module.params['section'] option = module.params['option'] value = module.params['value'] state = module.params['state'] backup = module.params['backup'] changed = do_ini(module, dest, section, option, value, state, backup) file_args = module.load_file_common_arguments(module.params) changed = module.set_file_attributes_if_different(file_args, changed) # Mission complete module.exit_json(dest=dest, changed=changed, msg="OK") # import module snippets from ansible.module_utils.basic import * main()
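# Editorial sketch (added; not part of the original module): expected behaviour
# of do_ini() above for a hypothetical /tmp/example.ini:
#
#   do_ini(module, '/tmp/example.ini', 'drinks', 'fav', 'lemonade')
#       -> creates [drinks] with fav = lemonade, returns True (changed)
#   do_ini(module, '/tmp/example.ini', 'drinks', 'fav', 'lemonade')
#       -> value already matches, returns False (no change)
#   do_ini(module, '/tmp/example.ini', 'drinks', state='absent')
#       -> removes the whole [drinks] section, returns True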
ansible-1.5.4/library/files/assemble0000664000000000000000000001400712316627017016131 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Stephen Fromm # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. import os import os.path import shutil import tempfile import re DOCUMENTATION = ''' --- module: assemble short_description: Assembles a configuration file from fragments description: - Assembles a configuration file from fragments. Often a particular program takes a single configuration file and does not support a C(conf.d) style structure where it is easy to build up the configuration from multiple sources. M(assemble) will take a directory of files that can be local or have already been transferred to the system, and concatenate them together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea I(fragments). version_added: "0.5" options: src: description: - An already existing directory full of source files. required: true default: null aliases: [] dest: description: - A file to create using the concatenation of all of the source files. required: true default: null backup: description: - Create a backup file (if C(yes)), including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. required: false choices: [ "yes", "no" ] default: "no" delimiter: description: - A delimiter to separate the file contents. version_added: "1.4" required: false default: null remote_src: description: - If False, it will search for src on the originating/master machine; if True, it will go to the remote/target machine for the src. Default is True. choices: [ "True", "False" ] required: false default: "True" version_added: "1.4" regexp: description: - Assemble files only if C(regexp) matches the filename. If not set, all files are assembled. All "\" (backslash) must be escaped as "\\\\" to comply with YAML syntax. Uses Python regular expressions; see U(http://docs.python.org/2/library/re.html). 
required: false default: null others: description: - all arguments accepted by the M(file) module also work here required: false author: Stephen Fromm ''' EXAMPLES = ''' # Example from Ansible Playbooks - assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf # When a delimiter is specified, it will be inserted in between each fragment - assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf delimiter='### START FRAGMENT ###' ''' # =========================================== # Support method def assemble_from_fragments(src_path, delimiter=None, compiled_regexp=None): ''' assemble a file from a directory of fragments ''' tmpfd, temp_path = tempfile.mkstemp() tmp = os.fdopen(tmpfd,'w') delimit_me = False for f in sorted(os.listdir(src_path)): if compiled_regexp and not compiled_regexp.search(f): continue fragment = "%s/%s" % (src_path, f) if delimit_me and delimiter: # un-escape anything like newlines delimiter = delimiter.decode('unicode-escape') tmp.write(delimiter) # always make sure there's a newline after the # delimiter, so lines don't run together if delimiter[-1] != '\n': tmp.write('\n') if os.path.isfile(fragment): tmp.write(file(fragment).read()) delimit_me = True tmp.close() return temp_path # ============================================================== # main def main(): module = AnsibleModule( # not checking because of daisy chain to file module argument_spec = dict( src = dict(required=True), delimiter = dict(required=False), dest = dict(required=True), backup=dict(default=False, type='bool'), remote_src=dict(default=False, type='bool'), regexp = dict(required=False), ), add_file_common_args=True ) changed = False pathmd5 = None destmd5 = None src = os.path.expanduser(module.params['src']) dest = os.path.expanduser(module.params['dest']) backup = module.params['backup'] delimiter = module.params['delimiter'] regexp = module.params['regexp'] compiled_regexp = None if not os.path.exists(src): module.fail_json(msg="Source (%s) does not exist" % src) if not os.path.isdir(src): module.fail_json(msg="Source (%s) is not a directory" % src) if regexp != None: try: compiled_regexp = re.compile(regexp) except re.error, e: module.fail_json(msg="Invalid Regexp (%s) in \"%s\"" % (e, regexp)) path = assemble_from_fragments(src, delimiter, compiled_regexp) pathmd5 = module.md5(path) if os.path.exists(dest): destmd5 = module.md5(dest) if pathmd5 != destmd5: if backup and destmd5 is not None: module.backup_local(dest) shutil.copy(path, dest) changed = True os.remove(path) file_args = module.load_file_common_arguments(module.params) changed = module.set_file_attributes_if_different(file_args, changed) # Mission complete module.exit_json(src=src, dest=dest, md5sum=pathmd5, changed=changed, msg="OK") # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/0000775000000000000000000000000012316627017015233 5ustar rootrootansible-1.5.4/library/packaging/redhat_subscription0000664000000000000000000003355212316627017021241 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' --- module: redhat_subscription short_description: Manage Red Hat Network registration and subscriptions using the C(subscription-manager) command description: - Manage registration and subscription to the Red Hat Network entitlement platform. version_added: "1.2" author: James Laska notes: - In order to register a system, subscription-manager requires either a username and password, or an activationkey. 
requirements: - subscription-manager options: state: description: - whether to register and subscribe (C(present)), or unregister (C(absent)) a system required: false choices: [ "present", "absent" ] default: "present" username: description: - Red Hat Network username required: False default: null password: description: - Red Hat Network password required: False default: null server_hostname: description: - Specify an alternative Red Hat Network server required: False default: Current value from C(/etc/rhsm/rhsm.conf) is the default server_insecure: description: - Allow traffic over insecure http required: False default: Current value from C(/etc/rhsm/rhsm.conf) is the default rhsm_baseurl: description: - Specify CDN baseurl required: False default: Current value from C(/etc/rhsm/rhsm.conf) is the default autosubscribe: description: - Upon successful registration, auto-consume available subscriptions required: False default: False activationkey: description: - supply an activation key for use with registration required: False default: null pool: description: - Specify a subscription pool name to consume. Regular expressions accepted. required: False default: '^$' ''' EXAMPLES = ''' # Register as user (joe_user) with password (somepass) and auto-subscribe to available content. - redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true # Register with activationkey (1-222333444) and consume subscriptions matching # the names (Red hat Enterprise Server) and (Red Hat Virtualization) - redhat_subscription: action=register activationkey=1-222333444 pool='^(Red Hat Enterprise Server|Red Hat Virtualization)$' ''' import os import re import types import ConfigParser import shlex class RegistrationBase(object): def __init__(self, module, username=None, password=None): self.module = module self.username = username self.password = password def configure(self): raise NotImplementedError("Must be implemented by a sub-class") def enable(self): # Remove any existing redhat.repo redhat_repo = '/etc/yum.repos.d/redhat.repo' if os.path.isfile(redhat_repo): os.unlink(redhat_repo) def register(self): raise NotImplementedError("Must be implemented by a sub-class") def unregister(self): raise NotImplementedError("Must be implemented by a sub-class") def unsubscribe(self): raise NotImplementedError("Must be implemented by a sub-class") def update_plugin_conf(self, plugin, enabled=True): plugin_conf = '/etc/yum/pluginconf.d/%s.conf' % plugin if os.path.isfile(plugin_conf): cfg = ConfigParser.ConfigParser() cfg.read([plugin_conf]) if enabled: cfg.set('main', 'enabled', 1) else: cfg.set('main', 'enabled', 0) fd = open(plugin_conf, 'rwa+') cfg.write(fd) fd.close() def subscribe(self, **kwargs): raise NotImplementedError("Must be implemented by a sub-class") class Rhsm(RegistrationBase): def __init__(self, module, username=None, password=None): RegistrationBase.__init__(self, module, username, password) self.config = self._read_config() self.module = module def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'): ''' Load RHSM configuration from /etc/rhsm/rhsm.conf. Returns: * ConfigParser object ''' # Read RHSM defaults ... cp = ConfigParser.ConfigParser() cp.read(rhsm_conf) # Add support for specifying a default value w/o having to standup some configuration # Yeah, I know this should be subclassed ... 
but, oh well def get_option_default(self, key, default=''): sect, opt = key.split('.', 1) if self.has_section(sect) and self.has_option(sect, opt): return self.get(sect, opt) else: return default cp.get_option = types.MethodType(get_option_default, cp, ConfigParser.ConfigParser) return cp def enable(self): ''' Enable the system to receive updates from subscription-manager. This involves updating affected yum plugins and removing any conflicting yum repositories. ''' RegistrationBase.enable(self) self.update_plugin_conf('rhnplugin', False) self.update_plugin_conf('subscription-manager', True) def configure(self, **kwargs): ''' Configure the system as directed for registration with RHN Raises: * Exception - if error occurs while running command ''' args = ['subscription-manager', 'config'] # Pass supplied **kwargs as parameters to subscription-manager. Ignore # non-configuration parameters and replace '_' with '.'. For example, # 'server_hostname' becomes '--system.hostname'. for k,v in kwargs.items(): if re.search(r'^(system|rhsm)_', k): args.append('--%s=%s' % (k.replace('_','.'), v)) self.module.run_command(args, check_rc=True) @property def is_registered(self): ''' Determine whether the current system Returns: * Boolean - whether the current system is currently registered to RHN. ''' # Quick version... if False: return os.path.isfile('/etc/pki/consumer/cert.pem') and \ os.path.isfile('/etc/pki/consumer/key.pem') args = ['subscription-manager', 'identity'] rc, stdout, stderr = self.module.run_command(args, check_rc=False) if rc == 0: return True else: return False def register(self, username, password, autosubscribe, activationkey): ''' Register the current system to the provided RHN server Raises: * Exception - if error occurs while running command ''' args = ['subscription-manager', 'register'] # Generate command arguments if activationkey: args.append('--activationkey "%s"' % activationkey) else: if autosubscribe: args.append('--autosubscribe') if username: args.extend(['--username', username]) if password: args.extend(['--password', password]) rc, stderr, stdout = self.module.run_command(args, check_rc=True) def unsubscribe(self): ''' Unsubscribe a system from all subscribed channels Raises: * Exception - if error occurs while running command ''' args = ['subscription-manager', 'unsubscribe', '--all'] rc, stderr, stdout = self.module.run_command(args, check_rc=True) def unregister(self): ''' Unregister a currently registered system Raises: * Exception - if error occurs while running command ''' args = ['subscription-manager', 'unregister'] rc, stderr, stdout = self.module.run_command(args, check_rc=True) def subscribe(self, regexp): ''' Subscribe current system to available pools matching the specified regular expression Raises: * Exception - if error occurs while running command ''' # Available pools ready for subscription available_pools = RhsmPools(self.module) for pool in available_pools.filter(regexp): pool.subscribe() class RhsmPool(object): ''' Convenience class for housing subscription information ''' def __init__(self, module, **kwargs): self.module = module for k,v in kwargs.items(): setattr(self, k, v) def __str__(self): return str(self.__getattribute__('_name')) def subscribe(self): args = "subscription-manager subscribe --pool %s" % self.PoolId rc, stdout, stderr = self.module.run_command(args, check_rc=True) if rc == 0: return True else: return False class RhsmPools(object): """ This class is used for manipulating pools subscriptions with RHSM """ def __init__(self, 
module): self.module = module self.products = self._load_product_list() def __iter__(self): return self.products.__iter__() def _load_product_list(self): """ Loads list of all available pools for system in data structure """ args = "subscription-manager list --available" rc, stdout, stderr = self.module.run_command(args, check_rc=True) products = [] for line in stdout.split('\n'): # Remove leading+trailing whitespace line = line.strip() # An empty line implies the end of an output group if len(line) == 0: continue # If a colon ':' is found, parse elif ':' in line: (key, value) = line.split(':',1) key = key.strip().replace(" ", "") # To unify value = value.strip() if key in ['ProductName', 'SubscriptionName']: # Remember the name for later processing products.append(RhsmPool(self.module, _name=value, key=value)) elif products: # Associate value with most recently recorded product products[-1].__setattr__(key, value) # FIXME - log some warning? #else: # warnings.warn("Unhandled subscription key/value: %s/%s" % (key,value)) return products def filter(self, regexp='^$'): ''' Return a list of RhsmPools whose name matches the provided regular expression ''' r = re.compile(regexp) for product in self.products: if r.search(product._name): yield product def main(): # Load RHSM configuration from file rhn = Rhsm(AnsibleModule()) module = AnsibleModule( argument_spec = dict( state = dict(default='present', choices=['present', 'absent']), username = dict(default=None, required=False), password = dict(default=None, required=False), server_hostname = dict(default=rhn.config.get_option('server.hostname'), required=False), server_insecure = dict(default=rhn.config.get_option('server.insecure'), required=False), rhsm_baseurl = dict(default=rhn.config.get_option('rhsm.baseurl'), required=False), autosubscribe = dict(default=False, type='bool'), activationkey = dict(default=None, required=False), pool = dict(default='^$', required=False, type='str'), ) ) rhn.module = module state = module.params['state'] username = module.params['username'] password = module.params['password'] server_hostname = module.params['server_hostname'] server_insecure = module.params['server_insecure'] rhsm_baseurl = module.params['rhsm_baseurl'] autosubscribe = module.params['autosubscribe'] == True activationkey = module.params['activationkey'] pool = module.params['pool'] # Ensure system is registered if state == 'present': # Check for missing parameters ... if not (activationkey or username or password): module.fail_json(msg="Missing arguments, must supply an activationkey (%s) or username (%s) and password (%s)" % (activationkey, username, password)) if not activationkey and not (username and password): module.fail_json(msg="Missing arguments, if registering without an activationkey, must supply username and password") # Register system if rhn.is_registered: module.exit_json(changed=False, msg="System already registered.") else: try: rhn.enable() rhn.configure(**module.params) rhn.register(username, password, autosubscribe, activationkey) rhn.subscribe(pool) except CommandException, e: module.fail_json(msg="Failed to register with '%s': %s" % (server_hostname, e)) else: module.exit_json(changed=True, msg="System successfully registered to '%s'." 
% server_hostname) # Ensure system is *not* registered if state == 'absent': if not rhn.is_registered: module.exit_json(changed=False, msg="System already unregistered.") else: try: rhn.unsubscribe() rhn.unregister() except CommandException, e: module.fail_json(msg="Failed to unregister: %s" % e) else: module.exit_json(changed=True, msg="System successfully unregistered from %s." % server_hostname) # import module snippets from ansible.module_utils.basic import * main()
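# Editorial sketch (added; not part of the original module): the
# subscription-manager invocations the state=present path above boils down to,
# for a hypothetical user 'jdoe':
#
#   subscription-manager identity                     # rhn.is_registered
#   subscription-manager config ...                   # rhn.configure(**module.params)
#   subscription-manager register --username jdoe --password ...
#   subscription-manager subscribe --pool ...         # one call per pool matched by the 'pool' regexp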
requirements: [ "virtualenv", "pip" ] author: Matt Wright ''' EXAMPLES = ''' # Install (Bottle) python package. - pip: name=bottle # Install (Bottle) python package on version 0.11. - pip: name=bottle version=0.11 # Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args. - pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp' # Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules - pip: name=bottle virtualenv=/my_app/venv # Install (Bottle) into the specified (virtualenv), inheriting globally installed modules - pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes # Install (Bottle) into the specified (virtualenv), using Python 2.7 - pip: name=bottle virtualenv=/my_app/venv virtualenv_command=virtualenv-2.7 # Install specified python requirements. - pip: requirements=/my_app/requirements.txt # Install specified python requirements in indicated (virtualenv). - pip: requirements=/my_app/requirements.txt virtualenv=/my_app/venv # Install specified python requirements and custom Index URL. - pip: requirements=/my_app/requirements.txt extra_args='-i https://example.com/pypi/simple' # Install (Bottle) for Python 3.3 specifically,using the 'pip-3.3' executable. - pip: name=bottle executable=pip-3.3 ''' def _get_cmd_options(module, cmd): thiscmd = cmd + " --help" rc, stdout, stderr = module.run_command(thiscmd) if rc != 0: module.fail_json(msg="Could not get output from %s: %s" % (thiscmd, stdout + stderr)) words = stdout.strip().split() cmd_options = [ x for x in words if x.startswith('--') ] return cmd_options def _get_full_name(name, version=None): if version is None: resp = name else: resp = name + '==' + version return resp def _get_pip(module, env=None, executable=None): # On Debian and Ubuntu, pip is pip. # On Fedora18 and up, pip is python-pip. # On Fedora17 and below, CentOS and RedHat 6 and 5, pip is pip-python. # On Fedora, CentOS, and RedHat, the exception is in the virtualenv. # There, pip is just pip. candidate_pip_basenames = ['pip', 'python-pip', 'pip-python'] pip = None if executable is not None: if os.path.isabs(executable): pip = executable else: # If you define your own executable that executable should be the only candidate. candidate_pip_basenames = [executable] if pip is None: if env is None: opt_dirs = [] else: # Try pip with the virtualenv directory first. opt_dirs = ['%s/bin' % env] for basename in candidate_pip_basenames: pip = module.get_bin_path(basename, False, opt_dirs) if pip is not None: break # pip should have been found by now. The final call to get_bin_path will # trigger fail_json. 
if pip is None: basename = candidate_pip_basenames[0] pip = module.get_bin_path(basename, True, opt_dirs) return pip def _fail(module, cmd, out, err): msg = '' if out: msg += "stdout: %s" % (out, ) if err: msg += "\n:stderr: %s" % (err, ) module.fail_json(cmd=cmd, msg=msg) def main(): state_map = dict( present='install', absent='uninstall -y', latest='install -U', ) module = AnsibleModule( argument_spec=dict( state=dict(default='present', choices=state_map.keys()), name=dict(default=None, required=False), version=dict(default=None, required=False), requirements=dict(default=None, required=False), virtualenv=dict(default=None, required=False), virtualenv_site_packages=dict(default='no', type='bool'), virtualenv_command=dict(default='virtualenv', required=False), use_mirrors=dict(default='yes', type='bool'), extra_args=dict(default=None, required=False), chdir=dict(default=None, required=False), executable=dict(default=None, required=False), ), required_one_of=[['name', 'requirements']], mutually_exclusive=[['name', 'requirements']], supports_check_mode=True ) state = module.params['state'] name = module.params['name'] version = module.params['version'] requirements = module.params['requirements'] extra_args = module.params['extra_args'] chdir = module.params['chdir'] if state == 'latest' and version is not None: module.fail_json(msg='version is incompatible with state=latest') err = '' out = '' env = module.params['virtualenv'] virtualenv_command = module.params['virtualenv_command'] if env: env = os.path.expanduser(env) virtualenv = os.path.expanduser(virtualenv_command) if os.path.basename(virtualenv) == virtualenv: virtualenv = module.get_bin_path(virtualenv_command, True) if not os.path.exists(os.path.join(env, 'bin', 'activate')): if module.check_mode: module.exit_json(changed=True) if module.params['virtualenv_site_packages']: cmd = '%s --system-site-packages %s' % (virtualenv, env) else: cmd_opts = _get_cmd_options(module, virtualenv) if '--no-site-packages' in cmd_opts: cmd = '%s --no-site-packages %s' % (virtualenv, env) else: cmd = '%s %s' % (virtualenv, env) this_dir = tempfile.gettempdir() if chdir: this_dir = os.path.join(this_dir, chdir) rc, out_venv, err_venv = module.run_command(cmd, cwd=this_dir) out += out_venv err += err_venv if rc != 0: _fail(module, cmd, out, err) pip = _get_pip(module, env, module.params['executable']) cmd = '%s %s' % (pip, state_map[state]) # If there's a virtualenv we want things we install to be able to use other # installations that exist as binaries within this virtualenv. Example: we # install cython and then gevent -- gevent needs to use the cython binary, # not just a python package that will be found by calling the right python. # So if there's a virtualenv, we add that bin/ to the beginning of the PATH # in run_command by setting path_prefix here. path_prefix = None if env: path_prefix="/".join(pip.split('/')[:-1]) # Automatically apply -e option to extra_args when source is a VCS url. 
VCS # includes those beginning with svn+, git+, hg+ or bzr+ if name: if name.startswith('svn+') or name.startswith('git+') or \ name.startswith('hg+') or name.startswith('bzr+'): args_list = [] # used if extra_args is not used at all if extra_args: args_list = extra_args.split(' ') if '-e' not in args_list: args_list.append('-e') # Ok, we will reconstruct the option string extra_args = ' '.join(args_list) if extra_args: cmd += ' %s' % extra_args if name: cmd += ' %s' % _get_full_name(name, version) elif requirements: cmd += ' -r %s' % requirements if module.check_mode: module.exit_json(changed=True) this_dir = tempfile.gettempdir() if chdir: this_dir = os.path.join(this_dir, chdir) rc, out_pip, err_pip = module.run_command(cmd, path_prefix=path_prefix, cwd=this_dir) out += out_pip err += err_pip if rc == 1 and state == 'absent' and 'not installed' in out_pip: pass # rc is 1 when attempting to uninstall non-installed package elif rc != 0: _fail(module, cmd, out, err) if state == 'absent': changed = 'Successfully uninstalled' in out_pip else: changed = 'Successfully installed' in out_pip module.exit_json(changed=changed, cmd=cmd, name=name, version=version, state=state, requirements=requirements, virtualenv=env, stdout=out, stderr=err) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/openbsd_pkg0000664000000000000000000003065712316627017017464 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Patrik Lundin # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import re import shlex import syslog DOCUMENTATION = ''' --- module: openbsd_pkg author: Patrik Lundin version_added: "1.1" short_description: Manage packages on OpenBSD. description: - Manage packages on OpenBSD using the pkg tools. options: name: required: true description: - Name of the package. state: required: true choices: [ present, latest, absent ] description: - C(present) will make sure the package is installed. C(latest) will make sure the latest version of the package is installed. C(absent) will make sure the specified package is not installed. ''' EXAMPLES = ''' # Make sure nmap is installed - openbsd_pkg: name=nmap state=present # Make sure nmap is the latest version - openbsd_pkg: name=nmap state=latest # Make sure nmap is not installed - openbsd_pkg: name=nmap state=absent ''' # Control if we write debug information to syslog. debug = False # Function used for executing commands. def execute_command(cmd, module): if debug: syslog.syslog("execute_command(): cmd = %s" % cmd) # Break command line into arguments. # This makes run_command() use shell=False which we need to not cause shell # expansion of special characters like '*'. cmd_args = shlex.split(cmd) return module.run_command(cmd_args) # Function used for getting the name of a currently installed package. 
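# Editorial sketch (added; not part of the original module): which pkg_info
# pattern the parsed pkg_spec produces below, for hypothetical names:
#
#   name='nmap-6.25'   -> version set -> pattern '^nmap-6.25'
#   name='vim--no_x11' -> flavor set  -> pattern '^vim-.*-no_x11\s'
#   name='nmap'        -> stem only   -> pattern '^nmap-'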
def get_current_name(name, pkg_spec, module): info_cmd = 'pkg_info' (rc, stdout, stderr) = execute_command("%s" % (info_cmd), module) if rc != 0: return (rc, stdout, stderr) if pkg_spec['version']: pattern = "^%s" % name elif pkg_spec['flavor']: pattern = "^%s-.*-%s\s" % (pkg_spec['stem'], pkg_spec['flavor']) else: pattern = "^%s-" % pkg_spec['stem'] if debug: syslog.syslog("get_current_name(): pattern = %s" % pattern) for line in stdout.splitlines(): if debug: syslog.syslog("get_current_name: line = %s" % line) match = re.search(pattern, line) if match: current_name = line.split()[0] return current_name # Function used to find out if a package is currently installed. def get_package_state(name, pkg_spec, module): info_cmd = 'pkg_info -e' if pkg_spec['version']: command = "%s %s" % (info_cmd, name) elif pkg_spec['flavor']: command = "%s %s-*-%s" % (info_cmd, pkg_spec['stem'], pkg_spec['flavor']) else: command = "%s %s-*" % (info_cmd, pkg_spec['stem']) rc, stdout, stderr = execute_command(command, module) if (stderr): module.fail_json(msg="failed in get_package_state(): " + stderr) if rc == 0: return True else: return False # Function used to make sure a package is present. def package_present(name, installed_state, pkg_spec, module): if module.check_mode: install_cmd = 'pkg_add -Imn' else: install_cmd = 'pkg_add -Im' if installed_state is False: # Attempt to install the package (rc, stdout, stderr) = execute_command("%s %s" % (install_cmd, name), module) # The behaviour of pkg_add is a bit different depending on if a # specific version is supplied or not. # # When a specific version is supplied the return code will be 0 when # a package is found and 1 when it is not, if a version is not # supplied the tool will exit 0 in both cases: if pkg_spec['version']: # Depend on the return code. if debug: syslog.syslog("package_present(): depending on return code") if rc: changed=False else: # Depend on stderr instead. if debug: syslog.syslog("package_present(): depending on stderr") if stderr: # There is a corner case where having an empty directory in # installpath prior to the right location will result in a # "file:/local/package/directory/ is empty" message on stderr # while still installing the package, so we need to look for # for a message like "packagename-1.0: ok" just in case. match = re.search("\W%s-[^:]+: ok\W" % name, stdout) if match: # It turns out we were able to install the package. if debug: syslog.syslog("package_present(): we were able to install package") pass else: # We really did fail, fake the return code. if debug: syslog.syslog("package_present(): we really did fail") rc = 1 changed=False else: if debug: syslog.syslog("package_present(): stderr was not set") if rc == 0: if module.check_mode: module.exit_json(changed=True) changed=True else: rc = 0 stdout = '' stderr = '' changed=False return (rc, stdout, stderr, changed) # Function used to make sure a package is the latest available version. def package_latest(name, installed_state, pkg_spec, module): if module.check_mode: upgrade_cmd = 'pkg_add -umn' else: upgrade_cmd = 'pkg_add -um' pre_upgrade_name = '' if installed_state is True: # Fetch name of currently installed package. pre_upgrade_name = get_current_name(name, pkg_spec, module) if debug: syslog.syslog("package_latest(): pre_upgrade_name = %s" % pre_upgrade_name) # Attempt to upgrade the package. 
(rc, stdout, stderr) = execute_command("%s %s" % (upgrade_cmd, name), module) # Look for output looking something like "nmap-6.01->6.25: ok" to see if # something changed (or would have changed). Use \W to delimit the match # from progress meter output. match = re.search("\W%s->.+: ok\W" % pre_upgrade_name, stdout) if match: if module.check_mode: module.exit_json(changed=True) changed = True else: changed = False # FIXME: This part is problematic. Based on the issues mentioned (and # handled) in package_present() it is not safe to blindly trust stderr # as an indicator that the command failed, and in the case with # empty installpath directories this will break. # # For now keep this safeguard here, but ignore it if we managed to # parse out a successful update above. This way we will report a # successful run when we actually modify something but fail # otherwise. if changed != True: if stderr: rc=1 return (rc, stdout, stderr, changed) else: # If package was not installed at all just make it present. if debug: syslog.syslog("package_latest(): package is not installed, calling package_present()") return package_present(name, installed_state, pkg_spec, module) # Function used to make sure a package is not installed. def package_absent(name, installed_state, module): if module.check_mode: remove_cmd = 'pkg_delete -In' else: remove_cmd = 'pkg_delete -I' if installed_state is True: # Attempt to remove the package. rc, stdout, stderr = execute_command("%s %s" % (remove_cmd, name), module) if rc == 0: if module.check_mode: module.exit_json(changed=True) changed=True else: changed=False else: rc = 0 stdout = '' stderr = '' changed=False return (rc, stdout, stderr, changed) # Function used to parse the package name based on packages-specs(7) # The general name structure is "stem-version[-flavors]" def parse_package_name(name, pkg_spec, module): # Do some initial matches so we can base the more advanced regex on that. version_match = re.search("-[0-9]", name) versionless_match = re.search("--", name) # Stop if someone is giving us a name that both has a version and is # version-less at the same time. if version_match and versionless_match: module.fail_json(msg="Package name both has a version and is version-less: " + name) # If name includes a version. if version_match: match = re.search("^(?P<stem>.*)-(?P<version>[0-9][^-]*)(?P<flavor_separator>-)?(?P<flavor>[a-z].*)?$", name) if match: pkg_spec['stem'] = match.group('stem') pkg_spec['version_separator'] = '-' pkg_spec['version'] = match.group('version') pkg_spec['flavor_separator'] = match.group('flavor_separator') pkg_spec['flavor'] = match.group('flavor') else: module.fail_json(msg="Unable to parse package name at version_match: " + name) # If name includes no version but is version-less ("--"). elif versionless_match: match = re.search("^(?P<stem>.*)--(?P<flavor>[a-z].*)?$", name) if match: pkg_spec['stem'] = match.group('stem') pkg_spec['version_separator'] = '-' pkg_spec['version'] = None pkg_spec['flavor_separator'] = '-' pkg_spec['flavor'] = match.group('flavor') else: module.fail_json(msg="Unable to parse package name at versionless_match: " + name) # If name includes no version, and is not version-less, it is all a stem. else: match = re.search("^(?P<stem>.*)$", name) if match: pkg_spec['stem'] = match.group('stem') pkg_spec['version_separator'] = None pkg_spec['version'] = None pkg_spec['flavor_separator'] = None pkg_spec['flavor'] = None else: module.fail_json(msg="Unable to parse package name at else: " + name) # Sanity check that there are no trailing dashes in flavor. 
# Try to stop strange stuff early so we can be strict later. if pkg_spec['flavor']: match = re.search("-$", pkg_spec['flavor']) if match: module.fail_json(msg="Trailing dash in flavor: " + pkg_spec['flavor']) # =========================================== # Main control flow def main(): module = AnsibleModule( argument_spec = dict( name = dict(required=True), state = dict(required=True, choices=['absent', 'installed', 'latest', 'present', 'removed']), ), supports_check_mode = True ) name = module.params['name'] state = module.params['state'] rc = 0 stdout = '' stderr = '' result = {} result['name'] = name result['state'] = state # Parse package name and put results in the pkg_spec dictionary. pkg_spec = {} parse_package_name(name, pkg_spec, module) # Get package state. installed_state = get_package_state(name, pkg_spec, module) # Perform requested action. if state in ['installed', 'present']: (rc, stdout, stderr, changed) = package_present(name, installed_state, pkg_spec, module) elif state in ['absent', 'removed']: (rc, stdout, stderr, changed) = package_absent(name, installed_state, module) elif state == 'latest': (rc, stdout, stderr, changed) = package_latest(name, installed_state, pkg_spec, module) if rc != 0: if stderr: module.fail_json(msg=stderr) else: module.fail_json(msg=stdout) result['changed'] = changed module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/zypper_repository0000664000000000000000000001064512316627017021014 0ustar rootroot#!/usr/bin/python # encoding: utf-8 # (c) 2013, Matthias Vogelgesang # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: zypper_repository author: Matthias Vogelgesang version_added: "1.4" short_description: Add and remove Zypper repositories description: - Add or remove Zypper repositories on SUSE and openSUSE options: name: required: true default: none description: - A name for the repository. repo: required: true default: none description: - URI of the repository or .repo file. state: required: false choices: [ "absent", "present" ] default: "present" description: - A source string state. description: required: false default: none description: - A description of the repository disable_gpg_check: description: - Whether to disable GPG signature checking of all packages. Has an effect only if state is I(present). 
required: false default: "no" choices: [ "yes", "no" ] aliases: [] notes: [] requirements: [ zypper ] ''' EXAMPLES = ''' # Add NVIDIA repository for graphics drivers - zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=present # Remove NVIDIA repository - zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=absent ''' def repo_exists(module, repo): """Return (rc, stdout, stderr, found) tuple""" cmd = ['/usr/bin/zypper', 'lr', '--uri'] rc, stdout, stderr = module.run_command(cmd, check_rc=False) return (rc, stdout, stderr, repo in stdout) def add_repo(module, repo, alias, description, disable_gpg_check): cmd = ['/usr/bin/zypper', 'ar', '--check', '--refresh'] if description: cmd.extend(['--name', description]) if disable_gpg_check: cmd.append('--no-gpgcheck') cmd.append(repo) if not repo.endswith('.repo'): cmd.append(alias) rc, stdout, stderr = module.run_command(cmd, check_rc=False) changed = rc == 0 return (rc, stdout, stderr, changed) def remove_repo(module, repo): cmd = ['/usr/bin/zypper', 'rr', repo] rc, stdout, stderr = module.run_command(cmd, check_rc=False) changed = rc == 0 return (rc, stdout, stderr, changed) def fail_if_rc_is_null(module, rc, stdout, stderr): if rc != 0: module.fail_json(msg=stderr if stderr else stdout) def main(): module = AnsibleModule( argument_spec=dict( name=dict(required=True), repo=dict(required=True), state=dict(choices=['present', 'absent'], default='present'), description=dict(required=False), disable_gpg_check = dict(required=False, default='no', type='bool'), ), supports_check_mode=False, ) repo = module.params['repo'] state = module.params['state'] name = module.params['name'] description = module.params['description'] disable_gpg_check = module.params['disable_gpg_check'] def exit_unchanged(): module.exit_json(changed=False, repo=repo, state=state, name=name) rc, stdout, stderr, exists = repo_exists(module, repo) fail_if_rc_is_null(module, rc, stdout, stderr) if state == 'present': if exists: exit_unchanged() result = add_repo(module, repo, name, description, disable_gpg_check) elif state == 'absent': if not exists: exit_unchanged() result = remove_repo(module, repo) rc, stdout, stderr, changed = result fail_if_rc_is_null(module, rc, stdout, stderr) module.exit_json(changed=changed, repo=repo, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/pkgng0000664000000000000000000001156412316627017016273 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, bleader # Written by bleader # Based on pkgin module written by Shaun Zinck # that was based on pacman module written by Afterburn # that was based on apt module written by Matthew Williams # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . 
DOCUMENTATION = ''' --- module: pkgng short_description: Package manager for FreeBSD >= 9.0 description: - Manage binary packages for FreeBSD using 'pkgng', which is available in versions after 9.0. version_added: "1.2" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent' ] required: false default: present cached: description: - use local package base or try to fetch an updated one choices: [ 'yes', 'no' ] required: false default: no pkgsite: description: - specify packagesite to use for downloading packages; if not specified, use settings from /usr/local/etc/pkg.conf required: false author: bleader notes: - When using pkgsite, be careful that packages already in the cache won't be downloaded again. ''' EXAMPLES = ''' # Install package foo - pkgng: name=foo state=present # Remove packages foo and bar - pkgng: name=foo,bar state=absent ''' import json import shlex import os import sys def query_package(module, pkgin_path, name): rc, out, err = module.run_command("%s info -g -e %s" % (pkgin_path, name)) if rc == 0: return True return False def remove_packages(module, pkgin_path, packages): remove_c = 0 # Using a for loop so, in case of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove if not query_package(module, pkgin_path, package): continue if not module.check_mode: rc, out, err = module.run_command("%s delete -y %s" % (pkgin_path, package)) if not module.check_mode and query_package(module, pkgin_path, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) remove_c += 1 if remove_c > 0: module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, pkgin_path, packages, cached, pkgsite): install_c = 0 if pkgsite != "": pkgsite="PACKAGESITE=%s" % (pkgsite) if not module.check_mode and not cached: rc, out, err = module.run_command("%s %s update" % (pkgsite, pkgin_path)) if rc != 0: module.fail_json(msg="Could not update catalogue") for package in packages: if query_package(module, pkgin_path, package): continue if not module.check_mode: rc, out, err = module.run_command("%s %s install -g -U -y %s" % (pkgsite, pkgin_path, package)) if not module.check_mode and not query_package(module, pkgin_path, package): module.fail_json(msg="failed to install %s: %s" % (package, out), stderr=err) install_c += 1 if install_c > 0: module.exit_json(changed=True, msg="present %s package(s)" % (install_c)) module.exit_json(changed=False, msg="package(s) already present") def main(): module = AnsibleModule( argument_spec = dict( state = dict(default="present", choices=["present","absent"]), name = dict(aliases=["pkg"], required=True), cached = dict(default=False, type='bool'), pkgsite = dict(default="", required=False)), supports_check_mode = True) pkgin_path = module.get_bin_path('pkg', True) p = module.params pkgs = p["name"].split(",") if p["state"] == "present": install_packages(module, pkgin_path, pkgs, p["cached"], p["pkgsite"]) elif p["state"] == "absent": remove_packages(module, pkgin_path, pkgs) # import module snippets from ansible.module_utils.basic import * main()
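# Editorial sketch (added; not part of the original module): the pkg(8)
# commands the pkgng module above drives for state=present, with a
# hypothetical package 'foo':
#
#   pkg info -g -e foo          # query_package(): rc 0 means already installed
#   pkg update                  # only when cached=no
#   pkg install -g -U -y foo    # then re-checked with 'pkg info -g -e foo'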
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .

DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
    - Add or remove an APT repository in Ubuntu and Debian.
notes:
    - This module works on Debian and Ubuntu and requires C(python-apt) and C(python-pycurl) packages.
    - This module supports Debian Squeeze (version 6) as well as its successors.
    - This module treats Debian and Ubuntu distributions separately, so PPAs can be installed only on Ubuntu machines.
options:
    repo:
        required: true
        default: none
        description:
            - A source string for the repository.
    state:
        required: false
        choices: [ "absent", "present" ]
        default: "present"
        description:
            - A source string state.
    update_cache:
        description:
            - Run the equivalent of C(apt-get update) if the repository list has changed.
        required: false
        default: "yes"
        choices: [ "yes", "no" ]
author: Alexander Saltanov
version_added: "0.7"
requirements: [ python-apt, python-pycurl ]
'''

EXAMPLES = '''
# Add specified repository into sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=present

# Add source repository into sources list.
apt_repository: repo='deb-src http://archive.canonical.com/ubuntu hardy partner' state=present

# Remove specified repository from sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=absent

# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'
'''

import glob
try:
    import json
except ImportError:
    import simplejson as json
import os
import re
import tempfile

try:
    import apt
    import apt_pkg
    import aptsources.distro
    distro = aptsources.distro.get_distro()
    HAVE_PYTHON_APT = True
except ImportError:
    HAVE_PYTHON_APT = False

try:
    import pycurl
    HAVE_PYCURL = True
except ImportError:
    HAVE_PYCURL = False

VALID_SOURCE_TYPES = ('deb', 'deb-src')

class CurlCallback:
    def __init__(self):
        self.contents = ''

    def body_callback(self, buf):
        self.contents = self.contents + buf

class InvalidSource(Exception):
    pass

# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
    def __init__(self):
        self.files = {}  # group sources by file
        self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')

        # read sources.list if it exists
        if os.path.isfile(self.default_file):
            self.load(self.default_file)

        # read sources.list.d
        for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
            self.load(file)

    def __iter__(self):
        '''Simple iterator to go over all sources.
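        Yields (file, n, enabled, source, comment) tuples.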
Empty, non-source, and other not valid lines will be skipped.''' for file, sources in self.files.items(): for n, valid, enabled, source, comment in sources: if valid: yield file, n, enabled, source, comment raise StopIteration def _expand_path(self, filename): if '/' in filename: return filename else: return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename)) def _suggest_filename(self, line): def _cleanup_filename(s): return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split()) # Drop options and protocols. line = re.sub('\[[^\]]+\]', '', line) line = re.sub('\w+://', '', line) parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES] return '%s.list' % _cleanup_filename(' '.join(parts[:1])) def _parse(self, line, raise_if_invalid_or_disabled=False): valid = False enabled = True source = '' comment = '' line = line.strip() if line.startswith('#'): enabled = False line = line[1:] # Check for another "#" in the line and treat a part after it as a comment. i = line.find('#') if i > 0: comment = line[i+1:].strip() line = line[:i] # Split a source into substring to make sure that it is source spec. # Duplicated whitespaces in a valid source spec will be removed. source = line.strip() if source: chunks = source.split() if chunks[0] in VALID_SOURCE_TYPES: valid = True source = ' '.join(chunks) if raise_if_invalid_or_disabled and (not valid or not enabled): raise InvalidSource(line) return valid, enabled, source, comment @staticmethod def _apt_cfg_file(filespec): ''' Wrapper for `apt_pkg` module for running with Python 2.5 ''' try: result = apt_pkg.config.find_file(filespec) except AttributeError: result = apt_pkg.Config.FindFile(filespec) return result @staticmethod def _apt_cfg_dir(dirspec): ''' Wrapper for `apt_pkg` module for running with Python 2.5 ''' try: result = apt_pkg.config.find_dir(dirspec) except AttributeError: result = apt_pkg.Config.FindDir(dirspec) return result def load(self, file): group = [] f = open(file, 'r') for n, line in enumerate(f): valid, enabled, source, comment = self._parse(line) group.append((n, valid, enabled, source, comment)) self.files[file] = group def save(self, module): for filename, sources in self.files.items(): if sources: d, fn = os.path.split(filename) fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d) os.chmod(os.path.join(fd, tmp_path), 0644) f = os.fdopen(fd, 'w') for n, valid, enabled, source, comment in sources: chunks = [] if not enabled: chunks.append('# ') chunks.append(source) if comment: chunks.append(' # ') chunks.append(comment) chunks.append('\n') line = ''.join(chunks) try: f.write(line) except IOError, err: module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, unicode(err))) module.atomic_move(tmp_path, filename) else: del self.files[filename] if os.path.exists(filename): os.remove(filename) def dump(self): return '\n'.join([str(i) for i in self]) def modify(self, file, n, enabled=None, source=None, comment=None): ''' This function to be used with iterator, so we don't care of invalid sources. If source, enabled, or comment is None, original value from line ``n`` will be preserved. ''' valid, enabled_old, source_old, comment_old = self.files[file][n][1:] choice = lambda new, old: old if new is None else new self.files[file][n] = (n, valid, choice(enabled, enabled_old), choice(source, source_old), choice(comment, comment_old)) def _add_valid_source(self, source_new, comment_new, file): # We'll try to reuse disabled source if we have it. 
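        # (Matching compares the whitespace-normalized source strings produced
        # by _parse, so formatting differences do not defeat the lookup.)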
# If we have more than one entry, we will enable them all - no advanced logic, remember. found = False for filename, n, enabled, source, comment in self: if source == source_new: self.modify(filename, n, enabled=True) found = True if not found: if file is None: file = self.default_file else: file = self._expand_path(file) if file not in self.files: self.files[file] = [] files = self.files[file] files.append((len(files), True, True, source_new, comment_new)) def add_source(self, line, comment='', file=None): source = self._parse(line, raise_if_invalid_or_disabled=True)[2] # Prefer separate files for new sources. self._add_valid_source(source, comment, file=file or self._suggest_filename(source)) def _remove_valid_source(self, source): # If we have more than one entry, we will remove them all (not comment, remove!) for filename, n, enabled, src, comment in self: if source == src and enabled: self.files[filename].pop(n) def remove_source(self, line): source = self._parse(line, raise_if_invalid_or_disabled=True)[2] self._remove_valid_source(source) class UbuntuSourcesList(SourcesList): LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s' def __init__(self, add_ppa_signing_keys_callback=None): self.add_ppa_signing_keys_callback = add_ppa_signing_keys_callback super(UbuntuSourcesList, self).__init__() def _get_ppa_info(self, owner_name, ppa_name): # we can not use urllib2 here as it does not do cert verification lp_api = self.LP_API % (owner_name, ppa_name) return self._get_ppa_info_curl(lp_api) def _get_ppa_info_curl(self, lp_api): callback = CurlCallback() curl = pycurl.Curl() curl.setopt(pycurl.SSL_VERIFYPEER, 1) curl.setopt(pycurl.SSL_VERIFYHOST, 2) curl.setopt(pycurl.WRITEFUNCTION, callback.body_callback) curl.setopt(pycurl.URL, str(lp_api)) curl.setopt(pycurl.HTTPHEADER, ["Accept: application/json"]) curl.perform() curl.close() lp_page = callback.contents return json.loads(lp_page) def _expand_ppa(self, path): ppa = path.split(':')[1] ppa_owner = ppa.split('/')[0] try: ppa_name = ppa.split('/')[1] except IndexError: ppa_name = 'ppa' line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, distro.codename) return line, ppa_owner, ppa_name def add_source(self, line, comment='', file=None): if line.startswith('ppa:'): source, ppa_owner, ppa_name = self._expand_ppa(line) if self.add_ppa_signing_keys_callback is not None: info = self._get_ppa_info(ppa_owner, ppa_name) command = ['apt-key', 'adv', '--recv-keys', '--keyserver', 'hkp://keyserver.ubuntu.com:80', info['signing_key_fingerprint']] self.add_ppa_signing_keys_callback(command) file = file or self._suggest_filename('%s_%s' % (line, distro.codename)) else: source = self._parse(line, raise_if_invalid_or_disabled=True)[2] file = file or self._suggest_filename(source) self._add_valid_source(source, comment, file) def remove_source(self, line): if line.startswith('ppa:'): source = self._expand_ppa(line)[0] else: source = self._parse(line, raise_if_invalid_or_disabled=True)[2] self._remove_valid_source(source) def get_add_ppa_signing_key_callback(module): def _run_command(command): module.run_command(command, check_rc=True) if module.check_mode: return None else: return _run_command def main(): module = AnsibleModule( argument_spec=dict( repo=dict(required=True), state=dict(choices=['present', 'absent'], default='present'), update_cache = dict(aliases=['update-cache'], type='bool', default='yes'), ), supports_check_mode=True, ) if not HAVE_PYTHON_APT: module.fail_json(msg='Could not import python modules: apt_pkg. 
Please install python-apt package.') if not HAVE_PYCURL: module.fail_json(msg='Could not import python modules: pycurl. Please install python-pycurl package.') repo = module.params['repo'] state = module.params['state'] update_cache = module.params['update_cache'] sourceslist = None if isinstance(distro, aptsources.distro.UbuntuDistribution): sourceslist = UbuntuSourcesList(add_ppa_signing_keys_callback=get_add_ppa_signing_key_callback(module)) elif isinstance(distro, aptsources.distro.DebianDistribution) or \ isinstance(distro, aptsources.distro.Distribution): sourceslist = SourcesList() else: module.fail_json(msg='Module apt_repository supports only Debian and Ubuntu.') sources_before = sourceslist.dump() try: if state == 'present': sourceslist.add_source(repo) elif state == 'absent': sourceslist.remove_source(repo) except InvalidSource, err: module.fail_json(msg='Invalid repository string: %s' % unicode(err)) sources_after = sourceslist.dump() changed = sources_before != sources_after if not module.check_mode and changed: try: sourceslist.save(module) if update_cache: cache = apt.Cache() cache.update() except OSError, err: module.fail_json(msg=unicode(err)) module.exit_json(changed=changed, repo=repo, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/macports0000664000000000000000000001502712316627017017013 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Jimmy Tang # Based on okpg (Patrick Pelletier ), pacman # (Afterburn) and pkgin (Shaun Zinck) modules # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: macports author: Jimmy Tang short_description: Package manager for MacPorts description: - Manages MacPorts packages version_added: "1.1" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent', 'active', 'inactive' ] required: false default: present update_cache: description: - update the package db first required: false default: "no" choices: [ "yes", "no" ] notes: [] ''' EXAMPLES = ''' - macports: name=foo state=present - macports: name=foo state=present update_cache=yes - macports: name=foo state=absent - macports: name=foo state=active - macports: name=foo state=inactive ''' import pipes def update_package_db(module, port_path): """ Updates packages list. """ rc, out, err = module.run_command("%s sync" % port_path) if rc != 0: module.fail_json(msg="could not update package db") def query_package(module, port_path, name, state="present"): """ Returns whether a package is installed or not. 
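    With state="active", returns whether the package is both installed and active.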
""" if state == "present": rc, out, err = module.run_command("%s installed | grep -q ^.*%s" % (pipes.quote(port_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: return True return False elif state == "active": rc, out, err = module.run_command("%s installed %s | grep -q active" % (pipes.quote(port_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: return True return False def remove_packages(module, port_path, packages): """ Uninstalls one or more packages if installed. """ remove_c = 0 # Using a for loop incase of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove if not query_package(module, port_path, package): continue rc, out, err = module.run_command("%s uninstall %s" % (port_path, package)) if query_package(module, port_path, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) remove_c += 1 if remove_c > 0: module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, port_path, packages): """ Installs one or more packages if not already installed. """ install_c = 0 for package in packages: if query_package(module, port_path, package): continue rc, out, err = module.run_command("%s install %s" % (port_path, package)) if not query_package(module, port_path, package): module.fail_json(msg="failed to install %s: %s" % (package, out)) install_c += 1 if install_c > 0: module.exit_json(changed=True, msg="installed %s package(s)" % (install_c)) module.exit_json(changed=False, msg="package(s) already present") def activate_packages(module, port_path, packages): """ Activate a package if it's inactive. """ activate_c = 0 for package in packages: if not query_package(module, port_path, package): module.fail_json(msg="failed to activate %s, package(s) not present" % (package)) if query_package(module, port_path, package, state="active"): continue rc, out, err = module.run_command("%s activate %s" % (port_path, package)) if not query_package(module, port_path, package, state="active"): module.fail_json(msg="failed to activate %s: %s" % (package, out)) activate_c += 1 if activate_c > 0: module.exit_json(changed=True, msg="activated %s package(s)" % (activate_c)) module.exit_json(changed=False, msg="package(s) already active") def deactivate_packages(module, port_path, packages): """ Deactivate a package if it's active. 
""" deactivated_c = 0 for package in packages: if not query_package(module, port_path, package): module.fail_json(msg="failed to activate %s, package(s) not present" % (package)) if not query_package(module, port_path, package, state="active"): continue rc, out, err = module.run_command("%s deactivate %s" % (port_path, package)) if query_package(module, port_path, package, state="active"): module.fail_json(msg="failed to deactivated %s: %s" % (package, out)) deactivated_c += 1 if deactivated_c > 0: module.exit_json(changed=True, msg="deactivated %s package(s)" % (deactivated_c)) module.exit_json(changed=False, msg="package(s) already inactive") def main(): module = AnsibleModule( argument_spec = dict( name = dict(aliases=["pkg"], required=True), state = dict(default="present", choices=["present", "installed", "absent", "removed", "active", "inactive"]), update_cache = dict(default="no", aliases=["update-cache"], type='bool') ) ) port_path = module.get_bin_path('port', True, ['/opt/local/bin']) p = module.params if p["update_cache"]: update_package_db(module, port_path) pkgs = p["name"].split(",") if p["state"] in ["present", "installed"]: install_packages(module, port_path, pkgs) elif p["state"] in ["absent", "removed"]: remove_packages(module, port_path, pkgs) elif p["state"] == "active": activate_packages(module, port_path, pkgs) elif p["state"] == "inactive": deactivate_packages(module, port_path, pkgs) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/pkgutil0000664000000000000000000001274212316627017016643 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Alexander Winkler # based on svr4pkg by # Boyd Adamson (2012) # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: pkgutil short_description: Manage CSW-Packages on Solaris description: - Manages CSW packages (SVR4 format) on Solaris 10 and 11. - These were the native packages on Solaris <= 10 and are available as a legacy feature in Solaris 11. - Pkgutil is an advanced packaging system, which resolves dependency on installation. It is designed for CSW packages. version_added: "1.3" author: Alexander Winkler options: name: description: - Package name, e.g. (C(CSWnrpe)) required: true site: description: - Specifies the repository path to install the package from. - Its global definition is done in C(/etc/opt/csw/pkgutil.conf). state: description: - Whether to install (C(present)), or remove (C(absent)) a package. - The upgrade (C(latest)) operation will update/install the package to the latest version available. - "Note: The module has a limitation that (C(latest)) only works for one package, not lists of them." 
    required: true
    choices: ["present", "absent", "latest"]
'''

EXAMPLES = '''
# Install a package
pkgutil: name=CSWcommon state=present

# Install a package from a specific repository
pkgutil: name=CSWnrpe site='ftp://myinternal.repo/opencsw/kiel' state=latest
'''

import os
import pipes

def package_installed(module, name):
    cmd = [module.get_bin_path('pkginfo', True)]
    cmd.append('-q')
    cmd.append(name)
    rc, out, err = module.run_command(' '.join(cmd))
    if rc == 0:
        return True
    else:
        return False

def package_latest(module, name, site):
    # Only supports one package
    name = pipes.quote(name)
    cmd = [ 'pkgutil', '--single', '-c' ]
    if site is not None:
        # Quote the site only when it is actually set; pipes.quote(None)
        # would raise a TypeError.
        cmd += [ '-t', pipes.quote(site) ]
    cmd.append(name)
    cmd += [ '| tail -1 | grep -v SAME' ]
    rc, out, err = module.run_command(' '.join(cmd), use_unsafe_shell=True)
    if rc == 1:
        return True
    else:
        return False

def run_command(module, cmd):
    progname = cmd[0]
    cmd[0] = module.get_bin_path(progname, True)
    return module.run_command(cmd)

def package_install(module, state, name, site):
    cmd = [ 'pkgutil', '-iy' ]
    if site is not None:
        cmd += [ '-t', site ]
    if state == 'latest':
        cmd += [ '-f' ]
    cmd.append(name)
    (rc, out, err) = run_command(module, cmd)
    return (rc, out, err)

def package_upgrade(module, name, site):
    cmd = [ 'pkgutil', '-ufy' ]
    if site is not None:
        cmd += [ '-t', site ]
    cmd.append(name)
    (rc, out, err) = run_command(module, cmd)
    return (rc, out, err)

def package_uninstall(module, name):
    cmd = [ 'pkgutil', '-ry', name]
    (rc, out, err) = run_command(module, cmd)
    return (rc, out, err)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            name = dict(required = True),
            state = dict(required = True, choices=['present', 'absent','latest']),
            site = dict(default = None),
        ),
        supports_check_mode=True
    )
    name = module.params['name']
    state = module.params['state']
    site = module.params['site']
    rc = None
    out = ''
    err = ''
    result = {}
    result['name'] = name
    result['state'] = state

    if state == 'present':
        if not package_installed(module, name):
            if module.check_mode:
                module.exit_json(changed=True)
            (rc, out, err) = package_install(module, state, name, site)
            # Stdout is normally empty but for some packages can be
            # very long and is not often useful
            if len(out) > 75:
                out = out[:75] + '...'

    elif state == 'latest':
        if not package_installed(module, name):
            if module.check_mode:
                module.exit_json(changed=True)
            (rc, out, err) = package_install(module, state, name, site)
        else:
            if not package_latest(module, name, site):
                if module.check_mode:
                    module.exit_json(changed=True)
                (rc, out, err) = package_upgrade(module, name, site)
                if len(out) > 75:
                    out = out[:75] + '...'

    elif state == 'absent':
        if package_installed(module, name):
            if module.check_mode:
                module.exit_json(changed=True)
            (rc, out, err) = package_uninstall(module, name)
            if len(out) > 75:
                out = out[:75] + '...'

    if rc is None:
        result['changed'] = False
    else:
        result['changed'] = True

    if out:
        result['stdout'] = out
    if err:
        result['stderr'] = err

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/packaging/gem0000664000000000000000000001567712316627017015736 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Johan Wiren
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
#

DOCUMENTATION = '''
---
module: gem
short_description: Manage Ruby gems
description:
  - Manage installation and uninstallation of Ruby gems.
version_added: "1.1"
options:
  name:
    description:
      - The name of the gem to be managed.
    required: true
  state:
    description:
      - The desired state of the gem. C(latest) ensures that the latest version is installed.
    required: true
    choices: [present, absent, latest]
  gem_source:
    description:
      - The path to a local gem used as installation source.
    required: false
  include_dependencies:
    description:
      - Whether to include dependencies or not.
    required: false
    choices: [ "yes", "no" ]
    default: "yes"
  repository:
    description:
      - The repository from which the gem will be installed.
    required: false
    aliases: [source]
  user_install:
    description:
      - Install gem in user's local gems cache or for all users
    required: false
    default: "yes"
    version_added: "1.3"
  executable:
    description:
      - Override the path to the gem executable
    required: false
    version_added: "1.4"
  version:
    description:
      - Version of the gem to be installed/removed.
    required: false
author: Johan Wiren
'''

EXAMPLES = '''
# Installs version 1.0 of vagrant.
- gem: name=vagrant version=1.0 state=present

# Installs latest available version of rake.
- gem: name=rake state=latest

# Installs rake version 1.0 from a local gem on disk.
- gem: name=rake gem_source=/path/to/gems/rake-1.0.gem state=present
'''

import re

def get_rubygems_path(module):
    if module.params['executable']:
        return module.params['executable']
    else:
        return module.get_bin_path('gem', True)

def get_rubygems_version(module):
    cmd = [ get_rubygems_path(module), '--version' ]
    (rc, out, err) = module.run_command(cmd, check_rc=True)

    match = re.match(r'^(\d+)\.(\d+)\.(\d+)', out)
    if not match:
        return None

    return tuple(int(x) for x in match.groups())

def get_installed_versions(module, remote=False):

    cmd = [ get_rubygems_path(module) ]
    cmd.append('query')
    if remote:
        cmd.append('--remote')
        if module.params['repository']:
            cmd.extend([ '--source', module.params['repository'] ])
    cmd.append('-n')
    cmd.append('^%s$' % module.params['name'])
    (rc, out, err) = module.run_command(cmd, check_rc=True)
    installed_versions = []
    for line in out.splitlines():
        match = re.match(r"\S+\s+\((.+)\)", line)
        if match:
            versions = match.group(1)
            for version in versions.split(', '):
                installed_versions.append(version.split()[0])
    return installed_versions

def exists(module):

    if module.params['state'] == 'latest':
        remoteversions = get_installed_versions(module, remote=True)
        if remoteversions:
            module.params['version'] = remoteversions[0]
    installed_versions = get_installed_versions(module)
    if module.params['version']:
        if module.params['version'] in installed_versions:
            return True
    else:
        if installed_versions:
            return True
    return False

def uninstall(module):

    if module.check_mode:
        return
    cmd = [ get_rubygems_path(module) ]
    cmd.append('uninstall')
    if module.params['version']:
        cmd.extend([ '--version', module.params['version'] ])
    else:
        cmd.append('--all')
    cmd.append('--executable')
    cmd.append(module.params['name'])
    module.run_command(cmd, check_rc=True)

def install(module):

    if module.check_mode:
        return

    ver = get_rubygems_version(module)
    if ver:
        major = ver[0]
    else:
        major =
None

    cmd = [ get_rubygems_path(module) ]
    cmd.append('install')
    if module.params['version']:
        cmd.extend([ '--version', module.params['version'] ])
    if module.params['repository']:
        cmd.extend([ '--source', module.params['repository'] ])
    if not module.params['include_dependencies']:
        cmd.append('--ignore-dependencies')
    else:
        if major and major < 2:
            cmd.append('--include-dependencies')
    if module.params['user_install']:
        cmd.append('--user-install')
    else:
        cmd.append('--no-user-install')
    cmd.append('--no-rdoc')
    cmd.append('--no-ri')
    cmd.append(module.params['gem_source'])
    module.run_command(cmd, check_rc=True)

def main():

    module = AnsibleModule(
        argument_spec = dict(
            executable           = dict(required=False, type='str'),
            gem_source           = dict(required=False, type='str'),
            include_dependencies = dict(required=False, default=True, type='bool'),
            name                 = dict(required=True, type='str'),
            repository           = dict(required=False, aliases=['source'], type='str'),
            state                = dict(required=False, choices=['present','absent','latest'], type='str'),
            user_install         = dict(required=False, default=True, type='bool'),
            version              = dict(required=False, type='str'),
        ),
        supports_check_mode = True,
        mutually_exclusive = [ ['gem_source','repository'], ['gem_source','version'] ],
    )

    if module.params['version'] and module.params['state'] == 'latest':
        module.fail_json(msg="Cannot specify version when state=latest")
    if module.params['gem_source'] and module.params['state'] == 'latest':
        module.fail_json(msg="Cannot maintain state=latest when installing from local source")

    if not module.params['gem_source']:
        module.params['gem_source'] = module.params['name']

    changed = False

    if module.params['state'] in [ 'present', 'latest']:
        if not exists(module):
            install(module)
            changed = True
    elif module.params['state'] == 'absent':
        if exists(module):
            uninstall(module)
            changed = True

    result = {}
    result['name'] = module.params['name']
    result['state'] = module.params['state']
    if module.params['version']:
        result['version'] = module.params['version']
    result['changed'] = changed

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/packaging/yum0000664000000000000000000006730412316627017015776 0ustar rootroot#!/usr/bin/python -tt
# -*- coding: utf-8 -*-

# (c) 2012, Red Hat, Inc
# Written by Seth Vidal
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
#

import traceback
import os
import yum

try:
    from yum.misc import find_unfinished_transactions, find_ts_remaining
    from rpmUtils.miscutils import splitFilename
    transaction_helpers = True
except:
    transaction_helpers = False

DOCUMENTATION = '''
---
module: yum
version_added: historical
short_description: Manages packages with the I(yum) package manager
description:
     - Installs, upgrades, removes, and lists packages and groups with the I(yum) package manager.
options:
  name:
    description:
      - "Package name, or package specifier with version, like C(name-1.0).
        When using state=latest, this can be '*' which means run: yum -y update.
        You can also pass a url or a local path to a rpm file."
    required: true
    default: null
    aliases: []
  list:
    description:
      - Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
    required: false
    default: null
  state:
    description:
      - Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
    required: false
    choices: [ "present", "latest", "absent" ]
    default: "present"
  enablerepo:
    description:
      - Repoid of repositories to enable for the install/update operation.
        These repos will not persist beyond the transaction.
        Multiple repos separated with a ','.
    required: false
    version_added: "0.9"
    default: null
    aliases: []
  disablerepo:
    description:
      - I(repoid) of repositories to disable for the install/update operation.
        These repos will not persist beyond the transaction.
        Multiple repos separated with a ','.
    required: false
    version_added: "0.9"
    default: null
    aliases: []
  conf_file:
    description:
      - The remote yum configuration file to use for the transaction.
    required: false
    version_added: "0.6"
    default: null
    aliases: []
  disable_gpg_check:
    description:
      - Whether to disable the GPG checking of signatures of packages being
        installed. Has an effect only if state is I(present) or I(latest).
    required: false
    version_added: "1.2"
    default: "no"
    choices: ["yes", "no"]
    aliases: []

notes: []
# informational: requirements for nodes
requirements: [ yum, rpm ]
author: Seth Vidal
'''

EXAMPLES = '''
- name: install the latest version of Apache
  yum: name=httpd state=latest

- name: remove the Apache package
  yum: name=httpd state=removed

- name: install the latest version of Apache from the testing repo
  yum: name=httpd enablerepo=testing state=installed

- name: upgrade all packages
  yum: name=* state=latest

- name: install the nginx rpm from a remote repo
  yum: name=http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present

- name: install nginx rpm from a local file
  yum: name=/usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present

- name: install the 'Development tools' package group
  yum: name="@Development tools" state=present
'''

def_qf = "%{name}-%{version}-%{release}.%{arch}"

repoquery='/usr/bin/repoquery'
if not os.path.exists(repoquery):
    repoquery = None

yumbin='/usr/bin/yum'

import syslog
def log(msg):
    syslog.openlog('ansible-yum', 0, syslog.LOG_USER)
    syslog.syslog(syslog.LOG_NOTICE, msg)

def yum_base(conf_file=None, cachedir=False):

    my = yum.YumBase()
    my.preconf.debuglevel=0
    my.preconf.errorlevel=0
    if conf_file and os.path.exists(conf_file):
        my.preconf.fn = conf_file
    if cachedir or os.geteuid() != 0:
        if hasattr(my, 'setCacheDir'):
            my.setCacheDir()
        else:
            cachedir = yum.misc.getCacheDir()
            my.repos.setCacheDir(cachedir)
            my.conf.cache = 0

    return my

def install_yum_utils(module):

    if not module.check_mode:
        yum_path = module.get_bin_path('yum')
        if yum_path:
            rc, so, se = module.run_command('%s -y install yum-utils' % yum_path)
            if rc == 0:
                this_path = module.get_bin_path('repoquery')
                global repoquery
                repoquery = this_path

def po_to_nevra(po):

    if hasattr(po, 'ui_nevra'):
        return po.ui_nevra
    else:
        return '%s-%s-%s.%s' % (po.name, po.version, po.release, po.arch)

def is_installed(module, repoq, pkgspec, conf_file, qf=def_qf, en_repos=[], dis_repos=[], is_pkg=False):

    if not repoq:

        pkgs = []
        try:
            my = yum_base(conf_file)
            for rid in en_repos:
                my.repos.enableRepo(rid)
            for rid in dis_repos:
                my.repos.disableRepo(rid)

            e,m,u =
my.rpmdb.matchPackageNames([pkgspec]) pkgs = e + m if not pkgs: pkgs.extend(my.returnInstalledPackagesByDep(pkgspec)) except Exception, e: module.fail_json(msg="Failure talking to yum: %s" % e) return [ po_to_nevra(p) for p in pkgs ] else: cmd = repoq + ["--disablerepo=*", "--pkgnarrow=installed", "--qf", qf, pkgspec] rc,out,err = module.run_command(cmd) if not is_pkg: cmd = repoq + ["--disablerepo=*", "--pkgnarrow=installed", "--qf", qf, "--whatprovides", pkgspec] rc2,out2,err2 = module.run_command(cmd) else: rc2,out2,err2 = (0, '', '') if rc == 0 and rc2 == 0: out += out2 return [ p for p in out.split('\n') if p.strip() ] else: module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2)) return [] def is_available(module, repoq, pkgspec, conf_file, qf=def_qf, en_repos=[], dis_repos=[]): if not repoq: pkgs = [] try: my = yum_base(conf_file) for rid in en_repos: my.repos.enableRepo(rid) for rid in dis_repos: my.repos.disableRepo(rid) e,m,u = my.pkgSack.matchPackageNames([pkgspec]) pkgs = e + m if not pkgs: pkgs.extend(my.returnPackagesByDep(pkgspec)) except Exception, e: module.fail_json(msg="Failure talking to yum: %s" % e) return [ po_to_nevra(p) for p in pkgs ] else: myrepoq = list(repoq) for repoid in dis_repos: r_cmd = ['--disablerepo', repoid] myrepoq.extend(r_cmd) for repoid in en_repos: r_cmd = ['--enablerepo', repoid] myrepoq.extend(r_cmd) cmd = myrepoq + ["--qf", qf, pkgspec] rc,out,err = module.run_command(cmd) if rc == 0: return [ p for p in out.split('\n') if p.strip() ] else: module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err)) return [] def is_update(module, repoq, pkgspec, conf_file, qf=def_qf, en_repos=[], dis_repos=[]): if not repoq: retpkgs = [] pkgs = [] updates = [] try: my = yum_base(conf_file) for rid in en_repos: my.repos.enableRepo(rid) for rid in dis_repos: my.repos.disableRepo(rid) pkgs = my.returnPackagesByDep(pkgspec) + my.returnInstalledPackagesByDep(pkgspec) if not pkgs: e,m,u = my.pkgSack.matchPackageNames([pkgspec]) pkgs = e + m updates = my.doPackageLists(pkgnarrow='updates').updates except Exception, e: module.fail_json(msg="Failure talking to yum: %s" % e) for pkg in pkgs: if pkg in updates: retpkgs.append(pkg) return set([ po_to_nevra(p) for p in retpkgs ]) else: myrepoq = list(repoq) for repoid in dis_repos: r_cmd = ['--disablerepo', repoid] myrepoq.extend(r_cmd) for repoid in en_repos: r_cmd = ['--enablerepo', repoid] myrepoq.extend(r_cmd) cmd = myrepoq + ["--pkgnarrow=updates", "--qf", qf, pkgspec] rc,out,err = module.run_command(cmd) if rc == 0: return set([ p for p in out.split('\n') if p.strip() ]) else: module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err)) return [] def what_provides(module, repoq, req_spec, conf_file, qf=def_qf, en_repos=[], dis_repos=[]): if not repoq: pkgs = [] try: my = yum_base(conf_file) for rid in en_repos: my.repos.enableRepo(rid) for rid in dis_repos: my.repos.disableRepo(rid) pkgs = my.returnPackagesByDep(req_spec) + my.returnInstalledPackagesByDep(req_spec) if not pkgs: e,m,u = my.pkgSack.matchPackageNames([req_spec]) pkgs.extend(e) pkgs.extend(m) e,m,u = my.rpmdb.matchPackageNames([req_spec]) pkgs.extend(e) pkgs.extend(m) except Exception, e: module.fail_json(msg="Failure talking to yum: %s" % e) return set([ po_to_nevra(p) for p in pkgs ]) else: myrepoq = list(repoq) for repoid in dis_repos: r_cmd = ['--disablerepo', repoid] myrepoq.extend(r_cmd) for repoid in en_repos: r_cmd = ['--enablerepo', repoid] myrepoq.extend(r_cmd) cmd = myrepoq + ["--qf", qf, "--whatprovides", req_spec] 
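        # Run the --whatprovides query first and a plain name query right
        # after, then take the union of both result sets so that virtual
        # provides and literal package names both match.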
rc,out,err = module.run_command(cmd) cmd = myrepoq + ["--qf", qf, req_spec] rc2,out2,err2 = module.run_command(cmd) if rc == 0 and rc2 == 0: out += out2 pkgs = set([ p for p in out.split('\n') if p.strip() ]) if not pkgs: pkgs = is_installed(module, repoq, req_spec, conf_file, qf=qf) return pkgs else: module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2)) return [] def transaction_exists(pkglist): """ checks the package list to see if any packages are involved in an incomplete transaction """ conflicts = [] if not transaction_helpers: return conflicts # first, we create a list of the package 'nvreas' # so we can compare the pieces later more easily pkglist_nvreas = [] for pkg in pkglist: pkglist_nvreas.append(splitFilename(pkg)) # next, we build the list of packages that are # contained within an unfinished transaction unfinished_transactions = find_unfinished_transactions() for trans in unfinished_transactions: steps = find_ts_remaining(trans) for step in steps: # the action is install/erase/etc., but we only # care about the package spec contained in the step (action, step_spec) = step (n,v,r,e,a) = splitFilename(step_spec) # and see if that spec is in the list of packages # requested for installation/updating for pkg in pkglist_nvreas: # if the name and arch match, we're going to assume # this package is part of a pending transaction # the label is just for display purposes label = "%s-%s" % (n,a) if n == pkg[0] and a == pkg[4]: if label not in conflicts: conflicts.append("%s-%s" % (n,a)) break return conflicts def local_nvra(module, path): """return nvra of a local rpm passed in""" cmd = ['/bin/rpm', '-qp' ,'--qf', '%{name}-%{version}-%{release}.%{arch}\n', path ] rc, out, err = module.run_command(cmd) if rc != 0: return None nvra = out.split('\n')[0] return nvra def pkg_to_dict(pkgstr): if pkgstr.strip(): n,e,v,r,a,repo = pkgstr.split('|') else: return {'error_parsing': pkgstr} d = { 'name':n, 'arch':a, 'epoch':e, 'release':r, 'version':v, 'repo':repo, 'nevra': '%s:%s-%s-%s.%s' % (e,n,v,r,a) } if repo == 'installed': d['yumstate'] = 'installed' else: d['yumstate'] = 'available' return d def repolist(module, repoq, qf="%{repoid}"): cmd = repoq + ["--qf", qf, "-a"] rc,out,err = module.run_command(cmd) ret = [] if rc == 0: ret = set([ p for p in out.split('\n') if p.strip() ]) return ret def list_stuff(module, conf_file, stuff): qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|%{repoid}" repoq = [repoquery, '--show-duplicates', '--plugins', '--quiet', '-q'] if conf_file and os.path.exists(conf_file): repoq += ['-c', conf_file] if stuff == 'installed': return [ pkg_to_dict(p) for p in is_installed(module, repoq, '-a', conf_file, qf=qf) if p.strip() ] elif stuff == 'updates': return [ pkg_to_dict(p) for p in is_update(module, repoq, '-a', conf_file, qf=qf) if p.strip() ] elif stuff == 'available': return [ pkg_to_dict(p) for p in is_available(module, repoq, '-a', conf_file, qf=qf) if p.strip() ] elif stuff == 'repos': return [ dict(repoid=name, state='enabled') for name in repolist(module, repoq) if name.strip() ] else: return [ pkg_to_dict(p) for p in is_installed(module, repoq, stuff, conf_file, qf=qf) + is_available(module, repoq, stuff, conf_file, qf=qf) if p.strip() ] def install(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos): res = {} res['results'] = [] res['msg'] = '' res['rc'] = 0 res['changed'] = False for spec in items: pkg = None # check if pkgspec is installed (if possible for idempotence) # localpkg if spec.endswith('.rpm') and '://' 
not in spec: # get the pkg name-v-r.arch if not os.path.exists(spec): res['msg'] += "No Package file matching '%s' found on system" % spec module.fail_json(**res) nvra = local_nvra(module, spec) # look for them in the rpmdb if is_installed(module, repoq, nvra, conf_file, en_repos=en_repos, dis_repos=dis_repos): # if they are there, skip it continue pkg = spec # URL elif '://' in spec: pkg = spec #groups :( elif spec.startswith('@'): # complete wild ass guess b/c it's a group pkg = spec # range requires or file-requires or pkgname :( else: # most common case is the pkg is already installed and done # short circuit all the bs - and search for it as a pkg in is_installed # if you find it then we're done if not set(['*','?']).intersection(set(spec)): pkgs = is_installed(module, repoq, spec, conf_file, en_repos=en_repos, dis_repos=dis_repos, is_pkg=True) if pkgs: res['results'].append('%s providing %s is already installed' % (pkgs[0], spec)) continue # look up what pkgs provide this pkglist = what_provides(module, repoq, spec, conf_file, en_repos=en_repos, dis_repos=dis_repos) if not pkglist: res['msg'] += "No Package matching '%s' found available, installed or updated" % spec module.fail_json(**res) # if any of the packages are involved in a transaction, fail now # so that we don't hang on the yum operation later conflicts = transaction_exists(pkglist) if len(conflicts) > 0: res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts) module.fail_json(**res) # if any of them are installed # then nothing to do found = False for this in pkglist: if is_installed(module, repoq, this, conf_file, en_repos=en_repos, dis_repos=dis_repos, is_pkg=True): found = True res['results'].append('%s providing %s is already installed' % (this, spec)) break # if the version of the pkg you have installed is not in ANY repo, but there are # other versions in the repos (both higher and lower) then the previous checks won't work. # so we check one more time. This really only works for pkgname - not for file provides or virt provides # but virt provides should be all caught in what_provides on its own. # highly irritating if not found: if is_installed(module, repoq, spec, conf_file, en_repos=en_repos, dis_repos=dis_repos): found = True res['results'].append('package providing %s is already installed' % (spec)) if found: continue # if not - then pass in the spec as what to install # we could get here if nothing provides it but that's not # the error we're catching here pkg = spec cmd = yum_basecmd + ['install', pkg] if module.check_mode: module.exit_json(changed=True) changed = True rc, out, err = module.run_command(cmd) # Fail on invalid urls: if (rc == 1 and '://' in spec and ('No package %s available.' % spec in out or 'Cannot open: %s. Skipping.' % spec in err)): err = 'Package at %s could not be installed' % spec module.fail_json(changed=False,msg=err,rc=1) elif (rc != 0 and 'Nothing to do' in err) or 'Nothing to do' in out: # avoid failing in the 'Nothing To Do' case # this may happen with an URL spec. # for an already installed group, # we get rc = 0 and 'Nothing to do' in out, not in err. 
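            # Normalize the result so the task reports ok (unchanged) instead of failed.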
            rc = 0
            err = ''
            out = '%s: Nothing to do' % spec
            changed = False

        res['rc'] += rc
        res['results'].append(out)
        res['msg'] += err

        # FIXME - if we did an install - go and check the rpmdb to see if it actually installed
        # look for the pkg in rpmdb
        # look for the pkg via obsoletes

        # accumulate any changes
        res['changed'] |= changed

    module.exit_json(**res)

def remove(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos):

    res = {}
    res['results'] = []
    res['msg'] = ''
    res['changed'] = False
    res['rc'] = 0

    for pkg in items:
        is_group = False
        # group remove - this is doom on a stick
        if pkg.startswith('@'):
            is_group = True
        else:
            if not is_installed(module, repoq, pkg, conf_file, en_repos=en_repos, dis_repos=dis_repos):
                res['results'].append('%s is not installed' % pkg)
                continue

        # run an actual yum transaction
        cmd = yum_basecmd + ["remove", pkg]

        if module.check_mode:
            module.exit_json(changed=True)

        rc, out, err = module.run_command(cmd)

        res['rc'] += rc
        res['results'].append(out)
        res['msg'] += err

        # compile the results into one batch. If anything is changed
        # then mark changed
        # at the end - if we've end up failed then fail out of the rest
        # of the process

        # at this point we should check to see if the pkg is no longer present

        if not is_group:  # we can't sensibly check for a group being uninstalled reliably
            # look to see if the pkg shows up from is_installed. If it doesn't
            if not is_installed(module, repoq, pkg, conf_file, en_repos=en_repos, dis_repos=dis_repos):
                res['changed'] = True
            else:
                module.fail_json(**res)

        if rc != 0:
            module.fail_json(**res)

    module.exit_json(**res)

def latest(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos):

    res = {}
    res['results'] = []
    res['msg'] = ''
    res['changed'] = False
    res['rc'] = 0

    for spec in items:

        pkg = None
        basecmd = 'update'
        cmd = ''
        # groups, again
        if spec.startswith('@'):
            pkg = spec

        elif spec == '*':  # update all
            # use check-update to see if there is any need
            rc,out,err = module.run_command(yum_basecmd + ['check-update'])
            if rc == 100:
                cmd = yum_basecmd + [basecmd]
            else:
                res['results'].append('All packages up to date')
                continue

        # dep/pkgname - find it
        else:
            if is_installed(module, repoq, spec, conf_file, en_repos=en_repos, dis_repos=dis_repos):
                basecmd = 'update'
            else:
                basecmd = 'install'

            pkglist = what_provides(module, repoq, spec, conf_file, en_repos=en_repos, dis_repos=dis_repos)
            if not pkglist:
                res['msg'] += "No Package matching '%s' found available, installed or updated" % spec
                module.fail_json(**res)

            nothing_to_do = True
            for this in pkglist:
                if basecmd == 'install' and is_available(module, repoq, this, conf_file, en_repos=en_repos, dis_repos=dis_repos):
                    nothing_to_do = False
                    break

                if basecmd == 'update' and is_update(module, repoq, this, conf_file, en_repos=en_repos, dis_repos=dis_repos):
                    nothing_to_do = False
                    break

            if nothing_to_do:
                res['results'].append("All packages providing %s are up to date" % spec)
                continue

            # if any of the packages are involved in a transaction, fail now
            # so that we don't hang on the yum operation later
            conflicts = transaction_exists(pkglist)
            if len(conflicts) > 0:
                res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
                module.fail_json(**res)

            pkg = spec
        if not cmd:
            cmd = yum_basecmd + [basecmd, pkg]

        if module.check_mode:
            return module.exit_json(changed=True)

        rc, out, err = module.run_command(cmd)

        res['rc'] += rc
        res['results'].append(out)
        res['msg'] += err

        # FIXME if it is - update it and check to see if it applied
        # check to see if there is no longer an update available for
the pkgspec if rc: res['failed'] = True else: res['changed'] = True module.exit_json(**res) def ensure(module, state, pkgspec, conf_file, enablerepo, disablerepo, disable_gpg_check): # take multiple args comma separated items = pkgspec.split(',') # need debug level 2 to get 'Nothing to do' for groupinstall. yum_basecmd = [yumbin, '-d', '2', '-y'] if not repoquery: repoq = None else: repoq = [repoquery, '--show-duplicates', '--plugins', '--quiet', '-q'] if conf_file and os.path.exists(conf_file): yum_basecmd += ['-c', conf_file] if repoq: repoq += ['-c', conf_file] dis_repos =[] en_repos = [] if disablerepo: dis_repos = disablerepo.split(',') if enablerepo: en_repos = enablerepo.split(',') for repoid in dis_repos: r_cmd = ['--disablerepo=%s' % repoid] yum_basecmd.extend(r_cmd) for repoid in en_repos: r_cmd = ['--enablerepo=%s' % repoid] yum_basecmd.extend(r_cmd) if state in ['installed', 'present', 'latest']: my = yum_base(conf_file) try: for r in dis_repos: my.repos.disableRepo(r) current_repos = my.repos.repos.keys() for r in en_repos: try: my.repos.enableRepo(r) new_repos = my.repos.repos.keys() for i in new_repos: if not i in current_repos: rid = my.repos.getRepo(i) a = rid.repoXML.repoid current_repos = new_repos except yum.Errors.YumBaseError, e: module.fail_json(msg="Error setting/accessing repo %s: %s" % (r, e)) except yum.Errors.YumBaseError, e: module.fail_json(msg="Error accessing repos: %s" % e) if state in ['installed', 'present']: if disable_gpg_check: yum_basecmd.append('--nogpgcheck') install(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos) elif state in ['removed', 'absent']: remove(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos) elif state == 'latest': if disable_gpg_check: yum_basecmd.append('--nogpgcheck') latest(module, items, repoq, yum_basecmd, conf_file, en_repos, dis_repos) # should be caught by AnsibleModule argument_spec return dict(changed=False, failed=True, results='', errors='unexpected state') def main(): # state=installed name=pkgspec # state=removed name=pkgspec # state=latest name=pkgspec # # informational commands: # list=installed # list=updates # list=available # list=repos # list=pkgspec module = AnsibleModule( argument_spec = dict( name=dict(aliases=['pkg']), # removed==absent, installed==present, these are accepted as aliases state=dict(default='installed', choices=['absent','present','installed','removed','latest']), enablerepo=dict(), disablerepo=dict(), list=dict(), conf_file=dict(default=None), disable_gpg_check=dict(required=False, default="no", type='bool'), # this should not be needed, but exists as a failsafe install_repoquery=dict(required=False, default="yes", type='bool'), ), required_one_of = [['name','list']], mutually_exclusive = [['name','list']], supports_check_mode = True ) # this should not be needed, but exists as a failsafe params = module.params if params['install_repoquery'] and not repoquery and not module.check_mode: install_yum_utils(module) if params['list']: if not repoquery: module.fail_json(msg="repoquery is required to use list= with this module. 
Please install the yum-utils package.") results = dict(results=list_stuff(module, params['conf_file'], params['list'])) module.exit_json(**results) else: pkg = params['name'] state = params['state'] enablerepo = params.get('enablerepo', '') disablerepo = params.get('disablerepo', '') disable_gpg_check = params['disable_gpg_check'] res = ensure(module, state, pkg, params['conf_file'], enablerepo, disablerepo, disable_gpg_check) module.fail_json(msg="we should never get here unless this all failed", **res) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/rhn_channel0000664000000000000000000001057012316627017017440 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' --- module: rhn_channel short_description: Adds or removes Red Hat software channels description: - Adds or removes Red Hat software channels version_added: "1.1" author: Vincent Van der Kussen notes: - this module fetches the system id from RHN. requirements: - none options: name: description: - name of the software channel required: true default: null sysname: description: - name of the system as it is known in RHN/Satellite required: true default: null state: description: - whether the channel should be present or not required: false default: present url: description: - The full url to the RHN/Satellite api required: true user: description: - RHN/Satellite user required: true password: description: - "the user's password" required: true ''' EXAMPLES = ''' - rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme ''' import xmlrpclib from operator import itemgetter import re # ------------------------------------------------------- # def get_systemid(client, session, sysname): systems = client.system.listUserSystems(session) for system in systems: if system.get('name') == sysname: idres = system.get('id') idd = int(idres) return idd # ------------------------------------------------------- # # unused: # #def get_localsystemid(): # f = open("/etc/sysconfig/rhn/systemid", "r") # content = f.read() # loc_id = re.search(r'\b(ID-)(\d{10})' ,content) # return loc_id.group(2) # ------------------------------------------------------- # def subscribe_channels(channels, client, session, sysname, sys_id): c = base_channels(client, session, sys_id) c.append(channels) return client.channel.software.setSystemChannels(session, sys_id, c) # ------------------------------------------------------- # def unsubscribe_channels(channels, client, session, sysname, sys_id): c = base_channels(client, session, sys_id) c.remove(channels) return client.channel.software.setSystemChannels(session, sys_id, c) # ------------------------------------------------------- # def base_channels(client, session, sys_id): basechan = client.channel.software.listSystemChannels(session, sys_id) try: chans = [item['label'] for item in basechan] except KeyError: chans = [item['channel_label'] for item in basechan] return chans # ------------------------------------------------------- # def main(): module = AnsibleModule( argument_spec = dict( state = dict(default='present', choices=['present', 'absent']), name = dict(required=True), sysname = dict(required=True), url = dict(required=True), user = dict(required=True), password = dict(required=True, aliases=['pwd']), ) # supports_check_mode=True ) state = module.params['state'] channelname = module.params['name'] systname = module.params['sysname'] saturl = module.params['url'] user = 
module.params['user'] password = module.params['password'] #initialize connection client = xmlrpclib.Server(saturl, verbose=0) session = client.auth.login(user, password) # get systemid sys_id = get_systemid(client, session, systname) # get channels for system chans = base_channels(client, session, sys_id) if state == 'present': if channelname in chans: module.exit_json(changed=False, msg="Channel %s already exists" % channelname) else: subscribe_channels(channelname, client, session, systname, sys_id) module.exit_json(changed=True, msg="Channel %s added" % channelname) if state == 'absent': if not channelname in chans: module.exit_json(changed=False, msg="Not subscribed to channel %s." % channelname) else: unsubscribe_channels(channelname, client, session, systname, sys_id) module.exit_json(changed=True, msg="Channel %s removed" % channelname) client.auth.logout(session) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/easy_install0000664000000000000000000001416512316627017017654 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Matt Wright # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # import tempfile import os.path DOCUMENTATION = ''' --- module: easy_install short_description: Installs Python libraries description: - Installs Python libraries, optionally in a I(virtualenv) version_added: "0.7" options: name: description: - A Python library name required: true default: null aliases: [] virtualenv: description: - an optional I(virtualenv) directory path to install into. If the I(virtualenv) does not exist, it is created automatically required: false default: null virtualenv_site_packages: version_added: "1.1" description: - Whether the virtual environment will inherit packages from the global site-packages directory. Note that if this setting is changed on an already existing virtual environment it will not have any effect, the environment must be deleted and newly created. required: false default: "no" choices: [ "yes", "no" ] virtualenv_command: version_added: "1.1" description: - The command to create the virtual environment with. For example C(pyvenv), C(virtualenv), C(virtualenv2). required: false default: virtualenv executable: description: - The explicit executable or a pathname to the executable to be used to run easy_install for a specific version of Python installed in the system. For example C(easy_install-3.3), if there are both Python 2.7 and 3.3 installations in the system and you want to run easy_install for the Python 3.3 installation. version_added: "1.3" required: false default: null notes: - Please note that the M(easy_install) module can only install Python libraries. Thus this module is not able to remove libraries. It is generally recommended to use the M(pip) module which you can first install using M(easy_install). 
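    - Whether a library is already installed is detected heuristically, by running
      C(easy_install --dry-run) and checking whether the output reports C(Reading)
      or C(Downloading) activity.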
- Also note that I(virtualenv) must be installed on the remote host if the C(virtualenv) parameter is specified. requirements: [ "virtualenv" ] author: Matt Wright ''' EXAMPLES = ''' # Examples from Ansible Playbooks - easy_install: name=pip # Install Bottle into the specified virtualenv. - easy_install: name=bottle virtualenv=/webapps/myapp/venv ''' def _is_package_installed(module, name, easy_install): cmd = '%s --dry-run %s' % (easy_install, name) rc, status_stdout, status_stderr = module.run_command(cmd) return not ('Reading' in status_stdout or 'Downloading' in status_stdout) def _get_easy_install(module, env=None, executable=None): candidate_easy_inst_basenames = ['easy_install'] easy_install = None if executable is not None: if os.path.isabs(executable): easy_install = executable else: candidate_easy_inst_basenames.insert(0, executable) if easy_install is None: if env is None: opt_dirs = [] else: # Try easy_install with the virtualenv directory first. opt_dirs = ['%s/bin' % env] for basename in candidate_easy_inst_basenames: easy_install = module.get_bin_path(basename, False, opt_dirs) if easy_install is not None: break # easy_install should have been found by now. The final call to # get_bin_path will trigger fail_json. if easy_install is None: basename = candidate_easy_inst_basenames[0] easy_install = module.get_bin_path(basename, True, opt_dirs) return easy_install def main(): arg_spec = dict( name=dict(required=True), virtualenv=dict(default=None, required=False), virtualenv_site_packages=dict(default='no', type='bool'), virtualenv_command=dict(default='virtualenv', required=False), executable=dict(default='easy_install', required=False), ) module = AnsibleModule(argument_spec=arg_spec, supports_check_mode=True) name = module.params['name'] env = module.params['virtualenv'] executable = module.params['executable'] site_packages = module.params['virtualenv_site_packages'] virtualenv_command = module.params['virtualenv_command'] rc = 0 err = '' out = '' if env: virtualenv = module.get_bin_path(virtualenv_command, True) if not os.path.exists(os.path.join(env, 'bin', 'activate')): if module.check_mode: module.exit_json(changed=True) command = '%s %s' % (virtualenv, env) if site_packages: command += ' --system-site-packages' cwd = tempfile.gettempdir() rc_venv, out_venv, err_venv = module.run_command(command, cwd=cwd) rc += rc_venv out += out_venv err += err_venv easy_install = _get_easy_install(module, env, executable) cmd = None changed = False installed = _is_package_installed(module, name, easy_install) if not installed: if module.check_mode: module.exit_json(changed=True) cmd = '%s %s' % (easy_install, name) rc_easy_inst, out_easy_inst, err_easy_inst = module.run_command(cmd) rc += rc_easy_inst out += out_easy_inst err += err_easy_inst changed = True if rc != 0: module.fail_json(msg=err, cmd=cmd) module.exit_json(changed=changed, binary=easy_install, name=name, virtualenv=env) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/portinstall0000664000000000000000000001540512316627017017536 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, berenddeboer # Written by berenddeboer # Based on pkgng module written by bleader # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: portinstall short_description: Installing packages from FreeBSD's ports system description: - Manage packages for FreeBSD using 'portinstall'. version_added: "1.3" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent' ] required: false default: present use_packages: description: - use packages instead of ports whenever available choices: [ 'yes', 'no' ] required: false default: yes author: berenddeboer ''' EXAMPLES = ''' # Install package foo - portinstall: name=foo state=present # Install package security/cyrus-sasl2-saslauthd - portinstall: name=security/cyrus-sasl2-saslauthd state=present # Remove packages foo and bar - portinstall: name=foo,bar state=absent ''' import json import shlex import os import sys def query_package(module, name): pkg_info_path = module.get_bin_path('pkg_info', False) # Assume that if we have pkg_info, we haven't upgraded to pkgng if pkg_info_path: pkgng = False pkg_glob_path = module.get_bin_path('pkg_glob', True) rc, out, err = module.run_command("%s -e `pkg_glob %s`" % (pkg_info_path, pipes.quote(name)), use_unsafe_shell=True) else: pkgng = True pkg_info_path = module.get_bin_path('pkg', True) pkg_info_path = pkg_info_path + " info" rc, out, err = module.run_command("%s %s" % (pkg_info_path, name)) found = rc == 0 if not found: # databases/mysql55-client installs as mysql-client, so try solving # that the ugly way. 
Pity FreeBSD doesn't have a fool proof way of checking # some package is installed name_without_digits = re.sub('[0-9]', '', name) if name != name_without_digits: if pkgng: rc, out, err = module.run_command("%s %s" % (pkg_info_path, name_without_digits)) else: rc, out, err = module.run_command("%s %s" % (pkg_info_path, name_without_digits)) found = rc == 0 return found def matching_packages(module, name): ports_glob_path = module.get_bin_path('ports_glob', True) rc, out, err = module.run_command("%s %s | wc" % (ports_glob_path, name)) parts = out.split() occurrences = int(parts[0]) if occurrences == 0: name_without_digits = re.sub('[0-9]', '', name) if name != name_without_digits: rc, out, err = module.run_command("%s %s | wc" % (ports_glob_path, name_without_digits)) parts = out.split() occurrences = int(parts[0]) return occurrences def remove_packages(module, packages): remove_c = 0 pkg_glob_path = module.get_bin_path('pkg_glob', True) # If pkg_delete not found, we assume pkgng pkg_delete_path = module.get_bin_path('pkg_delete', False) if not pkg_delete_path: pkg_delete_path = module.get_bin_path('pkg', True) pkg_delete_path = pkg_delete_path + " delete -y" # Using a for loop incase of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove if not query_package(module, package): continue rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, pipes.quote(package)), use_unsafe_shell=True) if query_package(module, package): name_without_digits = re.sub('[0-9]', '', package) rc, out, err = module.run_command("%s `%s %s`" % (pkg_delete_path, pkg_glob_path, pipes.quote(name_without_digits)),use_unsafe_shell=True) if query_package(module, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) remove_c += 1 if remove_c > 0: module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, packages, use_packages): install_c = 0 # If portinstall not found, automagically install portinstall_path = module.get_bin_path('portinstall', False) if not portinstall_path: pkg_path = module.get_bin_path('pkg', False) if pkg_path: module.run_command("pkg install -y portupgrade") portinstall_path = module.get_bin_path('portinstall', True) if use_packages == "yes": portinstall_params="--use-packages" else: portinstall_params="" for package in packages: if query_package(module, package): continue # TODO: check how many match matches = matching_packages(module, package) if matches == 1: rc, out, err = module.run_command("%s --batch %s %s" % (portinstall_path, portinstall_params, package)) if not query_package(module, package): module.fail_json(msg="failed to install %s: %s" % (package, out)) elif matches == 0: module.fail_json(msg="no matches for package %s" % (package)) else: module.fail_json(msg="%s matches found for package name %s" % (matches, package)) install_c += 1 if install_c > 0: module.exit_json(changed=True, msg="present %s package(s)" % (install_c)) module.exit_json(changed=False, msg="package(s) already present") def main(): module = AnsibleModule( argument_spec = dict( state = dict(default="present", choices=["present","absent"]), name = dict(aliases=["pkg"], required=True), use_packages = dict(type='bool', default='yes'))) p = module.params pkgs = p["name"].split(",") if p["state"] == "present": install_packages(module, pkgs, p["use_packages"]) elif p["state"] == "absent": 
remove_packages(module, pkgs) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/homebrew0000664000000000000000000001217212316627017016771 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Andrew Dunham # Based on macports (Jimmy Tang ) # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: homebrew author: Andrew Dunham short_description: Package manager for Homebrew description: - Manages Homebrew packages version_added: "1.4" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent' ] required: false default: present update_homebrew: description: - update homebrew itself first required: false default: "no" choices: [ "yes", "no" ] install_options: description: - options flags to install a package required: false default: null notes: [] ''' EXAMPLES = ''' - homebrew: name=foo state=present - homebrew: name=foo state=present update_homebrew=yes - homebrew: name=foo state=absent - homebrew: name=foo,bar state=absent - homebrew: name=foo state=present install_options=with-baz,enable-debug ''' def update_homebrew(module, brew_path): """ Updates packages list. """ rc, out, err = module.run_command("%s update" % brew_path) if rc != 0: module.fail_json(msg="could not update homebrew") def query_package(module, brew_path, name, state="present"): """ Returns whether a package is installed or not. """ if state == "present": rc, out, err = module.run_command("%s list %s" % (brew_path, name)) if rc == 0: return True return False def remove_packages(module, brew_path, packages): """ Uninstalls one or more packages if installed. """ removed_count = 0 # Using a for loop incase of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove. if not query_package(module, brew_path, package): continue if module.check_mode: module.exit_json(changed=True) rc, out, err = module.run_command([brew_path, 'remove', package]) if query_package(module, brew_path, package): module.fail_json(msg="failed to remove %s: %s" % (package, out.strip())) removed_count += 1 if removed_count > 0: module.exit_json(changed=True, msg="removed %d package(s)" % removed_count) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, brew_path, packages, options): """ Installs one or more packages if not already installed. 
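In check mode, a pending install is reported as a change without modifying the system.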
""" installed_count = 0 for package in packages: if query_package(module, brew_path, package): continue if module.check_mode: module.exit_json(changed=True) cmd = [brew_path, 'install', package] if options: cmd.extend(options) rc, out, err = module.run_command(cmd) if not query_package(module, brew_path, package): module.fail_json(msg="failed to install %s: '%s' %s" % (package, cmd, out.strip())) installed_count += 1 if installed_count > 0: module.exit_json(changed=True, msg="installed %d package(s)" % (installed_count,)) module.exit_json(changed=False, msg="package(s) already present") def generate_options_string(install_options): if install_options is None: return None options = [] for option in install_options: options.append('--%s' % option) return options def main(): module = AnsibleModule( argument_spec = dict( name = dict(aliases=["pkg"], required=True), state = dict(default="present", choices=["present", "installed", "absent", "removed"]), update_homebrew = dict(default="no", aliases=["update-brew"], type='bool'), install_options = dict(default=None, aliases=["options"], type='list') ), supports_check_mode=True ) brew_path = module.get_bin_path('brew', True, ['/usr/local/bin']) p = module.params if p["update_homebrew"]: update_homebrew(module, brew_path) pkgs = p["name"].split(",") if p["state"] in ["present", "installed"]: opt = generate_options_string(p["install_options"]) install_packages(module, brew_path, pkgs, opt) elif p["state"] in ["absent", "removed"]: remove_packages(module, brew_path, pkgs) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/svr4pkg0000664000000000000000000001356112316627017016564 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Boyd Adamson # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: svr4pkg short_description: Manage Solaris SVR4 packages description: - Manages SVR4 packages on Solaris 10 and 11. - These were the native packages on Solaris <= 10 and are available as a legacy feature in Solaris 11. - Note that this is a very basic packaging system. It will not enforce dependencies on install or remove. version_added: "0.9" author: Boyd Adamson options: name: description: - Package name, e.g. C(SUNWcsr) required: true state: description: - Whether to install (C(present)), or remove (C(absent)) a package. - If the package is to be installed, then I(src) is required. - The SVR4 package system doesn't provide an upgrade operation. You need to uninstall the old, then install the new package. required: true choices: ["present", "absent"] src: description: - Specifies the location to install the package from. Required when C(state=present). - "Can be any path acceptable to the C(pkgadd) command's C(-d) option. e.g.: C(somefile.pkg), C(/dir/with/pkgs), C(http:/server/mypkgs.pkg)." - If using a file or directory, they must already be accessible by the host. 
See the M(copy) module for a way to get them there. proxy: description: - HTTP[s] proxy to be used if C(src) is a URL. response_file: description: - Specifies the location of a response file to be used if package expects input on install. (added in Ansible 1.4) required: false ''' EXAMPLES = ''' # Install a package from an already copied file - svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present # Install a package directly from an http site - svr4pkg: name=CSWpkgutil src=http://get.opencsw.org/now state=present # Install a package with a response file - svr4pkg: name=CSWggrep src=/tmp/third-party.pkg response_file=/tmp/ggrep.response state=present # Ensure that a package is not installed. - svr4pkg: name=SUNWgnome-sound-recorder state=absent ''' import os import tempfile def package_installed(module, name): cmd = [module.get_bin_path('pkginfo', True)] cmd.append('-q') cmd.append(name) rc, out, err = module.run_command(' '.join(cmd)) if rc == 0: return True else: return False def create_admin_file(): (desc, filename) = tempfile.mkstemp(prefix='ansible_svr4pkg', text=True) fullauto = ''' mail= instance=unique partial=nocheck runlevel=quit idepend=nocheck rdepend=nocheck space=quit setuid=nocheck conflict=nocheck action=nocheck networktimeout=60 networkretries=3 authentication=quit keystore=/var/sadm/security proxy= basedir=default ''' os.write(desc, fullauto) os.close(desc) return filename def run_command(module, cmd): progname = cmd[0] cmd[0] = module.get_bin_path(progname, True) return module.run_command(cmd) def package_install(module, name, src, proxy, response_file): adminfile = create_admin_file() cmd = [ 'pkgadd', '-na', adminfile, '-d', src ] if proxy is not None: cmd += [ '-x', proxy ] if response_file is not None: cmd += [ '-r', response_file ] cmd.append(name) (rc, out, err) = run_command(module, cmd) os.unlink(adminfile) return (rc, out, err) def package_uninstall(module, name, src): adminfile = create_admin_file() cmd = [ 'pkgrm', '-na', adminfile, name] (rc, out, err) = run_command(module, cmd) os.unlink(adminfile) return (rc, out, err) def main(): module = AnsibleModule( argument_spec = dict( name = dict(required = True), state = dict(required = True, choices=['present', 'absent']), src = dict(default = None), proxy = dict(default = None), response_file = dict(default = None) ), supports_check_mode=True ) state = module.params['state'] name = module.params['name'] src = module.params['src'] proxy = module.params['proxy'] response_file = module.params['response_file'] rc = None out = '' err = '' result = {} result['name'] = name result['state'] = state if state == 'present': if src is None: module.fail_json(name=name, msg="src is required when state=present") if not package_installed(module, name): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = package_install(module, name, src, proxy, response_file) # Stdout is normally empty but for some packages can be # very long and is not often useful if len(out) > 75: out = out[:75] + '...' 
elif state == 'absent': if package_installed(module, name): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = package_uninstall(module, name, src) out = out[:75] if rc is None: result['changed'] = False else: result['changed'] = True if out: result['stdout'] = out if err: result['stderr'] = err module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/rpm_key0000664000000000000000000001632412316627017016632 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- """ Ansible module to import third-party repo keys to your rpm db (c) 2013, Héctor Acosta This file is part of Ansible Ansible is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Ansible is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Ansible. If not, see <http://www.gnu.org/licenses/>. """ DOCUMENTATION = ''' --- module: rpm_key author: Hector Acosta short_description: Adds or removes a gpg key from the rpm db description: - Adds or removes (rpm --import) a gpg key to/from your rpm database. version_added: "1.3" options: key: required: true default: null aliases: [] description: - Key that will be modified. Can be a url, a file, or a keyid if the key already exists in the database. state: required: false default: "present" choices: [present, absent] description: - Whether the key will be imported or removed from the rpm db. validate_certs: description: - If C(no) and the C(key) is a url starting with https, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] ''' EXAMPLES = ''' # Example action to import a key from a url - rpm_key: state=present key=http://apt.sw.be/RPM-GPG-KEY.dag.txt # Example action to import a key from a file - rpm_key: state=present key=/path/to/key.gpg # Example action to ensure a key is not present in the db - rpm_key: state=absent key=DEADB33F ''' import syslog import os.path import re import tempfile # Attempt to download at most 8192 bytes. # Should be more than enough for all keys MAXBYTES = 8192 def is_pubkey(string): """Verifies if string is a pubkey""" pgp_regex = ".*?(-----BEGIN PGP PUBLIC KEY BLOCK-----.*?-----END PGP PUBLIC KEY BLOCK-----).*" return re.match(pgp_regex, string, re.DOTALL) class RpmKey: def __init__(self, module): self.syslogging = False # If the key is a url, we need to check if it's present to be idempotent, # to do that, we need to check the keyid, which we can get from the armor.
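# A key can be given three ways: a URL (fetched and inspected), a bare
# keyid, or a path to a local keyfile. All three are reduced to a
# normalized keyid below.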
keyfile = None should_cleanup_keyfile = False self.module = module self.rpm = self.module.get_bin_path('rpm', True) state = module.params['state'] key = module.params['key'] if '://' in key: keyfile = self.fetch_key(key) keyid = self.getkeyid(keyfile) should_cleanup_keyfile = True elif self.is_keyid(key): keyid = key elif os.path.isfile(key): keyfile = key keyid = self.getkeyid(keyfile) else: self.module.fail_json(msg="Not a valid key %s" % key) keyid = self.normalize_keyid(keyid) if state == 'present': if self.is_key_imported(keyid): module.exit_json(changed=False) else: if not keyfile: self.module.fail_json(msg="When importing a key, a valid file must be given") self.import_key(keyfile, dryrun=module.check_mode) if should_cleanup_keyfile: self.module.cleanup(keyfile) module.exit_json(changed=True) else: if self.is_key_imported(keyid): self.drop_key(keyid, dryrun=module.check_mode) module.exit_json(changed=True) else: module.exit_json(changed=False) def fetch_key(self, url, maxbytes=MAXBYTES): """Downloads a key from url, returns a valid path to a gpg key""" try: rsp, info = fetch_url(self.module, url) key = rsp.read(maxbytes) if not is_pubkey(key): self.module.fail_json(msg="Not a public key: %s" % url) tmpfd, tmpname = tempfile.mkstemp() tmpfile = os.fdopen(tmpfd, "w+b") tmpfile.write(key) tmpfile.close() return tmpname except urllib2.URLError, e: self.module.fail_json(msg=str(e)) def normalize_keyid(self, keyid): """Ensure a keyid has no leading 0x and no surrounding whitespace, and is lowercase""" ret = keyid.strip().lower() if ret.startswith(('0x', '0X')): return ret[2:] else: return ret def getkeyid(self, keyfile): gpg = self.module.get_bin_path('gpg', True) stdout, stderr = self.execute_command([gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', '--list-packets', keyfile]) for line in stdout.splitlines(): line = line.strip() if line.startswith('keyid:'): # We want just the last 8 characters of the keyid keyid = line.split(':')[1].strip()[8:] return keyid self.module.fail_json(msg="Unexpected gpg output") def is_keyid(self, keystr): """Verifies if a key, as provided by the user, is a keyid""" return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE) def execute_command(self, cmd): if self.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Command %s' % '|'.join(cmd)) rc, stdout, stderr = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg=stderr) return stdout, stderr def is_key_imported(self, keyid): stdout, stderr = self.execute_command([self.rpm, '-q', 'gpg-pubkey']) for line in stdout.splitlines(): line = line.strip() if not line: continue match = re.match('gpg-pubkey-([0-9a-f]+)-([0-9a-f]+)', line) if not match: self.module.fail_json(msg="rpm returned unexpected output [%s]" % line) else: if keyid == match.group(1): return True return False def import_key(self, keyfile, dryrun=False): if not dryrun: self.execute_command([self.rpm, '--import', keyfile]) def drop_key(self, key, dryrun=False): if not dryrun: self.execute_command([self.rpm, '--erase', '--allmatches', "gpg-pubkey-%s" % key]) def main(): module = AnsibleModule( argument_spec = dict( state=dict(default='present', choices=['present', 'absent'], type='str'), key=dict(required=True, type='str'), validate_certs=dict(default='yes', type='bool'), ), supports_check_mode=True ) RpmKey(module) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main()
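# ----------------------------------------------------------------------- #
# A minimal standalone sketch (not part of the original module) of the
# idempotency check rpm_key performs above: normalize a user-supplied keyid,
# then look for a matching gpg-pubkey-<keyid>-<timestamp> entry in the rpm
# database. The keyid in the usage comment is a hypothetical example.
import re
import subprocess

def normalize_keyid(keyid):
    # strip whitespace, drop an optional 0x prefix, lowercase
    ret = keyid.strip().lower()
    return ret[2:] if ret.startswith('0x') else ret

def key_is_imported(keyid):
    try:
        out = subprocess.check_output(['rpm', '-q', 'gpg-pubkey'])
    except subprocess.CalledProcessError:
        return False  # rpm -q exits non-zero when no gpg-pubkey entries exist
    for line in out.decode('utf-8', 'replace').splitlines():
        match = re.match(r'gpg-pubkey-([0-9a-f]+)-([0-9a-f]+)', line.strip())
        if match and match.group(1) == normalize_keyid(keyid):
            return True
    return False

# usage: key_is_imported('0xDEADB33F') is True only once the key is imported
# ----------------------------------------------------------------------- #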
ansible-1.5.4/library/packaging/apt_key0000664000000000000000000002035212316627017016614 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # (c) 2012, Jayson Vantuyl # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: apt_key author: Jayson Vantuyl & others version_added: "1.0" short_description: Add or remove an apt key description: - Add or remove an I(apt) key, optionally downloading it notes: - doesn't download the key unless it really needs it - as a sanity check, downloaded key id must match the one specified - best practice is to specify the key id and the url options: id: required: false default: none description: - identifier of key data: required: false default: none description: - keyfile contents file: required: false default: none description: - keyfile path keyring: required: false default: none description: - path to specific keyring file in /etc/apt/trusted.gpg.d version_added: "1.3" url: required: false default: none description: - url to retrieve key from. state: required: false choices: [ absent, present ] default: present description: - used to specify if key is being added or revoked validate_certs: description: - If C(no), SSL certificates for the target url will not be validated. This should only be used on personally controlled sites using self-signed certificates. 
required: false default: 'yes' choices: ['yes', 'no'] ''' EXAMPLES = ''' # Add an Apt signing key, uses whichever key is at the URL - apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present # Add an Apt signing key, will not download if present - apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present # Remove an Apt signing key, uses whichever key is at the URL - apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=absent # Remove an Apt-specific signing key, leading 0x is valid - apt_key: id=0x473041FA state=absent # Add a key from a file on the Ansible server - apt_key: data="{{ lookup('file', 'apt.gpg') }}" state=present # Add an Apt signing key to a specific keyring file - apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc keyring=/etc/apt/trusted.gpg.d/debian.gpg state=present ''' # FIXME: standardize into module_common from traceback import format_exc from re import compile as re_compile # FIXME: standardize into module_common from distutils.spawn import find_executable from os import environ from sys import exc_info import traceback match_key = re_compile("^gpg:.*key ([0-9a-fA-F]+):.*$") REQUIRED_EXECUTABLES=['gpg', 'grep', 'apt-key'] def check_missing_binaries(module): missing = [e for e in REQUIRED_EXECUTABLES if not find_executable(e)] if len(missing): module.fail_json(msg="binaries are missing", names=missing) def all_keys(module, keyring): if keyring: cmd = "apt-key --keyring %s list" % keyring else: cmd = "apt-key list" (rc, out, err) = module.run_command(cmd) results = [] lines = out.split('\n') for line in lines: if line.startswith("pub"): tokens = line.split() code = tokens[1] (len_type, real_code) = code.split("/") results.append(real_code) return results def key_present(module, key_id): (rc, out, err) = module.run_command("apt-key list 2>&1 | grep -i -q %s" % pipes.quote(key_id), use_unsafe_shell=True) return rc == 0 def download_key(module, url): # FIXME: move get_url code to common, allow for in-memory D/L, support proxies # and reuse here if url is None: module.fail_json(msg="needed a URL but was not specified") try: rsp, info = fetch_url(module, url) return rsp.read() except Exception: module.fail_json(msg="error getting key id from url", traceback=format_exc()) def add_key(module, keyfile, keyring, data=None): if data is not None: if keyring: cmd = "apt-key --keyring %s add -" % keyring else: cmd = "apt-key add -" (rc, out, err) = module.run_command(cmd, data=data, check_rc=True, binary_data=True) else: if keyring: cmd = "apt-key --keyring %s add %s" % (keyring, keyfile) else: cmd = "apt-key add %s" % (keyfile) (rc, out, err) = module.run_command(cmd, check_rc=True) return True def remove_key(module, key_id, keyring): # FIXME: use module.run_command, fail at point of error and don't discard useful stdin/stdout if keyring: cmd = 'apt-key --keyring %s del %s' % (keyring, key_id) else: cmd = 'apt-key del %s' % key_id (rc, out, err) = module.run_command(cmd, check_rc=True) return True def main(): module = AnsibleModule( argument_spec=dict( id=dict(required=False, default=None), url=dict(required=False), data=dict(required=False), file=dict(required=False), key=dict(required=False), keyring=dict(required=False), state=dict(required=False, choices=['present', 'absent'], default='present'), validate_certs=dict(default='yes', type='bool'), ), supports_check_mode=True ) key_id = module.params['id'] url = module.params['url'] data = module.params['data'] filename = 
module.params['file'] keyring = module.params['keyring'] state = module.params['state'] changed = False if key_id: try: _ = int(key_id, 16) if key_id.startswith('0x'): key_id = key_id[2:] except ValueError: module.fail_json(msg="Invalid key_id") # FIXME: I think we have a common facility for this, if not, want check_missing_binaries(module) keys = all_keys(module, keyring) return_values = {} if state == 'present': if key_id and key_id in keys: module.exit_json(changed=False) else: if not filename and not data: data = download_key(module, url) if key_id and key_id in keys: module.exit_json(changed=False) else: if module.check_mode: module.exit_json(changed=True) if filename: add_key(module, filename, keyring) else: add_key(module, "-", keyring, data) changed=False keys2 = all_keys(module, keyring) if len(keys) != len(keys2): changed=True if key_id and not key_id in keys2: module.fail_json(msg="key does not seem to have been added", id=key_id) module.exit_json(changed=changed) elif state == 'absent': if not key_id: module.fail_json(msg="key is required") if key_id in keys: if module.check_mode: module.exit_json(changed=True) if remove_key(module, key_id, keyring): changed=True else: # FIXME: module.fail_json or exit-json immediately at point of failure module.fail_json(msg="error removing key_id", **return_values) module.exit_json(changed=changed, **return_values) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/packaging/urpmi0000664000000000000000000001417412316627017016321 0ustar rootroot#!/usr/bin/python -tt # -*- coding: utf-8 -*- # (c) 2013, Philippe Makowski # Written by Philippe Makowski # Based on apt module written by Matthew Williams # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: urpmi short_description: Urpmi manager description: - Manages packages with I(urpmi) (such as for Mageia or Mandriva) version_added: "1.3.4" options: pkg: description: - name of package to install, upgrade or remove. required: true default: null state: description: - Indicates the desired package state required: false default: present choices: [ "absent", "present" ] update_cache: description: - update the package database first C(urpmi.update -a). required: false default: no choices: [ "yes", "no" ] no-suggests: description: - Corresponds to the C(--no-suggests) option for I(urpmi). required: false default: yes choices: [ "yes", "no" ] force: description: - Corresponds to the C(--force) option for I(urpmi).
required: false default: yes choices: [ "yes", "no" ] author: Philippe Makowski notes: [] ''' EXAMPLES = ''' # install package foo - urpmi: pkg=foo state=present # remove package foo - urpmi: pkg=foo state=absent # description: remove packages foo and bar - urpmi: pkg=foo,bar state=absent # description: update the package database (urpmi.update -a -q) and install bar (bar will be updated if a newer version exists) - urpmi: name=bar state=present update_cache=yes ''' import json import shlex import os import sys try: import rpm USE_PYTHON = True except ImportError: USE_PYTHON = False URPMI_PATH = '/usr/sbin/urpmi' URPME_PATH = '/usr/sbin/urpme' def query_package(module, name): if USE_PYTHON: return rpm.TransactionSet().dbMatch(rpm.RPMTAG_NAME, name).count() != 0 # rpm -q returns 0 if the package is installed, # 1 if it is not installed cmd = "rpm -q %s" % (name) rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc == 0: return True else: return False def query_package_provides(module, name): if USE_PYTHON: return rpm.TransactionSet().dbMatch(rpm.RPMTAG_PROVIDES, name).count() != 0 # rpm -q returns 0 if the package is installed, # 1 if it is not installed cmd = "rpm -q --provides %s" % (name) rc, stdout, stderr = module.run_command(cmd, check_rc=False) return rc == 0 def update_package_db(module): cmd = "urpmi.update -a -q" rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="could not update package db") def remove_packages(module, packages): remove_c = 0 # Using a for loop so that, in case of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove if not query_package(module, package): continue cmd = "%s --auto %s" % (URPME_PATH, package) rc, stdout, stderr = module.run_command(cmd, check_rc=False) if rc != 0: module.fail_json(msg="failed to remove %s" % (package)) remove_c += 1 if remove_c > 0: module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, pkgspec, force=True, no_suggests=True): packages = "" for package in pkgspec: if not query_package_provides(module, package): packages += "'%s' " % package if len(packages) != 0: if no_suggests: no_suggests_yes = '--no-suggests' else: no_suggests_yes = '' if force: force_yes = '--force' else: force_yes = '' cmd = ("%s --auto %s --quiet %s %s" % (URPMI_PATH, force_yes, no_suggests_yes, packages)) rc, out, err = module.run_command(cmd) installed = True for package in pkgspec: if not query_package_provides(module, package): installed = False # urpmi always has exit code 0 when --force is used if rc or not installed: module.fail_json(msg="'urpmi %s' failed: %s" % (packages, err)) else: module.exit_json(changed=True, msg="%s present(s)" % packages) else: module.exit_json(changed=False) def main(): module = AnsibleModule( argument_spec = dict( state = dict(default='installed', choices=['installed', 'removed', 'absent', 'present']), update_cache = dict(default=False, aliases=['update-cache'], type='bool'), force = dict(default=True, type='bool'), no_suggests = dict(default=True, aliases=['no-suggests'], type='bool'), package = dict(aliases=['pkg', 'name'], required=True))) if not os.path.exists(URPMI_PATH): module.fail_json(msg="cannot find urpmi, looking for %s" % (URPMI_PATH)) p = module.params force_yes = p['force'] no_suggest_yes = p['no_suggests'] if p['update_cache']: update_package_db(module) packages = 
p['package'].split(',') if p['state'] in [ 'installed', 'present' ]: install_packages(module, packages, force_yes, no_suggest_yes) elif p['state'] in [ 'removed', 'absent' ]: remove_packages(module, packages) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/apt0000664000000000000000000004236012316627017015747 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Flowroute LLC # Written by Matthew Williams # Based on yum module written by Seth Vidal # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . # DOCUMENTATION = ''' --- module: apt short_description: Manages apt-packages description: - Manages I(apt) packages (such as for Debian/Ubuntu). version_added: "0.0.2" options: pkg: description: - A package name or package specifier with version, like C(foo) or C(foo=1.0). Shell like wildcards (fnmatch) like apt* are also supported. required: false default: null state: description: - Indicates the desired package state required: false default: present choices: [ "latest", "absent", "present" ] update_cache: description: - Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step required: false default: no choices: [ "yes", "no" ] cache_valid_time: description: - If C(update_cache) is specified and the last run is less or equal than I(cache_valid_time) seconds ago, the C(update_cache) gets skipped. required: false default: no purge: description: - Will force purging of configuration files if the module state is set to I(absent). required: false default: no choices: [ "yes", "no" ] default_release: description: - Corresponds to the C(-t) option for I(apt) and sets pin priorities required: false default: null install_recommends: description: - Corresponds to the C(--no-install-recommends) option for I(apt), default behavior works as apt's default behavior, C(no) does not install recommended packages. Suggested packages are never installed. required: false default: yes choices: [ "yes", "no" ] force: description: - If C(yes), force installs/removes. required: false default: "no" choices: [ "yes", "no" ] upgrade: description: - 'If yes or safe, performs an aptitude safe-upgrade.' - 'If full, performs an aptitude full-upgrade.' - 'If dist, performs an apt-get dist-upgrade.' - 'Note: This does not upgrade a specific package, use state=latest for that.' version_added: "1.1" required: false default: "yes" choices: [ "yes", "safe", "full", "dist"] dpkg_options: description: - Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"' - Options should be supplied as comma separated list required: false default: 'force-confdef,force-confold' requirements: [ python-apt, aptitude ] author: Matthew Williams notes: - Three of the upgrade modes (C(full), C(safe) and its alias C(yes)) require C(aptitude), otherwise C(apt-get) suffices. 
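- If C(python-apt) is missing, the module will try to install it with C(apt-get) before giving up.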
''' EXAMPLES = ''' # Update repositories cache and install "foo" package - apt: pkg=foo update_cache=yes # Remove "foo" package - apt: pkg=foo state=absent # Install the package "foo" - apt: pkg=foo state=present # Install the version '1.00' of package "foo" - apt: pkg=foo=1.00 state=present # Update the repository cache and update package "nginx" to latest version using default release squeeze-backports - apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes # Install latest version of "openjdk-6-jdk" ignoring "install-recommends" - apt: pkg=openjdk-6-jdk state=latest install_recommends=no # Update all packages to the latest version - apt: upgrade=dist # Run the equivalent of "apt-get update" as a separate step - apt: update_cache=yes # Only run "update_cache=yes" if the last one is more than 3600 seconds ago - apt: update_cache=yes cache_valid_time=3600 # Pass options to dpkg on run - apt: upgrade=dist update_cache=yes dpkg_options='force-confold,force-confdef' ''' import traceback # added to stave off future warnings about apt api import warnings warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning) import os import datetime import fnmatch # APT related constants APT_ENV_VARS = dict( DEBIAN_FRONTEND = 'noninteractive', DEBIAN_PRIORITY = 'critical' ) DPKG_OPTIONS = 'force-confdef,force-confold' APT_GET_ZERO = "0 upgraded, 0 newly installed" APTITUDE_ZERO = "0 packages upgraded, 0 newly installed" APT_LISTS_PATH = "/var/lib/apt/lists" APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp" HAS_PYTHON_APT = True try: import apt import apt_pkg except: HAS_PYTHON_APT = False def package_split(pkgspec): parts = pkgspec.split('=') if len(parts) > 1: return parts[0], parts[1] else: return parts[0], None def package_status(m, pkgname, version, cache, state): try: # get the package from the cache, as well as the # low-level apt_pkg.Package object which contains # state fields not directly accessible from the # higher-level apt.package.Package object.
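# (current_state tells us whether the package is actually installed, while
# installed_files lets remove() decide whether a purge is still needed)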
pkg = cache[pkgname] ll_pkg = cache._cache[pkgname] # the low-level package object except KeyError: if state == 'install': if cache.get_providing_packages(pkgname): return False, True, False m.fail_json(msg="No package matching '%s' is available" % pkgname) else: return False, False, False try: has_files = len(pkg.installed_files) > 0 except UnicodeDecodeError: has_files = True except AttributeError: has_files = False # older python-apt cannot be used to determine non-purged try: package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED except AttributeError: # python-apt 0.7.X has very weak low-level object try: # might not be necessary as python-apt post-0.7.X should have current_state property package_is_installed = pkg.is_installed except AttributeError: # assume older version of python-apt is installed package_is_installed = pkg.isInstalled if version and package_is_installed: try: installed_version = pkg.installed.version except AttributeError: installed_version = pkg.installedVersion return package_is_installed and fnmatch.fnmatch(installed_version, version), False, has_files else: try: package_is_upgradable = pkg.is_upgradable except AttributeError: # assume older version of python-apt is installed package_is_upgradable = pkg.isUpgradable return package_is_installed, package_is_upgradable, has_files def expand_dpkg_options(dpkg_options_compressed): options_list = dpkg_options_compressed.split(',') dpkg_options = "" for dpkg_option in options_list: dpkg_options = '%s -o "Dpkg::Options::=--%s"' \ % (dpkg_options, dpkg_option) return dpkg_options.strip() def expand_pkgspec_from_fnmatches(m, pkgspec, cache): new_pkgspec = [] for pkgname_or_fnmatch_pattern in pkgspec: # note that any of these chars is not allowed in a (debian) pkgname if [c for c in pkgname_or_fnmatch_pattern if c in "*?[]!"]: if "=" in pkgname_or_fnmatch_pattern: m.fail_json(msg="pkgname wildcard and version can not be mixed") # handle multiarch pkgnames, the idea is that "apt*" should # only select native packages. 
But "apt*:i386" should still work if not ":" in pkgname_or_fnmatch_pattern: matches = fnmatch.filter( [pkg.name for pkg in cache if not ":" in pkg.name], pkgname_or_fnmatch_pattern) else: matches = fnmatch.filter( [pkg.name for pkg in cache], pkgname_or_fnmatch_pattern) if len(matches) == 0: m.fail_json(msg="No package(s) matching '%s' available" % str(pkgname_or_fnmatch_pattern)) else: new_pkgspec.extend(matches) else: new_pkgspec.append(pkgname_or_fnmatch_pattern) return new_pkgspec def install(m, pkgspec, cache, upgrade=False, default_release=None, install_recommends=True, force=False, dpkg_options=expand_dpkg_options(DPKG_OPTIONS)): packages = "" pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache) for package in pkgspec: name, version = package_split(package) installed, upgradable, has_files = package_status(m, name, version, cache, state='install') if not installed or (upgrade and upgradable): packages += "'%s' " % package if len(packages) != 0: if force: force_yes = '--force-yes' else: force_yes = '' if m.check_mode: check_arg = '--simulate' else: check_arg = '' for (k,v) in APT_ENV_VARS.iteritems(): os.environ[k] = v cmd = "%s -y %s %s %s install %s" % (APT_GET_CMD, dpkg_options, force_yes, check_arg, packages) if default_release: cmd += " -t '%s'" % (default_release,) if not install_recommends: cmd += " --no-install-recommends" rc, out, err = m.run_command(cmd) if rc: m.fail_json(msg="'apt-get install %s' failed: %s" % (packages, err), stdout=out, stderr=err) else: m.exit_json(changed=True, stdout=out, stderr=err) else: m.exit_json(changed=False) def remove(m, pkgspec, cache, purge=False, dpkg_options=expand_dpkg_options(DPKG_OPTIONS)): packages = "" pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache) for package in pkgspec: name, version = package_split(package) installed, upgradable, has_files = package_status(m, name, version, cache, state='remove') if installed or (has_files and purge): packages += "'%s' " % package if len(packages) == 0: m.exit_json(changed=False) else: if purge: purge = '--purge' else: purge = '' for (k,v) in APT_ENV_VARS.iteritems(): os.environ[k] = v cmd = "%s -q -y %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, packages) if m.check_mode: m.exit_json(changed=True) rc, out, err = m.run_command(cmd) if rc: m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err) m.exit_json(changed=True, stdout=out, stderr=err) def upgrade(m, mode="yes", force=False, dpkg_options=expand_dpkg_options(DPKG_OPTIONS)): if m.check_mode: check_arg = '--simulate' else: check_arg = '' apt_cmd = None if mode == "dist": # apt-get dist-upgrade apt_cmd = APT_GET_CMD upgrade_command = "dist-upgrade" elif mode == "full": # aptitude full-upgrade apt_cmd = APTITUDE_CMD upgrade_command = "full-upgrade" else: # aptitude safe-upgrade # mode=yes # default apt_cmd = APTITUDE_CMD upgrade_command = "safe-upgrade" if force: if apt_cmd == APT_GET_CMD: force_yes = '--force-yes' else: force_yes = '' else: force_yes = '' apt_cmd_path = m.get_bin_path(apt_cmd, required=True) for (k,v) in APT_ENV_VARS.iteritems(): os.environ[k] = v cmd = '%s -y %s %s %s %s' % (apt_cmd_path, dpkg_options, force_yes, check_arg, upgrade_command) rc, out, err = m.run_command(cmd) if rc: m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out) if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out): m.exit_json(changed=False, msg=out, stdout=out, stderr=err) m.exit_json(changed=True, msg=out, 
stdout=out, stderr=err) def main(): module = AnsibleModule( argument_spec = dict( state = dict(default='installed', choices=['installed', 'latest', 'removed', 'absent', 'present']), update_cache = dict(default=False, aliases=['update-cache'], type='bool'), cache_valid_time = dict(type='int'), purge = dict(default=False, type='bool'), package = dict(default=None, aliases=['pkg', 'name']), default_release = dict(default=None, aliases=['default-release']), install_recommends = dict(default='yes', aliases=['install-recommends'], type='bool'), force = dict(default='no', type='bool'), upgrade = dict(choices=['yes', 'safe', 'full', 'dist']), dpkg_options = dict(default=DPKG_OPTIONS) ), mutually_exclusive = [['package', 'upgrade']], required_one_of = [['package', 'upgrade', 'update_cache']], supports_check_mode = True ) if not HAS_PYTHON_APT: try: module.run_command('apt-get update && apt-get install python-apt -y -q') global apt, apt_pkg import apt import apt_pkg except: module.fail_json(msg="Could not import python modules: apt, apt_pkg. Please install python-apt package.") global APTITUDE_CMD APTITUDE_CMD = module.get_bin_path("aptitude", False) global APT_GET_CMD APT_GET_CMD = module.get_bin_path("apt-get") p = module.params if not APTITUDE_CMD and p.get('upgrade', None) in [ 'full', 'safe', 'yes' ]: module.fail_json(msg="Could not find aptitude. Please ensure it is installed.") install_recommends = p['install_recommends'] dpkg_options = expand_dpkg_options(p['dpkg_options']) try: cache = apt.Cache() if p['default_release']: try: apt_pkg.config['APT::Default-Release'] = p['default_release'] except AttributeError: apt_pkg.Config['APT::Default-Release'] = p['default_release'] # reopen cache w/ modified config cache.open(progress=None) if p['update_cache']: # Default is: always update the cache cache_valid = False if p['cache_valid_time']: tdelta = datetime.timedelta(seconds=p['cache_valid_time']) try: mtime = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime except: mtime = False if mtime is False: # Looks like the update-success-stamp is not available # Fallback: Checking the mtime of the lists try: mtime = os.stat(APT_LISTS_PATH).st_mtime except: mtime = False if mtime is False: # No mtime could be read - looks like lists are not there # We update the cache to be safe cache_valid = False else: mtimestamp = datetime.datetime.fromtimestamp(mtime) if mtimestamp + tdelta >= datetime.datetime.now(): # dont update the cache # the old cache is less than cache_valid_time seconds old - so still valid cache_valid = True if cache_valid is not True: cache.update() cache.open(progress=None) if not p['package'] and not p['upgrade']: module.exit_json(changed=False) force_yes = p['force'] if p['upgrade']: upgrade(module, p['upgrade'], force_yes, dpkg_options) packages = p['package'].split(',') latest = p['state'] == 'latest' for package in packages: if package.count('=') > 1: module.fail_json(msg="invalid package spec: %s" % package) if latest and '=' in package: module.fail_json(msg='version number inconsistent with state=latest: %s' % package) if p['state'] == 'latest': install(module, packages, cache, upgrade=True, default_release=p['default_release'], install_recommends=install_recommends, force=force_yes, dpkg_options=dpkg_options) elif p['state'] in [ 'installed', 'present' ]: install(module, packages, cache, default_release=p['default_release'], install_recommends=install_recommends,force=force_yes, dpkg_options=dpkg_options) elif p['state'] in [ 'removed', 'absent' ]: remove(module, packages, cache, 
p['purge'], dpkg_options) except apt.cache.LockFailedException: module.fail_json(msg="Failed to lock apt for exclusive operation") # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/npm0000664000000000000000000001560512316627017015757 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Chris Hoffman # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. DOCUMENTATION = ''' --- module: npm short_description: Manage node.js packages with npm description: - Manage node.js packages with Node Package Manager (npm) version_added: 1.2 author: Chris Hoffman options: name: description: - The name of a node.js library to install required: false path: description: - The base path where to install the node.js libraries required: false version: description: - The version to be installed required: false global: description: - Install the node.js library globally required: false default: no choices: [ "yes", "no" ] executable: description: - The executable location for npm. - This is useful if you are using a version manager, such as nvm required: false production: description: - Install dependencies in production mode, excluding devDependencies required: false choices: [ "yes", "no" ] default: no state: description: - The state of the node.js library required: false default: present choices: [ "present", "absent", "latest" ] ''' EXAMPLES = ''' description: Install "coffee-script" node.js package. - npm: name=coffee-script path=/app/location description: Install "coffee-script" node.js package at version 1.6.1. - npm: name=coffee-script version=1.6.1 path=/app/location description: Install "coffee-script" node.js package globally. - npm: name=coffee-script global=yes description: Remove the globally installed package "coffee-script". - npm: name=coffee-script global=yes state=absent description: Install packages based on package.json. - npm: path=/app/location description: Update packages based on package.json to their latest version. - npm: path=/app/location state=latest description: Install packages based on package.json using the npm installed with nvm v0.10.1.
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present ''' import os try: import json except ImportError: import simplejson as json class Npm(object): def __init__(self, module, **kwargs): self.module = module self.glbl = kwargs['glbl'] self.name = kwargs['name'] self.version = kwargs['version'] self.path = kwargs['path'] self.production = kwargs['production'] if kwargs['executable']: self.executable = kwargs['executable'] else: self.executable = module.get_bin_path('npm', True) if kwargs['version']: self.name_version = self.name + '@' + self.version else: self.name_version = self.name def _exec(self, args, run_in_check_mode=False, check_rc=True): if not self.module.check_mode or (self.module.check_mode and run_in_check_mode): cmd = [self.executable] + args if self.glbl: cmd.append('--global') if self.production: cmd.append('--production') if self.name: cmd.append(self.name_version) #If path is specified, cd into that path and run the command. cwd = None if self.path: cwd = self.path rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd) return out return '' def list(self): cmd = ['list', '--json'] installed = list() missing = list() data = json.loads(self._exec(cmd, True, False)) if 'dependencies' in data: for dep in data['dependencies']: if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']: missing.append(dep) else: installed.append(dep) #Named dependency not installed else: missing.append(self.name) return installed, missing def install(self): return self._exec(['install']) def update(self): return self._exec(['update']) def uninstall(self): return self._exec(['uninstall']) def list_outdated(self): outdated = list() data = self._exec(['outdated'], True, False) for dep in data.splitlines(): if dep: # node.js v0.10.22 changed the `npm outdated` module separator # from "@" to " ". Split on both for backwards compatibility. 
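# e.g. both "coffee-script@1.6.1" and "coffee-script 1.6.1" split to "coffee-script"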
pkg, other = re.split('\s|@', dep, 1) outdated.append(pkg) return outdated def main(): arg_spec = dict( name=dict(default=None), path=dict(default=None), version=dict(default=None), production=dict(default='no', type='bool'), executable=dict(default=None), state=dict(default='present', choices=['present', 'absent', 'latest']) ) arg_spec['global'] = dict(default='no', type='bool') module = AnsibleModule( argument_spec=arg_spec, supports_check_mode=True ) name = module.params['name'] path = module.params['path'] version = module.params['version'] glbl = module.params['global'] production = module.params['production'] executable = module.params['executable'] state = module.params['state'] if not path and not glbl: module.fail_json(msg='path must be specified when not using global') if state == 'absent' and not name: module.fail_json(msg='uninstalling a package is only available for named packages') npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \ executable=executable) changed = False if state == 'present': installed, missing = npm.list() if len(missing): changed = True npm.install() elif state == 'latest': installed, missing = npm.list() outdated = npm.list_outdated() if len(missing) or len(outdated): changed = True npm.install() npm.update() else: #absent installed, missing = npm.list() if name in installed: changed = True npm.uninstall() module.exit_json(changed=changed) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/pacman0000664000000000000000000001434712316627017016426 0ustar rootroot#!/usr/bin/python -tt # -*- coding: utf-8 -*- # (c) 2012, Afterburn # Written by Afterburn # Based on apt module written by Matthew Williams # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: pacman short_description: Package manager for Archlinux description: - Manages Archlinux packages version_added: "1.0" options: name: description: - name of package to install, upgrade or remove. required: true state: description: - desired state of the package. required: false choices: [ "installed", "absent" ] update_cache: description: - update the package database first (pacman -Syy). 
required: false
        default: "no"
        choices: [ "yes", "no" ]
    recurse:
        description:
            - when removing a package, also remove its dependencies that were
              not explicitly installed and are not required by other packages
        required: false
        default: "no"
        choices: [ "yes", "no" ]
        version_added: "1.3"
author: Afterburn
notes: []
'''

EXAMPLES = '''
# Install package foo
- pacman: name=foo state=installed

# Remove package foo
- pacman: name=foo state=absent

# Remove packages foo and bar
- pacman: name=foo,bar state=absent

# Recursively remove package baz
- pacman: name=baz state=absent recurse=yes

# Update the package database (pacman -Syy) and install bar
# (bar will be upgraded if a newer version exists)
- pacman: name=bar state=installed update_cache=yes
'''

import json
import shlex
import os
import re
import sys

PACMAN_PATH = "/usr/bin/pacman"

def query_package(module, name, state="installed"):
    # pacman -Q returns 0 if the package is installed,
    # 1 if it is not installed
    if state == "installed":
        cmd = "pacman -Q %s" % (name)
        rc, stdout, stderr = module.run_command(cmd, check_rc=False)

        if rc == 0:
            return True

        return False

def update_package_db(module):
    cmd = "pacman -Syy"
    rc, stdout, stderr = module.run_command(cmd, check_rc=False)

    if rc != 0:
        module.fail_json(msg="could not update package db")

def remove_packages(module, packages):
    if module.params["recurse"]:
        args = "Rs"
    else:
        args = "R"

    remove_c = 0
    # Using a for loop in case of error, we can report the package that failed
    for package in packages:
        # Query the package first, to see if we even need to remove
        if not query_package(module, package):
            continue

        cmd = "pacman -%s %s --noconfirm" % (args, package)
        rc, stdout, stderr = module.run_command(cmd, check_rc=False)

        if rc != 0:
            module.fail_json(msg="failed to remove %s" % (package))

        remove_c += 1

    if remove_c > 0:
        module.exit_json(changed=True, msg="removed %s package(s)" % remove_c)

    module.exit_json(changed=False, msg="package(s) already absent")

def install_packages(module, packages, package_files):
    install_c = 0

    for i, package in enumerate(packages):
        if query_package(module, package):
            continue

        if package_files[i]:
            params = '-U %s' % package_files[i]
        else:
            params = '-S %s' % package

        cmd = "pacman %s --noconfirm" % (params)
        rc, stdout, stderr = module.run_command(cmd, check_rc=False)

        if rc != 0:
            module.fail_json(msg="failed to install %s" % (package))

        install_c += 1

    if install_c > 0:
        module.exit_json(changed=True, msg="installed %s package(s)" % (install_c))

    module.exit_json(changed=False, msg="package(s) already installed")

def check_packages(module, packages, state):
    would_be_changed = []

    for package in packages:
        installed = query_package(module, package)
        if ((state == "installed" and not installed) or
                (state == "absent" and installed)):
            would_be_changed.append(package)

    if would_be_changed:
        if state == "absent":
            state = "removed"
        module.exit_json(changed=True, msg="%s package(s) would be %s" % (
            len(would_be_changed), state))
    else:
        module.exit_json(changed=False, msg="package(s) already %s" % state)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default="installed", choices=["installed", "absent"]),
            update_cache = dict(default="no", aliases=["update-cache"], type='bool'),
            recurse = dict(default="no", type='bool'),
            name = dict(aliases=["pkg"], required=True)),
        supports_check_mode = True)

    if not os.path.exists(PACMAN_PATH):
        module.fail_json(msg="cannot find pacman, looking for %s" % (PACMAN_PATH))

    p = module.params

    if p["update_cache"] and not module.check_mode:
        update_package_db(module)

    pkgs =
p["name"].split(",") pkg_files = [] for i, pkg in enumerate(pkgs): if pkg.endswith('.pkg.tar.xz'): # The package given is a filename, extract the raw pkg name from # it and store the filename pkg_files.append(pkg) pkgs[i] = re.sub('-[0-9].*$', '', pkgs[i].split('/')[-1]) else: pkg_files.append(None) if module.check_mode: check_packages(module, pkgs, p['state']) if p["state"] == "installed": install_packages(module, pkgs, pkg_files) elif p["state"] == "absent": remove_packages(module, pkgs) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/rhn_register0000664000000000000000000002617312316627017017662 0ustar rootroot#!/usr/bin/python DOCUMENTATION = ''' --- module: rhn_register short_description: Manage Red Hat Network registration using the C(rhnreg_ks) command description: - Manage registration to the Red Hat Network. version_added: "1.2" author: James Laska notes: - In order to register a system, rhnreg_ks requires either a username and password, or an activationkey. requirements: - rhnreg_ks options: state: description: - whether to register (C(present)), or unregister (C(absent)) a system required: false choices: [ "present", "absent" ] default: "present" username: description: - Red Hat Network username required: False default: null password: description: - Red Hat Network password required: False default: null server_url: description: - Specify an alternative Red Hat Network server URL required: False default: Current value of I(serverURL) from C(/etc/sysconfig/rhn/up2date) is the default activationkey: description: - supply an activation key for use with registration required: False default: null channels: description: - Optionally specify a list of comma-separated channels to subscribe to upon successful registration. required: false default: [] ''' EXAMPLES = ''' # Unregister system from RHN. - rhn_register: state=absent username=joe_user password=somepass # Register as user (joe_user) with password (somepass) and auto-subscribe to available content. - rhn_register: state=present username=joe_user password=somepass # Register with activationkey (1-222333444) and enable extended update support. - rhn_register: state=present activationkey=1-222333444 enable_eus=true # Register as user (joe_user) with password (somepass) against a satellite # server specified by (server_url). - rhn_register: state=present username=joe_user password=somepass server_url=https://xmlrpc.my.satellite/XMLRPC # Register as user (joe_user) with password (somepass) and enable # channels (rhel-x86_64-server-6-foo-1) and (rhel-x86_64-server-6-bar-1). - rhn_register: state=present username=joe_user password=somepass channels=rhel-x86_64-server-6-foo-1,rhel-x86_64-server-6-bar-1 ''' import sys import types import xmlrpclib import urlparse # Attempt to import rhn client tools sys.path.insert(0, '/usr/share/rhn') try: import up2date_client import up2date_client.config except ImportError, e: module.fail_json(msg="Unable to import up2date_client. Is 'rhn-client-tools' installed?\n%s" % e) class Rhn(RegistrationBase): def __init__(self, module, username=None, password=None): RegistrationBase.__init__(self, username, password) self.config = self.load_config() def load_config(self): ''' Read configuration from /etc/sysconfig/rhn/up2date ''' self.config = up2date_client.config.initUp2dateConfig() # Add support for specifying a default value w/o having to standup some # configuration. Yeah, I know this should be subclassed ... 
but, oh # well def get_option_default(self, key, default=''): # ignore pep8 W601 errors for this line # setting this to use 'in' does not work in the rhn library if self.has_key(key): return self[key] else: return default self.config.get_option = types.MethodType(get_option_default, self.config, up2date_client.config.Config) return self.config @property def hostname(self): ''' Return the non-xmlrpc RHN hostname. This is a convenience method used for displaying a more readable RHN hostname. Returns: str ''' url = urlparse.urlparse(self.config['serverURL']) return url[1].replace('xmlrpc.','') @property def systemid(self): systemid = None xpath_str = "//member[name='system_id']/value/string" if os.path.isfile(self.config['systemIdPath']): fd = open(self.config['systemIdPath'], 'r') xml_data = fd.read() fd.close() # Ugh, xml parsing time ... # First, try parsing with libxml2 ... if systemid is None: try: import libxml2 doc = libxml2.parseDoc(xml_data) ctxt = doc.xpathNewContext() systemid = ctxt.xpathEval(xpath_str)[0].content doc.freeDoc() ctxt.xpathFreeContext() except ImportError: pass # m-kay, let's try with lxml now ... if systemid is None: try: from lxml import etree root = etree.fromstring(xml_data) systemid = root.xpath(xpath_str)[0].text except ImportError: pass # Strip the 'ID-' prefix if systemid is not None and systemid.startswith('ID-'): systemid = systemid[3:] return int(systemid) @property def is_registered(self): ''' Determine whether the current system is registered. Returns: True|False ''' return os.path.isfile(self.config['systemIdPath']) def configure(self, server_url): ''' Configure system for registration ''' self.config.set('serverURL', server_url) self.config.save() def enable(self): ''' Prepare the system for RHN registration. This includes ... * enabling the rhnplugin yum plugin * disabling the subscription-manager yum plugin ''' RegistrationBase.enable(self) self.update_plugin_conf('rhnplugin', True) self.update_plugin_conf('subscription-manager', False) def register(self, enable_eus=False, activationkey=None): ''' Register system to RHN. If enable_eus=True, extended update support will be requested. 
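        Illustrative example (hypothetical values): with username 'joe',
        password 'secret' and activationkey '1-abc', the command constructed
        below is:
            /usr/sbin/rhnreg_ks --username 'joe' --password 'secret' --force --activationkey '1-abc'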
'''
        register_cmd = "/usr/sbin/rhnreg_ks --username '%s' --password '%s' --force" % (self.username, self.password)
        if enable_eus:
            register_cmd += " --use-eus-channel"
        if activationkey is not None:
            register_cmd += " --activationkey '%s'" % activationkey
        # FIXME - support --profilename
        # FIXME - support --systemorgid
        rc, stdout, stderr = self.module.run_command(register_cmd, check_rc=True)

    def api(self, method, *args):
        ''' Convenience RPC wrapper '''
        if not hasattr(self, 'server') or self.server is None:
            url = "https://xmlrpc.%s/rpc/api" % self.hostname
            self.server = xmlrpclib.Server(url, verbose=0)
            self.session = self.server.auth.login(self.username, self.password)
        func = getattr(self.server, method)
        return func(self.session, *args)

    def unregister(self):
        ''' Unregister a previously registered system '''
        # Initiate RPC connection
        self.api('system.deleteSystems', [self.systemid])
        # Remove systemid file
        os.unlink(self.config['systemIdPath'])

    def subscribe(self, channels=[]):
        if len(channels) <= 0:
            return
        current_channels = self.api('channel.software.listSystemChannels', self.systemid)
        new_channels = [item['channel_label'] for item in current_channels]
        new_channels.extend(channels)
        return self.api('channel.software.setSystemChannels', self.systemid, new_channels)

    def _subscribe(self, channels=[]):
        ''' Subscribe to requested yum repositories using 'rhn-channel' command '''
        rhn_channel_cmd = "rhn-channel --user='%s' --password='%s'" % (self.username, self.password)
        rc, stdout, stderr = self.module.run_command(rhn_channel_cmd + " --available-channels", check_rc=True)

        # Enable requested repoids
        for wanted_channel in channels:
            # Each requested channel is treated as a regexp and matched against
            # the available channels; if nothing matches, nothing is added.
            for available_channel in stdout.rstrip().split('\n'):  # .rstrip() because of \n at the end -> empty string at the end
                if re.search(wanted_channel, available_channel):
                    rc, stdout, stderr = self.module.run_command(rhn_channel_cmd + " --add --channel=%s" % available_channel, check_rc=True)

def main():

    # Read system RHN configuration. The Rhn constructor does not use its
    # module argument at construction time; the real AnsibleModule instance
    # is attached below, once it exists, so that methods such as register()
    # can call self.module.run_command.
    rhn = Rhn(None)

    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default='present', choices=['present', 'absent']),
            username = dict(default=None, required=False),
            password = dict(default=None, required=False),
            server_url = dict(default=rhn.config.get_option('serverURL'), required=False),
            activationkey = dict(default=None, required=False),
            enable_eus = dict(default=False, type='bool'),
            channels = dict(default=[], type='list'),
        )
    )

    state = module.params['state']
    rhn.module = module
    rhn.username = module.params['username']
    rhn.password = module.params['password']
    rhn.configure(module.params['server_url'])
    activationkey = module.params['activationkey']
    channels = module.params['channels']

    # Ensure system is registered
    if state == 'present':

        # Check for missing parameters ...
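        # Illustrative summary of the checks below: registration requires
        # either an activation key, or both a username and a password.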
if not (activationkey or rhn.username or rhn.password): module.fail_json(msg="Missing arguments, must supply an activationkey (%s) or username (%s) and password (%s)" % (activationkey, rhn.username, rhn.password)) if not activationkey and not (rhn.username and rhn.password): module.fail_json(msg="Missing arguments, If registering without an activationkey, must supply username or password") # Register system if rhn.is_registered: module.exit_json(changed=False, msg="System already registered.") else: try: rhn.enable() rhn.register(module.params['enable_eus'] == True, activationkey) rhn.subscribe(channels) except CommandException, e: module.fail_json(msg="Failed to register with '%s': %s" % (rhn.hostname, e)) else: module.exit_json(changed=True, msg="System successfully registered to '%s'." % rhn.hostname) # Ensure system is *not* registered if state == 'absent': if not rhn.is_registered: module.exit_json(changed=False, msg="System already unregistered.") else: try: rhn.unregister() except CommandException, e: module.fail_json(msg="Failed to unregister: %s" % e) else: module.exit_json(changed=True, msg="System successfully unregistered from %s." % rhn.hostname) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.redhat import * main() ansible-1.5.4/library/packaging/zypper0000664000000000000000000001467012316627017016517 0ustar rootroot#!/usr/bin/python -tt # -*- coding: utf-8 -*- # (c) 2013, Patrick Callahan # based on # openbsd_pkg # (c) 2013 # Patrik Lundin # # yum # (c) 2012, Red Hat, Inc # Written by Seth Vidal # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import re DOCUMENTATION = ''' --- module: zypper author: Patrick Callahan version_added: "1.2" short_description: Manage packages on SuSE and openSuSE description: - Manage packages on SuSE and openSuSE using the zypper and rpm tools. options: name: description: - package name or package specifier wth version C(name) or C(name-1.0). required: true aliases: [ 'pkg' ] state: description: - C(present) will make sure the package is installed. C(latest) will make sure the latest version of the package is installed. C(absent) will make sure the specified package is not installed. required: false choices: [ present, latest, absent ] default: "present" disable_gpg_check: description: - Whether to disable to GPG signature checking of the package signature being installed. Has an effect only if state is I(present) or I(latest). required: false default: "no" choices: [ "yes", "no" ] aliases: [] notes: [] # informational: requirements for nodes requirements: [ zypper, rpm ] author: Patrick Callahan ''' EXAMPLES = ''' # Install "nmap" - zypper: name=nmap state=present # Remove the "nmap" package - zypper: name=nmap state=absent ''' # Function used for getting the name of a currently installed package. 
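# For example (illustrative): `rpm -q --qf '%{NAME}-%{VERSION}' zypper` prints
# something like 'zypper-1.8.14'; package_latest() compares this value before
# and after an update to detect whether anything changed.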
def get_current_name(m, name): cmd = '/bin/rpm -q --qf \'%{NAME}-%{VERSION}\'' (rc, stdout, stderr) = m.run_command("%s %s" % (cmd, name)) if rc != 0: return (rc, stdout, stderr) syntax = "%s" for line in stdout.splitlines(): if syntax % name in line: current_name = line.split()[0] return current_name # Function used to find out if a package is currently installed. def get_package_state(m, name): cmd = ['/bin/rpm', '--query', '--info', name] rc, stdout, stderr = m.run_command(cmd, check_rc=False) if rc == 0: return True else: return False # Function used to make sure a package is present. def package_present(m, name, installed_state, disable_gpg_check): if installed_state is False: cmd = ['/usr/bin/zypper', '--non-interactive'] # add global options before zypper command if disable_gpg_check: cmd.append('--no-gpg-check') cmd.extend(['install', '--auto-agree-with-licenses']) cmd.append(name) rc, stdout, stderr = m.run_command(cmd, check_rc=False) if rc == 0: changed=True else: changed=False else: rc = 0 stdout = '' stderr = '' changed=False return (rc, stdout, stderr, changed) # Function used to make sure a package is the latest available version. def package_latest(m, name, installed_state, disable_gpg_check): if installed_state is True: cmd = ['/usr/bin/zypper', '--non-interactive', 'update', '--auto-agree-with-licenses', name] pre_upgrade_name = '' post_upgrade_name = '' # Compare the installed package before and after to know if we changed anything. pre_upgrade_name = get_current_name(m, name) rc, stdout, stderr = m.run_command(cmd, check_rc=False) post_upgrade_name = get_current_name(m, name) if pre_upgrade_name == post_upgrade_name: changed = False else: changed = True return (rc, stdout, stderr, changed) else: # If package was not installed at all just make it present. return package_present(m, name, installed_state, disable_gpg_check) # Function used to make sure a package is not installed. 
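# Illustrative: the function below runs `zypper --non-interactive remove <name>`
# and reports a change only when rpm showed the package as installed beforehand.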
def package_absent(m, name, installed_state): if installed_state is True: cmd = ['/usr/bin/zypper', '--non-interactive', 'remove', name] rc, stdout, stderr = m.run_command(cmd) if rc == 0: changed=True else: changed=False else: rc = 0 stdout = '' stderr = '' changed=False return (rc, stdout, stderr, changed) # =========================================== # Main control flow def main(): module = AnsibleModule( argument_spec = dict( name = dict(required=True, aliases=['pkg']), state = dict(required=False, default='present', choices=['absent', 'installed', 'latest', 'present', 'removed']), disable_gpg_check = dict(required=False, default='no', type='bool'), ), supports_check_mode = False ) params = module.params name = params['name'] state = params['state'] disable_gpg_check = params['disable_gpg_check'] rc = 0 stdout = '' stderr = '' result = {} result['name'] = name result['state'] = state # Get package state installed_state = get_package_state(module, name) # Perform requested action if state in ['installed', 'present']: (rc, stdout, stderr, changed) = package_present(module, name, installed_state, disable_gpg_check) elif state in ['absent', 'removed']: (rc, stdout, stderr, changed) = package_absent(module, name, installed_state) elif state == 'latest': (rc, stdout, stderr, changed) = package_latest(module, name, installed_state, disable_gpg_check) if rc != 0: if stderr: module.fail_json(msg=stderr) else: module.fail_json(msg=stdout) result['changed'] = changed module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/pkgin0000775000000000000000000001211612316627017016272 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Shaun Zinck # Written by Shaun Zinck # Based on pacman module written by Afterburn # that was based on apt module written by Matthew Williams # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: pkgin short_description: Package manager for SmartOS description: - Manages SmartOS packages version_added: "1.0" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent' ] required: false default: present author: Shaun Zinck notes: [] ''' EXAMPLES = ''' # install package foo" - pkgin: name=foo state=present # remove package foo - pkgin: name=foo state=absent # remove packages foo and bar - pkgin: name=foo,bar state=absent ''' import json import shlex import os import sys import pipes def query_package(module, pkgin_path, name, state="present"): if state == "present": rc, out, err = module.run_command("%s -y list | grep ^%s" % (pipes.quote(pkgin_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: # At least one package with a package name that starts with ``name`` # is installed. For some cases this is not sufficient to determine # wether the queried package is installed. # # E.g. 
for ``name='gcc47'``, ``gcc47`` not being installed, but
            # ``gcc47-libs`` being installed, ``out`` would be:
            #
            #   gcc47-libs-4.7.2nb4   The GNU Compiler Collection (GCC) support shared libraries.
            #
            # Multiline output is also possible, for example with the same query
            # and both ``gcc47`` and ``gcc47-libs`` being installed:
            #
            #   gcc47-libs-4.7.2nb4   The GNU Compiler Collection (GCC) support shared libraries.
            #   gcc47-4.7.2nb3        The GNU Compiler Collection (GCC) - 4.7 Release Series

            # Loop over lines in ``out``
            for line in out.split('\n'):

                # Strip description
                # (results in sth. like 'gcc47-libs-4.7.2nb4')
                pkgname_with_version = line.split(' ')[0]

                # Strip version
                # (results in sth like 'gcc47-libs')
                pkgname_without_version = '-'.join(pkgname_with_version.split('-')[:-1])

                if name == pkgname_without_version:
                    return True

    return False


def remove_packages(module, pkgin_path, packages):

    remove_c = 0
    # Using a for loop in case of error, we can report the package that failed
    for package in packages:
        # Query the package first, to see if we even need to remove
        if not query_package(module, pkgin_path, package):
            continue

        rc, out, err = module.run_command("%s -y remove %s" % (pkgin_path, package))

        if query_package(module, pkgin_path, package):
            module.fail_json(msg="failed to remove %s: %s" % (package, out))

        remove_c += 1

    if remove_c > 0:
        module.exit_json(changed=True, msg="removed %s package(s)" % remove_c)

    module.exit_json(changed=False, msg="package(s) already absent")


def install_packages(module, pkgin_path, packages):

    install_c = 0

    for package in packages:
        if query_package(module, pkgin_path, package):
            continue

        rc, out, err = module.run_command("%s -y install %s" % (pkgin_path, package))

        if not query_package(module, pkgin_path, package):
            module.fail_json(msg="failed to install %s: %s" % (package, out))

        install_c += 1

    if install_c > 0:
        module.exit_json(changed=True, msg="present %s package(s)" % (install_c))

    module.exit_json(changed=False, msg="package(s) already present")


def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default="present", choices=["present", "absent"]),
            name = dict(aliases=["pkg"], required=True)))

    pkgin_path = module.get_bin_path('pkgin', True, ['/opt/local/bin'])

    p = module.params

    pkgs = p["name"].split(",")

    if p["state"] == "present":
        install_packages(module, pkgin_path, pkgs)

    elif p["state"] == "absent":
        remove_packages(module, pkgin_path, pkgs)

# import module snippets
from ansible.module_utils.basic import *

main()
ansible-1.5.4/library/packaging/swdepot0000664000000000000000000001375612316627017016655 0ustar rootroot#!/usr/bin/python -tt
# -*- coding: utf-8 -*-

# (c) 2013, Raul Melo
# Written by Raul Melo
# Based on yum module written by Seth Vidal
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see .
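# Worked example (illustrative) for compare_package() defined below:
#   compare_package('10.20', '10.20.0') == 0   (trailing '.0' fields are ignored)
#   compare_package('1.2', '1.10') == -1       (fields are compared numerically)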
import re
import pipes

DOCUMENTATION = '''
---
module: swdepot
short_description: Manage packages with swdepot package manager (HP-UX)
description:
    - Will install, upgrade and remove packages with swdepot package manager (HP-UX)
version_added: "1.4"
notes: []
author: Raul Melo
options:
    name:
        description:
            - package name.
        required: true
        default: null
        choices: []
        aliases: []
        version_added: 1.4
    state:
        description:
            - whether to install (C(present), C(latest)), or remove (C(absent)) a package.
        required: true
        default: null
        choices: [ 'present', 'latest', 'absent']
        aliases: []
        version_added: 1.4
    depot:
        description:
            - The source repository from which to install or upgrade a package.
        required: false
        default: null
        choices: []
        aliases: []
        version_added: 1.4
'''

EXAMPLES = '''
- swdepot: name=unzip-6.0 state=present depot=repository:/path
- swdepot: name=unzip state=latest depot=repository:/path
- swdepot: name=unzip state=absent
'''

def compare_package(version1, version2):
    """ Compare two package versions.
        Return values:
            -1 first is lower
             0 equal
             1 first is greater
    """
    def normalize(v):
        return [int(x) for x in re.sub(r'(\.0+)*$', '', v).split(".")]
    return cmp(normalize(version1), normalize(version2))

def query_package(module, name, depot=None):
    """ Returns whether a package is installed or not and its version. """

    cmd_list = '/usr/sbin/swlist -a revision -l product'
    if depot:
        rc, stdout, stderr = module.run_command("%s -s %s %s | grep %s" % (cmd_list, pipes.quote(depot), pipes.quote(name), pipes.quote(name)), use_unsafe_shell=True)
    else:
        rc, stdout, stderr = module.run_command("%s %s | grep %s" % (cmd_list, pipes.quote(name), pipes.quote(name)), use_unsafe_shell=True)
    if rc == 0:
        version = re.sub("\s\s+|\t", " ", stdout).strip().split()[1]
    else:
        version = None

    return rc, version

def remove_package(module, name):
    """ Uninstall package if installed.
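        Illustrative note: shells out to `/usr/sbin/swremove <name>` and
        returns (rc, stdout) on success or (rc, stderr) on failure.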
""" cmd_remove = '/usr/sbin/swremove' rc, stdout, stderr = module.run_command("%s %s" % (cmd_remove, name)) if rc == 0: return rc, stdout else: return rc, stderr def install_package(module, depot, name): """ Install package if not already installed """ cmd_install = '/usr/sbin/swinstall -x mount_all_filesystems=false' rc, stdout, stderr = module.run_command("%s -s %s %s" % (cmd_install, depot, name)) if rc == 0: return rc, stdout else: return rc, stderr def main(): module = AnsibleModule( argument_spec = dict( name = dict(aliases=['pkg'], required=True), state = dict(choices=['present', 'absent', 'latest'], required=True), depot = dict(default=None, required=False) ), supports_check_mode=True ) name = module.params['name'] state = module.params['state'] depot = module.params['depot'] changed = False msg = "No changed" rc = 0 if ( state == 'present' or state == 'latest' ) and depot == None: output = "depot parameter is mandatory in present or latest task" module.fail_json(name=name, msg=output, rc=rc) #Check local version rc, version_installed = query_package(module, name) if not rc: installed = True msg = "Already installed" else: installed = False if ( state == 'present' or state == 'latest' ) and installed == False: if module.check_mode: module.exit_json(changed=True) rc, output = install_package(module, depot, name) if not rc: changed = True msg = "Packaged installed" else: module.fail_json(name=name, msg=output, rc=rc) elif state == 'latest' and installed == True: #Check depot version rc, version_depot = query_package(module, name, depot) if not rc: if compare_package(version_installed,version_depot) == -1: if module.check_mode: module.exit_json(changed=True) #Install new version rc, output = install_package(module, depot, name) if not rc: msg = "Packge upgraded, Before " + version_installed + " Now " + version_depot changed = True else: module.fail_json(name=name, msg=output, rc=rc) else: output = "Software package not in repository " + depot module.fail_json(name=name, msg=output, rc=rc) elif state == 'absent' and installed == True: if module.check_mode: module.exit_json(changed=True) rc, output = remove_package(module, name) if not rc: changed = True msg = "Package removed" else: module.fail_json(name=name, msg=output, rc=rc) if module.check_mode: module.exit_json(changed=False) module.exit_json(changed=changed, name=name, state=state, msg=msg) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/packaging/opkg0000664000000000000000000001050412316627017016116 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Patrick Pelletier # Based on pacman (Afterburn) and pkgin (Shaun Zinck) modules # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . 
DOCUMENTATION = ''' --- module: opkg author: Patrick Pelletier short_description: Package manager for OpenWrt description: - Manages OpenWrt packages version_added: "1.1" options: name: description: - name of package to install/remove required: true state: description: - state of the package choices: [ 'present', 'absent' ] required: false default: present update_cache: description: - update the package db first required: false default: "no" choices: [ "yes", "no" ] notes: [] ''' EXAMPLES = ''' - opkg: name=foo state=present - opkg: name=foo state=present update_cache=yes - opkg: name=foo state=absent - opkg: name=foo,bar state=absent ''' import pipes def update_package_db(module, opkg_path): """ Updates packages list. """ rc, out, err = module.run_command("%s update" % opkg_path) if rc != 0: module.fail_json(msg="could not update package db") def query_package(module, opkg_path, name, state="present"): """ Returns whether a package is installed or not. """ if state == "present": rc, out, err = module.run_command("%s list-installed | grep -q ^%s" % (pipes.quote(opkg_path), pipes.quote(name)), use_unsafe_shell=True) if rc == 0: return True return False def remove_packages(module, opkg_path, packages): """ Uninstalls one or more packages if installed. """ remove_c = 0 # Using a for loop incase of error, we can report the package that failed for package in packages: # Query the package first, to see if we even need to remove if not query_package(module, opkg_path, package): continue rc, out, err = module.run_command("%s remove %s" % (opkg_path, package)) if query_package(module, opkg_path, package): module.fail_json(msg="failed to remove %s: %s" % (package, out)) remove_c += 1 if remove_c > 0: module.exit_json(changed=True, msg="removed %s package(s)" % remove_c) module.exit_json(changed=False, msg="package(s) already absent") def install_packages(module, opkg_path, packages): """ Installs one or more packages if not already installed. """ install_c = 0 for package in packages: if query_package(module, opkg_path, package): continue rc, out, err = module.run_command("%s install %s" % (opkg_path, package)) if not query_package(module, opkg_path, package): module.fail_json(msg="failed to install %s: %s" % (package, out)) install_c += 1 if install_c > 0: module.exit_json(changed=True, msg="installed %s package(s)" % (install_c)) module.exit_json(changed=False, msg="package(s) already present") def main(): module = AnsibleModule( argument_spec = dict( name = dict(aliases=["pkg"], required=True), state = dict(default="present", choices=["present", "installed", "absent", "removed"]), update_cache = dict(default="no", aliases=["update-cache"], type='bool') ) ) opkg_path = module.get_bin_path('opkg', True, ['/bin']) p = module.params if p["update_cache"]: update_package_db(module, opkg_path) pkgs = p["name"].split(",") if p["state"] in ["present", "installed"]: install_packages(module, opkg_path, pkgs) elif p["state"] in ["absent", "removed"]: remove_packages(module, opkg_path, pkgs) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/web_infrastructure/0000775000000000000000000000000012316627017017224 5ustar rootrootansible-1.5.4/library/web_infrastructure/htpasswd0000664000000000000000000001562712316627017021017 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Nimbis Services, Inc. 
# # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = """ module: htpasswd version_added: "1.3" short_description: manage user files for basic authentication description: - Add and remove username/password entries in a password file using htpasswd. - This is used by web servers such as Apache and Nginx for basic authentication. options: path: required: true aliases: [ dest, destfile ] description: - Path to the file that contains the usernames and passwords name: required: true aliases: [ username ] description: - User name to add or remove password: required: false description: - Password associated with user. - Must be specified if user does not exist yet. crypt_scheme: required: false choices: ["apr_md5_crypt", "des_crypt", "ldap_sha1", "plaintext"] default: "apr_md5_crypt" description: - Encryption scheme to be used. state: required: false choices: [ present, absent ] default: "present" description: - Whether the user entry should be present or not create: required: false choices: [ "yes", "no" ] default: "yes" description: - Used with C(state=present). If specified, the file will be created if it does not already exist. If set to "no", will fail if the file does not exist notes: - "This module depends on the I(passlib) Python library, which needs to be installed on all target systems." - "On Debian, Ubuntu, or Fedora: install I(python-passlib)." - "On RHEL or CentOS: Enable EPEL, then install I(python-passlib)." 
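    - "A quick way to verify the requirement (illustrative): C(python -c 'import passlib; print passlib.__version__') should report 1.6 or newer."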
requires: [ passlib>=1.6 ] author: Lorin Hochstein """ EXAMPLES = """ # Add a user to a password file and ensure permissions are set - htpasswd: path=/etc/nginx/passwdfile name=janedoe password=9s36?;fyNp owner=root group=www-data mode=0640 # Remove a user from a password file - htpasswd: path=/etc/apache2/passwdfile name=foobar state=absent """ import os from distutils.version import StrictVersion try: from passlib.apache import HtpasswdFile import passlib except ImportError: passlib_installed = False else: passlib_installed = True def create_missing_directories(dest): destpath = os.path.dirname(dest) if not os.path.exists(destpath): os.makedirs(destpath) def present(dest, username, password, crypt_scheme, create, check_mode): """ Ensures user is present Returns (msg, changed) """ if not os.path.exists(dest): if not create: raise ValueError('Destination %s does not exist' % dest) if check_mode: return ("Create %s" % dest, True) create_missing_directories(dest) if StrictVersion(passlib.__version__) >= StrictVersion('1.6'): ht = HtpasswdFile(dest, new=True, default_scheme=crypt_scheme) else: ht = HtpasswdFile(dest, autoload=False, default=crypt_scheme) if getattr(ht, 'set_password', None): ht.set_password(username, password) else: ht.update(username, password) ht.save() return ("Created %s and added %s" % (dest, username), True) else: if StrictVersion(passlib.__version__) >= StrictVersion('1.6'): ht = HtpasswdFile(dest, new=False, default_scheme=crypt_scheme) else: ht = HtpasswdFile(dest, default=crypt_scheme) found = None if getattr(ht, 'check_password', None): found = ht.check_password(username, password) else: found = ht.verify(username, password) if found: return ("%s already present" % username, False) else: if not check_mode: if getattr(ht, 'set_password', None): ht.set_password(username, password) else: ht.update(username, password) ht.save() return ("Add/update %s" % username, True) def absent(dest, username, check_mode): """ Ensures user is absent Returns (msg, changed) """ if not os.path.exists(dest): raise ValueError("%s does not exists" % dest) if StrictVersion(passlib.__version__) >= StrictVersion('1.6'): ht = HtpasswdFile(dest, new=False) else: ht = HtpasswdFile(dest) if username not in ht.users(): return ("%s not present" % username, False) else: if not check_mode: ht.delete(username) ht.save() return ("Remove %s" % username, True) def check_file_attrs(module, changed, message): file_args = module.load_file_common_arguments(module.params) if module.set_file_attributes_if_different(file_args, False): if changed: message += " and " changed = True message += "ownership, perms or SE linux context changed" return message, changed def main(): arg_spec = dict( path=dict(required=True, aliases=["dest", "destfile"]), name=dict(required=True, aliases=["username"]), password=dict(required=False, default=None), crypt_scheme=dict(required=False, default=None), state=dict(required=False, default="present"), create=dict(type='bool', default='yes'), ) module = AnsibleModule(argument_spec=arg_spec, add_file_common_args=True, supports_check_mode=True) path = module.params['path'] username = module.params['name'] password = module.params['password'] crypt_scheme = module.params['crypt_scheme'] state = module.params['state'] create = module.params['create'] check_mode = module.check_mode if not passlib_installed: module.fail_json(msg="This module requires the passlib Python library") try: if state == 'present': (msg, changed) = present(path, username, password, crypt_scheme, create, check_mode) 
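        # present() and absent() return a (msg, changed) tuple; changed is True
        # when the file or an entry was created, updated or removed
        # (illustrative note).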
elif state == 'absent': (msg, changed) = absent(path, username, check_mode) else: module.fail_json(msg="Invalid state: %s" % state) check_file_attrs(module, changed, msg) module.exit_json(msg=msg, changed=changed) except Exception, e: module.fail_json(msg=str(e)) # import module snippets from ansible.module_utils.basic import * if __name__ == '__main__': main() ansible-1.5.4/library/web_infrastructure/supervisorctl0000664000000000000000000001437712316627017022107 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Matt Wright # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # import os DOCUMENTATION = ''' --- module: supervisorctl short_description: Manage the state of a program or group of programs running via Supervisord description: - Manage the state of a program or group of programs running via I(Supervisord) version_added: "0.7" options: name: description: - The name of the I(supervisord) program/process to manage required: true default: null config: description: - configuration file path, passed as -c to supervisorctl required: false default: null version_added: "1.3" server_url: description: - URL on which supervisord server is listening, passed as -s to supervisorctl required: false default: null version_added: "1.3" username: description: - username to use for authentication with server, passed as -u to supervisorctl required: false default: null version_added: "1.3" password: description: - password to use for authentication with server, passed as -p to supervisorctl required: false default: null version_added: "1.3" state: description: - The state of service required: true default: null choices: [ "present", "started", "stopped", "restarted" ] supervisorctl_path: description: - Path to supervisorctl executable to use required: false default: null version_added: "1.4" requirements: - supervisorctl requirements: [ ] author: Matt Wright ''' EXAMPLES = ''' # Manage the state of program to be in 'started' state. - supervisorctl: name=my_app state=started # Restart my_app, reading supervisorctl configuration from a specified file. - supervisorctl: name=my_app state=restarted config=/var/opt/my_project/supervisord.conf # Restart my_app, connecting to supervisord with credentials and server URL. 
- supervisorctl: name=my_app state=restarted username=test password=testpass server_url=http://localhost:9001 ''' def main(): arg_spec = dict( name=dict(required=True), config=dict(required=False), server_url=dict(required=False), username=dict(required=False), password=dict(required=False), supervisorctl_path=dict(required=False), state=dict(required=True, choices=['present', 'started', 'restarted', 'stopped']) ) module = AnsibleModule(argument_spec=arg_spec, supports_check_mode=True) name = module.params['name'] state = module.params['state'] config = module.params.get('config') server_url = module.params.get('server_url') username = module.params.get('username') password = module.params.get('password') supervisorctl_path = module.params.get('supervisorctl_path') if supervisorctl_path: supervisorctl_path = os.path.expanduser(supervisorctl_path) if os.path.exists(supervisorctl_path) and module.is_executable(supervisorctl_path): supervisorctl_args = [ supervisorctl_path ] else: module.fail_json(msg="Provided path to supervisorctl does not exist or isn't executable: %s" % supervisorctl_path) else: supervisorctl_args = [ module.get_bin_path('supervisorctl', True) ] if config: supervisorctl_args.extend(['-c', os.path.expanduser(config)]) if server_url: supervisorctl_args.extend(['-s', server_url]) if username: supervisorctl_args.extend(['-u', username]) if password: supervisorctl_args.extend(['-p', password]) def run_supervisorctl(cmd, name=None, **kwargs): args = list(supervisorctl_args) # copy the master args args.append(cmd) if name: args.append(name) return module.run_command(args, **kwargs) rc, out, err = run_supervisorctl('status') present = name in out if state == 'present': if not present: if module.check_mode: module.exit_json(changed=True) run_supervisorctl('reread', check_rc=True) rc, out, err = run_supervisorctl('add', name) if '%s: added process group' % name in out: module.exit_json(changed=True, name=name, state=state) else: module.fail_json(msg=out, name=name, state=state) module.exit_json(changed=False, name=name, state=state) rc, out, err = run_supervisorctl('status', name) running = 'RUNNING' in out or '(already running)' in out if running and state == 'started': module.exit_json(changed=False, name=name, state=state) if running and state == 'stopped': if module.check_mode: module.exit_json(changed=True) rc, out, err = run_supervisorctl('stop', name) if '%s: stopped' % name in out: module.exit_json(changed=True, name=name, state=state) module.fail_json(msg=out) elif state == 'restarted': if module.check_mode: module.exit_json(changed=True) rc, out, err = run_supervisorctl('update', name) rc, out, err = run_supervisorctl('restart', name) if '%s: started' % name in out: module.exit_json(changed=True, name=name, state=state) module.fail_json(msg=out) elif not running and state == 'started': if module.check_mode: module.exit_json(changed=True) rc, out, err = run_supervisorctl('start',name) if '%s: started' % name in out: module.exit_json(changed=True, name=name, state=state) module.fail_json(msg=out) module.exit_json(changed=False, name=name, state=state) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/web_infrastructure/ejabberd_user0000664000000000000000000001575212316627017021755 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright (C) 2013, Peter Sprygada # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free 
Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # DOCUMENTATION = ''' --- module: ejabberd_user version_added: "1.5" author: Peter Sprygada short_description: Manages users for ejabberd servers requirements: - ejabberd description: - This module provides user management for ejabberd servers options: username: description: - the name of the user to manage required: true host: description: - the ejabberd host associated with this username required: true password: description: - the password to assign to the username required: false logging: description: - enables or disables the local syslog facility for this module required: false default: false choices: [ 'true', 'false', 'yes', 'no' ] state: description: - describe the desired state of the user to be managed required: false default: 'present' choices: [ 'present', 'absent' ] notes: - Password parameter is required for state == present only - Passwords must be stored in clear text for this release ''' EXAMPLES = ''' Example playbook entries using the ejabberd_user module to manage users state. tasks: - name: create a user if it does not exists action: ejabberd_user username=test host=server password=password - name: delete a user if it exists action: ejabberd_user username=test host=server state=absent ''' import syslog class EjabberdUserException(Exception): """ Base exeption for EjabberdUser class object """ pass class EjabberdUser(object): """ This object represents a user resource for an ejabberd server. The object manages user creation and deletion using ejabberdctl. The following commands are currently supported: * ejabberdctl register * ejabberdctl deregister """ def __init__(self, module): self.module = module self.logging = module.params.get('logging') self.state = module.params.get('state') self.host = module.params.get('host') self.user = module.params.get('username') self.pwd = module.params.get('password') @property def changed(self): """ This method will check the current user and see if the password has changed. It will return True if the user does not match the supplied credentials and False if it does not """ try: options = [self.user, self.host, self.pwd] (rc, out, err) = self.run_command('check_password', options) except EjabberdUserException, e: (rc, out, err) = (1, None, "required attribute(s) missing") return rc @property def exists(self): """ This method will check to see if the supplied username exists for host specified. 
If the user exists True is returned, otherwise False is returned """ try: options = [self.user, self.host] (rc, out, err) = self.run_command('check_account', options) except EjabberdUserException, e: (rc, out, err) = (1, None, "required attribute(s) missing") return True if rc == 0 else False def log(self, entry): """ This method will log information to the local syslog facility """ if self.logging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, entry) def run_command(self, cmd, options): """ This method will run the any command specified and return the returns using the Ansible common module """ if not all(options): raise EjabberdUserException cmd = 'ejabberdctl %s ' % cmd cmd += " ".join(options) self.log('command: %s' % cmd) return self.module.run_command(cmd.split()) def update(self): """ The update method will update the credentials for the user provided """ try: options = [self.user, self.host, self.pwd] (rc, out, err) = self.run_command('change_password', options) except EjabberdUserException, e: (rc, out, err) = (1, None, "required attribute(s) missing") return (rc, out, err) def create(self): """ The create method will create a new user on the host with the password provided """ try: options = [self.user, self.host, self.pwd] (rc, out, err) = self.run_command('register', options) except EjabberdUserException, e: (rc, out, err) = (1, None, "required attribute(s) missing") return (rc, out, err) def delete(self): """ The delete method will delete the user from the host """ try: options = [self.user, self.host] (rc, out, err) = self.run_command('unregister', options) except EjabberdUserException, e: (rc, out, err) = (1, None, "required attribute(s) missing") return (rc, out, err) def main(): module = AnsibleModule( argument_spec = dict( host=dict(default=None, type='str'), username=dict(default=None, type='str'), password=dict(default=None, type='str'), state=dict(default='present', choices=['present', 'absent']), logging=dict(default=False, type='bool') ), supports_check_mode = True ) obj = EjabberdUser(module) rc = None result = dict() if obj.state == 'absent': if obj.exists: if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.delete() if rc != 0: module.fail_json(msg=err, rc=rc) elif obj.state == 'present': if not obj.exists: if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.create() elif obj.changed: if module.check_mode: module.exit_json(changed=True) (rc, out, err) = obj.update() if rc is not None and rc != 0: module.fail_json(msg=err, rc=rc) if rc is None: result['changed'] = False else: result['changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/web_infrastructure/django_manage0000664000000000000000000002457312316627017021734 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Scott Anderson # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: django_manage short_description: Manages a Django application. description: - Manages a Django application using the I(manage.py) application frontend to I(django-admin). With the I(virtualenv) parameter, all management commands will be executed by the given I(virtualenv) installation. version_added: "1.1" options: command: choices: [ 'cleanup', 'flush', 'loaddata', 'runfcgi', 'syncdb', 'test', 'validate', 'migrate', 'collectstatic' ] description: - The name of the Django management command to run. Allowed commands are cleanup, createcachetable, flush, loaddata, syncdb, test, validate. required: true app_path: description: - The path to the root of the Django application where B(manage.py) lives. required: true settings: description: - The Python path to the application's settings module, such as 'myapp.settings'. required: false pythonpath: description: - A directory to add to the Python path. Typically used to include the settings module if it is located external to the application directory. required: false virtualenv: description: - An optional path to a I(virtualenv) installation to use while running the manage application. required: false apps: description: - A list of space-delimited apps to target. Used by the 'test' command. required: false cache_table: description: - The name of the table used for database-backed caching. Used by the 'createcachetable' command. required: false database: description: - The database to target. Used by the 'createcachetable', 'flush', 'loaddata', and 'syncdb' commands. required: false failfast: description: - Fail the command immediately if a test fails. Used by the 'test' command. required: false default: "no" choices: [ "yes", "no" ] fixtures: description: - A space-delimited list of fixture file names to load in the database. B(Required) by the 'loaddata' command. required: false skip: description: - Will skip over out-of-order missing migrations, you can only use this parameter with I(migrate) required: false merge: description: - Will run out-of-order or missing migrations as they are not rollback migrations, you can only use this parameter with 'migrate' command required: false link: description: - Will create links to the files instead of copying them, you can only use this parameter with 'collectstatic' command required: false notes: - I(virtualenv) (U(http://www.virtualenv.org)) must be installed on the remote host if the virtualenv parameter is specified. - This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already exist at the given location. - This module assumes English error messages for the 'createcachetable' command to detect table existence, unfortunately. - To be able to use the migrate command, you must have south installed and added as an app in your settings - To be able to use the collectstatic command, you must have enabled staticfiles in your settings requirements: [ "virtualenv", "django" ] author: Scott Anderson ''' EXAMPLES = """ # Run cleanup on the application installed in 'django_dir'. 
- django_manage: command=cleanup app_path={{ django_dir }} # Load the initial_data fixture into the application - django_manage: command=loaddata app_path={{ django_dir }} fixtures={{ initial_data }} #Run syncdb on the application - django_manage: > command=syncdb app_path={{ django_dir }} settings={{ settings_app_name }} pythonpath={{ settings_dir }} virtualenv={{ virtualenv_dir }} #Run the SmokeTest test case from the main app. Useful for testing deploys. - django_manage: command=test app_path=django_dir apps=main.SmokeTest """ import os def _fail(module, cmd, out, err, **kwargs): msg = '' if out: msg += "stdout: %s" % (out, ) if err: msg += "\n:stderr: %s" % (err, ) module.fail_json(cmd=cmd, msg=msg, **kwargs) def _ensure_virtualenv(module): venv_param = module.params['virtualenv'] if venv_param is None: return vbin = os.path.join(os.path.expanduser(venv_param), 'bin') activate = os.path.join(vbin, 'activate') if not os.path.exists(activate): virtualenv = module.get_bin_path('virtualenv', True) vcmd = '%s %s' % (virtualenv, venv_param) vcmd = [virtualenv, venv_param] rc, out_venv, err_venv = module.run_command(vcmd) if rc != 0: _fail(module, vcmd, out_venv, err_venv) os.environ["PATH"] = "%s:%s" % (vbin, os.environ["PATH"]) def createcachetable_filter_output(line): return "Already exists" not in line def flush_filter_output(line): return "Installed" in line and "Installed 0 object" not in line def loaddata_filter_output(line): return "Installed" in line and "Installed 0 object" not in line def syncdb_filter_output(line): return ("Creating table " in line) or ("Installed" in line and "Installed 0 object" not in line) def migrate_filter_output(line): return ("Migrating forwards " in line) or ("Installed" in line and "Installed 0 object" not in line) def main(): command_allowed_param_map = dict( cleanup=(), createcachetable=('cache_table', 'database', ), flush=('database', ), loaddata=('database', 'fixtures', ), syncdb=('database', ), test=('failfast', 'testrunner', 'liveserver', 'apps', ), validate=(), migrate=('apps', 'skip', 'merge'), collectstatic=('link', ), ) command_required_param_map = dict( loaddata=('fixtures', ), createcachetable=('cache_table', ), ) # forces --noinput on every command that needs it noinput_commands = ( 'flush', 'syncdb', 'migrate', 'test', 'collectstatic', ) # These params are allowed for certain commands only specific_params = ('apps', 'database', 'failfast', 'fixtures', 'liveserver', 'testrunner') # These params are automatically added to the command if present general_params = ('settings', 'pythonpath', 'database',) specific_boolean_params = ('failfast', 'skip', 'merge', 'link') end_of_command_params = ('apps', 'cache_table', 'fixtures') module = AnsibleModule( argument_spec=dict( command = dict(default=None, required=True), app_path = dict(default=None, required=True), settings = dict(default=None, required=False), pythonpath = dict(default=None, required=False, aliases=['python_path']), virtualenv = dict(default=None, required=False, aliases=['virtual_env']), apps = dict(default=None, required=False), cache_table = dict(default=None, required=False), database = dict(default=None, required=False), failfast = dict(default='no', required=False, choices=BOOLEANS, aliases=['fail_fast']), fixtures = dict(default=None, required=False), liveserver = dict(default=None, required=False, aliases=['live_server']), testrunner = dict(default=None, required=False, aliases=['test_runner']), skip = dict(default=None, required=False, choices=BOOLEANS), merge = 
            link        = dict(default=None, required=False, choices=BOOLEANS),
        ),
    )

    command = module.params['command']
    app_path = module.params['app_path']
    virtualenv = module.params['virtualenv']

    for param in specific_params:
        value = module.params[param]
        if param in specific_boolean_params:
            value = module.boolean(value)
        if value and param not in command_allowed_param_map[command]:
            module.fail_json(msg='%s param is incompatible with command=%s' % (param, command))

    for param in command_required_param_map.get(command, ()):
        if not module.params[param]:
            module.fail_json(msg='%s param is required for command=%s' % (param, command))

    _ensure_virtualenv(module)

    cmd = "python manage.py %s" % (command, )

    if command in noinput_commands:
        cmd = '%s --noinput' % cmd

    for param in general_params:
        if module.params[param]:
            cmd = '%s --%s=%s' % (cmd, param, module.params[param])

    for param in specific_boolean_params:
        if module.boolean(module.params[param]):
            cmd = '%s --%s' % (cmd, param)

    # these params always get tacked on the end of the command
    for param in end_of_command_params:
        if module.params[param]:
            cmd = '%s %s' % (cmd, module.params[param])

    rc, out, err = module.run_command(cmd, cwd=app_path)
    if rc != 0:
        if command == 'createcachetable' and 'table' in err and 'already exists' in err:
            out = 'Already exists.'
        else:
            if "Unknown command:" in err:
                # report the unrecognized command directly instead of routing
                # the message through _fail's stdout/stderr slots
                module.fail_json(cmd=cmd, msg="Unknown django command: %s" % command)
            _fail(module, cmd, out, err, path=os.environ["PATH"], syspath=sys.path)

    changed = False

    lines = out.split('\n')
    filt = globals().get(command + "_filter_output", None)
    if filt:
        filtered_output = filter(filt, lines)
        if len(filtered_output):
            # report a plain boolean; the filtered lines remain visible in 'out'
            changed = True

    module.exit_json(changed=changed, out=out, cmd=cmd, app_path=app_path, virtualenv=virtualenv,
                     settings=module.params['settings'], pythonpath=module.params['pythonpath'])

# import module snippets
from ansible.module_utils.basic import *

main()
ansible-1.5.4/library/web_infrastructure/jboss0000664000000000000000000001142312316627017020270 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Jeroen Hoekx
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
DOCUMENTATION = """ module: jboss version_added: "1.4" short_description: deploy applications to JBoss description: - Deploy applications to JBoss standalone using the filesystem options: deployment: required: true description: - The name of the deployment src: required: false description: - The remote path of the application ear or war to deploy deploy_path: required: false default: /var/lib/jbossas/standalone/deployments description: - The location in the filesystem where the deployment scanner listens state: required: false choices: [ present, absent ] default: "present" description: - Whether the application should be deployed or undeployed notes: - "The JBoss standalone deployment-scanner has to be enabled in standalone.xml" - "Ensure no identically named application is deployed through the JBoss CLI" author: Jeroen Hoekx """ EXAMPLES = """ # Deploy a hello world application - jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present # Update the hello world application - jboss: src=/tmp/hello-1.1-SNAPSHOT.war deployment=hello.war state=present # Undeploy the hello world application - jboss: deployment=hello.war state=absent """ import os import shutil import time def is_deployed(deploy_path, deployment): return os.path.exists(os.path.join(deploy_path, "%s.deployed"%(deployment))) def is_undeployed(deploy_path, deployment): return os.path.exists(os.path.join(deploy_path, "%s.undeployed"%(deployment))) def is_failed(deploy_path, deployment): return os.path.exists(os.path.join(deploy_path, "%s.failed"%(deployment))) def main(): module = AnsibleModule( argument_spec = dict( src=dict(), deployment=dict(required=True), deploy_path=dict(default='/var/lib/jbossas/standalone/deployments'), state=dict(choices=['absent', 'present'], default='present'), ), ) changed = False src = module.params['src'] deployment = module.params['deployment'] deploy_path = module.params['deploy_path'] state = module.params['state'] if state == 'present' and not src: module.fail_json(msg="Argument 'src' required.") if not os.path.exists(deploy_path): module.fail_json(msg="deploy_path does not exist.") deployed = is_deployed(deploy_path, deployment) if state == 'present' and not deployed: if not os.path.exists(src): module.fail_json(msg='Source file %s does not exist.'%(src)) if is_failed(deploy_path, deployment): ### Clean up old failed deployment os.remove(os.path.join(deploy_path, "%s.failed"%(deployment))) shutil.copyfile(src, os.path.join(deploy_path, deployment)) while not deployed: deployed = is_deployed(deploy_path, deployment) if is_failed(deploy_path, deployment): module.fail_json(msg='Deploying %s failed.'%(deployment)) time.sleep(1) changed = True if state == 'present' and deployed: if module.md5(src) != module.md5(os.path.join(deploy_path, deployment)): os.remove(os.path.join(deploy_path, "%s.deployed"%(deployment))) shutil.copyfile(src, os.path.join(deploy_path, deployment)) deployed = False while not deployed: deployed = is_deployed(deploy_path, deployment) if is_failed(deploy_path, deployment): module.fail_json(msg='Deploying %s failed.'%(deployment)) time.sleep(1) changed = True if state == 'absent' and deployed: os.remove(os.path.join(deploy_path, "%s.deployed"%(deployment))) while deployed: deployed = not is_undeployed(deploy_path, deployment) if is_failed(deploy_path, deployment): module.fail_json(msg='Undeploying %s failed.'%(deployment)) time.sleep(1) changed = True module.exit_json(changed=changed) # import module snippets from ansible.module_utils.basic import * main() 
ansible-1.5.4/library/source_control/0000775000000000000000000000000012316627017016347 5ustar rootrootansible-1.5.4/library/source_control/subversion0000664000000000000000000001564312316627017020472 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .

DOCUMENTATION = '''
---
module: subversion
short_description: Deploys a subversion repository.
description:
  - Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a checkout.
version_added: "0.7"
author: Dane Summers, njharman@gmail.com
notes:
  - Requires I(svn) to be installed on the client.
requirements: []
options:
  repo:
    description:
      - The subversion URL to the repository.
    required: true
    aliases: [ name, repository ]
    default: null
  dest:
    description:
      - Absolute path where the repository should be deployed.
    required: true
    default: null
  revision:
    description:
      - Specific revision to check out.
    required: false
    default: HEAD
    aliases: [ rev, version ]
  force:
    description:
      - If C(yes), modified files will be discarded. If C(no), module will fail if it encounters modified files.
    required: false
    default: "yes"
    choices: [ "yes", "no" ]
  username:
    description:
      - --username parameter passed to svn.
    required: false
    default: null
  password:
    description:
      - --password parameter passed to svn.
    required: false
    default: null
  executable:
    required: false
    default: null
    version_added: "1.4"
    description:
      - Path to svn executable to use. If not supplied, the normal mechanism for resolving binary paths will be used.
'''

EXAMPLES = '''
# Checkout subversion repository to specified folder.
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout
'''

import re
import tempfile

class Subversion(object):
    def __init__(
            self, module, dest, repo, revision, username, password, svn_path):
        self.module = module
        self.dest = dest
        self.repo = repo
        self.revision = revision
        self.username = username
        self.password = password
        self.svn_path = svn_path

    def _exec(self, args):
        bits = [
            self.svn_path,
            '--non-interactive',
            '--trust-server-cert',
            '--no-auth-cache',
        ]
        if self.username:
            bits.extend(["--username", self.username])
        if self.password:
            bits.extend(["--password", self.password])
        bits.extend(args)
        rc, out, err = self.module.run_command(bits, check_rc=True)
        return out.splitlines()

    def checkout(self):
        '''Creates new svn working directory if it does not already exist.'''
        self._exec(["checkout", "-r", self.revision, self.repo, self.dest])

    def switch(self):
        '''Change working directory's repo.'''
        # switch to ensure we are pointing at correct repo.
self._exec(["switch", self.repo, self.dest]) def update(self): '''Update existing svn working directory.''' self._exec(["update", "-r", self.revision, self.dest]) def revert(self): '''Revert svn working directory.''' self._exec(["revert", "-R", self.dest]) def get_revision(self): '''Revision and URL of subversion working directory.''' text = '\n'.join(self._exec(["info", self.dest])) rev = re.search(r'^Revision:.*$', text, re.MULTILINE).group(0) url = re.search(r'^URL:.*$', text, re.MULTILINE).group(0) return rev, url def has_local_mods(self): '''True if revisioned files have been added or modified. Unrevisioned files are ignored.''' lines = self._exec(["status", self.dest]) # Match only revisioned files, i.e. ignore status '?'. regex = re.compile(r'^[^?]') # Has local mods if more than 0 modifed revisioned files. return len(filter(regex.match, lines)) > 0 def needs_update(self): curr, url = self.get_revision() out2 = '\n'.join(self._exec(["info", "-r", "HEAD", self.dest])) head = re.search(r'^Revision:.*$', out2, re.MULTILINE).group(0) rev1 = int(curr.split(':')[1].strip()) rev2 = int(head.split(':')[1].strip()) change = False if rev1 < rev2: change = True return change, curr, head # =========================================== def main(): module = AnsibleModule( argument_spec=dict( dest=dict(required=True), repo=dict(required=True, aliases=['name', 'repository']), revision=dict(default='HEAD', aliases=['rev', 'version']), force=dict(default='yes', type='bool'), username=dict(required=False), password=dict(required=False), executable=dict(default=None), ), supports_check_mode=True ) dest = os.path.expanduser(module.params['dest']) repo = module.params['repo'] revision = module.params['revision'] force = module.params['force'] username = module.params['username'] password = module.params['password'] svn_path = module.params['executable'] or module.get_bin_path('svn', True) os.environ['LANG'] = 'C' svn = Subversion(module, dest, repo, revision, username, password, svn_path) if not os.path.exists(dest): before = None local_mods = False if module.check_mode: module.exit_json(changed=True) svn.checkout() elif os.path.exists("%s/.svn" % (dest, )): # Order matters. Need to get local mods before switch to avoid false # positives. Need to switch before revert to ensure we are reverting to # correct repo. if module.check_mode: check, before, after = svn.needs_update() module.exit_json(changed=check, before=before, after=after) before = svn.get_revision() local_mods = svn.has_local_mods() svn.switch() if local_mods: if force: svn.revert() else: module.fail_json(msg="ERROR: modified files exist in the repository.") svn.update() else: module.fail_json(msg="ERROR: %s folder already exists, but its not a subversion repository." % (dest, )) after = svn.get_revision() changed = before != after or local_mods module.exit_json(changed=changed, before=before, after=after) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/source_control/hg0000664000000000000000000001671512316627017016702 0ustar rootroot#!/usr/bin/python #-*- coding: utf-8 -*- # (c) 2013, Yeukhon Wong # # This module was originally inspired by Brad Olson's ansible-module-mercurial # . This module tends # to follow the git module implementation. 
# # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import ConfigParser DOCUMENTATION = ''' --- module: hg short_description: Manages Mercurial (hg) repositories. description: - Manages Mercurial (hg) repositories. Supports SSH, HTTP/S and local address. version_added: "1.0" author: Yeukhon Wong options: repo: description: - The repository address. required: true default: null aliases: [ name ] dest: description: - Absolute path of where the repository should be cloned to. required: true default: null revision: description: - Equivalent C(-r) option in hg command which could be the changeset, revision number, branch name or even tag. required: false default: "default" aliases: [ version ] force: description: - Discards uncommitted changes. Runs C(hg update -C). required: false default: "yes" choices: [ "yes", "no" ] purge: description: - Deletes untracked files. Runs C(hg purge). required: false default: "no" choices: [ "yes", "no" ] executable: required: false default: null version_added: "1.4" description: - Path to hg executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. notes: - "If the task seems to be hanging, first verify remote host is in C(known_hosts). SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling the hg module, with the following command: ssh-keyscan remote_host.com >> /etc/ssh/ssh_known_hosts." requirements: [ ] ''' EXAMPLES = ''' # Ensure the current working copy is inside the stable branch and deletes untracked files if any. - hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes ''' class Hg(object): def __init__(self, module, dest, repo, revision, hg_path): self.module = module self.dest = dest self.repo = repo self.revision = revision self.hg_path = hg_path def _command(self, args_list): (rc, out, err) = self.module.run_command([self.hg_path] + args_list) return (rc, out, err) def _list_untracked(self): args = ['purge', '--config', 'extensions.purge=', '-R', self.dest, '--print'] return self._command(args) def get_revision(self): """ hg id -b -i -t returns a string in the format: "[+] " This format lists the state of the current working copy, and indicates whether there are uncommitted changes by the plus sign. Otherwise, the sign is omitted. 
        Read the full description via hg id --help.
        """
        (rc, out, err) = self._command(['id', '-b', '-i', '-t', '-R', self.dest])
        if rc != 0:
            self.module.fail_json(msg=err)
        else:
            return out.strip('\n')

    def has_local_mods(self):
        now = self.get_revision()
        if '+' in now:
            return True
        else:
            return False

    def discard(self):
        before = self.has_local_mods()
        if not before:
            return False

        (rc, out, err) = self._command(['update', '-C', '-R', self.dest])
        if rc != 0:
            self.module.fail_json(msg=err)

        after = self.has_local_mods()
        if before != after and not after:   # no more local modification
            return True

    def purge(self):
        # before purge, find out if there are any untracked files
        (rc1, out1, err1) = self._list_untracked()
        if rc1 != 0:
            self.module.fail_json(msg=err1)

        # there are some untracked files
        if out1 != '':
            args = ['purge', '--config', 'extensions.purge=', '-R', self.dest]
            (rc2, out2, err2) = self._command(args)
            if rc2 != 0:
                self.module.fail_json(msg=err2)
            return True
        else:
            return False

    def cleanup(self, force, purge):
        discarded = False
        purged = False

        if force:
            discarded = self.discard()
        if purge:
            purged = self.purge()

        if discarded or purged:
            return True
        else:
            return False

    def pull(self):
        return self._command(
            ['pull', '-R', self.dest, self.repo])

    def update(self):
        return self._command(['update', '-R', self.dest])

    def clone(self):
        return self._command(['clone', self.repo, self.dest, '-r', self.revision])

    def switch_version(self):
        return self._command(['update', '-r', self.revision, '-R', self.dest])

# ===========================================

def main():
    module = AnsibleModule(
        argument_spec = dict(
            repo = dict(required=True, aliases=['name']),
            dest = dict(required=True),
            revision = dict(default="default", aliases=['version']),
            force = dict(default='yes', type='bool'),
            purge = dict(default='no', type='bool'),
            executable = dict(default=None),
        ),
    )
    repo = module.params['repo']
    dest = os.path.expanduser(module.params['dest'])
    revision = module.params['revision']
    force = module.params['force']
    purge = module.params['purge']
    hg_path = module.params['executable'] or module.get_bin_path('hg', True)
    hgrc = os.path.join(dest, '.hg/hgrc')

    # initial states
    before = ''
    changed = False
    cleaned = False

    hg = Hg(module, dest, repo, revision, hg_path)

    # If there is no hgrc file, then assume repo is absent
    # and perform clone. Otherwise, perform pull and update.
    if not os.path.exists(hgrc):
        (rc, out, err) = hg.clone()
        if rc != 0:
            module.fail_json(msg=err)
    else:
        # get the current state before doing pulling
        before = hg.get_revision()

        # can perform force and purge
        cleaned = hg.cleanup(force, purge)

        (rc, out, err) = hg.pull()
        if rc != 0:
            module.fail_json(msg=err)

        (rc, out, err) = hg.update()
        if rc != 0:
            module.fail_json(msg=err)

    # fail loudly if the requested revision cannot be checked out
    (rc, out, err) = hg.switch_version()
    if rc != 0:
        module.fail_json(msg=err)
    after = hg.get_revision()
    if before != after or cleaned:
        changed = True
    module.exit_json(before=before, after=after, changed=changed, cleaned=cleaned)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/source_control/github_hooks0000664000000000000000000001353612316627017020763 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Phillip Gentry
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .

import json
import base64

DOCUMENTATION = '''
---
module: github_hooks
short_description: Manages github service hooks.
description:
     - Adds service hooks and removes service hooks that have an error status.
version_added: "1.4"
options:
  user:
    description:
      - Github username.
    required: true
  oauthkey:
    description:
      - The oauth key provided by github. It can be found/generated on github under "Edit Your Profile" >> "Applications" >> "Personal Access Tokens"
    required: true
  repo:
    description:
      - "This is the API url for the repository you want to manage hooks for. It should be in the form of: https://api.github.com/repos/user:/repo:. Note this is different than the normal repo url."
    required: true
  hookurl:
    description:
      - When creating a new hook, this is the url that you want github to post to. It is only required when creating a new hook.
    required: false
  action:
    description:
      - This tells the github_hooks module what you want it to do.
    required: true
    choices: [ "create", "cleanall", "clean504", "list" ]
  validate_certs:
    description:
      - If C(no), SSL certificates for the target repo will not be validated. This should only be used on personally controlled sites using self-signed certificates.
    required: false
    default: 'yes'
    choices: ['yes', 'no']
author: Phillip Gentry, CX Inc
'''

EXAMPLES = '''
# Example creating a new service hook. It ignores duplicates.
- github_hooks: action=create hookurl=http://11.111.111.111:2222 user={{ gituser }} oauthkey={{ oauthkey }} repo=https://api.github.com/repos/pcgentry/Github-Auto-Deploy

# Cleaning all hooks for this repo that had an error on the last update. Since this works for all hooks in a repo it is probably best that this would be called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
'''

def list(module, hookurl, oauthkey, repo, user):
    url = "%s/hooks" % repo
    auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '')
    headers = {
        'Authorization': 'Basic %s' % auth,
    }
    response, info = fetch_url(module, url, headers=headers)
    if info['status'] != 200:
        return False, ''
    else:
        return False, response.read()

def clean504(module, hookurl, oauthkey, repo, user):
    # pass the module through so list() can issue the request
    current_hooks = list(module, hookurl, oauthkey, repo, user)[1]
    decoded = json.loads(current_hooks)

    for hook in decoded:
        if hook['last_response']['code'] == 504:
            # print "Last response was an ERROR for hook:"
            # print hook['id']
            delete(module, hookurl, oauthkey, repo, user, hook['id'])

    return 0, current_hooks

def cleanall(module, hookurl, oauthkey, repo, user):
    # pass the module through so list() can issue the request
    current_hooks = list(module, hookurl, oauthkey, repo, user)[1]
    decoded = json.loads(current_hooks)

    for hook in decoded:
        if hook['last_response']['code'] != 200:
            # print "Last response was an ERROR for hook:"
            # print hook['id']
            delete(module, hookurl, oauthkey, repo, user, hook['id'])

    return 0, current_hooks

def create(module, hookurl, oauthkey, repo, user):
    url = "%s/hooks" % repo
    values = {
        "active": True,
        "name": "web",
        "config": {
            "url": "%s" % hookurl,
            "content_type": "json"
        }
    }
    data = json.dumps(values)
    auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '')
    headers = {
        'Authorization': 'Basic %s' % auth,
    }
    response, info = fetch_url(module, url, data=data, headers=headers)
    if info['status'] != 200:
        return 0, '[]'
    else:
        return 0, response.read()

def delete(module, hookurl, oauthkey, repo, user, hookid):
    url = "%s/hooks/%s" % (repo, hookid)
    auth = base64.encodestring('%s:%s' % (user, oauthkey)).replace('\n', '')
    headers = {
        'Authorization': 'Basic %s' % auth,
    }
    # a DELETE needs no request body; the original passed an undefined 'data'
    response, info = fetch_url(module, url, headers=headers, method='DELETE')
    return response.read()

def main():
    module = AnsibleModule(
        argument_spec=dict(
            action=dict(required=True),
            hookurl=dict(required=False),
            oauthkey=dict(required=True),
            repo=dict(required=True),
            user=dict(required=True),
            validate_certs=dict(default='yes', type='bool'),
        )
    )

    action = module.params['action']
    hookurl = module.params['hookurl']
    oauthkey = module.params['oauthkey']
    repo = module.params['repo']
    user = module.params['user']

    if action == "list":
        (rc, out) = list(module, hookurl, oauthkey, repo, user)

    if action == "clean504":
        (rc, out) = clean504(module, hookurl, oauthkey, repo, user)

    if action == "cleanall":
        (rc, out) = cleanall(module, hookurl, oauthkey, repo, user)

    if action == "create":
        (rc, out) = create(module, hookurl, oauthkey, repo, user)

    if rc != 0:
        module.fail_json(msg="failed", result=out)

    module.exit_json(msg="success", result=out)

# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

main()
ansible-1.5.4/library/source_control/git0000664000000000000000000004705512316627017017064 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .

DOCUMENTATION = '''
---
module: git
author: Michael DeHaan
version_added: "0.0.1"
short_description: Deploy software (or files) from git checkouts
description:
    - Manage I(git) checkouts of repositories to deploy files or software.
options:
  repo:
    required: true
    aliases: [ name ]
    description:
      - git, SSH, or HTTP protocol address of the git repository.
  dest:
    required: true
    description:
      - Absolute path of where the repository should be checked out to.
  version:
    required: false
    default: "HEAD"
    description:
      - What version of the repository to check out. This can be the full 40-character I(SHA-1) hash, the literal string C(HEAD), a branch name, or a tag name.
  accept_hostkey:
    required: false
    default: false
    version_added: "1.5"
    description:
      - Add the hostkey for the repo url if not already added. If I(ssh_opts) contains "-o StrictHostKeyChecking=no", this parameter is ignored.
  ssh_opts:
    required: false
    default: None
    version_added: "1.5"
    description:
      - Creates a wrapper script and exports the path as GIT_SSH which git then automatically uses to override ssh arguments. An example value could be "-o StrictHostKeyChecking=no"
  key_file:
    required: false
    default: None
    version_added: "1.5"
    description:
      - Uses the same wrapper method as ssh_opts to pass "-i <key_file>" to the ssh arguments used by git
  reference:
    required: false
    default: null
    version_added: "1.4"
    description:
      - Reference repository (see "git clone --reference ...")
  remote:
    required: false
    default: "origin"
    description:
      - Name of the remote.
  force:
    required: false
    default: "yes"
    choices: [ "yes", "no" ]
    version_added: "0.7"
    description:
      - If C(yes), any modified files in the working repository will be discarded. Prior to 0.7, this was always 'yes' and could not be disabled.
  depth:
    required: false
    default: null
    version_added: "1.2"
    description:
      - Create a shallow clone with a history truncated to the specified number of revisions. The minimum possible value is C(1), otherwise ignored.
  update:
    required: false
    default: "yes"
    choices: [ "yes", "no" ]
    version_added: "1.2"
    description:
      - If C(yes), repository will be updated using the supplied remote. Otherwise the repo will be left untouched. Prior to 1.2, this was always 'yes' and could not be disabled.
  executable:
    required: false
    default: null
    version_added: "1.4"
    description:
      - Path to git executable to use. If not supplied, the normal mechanism for resolving binary paths will be used.
  bare:
    required: false
    default: "no"
    choices: [ "yes", "no" ]
    version_added: "1.4"
    description:
      - If C(yes), repository will be created as a bare repo, otherwise it will be a standard repo with a workspace.
notes:
    - "If the task seems to be hanging, first verify remote host is in C(known_hosts). SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling the git module, with the following command: ssh-keyscan remote_host.com >> /etc/ssh/ssh_known_hosts."
''' EXAMPLES = ''' # Example git checkout from Ansible Playbooks - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout version=release-0.22 # Example read-write git checkout from github - git: repo=ssh://git@github.com/mylogin/hello.git dest=/home/mylogin/hello # Example just ensuring the repo checkout exists - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no ''' import re import tempfile def write_ssh_wrapper(): fd, wrapper_path = tempfile.mkstemp() fh = os.fdopen(fd, 'w+b') template = """#!/bin/sh if [ -z "$GIT_SSH_OPTS" ]; then BASEOPTS="" else BASEOPTS=$GIT_SSH_OPTS fi if [ -z "$GIT_KEY" ]; then ssh $BASEOPTS "$@" else ssh -i "$GIT_KEY" $BASEOPTS "$@" fi """ fh.write(template) fh.close() st = os.stat(wrapper_path) os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC) return wrapper_path def set_git_ssh(ssh_wrapper, key_file, ssh_opts): if os.environ.get("GIT_SSH"): del os.environ["GIT_SSH"] os.environ["GIT_SSH"] = ssh_wrapper if os.environ.get("GIT_KEY"): del os.environ["GIT_KEY"] if key_file: os.environ["GIT_KEY"] = key_file if os.environ.get("GIT_SSH_OPTS"): del os.environ["GIT_SSH_OPTS"] if ssh_opts: os.environ["GIT_SSH_OPTS"] = ssh_opts def get_version(module, git_path, dest, ref="HEAD"): ''' samples the version of the git repo ''' cmd = "%s rev-parse %s" % (git_path, ref) rc, stdout, stderr = module.run_command(cmd, cwd=dest) sha = stdout.rstrip('\n') return sha def clone(git_path, module, repo, dest, remote, depth, version, bare, reference): ''' makes a new git repo if it does not already exist ''' dest_dirname = os.path.dirname(dest) try: os.makedirs(dest_dirname) except: pass cmd = [ git_path, 'clone' ] if bare: cmd.append('--bare') else: cmd.extend([ '--origin', remote, '--recursive' ]) if is_remote_branch(git_path, module, dest, repo, version) \ or is_remote_tag(git_path, module, dest, repo, version): cmd.extend([ '--branch', version ]) if depth: cmd.extend([ '--depth', str(depth) ]) if reference: cmd.extend([ '--reference', str(reference) ]) cmd.extend([ repo, dest ]) module.run_command(cmd, check_rc=True, cwd=dest_dirname) if bare: if remote != 'origin': module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest) def has_local_mods(module, git_path, dest, bare): if bare: return False cmd = "%s status -s" % (git_path) rc, stdout, stderr = module.run_command(cmd, cwd=dest) lines = stdout.splitlines() return len(lines) > 0 def reset(git_path, module, dest): ''' Resets the index and working tree to HEAD. Discards any changes to tracked files in working tree since that commit. ''' cmd = "%s reset --hard HEAD" % (git_path,) return module.run_command(cmd, check_rc=True, cwd=dest) def get_remote_head(git_path, module, dest, version, remote, bare): cloning = False cwd = None if remote == module.params['repo']: cloning = True else: cwd = dest if version == 'HEAD': if cloning: # cloning the repo, just get the remote's HEAD version cmd = '%s ls-remote %s -h HEAD' % (git_path, remote) else: head_branch = get_head_branch(git_path, module, dest, remote, bare) cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch) elif is_remote_branch(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version) elif is_remote_tag(git_path, module, dest, remote, version): cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version) else: # appears to be a sha1. 
        # return as-is, since we cannot check for a specific sha1 on the remote
        return version
    (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
    if len(out) < 1:
        module.fail_json(msg="Could not determine remote revision for %s" % version)
    rev = out.split()[0]
    return rev

def is_remote_tag(git_path, module, dest, remote, version):
    cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
    (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
    if version in out:
        return True
    else:
        return False

def get_branches(git_path, module, dest):
    branches = []
    cmd = '%s branch -a' % (git_path,)
    (rc, out, err) = module.run_command(cmd, cwd=dest)
    if rc != 0:
        module.fail_json(msg="Could not determine branch data - received %s" % out)
    for line in out.split('\n'):
        branches.append(line.strip())
    return branches

def get_tags(git_path, module, dest):
    tags = []
    cmd = '%s tag' % (git_path,)
    (rc, out, err) = module.run_command(cmd, cwd=dest)
    if rc != 0:
        module.fail_json(msg="Could not determine tag data - received %s" % out)
    for line in out.split('\n'):
        tags.append(line.strip())
    return tags

def is_remote_branch(git_path, module, dest, remote, version):
    cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
    (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
    if version in out:
        return True
    else:
        return False

def is_local_branch(git_path, module, dest, branch):
    branches = get_branches(git_path, module, dest)
    lbranch = '%s' % branch
    if lbranch in branches:
        return True
    elif '* %s' % branch in branches:
        return True
    else:
        return False

def is_not_a_branch(git_path, module, dest):
    branches = get_branches(git_path, module, dest)
    for b in branches:
        if b.startswith('* ') and 'no branch' in b:
            return True
    return False

def get_head_branch(git_path, module, dest, remote, bare=False):
    '''
    Determine what branch HEAD is associated with.  This is partly
    taken from lib/ansible/utils/__init__.py.  It finds the correct
    path to .git/HEAD and reads from that file the branch that HEAD is
    associated with.  In the case of a detached HEAD, this will look
    up the branch in .git/refs/remotes/<remote>/HEAD.
    '''
    if bare:
        repo_path = dest
    else:
        repo_path = os.path.join(dest, '.git')

    # Check if .git is a file. If it is a file, it means that we are in a submodule structure.
    if os.path.isfile(repo_path):
        try:
            gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
            # There is a possibility that the .git file has an absolute path.
            if os.path.isabs(gitdir):
                repo_path = gitdir
            else:
                repo_path = os.path.join(repo_path.split('.git')[0], gitdir)
        except (IOError, AttributeError):
            return ''
    # Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with # the remote HEAD in .git/refs/remotes//HEAD f = open(os.path.join(repo_path, "HEAD")) if is_not_a_branch(git_path, module, dest): f.close() f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD')) branch = f.readline().split('/')[-1].rstrip("\n") f.close() return branch def fetch(git_path, module, repo, dest, version, remote, bare): ''' updates repo from remote sources ''' if bare: (rc, out1, err1) = module.run_command([git_path, 'fetch', remote, '+refs/heads/*:refs/heads/*'], cwd=dest) else: (rc, out1, err1) = module.run_command("%s fetch %s" % (git_path, remote), cwd=dest) if rc != 0: module.fail_json(msg="Failed to download remote objects and refs") if bare: (rc, out2, err2) = module.run_command([git_path, 'fetch', remote, '+refs/tags/*:refs/tags/*'], cwd=dest) else: (rc, out2, err2) = module.run_command("%s fetch --tags %s" % (git_path, remote), cwd=dest) if rc != 0: module.fail_json(msg="Failed to download remote objects and refs") (rc, out3, err3) = submodule_update(git_path, module, dest) return (rc, out1 + out2 + out3, err1 + err2 + err3) def submodule_update(git_path, module, dest): ''' init and update any submodules ''' # skip submodule commands if .gitmodules is not present if not os.path.exists(os.path.join(dest, '.gitmodules')): return (0, '', '') cmd = [ git_path, 'submodule', 'sync' ] (rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest) cmd = [ git_path, 'submodule', 'update', '--init', '--recursive' ] (rc, out, err) = module.run_command(cmd, cwd=dest) if rc != 0: module.fail_json(msg="Failed to init/update submodules") return (rc, out, err) def switch_version(git_path, module, dest, remote, version): ''' once pulled, switch to a particular SHA, tag, or branch ''' cmd = '' if version != 'HEAD': if is_remote_branch(git_path, module, dest, remote, version): if not is_local_branch(git_path, module, dest, version): cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version) else: (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % version) cmd = "%s reset --hard %s/%s" % (git_path, remote, version) else: cmd = "%s checkout --force %s" % (git_path, version) else: branch = get_head_branch(git_path, module, dest, remote) (rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest) if rc != 0: module.fail_json(msg="Failed to checkout branch %s" % branch) cmd = "%s reset --hard %s" % (git_path, remote) (rc, out1, err1) = module.run_command(cmd, cwd=dest) if rc != 0: if version != 'HEAD': module.fail_json(msg="Failed to checkout %s" % (version)) else: module.fail_json(msg="Failed to checkout branch %s" % (branch)) (rc, out2, err2) = submodule_update(git_path, module, dest) return (rc, out1 + out2, err1 + err2) # =========================================== def main(): module = AnsibleModule( argument_spec = dict( dest=dict(required=True), repo=dict(required=True, aliases=['name']), version=dict(default='HEAD'), remote=dict(default='origin'), reference=dict(default=None), force=dict(default='yes', type='bool'), depth=dict(default=None, type='int'), update=dict(default='yes', type='bool'), accept_hostkey=dict(default='no', type='bool'), key_file=dict(default=None, required=False), ssh_opts=dict(default=None, required=False), executable=dict(default=None), bare=dict(default='no', type='bool'), ), supports_check_mode=True ) dest = 
os.path.abspath(os.path.expanduser(module.params['dest'])) repo = module.params['repo'] version = module.params['version'] remote = module.params['remote'] force = module.params['force'] depth = module.params['depth'] update = module.params['update'] bare = module.params['bare'] reference = module.params['reference'] git_path = module.params['executable'] or module.get_bin_path('git', True) key_file = module.params['key_file'] ssh_opts = module.params['ssh_opts'] # create a wrapper script and export # GIT_SSH= as an environment variable # for git to use the wrapper script ssh_wrapper = None if key_file or ssh_opts: ssh_wrapper = write_ssh_wrapper() set_git_ssh(ssh_wrapper, key_file, ssh_opts) # add the git repo's hostkey if module.params['ssh_opts'] is not None: if not "-o StrictHostKeyChecking=no" in module.params['ssh_opts']: add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey']) else: add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey']) if bare: gitconfig = os.path.join(dest, 'config') else: gitconfig = os.path.join(dest, '.git', 'config') rc, out, err, status = (0, None, None, None) # if there is no git configuration, do a clone operation # else pull and switch the version before = None local_mods = False if not os.path.exists(gitconfig): if module.check_mode: remote_head = get_remote_head(git_path, module, dest, version, repo, bare) module.exit_json(changed=True, before=before, after=remote_head) clone(git_path, module, repo, dest, remote, depth, version, bare, reference) elif not update: # Just return having found a repo already in the dest path # this does no checking that the repo is the actual repo # requested. before = get_version(module, git_path, dest) module.exit_json(changed=False, before=before, after=before) else: # else do a pull local_mods = has_local_mods(module, git_path, dest, bare) before = get_version(module, git_path, dest) if local_mods: # failure should happen regardless of check mode if not force: module.fail_json(msg="Local modifications exist in repository (force=no).") # if force and in non-check mode, do a reset if not module.check_mode: reset(git_path, module, dest) # exit if already at desired sha version remote_head = get_remote_head(git_path, module, dest, version, remote, bare) if before == remote_head: if local_mods: module.exit_json(changed=True, before=before, after=remote_head, msg="Local modifications exist") elif is_remote_tag(git_path, module, dest, repo, version): # if the remote is a tag and we have the tag locally, exit early if version in get_tags(git_path, module, dest): module.exit_json(changed=False, before=before, after=remote_head) else: module.exit_json(changed=False, before=before, after=remote_head) if module.check_mode: module.exit_json(changed=True, before=before, after=remote_head) fetch(git_path, module, repo, dest, version, remote, bare) # switch to version specified regardless of whether # we cloned or pulled if not bare: switch_version(git_path, module, dest, remote, version) # determine if we changed anything after = get_version(module, git_path, dest) changed = False if before != after or local_mods: changed = True # cleanup the wrapper script if ssh_wrapper: os.remove(ssh_wrapper) module.exit_json(changed=changed, before=before, after=after) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.known_hosts import * main() ansible-1.5.4/library/source_control/bzr0000664000000000000000000001443712316627017017100 0ustar 
rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, André Paramés # Based on the Git module by Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = u''' --- module: bzr author: André Paramés version_added: "1.1" short_description: Deploy software (or files) from bzr branches description: - Manage I(bzr) branches to deploy files or software. options: name: required: true aliases: [ 'parent' ] description: - SSH or HTTP protocol address of the parent branch. dest: required: true description: - Absolute path of where the branch should be cloned to. version: required: false default: "head" description: - What version of the branch to clone. This can be the bzr revno or revid. force: required: false default: "yes" choices: [ 'yes', 'no' ] description: - If C(yes), any modified files in the working tree will be discarded. executable: required: false default: null version_added: "1.4" description: - Path to bzr executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. ''' EXAMPLES = ''' # Example bzr checkout from Ansible Playbooks - bzr: name=bzr+ssh://foosball.example.org/path/to/branch dest=/srv/checkout version=22 ''' import re class Bzr(object): def __init__(self, module, parent, dest, version, bzr_path): self.module = module self.parent = parent self.dest = dest self.version = version self.bzr_path = bzr_path def _command(self, args_list, cwd=None, **kwargs): (rc, out, err) = self.module.run_command([self.bzr_path] + args_list, cwd=cwd, **kwargs) return (rc, out, err) def get_version(self): '''samples the version of the bzr branch''' cmd = "%s revno" % self.bzr_path rc, stdout, stderr = self.module.run_command(cmd, cwd=self.dest) revno = stdout.strip() return revno def clone(self): '''makes a new bzr branch if it does not already exist''' dest_dirname = os.path.dirname(self.dest) try: os.makedirs(dest_dirname) except: pass if self.version.lower() != 'head': args_list = ["branch", "-r", self.version, self.parent, self.dest] else: args_list = ["branch", self.parent, self.dest] return self._command(args_list, check_rc=True, cwd=dest_dirname) def has_local_mods(self): cmd = "%s status -S" % self.bzr_path rc, stdout, stderr = self.module.run_command(cmd, cwd=self.dest) lines = stdout.splitlines() lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines) return len(lines) > 0 def reset(self, force): ''' Resets the index and working tree to head. Discards any changes to tracked files in the working tree since that commit. 
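        Implemented by running bzr revert.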
''' if not force and self.has_local_mods(): self.module.fail_json(msg="Local modifications exist in branch (force=no).") return self._command(["revert"], check_rc=True, cwd=self.dest) def fetch(self): '''updates branch from remote sources''' if self.version.lower() != 'head': (rc, out, err) = self._command(["pull", "-r", self.version], cwd=self.dest) else: (rc, out, err) = self._command(["pull"], cwd=self.dest) if rc != 0: self.module.fail_json(msg="Failed to pull") return (rc, out, err) def switch_version(self): '''once pulled, switch to a particular revno or revid''' if self.version.lower() != 'head': args_list = ["revert", "-r", self.version] else: args_list = ["revert"] return self._command(args_list, check_rc=True, cwd=self.dest) # =========================================== def main(): module = AnsibleModule( argument_spec = dict( dest=dict(required=True), name=dict(required=True, aliases=['parent']), version=dict(default='head'), force=dict(default='yes', type='bool'), executable=dict(default=None), ) ) dest = os.path.abspath(os.path.expanduser(module.params['dest'])) parent = module.params['name'] version = module.params['version'] force = module.params['force'] bzr_path = module.params['executable'] or module.get_bin_path('bzr', True) bzrconfig = os.path.join(dest, '.bzr', 'branch', 'branch.conf') rc, out, err, status = (0, None, None, None) bzr = Bzr(module, parent, dest, version, bzr_path) # if there is no bzr configuration, do a branch operation # else pull and switch the version before = None local_mods = False if not os.path.exists(bzrconfig): (rc, out, err) = bzr.clone() else: # else do a pull local_mods = bzr.has_local_mods() before = bzr.get_version() (rc, out, err) = bzr.reset(force) if rc != 0: module.fail_json(msg=err) (rc, out, err) = bzr.fetch() if rc != 0: module.fail_json(msg=err) # switch to version specified regardless of whether # we cloned or pulled (rc, out, err) = bzr.switch_version() # determine if we changed anything after = bzr.get_version() changed = False if before != after or local_mods: changed = True module.exit_json(changed=changed, before=before, after=after) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/0000775000000000000000000000000012316627017014633 5ustar rootrootansible-1.5.4/library/system/modprobe0000664000000000000000000000545412316627017016375 0ustar rootroot#!/usr/bin/python #coding: utf-8 -*- # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This software is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this software. If not, see . DOCUMENTATION = ''' --- module: modprobe short_description: Add or remove kernel modules requirements: [] version_added: 1.4 description: - Add or remove kernel modules. options: name: required: true description: - Name of kernel module to manage. state: required: false default: "present" choices: [ present, absent ] description: - Whether the module should be present or absent. 
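notes:
    - Module presence is detected by reading /proc/modules on the target host.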
''' EXAMPLES = ''' # Add the 802.1q module - modprobe: name=8021q state=present ''' def main(): module = AnsibleModule( argument_spec={ 'name': {'required': True}, 'state': {'default': 'present', 'choices': ['present', 'absent']}, }, supports_check_mode=True, ) args = { 'changed': False, 'failed': False, 'name': module.params['name'], 'state': module.params['state'], } # Check if module is present try: modules = open('/proc/modules') present = False for line in modules: if line.startswith(args['name'] + ' '): present = True break modules.close() except IOError, e: module.fail_json(msg=str(e), **args) # Check only; don't modify if module.check_mode: if args['state'] == 'present' and not present: changed = True elif args['state'] == 'absent' and present: changed = True else: changed = False module.exit_json(changed=changed) # Add/remove module as needed if args['state'] == 'present': if not present: rc, _, err = module.run_command(['modprobe', args['name']]) if rc != 0: module.fail_json(msg=err, **args) args['changed'] = True elif args['state'] == 'absent': if present: rc, _, err = module.run_command(['rmmod', args['name']]) if rc != 0: module.fail_json(msg=err, **args) args['changed'] = True module.exit_json(**args) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/lvg0000664000000000000000000002063012316627017015347 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Alexander Bulimov # based on lvol module by Jeroen Hoekx # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- author: Alexander Bulimov module: lvg short_description: Configure LVM volume groups description: - This module creates, removes or resizes volume groups. version_added: "1.1" options: vg: description: - The name of the volume group. required: true pvs: description: - List of comma-separated devices to use as physical devices in this volume group. Required when creating or resizing volume group. required: false pesize: description: - The size of the physical extent in megabytes. Must be a power of 2. default: 4 required: false state: choices: [ "present", "absent" ] default: present description: - Control if the volume group exists. required: false force: choices: [ "yes", "no" ] default: "no" description: - If yes, allows to remove volume group with logical volumes. required: false notes: - module does not modify PE size for already present volume group ''' EXAMPLES = ''' # Create a volume group on top of /dev/sda1 with physical extent size = 32MB. - lvg: vg=vg.services pvs=/dev/sda1 pesize=32 # Create or resize a volume group on top of /dev/sdb1 and /dev/sdc5. # If, for example, we already have VG vg.services on top of /dev/sdb1, # this VG will be extended by /dev/sdc5. Or if vg.services was created on # top of /dev/sda5, we first extend it with /dev/sdb1 and /dev/sdc5, # and then reduce by /dev/sda5. 
- lvg: vg=vg.services pvs=/dev/sdb1,/dev/sdc5

# Remove a volume group with name vg.services.
- lvg: vg=vg.services state=absent
'''

def parse_vgs(data):
    vgs = []
    for line in data.splitlines():
        parts = line.strip().split(';')
        vgs.append({
            'name': parts[0],
            'pv_count': int(parts[1]),
            'lv_count': int(parts[2]),
        })
    return vgs

def parse_pvs(data):
    pvs = []
    for line in data.splitlines():
        parts = line.strip().split(';')
        pvs.append({
            'name': parts[0],
            'vg_name': parts[1],
        })
    return pvs

def main():
    module = AnsibleModule(
        argument_spec = dict(
            vg=dict(required=True),
            pvs=dict(type='list'),
            pesize=dict(type='int', default=4),
            state=dict(choices=["absent", "present"], default='present'),
            force=dict(type='bool', default='no'),
        ),
        supports_check_mode=True,
    )

    vg = module.params['vg']
    state = module.params['state']
    force = module.boolean(module.params['force'])
    pesize = module.params['pesize']

    if module.params['pvs']:
        dev_string = ' '.join(module.params['pvs'])
        dev_list = module.params['pvs']
    elif state == 'present':
        module.fail_json(msg="No physical volumes given.")

    if state=='present':
        ### check given devices
        for test_dev in dev_list:
            if not os.path.exists(test_dev):
                module.fail_json(msg="Device %s not found."%test_dev)

        ### get pv list
        pvs_cmd = module.get_bin_path('pvs', True)
        rc,current_pvs,err = module.run_command("%s --noheadings -o pv_name,vg_name --separator ';'" % pvs_cmd)
        if rc != 0:
            module.fail_json(msg="Failed executing pvs command.",rc=rc, err=err)

        ### check pv for devices
        pvs = parse_pvs(current_pvs)
        used_pvs = [ pv for pv in pvs if pv['name'] in dev_list and pv['vg_name'] and pv['vg_name'] != vg ]
        if used_pvs:
            module.fail_json(msg="Device %s is already in %s volume group."%(used_pvs[0]['name'],used_pvs[0]['vg_name']))

    vgs_cmd = module.get_bin_path('vgs', True)
    rc,current_vgs,err = module.run_command("%s --noheadings -o vg_name,pv_count,lv_count --separator ';'" % vgs_cmd)

    if rc != 0:
        module.fail_json(msg="Failed executing vgs command.",rc=rc, err=err)

    changed = False

    vgs = parse_vgs(current_vgs)

    for test_vg in vgs:
        if test_vg['name'] == vg:
            this_vg = test_vg
            break
    else:
        this_vg = None

    if this_vg is None:
        if state == 'present':
            ### create VG
            if module.check_mode:
                changed = True
            else:
                ### create PV
                pvcreate_cmd = module.get_bin_path('pvcreate', True)
                for current_dev in dev_list:
                    rc,_,err = module.run_command("%s %s" % (pvcreate_cmd,current_dev))
                    if rc == 0:
                        changed = True
                    else:
                        module.fail_json(msg="Creating physical volume '%s' failed" % current_dev, rc=rc, err=err)
                # require the binary to exist, as is done for the other LVM tools
                vgcreate_cmd = module.get_bin_path('vgcreate', True)
                rc,_,err = module.run_command("%s -s %s %s %s" % (vgcreate_cmd, pesize, vg, dev_string))
                if rc == 0:
                    changed = True
                else:
                    module.fail_json(msg="Creating volume group '%s' failed"%vg, rc=rc, err=err)
    else:
        if state == 'absent':
            if module.check_mode:
                module.exit_json(changed=True)
            else:
                if this_vg['lv_count'] == 0 or force:
                    ### remove VG
                    vgremove_cmd = module.get_bin_path('vgremove', True)
                    rc,_,err = module.run_command("%s --force %s" % (vgremove_cmd, vg))
                    if rc == 0:
                        module.exit_json(changed=True)
                    else:
                        module.fail_json(msg="Failed to remove volume group %s"%(vg),rc=rc, err=err)
                else:
                    module.fail_json(msg="Refusing to remove non-empty volume group %s without force=yes"%(vg))

        ### resize VG
        current_devs = [ pv['name'] for pv in pvs if pv['vg_name'] == vg ]
        devs_to_remove = list(set(current_devs) - set(dev_list))
        devs_to_add = list(set(dev_list) - set(current_devs))

        if devs_to_add or devs_to_remove:
            if module.check_mode:
                changed = True
            else:
                if devs_to_add:
                    devs_to_add_string = ' '.join(devs_to_add)
                    ### create PV
                    pvcreate_cmd = module.get_bin_path('pvcreate', True)
                    for current_dev in devs_to_add:
                        rc,_,err = module.run_command("%s %s" % (pvcreate_cmd, current_dev))
                        if rc == 0:
                            changed = True
                        else:
                            module.fail_json(msg="Creating physical volume '%s' failed"%current_dev, rc=rc, err=err)
                    ### add PV to our VG
                    vgextend_cmd = module.get_bin_path('vgextend', True)
                    rc,_,err = module.run_command("%s %s %s" % (vgextend_cmd, vg, devs_to_add_string))
                    if rc == 0:
                        changed = True
                    else:
                        module.fail_json(msg="Unable to extend %s by %s."%(vg, devs_to_add_string),rc=rc,err=err)

                ### remove some PV from our VG
                if devs_to_remove:
                    devs_to_remove_string = ' '.join(devs_to_remove)
                    vgreduce_cmd = module.get_bin_path('vgreduce', True)
                    rc,_,err = module.run_command("%s --force %s %s" % (vgreduce_cmd, vg, devs_to_remove_string))
                    if rc == 0:
                        changed = True
                    else:
                        module.fail_json(msg="Unable to reduce %s by %s."%(vg, devs_to_remove_string),rc=rc,err=err)

    module.exit_json(changed=changed)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/ohai0000664000000000000000000000316012316627017015476 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
#

DOCUMENTATION = '''
---
module: ohai
short_description: Returns inventory data from I(Ohai)
description:
     - Similar to the M(facter) module, this runs the I(Ohai) discovery program
       (U(http://wiki.opscode.com/display/chef/Ohai)) on the remote host and
       returns JSON inventory data. I(Ohai) data is a bit more verbose and
       nested than I(facter).
version_added: "0.6"
options: {}
notes: []
requirements: [ "ohai" ]
author: Michael DeHaan
'''

EXAMPLES = '''
# Retrieve (ohai) data from all Web servers and store in one file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
'''

def main():
    module = AnsibleModule(
        argument_spec = dict()
    )
    cmd = ["/usr/bin/env", "ohai"]
    rc, out, err = module.run_command(cmd, check_rc=True)
    module.exit_json(**json.loads(out))

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/kernel_blacklist0000664000000000000000000000732412316627017020074 0ustar rootroot#!/usr/bin/python
# encoding: utf-8 -*-

# (c) 2013, Matthias Vogelgesang
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
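# --- Illustrative sketch (editorial, not part of the module below) ----------
# The module that follows treats a line in the blacklist file as a directive
# only if it reads "blacklist <name>" and is not a comment. The helper here is
# deliberately a little stricter than the pattern the module compiles
# (\s+ plus re.escape instead of \s*); the sample file content is made up.

import re

def is_blacklisted(text, name):
    pattern = re.compile(r'^blacklist\s+%s$' % re.escape(name))
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith('#'):
            continue  # comment lines never count as directives
        if pattern.match(stripped):
            return True
    return False

sample = "# GPU drivers\nblacklist nouveau\n"
assert is_blacklisted(sample, 'nouveau')
assert not is_blacklisted(sample, 'radeon')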
import os import re DOCUMENTATION = ''' --- module: kernel_blacklist author: Matthias Vogelgesang version_added: 1.4 short_description: Blacklist kernel modules description: - Add or remove kernel modules from blacklist. options: name: required: true description: - Name of kernel module to black- or whitelist. state: required: false default: "present" choices: [ present, absent ] description: - Whether the module should be present in the blacklist or absent. blacklist_file: required: false description: - If specified, use this blacklist file instead of C(/etc/modprobe.d/blacklist-ansible.conf). default: null requirements: [] ''' EXAMPLES = ''' # Blacklist the nouveau driver module - kernel_blacklist: name=nouveau state=present ''' class Blacklist(object): def __init__(self, module, filename): if not os.path.exists(filename): open(filename, 'a').close() self.filename = filename self.module = module def get_pattern(self): return '^blacklist\s*' + self.module + '$' def readlines(self): f = open(self.filename, 'r') lines = f.readlines() f.close() return lines def module_listed(self): lines = self.readlines() pattern = self.get_pattern() for line in lines: stripped = line.strip() if stripped.startswith('#'): continue if re.match(pattern, stripped): return True return False def remove_module(self): lines = self.readlines() pattern = self.get_pattern() f = open(self.filename, 'w') for line in lines: if not re.match(pattern, line.strip()): f.write(line) f.close() def add_module(self): f = open(self.filename, 'a') f.write('blacklist %s\n' % self.module) def main(): module = AnsibleModule( argument_spec=dict( name=dict(required=True), state=dict(required=False, choices=['present', 'absent'], default='present'), blacklist_file=dict(required=False, default=None) ), supports_check_mode=False, ) args = dict(changed=False, failed=False, name=module.params['name'], state=module.params['state']) filename = '/etc/modprobe.d/blacklist-ansible.conf' if module.params['blacklist_file']: filename = module.params['blacklist_file'] blacklist = Blacklist(args['name'], filename) if blacklist.module_listed(): if args['state'] == 'absent': blacklist.remove_module() args['changed'] = True else: if args['state'] == 'present': blacklist.add_module() args['changed'] = True module.exit_json(**args) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/lvol0000664000000000000000000001724312316627017015541 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Jeroen Hoekx , Alexander Bulimov # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- author: Jeroen Hoekx module: lvol short_description: Configure LVM logical volumes description: - This module creates, removes or resizes logical volumes. version_added: "1.1" options: vg: description: - The volume group this logical volume is part of. required: true lv: description: - The name of the logical volume. 
required: true size: description: - The size of the logical volume, according to lvcreate(8) --size, by default in megabytes or optionally with one of [bBsSkKmMgGtTpPeE] units; or according to lvcreate(8) --extents as a percentage of [VG|PVS|FREE]; resizing is not supported with percentages. state: choices: [ "present", "absent" ] default: present description: - Control if the logical volume exists. required: false force: version_added: "1.5" choices: [ "yes", "no" ] default: "no" description: - Shrink or remove operations of volumes requires this switch. Ensures that that filesystems get never corrupted/destroyed by mistake. required: false notes: - Filesystems on top of the volume are not resized. ''' EXAMPLES = ''' # Create a logical volume of 512m. - lvol: vg=firefly lv=test size=512 # Create a logical volume of 512g. - lvol: vg=firefly lv=test size=512g # Create a logical volume the size of all remaining space in the volume group - lvol: vg=firefly lv=test size=100%FREE # Extend the logical volume to 1024m. - lvol: vg=firefly lv=test size=1024 # Reduce the logical volume to 512m - lvol: vg=firefly lv=test size=512 force=yes # Remove the logical volume. - lvol: vg=firefly lv=test state=absent force=yes ''' import re decimal_point = re.compile(r"(\.|,)") def parse_lvs(data): lvs = [] for line in data.splitlines(): parts = line.strip().split(';') lvs.append({ 'name': parts[0], 'size': int(decimal_point.split(parts[1])[0]), }) return lvs def main(): module = AnsibleModule( argument_spec=dict( vg=dict(required=True), lv=dict(required=True), size=dict(), state=dict(choices=["absent", "present"], default='present'), force=dict(type='bool', default='no'), ), supports_check_mode=True, ) vg = module.params['vg'] lv = module.params['lv'] size = module.params['size'] state = module.params['state'] force = module.boolean(module.params['force']) size_opt = 'L' size_unit = 'm' if size: # LVCREATE(8) -l --extents option with percentage if '%' in size: size_parts = size.split('%', 1) size_percent = int(size_parts[0]) if size_percent > 100: module.fail_json(msg="Size percentage cannot be larger than 100%") size_whole = size_parts[1] if size_whole == 'ORIGIN': module.fail_json(msg="Snapshot Volumes are not supported") elif size_whole not in ['VG', 'PVS', 'FREE']: module.fail_json(msg="Specify extents as a percentage of VG|PVS|FREE") size_opt = 'l' size_unit = '' # LVCREATE(8) -L --size option unit elif size[-1].isalpha(): if size[-1] in 'bBsSkKmMgGtTpPeE': size_unit = size[-1] if size[0:-1].isdigit(): size = int(size[0:-1]) else: module.fail_json(msg="Bad size specification for unit %s" % size_unit) size_opt = 'L' else: module.fail_json(msg="Size unit should be one of [bBsSkKmMgGtTpPeE]") # when no unit, megabytes by default elif size.isdigit(): size = int(size) else: module.fail_json(msg="Bad size specification") if size_opt == 'l': unit = 'm' else: unit = size_unit rc, current_lvs, err = module.run_command( "lvs --noheadings -o lv_name,size --units %s --separator ';' %s" % (unit, vg)) if rc != 0: if state == 'absent': module.exit_json(changed=False, stdout="Volume group %s does not exist." % vg, stderr=False) else: module.fail_json(msg="Volume group %s does not exist." 
% vg, rc=rc, err=err) changed = False lvs = parse_lvs(current_lvs) for test_lv in lvs: if test_lv['name'] == lv: this_lv = test_lv break else: this_lv = None if state == 'present' and not size: if this_lv is None: module.fail_json(msg="No size given.") else: module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size']) msg = '' if this_lv is None: if state == 'present': ### create LV if module.check_mode: changed = True else: rc, _, err = module.run_command("lvcreate -n %s -%s %s%s %s" % (lv, size_opt, size, size_unit, vg)) if rc == 0: changed = True else: module.fail_json(msg="Creating logical volume '%s' failed" % lv, rc=rc, err=err) else: if state == 'absent': ### remove LV if module.check_mode: module.exit_json(changed=True) if not force: module.fail_json(msg="Sorry, no removal of logical volume %s without force=yes." % (this_lv['name'])) rc, _, err = module.run_command("lvremove --force %s/%s" % (vg, this_lv['name'])) if rc == 0: module.exit_json(changed=True) else: module.fail_json(msg="Failed to remove logical volume %s" % (lv), rc=rc, err=err) elif size_opt == 'l': module.exit_json(changed=False, msg="Resizing extents with percentage not supported.") else: ### resize LV tool = None if size > this_lv['size']: tool = 'lvextend' elif size < this_lv['size']: if not force: module.fail_json(msg="Sorry, no shrinking of %s without force=yes." % (this_lv['name'])) tool = 'lvreduce --force' if tool: if module.check_mode: changed = True else: rc, _, err = module.run_command("%s -%s %s%s %s/%s" % (tool, size_opt, size, size_unit, vg, this_lv['name'])) if rc == 0: changed = True elif "matches existing size" in err: module.exit_json(changed=False, vg=vg, lv=this_lv['name'], size=this_lv['size']) else: module.fail_json(msg="Unable to resize %s to %s%s" % (lv, size, size_unit), rc=rc, err=err) module.exit_json(changed=changed, msg=msg) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/service0000664000000000000000000013270512316627017016226 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: service author: Michael DeHaan version_added: "0.1" short_description: Manage services. description: - Controls services on remote hosts. options: name: required: true description: - Name of the service. state: required: false choices: [ started, stopped, restarted, reloaded ] description: - C(started)/C(stopped) are idempotent actions that will not run commands unless necessary. C(restarted) will always bounce the service. C(reloaded) will always reload. At least one of state and enabled are required. sleep: required: false version_added: "1.3" description: - If the service is being C(restarted) then sleep this many seconds between the stop and start command. This helps to workaround badly behaving init scripts that exit immediately after signaling a process to stop. 
pattern: required: false version_added: "0.7" description: - If the service does not respond to the status command, name a substring to look for as would be found in the output of the I(ps) command as a stand-in for a status result. If the string is found, the service will be assumed to be running. enabled: required: false choices: [ "yes", "no" ] description: - Whether the service should start on boot. At least one of state and enabled are required. runlevel: required: false default: 'default' description: - "For OpenRC init scripts (ex: Gentoo) only. The runlevel that this service belongs to." arguments: description: - Additional arguments provided on the command line aliases: [ 'args' ] ''' EXAMPLES = ''' # Example action to start service httpd, if not running - service: name=httpd state=started # Example action to stop service httpd, if running - service: name=httpd state=stopped # Example action to restart service httpd, in all cases - service: name=httpd state=restarted # Example action to reload service httpd, in all cases - service: name=httpd state=reloaded # Example action to enable service httpd, and not touch the running state - service: name=httpd enabled=yes # Example action to start service foo, based on running process /usr/bin/foo - service: name=foo pattern=/usr/bin/foo state=started # Example action to restart network service for interface eth0 - service: name=network state=restarted args=eth0 ''' import platform import os import re import tempfile import shlex import select import time import string class Service(object): """ This is the generic Service manipulation class that is subclassed based on platform. A subclass should override the following action methods:- - get_service_tools - service_enable - get_service_status - service_control All subclasses MUST define platform and distribution (which may be None). """ platform = 'Generic' distribution = None def __new__(cls, *args, **kwargs): return load_platform_subclass(Service, args, kwargs) def __init__(self, module): self.module = module self.name = module.params['name'] self.state = module.params['state'] self.sleep = module.params['sleep'] self.pattern = module.params['pattern'] self.enable = module.params['enabled'] self.runlevel = module.params['runlevel'] self.changed = False self.running = None self.crashed = None self.action = None self.svc_cmd = None self.svc_initscript = None self.svc_initctl = None self.enable_cmd = None self.arguments = module.params.get('arguments', '') self.rcconf_file = None self.rcconf_key = None self.rcconf_value = None self.svc_change = False # select whether we dump additional debug info through syslog self.syslogging = False # =========================================== # Platform specific methods (must be replaced by subclass). def get_service_tools(self): self.module.fail_json(msg="get_service_tools not implemented on target platform") def service_enable(self): self.module.fail_json(msg="service_enable not implemented on target platform") def get_service_status(self): self.module.fail_json(msg="get_service_status not implemented on target platform") def service_control(self): self.module.fail_json(msg="service_control not implemented on target platform") # =========================================== # Generic methods that should be used on all platforms. 
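    # Overview of execute_command(): with daemonize=False it simply delegates
    # to module.run_command(); with daemonize=True it double-forks around
    # setsid() so the command outlives this module (needed when the service
    # being bounced is the one our own connection rides on, e.g. sshd), and
    # the detached child reports [returncode, stdout, stderr] back to the
    # parent as JSON over a pipe.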
def execute_command(self, cmd, daemonize=False): if self.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Command %s, daemonize %r' % (cmd, daemonize)) # Most things don't need to be daemonized if not daemonize: return self.module.run_command(cmd) # This is complex because daemonization is hard for people. # What we do is daemonize a part of this module, the daemon runs the # command, picks up the return code and output, and returns it to the # main process. pipe = os.pipe() pid = os.fork() if pid == 0: os.close(pipe[0]) # Set stdin/stdout/stderr to /dev/null fd = os.open(os.devnull, os.O_RDWR) if fd != 0: os.dup2(fd, 0) if fd != 1: os.dup2(fd, 1) if fd != 2: os.dup2(fd, 2) if fd not in (0, 1, 2): os.close(fd) # Make us a daemon. Yes, that's all it takes. pid = os.fork() if pid > 0: os._exit(0) os.setsid() os.chdir("/") pid = os.fork() if pid > 0: os._exit(0) # Start the command if isinstance(cmd, basestring): cmd = shlex.split(cmd) p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=lambda: os.close(pipe[1])) stdout = "" stderr = "" fds = [p.stdout, p.stderr] # Wait for all output, or until the main process is dead and its output is done. while fds: rfd, wfd, efd = select.select(fds, [], fds, 1) if not (rfd + wfd + efd) and p.poll() is not None: break if p.stdout in rfd: dat = os.read(p.stdout.fileno(), 4096) if not dat: fds.remove(p.stdout) stdout += dat if p.stderr in rfd: dat = os.read(p.stderr.fileno(), 4096) if not dat: fds.remove(p.stderr) stderr += dat p.wait() # Return a JSON blob to parent os.write(pipe[1], json.dumps([p.returncode, stdout, stderr])) os.close(pipe[1]) os._exit(0) elif pid == -1: self.module.fail_json(msg="unable to fork") else: os.close(pipe[1]) os.waitpid(pid, 0) # Wait for data from daemon process and process it. 
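            # The daemonized child wrote a single JSON array
            # [returncode, stdout, stderr] to the pipe before exiting, so read
            # until EOF; e.g. a clean start typically comes back as [0, "...", ""].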
data = "" while True: rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]]) if pipe[0] in rfd: dat = os.read(pipe[0], 4096) if not dat: break data += dat return json.loads(data) def check_ps(self): # Set ps flags if platform.system() == 'SunOS': psflags = '-ef' else: psflags = 'auxww' # Find ps binary psbin = self.module.get_bin_path('ps', True) (rc, psout, pserr) = self.execute_command('%s %s' % (psbin, psflags)) # If rc is 0, set running as appropriate if rc == 0: self.running = False lines = psout.split("\n") for line in lines: if self.pattern in line and not "pattern=" in line: # so as to not confuse ./hacking/test-module self.running = True break def check_service_changed(self): if self.state and self.running is None: self.module.fail_json(msg="failed determining service state, possible typo of service name?") # Find out if state has changed if not self.running and self.state in ["started", "running", "reloaded"]: self.svc_change = True elif self.running and self.state in ["stopped","reloaded"]: self.svc_change = True elif self.state == "restarted": self.svc_change = True if self.module.check_mode and self.svc_change: self.module.exit_json(changed=True, msg='service state changed') def modify_service_state(self): # Only do something if state will change if self.svc_change: # Control service if self.state in ['started', 'running']: self.action = "start" elif not self.running and self.state == 'reloaded': self.action = "start" elif self.state == 'stopped': self.action = "stop" elif self.state == 'reloaded': self.action = "reload" elif self.state == 'restarted': self.action = "restart" if self.module.check_mode: self.module.exit_json(changed=True, msg='changing service state') return self.service_control() else: # If nothing needs to change just say all is well rc = 0 err = '' out = '' return rc, out, err def service_enable_rcconf(self): if self.rcconf_file is None or self.rcconf_key is None or self.rcconf_value is None: self.module.fail_json(msg="service_enable_rcconf() requires rcconf_file, rcconf_key and rcconf_value") changed = None entry = '%s="%s"\n' % (self.rcconf_key, self.rcconf_value) RCFILE = open(self.rcconf_file, "r") new_rc_conf = [] # Build a list containing the possibly modified file. for rcline in RCFILE: # Parse line removing whitespaces, quotes, etc. rcarray = shlex.split(rcline, comments=True) if len(rcarray) >= 1 and '=' in rcarray[0]: (key, value) = rcarray[0].split("=", 1) if key == self.rcconf_key: if value.upper() == self.rcconf_value: # Since the proper entry already exists we can stop iterating. changed = False break else: # We found the key but the value is wrong, replace with new entry. rcline = entry changed = True # Add line to the list. new_rc_conf.append(rcline) # We are done with reading the current rc.conf, close it. RCFILE.close() # If we did not see any trace of our entry we need to add it. if changed is None: new_rc_conf.append(entry) changed = True if changed is True: if self.module.check_mode: self.module.exit_json(changed=True, msg="changing service enablement") # Create a temporary file next to the current rc.conf (so we stay on the same filesystem). # This way the replacement operation is atomic. rcconf_dir = os.path.dirname(self.rcconf_file) rcconf_base = os.path.basename(self.rcconf_file) (TMP_RCCONF, tmp_rcconf_file) = tempfile.mkstemp(dir=rcconf_dir, prefix="%s-" % rcconf_base) # Write out the contents of the list into our temporary file. for rcline in new_rc_conf: os.write(TMP_RCCONF, rcline) # Close temporary file. 
os.close(TMP_RCCONF) # Replace previous rc.conf. self.module.atomic_move(tmp_rcconf_file, self.rcconf_file) # =========================================== # Subclass: Linux class LinuxService(Service): """ This is the Linux Service manipulation class - it is currently supporting a mixture of binaries and init scripts for controlling services started at boot, as well as for controlling the current state. """ platform = 'Linux' distribution = None def get_service_tools(self): paths = [ '/sbin', '/usr/sbin', '/bin', '/usr/bin' ] binaries = [ 'service', 'chkconfig', 'update-rc.d', 'rc-service', 'rc-update', 'initctl', 'systemctl', 'start', 'stop', 'restart' ] initpaths = [ '/etc/init.d' ] location = dict() for binary in binaries: location[binary] = None for binary in binaries: location[binary] = self.module.get_bin_path(binary) def check_systemd(name): # verify service is managed by systemd if not location.get('systemctl', None): return False # default to .service if the unit type is not specified if name.find('.') > 0: unit_name, unit_type = name.rsplit('.', 1) if unit_type not in ("service", "socket", "device", "mount", "automount", "swap", "target", "path", "timer", "snapshot"): name = "%s.service" % name else: name = "%s.service" % name rc, out, err = self.execute_command("%s list-unit-files" % (location['systemctl'])) # adjust the service name to account for template service unit files index = name.find('@') if index != -1: name = name[:index+1] self.__systemd_unit = None for line in out.splitlines(): if line.startswith(name): self.__systemd_unit = name return True return False # Locate a tool for enable options if location.get('chkconfig', None) and os.path.exists("/etc/init.d/%s" % self.name): if check_systemd(self.name): # service is managed by systemd self.enable_cmd = location['systemctl'] else: # we are using a standard SysV service self.enable_cmd = location['chkconfig'] elif location.get('update-rc.d', None): if check_systemd(self.name): # service is managed by systemd self.enable_cmd = location['systemctl'] elif location['initctl'] and os.path.exists("/etc/init/%s.conf" % self.name): # service is managed by upstart self.enable_cmd = location['initctl'] elif location['update-rc.d'] and os.path.exists("/etc/init.d/%s" % self.name): # service is managed by with SysV init scripts, but with update-rc.d self.enable_cmd = location['update-rc.d'] else: self.module.fail_json(msg="service not found: %s" % self.name) elif location.get('rc-service', None) and not location.get('systemctl', None): # service is managed by OpenRC self.svc_cmd = location['rc-service'] self.enable_cmd = location['rc-update'] return elif check_systemd(self.name): # service is managed by systemd self.enable_cmd = location['systemctl'] # Locate a tool for runtime service management (start, stop etc.) 
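        # Resolution order, as implemented below: the `service` wrapper when an
        # /etc/init.d script exists; upstart's bare start/stop commands when an
        # /etc/init/<name>.conf job exists; a raw init script from initpaths
        # otherwise; and finally systemctl if nothing else matched.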
if location.get('service', None) and os.path.exists("/etc/init.d/%s" % self.name): # SysV init script self.svc_cmd = location['service'] elif location.get('start', None) and os.path.exists("/etc/init/%s.conf" % self.name): # upstart -- rather than being managed by one command, start/stop/restart are actual commands self.svc_cmd = '' else: # still a SysV init script, but /sbin/service isn't installed for initdir in initpaths: initscript = "%s/%s" % (initdir,self.name) if os.path.isfile(initscript): self.svc_initscript = initscript # couldn't find anything yet, assume systemd if self.svc_cmd is None and self.svc_initscript is None: if location.get('systemctl'): self.svc_cmd = location['systemctl'] if self.svc_cmd is None and not self.svc_initscript: self.module.fail_json(msg='cannot find \'service\' binary or init script for service, possible typo in service name?, aborting') if location.get('initctl', None): self.svc_initctl = location['initctl'] def get_service_status(self): self.action = "status" rc, status_stdout, status_stderr = self.service_control() # if we have decided the service is managed by upstart, we check for some additional output... if self.svc_initctl and self.running is None: # check the job status by upstart response initctl_rc, initctl_status_stdout, initctl_status_stderr = self.execute_command("%s status %s" % (self.svc_initctl, self.name)) if initctl_status_stdout.find("stop/waiting") != -1: self.running = False elif initctl_status_stdout.find("start/running") != -1: self.running = True if self.svc_cmd and self.svc_cmd.endswith("rc-service") and self.running is None: openrc_rc, openrc_status_stdout, openrc_status_stderr = self.execute_command("%s %s status" % (self.svc_cmd, self.name)) self.running = "started" in openrc_status_stdout self.crashed = "crashed" in openrc_status_stderr # if the job status is still not known check it by response code # For reference, see: # http://refspecs.linuxbase.org/LSB_4.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html if self.running is None: if rc in [1, 2, 3, 4, 69]: self.running = False elif rc == 0: self.running = True # if the job status is still not known check it by status output keywords if self.running is None: # first tranform the status output that could irritate keyword matching cleanout = status_stdout.lower().replace(self.name.lower(), '') if "stop" in cleanout: self.running = False elif "run" in cleanout and "not" in cleanout: self.running = False elif "run" in cleanout and "not" not in cleanout: self.running = True elif "start" in cleanout and "not" not in cleanout: self.running = True elif 'could not access pid file' in cleanout: self.running = False elif 'is dead and pid file exists' in cleanout: self.running = False elif 'dead but subsys locked' in cleanout: self.running = False elif 'dead but pid file exists' in cleanout: self.running = False # if the job status is still not known check it by special conditions if self.running is None: if self.name == 'iptables' and status_stdout.find("ACCEPT") != -1: # iptables status command output is lame # TODO: lookup if we can use a return code for this instead? 
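                # ('service iptables status' dumps the loaded ruleset, e.g.
                # "Chain INPUT (policy ACCEPT)", rather than a daemon status
                # line, so the ACCEPT probe above is the best signal available.)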
self.running = True return self.running def service_enable(self): if self.enable_cmd is None: self.module.fail_json(msg='service name not recognized') # FIXME: we use chkconfig or systemctl # to decide whether to run the command here but need something # similar for upstart if self.enable_cmd.endswith("initctl"): def write_to_override_file(file_name, file_contents, ): override_file = open(file_name, 'w') override_file.write(file_contents) override_file.close() initpath = '/etc/init' manreg = re.compile('^manual\s*$', re.M | re.I) conf_file_name = "%s/%s.conf" % (initpath, self.name) override_file_name = "%s/%s.override" % (initpath, self.name) # Check to see if files contain the manual line in .conf and fail if True if manreg.search(open(conf_file_name).read()): self.module.fail_json(msg="manual stanza not supported in a .conf file") if os.path.exists(override_file_name): override_file_contents = open(override_file_name).read() # Remove manual stanza if present and service enabled if self.enable and manreg.search(override_file_contents): write_to_override_file(override_file_name, manreg.sub('', override_file_contents)) # Add manual stanza if not present and service disabled elif not (self.enable) and not (manreg.search(override_file_contents)): write_to_override_file(override_file_name, override_file_contents + '\nmanual\n') else: return # Add file with manual stanza if service disabled elif not (self.enable): write_to_override_file(override_file_name, 'manual\n') else: return if self.enable_cmd.endswith("chkconfig"): (rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name)) if 'chkconfig --add %s' % self.name in err: self.execute_command("%s --add %s" % (self.enable_cmd, self.name)) (rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name)) if not self.name in out: self.module.fail_json(msg="unknown service name") state = out.split()[-1] if self.enable and ( "3:on" in out and "5:on" in out ): return elif not self.enable and ( "3:off" in out and "5:off" in out ): return if self.enable_cmd.endswith("systemctl"): (rc, out, err) = self.execute_command("%s show %s" % (self.enable_cmd, self.__systemd_unit)) d = dict(line.split('=', 1) for line in out.splitlines()) if "UnitFileState" in d: if self.enable and d["UnitFileState"] == "enabled": return elif not self.enable and d["UnitFileState"] == "disabled": return elif not self.enable: return if self.enable_cmd.endswith("rc-update"): (rc, out, err) = self.execute_command("%s show" % self.enable_cmd) for line in out.splitlines(): service_name, runlevels = line.split('|') service_name = service_name.strip() if service_name != self.name: continue runlevels = re.split(r'\s+', runlevels) # service already enabled for the runlevel if self.enable and self.runlevel in runlevels: return # service already disabled for the runlevel elif not self.enable and self.runlevel not in runlevels: return break else: # service already disabled altogether if not self.enable: return if self.enable_cmd.endswith("update-rc.d"): if self.enable: action = 'enable' else: action = 'disable' (rc, out, err) = self.execute_command("%s -n %s %s" \ % (self.enable_cmd, self.name, action)) self.changed = False for line in out.splitlines(): if line.startswith('rename'): self.changed = True break elif self.enable and line.find('do not exist') != -1: self.changed = True break elif not self.enable and line.find('already exist') != -1: self.changed = True break # Debian compatibility for line in err.splitlines(): if self.enable and 
line.find('no runlevel symlinks to modify') != -1: self.changed = True break if self.module.check_mode: self.module.exit_json(changed=self.changed) if not self.changed: return if self.enable: # make sure the init.d symlinks are created # otherwise enable might not work (rc, out, err) = self.execute_command("%s %s defaults" \ % (self.enable_cmd, self.name)) if rc != 0: return (rc, out, err) return self.execute_command("%s %s enable" % (self.enable_cmd, self.name)) else: return self.execute_command("%s -f %s remove" % (self.enable_cmd, self.name)) # we change argument depending on real binary used: # - update-rc.d and systemctl wants enable/disable # - chkconfig wants on/off # - rc-update wants add/delete # also, rc-update and systemctl needs the argument order reversed if self.enable: on_off = "on" enable_disable = "enable" add_delete = "add" else: on_off = "off" enable_disable = "disable" add_delete = "delete" if self.enable_cmd.endswith("rc-update"): args = (self.enable_cmd, add_delete, self.name + " " + self.runlevel) elif self.enable_cmd.endswith("systemctl"): args = (self.enable_cmd, enable_disable, self.__systemd_unit) else: args = (self.enable_cmd, self.name, on_off) self.changed = True if self.module.check_mode and self.changed: self.module.exit_json(changed=True) return self.execute_command("%s %s %s" % args) def service_control(self): # Decide what command to run svc_cmd = '' arguments = self.arguments if self.svc_cmd: if not self.svc_cmd.endswith("systemctl"): # SysV and OpenRC take the form svc_cmd = "%s %s" % (self.svc_cmd, self.name) else: # systemd commands take the form svc_cmd = self.svc_cmd arguments = "%s %s" % (self.__systemd_unit, arguments) elif self.svc_initscript: # upstart svc_cmd = "%s" % self.svc_initscript # In OpenRC, if a service crashed, we need to reset its status to # stopped with the zap command, before we can start it back. if self.svc_cmd and self.svc_cmd.endswith('rc-service') and self.action == 'start' and self.crashed: self.execute_command("%s zap" % svc_cmd, daemonize=True) if self.action is not "restart": if svc_cmd != '': # upstart or systemd or OpenRC rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True) else: # SysV rc_state, stdout, stderr = self.execute_command("%s %s %s" % (self.action, self.name, arguments), daemonize=True) elif self.svc_cmd and self.svc_cmd.endswith('rc-service'): # All services in OpenRC support restart. rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True) else: # In other systems, not all services support restart. Do it the hard way. 
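            # i.e. run an explicit stop, optionally sleep (the `sleep` param),
            # then start, and merge the two results below -- a failed stop is
            # forgiven as long as the subsequent start succeeds.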
if svc_cmd != '': # upstart or systemd rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % (svc_cmd, 'stop', arguments), daemonize=True) else: # SysV rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % ('stop', self.name, arguments), daemonize=True) if self.sleep: time.sleep(self.sleep) if svc_cmd != '': # upstart or systemd rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % (svc_cmd, 'start', arguments), daemonize=True) else: # SysV rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % ('start', self.name, arguments), daemonize=True) # merge return information if rc1 != 0 and rc2 == 0: rc_state = rc2 stdout = stdout2 stderr = stderr2 else: rc_state = rc1 + rc2 stdout = stdout1 + stdout2 stderr = stderr1 + stderr2 return(rc_state, stdout, stderr) # =========================================== # Subclass: FreeBSD class FreeBsdService(Service): """ This is the FreeBSD Service manipulation class - it uses the /etc/rc.conf file for controlling services started at boot and the 'service' binary to check status and perform direct service manipulation. """ platform = 'FreeBSD' distribution = None def get_service_tools(self): self.svc_cmd = self.module.get_bin_path('service', True) if not self.svc_cmd: self.module.fail_json(msg='unable to find service binary') def get_service_status(self): rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'onestatus', self.arguments)) if rc == 1: self.running = False elif rc == 0: self.running = True def service_enable(self): if self.enable: self.rcconf_value = "YES" else: self.rcconf_value = "NO" rcfiles = [ '/etc/rc.conf','/etc/rc.conf.local', '/usr/local/etc/rc.conf' ] for rcfile in rcfiles: if os.path.isfile(rcfile): self.rcconf_file = rcfile rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'rcvar', self.arguments)) cmd = "%s %s %s %s" % (self.svc_cmd, self.name, 'rcvar', self.arguments) rcvars = shlex.split(stdout, comments=True) if not rcvars: self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr) # In rare cases, i.e. sendmail, rcvar can return several key=value pairs # Usually there is just one, however. In other rare cases, i.e. uwsgi, # rcvar can return extra uncommented data that is not at all related to # the rcvar. We will just take the first key=value pair we come across # and hope for the best. for rcvar in rcvars: if '=' in rcvar: self.rcconf_key = rcvar.split('=')[0] break if self.rcconf_key is None: self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr) return self.service_enable_rcconf() def service_control(self): if self.action is "start": self.action = "onestart" if self.action is "stop": self.action = "onestop" if self.action is "reload": self.action = "onereload" return self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, self.action, self.arguments)) # =========================================== # Subclass: OpenBSD class OpenBsdService(Service): """ This is the OpenBSD Service manipulation class - it uses /etc/rc.d for service control. Enabling a service is currently not supported because the _flags variable is not boolean, you should supply a rc.conf.local file in some other way. 
""" platform = 'OpenBSD' distribution = None def get_service_tools(self): rcdir = '/etc/rc.d' rc_script = "%s/%s" % (rcdir, self.name) if os.path.isfile(rc_script): self.svc_cmd = rc_script if not self.svc_cmd: self.module.fail_json(msg='unable to find rc.d script') def get_service_status(self): rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'check')) if rc == 1: self.running = False elif rc == 0: self.running = True def service_control(self): return self.execute_command("%s %s" % (self.svc_cmd, self.action)) # =========================================== # Subclass: NetBSD class NetBsdService(Service): """ This is the NetBSD Service manipulation class - it uses the /etc/rc.conf file for controlling services started at boot, check status and perform direct service manipulation. Init scripts in /etc/rcd are used for controlling services (start/stop) as well as for controlling the current state. """ platform = 'NetBSD' distribution = None def get_service_tools(self): initpaths = [ '/etc/rc.d' ] # better: $rc_directories - how to get in here? Run: sh -c '. /etc/rc.conf ; echo $rc_directories' for initdir in initpaths: initscript = "%s/%s" % (initdir,self.name) if os.path.isfile(initscript): self.svc_initscript = initscript if not self.svc_initscript: self.module.fail_json(msg='unable to find rc.d script') def service_enable(self): if self.enable: self.rcconf_value = "YES" else: self.rcconf_value = "NO" rcfiles = [ '/etc/rc.conf' ] # Overkill? for rcfile in rcfiles: if os.path.isfile(rcfile): self.rcconf_file = rcfile self.rcconf_key = "%s" % string.replace(self.name,"-","_") return self.service_enable_rcconf() def get_service_status(self): self.svc_cmd = "%s" % self.svc_initscript rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'onestatus')) if rc == 1: self.running = False elif rc == 0: self.running = True def service_control(self): if self.action is "start": self.action = "onestart" if self.action is "stop": self.action = "onestop" self.svc_cmd = "%s" % self.svc_initscript return self.execute_command("%s %s" % (self.svc_cmd, self.action), daemonize=True) # =========================================== # Subclass: SunOS class SunOSService(Service): """ This is the SunOS Service manipulation class - it uses the svcadm command for controlling services, and svcs command for checking status. It also tries to be smart about taking the service out of maintenance state if necessary. """ platform = 'SunOS' distribution = None def get_service_tools(self): self.svcs_cmd = self.module.get_bin_path('svcs', True) if not self.svcs_cmd: self.module.fail_json(msg='unable to find svcs binary') self.svcadm_cmd = self.module.get_bin_path('svcadm', True) if not self.svcadm_cmd: self.module.fail_json(msg='unable to find svcadm binary') def get_service_status(self): status = self.get_sunos_svcs_status() # Only 'online' is considered properly running. Everything else is off # or has some sort of problem. 
if status == 'online': self.running = True else: self.running = False def get_sunos_svcs_status(self): rc, stdout, stderr = self.execute_command("%s %s" % (self.svcs_cmd, self.name)) if rc == 1: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) lines = stdout.rstrip("\n").split("\n") status = lines[-1].split(" ")[0] # status is one of: online, offline, degraded, disabled, maintenance, uninitialized # see man svcs(1) return status def service_enable(self): # Get current service enablement status rc, stdout, stderr = self.execute_command("%s -l %s" % (self.svcs_cmd, self.name)) if rc != 0: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) enabled = False temporary = False # look for enabled line, which could be one of: # enabled true (temporary) # enabled false (temporary) # enabled true # enabled false for line in stdout.split("\n"): if line.find("enabled") == 0: if line.find("true") != -1: enabled = True if line.find("temporary") != -1: temporary = True startup_enabled = (enabled and not temporary) or (not enabled and temporary) if self.enable and startup_enabled: return elif (not self.enable) and (not startup_enabled): return # Mark service as started or stopped (this will have the side effect of # actually stopping or starting the service) if self.enable: subcmd = "enable -rs" else: subcmd = "disable -s" rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name)) if rc != 0: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) self.changed = True def service_control(self): status = self.get_sunos_svcs_status() # if starting or reloading, clear maintenace states if self.action in ['start', 'reload', 'restart'] and status in ['maintenance', 'degraded']: rc, stdout, stderr = self.execute_command("%s clear %s" % (self.svcadm_cmd, self.name)) if rc != 0: return rc, stdout, stderr status = self.get_sunos_svcs_status() if status in ['maintenance', 'degraded']: self.module.fail_json(msg="Failed to bring service out of %s status." % status) if self.action == 'start': subcmd = "enable -rst" elif self.action == 'stop': subcmd = "disable -st" elif self.action == 'reload': subcmd = "refresh" elif self.action == 'restart' and status == 'online': subcmd = "restart" elif self.action == 'restart' and status != 'online': subcmd = "enable -rst" return self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name)) # =========================================== # Subclass: AIX class AIX(Service): """ This is the AIX Service (SRC) manipulation class - it uses lssrc, startsrc, stopsrc and refresh for service control. Enabling a service is currently not supported. 
Would require to add an entry in the /etc/inittab file (mkitab, chitab and rmitab commands) """ platform = 'AIX' distribution = None def get_service_tools(self): self.lssrc_cmd = self.module.get_bin_path('lssrc', True) if not self.lssrc_cmd: self.module.fail_json(msg='unable to find lssrc binary') self.startsrc_cmd = self.module.get_bin_path('startsrc', True) if not self.startsrc_cmd: self.module.fail_json(msg='unable to find startsrc binary') self.stopsrc_cmd = self.module.get_bin_path('stopsrc', True) if not self.stopsrc_cmd: self.module.fail_json(msg='unable to find stopsrc binary') self.refresh_cmd = self.module.get_bin_path('refresh', True) if not self.refresh_cmd: self.module.fail_json(msg='unable to find refresh binary') def get_service_status(self): status = self.get_aix_src_status() # Only 'active' is considered properly running. Everything else is off # or has some sort of problem. if status == 'active': self.running = True else: self.running = False def get_aix_src_status(self): rc, stdout, stderr = self.execute_command("%s -s %s" % (self.lssrc_cmd, self.name)) if rc == 1: if stderr: self.module.fail_json(msg=stderr) else: self.module.fail_json(msg=stdout) lines = stdout.rstrip("\n").split("\n") status = lines[-1].split(" ")[-1] # status is one of: active, inoperative return status def service_control(self): if self.action == 'start': srccmd = self.startsrc_cmd elif self.action == 'stop': srccmd = self.stopsrc_cmd elif self.action == 'reload': srccmd = self.refresh_cmd elif self.action == 'restart': self.execute_command("%s -s %s" % (self.stopsrc_cmd, self.name)) srccmd = self.startsrc_cmd if self.arguments and self.action == 'start': return self.execute_command("%s -a \"%s\" -s %s" % (srccmd, self.arguments, self.name)) else: return self.execute_command("%s -s %s" % (srccmd, self.name)) # =========================================== # Main control flow def main(): module = AnsibleModule( argument_spec = dict( name = dict(required=True), state = dict(choices=['running', 'started', 'stopped', 'restarted', 'reloaded']), sleep = dict(required=False, type='int', default=None), pattern = dict(required=False, default=None), enabled = dict(type='bool'), runlevel = dict(required=False, default='default'), arguments = dict(aliases=['args'], default=''), ), supports_check_mode=True ) if module.params['state'] is None and module.params['enabled'] is None: module.fail_json(msg="Neither 'state' nor 'enabled' set") service = Service(module) if service.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Service instantiated - platform %s' % service.platform) if service.distribution: syslog.syslog(syslog.LOG_NOTICE, 'Service instantiated - distribution %s' % service.distribution) rc = 0 out = '' err = '' result = {} result['name'] = service.name # Find service management tools service.get_service_tools() # Enable/disable service startup at boot if requested if service.module.params['enabled'] is not None: # FIXME: ideally this should detect if we need to toggle the enablement state, though # it's unlikely the changed handler would need to fire in this case so it's a minor thing. service.service_enable() result['enabled'] = service.enable if module.params['state'] is None: # Not changing the running state, so bail out now. 
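        # (e.g. a bare `service: name=httpd enabled=yes` task, with no state=,
        # ends here after only toggling boot-time enablement)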
        result['changed'] = service.changed
        module.exit_json(**result)

    result['state'] = service.state

    # Collect service status
    if service.pattern:
        service.check_ps()
    else:
        service.get_service_status()

    # Calculate if request will change service state
    service.check_service_changed()

    # Modify service state if necessary
    (rc, out, err) = service.modify_service_state()

    if rc != 0:
        if err and err.find("is already") != -1:
            # upstart got confused, one such possibility is MySQL on Ubuntu 12.04
            # where status may report it has no start/stop links and we could
            # not get accurate status
            pass
        else:
            if err:
                module.fail_json(msg=err)
            else:
                module.fail_json(msg=out)

    result['changed'] = service.changed | service.svc_change
    if service.module.params['enabled'] is not None:
        result['enabled'] = service.module.params['enabled']

    if not service.module.params['state']:
        status = service.get_service_status()
        if status is None:
            result['state'] = 'absent'
        elif status is False:
            result['state'] = 'started'
        else:
            result['state'] = 'stopped'
    else:
        # as we may have just bounced the service the service command may not
        # report accurate state at this moment so just show what we ran
        if service.module.params['state'] in ['started','restarted','running','reloaded']:
            result['state'] = 'started'
        else:
            result['state'] = 'stopped'

    module.exit_json(**result)

from ansible.module_utils.basic import *
main()

ansible-1.5.4/library/system/filesystem0000664000000000000000000000616512316627017016752 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Alexander Bulimov
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
author: Alexander Bulimov
module: filesystem
short_description: Makes a file system on a block device
description:
  - This module creates a file system.
version_added: "1.2"
options:
  fstype:
    description:
    - File system type to be created.
    required: true
  dev:
    description:
    - Target block device.
    required: true
  force:
    choices: [ "yes", "no" ]
    default: "no"
    description:
    - If C(yes), allows creating a new filesystem on a device that already
      has one.
    required: false
  opts:
    description:
    - List of options to be passed to the mkfs command.
notes:
  - Uses the mkfs command.
'''

EXAMPLES = '''
# Create an ext2 filesystem on /dev/sdb1.
- filesystem: fstype=ext2 dev=/dev/sdb1

# Create an ext4 filesystem on /dev/sdb1 and check disk blocks.
- filesystem: fstype=ext4 dev=/dev/sdb1 opts="-cc" ''' def main(): module = AnsibleModule( argument_spec = dict( fstype=dict(required=True, aliases=['type']), dev=dict(required=True, aliases=['device']), opts=dict(), force=dict(type='bool', default='no'), ), supports_check_mode=True, ) dev = module.params['dev'] fstype = module.params['fstype'] opts = module.params['opts'] force = module.boolean(module.params['force']) changed = False if not os.path.exists(dev): module.fail_json(msg="Device %s not found."%dev) cmd = module.get_bin_path('blkid', required=True) rc,raw_fs,err = module.run_command("%s -o value -s TYPE %s" % (cmd, dev)) fs = raw_fs.strip() if fs == fstype: module.exit_json(changed=False) elif fs and not force: module.fail_json(msg="'%s' is already used as %s, use force=yes to overwrite"%(dev,fs), rc=rc, err=err) ### create fs if module.check_mode: changed = True else: mkfs = module.get_bin_path('mkfs', required=True) cmd = None if opts is None: cmd = "%s -t %s '%s'" % (mkfs, fstype, dev) else: cmd = "%s -t %s %s '%s'" % (mkfs, fstype, opts, dev) rc,_,err = module.run_command(cmd) if rc == 0: changed = True else: module.fail_json(msg="Creating filesystem %s on device '%s' failed"%(fstype,dev), rc=rc, err=err) module.exit_json(changed=changed) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/open_iscsi0000664000000000000000000002722312316627017016717 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Serge van Ginderachter # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: open_iscsi author: Serge van Ginderachter version_added: "1.4" short_description: Manage iscsi targets with open-iscsi description: - Discover targets on given portal, (dis)connect targets, mark targets to manually or auto start, return device nodes of connected targets. requirements: - open_iscsi library and tools (iscsiadm) options: portal: required: false aliases: [ip] description: - the ip address of the iscsi target port: required: false default: 3260 description: - the port on which the iscsi target process listens target: required: false aliases: [name, targetname] description: - the iscsi target name login: required: false choices: [true, false] description: - whether the target node should be connected node_auth: required: false default: CHAP description: - discovery.sendtargets.auth.authmethod node_user: required: false description: - discovery.sendtargets.auth.username node_pass: required: false description: - discovery.sendtargets.auth.password auto_node_startup: aliases: [automatic] required: false choices: [true, false] description: - whether the target node should be automatically connected at startup discover: required: false choices: [true, false] description: - whether the list of target nodes on the portal should be (re)discovered and added to the persistent iscsi database. 
Keep in mind that iscsiadm discovery resets configuration, like node.startup
      to manual, hence combined with auto_node_startup=yes it will always
      return a changed state.
    show_nodes:
        required: false
        choices: [true, false]
        description:
        - whether the list of nodes in the persistent iscsi database should be
          returned by the module

examples:
    - description: perform a discovery on 10.1.2.3 and show available target nodes
      code: >
        open_iscsi: show_nodes=yes discover=yes portal=10.1.2.3
    - description: discover targets on portal and login to the one available (only works if exactly one target is exported to the initiator)
      code: >
        open_iscsi: portal={{iscsi_target}} login=yes discover=yes
    - description: connect to the named target, after updating the local persistent database (cache)
      code: >
        open_iscsi: login=yes target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
    - description: disconnect from the cached named target
      code: >
        open_iscsi: login=no target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
'''

import glob
import time

ISCSIADM = 'iscsiadm'

def compare_nodelists(l1, l2):
    l1.sort()
    l2.sort()
    return l1 == l2

def iscsi_get_cached_nodes(module, portal=None):
    cmd = '%s --mode node' % iscsiadm_cmd
    (rc, out, err) = module.run_command(cmd)

    if rc == 0:
        lines = out.splitlines()
        nodes = []
        for line in lines:
            # line format is "ip:port,target_portal_group_tag targetname"
            parts = line.split()
            if len(parts) > 2:
                module.fail_json(msg='error parsing output', cmd=cmd)
            target = parts[1]
            parts = parts[0].split(':')
            target_portal = parts[0]

            if portal is None or portal == target_portal:
                nodes.append(target)

    # older versions of iscsiadm don't have nice return codes
    # for newer versions see iscsiadm(8); also usr/iscsiadm.c for details
    # err can contain [N|n]o records...
    elif rc == 21 or (rc == 255 and err.find("o records found") != -1):
        nodes = []
    else:
        module.fail_json(cmd=cmd, rc=rc, msg=err)

    return nodes

def iscsi_discover(module, portal, port):
    cmd = '%s --mode discovery --type sendtargets --portal %s:%s' % (iscsiadm_cmd, portal, port)
    (rc, out, err) = module.run_command(cmd)

    if rc > 0:
        module.fail_json(cmd=cmd, rc=rc, msg=err)

def target_loggedon(module, target):
    cmd = '%s --mode session' % iscsiadm_cmd
    (rc, out, err) = module.run_command(cmd)

    if rc == 0:
        return target in out
    else:
        module.fail_json(cmd=cmd, rc=rc, msg=err)

def target_login(module, target):
    node_auth = module.params['node_auth']
    node_user = module.params['node_user']
    node_pass = module.params['node_pass']

    if node_user:
        params = [('node.session.auth.authmethod', node_auth),
                  ('node.session.auth.username', node_user),
                  ('node.session.auth.password', node_pass)]
        for (name, value) in params:
            cmd = '%s --mode node --targetname %s --op=update --name %s --value %s' % (iscsiadm_cmd, target, name, value)
            (rc, out, err) = module.run_command(cmd)
            if rc > 0:
                module.fail_json(cmd=cmd, rc=rc, msg=err)

    cmd = '%s --mode node --targetname %s --login' % (iscsiadm_cmd, target)
    (rc, out, err) = module.run_command(cmd)

    if rc > 0:
        module.fail_json(cmd=cmd, rc=rc, msg=err)

def target_logout(module, target):
    cmd = '%s --mode node --targetname %s --logout' % (iscsiadm_cmd, target)
    (rc, out, err) = module.run_command(cmd)

    if rc > 0:
        module.fail_json(cmd=cmd, rc=rc, msg=err)

def target_device_node(module, target):
    # if anyone knows a better way to find out which device nodes get created for
    # a given target...
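    # For reference (an illustrative example, not guaranteed on every distro):
    # udev typically creates links such as
    #   /dev/disk/by-path/ip-10.1.2.3:3260-iscsi-<targetname>-lun-0
    #   /dev/disk/by-path/ip-10.1.2.3:3260-iscsi-<targetname>-lun-0-part1
    # where realpath() of the first resolves to e.g. /dev/sdb, while the
    # "-part" entries are partitions and get filtered out below.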
devices = glob.glob('/dev/disk/by-path/*%s*' % target) if len(devices) == 0: return None else: devdisks = [] for dev in devices: # exclude partitions if "-part" not in dev: devdisk = os.path.realpath(dev) # only add once (multi-path?) if devdisk not in devdisks: devdisks.append(devdisk) return devdisks def target_isauto(module, target): cmd = '%s --mode node --targetname %s' % (iscsiadm_cmd, target) (rc, out, err) = module.run_command(cmd) if rc == 0: lines = out.splitlines() for line in lines: if 'node.startup' in line: return 'automatic' in line return False else: module.fail_json(cmd=cmd, rc=rc, msg=err) def target_setauto(module, target): cmd = '%s --mode node --targetname %s --op=update --name node.startup --value automatic' % (iscsiadm_cmd, target) (rc, out, err) = module.run_command(cmd) if rc > 0: module.fail_json(cmd=cmd, rc=rc, msg=err) def target_setmanual(module, target): cmd = '%s --mode node --targetname %s --op=update --name node.startup --value manual' % (iscsiadm_cmd, target) (rc, out, err) = module.run_command(cmd) if rc > 0: module.fail_json(cmd=cmd, rc=rc, msg=err) def main(): # load ansible module object module = AnsibleModule( argument_spec = dict( # target portal = dict(required=False, aliases=['ip']), port = dict(required=False, default=3260), target = dict(required=False, aliases=['name', 'targetname']), node_auth = dict(required=False, default='CHAP'), node_user = dict(required=False), node_pass = dict(required=False), # actions login = dict(type='bool', aliases=['state']), auto_node_startup = dict(type='bool', aliases=['automatic']), discover = dict(type='bool', default=False), show_nodes = dict(type='bool', default=False) ), required_together=[['discover_user', 'discover_pass'], ['node_user', 'node_pass']], supports_check_mode=True ) global iscsiadm_cmd iscsiadm_cmd = module.get_bin_path('iscsiadm', required=True) # parameters portal = module.params['portal'] target = module.params['target'] port = module.params['port'] login = module.params['login'] automatic = module.params['auto_node_startup'] discover = module.params['discover'] show_nodes = module.params['show_nodes'] check = module.check_mode cached = iscsi_get_cached_nodes(module, portal) # return json dict result = {} result['changed'] = False if discover: if portal is None: module.fail_json(msg = "Need to specify at least the portal (ip) to discover") elif check: nodes = cached else: iscsi_discover(module, portal, port) nodes = iscsi_get_cached_nodes(module, portal) if not compare_nodelists(cached, nodes): result['changed'] |= True result['cache_updated'] = True else: nodes = cached if login is not None or automatic is not None: if target is None: if len(nodes) > 1: module.fail_json(msg = "Need to specify a target") else: target = nodes[0] else: # check given target is in cache check_target = False for node in nodes: if node == target: check_target = True break if not check_target: module.fail_json(msg = "Specified target not found") if show_nodes: result['nodes'] = nodes if login is not None: loggedon = target_loggedon(module,target) if (login and loggedon) or (not login and not loggedon): result['changed'] |= False if login: result['devicenodes'] = target_device_node(module,target) elif not check: if login: target_login(module, target) # give udev some time time.sleep(1) result['devicenodes'] = target_device_node(module,target) else: target_logout(module, target) result['changed'] |= True result['connection_changed'] = True else: result['changed'] |= True result['connection_changed'] = True if 
automatic is not None: isauto = target_isauto(module, target) if (automatic and isauto) or (not automatic and not isauto): result['changed'] |= False result['automatic_changed'] = False elif not check: if automatic: target_setauto(module, target) else: target_setmanual(module, target) result['changed'] |= True result['automatic_changed'] = True else: result['changed'] |= True result['automatic_changed'] = True module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/user0000664000000000000000000014467312316627017015553 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Stephen Fromm # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: user author: Stephen Fromm version_added: "0.2" short_description: Manage user accounts requirements: [ useradd, userdel, usermod ] description: - Manage user accounts and user attributes. options: name: required: true aliases: [ "user" ] description: - Name of the user to create, remove or modify. comment: required: false description: - Optionally sets the description (aka I(GECOS)) of user account. uid: required: false description: - Optionally sets the I(UID) of the user. non_unique: required: false default: "no" choices: [ "yes", "no" ] description: - Optionally when used with the -u option, this option allows to change the user ID to a non-unique value. version_added: "1.1" group: required: false description: - Optionally sets the user's primary group (takes a group name). groups: required: false description: - Puts the user in this comma-delimited list of groups. When set to the empty string ('groups='), the user is removed from all groups except the primary group. append: required: false description: - If C(yes), will only add groups, not set them to just the list in I(groups). shell: required: false description: - Optionally set the user's shell. home: required: false description: - Optionally set the user's home directory. password: required: false description: - Optionally set the user's password to this crypted value. See the user example in the github examples directory for what this looks like in a playbook. The `FAQ `_ contains details on various ways to generate these password values. state: required: false default: "present" choices: [ present, absent ] description: - Whether the account should exist. When C(absent), removes the user account. createhome: required: false default: "yes" choices: [ "yes", "no" ] description: - Unless set to C(no), a home directory will be made for the user when the account is created or if the home directory does not exist. move_home: required: false default: "no" choices: [ "yes", "no" ] description: - If set to C(yes) when used with C(home=), attempt to move the user's home directory to the specified directory if it isn't there already. 
system: required: false default: "no" choices: [ "yes", "no" ] description: - When creating an account, setting this to C(yes) makes the user a system account. This setting cannot be changed on existing users. force: required: false default: "no" choices: [ "yes", "no" ] description: - When used with C(state=absent), behavior is as with C(userdel --force). login_class: required: false description: - Optionally sets the user's login class for FreeBSD, OpenBSD and NetBSD systems. remove: required: false default: "no" choices: [ "yes", "no" ] description: - When used with C(state=absent), behavior is as with C(userdel --remove). generate_ssh_key: required: false default: "no" choices: [ "yes", "no" ] version_added: "0.9" description: - Whether to generate a SSH key for the user in question. This will B(not) overwrite an existing SSH key. ssh_key_bits: required: false default: 2048 version_added: "0.9" description: - Optionally specify number of bits in SSH key to create. ssh_key_type: required: false default: rsa version_added: "0.9" description: - Optionally specify the type of SSH key to generate. Available SSH key types will depend on implementation present on target host. ssh_key_file: required: false default: $HOME/.ssh/id_rsa version_added: "0.9" description: - Optionally specify the SSH key filename. ssh_key_comment: required: false default: ansible-generated version_added: "0.9" description: - Optionally define the comment for the SSH key. ssh_key_passphrase: required: false version_added: "0.9" description: - Set a passphrase for the SSH key. If no passphrase is provided, the SSH key will default to having no passphrase. update_password: required: false default: always choices: ['always', 'on_create'] version_added: "1.3" description: - C(always) will update passwords if they differ. C(on_create) will only set the password for newly created users. ''' EXAMPLES = ''' # Add the user 'johnd' with a specific uid and a primary group of 'admin' - user: name=johnd comment="John Doe" uid=1040 # Remove the user 'johnd' - user: name=johnd state=absent remove=yes # Create a 2048-bit SSH key for user jsmith - user: name=jsmith generate_ssh_key=yes ssh_key_bits=2048 ''' import os import pwd import grp import syslog import platform try: import spwd HAVE_SPWD=True except: HAVE_SPWD=False class User(object): """ This is a generic User manipulation class that is subclassed based on platform. A subclass may wish to override the following action methods:- - create_user() - remove_user() - modify_user() - ssh_key_gen() - ssh_key_fingerprint() - user_exists() All subclasses MUST define platform and distribution (which may be None). 
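    How the right subclass is chosen: __new__ (below) defers to
    load_platform_subclass() from ansible.module_utils.basic, which matches
    each subclass's `platform` attribute (and, when set, its `distribution`)
    against the running system. A minimal sketch of that dispatch, for
    illustration only (the real helper also handles distribution matching):

        import platform

        def pick_subclass(cls):
            this_system = platform.system()   # e.g. 'Linux', 'FreeBSD', 'SunOS'
            for subclass in cls.__subclasses__():
                if subclass.platform == this_system:
                    return subclass
            return cls    # fall back to the generic implementation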
""" platform = 'Generic' distribution = None SHADOWFILE = '/etc/shadow' def __new__(cls, *args, **kwargs): return load_platform_subclass(User, args, kwargs) def __init__(self, module): self.module = module self.state = module.params['state'] self.name = module.params['name'] self.uid = module.params['uid'] self.non_unique = module.params['non_unique'] self.group = module.params['group'] self.groups = module.params['groups'] self.comment = module.params['comment'] self.home = module.params['home'] self.shell = module.params['shell'] self.password = module.params['password'] self.force = module.params['force'] self.remove = module.params['remove'] self.createhome = module.params['createhome'] self.move_home = module.params['move_home'] self.system = module.params['system'] self.login_class = module.params['login_class'] self.append = module.params['append'] self.sshkeygen = module.params['generate_ssh_key'] self.ssh_bits = module.params['ssh_key_bits'] self.ssh_type = module.params['ssh_key_type'] self.ssh_comment = module.params['ssh_key_comment'] self.ssh_passphrase = module.params['ssh_key_passphrase'] self.update_password = module.params['update_password'] if module.params['ssh_key_file'] is not None: self.ssh_file = module.params['ssh_key_file'] else: self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type) # select whether we dump additional debug info through syslog self.syslogging = False def execute_command(self, cmd): if self.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Command %s' % '|'.join(cmd)) return self.module.run_command(cmd) def remove_user_userdel(self): cmd = [self.module.get_bin_path('userdel', True)] if self.force: cmd.append('-f') if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user_useradd(self, command_name='useradd'): cmd = [self.module.get_bin_path(command_name, True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) elif self.group_exists(self.name): # use the -N option (no user group) if a group already # exists with the same name as the user to prevent # errors from useradd trying to create a group when # USERGROUPS_ENAB is set in /etc/login.defs. 
cmd.append('-N') if self.groups is not None and len(self.groups): groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.createhome: cmd.append('-m') else: cmd.append('-M') if self.system: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def _check_usermod_append(self): # check if this version of usermod can append groups cmd = [self.module.get_bin_path('usermod', True)] cmd.append('--help') rc, data1, data2 = self.execute_command(cmd) helpout = data1 + data2 # check if --append exists lines = helpout.split('\n') for line in lines: if line.strip().startswith('-a, --append'): return True return False def modify_user_usermod(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() has_append = self._check_usermod_append() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set(remove_existing=False) group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: if has_append: cmd.append('-a') groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: if self.append and not has_append: cmd.append('-A') cmd.append(','.join(group_diff)) else: cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: cmd.append('-d') cmd.append(self.home) if self.move_home: cmd.append('-m') if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') elif self.module.check_mode: return (0, '', '') cmd.append(self.name) return self.execute_command(cmd) def group_exists(self,group): try: if group.isdigit(): if grp.getgrgid(group): return True else: if grp.getgrnam(group): return True except KeyError: return False def group_info(self,group): if not self.group_exists(group): return False if group.isdigit(): return list(grp.getgrgid(group)) else: return list(grp.getgrnam(group)) def get_groups_set(self, remove_existing=True): if self.groups is None: return None info = self.user_info() groups = set(filter(None, self.groups.split(','))) for g in set(groups): if not self.group_exists(g): self.module.fail_json(msg="Group %s does not exist" % (g)) if info and remove_existing and self.group_info(g)[2] == info[3]: groups.remove(g) return groups def user_group_membership(self): groups = [] info = self.get_pwd_info() for group in grp.getgrall(): if self.name in group.gr_mem and not info[3] == 
group.gr_gid: groups.append(group[0]) return groups def user_exists(self): try: if pwd.getpwnam(self.name): return True except KeyError: return False def get_pwd_info(self): if not self.user_exists(): return False return list(pwd.getpwnam(self.name)) def user_info(self): if not self.user_exists(): return False info = self.get_pwd_info() if len(info[1]) == 1 or len(info[1]) == 0: info[1] = self.user_password() return info def user_password(self): passwd = '' if HAVE_SPWD: try: passwd = spwd.getspnam(self.name)[1] except KeyError: return passwd if not self.user_exists(): return passwd else: # Read shadow file for user's encrypted password string if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK): for line in open(self.SHADOWFILE).readlines(): if line.startswith('%s:' % self.name): passwd = line.split(':')[1] return passwd def get_ssh_key_path(self): info = self.user_info() if os.path.isabs(self.ssh_file): ssh_key_file = self.ssh_file else: ssh_key_file = os.path.join(info[5], self.ssh_file) return ssh_key_file def ssh_key_gen(self): info = self.user_info() if not os.path.exists(info[5]): return (1, '', 'User %s home directory does not exist' % self.name) ssh_key_file = self.get_ssh_key_path() ssh_dir = os.path.dirname(ssh_key_file) if not os.path.exists(ssh_dir): try: os.mkdir(ssh_dir, 0700) os.chown(ssh_dir, info[2], info[3]) except OSError, e: return (1, '', 'Failed to create %s: %s' % (ssh_dir, str(e))) if os.path.exists(ssh_key_file): return (None, 'Key already exists', '') cmd = [self.module.get_bin_path('ssh-keygen', True)] cmd.append('-t') cmd.append(self.ssh_type) cmd.append('-b') cmd.append(self.ssh_bits) cmd.append('-C') cmd.append(self.ssh_comment) cmd.append('-f') cmd.append(ssh_key_file) cmd.append('-N') if self.ssh_passphrase is not None: cmd.append(self.ssh_passphrase) else: cmd.append('') (rc, out, err) = self.execute_command(cmd) if rc == 0: # If the keys were successfully created, we should be able # to tweak ownership. 
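            # (The module typically runs as root, so ssh-keygen leaves the
            # private key and the .pub file owned by root; chown both to the
            # target user's uid/gid taken from the passwd entry.)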
            os.chown(ssh_key_file, info[2], info[3])
            os.chown('%s.pub' % ssh_key_file, info[2], info[3])
        return (rc, out, err)

    def ssh_key_fingerprint(self):
        ssh_key_file = self.get_ssh_key_path()
        if not os.path.exists(ssh_key_file):
            return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
        cmd = [ self.module.get_bin_path('ssh-keygen', True) ]
        cmd.append('-l')
        cmd.append('-f')
        cmd.append(ssh_key_file)
        return self.execute_command(cmd)

    def get_ssh_public_key(self):
        ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
        try:
            f = open(ssh_public_key_file)
            ssh_public_key = f.read().strip()
            f.close()
        except IOError:
            return None
        return ssh_public_key

    def create_user(self):
        # by default we use the create_user_useradd method
        return self.create_user_useradd()

    def remove_user(self):
        # by default we use the remove_user_userdel method
        return self.remove_user_userdel()

    def modify_user(self):
        # by default we use the modify_user_usermod method
        return self.modify_user_usermod()

    def create_homedir(self, path):
        if not os.path.exists(path):
            # use /etc/skel if possible
            if os.path.exists('/etc/skel'):
                try:
                    shutil.copytree('/etc/skel', path, symlinks=True)
                except OSError, e:
                    self.module.exit_json(failed=True, msg="%s" % e)
            else:
                try:
                    os.makedirs(path)
                except OSError, e:
                    self.module.exit_json(failed=True, msg="%s" % e)

    def chown_homedir(self, uid, gid, path):
        try:
            os.chown(path, uid, gid)
            for root, dirs, files in os.walk(path):
                for d in dirs:
                    os.chown(os.path.join(root, d), uid, gid)
                for f in files:
                    os.chown(os.path.join(root, f), uid, gid)
        except OSError, e:
            self.module.exit_json(failed=True, msg="%s" % e)

# ===========================================

class FreeBsdUser(User):
    """
    This is a FreeBSD User manipulation class - it uses the pw command
    to manipulate the user database, followed by the chpass command to
    change the password.

    This overrides the following methods from the generic class:-
      - create_user()
      - remove_user()
      - modify_user()
    """

    platform = 'FreeBSD'
    distribution = None
    SHADOWFILE = '/etc/master.passwd'

    def remove_user(self):
        cmd = [
            self.module.get_bin_path('pw', True),
            'userdel',
            '-n',
            self.name
        ]
        if self.remove:
            cmd.append('-r')
        return self.execute_command(cmd)

    def create_user(self):
        cmd = [
            self.module.get_bin_path('pw', True),
            'useradd',
            '-n',
            self.name,
        ]

        if self.uid is not None:
            cmd.append('-u')
            cmd.append(self.uid)

            if self.non_unique:
                cmd.append('-o')

        if self.comment is not None:
            cmd.append('-c')
            cmd.append(self.comment)

        if self.home is not None:
            cmd.append('-d')
            cmd.append(self.home)

        if self.group is not None:
            if not self.group_exists(self.group):
                self.module.fail_json(msg="Group %s does not exist" % self.group)
            cmd.append('-g')
            cmd.append(self.group)

        if self.groups is not None:
            groups = self.get_groups_set()
            cmd.append('-G')
            cmd.append(','.join(groups))

        if self.createhome:
            cmd.append('-m')

        if self.shell is not None:
            cmd.append('-s')
            cmd.append(self.shell)

        if self.login_class is not None:
            cmd.append('-L')
            cmd.append(self.login_class)

        # system cannot be handled currently - should we error if it's requested?
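        # An illustrative (hypothetical) invocation the list above can build:
        #
        #   pw useradd -n jsmith -u 1050 -c 'Jane Smith' -d /home/jsmith -g staff -G www -m -s /bin/sh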
# create the user (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) # we have to set the password in a second command if self.password is not None: cmd = [ self.module.get_bin_path('chpass', True), '-p', self.password, self.name ] return self.execute_command(cmd) return (rc, out, err) def modify_user(self): cmd = [ self.module.get_bin_path('pw', True), 'usermod', '-n', self.name ] cmd_len = len(cmd) info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.groups is not None: current_groups = self.user_group_membership() groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) groups_need_mod = False if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') new_groups = groups if self.append: new_groups = groups | set(current_groups) cmd.append(','.join(new_groups)) # modify the user if cmd will do anything if cmd_len != len(cmd): (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) else: (rc, out, err) = (None, '', '') # we have to set the password in a second command if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd = [ self.module.get_bin_path('chpass', True), '-p', self.password, self.name ] return self.execute_command(cmd) return (rc, out, err) # =========================================== class OpenBSDUser(User): """ This is a OpenBSD User manipulation class. Main differences are that OpenBSD:- - has no concept of "system" account. 
- has no force delete user This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'OpenBSD' distribution = None SHADOWFILE = '/etc/master.passwd' def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.createhome: cmd.append('-m') cmd.append(self.name) return self.execute_command(cmd) def remove_user_userdel(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def modify_user(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups_option = '-G' groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups_option = '-S' groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append(groups_option) cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: # find current login class user_login_class = None userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name] (rc, out, err) = self.execute_command(userinfo_cmd) for line in out.splitlines(): tokens = line.split() if tokens[0] == 'class' and len(tokens) == 2: user_login_class = tokens[1] # act only if login_class change if self.login_class != user_login_class: cmd.append('-L') cmd.append(self.login_class) if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') elif self.module.check_mode: return (0, '', '') cmd.append(self.name) return self.execute_command(cmd) # =========================================== class NetBSDUser(User): """ This is a NetBSD User manipulation class. Main differences are that NetBSD:- - has no concept of "system" account. 
- has no force delete user This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'NetBSD' distribution = None SHADOWFILE = '/etc/master.passwd' def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() if len(groups) > 16: self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups)) cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.createhome: cmd.append('-m') cmd.append(self.name) return self.execute_command(cmd) def remove_user_userdel(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def modify_user(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups = set(current_groups).union(groups) groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: if len(groups) > 16: self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups)) cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') elif self.module.check_mode: return (0, '', '') cmd.append(self.name) return self.execute_command(cmd) # =========================================== class SunOS(User): """ This is a SunOS User manipulation class - The main difference between this class and the generic user class is that Solaris-type distros don't support the concept of a "system" account and we need to edit the /etc/shadow file manually to set a password. 
(Ugh) This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'SunOS' distribution = None SHADOWFILE = '/etc/shadow' def remove_user(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.createhome: cmd.append('-m') cmd.append(self.name) if self.module.check_mode: return (0, '', '') else: (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) # we have to set the password by editing the /etc/shadow file if self.password is not None: try: lines = [] for line in open(self.SHADOWFILE, 'rb').readlines(): fields = line.strip().split(':') if not fields[0] == self.name: lines.append(line) continue fields[1] = self.password line = ':'.join(fields) lines.append('%s\n' % line) open(self.SHADOWFILE, 'w+').writelines(lines) except Exception, err: self.module.fail_json(msg="failed to update users password: %s" % str(err)) return (rc, out, err) def modify_user_usermod(self): cmd = [self.module.get_bin_path('usermod', True)] cmd_len = len(cmd) info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) groups_need_mod = False if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') new_groups = groups if self.append: new_groups.extend(current_groups) cmd.append(','.join(new_groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.module.check_mode: return (0, '', '') else: # modify the user if cmd will do anything if cmd_len != len(cmd): cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) else: (rc, out, err) = (None, '', '') # we have to set the password by editing the /etc/shadow file if self.update_password == 'always' and self.password is not None and info[1] != self.password: try: lines = [] for line in open(self.SHADOWFILE, 'rb').readlines(): fields = 
line.strip().split(':') if not fields[0] == self.name: lines.append(line) continue fields[1] = self.password line = ':'.join(fields) lines.append('%s\n' % line) open(self.SHADOWFILE, 'w+').writelines(lines) rc = 0 except Exception, err: self.module.fail_json(msg="failed to update users password: %s" % str(err)) return (rc, out, err) # =========================================== class AIX(User): """ This is a AIX User manipulation class. This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'AIX' distribution = None SHADOWFILE = '/etc/security/passwd' def remove_user(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user_useradd(self, command_name='useradd'): cmd = [self.module.get_bin_path(command_name, True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None and len(self.groups): groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.createhome: cmd.append('-m') cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) # set password with chpasswd if self.password is not None: cmd = [] cmd.append('echo "'+self.name+':'+self.password+'" |') cmd.append(self.module.get_bin_path('chpasswd', True)) cmd.append('-e') cmd.append('-c') self.execute_command(' '.join(cmd)) return (rc, out, err) def modify_user_usermod(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) # skip if no changes to be made if len(cmd) == 1: (rc, out, err) = (None, '', '') elif self.module.check_mode: return (True, '', '') else: cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) # set password with chpasswd if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd = [] cmd.append('echo "'+self.name+':'+self.password+'" |') cmd.append(self.module.get_bin_path('chpasswd', True)) cmd.append('-e') cmd.append('-c') (rc2, 
out2, err2) = self.execute_command(' '.join(cmd)) else: (rc2, out2, err2) = (None, '', '') if rc != None: return (rc, out+out2, err+err2) else: return (rc2, out+out2, err+err2) # =========================================== def main(): ssh_defaults = { 'bits': '2048', 'type': 'rsa', 'passphrase': None, 'comment': 'ansible-generated' } module = AnsibleModule( argument_spec = dict( state=dict(default='present', choices=['present', 'absent'], type='str'), name=dict(required=True, aliases=['user'], type='str'), uid=dict(default=None, type='str'), non_unique=dict(default='no', type='bool'), group=dict(default=None, type='str'), groups=dict(default=None, type='str'), comment=dict(default=None, type='str'), home=dict(default=None, type='str'), shell=dict(default=None, type='str'), password=dict(default=None, type='str'), login_class=dict(default=None, type='str'), # following options are specific to userdel force=dict(default='no', type='bool'), remove=dict(default='no', type='bool'), # following options are specific to useradd createhome=dict(default='yes', type='bool'), system=dict(default='no', type='bool'), # following options are specific to usermod move_home=dict(default='no', type='bool'), append=dict(default='no', type='bool'), # following are specific to ssh key generation generate_ssh_key=dict(type='bool'), ssh_key_bits=dict(default=ssh_defaults['bits'], type='str'), ssh_key_type=dict(default=ssh_defaults['type'], type='str'), ssh_key_file=dict(default=None, type='str'), ssh_key_comment=dict(default=ssh_defaults['comment'], type='str'), ssh_key_passphrase=dict(default=None, type='str'), update_password=dict(default='always',choices=['always','on_create'],type='str') ), supports_check_mode=True ) user = User(module) if user.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'User instantiated - platform %s' % user.platform) if user.distribution: syslog.syslog(syslog.LOG_NOTICE, 'User instantiated - distribution %s' % user.distribution) rc = None out = '' err = '' result = {} result['name'] = user.name result['state'] = user.state if user.state == 'absent': if user.user_exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = user.remove_user() if rc != 0: module.fail_json(name=user.name, msg=err, rc=rc) result['force'] = user.force result['remove'] = user.remove elif user.state == 'present': if not user.user_exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = user.create_user() result['system'] = user.system result['createhome'] = user.createhome else: # modify user (note: this function is check mode aware) (rc, out, err) = user.modify_user() result['append'] = user.append result['move_home'] = user.move_home if rc is not None and rc != 0: module.fail_json(name=user.name, msg=err, rc=rc) if user.password is not None: result['password'] = 'NOT_LOGGING_PASSWORD' if rc is None: result['changed'] = False else: result['changed'] = True if out: result['stdout'] = out if err: result['stderr'] = err if user.user_exists(): info = user.user_info() if info == False: result['msg'] = "failed to look up user name: %s" % user.name result['failed'] = True result['uid'] = info[2] result['group'] = info[3] result['comment'] = info[4] result['home'] = info[5] result['shell'] = info[6] result['uid'] = info[2] if user.groups is not None: result['groups'] = user.groups # deal with ssh key if user.sshkeygen: (rc, out, err) = user.ssh_key_gen() if rc is not None and rc != 0: module.fail_json(name=user.name, 
                        msg=err, rc=rc)
        if rc == 0:
            result['changed'] = True
        (rc, out, err) = user.ssh_key_fingerprint()
        if rc == 0:
            result['ssh_fingerprint'] = out.strip()
        else:
            result['ssh_fingerprint'] = err.strip()
        result['ssh_key_file'] = user.get_ssh_key_path()
        result['ssh_public_key'] = user.get_ssh_public_key()

    # handle missing homedirs
    info = user.user_info()
    if user.home is None:
        user.home = info[5]
    if not os.path.exists(user.home) and user.createhome:
        if not module.check_mode:
            user.create_homedir(user.home)
            user.chown_homedir(info[2], info[3], user.home)
        result['changed'] = True

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/hostname0000664000000000000000000002440412316627017016400 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2013, Hiroaki Nakamura
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: hostname
author: Hiroaki Nakamura
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
    - Set system's hostname.
    - Currently implemented on Debian, Ubuntu, Fedora, the RedHat/CentOS
      family (including Amazon and Scientific Linux), openSUSE and Arch.
options:
    name:
        required: true
        description:
            - Name of the host
'''

EXAMPLES = '''
- hostname: name=web01
'''

class UnimplementedStrategy(object):
    def __init__(self, module):
        self.module = module

    def get_current_hostname(self):
        self.unimplemented_error()

    def set_current_hostname(self, name):
        self.unimplemented_error()

    def get_permanent_hostname(self):
        self.unimplemented_error()

    def set_permanent_hostname(self, name):
        self.unimplemented_error()

    def unimplemented_error(self):
        platform = get_platform()
        distribution = get_distribution()
        if distribution is not None:
            msg_platform = '%s (%s)' % (platform, distribution)
        else:
            msg_platform = platform
        self.module.fail_json(
            msg='hostname module cannot be used on platform %s' % msg_platform)

class Hostname(object):
    """
    This is a generic Hostname manipulation class that is subclassed
    based on platform.

    A subclass may wish to set a different strategy instance on self.strategy.

    All subclasses MUST define platform and distribution (which may be None).
    """

    platform = 'Generic'
    distribution = None
    strategy_class = UnimplementedStrategy

    def __new__(cls, *args, **kwargs):
        return load_platform_subclass(Hostname, args, kwargs)

    def __init__(self, module):
        self.module = module
        self.name = module.params['name']
        self.strategy = self.strategy_class(module)

    def get_current_hostname(self):
        return self.strategy.get_current_hostname()

    def set_current_hostname(self, name):
        self.strategy.set_current_hostname(name)

    def get_permanent_hostname(self):
        return self.strategy.get_permanent_hostname()

    def set_permanent_hostname(self, name):
        self.strategy.set_permanent_hostname(name)

class GenericStrategy(object):
    """
    This is a generic Hostname manipulation strategy class.

    A subclass may wish to override some or all of these methods.
- get_current_hostname() - get_permanent_hostname() - set_current_hostname(name) - set_permanent_hostname(name) """ def __init__(self, module): self.module = module HOSTNAME_CMD = '/bin/hostname' def get_current_hostname(self): cmd = [self.HOSTNAME_CMD] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return out.strip() def set_current_hostname(self, name): cmd = [self.HOSTNAME_CMD, name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): return None def set_permanent_hostname(self, name): pass # =========================================== class DebianStrategy(GenericStrategy): """ This is a Debian family Hostname manipulation strategy class - it edits the /etc/hostname file. """ HOSTNAME_FILE = '/etc/hostname' def get_permanent_hostname(self): if not os.path.isfile(self.HOSTNAME_FILE): try: open(self.HOSTNAME_FILE, "a").write("") except IOError, err: self.module.fail_json(msg="failed to write file: %s" % str(err)) try: f = open(self.HOSTNAME_FILE) try: return f.read().strip() finally: f.close() except Exception, err: self.module.fail_json(msg="failed to read hostname: %s" % str(err)) def set_permanent_hostname(self, name): try: f = open(self.HOSTNAME_FILE, 'w+') try: f.write("%s\n" % name) finally: f.close() except Exception, err: self.module.fail_json(msg="failed to update hostname: %s" % str(err)) class DebianHostname(Hostname): platform = 'Linux' distribution = 'Debian' strategy_class = DebianStrategy class UbuntuHostname(Hostname): platform = 'Linux' distribution = 'Ubuntu' strategy_class = DebianStrategy # =========================================== class RedHatStrategy(GenericStrategy): """ This is a Redhat Hostname strategy class - it edits the /etc/sysconfig/network file. 
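    A typical fragment of the file it rewrites looks like this
    (illustrative values):

        NETWORKING=yes
        HOSTNAME=web01.example.com

    get_permanent_hostname() returns the value after '=' on the HOSTNAME
    line, and set_permanent_hostname() rewrites that line in place,
    appending one if none is found.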
""" NETWORK_FILE = '/etc/sysconfig/network' def get_permanent_hostname(self): try: f = open(self.NETWORK_FILE, 'rb') try: for line in f.readlines(): if line.startswith('HOSTNAME'): k, v = line.split('=') return v.strip() finally: f.close() except Exception, err: self.module.fail_json(msg="failed to read hostname: %s" % str(err)) def set_permanent_hostname(self, name): try: lines = [] found = False f = open(self.NETWORK_FILE, 'rb') try: for line in f.readlines(): if line.startswith('HOSTNAME'): lines.append("HOSTNAME=%s\n" % name) found = True else: lines.append(line) finally: f.close() if not found: lines.append("HOSTNAME=%s\n" % name) f = open(self.NETWORK_FILE, 'w+') try: f.writelines(lines) finally: f.close() except Exception, err: self.module.fail_json(msg="failed to update hostname: %s" % str(err)) class RedHat5Hostname(Hostname): platform = 'Linux' distribution = 'Redhat' strategy_class = RedHatStrategy class RedHatServerHostname(Hostname): platform = 'Linux' distribution = 'Red hat enterprise linux server' strategy_class = RedHatStrategy class RedHatWorkstationHostname(Hostname): platform = 'Linux' distribution = 'Red hat enterprise linux workstation' strategy_class = RedHatStrategy class CentOSHostname(Hostname): platform = 'Linux' distribution = 'Centos' strategy_class = RedHatStrategy class AmazonLinuxHostname(Hostname): platform = 'Linux' distribution = 'Amazon' strategy_class = RedHatStrategy class ScientificLinuxHostname(Hostname): platform = 'Linux' distribution = 'Scientific' strategy_class = RedHatStrategy # =========================================== class FedoraStrategy(GenericStrategy): """ This is a Fedora family Hostname manipulation strategy class - it uses the hostnamectl command. """ def get_current_hostname(self): cmd = ['hostname'] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return out.strip() def set_current_hostname(self, name): cmd = ['hostnamectl', '--transient', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) def get_permanent_hostname(self): cmd = 'hostnamectl status | awk \'/^ *Static hostname:/{printf("%s", $3)}\'' rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) return out def set_permanent_hostname(self, name): cmd = ['hostnamectl', '--pretty', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) cmd = ['hostnamectl', '--static', 'set-hostname', name] rc, out, err = self.module.run_command(cmd) if rc != 0: self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err)) class FedoraHostname(Hostname): platform = 'Linux' distribution = 'Fedora' strategy_class = FedoraStrategy class OpenSUSEHostname(Hostname): platform = 'Linux' distribution = 'Opensuse ' strategy_class = FedoraStrategy class ArchHostname(Hostname): platform = 'Linux' distribution = 'Arch' strategy_class = FedoraStrategy # =========================================== def main(): module = AnsibleModule( argument_spec = dict( name=dict(required=True, type='str') ) ) hostname = Hostname(module) changed = False name = module.params['name'] current_name = hostname.get_current_hostname() if current_name != name: hostname.set_current_hostname(name) changed = 
True permanent_name = hostname.get_permanent_hostname() if permanent_name != name: hostname.set_permanent_hostname(name) changed = True module.exit_json(changed=changed, name=name) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/zfs0000664000000000000000000003134412316627017015365 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Johan Wiren # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: zfs short_description: Manage zfs description: - Manages ZFS file systems on Solaris and FreeBSD. Can manage file systems, volumes and snapshots. See zfs(1M) for more information about the properties. version_added: "1.1" options: name: description: - File system, snapshot or volume name e.g. C(rpool/myfs) required: true state: description: - Whether to create (C(present)), or remove (C(absent)) a file system, snapshot or volume. required: true choices: [present, absent] aclinherit: description: - The aclinherit property. required: False choices: [discard,noallow,restricted,passthrough,passthrough-x] aclmode: description: - The aclmode property. required: False choices: [discard,groupmask,passthrough] atime: description: - The atime property. required: False choices: ['on','off'] canmount: description: - The canmount property. required: False choices: ['on','off','noauto'] casesensitivity: description: - The casesensitivity property. required: False choices: [sensitive,insensitive,mixed] checksum: description: - The checksum property. required: False choices: ['on','off',fletcher2,fletcher4,sha256] compression: description: - The compression property. required: False choices: ['on','off',lzjb,gzip,gzip-1,gzip-2,gzip-3,gzip-4,gzip-5,gzip-6,gzip-7,gzip-8,gzip-9,lz4,zle] copies: description: - The copies property. required: False choices: [1,2,3] dedup: description: - The dedup property. required: False choices: ['on','off'] devices: description: - The devices property. required: False choices: ['on','off'] exec: description: - The exec property. required: False choices: ['on','off'] jailed: description: - The jailed property. required: False choices: ['on','off'] logbias: description: - The logbias property. required: False choices: [latency,throughput] mountpoint: description: - The mountpoint property. required: False nbmand: description: - The nbmand property. required: False choices: ['on','off'] normalization: description: - The normalization property. required: False choices: [none,formC,formD,formKC,formKD] primarycache: description: - The primarycache property. required: False choices: [all,none,metadata] quota: description: - The quota property. required: False readonly: description: - The readonly property. required: False choices: ['on','off'] recordsize: description: - The recordsize property. required: False refquota: description: - The refquota property. 
    required: False
  refreservation:
    description:
      - The refreservation property.
    required: False
  reservation:
    description:
      - The reservation property.
    required: False
  secondarycache:
    description:
      - The secondarycache property.
    required: False
    choices: [all,none,metadata]
  setuid:
    description:
      - The setuid property.
    required: False
    choices: ['on','off']
  shareiscsi:
    description:
      - The shareiscsi property.
    required: False
    choices: ['on','off']
  sharenfs:
    description:
      - The sharenfs property.
    required: False
  sharesmb:
    description:
      - The sharesmb property.
    required: False
  snapdir:
    description:
      - The snapdir property.
    required: False
    choices: [hidden,visible]
  sync:
    description:
      - The sync property.
    required: False
    choices: ['on','off']
  utf8only:
    description:
      - The utf8only property.
    required: False
    choices: ['on','off']
  volsize:
    description:
      - The volsize property.
    required: False
  volblocksize:
    description:
      - The volblocksize property.
    required: False
  vscan:
    description:
      - The vscan property.
    required: False
    choices: ['on','off']
  xattr:
    description:
      - The xattr property.
    required: False
    choices: ['on','off']
  zoned:
    description:
      - The zoned property.
    required: False
    choices: ['on','off']
author: Johan Wiren
'''

EXAMPLES = '''
# Create a new file system called myfs in pool rpool
- zfs: name=rpool/myfs state=present

# Create a new volume called myvol in pool rpool.
- zfs: name=rpool/myvol state=present volsize=10M

# Create a snapshot of rpool/myfs file system.
- zfs: name=rpool/myfs@mysnapshot state=present

# Create a new file system called myfs2 with the snapdir property enabled (visible)
- zfs: name=rpool/myfs2 state=present snapdir=visible
'''

import os

class Zfs(object):

    def __init__(self, module, name, properties):
        self.module = module
        self.name = name
        self.properties = properties
        self.changed = False
        self.immutable_properties = [ 'casesensitivity', 'normalization', 'utf8only' ]

    def exists(self):
        cmd = [self.module.get_bin_path('zfs', True)]
        cmd.append('list')
        cmd.append('-t all')
        cmd.append(self.name)
        (rc, out, err) = self.module.run_command(' '.join(cmd))
        if rc == 0:
            return True
        else:
            return False

    def create(self):
        if self.module.check_mode:
            self.changed = True
            return
        properties = self.properties
        volsize = properties.pop('volsize', None)
        volblocksize = properties.pop('volblocksize', None)
        if "@" in self.name:
            action = 'snapshot'
        else:
            action = 'create'

        cmd = [self.module.get_bin_path('zfs', True)]
        cmd.append(action)
        if volblocksize:
            cmd.append('-b %s' % volblocksize)
        if properties:
            for prop, value in properties.iteritems():
                cmd.append('-o %s="%s"' % (prop, value))
        if volsize:
            cmd.append('-V')
            cmd.append(volsize)
        cmd.append(self.name)
        (rc, out, err) = self.module.run_command(' '.join(cmd))
        if rc == 0:
            self.changed = True
        else:
            self.module.fail_json(msg=err)

    def destroy(self):
        if self.module.check_mode:
            self.changed = True
            return
        cmd = [self.module.get_bin_path('zfs', True)]
        cmd.append('destroy')
        cmd.append(self.name)
        (rc, out, err) = self.module.run_command(' '.join(cmd))
        if rc == 0:
            self.changed = True
        else:
            self.module.fail_json(msg=err)

    def set_property(self, prop, value):
        if self.module.check_mode:
            self.changed = True
            return
        cmd = self.module.get_bin_path('zfs', True)
        args = [cmd, 'set', prop + '=' + value, self.name]
        (rc, out, err) = self.module.run_command(args)
        if rc == 0:
            self.changed = True
        else:
            self.module.fail_json(msg=err)

    def set_properties_if_changed(self):
        current_properties = self.get_current_properties()
        for prop, value in self.properties.iteritems():
            if current_properties[prop] != value:
                if prop in
self.immutable_properties: self.module.fail_json(msg='Cannot change property %s after creation.' % prop) else: self.set_property(prop, value) def get_current_properties(self): cmd = [self.module.get_bin_path('zfs', True)] cmd.append('get -H all') cmd.append(self.name) rc, out, err = self.module.run_command(' '.join(cmd)) properties = dict() for l in out.splitlines(): p, v = l.split('\t')[1:3] properties[p] = v return properties def run_command(self, cmd): progname = cmd[0] cmd[0] = module.get_bin_path(progname, True) return module.run_command(cmd) def main(): # FIXME: should use dict() constructor like other modules, required=False is default module = AnsibleModule( argument_spec = { 'name': {'required': True}, 'state': {'required': True, 'choices':['present', 'absent']}, 'aclinherit': {'required': False, 'choices':['discard', 'noallow', 'restricted', 'passthrough', 'passthrough-x']}, 'aclmode': {'required': False, 'choices':['discard', 'groupmask', 'passthrough']}, 'atime': {'required': False, 'choices':['on', 'off']}, 'canmount': {'required': False, 'choices':['on', 'off', 'noauto']}, 'casesensitivity': {'required': False, 'choices':['sensitive', 'insensitive', 'mixed']}, 'checksum': {'required': False, 'choices':['on', 'off', 'fletcher2', 'fletcher4', 'sha256']}, 'compression': {'required': False, 'choices':['on', 'off', 'lzjb', 'gzip', 'gzip-1', 'gzip-2', 'gzip-3', 'gzip-4', 'gzip-5', 'gzip-6', 'gzip-7', 'gzip-8', 'gzip-9', 'lz4', 'zle']}, 'copies': {'required': False, 'choices':['1', '2', '3']}, 'dedup': {'required': False, 'choices':['on', 'off']}, 'devices': {'required': False, 'choices':['on', 'off']}, 'exec': {'required': False, 'choices':['on', 'off']}, # Not supported #'groupquota': {'required': False}, 'jailed': {'required': False, 'choices':['on', 'off']}, 'logbias': {'required': False, 'choices':['latency', 'throughput']}, 'mountpoint': {'required': False}, 'nbmand': {'required': False, 'choices':['on', 'off']}, 'normalization': {'required': False, 'choices':['none', 'formC', 'formD', 'formKC', 'formKD']}, 'primarycache': {'required': False, 'choices':['all', 'none', 'metadata']}, 'quota': {'required': False}, 'readonly': {'required': False, 'choices':['on', 'off']}, 'recordsize': {'required': False}, 'refquota': {'required': False}, 'refreservation': {'required': False}, 'reservation': {'required': False}, 'secondarycache': {'required': False, 'choices':['all', 'none', 'metadata']}, 'setuid': {'required': False, 'choices':['on', 'off']}, 'shareiscsi': {'required': False, 'choices':['on', 'off']}, 'sharenfs': {'required': False}, 'sharesmb': {'required': False}, 'snapdir': {'required': False, 'choices':['hidden', 'visible']}, 'sync': {'required': False, 'choices':['on', 'off']}, # Not supported #'userquota': {'required': False}, 'utf8only': {'required': False, 'choices':['on', 'off']}, 'volsize': {'required': False}, 'volblocksize': {'required': False}, 'vscan': {'required': False, 'choices':['on', 'off']}, 'xattr': {'required': False, 'choices':['on', 'off']}, 'zoned': {'required': False, 'choices':['on', 'off']}, }, supports_check_mode=True ) state = module.params.pop('state') name = module.params.pop('name') # Get all valid zfs-properties properties = dict() for prop, value in module.params.iteritems(): if prop in ['CHECKMODE']: continue if value: properties[prop] = value result = {} result['name'] = name result['state'] = state zfs=Zfs(module, name, properties) if state == 'present': if zfs.exists(): zfs.set_properties_if_changed() else: zfs.create() elif state == 
'absent': if zfs.exists(): zfs.destroy() result.update(zfs.properties) result['changed'] = zfs.changed module.exit_json(**result) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/group0000664000000000000000000002733312316627017015722 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Stephen Fromm # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: group author: Stephen Fromm version_added: "0.0.2" short_description: Add or remove groups requirements: [ groupadd, groupdel, groupmod ] description: - Manage presence of groups on a host. options: name: required: true description: - Name of the group to manage. gid: required: false description: - Optional I(GID) to set for the group. state: required: false default: "present" choices: [ present, absent ] description: - Whether the group should be present or not on the remote host. system: required: false default: "no" choices: [ "yes", "no" ] description: - If I(yes), indicates that the group created is a system group. ''' EXAMPLES = ''' # Example group command from Ansible Playbooks - group: name=somegroup state=present ''' import grp import syslog import platform class Group(object): """ This is a generic Group manipulation class that is subclassed based on platform. A subclass may wish to override the following action methods:- - group_del() - group_add() - group_mod() All subclasses MUST define platform and distribution (which may be None). 
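    group_info() (below) returns the grp.struct_group fields as a list,
    so info[2] is the numeric GID that group_mod() compares against.
    For example (assuming a 'wheel' group exists):

        import grp
        info = list(grp.getgrnam('wheel'))
        # info == [gr_name, gr_passwd, gr_gid, gr_mem]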
""" platform = 'Generic' distribution = None GROUPFILE = '/etc/group' def __new__(cls, *args, **kwargs): return load_platform_subclass(Group, args, kwargs) def __init__(self, module): self.module = module self.state = module.params['state'] self.name = module.params['name'] self.gid = module.params['gid'] self.system = module.params['system'] self.syslogging = False def execute_command(self, cmd): if self.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'Command %s' % '|'.join(cmd)) return self.module.run_command(cmd) def group_del(self): cmd = [self.module.get_bin_path('groupdel', True), self.name] return self.execute_command(cmd) def group_add(self, **kwargs): cmd = [self.module.get_bin_path('groupadd', True)] for key in kwargs: if key == 'gid' and kwargs[key] is not None: cmd.append('-g') cmd.append(kwargs[key]) elif key == 'system' and kwargs[key] == True: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def group_mod(self, **kwargs): cmd = [self.module.get_bin_path('groupmod', True)] info = self.group_info() for key in kwargs: if key == 'gid': if kwargs[key] is not None and info[2] != int(kwargs[key]): cmd.append('-g') cmd.append(kwargs[key]) if len(cmd) == 1: return (None, '', '') if self.module.check_mode: return (True, '', '') cmd.append(self.name) return self.execute_command(cmd) def group_exists(self): try: if grp.getgrnam(self.name): return True except KeyError: return False def group_info(self): if not self.group_exists(): return False try: info = list(grp.getgrnam(self.name)) except KeyError: return False return info # =========================================== class SunOS(Group): """ This is a SunOS Group manipulation class. Solaris doesnt have the 'system' group concept. This overrides the following methods from the generic class:- - group_add() """ platform = 'SunOS' distribution = None GROUPFILE = '/etc/group' def group_add(self, **kwargs): cmd = [self.module.get_bin_path('groupadd', True)] for key in kwargs: if key == 'gid' and kwargs[key] is not None: cmd.append('-g') cmd.append(kwargs[key]) cmd.append(self.name) return self.execute_command(cmd) # =========================================== class AIX(Group): """ This is a AIX Group manipulation class. This overrides the following methods from the generic class:- - group_del() - group_add() - group_mod() """ platform = 'AIX' distribution = None GROUPFILE = '/etc/group' def group_del(self): cmd = [self.module.get_bin_path('rmgroup', True), self.name] return self.execute_command(cmd) def group_add(self, **kwargs): cmd = [self.module.get_bin_path('mkgroup', True)] for key in kwargs: if key == 'gid' and kwargs[key] is not None: cmd.append('id='+kwargs[key]) elif key == 'system' and kwargs[key] == True: cmd.append('-a') cmd.append(self.name) return self.execute_command(cmd) def group_mod(self, **kwargs): cmd = [self.module.get_bin_path('chgroup', True)] info = self.group_info() for key in kwargs: if key == 'gid': if kwargs[key] is not None and info[2] != int(kwargs[key]): cmd.append('id='+kwargs[key]) if len(cmd) == 1: return (None, '', '') if self.module.check_mode: return (True, '', '') cmd.append(self.name) return self.execute_command(cmd) # =========================================== class FreeBsdGroup(Group): """ This is a FreeBSD Group manipulation class. 
This overrides the following methods from the generic class:- - group_del() - group_add() - group_mod() """ platform = 'FreeBSD' distribution = None GROUPFILE = '/etc/group' def group_del(self): cmd = [self.module.get_bin_path('pw', True), 'groupdel', self.name] return self.execute_command(cmd) def group_add(self, **kwargs): cmd = [self.module.get_bin_path('pw', True), 'groupadd', self.name] if self.gid is not None: cmd.append('-g %d' % int(self.gid)) return self.execute_command(cmd) def group_mod(self, **kwargs): cmd = [self.module.get_bin_path('pw', True), 'groupmod', self.name] info = self.group_info() cmd_len = len(cmd) if self.gid is not None and int(self.gid) != info[2]: cmd.append('-g %d' % int(self.gid)) # modify the group if cmd will do anything if cmd_len != len(cmd): if self.module.check_mode: return (True, '', '') return self.execute_command(cmd) return (None, '', '') # =========================================== class OpenBsdGroup(Group): """ This is a OpenBSD Group manipulation class. This overrides the following methods from the generic class:- - group_del() - group_add() - group_mod() """ platform = 'OpenBSD' distribution = None GROUPFILE = '/etc/group' def group_del(self): cmd = [self.module.get_bin_path('groupdel', True), self.name] return self.execute_command(cmd) def group_add(self, **kwargs): cmd = [self.module.get_bin_path('groupadd', True)] if self.gid is not None: cmd.append('-g') cmd.append('%d' % int(self.gid)) cmd.append(self.name) return self.execute_command(cmd) def group_mod(self, **kwargs): cmd = [self.module.get_bin_path('groupmod', True)] info = self.group_info() cmd_len = len(cmd) if self.gid is not None and int(self.gid) != info[2]: cmd.append('-g') cmd.append('%d' % int(self.gid)) if len(cmd) == 1: return (None, '', '') if self.module.check_mode: return (True, '', '') cmd.append(self.name) return self.execute_command(cmd) # =========================================== class NetBsdGroup(Group): """ This is a NetBSD Group manipulation class. 
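    Illustrative note (hypothetical values): like the OpenBSD variant above,
    this builds standard groupadd/groupmod/groupdel command lines, e.g.
    "groupadd -g 1050 somegroup".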
This overrides the following methods from the generic class:-
      - group_del()
      - group_add()
      - group_mod()
    """

    platform = 'NetBSD'
    distribution = None
    GROUPFILE = '/etc/group'

    def group_del(self):
        cmd = [self.module.get_bin_path('groupdel', True), self.name]
        return self.execute_command(cmd)

    def group_add(self, **kwargs):
        cmd = [self.module.get_bin_path('groupadd', True)]
        if self.gid is not None:
            cmd.append('-g')
            cmd.append('%d' % int(self.gid))
        cmd.append(self.name)
        return self.execute_command(cmd)

    def group_mod(self, **kwargs):
        cmd = [self.module.get_bin_path('groupmod', True)]
        info = self.group_info()
        if self.gid is not None and int(self.gid) != info[2]:
            cmd.append('-g')
            cmd.append('%d' % int(self.gid))
        if len(cmd) == 1:
            return (None, '', '')
        if self.module.check_mode:
            return (True, '', '')
        cmd.append(self.name)
        return self.execute_command(cmd)

# ===========================================

def main():
    module = AnsibleModule(
        argument_spec = dict(
            state=dict(default='present', choices=['present', 'absent'], type='str'),
            name=dict(required=True, type='str'),
            gid=dict(default=None, type='str'),
            system=dict(default=False, type='bool'),
        ),
        supports_check_mode=True
    )

    group = Group(module)

    if group.syslogging:
        syslog.openlog('ansible-%s' % os.path.basename(__file__))
        syslog.syslog(syslog.LOG_NOTICE, 'Group instantiated - platform %s' % group.platform)
        # bugfix: this previously referenced an undefined name 'user'
        if group.distribution:
            syslog.syslog(syslog.LOG_NOTICE, 'Group instantiated - distribution %s' % group.distribution)

    rc = None
    out = ''
    err = ''
    result = {}
    result['name'] = group.name
    result['state'] = group.state

    if group.state == 'absent':
        if group.group_exists():
            if module.check_mode:
                module.exit_json(changed=True)
            (rc, out, err) = group.group_del()
            if rc != 0:
                module.fail_json(name=group.name, msg=err)
    elif group.state == 'present':
        if not group.group_exists():
            if module.check_mode:
                module.exit_json(changed=True)
            (rc, out, err) = group.group_add(gid=group.gid, system=group.system)
        else:
            (rc, out, err) = group.group_mod(gid=group.gid)

        if rc is not None and rc != 0:
            module.fail_json(name=group.name, msg=err)

    if rc is None:
        result['changed'] = False
    else:
        result['changed'] = True
    if out:
        result['stdout'] = out
    if err:
        result['stderr'] = err

    if group.group_exists():
        info = group.group_info()
        result['system'] = group.system
        result['gid'] = info[2]

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/mount0000664000000000000000000002210112316627017015716 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Red Hat, inc
# Written by Seth Vidal
# based on the mount modules from salt and puppet
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: mount
short_description: Control active and configured mount points
description:
  - This module controls active and configured mount points in C(/etc/fstab).
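  - Entries are matched by the mount point given in I(name); an existing
    line for that mount point is rewritten in place when any field differs,
    otherwise a new line is appended.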
version_added: "0.6" options: name: description: - "path to the mount point, eg: C(/mnt/files)" required: true default: null aliases: [] src: description: - device to be mounted on I(name). required: true default: null fstype: description: - file-system type required: true default: null opts: description: - mount options (see fstab(8)) required: false default: null dump: description: - dump (see fstab(8)) required: false default: null passno: description: - passno (see fstab(8)) required: false default: null state: description: - If C(mounted) or C(unmounted), the device will be actively mounted or unmounted as well as just configured in I(fstab). C(absent) and C(present) only deal with I(fstab). C(mounted) will also automatically create the mount point directory if it doesn't exist. If C(absent) changes anything, it will remove the mount point directory. required: true choices: [ "present", "absent", "mounted", "unmounted" ] default: null notes: [] requirements: [] author: Seth Vidal ''' EXAMPLES = ''' # Mount DVD read-only - mount: name=/mnt/dvd src=/dev/sr0 fstype=iso9660 opts=ro state=present # Mount up device by label - mount: name=/srv/disk src='LABEL=SOME_LABEL' state=present # Mount up device by UUID - mount: name=/home src='UUID=b3e48f45-f933-4c8e-a700-22a159ec9077' opts=noatime state=present ''' def write_fstab(lines, dest): fs_w = open(dest, 'w') for l in lines: fs_w.write(l) fs_w.flush() fs_w.close() def set_mount(**kwargs): """ set/change a mount point location in fstab """ # kwargs: name, src, fstype, opts, dump, passno, state, fstab=/etc/fstab args = dict( opts = 'defaults', dump = '0', passno = '0', fstab = '/etc/fstab' ) args.update(kwargs) new_line = '%(src)s %(name)s %(fstype)s %(opts)s %(dump)s %(passno)s\n' to_write = [] exists = False changed = False for line in open(args['fstab'], 'r').readlines(): if not line.strip(): to_write.append(line) continue if line.strip().startswith('#'): to_write.append(line) continue if len(line.split()) != 6: # not sure what this is or why it is here # but it is not our fault so leave it be to_write.append(line) continue ld = {} ld['src'], ld['name'], ld['fstype'], ld['opts'], ld['dump'], ld['passno'] = line.split() if ld['name'] != args['name']: to_write.append(line) continue # it exists - now see if what we have is different exists = True for t in ('src', 'fstype','opts', 'dump', 'passno'): if ld[t] != args[t]: changed = True ld[t] = args[t] if changed: to_write.append(new_line % ld) else: to_write.append(line) if not exists: to_write.append(new_line % args) changed = True if changed: write_fstab(to_write, args['fstab']) return (args['name'], changed) def unset_mount(**kwargs): """ remove a mount point from fstab """ # kwargs: name, src, fstype, opts, dump, passno, state, fstab=/etc/fstab args = dict( opts = 'default', dump = '0', passno = '0', fstab = '/etc/fstab' ) args.update(kwargs) to_write = [] changed = False for line in open(args['fstab'], 'r').readlines(): if not line.strip(): to_write.append(line) continue if line.strip().startswith('#'): to_write.append(line) continue if len(line.split()) != 6: # not sure what this is or why it is here # but it is not our fault so leave it be to_write.append(line) continue ld = {} ld['src'], ld['name'], ld['fstype'], ld['opts'], ld['dump'], ld['passno'] = line.split() if ld['name'] != args['name']: to_write.append(line) continue # if we got here we found a match - continue and mark changed changed = True if changed: write_fstab(to_write, args['fstab']) return (args['name'], changed) def 
mount(module, **kwargs): """ mount up a path or remount if needed """ mount_bin = module.get_bin_path('mount') name = kwargs['name'] if os.path.ismount(name): cmd = [ mount_bin , '-o', 'remount', name ] else: cmd = [ mount_bin, name ] rc, out, err = module.run_command(cmd) if rc == 0: return 0, '' else: return rc, out+err def umount(module, **kwargs): """ unmount a path """ umount_bin = module.get_bin_path('umount') name = kwargs['name'] cmd = [umount_bin, name] rc, out, err = module.run_command(cmd) if rc == 0: return 0, '' else: return rc, out+err def main(): module = AnsibleModule( argument_spec = dict( state = dict(required=True, choices=['present', 'absent', 'mounted', 'unmounted']), name = dict(required=True), opts = dict(default=None), passno = dict(default=None), dump = dict(default=None), src = dict(required=True), fstype = dict(required=True), fstab = dict(default=None) ) ) changed = False rc = 0 args = { 'name': module.params['name'], 'src': module.params['src'], 'fstype': module.params['fstype'] } if module.params['passno'] is not None: args['passno'] = module.params['passno'] if module.params['opts'] is not None: args['opts'] = module.params['opts'] if ' ' in args['opts']: module.fail_json(msg="unexpected space in 'opts' parameter") if module.params['dump'] is not None: args['dump'] = module.params['dump'] if module.params['fstab'] is not None: args['fstab'] = module.params['fstab'] # absent == remove from fstab and unmounted # unmounted == do not change fstab state, but unmount # present == add to fstab, do not change mount state # mounted == add to fstab if not there and make sure it is mounted, if it has changed in fstab then remount it state = module.params['state'] name = module.params['name'] if state == 'absent': name, changed = unset_mount(**args) if changed: if os.path.ismount(name): res,msg = umount(module, **args) if res: module.fail_json(msg="Error unmounting %s: %s" % (name, msg)) if os.path.exists(name): try: os.rmdir(name) except (OSError, IOError), e: module.fail_json(msg="Error rmdir %s: %s" % (name, str(e))) module.exit_json(changed=changed, **args) if state == 'unmounted': if os.path.ismount(name): res,msg = umount(module, **args) if res: module.fail_json(msg="Error unmounting %s: %s" % (name, msg)) changed = True module.exit_json(changed=changed, **args) if state in ['mounted', 'present']: name, changed = set_mount(**args) if state == 'mounted': if not os.path.exists(name): try: os.makedirs(name) except (OSError, IOError), e: module.fail_json(msg="Error making dir %s: %s" % (name, str(e))) res = 0 if os.path.ismount(name): if changed: res,msg = mount(module, **args) else: changed = True res,msg = mount(module, **args) if res: module.fail_json(msg="Error mounting %s: %s" % (name, msg)) module.exit_json(changed=changed, **args) module.fail_json(msg='Unexpected position reached') sys.exit(0) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/cron0000664000000000000000000004036612316627017015530 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # # (c) 2012, Dane Summers # (c) 2013, Mike Grozak # (c) 2013, Patrick Callahan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#
# Cron Plugin: The goal of this plugin is to provide an idempotent method for
# setting up cron jobs on a host. The script will play well with other manually
# entered crons. Each cron job entered will be preceded with a comment
# describing the job so that it can be found later, which is required to be
# present in order for this plugin to find/modify the job.
#
# This module is based on python-crontab by Martin Owens.
#

DOCUMENTATION = """
---
module: cron
short_description: Manage cron.d and crontab entries.
description:
  - Use this module to manage crontab entries. This module allows you to
    create, update, or delete named crontab entries.
  - 'The module writes one comment line C("#Ansible: <name>") above each
    crontab entry it manages, where <name> is the I(name) passed to the
    module; this marker is used by future ansible/module calls to find and
    check the state of the job.'
version_added: "0.9"
options:
  name:
    description:
      - Description of a crontab entry.
    required: false
    default: null
  user:
    description:
      - The specific user whose crontab should be modified.
    required: false
    default: root
  job:
    description:
      - The command to execute. Required if state=present.
    required: false
    default: null
  state:
    description:
      - Whether to ensure the job is present or absent.
    required: false
    default: present
    choices: [ "present", "absent" ]
  cron_file:
    description:
      - If specified, uses this file in cron.d instead of an individual user's crontab.
    required: false
    default: null
  backup:
    description:
      - If set, create a backup of the crontab before it is modified.
        The location of the backup is returned in the C(backup) variable by this module.
    required: false
    default: false
  minute:
    description:
      - Minute when the job should run ( 0-59, *, */2, etc )
    required: false
    default: "*"
  hour:
    description:
      - Hour when the job should run ( 0-23, *, */2, etc )
    required: false
    default: "*"
  day:
    description:
      - Day of the month the job should run ( 1-31, *, */2, etc )
    required: false
    default: "*"
    aliases: [ "dom" ]
  month:
    description:
      - Month of the year the job should run ( 1-12, *, */2, etc )
    required: false
    default: "*"
  weekday:
    description:
      - Day of the week that the job should run ( 0-7 for Sunday - Saturday, *, etc )
    required: false
    default: "*"
    aliases: [ "dow" ]
  reboot:
    description:
      - If the job should be run at reboot. This option is deprecated. Users should use special_time.
    version_added: "1.0"
    required: false
    default: "no"
    choices: [ "yes", "no" ]
  special_time:
    description:
      - Special time specification nickname.
    version_added: "1.3"
    required: false
    default: null
    choices: [ "reboot", "yearly", "annually", "monthly", "weekly", "daily", "hourly" ]
requirements:
  - cron
author: Dane Summers
updates: [ 'Mike Grozak', 'Patrick Callahan' ]
"""

EXAMPLES = '''
# Ensure a job that runs at 2 and 5 exists.
# Creates an entry like "* 5,2 * * * ls -alh > /dev/null"
- cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null"

# Ensure an old job is no longer present.
Removes any job that is prefixed # by "#Ansible: an old job" from the crontab - cron: name="an old job" state=absent # Creates an entry like "@reboot /some/job.sh" - cron: name="a job for reboot" special_time=reboot job="/some/job.sh" # Creates a cron file under /etc/cron.d - cron: name="yum autoupdate" weekday="2" minute=0 hour=12 user="root" job="YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate" cron_file=ansible_yum-autoupdate # Removes a cron file from under /etc/cron.d - cron: cron_file=ansible_yum-autoupdate state=absent ''' import os import re import tempfile import platform import pipes CRONCMD = "/usr/bin/crontab" class CronTabError(Exception): pass class CronTab(object): """ CronTab object to write time based crontab file user - the user of the crontab (defaults to root) cron_file - a cron file under /etc/cron.d """ def __init__(self, module, user=None, cron_file=None): self.module = module self.user = user self.root = (os.getuid() == 0) self.lines = None self.ansible = "#Ansible: " # select whether we dump additional debug info through syslog self.syslogging = False if cron_file: self.cron_file = '/etc/cron.d/%s' % cron_file else: self.cron_file = None self.read() def read(self): # Read in the crontab from the system self.lines = [] if self.cron_file: # read the cronfile try: f = open(self.cron_file, 'r') self.lines = f.read().splitlines() f.close() except IOError, e: # cron file does not exist return except: raise CronTabError("Unexpected error:", sys.exc_info()[0]) else: # using safely quoted shell for now, but this really should be two non-shell calls instead. FIXME (rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True) if rc != 0 and rc != 1: # 1 can mean that there are no jobs. raise CronTabError("Unable to read crontab") lines = out.splitlines() count = 0 for l in lines: if count > 2 or (not re.match( r'# DO NOT EDIT THIS FILE - edit the master and reinstall.', l) and not re.match( r'# \(/tmp/.*installed on.*\)', l) and not re.match( r'# \(.*version.*\)', l)): self.lines.append(l) count += 1 def log_message(self, message): if self.syslogging: syslog.syslog(syslog.LOG_NOTICE, 'ansible: "%s"' % message) def is_empty(self): if len(self.lines) == 0: return True else: return False def write(self, backup_file=None): """ Write the crontab to the system. Saves all information. """ if backup_file: fileh = open(backup_file, 'w') elif self.cron_file: fileh = open(self.cron_file, 'w') else: filed, path = tempfile.mkstemp(prefix='crontab') fileh = os.fdopen(filed, 'w') fileh.write(self.render()) fileh.close() # return if making a backup if backup_file: return # Add the entire crontab back to the user crontab if not self.cron_file: # quoting shell args for now but really this should be two non-shell calls. 
FIXME (rc, out, err) = self.module.run_command(self._write_execute(path), use_unsafe_shell=True) os.unlink(path) if rc != 0: self.module.fail_json(msg=err) def add_job(self, name, job): # Add the comment self.lines.append("%s%s" % (self.ansible, name)) # Add the job self.lines.append("%s" % (job)) def update_job(self, name, job): return self._update_job(name, job, self.do_add_job) def do_add_job(self, lines, comment, job): lines.append(comment) lines.append("%s" % (job)) def remove_job(self, name): return self._update_job(name, "", self.do_remove_job) def do_remove_job(self, lines, comment, job): return None def remove_job_file(self): try: os.unlink(self.cron_file) return True except OSError, e: # cron file does not exist return False except: raise CronTabError("Unexpected error:", sys.exc_info()[0]) def find_job(self, name): comment = None for l in self.lines: if comment is not None: if comment == name: return [comment, l] else: comment = None elif re.match( r'%s' % self.ansible, l): comment = re.sub( r'%s' % self.ansible, '', l) return [] def get_cron_job(self,minute,hour,day,month,weekday,job,special): if special: if self.cron_file: return "@%s %s %s" % (special, self.user, job) else: return "@%s %s" % (special, job) else: if self.cron_file: return "%s %s %s %s %s %s %s" % (minute,hour,day,month,weekday,self.user,job) else: return "%s %s %s %s %s %s" % (minute,hour,day,month,weekday,job) return None def get_jobnames(self): jobnames = [] for l in self.lines: if re.match( r'%s' % self.ansible, l): jobnames.append(re.sub( r'%s' % self.ansible, '', l)) return jobnames def _update_job(self, name, job, addlinesfunction): ansiblename = "%s%s" % (self.ansible, name) newlines = [] comment = None for l in self.lines: if comment is not None: addlinesfunction(newlines, comment, job) comment = None elif l == ansiblename: comment = l else: newlines.append(l) self.lines = newlines if len(newlines) == 0: return True else: return False # TODO add some more error testing def render(self): """ Render this crontab as it would be in the crontab. 
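        For illustration, a job managed by this module renders as a marker
        comment followed by the job line, e.g. (hypothetical job):

            #Ansible: check dirs
            * 5,2 * * * ls -alh > /dev/null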
""" crons = [] for cron in self.lines: crons.append(cron) result = '\n'.join(crons) if result and result[-1] not in ['\n', '\r']: result += '\n' return result def _read_user_execute(self): """ Returns the command line for reading a crontab """ user = '' if self.user: if platform.system() == 'SunOS': return "su %s -c '%s -l'" % (pipes.quote(self.user), pipes.quote(CRONCMD)) else: user = '-u %s' % pipes.quote(self.user) return "%s %s %s" % (CRONCMD , user, '-l') def _write_execute(self, path): """ Return the command line for writing a crontab """ user = '' if self.user: if platform.system() == 'SunOS': return "chown %s %s ; su '%s' -c '%s %s'" % (pipes.quote(self.user), pipes.quote(path), pipes.quote(self.user), CRONCMD, pipes.quote(path)) else: user = '-u %s' % pipes.quote(self.user) return "%s %s %s" % (CRONCMD , user, pipes.quote(path)) #================================================== def main(): # The following example playbooks: # # - cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null" # # - name: do the job # cron: name="do the job" hour="5,2" job="/some/dir/job.sh" # # - name: no job # cron: name="an old job" state=absent # # Would produce: # # Ansible: check dirs # * * 5,2 * * ls -alh > /dev/null # # Ansible: do the job # * * 5,2 * * /some/dir/job.sh module = AnsibleModule( argument_spec = dict( name=dict(required=False), user=dict(required=False), job=dict(required=False), cron_file=dict(required=False), state=dict(default='present', choices=['present', 'absent']), backup=dict(default=False, type='bool'), minute=dict(default='*'), hour=dict(default='*'), day=dict(aliases=['dom'], default='*'), month=dict(default='*'), weekday=dict(aliases=['dow'], default='*'), reboot=dict(required=False, default=False, type='bool'), special_time=dict(required=False, default=None, choices=["reboot", "yearly", "annually", "monthly", "weekly", "daily", "hourly"], type='str') ), supports_check_mode = False, ) name = module.params['name'] user = module.params['user'] job = module.params['job'] cron_file = module.params['cron_file'] state = module.params['state'] backup = module.params['backup'] minute = module.params['minute'] hour = module.params['hour'] day = module.params['day'] month = module.params['month'] weekday = module.params['weekday'] reboot = module.params['reboot'] special_time = module.params['special_time'] do_install = state == 'present' changed = False res_args = dict() # Ensure all files generated are only writable by the owning user. Primarily relevant for the cron_file option. os.umask(022) crontab = CronTab(module, user, cron_file) if crontab.syslogging: syslog.openlog('ansible-%s' % os.path.basename(__file__)) syslog.syslog(syslog.LOG_NOTICE, 'cron instantiated - name: "%s"' % name) # --- user input validation --- if (special_time or reboot) and \ (True in [(x != '*') for x in [minute, hour, day, month, weekday]]): module.fail_json(msg="You must specify time and date fields or special time.") if cron_file and do_install: if not user: module.fail_json(msg="To use file=... parameter you must specify user=... 
as well") if reboot and special_time: module.fail_json(msg="reboot and special_time are mutually exclusive") if name is None and do_install: module.fail_json(msg="You must specify 'name' to install a new cron job") if job is None and do_install: module.fail_json(msg="You must specify 'job' to install a new cron job") if job and name is None and not do_install: module.fail_json(msg="You must specify 'name' to remove a cron job") if reboot: if special_time: module.fail_json(msg="reboot and special_time are mutually exclusive") else: special_time = "reboot" # if requested make a backup before making a change if backup: (backuph, backup_file) = tempfile.mkstemp(prefix='crontab') crontab.write(backup_file) if crontab.cron_file and not name and not do_install: changed = crontab.remove_job_file() module.exit_json(changed=changed,cron_file=cron_file,state=state) job = crontab.get_cron_job(minute, hour, day, month, weekday, job, special_time) old_job = crontab.find_job(name) if do_install: if len(old_job) == 0: crontab.add_job(name, job) changed = True if len(old_job) > 0 and old_job[1] != job: crontab.update_job(name, job) changed = True else: if len(old_job) > 0: crontab.remove_job(name) changed = True res_args = dict( jobs = crontab.get_jobnames(), changed = changed ) if changed: crontab.write() # retain the backup only if crontab or cron file have changed if backup: if changed: res_args['backup_file'] = backup_file else: os.unlink(backup_file) if cron_file: res_args['cron_file'] = cron_file module.exit_json(**res_args) # --- should never get here module.exit_json(msg="Unable to execute cron task.") # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/firewalld0000664000000000000000000003104112316627017016526 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Adam Miller (maxamillion@fedoraproject.org) # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: firewalld short_description: Manage arbitrary ports/services with firewalld description: - This module allows for addition or deletion of services and ports either tcp or udp in either running or permanent firewalld rules version_added: "1.4" options: service: description: - "Name of a service to add/remove to/from firewalld - service must be listed in /etc/services" required: false default: null port: description: - "Name of a port to add/remove to/from firewalld must be in the form PORT/PROTOCOL" required: false default: null rich_rule: description: - "Rich rule to add/remove to/from firewalld" required: false default: null zone: description: - 'The firewalld zone to add/remove to/from (NOTE: default zone can be configured per system but "public" is default from upstream. Available choices can be extended based on per-system configs, listed here are "out of the box" defaults).' 
required: false default: system-default(public) choices: [ "work", "drop", "internal", "external", "trusted", "home", "dmz", "public", "block"] permanent: description: - "Should this configuration be in the running firewalld configuration or persist across reboots" required: true default: true state: description: - "Should this port accept(enabled) or reject(disabled) connections" required: true default: enabled timeout: description: - "The amount of time the rule should be in effect for when non-permanent" required: false default: 0 notes: - Not tested on any debian based system requirements: [ firewalld >= 0.2.11 ] author: Adam Miller ''' EXAMPLES = ''' - firewalld: service=https permanent=true state=enabled - firewalld: port=8081/tcp permanent=true state=disabled - firewalld: zone=dmz service=http permanent=true state=enabled - firewalld: rich_rule='rule service name="ftp" audit limit value="1/m" accept' permanent=true state=enabled ''' import os import re import sys try: import firewall.config FW_VERSION = firewall.config.VERSION from firewall.client import FirewallClient fw = FirewallClient() except ImportError: print "fail=True msg='firewalld required for this module'" sys.exit(1) ################ # port handling # def get_port_enabled(zone, port_proto): if port_proto in fw.getPorts(zone): return True else: return False def set_port_enabled(zone, port, protocol, timeout): fw.addPort(zone, port, protocol, timeout) def set_port_disabled(zone, port, protocol): fw.removePort(zone, port, protocol) def get_port_enabled_permanent(zone, port_proto): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() if tuple(port_proto) in fw_settings.getPorts(): return True else: return False def set_port_enabled_permanent(zone, port, protocol): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() fw_settings.addPort(port, protocol) fw_zone.update(fw_settings) def set_port_disabled_permanent(zone, port, protocol): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() fw_settings.removePort(port, protocol) fw_zone.update(fw_settings) #################### # service handling # def get_service_enabled(zone, service): if service in fw.getServices(zone): return True else: return False def set_service_enabled(zone, service, timeout): fw.addService(zone, service, timeout) def set_service_disabled(zone, service): fw.removeService(zone, service) def get_service_enabled_permanent(zone, service): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() if service in fw_settings.getServices(): return True else: return False def set_service_enabled_permanent(zone, service): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() fw_settings.addService(service) fw_zone.update(fw_settings) def set_service_disabled_permanent(zone, service): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() fw_settings.removeService(service) fw_zone.update(fw_settings) #################### # rich rule handling # def get_rich_rule_enabled(zone, rule): if rule in fw.getRichRules(zone): return True else: return False def set_rich_rule_enabled(zone, rule, timeout): fw.addRichRule(zone, rule, timeout) def set_rich_rule_disabled(zone, rule): fw.removeRichRule(zone, rule) def get_rich_rule_enabled_permanent(zone, rule): fw_zone = fw.config().getZoneByName(zone) fw_settings = fw_zone.getSettings() if rule in fw_settings.getRichRules(): return True else: return False def 
set_rich_rule_enabled_permanent(zone, rule):
    fw_zone = fw.config().getZoneByName(zone)
    fw_settings = fw_zone.getSettings()
    fw_settings.addRichRule(rule)
    fw_zone.update(fw_settings)

def set_rich_rule_disabled_permanent(zone, rule):
    fw_zone = fw.config().getZoneByName(zone)
    fw_settings = fw_zone.getSettings()
    fw_settings.removeRichRule(rule)
    fw_zone.update(fw_settings)

def main():

    module = AnsibleModule(
        argument_spec = dict(
            service=dict(required=False,default=None),
            port=dict(required=False,default=None),
            rich_rule=dict(required=False,default=None),
            zone=dict(required=False,default=None),
            permanent=dict(type='bool',required=True),
            state=dict(choices=['enabled', 'disabled'], required=True),
            timeout=dict(type='int',required=False,default=0),
        ),
        supports_check_mode=True
    )

    ## Pre-run version checking
    if FW_VERSION < "0.2.11":
        module.fail_json(msg='unsupported version of firewalld, requires >= 0.2.11')

    ## Global Vars
    changed=False
    msgs = []
    service = module.params['service']
    rich_rule = module.params['rich_rule']

    if module.params['port'] != None:
        # guard against a missing "/protocol" suffix instead of crashing on unpack
        port_proto = module.params['port'].split('/')
        if len(port_proto) != 2 or not port_proto[1]:
            module.fail_json(msg='improper port format (missing protocol?)')
        port, protocol = port_proto
    else:
        port = None

    if module.params['zone'] != None:
        zone = module.params['zone']
    else:
        zone = fw.getDefaultZone()

    permanent = module.params['permanent']
    desired_state = module.params['state']
    timeout = module.params['timeout']

    ## Check for firewalld running
    try:
        if fw.connected == False:
            module.fail_json(msg='firewalld service must be running')
    except AttributeError:
        module.fail_json(msg="firewalld connection can't be established,\
                version likely too old. Requires firewalld >= 0.2.11")

    modification_count = 0
    if service != None:
        modification_count += 1
    if port != None:
        modification_count += 1
    if rich_rule != None:
        modification_count += 1

    if modification_count > 1:
        module.fail_json(msg='can only operate on one of port, service or rich_rule at a time')

    if service != None:
        if permanent:
            is_enabled = get_service_enabled_permanent(zone, service)
            msgs.append('Permanent operation')

            if desired_state == "enabled":
                if is_enabled == False:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_service_enabled_permanent(zone, service)
                    changed=True
            elif desired_state == "disabled":
                if is_enabled == True:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_service_disabled_permanent(zone, service)
                    changed=True
        else:
            is_enabled = get_service_enabled(zone, service)
            msgs.append('Non-permanent operation')

            if desired_state == "enabled":
                if is_enabled == False:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_service_enabled(zone, service, timeout)
                    changed=True
            elif desired_state == "disabled":
                if is_enabled == True:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_service_disabled(zone, service)
                    changed=True

        if changed == True:
            msgs.append("Changed service %s to %s" % (service, desired_state))

    if port != None:
        if permanent:
            is_enabled = get_port_enabled_permanent(zone, [port, protocol])
            msgs.append('Permanent operation')

            if desired_state == "enabled":
                if is_enabled == False:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_port_enabled_permanent(zone, port, protocol)
                    changed=True
            elif desired_state == "disabled":
                if is_enabled == True:
                    if module.check_mode:
                        module.exit_json(changed=True)

                    set_port_disabled_permanent(zone, port, protocol)
                    changed=True
        else:
            is_enabled = get_port_enabled(zone, [port,protocol])
            msgs.append('Non-permanent operation')

            if desired_state == "enabled":
                if is_enabled == False:
                    if
module.check_mode: module.exit_json(changed=True) set_port_enabled(zone, port, protocol, timeout) changed=True elif desired_state == "disabled": if is_enabled == True: if module.check_mode: module.exit_json(changed=True) set_port_disabled(zone, port, protocol) changed=True if changed == True: msgs.append("Changed port %s to %s" % ("%s/%s" % (port, protocol), \ desired_state)) if rich_rule != None: if permanent: is_enabled = get_rich_rule_enabled_permanent(zone, rich_rule) msgs.append('Permanent operation') if desired_state == "enabled": if is_enabled == False: if module.check_mode: module.exit_json(changed=True) set_rich_rule_enabled_permanent(zone, rich_rule) changed=True elif desired_state == "disabled": if is_enabled == True: if module.check_mode: module.exit_json(changed=True) set_rich_rule_disabled_permanent(zone, rich_rule) changed=True else: is_enabled = get_rich_rule_enabled(zone, rich_rule) msgs.append('Non-permanent operation') if desired_state == "enabled": if is_enabled == False: if module.check_mode: module.exit_json(changed=True) set_rich_rule_enabled(zone, rich_rule, timeout) changed=True elif desired_state == "disabled": if is_enabled == True: if module.check_mode: module.exit_json(changed=True) set_rich_rule_disabled(zone, rich_rule) changed=True if changed == True: msgs.append("Changed rich_rule %s to %s" % (rich_rule, desired_state)) module.exit_json(changed=changed, msg=', '.join(msgs)) ################################################# # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/facter0000664000000000000000000000303712316627017016025 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # DOCUMENTATION = ''' --- module: facter short_description: Runs the discovery program I(facter) on the remote system description: - Runs the I(facter) discovery program (U(https://github.com/puppetlabs/facter)) on the remote system, returning JSON data that can be useful for inventory purposes. version_added: "0.2" options: {} notes: [] requirements: [ "facter", "ruby-json" ] author: Michael DeHaan ''' EXAMPLES = ''' # Example command-line invocation ansible www.example.net -m facter ''' def main(): module = AnsibleModule( argument_spec = dict() ) cmd = ["/usr/bin/env", "facter", "--json"] rc, out, err = module.run_command(cmd, check_rc=True) module.exit_json(**json.loads(out)) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/selinux0000664000000000000000000001447412316627017016257 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Derek Carter # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: selinux short_description: Change policy and state of SELinux description: - Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but will let you know when it is required. version_added: "0.7" options: policy: description: - "name of the SELinux policy to use (example: C(targeted)) will be required if state is not C(disabled)" required: false default: null state: description: - The SELinux mode required: true default: null choices: [ "enforcing", "permissive", "disabled" ] conf: description: - path to the SELinux configuration file, if non-standard required: false default: "/etc/selinux/config" notes: - Not tested on any debian based system requirements: [ libselinux-python ] author: Derek Carter ''' EXAMPLES = ''' - selinux: policy=targeted state=enforcing - selinux: policy=targeted state=permissive - selinux: state=disabled ''' import os import re import sys try: import selinux except ImportError: print "failed=True msg='libselinux-python required for this module'" sys.exit(1) # getter subroutines def get_config_state(configfile): myfile = open(configfile, "r") lines = myfile.readlines() myfile.close() for line in lines: stateline = re.match('^SELINUX=.*$', line) if (stateline): return(line.split('=')[1].strip()) def get_config_policy(configfile): myfile = open(configfile, "r") lines = myfile.readlines() myfile.close() for line in lines: stateline = re.match('^SELINUXTYPE=.*$', line) if (stateline): return(line.split('=')[1].strip()) # setter subroutines def set_config_state(state, configfile): #SELINUX=permissive # edit config file with state value stateline='SELINUX=%s' % state myfile = open(configfile, "r") lines = myfile.readlines() myfile.close() myfile = open(configfile, "w") for line in lines: myfile.write(re.sub(r'^SELINUX=.*', stateline, line)) myfile.close() def set_state(state): if (state == 'enforcing'): selinux.security_setenforce(1) elif (state == 'permissive'): selinux.security_setenforce(0) elif (state == 'disabled'): pass else: msg = 'trying to set invalid runtime state %s' % state module.fail_json(msg=msg) def set_config_policy(policy, configfile): # edit config file with state value #SELINUXTYPE=targeted policyline='SELINUXTYPE=%s' % policy myfile = open(configfile, "r") lines = myfile.readlines() myfile.close() myfile = open(configfile, "w") for line in lines: myfile.write(re.sub(r'^SELINUXTYPE=.*', policyline, line)) myfile.close() def main(): module = AnsibleModule( argument_spec = dict( policy=dict(required=False), state=dict(choices=['enforcing', 'permissive', 'disabled'], required=True), configfile=dict(aliases=['conf','file'], default='/etc/selinux/config') ), supports_check_mode=True ) # global vars changed=False msgs = [] configfile = module.params['configfile'] policy = module.params['policy'] state = module.params['state'] runtime_enabled = selinux.is_selinux_enabled() runtime_policy = selinux.selinux_getpolicytype()[1] runtime_state = 'disabled' if (runtime_enabled): # enabled means 'enforcing' or 'permissive' if (selinux.security_getenforce()): runtime_state = 'enforcing' else: runtime_state = 'permissive' config_policy = 
get_config_policy(configfile) config_state = get_config_state(configfile) # check to see if policy is set if state is not 'disabled' if (state != 'disabled'): if not policy: module.fail_json(msg='policy is required if state is not \'disabled\'') else: if not policy: policy = config_policy # check changed values and run changes if (policy != runtime_policy): if module.check_mode: module.exit_json(changed=True) # cannot change runtime policy msgs.append('reboot to change the loaded policy') changed=True if (policy != config_policy): if module.check_mode: module.exit_json(changed=True) msgs.append('config policy changed from \'%s\' to \'%s\'' % (config_policy, policy)) set_config_policy(policy, configfile) changed=True if (state != runtime_state): if module.check_mode: module.exit_json(changed=True) if (state == 'disabled'): msgs.append('state change will take effect next reboot') else: if (runtime_enabled): set_state(state) msgs.append('runtime state changed from \'%s\' to \'%s\'' % (runtime_state, state)) else: msgs.append('state change will take effect next reboot') changed=True if (state != config_state): if module.check_mode: module.exit_json(changed=True) msgs.append('config state changed from \'%s\' to \'%s\'' % (config_state, state)) set_config_state(state, configfile) changed=True module.exit_json(changed=changed, msg=', '.join(msgs), configfile=configfile, policy=policy, state=state) ################################################# # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/seboolean0000664000000000000000000001475512316627017016541 0ustar rootroot#!/usr/bin/python # (c) 2012, Stephen Fromm # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: seboolean short_description: Toggles SELinux booleans. description: - Toggles SELinux booleans. 
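  - Requires the C(libselinux-python) and C(libsemanage-python) bindings on the managed host.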
version_added: "0.7" options: name: description: - Name of the boolean to configure required: true default: null persistent: description: - Set to C(yes) if the boolean setting should survive a reboot required: false default: no choices: [ "yes", "no" ] state: description: - Desired boolean value required: true default: null choices: [ 'yes', 'no' ] notes: - Not tested on any debian based system requirements: [ ] author: Stephen Fromm ''' EXAMPLES = ''' # Set (httpd_can_network_connect) flag on and keep it persistent across reboots - seboolean: name=httpd_can_network_connect state=yes persistent=yes ''' try: import selinux HAVE_SELINUX=True except ImportError: HAVE_SELINUX=False try: import semanage HAVE_SEMANAGE=True except ImportError: HAVE_SEMANAGE=False def has_boolean_value(module, name): bools = [] try: rc, bools = selinux.security_get_boolean_names() except OSError, e: module.fail_json(msg="Failed to get list of boolean names") if name in bools: return True else: return False def get_boolean_value(module, name): state = 0 try: state = selinux.security_get_boolean_active(name) except OSError, e: module.fail_json(msg="Failed to determine current state for boolean %s" % name) if state == 1: return True else: return False # The following method implements what setsebool.c does to change # a boolean and make it persist after reboot.. def semanage_boolean_value(module, name, state): rc = 0 value = 0 if state: value = 1 handle = semanage.semanage_handle_create() if handle is None: module.fail_json(msg="Failed to create semanage library handle") try: managed = semanage.semanage_is_managed(handle) if managed < 0: module.fail_json(msg="Failed to determine whether policy is manage") if managed == 0: if os.getuid() == 0: module.fail_json(msg="Cannot set persistent booleans without managed policy") else: module.fail_json(msg="Cannot set persistent booleans; please try as root") if semanage.semanage_connect(handle) < 0: module.fail_json(msg="Failed to connect to semanage") if semanage.semanage_begin_transaction(handle) < 0: module.fail_json(msg="Failed to begin semanage transaction") rc, sebool = semanage.semanage_bool_create(handle) if rc < 0: module.fail_json(msg="Failed to create seboolean with semanage") if semanage.semanage_bool_set_name(handle, sebool, name) < 0: module.fail_json(msg="Failed to set seboolean name with semanage") semanage.semanage_bool_set_value(sebool, value) rc, boolkey = semanage.semanage_bool_key_extract(handle, sebool) if rc < 0: module.fail_json(msg="Failed to extract boolean key with semanage") if semanage.semanage_bool_modify_local(handle, boolkey, sebool) < 0: module.fail_json(msg="Failed to modify boolean key with semanage") if semanage.semanage_bool_set_active(handle, boolkey, sebool) < 0: module.fail_json(msg="Failed to set boolean key active with semanage") semanage.semanage_bool_key_free(boolkey) semanage.semanage_bool_free(sebool) semanage.semanage_set_reload(handle, 0) if semanage.semanage_commit(handle) < 0: module.fail_json(msg="Failed to commit changes to semanage") semanage.semanage_disconnect(handle) semanage.semanage_handle_destroy(handle) except Exception, e: module.fail_json(msg="Failed to manage policy for boolean %s: %s" % (name, str(e))) return True def set_boolean_value(module, name, state): rc = 0 value = 0 if state: value = 1 try: rc = selinux.security_set_boolean(name, value) except OSError, e: module.fail_json(msg="Failed to set boolean %s to %s" % (name, value)) if rc == 0: return True else: return False def main(): module = AnsibleModule( 
argument_spec = dict(
            name=dict(required=True),
            persistent=dict(default='no', type='bool'),
            state=dict(required=True, type='bool')
        ),
        supports_check_mode=True
    )

    if not HAVE_SELINUX:
        module.fail_json(msg="This module requires libselinux-python support")

    if not HAVE_SEMANAGE:
        module.fail_json(msg="This module requires libsemanage-python support")

    if not selinux.is_selinux_enabled():
        module.fail_json(msg="SELinux is disabled on this host.")

    name = module.params['name']
    persistent = module.params['persistent']
    state = module.params['state']

    result = {}
    result['name'] = name

    if not has_boolean_value(module, name):
        module.fail_json(msg="SELinux boolean %s does not exist." % name)

    cur_value = get_boolean_value(module, name)

    if cur_value == state:
        result['state'] = cur_value
        result['changed'] = False
        module.exit_json(**result)

    if module.check_mode:
        module.exit_json(changed=True)
    if persistent:
        r = semanage_boolean_value(module, name, state)
    else:
        r = set_boolean_value(module, name, state)

    result['changed'] = r
    if not r:
        # bugfix: this previously referenced an undefined name 'value'
        module.fail_json(msg="Failed to set boolean %s to %s" % (name, state))
    try:
        selinux.security_commit_booleans()
    except:
        module.fail_json(msg="Failed to commit pending boolean %s value" % name)

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/sysctl0000664000000000000000000002565112316627017016104 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, David "DaviXX" CHANIAL
# (c) 2014, James Tanner
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#

DOCUMENTATION = '''
---
module: sysctl
short_description: Manage entries in sysctl.conf.
description:
    - This module manipulates sysctl entries and optionally performs a C(/sbin/sysctl -p) after changing them.
version_added: "1.0"
options:
    name:
        description:
            - The dot-separated path (aka I(key)) specifying the sysctl variable.
        required: true
        default: null
        aliases: [ 'key' ]
    value:
        description:
            - Desired value of the sysctl key.
        required: false
        default: null
        aliases: [ 'val' ]
    state:
        description:
            - Whether the entry should be present or absent in the sysctl file.
        choices: [ "present", "absent" ]
        default: present
    ignoreerrors:
        description:
            - Use this option to ignore errors about unknown keys.
        choices: [ "yes", "no" ]
        default: no
    reload:
        description:
            - If C(yes), performs a I(/sbin/sysctl -p) if the C(sysctl_file) is
              updated. If C(no), does not reload I(sysctl) even if the
              C(sysctl_file) is updated.
        choices: [ "yes", "no" ]
        default: "yes"
    sysctl_file:
        description:
            - Specifies the absolute path to C(sysctl.conf), if not C(/etc/sysctl.conf).
        required: false
        default: /etc/sysctl.conf
    sysctl_set:
        description:
            - Verify token value with the sysctl command and set with -w if necessary
        choices: [ "yes", "no" ]
        required: false
        version_added: 1.5
        default: False
notes: []
requirements: []
author: David "DaviXX" CHANIAL
'''

EXAMPLES = '''
# Set vm.swappiness to 5 in /etc/sysctl.conf
- sysctl: name=vm.swappiness value=5 state=present

# Remove kernel.panic entry from /etc/sysctl.conf
- sysctl: name=kernel.panic state=absent sysctl_file=/etc/sysctl.conf

# Set kernel.panic to 3 in /tmp/test_sysctl.conf
- sysctl: name=kernel.panic value=3 sysctl_file=/tmp/test_sysctl.conf reload=no

# Set ip forwarding on in /proc and do not reload the sysctl file
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes

# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
'''

# ==============================================================

import os
import tempfile
import re

class SysctlModule(object):

    def __init__(self, module):
        self.module = module
        self.args = self.module.params

        self.sysctl_cmd = self.module.get_bin_path('sysctl', required=True)
        self.sysctl_file = self.args['sysctl_file']

        self.proc_value = None  # current token value in proc fs
        self.file_value = None  # current token value in file
        self.file_lines = []    # all lines in the file
        self.file_values = {}   # dict of token values

        self.changed = False    # will change occur
        self.set_proc = False   # does sysctl need to set value
        self.write_file = False # does the sysctl file need to be reloaded

        self.process()

    # ==============================================================
    #   LOGIC
    # ==============================================================

    def process(self):
        # Whitespace is bad
        self.args['name'] = self.args['name'].strip()
        self.args['value'] = self._parse_value(self.args['value'])

        thisname = self.args['name']

        # get the current proc fs value
        self.proc_value = self.get_token_curr_value(thisname)

        # get the current sysctl file value
        self.read_sysctl_file()
        if thisname not in self.file_values:
            self.file_values[thisname] = None

        # update file contents with desired token/value
        self.fix_lines()

        # what do we need to do now?
        if self.file_values[thisname] is None and self.args['state'] == "present":
            self.changed = True
            self.write_file = True
        elif self.file_values[thisname] != self.args['value']:
            self.changed = True
            self.write_file = True

        if self.args['sysctl_set']:
            if self.proc_value is None:
                self.changed = True
            elif not self._values_is_equal(self.proc_value, self.args['value']):
                self.changed = True
                self.set_proc = True

        # Do the work
        if not self.module.check_mode:
            if self.write_file:
                self.write_sysctl()
            if self.write_file and self.args['reload']:
                self.reload_sysctl()
            if self.set_proc:
                self.set_token_value(self.args['name'], self.args['value'])

    def _values_is_equal(self, a, b):
        """Expects two string values. It will split the string by whitespace
        and compare each value.
It will return True if both lists are the same, contain the same elements and the same order.""" if a is None or b is None: return False a = a.split() b = b.split() if len(a) != len(b): return False return len([i for i, j in zip(a, b) if i == j]) == len(a) def _parse_value(self, value): if value is None: return '' elif value.lower() in BOOLEANS_TRUE: return '1' elif value.lower() in BOOLEANS_FALSE: return '0' else: return value.strip() # ============================================================== # SYSCTL COMMAND MANAGEMENT # ============================================================== # Use the sysctl command to find the current value def get_token_curr_value(self, token): thiscmd = "%s -e -n %s" % (self.sysctl_cmd, token) rc,out,err = self.module.run_command(thiscmd) if rc != 0: return None else: return out # Use the sysctl command to set the current value def set_token_value(self, token, value): if len(value.split()) > 0: value = '"' + value + '"' thiscmd = "%s -w %s=%s" % (self.sysctl_cmd, token, value) rc,out,err = self.module.run_command(thiscmd) if rc != 0: self.module.fail_json(msg='setting %s failed: %s' % (token, out + err)) else: return rc # Run sysctl -p def reload_sysctl(self): # do it if get_platform().lower() == 'freebsd': # freebsd doesn't support -p, so reload the sysctl service rc,out,err = self.module.run_command('/etc/rc.d/sysctl reload') else: # system supports reloading via the -p flag to sysctl, so we'll use that sysctl_args = [self.sysctl_cmd, '-p', self.sysctl_file] if self.args['ignoreerrors']: sysctl_args.insert(1, '-e') rc,out,err = self.module.run_command(sysctl_args) if rc != 0: self.module.fail_json(msg="Failed to reload sysctl: %s" % str(out) + str(err)) # ============================================================== # SYSCTL FILE MANAGEMENT # ============================================================== # Get the token value from the sysctl file def read_sysctl_file(self): lines = open(self.sysctl_file, "r").readlines() for line in lines: line = line.strip() self.file_lines.append(line) # don't split empty lines or comments if not line or line.startswith("#"): continue k, v = line.split('=',1) k = k.strip() v = v.strip() self.file_values[k] = v.strip() # Fix the value in the sysctl file content def fix_lines(self): checked = [] self.fixed_lines = [] for line in self.file_lines: if not line.strip() or line.strip().startswith("#"): self.fixed_lines.append(line) continue tmpline = line.strip() k, v = line.split('=',1) k = k.strip() v = v.strip() if k not in checked: checked.append(k) if k == self.args['name']: if self.args['state'] == "present": new_line = "%s = %s\n" % (k, self.args['value']) self.fixed_lines.append(new_line) else: new_line = "%s = %s\n" % (k, v) self.fixed_lines.append(new_line) if self.args['name'] not in checked and self.args['state'] == "present": new_line = "%s = %s\n" % (self.args['name'], self.args['value']) self.fixed_lines.append(new_line) # Completely rewrite the sysctl file def write_sysctl(self): # open a tmp file fd, tmp_path = tempfile.mkstemp('.conf', '.ansible_m_sysctl_', os.path.dirname(self.sysctl_file)) f = open(tmp_path,"w") try: for l in self.fixed_lines: f.write(l.strip() + "\n") except IOError, e: self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, str(e))) f.flush() f.close() # replace the real one self.module.atomic_move(tmp_path, self.sysctl_file) # ============================================================== # main def main(): # defining module module = AnsibleModule( argument_spec = 
            name = dict(aliases=['key'], required=True),
            value = dict(aliases=['val'], required=False),
            state = dict(default='present', choices=['present', 'absent']),
            reload = dict(default=True, type='bool'),
            sysctl_set = dict(default=False, type='bool'),
            ignoreerrors = dict(default=False, type='bool'),
            sysctl_file = dict(default='/etc/sysctl.conf')
        ),
        supports_check_mode=True
    )

    result = SysctlModule(module)

    module.exit_json(changed=result.changed)
    sys.exit(0)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/at0000664000000000000000000001531112316627017015163 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2014, Richard Isaacson
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: at
short_description: Schedule the execution of a command or script via the at command.
description:
 - Use this module to schedule a command or script to run once in the future.
 - All jobs are executed in the 'a' queue.
version_added: "0.0"
options:
  user:
    description:
     - The user to execute the at command as.
    required: false
    default: null
  command:
    description:
     - A command to be executed in the future.
    required: false
    default: null
  script_file:
    description:
     - An existing script to be executed in the future.
    required: false
    default: null
  unit_count:
    description:
     - The number of time units in the future at which to execute the command or script.
    required: true
  unit_type:
    description:
     - The type of time units in the future at which to execute the command or script.
    required: true
    choices: ["minutes", "hours", "days", "weeks"]
  action:
    description:
     - The action to take for the job, defaulting to add. Unique will verify that there is only one entry in the queue.
     - Delete will remove all existing queued jobs.
    required: true
    choices: ["add", "delete", "unique"]
    default: add
requirements:
 - at
author: Richard Isaacson
'''

EXAMPLES = '''
# Schedule a command to execute in 20 minutes as root.
- at: command="ls -d / > /dev/null" unit_count=20 unit_type="minutes"

# Schedule a script to execute in 1 hour as the neo user.
- at: script_file="/some/script.sh" user="neo" unit_count=1 unit_type="hours"

# Match a command to an existing job and delete the job.
- at: command="ls -d / > /dev/null" action="delete"

# Schedule a command to execute in 20 minutes making sure it is unique in the queue.
- at: command="ls -d / > /dev/null" action="unique" unit_count=20 unit_type="minutes"
'''
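# Illustrative sketch (not part of the module), using hypothetical values:
# main() below schedules a job by composing an at(1) command line of the form
# "at now + <unit_count> <unit_type> -f <script_file>", so unit_count=20 with
# unit_type=minutes produces:
example_at_command = "%s now + %s %s -f %s" % ('at', 20, 'minutes', '/tmp/example_job.sh')
# -> 'at now + 20 minutes -f /tmp/example_job.sh'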
import os
import tempfile

def matching_jobs(module, at_cmd, script_file, user=None):
    matching_jobs = []

    atq_cmd = module.get_bin_path('atq', True)

    # Get list of job numbers for the user.
    atq_command = "%s" % (atq_cmd)
    if user:
        atq_command = "su '%s' -c '%s'" % (user, atq_command)

    rc, out, err = module.run_command(atq_command)
    if rc != 0:
        module.fail_json(msg=err)

    current_jobs = out.splitlines()
    if len(current_jobs) == 0:
        return matching_jobs

    # Read script_file into a string.
    script_file_string = open(script_file).read().strip()

    # Loop through the jobs.
    # If the script text is contained in a job, add the job number to the list.
    for current_job in current_jobs:
        split_current_job = current_job.split()
        at_command = "%s -c %s" % (at_cmd, split_current_job[0])
        if user:
            at_command = "su '%s' -c '%s'" % (user, at_command)
        rc, out, err = module.run_command(at_command)
        if rc != 0:
            module.fail_json(msg=err)
        if script_file_string in out:
            matching_jobs.append(split_current_job[0])

    # Return the list.
    return matching_jobs

#================================================

def main():

    module = AnsibleModule(
        argument_spec = dict(
            user=dict(required=False),
            command=dict(required=False),
            script_file=dict(required=False),
            unit_count=dict(required=False, type='int'),
            unit_type=dict(required=False, default=None, choices=["minutes", "hours", "days", "weeks"], type="str"),
            action=dict(required=False, default="add", choices=["add", "delete", "unique"], type="str")
        ),
        supports_check_mode = False,
    )

    at_cmd = module.get_bin_path('at', True)

    user = module.params['user']
    command = module.params['command']
    script_file = module.params['script_file']
    unit_count = module.params['unit_count']
    unit_type = module.params['unit_type']
    action = module.params['action']

    if ((action == 'add') and (not unit_count or not unit_type)):
        module.fail_json(msg="add action requires unit_count and unit_type")

    if (not command) and (not script_file):
        module.fail_json(msg="command or script_file not specified")

    if command and script_file:
        module.fail_json(msg="command and script_file are mutually exclusive")

    result = {}
    result['action'] = action
    result['changed'] = False

    # If command, transform it into a script_file.
    if command:
        filed, script_file = tempfile.mkstemp(prefix='at')
        fileh = os.fdopen(filed, 'w')
        fileh.write(command)
        fileh.close()

    # if delete then return
    if action == 'delete':
        for matching_job in matching_jobs(module, at_cmd, script_file, user):
            at_command = "%s -d %s" % (at_cmd, matching_job)
            if user:
                at_command = "su '%s' -c '%s'" % (user, at_command)
            rc, out, err = module.run_command(at_command)
            if rc != 0:
                module.fail_json(msg=err)
            result['changed'] = True
        module.exit_json(**result)

    # if unique, and an existing job already matches, return unchanged
    if action == 'unique':
        if len(matching_jobs(module, at_cmd, script_file, user)) != 0:
            module.exit_json(**result)

    result['script_file'] = script_file
    result['unit_count'] = unit_count
    result['unit_type'] = unit_type

    at_command = "%s now + %s %s -f %s" % (at_cmd, unit_count, unit_type, script_file)
    if user:
        # We expect that if this is an installed script, the permissions are
        # already correct for the user to execute it.
        at_command = "su '%s' -c '%s'" % (user, at_command)
    rc, out, err = module.run_command(at_command)
    if rc != 0:
        module.fail_json(msg=err)
    if command:
        os.unlink(script_file)
    result['changed'] = True

    module.exit_json(**result)

# import module snippets
from ansible.module_utils.basic import *
main()
ansible-1.5.4/library/system/ping0000664000000000000000000000302412316627017015512 0ustar rootroot#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2012, Michael DeHaan
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: ping version_added: historical short_description: Try to connect to host and return C(pong) on success. description: - A trivial test module, this module always returns C(pong) on successful contact. It does not make sense in playbooks, but it is useful from C(/usr/bin/ansible) options: {} author: Michael DeHaan ''' EXAMPLES = ''' # Test 'webservers' status ansible webservers -m ping ''' def main(): module = AnsibleModule( argument_spec = dict( data=dict(required=False, default=None), ), supports_check_mode = True ) result = dict(ping='pong') if module.params['data']: result['ping'] = module.params['data'] module.exit_json(**result) from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/authorized_key0000664000000000000000000003310012316627017017601 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- """ Ansible module to add authorized_keys for ssh logins. (c) 2012, Brad Olson This file is part of Ansible Ansible is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Ansible is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Ansible. If not, see . """ DOCUMENTATION = ''' --- module: authorized_key short_description: Adds or removes an SSH authorized key description: - Adds or removes an SSH authorized key for a user from a remote host. version_added: "0.5" options: user: description: - The username on the remote host whose authorized_keys file will be modified required: true default: null aliases: [] key: description: - The SSH public key, as a string required: true default: null path: description: - Alternate path to the authorized_keys file required: false default: "(homedir)+/.ssh/authorized_keys" version_added: "1.2" manage_dir: description: - Whether this module should manage the directory of the authorized_keys file. Make sure to set C(manage_dir=no) if you are using an alternate directory for authorized_keys set with C(path), since you could lock yourself out of SSH access. See the example below. 
required: false choices: [ "yes", "no" ] default: "yes" version_added: "1.2" state: description: - Whether the given key (with the given key_options) should or should not be in the file required: false choices: [ "present", "absent" ] default: "present" key_options: description: - A string of ssh key options to be prepended to the key in the authorized_keys file required: false default: null version_added: "1.4" description: - "Adds or removes authorized keys for particular user accounts" author: Brad Olson ''' EXAMPLES = ''' # Example using key data from a local file on the management machine - authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}" # Using alternate directory locations: - authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}" path='/etc/ssh/authorized_keys/charlie' manage_dir=no # Using with_file - name: Set up authorized_keys for the deploy user authorized_key: user=deploy key="{{ item }}" with_file: - public_keys/doe-jane - public_keys/doe-john # Using key_options: - authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}" key_options='no-port-forwarding,host="10.0.1.1"' ''' # Makes sure the public key line is present or absent in the user's .ssh/authorized_keys. # # Arguments # ========= # user = username # key = line to add to authorized_keys for user # path = path to the user's authorized_keys file (default: ~/.ssh/authorized_keys) # manage_dir = whether to create, and control ownership of the directory (default: true) # state = absent|present (default: present) # # see example in examples/playbooks import sys import os import pwd import os.path import tempfile import re import shlex class keydict(dict): """ a dictionary that maintains the order of keys as they are added """ # http://stackoverflow.com/questions/2328235/pythonextend-the-dict-class def __init__(self, *args, **kw): super(keydict,self).__init__(*args, **kw) self.itemlist = super(keydict,self).keys() def __setitem__(self, key, value): self.itemlist.append(key) super(keydict,self).__setitem__(key, value) def __iter__(self): return iter(self.itemlist) def keys(self): return self.itemlist def values(self): return [self[key] for key in self] def itervalues(self): return (self[key] for key in self) def keyfile(module, user, write=False, path=None, manage_dir=True): """ Calculate name of authorized keys file, optionally creating the directories and file, properly setting permissions. 
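    For example (illustrative values only): keyfile(module, 'deploy') with no
    explicit path resolves to /home/deploy/.ssh/authorized_keys; with write=True
    and manage_dir=True the .ssh directory is created mode 0700 and the file
    mode 0600, both chowned to the user.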
:param str user: name of user in passwd file :param bool write: if True, write changes to authorized_keys file (creating directories if needed) :param str path: if not None, use provided path rather than default of '~user/.ssh/authorized_keys' :param bool manage_dir: if True, create and set ownership of the parent dir of the authorized_keys file :return: full path string to authorized_keys for user """ try: user_entry = pwd.getpwnam(user) except KeyError, e: module.fail_json(msg="Failed to lookup user %s: %s" % (user, str(e))) if path is None: homedir = user_entry.pw_dir sshdir = os.path.join(homedir, ".ssh") keysfile = os.path.join(sshdir, "authorized_keys") else: sshdir = os.path.dirname(path) keysfile = path if not write: return keysfile uid = user_entry.pw_uid gid = user_entry.pw_gid if manage_dir in BOOLEANS_TRUE: if not os.path.exists(sshdir): os.mkdir(sshdir, 0700) if module.selinux_enabled(): module.set_default_selinux_context(sshdir, False) os.chown(sshdir, uid, gid) os.chmod(sshdir, 0700) if not os.path.exists(keysfile): basedir = os.path.dirname(keysfile) if not os.path.exists(basedir): os.makedirs(basedir) try: f = open(keysfile, "w") #touches file so we can set ownership and perms finally: f.close() if module.selinux_enabled(): module.set_default_selinux_context(keysfile, False) try: os.chown(keysfile, uid, gid) os.chmod(keysfile, 0600) except OSError: pass return keysfile def parseoptions(module, options): ''' reads a string containing ssh-key options and returns a dictionary of those options ''' options_dict = keydict() #ordered dict if options: token_exp = [ # matches separator (r',+', False), # matches option with value, e.g. from="x,y" (r'([a-z0-9-]+)="((?:[^"\\]|\\.)*)"', True), # matches single option, e.g. no-agent-forwarding (r'[a-z0-9-]+', True) ] pos = 0 while pos < len(options): match = None for pattern, is_valid_option in token_exp: regex = re.compile(pattern, re.IGNORECASE) match = regex.match(options, pos) if match: text = match.group(0) if is_valid_option: if len(match.groups()) == 2: options_dict[match.group(1)] = match.group(2) else: options_dict[text] = None break if not match: module.fail_json(msg="invalid option string: %s" % options) else: pos = match.end(0) return options_dict def parsekey(module, raw_key): ''' parses a key, which may or may not contain a list of ssh-key options at the beginning ''' VALID_SSH2_KEY_TYPES = [ 'ssh-ed25519', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-dss', 'ssh-rsa', ] options = None # connection options key = None # encrypted key string key_type = None # type of ssh key type_index = None # index of keytype in key string|list # remove comment yaml escapes raw_key = raw_key.replace('\#', '#') # split key safely lex = shlex.shlex(raw_key) lex.quotes = ["'", '"'] lex.commenters = '' #keep comment hashes lex.whitespace_split = True key_parts = list(lex) for i in range(0, len(key_parts)): if key_parts[i] in VALID_SSH2_KEY_TYPES: type_index = i key_type = key_parts[i] break # check for options if type_index is None: return None elif type_index > 0: options = " ".join(key_parts[:type_index]) # parse the options (if any) options = parseoptions(module, options) # get key after the type index key = key_parts[(type_index + 1)] # set comment to everything after the key if len(key_parts) > (type_index + 1): comment = " ".join(key_parts[(type_index + 2):]) return (key, key_type, options, comment) def readkeys(module, filename): if not os.path.isfile(filename): return {} keys = {} f = open(filename) for line 
in f.readlines(): key_data = parsekey(module, line) if key_data: # use key as identifier keys[key_data[0]] = key_data else: # for an invalid line, just append the line # to the array so it will be re-output later keys[line] = line f.close() return keys def writekeys(module, filename, keys): fd, tmp_path = tempfile.mkstemp('', 'tmp', os.path.dirname(filename)) f = open(tmp_path,"w") try: for index, key in keys.items(): try: (keyhash,type,options,comment) = key option_str = "" if options: option_strings = [] for option_key in options.keys(): if options[option_key]: option_strings.append("%s=\"%s\"" % (option_key, options[option_key])) else: option_strings.append("%s" % option_key) option_str = ",".join(option_strings) option_str += " " key_line = "%s%s %s %s\n" % (option_str, type, keyhash, comment) except: key_line = key f.writelines(key_line) except IOError, e: module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, str(e))) f.close() module.atomic_move(tmp_path, filename) def enforce_state(module, params): """ Add or remove key. """ user = params["user"] key = params["key"] path = params.get("path", None) manage_dir = params.get("manage_dir", True) state = params.get("state", "present") key_options = params.get("key_options", None) # extract indivial keys into an array, skipping blank lines and comments key = [s for s in key.splitlines() if s and not s.startswith('#')] # check current state -- just get the filename, don't create file do_write = False params["keyfile"] = keyfile(module, user, do_write, path, manage_dir) existing_keys = readkeys(module, params["keyfile"]) # Check our new keys, if any of them exist we'll continue. for new_key in key: parsed_new_key = parsekey(module, new_key) if key_options is not None: parsed_options = parseoptions(module, key_options) parsed_new_key = (parsed_new_key[0], parsed_new_key[1], parsed_options, parsed_new_key[3]) if not parsed_new_key: module.fail_json(msg="invalid key specified: %s" % new_key) present = False matched = False non_matching_keys = [] if parsed_new_key[0] in existing_keys: present = True # Then we check if everything matches, including # the key type and options. 
If not, we append this # existing key to the non-matching list # We only want it to match everything when the state # is present if parsed_new_key != existing_keys[parsed_new_key[0]] and state == "present": non_matching_keys.append(existing_keys[parsed_new_key[0]]) else: matched = True # handle idempotent state=present if state=="present": if len(non_matching_keys) > 0: for non_matching_key in non_matching_keys: if non_matching_key[0] in existing_keys: del existing_keys[non_matching_key[0]] do_write = True if not matched: existing_keys[parsed_new_key[0]] = parsed_new_key do_write = True elif state=="absent": if not matched: continue del existing_keys[parsed_new_key[0]] do_write = True if do_write: writekeys(module, keyfile(module, user, do_write, path, manage_dir), existing_keys) params['changed'] = True return params def main(): module = AnsibleModule( argument_spec = dict( user = dict(required=True, type='str'), key = dict(required=True, type='str'), path = dict(required=False, type='str'), manage_dir = dict(required=False, type='bool', default=True), state = dict(default='present', choices=['absent','present']), key_options = dict(required=False, type='str'), unique = dict(default=False, type='bool'), ) ) results = enforce_state(module, module.params) module.exit_json(**results) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/system/setup0000664000000000000000000027714312316627017015734 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import os import array import fcntl import fnmatch import glob import platform import re import socket import struct import datetime import getpass import ConfigParser import StringIO DOCUMENTATION = ''' --- module: setup version_added: historical short_description: Gathers facts about remote hosts options: filter: version_added: "1.1" description: - if supplied, only return facts that match this shell-style (fnmatch) wildcard. required: false default: '*' fact_path: version_added: "1.3" description: - path used for local ansible facts (*.fact) - files in this dir will be run (if executable) and their results be added to ansible_local facts if a file is not executable it is read. File/results format can be json or ini-format required: false default: '/etc/ansible/facts.d' description: - This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in playbooks. It can also be executed directly by C(/usr/bin/ansible) to check what variables are available to a host. Ansible provides many I(facts) about the system, automatically. notes: - More ansible facts will be added with successive releases. If I(facter) or I(ohai) are installed, variables from these programs will also be snapshotted into the JSON file for usage in templating. 
These variables are prefixed with C(facter_) and C(ohai_) so it's easy to tell their source. All variables are bubbled up to the caller. Using the ansible facts and choosing to not install I(facter) and I(ohai) means you can avoid Ruby-dependencies on your remote systems. (See also M(facter) and M(ohai).) - The filter option filters only the first level subkey below ansible_facts. author: Michael DeHaan ''' EXAMPLES = """ # Display facts from all hosts and store them indexed by I(hostname) at C(/tmp/facts). ansible all -m setup --tree /tmp/facts # Display only facts regarding memory found by ansible on all hosts and output them. ansible all -m setup -a 'filter=ansible_*_mb' # Display only facts returned by facter. ansible all -m setup -a 'filter=facter_*' # Display only facts about certain interfaces. ansible all -m setup -a 'filter=ansible_eth[0-2]' """ try: import selinux HAVE_SELINUX=True except ImportError: HAVE_SELINUX=False try: import json except ImportError: import simplejson as json class Facts(object): """ This class should only attempt to populate those facts that are mostly generic to all systems. This includes platform facts, service facts (eg. ssh keys or selinux), and distribution facts. Anything that requires extensive code or may have more than one possible implementation to establish facts for a given topic should subclass Facts. """ _I386RE = re.compile(r'i[3456]86') # For the most part, we assume that platform.dist() will tell the truth. # This is the fallback to handle unknowns or exceptions OSDIST_DICT = { '/etc/redhat-release': 'RedHat', '/etc/vmware-release': 'VMwareESX', '/etc/openwrt_release': 'OpenWrt', '/etc/system-release': 'OtherLinux', '/etc/alpine-release': 'Alpine', '/etc/release': 'Solaris', '/etc/arch-release': 'Archlinux', '/etc/SuSE-release': 'SuSE', '/etc/os-release': 'Debian' } SELINUX_MODE_DICT = { 1: 'enforcing', 0: 'permissive', -1: 'disabled' } # A list of dicts. If there is a platform with more than one # package manager, put the preferred one last. If there is an # ansible module, use that as the value for the 'name' key. 
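    # Illustrative sketch (not part of the class, shown commented out because
    # PKG_MGRS is defined just below): get_pkg_mgr_facts() walks this table in
    # order and keeps the last entry whose path exists, which is why the
    # preferred manager goes last:
    #
    #     pkg_mgr = 'unknown'
    #     for pkg in Facts.PKG_MGRS:
    #         if os.path.exists(pkg['path']):
    #             pkg_mgr = pkg['name']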
PKG_MGRS = [ { 'path' : '/usr/bin/yum', 'name' : 'yum' }, { 'path' : '/usr/bin/apt-get', 'name' : 'apt' }, { 'path' : '/usr/bin/zypper', 'name' : 'zypper' }, { 'path' : '/usr/sbin/urpmi', 'name' : 'urpmi' }, { 'path' : '/usr/bin/pacman', 'name' : 'pacman' }, { 'path' : '/bin/opkg', 'name' : 'opkg' }, { 'path' : '/opt/local/bin/pkgin', 'name' : 'pkgin' }, { 'path' : '/opt/local/bin/port', 'name' : 'macports' }, { 'path' : '/sbin/apk', 'name' : 'apk' }, { 'path' : '/usr/sbin/pkg', 'name' : 'pkgng' }, { 'path' : '/usr/sbin/swlist', 'name' : 'SD-UX' }, ] def __init__(self): self.facts = {} self.get_platform_facts() self.get_distribution_facts() self.get_cmdline() self.get_public_ssh_host_keys() self.get_selinux_facts() self.get_pkg_mgr_facts() self.get_lsb_facts() self.get_date_time_facts() self.get_user_facts() self.get_local_facts() self.get_env_facts() def populate(self): return self.facts # Platform # platform.system() can be Linux, Darwin, Java, or Windows def get_platform_facts(self): self.facts['system'] = platform.system() self.facts['kernel'] = platform.release() self.facts['machine'] = platform.machine() self.facts['python_version'] = platform.python_version() self.facts['fqdn'] = socket.getfqdn() self.facts['hostname'] = platform.node().split('.')[0] self.facts['domain'] = '.'.join(self.facts['fqdn'].split('.')[1:]) arch_bits = platform.architecture()[0] self.facts['userspace_bits'] = arch_bits.replace('bit', '') if self.facts['machine'] == 'x86_64': self.facts['architecture'] = self.facts['machine'] if self.facts['userspace_bits'] == '64': self.facts['userspace_architecture'] = 'x86_64' elif self.facts['userspace_bits'] == '32': self.facts['userspace_architecture'] = 'i386' elif Facts._I386RE.search(self.facts['machine']): self.facts['architecture'] = 'i386' if self.facts['userspace_bits'] == '64': self.facts['userspace_architecture'] = 'x86_64' elif self.facts['userspace_bits'] == '32': self.facts['userspace_architecture'] = 'i386' else: self.facts['architecture'] = self.facts['machine'] if self.facts['system'] == 'Linux': self.get_distribution_facts() elif self.facts['system'] == 'AIX': rc, out, err = module.run_command("/usr/sbin/bootinfo -p") data = out.split('\n') self.facts['architecture'] = data[0] def get_local_facts(self): fact_path = module.params.get('fact_path', None) if not fact_path or not os.path.exists(fact_path): return local = {} for fn in sorted(glob.glob(fact_path + '/*.fact')): # where it will sit under local facts fact_base = os.path.basename(fn).replace('.fact','') if os.access(fn, os.X_OK): # run it # try to read it as json first # if that fails read it with ConfigParser # if that fails, skip it rc, out, err = module.run_command(fn) else: out = open(fn).read() # load raw json fact = 'loading %s' % fact_base try: fact = json.loads(out) except ValueError, e: # load raw ini cp = ConfigParser.ConfigParser() try: cp.readfp(StringIO.StringIO(out)) except ConfigParser.Error, e: fact="error loading fact - please check content" else: fact = {} #print cp.sections() for sect in cp.sections(): if sect not in fact: fact[sect] = {} for opt in cp.options(sect): val = cp.get(sect, opt) fact[sect][opt]=val local[fact_base] = fact if not local: return self.facts['local'] = local # platform.dist() is deprecated in 2.6 # in 2.6 and newer, you should use platform.linux_distribution() def get_distribution_facts(self): # A list with OS Family members OS_FAMILY = dict( RedHat = 'RedHat', Fedora = 'RedHat', CentOS = 'RedHat', Scientific = 'RedHat', SLC = 'RedHat', Ascendos = 
'RedHat', CloudLinux = 'RedHat', PSBM = 'RedHat', OracleLinux = 'RedHat', OVS = 'RedHat', OEL = 'RedHat', Amazon = 'RedHat', XenServer = 'RedHat', Ubuntu = 'Debian', Debian = 'Debian', SLES = 'Suse', SLED = 'Suse', OpenSuSE = 'Suse', SuSE = 'Suse', Gentoo = 'Gentoo', Archlinux = 'Archlinux', Mandriva = 'Mandrake', Mandrake = 'Mandrake', Solaris = 'Solaris', Nexenta = 'Solaris', OmniOS = 'Solaris', OpenIndiana = 'Solaris', SmartOS = 'Solaris', AIX = 'AIX', Alpine = 'Alpine', MacOSX = 'Darwin', FreeBSD = 'FreeBSD', HPUX = 'HP-UX' ) if self.facts['system'] == 'AIX': self.facts['distribution'] = 'AIX' rc, out, err = module.run_command("/usr/bin/oslevel") data = out.split('.') self.facts['distribution_version'] = data[0] self.facts['distribution_release'] = data[1] elif self.facts['system'] == 'HP-UX': self.facts['distribution'] = 'HP-UX' rc, out, err = module.run_command("/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'") data = re.search('HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out) if data: self.facts['distribution_version'] = data.groups()[0] self.facts['distribution_release'] = data.groups()[1] elif self.facts['system'] == 'Darwin': self.facts['distribution'] = 'MacOSX' rc, out, err = module.run_command("/usr/bin/sw_vers -productVersion") data = out.split()[-1] self.facts['distribution_version'] = data elif self.facts['system'] == 'FreeBSD': self.facts['distribution'] = 'FreeBSD' self.facts['distribution_release'] = platform.release() self.facts['distribution_version'] = platform.version() elif self.facts['system'] == 'OpenBSD': self.facts['distribution'] = 'OpenBSD' self.facts['distribution_release'] = platform.release() rc, out, err = module.run_command("/sbin/sysctl -n kern.version") match = re.match('OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out) if match: self.facts['distribution_version'] = match.groups()[0] else: self.facts['distribution_version'] = 'release' else: dist = platform.dist() self.facts['distribution'] = dist[0].capitalize() or 'NA' self.facts['distribution_version'] = dist[1] or 'NA' self.facts['distribution_release'] = dist[2] or 'NA' # Try to handle the exceptions now ... 
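        # Illustrative example (not part of the module): platform.dist() can be
        # generic on some systems; e.g. on Amazon Linux the marker file
        # /etc/system-release (mapped to 'OtherLinux' above) contains 'Amazon',
        # so the loop below rewrites the distribution fact to 'Amazon' and takes
        # the version from the last word of that file.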
for (path, name) in Facts.OSDIST_DICT.items(): if os.path.exists(path): if self.facts['distribution'] == 'Fedora': pass elif name == 'RedHat': data = get_file_content(path) if 'Red Hat' in data: self.facts['distribution'] = name else: self.facts['distribution'] = data.split()[0] elif name == 'OtherLinux': data = get_file_content(path) if 'Amazon' in data: self.facts['distribution'] = 'Amazon' self.facts['distribution_version'] = data.split()[-1] elif name == 'OpenWrt': data = get_file_content(path) if 'OpenWrt' in data: self.facts['distribution'] = name version = re.search('DISTRIB_RELEASE="(.*)"', data) if version: self.facts['distribution_version'] = version.groups()[0] release = re.search('DISTRIB_CODENAME="(.*)"', data) if release: self.facts['distribution_release'] = release.groups()[0] elif name == 'Alpine': data = get_file_content(path) self.facts['distribution'] = 'Alpine' self.facts['distribution_version'] = data elif name == 'Solaris': data = get_file_content(path).split('\n')[0] ora_prefix = '' if 'Oracle Solaris' in data: data = data.replace('Oracle ','') ora_prefix = 'Oracle ' self.facts['distribution'] = data.split()[0] self.facts['distribution_version'] = data.split()[1] self.facts['distribution_release'] = ora_prefix + data elif name == 'SuSE': data = get_file_content(path).splitlines() self.facts['distribution_release'] = data[2].split('=')[1].strip() elif name == 'Debian': data = get_file_content(path).split('\n')[0] release = re.search("PRETTY_NAME.+ \(?([^ ]+?)\)?\"", data) if release: self.facts['distribution_release'] = release.groups()[0] else: self.facts['distribution'] = name self.facts['os_family'] = self.facts['distribution'] if self.facts['distribution'] in OS_FAMILY: self.facts['os_family'] = OS_FAMILY[self.facts['distribution']] def get_cmdline(self): data = get_file_content('/proc/cmdline') if data: self.facts['cmdline'] = {} for piece in shlex.split(data): item = piece.split('=', 1) if len(item) == 1: self.facts['cmdline'][item[0]] = True else: self.facts['cmdline'][item[0]] = item[1] def get_public_ssh_host_keys(self): dsa_filename = '/etc/ssh/ssh_host_dsa_key.pub' rsa_filename = '/etc/ssh/ssh_host_rsa_key.pub' ecdsa_filename = '/etc/ssh/ssh_host_ecdsa_key.pub' if self.facts['system'] == 'Darwin': dsa_filename = '/etc/ssh_host_dsa_key.pub' rsa_filename = '/etc/ssh_host_rsa_key.pub' ecdsa_filename = '/etc/ssh_host_ecdsa_key.pub' dsa = get_file_content(dsa_filename) rsa = get_file_content(rsa_filename) ecdsa = get_file_content(ecdsa_filename) if dsa is None: dsa = 'NA' else: self.facts['ssh_host_key_dsa_public'] = dsa.split()[1] if rsa is None: rsa = 'NA' else: self.facts['ssh_host_key_rsa_public'] = rsa.split()[1] if ecdsa is None: ecdsa = 'NA' else: self.facts['ssh_host_key_ecdsa_public'] = ecdsa.split()[1] def get_pkg_mgr_facts(self): self.facts['pkg_mgr'] = 'unknown' for pkg in Facts.PKG_MGRS: if os.path.exists(pkg['path']): self.facts['pkg_mgr'] = pkg['name'] if self.facts['system'] == 'OpenBSD': self.facts['pkg_mgr'] = 'openbsd_pkg' def get_lsb_facts(self): lsb_path = module.get_bin_path('lsb_release') if lsb_path: rc, out, err = module.run_command([lsb_path, "-a"]) if rc == 0: self.facts['lsb'] = {} for line in out.split('\n'): if len(line) < 1: continue value = line.split(':', 1)[1].strip() if 'LSB Version:' in line: self.facts['lsb']['release'] = value elif 'Distributor ID:' in line: self.facts['lsb']['id'] = value elif 'Description:' in line: self.facts['lsb']['description'] = value elif 'Release:' in line: self.facts['lsb']['release'] = value elif 
'Codename:' in line: self.facts['lsb']['codename'] = value if 'lsb' in self.facts and 'release' in self.facts['lsb']: self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] elif lsb_path is None and os.path.exists('/etc/lsb-release'): self.facts['lsb'] = {} f = open('/etc/lsb-release', 'r') try: for line in f.readlines(): value = line.split('=',1)[1].strip() if 'DISTRIB_ID' in line: self.facts['lsb']['id'] = value elif 'DISTRIB_RELEASE' in line: self.facts['lsb']['release'] = value elif 'DISTRIB_DESCRIPTION' in line: self.facts['lsb']['description'] = value elif 'DISTRIB_CODENAME' in line: self.facts['lsb']['codename'] = value finally: f.close() else: return self.facts if 'lsb' in self.facts and 'release' in self.facts['lsb']: self.facts['lsb']['major_release'] = self.facts['lsb']['release'].split('.')[0] def get_selinux_facts(self): if not HAVE_SELINUX: self.facts['selinux'] = False return self.facts['selinux'] = {} if not selinux.is_selinux_enabled(): self.facts['selinux']['status'] = 'disabled' else: self.facts['selinux']['status'] = 'enabled' try: self.facts['selinux']['policyvers'] = selinux.security_policyvers() except OSError, e: self.facts['selinux']['policyvers'] = 'unknown' try: (rc, configmode) = selinux.selinux_getenforcemode() if rc == 0: self.facts['selinux']['config_mode'] = Facts.SELINUX_MODE_DICT.get(configmode, 'unknown') else: self.facts['selinux']['config_mode'] = 'unknown' except OSError, e: self.facts['selinux']['config_mode'] = 'unknown' try: mode = selinux.security_getenforce() self.facts['selinux']['mode'] = Facts.SELINUX_MODE_DICT.get(mode, 'unknown') except OSError, e: self.facts['selinux']['mode'] = 'unknown' try: (rc, policytype) = selinux.selinux_getpolicytype() if rc == 0: self.facts['selinux']['type'] = policytype else: self.facts['selinux']['type'] = 'unknown' except OSError, e: self.facts['selinux']['type'] = 'unknown' def get_date_time_facts(self): self.facts['date_time'] = {} now = datetime.datetime.now() self.facts['date_time']['year'] = now.strftime('%Y') self.facts['date_time']['month'] = now.strftime('%m') self.facts['date_time']['day'] = now.strftime('%d') self.facts['date_time']['hour'] = now.strftime('%H') self.facts['date_time']['minute'] = now.strftime('%M') self.facts['date_time']['second'] = now.strftime('%S') self.facts['date_time']['epoch'] = now.strftime('%s') if self.facts['date_time']['epoch'] == '' or self.facts['date_time']['epoch'][0] == '%': self.facts['date_time']['epoch'] = str(int(time.time())) self.facts['date_time']['date'] = now.strftime('%Y-%m-%d') self.facts['date_time']['time'] = now.strftime('%H:%M:%S') self.facts['date_time']['iso8601_micro'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ") self.facts['date_time']['iso8601'] = now.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ") self.facts['date_time']['tz'] = time.strftime("%Z") self.facts['date_time']['tz_offset'] = time.strftime("%z") # User def get_user_facts(self): self.facts['user_id'] = getpass.getuser() def get_env_facts(self): self.facts['env'] = {} for k,v in os.environ.iteritems(): self.facts['env'][k] = v class Hardware(Facts): """ This is a generic Hardware subclass of Facts. This should be further subclassed to implement per platform. If you subclass this, it should define: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count All subclasses MUST define platform. 
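    Dispatch happens in __new__ below: instantiating Hardware() walks
    Hardware.__subclasses__() and returns the subclass whose platform attribute
    matches platform.system(). An illustrative, hypothetical subclass:

        class ExampleOSHardware(Hardware):
            platform = 'ExampleOS'  # selected when platform.system() == 'ExampleOS'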
""" platform = 'Generic' def __new__(cls, *arguments, **keyword): subclass = cls for sc in Hardware.__subclasses__(): if sc.platform == platform.system(): subclass = sc return super(cls, subclass).__new__(subclass, *arguments, **keyword) def __init__(self): Facts.__init__(self) def populate(self): return self.facts class LinuxHardware(Hardware): """ Linux-specific subclass of Hardware. Defines memory and CPU facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count In addition, it also defines number of DMI facts and device facts. """ platform = 'Linux' MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() self.get_dmi_facts() self.get_device_facts() self.get_mount_facts() return self.facts def get_memory_facts(self): if not os.access("/proc/meminfo", os.R_OK): return for line in open("/proc/meminfo").readlines(): data = line.split(":", 1) key = data[0] if key in LinuxHardware.MEMORY_FACTS: val = data[1].strip().split(' ')[0] self.facts["%s_mb" % key.lower()] = long(val) / 1024 def get_cpu_facts(self): i = 0 physid = 0 coreid = 0 sockets = {} cores = {} if not os.access("/proc/cpuinfo", os.R_OK): return self.facts['processor'] = [] for line in open("/proc/cpuinfo").readlines(): data = line.split(":", 1) key = data[0].strip() # model name is for Intel arch, Processor (mind the uppercase P) # works for some ARM devices, like the Sheevaplug. if key == 'model name' or key == 'Processor': if 'processor' not in self.facts: self.facts['processor'] = [] self.facts['processor'].append(data[1].strip()) i += 1 elif key == 'physical id': physid = data[1].strip() if physid not in sockets: sockets[physid] = 1 elif key == 'core id': coreid = data[1].strip() if coreid not in sockets: cores[coreid] = 1 elif key == 'cpu cores': sockets[physid] = int(data[1].strip()) elif key == 'siblings': cores[coreid] = int(data[1].strip()) self.facts['processor_count'] = sockets and len(sockets) or i self.facts['processor_cores'] = sockets.values() and sockets.values()[0] or 1 self.facts['processor_threads_per_core'] = ((cores.values() and cores.values()[0] or 1) / self.facts['processor_cores']) self.facts['processor_vcpus'] = (self.facts['processor_threads_per_core'] * self.facts['processor_count'] * self.facts['processor_cores']) def get_dmi_facts(self): ''' learn dmi facts from system Try /sys first for dmi related facts. 
If that is not available, fall back to dmidecode executable ''' if os.path.exists('/sys/devices/virtual/dmi/id/product_name'): # Use kernel DMI info, if available # DMI SPEC -- http://www.dmtf.org/sites/default/files/standards/documents/DSP0134_2.7.0.pdf FORM_FACTOR = [ "Unknown", "Other", "Unknown", "Desktop", "Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower", "Portable", "Laptop", "Notebook", "Hand Held", "Docking Station", "All In One", "Sub Notebook", "Space-saving", "Lunch Box", "Main Server Chassis", "Expansion Chassis", "Sub Chassis", "Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis", "Rack Mount Chassis", "Sealed-case PC", "Multi-system", "CompactPCI", "AdvancedTCA", "Blade" ] DMI_DICT = { 'bios_date': '/sys/devices/virtual/dmi/id/bios_date', 'bios_version': '/sys/devices/virtual/dmi/id/bios_version', 'form_factor': '/sys/devices/virtual/dmi/id/chassis_type', 'product_name': '/sys/devices/virtual/dmi/id/product_name', 'product_serial': '/sys/devices/virtual/dmi/id/product_serial', 'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid', 'product_version': '/sys/devices/virtual/dmi/id/product_version', 'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor' } for (key,path) in DMI_DICT.items(): data = get_file_content(path) if data is not None: if key == 'form_factor': try: self.facts['form_factor'] = FORM_FACTOR[int(data)] except IndexError, e: self.facts['form_factor'] = 'unknown (%s)' % data else: self.facts[key] = data else: self.facts[key] = 'NA' else: # Fall back to using dmidecode, if available dmi_bin = module.get_bin_path('dmidecode') DMI_DICT = { 'bios_date': 'bios-release-date', 'bios_version': 'bios-version', 'form_factor': 'chassis-type', 'product_name': 'system-product-name', 'product_serial': 'system-serial-number', 'product_uuid': 'system-uuid', 'product_version': 'system-version', 'system_vendor': 'system-manufacturer' } for (k, v) in DMI_DICT.items(): if dmi_bin is not None: (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) if rc == 0: # Strip out commented lines (specific dmidecode output) thisvalue = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) try: json.dumps(thisvalue) except UnicodeDecodeError: thisvalue = "NA" self.facts[k] = thisvalue else: self.facts[k] = 'NA' else: self.facts[k] = 'NA' def get_mount_facts(self): self.facts['mounts'] = [] mtab = get_file_content('/etc/mtab', '') for line in mtab.split('\n'): if line.startswith('/'): fields = line.rstrip('\n').split() if(fields[2] != 'none'): size_total = None size_available = None try: statvfs_result = os.statvfs(fields[1]) size_total = statvfs_result.f_bsize * statvfs_result.f_blocks size_available = statvfs_result.f_bsize * (statvfs_result.f_bavail) except OSError, e: continue self.facts['mounts'].append( {'mount': fields[1], 'device':fields[0], 'fstype': fields[2], 'options': fields[3], # statvfs data 'size_total': size_total, 'size_available': size_available, }) def get_device_facts(self): self.facts['devices'] = {} lspci = module.get_bin_path('lspci') if lspci: rc, pcidata, err = module.run_command([lspci, '-D']) else: pcidata = None try: block_devs = os.listdir("/sys/block") except OSError: return for block in block_devs: virtual = 1 sysfs_no_links = 0 try: path = os.readlink(os.path.join("/sys/block/", block)) except OSError, e: if e.errno == errno.EINVAL: path = block sysfs_no_links = 1 else: continue if "virtual" in path: continue sysdir = os.path.join("/sys/block", path) if sysfs_no_links == 1: for folder in os.listdir(sysdir): if 
"device" in folder: virtual = 0 break if virtual: continue d = {} diskname = os.path.basename(sysdir) for key in ['vendor', 'model']: d[key] = get_file_content(sysdir + "/device/" + key) for key,test in [ ('removable','/removable'), \ ('support_discard','/queue/discard_granularity'), ]: d[key] = get_file_content(sysdir + test) d['partitions'] = {} for folder in os.listdir(sysdir): m = re.search("(" + diskname + "\d+)", folder) if m: part = {} partname = m.group(1) part_sysdir = sysdir + "/" + partname part['start'] = get_file_content(part_sysdir + "/start",0) part['sectors'] = get_file_content(part_sysdir + "/size",0) part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512) part['size'] = module.pretty_bytes((float(part['sectors']) * float(part['sectorsize']))) d['partitions'][partname] = part d['rotational'] = get_file_content(sysdir + "/queue/rotational") d['scheduler_mode'] = "" scheduler = get_file_content(sysdir + "/queue/scheduler") if scheduler is not None: m = re.match(".*?(\[(.*)\])", scheduler) if m: d['scheduler_mode'] = m.group(2) d['sectors'] = get_file_content(sysdir + "/size") if not d['sectors']: d['sectors'] = 0 d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size") if not d['sectorsize']: d['sectorsize'] = 512 d['size'] = module.pretty_bytes(float(d['sectors']) * float(d['sectorsize'])) d['host'] = "" # domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7). m = re.match(".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir) if m and pcidata: pciid = m.group(1) did = re.escape(pciid) m = re.search("^" + did + "\s(.*)$", pcidata, re.MULTILINE) d['host'] = m.group(1) d['holders'] = [] if os.path.isdir(sysdir + "/holders"): for folder in os.listdir(sysdir + "/holders"): if not folder.startswith("dm-"): continue name = get_file_content(sysdir + "/holders/" + folder + "/dm/name") if name: d['holders'].append(name) else: d['holders'].append(folder) self.facts['devices'][diskname] = d class SunOSHardware(Hardware): """ In addition to the generic memory and cpu facts, this also sets swap_reserved_mb and swap_allocated_mb that is available from *swap -s*. """ platform = 'SunOS' def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() return self.facts def get_cpu_facts(self): physid = 0 sockets = {} rc, out, err = module.run_command("/usr/bin/kstat cpu_info") self.facts['processor'] = [] for line in out.split('\n'): if len(line) < 1: continue data = line.split(None, 1) key = data[0].strip() # "brand" works on Solaris 10 & 11. "implementation" for Solaris 9. if key == 'module:': brand = '' elif key == 'brand': brand = data[1].strip() elif key == 'clock_MHz': clock_mhz = data[1].strip() elif key == 'implementation': processor = brand or data[1].strip() # Add clock speed to description for SPARC CPU if self.facts['machine'] != 'i86pc': processor += " @ " + clock_mhz + "MHz" if 'processor' not in self.facts: self.facts['processor'] = [] self.facts['processor'].append(processor) elif key == 'chip_id': physid = data[1].strip() if physid not in sockets: sockets[physid] = 1 else: sockets[physid] += 1 # Counting cores on Solaris can be complicated. # https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu # Treat 'processor_count' as physical sockets and 'processor_cores' as # virtual CPUs visisble to Solaris. Not a true count of cores for modern SPARC as # these processors have: sockets -> cores -> threads/virtual CPU. 
if len(sockets) > 0: self.facts['processor_count'] = len(sockets) self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values()) else: self.facts['processor_cores'] = 'NA' self.facts['processor_count'] = len(self.facts['processor']) def get_memory_facts(self): rc, out, err = module.run_command(["/usr/sbin/prtconf"]) for line in out.split('\n'): if 'Memory size' in line: self.facts['memtotal_mb'] = line.split()[2] rc, out, err = module.run_command("/usr/sbin/swap -s") allocated = long(out.split()[1][:-1]) reserved = long(out.split()[5][:-1]) used = long(out.split()[8][:-1]) free = long(out.split()[10][:-1]) self.facts['swapfree_mb'] = free / 1024 self.facts['swaptotal_mb'] = (free + used) / 1024 self.facts['swap_allocated_mb'] = allocated / 1024 self.facts['swap_reserved_mb'] = reserved / 1024 class OpenBSDHardware(Hardware): """ OpenBSD-specific subclass of Hardware. Defines memory, CPU and device facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count - processor_speed - devices """ platform = 'OpenBSD' DMESG_BOOT = '/var/run/dmesg.boot' def __init__(self): Hardware.__init__(self) def populate(self): self.sysctl = self.get_sysctl() self.get_memory_facts() self.get_processor_facts() self.get_device_facts() return self.facts def get_sysctl(self): rc, out, err = module.run_command(["/sbin/sysctl", "hw"]) if rc != 0: return dict() sysctl = dict() for line in out.splitlines(): (key, value) = line.split('=') sysctl[key] = value.strip() return sysctl def get_memory_facts(self): # Get free memory. vmstat output looks like: # procs memory page disks traps cpu # r b w avm fre flt re pi po fr sr wd0 fd0 int sys cs us sy id # 0 0 0 47512 28160 51 0 0 0 0 0 1 0 116 89 17 0 1 99 rc, out, err = module.run_command("/usr/bin/vmstat") if rc == 0: self.facts['memfree_mb'] = long(out.splitlines()[-1].split()[4]) / 1024 self.facts['memtotal_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024 # Get swapctl info. swapctl output looks like: # total: 69268 1K-blocks allocated, 0 used, 69268 available # And for older OpenBSD: # total: 69268k bytes allocated = 0k used, 69268k available rc, out, err = module.run_command("/sbin/swapctl -sk") if rc == 0: data = out.split() self.facts['swapfree_mb'] = long(data[-2].translate(None, "kmg")) / 1024 self.facts['swaptotal_mb'] = long(data[1].translate(None, "kmg")) / 1024 def get_processor_facts(self): processor = [] dmesg_boot = get_file_content(OpenBSDHardware.DMESG_BOOT) if not dmesg_boot: rc, dmesg_boot, err = module.run_command("/sbin/dmesg") i = 0 for line in dmesg_boot.splitlines(): if line.split(' ', 1)[0] == 'cpu%i:' % i: processor.append(line.split(' ', 1)[1]) i = i + 1 processor_count = i self.facts['processor'] = processor self.facts['processor_count'] = processor_count # I found no way to figure out the number of Cores per CPU in OpenBSD self.facts['processor_cores'] = 'NA' def get_device_facts(self): devices = [] devices.extend(self.sysctl['hw.disknames'].split(',')) self.facts['devices'] = devices class FreeBSDHardware(Hardware): """ FreeBSD-specific subclass of Hardware. 
Defines memory and CPU facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count - devices """ platform = 'FreeBSD' DMESG_BOOT = '/var/run/dmesg.boot' def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() self.get_dmi_facts() self.get_device_facts() self.get_mount_facts() return self.facts def get_cpu_facts(self): self.facts['processor'] = [] rc, out, err = module.run_command("/sbin/sysctl -n hw.ncpu") self.facts['processor_count'] = out.strip() dmesg_boot = get_file_content(FreeBSDHardware.DMESG_BOOT) if not dmesg_boot: rc, dmesg_boot, err = module.run_command("/sbin/dmesg") for line in dmesg_boot.split('\n'): if 'CPU:' in line: cpu = re.sub(r'CPU:\s+', r"", line) self.facts['processor'].append(cpu.strip()) if 'Logical CPUs per core' in line: self.facts['processor_cores'] = line.split()[4] def get_memory_facts(self): rc, out, err = module.run_command("/sbin/sysctl vm.stats") for line in out.split('\n'): data = line.split() if 'vm.stats.vm.v_page_size' in line: pagesize = long(data[1]) if 'vm.stats.vm.v_page_count' in line: pagecount = long(data[1]) if 'vm.stats.vm.v_free_count' in line: freecount = long(data[1]) self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024 self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024 # Get swapinfo. swapinfo output looks like: # Device 1M-blocks Used Avail Capacity # /dev/ada0p3 314368 0 314368 0% # rc, out, err = module.run_command("/usr/sbin/swapinfo -m") lines = out.split('\n') if len(lines[-1]) == 0: lines.pop() data = lines[-1].split() self.facts['swaptotal_mb'] = data[1] self.facts['swapfree_mb'] = data[3] def get_mount_facts(self): self.facts['mounts'] = [] fstab = get_file_content('/etc/fstab') if fstab: for line in fstab.split('\n'): if line.startswith('#') or line.strip() == '': continue fields = re.sub(r'\s+',' ',line.rstrip('\n')).split() self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]}) def get_device_facts(self): sysdir = '/dev' self.facts['devices'] = {} drives = re.compile('(ada?\d+|da\d+|a?cd\d+)') #TODO: rc, disks, err = module.run_command("/sbin/sysctl kern.disks") slices = re.compile('(ada?\d+s\d+\w*|da\d+s\d+\w*)') if os.path.isdir(sysdir): dirlist = sorted(os.listdir(sysdir)) for device in dirlist: d = drives.match(device) if d: self.facts['devices'][d.group(1)] = [] s = slices.match(device) if s: self.facts['devices'][d.group(1)].append(s.group(1)) def get_dmi_facts(self): ''' learn dmi facts from system Use dmidecode executable if available''' # Fall back to using dmidecode, if available dmi_bin = module.get_bin_path('dmidecode') DMI_DICT = dict( bios_date='bios-release-date', bios_version='bios-version', form_factor='chassis-type', product_name='system-product-name', product_serial='system-serial-number', product_uuid='system-uuid', product_version='system-version', system_vendor='system-manufacturer' ) for (k, v) in DMI_DICT.items(): if dmi_bin is not None: (rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v)) if rc == 0: # Strip out commented lines (specific dmidecode output) self.facts[k] = ''.join([ line for line in out.split('\n') if not line.startswith('#') ]) try: json.dumps(self.facts[k]) except UnicodeDecodeError: self.facts[k] = 'NA' else: self.facts[k] = 'NA' else: self.facts[k] = 'NA' class NetBSDHardware(Hardware): """ NetBSD-specific subclass of Hardware. 
Defines memory and CPU facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count - devices """ platform = 'NetBSD' MEMORY_FACTS = ['MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'] def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() self.get_mount_facts() return self.facts def get_cpu_facts(self): i = 0 physid = 0 sockets = {} if not os.access("/proc/cpuinfo", os.R_OK): return self.facts['processor'] = [] for line in open("/proc/cpuinfo").readlines(): data = line.split(":", 1) key = data[0].strip() # model name is for Intel arch, Processor (mind the uppercase P) # works for some ARM devices, like the Sheevaplug. if key == 'model name' or key == 'Processor': if 'processor' not in self.facts: self.facts['processor'] = [] self.facts['processor'].append(data[1].strip()) i += 1 elif key == 'physical id': physid = data[1].strip() if physid not in sockets: sockets[physid] = 1 elif key == 'cpu cores': sockets[physid] = int(data[1].strip()) if len(sockets) > 0: self.facts['processor_count'] = len(sockets) self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values()) else: self.facts['processor_count'] = i self.facts['processor_cores'] = 'NA' def get_memory_facts(self): if not os.access("/proc/meminfo", os.R_OK): return for line in open("/proc/meminfo").readlines(): data = line.split(":", 1) key = data[0] if key in NetBSDHardware.MEMORY_FACTS: val = data[1].strip().split(' ')[0] self.facts["%s_mb" % key.lower()] = long(val) / 1024 def get_mount_facts(self): self.facts['mounts'] = [] fstab = get_file_content('/etc/fstab') if fstab: for line in fstab.split('\n'): if line.startswith('#') or line.strip() == '': continue fields = re.sub(r'\s+',' ',line.rstrip('\n')).split() self.facts['mounts'].append({'mount': fields[1] , 'device': fields[0], 'fstype' : fields[2], 'options': fields[3]}) class AIX(Hardware): """ AIX-specific subclass of Hardware. Defines memory and CPU facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor (a list) - processor_cores - processor_count """ platform = 'AIX' def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() self.get_dmi_facts() return self.facts def get_cpu_facts(self): self.facts['processor'] = [] rc, out, err = module.run_command("/usr/sbin/lsdev -Cc processor") if out: i = 0 for line in out.split('\n'): if 'Available' in line: if i == 0: data = line.split(' ') cpudev = data[0] i += 1 self.facts['processor_count'] = int(i) rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a type") data = out.split(' ') self.facts['processor'] = data[1] rc, out, err = module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a smt_threads") data = out.split(' ') self.facts['processor_cores'] = int(data[1]) def get_memory_facts(self): pagesize = 4096 rc, out, err = module.run_command("/usr/bin/vmstat -v") for line in out.split('\n'): data = line.split() if 'memory pages' in line: pagecount = long(data[0]) if 'free pages' in line: freecount = long(data[0]) self.facts['memtotal_mb'] = pagesize * pagecount / 1024 / 1024 self.facts['memfree_mb'] = pagesize * freecount / 1024 / 1024 # Get swapinfo. 
swapinfo output looks like: # Device 1M-blocks Used Avail Capacity # /dev/ada0p3 314368 0 314368 0% # rc, out, err = module.run_command("/usr/sbin/lsps -s") if out: lines = out.split('\n') data = lines[1].split() swaptotal_mb = long(data[0].rstrip('MB')) percused = int(data[1].rstrip('%')) self.facts['swaptotal_mb'] = swaptotal_mb self.facts['swapfree_mb'] = long(swaptotal_mb * ( 100 - percused ) / 100) def get_dmi_facts(self): rc, out, err = module.run_command("/usr/sbin/lsattr -El sys0 -a fwversion") data = out.split() self.facts['firmware_version'] = data[1].strip('IBM,') class HPUX(Hardware): """ HP-UX-specifig subclass of Hardware. Defines memory and CPU facts: - memfree_mb - memtotal_mb - swapfree_mb - swaptotal_mb - processor - processor_cores - processor_count - model - firmware """ platform = 'HP-UX' def __init__(self): Hardware.__init__(self) def populate(self): self.get_cpu_facts() self.get_memory_facts() self.get_hw_facts() return self.facts def get_cpu_facts(self): if self.facts['architecture'] == '9000/800': rc, out, err = module.run_command("ioscan -FkCprocessor|wc -l") self.facts['processor_count'] = int(out.strip()) #Working with machinfo mess elif self.facts['architecture'] == 'ia64': if self.facts['distribution_version'] == "B.11.23": rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep 'Number of CPUs'") self.facts['processor_count'] = int(out.strip().split('=')[1]) rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep 'processor family'") self.facts['processor'] = re.search('.*(Intel.*)', out).groups()[0].strip() rc, out, err = module.run_command("ioscan -FkCprocessor|wc -l") self.facts['processor_cores'] = int(out.strip()) if self.facts['distribution_version'] == "B.11.31": #if machinfo return cores strings release B.11.31 > 1204 rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep core|wc -l") if out.strip()== '0': rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel") self.facts['processor_count'] = int(out.strip().split(" ")[0]) #If hyperthreading is active divide cores by 2 rc, out, err = module.run_command("/usr/sbin/psrset |grep LCPU") data = re.sub(' +',' ',out).strip().split(' ') if len(data) == 1: hyperthreading = 'OFF' else: hyperthreading = data[1] rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep logical") data = out.strip().split(" ") if hyperthreading == 'ON': self.facts['processor_cores'] = int(data[0])/2 else: if len(data) == 1: self.facts['processor_cores'] = self.facts['processor_count'] else: self.facts['processor_cores'] = int(data[0]) rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel |cut -d' ' -f4-") self.facts['processor'] = out.strip() else: rc, out, err = module.run_command("/usr/contrib/bin/machinfo |egrep 'socket[s]?$' | tail -1") self.facts['processor_count'] = int(out.strip().split(" ")[0]) rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep -e '[0-9] core' |tail -1") self.facts['processor_cores'] = int(out.strip().split(" ")[0]) rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Intel") self.facts['processor'] = out.strip() def get_memory_facts(self): pagesize = 4096 rc, out, err = module.run_command("/usr/bin/vmstat|tail -1") data = int(re.sub(' +',' ',out).split(' ')[5].strip()) self.facts['memfree_mb'] = pagesize * data / 1024 / 1024 if self.facts['architecture'] == '9000/800': rc, out, err = module.run_command("grep Physical /var/adm/syslog/syslog.log") data = re.search('.*Physical: ([0-9]*) 
Kbytes.*',out).groups()[0].strip() self.facts['memtotal_mb'] = int(data) / 1024 else: rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep Memory") data = re.search('Memory[\ :=]*([0-9]*).*MB.*',out).groups()[0].strip() self.facts['memtotal_mb'] = int(data) rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f -q") self.facts['swaptotal_mb'] = int(out.strip()) rc, out, err = module.run_command("/usr/sbin/swapinfo -m -d -f |egrep '^dev|^fs'") swap = 0 for line in out.strip().split('\n'): swap += int(re.sub(' +',' ',line).split(' ')[3].strip()) self.facts['swapfree_mb'] = swap def get_hw_facts(self): rc, out, err = module.run_command("model") self.facts['model'] = out.strip() if self.facts['architecture'] == 'ia64': rc, out, err = module.run_command("/usr/contrib/bin/machinfo |grep -i 'Firmware revision' |grep -v BMC") self.facts['firmware_version'] = out.split(':')[1].strip() class Darwin(Hardware): """ Darwin-specific subclass of Hardware. Defines memory and CPU facts: - processor - processor_cores - memtotal_mb - memfree_mb - model - osversion - osrevision """ platform = 'Darwin' def __init__(self): Hardware.__init__(self) def populate(self): self.sysctl = self.get_sysctl() self.get_mac_facts() self.get_cpu_facts() self.get_memory_facts() return self.facts def get_sysctl(self): rc, out, err = module.run_command(["/usr/sbin/sysctl", "hw", "machdep", "kern"]) if rc != 0: return dict() sysctl = dict() for line in out.splitlines(): if line.rstrip("\n"): (key, value) = re.split(' = |: ', line, maxsplit=1) sysctl[key] = value.strip() return sysctl def get_system_profile(self): rc, out, err = module.run_command(["/usr/sbin/system_profiler", "SPHardwareDataType"]) if rc != 0: return dict() system_profile = dict() for line in out.splitlines(): if ': ' in line: (key, value) = line.split(': ', 1) system_profile[key.strip()] = ' '.join(value.strip().split()) return system_profile def get_mac_facts(self): self.facts['model'] = self.sysctl['hw.model'] self.facts['osversion'] = self.sysctl['kern.osversion'] self.facts['osrevision'] = self.sysctl['kern.osrevision'] def get_cpu_facts(self): if 'machdep.cpu.brand_string' in self.sysctl: # Intel self.facts['processor'] = self.sysctl['machdep.cpu.brand_string'] self.facts['processor_cores'] = self.sysctl['machdep.cpu.core_count'] else: # PowerPC system_profile = self.get_system_profile() self.facts['processor'] = '%s @ %s' % (system_profile['Processor Name'], system_profile['Processor Speed']) self.facts['processor_cores'] = self.sysctl['hw.physicalcpu'] def get_memory_facts(self): self.facts['memtotal_mb'] = long(self.sysctl['hw.memsize']) / 1024 / 1024 self.facts['memfree_mb'] = long(self.sysctl['hw.usermem']) / 1024 / 1024 class Network(Facts): """ This is a generic Network subclass of Facts. This should be further subclassed to implement per platform. If you subclass this, you must define: - interfaces (a list of interface names) - interface_ dictionary of ipv4, ipv6, and mac address information. All subclasses MUST define platform. 
""" platform = 'Generic' IPV6_SCOPE = { '0' : 'global', '10' : 'host', '20' : 'link', '40' : 'admin', '50' : 'site', '80' : 'organization' } def __new__(cls, *arguments, **keyword): subclass = cls for sc in Network.__subclasses__(): if sc.platform == platform.system(): subclass = sc return super(cls, subclass).__new__(subclass, *arguments, **keyword) def __init__(self, module): self.module = module Facts.__init__(self) def populate(self): return self.facts class LinuxNetwork(Network): """ This is a Linux-specific subclass of Network. It defines - interfaces (a list of interface names) - interface_ dictionary of ipv4, ipv6, and mac address information. - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. - ipv4_address and ipv6_address: the first non-local address for each family. """ platform = 'Linux' def __init__(self, module): Network.__init__(self, module) def populate(self): ip_path = self.module.get_bin_path('ip') if ip_path is None: return self.facts default_ipv4, default_ipv6 = self.get_default_interfaces(ip_path) interfaces, ips = self.get_interfaces_info(ip_path, default_ipv4, default_ipv6) self.facts['interfaces'] = interfaces.keys() for iface in interfaces: self.facts[iface] = interfaces[iface] self.facts['default_ipv4'] = default_ipv4 self.facts['default_ipv6'] = default_ipv6 self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] return self.facts def get_default_interfaces(self, ip_path): # Use the commands: # ip -4 route get 8.8.8.8 -> Google public DNS # ip -6 route get 2404:6800:400a:800::1012 -> ipv6.google.com # to find out the default outgoing interface, address, and gateway command = dict( v4 = [ip_path, '-4', 'route', 'get', '8.8.8.8'], v6 = [ip_path, '-6', 'route', 'get', '2404:6800:400a:800::1012'] ) interface = dict(v4 = {}, v6 = {}) for v in 'v4', 'v6': if v == 'v6' and self.facts['os_family'] == 'RedHat' \ and self.facts['distribution_version'].startswith('4.'): continue if v == 'v6' and not socket.has_ipv6: continue rc, out, err = module.run_command(command[v]) if not out: # v6 routing may result in # RTNETLINK answers: Invalid argument continue words = out.split('\n')[0].split() # A valid output starts with the queried address on the first line if len(words) > 0 and words[0] == command[v][-1]: for i in range(len(words) - 1): if words[i] == 'dev': interface[v]['interface'] = words[i+1] elif words[i] == 'src': interface[v]['address'] = words[i+1] elif words[i] == 'via' and words[i+1] != command[v][-1]: interface[v]['gateway'] = words[i+1] return interface['v4'], interface['v6'] def get_interfaces_info(self, ip_path, default_ipv4, default_ipv6): interfaces = {} ips = dict( all_ipv4_addresses = [], all_ipv6_addresses = [], ) for path in glob.glob('/sys/class/net/*'): if not os.path.isdir(path): continue device = os.path.basename(path) interfaces[device] = { 'device': device } if os.path.exists(os.path.join(path, 'address')): macaddress = open(os.path.join(path, 'address')).read().strip() if macaddress and macaddress != '00:00:00:00:00:00': interfaces[device]['macaddress'] = macaddress if os.path.exists(os.path.join(path, 'mtu')): interfaces[device]['mtu'] = int(open(os.path.join(path, 'mtu')).read().strip()) if os.path.exists(os.path.join(path, 'operstate')): interfaces[device]['active'] = open(os.path.join(path, 'operstate')).read().strip() != 'down' # if os.path.exists(os.path.join(path, 'carrier')): # interfaces[device]['link'] = open(os.path.join(path, 
#     'carrier')).read().strip() == '1'
            if os.path.exists(os.path.join(path, 'device', 'driver', 'module')):
                interfaces[device]['module'] = os.path.basename(os.path.realpath(os.path.join(path, 'device', 'driver', 'module')))
            if os.path.exists(os.path.join(path, 'type')):
                # ARPHRD_* link-layer type codes from the kernel's if_arp.h
                type = open(os.path.join(path, 'type')).read().strip()
                if type == '1':
                    interfaces[device]['type'] = 'ether'
                elif type == '512':
                    interfaces[device]['type'] = 'ppp'
                elif type == '772':
                    interfaces[device]['type'] = 'loopback'
            if os.path.exists(os.path.join(path, 'bridge')):
                interfaces[device]['type'] = 'bridge'
                interfaces[device]['interfaces'] = [os.path.basename(b) for b in glob.glob(os.path.join(path, 'brif', '*'))]
                if os.path.exists(os.path.join(path, 'bridge', 'bridge_id')):
                    interfaces[device]['id'] = open(os.path.join(path, 'bridge', 'bridge_id')).read().strip()
                if os.path.exists(os.path.join(path, 'bridge', 'stp_state')):
                    interfaces[device]['stp'] = open(os.path.join(path, 'bridge', 'stp_state')).read().strip() == '1'
            if os.path.exists(os.path.join(path, 'bonding')):
                interfaces[device]['type'] = 'bonding'
                interfaces[device]['slaves'] = open(os.path.join(path, 'bonding', 'slaves')).read().split()
                interfaces[device]['mode'] = open(os.path.join(path, 'bonding', 'mode')).read().split()[0]
                interfaces[device]['miimon'] = open(os.path.join(path, 'bonding', 'miimon')).read().split()[0]
                interfaces[device]['lacp_rate'] = open(os.path.join(path, 'bonding', 'lacp_rate')).read().split()[0]
                primary = open(os.path.join(path, 'bonding', 'primary')).read()
                if primary:
                    interfaces[device]['primary'] = primary
                # use a separate name here so 'path' is not clobbered for the
                # flags check below
                asa_path = os.path.join(path, 'bonding', 'all_slaves_active')
                if os.path.exists(asa_path):
                    interfaces[device]['all_slaves_active'] = open(asa_path).read() == '1'

            # Check whether an interface is in promiscuous mode
            if os.path.exists(os.path.join(path, 'flags')):
                promisc_mode = False
                # The second byte indicates whether the interface is in promiscuous mode.
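# A standalone sketch of the flag test implemented just below: IFF_PROMISC
# is bit 0x100 of /sys/class/net/<dev>/flags (see linux/if.h), and the file
# contains the flag word in hex. is_promisc is a hypothetical helper name.
def is_promisc(device):
    flags_path = '/sys/class/net/%s/flags' % device
    if not os.path.exists(flags_path):
        return False
    return (int(open(flags_path).read().strip(), 16) & 0x0100) > 0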
# 1 = promisc # 0 = no promisc data = int(open(os.path.join(path, 'flags')).read().strip(),16) promisc_mode = (data & 0x0100 > 0) interfaces[device]['promisc'] = promisc_mode def parse_ip_output(output, secondary=False): for line in output.split('\n'): if not line: continue words = line.split() if words[0] == 'inet': if '/' in words[1]: address, netmask_length = words[1].split('/') else: # pointopoint interfaces do not have a prefix address = words[1] netmask_length = "32" address_bin = struct.unpack('!L', socket.inet_aton(address))[0] netmask_bin = (1<<32) - (1<<32>>int(netmask_length)) netmask = socket.inet_ntoa(struct.pack('!L', netmask_bin)) network = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin)) iface = words[-1] if iface != device: interfaces[iface] = {} if not secondary or "ipv4" not in interfaces[iface]: interfaces[iface]['ipv4'] = {'address': address, 'netmask': netmask, 'network': network} else: if "ipv4_secondaries" not in interfaces[iface]: interfaces[iface]["ipv4_secondaries"] = [] interfaces[iface]["ipv4_secondaries"].append({ 'address': address, 'netmask': netmask, 'network': network, }) # add this secondary IP to the main device if secondary: if "ipv4_secondaries" not in interfaces[device]: interfaces[device]["ipv4_secondaries"] = [] interfaces[device]["ipv4_secondaries"].append({ 'address': address, 'netmask': netmask, 'network': network, }) # If this is the default address, update default_ipv4 if 'address' in default_ipv4 and default_ipv4['address'] == address: default_ipv4['netmask'] = netmask default_ipv4['network'] = network default_ipv4['macaddress'] = macaddress default_ipv4['mtu'] = interfaces[device]['mtu'] default_ipv4['type'] = interfaces[device].get("type", "unknown") default_ipv4['alias'] = words[-1] if not address.startswith('127.'): ips['all_ipv4_addresses'].append(address) elif words[0] == 'inet6': address, prefix = words[1].split('/') scope = words[3] if 'ipv6' not in interfaces[device]: interfaces[device]['ipv6'] = [] interfaces[device]['ipv6'].append({ 'address' : address, 'prefix' : prefix, 'scope' : scope }) # If this is the default address, update default_ipv6 if 'address' in default_ipv6 and default_ipv6['address'] == address: default_ipv6['prefix'] = prefix default_ipv6['scope'] = scope default_ipv6['macaddress'] = macaddress default_ipv6['mtu'] = interfaces[device]['mtu'] default_ipv6['type'] = interfaces[device].get("type", "unknown") if not address == '::1': ips['all_ipv6_addresses'].append(address) ip_path = module.get_bin_path("ip") args = [ip_path, 'addr', 'show', 'primary', device] rc, stdout, stderr = self.module.run_command(args) primary_data = stdout args = [ip_path, 'addr', 'show', 'secondary', device] rc, stdout, stderr = self.module.run_command(args) secondary_data = stdout parse_ip_output(primary_data) parse_ip_output(secondary_data, secondary=True) # replace : by _ in interface name since they are hard to use in template new_interfaces = {} for i in interfaces: if ':' in i: new_interfaces[i.replace(':','_')] = interfaces[i] else: new_interfaces[i] = interfaces[i] return new_interfaces, ips class GenericBsdIfconfigNetwork(Network): """ This is a generic BSD subclass of Network using the ifconfig command. It defines - interfaces (a list of interface names) - interface_ dictionary of ipv4, ipv6, and mac address information. - all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses. 
It currently does not define - default_ipv4 and default_ipv6 - type, mtu and network on interfaces """ platform = 'Generic_BSD_Ifconfig' def __init__(self, module): Network.__init__(self, module) def populate(self): ifconfig_path = module.get_bin_path('ifconfig') if ifconfig_path is None: return self.facts route_path = module.get_bin_path('route') if route_path is None: return self.facts default_ipv4, default_ipv6 = self.get_default_interfaces(route_path) interfaces, ips = self.get_interfaces_info(ifconfig_path) self.merge_default_interface(default_ipv4, interfaces, 'ipv4') self.merge_default_interface(default_ipv6, interfaces, 'ipv6') self.facts['interfaces'] = interfaces.keys() for iface in interfaces: self.facts[iface] = interfaces[iface] self.facts['default_ipv4'] = default_ipv4 self.facts['default_ipv6'] = default_ipv6 self.facts['all_ipv4_addresses'] = ips['all_ipv4_addresses'] self.facts['all_ipv6_addresses'] = ips['all_ipv6_addresses'] return self.facts def get_default_interfaces(self, route_path): # Use the commands: # route -n get 8.8.8.8 -> Google public DNS # route -n get -inet6 2404:6800:400a:800::1012 -> ipv6.google.com # to find out the default outgoing interface, address, and gateway command = dict( v4 = [route_path, '-n', 'get', '8.8.8.8'], v6 = [route_path, '-n', 'get', '-inet6', '2404:6800:400a:800::1012'] ) interface = dict(v4 = {}, v6 = {}) for v in 'v4', 'v6': if v == 'v6' and not socket.has_ipv6: continue rc, out, err = module.run_command(command[v]) if not out: # v6 routing may result in # RTNETLINK answers: Invalid argument continue lines = out.split('\n') for line in lines: words = line.split() # Collect output from route command if len(words) > 1: if words[0] == 'interface:': interface[v]['interface'] = words[1] if words[0] == 'gateway:': interface[v]['gateway'] = words[1] return interface['v4'], interface['v6'] def get_interfaces_info(self, ifconfig_path): interfaces = {} current_if = {} ips = dict( all_ipv4_addresses = [], all_ipv6_addresses = [], ) # FreeBSD, DragonflyBSD, NetBSD, OpenBSD and OS X all implicitly add '-a' # when running the command 'ifconfig'. # Solaris must explicitly run the command 'ifconfig -a'. rc, out, err = module.run_command([ifconfig_path, '-a']) for line in out.split('\n'): if line: words = line.split() if re.match('^\S', line) and len(words) > 3: current_if = self.parse_interface_line(words) interfaces[ current_if['device'] ] = current_if elif words[0].startswith('options='): self.parse_options_line(words, current_if, ips) elif words[0] == 'nd6': self.parse_nd6_line(words, current_if, ips) elif words[0] == 'ether': self.parse_ether_line(words, current_if, ips) elif words[0] == 'media:': self.parse_media_line(words, current_if, ips) elif words[0] == 'status:': self.parse_status_line(words, current_if, ips) elif words[0] == 'lladdr': self.parse_lladdr_line(words, current_if, ips) elif words[0] == 'inet': self.parse_inet_line(words, current_if, ips) elif words[0] == 'inet6': self.parse_inet6_line(words, current_if, ips) else: self.parse_unknown_line(words, current_if, ips) return interfaces, ips def parse_interface_line(self, words): device = words[0][0:-1] current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} current_if['flags'] = self.get_options(words[1]) current_if['mtu'] = words[3] current_if['macaddress'] = 'unknown' # will be overwritten later return current_if def parse_options_line(self, words, current_if, ips): # Mac has options like this... 
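# An illustrative OS X options line of the kind parsed here (sample value,
# not captured output); get_options(), defined further down, pulls out the
# comma-separated names between '<' and '>':
sample = 'options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>'
# self.get_options(sample) -> ['RXCSUM', 'TXCSUM', 'VLAN_HWTAGGING', 'TSO4']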
        current_if['options'] = self.get_options(words[0])

    def parse_nd6_line(self, words, current_if, ips):
        # FreeBSD has options like this...
        current_if['options'] = self.get_options(words[1])

    def parse_ether_line(self, words, current_if, ips):
        current_if['macaddress'] = words[1]

    def parse_media_line(self, words, current_if, ips):
        # not sure if this is useful - we also drop information
        current_if['media'] = words[1]
        if len(words) > 2:
            current_if['media_select'] = words[2]
        if len(words) > 3:
            current_if['media_type'] = words[3][1:]
        if len(words) > 4:
            current_if['media_options'] = self.get_options(words[4])

    def parse_status_line(self, words, current_if, ips):
        current_if['status'] = words[1]

    def parse_lladdr_line(self, words, current_if, ips):
        current_if['lladdr'] = words[1]

    def parse_inet_line(self, words, current_if, ips):
        address = {'address': words[1]}
        # deal with hex netmask
        if re.match('([0-9a-f]){8}', words[3]) and len(words[3]) == 8:
            words[3] = '0x' + words[3]
        if words[3].startswith('0x'):
            address['netmask'] = socket.inet_ntoa(struct.pack('!L', int(words[3], base=16)))
        else:
            # otherwise assume this is a dotted quad
            address['netmask'] = words[3]
        # calculate the network: AND the address with the netmask, both
        # packed into 32-bit integers
        address_bin = struct.unpack('!L', socket.inet_aton(address['address']))[0]
        netmask_bin = struct.unpack('!L', socket.inet_aton(address['netmask']))[0]
        address['network'] = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin))
        # broadcast may be given, or we need to calculate it (address OR the
        # inverted netmask)
        if len(words) > 5:
            address['broadcast'] = words[5]
        else:
            address['broadcast'] = socket.inet_ntoa(struct.pack('!L', address_bin | (~netmask_bin & 0xffffffff)))
        # add to our list of addresses
        if not words[1].startswith('127.'):
            ips['all_ipv4_addresses'].append(address['address'])
        current_if['ipv4'].append(address)

    def parse_inet6_line(self, words, current_if, ips):
        address = {'address': words[1]}
        if (len(words) >= 4) and (words[2] == 'prefixlen'):
            address['prefix'] = words[3]
        if (len(words) >= 6) and (words[4] == 'scopeid'):
            address['scope'] = words[5]
        localhost6 = ['::1', '::1/128', 'fe80::1%lo0']
        if address['address'] not in localhost6:
            ips['all_ipv6_addresses'].append(address['address'])
        current_if['ipv6'].append(address)

    def parse_unknown_line(self, words, current_if, ips):
        # we are going to ignore unknown lines here - this may be
        # a bad idea - but you can override it in your subclass
        pass

    def get_options(self, option_string):
        start = option_string.find('<') + 1
        end = option_string.rfind('>')
        if (start > 0) and (end > 0) and (end > start + 1):
            option_csv = option_string[start:end]
            return option_csv.split(',')
        else:
            return []

    def merge_default_interface(self, defaults, interfaces, ip_type):
        if not 'interface' in defaults.keys():
            return
        if not defaults['interface'] in interfaces:
            return
        ifinfo = interfaces[defaults['interface']]
        # copy all the interface values across except addresses
        for item in ifinfo.keys():
            if item != 'ipv4' and item != 'ipv6':
                defaults[item] = ifinfo[item]
        if len(ifinfo[ip_type]) > 0:
            for item in ifinfo[ip_type][0].keys():
                defaults[item] = ifinfo[ip_type][0][item]

class DarwinNetwork(GenericBsdIfconfigNetwork, Network):
    """
    This is the Mac OS X/Darwin Network Class.
It uses the GenericBsdIfconfigNetwork unchanged """ platform = 'Darwin' # media line is different to the default FreeBSD one def parse_media_line(self, words, current_if, ips): # not sure if this is useful - we also drop information current_if['media'] = 'Unknown' # Mac does not give us this current_if['media_select'] = words[1] if len(words) > 2: current_if['media_type'] = words[2][1:] if len(words) > 3: current_if['media_options'] = self.get_options(words[3]) class FreeBSDNetwork(GenericBsdIfconfigNetwork, Network): """ This is the FreeBSD Network Class. It uses the GenericBsdIfconfigNetwork unchanged. """ platform = 'FreeBSD' class AIXNetwork(GenericBsdIfconfigNetwork, Network): """ This is the AIX Network Class. It uses the GenericBsdIfconfigNetwork unchanged. """ platform = 'AIX' # AIX 'ifconfig -a' does not have three words in the interface line def get_interfaces_info(self, ifconfig_path): interfaces = {} current_if = {} ips = dict( all_ipv4_addresses = [], all_ipv6_addresses = [], ) rc, out, err = module.run_command([ifconfig_path, '-a']) for line in out.split('\n'): if line: words = line.split() # only this condition differs from GenericBsdIfconfigNetwork if re.match('^\w*\d*:', line): current_if = self.parse_interface_line(words) interfaces[ current_if['device'] ] = current_if elif words[0].startswith('options='): self.parse_options_line(words, current_if, ips) elif words[0] == 'nd6': self.parse_nd6_line(words, current_if, ips) elif words[0] == 'ether': self.parse_ether_line(words, current_if, ips) elif words[0] == 'media:': self.parse_media_line(words, current_if, ips) elif words[0] == 'status:': self.parse_status_line(words, current_if, ips) elif words[0] == 'lladdr': self.parse_lladdr_line(words, current_if, ips) elif words[0] == 'inet': self.parse_inet_line(words, current_if, ips) elif words[0] == 'inet6': self.parse_inet6_line(words, current_if, ips) else: self.parse_unknown_line(words, current_if, ips) return interfaces, ips # AIX 'ifconfig -a' does not inform about MTU, so remove current_if['mtu'] here def parse_interface_line(self, words): device = words[0][0:-1] current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} current_if['flags'] = self.get_options(words[1]) current_if['macaddress'] = 'unknown' # will be overwritten later return current_if class OpenBSDNetwork(GenericBsdIfconfigNetwork, Network): """ This is the OpenBSD Network Class. It uses the GenericBsdIfconfigNetwork. """ platform = 'OpenBSD' # Return macaddress instead of lladdr def parse_lladdr_line(self, words, current_if, ips): current_if['macaddress'] = words[1] class SunOSNetwork(GenericBsdIfconfigNetwork, Network): """ This is the SunOS Network Class. It uses the GenericBsdIfconfigNetwork. Solaris can have different FLAGS and MTU for IPv4 and IPv6 on the same interface so these facts have been moved inside the 'ipv4' and 'ipv6' lists. """ platform = 'SunOS' # Solaris 'ifconfig -a' will print interfaces twice, once for IPv4 and again for IPv6. # MTU and FLAGS also may differ between IPv4 and IPv6 on the same interface. # 'parse_interface_line()' checks for previously seen interfaces before defining # 'current_if' so that IPv6 facts don't clobber IPv4 facts (or vice versa). 
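# A minimal sketch of the subclass-selection pattern used by the Hardware,
# Network and Virtual base classes in this module: __new__ swaps in the
# subclass whose 'platform' attribute matches platform.system(). The class
# names here are illustrative only.
class _DemoBase(object):
    platform = 'Generic'
    def __new__(cls, *args, **kwargs):
        subclass = cls
        for sc in _DemoBase.__subclasses__():
            if sc.platform == platform.system():
                subclass = sc
        return super(cls, subclass).__new__(subclass)

class _DemoLinux(_DemoBase):
    platform = 'Linux'

# On a Linux host, _DemoBase() returns a _DemoLinux instance.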
def get_interfaces_info(self, ifconfig_path): interfaces = {} current_if = {} ips = dict( all_ipv4_addresses = [], all_ipv6_addresses = [], ) rc, out, err = module.run_command([ifconfig_path, '-a']) for line in out.split('\n'): if line: words = line.split() if re.match('^\S', line) and len(words) > 3: current_if = self.parse_interface_line(words, current_if, interfaces) interfaces[ current_if['device'] ] = current_if elif words[0].startswith('options='): self.parse_options_line(words, current_if, ips) elif words[0] == 'nd6': self.parse_nd6_line(words, current_if, ips) elif words[0] == 'ether': self.parse_ether_line(words, current_if, ips) elif words[0] == 'media:': self.parse_media_line(words, current_if, ips) elif words[0] == 'status:': self.parse_status_line(words, current_if, ips) elif words[0] == 'lladdr': self.parse_lladdr_line(words, current_if, ips) elif words[0] == 'inet': self.parse_inet_line(words, current_if, ips) elif words[0] == 'inet6': self.parse_inet6_line(words, current_if, ips) else: self.parse_unknown_line(words, current_if, ips) # 'parse_interface_line' and 'parse_inet*_line' leave two dicts in the # ipv4/ipv6 lists which is ugly and hard to read. # This quick hack merges the dictionaries. Purely cosmetic. for iface in interfaces: for v in 'ipv4', 'ipv6': combined_facts = {} for facts in interfaces[iface][v]: combined_facts.update(facts) if len(combined_facts.keys()) > 0: interfaces[iface][v] = [combined_facts] return interfaces, ips def parse_interface_line(self, words, current_if, interfaces): device = words[0][0:-1] if device not in interfaces.keys(): current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'} else: current_if = interfaces[device] flags = self.get_options(words[1]) if 'IPv4' in flags: v = 'ipv4' if 'IPv6' in flags: v = 'ipv6' current_if[v].append({'flags': flags, 'mtu': words[3]}) current_if['macaddress'] = 'unknown' # will be overwritten later return current_if # Solaris displays single digit octets in MAC addresses e.g. 0:1:2:d:e:f # Add leading zero to each octet where needed. def parse_ether_line(self, words, current_if, ips): macaddress = '' for octet in words[1].split(':'): octet = ('0' + octet)[-2:None] macaddress += (octet + ':') current_if['macaddress'] = macaddress[0:-1] class Virtual(Facts): """ This is a generic Virtual subclass of Facts. This should be further subclassed to implement per platform. If you subclass this, you should define: - virtualization_type - virtualization_role - container (e.g. solaris zones, freebsd jails, linux containers) All subclasses MUST define platform. """ def __new__(cls, *arguments, **keyword): subclass = cls for sc in Virtual.__subclasses__(): if sc.platform == platform.system(): subclass = sc return super(cls, subclass).__new__(subclass, *arguments, **keyword) def __init__(self): Facts.__init__(self) def populate(self): return self.facts class LinuxVirtual(Virtual): """ This is a Linux-specific subclass of Virtual. 
It defines - virtualization_type - virtualization_role """ platform = 'Linux' def __init__(self): Virtual.__init__(self) def populate(self): self.get_virtual_facts() return self.facts # For more information, check: http://people.redhat.com/~rjones/virt-what/ def get_virtual_facts(self): if os.path.exists("/proc/xen"): self.facts['virtualization_type'] = 'xen' self.facts['virtualization_role'] = 'guest' try: for line in open('/proc/xen/capabilities'): if "control_d" in line: self.facts['virtualization_role'] = 'host' except IOError: pass return if os.path.exists('/proc/vz'): self.facts['virtualization_type'] = 'openvz' if os.path.exists('/proc/bc'): self.facts['virtualization_role'] = 'host' else: self.facts['virtualization_role'] = 'guest' return if os.path.exists('/proc/1/cgroup'): for line in open('/proc/1/cgroup').readlines(): if re.search('/lxc/', line): self.facts['virtualization_type'] = 'lxc' self.facts['virtualization_role'] = 'guest' return product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name') if product_name in ['KVM', 'Bochs']: self.facts['virtualization_type'] = 'kvm' self.facts['virtualization_role'] = 'guest' return if product_name == 'RHEV Hypervisor': self.facts['virtualization_type'] = 'RHEV' self.facts['virtualization_role'] = 'guest' return if product_name == 'VMware Virtual Platform': self.facts['virtualization_type'] = 'VMware' self.facts['virtualization_role'] = 'guest' return bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor') if bios_vendor == 'Xen': self.facts['virtualization_type'] = 'xen' self.facts['virtualization_role'] = 'guest' return if bios_vendor == 'innotek GmbH': self.facts['virtualization_type'] = 'virtualbox' self.facts['virtualization_role'] = 'guest' return sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor') # FIXME: This does also match hyperv if sys_vendor == 'Microsoft Corporation': self.facts['virtualization_type'] = 'VirtualPC' self.facts['virtualization_role'] = 'guest' return if sys_vendor == 'Parallels Software International Inc.': self.facts['virtualization_type'] = 'parallels' self.facts['virtualization_role'] = 'guest' return if os.path.exists('/proc/self/status'): for line in open('/proc/self/status').readlines(): if re.match('^VxID: \d+', line): self.facts['virtualization_type'] = 'linux_vserver' if re.match('^VxID: 0', line): self.facts['virtualization_role'] = 'host' else: self.facts['virtualization_role'] = 'guest' return if os.path.exists('/proc/cpuinfo'): for line in open('/proc/cpuinfo').readlines(): if re.match('^model name.*QEMU Virtual CPU', line): self.facts['virtualization_type'] = 'kvm' elif re.match('^vendor_id.*User Mode Linux', line): self.facts['virtualization_type'] = 'uml' elif re.match('^model name.*UML', line): self.facts['virtualization_type'] = 'uml' elif re.match('^vendor_id.*PowerVM Lx86', line): self.facts['virtualization_type'] = 'powervm_lx86' elif re.match('^vendor_id.*IBM/S390', line): self.facts['virtualization_type'] = 'ibm_systemz' else: continue self.facts['virtualization_role'] = 'guest' return # Beware that we can have both kvm and virtualbox running on a single system if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK): modules = [] for line in open("/proc/modules").readlines(): data = line.split(" ", 1) modules.append(data[0]) if 'kvm' in modules: self.facts['virtualization_type'] = 'kvm' self.facts['virtualization_role'] = 'host' return if 'vboxdrv' in modules: self.facts['virtualization_type'] = 'virtualbox' 
self.facts['virtualization_role'] = 'host' return class HPUXVirtual(Virtual): """ This is a HP-UX specific subclass of Virtual. It defines - virtualization_type - virtualization_role """ platform = 'HP-UX' def __init__(self): Virtual.__init__(self) def populate(self): self.get_virtual_facts() return self.facts def get_virtual_facts(self): if os.path.exists('/usr/sbin/vecheck'): rc, out, err = module.run_command("/usr/sbin/vecheck") if rc == 0: self.facts['virtualization_type'] = 'guest' self.facts['virtualization_role'] = 'HP vPar' if os.path.exists('/opt/hpvm/bin/hpvminfo'): rc, out, err = module.run_command("/opt/hpvm/bin/hpvminfo") if rc == 0 and re.match('.*Running.*HPVM vPar.*', out): self.facts['virtualization_type'] = 'guest' self.facts['virtualization_role'] = 'HPVM vPar' elif rc == 0 and re.match('.*Running.*HPVM guest.*', out): self.facts['virtualization_type'] = 'guest' self.facts['virtualization_role'] = 'HPVM IVM' elif rc == 0 and re.match('.*Running.*HPVM host.*', out): self.facts['virtualization_type'] = 'host' self.facts['virtualization_role'] = 'HPVM' if os.path.exists('/usr/sbin/parstatus'): rc, out, err = module.run_command("/usr/sbin/parstatus") if rc == 0: self.facts['virtualization_type'] = 'guest' self.facts['virtualization_role'] = 'HP nPar' class SunOSVirtual(Virtual): """ This is a SunOS-specific subclass of Virtual. It defines - virtualization_type - virtualization_role - container """ platform = 'SunOS' def __init__(self): Virtual.__init__(self) def populate(self): self.get_virtual_facts() return self.facts def get_virtual_facts(self): rc, out, err = module.run_command("/usr/sbin/prtdiag") for line in out.split('\n'): if 'VMware' in line: self.facts['virtualization_type'] = 'vmware' self.facts['virtualization_role'] = 'guest' if 'Parallels' in line: self.facts['virtualization_type'] = 'parallels' self.facts['virtualization_role'] = 'guest' if 'VirtualBox' in line: self.facts['virtualization_type'] = 'virtualbox' self.facts['virtualization_role'] = 'guest' if 'HVM domU' in line: self.facts['virtualization_type'] = 'xen' self.facts['virtualization_role'] = 'guest' # Check if it's a zone if os.path.exists("/usr/bin/zonename"): rc, out, err = module.run_command("/usr/bin/zonename") if out.rstrip() != "global": self.facts['container'] = 'zone' # Check if it's a branded zone (i.e. Solaris 8/9 zone) if os.path.isdir('/.SUNWnative'): self.facts['container'] = 'zone' # If it's a zone check if we can detect if our global zone is itself virtualized. # Relies on the "guest tools" (e.g. 
vmware tools) to be installed if 'container' in self.facts and self.facts['container'] == 'zone': rc, out, err = module.run_command("/usr/sbin/modinfo") for line in out.split('\n'): if 'VMware' in line: self.facts['virtualization_type'] = 'vmware' self.facts['virtualization_role'] = 'guest' if 'VirtualBox' in line: self.facts['virtualization_type'] = 'virtualbox' self.facts['virtualization_role'] = 'guest' def get_file_content(path, default=None): data = default if os.path.exists(path) and os.access(path, os.R_OK): data = open(path).read().strip() if len(data) == 0: data = default return data def ansible_facts(module): facts = {} facts.update(Facts().populate()) facts.update(Hardware().populate()) facts.update(Network(module).populate()) facts.update(Virtual().populate()) return facts # =========================================== def run_setup(module): setup_options = {} facts = ansible_facts(module) for (k, v) in facts.items(): setup_options["ansible_%s" % k.replace('-', '_')] = v # Look for the path to the facter and ohai binary and set # the variable to that path. facter_path = module.get_bin_path('facter') ohai_path = module.get_bin_path('ohai') # if facter is installed, and we can use --json because # ruby-json is ALSO installed, include facter data in the JSON if facter_path is not None: rc, out, err = module.run_command(facter_path + " --json") facter = True try: facter_ds = json.loads(out) except: facter = False if facter: for (k,v) in facter_ds.items(): setup_options["facter_%s" % k] = v # ditto for ohai if ohai_path is not None: rc, out, err = module.run_command(ohai_path) ohai = True try: ohai_ds = json.loads(out) except: ohai = False if ohai: for (k,v) in ohai_ds.items(): k2 = "ohai_%s" % k.replace('-', '_') setup_options[k2] = v setup_result = { 'ansible_facts': {} } for (k,v) in setup_options.items(): if module.params['filter'] == '*' or fnmatch.fnmatch(k, module.params['filter']): setup_result['ansible_facts'][k] = v # hack to keep --verbose from showing all the setup module results setup_result['verbose_override'] = True return setup_result def main(): global module module = AnsibleModule( argument_spec = dict( filter=dict(default="*", required=False), fact_path=dict(default='/etc/ansible/facts.d', required=False), ), supports_check_mode = True, ) data = run_setup(module) module.exit_json(**data) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/network/0000775000000000000000000000000012316627017015000 5ustar rootrootansible-1.5.4/library/network/get_url0000664000000000000000000002374312316627017016375 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Jan-Piet Mens # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
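# A standalone sketch of the fact namespacing done by run_setup() in the
# setup module above: every gathered key is exposed to playbooks as
# ansible_<key>, with '-' mapped to '_' so the names are usable in Jinja2
# templates. The sample facts dict is illustrative only.
sample_facts = {'memtotal-mb': 2007, 'os_family': 'RedHat'}
namespaced = {}
for (k, v) in sample_facts.items():
    namespaced["ansible_%s" % k.replace('-', '_')] = v
# namespaced -> {'ansible_memtotal_mb': 2007, 'ansible_os_family': 'RedHat'}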
# # see examples/playbooks/get_url.yml import shutil import datetime import re import tempfile DOCUMENTATION = ''' --- module: get_url short_description: Downloads files from HTTP, HTTPS, or FTP to node description: - Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server I(must) have direct access to the remote resource. - By default, if an environment variable C(_proxy) is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see `setting the environment `_), or by using the use_proxy option. version_added: "0.6" options: url: description: - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path required: true default: null aliases: [] dest: description: - absolute path of where to download the file to. - If C(dest) is a directory, either the server provided filename or, if none provided, the base name of the URL on the remote server will be used. If a directory, C(force) has no effect. If C(dest) is a directory, the file will always be downloaded (regardless of the force option), but replaced only if the contents changed. required: true default: null force: description: - If C(yes) and C(dest) is not a directory, will download the file every time and replace the file if the contents change. If C(no), the file will only be downloaded if the destination does not exist. Generally should be C(yes) only for small local files. Prior to 0.6, this module behaved as if C(yes) was the default. version_added: "0.7" required: false choices: [ "yes", "no" ] default: "no" aliases: [ "thirsty" ] sha256sum: description: - If a SHA-256 checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. version_added: "1.3" required: false default: null use_proxy: description: - if C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts. required: false default: 'yes' choices: ['yes', 'no'] validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. required: false default: 'yes' choices: ['yes', 'no'] others: description: - all arguments accepted by the M(file) module also work here required: false notes: - This module doesn't yet support configuration for proxies. # informational: requirements for nodes requirements: [ urllib2, urlparse ] author: Jan-Piet Mens ''' EXAMPLES=''' - name: download foo.conf get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440 - name: download file with sha256 check get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf sha256sum=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c ''' try: import hashlib HAS_HASHLIB=True except ImportError: HAS_HASHLIB=False # ============================================================== # url handling def url_filename(url): fn = os.path.basename(urlparse.urlsplit(url)[2]) if fn == '': return 'index.html' return fn def url_get(module, url, dest, use_proxy, last_mod_time, force): """ Download data from the url and store in a temporary file. 
Return (tempfile, info about the request) """ rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time) if info['status'] == 304: module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', '')) # create a temporary file and copy content to do md5-based replacement if info['status'] != 200: module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest) fd, tempname = tempfile.mkstemp() f = os.fdopen(fd, 'wb') try: shutil.copyfileobj(rsp, f) except Exception, err: os.remove(tempname) module.fail_json(msg="failed to create temporary content file: %s" % str(err)) f.close() rsp.close() return tempname, info def extract_filename_from_headers(headers): """ Extracts a filename from the given dict of HTTP headers. Looks for the content-disposition header and applies a regex. Returns the filename if successful, else None.""" cont_disp_regex = 'attachment; ?filename="(.+)"' res = None if 'content-disposition' in headers: cont_disp = headers['content-disposition'] match = re.match(cont_disp_regex, cont_disp) if match: res = match.group(1) # Try preventing any funny business. res = os.path.basename(res) return res # ============================================================== # main def main(): argument_spec = url_argument_spec() argument_spec.update( dest = dict(required=True), sha256sum = dict(default=''), ) module = AnsibleModule( # not checking because of daisy chain to file module argument_spec = argument_spec, add_file_common_args=True ) url = module.params['url'] dest = os.path.expanduser(module.params['dest']) force = module.params['force'] sha256sum = module.params['sha256sum'] use_proxy = module.params['use_proxy'] dest_is_dir = os.path.isdir(dest) last_mod_time = None if not dest_is_dir and os.path.exists(dest): if not force: module.exit_json(msg="file already exists", dest=dest, url=url, changed=False) # If the file already exists, prepare the last modified time for the # request. mtime = os.path.getmtime(dest) last_mod_time = datetime.datetime.utcfromtimestamp(mtime) # download to tmpsrc tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force) # Now the request has completed, we can finally generate the final # destination file name from the info dict. if dest_is_dir: filename = extract_filename_from_headers(info) if not filename: # Fall back to extracting the filename from the URL. # Pluck the URL from the info, since a redirect could have changed # it. 
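# A usage sketch for extract_filename_from_headers() above; the
# os.path.basename() call keeps a hostile server from smuggling path
# components into the suggested filename. The header value is illustrative.
sample_headers = {'content-disposition': 'attachment; filename="../../etc/motd"'}
# extract_filename_from_headers(sample_headers) -> 'motd', not '../../etc/motd'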
filename = url_filename(info['url']) dest = os.path.join(dest, filename) md5sum_src = None md5sum_dest = None # raise an error if there is no tmpsrc file if not os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg']) if not os.access(tmpsrc, os.R_OK): os.remove(tmpsrc) module.fail_json( msg="Source %s not readable" % (tmpsrc)) md5sum_src = module.md5(tmpsrc) # check if there is no dest file if os.path.exists(dest): # raise an error if copy has no permission on dest if not os.access(dest, os.W_OK): os.remove(tmpsrc) module.fail_json( msg="Destination %s not writable" % (dest)) if not os.access(dest, os.R_OK): os.remove(tmpsrc) module.fail_json( msg="Destination %s not readable" % (dest)) md5sum_dest = module.md5(dest) else: if not os.access(os.path.dirname(dest), os.W_OK): os.remove(tmpsrc) module.fail_json( msg="Destination %s not writable" % (os.path.dirname(dest))) if md5sum_src != md5sum_dest: try: shutil.copyfile(tmpsrc, dest) except Exception, err: os.remove(tmpsrc) module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, str(err))) changed = True else: changed = False # Check the digest of the destination file and ensure that it matches the # sha256sum parameter if it is present if sha256sum != '': # Remove any non-alphanumeric characters, including the infamous # Unicode zero-width space stripped_sha256sum = re.sub(r'\W+', '', sha256sum) if not HAS_HASHLIB: os.remove(dest) module.fail_json(msg="The sha256sum parameter requires hashlib, which is available in Python 2.5 and higher") else: destination_checksum = module.sha256(dest) if stripped_sha256sum != destination_checksum: os.remove(dest) module.fail_json(msg="The SHA-256 checksum for %s did not match %s; it was %s." % (dest, sha256sum, destination_checksum)) os.remove(tmpsrc) # allow file attribute changes module.params['path'] = dest file_args = module.load_file_common_arguments(module.params) file_args['path'] = dest changed = module.set_file_attributes_if_different(file_args, changed) # Mission complete module.exit_json(url=url, dest=dest, src=tmpsrc, md5sum=md5sum_src, sha256sum=sha256sum, changed=changed, msg=info.get('msg', '')) # import module snippets from ansible.module_utils.basic import * from ansible.module_utils.urls import * main() ansible-1.5.4/library/network/slurp0000664000000000000000000000377512316627017016104 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . DOCUMENTATION = ''' --- module: slurp version_added: historical short_description: Slurps a file from remote nodes description: - This module works like M(fetch). It is used for fetching a base64- encoded blob containing the data in a remote file. options: src: description: - The file on the remote system to fetch. This I(must) be a file, not a directory. 
required: true default: null aliases: [] notes: - "See also: M(fetch)" requirements: [] author: Michael DeHaan ''' EXAMPLES = ''' ansible host -m slurp -a 'src=/tmp/xx' host | success >> { "content": "aGVsbG8gQW5zaWJsZSB3b3JsZAo=", "encoding": "base64" } ''' import base64 def main(): module = AnsibleModule( argument_spec = dict( src = dict(required=True, aliases=['path']), ), supports_check_mode=True ) source = module.params['src'] if not os.path.exists(source): module.fail_json(msg="file not found: %s" % source) if not os.access(source, os.R_OK): module.fail_json(msg="file is not readable: %s" % source) data = base64.b64encode(file(source).read()) module.exit_json(content=data, encoding='base64') # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/library/network/uri0000664000000000000000000004120512316627017015524 0ustar rootroot#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2013, Romeo Theriault # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # # see examples/playbooks/uri.yml import shutil import tempfile import base64 import datetime try: import json except ImportError: import simplejson as json DOCUMENTATION = ''' --- module: uri short_description: Interacts with webservices description: - Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE HTTP authentication mechanisms. version_added: "1.1" options: url: description: - HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path required: true default: null aliases: [] dest: description: - path of where to download the file to (if desired). If I(dest) is a directory, the basename of the file on the remote server will be used. required: false default: null user: description: - username for the module to use for Digest, Basic or WSSE authentication. required: false default: null password: description: - password for the module to use for Digest, Basic or WSSE authentication. required: false default: null body: description: - The body of the http request/response to the web service. required: false default: null method: description: - The HTTP method of the request or response. required: false choices: [ "GET", "POST", "PUT", "HEAD", "DELETE", "OPTIONS", "PATCH" ] default: "GET" return_content: description: - Whether or not to return the body of the request as a "content" key in the dictionary result. If the reported Content-type is "application/json", then the JSON is additionally loaded into a key called C(json) in the dictionary results. required: false choices: [ "yes", "no" ] default: "no" force_basic_auth: description: - httplib2, the library used by the uri module only sends authentication information when a webservice responds to an initial request with a 401 status. Since some basic auth services do not properly send a 401, logins will fail. This option forces the sending of the Basic authentication header upon initial request. 
required: false choices: [ "yes", "no" ] default: "no" follow_redirects: description: - Whether or not the URI module should follow redirects. C(all) will follow all redirects. C(safe) will follow only "safe" redirects, where "safe" means that the client is only doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow any redirects. Note that C(yes) and C(no) choices are accepted for backwards compatibility, where C(yes) is the equivalent of C(all) and C(no) is the equivalent of C(safe). C(yes) and C(no) are deprecated and will be removed in some future version of Ansible. required: false choices: [ "all", "safe", "none" ] default: "safe" creates: description: - a filename, when it already exists, this step will not be run. required: false removes: description: - a filename, when it does not exist, this step will not be run. required: false status_code: description: - A valid, numeric, HTTP status code that signifies success of the request. required: false default: 200 timeout: description: - The socket level timeout in seconds required: false default: 30 HEADER_: description: - Any parameter starting with "HEADER_" is a sent with your request as a header. For example, HEADER_Content-Type="application/json" would send the header "Content-Type" along with your request with a value of "application/json". required: false default: null others: description: - all arguments accepted by the M(file) module also work here required: false # informational: requirements for nodes requirements: [ urlparse, httplib2 ] author: Romeo Theriault ''' EXAMPLES = ''' # Check that you can connect (GET) to a page and it returns a status 200 - uri: url=http://www.example.com # Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents. - action: uri url=http://www.example.com return_content=yes register: webpage - action: fail when: 'AWESOME' not in "{{ webpage.content }}" # Create a JIRA issue. - action: > uri url=https://your.jira.example.com/rest/api/2/issue/ method=POST user=your_username password=your_pass body="{{ lookup('file','issue.json') }}" force_basic_auth=yes status_code=201 HEADER_Content-Type="application/json" - action: > uri url=https://your.form.based.auth.examle.com/index.php method=POST body="name=your_username&password=your_password&enter=Sign%20in" status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded" register: login # Login to a form based webpage, then use the returned cookie to # access the app in later tasks. 
- action: uri url=https://your.form.based.auth.example.com/dashboard.php method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}" ''' HAS_HTTPLIB2 = True try: import httplib2 except ImportError: HAS_HTTPLIB2 = False HAS_URLPARSE = True try: import urlparse import socket except ImportError: HAS_URLPARSE = False def write_file(module, url, dest, content): # create a tempfile with some test content fd, tmpsrc = tempfile.mkstemp() f = open(tmpsrc, 'wb') try: f.write(content) except Exception, err: os.remove(tmpsrc) module.fail_json(msg="failed to create temporary content file: %s" % str(err)) f.close() md5sum_src = None md5sum_dest = None # raise an error if there is no tmpsrc file if not os.path.exists(tmpsrc): os.remove(tmpsrc) module.fail_json(msg="Source %s does not exist" % (tmpsrc)) if not os.access(tmpsrc, os.R_OK): os.remove(tmpsrc) module.fail_json( msg="Source %s not readable" % (tmpsrc)) md5sum_src = module.md5(tmpsrc) # check if there is no dest file if os.path.exists(dest): # raise an error if copy has no permission on dest if not os.access(dest, os.W_OK): os.remove(tmpsrc) module.fail_json( msg="Destination %s not writable" % (dest)) if not os.access(dest, os.R_OK): os.remove(tmpsrc) module.fail_json( msg="Destination %s not readable" % (dest)) md5sum_dest = module.md5(dest) else: if not os.access(os.path.dirname(dest), os.W_OK): os.remove(tmpsrc) module.fail_json( msg="Destination dir %s not writable" % (os.path.dirname(dest))) if md5sum_src != md5sum_dest: try: shutil.copyfile(tmpsrc, dest) except Exception, err: os.remove(tmpsrc) module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, str(err))) os.remove(tmpsrc) def url_filename(url): fn = os.path.basename(urlparse.urlsplit(url)[2]) if fn == '': return 'index.html' return fn def uri(module, url, dest, user, password, body, method, headers, redirects, socket_timeout): # To debug #httplib2.debug = 4 # Handle Redirects if redirects == "all" or redirects == "yes": follow_redirects = True follow_all_redirects = True elif redirects == "none": follow_redirects = False follow_all_redirects = False else: follow_redirects = True follow_all_redirects = False # Create a Http object and set some default options. 
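# A standalone sketch of the compare-before-copy idempotence check used by
# write_file() above: the destination is only replaced when the checksums
# differ, so repeated runs report no change. hashlib.md5 stands in here for
# the module.md5() helper used in the real code.
import hashlib

def copy_if_changed(tmpsrc, dest):
    md5 = lambda p: hashlib.md5(open(p, 'rb').read()).hexdigest()
    if not os.path.exists(dest) or md5(tmpsrc) != md5(dest):
        shutil.copyfile(tmpsrc, dest)
        return True   # changed
    return False      # already up to date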
    h = httplib2.Http(disable_ssl_certificate_validation=True, timeout=socket_timeout)
    h.follow_all_redirects = follow_all_redirects
    h.follow_redirects = follow_redirects
    h.forward_authorization_headers = True

    # If they have a username or password, verify they have both, then add them to the request
    if user is not None and password is None:
        module.fail_json(msg="Both a username and password need to be set.")
    if password is not None and user is None:
        module.fail_json(msg="Both a username and password need to be set.")
    if user is not None and password is not None:
        h.add_credentials(user, password)

    # if dest is set and is a directory, let's check if we get redirected and
    # set the filename from that url
    redirected = False
    resp_redir = {}
    r = {}
    if dest is not None:
        dest = os.path.expanduser(dest)
        if os.path.isdir(dest):
            # first check if we are redirected to a file download
            h.follow_redirects = False
            # Try the request
            try:
                resp_redir, content_redir = h.request(url, method=method, body=body, headers=headers)
                # if we are redirected, update the url with the location header,
                # and update dest with the new url filename
            except:
                pass
            if 'status' in resp_redir and resp_redir['status'] in ["301", "302", "303", "307"]:
                url = resp_redir['location']
                redirected = True
                dest = os.path.join(dest, url_filename(url))
            # if the destination file already exists, only download if the
            # remote file is newer
            if os.path.exists(dest):
                t = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
                tstamp = t.strftime('%a, %d %b %Y %H:%M:%S +0000')
                headers['If-Modified-Since'] = tstamp

    # do safe redirects now, including 307
    h.follow_redirects = follow_redirects

    # Make the request, or try to :)
    try:
        resp, content = h.request(url, method=method, body=body, headers=headers)
        r['redirected'] = redirected
        r.update(resp_redir)
        r.update(resp)
        try:
            return r, unicode(content.decode('unicode_escape')), dest
        except:
            return r, content, dest
    except httplib2.RedirectMissingLocation:
        module.fail_json(msg="A 3xx redirect response code was provided but no Location: header was provided to point to the new location.")
    except httplib2.RedirectLimit:
        module.fail_json(msg="The maximum number of redirections was reached without coming to a final URI.")
    except httplib2.ServerNotFoundError:
        module.fail_json(msg="Unable to resolve the host name given.")
    except httplib2.RelativeURIError:
        module.fail_json(msg="A relative, as opposed to an absolute URI, was passed in.")
    except httplib2.FailedToDecompressContent:
        module.fail_json(msg="The headers claimed that the content of the response was compressed but the decompression algorithm applied to the content failed.")
    except httplib2.UnimplementedDigestAuthOptionError:
        module.fail_json(msg="The server requested a type of Digest authentication that we are unfamiliar with.")
    except httplib2.UnimplementedHmacDigestAuthOptionError:
        module.fail_json(msg="The server requested a type of HMACDigest authentication that we are unfamiliar with.")
    except socket.error, e:
        module.fail_json(msg="Socket error: %s to %s" % (e, url))

def main():
    module = AnsibleModule(
        argument_spec = dict(
            url = dict(required=True),
            dest = dict(required=False, default=None),
            user = dict(required=False, default=None),
            password = dict(required=False, default=None),
            body = dict(required=False, default=None),
            method = dict(required=False, default='GET', choices=['GET', 'POST', 'PUT', 'HEAD', 'DELETE',
                                                                 'OPTIONS', 'PATCH']),
            return_content = dict(required=False, default='no', type='bool'),
            force_basic_auth = dict(required=False, default='no', type='bool'),
            follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']),
            creates = dict(required=False, default=None),
            removes = dict(required=False, default=None),
            status_code = dict(required=False, default=200, type='int'),
            timeout = dict(required=False, default=30, type='int'),
        ),
        check_invalid_arguments=False,
        add_file_common_args=True
    )

    if not HAS_HTTPLIB2:
        module.fail_json(msg="httplib2 is not installed")
    if not HAS_URLPARSE:
        module.fail_json(msg="urlparse is not installed")

    url = module.params['url']
    user = module.params['user']
    password = module.params['password']
    body = module.params['body']
    method = module.params['method']
    dest = module.params['dest']
    return_content = module.params['return_content']
    force_basic_auth = module.params['force_basic_auth']
    redirects = module.params['follow_redirects']
    creates = module.params['creates']
    removes = module.params['removes']
    status_code = int(module.params['status_code'])
    socket_timeout = module.params['timeout']

    # Grab all the http headers. Need this hack since passing multi-values is currently a bit ugly. (e.g. headers='{"Content-Type":"application/json"}')
    dict_headers = {}
    for key, value in module.params.iteritems():
        if key.startswith("HEADER_"):
            skey = key.replace("HEADER_", "")
            dict_headers[skey] = value

    if creates is not None:
        # do not run the command if the line contains creates=filename
        # and the filename already exists. This allows idempotence
        # of uri executions.
        creates = os.path.expanduser(creates)
        if os.path.exists(creates):
            module.exit_json(stdout="skipped, since %s exists" % creates, skipped=True, changed=False, stderr=False, rc=0)

    if removes is not None:
        # do not run the command if the line contains removes=filename
        # and the filename does not exist. This allows idempotence
        # of uri executions.
        removes = os.path.expanduser(removes)
        if not os.path.exists(removes):
            module.exit_json(stdout="skipped, since %s does not exist" % removes, skipped=True, changed=False, stderr=False, rc=0)

    # httplib2 only sends authentication after the server asks for it with a 401.
    # Some 'basic auth' services fail to send a 401 and require the authentication
    # up front. This creates the Basic authentication header and sends it immediately.
    if force_basic_auth:
        dict_headers["Authorization"] = "Basic {0}".format(base64.b64encode("{0}:{1}".format(user, password)))

    # Make the request
    resp, content, dest = uri(module, url, dest, user, password, body, method, dict_headers, redirects, socket_timeout)
    resp['status'] = int(resp['status'])

    # Write the file out if requested
    if dest is not None:
        if resp['status'] == 304:
            status_code = 304
            changed = False
        else:
            write_file(module, url, dest, content)
            # allow file attribute changes
            changed = True
            module.params['path'] = dest
            file_args = module.load_file_common_arguments(module.params)
            file_args['path'] = dest
            changed = module.set_file_attributes_if_different(file_args, changed)
        resp['path'] = dest
    else:
        changed = False

    # Transmogrify the headers, replacing '-' with '_', since variables don't work with dashes.
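# An illustrative before/after for the header rewrite done just below:
sample_resp = {'content-length': '42', 'set-cookie': 'k=v; Path=/'}
# after the loop below: {'content_length': '42', 'set_cookie': 'k=v; Path=/'}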
uresp = {} for key, value in resp.iteritems(): ukey = key.replace("-", "_") uresp[ukey] = value if 'content_type' in uresp: if uresp['content_type'].startswith('application/json'): try: js = json.loads(content) uresp['json'] = js except: pass if resp['status'] != status_code: module.fail_json(msg="Status code was not " + str(status_code), content=content, **uresp) elif return_content: module.exit_json(changed=changed, content=content, **uresp) else: module.exit_json(changed=changed, **uresp) # import module snippets from ansible.module_utils.basic import * main() ansible-1.5.4/bin/0000775000000000000000000000000012316627017012413 5ustar rootrootansible-1.5.4/bin/ansible-doc0000775000000000000000000002207112316627017014523 0ustar rootroot#!/usr/bin/env python # (c) 2012, Jan-Piet Mens # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # import os import sys import textwrap import re import optparse import datetime import subprocess from ansible import utils from ansible.utils import module_docs import ansible.constants as C from ansible.utils import version import traceback MODULEDIR = C.DEFAULT_MODULE_PATH BLACKLIST_EXTS = ('.swp', '.bak', '~', '.rpm') _ITALIC = re.compile(r"I\(([^)]+)\)") _BOLD = re.compile(r"B\(([^)]+)\)") _MODULE = re.compile(r"M\(([^)]+)\)") _URL = re.compile(r"U\(([^)]+)\)") _CONST = re.compile(r"C\(([^)]+)\)") PAGER = 'less' LESS_OPTS = 'FRSX' # -F (quit-if-one-screen) -R (allow raw ansi control chars) # -S (chop long lines) -X (disable termcap init and de-init) def pager_print(text): ''' just print text ''' print text def pager_pipe(text, cmd): ''' pipe text through a pager ''' if 'LESS' not in os.environ: os.environ['LESS'] = LESS_OPTS try: cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout) cmd.communicate(input=text) except IOError: pass except KeyboardInterrupt: pass def pager(text): ''' find reasonable way to display text ''' # this is a much simpler form of what is in pydoc.py if not sys.stdout.isatty(): pager_print(text) elif 'PAGER' in os.environ: if sys.platform == 'win32': pager_print(text) else: pager_pipe(text, os.environ['PAGER']) elif hasattr(os, 'system') and os.system('(less) 2> /dev/null') == 0: pager_pipe(text, 'less') else: pager_print(text) def tty_ify(text): t = _ITALIC.sub("`" + r"\1" + "'", text) # I(word) => `word' t = _BOLD.sub("*" + r"\1" + "*", t) # B(word) => *word* t = _MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word] t = _URL.sub(r"\1", t) # U(word) => word t = _CONST.sub("`" + r"\1" + "'", t) # C(word) => `word' return t def get_man_text(doc): opt_indent=" " text = [] text.append("> %s\n" % doc['module'].upper()) desc = "".join(doc['description']) text.append("%s\n" % textwrap.fill(tty_ify(desc), initial_indent=" ", subsequent_indent=" ")) if 'option_keys' in doc and len(doc['option_keys']) > 0: text.append("Options (= is mandatory):\n") for o in doc['option_keys']: opt = doc['options'][o] if opt.get('required', False): opt_leadin = "=" else: 
opt_leadin = "-" text.append("%s %s" % (opt_leadin, o)) desc = "".join(opt['description']) if 'choices' in opt: choices = ", ".join(str(i) for i in opt['choices']) desc = desc + " (Choices: " + choices + ")" text.append("%s\n" % textwrap.fill(tty_ify(desc), initial_indent=opt_indent, subsequent_indent=opt_indent)) if 'notes' in doc and len(doc['notes']) > 0: notes = "".join(doc['notes']) text.append("Notes:%s\n" % textwrap.fill(tty_ify(notes), initial_indent=" ", subsequent_indent=opt_indent)) if 'requirements' in doc and doc['requirements'] is not None and len(doc['requirements']) > 0: req = ", ".join(doc['requirements']) text.append("Requirements:%s\n" % textwrap.fill(tty_ify(req), initial_indent=" ", subsequent_indent=opt_indent)) if 'examples' in doc and len(doc['examples']) > 0: text.append("Example%s:\n" % ('' if len(doc['examples']) < 2 else 's')) for ex in doc['examples']: text.append("%s\n" % (ex['code'])) if 'plainexamples' in doc and doc['plainexamples'] is not None: text.append(doc['plainexamples']) text.append('') return "\n".join(text) def get_snippet_text(doc): text = [] desc = tty_ify("".join(doc['short_description'])) text.append("- name: %s" % (desc)) text.append(" action: %s" % (doc['module'])) for o in doc['options']: opt = doc['options'][o] desc = tty_ify("".join(opt['description'])) s = o + "=" text.append(" %-20s # %s" % (s, desc)) text.append('') return "\n".join(text) def get_module_list_text(module_list): text = [] for module in sorted(set(module_list)): if module in module_docs.BLACKLIST_MODULES: continue filename = utils.plugins.module_finder.find_plugin(module) if filename is None: continue if os.path.isdir(filename): continue try: doc, plainexamples = module_docs.get_docstring(filename) desc = tty_ify(doc.get('short_description', '?')) if len(desc) > 55: desc = desc + '...' 
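            # (note: the '%-60.60s' format below also hard-truncates the
            # description to 60 columns when the row is rendered)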
text.append("%-20s %-60.60s" % (module, desc)) except: traceback.print_exc() sys.stderr.write("ERROR: module %s has a documentation error formatting or is missing documentation\n" % module) pass return "\n".join(text) def main(): p = optparse.OptionParser( version=version("%prog"), usage='usage: %prog [options] [module...]', description='Show Ansible module documentation', ) p.add_option("-M", "--module-path", action="store", dest="module_path", default=MODULEDIR, help="Ansible modules/ directory") p.add_option("-l", "--list", action="store_true", default=False, dest='list_dir', help='List available modules') p.add_option("-s", "--snippet", action="store_true", default=False, dest='show_snippet', help='Show playbook snippet for specified module(s)') p.add_option('-v', action='version', help='Show version number and exit') (options, args) = p.parse_args() if options.module_path is not None: for i in options.module_path.split(os.pathsep): utils.plugins.module_finder.add_directory(i) if options.list_dir: # list all modules paths = utils.plugins.module_finder._get_paths() module_list = [] for path in paths: # os.system("ls -C %s" % (path)) if os.path.isdir(path): for module in os.listdir(path): if any(module.endswith(x) for x in BLACKLIST_EXTS): continue module_list.append(module) pager(get_module_list_text(module_list)) sys.exit() if len(args) == 0: p.print_help() def print_paths(finder): ''' Returns a string suitable for printing of the search path ''' # Uses a list to get the order right ret = [] for i in finder._get_paths(): if i not in ret: ret.append(i) return os.pathsep.join(ret) text = '' for module in args: filename = utils.plugins.module_finder.find_plugin(module) if filename is None: sys.stderr.write("module %s not found in %s\n" % (module, print_paths(utils.plugins.module_finder))) continue if any(filename.endswith(x) for x in BLACKLIST_EXTS): continue try: doc, plainexamples = module_docs.get_docstring(filename) except: traceback.print_exc() sys.stderr.write("ERROR: module %s has a documentation error formatting or is missing documentation\n" % module) continue if doc is not None: all_keys = [] for (k,v) in doc['options'].iteritems(): all_keys.append(k) all_keys = sorted(all_keys) doc['option_keys'] = all_keys doc['filename'] = filename doc['docuri'] = doc['module'].replace('_', '-') doc['now_date'] = datetime.date.today().strftime('%Y-%m-%d') doc['plainexamples'] = plainexamples if options.show_snippet: text += get_snippet_text(doc) else: text += get_man_text(doc) else: # this typically means we couldn't even parse the docstring, not just that the YAML is busted, # probably a quoting issue. sys.stderr.write("ERROR: module %s missing documentation (or could not parse documentation)\n" % module) pager(text) if __name__ == '__main__': main() ansible-1.5.4/bin/ansible-pull0000775000000000000000000001676012316627017014742 0ustar rootroot#!/usr/bin/env python # (c) 2012, Stephen Fromm # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
# # ansible-pull is a script that runs ansible in local mode # after checking out a playbooks directory from source repo. There is an # example playbook to bootstrap this script in the examples/ dir which # installs ansible and sets it up to run on cron. # usage: # ansible-pull -d /var/lib/ansible \ # -U http://example.net/content.git [-C production] \ # [path/playbook.yml] # # the -d and -U arguments are required; the -C argument is optional. # # ansible-pull accepts an optional argument to specify a playbook # location underneath the workdir and then searches the source repo # for playbooks in the following order, stopping at the first match: # # 1. $workdir/path/playbook.yml, if specified # 2. $workdir/$fqdn.yml # 3. $workdir/$hostname.yml # 4. $workdir/local.yml # # the source repo must contain at least one of these playbooks. import os import shutil import subprocess import sys import datetime import socket from ansible import utils from ansible.utils import cmd_functions from ansible import errors DEFAULT_REPO_TYPE = 'git' DEFAULT_PLAYBOOK = 'local.yml' PLAYBOOK_ERRORS = {1: 'File does not exist', 2: 'File is not readable'} VERBOSITY=0 def increment_debug(option, opt, value, parser): global VERBOSITY VERBOSITY += 1 def try_playbook(path): if not os.path.exists(path): return 1 if not os.access(path, os.R_OK): return 2 return 0 def select_playbook(path, args): playbook = None if len(args) > 0 and args[0] is not None: playbook = "%s/%s" % (path, args[0]) rc = try_playbook(playbook) if rc != 0: print >>sys.stderr, "%s: %s" % (playbook, PLAYBOOK_ERRORS[rc]) return None return playbook else: fqdn = socket.getfqdn() hostpb = "%s/%s.yml" % (path, fqdn) shorthostpb = "%s/%s.yml" % (path, fqdn.split('.')[0]) localpb = "%s/%s" % (path, DEFAULT_PLAYBOOK) errors = [] for pb in [hostpb, shorthostpb, localpb]: rc = try_playbook(pb) if rc == 0: playbook = pb break else: errors.append("%s: %s" % (pb, PLAYBOOK_ERRORS[rc])) if playbook is None: print >>sys.stderr, "\n".join(errors) return playbook def main(args): """ Set up and run a local playbook """ usage = "%prog [options] [playbook.yml]" parser = utils.SortedOptParser(usage=usage) parser.add_option('--purge', default=False, action='store_true', help='purge checkout after playbook run') parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', help='only run the playbook if the repository has been updated') parser.add_option('-f', '--force', dest='force', default=False, action='store_true', help='run the playbook even if the repository could ' 'not be updated') parser.add_option('-d', '--directory', dest='dest', default=None, help='directory to checkout repository to') #parser.add_option('-l', '--live', default=True, action='store_live', # help='Print the ansible-playbook output while running') parser.add_option('-U', '--url', dest='url', default=None, help='URL of the playbook repository') parser.add_option('-C', '--checkout', dest='checkout', help='branch/tag/commit to checkout. ' 'Defaults to behavior of repository module.') parser.add_option('-i', '--inventory-file', dest='inventory', help="location of the inventory host file") parser.add_option('-v', '--verbose', default=False, action="callback", callback=increment_debug, help='Pass -vvvv to ansible-playbook') parser.add_option('-m', '--module-name', dest='module_name', default=DEFAULT_REPO_TYPE, help='Module name used to check out repository. ' 'Default is %s.' 
                          % DEFAULT_REPO_TYPE)
    parser.add_option('--vault-password-file', dest='vault_password_file',
        help="vault password file")
    options, args = parser.parse_args(args)

    hostname = socket.getfqdn()
    if not options.dest:
        # use a hostname dependent directory, in case of $HOME on nfs
        options.dest = utils.prepare_writeable_dir('~/.ansible/pull/%s' % hostname)

    options.dest = os.path.abspath(options.dest)

    if not options.url:
        parser.error("URL for repository not specified, use -h for help")
        return 1

    now = datetime.datetime.now()
    print >>sys.stderr, now.strftime("Starting ansible-pull at %F %T")

    inv_opts = 'localhost,'
    limit_opts = 'localhost:%s:127.0.0.1' % hostname
    repo_opts = "name=%s dest=%s" % (options.url, options.dest)

    if VERBOSITY == 0:
        base_opts = '-c local --limit "%s"' % limit_opts
    elif VERBOSITY > 0:
        debug_level = ''.join([ "v" for x in range(0, VERBOSITY) ])
        base_opts = '-%s -c local --limit "%s"' % (debug_level, limit_opts)

    if options.checkout:
        repo_opts += ' version=%s' % options.checkout
    path = utils.plugins.module_finder.find_plugin(options.module_name)
    if path is None:
        sys.stderr.write("module '%s' not found.\n" % options.module_name)
        return 1
    cmd = 'ansible all -i "%s" %s -m %s -a "%s"' % (
            inv_opts, base_opts, options.module_name, repo_opts
            )

    # RUN THE CHECKOUT COMMAND
    rc, out, err = cmd_functions.run_cmd(cmd, live=True)

    if rc != 0:
        if options.force:
            print "Unable to update repository. Continuing with (forced) run of playbook."
        else:
            return rc
    elif options.ifchanged and '"changed": true' not in out:
        print "Repository has not changed, quitting."
        return 0

    playbook = select_playbook(options.dest, args)

    if playbook is None:
        print >>sys.stderr, "Could not find a playbook to run."
        return 1

    cmd = 'ansible-playbook %s %s' % (base_opts, playbook)
    if options.vault_password_file:
        cmd += " --vault-password-file=%s" % options.vault_password_file
    if options.inventory:
        cmd += ' -i "%s"' % options.inventory
    os.chdir(options.dest)

    # RUN THE PLAYBOOK COMMAND
    rc, out, err = cmd_functions.run_cmd(cmd, live=True)

    if options.purge:
        os.chdir('/')
        try:
            shutil.rmtree(options.dest)
        except Exception, e:
            print >>sys.stderr, "Failed to remove %s: %s" % (options.dest, str(e))

    return rc

if __name__ == '__main__':
    try:
        sys.exit(main(sys.argv[1:]))
    except KeyboardInterrupt, e:
        print >>sys.stderr, "Exit on user request.\n"
        sys.exit(1)
ansible-1.5.4/bin/ansible-vault0000775000000000000000000001551412316627017015115 0ustar rootroot#!/usr/bin/env python

# (c) 2014, James Tanner
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# ansible-vault is a command-line tool for creating, editing, encrypting,
# decrypting and rekeying the vault-encrypted data files used by ansible.
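#
# usage sketch (the actions and options are defined by build_option_parser below):
#   ansible-vault create secrets.yml
#   ansible-vault encrypt vars.yml --vault-password-file ~/.vault_pass.txt
#   ansible-vault rekey vars.yml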
import os import sys import traceback from ansible import utils from ansible import errors from ansible.utils.vault import VaultEditor from optparse import OptionParser #------------------------------------------------------------------------------------- # Utility functions for parsing actions/options #------------------------------------------------------------------------------------- VALID_ACTIONS = ("create", "decrypt", "edit", "encrypt", "rekey") def build_option_parser(action): """ Builds an option parser object based on the action the user wants to execute. """ usage = "usage: %%prog [%s] [--help] [options] file_name" % "|".join(VALID_ACTIONS) epilog = "\nSee '%s --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0]) OptionParser.format_epilog = lambda self, formatter: self.epilog parser = OptionParser(usage=usage, epilog=epilog) if not action: parser.print_help() sys.exit() # options for all actions #parser.add_option('-c', '--cipher', dest='cipher', default="AES256", help="cipher to use") parser.add_option('--debug', dest='debug', action="store_true", help="debug") parser.add_option('--vault-password-file', dest='password_file', help="vault password file") # options specific to actions if action == "create": parser.set_usage("usage: %prog create [options] file_name") elif action == "decrypt": parser.set_usage("usage: %prog decrypt [options] file_name") elif action == "edit": parser.set_usage("usage: %prog edit [options] file_name") elif action == "encrypt": parser.set_usage("usage: %prog encrypt [options] file_name") elif action == "rekey": parser.set_usage("usage: %prog rekey [options] file_name") # done, return the parser return parser def get_action(args): """ Get the action the user wants to execute from the sys argv list. """ for i in range(0,len(args)): arg = args[i] if arg in VALID_ACTIONS: del args[i] return arg return None def get_opt(options, k, defval=""): """ Returns an option from an Optparse values instance. 
""" try: data = getattr(options, k) except: return defval if k == "roles_path": if os.pathsep in data: data = data.split(os.pathsep)[0] return data #------------------------------------------------------------------------------------- # Command functions #------------------------------------------------------------------------------------- def _read_password(filename): f = open(filename, "rb") data = f.read() f.close # get rid of newline chars data = data.strip() return data def execute_create(args, options, parser): if len(args) > 1: raise errors.AnsibleError("'create' does not accept more than one filename") if not options.password_file: password, new_password = utils.ask_vault_passwords(ask_vault_pass=True, confirm_vault=True) else: password = _read_password(options.password_file) cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher this_editor = VaultEditor(cipher, password, args[0]) this_editor.create_file() def execute_decrypt(args, options, parser): if not options.password_file: password, new_password = utils.ask_vault_passwords(ask_vault_pass=True) else: password = _read_password(options.password_file) cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher for f in args: this_editor = VaultEditor(cipher, password, f) this_editor.decrypt_file() print "Decryption successful" def execute_edit(args, options, parser): if len(args) > 1: raise errors.AnsibleError("create does not accept more than one filename") if not options.password_file: password, new_password = utils.ask_vault_passwords(ask_vault_pass=True) else: password = _read_password(options.password_file) cipher = None for f in args: this_editor = VaultEditor(cipher, password, f) this_editor.edit_file() def execute_encrypt(args, options, parser): if len(args) > 1: raise errors.AnsibleError("'create' does not accept more than one filename") if not options.password_file: password, new_password = utils.ask_vault_passwords(ask_vault_pass=True, confirm_vault=True) else: password = _read_password(options.password_file) cipher = 'AES256' if hasattr(options, 'cipher'): cipher = options.cipher for f in args: this_editor = VaultEditor(cipher, password, f) this_editor.encrypt_file() print "Encryption successful" def execute_rekey(args, options, parser): if not options.password_file: password, __ = utils.ask_vault_passwords(ask_vault_pass=True) else: password = _read_password(options.password_file) __, new_password = utils.ask_vault_passwords(ask_vault_pass=False, ask_new_vault_pass=True, confirm_new=True) cipher = None for f in args: this_editor = VaultEditor(cipher, password, f) this_editor.rekey_file(new_password) print "Rekey successful" #------------------------------------------------------------------------------------- # MAIN #------------------------------------------------------------------------------------- def main(): action = get_action(sys.argv) parser = build_option_parser(action) (options, args) = parser.parse_args() # execute the desired action try: fn = globals()["execute_%s" % action] fn(args, options, parser) except Exception, err: if options.debug: print traceback.format_exc() print "ERROR:",err sys.exit(1) if __name__ == "__main__": main() ansible-1.5.4/bin/ansible-galaxy0000775000000000000000000007263412316627017015255 0ustar rootroot#!/usr/bin/env python ######################################################################## # # (C) 2013, James Cammarata # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of 
the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # ######################################################################## import datetime import json import os import os.path import shutil import sys import tarfile import tempfile import urllib import urllib2 import yaml from collections import defaultdict from distutils.version import LooseVersion from jinja2 import Environment from optparse import OptionParser import ansible.constants as C default_meta_template = """--- galaxy_info: author: {{ author }} description: {{description}} company: {{ company }} # Some suggested licenses: # - BSD (default) # - MIT # - GPLv2 # - GPLv3 # - Apache # - CC-BY license: {{ license }} min_ansible_version: {{ min_ansible_version }} # # Below are all platforms currently available. Just uncomment # the ones that apply to your role. If you don't see your # platform on this list, let us know and we'll get it added! # #platforms: {%- for platform,versions in platforms.iteritems() %} #- name: {{ platform }} # versions: # - all {%- for version in versions %} # - {{ version }} {%- endfor %} {%- endfor %} # # Below are all categories currently available. Just as with # the platforms above, uncomment those that apply to your role. # #categories: {%- for category in categories %} #- {{ category.name }} {%- endfor %} dependencies: [] # List your role dependencies here, one per line. Only # dependencies available via galaxy should be listed here. # Be sure to remove the '[]' above if you add dependencies # to this list. {% for dependency in dependencies %} #- {{ dependency }} {% endfor %} """ default_readme_template = """Role Name ======== A brief description of the role goes here. Requirements ------------ Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. Role Variables -------------- A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. Dependencies ------------ A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. Example Playbook ------------------------- Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: - hosts: servers roles: - { role: username.rolename, x: 42 } License ------- BSD Author Information ------------------ An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
""" #------------------------------------------------------------------------------------- # Utility functions for parsing actions/options #------------------------------------------------------------------------------------- VALID_ACTIONS = ("init", "info", "install", "list", "remove") def get_action(args): """ Get the action the user wants to execute from the sys argv list. """ for i in range(0,len(args)): arg = args[i] if arg in VALID_ACTIONS: del args[i] return arg return None def build_option_parser(action): """ Builds an option parser object based on the action the user wants to execute. """ usage = "usage: %%prog [%s] [--help] [options] ..." % "|".join(VALID_ACTIONS) epilog = "\nSee '%s --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0]) OptionParser.format_epilog = lambda self, formatter: self.epilog parser = OptionParser(usage=usage, epilog=epilog) if not action: parser.print_help() sys.exit() # options for all actions # - none yet # options specific to actions if action == "info": parser.set_usage("usage: %prog info [options] role_name[,version]") elif action == "init": parser.set_usage("usage: %prog init [options] role_name") parser.add_option( '-p', '--init-path', dest='init_path', default="./", help='The path in which the skeleton role will be created.' 'The default is the current working directory.') elif action == "install": parser.set_usage("usage: %prog install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]") parser.add_option( '-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False, help='Ignore errors and continue with the next specified role.') parser.add_option( '-n', '--no-deps', dest='no_deps', action='store_true', default=False, help='Don\'t download roles listed as dependencies') parser.add_option( '-r', '--role-file', dest='role_file', help='A file containing a list of roles to be imported') elif action == "remove": parser.set_usage("usage: %prog remove role1 role2 ...") elif action == "list": parser.set_usage("usage: %prog list [role_name]") # options that apply to more than one action if action != "init": parser.add_option( '-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH, help='The path to the directory containing your roles.' 'The default is the roles_path configured in your ' 'ansible.cfg file (/etc/ansible/roles if not configured)') if action in ("info","init","install"): parser.add_option( '-s', '--server', dest='api_server', default="galaxy.ansible.com", help='The API server destination') if action in ("init","install"): parser.add_option( '-f', '--force', dest='force', action='store_true', default=False, help='Force overwriting an existing role') # done, return the parser return parser def get_opt(options, k, defval=""): """ Returns an option from an Optparse values instance. """ try: data = getattr(options, k) except: return defval if k == "roles_path": if os.pathsep in data: data = data.split(os.pathsep)[0] return data def exit_without_ignore(options, rc=1): """ Exits with the specified return code unless the option --ignore-errors was specified """ if not get_opt(options, "ignore_errors", False): print 'You can use --ignore-errors to skip failed roles.' 
sys.exit(rc) #------------------------------------------------------------------------------------- # Galaxy API functions #------------------------------------------------------------------------------------- def api_get_config(api_server): """ Fetches the Galaxy API current version to ensure the API server is up and reachable. """ try: url = 'https://%s/api/' % api_server data = json.load(urllib2.urlopen(url)) if not data.get("current_version",None): return None else: return data except: return None def api_lookup_role_by_name(api_server, role_name): """ Uses the Galaxy API to do a lookup on the role owner/name. """ role_name = urllib.quote(role_name) try: parts = role_name.split(".") user_name = ".".join(parts[0:-1]) role_name = parts[-1] print " downloading role '%s', owned by %s" % (role_name, user_name) except: parser.print_help() print "Invalid role name (%s). You must specify username.rolename" % role_name sys.exit(1) url = 'https://%s/api/v1/roles/?owner__username=%s&name=%s' % (api_server,user_name,role_name) try: data = json.load(urllib2.urlopen(url)) if len(data["results"]) == 0: return None else: return data["results"][0] except: return None def api_fetch_role_related(api_server, related, role_id): """ Uses the Galaxy API to fetch the list of related items for the given role. The url comes from the 'related' field of the role. """ try: url = 'https://%s/api/v1/roles/%d/%s/?page_size=50' % (api_server, int(role_id), related) data = json.load(urllib2.urlopen(url)) results = data['results'] done = (data.get('next', None) == None) while not done: url = 'https://%s%s' % (api_server, data['next']) print url data = json.load(urllib2.urlopen(url)) results += data['results'] done = (data.get('next', None) == None) return results except: return None def api_get_list(api_server, what): """ Uses the Galaxy API to fetch the list of items specified. """ try: url = 'https://%s/api/v1/%s/?page_size' % (api_server, what) data = json.load(urllib2.urlopen(url)) if "results" in data: results = data['results'] else: results = data done = True if "next" in data: done = (data.get('next', None) == None) while not done: url = 'https://%s%s' % (api_server, data['next']) print url data = json.load(urllib2.urlopen(url)) results += data['results'] done = (data.get('next', None) == None) return results except: print " - failed to download the %s list" % what return None #------------------------------------------------------------------------------------- # Role utility functions #------------------------------------------------------------------------------------- def get_role_path(role_name, options): """ Returns the role path based on the roles_path option and the role name. """ roles_path = get_opt(options,'roles_path') roles_path = os.path.join(roles_path, role_name) roles_path = os.path.expanduser(roles_path) return roles_path def get_role_metadata(role_name, options): """ Returns the metadata as YAML, if the file 'meta/main.yml' exists in the specified role_path """ role_path = os.path.join(get_role_path(role_name, options), 'meta/main.yml') try: if os.path.isfile(role_path): f = open(role_path, 'r') meta_data = yaml.safe_load(f) f.close() return meta_data else: return None except: return None def get_galaxy_install_info(role_name, options): """ Returns the YAML data contained in 'meta/.galaxy_install_info', if it exists. 
""" try: info_path = os.path.join(get_role_path(role_name, options), 'meta/.galaxy_install_info') if os.path.isfile(info_path): f = open(info_path, 'r') info_data = yaml.safe_load(f) f.close() return info_data else: return None except: return None def write_galaxy_install_info(role_name, role_version, options): """ Writes a YAML-formatted file to the role's meta/ directory (named .galaxy_install_info) which contains some information we can use later for commands like 'list' and 'info'. """ info = dict( version = role_version, install_date = datetime.datetime.utcnow().strftime("%c"), ) try: info_path = os.path.join(get_role_path(role_name, options), 'meta/.galaxy_install_info') f = open(info_path, 'w+') info_data = yaml.safe_dump(info, f) f.close() except: return False return True def remove_role(role_name, options): """ Removes the specified role from the roles path. There is a sanity check to make sure there's a meta/main.yml file at this path so the user doesn't blow away random directories """ if get_role_metadata(role_name, options): role_path = get_role_path(role_name, options) shutil.rmtree(role_path) return True else: return False def fetch_role(role_name, target, role_data, options): """ Downloads the archived role from github to a temp location, extracts it, and then copies the extracted role to the role library path. """ # first grab the file and save it to a temp location archive_url = 'https://github.com/%s/%s/archive/%s.tar.gz' % (role_data["github_user"], role_data["github_repo"], target) print " - downloading role from %s" % archive_url try: url_file = urllib2.urlopen(archive_url) temp_file = tempfile.NamedTemporaryFile(delete=False) data = url_file.read() while data: temp_file.write(data) data = url_file.read() temp_file.close() return temp_file.name except Exception, e: # TODO: better urllib2 error handling for error # messages that are more exact print "Error: failed to download the file." return False def install_role(role_name, role_version, role_filename, options): # the file is a tar, so open it that way and extract it # to the specified (or default) roles directory if not tarfile.is_tarfile(role_filename): print "Error: the file downloaded was not a tar.gz" return False else: role_tar_file = tarfile.open(role_filename, "r:gz") # verify the role's meta file meta_file = None members = role_tar_file.getmembers() for member in members: if "/meta/main.yml" in member.name: meta_file = member break if not meta_file: print "Error: this role does not appear to have a meta/main.yml file." return False else: try: meta_file_data = yaml.safe_load(role_tar_file.extractfile(meta_file)) except: print "Error: this role does not appear to have a valid meta/main.yml file." return False # we strip off the top-level directory for all of the files contained within # the tar file here, since the default is 'github_repo-target', and change it # to the specified role's name role_path = os.path.join(get_opt(options, 'roles_path', '/etc/ansible/roles'), role_name) role_path = os.path.expanduser(role_path) print " - extracting %s to %s" % (role_name, role_path) try: if os.path.exists(role_path): if not os.path.isdir(role_path): print "Error: the specified roles path exists and is not a directory." return False elif not get_opt(options, "force", False): print "Error: the specified role %s appears to already exist. Use --force to replace it." 
% role_name return False else: # using --force, remove the old path if not remove_role(role_name, options): print "Error: %s doesn't appear to contain a role." % role_path print "Please remove this directory manually if you really want to put the role here." return False else: os.makedirs(role_path) # now we do the actual extraction to the role_path for member in members: # we only extract files if member.isreg(): member.name = "/".join(member.name.split("/")[1:]) role_tar_file.extract(member, role_path) # write out the install info file for later use write_galaxy_install_info(role_name, role_version, options) except OSError, e: print "Error: you do not have permission to modify files in %s" % role_path return False # return the parsed yaml metadata print "%s was installed successfully" % role_name return meta_file_data #------------------------------------------------------------------------------------- # Action functions #------------------------------------------------------------------------------------- def execute_init(args, options, parser): """ Executes the init action, which creates the skeleton framework of a role that complies with the galaxy metadata format. """ init_path = get_opt(options, 'init_path', './') api_server = get_opt(options, "api_server", "galaxy.ansible.com") force = get_opt(options, 'force', False) api_config = api_get_config(api_server) if not api_config: print "The API server (%s) is not responding, please try again later." % api_server sys.exit(1) try: role_name = args.pop(0).strip() if role_name == "": raise Exception("") role_path = os.path.join(init_path, role_name) if os.path.exists(role_path): if os.path.isfile(role_path): print "The path %s already exists, but is a file - aborting" % role_path sys.exit(1) elif not force: print "The directory %s already exists." % role_path print "" print "You can use --force to re-initialize this directory,\n" + \ "however it will reset any main.yml files that may have\n" + \ "been modified there already." 
                sys.exit(1)
    except Exception, e:
        parser.print_help()
        print "No role name specified for init"
        sys.exit(1)

    ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars')

    # create the default README.md
    if not os.path.exists(role_path):
        os.makedirs(role_path)
    readme_path = os.path.join(role_path, "README.md")
    f = open(readme_path, "wb")
    f.write(default_readme_template)
    f.close()

    for dir in ROLE_DIRS:
        dir_path = os.path.join(init_path, role_name, dir)
        main_yml_path = os.path.join(dir_path, 'main.yml')
        # create the directory if it doesn't exist already
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)

        # now create the main.yml file for that directory
        if dir == "meta":
            # create a skeleton meta/main.yml with a valid galaxy_info
            # datastructure in place, plus with all of the available
            # tags/platforms included (but commented out) and the
            # dependencies section
            platforms = api_get_list(api_server, "platforms")
            if not platforms:
                platforms = []
            categories = api_get_list(api_server, "categories")
            if not categories:
                categories = []

            # group the list of platforms from the api based
            # on their names, with the release field being
            # appended to a list of versions
            platform_groups = defaultdict(list)
            for platform in platforms:
                platform_groups[platform['name']].append(platform['release'])
                platform_groups[platform['name']].sort()

            inject = dict(
                author = 'your name',
                company = 'your company (optional)',
                license = 'license (GPLv2, CC-BY, etc)',
                min_ansible_version = '1.2',
                platforms = platform_groups,
                categories = categories,
            )
            rendered_meta = Environment().from_string(default_meta_template).render(inject)
            f = open(main_yml_path, 'w')
            f.write(rendered_meta)
            f.close()
        elif dir not in ('files','templates'):
            # just write a (mostly) empty YAML file for main.yml
            f = open(main_yml_path, 'w')
            f.write('---\n# %s file for %s\n' % (dir,role_name))
            f.close()
    print "%s was created successfully" % role_name

def execute_info(args, options, parser):
    """
    Executes the info action. This action prints out detailed
    information about an installed role as well as info available
    from the galaxy API.
    """

    pass

def execute_install(args, options, parser):
    """
    Executes the installation action. The args list contains the
    roles to be installed, unless -f was specified. The list of roles
    can be a name (which will be downloaded via the galaxy API and github),
    or it can be a local .tar.gz file.
    """

    role_file = get_opt(options, "role_file", None)
    api_server = get_opt(options, "api_server", "galaxy.ansible.com")
    no_deps = get_opt(options, "no_deps", False)

    if len(args) == 0 and not role_file:
        # the user needs to specify one of either --role-file
        # or specify a single user/role name
        parser.print_help()
        print "You must specify a user/role name or a roles file"
        sys.exit()
    elif len(args) == 1 and role_file:
        # using a role file is mutually exclusive of specifying
        # the role name on the command line
        parser.print_help()
        print "Please specify a user/role name, or a roles file, but not both"
        sys.exit(1)

    api_config = api_get_config(api_server)
    if not api_config:
        print "The API server (%s) is not responding, please try again later." % api_server
        sys.exit(1)

    roles_done = []
    if role_file:
        # roles listed in a file, one per line
        # so we'll go through and grab them all
        f = open(role_file, 'r')
        roles_left = f.readlines()
        f.close()
    else:
        # roles were specified directly, so we'll just go out grab them
        # (and their dependencies, unless the user doesn't want us to).
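        # (for reference, a roles file as parsed in the loop below holds one
        #  role per line, optionally with a version after a comma, e.g.:
        #      username.rolename
        #      username.otherrole,1.0.0
        #  blank lines and lines starting with '#' are skipped)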
roles_left = args while len(roles_left) > 0: # query the galaxy API for the role data role_name = roles_left.pop(0).strip() role_version = None if role_name == "" or role_name.startswith("#"): continue elif role_name.find(',') != -1: role_name,role_version = role_name.split(',',1) role_name = role_name.strip() role_version = role_version.strip() if os.path.isfile(role_name): # installing a local tar.gz tar_file = role_name role_name = os.path.basename(role_name).replace('.tar.gz','') if tarfile.is_tarfile(tar_file): print " - installing %s as %s" % (tar_file, role_name) if not install_role(role_name, role_version, tar_file, options): exit_without_ignore(options) else: print "%s (%s) was NOT installed successfully." % (role_name,tar_file) exit_without_ignore(options) else: # installing remotely role_data = api_lookup_role_by_name(api_server, role_name) if not role_data: print "Sorry, %s was not found on %s." % (role_name, api_server) continue role_versions = api_fetch_role_related(api_server, 'versions', role_data['id']) if not role_version: # convert the version names to LooseVersion objects # and sort them to get the latest version. If there # are no versions in the list, we'll grab the head # of the master branch if len(role_versions) > 0: loose_versions = [LooseVersion(a.get('name',None)) for a in role_versions] loose_versions.sort() role_version = str(loose_versions[-1]) else: role_version = 'master' print " no version specified, installing %s" % role_version else: if role_versions and role_version not in [a.get('name',None) for a in role_versions]: print "The specified version (%s) was not found in the list of available versions." % role_version exit_without_ignore(options) continue # download the role. if --no-deps was specified, we stop here, # otherwise we recursively grab roles and all of their deps. tmp_file = fetch_role(role_name, role_version, role_data, options) if tmp_file and install_role(role_name, role_version, tmp_file, options): # we're done with the temp file, clean it up os.unlink(tmp_file) # install dependencies, if we want them if not no_deps: role_dependencies = role_data['summary_fields']['dependencies'] # api_fetch_role_related(api_server, 'dependencies', role_data['id']) for dep_name in role_dependencies: #dep_name = "%s.%s" % (dep['owner'], dep['name']) if not get_role_metadata(dep_name, options): print ' adding dependency: %s' % dep_name roles_left.append(dep_name) else: print ' dependency %s is already installed, skipping.' % dep_name else: if tmp_file: os.unlink(tmp_file) print "%s was NOT installed successfully." % role_name exit_without_ignore(options) sys.exit(0) def execute_remove(args, options, parser): """ Executes the remove action. The args list contains the list of roles to be removed. This list can contain more than one role. """ if len(args) == 0: parser.print_help() print 'You must specify at least one role to remove.' sys.exit() for role in args: if get_role_metadata(role, options): if remove_role(role, options): print 'successfully removed %s' % role else: print "failed to remove role: %s" % role else: print '%s is not installed, skipping.' % role sys.exit(0) def execute_list(args, options, parser): """ Executes the list action. The args list can contain zero or one role. If one is specified, only that role will be shown, otherwise all roles in the specified directory will be shown. 
""" if len(args) > 1: print "Please specify only one role to list, or specify no roles to see a full list" sys.exit(1) if len(args) == 1: # show only the request role, if it exists role_name = args[0] metadata = get_role_metadata(role_name, options) if metadata: install_info = get_galaxy_install_info(role_name, options) version = None if install_info: version = install_info.get("version", None) if not version: version = "(unknown version)" # show some more info about single roles here print " %s, %s" % (role_name, version) else: print "The role %s was not found" % role_name else: # show all valid roles in the roles_path directory roles_path = get_opt(options, 'roles_path') roles_path = os.path.expanduser(roles_path) if not os.path.exists(roles_path): parser.print_help() print "The path %s does not exist. Please specify a valid path with --roles-path" % roles_path sys.exit(1) elif not os.path.isdir(roles_path): print "%s exists, but it is not a directory. Please specify a valid path with --roles-path" % roles_path parser.print_help() sys.exit(1) path_files = os.listdir(roles_path) for path_file in path_files: if get_role_metadata(path_file, options): install_info = get_galaxy_install_info(path_file, options) version = None if install_info: version = install_info.get("version", None) if not version: version = "(unknown version)" print " %s, %s" % (path_file, version) sys.exit(0) #------------------------------------------------------------------------------------- # The main entry point #------------------------------------------------------------------------------------- def main(): # parse the CLI options action = get_action(sys.argv) parser = build_option_parser(action) (options, args) = parser.parse_args() # execute the desired action if 1: #try: fn = globals()["execute_%s" % action] fn(args, options, parser) #except KeyError, e: # print "Error: %s is not a valid action. Valid actions are: %s" % (action, ", ".join(VALID_ACTIONS)) # sys.exit(1) if __name__ == "__main__": main() ansible-1.5.4/bin/ansible-playbook0000775000000000000000000003014012316627017015572 0ustar rootroot#!/usr/bin/env python # (C) 2012, Michael DeHaan, # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
####################################################### import sys import os import stat import ansible.playbook import ansible.constants as C import ansible.utils.template from ansible import errors from ansible import callbacks from ansible import utils from ansible.color import ANSIBLE_COLOR, stringc from ansible.callbacks import display def colorize(lead, num, color): """ Print 'lead' = 'num' in 'color' """ if num != 0 and ANSIBLE_COLOR and color is not None: return "%s%s%-15s" % (stringc(lead, color), stringc("=", color), stringc(str(num), color)) else: return "%s=%-4s" % (lead, str(num)) def hostcolor(host, stats, color=True): if ANSIBLE_COLOR and color: if stats['failures'] != 0 or stats['unreachable'] != 0: return "%-37s" % stringc(host, 'red') elif stats['changed'] != 0: return "%-37s" % stringc(host, 'yellow') else: return "%-37s" % stringc(host, 'green') return "%-26s" % host def main(args): ''' run ansible-playbook operations ''' # create parser for CLI options parser = utils.base_parser( constants=C, usage = "%prog playbook.yml", connect_opts=True, runas_opts=True, subset_opts=True, check_opts=True, diff_opts=True ) #parser.add_option('--vault-password', dest="vault_password", # help="password for vault encrypted files") parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append", help="set additional variables as key=value or YAML/JSON", default=[]) parser.add_option('-t', '--tags', dest='tags', default='all', help="only run plays and tasks tagged with these values") parser.add_option('--skip-tags', dest='skip_tags', help="only run plays and tasks whose tags do not match these values") parser.add_option('--syntax-check', dest='syntax', action='store_true', help="perform a syntax check on the playbook, but do not execute it") parser.add_option('--list-tasks', dest='listtasks', action='store_true', help="list all tasks that would be executed") parser.add_option('--step', dest='step', action='store_true', help="one-step-at-a-time: confirm each task before running") parser.add_option('--start-at-task', dest='start_at', help="start the playbook at the task matching this name") options, args = parser.parse_args(args) if len(args) == 0: parser.print_help(file=sys.stderr) return 1 # su and sudo command line arguments need to be mutually exclusive if (options.su or options.su_user or options.ask_su_pass) and \ (options.sudo or options.sudo_user or options.ask_sudo_pass): parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') " "and su arguments ('-su', '--su-user', and '--ask-su-pass') are " "mutually exclusive") if (options.ask_vault_pass and options.vault_password_file): parser.error("--ask-vault-pass and --vault-password-file are mutually exclusive") inventory = ansible.inventory.Inventory(options.inventory) inventory.subset(options.subset) if len(inventory.list_hosts()) == 0: raise errors.AnsibleError("provided hosts list is empty") sshpass = None sudopass = None su_pass = None vault_pass = None if not options.listhosts and not options.syntax and not options.listtasks: options.ask_pass = options.ask_pass or C.DEFAULT_ASK_PASS options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS # Never ask for an SSH password when we run with local connection if options.connection == "local": options.ask_pass = False options.ask_sudo_pass = options.ask_sudo_pass or C.DEFAULT_ASK_SUDO_PASS options.ask_su_pass = options.ask_su_pass or C.DEFAULT_ASK_SU_PASS options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS 
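        # only prompt for the passwords that were actually requested above;
        # utils.ask_passwords returns None for any password it was not asked to collect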
(sshpass, sudopass, su_pass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass, ask_sudo_pass=options.ask_sudo_pass, ask_su_pass=options.ask_su_pass, ask_vault_pass=options.ask_vault_pass) options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER options.su_user = options.su_user or C.DEFAULT_SU_USER if options.vault_password_file: this_path = os.path.expanduser(options.vault_password_file) try: f = open(this_path, "rb") tmp_vault_pass=f.read() f.close() except (OSError, IOError), e: raise errors.AnsibleError("Could not read %s: %s" % (this_path, e)) # get rid of newline chars tmp_vault_pass = tmp_vault_pass.strip() if not options.ask_vault_pass: vault_pass = tmp_vault_pass extra_vars = {} for extra_vars_opt in options.extra_vars: if extra_vars_opt.startswith("@"): # Argument is a YAML file (JSON is a subset of YAML) extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:])) elif extra_vars_opt and extra_vars_opt[0] in '[{': # Arguments as YAML extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml(extra_vars_opt)) else: # Arguments as Key-value extra_vars = utils.combine_vars(extra_vars, utils.parse_kv(extra_vars_opt)) only_tags = options.tags.split(",") skip_tags = options.skip_tags if options.skip_tags is not None: skip_tags = options.skip_tags.split(",") for playbook in args: if not os.path.exists(playbook): raise errors.AnsibleError("the playbook: %s could not be found" % playbook) if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)): raise errors.AnsibleError("the playbook: %s does not appear to be a file" % playbook) # run all playbooks specified on the command line for playbook in args: # let inventory know which playbooks are using so it can know the basedirs inventory.set_playbook_basedir(os.path.dirname(playbook)) stats = callbacks.AggregateStats() playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY) if options.step: playbook_cb.step = options.step if options.start_at: playbook_cb.start_at = options.start_at runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY) pb = ansible.playbook.PlayBook( playbook=playbook, module_path=options.module_path, inventory=inventory, forks=options.forks, remote_user=options.remote_user, remote_pass=sshpass, callbacks=playbook_cb, runner_callbacks=runner_cb, stats=stats, timeout=options.timeout, transport=options.connection, sudo=options.sudo, sudo_user=options.sudo_user, sudo_pass=sudopass, extra_vars=extra_vars, private_key_file=options.private_key_file, only_tags=only_tags, skip_tags=skip_tags, check=options.check, diff=options.diff, su=options.su, su_pass=su_pass, su_user=options.su_user, vault_password=vault_pass ) if options.listhosts or options.listtasks or options.syntax: print '' print 'playbook: %s' % playbook print '' playnum = 0 for (play_ds, play_basedir) in zip(pb.playbook, pb.play_basedirs): playnum += 1 play = ansible.playbook.Play(pb, play_ds, play_basedir) label = play.name if options.listhosts: hosts = pb.inventory.list_hosts(play.hosts) print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts)) for host in hosts: print ' %s' % host if options.listtasks: matched_tags, unmatched_tags = play.compare_tags(pb.only_tags) # Remove skipped tasks matched_tags = matched_tags - set(pb.skip_tags) unmatched_tags.discard('all') unknown_tags = ((set(pb.only_tags) | set(pb.skip_tags)) - (matched_tags | unmatched_tags)) if unknown_tags: continue print ' play #%d (%s):' % (playnum, label) for task in play.tasks(): if 
(set(task.tags).intersection(pb.only_tags) and not set(task.tags).intersection(pb.skip_tags)): if getattr(task, 'name', None) is not None: # meta tasks have no names print ' %s' % task.name print '' continue if options.syntax: # if we've not exited by now then we are fine. print 'Playbook Syntax is fine' return 0 failed_hosts = [] unreachable_hosts = [] try: pb.run() hosts = sorted(pb.stats.processed.keys()) display(callbacks.banner("PLAY RECAP")) playbook_cb.on_stats(pb.stats) for h in hosts: t = pb.stats.summarize(h) if t['failures'] > 0: failed_hosts.append(h) if t['unreachable'] > 0: unreachable_hosts.append(h) retries = failed_hosts + unreachable_hosts if len(retries) > 0: filename = pb.generate_retry_inventory(retries) if filename: display(" to retry, use: --limit @%s\n" % filename) for h in hosts: t = pb.stats.summarize(h) display("%s : %s %s %s %s" % ( hostcolor(h, t), colorize('ok', t['ok'], 'green'), colorize('changed', t['changed'], 'yellow'), colorize('unreachable', t['unreachable'], 'red'), colorize('failed', t['failures'], 'red')), screen_only=True ) display("%s : %s %s %s %s" % ( hostcolor(h, t, False), colorize('ok', t['ok'], None), colorize('changed', t['changed'], None), colorize('unreachable', t['unreachable'], None), colorize('failed', t['failures'], None)), log_only=True ) print "" if len(failed_hosts) > 0: return 2 if len(unreachable_hosts) > 0: return 3 except errors.AnsibleError, e: display("ERROR: %s" % e, color='red') return 1 return 0 if __name__ == "__main__": display(" ", log_only=True) display(" ".join(sys.argv), log_only=True) display(" ", log_only=True) try: sys.exit(main(sys.argv[1:])) except errors.AnsibleError, e: display("ERROR: %s" % e, color='red', stderr=True) sys.exit(1) except KeyboardInterrupt, ke: display("ERROR: interrupted", color='red', stderr=True) sys.exit(1) ansible-1.5.4/bin/ansible0000775000000000000000000002046512316627017013765 0ustar rootroot#!/usr/bin/env python # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
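#
# usage sketch (ad-hoc execution; options are defined in Cli.parse below):
#   ansible all -m ping
#   ansible webservers -m shell -a 'uptime' -f 10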
######################################################## import os import sys from ansible.runner import Runner import ansible.constants as C from ansible import utils from ansible import errors from ansible import callbacks from ansible import inventory ######################################################## class Cli(object): ''' code behind bin/ansible ''' # ---------------------------------------------- def __init__(self): self.stats = callbacks.AggregateStats() self.callbacks = callbacks.CliRunnerCallbacks() # ---------------------------------------------- def parse(self): ''' create an options parser for bin/ansible ''' parser = utils.base_parser( constants=C, runas_opts=True, subset_opts=True, async_opts=True, output_opts=True, connect_opts=True, check_opts=True, diff_opts=False, usage='%prog [options]' ) parser.add_option('-a', '--args', dest='module_args', help="module arguments", default=C.DEFAULT_MODULE_ARGS) parser.add_option('-m', '--module-name', dest='module_name', help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME, default=C.DEFAULT_MODULE_NAME) options, args = parser.parse_args() self.callbacks.options = options if len(args) == 0 or len(args) > 1: parser.print_help() sys.exit(1) # su and sudo command line arguments need to be mutually exclusive if (options.su or options.su_user or options.ask_su_pass) and \ (options.sudo or options.sudo_user or options.ask_sudo_pass): parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') " "and su arguments ('-su', '--su-user', and '--ask-su-pass') are " "mutually exclusive") if (options.ask_vault_pass and options.vault_password_file): parser.error("--ask-vault-pass and --vault-password-file are mutually exclusive") return (options, args) # ---------------------------------------------- def run(self, options, args): ''' use Runner lib to do SSH things ''' pattern = args[0] """ inventory_manager = inventory.Inventory(options.inventory) if options.subset: inventory_manager.subset(options.subset) hosts = inventory_manager.list_hosts(pattern) if len(hosts) == 0: callbacks.display("No hosts matched") sys.exit(0) if options.listhosts: for host in hosts: callbacks.display(' %s' % host) sys.exit(0) if ((options.module_name == 'command' or options.module_name == 'shell') and not options.module_args): callbacks.display("No argument passed to %s module" % options.module_name, color='red', stderr=True) sys.exit(1) """ sshpass = None sudopass = None su_pass = None vault_pass = None options.ask_pass = options.ask_pass or C.DEFAULT_ASK_PASS # Never ask for an SSH password when we run with local connection if options.connection == "local": options.ask_pass = False options.ask_sudo_pass = options.ask_sudo_pass or C.DEFAULT_ASK_SUDO_PASS options.ask_su_pass = options.ask_su_pass or C.DEFAULT_ASK_SU_PASS options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS (sshpass, sudopass, su_pass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass, ask_sudo_pass=options.ask_sudo_pass, ask_su_pass=options.ask_su_pass, ask_vault_pass=options.ask_vault_pass) # read vault_pass from a file if options.vault_password_file: this_path = os.path.expanduser(options.vault_password_file) try: f = open(this_path, "rb") tmp_vault_pass=f.read() f.close() except (OSError, IOError), e: raise errors.AnsibleError("Could not read %s: %s" % (this_path, e)) # get rid of newline chars tmp_vault_pass = tmp_vault_pass.strip() if not options.ask_vault_pass: vault_pass = tmp_vault_pass inventory_manager = 
inventory.Inventory(options.inventory) if options.subset: inventory_manager.subset(options.subset) hosts = inventory_manager.list_hosts(pattern) if len(hosts) == 0: callbacks.display("No hosts matched") sys.exit(0) if options.listhosts: for host in hosts: callbacks.display(' %s' % host) sys.exit(0) if ((options.module_name == 'command' or options.module_name == 'shell') and not options.module_args): callbacks.display("No argument passed to %s module" % options.module_name, color='red', stderr=True) sys.exit(1) if options.su_user or options.ask_su_pass: options.su = True elif options.sudo_user or options.ask_sudo_pass: options.sudo = True options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER options.su_user = options.su_user or C.DEFAULT_SU_USER if options.tree: utils.prepare_writeable_dir(options.tree) runner = Runner( module_name=options.module_name, module_path=options.module_path, module_args=options.module_args, remote_user=options.remote_user, remote_pass=sshpass, inventory=inventory_manager, timeout=options.timeout, private_key_file=options.private_key_file, forks=options.forks, pattern=pattern, callbacks=self.callbacks, sudo=options.sudo, sudo_pass=sudopass, sudo_user=options.sudo_user, transport=options.connection, subset=options.subset, check=options.check, diff=options.check, su=options.su, su_pass=su_pass, su_user=options.su_user, vault_pass=vault_pass ) if options.seconds: callbacks.display("background launch...\n\n", color='cyan') results, poller = runner.run_async(options.seconds) results = self.poll_while_needed(poller, options) else: results = runner.run() return (runner, results) # ---------------------------------------------- def poll_while_needed(self, poller, options): ''' summarize results from Runner ''' # BACKGROUND POLL LOGIC when -B and -P are specified if options.seconds and options.poll_interval > 0: poller.wait(options.seconds, options.poll_interval) return poller.results ######################################################## if __name__ == '__main__': callbacks.display("", log_only=True) callbacks.display(" ".join(sys.argv), log_only=True) callbacks.display("", log_only=True) cli = Cli() (options, args) = cli.parse() try: (runner, results) = cli.run(options, args) for result in results['contacted'].values(): if 'failed' in result or result.get('rc', 0) != 0: sys.exit(2) if results['dark']: sys.exit(3) except errors.AnsibleError, e: # Generic handler for ansible specific errors callbacks.display("ERROR: %s" % str(e), stderr=True, color='red') sys.exit(1) ansible-1.5.4/legacy/0000775000000000000000000000000012316627017013107 5ustar rootrootansible-1.5.4/legacy/gce_tests.py0000664000000000000000000012755312316627017015456 0ustar rootroot#!/usr/bin/env python # Copyright 2013 Google Inc. # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # This is a custom functional test script for the Google Compute Engine # ansible modules. 
In order to run these tests, you must: # 1) Create a Google Cloud Platform account and enable the Google # Compute Engine service and billing # 2) Download, install, and configure 'gcutil' # see [https://developers.google.com/compute/docs/gcutil/] # 3) Convert your GCE Service Account private key from PKCS12 to PEM format # $ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret \ # > -nodes -nocerts | openssl rsa -out pkey.pem # 4) Make sure you have libcloud 0.13.3 or later installed. # 5) Make sure you have a libcloud 'secrets.py' file in your PYTHONPATH # 6) Set GCE_PARAMS and GCE_KEYWORD_PARAMS in your 'secrets.py' file. # 7) Set up a simple hosts file # $ echo 127.0.0.1 > ~/ansible_hosts # $ echo "export ANSIBLE_HOSTS='~/ansible_hosts'" >> ~/.bashrc # $ . ~/.bashrc # 8) Set up your ansible 'hacking' environment # $ cd ~/ansible # $ . hacking/env-setup # $ export ANSIBLE_HOST_KEY_CHECKING=no # $ ansible all -m ping # 9) Set your PROJECT variable below # 10) Run and time the tests and log output; this takes ~30 minutes to run # $ time stdbuf -oL python test/gce_tests.py 2>&1 | tee log # # Last update: gcutil-1.11.0 and v1beta16 # Set this to your test Project ID PROJECT="google.com:erjohnso" # debugging DEBUG=False # lots of debugging output VERBOSE=True # on failure, display ansible command and expected/actual result # location - note that some tests rely on the module's 'default' # region/zone, which should match the settings below. REGION="us-central1" ZONE="%s-a" % REGION # Peeking is a way to trigger looking at a specified set of resources # before and/or after a test run. The 'test_cases' data structure below # has a few tests with 'peek_before' and 'peek_after'. When those keys # are set and PEEKING_ENABLED is True, then these steps will be executed # to aid in debugging tests. Normally, this is not needed.
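# For reference, each entry in the 'test_cases' structure below has this
# shape (an illustrative sketch, not a real test): 'm' names the ansible
# module to invoke, 'a' is its argument string, and 'r' is the exact
# one-line output expected back from "ansible -o".
#
#   {'id': '99', 'desc': 'Example test category',
#    'setup': ['gcutil ...'],          # shell commands run before the tests
#    'tests': [
#        {'desc': 'EXAMPLE no-op [success]',
#         'm': 'gce_pd',
#         'a': 'name=example-disk zone=us-central1-a state=absent',
#         'r': '127.0.0.1 | success >> {...}',
#         # optional per-test keys: 'setup', 'teardown', 'peek_before',
#         # 'peek_after', and 'strip_numbers'
#        },
#    ],
#    'teardown': []}                   # shell commands run after the tests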
PEEKING_ENABLED=False # disks DNAME="aaaaa-ansible-disk" DNAME2="aaaaa-ansible-disk2" DNAME6="aaaaa-ansible-inst6" DNAME7="aaaaa-ansible-inst7" USE_PD="true" KERNEL="https://www.googleapis.com/compute/v1beta16/projects/google/global/kernels/gce-no-conn-track-v20130813" # instances INAME="aaaaa-ansible-inst" INAME2="aaaaa-ansible-inst2" INAME3="aaaaa-ansible-inst3" INAME4="aaaaa-ansible-inst4" INAME5="aaaaa-ansible-inst5" INAME6="aaaaa-ansible-inst6" INAME7="aaaaa-ansible-inst7" TYPE="n1-standard-1" IMAGE="https://www.googleapis.com/compute/v1beta16/projects/debian-cloud/global/images/debian-7-wheezy-v20131014" NETWORK="default" SCOPES="https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.full_control" # networks / firewalls NETWK1="ansible-network1" NETWK2="ansible-network2" NETWK3="ansible-network3" CIDR1="10.240.16.0/24" CIDR2="10.240.32.0/24" CIDR3="10.240.64.0/24" GW1="10.240.16.1" GW2="10.240.32.1" FW1="ansible-fwrule1" FW2="ansible-fwrule2" FW3="ansible-fwrule3" FW4="ansible-fwrule4" # load-balancer tests HC1="ansible-hc1" HC2="ansible-hc2" HC3="ansible-hc3" LB1="ansible-lb1" LB2="ansible-lb2" from commands import getstatusoutput as run import sys test_cases = [ {'id': '01', 'desc': 'Detach / Delete disk tests', 'setup': ['gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME, ZONE, TYPE, NETWORK, SCOPES, IMAGE, USE_PD), 'gcutil adddisk "%s" --size_gb=2 --zone=%s --wait_until_complete' % (DNAME, ZONE)], 'tests': [ {'desc': 'DETACH_ONLY but disk not found [success]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % ("missing-disk", INAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "%s", "name": "missing-disk", "state": "absent", "zone": "%s"}' % (INAME, ZONE), }, {'desc': 'DETACH_ONLY but instance not found [success]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, "missing-instance", ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "missing-instance", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (DNAME, ZONE), }, {'desc': 'DETACH_ONLY but neither disk nor instance exists [success]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % ("missing-disk", "missing-instance", ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "missing-instance", "name": "missing-disk", "state": "absent", "zone": "%s"}' % (ZONE), }, {'desc': 'DETACH_ONLY but disk is not currently attached [success]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "detach_only": true, "detached_from_instance": "%s", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (INAME, DNAME, ZONE), }, {'desc': 'DETACH_ONLY disk is attached and should be detached [success]', 'setup': ['gcutil attachdisk --disk="%s,mode=READ_ONLY" --zone=%s %s' % (DNAME, ZONE, INAME), 'sleep 10'], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s detach_only=yes state=absent' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "detach_only": true, "detached_from_instance": 
"%s", "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (INAME, INAME, DNAME, ZONE), 'teardown': ['gcutil detachdisk --zone=%s --device_name=%s %s' % (ZONE, DNAME, INAME)], }, {'desc': 'DETACH_ONLY but not instance specified [FAIL]', 'm': 'gce_pd', 'a': 'name=%s zone=%s detach_only=yes state=absent' % (DNAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must specify an instance name when detaching a disk"}', }, {'desc': 'DELETE but disk not found [success]', 'm': 'gce_pd', 'a': 'name=%s zone=%s state=absent' % ("missing-disk", ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-disk", "state": "absent", "zone": "%s"}' % (ZONE), }, {'desc': 'DELETE but disk is attached [FAIL]', 'setup': ['gcutil attachdisk --disk="%s,mode=READ_ONLY" --zone=%s %s' % (DNAME, ZONE, INAME), 'sleep 10'], 'm': 'gce_pd', 'a': 'name=%s zone=%s state=absent' % (DNAME, ZONE), 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"The disk resource 'projects/%s/zones/%s/disks/%s' is already being used by 'projects/%s/zones/%s/instances/%s'\"}" % (PROJECT, ZONE, DNAME, PROJECT, ZONE, INAME), 'teardown': ['gcutil detachdisk --zone=%s --device_name=%s %s' % (ZONE, DNAME, INAME)], }, {'desc': 'DELETE disk [success]', 'm': 'gce_pd', 'a': 'name=%s zone=%s state=absent' % (DNAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "size_gb": 2, "state": "absent", "zone": "%s"}' % (DNAME, ZONE), }, ], 'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE), 'sleep 15', 'gcutil deletedisk -f "%s" --zone=%s' % (INAME, ZONE), 'sleep 10', 'gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE), 'sleep 10'], }, {'id': '02', 'desc': 'Create disk but do not attach (e.g. no instance_name param)', 'setup': [], 'tests': [ {'desc': 'CREATE_NO_ATTACH "string" for size_gb [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb="foo" zone=%s' % (DNAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}', }, {'desc': 'CREATE_NO_ATTACH negative size_gb [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb=-2 zone=%s' % (DNAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}', }, {'desc': 'CREATE_NO_ATTACH size_gb exceeds quota [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb=9999 zone=%s' % ("big-disk", ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Requested disk size exceeds quota"}', }, {'desc': 'CREATE_NO_ATTACH create the disk [success]', 'm': 'gce_pd', 'a': 'name=%s zone=%s' % (DNAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "size_gb": 10, "state": "present", "zone": "%s"}' % (DNAME, ZONE), }, {'desc': 'CREATE_NO_ATTACH but disk already exists [success]', 'm': 'gce_pd', 'a': 'name=%s zone=%s' % (DNAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "name": "%s", "size_gb": 10, "state": "present", "zone": "%s"}' % (DNAME, ZONE), }, ], 'teardown': ['gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE), 'sleep 10'], }, {'id': '03', 'desc': 'Create and attach disk', 'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME2, ZONE, TYPE, NETWORK, SCOPES, IMAGE, USE_PD), 'gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --persistent_boot_disk=%s' % (INAME, ZONE, "g1-small", NETWORK, SCOPES, 
IMAGE, USE_PD), 'gcutil adddisk "%s" --size_gb=2 --zone=%s' % (DNAME, ZONE), 'gcutil adddisk "%s" --size_gb=2 --zone=%s --wait_until_complete' % (DNAME2, ZONE),], 'tests': [ {'desc': 'CREATE_AND_ATTACH "string" for size_gb [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb="foo" instance_name=%s zone=%s' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}', }, {'desc': 'CREATE_AND_ATTACH negative size_gb [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb=-2 instance_name=%s zone=%s' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Must supply a size_gb larger than 1 GB"}', }, {'desc': 'CREATE_AND_ATTACH size_gb exceeds quota [FAIL]', 'm': 'gce_pd', 'a': 'name=%s size_gb=9999 instance_name=%s zone=%s' % ("big-disk", INAME, ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Requested disk size exceeds quota"}', }, {'desc': 'CREATE_AND_ATTACH missing instance [FAIL]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s' % (DNAME, "missing-instance", ZONE), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Instance %s does not exist in zone %s"}' % ("missing-instance", ZONE), }, {'desc': 'CREATE_AND_ATTACH disk exists but not attached [success]', 'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE), 'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], }, {'desc': 'CREATE_AND_ATTACH disk exists already attached [success]', 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": false, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE), }, {'desc': 'CREATE_AND_ATTACH attached RO, attempt RO to 2nd inst [success]', 'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s' % (DNAME, INAME2, ZONE), 'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": true, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME2, DNAME, ZONE), 'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], }, {'desc': 'CREATE_AND_ATTACH attached RO, attach RW to self [FAILED no-op]', 'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s mode=READ_WRITE' % (DNAME, INAME, ZONE), 'r': '127.0.0.1 | success >> {"attached_mode": "READ_ONLY", "attached_to_instance": "%s", "changed": false, "name": "%s", "size_gb": 2, "state": "present", "zone": "%s"}' % (INAME, DNAME, ZONE), }, {'desc': 'CREATE_AND_ATTACH attached RW, attach RW to other [FAIL]', 'setup': ['gcutil attachdisk --disk=%s,mode=READ_WRITE --zone=%s %s' % (DNAME2, ZONE, INAME), 'sleep 10'], 'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s mode=READ_WRITE' % (DNAME2, 
INAME2, ZONE), 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[200], API error code[RESOURCE_IN_USE] and message: The disk resource 'projects/%s/zones/%s/disks/%s' is already being used in read-write mode\"}" % (PROJECT, ZONE, DNAME2), 'peek_after': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], }, {'desc': 'CREATE_AND_ATTACH attach too many disks to inst [FAIL]', 'setup': ['gcutil adddisk aa-disk-dummy --size_gb=2 --zone=%s' % (ZONE), 'gcutil adddisk aa-disk-dummy2 --size_gb=2 --zone=%s --wait_until_complete' % (ZONE), 'gcutil attachdisk --disk=aa-disk-dummy --zone=%s %s' % (ZONE, INAME), 'sleep 5'], 'peek_before': ["gcutil --format=csv listinstances --zone=%s --filter=\"name eq 'aaaa.*'\"" % (ZONE)], 'm': 'gce_pd', 'a': 'name=%s instance_name=%s zone=%s' % ("aa-disk-dummy2", INAME, ZONE), 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[200], API error code[LIMIT_EXCEEDED] and message: Exceeded limit 'maximum_persistent_disks' on resource 'projects/%s/zones/%s/instances/%s'. Limit: 4\"}" % (PROJECT, ZONE, INAME), 'teardown': ['gcutil detachdisk --device_name=aa-disk-dummy --zone=%s %s' % (ZONE, INAME), 'sleep 3', 'gcutil deletedisk -f aa-disk-dummy --zone=%s' % (ZONE), 'sleep 10', 'gcutil deletedisk -f aa-disk-dummy2 --zone=%s' % (ZONE), 'sleep 10'], }, ], 'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE), 'sleep 15', 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE), 'sleep 15', 'gcutil deletedisk -f "%s" --zone=%s' % (INAME, ZONE), 'sleep 10', 'gcutil deletedisk -f "%s" --zone=%s' % (INAME2, ZONE), 'sleep 10', 'gcutil deletedisk -f "%s" --zone=%s' % (DNAME, ZONE), 'sleep 10', 'gcutil deletedisk -f "%s" --zone=%s' % (DNAME2, ZONE), 'sleep 10'], }, {'id': '04', 'desc': 'Delete / destroy instances', 'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME, ZONE, TYPE, IMAGE), 'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME2, ZONE, TYPE, IMAGE), 'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME3, ZONE, TYPE, IMAGE), 'gcutil addinstance "%s" --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME4, ZONE, TYPE, IMAGE), 'gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --image="%s" --persistent_boot_disk=false' % (INAME5, ZONE, TYPE, IMAGE)], 'tests': [ {'desc': 'DELETE instance, bad zone param [FAIL]', 'm': 'gce', 'a': 'name=missing-inst zone=bogus state=absent', 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "value of zone must be one of: us-central1-a,us-central1-b,us-central2-a,europe-west1-a,europe-west1-b, got: bogus"}', }, {'desc': 'DELETE non-existent instance, no-op [success]', 'm': 'gce', 'a': 'name=missing-inst zone=%s state=absent' % (ZONE), 'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-inst", "state": "absent", "zone": "%s"}' % (ZONE), }, {'desc': 'DELETE an existing named instance [success]', 'm': 'gce', 'a': 'name=%s zone=%s state=absent' % (INAME, ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent", "zone": "%s"}' % (INAME, ZONE), }, {'desc': 'DELETE list of instances with a non-existent one [success]', 'm': 'gce', 'a': 'instance_names=%s,missing,%s zone=%s state=absent' % (INAME2,INAME3, ZONE), 'r': 
'127.0.0.1 | success >> {"changed": true, "instance_names": ["%s", "%s"], "state": "absent", "zone": "%s"}' % (INAME2, INAME3, ZONE), }, {'desc': 'DELETE list of instances all pre-exist [success]', 'm': 'gce', 'a': 'instance_names=%s,%s zone=%s state=absent' % (INAME4,INAME5, ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "instance_names": ["%s", "%s"], "state": "absent", "zone": "%s"}' % (INAME4, INAME5, ZONE), }, ], 'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME3, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME4, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME5, ZONE), 'sleep 10'], }, {'id': '05', 'desc': 'Create instances', 'setup': ['gcutil adddisk --source_image=%s --zone=%s %s --wait_until_complete' % (IMAGE, ZONE, DNAME7), 'gcutil addinstance boo --wait_until_running --zone=%s --machine_type=%s --network=%s --disk=%s,mode=READ_WRITE,boot --kernel=%s' % (ZONE,TYPE,NETWORK,DNAME7,KERNEL), ], 'tests': [ {'desc': 'CREATE_INSTANCE invalid image arg [FAIL]', 'm': 'gce', 'a': 'name=foo image=foo', 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required create instance variable"}', }, {'desc': 'CREATE_INSTANCE metadata a list [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata=\'[\\"foo\\":\\"bar\\",\\"baz\\":1]\'' % (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}', }, {'desc': 'CREATE_INSTANCE metadata not a dict [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata=\\"foo\\":\\"bar\\",\\"baz\\":1' % (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}', }, {'desc': 'CREATE_INSTANCE with metadata form1 [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata=\'{"foo":"bar","baz":1}\'' % (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}', }, {'desc': 'CREATE_INSTANCE with metadata form2 [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata={\'foo\':\'bar\',\'baz\':1}' % (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}', }, {'desc': 'CREATE_INSTANCE with metadata form3 [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata="foo:bar" '% (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata syntax"}', }, {'desc': 'CREATE_INSTANCE with metadata form4 [FAIL]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s metadata="{\'foo\':\'bar\'}"'% (INAME,ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "bad metadata: malformed string"}', }, {'desc': 'CREATE_INSTANCE invalid image arg [FAIL]', 'm': 'gce', 'a': 'instance_names=foo,bar image=foo', 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required create instance variable"}', }, {'desc': 'CREATE_INSTANCE single inst, using defaults [success]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s' % (INAME), 'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.175.15", "public_ip": "173.255.120.190", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME, ZONE, INAME, ZONE), }, {'desc': 'CREATE_INSTANCE the 
same instance again, no-op [success]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s' % (INAME), 'r': '127.0.0.1 | success >> {"changed": false, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.175.15", "public_ip": "173.255.120.190", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME, ZONE, INAME, ZONE), }, {'desc': 'CREATE_INSTANCE instance with alt type [success]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s machine_type=n1-standard-2' % (INAME2), 'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": "debian-7-wheezy-v20130816", "machine_type": "n1-standard-2", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.192.227", "public_ip": "173.255.121.233", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME2, ZONE, INAME2, ZONE), }, {'desc': 'CREATE_INSTANCE instance with root pd [success]', 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s persistent_boot_disk=yes' % (INAME3), 'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.178.140", "public_ip": "173.255.121.176", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME3, ZONE, INAME3, ZONE), }, {'desc': 'CREATE_INSTANCE instance with root pd, that already exists [success]', 'setup': ['gcutil adddisk --source_image=%s --zone=%s %s --wait_until_complete' % (IMAGE, ZONE, DNAME6),], 'strip_numbers': True, 'm': 'gce', 'a': 'name=%s zone=%s persistent_boot_disk=yes' % (INAME6, ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {}, "name": "%s", "network": "default", "private_ip": "10.240.178.140", "public_ip": "173.255.121.176", "status": "RUNNING", "tags": [], "zone": "%s"}], "name": "%s", "state": "present", "zone": "%s"}' % (INAME6, ZONE, INAME6, ZONE), }, {'desc': 'CREATE_INSTANCE instance with root pd attached to other inst [FAIL]', 'm': 'gce', 'a': 'name=%s zone=%s persistent_boot_disk=yes' % (INAME7, ZONE), 'r': '127.0.0.1 | FAILED >> {"failed": true, "msg": "Unexpected error attempting to create instance %s, error: The disk resource \'projects/%s/zones/%s/disks/%s\' is already being used in read-write mode"}' % (INAME7,PROJECT,ZONE,DNAME7), }, {'desc': 'CREATE_INSTANCE use *all* the options! 
[success]', 'strip_numbers': True, 'm': 'gce', 'a': 'instance_names=%s,%s metadata=\'{\\"foo\\":\\"bar\\", \\"baz\\":1}\' tags=t1,t2,t3 zone=%s image=centos-6-v20130731 persistent_boot_disk=yes' % (INAME4,INAME5,ZONE), 'r': '127.0.0.1 | success >> {"changed": true, "instance_data": [{"image": null, "machine_type": "n1-standard-1", "metadata": {"baz": "1", "foo": "bar"}, "name": "%s", "network": "default", "private_ip": "10.240.130.4", "public_ip": "173.255.121.97", "status": "RUNNING", "tags": ["t1", "t2", "t3"], "zone": "%s"}, {"image": null, "machine_type": "n1-standard-1", "metadata": {"baz": "1", "foo": "bar"}, "name": "%s", "network": "default", "private_ip": "10.240.207.226", "public_ip": "173.255.121.85", "status": "RUNNING", "tags": ["t1", "t2", "t3"], "zone": "%s"}], "instance_names": ["%s", "%s"], "state": "present", "zone": "%s"}' % (INAME4, ZONE, INAME5, ZONE, INAME4, INAME5, ZONE), }, ], 'teardown': ['gcutil deleteinstance -f "%s" --zone=%s' % (INAME, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME2, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME3, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME4, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME5, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME6, ZONE), 'gcutil deleteinstance -f "%s" --zone=%s' % (INAME7, ZONE), 'gcutil deleteinstance -f boo --zone=%s' % (ZONE), 'sleep 10', 'gcutil deletedisk -f "%s" --zone=%s' % (INAME3, ZONE), 'gcutil deletedisk -f "%s" --zone=%s' % (INAME4, ZONE), 'gcutil deletedisk -f "%s" --zone=%s' % (INAME5, ZONE), 'gcutil deletedisk -f "%s" --zone=%s' % (INAME6, ZONE), 'gcutil deletedisk -f "%s" --zone=%s' % (INAME7, ZONE), 'sleep 10'], }, {'id': '06', 'desc': 'Delete / destroy networks and firewall rules', 'setup': ['gcutil addnetwork --range="%s" --gateway="%s" %s' % (CIDR1, GW1, NETWK1), 'gcutil addnetwork --range="%s" --gateway="%s" %s' % (CIDR2, GW2, NETWK2), 'sleep 5', 'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK1, FW1), 'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK2, FW2), 'sleep 5'], 'tests': [ {'desc': 'DELETE bogus named firewall [success]', 'm': 'gce_net', 'a': 'fwname=missing-fwrule state=absent', 'r': '127.0.0.1 | success >> {"changed": false, "fwname": "missing-fwrule", "state": "absent"}', }, {'desc': 'DELETE bogus named network [success]', 'm': 'gce_net', 'a': 'name=missing-network state=absent', 'r': '127.0.0.1 | success >> {"changed": false, "name": "missing-network", "state": "absent"}', }, {'desc': 'DELETE named firewall rule [success]', 'm': 'gce_net', 'a': 'fwname=%s state=absent' % (FW1), 'r': '127.0.0.1 | success >> {"changed": true, "fwname": "%s", "state": "absent"}' % (FW1), 'teardown': ['sleep 5'], # pause to give GCE time to delete fwrule }, {'desc': 'DELETE unused named network [success]', 'm': 'gce_net', 'a': 'name=%s state=absent' % (NETWK1), 'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent"}' % (NETWK1), }, {'desc': 'DELETE named network *and* fwrule [success]', 'm': 'gce_net', 'a': 'name=%s fwname=%s state=absent' % (NETWK2, FW2), 'r': '127.0.0.1 | success >> {"changed": true, "fwname": "%s", "name": "%s", "state": "absent"}' % (FW2, NETWK2), }, ], 'teardown': ['gcutil deletenetwork -f %s' % (NETWK1), 'gcutil deletenetwork -f %s' % (NETWK2), 'sleep 5', 'gcutil deletefirewall -f %s' % (FW1), 'gcutil deletefirewall -f %s' % (FW2)], }, {'id': '07', 'desc': 'Create networks and firewall rules', 'setup': ['gcutil addnetwork --range="%s" --gateway="%s" %s' % 
(CIDR1, GW1, NETWK1), 'sleep 5', 'gcutil addfirewall --allowed="tcp:80" --network=%s %s' % (NETWK1, FW1), 'sleep 5'], 'tests': [ {'desc': 'CREATE network without specifying ipv4_range [FAIL]', 'm': 'gce_net', 'a': 'name=fail', 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Missing required 'ipv4_range' parameter\"}", }, {'desc': 'CREATE network with specifying bad ipv4_range [FAIL]', 'm': 'gce_net', 'a': 'name=fail ipv4_range=bad_value', 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for field 'resource.IPv4Range': 'bad_value'. Must be a CIDR address range that is contained in the RFC1918 private address blocks: [10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16]\"}", }, {'desc': 'CREATE existing network, not changed [success]', 'm': 'gce_net', 'a': 'name=%s ipv4_range=%s' % (NETWK1, CIDR1), 'r': '127.0.0.1 | success >> {"changed": false, "ipv4_range": "%s", "name": "%s", "state": "present"}' % (CIDR1, NETWK1), }, {'desc': 'CREATE new network, changed [success]', 'm': 'gce_net', 'a': 'name=%s ipv4_range=%s' % (NETWK2, CIDR2), 'r': '127.0.0.1 | success >> {"changed": true, "ipv4_range": "10.240.32.0/24", "name": "%s", "state": "present"}' % (NETWK2), }, {'desc': 'CREATE new fw rule missing params [FAIL]', 'm': 'gce_net', 'a': 'name=%s fwname=%s' % (NETWK1, FW1), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required firewall rule parameter(s)"}', }, {'desc': 'CREATE new fw rule bad params [FAIL]', 'm': 'gce_net', 'a': 'name=%s fwname=broken allowed=blah src_tags="one,two"' % (NETWK1), 'r': "127.0.0.1 | FAILED >> {\"changed\": false, \"failed\": true, \"msg\": \"Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for field 'resource.allowed[0].IPProtocol': 'blah'. 
Must be one of [\\\"tcp\\\", \\\"udp\\\", \\\"icmp\\\"] or an IP protocol number between 0 and 255\"}", }, {'desc': 'CREATE existing fw rule [success]', 'm': 'gce_net', 'a': 'name=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK1, FW1), 'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": false, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW1, CIDR1, NETWK1), }, {'desc': 'CREATE new fw rule [success]', 'm': 'gce_net', 'a': 'name=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK1, FW3), 'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": true, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW3, CIDR1, NETWK1), }, {'desc': 'CREATE new network *and* fw rule [success]', 'm': 'gce_net', 'a': 'name=%s ipv4_range=%s fwname=%s allowed="tcp:80" src_tags="one,two"' % (NETWK3, CIDR3, FW4), 'r': '127.0.0.1 | success >> {"allowed": "tcp:80", "changed": true, "fwname": "%s", "ipv4_range": "%s", "name": "%s", "src_range": null, "src_tags": ["one", "two"], "state": "present"}' % (FW4, CIDR3, NETWK3), }, ], 'teardown': ['gcutil deletefirewall -f %s' % (FW1), 'gcutil deletefirewall -f %s' % (FW2), 'gcutil deletefirewall -f %s' % (FW3), 'gcutil deletefirewall -f %s' % (FW4), 'sleep 5', 'gcutil deletenetwork -f %s' % (NETWK1), 'gcutil deletenetwork -f %s' % (NETWK2), 'gcutil deletenetwork -f %s' % (NETWK3), 'sleep 5'], }, {'id': '08', 'desc': 'Create load-balancer resources', 'setup': ['gcutil addinstance "%s" --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --nopersistent_boot_disk' % (INAME, ZONE, TYPE, NETWORK, SCOPES, IMAGE), 'gcutil addinstance "%s" --wait_until_running --zone=%s --machine_type=%s --network=%s --service_account_scopes="%s" --image="%s" --nopersistent_boot_disk' % (INAME2, ZONE, TYPE, NETWORK, SCOPES, IMAGE), ], 'tests': [ {'desc': 'Do nothing [FAIL]', 'm': 'gce_lb', 'a': 'httphealthcheck_port=7', 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Nothing to do, please specify a \\\"name\\\" or \\\"httphealthcheck_name\\\" parameter"}', }, {'desc': 'CREATE_HC create basic http healthcheck [success]', 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s' % (HC1), 'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/", "httphealthcheck_port": 80, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC1), }, {'desc': 'CREATE_HC (repeat, no-op) create basic http healthcheck [success]', 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s' % (HC1), 'r': '127.0.0.1 | success >> {"changed": false, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/", "httphealthcheck_port": 80, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC1), }, {'desc': 'CREATE_HC create custom http healthcheck [success]', 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s httphealthcheck_port=1234 httphealthcheck_path="/whatup" httphealthcheck_host="foo" httphealthcheck_interval=300' % (HC2), 'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_healthy_count": 2, "httphealthcheck_host": "foo", "httphealthcheck_interval": 300, 
"httphealthcheck_name": "%s", "httphealthcheck_path": "/whatup", "httphealthcheck_port": 1234, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "name": null, "state": "present"}' % (HC2), }, {'desc': 'CREATE_HC create (broken) custom http healthcheck [FAIL]', 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s httphealthcheck_port="string" httphealthcheck_path=7' % (HC3), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Unexpected response: HTTP return_code[400], API error code[None] and message: Invalid value for: Expected a signed integer, got \'string\' (class java.lang.String)"}', }, {'desc': 'CREATE_LB create lb, missing region [FAIL]', 'm': 'gce_lb', 'a': 'name=%s' % (LB1), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Missing required region name"}', }, {'desc': 'CREATE_LB create lb, bogus region [FAIL]', 'm': 'gce_lb', 'a': 'name=%s region=bogus' % (LB1), 'r': '127.0.0.1 | FAILED >> {"changed": false, "failed": true, "msg": "Unexpected response: HTTP return_code[404], API error code[None] and message: The resource \'projects/%s/regions/bogus\' was not found"}' % (PROJECT), }, {'desc': 'CREATE_LB create lb, minimal params [success]', 'strip_numbers': True, 'm': 'gce_lb', 'a': 'name=%s region=%s' % (LB1, REGION), 'r': '127.0.0.1 | success >> {"changed": true, "external_ip": "173.255.123.245", "httphealthchecks": [], "members": [], "name": "%s", "port_range": "1-65535", "protocol": "tcp", "region": "%s", "state": "present"}' % (LB1, REGION), }, {'desc': 'CREATE_LB create lb full params [success]', 'strip_numbers': True, 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s httphealthcheck_port=5055 httphealthcheck_path="/howami" name=%s port_range=8000-8888 region=%s members=%s/%s,%s/%s' % (HC3,LB2,REGION,ZONE,INAME,ZONE,INAME2), 'r': '127.0.0.1 | success >> {"changed": true, "external_ip": "173.255.126.81", "httphealthcheck_healthy_count": 2, "httphealthcheck_host": null, "httphealthcheck_interval": 5, "httphealthcheck_name": "%s", "httphealthcheck_path": "/howami", "httphealthcheck_port": 5055, "httphealthcheck_timeout": 5, "httphealthcheck_unhealthy_count": 2, "httphealthchecks": ["%s"], "members": ["%s/%s", "%s/%s"], "name": "%s", "port_range": "8000-8888", "protocol": "tcp", "region": "%s", "state": "present"}' % (HC3,HC3,ZONE,INAME,ZONE,INAME2,LB2,REGION), }, ], 'teardown': [ 'gcutil deleteinstance --zone=%s -f %s %s' % (ZONE, INAME, INAME2), 'gcutil deleteforwardingrule --region=%s -f %s %s' % (REGION, LB1, LB2), 'sleep 10', 'gcutil deletetargetpool --region=%s -f %s-tp %s-tp' % (REGION, LB1, LB2), 'sleep 10', 'gcutil deletehttphealthcheck -f %s %s %s' % (HC1, HC2, HC3), ], }, {'id': '09', 'desc': 'Destroy load-balancer resources', 'setup': ['gcutil addhttphealthcheck %s' % (HC1), 'sleep 5', 'gcutil addhttphealthcheck %s' % (HC2), 'sleep 5', 'gcutil addtargetpool --health_checks=%s --region=%s %s-tp' % (HC1, REGION, LB1), 'sleep 5', 'gcutil addforwardingrule --target=%s-tp --region=%s %s' % (LB1, REGION, LB1), 'sleep 5', 'gcutil addtargetpool --region=%s %s-tp' % (REGION, LB2), 'sleep 5', 'gcutil addforwardingrule --target=%s-tp --region=%s %s' % (LB2, REGION, LB2), 'sleep 5', ], 'tests': [ {'desc': 'DELETE_LB: delete a non-existent LB [success]', 'm': 'gce_lb', 'a': 'name=missing state=absent', 'r': '127.0.0.1 | success >> {"changed": false, "name": "missing", "state": "absent"}', }, {'desc': 'DELETE_LB: delete a non-existent LB+HC [success]', 'm': 'gce_lb', 'a': 'name=missing httphealthcheck_name=alsomissing state=absent', 
'r': '127.0.0.1 | success >> {"changed": false, "httphealthcheck_name": "alsomissing", "name": "missing", "state": "absent"}', }, {'desc': 'DELETE_LB: destroy standalone healthcheck [success]', 'm': 'gce_lb', 'a': 'httphealthcheck_name=%s state=absent' % (HC2), 'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_name": "%s", "name": null, "state": "absent"}' % (HC2), }, {'desc': 'DELETE_LB: destroy standalone balancer [success]', 'm': 'gce_lb', 'a': 'name=%s state=absent' % (LB2), 'r': '127.0.0.1 | success >> {"changed": true, "name": "%s", "state": "absent"}' % (LB2), }, {'desc': 'DELETE_LB: destroy LB+HC [success]', 'm': 'gce_lb', 'a': 'name=%s httphealthcheck_name=%s state=absent' % (LB1, HC1), 'r': '127.0.0.1 | success >> {"changed": true, "httphealthcheck_name": "%s", "name": "%s", "state": "absent"}' % (HC1,LB1), }, ], 'teardown': [ 'gcutil deleteforwardingrule --region=%s -f %s %s' % (REGION, LB1, LB2), 'sleep 10', 'gcutil deletetargetpool --region=%s -f %s-tp %s-tp' % (REGION, LB1, LB2), 'sleep 10', 'gcutil deletehttphealthcheck -f %s %s' % (HC1, HC2), ], }, ] def main(tests_to_run=[]): for test in test_cases: if tests_to_run and test['id'] not in tests_to_run: continue print "=> starting/setup '%s:%s'"% (test['id'], test['desc']) if DEBUG: print "=debug>", test['setup'] for c in test['setup']: (s,o) = run(c) test_i = 1 for t in test['tests']: if DEBUG: print "=>debug>", test_i, t['desc'] # run any test-specific setup commands if t.has_key('setup'): for setup in t['setup']: (status, output) = run(setup) # run any 'peek_before' commands if t.has_key('peek_before') and PEEKING_ENABLED: for setup in t['peek_before']: (status, output) = run(setup) # run the ansible test if 'a' exists, otherwise # an empty 'a' directive allows test to run # setup/teardown for a subsequent test. 
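# note: the "-o" (one-line output) flag used to build acmd below is what # makes a direct string comparison against each test's expected 'r' value # feasible -- every ad-hoc run collapses to a single "host | status >> {json}" line.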
if t['a']: if DEBUG: print "=>debug>", t['m'], t['a'] acmd = "ansible all -o -m %s -a \"%s\"" % (t['m'],t['a']) #acmd = "ANSIBLE_KEEP_REMOTE_FILES=1 ansible all -vvv -m %s -a \"%s\"" % (t['m'],t['a']) (s,o) = run(acmd) # check expected output if DEBUG: print "=debug>", o.strip(), "!=", t['r'] print "=> %s.%02d '%s':" % (test['id'], test_i, t['desc']), if t.has_key('strip_numbers'): # strip out all numbers so we don't trip over different # IP addresses is_good = (o.strip().translate(None, "0123456789") == t['r'].translate(None, "0123456789")) else: is_good = (o.strip() == t['r']) if is_good: print "PASS" else: print "FAIL" if VERBOSE: print "=>", acmd print "=> Expected:", t['r'] print "=> Got:", o.strip() # run any 'peek_after' commands if t.has_key('peek_after') and PEEKING_ENABLED: for setup in t['peek_after']: (status, output) = run(setup) # run any test-specific teardown commands if t.has_key('teardown'): for td in t['teardown']: (status, output) = run(td) test_i += 1 print "=> completing/teardown '%s:%s'" % (test['id'], test['desc']) if DEBUG: print "=debug>", test['teardown'] for c in test['teardown']: (s,o) = run(c) if __name__ == '__main__': tests_to_run = [] if len(sys.argv) == 2: if sys.argv[1] in ["--help", "--list"]: print "usage: %s [id1,id2,...,idN]" % sys.argv[0] print " * An empty argument list will execute all tests" print " * Do not need to specify tests in numerical order" print " * List test categories with --list or --help" print "" for test in test_cases: print "\t%s:%s" % (test['id'], test['desc']) sys.exit(0) else: tests_to_run = sys.argv[1].split(',') main(tests_to_run) ansible-1.5.4/README.md0000664000000000000000000000457512316627017013135 0ustar rootroot[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) Ansible ======= Ansible is a radically simple configuration-management, application deployment, task-execution, and multinode orchestration engine. Read the documentation and more at http://ansible.com/ Many users run straight from the development branch (it's generally fine to do so), but you might also wish to consume a release. You can find instructions [here](http://docs.ansible.com/intro_getting_started.html) for a variety of platforms. If you want a tarball of the last release, go to [releases.ansible.com](http://releases.ansible.com/ansible) and you can also install with pip. Design Principles ================= * Have a dead simple setup process and a minimal learning curve * Be super fast & parallel by default * Require no server or client daemons; use existing SSHd * Use a language that is both machine and human friendly * Focus on security and easy auditability/review/rewriting of content * Manage remote machines instantly, without bootstrapping * Allow module development in any dynamic language, not just Python * Be usable as non-root * Be the easiest IT automation system to use, ever. Get Involved ============ * Read [Contributing.md](https://github.com/ansible/ansible/blob/devel/CONTRIBUTING.md) for all kinds of ways to contribute to and interact with the project, including mailing list information and how to submit bug reports and code to Ansible. * All code submissions are done through pull requests. Take care to make sure no merge commits are in the submission, and use "git rebase" vs "git merge" for this reason. If submitting a large code change (other than modules), it's probably a good idea to join ansible-devel and talk about what you would like to do or add first and to avoid duplicate efforts. 
This not only helps everyone know what's going on, it also helps save time and effort if we decide some changes are needed. * irc.freenode.net: #ansible Branch Info =========== * Releases are named after Van Halen songs. * The devel branch corresponds to the release actively under development. * Various release-X.Y branches exist for previous releases. * We'd love to have your contributions, read "CONTRIBUTING.md" for process notes. Author ====== Michael DeHaan -- michael@ansible.com [Ansible, Inc](http://ansible.com) ansible-1.5.4/examples/0000775000000000000000000000000012316627017013461 5ustar rootrootansible-1.5.4/examples/DOCUMENTATION.yml0000664000000000000000000000155112316627017016217 0ustar rootroot--- # If a key doesn't apply to your module (ex: choices, default, or # aliases) you can use the word 'null', or an empty list, [], where # appropriate. module: modulename short_description: This is a sentence describing the module description: - Longer description of the module - You might include instructions version_added: "X.Y" author: Your AWESOME name here notes: - Other things consumers of your module should know requirements: - list of required things - like the facter package - or a specific platform options: # One or more of the following option_name: description: - Words go here - that describe - this option required: true or false default: a string or the word null choices: [list, of, choices] aliases: [list, of, aliases] version_added: 1.X ansible-1.5.4/examples/ansible.cfg0000664000000000000000000001374712316627017015567 0ustar rootroot# config file for ansible -- http://ansible.com/ # ============================================== # nearly all parameters can be overridden in ansible-playbook # or with command line flags. ansible will read ANSIBLE_CONFIG, # ansible.cfg in the current working directory, .ansible.cfg in # the home directory or /etc/ansible/ansible.cfg, whichever it # finds first [defaults] # some basic default values... hostfile = /etc/ansible/hosts library = /usr/share/ansible remote_tmp = $HOME/.ansible/tmp pattern = * forks = 5 poll_interval = 15 sudo_user = root #ask_sudo_pass = True #ask_pass = True transport = smart remote_port = 22 # additional paths to search for roles in, colon separated #roles_path = /etc/ansible/roles # uncomment this to disable SSH key host checking #host_key_checking = False # change this for alternative sudo implementations sudo_exe = sudo # what flags to pass to sudo #sudo_flags = -H # SSH timeout timeout = 10 # default user to use for playbooks if user is not specified # (/usr/bin/ansible will use current user as default) #remote_user = root # logging is off by default unless this path is defined # if so defined, consider logrotate #log_path = /var/log/ansible.log # default module name for /usr/bin/ansible #module_name = command # use this shell for commands executed under sudo # you may need to change this to bin/bash in rare instances # if sudo is constrained #executable = /bin/sh # if inventory variables overlap, does the higher precedence one win # or are hash values merged together? The default is 'replace' but # this can also be set to 'merge'. #hash_behaviour = replace # How to handle variable replacement - as of 1.2, Jinja2 variable syntax is # preferred, but we still support the old $variable replacement too. # Turn off ${old_style} variables here if you like.
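# (Jinja2 syntax looks like {{ variable }}; the legacy syntax is $variable or ${variable}.)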
#legacy_playbook_variables = yes # list any Jinja2 extensions to enable here: #jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n # if set, always use this private key file for authentication, same as # if passing --private-key to ansible or ansible-playbook #private_key_file = /path/to/file # format of the string {{ ansible_managed }} available within Jinja2 # templates; it indicates to users editing template files that their edits will be # replaced, substituting {file}, {host} and {uid}, plus strftime codes, with proper values. ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} # by default, ansible-playbook will display "Skipping [host]" if it determines a task # should not be run on a host. Set this to "False" if you don't want to see these "Skipping" # messages. NOTE: the task header will still be shown regardless of whether or not the # task is skipped. #display_skipped_hosts = True # by default (as of 1.3), Ansible will raise errors when attempting to dereference # Jinja2 variables that are not set in templates or action lines. Uncomment this line # to revert the behavior to pre-1.3. #error_on_undefined_vars = False # set plugin path directories here, separate with colons action_plugins = /usr/share/ansible_plugins/action_plugins callback_plugins = /usr/share/ansible_plugins/callback_plugins connection_plugins = /usr/share/ansible_plugins/connection_plugins lookup_plugins = /usr/share/ansible_plugins/lookup_plugins vars_plugins = /usr/share/ansible_plugins/vars_plugins filter_plugins = /usr/share/ansible_plugins/filter_plugins # don't like cows? that's unfortunate. # set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 #nocows = 1 # don't like colors either? # set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 #nocolor = 1 # the CA certificate path used for validating SSL certs. This path # should exist on the controlling node, not the target nodes # common locations: # RHEL/CentOS: /etc/pki/tls/certs/ca-bundle.crt # Fedora : /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem # Ubuntu : /usr/share/ca-certificates/cacert.org/cacert.org.crt #ca_file_path = # the http user-agent string to use when fetching urls. Some web server # operators block the default urllib user agent as it is frequently used # by malicious attacks/scripts, so we set it to something unique to # avoid issues. #http_user_agent = ansible-agent [paramiko_connection] # uncomment this line to cause the paramiko connection plugin to not record new host # keys encountered. Increases performance on new host additions. Setting works independently of the # host key checking setting above. #record_host_keys=False # by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this # line to disable this behaviour. #pty=False [ssh_connection] # ssh arguments to use # Leaving off ControlPersist will result in poor performance, so use # paramiko on older platforms rather than removing it #ssh_args = -o ControlMaster=auto -o ControlPersist=60s # The path to use for the ControlPath sockets. This defaults to # "%(directory)s/ansible-ssh-%%h-%%p-%%r", however on some systems with # very long hostnames or very long path names (caused by long user names or # deeply nested home directories) this can exceed the character limit on # file socket names (108 characters for most platforms). In that case, you # may wish to shorten the string below.
# # Example: # control_path = %(directory)s/%%h-%%r #control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r # Enabling pipelining reduces the number of SSH operations required to # execute a module on the remote server. This can result in a significant # performance improvement when enabled, however when using "sudo:" you must # first disable 'requiretty' in /etc/sudoers # # By default, this option is disabled to preserve compatibility with # sudoers configurations that have requiretty (the default on many distros). # #pipelining = False # if True, make ansible use scp if the connection type is ssh # (default is sftp) #scp_if_ssh = True [accelerate] accelerate_port = 5099 accelerate_timeout = 30 accelerate_connect_timeout = 5.0 ansible-1.5.4/examples/issues/0000775000000000000000000000000012316627017014774 5ustar rootrootansible-1.5.4/examples/issues/ISSUE_TEMPLATE.md0000664000000000000000000000315212316627017017502 0ustar rootroot##### Issue Type: What kind of ticket is this? You can say “Bug Report”, “Feature Idea”, “Feature Pull Request”, “New Module Pull Request”, “Bugfix Pull Request”, “Documentation Report”, or “Docs Pull Request”. ##### Ansible Version: Please supply the verbatim output from running “ansible --version”. ##### Environment: What OS are you running Ansible from and what OS are you managing? Examples include RHEL 5/6, Centos 5/6, Ubuntu 12.04/13.10, *BSD, Solaris. If this is a generic feature request or it doesn’t apply, just say “N/A”. ##### Summary: Please summarize your request in this space. You will earn bonus points for being succinct, but please add enough detail so we can understand the request. ##### Steps To Reproduce: If this is a bug ticket, please enter the steps you use to reproduce the problem in the space below. If this is a feature request, please enter the steps you would use to use the feature. If an example playbook is useful, please include a short reproducer inline, indented by four spaces. If a longer one is necessary, please link one uploaded to gist.github.com. ##### Expected Results: Please enter your expected results in this space. When running the steps supplied above, what would you expect to happen? If showing example output, indent your output by four spaces so it will render correctly in GitHub. ##### Actual Results: Please enter your actual results in this space. When running the steps supplied above, what actually happened? If showing example output, indent your output by four spaces so it will render correctly in GitHub. 
ansible-1.5.4/examples/scripts/0000775000000000000000000000000012316627017015150 5ustar rootrootansible-1.5.4/examples/scripts/uptime.py0000775000000000000000000000153212316627017017031 0ustar rootroot#!/usr/bin/python # (c) 2012, Michael DeHaan # example of getting the uptime of all hosts, 10 at a time import ansible.runner import sys # construct the ansible runner and execute on all hosts results = ansible.runner.Runner( pattern='*', forks=10, module_name='command', module_args='/usr/bin/uptime', ).run() if results is None: print "No hosts found" sys.exit(1) print "UP ***********" for (hostname, result) in results['contacted'].items(): if not 'failed' in result: print "%s >>> %s" % (hostname, result['stdout']) print "FAILED *******" for (hostname, result) in results['contacted'].items(): if 'failed' in result: print "%s >>> %s" % (hostname, result['msg']) print "DOWN *********" for (hostname, result) in results['dark'].items(): print "%s >>> %s" % (hostname, result) ansible-1.5.4/examples/scripts/yaml_to_ini.py0000775000000000000000000001667112316627017020043 0ustar rootroot# (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . import ansible.constants as C from ansible.inventory.host import Host from ansible.inventory.group import Group from ansible import errors from ansible import utils import os import yaml import sys class InventoryParserYaml(object): ''' Host inventory parser for ansible ''' def __init__(self, filename=C.DEFAULT_HOST_LIST): sys.stderr.write("WARNING: YAML inventory files are deprecated in 0.6 and will be removed in 0.7, to migrate" + " download and run https://github.com/ansible/ansible/blob/devel/examples/scripts/yaml_to_ini.py\n") fh = open(filename) data = fh.read() fh.close() self._hosts = {} self._parse(data) def _make_host(self, hostname): if hostname in self._hosts: return self._hosts[hostname] else: host = Host(hostname) self._hosts[hostname] = host return host # see file 'test/yaml_hosts' for syntax def _parse(self, data): # FIXME: refactor into subfunctions all = Group('all') ungrouped = Group('ungrouped') all.add_child_group(ungrouped) self.groups = dict(all=all, ungrouped=ungrouped) grouped_hosts = [] yaml = utils.parse_yaml(data) # first add all groups for item in yaml: if type(item) == dict and 'group' in item: group = Group(item['group']) for subresult in item.get('hosts',[]): if type(subresult) in [ str, unicode ]: host = self._make_host(subresult) group.add_host(host) grouped_hosts.append(host) elif type(subresult) == dict: host = self._make_host(subresult['host']) vars = subresult.get('vars',{}) if type(vars) == list: for subitem in vars: for (k,v) in subitem.items(): host.set_variable(k,v) elif type(vars) == dict: for (k,v) in subresult.get('vars',{}).items(): host.set_variable(k,v) else: raise errors.AnsibleError("unexpected type for variable") group.add_host(host) grouped_hosts.append(host) vars = item.get('vars',{}) if type(vars) == dict: for (k,v) in 
item.get('vars',{}).items(): group.set_variable(k,v) elif type(vars) == list: for subitem in vars: if type(subitem) != dict: raise errors.AnsibleError("expected a dictionary") for (k,v) in subitem.items(): group.set_variable(k,v) self.groups[group.name] = group all.add_child_group(group) # add host definitions for item in yaml: if type(item) in [ str, unicode ]: host = self._make_host(item) if host not in grouped_hosts: ungrouped.add_host(host) elif type(item) == dict and 'host' in item: host = self._make_host(item['host']) vars = item.get('vars', {}) if type(vars)==list: varlist, vars = vars, {} for subitem in varlist: vars.update(subitem) for (k,v) in vars.items(): host.set_variable(k,v) groups = item.get('groups', {}) if type(groups) in [ str, unicode ]: groups = [ groups ] if type(groups)==list: for subitem in groups: if subitem in self.groups: group = self.groups[subitem] else: group = Group(subitem) self.groups[group.name] = group all.add_child_group(group) group.add_host(host) grouped_hosts.append(host) if host not in grouped_hosts: ungrouped.add_host(host) # make sure ungrouped.hosts is the complement of grouped_hosts ungrouped_hosts = [host for host in ungrouped.hosts if host not in grouped_hosts] ungrouped.hosts = ungrouped_hosts if __name__ == "__main__": if len(sys.argv) != 2: print "usage: yaml_to_ini.py /path/to/ansible/hosts" sys.exit(1) result = "" original = sys.argv[1] yamlp = InventoryParserYaml(filename=sys.argv[1]) dirname = os.path.dirname(original) group_names = [ g.name for g in yamlp.groups.values() ] for group_name in sorted(group_names): record = yamlp.groups[group_name] if group_name == 'all': continue hosts = record.hosts result = result + "[%s]\n" % record.name for h in hosts: result = result + "%s\n" % h.name result = result + "\n" groupfiledir = os.path.join(dirname, "group_vars") if not os.path.exists(groupfiledir): print "* creating: %s" % groupfiledir os.makedirs(groupfiledir) groupfile = os.path.join(groupfiledir, group_name) print "* writing group variables for %s into %s" % (group_name, groupfile) groupfh = open(groupfile, 'w') groupfh.write(yaml.dump(record.get_variables())) groupfh.close() for (host_name, host_record) in yamlp._hosts.iteritems(): hostfiledir = os.path.join(dirname, "host_vars") if not os.path.exists(hostfiledir): print "* creating: %s" % hostfiledir os.makedirs(hostfiledir) hostfile = os.path.join(hostfiledir, host_record.name) print "* writing host variables for %s into %s" % (host_record.name, hostfile) hostfh = open(hostfile, 'w') hostfh.write(yaml.dump(host_record.get_variables())) hostfh.close() # also need to keep a hash of variables per each host # and variables per each group # and write those to disk newfilepath = os.path.join(dirname, "hosts.new") fdh = open(newfilepath, 'w') fdh.write(result) fdh.close() print "* COMPLETE: review your new inventory file and replace your original when ready" print "* new inventory file saved as %s" % newfilepath print "* edit group specific variables in %s/group_vars/" % dirname print "* edit host specific variables in %s/host_vars/" % dirname # now need to write this to disk as (oldname).new # and inform the user ansible-1.5.4/examples/playbooks/0000775000000000000000000000000012316627017015464 5ustar rootrootansible-1.5.4/examples/playbooks/README.md0000664000000000000000000000022312316627017016740 0ustar rootrootPlaybook Examples ================= Playbook examples have moved. See [the Ansible-Examples repo](https://github.com/ansible/ansible-examples).
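The `examples/hosts` file that follows documents the INI inventory format, including numeric host ranges such as `www[001:006].example.com`. As a rough sketch (not part of the shipped examples; it assumes this tree's `lib/` directory is on `PYTHONPATH`, and the inventory path is a placeholder), the same inventory API used by `bin/ansible` above can load such a file and expand those ranges:

#!/usr/bin/env python
# Rough sketch: load an INI inventory with the in-tree API that
# bin/ansible uses. Point the path below at your own hosts file.
import ansible.inventory

inventory_manager = ansible.inventory.Inventory('/etc/ansible/hosts')

# Patterns work as on the command line: a group name, 'all', or a glob
# like 'web*'. Ranges such as www[001:006].example.com come back as
# individual hostnames.
for host in inventory_manager.list_hosts('webservers'):
    print host

This mirrors the --list-hosts handling in bin/ansible.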
ansible-1.5.4/examples/hosts0000664000000000000000000000170512316627017014547 0ustar rootroot# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
#   - Comments begin with the '#' character
#   - Blank lines are ignored
#   - Groups of hosts are delimited by [header] elements
#   - You can enter hostnames or ip addresses
#   - A hostname/ip can be a member of multiple groups

# Ex 1: Ungrouped hosts, specify before any group headers.

green.example.com
blue.example.com
192.168.100.1
192.168.100.10

# Ex 2: A collection of hosts belonging to the 'webservers' group

[webservers]
alpha.example.org
beta.example.org
192.168.1.100
192.168.1.110

# If you have multiple hosts following a pattern you can specify
# them like this:

www[001:006].example.com

# Ex 3: A collection of database servers in the 'dbservers' group

[dbservers]
db01.intranet.mydomain.net
db02.intranet.mydomain.net
10.25.1.56
10.25.1.57

# Here's another example of host ranges, this time there are no
# leading 0s:

db-[99:101]-node.example.com
ansible-1.5.4/CHANGELOG.md0000664000000000000000000020376312316627017013467 0ustar rootrootAnsible Changes By Release
==========================

## 1.5.4 "Love Walks In" - April 1, 2014

- Security fix for safe_eval, which further hardens the checking of the evaluation function.
- Changing order of variable precedence for system facts, to ensure that inventory variables take precedence over any facts that may be set on a host.

## 1.5.3 "Love Walks In" - March 13, 2014

- Fix validate_certs and run_command errors from previous release
- Fixes to the git module related to host key checking

## 1.5.2 "Love Walks In" - March 11, 2014

- Fix module errors in airbrake and apt from previous release

## 1.5.1 "Love Walks In" - March 10, 2014

- Force command action to not be executed by the shell unless specifically enabled.
- Validate SSL certs accessed through urllib*.
- Implement new default cipher class AES256 in ansible-vault.
- Misc bug fixes.

## 1.5 "Love Walks In" - February 28, 2014

Major features/changes:

* when_foo which was previously deprecated is now removed, use "when:" instead. Code generates appropriate error suggestion.
* include + with_items which was previously deprecated is now removed, ditto. Use with_nested / with_together, etc.
* only_if, which is much older than when_foo and was deprecated, is similarly removed.
* ssh connection plugin is now more efficient if you add 'pipelining=True' in ansible.cfg under [ssh_connection], see example.cfg
* localhost/127.0.0.1 is not required to be in inventory if referenced, if not in inventory, it does not implicitly appear in the 'all' group.
* git module has new parameters (accept_hostkey, key_file, ssh_opts) to ease the usage of git and ssh protocols.
* when using accelerate mode, the daemon will now be restarted when specifying a different remote_user between plays.
* added no_log: option for tasks. When used, no logging information will be sent to syslog during the module execution. (See the sketch after this list.)
* acl module now handles 'default' and allows for either shorthand entry or specific fields per entry section
* play_hosts is a new magic variable to provide a list of hosts in scope for the current play.
* ec2 module now accepts 'exact_count' and 'count_tag' as a way to enforce a running number of nodes by tags.
* all ec2 modules that work with Eucalyptus also now support a 'validate_certs' option, which can be set to 'off' for installations using self-signed certs.
* Start of new integration test infrastructure (WIP, more details TBD)
* if repoquery is unavailable, the yum module will automatically attempt to install yum-utils
* ansible-vault: a framework for encrypting your playbooks and variable files
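For reference, a minimal sketch of two of the 1.5 additions above: the git module's new accept_hostkey parameter and the per-task no_log: option. The host group, repository URL, and command shown here are invented for illustration:

```yaml
- hosts: webservers
  remote_user: deploy
  tasks:
    # accept_hostkey (new in 1.5) avoids the interactive SSH host key prompt
    - name: check out the application
      git: repo=git@git.example.com:app.git dest=/srv/app accept_hostkey=yes

    # no_log (new in 1.5) keeps this task's arguments and results out of syslog
    - name: run a command that handles sensitive data
      command: /usr/local/bin/rotate-keys
      no_log: true
```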
New modules:

* cloud: ec2_elb_lb
* cloud: ec2_key
* cloud: ec2_snapshot
* cloud: rax_dns
* cloud: rax_dns_record
* cloud: rax_files
* cloud: rax_files_objects
* cloud: rax_keypair
* cloud: rax_queue
* cloud: docker_image
* messaging: rabbitmq_policy
* system: at
* utilities: assert

Other notable changes (many new module params & bugfixes may not be listed):

* no_reboot is now defaulted to "no" in the ec2_ami module to ensure filesystem consistency in the resulting AMI.
* sysctl module overhauled
* authorized_key module overhauled
* synchronize module now handles local transport better
* apt_key module now ignores case on keys
* zypper_repository now skips on check mode
* file module now responds to force behavior when dealing with hardlinks
* new lookup plugin 'csvfile'
* fixes to allow hash_merge behavior to work with dynamic inventory
* mysql module will use port argument on dump/import
* subversion module now ignores locale to better intercept status messages
* rax api_key argument is no longer logged
* backwards/forwards compatibility for OpenStack modules, 'quantum' modules grok neutron renaming
* hosts properly de-duplicated if appearing in redundant groups
* hostname module support added for ScientificLinux
* ansible-pull can now show live stdout and pass verbosity levels to ansible-playbook
* ec2 instances can now be stopped or started
* additional volumes can be created when creating new ec2 instances
* user module can move a home directory
* significant enhancement and cleanup of rackspace modules
* ansible_ssh_private_key_file can be templated
* docker module updated to support the docker-py python library 0.3.0
* various other bug fixes
* md5 logic improved during sudo operation
* support for ed25519 keys in authorized_key module
* ability to set directory permissions during a recursive copy (directory_mode parameter)

## 1.4.5 "Could This Be Magic" - February 12, 2014

- fixed issue with permissions being incorrect on fireball/accelerate keys when the umask setting was too loose.

## 1.4.4 "Could This Be Magic" - January 6, 2014

- fixed a minor issue with newer versions of pip dropping the "use-mirrors" parameter.

## 1.4.3 "Could This Be Magic" - December 20, 2013

- Fixed role_path parsing from ansible.cfg
- Fixed default role templates

## 1.4.2 "Could This Be Magic" - December 18, 2013

* Fixed a few bugs related to unicode
* Fixed errors in the ssh connection method with large data returns
* Miscellaneous fixes for a few modules
* Add the ansible-galaxy command

## 1.4.1 "Could This Be Magic" - November 27, 2013

* Misc fixes to accelerate mode and various modules.

## 1.4 "Could This Be Magic" - November 21, 2013

Highlighted new features:

* Added do-until feature, which can be used to retry a failed task a specified number of times with a delay in-between the retries. (See the sketch at the end of these 1.4 notes.)
* Added failed_when option for tasks, which can be used to specify logical statements that make it easier to determine when a task has failed, or to make it easier to ignore certain non-zero return codes for some commands.
* Added the "subelement" lookup plugin, which allows iteration of the keys of a dictionary or items in a list.
* Added the capability to use either paramiko or ssh for the initial setup connection of an accelerated playbook.
* Automatically provide advice on common parser errors users encounter.
* Deprecation warnings are now shown for legacy features: when_integer/etc, only_if, include+with_items, etc. Can be disabled in ansible.cfg
* The system will now provide helpful tips around possible YAML syntax errors, increasing ease of use for new users.
* warnings are now shown for using {{ foo }} in loops and conditionals, and suggest leaving the variable expressions bare as per docs.
* The roles search path is now configurable in ansible.cfg via the 'roles_path' config setting.
* Includes with parameters can now be done like roles for consistency: - { include: song.yml, year:1984, song:'jump' }
* The name of each role is now shown before each task if roles are being used
* Adds a "var=" option to the debug module for debugging variable data. "debug: var=hostvars['hostname']" and "debug: var=foo" are all valid syntax.
* Variables in {{ format }} can be used as references even if they are structured data
* Can force binding of accelerate to ipv6 ports.
* the apt module will auto-install python-apt if not present rather than requiring a manual installation
* the copy module is now recursive if the local 'src' parameter is a directory.
* syntax checks now scan included task and variable files as well as main files

New modules and plugins:

* cloud: ec2_eip -- manage AWS elastic IPs
* cloud: ec2_vpc -- manage ec2 virtual private clouds
* cloud: elasticache -- Manages clusters in Amazon ElastiCache
* cloud: rax_network -- sets up Rackspace networks
* cloud: rax_facts: retrieve facts about a Rackspace Cloud Server
* cloud: rax_clb_nodes -- manage Rackspace cloud load balanced nodes
* cloud: rax_clb -- manages Rackspace cloud load balancers
* cloud: docker - instantiates/removes/manages docker containers
* cloud: ovirt -- VM lifecycle controls for ovirt
* files: acl -- set or get acls on a file
* files: unarchive: pushes and extracts tarballs
* files: synchronize: a useful wrapper around rsyncing trees of files
* system: firewalld -- manage the firewalld configuration
* system: modprobe -- manage kernel modules on systems that support modprobe/rmmod
* system: open_iscsi -- manage targets on an initiator using open-iscsi
* system: blacklist: add or remove modules from the kernel blacklist
* system: hostname - sets the systems hostname
* utilities: include_vars -- dynamically load variables based on conditions.
* packaging: zypper_repository - adds or removes Zypper repositories
* packaging: urpmi - work with urpmi packages
* packaging: swdepot - a module for working with swdepot
* notification: grove - notifies to Grove hosted IRC channels
* web_infrastructure: ejabberd_user: add and remove users to ejabberd
* web_infrastructure: jboss: deploys or undeploys apps to jboss
* source_control: github_hooks: manages GitHub service hooks
* net_infrastructure: bigip_monitor_http: manages F5 BIG-IP LTM http monitors
* net_infrastructure: bigip_monitor_tcp: manages F5 BIG-IP LTM TCP monitors
* net_infrastructure: bigip_pool_member: manages F5 BIG-IP LTM pool members
* net_infrastructure: bigip_node: manages F5 BIG-IP LTM nodes
* net_infrastructure: openvswitch_port
* net_infrastructure: openvswitch_bridge

Plugins:

* jail connection module (FreeBSD)
* lxc connection module
* added inventory script for listing FreeBSD jails
* added md5 as a Jinja2 filter: {{ path | md5 }}
* added a fileglob filter that will return files matching a glob pattern. with_items: "/foo/pattern/*.txt | fileglob"
* 'changed' filter returns whether a previous step was changed easier. when: registered_result | changed
* DOCS NEEDED: 'unique' and 'intersect' filters are added for dealing with lists.
* DOCS NEEDED: new lookup plugin added for etcd
* a 'func' connection type to help people migrating from func/certmaster.

Misc changes (all module additions/fixes may not be listed):

* (docs pending) New features for accelerate mode: configurable timeouts and keepalives for long-running tasks.
* Added a `delimiter` field to the assemble module.
* Added `ansible_env` to the list of facts returned by the setup module.
* Added `state=touch` to the file module, which functions similarly to the command-line version of `touch`.
* Added a -vvvv level, which will show SSH client debugging information in the event of a failure.
* Includes now support the more standard syntax, similar to that of role includes and dependencies.
* Changed the `user:` parameter on plays to `remote_user:` to prevent confusion with the module of the same name. Still backwards compatible on play parameters.
* Added parameter to allow the fetch module to skip the md5 validation step ('validate_md5=false'). This is useful when fetching files that are actively being written to, such as live log files.
* Inventory hosts are used in the order they appear in the inventory.
* in hosts: foo[2-5] type syntax, the iterators now are zero indexed and the last index is non-inclusive, to match Python standards.
* There is now a way for a callback plugin to disable itself. See osx_say example code for an example.
* Many bugfixes to modules of all types.
* Complex arguments now can be used with async tasks
* SSH ControlPath is now configurable in ansible.cfg. There is a limit to the lengths of these paths, see how to shorten them in ansible.cfg.
* md5sum support on AIX with csum.
* Extremely large documentation refactor into subchapters
* Added 'append_privs' option to the mysql_user module
* Can now update (temporarily change) host variables using the "add_host" module for existing hosts.
* Fixes for IPv6 addresses in inventory text files
* name of executable can be passed to pip/gem etc, for installing under *different* interpreters
* copy of ./hacking/env-setup added for fish users, ./hacking/env-setup.fish
* file module more tolerant of non-absolute paths in softlinks.
* miscellaneous fixes/upgrades to async polling logic.
* conditions on roles now pass to dependent roles
* ansible_sudo_pass can be set in a host variable if desired
* misc fixes for the pip and easy_install modules
* support for running handlers that have parameterized names based on role parameters
* added support for compressing MySQL dumps and extracting during import
* Boto version compatibility fixes for the EC2 inventory script
* in the EC2 inventory script, a group 'EC2' and 'RDS' contains EC2 and RDS hosts.
* umask is enforced by the cron module
* apt packages that are not-removed and not-upgraded do not count as changes
* the assemble module can now use src files from the local server and copy them over dynamically
* authorization code has been standardized between Amazon cloud modules
* the wait_for module can now also wait for files to exist or a regex string to exist in a file
* leading ranges are now allowed in ranged hostname patterns, ex: [000-250].example.com
* pager support added to ansible-doc (so it will auto-invoke less, etc)
* misc fixes to the cron module
* get_url module now understands content-disposition headers for deciding filenames
* it is possible to have subdirectories in between group_vars/ and host_vars/ and the final filename, like host_vars/rack42/asdf for the variables for host 'asdf'. The intermediate directories are ignored, so do not put a file in there twice.
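To close out the 1.4 notes, a sketch of the do-until and failed_when features highlighted above. The commands and their output conventions are invented for illustration:

```yaml
# do-until (new in 1.4): retry a task with a delay between attempts
- name: retry until the service reports healthy
  command: /usr/local/bin/health-check
  register: health
  until: health.stdout.find("OK") != -1
  retries: 10
  delay: 5

# failed_when (new in 1.4): define failure independently of the return code
- name: treat a specific message in stderr as a failure
  command: /usr/local/bin/deploy-step
  register: deploy
  failed_when: "'FATAL' in deploy.stderr"
```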
## 1.3.4 "Top of the World" (reprise) - October 29, 2013

* Fixed a bug in the copy module, where a filename containing the string "raw" was handled incorrectly
* Fixed a bug in accelerate mode, where copying a zero-length file out would fail

## 1.3.3 "Top of the World" (reprise) - October 9, 2013

Additional fixes for accelerate mode.

## 1.3.2 "Top of the World" (reprise) - September 19th, 2013

Multiple accelerate mode fixes:

* Make packet reception less greedy, so multiple frames of data are not consumed by one call.
* Adding two timeout values (one for connection and one for data reception timeout).
* Added keepalive packets, so async mode is no longer required for long-running tasks.
* Modified accelerate daemon to use the verbose logging level of the ansible command that started it.
* Fixed bug where accelerate would not work in check-mode.
* Added a -vvvv level, which will show SSH client debugging information in the event of a failure.
* Fixed bug in apt_repository module where the repository cache was not being updated.
* Fixed bug where "too many open files" errors would be encountered due to pseudo TTY's not being closed properly.

## 1.3.1 "Top of the World" (reprise) - September 16th, 2013

Fixing a bug in accelerate mode whereby the gather_facts step would always be run via sudo regardless of the play settings.

## 1.3 "Top of the World" - September 13th, 2013

Highlighted new features:

* accelerated mode: An enhanced fireball mode that requires zero bootstrapping and fewer requirements plus adds capabilities like sudo commands.
* role defaults: Allows roles to define a set of variables at the lowest priority. These variables can be overridden by any other variable.
* new /etc/ansible/facts.d allows JSON or INI-style facts to be provided from the remote node, and supports executable fact programs in this dir. Files must end in *.fact.
* added the ability to make undefined template variables raise errors (see ansible.cfg) * (DOCS PENDING) sudo: True/False and sudo_user: True/False can be set at include and role level * added changed_when: (expression) which allows overriding whether a result is changed or not and can work with registered expressions * --extra-vars can now take a file as input, e.g., "-e @filename" and can also be formatted as YAML * external inventory scripts may now return host variables in one pass, which allows them to be much more efficient for large numbers of hosts * if --forks exceeds the numbers of hosts, it will be automatically reduced. Set forks to 0 and you get "as many forks as I have hosts" out of the box. * enabled error_on_undefined_vars by default, which will make errors in playbooks more obvious * role dependencies -- one role can now pull in another, with parameters of its own. * added the ability to have tasks execute even during a check run (always_run). * added the ability to set the maximum failure percentage for a group of hosts. New modules: * notifications: datadog_event -- send data to datadog * cloud: digital_ocean -- module for DigitalOcean provisioning that also includes inventory support * cloud: rds -- Amazon Relational Database Service * cloud: linode -- modules for Linode provisioning that also includes inventory support * cloud: route53 -- manage Amazon DNS entries * cloud: ec2_ami -- manages (and creates!) ec2 AMIs * database: mysql_replication -- manages mysql replication settings for masters/slaves * database: mysql_variables -- manages mysql runtime variables * database: redis -- manages redis databases (slave mode and flushing data) * net_infrastructure: arista_interface * net_infrastructure: arista_lag * net_infrastructure: arista_l2interface * net_infrastructure: arista_vlan * system: stat -- reports on stat(istics) of remote files, for use with 'register' * web_infrastructure: htpasswd -- manipulate htpasswd files * packaging: rpm_key -- adds or removes RPM signing keys * packaging: apt_repository -- rewritten to remove dependencies * monitoring: boundary_meter -- adds or removes boundary.com meters * net_infrastructure: dnsmadeeasy - manipulate DNS Made Easy records * files: xattr -- manages extended attributes on files Misc changes: * return 3 when there are hosts that were unreachable during a run * the yum module now supports wildcard values for the enablerepo argument * added an inventory script to pull host information from Zabbix * async mode no longer allows with_* lookup plugins due to incompatibilities * Added OpenRC support (Gentoo) to the service module * ansible_ssh_user value is available to templates * added placement_group parameter to ec2 module * new sha256sum parameter added to get_url module for checksum validation * search for mount binaries in system path and sbin vs assuming path * allowed inventory file to be read from a pipe * added Solaris distribution facts * fixed bug along error path in quantum_network module * user password update mode is controllable in user module now (at creation vs. 
every time) * added check mode support to the OpenBSD package module * Fix for MySQL 5.6 compatibility * HP UX virtualization facts * fixed some executable bits in git * made rhn_register module compatible with EL5 * fix for setup module epoch time on Solaris * sudo_user is now expanded later, allowing it to be set at inventory scope * mongodb_user module changed to also support MongoDB 2.2 * new state=hard option added to the file module for hardlinks vs softlinks * fixes to apt module purging option behavior * fixes for device facts with multiple PCI domains * added "with_inventory_hostnames" lookup plugin, which can take a pattern and loop over hostnames matching the pattern and is great for use with delegate_to and so on * ec2 module supports adding to multiple security groups * cloudformation module includes fixes for the error path, and the 'wait_for' parameter was removed * added --only-if-changed to ansible-pull, which runs only if the repo has changes (not default) * added 'mandatory', a Jinja2 filter that checks if a variable is defined: {{ foo|mandatory }} * added support for multiple size formats to the lvol module * timing reporting on wait_for module now includes the delay time * IRC module can now send a server password * "~" now expanded on each component of configured plugin paths * fix for easy_install module when dealing with virtualenv * rackspace module now explicitly indicates rackspace vs vanilla openstack * add_host module does not report changed=True any longer * explanatory error message when using fireball with sudo has been improved * git module now automatically pulls down git submodules * negated patterns do not require "all:!foo", you can just say "!foo" now to select all not foos * fix for Debian services always reporting changed when toggling enablement bit * roles files now tolerate files named 'main.yaml' and 'main' in addition to main.yml * some help cleanup to command line flags on scripts * force option reinstated for file module so it can create symlinks to non-existent files, etc. 
* added termination support to ec2 module * --ask-sudo-pass or --sudo-user does not enable all options to use sudo in ansible-playbook * include/role conditionals are added ahead of task conditionals so they can short circuit properly * added pipes.quote in various places so paths with spaces are better tolerated * error handling while executing Jinja2 filters has been improved * upgrades to atomic replacement logic when copying files across partitions/etc * mysql user module can try to login before requiring explicit password * various additional options added to supervisorctl module * only add non unique parameter on group creation when required * allow rabbitmq_plugin to specify a non-standard RabbitMQ path * authentication fixes to keystone_user module * added IAM role support to EC2 module * fixes for OpenBSD package module to avoid shell expansion * git module upgrades to allow --depth and --version to be used together * new lookup plugin, "with_flattened" * extra vars (-e) variables can be used in playbook include paths * improved reporting for invalid sudo passwords * improved reporting for inability to find a suitable tmp location * require libselinux-python to perform file operations if SELinux is operational * ZFS module fixes for byte display constants and handling paths with spaces * setup module more tolerant of gathering facts against things it does not have permission to read * can specify name=* state=latest to update all yum modules * major speedups to the yum module for default cases * ec2_facts module will now run in check mode * sleep option on service module for sleeping between stop/restart * fix for IPv6 facts on BSD * added Jinja2 filters: skipped, whether a result was skipped * added Jinja2 filters: quote, quotes a string if it needs to be quoted * allow force=yes to affect apt upgrades * fix for saving conditionals in variable names * support for multiple host ranges in INI inventory, e.g., db[01:10:3]node-[01:10] * fixes/improvements to cron module * add user_install=no option to gem module to install gems system wide * added raw=yes to allow copying without python on remote machines * added with_indexed_items lookup plugin * Linode inventory plugin now significantly faster * added recurse=yes parameter to pacman module for package removal * apt_key module can now target specific keyrings (keyring=filename) * ec2 module change reporting improved * hg module now expands user paths (~) * SSH connection type known host checking now can process hashed known_host files * lvg module now checks for executables in more correct locations * copy module now works correctly with sudo_user * region parameter added to ec2_elb module * better default XMPP module message types * fixed conditional tests against raw booleans * mysql module grant removal is now smarter * apt-remove is now forced to be non-interactive * support ; comments in INI file module * fixes to callbacks WRT async output (fire and forget tasks now trigger callbacks!) * folder support for s3 module * added new example inventory plugin for Red Hat OpenShift * and other misc. bugfixes ## 1.2.3 "Hear About It Later" (reprise) -- Aug 21, 2013 * Local security fixes for predictable file locations for ControlPersist and retry file paths on shared machines on operating systems without kernel symlink/hardlink protections. 
## 1.2.2 "Hear About It Later" (reprise) -- July 4, 2013 * Added a configuration file option [paramiko_connection] record_host_keys which allows the code that paramiko uses to update known_hosts to be disabled. This is done because paramiko can be very slow at doing this if you have a large number of hosts and some folks may not want this behavior. This can be toggled independently of host key checking and does not affect the ssh transport plugin. Use of the ssh transport plugin is preferred if you have ControlPersist capability, and Ansible by default in 1.2.1 and later will autodetect. ## 1.2.1 "Hear About It Later" -- July 4, 2013 * Connection default is now "smart", which discovers if the system openssh can support ControlPersist, and uses it if so, if not falls back to paramiko. * Host key checking is on by default. Disable it if you like by adding host_key_checking=False in the [default] section of /etc/ansible/ansible.cfg or ~/ansible.cfg or by exporting ANSIBLE_HOST_KEY_CHECKING=False * Paramiko now records host keys it was in contact with host key checking is on. It is somewhat sluggish when doing this, so switch to the 'ssh' transport if this concerns you. ## 1.2 "Right Now" -- June 10, 2013 Core Features: * capability to set 'all_errors_fatal: True' in a playbook to force any error to stop execution versus a whole group or serial block needing to fail usable, without breaking the ability to override in ansible * ability to use variables from {{ }} syntax in mainline playbooks, new 'when' conditional, as detailed in documentation. Can disable old style replacements in ansible.cfg if so desired, but are still active by default. * can set ansible_ssh_private_key_file as an inventory variable (similar to ansible_ssh_host, etc) * 'when' statement can be affixed to task includes to auto-affix the conditional to each task therein * cosmetic: "*****" banners in ansible-playbook output are now constant width * --limit can now be given a filename (--limit @filename) to constrain a run to a host list on disk * failed playbook runs will create a retry file in /var/tmp/ansible usable with --limit * roles allow easy arrangement of reusable tasks/handlers/files/templates * pre_tasks and post_tasks allow for separating tasks into blocks where handlers will fire around them automatically * "meta: flush_handler" task capability added for when you really need to force handlers to run * new --start-at-task option to ansible playbook allows starting at a specific task name in a long playbook * added a log file for ansible/ansible-playbook, set 'log_path' in the configuration file or ANSIBLE_LOG_PATH in environment * debug mode always outputs debug in playbooks, without needing to specify -v * external inventory script added for Spacewalk / Red Hat Satellite servers * It is now possible to feed JSON structures to --extra-vars. Pass in a JSON dictionary/hash to feed in complex data. * group_vars/ and host_vars/ directories can now be kept alongside the playbook as well as inventory (or both!) * more filters: ability to say {{ foo|success }} and {{ foo|failed }} and when: foo|success and when: foo|failed * more filters: {{ path|basename }} and {{ path|dirname }} * lookup plugins now use the basedir of the file they have included from, avoiding needs of ../../../ in places and increasing the ease at which things can be reorganized. 
Modules added:

* cloud: rax: module for creating instances in the rackspace cloud (uses pyrax)
* packages: npm: node.js package management
* packages: pkgng: next-gen package manager for FreeBSD
* packages: redhat_subscription: manage Red Hat subscription usage
* packages: rhn_register: basic RHN registration
* packages: zypper (SuSE)
* database: postgresql_priv: manages postgresql privileges
* networking: bigip_pool: load balancing with F5s
* networking: ec2_elb: add and remove machines from ec2 elastic load balancers
* notification: hipchat: send notification events to hipchat
* notification: flowdock: send messages to flowdock during playbook runs
* notification: campfire: send messages to campfire during playbook runs
* notification: mqtt: send messages to the Mosquitto message bus
* notification: irc: send messages to IRC channels
* system: filesystem - a wrapper around mkfs
* notification: jabber: send jabber chat messages
* notification: osx_say: make OS X say things out loud
* openstack: keystone_user
* openstack: glance_image
* openstack: nova_compute
* openstack: nova_keypair
* openstack: quantum_floating_ip
* openstack: quantum_floating_ip_associate
* openstack: quantum_network
* openstack: quantum_router
* openstack: quantum_router_gateway
* openstack: quantum_router_interface
* openstack: quantum_subnet
* monitoring: newrelic_deployment: notifies newrelic of new deployments
* monitoring: airbrake_deployment - notify airbrake of new deployments
* monitoring: pingdom
* monitoring: pagerduty
* monitoring: monit
* utility: set_fact: sets a variable, which can be the result of a template evaluation

Modules removed:

* vagrant -- can't be compatible with both versions at once, just run things through the vagrant provisioner in vagrant core

Bugfixes and Misc Changes:

* service module happier if only enabled=yes|no specified and no state
* mysql_db: use --password= instead of -p in dump/import so it doesn't go interactive if no pass set
* when using -c ssh and the ansible user is the current user, don't pass a -o to allow SSH config to be used
* overwrite parameter added to the s3 module
* private_ip parameter added to the ec2 module
* $FILE and $PIPE now tolerate unicode
* various plugin loading operations have been made more efficient
* hostname now uses platform.node versus socket.gethostname to be more consistent with Unix 'hostname'
* fix for SELinux operations on Unicode path names
* inventory directory locations now ignore files with .ini extensions, making hybrid inventory easier
* copy module in check-mode now reports back correct changed status when used with force=no
* added availability zone support to the ec2 module
* fixes to the hash variable merging logic if so enabled in the main settings file (default is to replace, not merge hashes)
* group_vars and host_vars files can now end in a .yaml or .yml extension (previously required no extension, still favored)
* ec2vol module improvements
* if the user module is told to generate the ssh key, the key generated is now returned in the results
* misc fixes to the Riak module
* make template module slightly more efficient
* base64encode / decode filters are now available to templates
* libvirt module can now work with multiple different libvirt connection URIs
* fix for postgresql password escaping
* unicode fix for shlex.split in some cases
* apt module upgrade logic improved
* URI module now can follow redirects
* yum module can now install off http URLs
* sudo password now defaults to ssh password if you ask for both and just hit enter on the second prompt
* validate feature on copy and template module, for example, running visudo prior to copying the file over
* network facts upgraded to return advanced configs (bonding, etc)
* region support added to ec2 module
* riak module gets a wait for ring option
* improved check mode support in the file module
* exception handling added for the scenario when an attempt to log to the systemd journal fails
* fix for upstart handling when toggling the enablement and running bits at the same time
* when registering a task with a conditional attached, and the task is skipped by the conditional, the variable is still registered for the host, with the attribute skipped: True.
* delegate_to tasks can look up ansible_ssh_private_key_file variable from inventory correctly now
* s3 module takes a 'dest' parameter to change the destination for uploads
* apt module gets a cache_valid_time option to avoid redundant cache updates
* ec2 module better understands security groups
* fix for postgresql codec usage
* setup module now tolerant of OpenVZ interfaces
* check mode reporting improved for files and directories
* doc system now reports on module requirements
* group_by module can now also make use of globally scoped variables
* localhost and 127.0.0.1 are now fuzzy matched in inventory (are now more or less interchangeable)
* AIX improvements/fixes for users, groups, facts
* lineinfile now does atomic file replacements
* fix to not pass PasswordAuthentication=no in the config file unnecessarily for SSH connection type
* fix for authorized_key on Debian Squeeze
* fixes for apt_repository module reporting changed incorrectly on certain repository types
* allow the virtualenv argument to the pip module to be a pathname
* service pattern argument now correctly read for BSD services
* fetch location can now be controlled more directly via the 'flat' parameter.
* added basename and dirname as Jinja2 filters available to all templates
* pip works better when sudoing from unprivileged users
* fix for user creation with groups specification reporting 'changed' incorrectly in some cases
* fix for some unicode encoding errors in outputting some data in verbose mode
* improved FreeBSD, NetBSD and Solaris facts
* debug module always outputs data without having to specify -v
* fix for sysctl module creating new keys (must specify checks=none)
* NetBSD and OpenBSD support for the user and groups modules
* Add encrypted password support to password lookup

## 1.1 "Mean Street" -- 4/2/2013

Core Features

* added --check option for "dry run" mode
* added --diff option to show how templates or copied files change, or might change
* --list-tasks for the playbook will list the tasks without running them
* able to set the environment by setting "environment:" as a dictionary on any task (go proxy support!) (example below)
* added ansible_ssh_user and ansible_ssh_pass for per-host/group username and password
* jinja2 extensions can now be loaded from the config file
* support for complex arguments to modules (within reason)
* can specify ansible_connection=X to define the connection type in inventory variables
* a new chroot connection type
* module common code now has basic type checking (and casting) capability
* module common now supports a 'no_log' attribute to mark a field as not to be syslogged
* inventory can now point to a directory containing multiple scripts/hosts files, if using this, put group_vars/host_vars directories inside this directory
* added configurable crypt scheme for 'vars_prompt'
* password generating lookup plugin -- $PASSWORD(path/to/save/data/in)
* added --step option to ansible-playbook, works just like Linux interactive startup!

Modules Added:

* bzr (bazaar version control)
* cloudformation
* django-manage
* gem (ruby gems)
* homebrew
* lvg (logical volume groups)
* lvol (LVM logical volumes)
* macports
* mongodb_user
* netscaler
* okg
* openbsd_pkg
* rabbit_mq_plugin
* rabbit_mq_user
* rabbit_mq_vhost
* rabbit_mq_parameter
* rhn_channel
* s3 -- allows putting file contents in buckets for sharing over s3
* uri module -- can get/put/post/etc
* vagrant -- launching VMs with vagrant, this is different from existing vagrant plugin
* zfs

Bugfixes and Misc Changes:

* stderr shown when commands fail to parse
* uses yaml.safe_dump in filter plugins
* authentication Q&A no longer happens before --syntax-check, but after
* ability to get hostvars data for nodes not in the setup cache yet
* SSH timeout now correctly passed to native SSH connection plugin
* raise an error when multiple when_ statements are provided
* --list-hosts applies host limit selections better
* (internals) template engine specifications to use template_ds everywhere
* better error message when your host file can not be found
* end of line comments now work in the inventory file
* directory destinations now work better with remote md5 code
* lookup plugin macros like $FILE and $ENV now work without returning arrays in variable definitions/playbooks
* uses yaml.safe_load everywhere
* able to add EXAMPLES to documentation via EXAMPLES docstring, rather than just in main documentation YAML
* can set ANSIBLE_COW_SELECTION to pick other cowsay types (including random)
* to_nice_yaml and to_nice_json available as Jinja2 filters that indent and sort
* cowsay able to run out of macports (very important!)
* improved logging for fireball mode
* nicer error message when talking to an older system that needs a JSON module installed
* 'magic' variable 'inventory_dir' now gives path to inventory file
* 'magic' variable 'vars' works like 'hostvars' but gives global scope variables, useful for debugging in templates mostly
* conditionals can be used on plugins like add_host
* developers: all callbacks now have access to a ".runner" and ".playbook", ".play", and ".task" object (use getattr, they may not always be set!)

Facts:

* block device facts for the setup module
* facts for AIX
* fact detection for OS type on Amazon Linux
* device fact gathering stability improvements
* ansible_os_family fact added
* user_id (remote user name)
* a whole series of current time information under the 'datetime' hash
* more OS X facts
* support for detecting Alpine Linux
* added facts for OpenBSD

Module Changes/Fixes:

* ansible module common code (and ONLY that) which is mixed in with modules, is now BSD licensed. App remains GPLv3.
* service code works better on platforms that mix upstart, systemd, and system-v
* service enablement idempotence fixes for systemd and upstart
* service status 4 is also 'not running'
* supervisorctl restart fix
* increased error handling for ec2 module
* can recursively set permissions on directories
* ec2: change to the way AMI tags are handled
* cron module can now also manipulate cron.d files
* virtualenv module can now inherit system site packages (or not)
* lineinfile module now has an insertbefore option
* NetBSD service module support
* fixes to sysctl module where item has multiple values
* AIX support for the user and group modules
* able to specify a different hg repo to pull from than the original set
* add_host module can set ports and other inventory variables
* add_host module can add hosts to multiple groups (groups=a,b,c), groups now alias for groupname
* subnet ID can be set on EC2 module
* MySQL module password handling improvements
* added new virtualenv flags to pip and easy_install modules
* various improvements to lineinfile module, now accepts common arguments from file
* force= now replaces thirsty where used before, thirsty remains an alias
* setup module can take a 'filter=' parameter to just return a few facts (not used by playbooks)
* cron module works even if no crontab is present (for cron.d)
* security group ID settable on EC2 module
* misc fixes to sysctl module
* fix to apt module so packages not in cache are still removable
* charset fix to mail module
* postgresql db module now does not try to create the 'PUBLIC' user
* SVN module now works correctly with self-signed certs
* apt module now has an upgrade parameter (values=yes, no, or 'dist')
* nagios module gets new silence/unsilence commands
* ability to disable proxy usage in get_url (use_proxy=no)
* more OS X facts
* added a 'fail_on_missing' (default no) option to fetch
* added timeout to the uri module (default 30 seconds, adjustable)
* ec2 now has a 'wait' parameter to wait for the instance to be active, eliminates need for separate wait_for call.
* allow regex backreferences in lineinfile
* id attribute on ec2 module can be used to set idempotent-do-not-recreate launches
* icinga support for nagios module
* fix default logins when no my.cnf for MySQL module
* option to create users with non-unique UIDs (user module)
* macports module can enable/disable packages
* quotes in my.cnf are stripped by the MySQL modules
* Solaris Service management added
* service module will attempt to auto-add unmanaged chkconfig services when needed
* service module supports systemd service unit files
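A sketch of the 1.1 'environment:' dictionary noted in the Core Features list above; the proxy address and package name are invented for illustration:

```yaml
# environment: (new in 1.1) sets environment variables for a single task,
# which is handy for installing packages from behind a proxy
- name: install a package through a proxy
  apt: pkg=git state=present
  environment:
    http_proxy: http://proxy.example.com:8080
```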
Plugins:

* added 'with_random_choice' filter plugin
* fixed ~ expansion for fileglob
* with_nested allows for nested loops (see examples in examples/playbooks)

## 1.0 "Eruption" -- Feb 1 2013

New modules:

* new sysctl module
* new pacman module (Arch Linux)
* new apt_key module
* hg module now in core
* new ec2_facts module
* added pkgin module for Joyent SmartOS

New config settings:

* sudo_exe parameter can be set in config to use sudo alternatives
* sudo_flags parameter can alter the flags used with sudo

New playbook/language features:

* added when_failed and when_changed
* task includes can now be of infinite depth
* when_set and when_unset can take more than one var (when_set: $a and $b and $c)
* added the with_sequence lookup plugin (example below)
* can override "connection:" on an individual task
* parameterized playbook includes can now define complex variables (not just all on one line)
* making inventory variables available for use in vars_files paths
* messages when skipping plays are now more clear
* --extra-vars now has maximum precedence (as intended)

Module fixes and new flags:

* ability to use raw module without python on remote system
* fix for service status checking on Ubuntu
* service module now responds to additional exit code for SERVICE_UNAVAILABLE
* fix for raw module with '-c local'
* various fixes to git module
* ec2 module now reports the public DNS name
* can pass executable= to the raw module to specify alternative shells
* fix for postgres module when user contains a "-"
* added additional template variables -- $template_fullpath and $template_run_date
* raise errors on invalid arguments used with a task include statement
* shell/command module takes an executable= parameter to specify a different shell than /bin/sh
* added return code and error output to the raw module
* added support for @reboot to the cron module
* misc fixes to the pip module
* nagios module can schedule downtime for all services on the host
* various subversion module improvements
* various mail module improvements
* SELinux fix for files created by authorized_key module
* "template override" ??
* get_url module can now send user/password authorization
* ec2 module can now deploy multiple simultaneous instances
* fix for apt_key module stalling in some situations
* fix to enable Jinja2 {% include %} to work again in template
* ec2 module is now powered by Boto
* setup module can now detect if package manager is using pacman
* fix for yum module with enablerepo in use on EL 6
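A sketch of the with_sequence lookup plugin added in 1.0; 1.0-era playbooks still used the $item variable style shown here, and the path is invented for illustration:

```yaml
# with_sequence (new in 1.0) loops a task over a generated range of numbers
- name: create a set of numbered directories
  file: path=/srv/data/part-$item state=directory
  with_sequence: start=1 end=4
```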
Core fixes and new behaviors:

* various fixes for variable resolution in playbooks
* fixes for handling of "~" in some paths
* various fixes to DWIM'ing of relative paths
* /bin/ansible now takes a --list-hosts just like ansible-playbook did
* various patterns can now take a regex vs a glob if they start with "~" (need docs on which!) -- also /usr/bin/ansible
* allow intersecting host patterns by using "&" ("webservers:!debian:&datacenter1")
* handle tilde shell character for --private-key
* hash merging policy is now selectable in the config file, can choose to override or merge
* environment variables now available for setting all plugin paths (ANSIBLE_CALLBACK_PLUGINS, etc)
* added packaging file for macports (not upstreamed yet)
* hacking/test-module script now uses /usr/bin/env properly
* fixed error formatting for certain classes of playbook syntax errors
* fix for processing returns with large volumes of output

Inventory files/scripts:

* hostname patterns in the inventory file can now use alphabetic ranges
* whitespace is now allowed around group variables in the inventory file
* inventory scripts can now define groups of groups and group vars (need example for docs?)

## 0.9 "Dreams" -- Nov 30 2012

Highlighted core changes:

* various performance tweaks, ansible executes dramatically fewer SSH ops per unit of work
* close paramiko SFTP connections less often on copy/template operations (speed increase)
* change the way we use multiprocessing (speed/RAM usage improvements)
* able to set default for asking password & sudo password in config file
* ansible now installs nicely if running inside a virtualenv
* flag to allow SSH connection to move files by scp vs sftp (in config file)
* additional RPM subpackages for easily installing fireball mode deps (server and node)
* group_vars/host_vars now available to ansible, not just playbooks
* native ssh connection type (-c ssh) now supports passwords as well as keys
* ansible-doc program to show details

Other core changes:

* fix for template calls when last character is '$'
* if ansible_python_interpreter is set on a delegated host, it now works as intended
* --limit can now take "," as separator as well as ";" or ":"
* msg is now displayed with newlines when a task fails
* if any with_ plugin has no results in a list (empty list for with_items, etc), the task is now skipped
* various output formatting fixes/improvements
* fix for Xen dom0/domU detection in default facts
* 'ansible_domain' fact now available (ex value: example.com)
* configured remote temp file location is now always used even for root
* 'register'-ed variables are not recorded for skipped hosts (for example, using only_if/when)
* duplicate host records for the same host can no longer result when a host is listed in multiple groups
* ansible-pull now passes --limit to prevent running on multiple hosts when used with generic playbooks
* remote md5sum check fixes for Solaris 10
* ability to configure syslog facility used by remote module calls
* in templating, stray '$' characters are now handled more correctly

Playbook changes:

* relative paths now work for 'first_available_file'
* various templating engine fixes
* 'when' is an easier form of only_if
* --list-hosts on the playbook command now supports multiple playbooks on the same command line
* playbook includes can now be parameterized

Module additions:

* (addhost) new module for adding a temporary host record (used for creating new guests)
* (group_by) module allows partitioning hosts based on group data
* (ec2) new module for creating ec2 hosts
* (script) added 'script' module for pushing and running self-deleting remote scripts
* (svr4pkg) solaris svr4pkg module

Module changes:

* (authorized key) module uses temp file now to prevent failure on full disk
* (fetch) now uses the 'slurp' internal code to work as you would expect under sudo'ed accounts
* (fetch) internal usage of md5 sums fixed for BSD
* (get_url) thirsty is no longer required for directory destinations
* (git) various git module improvements/tweaks
* (group) now subclassed for various platforms, includes SunOS support
* (lineinfile) create= option on lineinfile can create the file when it does not exist
* (mysql_db) module takes new grant options
* (postgresql_db) module now takes role_attr_flags
* (service) further upgrades to service module service status reporting
* (service) tweaks to get service module to play nice with BSD style service systems (rc.conf)
* (service) possible to pass additional arguments to services
* (shell) and command module now take an 'executable=' flag for specifying an alternate shell to /bin/sh
* (user) ability to create SSH keys for users when using user module to create users
* (user) atomic replacement of files preserves permissions of original file
* (user) module can create SSH keys
* (user) module now does Solaris and BSD
* (yum) module takes enablerepo= and disablerepo=
* (yum) misc yum module fixing for various corner cases

Plugin changes:

* EC2 inventory script now produces nicer failure message if AWS is down (or similar)
* plugin loading code now more streamlined
* lookup plugins for DNS text records, environment variables, and redis
* added a template lookup plugin $TEMPLATE('filename.j2')
* various tweaks to the EC2 inventory plugin
* jinja2 filters are now pluggable so it's easy to write your own (to_json/etc, are now impl. as such)

## 0.8 "Cathedral" -- Oct 19, 2012

Highlighted Core Changes:

* fireball mode -- ansible can bootstrap an ephemeral 0mq (zeromq) daemon that runs as a given user and expires after X period of time. It is very fast.
* playbooks with errors now return 2 on failure. 1 indicates a more fatal syntax error. Similar for /usr/bin/ansible
* server side action code (template, etc) is now fully pluggable
* ability to write lookup plugins, like the code powering "with_fileglob" (see below)

Other Core Changes:

* ansible config file can also go in 'ansible.cfg' in cwd in addition to ~/.ansible.cfg and /etc/ansible/ansible.cfg
* fix for inventory hosts at API level when hosts spec is a list and not a colon delimited string
* ansible-pull example now sets up logrotate for the ansible-pull cron job log
* negative host matching (!hosts) fixed for external inventory script usage
* internals: os.executable check replaced with utils function so it plays nice on AIX
* Debian packaging now includes ansible-pull manpage
* magic variable 'ansible_ssh_host' can override the hostname (great for usage with tunnels)
* date command usage in build scripts fixed for OS X
* don't use SSH agent with paramiko if a password is specified
* make output cleaner on multi-line command/shell errors
* /usr/bin/ansible now prints things when tasks are skipped, like when creates= is used with -m command and /usr/bin/ansible
* when trying to async a module that is not a 'normal' asyncable module, ansible will now let you know
* ability to access inventory variables via 'hostvars' for hosts not yet included in any play, using on demand lookups
* merged ansible-plugins, ansible-resources, and ansible-docs into the main project
* you can set ANSIBLE_NOCOWS=1 if you want to disable cowsay if it is installed. Though no one should ever want to do this! Cows are great!
* you can set ANSIBLE_FORCE_COLOR=1 to force color mode even when running without a TTY
* fatal errors are now properly colored red.
* skipped messages are now cyan, to differentiate them from unchanged messages.
* extensive documentation upgrades
* delegate_action to localhost (aka local_action) will always use the local connection type

Highlighted playbook changes:

* is_set is available for use inside of an only_if expression: is_set('ansible_eth0'). We intend to further upgrade this with a 'when' keyword providing better options to 'only_if' in the next release. Also is_unset('ansible_eth0')
* playbooks can import playbooks in other directories and then be able to import tasks relative to them
* FILE($path) now allows access to the contents of a file at a path, very good for use with SSH keys
* similarly PIPE($command) will run a local command and return the results of executing this command
* if all hosts in a play fail, stop the playbook, rather than letting the console log spool on by
* only_if using register variables that are booleans now works in a boolean way like you'd expect
* task includes now work with with_items (such as: include: path/to/wordpress.yml user=$item) -- see the sketch after these module notes
* when using a $list variable with $var or ${var} syntax it will automatically join with commas
* setup is not run more than once when we know it has already been run in a play that included another play, etc
* can set/override sudo and sudo_user on individual tasks in a play, defaults to what is set in the play if not present
* ability to use with_fileglob to iterate over local file patterns
* templates now use Jinja2's 'trim_blocks=True' to avoid stray newlines, small changes to templates may be required in rare cases.

Other playbook changes:

* to_yaml and from_yaml are available as Jinja2 filters
* $group and $group_names are now accessible in with_items
* where 'stdout' is provided a new 'stdout_lines' variable (type == list) is now generated and usable with with_items
* when local_action is used the transport is automatically overridden to the local type
* output on failed playbook commands is now nicely split for stderr/stdout and syntax errors
* if local_action is not used and delegate_to was 127.0.0.1 or localhost, use local connection regardless
* when running a playbook, and the statement has changed, prints 'changed:' now versus 'ok:' so it is obvious without colored mode
* variables now usable within vars_prompt (just not host/group vars)
* setup facts are now retained across plays (dictionary just gets updated as needed)
* --sudo-user now works with --extra-vars
* fix for multi_line strings with only_if

New Modules:

* ini_file module for manipulating INI files
* new LSB facts (release, distro, etc)
* pause module -- (pause seconds=10) (pause minutes=1) (pause prompt=foo) -- it's an action plugin
* a module for adding entries to the main crontab (though you may still wish to just drop template files into cron.d)
* debug module can be used for outputting messages without using 'shell echo'
* a fail module is now available for causing errors, you might want to use it with only_if to fail in certain conditions
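A sketch of the 0.8 parameterized include driven by with_items, as described in the playbook notes above; the include file and user names are invented, and $item is the 0.8-era loop variable:

```yaml
# each item is passed into the included file as the 'user' parameter
- include: tasks/wordpress.yml user=$item
  with_items:
    - alice
    - bob
```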
Other module Changes, Upgrades, and Fixes:

* removes= exists on command just like creates=
* postgresql modules now take an optional port= parameter
* /proc/cmdline info is now available in Linux facts
* public host key detection for OS X
* lineinfile module now uses 'search' not exact 'match' in regexes, making it much more intuitive and not needing regex syntax most of the time
* added force=yes|no (default no) option for file module, which allows transition between files to directories and so on
* additional facts for SunOS virtualization
* copy module is now atomic when used across volumes
* get_url module now returns 'dest' with the location of the file saved
* fix for yum module when using local RPMs vs downloading
* cleaner error messages with copy if destination directory does not exist
* setup module now still works if PATH is not set
* service module status now correct for services with 'subsys locked' status
* misc fixes/upgrades to the wait_for module
* git module now expands any "~" in provided destination paths
* ignore stop error code failure for service module with state=restarted, always try to start
* inline documentation for modules allows documentation source to be built without pull requests to the ansible-docs project, among other things
* variable '$ansible_managed' is now great to include at the top of your templates and includes useful information and a warning that it will be replaced
* "~" now expanded in command module when using creates/removes
* mysql module can do dumps and imports
* selinux policy is only required if setting to not disabled
* various fixes for yum module when working with packages not in any present repo

## 0.7 "Panama" -- Sept 6 2012

Module changes:

* login_unix_socket option for mysql user and database modules (see PR #781 for doc notes)
* new modules -- pip, easy_install, apt_repository, supervisorctl
* error handling for setup module when SELinux is in a weird state
* misc yum module fixes
* better changed=True/False detection in user module on older Linux distros
* nicer errors from modules when arguments are not key=value
* backup option on copy (backup=yes), as well as template, assemble, and lineinfile
* file module will not recurse on directory properties
* yum module now workable without having repoquery installed, but doesn't support comparisons or list= if so
* setup module now detects interfaces with aliases
* better handling of VM guest type detection in setup module
* new module boilerplate code to check for mutually required arguments, arguments required together, exclusive args
* add pattern= as a parameter to the service module (for init scripts that don't do status, or do poor status)
* various fixes to mysql & postgresql modules
* added a thirsty= option (boolean, default no) to the get_url module to decide to download the file every time or not
* added a wait_for module to poll for ports being open
* added a nagios module for controlling outage windows and alert statuses
* added a seboolean module for getsebool/setsebool type operations
* added a selinux module for controlling overall SELinux policy
* added a subversion module
* added lineinfile for adding and removing lines from basic files
* added facts for ARM-based CPUs
* support for systemd in the service module
* git module force reset behavior is now controllable
* file module can now operate on special files (block devices, etc)

Core changes:

* ansible --version will now give branch/SHA information if running from git
* better sudo permissions when encountering different umasks
* when using paramiko and SFTP is not accessible, do not traceback, but return a nice human readable msg
* use -vvv for extreme debug levels. -v gives more playbook output as before
-v gives more playbook output as before * -vv shows module arguments to all module calls (and maybe some other things later) * do not pass "--" to sudo to work on older EL5 * make remote_md5 internal function work with non-bash shells * allow user to be passed in via --extra-vars (regression) * add --limit option, which can be used to further confine the pattern given in ansible-playbook * adds ranged patterns like dbservers[0-49] for usage with patterns or --limit * -u and user: defaults to current user, rather than root, override as before * /etc/ansible/ansible.cfg and ~/ansible.cfg now available to set default values and other things * (developers) ANSIBLE_KEEP_REMOTE_FILES=1 can be used in debugging (environment variable) * (developers) connection types are now plugins * (developers) callbacks can now be extended via plugins * added FreeBSD ports packaging scripts * check for terminal properties prior to engaging color modes * explicitly disable password auth with -c ssh, as it is not used anyway Playbooks: * YAML syntax errors are detected and show where the problem is * if you ctrl+c a playbook it will not traceback (usually) * vars_prompt now has encryption options (see examples/playbooks/prompts.yml) * allow variables in parameterized task include parameters (regression) * add ability to store the result of any command in a register (see examples/playbooks/register_logic.yml) * --list-hosts to show what hosts are included in each play of a playbook * fix a variable ordering issue that could affect vars_files with selective file source lists * adds 'delegate_to' for a task, which can be used to signal outage windows and load balancers on behalf of hosts * adds 'serial' to playbook, allowing you to specify how many hosts can be processing a playbook at one time (default 0=all) * adds 'local_action: ' as an alias to 'delegate_to: 127.0.0.1' ## 0.6 "Cabo" -- August 6, 2012 playbooks: * support for tagging tasks and includes and using --tags in the playbook CLI * playbooks can now include other playbooks (example/playbooks/nested_playbooks.yml) * vars_files now usable with with_items, provided file paths don't contain host specific facts * error reporting if with_items value is unbound * with_items no longer creates lots of tasks, creates one task that makes multiple calls * can use host_specific facts inside with_items (see above) * at the top level of a playbook, set 'gather_facts: no' to skip fact gathering * first_available_file and with_items used together will now raise an error * to catch typos, like 'var' for 'vars', playbooks and tasks now yell on invalid parameters * automatically load (directory_of_inventory_file)/group_vars/groupname and /host_vars/hostname in vars_files * playbook output is now colorized, set ANSIBLE_NOCOLOR=1 if you do not like this, does not colorize if not a TTY * hostvars now preserved between plays (regression in 0.5 from 0.4), useful for sharing vars in multinode configs * ignore_errors: yes on a task can be used to allow a task to fail and not stop the play * with_items with the apt/yum module will install/remove/update everything in a single command inventory: * groups variable available as a hash to return the hosts in each group name * in YAML inventory, hosts can list their groups in inverted order now also (see tests/yaml_hosts) * YAML inventory is deprecated and will be removed in 0.7 * ec2 inventory script * support ranges of hosts in the host file, like www[001-100].example.com (with or without leading zeros) modules: * fetch module now does not fail a
system when requesting file paths (ex: logs) that don't exist * apt module now takes an optional install-recommends=yes|no (default yes) * fixes to the return codes of the copy module * copy module takes a remote md5sum to avoid large file transfer * various user and group module fixes (error handling, etc) * apt module now takes an optional force parameter * slightly better psychic service status handling for the service module * fetch module fixes for SSH connection type * modules now consistently all take yes/no for boolean parameters (and DWIM on true/false/1/0/y/n/etc) * setup module no longer saves to disk, template module now only used in playbooks * setup module no longer needs to run twice per playbook * apt module now passes DEBIAN_FRONTEND=noninteractive * mount module (manages active mounts + fstab) * setup module fixes if no ipv6 support * internals: template in common module boilerplate, also causes less SSH operations when used * git module fixes * setup module overhaul, more modular * minor caching logic added to inventory to reduce hammering of inventory scripts. * MySQL and PostgreSQL modules for user and db management * vars_prompt now supports private password entry (see examples/playbooks/prompts.yml) * yum module modified to be more tolerant of plugins spewing random console messages (ex: RHN) internals: * when sudoing to root, still use /etc/ansible/setup as the metadata path, as if root * paramiko is now only imported if needed when running from source checkout * cowsay support on Ubuntu * various ssh connection fixes for old Ubuntu clients * ./hacking/test-module now supports options like ansible takes and has a debugger mode * sudoing to a user other than root now works more seamlessly (uses /tmp, avoids umask issues) ## 0.5 "Amsterdam" ------- July 04, 2012 * Service module gets more accurate service states when running with upstart * Jinja2 usage in playbooks (not templates), reinstated, supports %include directive * support for --connection ssh (supports Kerberos, bastion hosts, etc), requires ControlMaster * misc tracebacks replaced with error messages * various API/internals refactoring * vars can be built from other variables * support for exclusion of hosts/groups with "!groupname" * various changes to support md5 tool differences for FreeBSD nodes & OS X clients * "unparseable" command output shows in command output for easier debugging * mktemp is no longer required on remotes (not available on BSD) * support for older versions of python-apt in the apt module * a new "assemble" module, for constructing files from pieces of files (inspired by Puppet "fragments" idiom) * ability to override most default values with ANSIBLE_FOO environment variables * --module-path parameter can support multiple directories separated with the OS path separator * with_items can take a variable of type list * ansible_python_interpreter variable available for systems with more than one Python * BIOS and VMware "fact" upgrades * cowsay is used by ansible-playbook if installed to improve output legibility (try installing it) * authorized_key module * SELinux facts now sourced from the python selinux library * removed module debug option -D * added --verbose, which shows output from successful playbook operations * print the output of the raw command inside /usr/bin/ansible as with command/shell * basic setup module support for Solaris * ./library relative to the playbook is always in path so modules can be included in tarballs with playbooks ## 0.4 "Unchained" ------- May 23, 
2012 Internals/Core * internal inventory API now more object oriented, parsers decoupled * async handling improvements * misc fixes for running ansible on OS X (overlord only) * sudo improvements, now works much more smoothly * sudo to a particular user with -U/--sudo-user, or using 'sudo_user: foo' in a playbook * --private-key CLI option to work with pem files Inventory * can use -i host1,host2,host3:port to specify hosts not in inventory (replaces --override-hosts) * ansible INI style format can do groups of groups [groupname:children] and group vars [groupname:vars] * groups and users module takes an optional system=yes|no on creation (default no) * list of hosts in playbooks can be expressed as a YAML list in addition to ; delimited Playbooks * variables can be replaced like ${foo.nested_hash_key.nested_subkey[array_index]} * unicode now ok in templates (assumes utf8) * able to pass host specifier or group name in to "hosts:" with --extra-vars * ansible-pull script and example playbook (extreme scaling, remediation) * inventory_hostname variable available that contains the value of the host as ansible knows it * variables in the 'all' section can be used to define other variables based on those values * 'group_names' is now a variable made available to templates * first_available_file feature, see selective_file_sources.yml in examples/playbooks for info * --extra-vars="a=2 b=3" etc, now available to inject parameters into playbooks from CLI Incompatible Changes * jinja2 is only usable in templates, not playbooks, use $foo instead * --override-hosts removed, can use -i with comma notation (-i "ahost,bhost") * modules can no longer include stderr output (paramiko limitation from sudo) Module Changes * tweaks to SELinux implementation for file module * fixes for yum module corner cases on EL5 * file module now correctly returns the mode in octal * fix for symlink handling in the file module * service takes an enable=yes|no which works with chkconfig or updates-rc.d as appropriate * service module works better on Ubuntu * git module now does resets and such to work more smoothly on updates * modules all now log to syslog * enabled=yes|no on a service can be used to toggle chkconfig & updates-rc.d states * git module supports branch= * service fixes to better detect status using return codes of the service script * custom facts provided by the setup module mean no dependency on Ruby, facter, or ohai * service now has a state=reloaded * raw module for bootstrapping and talking to routers w/o Python, etc Misc Bugfixes * fixes for variable parsing in only_if lines * misc fixes to key=value parsing * variables with mixed case now legal * fix to internals of hacking/test-module development script ## 0.3 "Baluchitherium" -- April 23, 2012 * Packaging for Debian, Gentoo, and Arch * Improvements to the apt and yum modules * A virt module * SELinux support for the file module * Ability to use facts from other systems in templates (aka exported resources like support) * Built in Ansible facts so you don't need ohai, facter, or Ruby * tempdir selections that work with noexec mounted /tmp * templates happen locally, not remotely, so no dependency on python-jinja2 for remote computers * advanced inventory format in YAML allows more control over variables per host and per group * variables in playbooks can be structured/nested versus just a flat namespace * manpage upgrades (docs) * various bugfixes * can specify a default --user for playbooks rather than specifying it in the playbook file * able to 
specify ansible port in ansible host file (see docs) * refactored Inventory API to make it easier to write scripts using Ansible * looping capability for playbooks (with_items) * support for using sudo with a password * module arguments can be unicode * A local connection type, --connection=local, for use with cron or in kickstarts * better module debugging with -D * fetch module for pulling in files from remote hosts * command task supports creates=foo for idempotent semantics, won't run if file foo already exists ## 0.0.2 and 0.0.1 * Initial stages of project ansible-1.5.4/hacking/0000775000000000000000000000000012316627017013247 5ustar rootrootansible-1.5.4/hacking/authors.sh0000775000000000000000000000065212316627017015276 0ustar rootroot#!/bin/sh # script from http://stackoverflow.com/questions/12133583 set -e # Get a list of authors ordered by number of commits # and remove the commit count column AUTHORS=$(git --no-pager shortlog -nse | cut -f 2- | sort -f) if [ -z "$AUTHORS" ] ; then echo "Authors list was empty" exit 1 fi # Display the authors list and write it to the file echo "$AUTHORS" | tee "$(git rev-parse --show-toplevel)/AUTHORS.TXT" ansible-1.5.4/hacking/test-module0000775000000000000000000001404612316627017015444 0ustar rootroot#!/usr/bin/env python # (c) 2012, Michael DeHaan # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # # this script is for testing modules without running through the # entire guts of ansible, and is very helpful for when developing # modules # # example: # test-module -m ../library/command -a "/bin/sleep 3" # test-module -m ../library/service -a "name=httpd ensure=restarted" # test-module -m ../library/service -a "name=httpd ensure=restarted" --debugger /usr/bin/pdb import sys import base64 import os import subprocess import traceback import optparse import ansible.utils as utils import ansible.module_common as module_common import ansible.constants as C try: import json except ImportError: import simplejson as json def parse(): """parse command line :return : (options, args)""" parser = optparse.OptionParser() parser.usage = "%prog -[options] (-h for help)" parser.add_option('-m', '--module-path', dest='module_path', help="REQUIRED: full path of module source to execute") parser.add_option('-a', '--args', dest='module_args', default="", help="module argument string") parser.add_option('-D', '--debugger', dest='debugger', help="path to python debugger (e.g. /usr/bin/pdb)") parser.add_option('-I', '--interpreter', dest='interpreter', help="path to interpeter to use for this module (e.g. ansible_python_interpreter=/usr/bin/python)", metavar='INTERPRETER_TYPE=INTERPRETER_PATH') options, args = parser.parse_args() if not options.module_path: parser.print_help() sys.exit(1) else: return options, args def write_argsfile(argstring, json=False): """ Write args to a file for old-style module's use. 
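Returns the path to the arguments file that was written.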
""" argspath = os.path.expanduser("~/.ansible_test_module_arguments") argsfile = open(argspath, 'w') if json: args = utils.parse_kv(argstring) argstring = utils.jsonify(args) argsfile.write(argstring) argsfile.close() return argspath def boilerplate_module(modfile, args, interpreter): """ simulate what ansible does with new style modules """ #module_fh = open(modfile) #module_data = module_fh.read() #module_fh.close() replacer = module_common.ModuleReplacer() #included_boilerplate = module_data.find(module_common.REPLACER) != -1 or module_data.find("import ansible.module_utils") != -1 complex_args = {} if args.startswith("@"): # Argument is a YAML file (JSON is a subset of YAML) complex_args = utils.combine_vars(complex_args, utils.parse_yaml_from_file(args[1:])) args='' inject = {} if interpreter: if '=' not in interpreter: print 'interpeter must by in the form of ansible_python_interpreter=/usr/bin/python' sys.exit(1) interpreter_type, interpreter_path = interpreter.split('=') if not interpreter_type.startswith('ansible_'): interpreter_type = 'ansible_%s' % interpreter_type if not interpreter_type.endswith('_interpreter'): interpreter_type = '%s_interpreter' % interpreter_type inject[interpreter_type] = interpreter_path (module_data, module_style, shebang) = replacer.modify_module( modfile, complex_args, args, inject ) modfile2_path = os.path.expanduser("~/.ansible_module_generated") print "* including generated source, if any, saving to: %s" % modfile2_path print "* this may offset any line numbers in tracebacks/debuggers!" modfile2 = open(modfile2_path, 'w') modfile2.write(module_data) modfile2.close() modfile = modfile2_path return (modfile2_path, module_style) def runtest( modfile, argspath): """Test run a module, piping it's output for reporting.""" os.system("chmod +x %s" % modfile) invoke = "%s" % (modfile) if argspath is not None: invoke = "%s %s" % (modfile, argspath) cmd = subprocess.Popen(invoke, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, err) = cmd.communicate() try: print "***********************************" print "RAW OUTPUT" print out print err results = utils.parse_json(out) except: print "***********************************" print "INVALID OUTPUT FORMAT" print out traceback.print_exc() sys.exit(1) print "***********************************" print "PARSED OUTPUT" print utils.jsonify(results,format=True) def rundebug(debugger, modfile, argspath): """Run interactively with console debugger.""" if argspath is not None: subprocess.call("%s %s %s" % (debugger, modfile, argspath), shell=True) else: subprocess.call("%s %s" % (debugger, modfile), shell=True) def main(): options, args = parse() (modfile, module_style) = boilerplate_module(options.module_path, options.module_args, options.interpreter) argspath=None if module_style != 'new': if module_style == 'non_native_want_json': argspath = write_argsfile(options.module_args, json=True) elif module_style == 'old': argspath = write_argsfile(options.module_args, json=False) else: raise Exception("internal error, unexpected module style: %s" % module_style) if options.debugger: rundebug(options.debugger, modfile, argspath) else: runtest(modfile, argspath) if __name__ == "__main__": main() ansible-1.5.4/hacking/env-setup0000775000000000000000000000271612316627017015131 0ustar rootroot#!/bin/bash # usage: source ./hacking/env-setup [-q] # modifies environment for running Ansible from checkout # When run using source as directed, $0 gets set to bash, so we must use $BASH_SOURCE if [ -n "$BASH_SOURCE" ] ; then 
HACKING_DIR=`dirname $BASH_SOURCE` elif [ $(basename $0) = "env-setup" ]; then HACKING_DIR=`dirname $0` else HACKING_DIR="$PWD/hacking" fi # The below is an alternative to readlink -fn which doesn't exist on OS X # Source: http://stackoverflow.com/a/1678636 FULL_PATH=`python -c "import os; print(os.path.realpath('$HACKING_DIR'))"` ANSIBLE_HOME=`dirname "$FULL_PATH"` PREFIX_PYTHONPATH="$ANSIBLE_HOME/lib" PREFIX_PATH="$ANSIBLE_HOME/bin" PREFIX_MANPATH="$ANSIBLE_HOME/docs/man" [[ $PYTHONPATH != ${PREFIX_PYTHONPATH}* ]] && export PYTHONPATH=$PREFIX_PYTHONPATH:$PYTHONPATH [[ $PATH != ${PREFIX_PATH}* ]] && export PATH=$PREFIX_PATH:$PATH unset ANSIBLE_LIBRARY export ANSIBLE_LIBRARY="$ANSIBLE_HOME/library:`python $HACKING_DIR/get_library.py`" [[ $MANPATH != ${PREFIX_MANPATH}* ]] && export MANPATH=$PREFIX_MANPATH:$MANPATH # Print out values unless -q is set if [ $# -eq 0 -o "$1" != "-q" ] ; then echo "" echo "Setting up Ansible to run out of checkout..." echo "" echo "PATH=$PATH" echo "PYTHONPATH=$PYTHONPATH" echo "ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY" echo "MANPATH=$MANPATH" echo "" echo "Remember, you may wish to specify your host file with -i" echo "" echo "Done!" echo "" fi ansible-1.5.4/hacking/README.md0000664000000000000000000000252112316627017014526 0ustar rootroot'Hacking' directory tools ========================= Env-setup --------- The 'env-setup' script modifies your environment to allow you to run ansible from a git checkout using python 2.6+. (You may not use python 3 at this time). First, set up your environment to run from the checkout: $ source ./hacking/env-setup You will need some basic prerequisites installed. If you do not already have them and do not wish to install them from your operating system package manager, you can install them from pip $ easy_install pip # if pip is not already available $ pip install pyyaml jinja2 From there, follow ansible instructions on docs.ansible.com as normal. Test-module ----------- 'test-module' is a simple program that allows module developers (or testers) to run a module outside of the ansible program, locally, on the current machine. Example: $ ./hacking/test-module -m library/commands/shell -a "echo hi" This is a good way to insert a breakpoint into a module, for instance. Module-formatter ---------------- The module formatter is a script used to generate manpages and online module documentation. This is used by the system makefiles and rarely needs to be run directly. Authors ------- 'authors' is a simple script that generates a list of everyone who has contributed code to the ansible repository. ansible-1.5.4/hacking/module_formatter.py0000775000000000000000000002552612316627017017206 0ustar rootroot#!/usr/bin/env python # (c) 2012, Jan-Piet Mens # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . 
# import os import glob import sys import yaml import codecs import json import ast import re import optparse import time import datetime import subprocess import cgi from jinja2 import Environment, FileSystemLoader import ansible.utils import ansible.utils.module_docs as module_docs ##################################################################################### # constants and paths # if a module is added in a version of Ansible older than this, don't print the version added information # in the module documentation because everyone is assumed to be running something newer than this already. TO_OLD_TO_BE_NOTABLE = 1.0 # Get parent directory of the directory this script lives in MODULEDIR=os.path.abspath(os.path.join( os.path.dirname(os.path.realpath(__file__)), os.pardir, 'library' )) # The name of the DOCUMENTATION template EXAMPLE_YAML=os.path.abspath(os.path.join( os.path.dirname(os.path.realpath(__file__)), os.pardir, 'examples', 'DOCUMENTATION.yml' )) _ITALIC = re.compile(r"I\(([^)]+)\)") _BOLD = re.compile(r"B\(([^)]+)\)") _MODULE = re.compile(r"M\(([^)]+)\)") _URL = re.compile(r"U\(([^)]+)\)") _CONST = re.compile(r"C\(([^)]+)\)") ##################################################################################### def rst_ify(text): ''' convert symbols like I(this is in italics) to valid restructured text ''' t = _ITALIC.sub(r'*' + r"\1" + r"*", text) t = _BOLD.sub(r'**' + r"\1" + r"**", t) t = _MODULE.sub(r'``' + r"\1" + r"``", t) t = _URL.sub(r"\1", t) t = _CONST.sub(r'``' + r"\1" + r"``", t) return t ##################################################################################### def html_ify(text): ''' convert symbols like I(this is in italics) to valid HTML ''' t = cgi.escape(text) t = _ITALIC.sub("" + r"\1" + "", t) t = _BOLD.sub("" + r"\1" + "", t) t = _MODULE.sub("" + r"\1" + "", t) t = _URL.sub("" + r"\1" + "", t) t = _CONST.sub("" + r"\1" + "", t) return t ##################################################################################### def rst_fmt(text, fmt): ''' helper for Jinja2 to do format strings ''' return fmt % (text) ##################################################################################### def rst_xline(width, char="="): ''' return a restructured text line of a given length ''' return char * width ##################################################################################### def write_data(text, options, outputname, module): ''' dumps module output to a file or the screen, as requested ''' if options.output_dir is not None: f = open(os.path.join(options.output_dir, outputname % module), 'w') f.write(text.encode('utf-8')) f.close() else: print text ##################################################################################### def list_modules(module_dir): ''' returns a hash of categories, each category being a hash of module names to file paths ''' categories = dict(all=dict()) files = glob.glob("%s/*" % module_dir) for d in files: if os.path.isdir(d): files2 = glob.glob("%s/*" % d) for f in files2: tokens = f.split("/") module = tokens[-1] category = tokens[-2] if not category in categories: categories[category] = {} categories[category][module] = f categories['all'][module] = f return categories ##################################################################################### def generate_parser(): ''' generate an optparse parser ''' p = optparse.OptionParser( version='%prog 1.0', usage='usage: %prog [options] arg1 arg2', description='Generate module documentation from metadata', ) p.add_option("-A", 
"--ansible-version", action="store", dest="ansible_version", default="unknown", help="Ansible version number") p.add_option("-M", "--module-dir", action="store", dest="module_dir", default=MODULEDIR, help="Ansible library path") p.add_option("-T", "--template-dir", action="store", dest="template_dir", default="hacking/templates", help="directory containing Jinja2 templates") p.add_option("-t", "--type", action='store', dest='type', choices=['rst'], default='rst', help="Document type") p.add_option("-v", "--verbose", action='store_true', default=False, help="Verbose") p.add_option("-o", "--output-dir", action="store", dest="output_dir", default=None, help="Output directory for module files") p.add_option("-I", "--includes-file", action="store", dest="includes_file", default=None, help="Create a file containing list of processed modules") p.add_option('-V', action='version', help='Show version number and exit') return p ##################################################################################### def jinja2_environment(template_dir, typ): env = Environment(loader=FileSystemLoader(template_dir), variable_start_string="@{", variable_end_string="}@", trim_blocks=True, ) env.globals['xline'] = rst_xline if typ == 'rst': env.filters['convert_symbols_to_format'] = rst_ify env.filters['html_ify'] = html_ify env.filters['fmt'] = rst_fmt env.filters['xline'] = rst_xline template = env.get_template('rst.j2') outputname = "%s_module.rst" else: raise Exception("unknown module format type: %s" % typ) return env, template, outputname ##################################################################################### def process_module(module, options, env, template, outputname, module_map): print "rendering: %s" % module fname = module_map[module] # ignore files with extensions if os.path.basename(fname).find(".") != -1: return # use ansible core library to parse out doc metadata YAML and plaintext examples doc, examples = ansible.utils.module_docs.get_docstring(fname, verbose=options.verbose) # crash if module is missing documentation and not explicitly hidden from docs index if doc is None and module not in ansible.utils.module_docs.BLACKLIST_MODULES: sys.stderr.write("*** ERROR: CORE MODULE MISSING DOCUMENTATION: %s, %s ***\n" % (fname, module)) sys.exit(1) if doc is None: return "SKIPPED" all_keys = [] if not 'version_added' in doc: sys.stderr.write("*** ERROR: missing version_added in: %s ***\n" % module) sys.exit(1) added = 0 if doc['version_added'] == 'historical': del doc['version_added'] else: added = doc['version_added'] # don't show version added information if it's too old to be called out if added: added_tokens = str(added).split(".") added = added_tokens[0] + "." + added_tokens[1] added_float = float(added) if added and added_float < TO_OLD_TO_BE_NOTABLE: del doc['version_added'] for (k,v) in doc['options'].iteritems(): all_keys.append(k) all_keys = sorted(all_keys) doc['option_keys'] = all_keys doc['filename'] = fname doc['docuri'] = doc['module'].replace('_', '-') doc['now_date'] = datetime.date.today().strftime('%Y-%m-%d') doc['ansible_version'] = options.ansible_version doc['plainexamples'] = examples #plain text # here is where we build the table of contents... 
text = template.render(doc) write_data(text, options, outputname, module) ##################################################################################### def process_category(category, categories, options, env, template, outputname): module_map = categories[category] category_file_path = os.path.join(options.output_dir, "list_of_%s_modules.rst" % category) category_file = open(category_file_path, "w") print "*** recording category %s in %s ***" % (category, category_file_path) # TODO: start a new category file category = category.replace("_"," ") category = category.title() modules = module_map.keys() modules.sort() category_header = "%s Modules" % (category.title()) underscores = "`" * len(category_header) category_file.write("""\ %s %s .. toctree:: :maxdepth: 1 """ % (category_header, underscores)) for module in modules: result = process_module(module, options, env, template, outputname, module_map) if result != "SKIPPED": category_file.write(" %s_module\n" % module) category_file.close() # TODO: end a new category file ##################################################################################### def validate_options(options): ''' validate option parser options ''' if not options.module_dir: print >>sys.stderr, "--module-dir is required" sys.exit(1) if not os.path.exists(options.module_dir): print >>sys.stderr, "--module-dir does not exist: %s" % options.module_dir sys.exit(1) if not options.template_dir: print "--template-dir must be specified" sys.exit(1) ##################################################################################### def main(): p = generate_parser() (options, args) = p.parse_args() validate_options(options) env, template, outputname = jinja2_environment(options.template_dir, options.type) categories = list_modules(options.module_dir) last_category = None category_names = categories.keys() category_names.sort() category_list_path = os.path.join(options.output_dir, "modules_by_category.rst") category_list_file = open(category_list_path, "w") category_list_file.write("Module Index\n") category_list_file.write("============\n") category_list_file.write("\n\n") category_list_file.write(".. toctree::\n") category_list_file.write(" :maxdepth: 1\n\n") for category in category_names: category_list_file.write(" list_of_%s_modules\n" % category) process_category(category, categories, options, env, template, outputname) category_list_file.close() if __name__ == '__main__': main() ansible-1.5.4/hacking/env-setup.fish0000664000000000000000000000265512316627017016060 0ustar rootroot#!/usr/bin/env fish # usage: . 
./hacking/env-setup [-q] # modifies environment for running Ansible from checkout set HACKING_DIR (dirname (status -f)) set FULL_PATH (python -c "import os; print(os.path.realpath('$HACKING_DIR'))") set ANSIBLE_HOME (dirname $FULL_PATH) set PREFIX_PYTHONPATH $ANSIBLE_HOME/lib set PREFIX_PATH $ANSIBLE_HOME/bin set PREFIX_MANPATH $ANSIBLE_HOME/docs/man # Set PYTHONPATH if not set -q PYTHONPATH set -gx PYTHONPATH $PREFIX_PYTHONPATH else switch PYTHONPATH case "$PREFIX_PYTHONPATH*" case "*" echo "Appending PYTHONPATH" set -gx PYTHONPATH $PREFIX_PYTHONPATH:$PYTHONPATH end end # Set PATH if not contains $PREFIX_PATH $PATH set -gx PATH $PREFIX_PATH $PATH end # Set MANPATH if not contains $PREFIX_MANPATH $MANPATH if not set -q MANPATH set -gx MANPATH $PREFIX_MANPATH else set -gx MANPATH $PREFIX_MANPATH $MANPATH end end set -gx ANSIBLE_LIBRARY $ANSIBLE_HOME/library if set -q argv switch $argv case '-q' '--quiet' case '*' echo "" echo "Setting up Ansible to run out of checkout..." echo "" echo "PATH=$PATH" echo "PYTHONPATH=$PYTHONPATH" echo "ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY" echo "MANPATH=$MANPATH" echo "" echo "Remember, you may wish to specify your host file with -i" echo "" echo "Done!" echo "" end end ansible-1.5.4/hacking/templates/0000775000000000000000000000000012316627017015245 5ustar rootrootansible-1.5.4/hacking/templates/rst.j20000664000000000000000000000434312316627017016316 0ustar rootroot.. _@{ module }@: {% if short_description %} {% set title = module + ' - ' + short_description|convert_symbols_to_format %} {% else %} {% set title = module %} {% endif %} {% set title_len = title|length %} @{ title }@ @{ '+' * title_len }@ {% if author %} :Author: @{ author }@ {% endif %} .. contents:: :local: :depth: 1 {# ------------------------------------------ # # Please note: this looks like a core dump # but it isn't one. # --------------------------------------------#} Synopsis -------- {% if version_added is defined -%} .. versionadded:: @{ version_added }@ {% endif %} {% for desc in description -%} @{ desc | convert_symbols_to_format }@ {% endfor %} {% if options -%} Options ------- .. raw:: html {% for k in option_keys %} {% set v = options[k] %} {% if v.get('type', 'not_bool') == 'bool' %} {% else %} {% endif %} {% endfor %}
parameter required default choices comments
@{ k }@ {% if v.get('required', False) %}yes{% else %}no{% endif %} {% if v['default'] %}@{ v['default'] }@{% endif %}
  • yes
  • no
    {% for choice in v.get('choices',[]) -%}
  • @{ choice }@
  • {% endfor -%}
{% for desc in v.description -%}@{ desc | html_ify }@{% endfor -%}{% if v['version_added'] %} (added in Ansible @{v['version_added']}@){% endif %}
{% endif %} {% if requirements %} {% for req in requirements %} .. note:: Requires @{ req | convert_symbols_to_format }@ {% endfor %} {% endif %} {% if examples or plainexamples %} Examples -------- .. raw:: html {% for example in examples %} {% if example['description'] %}

@{ example['description'] | html_ify }@

{% endif %}

@{ example['code'] | escape | indent(4, True) }@
    

{% endfor %}
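{# 'plainexamples' holds the module's plain-text EXAMPLES string; it is rendered below as a literal block #}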
{% if plainexamples %} :: @{ plainexamples | indent(4, True) }@ {% endif %} {% endif %} {% if notes %} {% for note in notes %} .. note:: @{ note | convert_symbols_to_format }@ {% endfor %} {% endif %} ansible-1.5.4/hacking/get_library.py0000775000000000000000000000155012316627017016130 0ustar rootroot#!/usr/bin/env python # (c) 2014, Will Thames # # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see . # import ansible.constants as C import sys def main(): print C.DEFAULT_MODULE_PATH return 0 if __name__ == '__main__': sys.exit(main()) ansible-1.5.4/docsite/0000775000000000000000000000000012316627017013275 5ustar rootrootansible-1.5.4/docsite/rst/0000775000000000000000000000000012316627017014105 5ustar rootrootansible-1.5.4/docsite/rst/playbooks_error_handling.rst0000664000000000000000000000634612316627017021730 0ustar rootrootError Handling In Playbooks =========================== .. contents:: Topics Ansible normally has defaults that make sure to check the return codes of commands and modules and it fails fast -- forcing an error to be dealt with unless you decide otherwise. Sometimes a command that returns 0 isn't an error. Sometimes a command might not always need to report that it 'changed' the remote system. This section describes how to change the default behavior of Ansible for certain tasks so output and error handling behavior is as desired. .. _ignoring_failed_commands: Ignoring Failed Commands ```````````````````````` .. versionadded:: 0.6 Generally playbooks will stop executing any more steps on a host that has a failure. Sometimes, though, you want to continue on. To do so, write a task that looks like this:: - name: this will not be counted as a failure command: /bin/false ignore_errors: yes Note that the above system only governs the failure of the particular task, so if you have an undefined variable used, it will still raise an error that users will need to address. .. _controlling_what_defines_failure: Controlling What Defines Failure ```````````````````````````````` .. versionadded:: 1.4 Suppose the error code of a command is meaningless and to tell if there is a failure what really matters is the output of the command, for instance if the string "FAILED" is in the output. Ansible in 1.4 and later provides a way to specify this behavior as follows:: - name: this command prints FAILED when it fails command: /usr/bin/example-command -x -y -z register: command_result failed_when: "'FAILED' in command_result.stderr" In previous version of Ansible, this can be still be accomplished as follows:: - name: this command prints FAILED when it fails command: /usr/bin/example-command -x -y -z register: command_result ignore_errors: True - name: fail the play if the previous command did not succeed fail: msg="the command failed" when: "'FAILED' in command_result.stderr" .. _override_the_changed_result: Overriding The Changed Result ````````````````````````````` .. 
versionadded:: 1.3 When a shell/command or other module runs it will typically report "changed" status based on whether it thinks it affected machine state. Sometimes you will know, based on the return code or output, that it did not make any changes, and wish to override the "changed" result such that it does not appear in report output or does not cause handlers to fire:: tasks: - shell: /usr/bin/billybass --mode="take me to the river" register: bass_result changed_when: "bass_result.rc != 2" # this will never report 'changed' status - shell: wall 'beep' changed_when: False .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_best_practices` Best practices in playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_variables` All about variables `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_acceleration.rst0000664000000000000000000000731412316627017021360 0ustar rootrootAccelerated Mode ================ .. versionadded:: 1.3 You Might Not Need This! ```````````````````````` Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH pipelining" and should read the :ref:`pipelining` section of the documentation. For users on 1.5 and later, accelerate mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host and are still on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs. If you can use pipelining, Ansible will reduce the number of files transferred over the wire, making everything much more efficient, and performance will be on par with accelerate mode in nearly all cases, possibly excluding very large file transfers. Because fewer moving parts are involved, pipelining is better than accelerate mode for nearly all use cases. Accelerate mode remains around in support of EL6 control machines and other constrained environments. Accelerate Mode Details ``````````````````````` While OpenSSH using the ControlPersist feature is quite fast and scalable, there is a certain small amount of overhead involved in using SSH connections. While many people will not encounter a need, if you are running on a platform that doesn't have ControlPersist support (such as an EL6 control machine), you'll probably be even more interested in tuning options. Accelerate mode is there to help connections work faster, but still uses SSH for the initial secure key exchange. There is no additional public key infrastructure to manage, and this does not require things like NTP or even DNS. Accelerated mode can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than paramiko. Accelerated mode works by launching a temporary daemon over SSH. Once the daemon is running, Ansible will connect directly to it via a socket connection. Ansible secures this communication by using a temporary AES key that is exchanged during the SSH connection (this key is different for every host, and is also regenerated periodically). By default, Ansible will use port 5099 for the accelerated connection, though this is configurable. Once running, the daemon will accept connections for 30 minutes, after which time it will terminate itself and need to be restarted over SSH.
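Since the daemon listens on its own port, a quick way to confirm from the control machine that a node's daemon is reachable is an ad-hoc probe with the wait_for module. This is only an illustrative sketch: the hostname is a made-up example, and 5099 is assumed to be the port in use (the default)::

    # probe the accelerate port on a managed node (hypothetical host name)
    ansible localhost -c local -m wait_for -a "host=node1.example.com port=5099 timeout=10"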
Accelerated mode offers several improvements over the (deprecated) original fireball mode from which it was based: * No bootstrapping is required, only a single line needs to be added to each play you wish to run in accelerated mode. * Support for sudo commands (see below for more details and caveats) is available. * There are fewer requirements. ZeroMQ is no longer required, nor are there any special packages beyond python-keyczar * python 2.5 or higher is required. In order to use accelerated mode, simply add `accelerate: true` to your play:: --- - hosts: all accelerate: true tasks: - name: some task command: echo {{ item }} with_items: - foo - bar - baz If you wish to change the port Ansible will use for the accelerated connection, just add the `accelerated_port` option:: --- - hosts: all accelerate: true # default port is 5099 accelerate_port: 10000 The `accelerate_port` option can also be specified in the environment variable ACCELERATE_PORT, or in your `ansible.cfg` configuration:: [accelerate] accelerate_port = 5099 As noted above, accelerated mode also supports running tasks via sudo, however there are two important caveats: * You must remove requiretty from your sudoers options. * Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo'ed commands. ansible-1.5.4/docsite/rst/playbooks_async.rst0000664000000000000000000000377512316627017020053 0ustar rootrootAsynchronous Actions and Polling ================================ By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout. The easiest way to do this is to kick them off all at once and then poll until they are done. You will also want to use asynchronous mode on very long running operations that might be subject to timeout. To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status. The default poll value is 10 seconds if you do not specify a value for `poll`:: --- - hosts: all remote_user: root tasks: - name: simulate long running op (15 sec), wait for up to 45, poll every 5 command: /bin/sleep 15 async: 45 poll: 5 .. note:: There is no default for the async time limit. If you leave off the 'async' keyword, the task runs synchronously, which is Ansible's default. Alternatively, if you do not need to wait on the task to complete, you may "fire and forget" by specifying a poll value of 0:: --- - hosts: all remote_user: root tasks: - name: simulate long running op, allow to run for 45, fire and forget command: /bin/sleep 15 async: 45 poll: 0 .. note:: You shouldn't "fire and forget" with operations that require exclusive locks, such as yum transactions, if you expect to run other commands later in the playbook against those same resources. .. note:: Using a higher value for ``--forks`` will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling. .. seealso:: :doc:`playbooks` An introduction to playbooks `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_variables.rst0000664000000000000000000011134712316627017020701 0ustar rootrootVariables ========= .. contents:: Topics While automation exists to make it easier to make things repeatable, all of your systems are likely not exactly alike. All of your systems are likely not the same. 
On some systems you may want to set some behavior or configuration that is slightly different from others. Also, some of the observed behavior or state of remote systems might need to influence how you configure those systems. (For example, you might need to find out the IP address of a system and even use it as a configuration value on another system.) You might have some templates for configuration files that are mostly the same, but slightly different based on those variables. Variables in Ansible are how we deal with differences between systems. Once you understand variables, you'll also want to dig into :doc:`playbooks_conditionals` and :doc:`playbooks_loops`. Useful things like the "group_by" module and the "when" conditional can also be used with variables, and to help manage differences between systems. It's highly recommended that you consult the ansible-examples github repository to see a lot of examples of variables put to use. .. _valid_variable_names: What Makes A Valid Variable Name ```````````````````````````````` Before we start using variables, it's important to know what makes a valid variable name. Variable names should be letters, numbers, and underscores. Variables should always start with a letter. "foo_port" is a great variable. "foo5" is fine too. "foo-port", "foo port", "foo.port" and "12" are not valid variable names. Easy enough, let's move on. .. _variables_in_inventory: Variables Defined in Inventory `````````````````````````````` We've actually already covered a lot about variables in another section, so far this shouldn't be terribly new; consider it a bit of a refresher. Often you'll want to set variables based on what groups a machine is in. For instance, maybe machines in Boston want to use 'boston.ntp.example.com' as an NTP server. See the :doc:`intro_inventory` document for the multiple ways to define variables in inventory. .. _playbook_variables: Variables Defined in a Playbook ``````````````````````````````` In a playbook, it's possible to define variables directly inline like so:: - hosts: webservers vars: http_port: 80 This can be nice as it's right there when you are reading the playbook. .. _included_variables: Variables defined from included files and roles ``````````````````````````````````````````````` It turns out we've already talked about variables in another place too. As described in :doc:`playbooks_roles`, variables can also be included in the playbook via include files, which may or may not be part of an "Ansible Role". Usage of roles is preferred as it provides a nice organizational system. .. _about_jinja2: Using Variables: About Jinja2 ````````````````````````````` It's nice enough to know how to define variables, but how do you use them? Ansible allows you to reference variables in your playbooks using the Jinja2 templating system. While you can do a lot of complex things in Jinja, only the basics are things you really need to learn at first. For instance, in a simple template, you can do something like:: My amp goes to {{ max_amp_value }} And that will provide the most basic form of variable substitution. This is also valid directly in playbooks, and you'll occasionally want to do things like:: template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg In the above example, we used a variable to help decide where to place a file. Inside a template you automatically have access to all of the variables that are in scope for a host. Actually it's more than that -- you can also read variables about other hosts.
We'll show how to do that in a bit. .. note:: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock possibilities. .. _jinja2_filters: Jinja2 Filters `````````````` .. note:: These are infrequently utilized features. Use them if they fit a use case you have, but this is optional knowledge. Filters in Jinja2 are a way of transforming template expressions from one kind of data into another. Jinja2 ships with many of these. See `builtin filters`_ in the official Jinja2 template documentation. In addition to those, Ansible supplies many more. .. _filters_for_formatting_data: Filters For Formatting Data --------------------------- The following filters will take a data structure in a template and render it in a slightly different format. These are occasionally useful for debugging:: {{ some_variable | to_nice_json }} {{ some_variable | to_nice_yaml }} .. _filters_used_with_conditionals: Filters Often Used With Conditionals ------------------------------------ The following tasks are illustrative of how filters can be used with conditionals:: tasks: - shell: /usr/bin/foo register: result ignore_errors: True - debug: msg="it failed" when: result|failed # in most cases you'll want a handler, but if you want to do something right now, this is nice - debug: msg="it changed" when: result|changed - debug: msg="it succeeded" when: result|success - debug: msg="it was skipped" when: result|skipped .. _forcing_variables_to_be_defined: Forcing Variables To Be Defined ------------------------------- The default behavior from ansible and ansible.cfg is to fail if variables are undefined, but you can turn this off. This allows an explicit check with this feature off:: {{ variable | mandatory }} The variable value will be used as is, but the template evaluation will raise an error if it is undefined. .. _defaulting_undefined_variables: Defaulting Undefined Variables ------------------------------ Jinja2 provides a useful 'default' filter that is often a better approach than failing if a variable is not defined:: {{ some_variable | default(5) }} In the above example, if the variable 'some_variable' is not defined, the value used will be 5, rather than an error being raised. .. _set_theory_filters: Set Theory Filters ------------------ All these filters return a unique set from sets or lists. .. versionadded:: 1.4 To get a unique set from a list:: {{ list1 | unique }} To get a union of two lists:: {{ list1 | union(list2) }} To get the intersection of 2 lists (unique list of all items in both):: {{ list1 | intersect(list2) }} To get the difference of 2 lists (items in 1 that don't exist in 2):: {{ list1 | difference(list2) }} To get the symmetric difference of 2 lists (items exclusive to each list):: {{ list1 | symmetric_difference(list2) }} ..
_other_useful_filters: Other Useful Filters -------------------- To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':: {{ path | basename }} To get the directory from a path:: {{ path | dirname }} To expand a path containing a tilde (`~`) character (new in version 1.5):: {{ path | expanduser }} To work with Base64 encoded strings:: {{ encoded | b64decode }} {{ decoded | b64encode }} To take an md5sum of a filename:: {{ filename | md5 }} To cast values as certain types, such as when you input a string as "True" from a vars_prompt and the system doesn't know it is a boolean value:: - debug: msg=test when: some_string_value | bool A few useful filters are typically added with each new Ansible release. The development documentation shows how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones to be added to core so everyone can make use of them. .. _yaml_gotchas: Hey Wait, A YAML Gotcha ``````````````````````` YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren't trying to start a YAML dictionary. This is covered on the :doc:`YAMLSyntax` page. This won't work:: - hosts: app_servers vars: app_path: {{ base_path }}/22 Do it like this and you'll be fine:: - hosts: app_servers vars: app_path: "{{ base_path }}/22" .. _vars_and_facts: Information discovered from systems: Facts `````````````````````````````````````````` There are other places where variables can come from, but these are a type of variable that are discovered, not set by the user. Facts are information derived from speaking with your remote systems. An example of this might be the ip address of the remote host, or what the operating system is. To see what information is available, try the following:: ansible hostname -m setup This will return a ginormous amount of variable data, which may look like this, as taken from Ansible 1.4 on a Ubuntu 12.04 system:: "ansible_all_ipv4_addresses": [ "REDACTED IP ADDRESS" ], "ansible_all_ipv6_addresses": [ "REDACTED IPV6 ADDRESS" ], "ansible_architecture": "x86_64", "ansible_bios_date": "09/20/2012", "ansible_bios_version": "6.00", "ansible_cmdline": { "BOOT_IMAGE": "/boot/vmlinuz-3.5.0-23-generic", "quiet": true, "ro": true, "root": "UUID=4195bff4-e157-4e41-8701-e93f0aec9e22", "splash": true }, "ansible_date_time": { "date": "2013-10-02", "day": "02", "epoch": "1380756810", "hour": "19", "iso8601": "2013-10-02T23:33:30Z", "iso8601_micro": "2013-10-02T23:33:30.036070Z", "minute": "33", "month": "10", "second": "30", "time": "19:33:30", "tz": "EDT", "year": "2013" }, "ansible_default_ipv4": { "address": "REDACTED", "alias": "eth0", "gateway": "REDACTED", "interface": "eth0", "macaddress": "REDACTED", "mtu": 1500, "netmask": "255.255.255.0", "network": "REDACTED", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_devices": { "fd0": { "holders": [], "host": "", "model": null, "partitions": {}, "removable": "1", "rotational": "1", "scheduler_mode": "deadline", "sectors": "0", "sectorsize": "512", "size": "0.00 Bytes", "support_discard": "0", "vendor": null }, "sda": { "holders": [], "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)", "model": "VMware Virtual S", "partitions": { "sda1": { "sectors": "39843840", "sectorsize": 512, "size": "19.00 GB", "start": "2048" }, "sda2": { "sectors": "2", "sectorsize": 512, "size": "1.00 KB", "start": "39847934" }, "sda5": { "sectors": "2093056", 
"sectorsize": 512, "size": "1022.00 MB", "start": "39847936" } }, "removable": "0", "rotational": "1", "scheduler_mode": "deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "VMware," }, "sr0": { "holders": [], "host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)", "model": "VMware IDE CDR10", "partitions": {}, "removable": "1", "rotational": "1", "scheduler_mode": "deadline", "sectors": "2097151", "sectorsize": "512", "size": "1024.00 MB", "support_discard": "0", "vendor": "NECVMWar" } }, "ansible_distribution": "Ubuntu", "ansible_distribution_release": "precise", "ansible_distribution_version": "12.04", "ansible_domain": "", "ansible_env": { "COLORTERM": "gnome-terminal", "DISPLAY": ":0", "HOME": "/home/mdehaan", "LANG": "C", "LESSCLOSE": "/usr/bin/lesspipe %s %s", "LESSOPEN": "| /usr/bin/lesspipe %s", "LOGNAME": "root", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:", "MAIL": "/var/mail/root", "OLDPWD": "/root/ansible/docsite", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "PWD": "/root/ansible", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/bash", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "mdehaan", "TERM": "xterm", "USER": "root", "USERNAME": "root", "XAUTHORITY": "/home/mdehaan/.Xauthority", "_": "/usr/local/bin/ansible" }, "ansible_eth0": { "active": true, "device": "eth0", "ipv4": { "address": "REDACTED", "netmask": "255.255.255.0", "network": "REDACTED" }, "ipv6": [ { "address": "REDACTED", "prefix": "64", "scope": "link" } ], "macaddress": "REDACTED", "module": "e1000", "mtu": 1500, "type": "ether" }, "ansible_form_factor": "Other", "ansible_fqdn": "ubuntu2", "ansible_hostname": "ubuntu2", "ansible_interfaces": [ "lo", "eth0" ], "ansible_kernel": "3.5.0-23-generic", "ansible_lo": { "active": true, "device": "lo", "ipv4": { "address": "127.0.0.1", "netmask": "255.0.0.0", "network": "127.0.0.0" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 16436, "type": "loopback" }, "ansible_lsb": { "codename": "precise", "description": "Ubuntu 12.04.2 LTS", "id": "Ubuntu", "major_release": "12", "release": "12.04" }, "ansible_machine": "x86_64", "ansible_memfree_mb": 74, "ansible_memtotal_mb": 
991, "ansible_mounts": [ { "device": "/dev/sda1", "fstype": "ext4", "mount": "/", "options": "rw,errors=remount-ro", "size_available": 15032406016, "size_total": 20079898624 } ], "ansible_os_family": "Debian", "ansible_pkg_mgr": "apt", "ansible_processor": [ "Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz" ], "ansible_processor_cores": 1, "ansible_processor_count": 1, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 1, "ansible_product_name": "VMware Virtual Platform", "ansible_product_serial": "REDACTED", "ansible_product_uuid": "REDACTED", "ansible_product_version": "None", "ansible_python_version": "2.7.3", "ansible_selinux": false, "ansible_ssh_host_key_dsa_public": "REDACTED KEY VALUE" "ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE" "ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE" "ansible_swapfree_mb": 665, "ansible_swaptotal_mb": 1021, "ansible_system": "Linux", "ansible_system_vendor": "VMware, Inc.", "ansible_user_id": "root", "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "VMware" In the above the model of the first harddrive may be referenced in a template or playbook as:: {{ ansible_devices.sda.model }} Similarly, the hostname as the system reports it is:: {{ ansible_hostname }} Facts are frequently used in conditionals (see :doc:`playbooks_conditionals`) and also in templates. Facts can be also used to create dynamic groups of hosts that match particular criteria, see the :doc:`modules` documentation on 'group_by' for details, as well as in generalized conditional statements as discussed in the :doc:`playbooks_conditionals` chapter. .. _disabling_facts: Turning Off Facts ````````````````` If you know you don't need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms. In any play, just do this:: - hosts: whatever gather_facts: no .. _local_facts: Local Facts (Facts.d) ````````````````````` .. versionadded:: 1.3 As discussed in the playbooks chapter, Ansible facts are a way of getting data about remote systems for use in playbook variables. Usually these are discovered automatically by the 'setup' module in Ansible. Users can also write custom facts modules, as described in the API guide. However, what if you want to have a simple way to provide system or user provided data for use in Ansible variables, without writing a fact module? For instance, what if you want users to be able to control some aspect about how their systems are managed? "Facts.d" is one such mechanism. .. note:: Perhaps "local facts" is a bit of a misnomer, it means "locally supplied user values" as opposed to "centrally supplied user values", or what facts are -- "locally dynamically determined values". If a remotely managed system has an "/etc/ansible/facts.d" directory, any files in this directory ending in ".fact", can be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible. For instance assume a /etc/ansible/facts.d/preferences.fact:: [general] asdf=1 bar=2 This will produce a hash variable fact named "general" with 'asdf' and 'bar' as members. 
To validate this, run the following:: ansible <hostname> -m setup -a "filter=ansible_local" And you will see the following fact added:: "ansible_local": { "preferences": { "general": { "asdf" : "1", "bar" : "2" } } } And this data can be accessed in a template/playbook as:: {{ ansible_local.preferences.general.asdf }} The local namespace prevents any user-supplied fact from overriding system facts or variables defined elsewhere in the playbook. .. _registered_variables: Registered Variables ```````````````````` Another major use of variables is running a command and saving its result into a variable. Results will vary from module to module. Use of -v when executing playbooks will show possible values for the results. The result of a task executed in Ansible can be saved in a variable and used later. See some examples of this in the :doc:`playbooks_conditionals` chapter. While it's mentioned elsewhere in that document too, here's a quick syntax example:: - hosts: web_servers tasks: - shell: /usr/bin/foo register: foo_result ignore_errors: True - shell: /usr/bin/bar when: foo_result.rc == 5 Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of "facts" in Ansible. Effectively, registered variables are just like facts. .. _accessing_complex_variable_data: Accessing Complex Variable Data ``````````````````````````````` We already talked about facts a little higher up in the documentation. Some provided facts, like networking information, are made available as nested data structures. To access them a simple {{ foo }} is not sufficient, but it is still easy to do. Here's how we get an IP address:: {{ ansible_eth0["ipv4"]["address"] }} Or alternatively:: {{ ansible_eth0.ipv4.address }} Similarly, this is how we access the first element of an array:: {{ foo[0] }} .. _magic_variables_and_hostvars: Magic Variables, and How To Access Information About Other Hosts ```````````````````````````````````````````````````````````````` Even if you didn't define them yourself, Ansible provides a few variables for you automatically. The most important of these are 'hostvars', 'group_names', and 'groups'. Users should not use these names themselves as they are reserved. 'environment' is also reserved. Hostvars lets you ask about the variables of another host, including facts that have been gathered about that host. If, at this point, you haven't talked to that host yet in any play in the playbook or set of playbooks, you can get at the variables, but you will not be able to see the facts. If your database server wants to use the value of a 'fact' from another node, or an inventory variable assigned to another node, it's easy to do so within a template or even an action line:: {{ hostvars['test.example.com']['ansible_distribution'] }} Additionally, *group_names* is a list (array) of all the groups the current host is in. This can be used in templates using Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host:: {% if 'webserver' in group_names %} # some part of a configuration file that only applies to webservers {% endif %} *groups* is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group. For example:: {% for host in groups['app_servers'] %} # something that applies to all app servers.
{% endfor %} A frequently used idiom is walking a group to find all IP addresses in that group:: {% for host in groups['app_servers'] %} {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }} {% endfor %} An example of this could include pointing a frontend proxy server to all of the app servers, setting up the correct firewall rules between servers, etc. Additionally, *inventory_hostname* is the name of the host as configured in Ansible's inventory host file. This can be useful when you don't want to rely on the discovered hostname `ansible_hostname`, or for other mysterious reasons. If you have a long FQDN, *inventory_hostname_short* also contains the part up to the first period, without the rest of the domain. *play_hosts* is available as a list of hostnames that are in scope for the current play. This may be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer. Don't worry about any of this unless you think you need it. You'll know when you do. Also available: *inventory_dir* is the pathname of the directory holding Ansible's inventory host file, and *inventory_file* is the full path, including the filename, of Ansible's inventory host file. .. _variable_file_seperation_details: Variable File Separation ```````````````````````` It's a great idea to keep your playbooks under source control, but you may wish to make the playbook source public while keeping certain important variables private. Similarly, sometimes you may just want to keep certain information in different files, away from the main playbook. You can do this by using an external variables file, or files, just like this:: --- - hosts: all remote_user: root vars: favcolor: blue vars_files: - /vars/external_vars.yml tasks: - name: this is just a placeholder command: /bin/echo foo This removes the risk of sharing sensitive data with others when sharing your playbook source with them. The contents of each variables file are a simple YAML dictionary, like this:: --- # in the above example, this would be vars/external_vars.yml somevar: somevalue password: magic .. note:: It's also possible to keep per-host and per-group variables in very similar files; this is covered in :doc:`intro_patterns`. .. _passing_variables_on_the_command_line: Passing Variables On The Command Line ````````````````````````````````````` In addition to `vars_prompt` and `vars_files`, it is possible to send variables over the Ansible command line. This is particularly useful when writing a generic release playbook where you may want to pass in the version of the application to deploy:: ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo" This is useful for, among other things, setting the hosts group or the user for the playbook. Example:: --- - hosts: '{{ hosts }}' remote_user: '{{ user }}' tasks: - ... ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck" As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so:: --extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}' The key=value form is obviously simpler, but it's there if you need it! As of Ansible 1.3, extra vars can be loaded from a JSON file with the "@" syntax:: --extra-vars "@some_file.json" Also as of Ansible 1.3, extra vars can be formatted as YAML, either on the command line or in a file as above. .. _conditional_imports: Conditional Imports ``````````````````` .. note:: This behavior is infrequently used in Ansible.
You may wish to skip this section. The 'group_by' module as described in the module documentation is a better way to achieve this behavior in most cases. Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook that works on multiple platforms and OS versions is a good example. As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled with a minimum of syntax in an Ansible Playbook:: --- - hosts: all remote_user: root vars_files: - "vars/common.yml" - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ] tasks: - name: make sure apache is running service: name={{ apache }} state=running .. note:: The variable 'ansible_os_family' is being interpolated into the list of filenames being defined for vars_files. As a reminder, the various YAML files contain just keys and values:: --- # for vars/CentOS.yml apache: httpd somethingelse: 42 How does this work? If the operating system was 'CentOS', the first file Ansible would try to import would be 'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file did not exist. If no files in the list were found, an error would be raised. On Debian, it would first look towards 'vars/Debian.yml' instead of 'vars/CentOS.yml', before falling back on 'vars/os_defaults.yml'. Pretty simple. To use this conditional import feature, you'll need facter or ohai installed prior to running the playbook, but you can of course push this out with Ansible if you like:: # for facter ansible -m yum -a "pkg=facter state=installed" ansible -m yum -a "pkg=ruby-json state=installed" # for ohai ansible -m yum -a "pkg=ohai state=installed" Ansible's approach to configuration -- separating variables from tasks -- keeps your playbooks from turning into arbitrary code with ugly nested ifs, conditionals, and so on, and results in more streamlined & auditable configuration rules -- especially because there are a minimum of decision points to track. .. _variable_precedence: Variable Precedence: Where Should I Put A Variable? ``````````````````````````````````````````````````` A lot of folks may ask about how variables override one another. Ultimately it's Ansible's philosophy that it's better you know where to put a variable, and then you have to think about it a lot less. Avoid defining the variable "x" in 47 places and then asking the question "which x gets used". Why? Because that's not Ansible's Zen philosophy of doing things. There is only one Empire State Building. One Mona Lisa, etc. Figure out where to define a variable, and don't make it complicated. However, let's go ahead and get precedence out of the way! It exists. It's a real thing, and you might have a use for it. If multiple variables of the same name are defined in different places, they win in a certain order, which is:: * -e variables always win * then comes "most everything else" * then come variables defined in inventory * then come facts discovered about a system * then "role defaults", which are the most "defaulty" and lose in priority to everything. .. note:: In versions prior to 1.5.4, facts discovered about a system were in the "most everything else" category above. That seems a little theoretical. Let's show some examples and where you would choose to put what based on the kind of control you might want over values. First off, group variables are super powerful. Site-wide defaults should be defined as a 'group_vars/all' setting.
Group variables are generally placed alongside your inventory file. They can also be returned by a dynamic inventory script (see :doc:`intro_dynamic_inventory`) or defined in things like :doc:`tower` from the UI or API:: --- # file: /etc/ansible/group_vars/all # this is the site wide default ntp_server: default-time.example.com Regional information might be defined in a 'group_vars/region' variable. If this group is a child of the 'all' group (which it is, because all groups are), it will override the group that is higher up and more general:: --- # file: /etc/ansible/group_vars/boston ntp_server: boston-time.example.com If for some crazy reason we wanted to tell just a specific host to use a specific NTP server, it would then override the group variable!:: --- # file: /etc/ansible/host_vars/xyz.boston.example.com ntp_server: override.example.com So that covers inventory and what you would normally set there. It's a great place for things that deal with geography or behavior. Since groups are frequently the entity that maps roles onto hosts, it is sometimes a shortcut to set variables on the group instead of defining them on a role. You could go either way. Remember: Child groups override parent groups, and hosts always override their groups. Next up: learning about role variable precedence. We'll pretty much assume you are using roles at this point. You should be using roles for sure. Roles are great. You are using roles, aren't you? Hint, hint. OK, so if you are writing a redistributable role with reasonable defaults, put those in the 'roles/x/defaults/main.yml' file. This means the role will bring along a default value but ANYTHING in Ansible will override it. It's just a default. That's why it says "defaults" :) See :doc:`playbooks_roles` for more info about this:: --- # file: roles/x/defaults/main.yml # if not overridden in inventory or as a parameter, this is the value that will be used http_port: 80 If you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be overridden by inventory, you should put it in roles/x/vars/main.yml like so, and inventory values cannot override it. -e, however, still will:: --- # file: roles/x/vars/main.yml # this will absolutely be used in this role http_port: 80 So the above is a great way to plug in constants about the role that are always true. If you are not sharing your role with others, app-specific behaviors like ports are fine to put in here. But if you are sharing roles with others, putting variables in here might be bad. Nobody will be able to override them with inventory, but they still can by passing a parameter to the role. Parameterized roles are useful. If you are using a role and want to override a default, pass it as a parameter to the role like so:: roles: - { role: apache, http_port: 8080 } This makes it clear to the playbook reader that you've made a conscious choice to override some default in the role, or pass in some configuration that the role can't assume by itself. It also allows you to pass something site-specific that isn't really part of the role you are sharing with others. This can often be used for things that might apply to some hosts multiple times, like so:: roles: - { role: app_user, name: Ian } - { role: app_user, name: Terry } - { role: app_user, name: Graham } - { role: app_user, name: John } That's a bit arbitrary, but you can see how the same role was invoked multiple times. In that example it's quite likely there was no default for 'name' supplied at all.
Ansible can yell at you when variables aren't defined -- it's the default behavior in fact. So that's a bit about roles. There are a few bonus things that go on with roles. Generally speaking, variables set in one role are available to others. This means if you have a "roles/common/vars/main.yml" you can set variables in there and make use of them in other roles and elsewhere in your playbook:: roles: - { role: common_settings } - { role: something, foo: 12 } - { role: something_else } .. note:: There are some protections in place to avoid the need to namespace variables. In the above, variables defined in common_settings are most definitely available to the 'something' and 'something_else' tasks, but 'something' is guaranteed to have foo set to 12, even if somewhere deep in common_settings it set foo to 20. So, that's precedence, explained in a more direct way. Don't worry about precedence; just think about whether your role is defining a variable that is a default, or a "live" variable you definitely want to use. Inventory lies right in the middle of the precedence order, and if you want to forcibly override something, use -e. If you found that a little hard to understand, take a look at the `ansible-examples`_ repo on our github for a bit more about how all of these things can work together. .. _ansible-examples: https://github.com/ansible/ansible-examples .. _builtin filters: http://jinja.pocoo.org/docs/templates/#builtin-filters .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_loops` Looping in playbooks :doc:`playbooks_roles` Playbook organization by roles :doc:`playbooks_best_practices` Best practices in playbooks `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/YAMLSyntax.rst0000664000000000000000000000644512316627017016615 0ustar rootrootYAML Syntax =========== This page provides a basic overview of correct YAML syntax, which is how Ansible playbooks (our configuration management language) are expressed. We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON. Further, there are libraries available in most programming languages for working with YAML. You may also wish to read :doc:`playbooks` at the same time to see how this is used in practice. YAML Basics ----------- For Ansible, nearly every YAML file starts with a list. Each item in the list is a set of key/value pairs, commonly called a "hash" or a "dictionary". So, we need to know how to write lists and dictionaries in YAML. There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) should begin with ``---``. This is part of the YAML format and indicates the start of a document. All members of a list are lines beginning at the same indentation level starting with a ``-`` (dash) character:: --- # A list of tasty fruits - Apple - Orange - Strawberry - Mango A dictionary is represented in a simple ``key:`` and ``value`` form:: --- # An employee record name: Example Developer job: Developer skill: Elite Dictionaries can also be represented in an abbreviated form if you really want to:: --- # An employee record {name: Example Developer, job: Developer, skill: Elite} ..
_truthiness: Ansible doesn't really use these too much, but you can also specify a boolean value (true/false) in several forms:: --- create_key: yes needs_agent: no knows_oop: True likes_emacs: TRUE uses_cvs: false Let's combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but will give you a feel for the format:: --- # An employee record name: Example Developer job: Developer skill: Elite employed: True foods: - Apple - Orange - Strawberry - Mango languages: ruby: Elite python: Elite dotnet: Lame That's all you really need to know about YAML to start writing `Ansible` playbooks. Gotchas ------- While YAML is generally friendly, the following is going to result in a YAML syntax error:: foo: somebody said I should put a colon here: so I did You will want to quote any hash values using colons, like so:: foo: "somebody said I should put a colon here: so I did" And then the colon will be preserved. Further, Ansible uses "{{ var }}" for variables. If a value after a colon starts with a "{", YAML will think it is a dictionary, so you must quote it, like so:: foo: "{{ variable }}" .. seealso:: :doc:`playbooks` Learn what playbooks can do and how to write/run them. `YAMLLint `_ YAML Lint (online) helps you debug YAML syntax if you are having problems `Github examples directory `_ Complete playbook files from the github project source `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks.rst0000664000000000000000000000262212316627017016644 0ustar rootrootPlaybooks ````````` Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. If Ansible modules are the tools in your workshop, playbooks are your design plans. At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. While there's a lot of information here, there's no need to learn everything at once. You can start small and pick up more features over time as you need them. Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to organize playbooks and the files they include, and we'll offer up some suggestions on that and making the most out of Ansible. It is recommended to look at `Example Playbooks `_ while reading along with the playbook documentation. These illustrate best practices as well as how to put many of the various concepts together. .. toctree:: :maxdepth: 1 playbooks_intro playbooks_roles playbooks_variables playbooks_conditionals playbooks_loops playbooks_best_practices ansible-1.5.4/docsite/rst/playbooks_prompts.rst0000664000000000000000000000556612316627017020432 0ustar rootrootPrompts ======= When running a playbook, you may wish to prompt the user for certain input, and can do so with the 'vars_prompt' section. A common use for this might be asking for sensitive data that you do not want to record. This has uses beyond security; for instance, you may use the same playbook for all software releases and prompt for a particular release version in a push-script.
Here is a most basic example:: --- - hosts: all remote_user: root vars: from: "camelot" vars_prompt: name: "what is your name?" quest: "what is your quest?" favcolor: "what is your favorite color?" If you have a variable that changes infrequently, it might make sense to provide a default value that can be overridden. This can be accomplished using the default argument:: vars_prompt: - name: "release_version" prompt: "Product release version" default: "1.0" An alternative form of vars_prompt allows for hiding input from the user, and may later support some other options, but otherwise works equivalently:: vars_prompt: - name: "some_password" prompt: "Enter password" private: yes - name: "release_version" prompt: "Product release version" private: no If `Passlib `_ is installed, vars_prompt can also crypt the entered value so you can use it, for instance, with the user module to define a password:: vars_prompt: - name: "my_password2" prompt: "Enter password2" private: yes encrypt: "md5_crypt" confirm: yes salt_size: 7 You can use any crypt scheme supported by 'Passlib': - *des_crypt* - DES Crypt - *bsdi_crypt* - BSDi Crypt - *bigcrypt* - BigCrypt - *crypt16* - Crypt16 - *md5_crypt* - MD5 Crypt - *bcrypt* - BCrypt - *sha1_crypt* - SHA-1 Crypt - *sun_md5_crypt* - Sun MD5 Crypt - *sha256_crypt* - SHA-256 Crypt - *sha512_crypt* - SHA-512 Crypt - *apr_md5_crypt* - Apache’s MD5-Crypt variant - *phpass* - PHPass’ Portable Hash - *pbkdf2_digest* - Generic PBKDF2 Hashes - *cta_pbkdf2_sha1* - Cryptacular’s PBKDF2 hash - *dlitz_pbkdf2_sha1* - Dwayne Litzenberger’s PBKDF2 hash - *scram* - SCRAM Hash - *bsd_nthash* - FreeBSD’s MCF-compatible nthash encoding However, the only parameters accepted are 'salt' or 'salt_size'. You can use your own salt using 'salt', or have one generated automatically using 'salt_size'. If nothing is specified, a salt of size 8 will be generated. .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_variables` All about variables `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_conditionals.rst0000664000000000000000000002350612316627017021416 0ustar rootrootConditionals ============ .. contents:: Topics Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or previous task result. In some cases, the values of variables may depend on other variables. Further, additional groups can be created to manage hosts based on whether the hosts match other criteria. There are many options to control execution flow in Ansible. Let's dig into what they are. .. contents:: :depth: 2 The When Statement `````````````````` Sometimes you will want to skip a particular step on a particular host. This could be something as simple as not installing a certain package if the operating system is a particular version, or it could be something like performing some cleanup steps if a filesystem is getting full. This is easy to do in Ansible, with the `when` clause, which contains a Jinja2 expression (see :doc:`playbooks_variables`). It's actually pretty simple:: tasks: - name: "shutdown Debian flavored systems" command: /sbin/shutdown -t now when: ansible_os_family == "Debian" A number of Jinja2 "filters" can also be used in when statements, some of which are unique and provided by Ansible. 
Suppose we want to ignore the error of one statement and then decide to do something conditionally based on success or failure:: tasks: - command: /bin/false register: result ignore_errors: True - command: /bin/something when: result|failed - command: /bin/something_else when: result|success - command: /bin/still/something_else when: result|skipped Note that this was a little bit of foreshadowing of the 'register' statement. We'll get to it a bit later in this chapter. As a reminder, to see what facts are available on a particular system, you can do:: ansible hostname.example.com -m setup Tip: Sometimes you'll get back a variable that's a string and you'll want to do a mathematical comparison on it. You can do this like so:: tasks: - shell: echo "only on Red Hat 6, derivatives, and later" when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6 .. note:: the above example requires the lsb_release package on the target host in order to return the ansible_lsb.major_release fact. Variables defined in the playbooks or inventory can also be used. An example may be the execution of a task based on a variable's boolean value:: vars: epic: true Then a conditional execution might look like:: tasks: - shell: echo "This certainly is epic!" when: epic or:: tasks: - shell: echo "This certainly isn't epic!" when: not epic If a required variable has not been set, you can skip or fail using Jinja2's `defined` test. For example:: tasks: - shell: echo "I've got '{{ foo }}' and am not afraid to use it!" when: foo is defined - fail: msg="Bailing out: this play requires 'bar'" when: bar is not defined This is especially useful in combination with the conditional import of vars files (see below). When combining `when` with `with_items` (see :doc:`playbooks_loops`), be aware that the `when` statement is processed separately for each item. This is by design:: tasks: - command: echo {{ item }} with_items: [ 0, 2, 4, 6, 8, 10 ] when: item > 5 Loading in Custom Facts ``````````````````````` It's also easy to provide your own facts if you want, which is covered in :doc:`developing_modules`. To run them, just make a call to your own custom fact-gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:: tasks: - name: gather site specific fact data action: site_facts - command: /usr/bin/thingy when: my_custom_fact_just_retrieved_from_the_remote_system == '1234' Applying 'when' to roles and includes ````````````````````````````````````` Note that if you have several tasks that all share the same conditional statement, you can affix the conditional to a task include statement as below. Note this does not work with playbook includes, just task includes. All the tasks get evaluated, but the conditional is applied to each and every task:: - include: tasks/sometasks.yml when: "'reticulating splines' in output" Or with a role:: - hosts: webservers roles: - { role: debian_stock_config, when: ansible_os_family == 'Debian' } You will note a lot of 'skipped' output by default in Ansible when using this approach on systems that don't match the criteria. Read up on the 'group_by' module in the :doc:`modules` docs for a more streamlined way to accomplish the same thing. Conditional Imports ``````````````````` .. note:: This is an advanced topic that is infrequently used. You can probably skip this section. Sometimes you will want to do certain things differently in a playbook based on certain criteria.
Having one playbook that works on multiple platforms and OS versions is a good example. As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled with a minimum of syntax in an Ansible Playbook:: --- - hosts: all remote_user: root vars_files: - "vars/common.yml" - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ] tasks: - name: make sure apache is running service: name={{ apache }} state=running .. note:: The variable 'ansible_os_family' is being interpolated into the list of filenames being defined for vars_files. As a reminder, the various YAML files contain just keys and values:: --- # for vars/CentOS.yml apache: httpd somethingelse: 42 How does this work? If the operating system was 'CentOS', the first file Ansible would try to import would be 'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file did not exist. If no files in the list were found, an error would be raised. On Debian, it would first look towards 'vars/Debian.yml' instead of 'vars/CentOS.yml', before falling back on 'vars/os_defaults.yml'. Pretty simple. To use this conditional import feature, you'll need facter or ohai installed prior to running the playbook, but you can of course push this out with Ansible if you like:: # for facter ansible -m yum -a "pkg=facter state=installed" ansible -m yum -a "pkg=ruby-json state=installed" # for ohai ansible -m yum -a "pkg=ohai state=installed" Ansible's approach to configuration -- separating variables from tasks -- keeps your playbooks from turning into arbitrary code with ugly nested ifs, conditionals, and so on, and results in more streamlined & auditable configuration rules -- especially because there are a minimum of decision points to track. Selecting Files And Templates Based On Variables ```````````````````````````````````````````````` .. note:: This is an advanced topic that is infrequently used. You can probably skip this section. Sometimes a configuration file you want to copy, or a template you will use, may depend on a variable. The following construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template. The following example shows how to template out a configuration file that was very different between, say, CentOS and Debian:: - name: template a file template: src={{ item }} dest=/etc/myapp/foo.conf with_first_found: - files: - "{{ ansible_distribution }}.conf" - default.conf paths: - search_location_one/somedir/ - /opt/other_location/somedir/ Register Variables `````````````````` Often in a playbook it may be useful to store the result of a given command in a variable and access it later. Use of the command module in this way can often eliminate the need to write site-specific facts; for instance, you could test for the existence of a particular program. The 'register' keyword decides what variable to save a result in. The resulting variables can be used in templates, action lines, or *when* statements. It looks like this (in an obviously trivial example):: - name: test play hosts: all tasks: - shell: cat /etc/motd register: motd_contents - shell: echo "motd contains the word hi" when: motd_contents.stdout.find('hi') != -1 As shown previously, the registered variable's string contents are accessible with the 'stdout' value. The registered result can be used in the "with_items" of a task if it is converted into a list (or already is a list) as shown below.
"stdout_lines" is already available on the object as well though you could also call "home_dirs.stdout.split()" if you wanted, and could split by other fields:: - name: registered variable usage as a with_items list hosts: all tasks: - name: retrieve the list of home directories command: ls /home register: home_dirs - name: add home dirs to the backup spooler file: path=/mnt/bkspool/{{ item }} src=/home/{{ item }} state=link with_items: home_dirs.stdout_lines # same as with_items: home_dirs.stdout.split() .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_roles` Playbook organization by roles :doc:`playbooks_best_practices` Best practices in playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_variables` All about variables `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/guide_rax.rst0000664000000000000000000004743312316627017016621 0ustar rootrootRackspace Cloud Guide ===================== .. _introduction: Introduction ```````````` .. note:: This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud in `ansible-examples `_. Ansible contains a number of core modules for interacting with Rackspace Cloud. The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in Rackspace Cloud context. Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are tested against pyrax 1.5 or higher. You'll need this Python module installed on the execution host. pyrax is not currently available in many operating system package repositories, so you will likely need to install it via pip: .. code-block:: bash $ pip install pyrax The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense to add localhost to the inventory file. (Ansible may not require this manual step in the future): .. code-block:: ini [localhost] localhost ansible_connection=local In playbook steps we'll typically be using the following pattern: .. code-block:: yaml - hosts: localhost connection: local gather_facts: False tasks: .. _credentials_file: Credentials File ```````````````` The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like: .. code-block:: ini [rackspace_cloud] username = myraxusername api_key = d41d8cd98f00b204e9800998ecf8427e Setting the environment parameter RAX_CREDS_FILE to the path of this file will help Ansible find how to load this information. More information about this credentials file can be found at https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating .. _virtual_environment: Running from a Python Virtual Environment (Optional) ++++++++++++++++++++++++++++++++++++++++++++++++++++ Special considerations need to be taken if pyrax is not installed globally but instead using a python virtualenv (it's fine if you install it globally). Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done so via the interpret line in the modules, however when instructed using ansible_python_interpreter, ansible will use this specified path instead for finding python. 
If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows: .. code-block:: ini [localhost] localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python .. _provisioning: Provisioning ```````````` Now for the fun parts. The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server against the Rackspace Cloud API. .. note:: Authentication with the Rackspace-related modules is handled by either specifying your username and API key as environment variables or passing them as module arguments. Here is a basic example of provisioning an instance in ad-hoc mode: .. code-block:: bash $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes" -c local Here's what it would look like in a playbook, assuming the parameters were defined in variables: .. code-block:: yaml tasks: - name: Provision a set of instances local_action: module: rax name: "{{ rax_name }}" flavor: "{{ rax_flavor }}" image: "{{ rax_image }}" count: "{{ rax_count }}" group: "{{ group }}" wait: yes register: rax By registering the return value of the step, it is then possible to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts immediately in a subsequent task:: - name: Add the instances we created (by public IP) to the group 'raxhosts' local_action: module: add_host hostname: "{{ item.name }}" ansible_ssh_host: "{{ item.rax_accessipv4 }}" ansible_ssh_pass: "{{ item.rax_adminpass }}" groupname: raxhosts with_items: rax.success when: rax.action == 'create' With the host group now created, a second play in your provision playbook could now configure them, for example:: - name: Configuration play hosts: raxhosts user: root roles: - ntp - webserver The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us to the next section. .. _host_inventory: Host Inventory `````````````` Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the rax inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by their metadata. Utilizing metadata is highly recommended in rax and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended. In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not. .. _raxpy: rax.py ++++++ To use the rackspace dynamic inventory script, copy ``rax.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable. ..
note:: Users of :doc:`tower` will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps:: $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup ``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma-separated list of regions. When using ``rax.py``, you will not have a 'localhost' defined in the inventory. As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it. Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory. Let's test our inventory script to see if it can talk to Rackspace Cloud. .. code-block:: bash $ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the following, which will be utilized for inventory and variables. .. code-block:: json { "ORD": [ "test" ], "_meta": { "hostvars": { "test": { "ansible_ssh_host": "1.1.1.1", "rax_accessipv4": "1.1.1.1", "rax_accessipv6": "2607:f0d0:1002:51::4", "rax_addresses": { "private": [ { "addr": "2.2.2.2", "version": 4 } ], "public": [ { "addr": "1.1.1.1", "version": 4 }, { "addr": "2607:f0d0:1002:51::4", "version": 6 } ] }, "rax_config_drive": "", "rax_created": "2013-11-14T20:48:22Z", "rax_flavor": { "id": "performance1-1", "links": [ { "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1", "rel": "bookmark" } ] }, "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0", "rax_human_id": "test", "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a", "rax_image": { "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653", "links": [ { "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653", "rel": "bookmark" } ] }, "rax_key_name": null, "rax_links": [ { "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a", "rel": "self" }, { "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a", "rel": "bookmark" } ], "rax_metadata": { "foo": "bar" }, "rax_name": "test", "rax_name_attr": "name", "rax_networks": { "private": [ "2.2.2.2" ], "public": [ "1.1.1.1", "2607:f0d0:1002:51::4" ] }, "rax_os-dcf_diskconfig": "AUTO", "rax_os-ext-sts_power_state": 1, "rax_os-ext-sts_task_state": null, "rax_os-ext-sts_vm_state": "active", "rax_progress": 100, "rax_status": "ACTIVE", "rax_tenant_id": "111111", "rax_updated": "2013-11-14T20:49:27Z", "rax_user_id": "22222" } } } } .. _standard_inventory: Standard Inventory ++++++++++++++++++ When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API. This can be achieved with the ``rax_facts`` module and an inventory file similar to the following: .. code-block:: ini [test_servers] hostname1 rax_region=ORD hostname2 rax_region=ORD ..
code-block:: yaml - name: Gather info about servers hosts: test_servers gather_facts: False tasks: - name: Get facts about servers local_action: module: rax_facts credentials: ~/.raxpub name: "{{ inventory_hostname }}" region: "{{ rax_region }}" - name: Map some facts set_fact: ansible_ssh_host: "{{ rax_accessipv4 }}" While you don't need to know how it works, it may be interesting to know what kind of variables are returned. The ``rax_facts`` module provides facts as follows, which match the ``rax.py`` inventory script: .. code-block:: json { "ansible_facts": { "rax_accessipv4": "1.1.1.1", "rax_accessipv6": "2607:f0d0:1002:51::4", "rax_addresses": { "private": [ { "addr": "2.2.2.2", "version": 4 } ], "public": [ { "addr": "1.1.1.1", "version": 4 }, { "addr": "2607:f0d0:1002:51::4", "version": 6 } ] }, "rax_config_drive": "", "rax_created": "2013-11-14T20:48:22Z", "rax_flavor": { "id": "performance1-1", "links": [ { "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1", "rel": "bookmark" } ] }, "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0", "rax_human_id": "test", "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a", "rax_image": { "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653", "links": [ { "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653", "rel": "bookmark" } ] }, "rax_key_name": null, "rax_links": [ { "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a", "rel": "self" }, { "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a", "rel": "bookmark" } ], "rax_metadata": { "foo": "bar" }, "rax_name": "test", "rax_name_attr": "name", "rax_networks": { "private": [ "2.2.2.2" ], "public": [ "1.1.1.1", "2607:f0d0:1002:51::4" ] }, "rax_os-dcf_diskconfig": "AUTO", "rax_os-ext-sts_power_state": 1, "rax_os-ext-sts_task_state": null, "rax_os-ext-sts_vm_state": "active", "rax_progress": 100, "rax_status": "ACTIVE", "rax_tenant_id": "111111", "rax_updated": "2013-11-14T20:49:27Z", "rax_user_id": "22222" }, "changed": false } Use Cases ````````` This section covers some additional usage examples built around a specific use case. .. _example_1: Example 1 +++++++++ Create an isolated cloud network and build a server .. code-block:: yaml - name: Build Servers on an Isolated Network hosts: localhost connection: local gather_facts: False tasks: - name: Network create request local_action: module: rax_network credentials: ~/.raxpub label: my-net cidr: 192.168.3.0/24 region: IAD state: present - name: Server create request local_action: module: rax credentials: ~/.raxpub name: web%04d.example.org flavor: 2 image: ubuntu-1204-lts-precise-pangolin disk_config: manual networks: - public - my-net region: IAD state: present count: 5 exact_count: yes group: web wait: yes wait_timeout: 360 register: rax .. _example_2: Example 2 +++++++++ Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html ..
code-block:: yaml --- - name: Build environment hosts: localhost connection: local gather_facts: False tasks: - name: Load Balancer create request local_action: module: rax_clb credentials: ~/.raxpub name: my-lb port: 80 protocol: HTTP algorithm: ROUND_ROBIN type: PUBLIC timeout: 30 region: IAD wait: yes state: present meta: app: my-cool-app register: clb - name: Network create request local_action: module: rax_network credentials: ~/.raxpub label: my-net cidr: 192.168.3.0/24 state: present region: IAD register: network - name: Server create request local_action: module: rax credentials: ~/.raxpub name: web%04d.example.org flavor: performance1-1 image: ubuntu-1204-lts-precise-pangolin disk_config: manual networks: - public - private - my-net region: IAD state: present count: 5 exact_count: yes group: web wait: yes register: rax - name: Add servers to web host group local_action: module: add_host hostname: "{{ item.name }}" ansible_ssh_host: "{{ item.rax_accessipv4 }}" ansible_ssh_pass: "{{ item.rax_adminpass }}" ansible_ssh_user: root groupname: web with_items: rax.success when: rax.action == 'create' - name: Add servers to Load balancer local_action: module: rax_clb_nodes credentials: ~/.raxpub load_balancer_id: "{{ clb.balancer.id }}" address: "{{ item.rax_networks.private|first }}" port: 80 condition: enabled type: primary wait: yes region: IAD with_items: rax.success when: rax.action == 'create' - name: Configure servers hosts: web handlers: - name: restart nginx service: name=nginx state=restarted tasks: - name: Install nginx apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400 notify: - restart nginx - name: Ensure nginx starts on boot service: name=nginx state=started enabled=yes - name: Create custom index.html copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html owner=root group=root mode=0644 .. _advanced_usage: Advanced Usage `````````````` .. _awx_autoscale: Autoscaling with Tower ++++++++++++++++++++++ :doc:`tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower documentation for more details. A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts. .. _pending_information: Pending Information ``````````````````` More to come! ansible-1.5.4/docsite/rst/developing_inventory.rst0000664000000000000000000000734112316627017021115 0ustar rootrootDeveloping Dynamic Inventory Sources ==================================== .. contents:: Topics :local: As described in :doc:`intro_dynamic_inventory`, ansible can pull inventory information from dynamic sources, including cloud sources. How do we write a new one? Simple! We just create a script or program that can return JSON in the right format when fed the proper arguments. You can do this in any language. .. _inventory_script_conventions: Script Conventions `````````````````` When the external node script is called with the single argument ``--list``, the script must return a JSON hash/dictionary of all the groups to be managed.
Each group's value should be either a hash/dictionary containing a list of each host/IP, potential child groups, and potential group variables, or simply a list of host/IP addresses, like so:: { "databases" : { "hosts" : [ "host1.example.com", "host2.example.com" ], "vars" : { "a" : true } }, "webservers" : [ "host2.example.com", "host3.example.com" ], "atlanta" : { "hosts" : [ "host1.example.com", "host4.example.com", "host5.example.com" ], "vars" : { "b" : false }, "children": [ "marietta", "5points" ] }, "marietta" : [ "host6.example.com" ], "5points" : [ "host7.example.com" ] } .. versionadded:: 1.0 Before version 1.0, each group could only have a list of hostnames/IP addresses, like the webservers, marietta, and 5points groups above. When called with the arguments ``--host <hostname>`` (where <hostname> is a host from above), the script must return either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. Returning variables is optional; if the script does not wish to do this, returning an empty hash/dictionary is the way to go:: { "favcolor" : "red", "ntpserver" : "wolf.example.com", "monitoring" : "pack.example.com" } .. _inventory_script_tuning: Tuning the External Inventory Script ```````````````````````````````````` .. versionadded:: 1.3 The stock inventory script system detailed above works for all versions of Ansible, but calling ``--host`` for every host can be rather expensive, especially if it involves expensive API calls to a remote subsystem. In Ansible 1.3 or later, if the inventory script returns a top-level element called "_meta", it is possible to return all of the host variables in one inventory script call. When this meta element contains a value for "hostvars", the inventory script will not be invoked with ``--host`` for each host. This results in a significant performance increase for large numbers of hosts, and also makes client-side caching easier to implement for the inventory script. The data to be added to the top-level JSON dictionary looks like this:: { # results of inventory script as above go here # ... "_meta" : { "hostvars" : { "moocow.example.com" : { "asdf" : 1234 }, "llama.example.com" : { "asdf" : 5678 }, } } } .. seealso:: :doc:`developing_api` Python API to Playbooks and Ad Hoc Task Execution :doc:`developing_modules` How to develop modules :doc:`developing_plugins` How to develop plugins `Ansible Tower `_ REST API endpoint and GUI for Ansible, syncs with dynamic inventory `Development Mailing List `_ Mailing list for development topics `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/developing_plugins.rst0000664000000000000000000001343012316627017020535 0ustar rootrootDeveloping Plugins ================== .. contents:: Topics Ansible is pluggable in a lot of other ways separate from inventory scripts and callbacks. Many of these features exist to cover fringe use cases and are infrequently needed; others are pluggable simply because they implement core features in Ansible and it was most convenient to make them pluggable. This section will explore these features, though they are generally not things people look to extend as often. .. _developing_connection_type_plugins: Connection Type Plugins ----------------------- By default, ansible ships with 'paramiko' SSH, native ssh (just called 'ssh'), and 'local' connection types, and there are also some minor players like 'chroot' and 'jail'.
All of these can be used in playbooks and with /usr/bin/ansible to decide how you want to talk to remote machines. The basics of these connection types are covered in the :doc:`intro_getting_started` section. Should you want to extend Ansible to support other transports (SNMP? Message bus? Carrier Pigeon?), it's as simple as copying the format of one of the existing modules and dropping it into the connection plugins directory. The value of 'smart' for a connection allows selection of paramiko or openssh based on system capabilities, and chooses 'ssh' if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support 'smart'. More documentation on writing connection plugins is pending, though you can jump into `lib/ansible/runner/connection_plugins `_ and figure things out pretty easily. .. _developing_lookup_plugins: Lookup Plugins -------------- Language constructs like "with_fileglob" and "with_items" are implemented via lookup plugins. Just like other plugin types, you can write your own. More documentation on writing lookup plugins is pending, though you can jump into `lib/ansible/runner/lookup_plugins `_ and figure things out pretty easily. .. _developing_vars_plugins: Vars Plugins ------------ Playbook constructs like 'host_vars' and 'group_vars' work via 'vars' plugins. They inject additional variable data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables can also be returned from inventory, so in most cases, you won't need to write or understand vars_plugins. More documentation on writing vars plugins is pending, though you can jump into `lib/ansible/inventory/vars_plugins `_ and figure things out pretty easily. If you find yourself wanting to write a vars_plugin, it's more likely you should write an inventory script instead. .. _developing_filter_plugins: Filter Plugins -------------- If you want more Jinja2 filters available in a Jinja2 template (filters like to_yaml and to_json are provided by default), you can add them by writing a filter plugin. Most of the time, when someone comes up with an idea for a new filter they would like to make available in a playbook, we'll just include them in 'core.py' instead. Jump into `lib/ansible/runner/filter_plugins/ `_ for details. .. _developing_callbacks: Callbacks --------- Callbacks are one of the more interesting plugin types. Adding additional callback plugins to Ansible allows for adding new behaviors when responding to events. .. _callback_examples: Examples ++++++++ Example callbacks are shown in `plugins/callbacks `_. The `log_plays `_ callback is an example of how to intercept playbook events to a log file, and the `mail `_ callback sends email when playbooks complete. The `osx_say `_ callback provided is particularly entertaining -- it will respond with computer synthesized speech on OS X in relation to playbook events, and is guaranteed to entertain and/or annoy coworkers. .. _configuring_callbacks: Configuring +++++++++++ To activate a callback, drop it in a callback directory as configured in :ref:`ansible.cfg `. .. _callback_development: Development +++++++++++ More information will come later, though see the source of any of the existing callbacks and you should be able to get started quickly. They should be reasonably self-explanatory. ..
.. _distributing_plugins: Distributing Plugins -------------------- Plugins are loaded from both Python's site_packages (those that ship with ansible) and a configured plugins directory, which defaults to /usr/share/ansible/plugins, in a subfolder for each plugin type:: * action_plugins * lookup_plugins * callback_plugins * connection_plugins * filter_plugins * vars_plugins To change this path, edit the ansible configuration file. In addition, plugins can be shipped in a subdirectory relative to a top-level playbook, in folders named the same as indicated above. .. seealso:: :doc:`modules` List of built-in modules :doc:`developing_api` Learn about the Python API for task execution :doc:`developing_inventory` Learn about how to develop dynamic inventory sources :doc:`developing_modules` Learn about how to write Ansible modules `Mailing List <http://groups.google.com/group/ansible-devel>`_ The development mailing list `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_roles.rst0000664000000000000000000003215712316627017020056 0ustar rootrootPlaybook Roles and Include Statements ===================================== .. contents:: Topics Introduction ```````````` While it is possible to write a playbook in one very large file (and you might start out learning playbooks this way), eventually you'll want to reuse files and start to organize things. At a basic level, including task files allows you to break up bits of configuration policy into smaller files. Task includes pull in tasks from other files. Since handlers are tasks too, you can also include handler files from the 'handlers:' section. See :doc:`playbooks` if you need a review of these concepts. Playbooks can also include plays from other playbook files. When that is done, the plays will be inserted into the playbook to form a longer list of plays. As you start to think about it, tasks, handlers, variables, and so on begin to form larger concepts. You start to think about modeling what something is, rather than how to make something look like something. It's no longer "apply this handful of THINGS to these hosts"; instead you say "these hosts are dbservers" or "these hosts are webservers". In programming, we might call that "encapsulating" how things work. For instance, you can drive a car without knowing how the engine works. Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions -- they allow you to focus more on the big picture and only dive down into the details when needed. We'll start with understanding includes so roles make more sense, but our ultimate goal should be understanding roles -- roles are great and you should use them every time you write playbooks. See the `ansible-examples <https://github.com/ansible/ansible-examples>`_ repository on GitHub for lots of examples of all of this put together. You may wish to have this open in a separate tab as you dive in. Task Include Files And Encouraging Reuse ```````````````````````````````````````` Suppose you want to reuse lists of tasks between plays or playbooks. You can use include files to do this. Use of included task lists is a great way to define a role that a system is going to fulfill. Remember, the goal of a play in a playbook is to map a group of systems into multiple roles. Let's see what this looks like...
A task include file simply contains a flat list of tasks, like so:: --- # possibly saved as tasks/foo.yml - name: placeholder foo command: /bin/foo - name: placeholder bar command: /bin/bar Include directives look like this, and can be mixed in with regular tasks in a playbook:: tasks: - include: tasks/foo.yml You can also pass variables into includes. We call this a 'parameterized include'. For instance, if deploying multiple wordpress instances, I could contain all of my wordpress tasks in a single wordpress.yml file, and use it like so:: tasks: - include: wordpress.yml user=timmy - include: wordpress.yml user=alice - include: wordpress.yml user=bob If you are running Ansible 1.4 and later, include syntax is streamlined to match roles, and also allows passing list and dictionary parameters:: tasks: - { include: wordpress.yml, user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] } Using either syntax, variables passed in can then be used in the included files. We've already covered them a bit in :doc:`playbooks_variables`. You can reference them like this:: {{ user }} (In addition to the explicitly passed-in parameters, all variables from the vars section are also available for use here as well.) Starting in 1.0, variables can also be passed to include files using an alternative syntax, which also supports structured variables:: tasks: - include: wordpress.yml vars: remote_user: timmy some_list_variable: - alpha - beta - gamma Playbooks can include other playbooks too, but that's mentioned in a later section. .. note:: As of 1.0, task include statements can be used at arbitrary depth. They were previously limited to a single level, so task includes could not include other files containing task includes. Includes can also be used in the 'handlers' section, for instance, if you want to define how to restart apache, you only have to do that once for all of your playbooks. You might make a handlers.yml that looks like:: --- # this might be in a file like handlers/handlers.yml - name: restart apache service: name=apache state=restarted And in your main playbook file, just include it like so, at the bottom of a play:: handlers: - include: handlers/handlers.yml You can mix in includes along with your regular non-included tasks and handlers. Includes can also be used to import one playbook file into another. This allows you to define a top-level playbook that is composed of other playbooks. For example:: - name: this is a play at the top level of a file hosts: all remote_user: root tasks: - name: say hi tags: foo shell: echo "hi..." - include: load_balancers.yml - include: webservers.yml - include: dbservers.yml Note that you cannot do variable substitution when including one playbook inside another. .. note:: You can not conditionally path the location to an include file, like you can with 'vars_files'. If you find yourself needing to do this, consider how you can restructure your playbook to be more class/role oriented. This is to say you cannot use a 'fact' to decide what include file to use. All hosts contained within the play are going to get the same tasks. ('*when*' provides some ability for hosts to conditionally skip tasks). .. _roles: Roles ````` .. versionadded:: 1.2 Now that you have learned about vars_files, tasks, and handlers, what is the best way to organize your playbooks? The short answer is to use roles! Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. 
Grouping content by roles also allows easy sharing of roles with other users. Roles are just automation around 'include' directives as described above, and really don't contain much additional magic beyond some improvements to search path handling for referenced files. However, that can be a big thing! Example project structure:: site.yml webservers.yml fooservers.yml roles/ common/ files/ templates/ tasks/ handlers/ vars/ meta/ webservers/ files/ templates/ tasks/ handlers/ vars/ meta/ In a playbook, it would look like this:: --- - hosts: webservers roles: - common - webservers This designates the following behaviors, for each role 'x': - If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play - If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play - If roles/x/vars/main.yml exists, variables listed therein will be added to the play - If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later) - Any copy tasks can reference files in roles/x/files/ without having to path them relatively or absolutely - Any script tasks can reference scripts in roles/x/files/ without having to path them relatively or absolutely - Any template tasks can reference files in roles/x/templates/ without having to path them relatively or absolutely - Any include tasks can reference files in roles/x/tasks/ without having to path them relatively or absolutely In Ansible 1.4 and later you can configure a roles_path to search for roles. Use this to check all of your common roles out to one location, and share them easily between multiple playbook projects. See :doc:`intro_configuration` for details about how to set this up in ansible.cfg. .. note:: Role dependencies are discussed below. If any files are not present, they are just ignored. So it's ok to not have a 'vars/' subdirectory for the role, for instance. Note, you are still allowed to list tasks, vars_files, and handlers "loose" in playbooks without using roles, but roles are a good organizational feature and are highly recommended. If there are loose things in the playbook, the roles are evaluated first. Also, should you wish to parameterize roles by adding variables, you can do so, like this:: --- - hosts: webservers roles: - common - { role: foo_app_instance, dir: '/opt/a', port: 5000 } - { role: foo_app_instance, dir: '/opt/b', port: 5001 } While it's probably not something you should do often, you can also conditionally apply roles like so:: --- - hosts: webservers roles: - { role: some_role, when: "ansible_os_family == 'RedHat'" } This works by applying the conditional to every task in the role. Conditionals are covered later on in the documentation. Finally, you may wish to assign tags to the roles you specify. You can do so inline:: --- - hosts: webservers roles: - { role: foo, tags: ["bar", "baz"] } If the play still has a 'tasks' section, those tasks are executed after roles are applied. If you want to define certain tasks to happen before AND after roles are applied, you can do this:: --- - hosts: webservers pre_tasks: - shell: echo 'hello' roles: - { role: some_role } tasks: - shell: echo 'still busy' post_tasks: - shell: echo 'goodbye' .. note:: If using tags with tasks (described later as a means of only running part of a playbook), be sure to also tag your pre_tasks and post_tasks and pass those along as well, especially if the pre and post tasks are used for monitoring outage window control or load balancing. A sketch of this is shown below.
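As a minimal sketch of that advice (the role and tag names here are hypothetical), tagging the surrounding tasks along with the role might look like::

    ---
    - hosts: webservers
      pre_tasks:
        - shell: echo 'take server out of the load balancer pool'
          tags: [ 'outage_window' ]
      roles:
        - { role: some_role, tags: [ 'outage_window' ] }
      post_tasks:
        - shell: echo 'add server back to the load balancer pool'
          tags: [ 'outage_window' ]

This way, running with ``--tags outage_window`` still wraps the role's tagged tasks with the pre and post steps.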
Role Default Variables `````````````````````` .. versionadded:: 1.3 Role default variables allow you to set default variables for included or dependent roles (see below). To create defaults, simply add a `defaults/main.yml` file in your role directory. These variables will have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables. Role Dependencies ````````````````` .. versionadded:: 1.3 Role dependencies allow you to automatically pull in other roles when using a role. Role dependencies are stored in the `meta/main.yml` file contained within the role directory. This file should contain a list of roles and parameters to insert before the specified role, such as the following in an example `roles/myapp/meta/main.yml`:: --- dependencies: - { role: common, some_parameter: 3 } - { role: apache, port: 80 } - { role: postgres, dbname: blarg, other_parameter: 12 } Role dependencies can also be specified as a full path, just like top level roles:: --- dependencies: - { role: '/path/to/common/roles/foo', x: 1 } Role dependencies are always executed before the role that includes them, and are recursive. By default, roles can also only be added as a dependency once - if another role also lists it as a dependency it will not be run again. This behavior can be overridden by adding `allow_duplicates: yes` to the `meta/main.yml` file. For example, a role named 'car' could add a role named 'wheel' to its dependencies as follows:: --- dependencies: - { role: wheel, n: 1 } - { role: wheel, n: 2 } - { role: wheel, n: 3 } - { role: wheel, n: 4 } And if the `meta/main.yml` for wheel contains the following:: --- allow_duplicates: yes dependencies: - { role: tire } - { role: brake } The resulting order of execution would be as follows:: tire(n=1) brake(n=1) wheel(n=1) tire(n=2) brake(n=2) wheel(n=2) ... car .. note:: Variable inheritance and scope are detailed in :doc:`playbooks_variables`. Ansible Galaxy `````````````` `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible roles and can be a great way to get a jumpstart on your automation projects. You can sign up with social auth, and the download client 'ansible-galaxy' is included in Ansible 1.4.2 and later. Read the "About" page on the Galaxy site for more information. .. seealso:: :doc:`YAMLSyntax` Learn about YAML syntax :doc:`playbooks` Review the basic Playbook language features :doc:`playbooks_best_practices` Various tips about managing playbooks in the real world :doc:`playbooks_variables` All about variables in playbooks :doc:`playbooks_conditionals` Conditionals in playbooks :doc:`playbooks_loops` Loops in playbooks :doc:`modules` Learn about available modules :doc:`developing_modules` Learn how to extend Ansible by writing your own modules `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_ Complete playbook files from the GitHub project source `Mailing List <http://groups.google.com/group/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups ansible-1.5.4/docsite/rst/galaxy.rst0000664000000000000000000000067112316627017016130 0ustar rootrootAnsible Galaxy `````````````` `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible roles and can be a great way to get a jumpstart on your automation projects. You can sign up with social auth, and the download client 'ansible-galaxy' is included in Ansible 1.4.2 and later. Read the "About" page on the Galaxy site for more information.
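Once downloaded, a Galaxy role is referenced like any role you wrote yourself. A minimal sketch (the role name 'username.apache' is hypothetical)::

    ---
    - hosts: webservers
      roles:
        - username.apache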
ansible-1.5.4/docsite/rst/playbooks_loops.rst0000664000000000000000000003370712316627017020070 0ustar rootrootLoops ===== Often you'll want to do many things in one task, such as create a lot of users, install a lot of packages, or repeat a polling step until a certain result is reached. This chapter is all about how to use loops in playbooks. .. contents:: Topics .. _standard_loops: Standard Loops `````````````` To save some typing, repeated tasks can be written in short-hand like so:: - name: add several users user: name={{ item }} state=present groups=wheel with_items: - testuser1 - testuser2 If you have defined a YAML list in a variables file, or the 'vars' section, you can also do:: with_items: somelist The above would be the equivalent of:: - name: add user testuser1 user: name=testuser1 state=present groups=wheel - name: add user testuser2 user: name=testuser2 state=present groups=wheel The yum and apt modules use with_items to execute fewer package manager transactions. Note that the types of items you iterate over with 'with_items' do not have to be simple lists of strings. If you have a list of hashes, you can reference subkeys using things like:: - name: add several users user: name={{ item.name }} state=present groups={{ item.groups }} with_items: - { name: 'testuser1', groups: 'wheel' } - { name: 'testuser2', groups: 'root' } .. _nested_loops: Nested Loops ```````````` Loops can be nested as well:: - name: give users access to multiple databases mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo with_nested: - [ 'alice', 'bob', 'eve' ] - [ 'clientdb', 'employeedb', 'providerdb' ] As with the case of 'with_items' above, you can use previously defined variables. Just specify the variable's name without templating it with '{{ }}':: - name: here, 'users' contains the above list of employees mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo with_nested: - users - [ 'clientdb', 'employeedb', 'providerdb' ] .. _looping_over_hashes: Looping over Hashes ``````````````````` .. versionadded:: 1.5 Suppose you have the following variable:: --- users: alice: name: Alice Appleworth telephone: 123-456-7890 bob: name: Bob Bananarama telephone: 987-654-3210 And you want to print every user's name and phone number. You can loop through the elements of a hash using ``with_dict`` like this:: tasks: - name: Print phone records debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})" with_dict: users .. _looping_over_fileglobs: Looping over Fileglobs `````````````````````` ``with_fileglob`` matches all files in a single directory, non-recursively, that match a pattern. It can be used like this:: --- - hosts: all tasks: # first ensure our target directory exists - file: dest=/etc/fooapp state=directory # copy each file over that matches the given pattern - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600 with_fileglob: - /playbooks/files/fooapp/* Looping over Parallel Sets of Data `````````````````````````````````` .. note:: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often. Suppose you have the following variable data, loaded in from somewhere:: --- alpha: [ 'a', 'b', 'c', 'd' ] numbers: [ 1, 2, 3, 4 ] And you want the set of '(a, 1)' and '(b, 2)' and so on.
Use 'with_together' to get this:: tasks: - debug: msg="{{ item.0 }} and {{ item.1 }}" with_together: - alpha - numbers Looping over Subelements ```````````````````````` Suppose you want to do something like loop over a list of users, creating them, and allowing them to login by a certain set of SSH keys. How might that be accomplished? Let's assume you had the following defined and loaded in via "vars_files" or maybe a "group_vars/all" file:: --- users: - name: alice authorized: - /tmp/alice/onekey.pub - /tmp/alice/twokey.pub - name: bob authorized: - /tmp/bob/id_rsa.pub It might happen like so:: - user: name={{ item.name }} state=present generate_ssh_key=yes with_items: users - authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'" with_subelements: - users - authorized Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given key inside of those records. The authorized_key pattern is exactly where it comes up most. .. _looping_over_integer_sequences: Looping over Integer Sequences `````````````````````````````` ``with_sequence`` generates a sequence of items in ascending numerical order. You can specify a start, end, and an optional step value. Arguments should be specified in key=value pairs. If supplied, the 'format' is a printf style string. Numerical values can be specified in decimal, hexadecimal (0x3f8) or octal (0600). Negative numbers are not supported. This works as follows:: --- - hosts: all tasks: # create groups - group: name=evens state=present - group: name=odds state=present # create some test users - user: name={{ item }} state=present groups=evens with_sequence: start=0 end=32 format=testuser%02x # create a series of directories with even numbers for some reason - file: dest=/var/stuff/{{ item }} state=directory with_sequence: start=4 end=16 stride=2 # a simpler way to use the sequence plugin # create 4 groups - group: name=group{{ item }} state=present with_sequence: count=4 .. _random_choice: Random Choices `````````````` The 'random_choice' feature can be used to pick something at random. While it's not a load balancer (there are modules for those), it can somewhat be used as a poor man's load balancer in a MacGyver-like situation:: - debug: msg={{ item }} with_random_choice: - "go through the door" - "drink from the goblet" - "press the red button" - "do nothing" One of the provided strings will be selected at random. At a more basic level, they can be used to add chaos and excitement to otherwise predictable automation environments. .. _do_until_loops: Do-Until Loops `````````````` .. versionadded:: 1.4 Sometimes you may want to retry a task until a certain condition is met. Here's an example:: - action: shell /usr/bin/foo register: result until: result.stdout.find("all systems go") != -1 retries: 5 delay: 10 The above example runs the shell module repeatedly until the module's result has "all systems go" in its stdout, or the task has been retried 5 times with a delay of 10 seconds. The default value for "retries" is 3 and "delay" is 5. The task returns the results returned by the last task run. The results of individual retries can be viewed with the -vv option. The registered variable will also have a new key "attempts", which records the number of retries for the task. .. _with_first_found: Finding First Matched Files ``````````````````````````` .. note:: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often.
This isn't exactly a loop, but it's close. What if you want to use a reference to a file based on the first file found that matches a given criteria, and some of the filenames are determined by variable names? Yes, you can do that as follows:: - name: INTERFACES | Create Ansible header for /etc/network/interfaces template: src={{ item }} dest=/etc/foo.conf with_first_found: - "{{ ansible_virtualization_type }}_foo.conf" - "default_foo.conf" This tool also has a long form version that allows for configurable search paths. Here's an example:: - name: some configuration template template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root with_first_found: - files: - "{{inventory_hostname}}/etc/file.cfg" paths: - ../../../templates.overwrites - ../../../templates - files: - etc/file.cfg paths: - templates .. _looping_over_the_results_of_a_program_execution: Iterating Over The Results of a Program Execution ````````````````````````````````````````````````` .. note:: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often. Sometimes you might want to execute a program, and based on the output of that program, loop over its results line by line. Ansible provides a neat way to do that, though you should remember, this is always executed on the control machine, not the remote machine:: - name: Example of looping over a command result shell: /usr/bin/frobnicate {{ item }} with_lines: /usr/bin/frobnications_per_host --param {{ inventory_hostname }} Ok, that was a bit arbitrary. In fact, if you're doing something that is inventory related you might just want to write a dynamic inventory source instead (see :doc:`intro_dynamic_inventory`), but this can be occasionally useful in quick-and-dirty implementations. Should you ever need to execute a command remotely, you would not use the above method. Instead do this:: - name: Example of looping over a REMOTE command result shell: /usr/bin/something register: command_result - name: Do something with each result shell: /usr/bin/something_else --param {{ item }} with_items: command_result.stdout_lines .. _indexed_lists: Looping Over A List With An Index ````````````````````````````````` .. note:: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often. .. versionadded:: 1.3 If you want to loop over an array and also get the numeric index of where you are in the array as you go, you can also do that. It's uncommonly used:: - name: indexed loop demo debug: msg="at array position {{ item.0 }} there is a value {{ item.1 }}" with_indexed_items: some_list .. _flattening_a_list: Flattening A List ````````````````` .. note:: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often. In rare instances you might have several lists of lists, and you just want to iterate over every item in all of those lists. Assume a really crazy hypothetical datastructure:: --- # file: roles/foo/vars/main.yml packages_base: - [ 'foo-package', 'bar-package' ] packages_apps: - [ ['one-package', 'two-package' ]] - [ ['red-package'], ['blue-package']] As you can see, the formatting of packages in these lists is all over the place. How can we install all of the packages in both lists?:: - name: flattened loop demo yum: name={{ item }} state=installed with_flattened: - packages_base - packages_apps That's how!
.. _using_register_with_a_loop: Using register with a loop `````````````````````````` When using ``register`` with a loop, the data structure placed in the variable will contain a ``results`` attribute that is a list of all responses from the module. Here is an example of using ``register`` with ``with_items``:: - shell: echo "{{ item }}" with_items: - one - two register: echo This differs from the data structure returned when using ``register`` without a loop; with a loop it looks like this:: { "changed": true, "msg": "All items completed", "results": [ { "changed": true, "cmd": "echo \"one\" ", "delta": "0:00:00.003110", "end": "2013-12-19 12:00:05.187153", "invocation": { "module_args": "echo \"one\"", "module_name": "shell" }, "item": "one", "rc": 0, "start": "2013-12-19 12:00:05.184043", "stderr": "", "stdout": "one" }, { "changed": true, "cmd": "echo \"two\" ", "delta": "0:00:00.002920", "end": "2013-12-19 12:00:05.245502", "invocation": { "module_args": "echo \"two\"", "module_name": "shell" }, "item": "two", "rc": 0, "start": "2013-12-19 12:00:05.242582", "stderr": "", "stdout": "two" } ] } Subsequent loops over the registered variable to inspect the results may look like:: - name: Fail if return code is not 0 fail: msg: "The command ({{ item.cmd }}) did not have a 0 return code" when: item.rc != 0 with_items: echo.results .. _writing_your_own_iterators: Writing Your Own Iterators `````````````````````````` While you ordinarily shouldn't have to, should you wish to write your own ways to loop over arbitrary datastructures, you can read :doc:`developing_plugins` for some starter information. Each of the above features is implemented as a plugin in ansible, so there are many implementations to reference. .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_roles` Playbook organization by roles :doc:`playbooks_best_practices` Best practices in playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_variables` All about variables `User Mailing List <http://groups.google.com/group/ansible-project>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_best_practices.rst0000664000000000000000000003203312316627017021715 0ustar rootrootBest Practices ============== Here are some tips for making the most of Ansible playbooks. You can find some example playbooks illustrating these best practices in our `ansible-examples repository <https://github.com/ansible/ansible-examples>`_. (NOTE: These may not use all of the features in the latest release, but are still an excellent reference!). .. contents:: Topics .. _content_organization: Content Organization ++++++++++++++++++++++ The following section shows one of many possible ways to organize playbook content. Your usage of Ansible should fit your needs, not ours, so feel free to modify this approach and organize as you see fit. (One thing you will definitely want to do though, is use the "roles" organization feature, which is documented as part of the main playbooks page. See :doc:`playbooks_roles`). ..
_directory_layout: Directory Layout ```````````````` The top level of the directory would contain files and directories like so:: production # inventory file for production servers stage # inventory file for stage environment group_vars/ group1 # here we assign variables to particular groups group2 # "" host_vars/ hostname1 # if systems need specific variables, put them here hostname2 # "" site.yml # master playbook webservers.yml # playbook for webserver tier dbservers.yml # playbook for dbserver tier roles/ common/ # this hierarchy represents a "role" tasks/ # main.yml # <-- tasks file can include smaller files if warranted handlers/ # main.yml # <-- handlers file templates/ # <-- files for use with the template resource ntp.conf.j2 # <------- templates end in .j2 files/ # bar.txt # <-- files for use with the copy resource foo.sh # <-- script files for use with the script resource vars/ # main.yml # <-- variables associated with this role webtier/ # same kind of structure as "common" was above, done for the webtier role monitoring/ # "" fooapp/ # "" .. _stage_vs_prod: How to Arrange Inventory, Stage vs Production ````````````````````````````````````````````` In the example below, the *production* file contains the inventory of all of your production hosts. Of course you can pull inventory from an external data source as well, but this is just a basic example. It is suggested that you define groups based on purpose of the host (roles) and also geography or datacenter location (if applicable):: # file: production [atlanta-webservers] www-atl-1.example.com www-atl-2.example.com [boston-webservers] www-bos-1.example.com www-bos-2.example.com [atlanta-dbservers] db-atl-1.example.com db-atl-2.example.com [boston-dbservers] db-bos-1.example.com # webservers in all geos [webservers:children] atlanta-webservers boston-webservers # dbservers in all geos [dbservers:children] atlanta-dbservers boston-dbservers # everything in the atlanta geo [atlanta:children] atlanta-webservers atlanta-dbservers # everything in the boston geo [boston:children] boston-webservers boston-dbservers .. _groups_and_hosts: Group And Host Variables ```````````````````````` Now, groups are nice for organization, but that's not all groups are good for. You can also assign variables to them! For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let's set those now:: --- # file: group_vars/atlanta ntp: ntp-atlanta.example.com backup: backup-atlanta.example.com Variables aren't just for geographic information either! Maybe the webservers have some configuration that doesn't make sense for the database servers:: --- # file: group_vars/webservers apacheMaxRequestsPerChild: 3000 apacheMaxClients: 900 If we had any default values, or values that were universally true, we would put them in a file called group_vars/all:: --- # file: group_vars/all ntp: ntp-boston.example.com backup: backup-boston.example.com We can define specific hardware variance in systems in a host_vars file, but avoid doing this unless you need to:: --- # file: host_vars/db-bos-1.example.com foo_agent_port: 86 bar_agent_port: 99 .. _split_by_role: Top Level Playbooks Are Separated By Role ````````````````````````````````````````` In site.yml, we include a playbook that defines our entire infrastructure. Note this is SUPER short, because it's just including some other playbooks. 
Remember, playbooks are nothing more than lists of plays:: --- # file: site.yml - include: webservers.yml - include: dbservers.yml In a file like webservers.yml (also at the top level), we simply map the configuration of the webservers group to the roles performed by the webservers group. Also notice this is incredibly short. For example:: --- # file: webservers.yml - hosts: webservers roles: - common - webtier .. _role_organization: Task And Handler Organization For A Role ```````````````````````````````````````` Below is an example tasks file that explains how a role works. Our common role here just sets up NTP, but it could do more if we wanted:: --- # file: roles/common/tasks/main.yml - name: be sure ntp is installed yum: pkg=ntp state=installed tags: ntp - name: be sure ntp is configured template: src=ntp.conf.j2 dest=/etc/ntp.conf notify: - restart ntpd tags: ntp - name: be sure ntpd is running and enabled service: name=ntpd state=running enabled=yes tags: ntp Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end of each play:: --- # file: roles/common/handlers/main.yml - name: restart ntpd service: name=ntpd state=restarted See :doc:`playbooks_roles` for more information. .. _organization_examples: What This Organization Enables (Examples) ````````````````````````````````````````` Above we've shared our basic organizational structure. Now what sort of use cases does this layout enable? Lots! If I want to reconfigure my whole infrastructure, it's just:: ansible-playbook -i production site.yml What about just reconfiguring NTP on everything? Easy.:: ansible-playbook -i production site.yml --tags ntp What about just reconfiguring my webservers?:: ansible-playbook -i production webservers.yml What about just my webservers in Boston?:: ansible-playbook -i production webservers.yml --limit boston What about just the first 10, and then the next 10?:: ansible-playbook -i production webservers.yml --limit boston[0-10] ansible-playbook -i production webservers.yml --limit boston[10-20] And of course just basic ad-hoc stuff is also possible.:: ansible -i production -m ping ansible -i production -m command -a '/sbin/reboot' --limit boston And there are some useful commands to know (at least in 1.1 and higher):: # confirm what task names would be run if I ran this command and said "just ntp tasks" ansible-playbook -i production webservers.yml --tags ntp --list-tasks # confirm what hostnames might be communicated with if I said "limit to boston" ansible-playbook -i production webservers.yml --limit boston --list-hosts .. _dep_vs_config: Deployment vs Configuration Organization ```````````````````````````````````````` The above setup models a typical configuration topology. When doing multi-tier deployments, there are going to be some additional playbooks that hop between tiers to roll out an application. In this case, 'site.yml' may be augmented by playbooks like 'deploy_exampledotcom.yml' but the general concepts can still apply. Consider "playbooks" as a sports metaphor -- you don't have to just have one set of plays to use against your infrastructure all the time -- you can have situational plays that you use at different times and for different purposes. Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and just keep the OS configuration in separate playbooks from the app deployment. .. 
_stage_vs_production: Stage vs Production +++++++++++++++++++ As also mentioned above, a good way to keep your stage (or testing) and production environments separate is to use a separate inventory file for stage and production. This way you pick with -i what you are targeting. Keeping them all in one file can lead to surprises! Testing things in a stage environment before trying in production is always a great idea. Your environments need not be the same size and you can use group variables to control the differences between those environments. .. _rolling_update: Rolling Updates +++++++++++++++ Understand the 'serial' keyword. If updating a webserver farm you really want to use it to control how many machines you are updating at once in the batch. See :doc:`playbooks_delegation`. .. _mention_the_state: Always Mention The State ++++++++++++++++++++++++ The 'state' parameter is optional to a lot of modules. Whether 'state=present' or 'state=absent', it's always best to leave that parameter in your playbooks to make it clear, especially as some modules support additional states. .. _group_by_roles: Group By Roles ++++++++++++++ A system can be in multiple groups. See :doc:`intro_inventory` and :doc:`intro_patterns`. Having groups named after things like *webservers* and *dbservers* is repeated in the examples because it's a very powerful concept. This allows playbooks to target machines based on role, as well as to assign role specific variables using the group variable system. See :doc:`playbooks_roles`. .. _os_variance: Operating System and Distribution Variance ++++++++++++++++++++++++++++++++++++++++++ When dealing with a parameter that is different between two different operating systems, the best way to handle this is by using the group_by module. This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file:: --- # talk to all hosts just so we can learn about them - hosts: all tasks: - group_by: key={{ ansible_distribution }} # now just on the CentOS hosts... - hosts: CentOS gather_facts: False tasks: - # tasks that only happen on CentOS go here If group-specific settings are needed, this can also be done. For example:: --- # file: group_vars/all asdf: 10 --- # file: group_vars/CentOS asdf: 42 In the above example, CentOS machines get the value of '42' for asdf, but other machines get '10'. .. _ship_modules_with_playbooks: Bundling Ansible Modules With Playbooks +++++++++++++++++++++++++++++++++++++++ .. versionadded:: 0.5 If a playbook has a "./library" directory relative to its YAML file, this directory can be used to add ansible modules that will automatically be in the ansible module path. This is a great way to keep modules that go with a playbook together. .. _whitespace: Whitespace and Comments +++++++++++++++++++++++ Generous use of whitespace to break things up, and use of comments (which start with '#'), is encouraged. .. _name_tasks: Always Name Tasks +++++++++++++++++ It is possible to leave off the 'name' for a given task, though it is recommended to provide a description about why something is being done instead. This name is shown when the playbook is run. .. _keep_it_simple: Keep It Simple ++++++++++++++ When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all at once. Use what works for you. For example, you will probably not need ``vars``, ``vars_files``, ``vars_prompt`` and ``--extra-vars`` all at once, while also using an external inventory file. .. 
_version_control: Version Control +++++++++++++++ Use version control. Keep your playbooks and inventory file in git (or another version control system), and commit when you make changes to them. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure. .. seealso:: :doc:`YAMLSyntax` Learn about YAML syntax :doc:`playbooks` Review the basic playbook features :doc:`modules` Learn about available modules :doc:`developing_modules` Learn how to extend Ansible by writing your own modules :doc:`intro_patterns` Learn about how to select hosts `Github examples directory <https://github.com/ansible/ansible-examples>`_ Complete playbook files from the github project source `Mailing List <http://groups.google.com/group/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups ansible-1.5.4/docsite/rst/playbooks_vault.rst0000664000000000000000000001014212316627017020053 0ustar rootrootVault ===== .. contents:: Topics New in Ansible 1.5, "Vault" is a feature of ansible that allows keeping encrypted data in source control. To enable this feature, a command line tool, `ansible-vault`, is used to edit files, and a command line flag, `--ask-vault-pass` or `--vault-password-file`, is used. .. _what_can_be_encrypted_with_vault: What Can Be Encrypted With Vault ```````````````````````````````` The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included! Because Ansible tasks, handlers, and so on are also data, these too can be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far as to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :) .. _creating_files: Creating Encrypted Files ```````````````````````` To create a new encrypted data file, run the following command:: ansible-vault create foo.yml First you will be prompted for a password. The password used with vault currently must be the same for all files you wish to use together at the same time. After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim. Once you are done with the editor session, the file will be saved as encrypted data. The default cipher is AES (which is shared-secret based). .. _editing_encrypted_files: Editing Encrypted Files ``````````````````````` To edit an encrypted file in place, use the `ansible-vault edit` command. This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file:: ansible-vault edit foo.yml .. _rekeying_files: Rekeying Encrypted Files ```````````````````````` Should you wish to change your password on a vault-encrypted file or files, you can do so with the rekey command:: ansible-vault rekey foo.yml bar.yml baz.yml This command can rekey multiple data files at once and will ask for the original password and also the new password.
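Keep in mind that the files vault operates on are just ordinary YAML variable files before they are encrypted. A minimal sketch of such a file (the variable names and values here are made up)::

    ---
    # e.g. group_vars/webservers, to be protected with the
    # 'ansible-vault encrypt' command described next
    db_password: "s3cr3t"
    api_key: "0123456789abcdef"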
.. _encrypting_files: Encrypting Unencrypted Files ```````````````````````````` If you have existing files that you wish to encrypt, use the `ansible-vault encrypt` command. This command can operate on multiple files at once:: ansible-vault encrypt foo.yml bar.yml baz.yml .. _decrypting_files: Decrypting Encrypted Files `````````````````````````` If you have existing files that you no longer want to keep encrypted, you can permanently decrypt them by running the `ansible-vault decrypt` command. This command will save them unencrypted to the disk, so be sure you do not want `ansible-vault edit` instead:: ansible-vault decrypt foo.yml bar.yml baz.yml .. _running_a_playbook_with_vault: Running a Playbook With Vault ````````````````````````````` To run a playbook that contains vault-encrypted data files, you must pass one of two flags. To specify the vault-password interactively:: ansible-playbook site.yml --ask-vault-pass This prompt will then be used to decrypt (in memory only) any vault encrypted files that are accessed. Currently this requires that all files be encrypted with the same password. Alternatively, passwords can be specified with a file. If this is done, be careful to ensure permissions on the file are such that no one else can access your key, and do not add your key to source control:: ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt The password should be a string stored as a single line in the file. This is likely something you may wish to do if using Ansible from a continuous integration system like Jenkins. (The `--vault-password-file` option can also be used with the :ref:`ansible-pull` command if you wish, though this would require distributing the keys to your nodes, so understand the implications -- vault is more intended for push mode). ansible-1.5.4/docsite/rst/index.rst0000664000000000000000000000451412316627017015752 0ustar rootrootAnsible Documentation ===================== About Ansible ````````````` Welcome to the Ansible documentation! Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible's goals are foremost those of simplicity and maximum ease of use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with an accelerated socket mode and pull modes as alternatives), and a language that is designed around auditability by humans -- even those not familiar with the program. We believe simplicity is relevant to all sizes of environments and design for busy users of all types -- whether this means developers, sysadmins, release engineers, IT managers, and everywhere in between. Ansible is appropriate for managing small setups with a handful of instances as well as enterprise environments with many thousands. Ansible manages machines in an agentless manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. As OpenSSH is one of the most peer reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized -- it relies on your existing OS credentials to control access to remote machines; if needed it can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the current released version of Ansible (1.5.4) and also some development version features (1.6.0). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc. releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very quickly, typically adding 20 or so new modules in each release. .. _an_introduction: .. toctree:: :maxdepth: 1 intro quickstart playbooks playbooks_special_topics modules modules_by_category guides developing tower community galaxy faq glossary YAMLSyntax guru ansible-1.5.4/docsite/rst/intro_patterns.rst0000664000000000000000000000622712316627017017911 0ustar rootrootPatterns ++++++++ .. contents:: Topics Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms of :doc:`playbooks` it actually means what hosts to apply a particular configuration or IT process to. We'll go over how to use the command line in the :doc:`intro_adhoc` section, however, basically it looks like this:: ansible <pattern_goes_here> -m <module_name> -a <arguments> Such as:: ansible webservers -m service -a "name=httpd state=restarted" A pattern usually refers to a set of groups (which are sets of hosts) -- in the above case, machines in the "webservers" group. Anyway, to use Ansible, you'll first need to know how to tell Ansible which hosts in your inventory to talk to. This is done by designating particular host names or groups of hosts. The following patterns are equivalent and target all hosts in the inventory:: all * It is also possible to address a specific host or set of hosts by name:: one.example.com one.example.com:two.example.com 192.168.1.50 192.168.1.* The following patterns address one or more groups. Groups separated by a colon indicate an "OR" configuration. This means the host may be in either one group or the other:: webservers webservers:dbservers You can exclude groups as well, for instance, all machines must be in the group webservers but not in the group phoenix:: webservers:!phoenix You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and the host must also be in the group staging:: webservers:&staging You can do combinations:: webservers:dbservers:&staging:!phoenix The above configuration means "all machines in the groups 'webservers' and 'dbservers' are to be managed if they are in the group 'staging' also, but the machines are not to be managed if they are in the group 'phoenix'" ... whew! You can also use variables if you want to pass some group specifiers via the "-e" argument to ansible-playbook, but this is uncommonly used:: webservers:!{{excluded}}:&{{required}} You also don't have to manage by strictly defined groups. Individual host names, IPs and groups, can also be referenced using wildcards:: *.example.com *.com It's also ok to mix wildcard patterns and groups at the same time:: one*.com:dbservers Most people don't specify patterns as regular expressions, but you can. Just start the pattern with a '~':: ~(web|db).*\.example\.com While we're jumping a bit ahead, additionally, you can add an exclusion criterion just by supplying the ``--limit`` flag to /usr/bin/ansible or /usr/bin/ansible-playbook:: ansible-playbook site.yml --limit datacenter2 Easy enough.
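The same patterns can also be used on the 'hosts:' line of a playbook play. A minimal sketch (the group names and task are hypothetical)::

    ---
    - hosts: webservers:&staging:!phoenix
      tasks:
        - service: name=httpd state=restarted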
See :doc:`intro_adhoc` and then :doc:`playbooks` for how to apply this knowledge. .. seealso:: :doc:`intro_adhoc` Examples of basic commands :doc:`playbooks` Learning ansible's configuration management language `Mailing List <http://groups.google.com/group/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_delegation.rst0000664000000000000000000001334012316627017021036 0ustar rootrootDelegation, Rolling Updates, and Local Actions ============================================== .. contents:: Topics Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf of another, or doing local steps with reference to some remote hosts. This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling updates, where you might be talking with load balancers or monitoring systems. Additional features allow for tuning the order in which things complete, and assigning a batch window size for how many machines to process at once during a rolling update. This section covers all of these features. For examples of these items in use, `please see the ansible-examples repository <https://github.com/ansible/ansible-examples>`_. There are quite a few examples of zero-downtime update procedures for different kinds of applications. You should also consult the :doc:`modules` section; various modules like 'ec2_elb', 'nagios', 'bigip_pool', and 'netscaler' dovetail neatly with the concepts mentioned here. You'll also want to read up on :doc:`playbooks_roles`, as the 'pre_tasks' and 'post_tasks' concepts are the places where you would typically call these modules. .. _rolling_update_batch_size: Rolling Update Batch Size ````````````````````````` .. versionadded:: 0.7 By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling updates use case, you can define how many hosts Ansible should manage at a single time by using the 'serial' keyword:: - name: test play hosts: webservers serial: 3 In the above example, if we had 100 hosts, 3 hosts in the group 'webservers' would complete the play completely before moving on to the next 3 hosts. .. _maximum_failure_percentage: Maximum Failure Percentage `````````````````````````` .. versionadded:: 1.3 By default, Ansible will continue executing actions as long as there are hosts in the group that have not yet failed. In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a certain threshold of failures has been reached. To achieve this, as of version 1.3 you can set a maximum failure percentage on a play as follows:: - hosts: webservers max_fail_percentage: 30 serial: 10 In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted. .. note:: The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50. .. _delegation: Delegation `````````` .. versionadded:: 0.7 This isn't actually rolling update specific but comes up frequently in those cases. If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling outage windows.
Using this with the 'serial' keyword to control the number of hosts executing at one time is also a good idea:: --- - hosts: webservers serial: 5 tasks: - name: take out of load balancer pool command: /usr/bin/take_out_of_pool {{ inventory_hostname }} delegate_to: 127.0.0.1 - name: actual steps would go here yum: name=acme-web-stack state=latest - name: add back to load balancer pool command: /usr/bin/add_back_to_pool {{ inventory_hostname }} delegate_to: 127.0.0.1 These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1:: --- # ... tasks: - name: take out of load balancer pool local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }} # ... - name: add back to load balancer pool local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }} A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers. Here is an example:: --- # ... tasks: - name: recursively copy files from management server to target local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/ Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will need to ask for a passphrase. .. _local_playbooks: Local Playbooks ``````````````` It may be useful to use a playbook locally, rather than by connecting over SSH. This can be useful for ensuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart. To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so:: ansible-playbook playbook.yml --connection=local Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the default remote connection type:: - hosts: 127.0.0.1 connection: local .. seealso:: :doc:`playbooks` An introduction to playbooks `Ansible Examples on GitHub <https://github.com/ansible/ansible-examples>`_ Many examples of full-stack deployments `User Mailing List <http://groups.google.com/group/ansible-project>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_environment.rst0000664000000000000000000000332112316627017021265 0ustar rootrootSetting the Environment (and Working With Proxies) ================================================== .. versionadded:: 1.1 It is quite possible that you may need to get package updates through a proxy, or even get some package updates through a proxy and access other packages not through a proxy. Or maybe a script you might wish to call may also need certain environment variables set to run properly. Ansible makes it easy for you to configure your environment by using the 'environment' keyword. Here is an example:: - hosts: all remote_user: root tasks: - apt: name=cobbler state=installed environment: http_proxy: http://proxy.example.com:8080 The environment can also be stored in a variable, and accessed like so:: - hosts: all remote_user: root # here we make a variable named "proxy_env" that is a dictionary vars: proxy_env: http_proxy: http://proxy.example.com:8080 tasks: - apt: name=cobbler state=installed environment: proxy_env While just proxy settings were shown above, any number of settings can be supplied.
The most logical place to define an environment hash might be a group_vars file, like so:: --- # file: group_vars/boston ntp_server: ntp.bos.example.com backup: bak.bos.example.com proxy_env: http_proxy: http://proxy.bos.example.com:8080 https_proxy: http://proxy.bos.example.com:8080 .. seealso:: :doc:`playbooks` An introduction to playbooks `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/intro_configuration.rst0000664000000000000000000005007612316627017020731 0ustar rootrootThe Ansible Configuration File ++++++++++++++++++++++++++++++ .. contents:: Topics .. highlight:: bash Certain settings in Ansible are adjustable via a configuration file. The stock configuration should be sufficient for most users, but there may be reasons you would want to change them. Changes can be made and used in a configuration file which will be processed in the following order:: * ANSIBLE_CONFIG (an environment variable) * ansible.cfg (in the current directory) * .ansible.cfg (in the home directory) * /etc/ansible/ansible.cfg Prior to 1.5 the order was:: * ansible.cfg (in the current directory) * ANSIBLE_CONFIG (an environment variable) * .ansible.cfg (in the home directory) * /etc/ansible/ansible.cfg Ansible will process the above list and use the first file found. Settings in files are not merged together. .. _getting_the_latest_configuration: Getting the latest configuration ```````````````````````````````` If installing ansible from a package manager, the latest ansible.cfg should be present in /etc/ansible, possibly as a ".rpmnew" file (or other) as appropriate in the case of updates. If you have installed from pip or from source, however, you may want to create this file in order to override default settings in Ansible. You may wish to consult the `ansible.cfg in source control `_ for all of the possible latest values. .. _environmental_configuration: Environmental configuration ``````````````````````````` Ansible also allows configuration of settings via environment variables. If these environment variables are set, they will override any setting loaded from the configuration file. These variables are for brevity not defined here, but look in 'constants.py' in the source tree if you want to use these. They are mostly considered to be a legacy system as compared to the config file, but are equally valid. .. _config_values_by_section: Explanation of values by section ```````````````````````````````` The configuration file is broken up into sections. Most options are in the "general" section but some sections of the file are specific to certain connection types. .. _general_defaults: General defaults ---------------- In the [defaults] section of ansible.cfg, the following settings are tunable: .. _action_plugins: action_plugins ============== Actions are pieces of code in ansible that enable things like module execution, templating, and so forth. This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: action_plugins = /usr/share/ansible_plugins/action_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. .. 
_ansible_managed: ansible_managed =============== Ansible-managed is a string that can be inserted into files written by Ansible's config templating system, if you use a string like:: {{ ansible_managed }} The default configuration shows who modified a file and when:: ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} This is useful to tell users that a file has been placed by Ansible and manual changes are likely to be overwritten. Note that if using this feature, and there is a date in the string, the template will be reported changed each time as the date is updated. .. _ask_pass: ask_pass ======== This controls whether an Ansible playbook should prompt for a password by default. The default behavior is no:: #ask_pass=True If using SSH keys for authentication, you probably do not need to change this setting. .. _ask_sudo_pass: ask_sudo_pass ============= Similar to ask_pass, this controls whether an Ansible playbook should prompt for a sudo password by default when sudoing. The default behavior is also no:: #ask_sudo_pass=True Users on platforms where sudo passwords are enabled should consider changing this setting. .. _callback_plugins: callback_plugins ================ This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: callback_plugins = /usr/share/ansible_plugins/callback_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. .. _connection_plugins: connection_plugins ================== This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: connection_plugins = /usr/share/ansible_plugins/connection_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. .. _deprecation_warnings: deprecation_warnings ==================== .. versionadded:: 1.3 Allows disabling of deprecation warnings in ansible-playbook output:: deprecation_warnings = True Deprecation warnings indicate usage of legacy features that are slated for removal in a future release of Ansible. .. _display_skipped_hosts: display_skipped_hosts ===================== If set to `False`, ansible will not display any status for a task that is skipped. The default behavior is to display skipped tasks:: #display_skipped_hosts=True Note that Ansible will always show the task header for any task, regardless of whether or not the task is skipped. .. _error_on_undefined_vars: error_on_undefined_vars ======================= On by default since Ansible 1.3, this causes ansible to fail steps that reference variable names that are likely mistyped:: #error_on_undefined_vars=True If set to False, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written. .. _executable: executable ========== This indicates the command to use to spawn a shell under a sudo environment. Users may need to change this to /bin/bash in rare instances when sudo is constrained, but in most cases it may be left as is:: #executable = /bin/bash .. _filter_plugins: filter_plugins ============== This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: filter_plugins = /usr/share/ansible_plugins/filter_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. ..
_forks: forks ===== This is the default number of parallel processes to spawn when communicating with remote hosts. Since Ansible 1.3, the fork number is automatically limited to the number of possible hosts, so this is really a limit of how much network and CPU load you think you can handle. Many users may set this to 50, some set it to 500 or more. If you have a large number of hosts, higher values will make actions across all of those hosts complete faster. The default is very, very conservative:: forks=5 hash_behaviour ============== Ansible by default will override variables in specific precedence orders, as described in :doc:`playbooks_variables`. When a variable of higher precedence wins, it will replace the other value. Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged together. This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it, and playbooks in the official examples repos do not use this setting:: #hash_behaviour=replace The valid values are either 'replace' (the default) or 'merge'. .. _hostfile: hostfile ======== This is the default location of the inventory file, script, or directory that Ansible will use to determine what hosts it has available to talk to:: hostfile = /etc/ansible/hosts .. _host_key_checking: host_key_checking ================= As described in :doc:`intro_getting_started`, host key checking is on by default in Ansible 1.3 and later. If you understand the implications and wish to disable it, you may do so here by setting the value to False:: host_key_checking=True .. _jinja2_extensions: jinja2_extensions ================= This is a developer-specific feature that allows enabling additional Jinja2 extensions:: jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n If you do not know what these do, you probably don't need to change this setting :) .. _legacy_playbook_variables: legacy_playbook_variables ========================= Ansible prefers to use Jinja2 syntax '{{ like_this }}' to indicate a variable should be substituted in a particular string. However, older versions of playbooks used a more Perl-style syntax. This syntax was undesirable as it frequently conflicted with bash and was hard to explain to new users when referencing complicated variable hierarchies, so we have standardized on the '{{ jinja2 }}' way. To ensure a string like '$foo' is not inadvertently replaced in a Perl or Bash script template, the old form of templating (which is still enabled as of Ansible 1.4) can be disabled like so:: legacy_playbook_variables = no .. _library: library ======= This is the default location Ansible looks to find modules:: library = /usr/share/ansible Ansible knows how to look in multiple locations if you feed it a colon-separated path, and it will also look for modules in the "./library" directory alongside a playbook. .. _log_path: log_path ======== If present and configured in ansible.cfg, Ansible will log information about executions at the designated location. Be sure the user running Ansible has permissions on the logfile:: log_path=/var/log/ansible.log This behavior is not on by default. Note that, even without this setting, ansible will record the arguments modules are called with in the syslog of managed machines. Password arguments are excluded.
For Enterprise users seeking more detailed logging history, you may be interested in :doc:`tower`. .. _lookup_plugins: lookup_plugins ============== This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: lookup_plugins = /usr/share/ansible_plugins/lookup_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. .. _module_name: module_name =========== This is the default module name (-m) value for /usr/bin/ansible. The default is the 'command' module. Remember the command module doesn't support shell variables, pipes, or quotes, so you might wish to change it to 'shell':: module_name = command .. _nocolor: nocolor ======= By default ansible will try to colorize output to give a better indication of failure and status information. If you dislike this behavior you can turn it off by setting 'nocolor' to 1:: nocolor=0 .. _nocows: nocows ====== By default ansible will take advantage of cowsay if installed to make /usr/bin/ansible-playbook runs more exciting. Why? We believe systems management should be a happy experience. If you do not like the cows, you can disable them by setting 'nocows' to 1:: nocows=0 .. _pattern: pattern ======= This is the default group of hosts to talk to in a playbook if no "hosts:" stanza is supplied. The default is to talk to all hosts. You may wish to change this to protect yourself from surprises:: hosts=* Note that /usr/bin/ansible always requires a host pattern and does not use this setting, only /usr/bin/ansible-playbook. .. _poll_interval: poll_interval ============= For asynchronous tasks in Ansible (covered in :doc:`playbooks_async`), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed:: poll_interval=15 .. _private_key_file: private_key_file ================ If you are using a pem file to authenticate with machines rather than SSH agent or passwords, you can set the default value here to avoid re-specifying ``--private-key`` with every invocation:: private_key_file=/path/to/file.pem .. _remote_port: remote_port =========== This sets the default SSH port on all of your systems, for systems that didn't specify an alternative value in inventory. The default is the standard 22:: remote_port = 22 .. _remote_tmp: remote_tmp ========== Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do so by altering this setting:: remote_tmp = $HOME/.ansible/tmp The default is to use a subdirectory of the user's home directory. Ansible will then choose a random directory name inside this location. .. _remote_user: remote_user =========== This is the default username ansible will connect as for /usr/bin/ansible-playbook. Note that /usr/bin/ansible will always default to the current user:: remote_user = root .. _roles_path: roles_path ========== .. versionadded:: 1.4 The roles path indicates additional directories beyond the 'roles/' subdirectory of a playbook project to search to find Ansible roles.
For instance, if there was a source control repository of common roles and a different repository of playbooks, you might choose to establish a convention to check out roles in /opt/mysite/roles like so:: roles_path = /opt/mysite/roles Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible paths that were searched. .. _sudo_exe: sudo_exe ======== If using an alternative sudo implementation on remote machines, the path to sudo can be replaced here provided the sudo implementation matches the CLI flags of the standard sudo:: sudo_exe=sudo .. _sudo_flags: sudo_flags ========== Additional flags to pass to sudo when engaging sudo support. The default is '-H', which sets the HOME environment variable to that of the target user. In some situations you may wish to add or remove flags, but in general most users will not need to change this setting:: sudo_flags=-H .. _sudo_user: sudo_user ========= This is the default user to sudo to if ``--sudo-user`` is not specified or 'sudo_user' is not specified in an Ansible playbook. The default is the most logical: 'root':: sudo_user=root .. _timeout: timeout ======= This is the default SSH timeout to use on connection attempts:: timeout = 10 .. _transport: transport ========= This is the default transport to use if "-c" is not specified to /usr/bin/ansible or /usr/bin/ansible-playbook. The default is 'smart', which will use 'ssh' (OpenSSH based) if the local operating system is new enough to support ControlPersist technology, and will otherwise use 'paramiko'. Other transport options include 'local', 'chroot', 'jail', and so on. Users should usually leave this setting as 'smart' and let their playbooks choose an alternate setting when needed with the 'connection:' play parameter. .. _vars_plugins: vars_plugins ============ This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different locations:: vars_plugins = /usr/share/ansible_plugins/vars_plugins Most users will not need to use this feature. See :doc:`developing_plugins` for more details. .. _paramiko_settings: Paramiko Specific Settings -------------------------- Paramiko is the default SSH connection implementation on Enterprise Linux 6 or earlier, and is not used by default on other platforms. Settings live under the [paramiko] header. .. _record_host_keys: record_host_keys ================ The default setting of yes will record newly discovered and approved (if host key checking is enabled) hosts in the user's hostfile. This setting may be inefficient for large numbers of hosts, and in those situations, using the ssh transport is definitely recommended instead. Setting it to False will improve performance and is recommended when host key checking is disabled:: record_host_keys=True .. _openssh_settings: OpenSSH Specific Settings ------------------------- Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier). .. _ssh_args: ssh_args ======== If set, this will pass a specific set of options to Ansible rather than Ansible's usual defaults:: ssh_args = -o ControlMaster=auto -o ControlPersist=60s In particular, users may wish to raise the ControlPersist time to improve performance. A value of 30 minutes may be appropriate. ..
_control_path: control_path ============ This is the location to save ControlPath sockets. This defaults to:: control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r On some systems with very long hostnames or very long path names (caused by long user names or deeply nested home directories) this can exceed the character limit on file socket names (108 characters for most platforms). In that case, you may wish to shorten the string to something like the below:: control_path = %(directory)s/%%h-%%r Ansible 1.4 and later will instruct users to run with "-vvvv" in situations where it hits this problem, and with that level of verbosity it is easy to tell when the ControlPath filename is too long. This may be frequently encountered on EC2. .. _scp_if_ssh: scp_if_ssh ========== Occasionally users may be managing a remote system that doesn't have SFTP enabled. If set to True, scp will be used to transfer remote files instead:: scp_if_ssh=False There's really no reason to change this unless problems are encountered, and if they are, there's no real drawback to flipping the switch. Most environments support SFTP by default and this doesn't usually need to be changed. .. _pipelining: pipelining ========== Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server, by executing many ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled; however, when using "sudo:" operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts. By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default on many distros), but is highly recommended if you can enable it, eliminating the need for :doc:`playbooks_acceleration`:: pipelining=False .. _accelerate_settings: Accelerate Mode Settings ------------------------ Under the [accelerate] header, the following settings are tunable for :doc:`playbooks_acceleration`. Acceleration is a useful performance feature to use if you cannot enable :ref:`pipelining` in your environment, but is probably not needed if you can. .. _accelerate_port: accelerate_port =============== .. versionadded:: 1.3 This is the port to use for accelerate mode:: accelerate_port = 5099 .. _accelerate_timeout: accelerate_timeout ================== .. versionadded:: 1.4 This setting controls the timeout for receiving data from a client. If no data is received during this time, the socket connection will be closed. A keepalive packet is sent back to the controller every 15 seconds, so this timeout should not be set lower than 15 (by default, the timeout is 30 seconds):: accelerate_timeout = 30 .. _accelerate_connect_timeout: accelerate_connect_timeout ========================== .. versionadded:: 1.4 This setting controls the timeout for the socket connect call, and should be kept relatively low. The connection to the `accelerate_port` will be attempted 3 times before Ansible will fall back to ssh or paramiko (depending on your default connection setting) to try to start the accelerate daemon remotely. The default setting is 1.0 seconds:: accelerate_connect_timeout = 1.0 Note that this value can be set to less than one second; however, it is probably not a good idea to do so unless you're on a very fast and reliable LAN. If you're connecting to systems over the internet, it may be necessary to increase this timeout.
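Taken together, a minimal [accelerate] section of ansible.cfg using only the default values shown above might look like the following sketch (shown for orientation, not as tuning advice)::

    [accelerate]
    accelerate_port = 5099
    accelerate_timeout = 30
    accelerate_connect_timeout = 1.0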
ansible-1.5.4/docsite/rst/guide_aws.rst0000664000000000000000000003177312316627017016611 0ustar rootrootAmazon Web Services Guide ========================= .. _aws_intro: Introduction ```````````` .. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the EC2 modules and how they work together. There's also an ec2 example in the language_features directory of `the ansible-examples github repository `_ that you may wish to consult. Once complete, there will also be new examples of ec2 in ansible-examples. Ansible contains a number of core modules for interacting with Amazon Web Services (AWS). These also work with Eucalyptus, which is an AWS-compatible private cloud solution. There are other supported cloud types, but this documentation chapter is about AWS API clouds. The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in an AWS context. Requirements for the AWS modules are minimal. All of the modules require and are tested against boto 2.5 or higher. You'll need this Python module installed on the execution host. If you are using Red Hat Enterprise Linux or CentOS, install boto from `EPEL `_: .. code-block:: bash $ yum install python-boto You can also install it via pip if you want. The following steps will often execute outside the host loop, so it makes sense to add localhost to inventory. Ansible may not require this step in the future:: [local] localhost And in your playbook steps we'll typically be using the following pattern for provisioning steps:: - hosts: localhost connection: local gather_facts: False .. _aws_provisioning: Provisioning ```````````` The ec2 module provides the ability to provision instances within EC2. Typically the provisioning task will be performed against your Ansible master server in a play that operates on localhost using the ``local`` connection type. If you are doing an EC2 operation mid-stream inside a regular play operating on remote hosts, you may want to use the ``local_action`` keyword for that particular task. Read :doc:`playbooks_delegation` for more about local actions. .. note:: Authentication with the AWS-related modules is handled by either specifying your access and secret key as ENV variables or passing them as module arguments. .. note:: To talk to specific endpoints, the environment variable EC2_URL can be set. This is useful if using a private cloud like Eucalyptus, exporting the variable as EC2_URL=https://myhost:8773/services/Eucalyptus. This can be set using the 'environment' keyword in Ansible if you like. Here is an example of provisioning a number of instances in ad-hoc mode: .. code-block:: bash # ansible localhost -m ec2 -a "image=ami-6e649707 instance_type=m1.large keypair=mykey group=webservers wait=yes" -c local In a play, this might look like (assuming the parameters are held as vars):: tasks: - name: Provision a set of instances ec2: > keypair={{mykeypair}} group={{security_group}} instance_type={{instance_type}} image={{image}} wait=true count={{number}} register: ec2 By registering the return value, it's then possible to dynamically create a host group consisting of these new instances.
This facilitates performing configuration actions on the hosts immediately in a subsequent task:: - name: Add all instance public IPs to host group add_host: hostname={{ item.public_ip }} groupname=ec2hosts with_items: ec2.instances With the host group now created, a second play in your provision playbook might now have some configuration steps:: - name: Configuration play hosts: ec2hosts user: ec2-user gather_facts: true tasks: - name: Check NTP service service: name=ntpd state=started Rather than include configuration inline, you may also choose to just do it as a task include or a role. The method above ties the configuration of a host with the provisioning step. This isn't always ideal and leads us onto the next section. .. _aws_advanced: Advanced Usage `````````````` .. _aws_host_inventory: Host Inventory ++++++++++++++ Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the ec2 inventory plugin. Even for larger environments, you might have nodes spun up from CloudFormation or other tooling. You don't have to use Ansible to spin up guests. Once these are created and you wish to configure them, the EC2 API can be used to return system grouping with the help of the EC2 inventory script. This script can be used to group resources by their security group or tags. Tagging is highly recommended in EC2 and can provide an easy way to sort between host groups and roles. The inventory script is documented in the :doc:`api` section. You may wish to schedule a regular refresh of the inventory cache to accommodate frequent changes in resources: .. code-block:: bash # ./ec2.py --refresh-cache Put this into a crontab as appropriate to make calls from your Ansible master server to the EC2 API endpoints and gather host information. The aim is to keep the view of hosts as up-to-date as possible, so schedule accordingly. Playbook calls could then also be scheduled to act on the refreshed hosts inventory after each refresh. This approach means that machine images can remain "raw", containing no payload and OS-only. Configuration of the workload is handled entirely by Ansible. Tags ++++ There's a feature in the ec2 inventory script where hosts tagged with certain keys and values automatically appear in certain groups. For instance, if a host is given the "class" tag with the value of "webserver", it will be automatically discoverable via a dynamic group like so:: - hosts: tag_class_webserver tasks: - ping Using this philosophy can be a great way to manage groups dynamically, without having to maintain separate inventory. .. _aws_pull: Pull Configuration ++++++++++++++++++ For some, the delay between refreshing host information and acting on that host information (i.e. running Ansible tasks against the hosts) may be too long. This may be the case in scenarios where EC2 AutoScaling is being used to scale the number of instances as a result of a particular event. Such an event may require that hosts come online and are configured as soon as possible (even a 1 minute delay may be undesirable). It's possible to pre-bake machine images which contain the necessary ansible-pull script and components to pull and run a playbook via git. The machine images could be configured to run ansible-pull upon boot as part of the bootstrapping procedure. Read :ref:`ansible-pull` for more information on pull-mode playbooks. (Various developments around Ansible are also going to make this easier in the near future. Stay tuned!) ..
_aws_autoscale: Autoscaling with Ansible Tower ++++++++++++++++++++++++++++++ :doc:`tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower documentation for more details; click on the Tower link in the sidebar. A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts. .. _aws_use_cases: Use Cases ````````` This section covers some usage examples built around a specific use case. .. _aws_cloudformation_example: Example 1 +++++++++ Example 1: I'm using CloudFormation to deploy a specific infrastructure stack. I'd like to manage configuration of the instances with Ansible. Provision instances with your tool of choice and consider using the inventory plugin to group hosts based on particular tags or security groups. Consider tagging instances you wish to manage with Ansible with a suitably unique key=value tag. .. note:: Ansible also has a cloudformation module you may wish to explore. .. _aws_autoscale_example: Example 2 +++++++++ Example 2: I'm using AutoScaling to dynamically scale up and scale down the number of instances. This means the number of hosts is constantly fluctuating, but I'm letting EC2 automatically handle the provisioning of these instances. I don't want to fully bake a machine image; I'd like to use Ansible to configure the hosts. There are several approaches to this use case. The first is to use the inventory plugin to regularly refresh host information and then target hosts based on the latest inventory data. The second is to use ansible-pull triggered by a user-data script (specified in the launch configuration) which would then mean that each instance would fetch Ansible and the latest playbook from a git repository and run locally to configure itself. You could also use the Tower callback feature. .. _aws_builds: Example 3 +++++++++ Example 3: I don't want to use Ansible to manage my instances but I'd like to consider using Ansible to build my fully-baked machine images. There's nothing to stop you from doing this. If you like working with Ansible's playbook format, you can write a playbook to create an image: create an image file with dd, give it a filesystem, then install packages, and finally chroot into it for further configuration. Ansible has the 'chroot' plugin for this purpose; just add the following to your inventory file:: /chroot/path ansible_connection=chroot And in your playbook:: hosts: /chroot/path Example 4 +++++++++ How would I create a new ec2 instance, provision it and then destroy it all in the same play? .. code-block:: yaml # Use the ec2 module to create a new host and then add # it to a special "ec2hosts" group.
- hosts: localhost connection: local gather_facts: False vars: ec2_access_key: "--REMOVED--" ec2_secret_key: "--REMOVED--" keypair: "mykeyname" instance_type: "t1.micro" image: "ami-d03ea1e0" group: "mysecuritygroup" region: "us-west-2" zone: "us-west-2c" tasks: - name: make one instance ec2: image={{ image }} instance_type={{ instance_type }} aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} keypair={{ keypair }} instance_tags='{"foo":"bar"}' region={{ region }} group={{ group }} wait=true register: ec2_info - debug: var=ec2_info - debug: var=item with_items: ec2_info.instance_ids - add_host: hostname={{ item.public_ip }} groupname=ec2hosts with_items: ec2_info.instances - name: wait for instances to listen on port:22 wait_for: state=started host={{ item.public_dns_name }} port=22 with_items: ec2_info.instances # Connect to the node and gather facts, # including the instance-id. These facts # are added to inventory hostvars for the # duration of the playbook's execution # Typical "provisioning" tasks would go in # this playbook. - hosts: ec2hosts gather_facts: True user: ec2-user sudo: True tasks: # fetch instance data from the metadata servers in ec2 - ec2_facts: # show all known facts for this host - debug: var=hostvars[inventory_hostname] # just show the instance-id - debug: msg="{{ hostvars[inventory_hostname]['ansible_ec2_instance-id'] }}" # Using the instance-id, call the ec2 module # locally to remove the instance by declaring # its state is "absent" - hosts: ec2hosts gather_facts: True connection: local vars: ec2_access_key: "--REMOVED--" ec2_secret_key: "--REMOVED--" region: "us-west-2" tasks: - name: destroy all instances ec2: state='absent' aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} region={{ region }} instance_ids={{ item }} wait=true with_items: hostvars[inventory_hostname]['ansible_ec2_instance-id'] .. note:: more examples of this are pending. You may also be interested in the ec2_ami module for taking AMIs of running instances. .. _aws_pending: Pending Information ``````````````````` In the future look here for more topics. .. seealso:: :doc:`modules` All the documentation for Ansible modules :doc:`playbooks` An introduction to playbooks :doc:`playbooks_delegation` Delegation, useful for working with load balancers, clouds, and locally executed steps. `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_lookups.rst0000664000000000000000000001312712316627017020422 0ustar rootrootUsing Lookups ============= Lookup plugins allow access to data in Ansible from outside sources. This can include the filesystem but also external datastores. These values are then made available using the standard templating system in Ansible, and are typically used to load variables or templates with information from those systems. .. note:: This is considered an advanced feature, and many users will probably not rely on these features. .. contents:: Topics .. _getting_file_contents: Intro to Lookups: Getting File Contents ``````````````````````````````````````` The file lookup is the most basic lookup type. Contents can be read off the filesystem as follows:: - hosts: all vars: contents: "{{ lookup('file', '/etc/foo.txt') }}" tasks: - debug: msg="the value of foo.txt is {{ contents }}" .. _password_lookup: The Password Lookup ``````````````````` ..
note:: A great alternative to the password lookup plugin, if you don't need to generate random passwords on a per-host basis, would be to use :doc:`playbooks_vault`. Read the documentation there and consider using it first; it will be more desirable for most applications. ``password`` generates a random plaintext password and stores it in a file at a given filepath. (Docs about crypted save modes are pending) If the file already exists, it will retrieve its contents, behaving just like with_file. Usage of variables like "{{ inventory_hostname }}" in the filepath can be used to set up random passwords per host (which simplifies password management in 'host_vars' variables). Generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9 and punctuation (". , : - _"). The default length of a generated password is 20 characters. This length can be changed by passing an extra parameter:: --- - hosts: all tasks: # create a mysql user with a random password: - mysql_user: name={{ client }} password="{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}" priv={{ client }}_{{ tier }}_{{ role }}.*:ALL (...) .. note:: If the file already exists, no data will be written to it. If the file has contents, those contents will be read in as the password. Empty files cause the password to return as an empty string. Starting in version 1.4, password accepts a "chars" parameter to allow defining a custom character set in the generated passwords. It accepts a comma-separated list of names that are either string module attributes (ascii_letters, digits, etc.) or are used literally:: --- - hosts: all tasks: # create a mysql user with a random password using only ascii letters: - mysql_user: name={{ client }} password="{{ lookup('password', '/tmp/passwordfile chars=ascii') }}" priv={{ client }}_{{ tier }}_{{ role }}.*:ALL # create a mysql user with a random password using only digits: - mysql_user: name={{ client }} password="{{ lookup('password', '/tmp/passwordfile chars=digits') }}" priv={{ client }}_{{ tier }}_{{ role }}.*:ALL # create a mysql user with a random password using many different char sets: - mysql_user: name={{ client }} password="{{ lookup('password', '/tmp/passwordfile chars=ascii,numbers,digits,hexdigits,punctuation') }}" priv={{ client }}_{{ tier }}_{{ role }}.*:ALL (...) To include a comma, use two commas ',,' somewhere, preferably at the end. Quotes and double quotes are not supported. .. _more_lookups: More Lookups ```````````` .. note:: This feature is very infrequently used in Ansible. You may wish to skip this section. .. versionadded:: 0.8 Various *lookup plugins* allow additional ways to iterate over data. In :doc:`Loops ` you will learn how to use them to walk over collections of numerous types. However, they can also be used to pull in data from remote sources, such as shell commands or even key value stores. This section will cover lookup plugins in this capacity.
Here are some examples:: --- - hosts: all tasks: - debug: msg="{{ lookup('env','HOME') }} is an environment variable" - debug: msg="{{ item }} is a line from the result of this command" with_lines: - cat /etc/motd - debug: msg="{{ lookup('pipe','date') }} is the raw result of running this command" - debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,somekey') }} is the value in Redis for somekey" - debug: msg="{{ lookup('dnstxt', 'example.com') }} is a DNS TXT record for example.com" - debug: msg="{{ lookup('template', './some_template.j2') }} is a value from evaluation of this template" As an alternative, you can also assign lookup plugins to variables or use them elsewhere. These macros are evaluated each time they are used in a task (or template):: vars: motd_value: "{{ lookup('file', '/etc/motd') }}" tasks: - debug: msg="motd value is {{ motd_value }}" .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_conditionals` Conditional statements in playbooks :doc:`playbooks_variables` All about variables :doc:`playbooks_loops` Looping in playbooks `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/guru.rst0000664000000000000000000000157512316627017015625 0ustar rootrootAnsible Guru ```````````` While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a bit more. `Ansible Guru `_ is an offering from Ansible, Inc. that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and some free T-shirts, though you shouldn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert. For those interested, click through the link above. You can sign up in minutes! For users looking for more hands-on help, we also have some more information on our `Services page `_, and support is also included with :doc:`tower`. ansible-1.5.4/docsite/rst/community.rst0000664000000000000000000000104312316627017016661 0ustar rootrootCommunity Information ````````````````````` Ansible is an open source project designed to bring together developers and administrators of all kinds to collaborate on building IT automation solutions that work well for them. Should you wish to get more involved -- whether in terms of just asking a question, helping other users, introducing new people to Ansible, or helping with the software or documentation, we welcome your contributions to the project. `Ways to interact `_ ansible-1.5.4/docsite/rst/glossary.rst0000664000000000000000000005550612316627017016505 0ustar rootrootGlossary ======== The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation. Consult the documentation home page for the full documentation and to see the terms in context, but this should be a good resource to check your knowledge of Ansible's components and understand how they fit together. It's something you might wish to read for review or when a term comes up on the mailing list. Action ++++++ An action is a part of a task that specifies which of the modules to run and the arguments to pass to that module. Each task can have only one action, but it may also have other parameters.
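As an illustration, in the following task (a minimal sketch reusing the yum example from the delegation chapter above), the action is the 'yum' module together with its key=value arguments::

    - name: install the acme web stack
      yum: name=acme-web-stack state=latest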
Ad Hoc ++++++ Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad-hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad-hoc can be accomplished by writing a playbook, and playbooks can also glue lots of other operations together. Async +++++ Refers to a task that is configured to run in the background rather than waiting for completion. If you have a long process that would run longer than the SSH timeout, it would make sense to launch that task in async mode. Async modes can poll for completion every so many seconds, or can be configured to "fire and forget" in which case Ansible will not even check on the task again, it will just kick it off and proceed to future steps. Async modes work with both /usr/bin/ansible and /usr/bin/ansible-playbook. Callback Plugin +++++++++++++++ Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied examples in the GitHub project perform custom logging, send email, or even play sound effects. Check Mode ++++++++++ Refers to running Ansible with the ``--check`` option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called "dry run" modes in other systems, though the user should be warned that this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what might happen, but it is not a substitute for a good staging environment. Connection Type, Connection Plugin ++++++++++++++++++++++++++++++++++ By default, Ansible talks to remote machines through pluggable libraries. Ansible supports native OpenSSH ('ssh'), or a Python implementation called 'paramiko'. OpenSSH is preferred if you are using a recent version, and also enables some features like Kerberos and jump hosts. This is covered in the getting started section. There are also other connection types like 'accelerate' mode, which must be bootstrapped over one of the SSH-based connection types but is very fast, and local mode, which acts on the local system. Users can also write their own connection plugins. Conditionals ++++++++++++ A conditional is an expression that evaluates to true or false that decides whether a given task will be executed on a given machine or not. Ansible's conditionals are powered by the 'when' statement, and are discussed in the playbook documentation. Diff Mode +++++++++ A ``--diff`` flag can be passed to Ansible to show how template files change when they are overwritten, or how they might change when used with ``--check`` mode. These diffs come out in unified diff format. Facts +++++ Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered by Ansible when running plays by executing the internal 'setup' module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. 
For the convenience of users who are switching from other configuration management systems, the fact module will also pull in facts from the 'ohai' and 'facter' tools if they are installed, which are fact libraries from Chef and Puppet, respectively. Filter Plugin +++++++++++++ A filter plugin is something that most users will never need to understand. These allow for the creation of new Jinja2 filters, which are more or less only of use to people who know what Jinja2 filters are. If you need them, you can learn how to write them in the API docs section. Forks +++++ Ansible talks to remote nodes in parallel and the level of parallelism can be set either by passing ``--forks``, or editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism. Gather Facts (Boolean) ++++++++++++++++++++++ Facts are mentioned above. Sometimes when running a multi-play playbook, it is desirable to have some plays that don't bother with fact computation if they aren't going to need to utilize any of these values. Setting `gather_facts: False` on a playbook allows this implicit fact gathering to be skipped. Globbing ++++++++ Globbing is a way to select lots of hosts based on wildcards, rather than the name of the host specifically, or the name of the group they are in. For instance, it is possible to select "www*" to match all hosts starting with "www". This concept is pulled directly from Func, one of Michael's earlier projects. In addition to basic globbing, various set operations are also possible, such as 'hosts in this group and not in another group', and so on. Group +++++ A group consists of several hosts assigned to a pool that can be conveniently targeted together, and also given variables that they share in common. Group Vars ++++++++++ The "group_vars/" files are files that live in a directory alongside an inventory file, with an optional filename named after each group. This is a convenient place to put variables that will be provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook. Handlers ++++++++ Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a "notify" directive and also indicates that it changed something. For example, if a config file is changed then the task referencing the config file templating operation may notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common usage. Host ++++ A host is simply a remote machine that Ansible manages. They can have individual variables assigned to them, and can also be organized in groups. All hosts have a name they can be reached at (which is either an IP address or a domain name) and optionally a port number if they are not to be accessed on the default SSH port. Host Specifier ++++++++++++++ Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems. This "hosts:" directive in each play is often called the hosts specifier. It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly not in another. 
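A hypothetical host specifier combining these set operations might look like this (the group names are placeholders)::

    - hosts: webservers:&staging:!decommissioned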
Host Vars +++++++++ Just like "Group Vars", a directory alongside the inventory file named "host_vars/" can contain a file named after each hostname in the inventory file, in YAML format. This provides a convenient place to assign variables to the host without having to embed them in the inventory file. The Host Vars file can also be used to define complex data structures that can't be represented in the inventory file. Lazy Evaluation +++++++++++++++ In general, Ansible evaluates any variables in playbook content at the last possible second, which means that if you define a data structure that data structure itself can define variable values within it, and everything "just works" as you would expect. This also means variable strings can include other variables inside of those strings. Lookup Plugin +++++++++++++ A lookup plugin is a way to get data into Ansible from the outside world. These are how such things as "with_items", a basic looping plugin, are implemented, but there are also lookup plugins like "with_file" which loads data from a file, and even ones for querying environment variables, DNS text records, or key value stores. Lookup plugins can also be accessed in templates, e.g., ``{{ lookup('file','/path/to/file') }}``. Multi-Tier ++++++++++ The concept that IT systems are not managed one system at a time, but by interactions between multiple systems, and groups of systems, in well defined orders. For instance, a web server may need to be updated before a database server, and pieces on the web server may need to be updated after *THAT* database server, and various load balancers and monitoring servers may need to be contacted. Ansible models entire IT topologies and workflows rather than looking at configuration from a "one system at a time" perspective. Idempotency +++++++++++ The concept that change commands should only be applied when they need to be applied, and that it is better to describe the desired state of a system than the process of how to get to that state. As an analogy, the path from North Carolina in the United States to California involves driving a very long way West, but if I were instead in Anchorage, Alaska, driving a long way west is no longer the right way to get to California. Ansible's Resources like you to say "put me in California" and then decide how to get there. If you were already in California, nothing needs to happen, and it will let you know it didn't need to change anything. Includes ++++++++ The idea that playbook files (which are nothing more than lists of plays) can include other lists of plays, and task lists can externalize lists of tasks in other files, and similarly with handlers. Includes can be parameterized, which means that the loaded file can pass variables. For instance, an included play for setting up a WordPress blog may take a parameter called "user" and that play could be included more than once to create a blog for both "alice" and "bob". Inventory +++++++++ A file (by default, Ansible uses a simple INI format) that describes Hosts and Groups in Ansible. Inventory can also be provided via an "Inventory Script" (sometimes called an "External Inventory Script"). Inventory Script ++++++++++++++++ A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable information from an external resource -- whether that be a SQL database, a CMDB solution, or something like LDAP. 
This concept was adapted from Puppet (where it is called an "External Nodes Classifier") and works more or less exactly the same way. Jinja2 ++++++ Jinja2 is the preferred templating language of Ansible's template module. It is a very simple Python template language that is generally readable and easy to write. JSON ++++ Ansible uses JSON for return data from remote modules. This allows modules to be written in any language, not just Python. Library +++++++ A collection of modules made available to /usr/bin/ansible or an Ansible playbook. Limit Groups ++++++++++++ By passing ``--limit somegroup`` to ansible or ansible-playbook, the commands can be limited to a subset of hosts. For instance, this can be used to run a playbook that normally targets an entire set of servers to one particular server. Local Connection ++++++++++++++++ By using "connection: local" in a playbook, or passing "-c local" to /usr/bin/ansible, this indicates that we are managing the local host and not a remote machine. Local Action ++++++++++++ A local_action directive in a playbook targeting remote machines means that the given step will actually occur on the local machine, but that the variable '{{ ansible_hostname }}' can be passed in to reference the remote hostname being referred to in that step. This can be used to trigger, for example, an rsync operation. Loops +++++ Generally, Ansible is not a programming language. It prefers to be more declarative, though various constructs like "with_items" allow a particular task to be repeated for multiple items in a list. Certain modules, like yum and apt, are actually optimized for this, and can install all packages given in those lists within a single transaction, dramatically speeding up total time to configuration. Modules +++++++ Modules are the units of work that Ansible ships out to remote machines. Modules are kicked off by either /usr/bin/ansible or /usr/bin/ansible-playbook (where multiple tasks use lots of different modules in conjunction). Modules can be implemented in any language, including Perl, Bash, or Ruby -- but can leverage some useful communal library code if written in Python. Modules just have to return JSON or simple key=value pairs. Once modules are executed on remote machines, they are removed, so no long running daemons are used. Ansible refers to the collection of available modules as a 'library'. Notify ++++++ The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they are listed, not in the order that they are notified. Orchestration +++++++++++++ Many software automation systems use this word to mean different things. Ansible uses it as a conductor would conduct an orchestra. A datacenter or cloud architecture is full of many systems, playing many parts -- web servers, database servers, maybe load balancers, monitoring systems, continuous integration systems, etc. In performing any process, it is necessary to touch systems in particular orders, often to simulate rolling updates or to deploy software correctly. Some system may perform some steps, then others, then previous systems already processed may need to perform more steps. Along the way, emails may need to be sent or web services contacted. Ansible orchestration is all about modeling that kind of process. paramiko ++++++++ By default, Ansible manages machines over SSH. 
The library that Ansible uses by default to do this is a Python-powered library called paramiko. The paramiko library is generally fast and easy to manage, though users desiring Kerberos or Jump Host support may wish to switch to a native SSH binary such as OpenSSH by specifying the connection type in their playbook, or using the "-c ssh" flag. Playbooks +++++++++ Playbooks are the language by which Ansible orchestrates, configures, administers, or deploys systems. They are called playbooks partially because it's a sports analogy, and it's supposed to be fun using them. They aren't workbooks :) Plays +++++ A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems will perform. There can be one or many plays in a playbook. Pull Mode +++++++++ By default, Ansible runs in push mode, which allows it very fine-grained control over when it talks to each system. Pull mode is provided for when you would rather have nodes check in every N minutes on a particular schedule. It uses a program called ansible-pull and can also be set up (or reconfigured) using a push-mode playbook. Most Ansible users use push mode, but pull mode is included for variety and the sake of having choices. ansible-pull works by checking configuration orders out of git on a crontab and then managing the machine locally, using the local connection plugin. Push Mode +++++++++ Push mode is the default mode of Ansible. In fact, it's not really a mode at all -- it's just how Ansible works when you aren't thinking about it. Push mode allows Ansible to be fine-grained and conduct nodes through complex orchestration processes without waiting for them to check in. Register Variable +++++++++++++++++ The result of running any task in Ansible can be stored in a variable for use in a template or a conditional statement. The keyword used to define the variable is called 'register', taking its name from the idea of registers in assembly programming (though Ansible will never feel like assembly programming). There are an infinite number of variable names you can use for registration. Resource Model ++++++++++++++ Ansible modules work in terms of resources. For instance, the file module will select a particular file and ensure that the attributes of that resource match a particular model. As an example, we might wish to change the owner of /etc/motd to 'root' if it is not already set to root, or set its mode to '0644' if it is not already set to '0644'. The resource models are 'idempotent' meaning change commands are not run unless needed, and Ansible will bring the system back to a desired state regardless of the actual state -- rather than you having to tell it how to get to the state. Roles +++++ Roles are units of organization in Ansible. Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they should implement a specific behavior. A role may include applying certain variable values, certain tasks, and certain handlers -- or just one or more of these things. Because of the file structure associated with a role, roles become redistributable units that allow you to share behavior among playbooks -- or even with other users. Rolling Update ++++++++++++++ The act of addressing a number of nodes in a group N at a time to avoid updating them all at once and bringing the system offline. 
For instance, in a web topology of 500 nodes handling very large volume, it may be reasonable to update 10 or 20 machines at a time, moving on to the next 10 or 20 when done. The "serial:" keyword in an Ansible playbook controls the size of the rolling update pool. The default is to address the batch size all at once, so this is something that you must opt-in to. OS configuration (such as making sure config files are correct) does not typically have to use the rolling update model, but can do so if desired. Runner ++++++ A core software component of Ansible that is the power behind /usr/bin/ansible directly -- and corresponds to the invocation of each task in a playbook. The Runner is something Ansible developers may talk about, but it's not really user land vocabulary. Serial ++++++ See "Rolling Update". Sudo ++++ Ansible does not require root logins, and since it's daemonless, definitely does not require root level daemons (which can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a sudo command, and can work with both password-less and password-based sudo. Some operations that don't normally work with sudo (like scp file transfer) can be achieved with Ansible's copy, template, and fetch modules while running in sudo mode. SSH (Native) ++++++++++++ Native OpenSSH as an Ansible transport is specified with "-c ssh" (or a config file, or a directive in the playbook) and can be useful if wanting to login via Kerberized SSH or using SSH jump hosts, etc. In 1.2.1, 'ssh' will be used by default if the OpenSSH binary on the control machine is sufficiently new. Previously, Ansible selected 'paramiko' as a default. Using a client that supports ControlMaster and ControlPersist is recommended for maximum performance -- if you don't have that and don't need Kerberos, jump hosts, or other features, paramiko is a good choice. Ansible will warn you if it doesn't detect ControlMaster/ControlPersist capability. Tags ++++ Ansible allows tagging resources in a playbook with arbitrary keywords, and then running only the parts of the playbook that correspond to those keywords. For instance, it is possible to have an entire OS configuration, and have certain steps labeled "ntp", and then run just the "ntp" steps to reconfigure the time server information on a remote host. Tasks +++++ Playbooks exist to run tasks. Tasks combine an action (a module and its arguments) with a name and optionally some other keywords (like looping directives). Handlers are also tasks, but they are a special kind of task that do not run unless they are notified by name when a task reports an underlying change on a remote system. Templates +++++++++ Ansible can easily transfer files to remote systems, but often it is desirable to substitute variables in other files. Variables may come from the inventory file, Host Vars, Group Vars, or Facts. Templates use the Jinja2 template engine and can also include logical constructs like loops and if statements. Transport +++++++++ Ansible uses "Connection Plugins" to define types of available transports. These are simply how Ansible will reach out to managed systems. Transports included are paramiko, SSH (using OpenSSH), and local. When ++++ An optional conditional statement attached to a task that is used to determine if the task should run or not. If the expression following the "when:" keyword evaluates to false, the task will be ignored. 
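As a brief sketch, a conditional task might look like the following (ansible_os_family is a standard fact; the command itself is just a placeholder)::

    - command: /sbin/shutdown -t now
      when: ansible_os_family == "Debian"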
Van Halen +++++++++ For no particular reason, other than the fact that Michael really likes them, all Ansible releases are codenamed after Van Halen songs. There is no preference given to David Lee Roth vs. Sammy Hagar-era songs, and instrumentals are also allowed. It is unlikely that there will ever be a Jump release, but a Van Halen III codename release is possible. You never know. Vars (Variables) ++++++++++++++++ As opposed to Facts, variables are names of values (they can be simple scalar values -- integers, booleans, strings) or complex ones (dictionaries/hashes, lists) that can be used in templates and playbooks. They are declared things, not things that are inferred from the remote system's current state or nature (which is what Facts are). YAML ++++ Ansible does not want to force people to write programming language code to automate infrastructure, so Ansible uses YAML to define playbook configuration languages and also variable files. YAML is nice because it has a minimum of syntax and is very clean and easy for people to skim. It is a good data format for configuration files and humans, but also machine readable. Ansible's usage of YAML stemmed from Michael's first use of it inside of Cobbler around 2006. YAML is fairly popular in the dynamic language community and the format has libraries available for serialization in many different languages (Python, Perl, Ruby, etc.). .. seealso:: :doc:`faq` Frequently asked questions :doc:`playbooks` An introduction to playbooks :doc:`playbooks_best_practices` Best practices advice `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_special_topics.rst0000664000000000000000000000117012316627017021722 0ustar rootrootPlaybooks: Special Topics ````````````````````````` Here are some playbook features that not everyone may need to learn, but can be quite useful for particular applications. Browsing these topics is recommended as you may find some useful tips here, but feel free to learn the basics of Ansible first and adopt these only if they seem relevant or useful to your environment. .. toctree:: :maxdepth: 1 playbooks_acceleration playbooks_async playbooks_checkmode playbooks_delegation playbooks_environment playbooks_error_handling playbooks_lookups playbooks_prompts playbooks_tags playbooks_vault ansible-1.5.4/docsite/rst/developing_modules.rst0000664000000000000000000004133512316627017020531 0ustar rootrootDeveloping Modules ================== .. contents:: Topics Ansible modules are reusable units of magic that can be used by the Ansible API, or by the `ansible` or `ansible-playbook` programs. See :doc:`modules` for a list of various ones developed in core. Modules can be written in any language and are found in the path specified by `ANSIBLE_LIBRARY` or the ``--module-path`` command line option. Should you develop an interesting Ansible module, consider sending a pull request to the `github project `_ to see about getting your module included in the core project. .. _module_dev_tutorial: Tutorial ```````` Let's build a very-basic module to get and set the system time. For starters, let's build a module that just outputs the current time. We are going to use Python here but any language is possible. Only File I/O and outputting to standard out are required. So, bash, C++, clojure, Python, Ruby, whatever you want is fine.
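To illustrate just how little is required, here is what such a trivial module could look like in bash -- a sketch only, not part of the tutorial that follows -- since a module merely has to print JSON on standard out::

    #!/bin/bash
    # a bash module needs no framework at all; it just emits JSON
    echo "{\"time\" : \"$(date)\"}"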
Now, Python Ansible modules contain some extremely powerful shortcuts (that all the core modules use) but first we are going to build a module the very hard way. The reason we do this is because modules written in any language OTHER than Python are going to have to do exactly this. We'll show the easy way later. So, here's an example. You would never really need to build a module to set the system time; the 'command' module could already be used to do this. Though we're going to make one. Reading the modules that come with ansible (linked above) is a great way to learn how to write modules. Keep in mind, though, that some modules in ansible's source tree are internalisms, so look at `service` or `yum`, and don't stare too closely into things like `async_wrapper` or you'll turn to stone. Nobody ever executes async_wrapper directly. Ok, let's get going with an example. We'll use Python. For starters, save this as a file named `time`:: #!/usr/bin/python import datetime import json date = str(datetime.datetime.now()) print json.dumps({ "time" : date }) .. _module_testing: Testing Modules ``````````````` There's a useful test script in the source checkout for ansible:: git clone git@github.com:ansible/ansible.git source ansible/hacking/env-setup chmod +x ansible/hacking/test-module Let's run the script you just wrote with that:: ansible/hacking/test-module -m ./time You should see output that looks something like this:: {u'time': u'2012-03-14 22:13:48.539183'} If you did not, you might have a typo in your module, so recheck it and try again. .. _reading_input: Reading Input ````````````` Let's modify the module to allow setting the current time. We'll do this by seeing if a key=value pair in the form `time=` is passed in to the module. Ansible internally saves arguments to an arguments file. So we must read the file and parse it. The arguments file is just a string, so any form of arguments are legal. Here we'll do some basic parsing to treat the input as key=value. The example usage we are trying to achieve to set the time is:: time time="March 14 22:10" If no time parameter is set, we'll just leave the time as is and return the current time. .. note:: This is obviously an unrealistic idea for a module. You'd most likely just use the shell module. However, it probably makes a decent tutorial. Let's look at the code. Read the comments as we'll explain as we go. Note that this is highly verbose because it's intended as an educational example. You can write modules a lot shorter than this:: #!/usr/bin/python # import some python modules that we'll use. These are all # available in Python's core import datetime import sys import json import os import shlex # read the argument string from the arguments file args_file = sys.argv[1] args_data = file(args_file).read() # for this module, we're going to do key=value style arguments # this is up to each module to decide what it wants, but all # core modules besides 'command' and 'shell' take key=value # so this is highly recommended arguments = shlex.split(args_data) for arg in arguments: # ignore any arguments without an equals in it if arg.find("=") != -1: (key, value) = arg.split("=") # if setting the time, the key 'time' # will contain the value we want to set the time to if key == "time": # now we'll effect the change. Many modules # will strive to be 'idempotent', meaning they # will only make changes when the desired state # expressed to the module does not match # the current state.
Look at 'service' # or 'yum' in the main git tree for an example # of how that might look. rc = os.system("date -s \"%s\"" % value) # always handle all possible errors # # when returning a failure, include 'failed' # in the return data, and explain the failure # in 'msg'. Both of these conventions are # required; however, additional keys and values # can be added. if rc != 0: print json.dumps({ "failed" : True, "msg" : "failed setting the time" }) sys.exit(1) # when things do not fail, we do not # have any restrictions on what kinds of # data are returned, but it's always a # good idea to include whether or not # a change was made, as that will allow # notifiers to be used in playbooks. date = str(datetime.datetime.now()) print json.dumps({ "time" : date, "changed" : True }) sys.exit(0) # if no parameters are sent, the module may or # may not error out; this one will just # return the time date = str(datetime.datetime.now()) print json.dumps({ "time" : date }) Let's test that module:: ansible/hacking/test-module -m ./time -a time=\"March 14 12:23\" This should return something like:: {"changed": true, "time": "2012-03-14 12:23:00.000307"} .. _module_provided_facts: Module Provided 'Facts' ``````````````````````` The 'setup' module that ships with Ansible provides many variables about a system that can be used in playbooks and templates. However, it's possible to also add your own facts without modifying the system module. To do this, just have the module return an `ansible_facts` key, like so, along with other return data:: { "changed" : True, "rc" : 5, "ansible_facts" : { "leptons" : 5000, "colors" : { "red" : "FF0000", "white" : "FFFFFF" } } } These 'facts' will be available to all statements called after that module (but not before) in the playbook. A good idea might be to make a module called 'site_facts' and always call it at the top of each playbook, though we're always open to improving the selection of core facts in Ansible as well. .. _common_module_boilerplate: Common Module Boilerplate ````````````````````````` As mentioned, if you are writing a module in Python, there are some very powerful shortcuts you can use. Modules are still transferred as one file, but an arguments file is no longer needed, so these are not only shorter in terms of code, they are actually FASTER in terms of execution time. Rather than mention these here, the best way to learn is to read some of the `source of the modules `_ that come with Ansible. The 'group' and 'user' modules are reasonably non-trivial and showcase what this looks like. Key parts include always ending the module file with:: from ansible.module_utils.basic import * main() And instantiating the module class like:: module = AnsibleModule( argument_spec = dict( state = dict(default='present', choices=['present', 'absent']), name = dict(required=True), enabled = dict(required=True, choices=BOOLEANS), something = dict(aliases=['whatever']) ) ) The AnsibleModule provides lots of common code for handling returns, parses your arguments for you, and allows you to check inputs. Successful returns are made like this:: module.exit_json(changed=True, something_else=12345) And failures are just as simple (where 'msg' is a required parameter to explain the error):: module.fail_json(msg="Something fatal happened") There are also other useful functions in the module class, such as module.md5(path). See lib/ansible/module_common.py in the source checkout for implementation details.
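Putting those pieces together, a complete module built on this common code might look like the following minimal sketch (the module's 'name' argument and greeting are hypothetical, purely for illustration)::

    #!/usr/bin/python

    def main():
        # parse arguments and handle the common plumbing
        module = AnsibleModule(
            argument_spec = dict(
                name = dict(required=True),
            )
        )
        # a real module would inspect or change system state here,
        # reporting changed=True only when something was modified
        module.exit_json(changed=False, msg="hello %s" % module.params['name'])

    # as described above, the import goes at the very end of the file
    # so the common code can be inserted when ansible transfers the module
    from ansible.module_utils.basic import *
    main()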
Again, modules developed this way are best tested with the hacking/test-module script in the git source checkout. Because of the magic involved, this is really the only way the scripts can function outside of Ansible. If submitting a module to ansible's core code, which we encourage, use of the AnsibleModule class is required. .. _developing_for_check_mode: Check Mode `````````` .. versionadded:: 1.1 Modules may optionally support check mode. If the user runs Ansible in check mode, the module should try to predict whether changes will occur. For your module to support check mode, you must pass ``supports_check_mode=True`` when instantiating the AnsibleModule object. The AnsibleModule.check_mode attribute will evaluate to True when check mode is enabled. For example:: module = AnsibleModule( argument_spec = dict(...), supports_check_mode=True ) if module.check_mode: # Check if any changes would be made but don't actually make those changes module.exit_json(changed=check_if_system_state_would_be_changed()) Remember that, as module developer, you are responsible for ensuring that no system state is altered when the user enables check mode. If your module does not support check mode, when the user runs Ansible in check mode, your module will simply be skipped. .. _module_dev_pitfalls: Common Pitfalls ``````````````` You should also never do this in a module:: print "some status message" This is because the output is supposed to be valid JSON. Except that's not quite true, but we'll get to that later. Modules must not output anything on standard error, because the system will merge standard out with standard error and prevent the JSON from parsing. Capturing standard error and returning it as a variable in the JSON on standard out is fine, and is, in fact, how the command module is implemented. If a module returns stderr or otherwise fails to produce valid JSON, the actual output will still be shown in Ansible, but the command will not succeed. Always use the hacking/test-module script when developing modules and it will warn you about these kinds of things. .. _module_dev_conventions: Conventions/Recommendations ``````````````````````````` As a reminder from the example code above, here are some basic conventions and guidelines: * If the module is addressing an object, the parameter for that object should be called 'name' whenever possible, or accept 'name' as an alias. * If you have a company module that returns facts specific to your installations, a good name for this module is `site_facts`. * Modules accepting boolean status should generally accept 'yes', 'no', 'true', 'false', or anything else a user may likely throw at them. The AnsibleModule common code supports this with "choices=BOOLEANS" and a module.boolean(value) casting function. * Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails. * Modules must be self-contained in one file to be auto-transferred by ansible. * If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped into /usr/share/ansible. This is entirely optional and up to you. * Modules should return JSON or key=value results all on one line. JSON is best if you can do JSON. All return types must be hashes (dictionaries) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary.
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'. * Return codes from modules are not actually significant, but continue on with 0=success and non-zero=failure for reasons of future proofing. * As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form. .. _module_dev_shorthand: Shorthand Vs JSON ````````````````` To make it easier to write modules in bash and in cases where a JSON module might not be available, it is acceptable for a module to return key=value output all on one line, like this. The Ansible parser will know what to do:: somekey=1 somevalue=2 rc=3 favcolor=red If you're writing a module in Python or Ruby or whatever, though, returning JSON is probably the simplest way to go. .. _module_documenting: Documenting Your Module ``````````````````````` All modules included in the CORE distribution must have a ``DOCUMENTATION`` string. This string MUST be a valid YAML document which conforms to the schema defined below. You may find it easier to start writing your ``DOCUMENTATION`` string in an editor with YAML syntax highlighting before you include it in your Python file. .. _module_doc_example: Example +++++++ See an example documentation string in the checkout under `examples/DOCUMENTATION.yml `_. Include it in your module file like this:: #!/usr/bin/env python # Copyright header.... DOCUMENTATION = ''' --- module: modulename short_description: This is a sentence describing the module # ... snip ... ''' The ``description`` and ``notes`` fields support formatting with some special macros. These formatting functions are ``U()``, ``M()``, ``I()``, and ``C()`` for URL, module, italic, and constant-width respectively. It is suggested to use ``C()`` for file and option names, and ``I()`` when referencing parameters; module names should be specified as ``M(module)``. Examples (which typically contain colons, quotes, etc.) are difficult to format with YAML, so these must be written in plain text in an ``EXAMPLES`` string within the module like this:: EXAMPLES = ''' - action: modulename opt1=arg1 opt2=arg2 ''' The EXAMPLES section, just like the documentation section, is required in all module pull requests for new modules. .. _module_dev_testing: Building & Testing ++++++++++++++++++ Put your completed module file into the 'library' directory and then run the command: ``make webdocs``. The new 'modules.html' file will be built and appear in the 'docsite/' directory. .. tip:: If you're having a problem with the syntax of your YAML you can validate it on the `YAML Lint `_ website. .. tip:: You can use ANSIBLE_KEEP_REMOTE_FILES=1 to prevent ansible from deleting the remote files so you can debug your module.
.. _module_contribution: Getting Your Module Into Core ````````````````````````````` High-quality modules with minimal dependencies can be included in the core, but core modules (just due to the programming preferences of the developers) will need to be implemented in Python and use the AnsibleModule common code, and should generally use consistent arguments with the rest of the program. Stop by the mailing list to inquire about requirements if you like, and submit a github pull request to the main project. .. seealso:: :doc:`modules` Learn about available modules :doc:`developing_plugins` Learn about developing plugins :doc:`developing_api` Learn about the Python API for playbook and task execution `Github modules directory `_ Browse source of core modules `Mailing List `_ Development mailing list `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/intro_inventory.rst0000664000000000000000000002022712316627017020112 0ustar rootroot.. _inventory: Inventory ========= .. contents:: Topics Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible's inventory file, which defaults to being saved in the location /etc/ansible/hosts. Not only is this inventory configurable, but you can also use multiple inventory files at the same time (explained below) and also pull inventory from dynamic or cloud sources, as described in :doc:`intro_dynamic_inventory`. .. _inventoryformat: Hosts and Groups ++++++++++++++++ The format for /etc/ansible/hosts is an INI format and looks like this:: mail.example.com [webservers] foo.example.com bar.example.com [dbservers] one.example.com two.example.com three.example.com The things in brackets are group names, which are used in classifying systems and deciding what systems you are controlling at what times and for what purpose. It is ok to put systems in more than one group, for instance a server could be both a webserver and a dbserver. If you do, note that variables will come from all of the groups they are a member of, and variable precedence is detailed in a later chapter. If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won't be used, so it is important that you set them if things are not running on the default port:: badwolf.example.com:5309 Suppose you have just static IPs and want to set up some aliases that don't live in your host file, or you are connecting through tunnels. You can do things like this:: jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50 In the above example, trying to ansible against the host alias "jumper" (which may not even be a real hostname) will contact 192.168.1.50 on port 5555. Note that this is using a feature of the inventory file to define some special variables. Generally speaking this is not the best way to define variables that describe your system policy, but we'll share suggestions on doing this later. We're just getting started. Adding a lot of hosts? If you have a lot of hosts following similar patterns you can do this rather than listing each hostname:: [webservers] www[01:50].example.com For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive.
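For instance, a pattern written without leading zeros expands to unpadded hostnames -- a small illustrative sketch::

    [workers]
    worker[1:16].example.com

This matches worker1.example.com through worker16.example.com.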
You can also define alphabetic ranges:: [databases] db-[a:f].example.com You can also select the connection type and user on a per host basis:: [targets] localhost ansible_connection=local other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan As mentioned above, setting these in the inventory file is only a shorthand, and we'll discuss how to store them in individual files in the 'host_vars' directory a bit later on. .. _host_variables: Host Variables ++++++++++++++ As alluded to above, it is easy to assign variables to hosts that will be used later in playbooks:: [atlanta] host1 http_port=80 maxRequestsPerChild=808 host2 http_port=303 maxRequestsPerChild=909 .. _group_variables: Group Variables +++++++++++++++ Variables can also be applied to an entire group at once:: [atlanta] host1 host2 [atlanta:vars] ntp_server=ntp.atlanta.example.com proxy=proxy.atlanta.example.com .. _subgroups: Groups of Groups, and Group Variables +++++++++++++++++++++++++++++++++++++ It is also possible to make groups of groups and assign variables to groups. These variables can be used by /usr/bin/ansible-playbook, but not /usr/bin/ansible:: [atlanta] host1 host2 [raleigh] host2 host3 [southeast:children] atlanta raleigh [southeast:vars] some_server=foo.southeast.example.com halon_system_timeout=30 self_destruct_countdown=60 escape_pods=2 [usa:children] southeast northeast southwest northwest If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see the next section. .. _splitting_out_vars: Splitting Out Host and Group Specific Data ++++++++++++++++++++++++++++++++++++++++++ The preferred practice in Ansible is actually not to store variables in the main inventory file. In addition to storing variables directly in the INI file, host and group variables can be stored in individual files relative to the inventory file. These variable files are in YAML format. See :doc:`YAMLSyntax` if you are new to YAML. Assuming the inventory file path is:: /etc/ansible/hosts If the host is named 'foosball', and in groups 'raleigh' and 'webservers', variables in YAML files at the following locations will be made available to the host:: /etc/ansible/group_vars/raleigh /etc/ansible/group_vars/webservers /etc/ansible/host_vars/foosball For instance, suppose you have hosts grouped by datacenter, and each datacenter uses some different servers. The data in the groupfile '/etc/ansible/group_vars/raleigh' for the 'raleigh' group might look like:: --- ntp_server: acme.example.org database_server: storage.example.org It is ok if these files do not exist, as this is an optional feature. Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either the playbook directory OR the inventory directory. If both paths exist, variables in the playbook directory will be loaded second. Tip: Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables. .. _behavioral_parameters: List of Behavioral Inventory Parameters +++++++++++++++++++++++++++++++++++++++ As alluded to above, setting the following variables controls how ansible interacts with remote hosts. Some we have already mentioned:: ansible_ssh_host The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port The ssh port number, if not 22 ansible_ssh_user The default ssh user name to use. ansible_ssh_pass The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys) ansible_sudo_pass The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass) ansible_connection Connection type of the host. Candidates are local, ssh or paramiko. The default is paramiko before Ansible 1.2, and 'smart' afterwards, which detects whether usage of 'ssh' would be feasible based on whether ControlPersist is supported. ansible_ssh_private_key_file Private key file used by ssh. Useful if using multiple keys and you don't want to use SSH agent. ansible_python_interpreter The target host python path. This is useful for systems with more than one Python or not located at "/usr/bin/python" such as \*BSD, or where /usr/bin/python is not a 2.X series Python. We do not use the "/usr/bin/env" mechanism as that requires the remote user's path to be set right and also assumes the "python" executable is named python, where the executable might be named something like "python26". ansible\_\*\_interpreter Works for anything such as ruby or perl and works just like ansible_python_interpreter. This replaces the shebang of modules which will run on that host. Examples from a host file:: some_host ansible_ssh_port=2222 ansible_ssh_user=manager aws_host ansible_ssh_private_key_file=/home/example/.ssh/aws.pem freebsd_host ansible_python_interpreter=/usr/local/bin/python ruby_module_host ansible_ruby_interpreter=/usr/bin/ruby.1.9.3 .. seealso:: :doc:`intro_dynamic_inventory` Pulling inventory from dynamic sources, such as cloud providers :doc:`intro_adhoc` Examples of basic commands :doc:`playbooks` Learning ansible's configuration management language `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/guide_rolling_upgrade.rst0000664000000000000000000003615612316627017021204 0ustar rootrootContinuous Delivery and Rolling Upgrades ======================================== .. _lamp_introduction: Introduction ```````````` Continuous Delivery is the concept of frequently delivering updates to your software application. The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization gets better at the process of responding to change. Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis -- sometimes every time there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates in a zero-downtime way. This document describes in detail how to achieve this goal, using one of Ansible's most complete example playbooks as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates, and group variables, and it also comes with an orchestration playbook that can do zero-downtime rolling upgrades of the web application stack. .. note:: `Click here for the latest playbooks for this example `_. The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers. We're not going to cover how to run these playbooks here. Read the included README in the github project along with the example for that information. Instead, we're going to take a close look at every part of the playbook and describe what it does. .. _lamp_deployment: Site Deployment ``````````````` Let's start with ``site.yml``.
This is our site-wide deployment playbook. It can be used to initially deploy the site, as well as push updates to all of the servers:: --- # This playbook deploys the whole application stack in this site. # Apply common configuration to all hosts - hosts: all roles: - common # Configure and deploy database servers. - hosts: dbservers roles: - db # Configure and deploy the web servers. Note that we include two roles # here, the 'base-apache' role which simply sets up Apache, and 'web' # which includes our example web application. - hosts: webservers roles: - base-apache - web # Configure and deploy the load balancer(s). - hosts: lbservers roles: - haproxy # Configure and deploy the Nagios monitoring node(s). - hosts: monitoring roles: - base-apache - nagios .. note:: If you're not familiar with terms like playbooks and plays, you should review :doc:`playbooks`. In this playbook we have 5 plays. The first one targets ``all`` hosts and applies the ``common`` role to all of the hosts. This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers. The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for Nagios monitoring, the database, and the web application, we've implemented a ``base-apache`` role that installs and configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts. .. _lamp_roles: Reusable Content: Roles ``````````````````````` By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize content -- tasks, handlers, templates, and files -- into reusable components. This example has six roles: ``common``, ``base-apache``, ``db``, ``haproxy``, ``nagios``, and ``web``. How you organize your roles is up to you and your application, but most sites will have one or more common roles that are applied to all systems, and then a series of application-specific roles that install and configure particular parts of the site. Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior. You can read more about roles in the :doc:`playbooks_roles` section. .. _lamp_group_variables: Configuration: Group Variables `````````````````````````````` Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in a directory called ``group_vars`` in the same location as your inventory. Here is lamp_haproxy's ``group_vars/all`` file. As you might expect, these variables are applied to all of the machines in your inventory:: --- httpd_port: 80 ntpserver: 192.168.1.2 This is a YAML file, and you can create lists and dictionaries for more complex variable structures. In this case, we are just setting two variables, one for the port for the web server, and one for the NTP server that our machines should use for time synchronization. Here's another group variables file. This is ``group_vars/dbservers`` which applies to the hosts in the ``dbservers`` group:: --- mysqlservice: mysqld mysql_port: 3306 dbuser: root dbname: foodb upassword: usersecret If you look in the example, there are similar group variables for the ``webservers`` group and the ``lbservers`` group. These variables are used in a variety of places.
You can use them in playbooks, like this, in ``roles/db/tasks/main.yml``:: - name: Create Application Database mysql_db: name={{ dbname }} state=present - name: Create Application DB User mysql_user: name={{ dbuser }} password={{ upassword }} priv=*.*:ALL host='%' state=present You can also use these variables in templates, like this, in ``roles/common/templates/ntp.conf.j2``:: driftfile /var/lib/ntp/drift restrict 127.0.0.1 restrict -6 ::1 server {{ ntpserver }} includefile /etc/ntp/crypto/pw keys /etc/ntp/keys You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The syntax inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the data inside. In templates, you can also use for loops and if statements to handle more complex situations, like this, in ``roles/common/templates/iptables.j2``:: {% if inventory_hostname in groups['dbservers'] %} -A INPUT -p tcp --dport 3306 -j ACCEPT {% endif %} This is testing to see if the inventory name of the machine we're currently operating on (``inventory_hostname``) exists in the inventory group ``dbservers``. If so, that machine will get an iptables ACCEPT line for port 3306. Here's another example, from the same template:: {% for host in groups['monitoring'] %} -A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT {% endfor %} This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for each monitoring host's default IPv4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts. You can learn a lot more about Jinja2 and its capabilities `here `_, and you can read more about Ansible variables in general in the :doc:`playbooks_variables` section. .. _lamp_rolling_upgrade: The Rolling Upgrade ``````````````````` Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it. Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_upgrade.yml``. Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:: - hosts: monitoring tasks: [] What's going on here, and why are there no tasks? You might know that Ansible gathers "facts" from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it's a useful trick to know. The next part is the update play. The first part looks like this:: - hosts: webservers user: root serial: 1 This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once.
If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time. Here is the next part of the update play:: pre_tasks: - name: disable nagios alerts for this host webserver service nagios: action=disable_alerts host={{ ansible_hostname }} services=webserver delegate_to: "{{ item }}" with_items: groups.monitoring - name: disable the server in haproxy shell: echo "disable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats delegate_to: "{{ item }}" with_items: groups.lbservers The ``pre_tasks`` keyword just lets you list tasks to run before the roles are called. This will make more sense in a minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the webserver that we are currently updating from the HAProxy load balancing pool. The ``delegate_to`` and ``with_items`` arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, "on behalf" of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers. Note that the HAProxy step looks a little complicated. We're using HAProxy in this example because it's freely available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the 'pre tasks' section -- take the server out of monitoring, and take it out of rotation. The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in ``web`` and ``base-apache`` roles to be applied to the web servers, including an update of the web application code itself. We don't have to do it this way -- we could instead just purely update the web application, but this is a good example of how roles can be used to reuse tasks:: roles: - common - base-apache - web Finally, in the ``post_tasks`` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:: post_tasks: - name: Enable the server in haproxy shell: echo "enable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats delegate_to: "{{ item }}" with_items: groups.lbservers - name: re-enable nagios alerts nagios: action=enable_alerts host={{ ansible_hostname }} services=webserver delegate_to: "{{ item }}" with_items: groups.monitoring Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead. .. _lamp_end_notes: Managing Other Load Balancers ````````````````````````````` In this example, we use the simple HAProxy load balancer to front-end the web servers. It's easy to configure and easy to manage. As we have mentioned, Ansible has built-in support for a variety of other load balancers like Citrix NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more.
See the :doc:`modules` documentation for more information. For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run them as a ``local_action`` if they contact an API. You can read more about local actions in the :doc:`playbooks_delegation` section. Should you develop anything interesting for some hardware where there is not a core module, it might make for a good module for core inclusion! .. _lamp_end_to_end: Continuous Delivery End-To-End `````````````````````````````` Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like `Jenkins `_ or `Atlassian Bamboo `_ to tie the development, test, release, and deploy steps together. You may also want to use a tool like `Gerrit `_ to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both. Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you. For integration with Continuous Integration systems, you can easily trigger playbook runs using the ``ansible-playbook`` command line tool, or, if you're using :doc:`tower`, the ``tower-cli`` or the built-in REST API. (The tower-cli command 'joblaunch' will spawn a remote job over the REST API and is pretty slick). This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations. .. seealso:: `lamp_haproxy example `_ The lamp_haproxy example discussed here. :doc:`playbooks` An introduction to playbooks :doc:`playbooks_roles` An introduction to playbook roles :doc:`playbooks_variables` An introduction to Ansible variables `Ansible.com: Continuous Delivery `_ An introduction to Continuous Delivery with Ansible ansible-1.5.4/docsite/rst/intro_getting_started.rst0000664000000000000000000001543512316627017021251 0ustar rootrootGetting Started =============== .. contents:: Topics .. _gs_about: Foreword ```````` Now that you've read :doc:`intro_installation` and installed Ansible, it's time to dig in and get started with some commands. What we are showing first are not the powerful configuration/deployment/orchestration features of Ansible, called playbooks. Playbooks are covered in a separate section. This section is about how to get going initially. Once you have these concepts down, read :doc:`intro_adhoc` for some more detail, and then you'll be ready to dive into playbooks and explore the most interesting parts! .. _remote_connection_information: Remote Connection Information ````````````````````````````` Before we get started, it's important to understand how Ansible is communicating with remote machines over SSH.
By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as Jump Host setup. When using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives such as CentOS), however, the version of OpenSSH may be too old to support ControlPersist. On these operating systems, Ansible will fall back to using a high-quality Python implementation of the SSH2 protocol called 'paramiko'. If you wish to use features like Kerberized SSH and more, consider using Fedora, OS X, or Ubuntu as your control machine until a newer version of OpenSSH is available for your platform -- or engage 'accelerated mode' in Ansible. See :doc:`playbooks_acceleration`. In Ansible 1.2 and before, the default was strictly paramiko and native SSH had to be explicitly selected with -c ssh or set in the configuration file. Occasionally you'll encounter a device that doesn't do SFTP. This is rare, but if talking with some remote devices that don't support SFTP, you can switch to SCP mode in :doc:`intro_configuration`. When speaking with remote machines, Ansible will by default assume you are using SSH keys -- which we encourage -- but passwords are fine too. To enable password auth, supply the option ``--ask-pass`` where needed. If using sudo features and when sudo requires a password, also supply ``--ask-sudo-pass`` as appropriate. While it may be common sense, it is worth sharing: Any management system benefits from being run near the machines being managed. If running in a cloud, consider running Ansible from a machine inside that cloud. It will work better than on the open internet in most cases. As an advanced topic, Ansible doesn't just have to connect remotely over SSH. The transports are pluggable, and there are options for managing things locally, as well as managing chroot, lxc, and jail containers. A mode called 'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration directives from a central repository. .. _your_first_commands: Your first commands ``````````````````` Now that you've installed Ansible, it's time to get started with some basics. Edit (or create) /etc/ansible/hosts and put one or more remote systems in it, for which you have your SSH key in ``authorized_keys``:: 192.168.1.50 aserver.example.org bserver.example.org This is an inventory file, which is also explained in greater depth here: :doc:`intro_inventory`. We'll assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can do: .. code-block:: bash $ ssh-agent bash $ ssh-add ~/.ssh/id_rsa (Depending on your setup, you may wish to use Ansible's ``--private-key`` option to specify a pem file instead) Now ping all your nodes: .. code-block:: bash $ ansible all -m ping Ansible will attempt to remote connect to the machines using your current user name, just like SSH would. To override the remote user name, just use the '-u' parameter. If you would like to access sudo mode, there are also flags to do that: .. code-block:: bash # as bruce $ ansible all -m ping -u bruce # as bruce, sudoing to root $ ansible all -m ping -u bruce --sudo # as bruce, sudoing to batman $ ansible all -m ping -u bruce --sudo --sudo-user batman (The sudo implementation is changeable in Ansible's configuration file if you happen to want to use a sudo replacement.
Flags passed to sudo (like -H) can also be set there.) Now run a live command on all of your nodes: .. code-block:: bash $ ansible all -a "/bin/echo hello" Congratulations. You've just contacted your nodes with Ansible. It's soon going to be time to read some of the more real-world :doc:`intro_adhoc`, and explore what you can do with different modules, as well as the Ansible :doc:`playbooks` language. Ansible is not just about running commands, it also has powerful configuration management and deployment features. There's more to explore, but you already have a fully working infrastructure! .. _a_note_about_host_key_checking: Host Key Checking ````````````````` Ansible 1.2.1 and later have host key checking enabled by default. If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible from, say, cron. You might not want this. If you wish to disable this behavior and understand the implications, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:: [defaults] host_key_checking = False Alternatively this can be set by an environment variable: .. code-block:: bash $ export ANSIBLE_HOST_KEY_CHECKING=False Also note that host key checking in paramiko mode is reasonably slow; therefore, switching to 'ssh' is also recommended when using this feature. .. _a_note_about_logging: Ansible will log some information about module arguments on the remote system in the remote syslog. To enable basic logging on the control machine see the :doc:`intro_configuration` document and set the 'log_path' configuration file setting. Enterprise users may also be interested in :doc:`tower`. Tower provides a very robust database logging feature where it is possible to drill down and see history based on hosts, projects, and particular inventories over time -- explorable both graphically and through a REST API. .. seealso:: :doc:`intro_inventory` More information about inventory :doc:`intro_adhoc` Examples of basic commands :doc:`playbooks` Learning Ansible's configuration management language `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_checkmode.rst0000664000000000000000000000420712316627017020647 0ustar rootrootCheck Mode ("Dry Run") ====================== .. versionadded:: 1.1 .. contents:: Topics When ansible-playbook is executed with ``--check`` it will not make any changes on remote systems. Instead, any module instrumented to support 'check mode' (which contains most of the primary core modules, but it is not required that all modules do this) will report what changes they would have made rather than making them. Other modules that do not support check mode will also take no action, but just will not report what changes they might have made. Check mode is just a simulation, and if you have steps that use conditionals that depend on the results of prior commands, it may be less useful for you. However, it is great for one-node-at-a-time basic configuration management use cases. Example:: ansible-playbook foo.yml --check .. _forcing_to_run_in_check_mode: Running a task in check mode ```````````````````````````` .. versionadded:: 1.3 Sometimes you may want a task to be executed even in check mode. To achieve this, use the `always_run` clause on the task.
Its value is a Jinja2 expression, just like the `when` clause. In simple cases a boolean YAML value would be sufficient as a value. Example:: tasks: - name: this task is run even in check mode command: /something/to/run --even-in-check-mode always_run: yes As a reminder, a task with a `when` clause evaluated to false will still be skipped even if it has an `always_run` clause evaluated to true. .. _diff_mode: Showing Differences with ``--diff`` ``````````````````````````````````` .. versionadded:: 1.1 The ``--diff`` option to ansible-playbook works great with ``--check`` (detailed above) but can also be used by itself. When this flag is supplied, if any templated files on the remote system are changed, the ansible-playbook CLI will report back the textual changes made to the file (or, if used with ``--check``, the changes that would have been made). Since the diff feature produces a large amount of output, it is best used when checking a single host at a time, like so:: ansible-playbook foo.yml --check --diff --limit foo.example.com ansible-1.5.4/docsite/rst/modules.rst0000664000000000000000000000444012316627017016311 0ustar rootrootAbout Modules ============= .. toctree:: :maxdepth: 4 .. _modules_intro: Introduction ```````````` Ansible ships with a number of modules (called the 'module library') that can be executed directly on remote hosts or through :doc:`Playbooks `. Users can also write their own modules. These modules can control system resources, like services, packages, or files (anything really), or handle executing system commands. Let's review how we execute three different modules from the command line:: ansible webservers -m service -a "name=httpd state=running" ansible webservers -m ping ansible webservers -m command -a "/sbin/reboot -t now" Each module supports taking arguments. Nearly all modules take ``key=value`` arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run. From playbooks, Ansible modules are executed in a very similar way:: - name: reboot the servers action: command /sbin/reboot -t now Which can be abbreviated to:: - name: reboot the servers command: /sbin/reboot -t now All modules technically return JSON format data, though if you are using the command line or playbooks, you don't really need to know much about that. If you're writing your own module, you care, and this means you do not have to write modules in any particular language -- you get to choose. Modules are `idempotent`, meaning they will seek to avoid changes to the system unless a change needs to be made. When using Ansible playbooks, these modules can trigger 'change events' in the form of notifying 'handlers' to run additional tasks. Documentation for each module can be accessed from the command line with the ansible-doc tool:: ansible-doc yum .. seealso:: :doc:`intro_adhoc` Examples of using modules in /usr/bin/ansible :doc:`playbooks` Examples of using modules with /usr/bin/ansible-playbook :doc:`developing_modules` How to write your own modules :doc:`developing_api` Examples of using modules with the Python API `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/quickstart.rst0000664000000000000000000000061612316627017017034 0ustar rootrootQuickstart Video ```````````````` We've recorded a short video that shows how to get started with Ansible, which you may like to use alongside the documentation.
The `quickstart video `_ is about 20 minutes long and will show you some of the basics about your first steps with Ansible. Enjoy, and be sure to visit the rest of the documentation to learn more. ansible-1.5.4/docsite/rst/intro_dynamic_inventory.rst0000664000000000000000000002646212316627017021615 0ustar rootroot.. _dynamic_inventory: Dynamic Inventory ================= .. contents:: Topics Often a user of a configuration management system will want to keep inventory in a different software system. Ansible provides a basic text-based system as described in :doc:`intro_inventory` but what if you want to use something else? Frequent examples include pulling inventory from a cloud provider, LDAP, `Cobbler `_, or a piece of expensive enterprisey CMDB software. Ansible easily supports all of these options via an external inventory system. The plugins directory contains some of these already -- including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack, examples of some of which will be detailed below. :doc:`tower` also provides a database to store inventory results that is both web and REST accessible. Tower syncs with all Ansible dynamic inventory sources you might be using, and also includes a graphical inventory editor. By having a database record of all of your hosts, it's easy to correlate past event history and see which ones have had failures on their last playbook runs. For information about writing your own dynamic inventory source, see :doc:`developing_inventory`. .. _cobbler_example: Example: The Cobbler External Inventory Script `````````````````````````````````````````````` It is expected that many Ansible users with a reasonable amount of physical hardware may also be `Cobbler `_ users. (note: Cobbler was originally written by Michael DeHaan and is now led by James Cammarata, who also works for Ansible, Inc). While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that allows it to represent data for multiple configuration management systems (even at the same time), and has been referred to as a 'lightweight CMDB' by some admins. This particular script will communicate with Cobbler using Cobbler's XMLRPC API. To tie Ansible's inventory to Cobbler (optional), copy `this script `_ to /etc/ansible and `chmod +x` the file. cobblerd will now need to be running when you are using Ansible and you'll need to use Ansible's ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``). First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet. Let's explore what this does. In cobbler, assume a scenario somewhat like the following:: cobbler profile add --name=webserver --distro=CentOS6-x86_64 cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3" cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4" cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5" In the example above, the system 'foo.example.com' will be addressable by ansible directly, but will also be addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, we'll try to contact system foo over 'foo.example.com' only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but "ansible 'foo*'" would, because the system DNS name starts with 'foo'. The script doesn't just provide host and group info.
In addition, as a bonus, when the 'setup' module is run (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' will all be auto-populated in the templates:: # file: /srv/motd.j2 Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }} Which could be executed just like this:: ansible webserver -m setup ansible webserver -m template -a "src=/srv/motd.j2 dest=/etc/motd" .. note:: The name 'webserver' came from cobbler, as did the variables for the config file. You can still pass in your own variables like normal in Ansible, but variables from the external inventory script will override any that have the same name. So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system 'foo':: Welcome, I am templated with a value of a=2, b=3, and c=4 And on system 'bar' (bar.example.com):: Welcome, I am templated with a value of a=2, b=3, and c=5 And technically, though there is no major good reason to do it, this works too:: ansible webserver -m shell -a "echo {{ a }}" So in other words, you can use those variables in arguments/actions as well. .. _aws_example: Example: AWS EC2 External Inventory Script `````````````````````````````````````````` If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the `EC2 external inventory `_ script. You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script after marking it executable:: ansible -i ec2.py -u ubuntu us-east-1d -m ping The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You will also need to copy the `ec2.ini `_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally. To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a `variety of methods `_ available, but the simplest is just to export two environment variables:: export AWS_ACCESS_KEY_ID='AK123' export AWS_SECRET_ACCESS_KEY='abc123' You can test the script by itself to make sure your config is correct:: cd plugins/inventory ./ec2.py --list After a few moments, you should see your entire EC2 inventory across all regions in JSON. Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini`` including cache control and destination variables. At their heart, inventory files are simply a mapping from some name to a destination address. The default ``ec2.ini`` settings are configured for running Ansible from outside EC2 (from your laptop for example) -- and this is not the most efficient way to manage EC2. If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public DNS names. In this case, you can modify the ``destination_variable`` in ``ec2.ini`` to be the private DNS name of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the only way to access an instance is via its private IP address.
For VPC instances, `vpc_destination_variable` in ``ec2.ini`` provides a means of using whichever `boto.ec2.instance variable `_ makes the most sense for your use case. The EC2 external inventory provides mappings to instances from several groups: Instance ID These are groups of one since instance IDs are unique. e.g. ``i-00112233`` ``i-a1b1c1d1`` Region A group of all instances in an AWS region. e.g. ``us-east-1`` ``us-west-2`` Availability Zone A group of all instances in an availability zone. e.g. ``us-east-1a`` ``us-east-1b`` Security Group Instances belong to one or more security groups. A group is created for each security group, with all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is prefixed by ``security_group_`` e.g. ``security_group_default`` ``security_group_webservers`` ``security_group_Pete_s_Fancy_Group`` Tags Each instance can have a variety of key/value pairs associated with it called Tags. The most common tag key is 'Name', though anything is possible. Each key/value pair is its own group of instances, again with special characters converted to underscores, in the format ``tag_KEY_VALUE`` e.g. ``tag_Name_Web`` ``tag_Name_redis-master-001`` ``tag_aws_cloudformation_logical-id_WebServerGroup`` When Ansible is interacting with a specific server, the EC2 inventory script is called again with the ``--host HOST`` option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS to get information about that specific instance. It then makes information about that instance available as variables to your playbooks. Each variable is prefixed by ``ec2_``. Here are some of the variables available: - ec2_architecture - ec2_description - ec2_dns_name - ec2_id - ec2_image_id - ec2_instance_type - ec2_ip_address - ec2_kernel - ec2_key_name - ec2_launch_time - ec2_monitored - ec2_ownerId - ec2_placement - ec2_platform - ec2_previous_state - ec2_private_dns_name - ec2_private_ip_address - ec2_public_dns_name - ec2_ramdisk - ec2_region - ec2_root_device_name - ec2_root_device_type - ec2_security_group_ids - ec2_security_group_names - ec2_spot_instance_request_id - ec2_state - ec2_state_code - ec2_state_reason - ec2_status - ec2_subnet_id - ec2_tag_Name - ec2_tenancy - ec2_virtualization_type - ec2_vpc_id Both ``ec2_security_group_ids`` and ``ec2_security_group_names`` are comma-separated lists of all security groups. Each EC2 tag is a variable in the format ``ec2_tag_KEY``. To see the complete list of variables available for an instance, run the script by itself:: cd plugins/inventory ./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in ec2.ini. To explicitly clear the cache, you can run the ec2.py script with the ``--refresh-cache`` parameter. .. _other_inventory_scripts: Other inventory scripts ``````````````````````` In addition to Cobbler and EC2, inventory scripts are also available for:: BSD Jails Digital Ocean Linode OpenShift OpenStack Nova Red Hat's SpaceWalk Vagrant (not to be confused with the provisioner in vagrant, which is preferred) Zabbix Sections on how to use these in more detail will be added over time, but by looking at the "plugins/" directory of the Ansible checkout it should be very obvious how to use them. The process is the same as for the AWS inventory script above.
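For example, a hypothetical run using the Digital Ocean script straight from a checkout might look like the following (script names vary by provider, and each script needs its provider's credentials configured first -- check the script source for details)::

    cd plugins/inventory
    ansible all -i digital_ocean.py -m ping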
If you develop an interesting inventory script that might be general purpose, please submit a pull request -- we'd likely be glad to include it in the project. .. _using_multiple_sources: Using Multiple Inventory Sources ```````````````````````````````` If the location given to -i in Ansible is a directory (or is so configured in ansible.cfg), Ansible can use multiple inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant hybrid cloud! .. seealso:: :doc:`intro_inventory` All about static inventory files `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/intro_adhoc.rst0000664000000000000000000002527212316627017017140 0ustar rootrootIntroduction To Ad-Hoc Commands =============================== .. contents:: Topics .. highlight:: bash The following examples show how to use `/usr/bin/ansible` for running ad hoc tasks. What's an ad-hoc command? An ad-hoc command is something that you might type in to do something really quick, but don't want to save for later. This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language -- ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for. Generally speaking, the true power of Ansible lies in playbooks. Why would you use ad-hoc tasks versus playbooks? For instance, if you wanted to power off all of your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook. For configuration management and deployments, though, you'll want to pick up on using '/usr/bin/ansible-playbook' -- the concepts you will learn here will port over directly to the playbook language. (See :doc:`playbooks` for more information about those) If you haven't read :doc:`intro_inventory` already, please look that over a bit first and then we'll get going. .. _parallelism_and_shell_commands: Parallelism and Shell Commands `````````````````````````````` Here's an arbitrary example: let's use Ansible's command line tool to reboot all web servers in Atlanta, 10 at a time. First, let's set up SSH-agent so it can remember our credentials:: $ ssh-agent bash $ ssh-add ~/.ssh/id_rsa If you don't want to use ssh-agent and would rather SSH with a password instead of keys, you can with ``--ask-pass`` (``-k``), but it's much better to just use ssh-agent. Now to run the command on all servers in a group, in this case, *atlanta*, in 10 parallel forks:: $ ansible atlanta -a "/sbin/reboot" -f 10 /usr/bin/ansible will default to running from your user account. If you do not like this behavior, pass in "-u username". If you want to run commands as a different user, it looks like this:: $ ansible atlanta -a "/usr/bin/foo" -u username Often you won't want to do everything from your own user account. If you want to run commands through sudo:: $ ansible atlanta -a "/usr/bin/foo" -u username --sudo [--ask-sudo-pass] Use ``--ask-sudo-pass`` (``-K``) if you are not using passwordless sudo. This will interactively prompt you for the password to use. Use of passwordless sudo makes things easier to automate, but it's not required. It is also possible to sudo to a user other than root using ``--sudo-user`` (``-U``):: $ ansible atlanta -a "/usr/bin/foo" -u username -U otheruser [--ask-sudo-pass] ..
note:: Rarely, some users have security rules where they constrain their sudo environment to running specific command paths only. This does not work with ansible's no-bootstrapping philosophy and hundreds of different modules. If doing this, use Ansible from a special account that does not have this constraint. One way of doing this without sharing access to unauthorized users would be gating Ansible with :doc:`tower`, which can hold on to an SSH credential and let members of certain organizations use it on their behalf without having direct access. OK, so those are the basics. If you didn't read about patterns and groups yet, go back and read :doc:`intro_patterns`. The ``-f 10`` in the above specifies the use of 10 simultaneous processes. You can also set this in :doc:`intro_configuration` to avoid setting it again. The default is actually 5, which is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts so feel free to crank this up. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. Feel free to push this value as high as your system can handle! You can also select what Ansible "module" you want to run. Normally commands also take a ``-m`` for module name, but the default module name is 'command', so we didn't need to specify that all of the time. We'll use ``-m`` in later examples to run some other :doc:`modules`. .. note:: The :ref:`command` module does not support shell variables and things like piping. If you want to execute a command through a shell, use the 'shell' module instead. Read more about the differences on the :doc:`modules` page. Using the :ref:`shell` module looks like this:: $ ansible raleigh -m shell -a 'echo $TERM' When running any command with the Ansible *ad hoc* CLI (as opposed to :doc:`Playbooks `), pay particular attention to shell quoting rules, so the local shell doesn't eat a variable before it gets passed to Ansible. For example, using double vs single quotes in the above example would evaluate the variable on the box you were on. So far we've been demoing simple command execution, but most Ansible modules do not work like simple scripts. They make the remote system look the way you state it should, and run the commands necessary to get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible. However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both. .. _file_transfer: File Transfer ````````````` Here's another use case for the `/usr/bin/ansible` command line. Ansible can SCP lots of files to multiple machines in parallel. To transfer a file directly to many different servers:: $ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts" If you use playbooks, you can also take advantage of the ``template`` module, which takes this another step further. (See module and playbook documentation). The ``file`` module allows changing ownership and permissions on files.
These same options can be passed directly to the ``copy`` module as well:: $ ansible webservers -m file -a "dest=/srv/foo/a.txt mode=600" $ ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan" The ``file`` module can also create directories, similar to ``mkdir -p``:: $ ansible webservers -m file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory" As well as delete directories (recursively) and delete files:: $ ansible webservers -m file -a "dest=/path/to/c state=absent" .. _managing_packages: Managing Packages ````````````````` There are modules available for yum and apt. Here are some examples with yum. Ensure a package is installed, but don't update it:: $ ansible webservers -m yum -a "name=acme state=installed" Ensure a package is installed to a specific version:: $ ansible webservers -m yum -a "name=acme-1.5 state=installed" Ensure a package is at the latest version:: $ ansible webservers -m yum -a "name=acme state=latest" Ensure a package is not installed:: $ ansible webservers -m yum -a "name=acme state=removed" Ansible has modules for managing packages under many platforms. If your package manager does not have a module available for it, you can install packages using the command module or (better!) contribute a module for other package managers. Stop by the mailing list for info/details. .. _users_and_groups: Users and Groups ```````````````` The 'user' module allows easy creation of user accounts, manipulation of existing ones, and removal of user accounts that may exist:: $ ansible all -m user -a "name=foo password=" $ ansible all -m user -a "name=foo state=absent" See the :doc:`modules` section for details on all of the available options, including how to manipulate groups and group membership. .. _from_source_control: Deploying From Source Control ````````````````````````````` Deploy your webapp straight from git:: $ ansible webservers -m git -a "repo=git://foo.example.org/repo.git dest=/srv/myapp version=HEAD" Since Ansible modules can notify change handlers, it is possible to tell Ansible to run specific tasks when the code is updated, such as deploying Perl/Python/PHP/Ruby directly from git and then restarting apache. .. _managing_services: Managing Services ````````````````` Ensure a service is started on all webservers:: $ ansible webservers -m service -a "name=httpd state=started" Alternatively, restart a service on all webservers:: $ ansible webservers -m service -a "name=httpd state=restarted" Ensure a service is stopped:: $ ansible webservers -m service -a "name=httpd state=stopped" .. _time_limited_background_operations: Time Limited Background Operations `````````````````````````````````` Long running operations can be backgrounded, and their status can be checked on later. The same job ID is given to the same task on all hosts, so you won't lose track. If you kick hosts and don't want to poll, it looks like this:: $ ansible all -B 3600 -a "/usr/bin/long_running_operation --do-stuff" If you do decide you want to check on the job status later, you can:: $ ansible all -m async_status -a "jid=123456789" Polling is built-in and looks like this:: $ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff" The above example says "run for 30 minutes max (``-B``: 30*60=1800), poll for status (``-P``) every 60 seconds". Poll mode is smart so all jobs will be started before polling will begin on any machine.
Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (``-B``), the process on the remote nodes will be terminated. Typically you'll only be backgrounding long-running shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks ` also support polling, and have a simplified syntax for this. .. _checking_facts: Gathering Facts ``````````````` Facts are described in the playbooks section and represent discovered variables about a system. These can be used to implement conditional execution of tasks but also just to get ad-hoc information about your system. You can see all facts via:: $ ansible all -m setup It's also possible to filter this output to just export certain facts; see the "setup" module documentation for details. Read more about facts at :doc:`playbooks_variables` once you're ready to read up on :doc:`Playbooks `. .. seealso:: :doc:`intro_configuration` All about the Ansible config file :doc:`modules` A list of available modules :doc:`playbooks` Using Ansible for configuration management & deployment `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/playbooks_tags.rst0000664000000000000000000000270612316627017017665 0ustar rootrootTags ==== If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook. Both plays and tasks support a "tags:" attribute for this reason. Example:: tasks: - yum: name={{ item }} state=installed with_items: - httpd - memcached tags: - packages - template: src=templates/src.j2 dest=/etc/foo.conf tags: - configuration If you wanted to just run the "configuration" and "packages" part of a very long playbook, you could do this:: ansible-playbook example.yml --tags "configuration,packages" On the other hand, if you want to run a playbook *without* certain tasks, you could do this:: ansible-playbook example.yml --skip-tags "notification" You may also apply tags to roles:: roles: - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] } And you may also tag basic include statements:: - include: foo.yml tags=web,foo Both of these have the function of tagging every single task inside the include statement. .. seealso:: :doc:`playbooks` An introduction to playbooks :doc:`playbooks_roles` Playbook organization by roles `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/faq.rst0000664000000000000000000002774612316627017015422 0ustar rootrootFrequently Asked Questions ========================== Here are some commonly-asked questions and their answers. .. _users_and_ports: How do I handle different machines needing different user accounts or ports to log in with? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:: [webservers] asdf.example.com ansible_ssh_port=5000 ansible_ssh_user=alice jkl.example.com ansible_ssh_port=5001 ansible_ssh_user=bob You can also dictate the connection type to be used, if you want:: [testcluster] localhost ansible_connection=local /path/to/chroot1 ansible_connection=chroot foo.example.com bar.example.com You may also wish to keep these in group variables instead, or file them in a group_vars/ file. See the rest of the documentation for more information about how to organize variables. .. _use_ssh: How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Switch your default connection type in the configuration file to 'ssh', or use '-c ssh' to use native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, 'ssh' will be used by default if OpenSSH is new enough to support ControlPersist as an option. Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage older clients. If you are using RHEL 6, CentOS 6, SLES 10, or SLES 11, the version of OpenSSH is still a bit old, so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko. We keep paramiko as the default because, if you are first installing Ansible on an EL box, it offers a better experience for new users. .. _ec2_cloud_performance: How do I speed up management inside EC2? ++++++++++++++++++++++++++++++++++++++++ Don't try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first and run Ansible from there. .. _python_interpreters: How do I handle python pathing not having a Python 2.X in /usr/bin/python on a remote machine? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ While you can write ansible modules in any language, most ansible modules are written in Python, and some of these are important core ones. By default Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python, specifically 2.4 or higher. Setting the inventory variable 'ansible_python_interpreter' on any host will allow Ansible to auto-replace the interpreter used when executing python modules. Thus, you can point to any python you want on the system if /usr/bin/python on your system does not point to a Python 2.X interpreter. Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python 2. Ansible modules currently need to support older Pythons for users that still have Enterprise Linux 5 deployed, so they are not yet ported to run under Python 3.0. This is not a problem, though, as you can simply install Python 2 on the managed host as well. Python 3.0 support will likely be addressed at a later point in time when usage becomes more mainstream. Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time. .. _use_roles: What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content self-contained, and works well with things like git submodules for sharing content with others. If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended. .. _configuration_file: Where does the configuration file live and what can I configure in it? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ See :doc:`intro_configuration`. .. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how: How do I disable cowsay? ++++++++++++++++++++++++ If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set an environment variable:: export ANSIBLE_NOCOWS=1 .. _browse_facts: How do I see a list of all of the ansible\_ variables? ++++++++++++++++++++++++++++++++++++++++++++++++++++++ Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the "setup" module as an ad-hoc action:: ansible -m setup hostname This will print out a dictionary of all of the facts that are available for that particular host. .. _host_loops: How do I loop over a list of hosts in a group, inside of a template? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the 'groups' dictionary in your template, like this:: {% for host in groups['db_servers'] %} {{ host }} {% endfor %} If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:: - hosts: db_servers tasks: - # doesn't matter what you do, just that they were talked to previously. Then you can use the facts inside your template, like this:: {% for host in groups['db_servers'] %} {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }} {% endfor %} .. _programatic_access_to_a_variable: How do I access a variable name programmatically? ++++++++++++++++++++++++++++++++++++++++++++++++ An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like so:: {{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }} The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname' is a magic variable that indicates the current host you are looping over in the host loop. .. _first_host_in_a_group: How do I access a variable of the first host in a group? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too.
Note that if we are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory was static and predictable. (If you are using :doc:`tower`, it will use database order, so this isn't a problem even if you are using cloud-based inventory scripts). Anyway, here's the trick:: {{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }} Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you could use the Jinja2 'set' directive to simplify this, or in a playbook, you could also use set_fact:: - set_fact: headnode={{ groups['webservers'][0] }} - debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }} Notice how we interchanged the bracket syntax for dots -- that can be done anywhere. .. _file_recursion: How do I copy files recursively onto a target host? +++++++++++++++++++++++++++++++++++++++++++++++++++ The "copy" module doesn't handle recursive copies of directories. A common solution is to use a local action to call 'rsync' to recursively copy files to the managed servers. Here is an example:: --- # ... tasks: - name: recursively copy files from management server to target local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/ Note that you'll need passphrase-less SSH or ssh-agent set up to let rsync copy without prompting for a passphrase or password. .. _shell_env: How do I access shell environment variables? ++++++++++++++++++++++++++++++++++++++++++++ If you just need to access existing variables, use the 'env' lookup plugin. For example, to access the value of the HOME environment variable on the management machine:: --- # ... vars: local_home: "{{ lookup('env','HOME') }}" If you need to set environment variables, see the Advanced Playbooks section about environments. Ansible 1.4 will also make remote environment variables available via facts in the 'ansible_env' variable:: {{ ansible_env.SOME_VARIABLE }} .. _user_passwords: How do I generate crypted passwords for the user module? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The mkpasswd utility that is available on most Linux systems is a great option:: mkpasswd --method=SHA-512 If this utility is not installed on your system (e.g. you are using OS X) then you can still easily generate these passwords using Python. First, ensure that the `Passlib `_ password hashing library is installed:: pip install passlib Once the library is ready, SHA512 password values can then be generated as follows:: python -c "from passlib.hash import sha512_crypt; print sha512_crypt.encrypt('')" .. _commercial_support: Can I get training on Ansible or find commercial support? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Yes! See `our Guru offering `_ for online support, and support is also included with :doc:`tower`. You can also read our `service page `_ and email `info@ansible.com `_ for further details. .. _web_interface: Is there a web interface / REST API / etc? ++++++++++++++++++++++++++++++++++++++++++ Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See :doc:`tower`. .. _docs_contributions: How do I submit a change to the documentation? ++++++++++++++++++++++++++++++++++++++++++++++ Great question! Documentation for Ansible is kept in the main project git repository, and complete instructions for contributing can be found in the docs README `viewable on GitHub `_. Thanks! ..
_keep_secret_data: How do I keep secret data in my playbook? +++++++++++++++++++++++++++++++++++++++++ If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :doc:`playbooks_vault`. .. _i_dont_see_my_question: I don't see my question here ++++++++++++++++++++++++++++ Please see the section below for a link to IRC and the Google Group, where you can ask your question. .. seealso:: :doc:`index` The documentation index :doc:`playbooks` An introduction to playbooks :doc:`playbooks_best_practices` Best practices advice `User Mailing List `_ Have a question? Stop by the google group! `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/developing.rst0000664000000000000000000000104012316627017016770 0ustar rootrootDeveloper Information ````````````````````` Learn how to build modules of your own in any language, and also how to extend Ansible through several kinds of plugins. Explore Ansible's Python API and write Python plugins to integrate with other solutions in your environment. .. toctree:: :maxdepth: 1 developing_api developing_inventory developing_modules developing_plugins Developers will also likely be interested in the fully-discoverable REST API in :doc:`tower`. It's great for embedding Ansible in all manner of applications. ansible-1.5.4/docsite/rst/playbooks_intro.rst0000664000000000000000000003335612316627017020067 0ustar rootrootIntro to Playbooks ================== .. _about_playbooks: About Playbooks ``````````````` Playbooks are a completely different way to use ansible than in ad-hoc task execution mode, and are particularly powerful. Simply put, playbooks are the basis for a really simple configuration management and multi-machine deployment system, unlike any that already exist, and one that is very well suited to deploying complex applications. Playbooks can declare configurations, but they can also orchestrate steps of any manual ordered process, even as different steps must bounce back and forth between sets of machines in particular orders. They can launch tasks synchronously or asynchronously. While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source control and used to push out your configuration or ensure the configurations of your remote systems are in spec. There are also some full sets of playbooks illustrating a lot of these techniques in the `ansible-examples repository `_. We'd recommend looking at these in another tab as you go along. There are also many jumping off points after you learn playbooks, so hop back to the documentation index after you're done with this section. .. _playbook_language_example: Playbook Language Example ````````````````````````` Playbooks are expressed in YAML format (see :doc:`YAMLSyntax`) and have a minimum of syntax, which intentionally tries to not be a programming language or script, but rather a model of a configuration or a process. Each playbook is composed of one or more 'plays' in a list. The goal of a play is to map a group of hosts to some well-defined roles, represented by things ansible calls tasks. At a basic level, a task is nothing more than a call to an ansible module, which you should have learned about in earlier chapters.
By composing a playbook of multiple 'plays', it is possible to orchestrate multi-machine deployments, running certain steps on all machines in the webservers group, then certain steps on the database server group, then more commands back on the webservers group, etc. "plays" are more or less a sports analogy. You can have quite a lot of plays that affect your systems to do different things. It's not as if you were just defining one particular state or model, and you can run different plays at different times. For starters, here's a playbook that contains just one play:: --- - hosts: webservers vars: http_port: 80 max_clients: 200 remote_user: root tasks: - name: ensure apache is at the latest version yum: pkg=httpd state=latest - name: write the apache config file template: src=/srv/httpd.j2 dest=/etc/httpd.conf notify: - restart apache - name: ensure apache is running service: name=httpd state=started handlers: - name: restart apache service: name=httpd state=restarted Below, we'll break down what the various features of the playbook language are. .. _playbook_basics: Basics `````` .. _playbook_hosts_and_users: Hosts and Users +++++++++++++++ For each play in a playbook, you get to choose which machines in your infrastructure to target and what remote user to complete the steps (called tasks) as. The `hosts` line is a list of one or more groups or host patterns, separated by colons, as described in the :doc:`intro_patterns` documentation. The `remote_user` is just the name of the user account:: --- - hosts: webservers remote_user: root .. note:: The `remote_user` parameter was formerly called just `user`. It was renamed in Ansible 1.4 to make it more distinguishable from the `user` module (used to create users on remote systems). Remote users can also be defined per task:: --- - hosts: webservers remote_user: root tasks: - name: test connection ping: remote_user: yourname .. note:: The `remote_user` parameter for tasks was added in 1.4. Support for running things from sudo is also available:: --- - hosts: webservers remote_user: yourname sudo: yes You can also use sudo on a particular task instead of the whole play:: --- - hosts: webservers remote_user: yourname tasks: - service: name=nginx state=started sudo: yes You can also log in as yourself, and then sudo to users other than root:: --- - hosts: webservers remote_user: yourname sudo: yes sudo_user: postgres If you need to specify a password to sudo, run `ansible-playbook` with ``--ask-sudo-pass`` (`-K`). If you run a sudo playbook and the playbook seems to hang, it's probably stuck at the sudo prompt. Just `Control-C` to kill it and run it again with `-K`. .. important:: When using `sudo_user` to a user other than root, the module arguments are briefly written into a random tempfile in /tmp. These are deleted immediately after the command is executed. This only occurs when sudoing from a user like 'bob' to 'timmy', not when going from 'bob' to 'root', or logging in directly as 'bob' or 'root'. If it concerns you that this data is briefly readable (not writable), avoid transferring unencrypted passwords with `sudo_user` set. In other cases, '/tmp' is not used and this does not come into play. Ansible also takes care to not log password parameters. .. _tasks_list: Tasks list ++++++++++ Each play contains a list of tasks. Tasks are executed in order, one at a time, against all machines matched by the host pattern, before moving on to the next task.
It is important to understand that, within a play, all hosts are going to get the same task directives. It is the purpose of a play to map a selection of hosts to tasks. When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun. The goal of each task is to execute a module, with very specific arguments. Variables, as mentioned above, can be used in arguments to modules. Modules are 'idempotent', meaning if you run them again, they will make only the changes they must in order to bring the system to the desired state. This makes it very safe to rerun the same playbook multiple times. They won't change things unless they have to. The `command` and `shell` modules will typically rerun the same command again, which is totally OK if the command is something like 'chmod' or 'setsebool', etc. There is, however, a 'creates' flag available which can be used to make these modules idempotent as well. Every task should have a `name`, which is included in the output from running the playbook. This is output for humans, so it is nice to have reasonably good descriptions of each task step. If the name is not provided though, the string fed to 'action' will be used for output. Tasks can be declared using the legacy "action: module options" format, but it is recommended that you use the more conventional "module: options" format. This recommended format is used throughout the documentation, but you may encounter the older format in some playbooks. Here is what a basic task looks like. As with most modules, the service module takes key=value arguments:: tasks: - name: make sure apache is running service: name=httpd state=running The `command` and `shell` modules are the only modules that just take a list of arguments and don't use the key=value form. This makes them work as simply as you would expect:: tasks: - name: disable selinux command: /sbin/setenforce 0 The command and shell modules care about return codes, so if you have a command whose successful exit code is not zero, you may wish to do this:: tasks: - name: run this command and ignore the result shell: /usr/bin/somecommand || /bin/true Or this:: tasks: - name: run this command and ignore the result shell: /usr/bin/somecommand ignore_errors: True If the action line is getting too long for comfort you can break it on a space and indent any continuation lines:: tasks: - name: Copy ansible inventory file to client copy: src=/etc/ansible/hosts dest=/etc/ansible/hosts owner=root group=root mode=0644 Variables can be used in action lines. Suppose you defined a variable called 'vhost' in the 'vars' section; you could do this:: tasks: - name: create a virtual host file for {{ vhost }} template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }} Those same variables are usable in templates, which we'll get to later. Now in a very basic playbook all the tasks will be listed directly in that play, though it will usually make more sense to break up tasks using the 'include:' directive. We'll show that a bit later. .. _action_shorthand: Action Shorthand ```````````````` .. versionadded:: 0.8 Ansible prefers listing modules like this in 0.8 and later:: template: src=templates/foo.j2 dest=/etc/foo.conf You will notice in earlier versions, this was only available as:: action: template src=templates/foo.j2 dest=/etc/foo.conf The old form continues to work in newer versions without any plan of deprecation. ..
_handlers: Handlers: Running Operations On Change `````````````````````````````````````` As we've mentioned, modules are written to be 'idempotent' and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change. These 'notify' actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks. For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts. Here's an example of restarting two services when the contents of a file change:: - name: template configuration file template: src=template.j2 dest=/etc/foo.conf notify: - restart memcached - restart apache The things listed in the 'notify' section of a task are called handlers. Handlers are lists of tasks, not really any different from regular tasks, that are referenced by name. Handlers are what notifiers notify. If nothing notifies a handler, it will not run. Regardless of how many things notify a handler, it will run only once, after all of the tasks complete in a particular play. Here's an example handlers section:: handlers: - name: restart memcached service: name=memcached state=restarted - name: restart apache service: name=apache state=restarted Handlers are best used to restart services and trigger reboots. You probably won't need them for much else. .. note:: Notify handlers are always run in the order written. Roles are described later on. It's worthwhile to point out that handlers are automatically processed between 'pre_tasks', 'roles', 'tasks', and 'post_tasks' sections. If you ever want to flush all the handler commands immediately though, in 1.2 and later, you can:: tasks: - shell: some tasks go here - meta: flush_handlers - shell: some other tasks In the above example any queued up handlers would be processed early when the 'meta' statement was reached. This is a bit of a niche case but can come in handy from time to time. .. _executing_a_playbook: Executing A Playbook ```````````````````` Now that you've learned playbook syntax, how do you run a playbook? It's simple. Let's run a playbook using a parallelism level of 10:: ansible-playbook playbook.yml -f 10 .. _ansible-pull: Ansible-Pull ```````````` Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing configuration out to them, you can. Ansible-pull is a small script that will check out a repo of configuration instructions from git, and then run ansible-playbook against that content. Assuming you load balance your checkout location, ansible-pull scales essentially infinitely. Run ``ansible-pull --help`` for details. There's also a `clever playbook `_ available that uses ansible in push mode to configure ansible-pull via a crontab! .. _tips_and_tricks: Tips and Tricks ``````````````` Look at the bottom of the playbook execution for a summary of the nodes that were targeted and how they performed. General failures and fatal "unreachable" communication attempts are kept separate in the counts. If you ever want to see detailed output from successful modules as well as unsuccessful ones, use the ``--verbose`` flag. This is available in Ansible 0.5 and later. Ansible playbook output is vastly upgraded if the cowsay package is installed. Try it!
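You can also do a dry run of a playbook with check mode, which reports what would have changed without actually changing anything (available in Ansible 1.1 and later; most, but not all, modules support it)::

    ansible-playbook playbook.yml --check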
To see what hosts would be affected by a playbook before you run it, you can do this:: ansible-playbook playbook.yml --list-hosts .. seealso:: :doc:`YAMLSyntax` Learn about YAML syntax :doc:`playbooks_best_practices` Various tips about managing playbooks in the real world :doc:`index` Hop back to the documentation index for a lot of special topics about playbooks :doc:`modules` Learn about available modules :doc:`developing_modules` Learn how to extend Ansible by writing your own modules :doc:`intro_patterns` Learn about how to select hosts `Github examples directory `_ Complete end-to-end playbook examples `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups ansible-1.5.4/docsite/rst/intro.rst0000664000000000000000000000115712316627017015776 0ustar rootrootIntroduction ============ Before we dive into the really fun parts -- playbooks, configuration management, deployment, and orchestration -- we'll learn how to get Ansible installed and cover some basic concepts. We'll go over how to execute ad-hoc commands in parallel across your nodes using /usr/bin/ansible. We'll also see what sort of modules are available in Ansible's core (though you can also write your own, which we'll also show later). .. toctree:: :maxdepth: 1 intro_installation intro_getting_started intro_inventory intro_dynamic_inventory intro_patterns intro_adhoc intro_configuration ansible-1.5.4/docsite/rst/intro_installation.rst0000664000000000000000000002070712316627017020561 0ustar rootrootInstallation ============ .. contents:: Topics .. _getting_ansible: Getting Ansible ``````````````` You may also wish to follow the `Github project `_ if you have a github account. This is also where we keep the issue tracker for sharing bugs and feature ideas. .. _what_will_be_installed: Basics / What Will Be Installed ``````````````````````````````` Ansible by default manages machines over the SSH protocol. Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there's no real question about how to upgrade Ansible when moving to a new version. .. _what_version: What Version To Pick? ````````````````````` Because it runs so easily from source and does not require any installation of software on remote machines, many users will actually track the development version. Ansible's release cycles are usually about two months long. Due to this short release cycle, minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch. Major bugs will still have maintenance releases when needed, though these are infrequent. If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager. For other installation options, we recommend installing via "pip", which is the Python package manager, though other options are also available. If you wish to track the development release to use and test the latest features, we will share information about running from source. It's not necessary to install the program to run from source. ..
_control_machine_requirements: Control Machine Requirements ```````````````````````````` Currently Ansible can be run from any machine with Python 2.6 installed (Windows isn't supported for the control machine). This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on. .. _managed_node_requirements: Managed Node Requirements ````````````````````````` On the managed nodes, you only need Python 2.4 or later, but if you are running less than Python 2.5 on the remotes, you will also need: * ``python-simplejson`` .. note:: Ansible's "raw" module (for executing commands in a quick and dirty way) and the script module don't even need that. So technically, you can use Ansible to install python-simplejson using the raw module, which then allows you to use everything else. (That's jumping ahead though.) .. note:: If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can of course still use the yum module in Ansible to install this package on remote systems that do not have it. .. note:: Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed by default. On those systems, you should install one, and set the 'ansible_python_interpreter' variable in inventory (see :doc:`intro_inventory`) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the 'raw' module will be able to do it remotely. .. _installing_the_control_machine: Installing the Control Machine `````````````````````````````` .. _from_source: Running From Source +++++++++++++++++++ Ansible is trivially easy to run from a checkout; root permissions are not required to use it and there is no software to actually install for Ansible itself. No daemons or database setup are required. Because of this, many users in our community use the development version of Ansible all of the time, so they can take advantage of new features when they are implemented, and also easily contribute to the project. Because there is nothing to install, following the development version is significantly easier than most open source projects. To install from source: .. code-block:: bash $ git clone git://github.com/ansible/ansible.git $ cd ./ansible $ source ./hacking/env-setup If you don't have pip installed in your version of Python, install pip:: $ sudo easy_install pip Ansible also uses the following Python modules that need to be installed:: $ sudo pip install paramiko PyYAML jinja2 httplib2 Once you have run the env-setup script, you'll be running from the checkout and the default inventory file will be /etc/ansible/hosts. You can optionally specify an inventory file (see :doc:`intro_inventory`) other than /etc/ansible/hosts: .. code-block:: bash $ echo "127.0.0.1" > ~/ansible_hosts $ export ANSIBLE_HOSTS=~/ansible_hosts You can read more about the inventory file in later parts of the manual. Now let's test things with a ping command: .. code-block:: bash $ ansible all -m ping --ask-pass You can also use "sudo make install" if you wish. ..
_from_yum: Latest Release Via Yum ++++++++++++++++++++++ RPMs are available from yum for `EPEL `_ 6 and currently supported Fedora distributions. Ansible itself can manage earlier operating systems that contain Python 2.4 or higher (so also EL5). Fedora users can install Ansible directly, though if you are using RHEL or CentOS and have not already done so, `configure EPEL `_ .. code-block:: bash # install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux $ sudo yum install ansible You can also build an RPM yourself. From the root of a checkout or tarball, use the ``make rpm`` command to build an RPM you can distribute and install. Make sure you have ``rpm-build``, ``make``, and ``python2-devel`` installed. .. code-block:: bash $ git clone git://github.com/ansible/ansible.git $ cd ./ansible $ make rpm $ sudo rpm -Uvh ~/rpmbuild/ansible-*.noarch.rpm .. _from_apt: Latest Releases Via Apt (Ubuntu) ++++++++++++++++++++++++++++++++ Ubuntu builds are available `in a PPA here `_. Once configured, .. code-block:: bash $ sudo add-apt-repository ppa:rquillo/ansible $ sudo apt-get update $ sudo apt-get install ansible Debian/Ubuntu packages can also be built from the source checkout, run: .. code-block:: bash $ make deb You may also wish to run from source to get the latest, which is covered above. .. _from_pkg: Latest Releases Via pkg (FreeBSD) +++++++++++++++++++++++++++++++++ .. code-block:: bash $ sudo pkg install ansible You may also wish to install from ports, run: .. code-block:: bash $ sudo make -C /usr/ports/sysutils/ansible install .. _from_pip: Latest Releases Via Pip +++++++++++++++++++++++ Ansible can be installed via "pip", the Python package manager. If 'pip' isn't already available in your version of Python, you can get pip by:: $ sudo easy_install pip Then install Ansible with:: $ sudo pip install ansible Readers that use virtualenv can also install Ansible under virtualenv, though we'd recommend not worrying about it and just installing Ansible globally. Do not use easy_install to install ansible directly. .. _tagged_releases: Tarballs of Tagged Releases +++++++++++++++++++++++++++ Packaging Ansible or wanting to build a local package yourself, but don't want to do a git checkout? Tarballs of releases are available on the `Ansible downloads `_ page. These releases are also tagged in the `git repository `_ with the release version. .. seealso:: :doc:`intro_adhoc` Examples of basic commands :doc:`playbooks` Learning ansible's configuration management language `Mailing List `_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/guides.rst0000664000000000000000000000064412316627017016123 0ustar rootrootDetailed Guides ``````````````` This section is new and evolving. The idea here is to explore particular use cases in greater depth and provide a more "top down" explanation of some basic features. .. toctree:: :maxdepth: 1 guide_aws guide_rax guide_vagrant guide_rolling_upgrade Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deployment, and more. ansible-1.5.4/docsite/rst/developing_api.rst0000664000000000000000000000625412316627017017631 0ustar rootrootPython API ========== .. contents:: Topics There are several interesting ways to use Ansible from an API perspective.
You can use the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can write various plugins, and you can plug in inventory data from external data sources. This document covers the Runner and Playbook API at a basic level. If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously, or have access control and logging demands, take a look at :doc:`tower` as it has a very nice REST API that provides all of these things at a higher level. Ansible itself is built on this API, so you have a considerable amount of power across the board. This chapter discusses the Python API. .. _python_api: Python API ---------- The Python API is very powerful, and is how the ansible CLI and ansible-playbook are implemented. It's pretty simple:: import ansible.runner runner = ansible.runner.Runner( module_name='ping', module_args='', pattern='web*', forks=10 ) datastructure = runner.run() The run method returns results per host, grouped by whether they could be contacted or not. Return types are module specific, as expressed in the :doc:`modules` documentation:: { "dark" : { "web1.example.com" : "failure message" }, "contacted" : { "web2.example.com" : 1 } } A module can return any type of JSON data it wants, so Ansible can be used as a framework to rapidly build powerful applications and scripts. .. _detailed_api_example: Detailed API Example ```````````````````` The following script prints out the uptime information for all hosts:: #!/usr/bin/python import ansible.runner import sys # construct the ansible runner and execute on all hosts results = ansible.runner.Runner( pattern='*', forks=10, module_name='command', module_args='/usr/bin/uptime', ).run() if results is None: print "No hosts found" sys.exit(1) print "UP ***********" for (hostname, result) in results['contacted'].items(): if 'failed' not in result: print "%s >>> %s" % (hostname, result['stdout']) print "FAILED *******" for (hostname, result) in results['contacted'].items(): if 'failed' in result: print "%s >>> %s" % (hostname, result['msg']) print "DOWN *********" for (hostname, result) in results['dark'].items(): print "%s >>> %s" % (hostname, result) Advanced programmers may also wish to read the source to ansible itself, for it uses the Runner() API (with all available options) to implement the command line tools ``ansible`` and ``ansible-playbook``. .. seealso:: :doc:`developing_inventory` Developing dynamic inventory integrations :doc:`developing_modules` How to develop modules :doc:`developing_plugins` How to develop plugins `Development Mailing List `_ Mailing list for development topics `irc.freenode.net `_ #ansible IRC chat channel ansible-1.5.4/docsite/rst/tower.rst0000664000000000000000000000202012316627017015771 0ustar rootrootAnsible Tower ````````````` `Ansible Tower `_ (formerly 'AWX') is a web-based solution that makes Ansible even easier to use for IT teams of all kinds. It's designed to be the hub for all of your automation tasks. Tower allows you to control who can access what, even allowing sharing of SSH credentials without someone being able to transfer those credentials. Inventory can be graphically managed or synced with a wide variety of cloud sources. It logs all of your jobs, integrates well with LDAP, and has an amazing browsable REST API. Command line tools are available for easy integration with Jenkins as well. Provisioning callbacks provide great support for autoscaling topologies.
Find out more about Tower features and how to download it on the `Ansible Tower webpage `_. Tower is free for use on up to 10 nodes, and comes bundled with amazing support from Ansible, Inc. As you would expect, Ansible is installed using Ansible playbooks! ansible-1.5.4/docsite/rst/guide_vagrant.rst0000664000000000000000000001040612316627017017457 0ustar rootrootUsing Vagrant and Ansible ========================= .. _vagrant_intro: Introduction ```````````` Vagrant is a tool to manage virtual machine environments, and allows you to configure and use reproducible work environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner for these virtual machines, and the two tools work together well. This guide will describe how to use Vagrant and Ansible together. If you're not familiar with Vagrant, you should visit `the documentation `_. This guide assumes that you already have Ansible installed and working. Running from a Git checkout is fine. Follow the :doc:`intro_installation` guide for more information. .. _vagrant_setup: Vagrant Setup ````````````` The first step once you've installed Vagrant is to create a ``Vagrantfile`` and customize it to suit your needs. This is covered in detail in the Vagrant documentation, but here is a quick example: .. code-block:: bash $ mkdir vagrant-test $ cd vagrant-test $ vagrant init precise32 http://files.vagrantup.com/precise32.box This will create a file called Vagrantfile that you can edit to suit your needs. The default Vagrantfile has a lot of comments. Here is a simplified example that includes a section to use the Ansible provisioner: .. code-block:: ruby # Vagrantfile API/syntax version. Don't touch unless you know what you're doing! VAGRANTFILE_API_VERSION = "2" Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.box = "precise32" config.vm.box_url = "http://files.vagrantup.com/precise32.box" config.vm.network :public_network config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" end end The Vagrantfile has a lot of options, but these are the most important ones. Notice the ``config.vm.provision`` section that refers to an Ansible playbook called ``playbook.yml`` in the same directory as the Vagrantfile. Vagrant runs the provisioner once the virtual machine has booted and is ready for SSH access. .. code-block:: bash $ vagrant up This will start the VM and run the provisioning playbook. There are a lot of Ansible options you can configure in your Vagrantfile. Some particularly useful options are ``ansible.extra_vars``, ``ansible.sudo`` and ``ansible.sudo_user``, and ``ansible.host_key_checking`` which you can disable to avoid SSH connection problems to new virtual machines. Visit the `Ansible Provisioner documentation `_ for more information. To re-run a playbook on an existing VM, just run: .. code-block:: bash $ vagrant provision This will re-run the playbook. .. _running_ansible: Running Ansible Manually ```````````````````````` Sometimes you may want to run Ansible manually against the machines. This is pretty easy to do. Vagrant automatically creates an inventory file for each Vagrant machine in the same directory called ``vagrant_ansible_inventory_machinename``. It configures the inventory file according to the SSH tunnel that Vagrant automatically creates, and executes ``ansible-playbook`` with the correct username and SSH key options to allow access. A typical automatically-created inventory file may look something like this: ..
code-block:: none # Generated by Vagrant machine ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 If you want to run Ansible manually, make sure to pass the ``ansible`` or ``ansible-playbook`` commands the correct arguments for the username (usually ``vagrant``), the SSH key (usually ``~/.vagrant.d/insecure_private_key``), and the autogenerated inventory file. Here is an example: .. code-block:: bash $ ansible-playbook -i vagrant_ansible_inventory_machinename --private-key=~/.vagrant.d/insecure_private_key -u vagrant playbook.yml .. seealso:: `Vagrant Home <http://www.vagrantup.com/>`_ The Vagrant homepage with downloads `Vagrant Documentation <http://docs.vagrantup.com/v2/>`_ Vagrant Documentation `Ansible Provisioner <http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ The Vagrant documentation for the Ansible provisioner :doc:`playbooks` An introduction to playbooks ansible-1.5.4/docsite/man/0000775000000000000000000000000012316627017014050 5ustar rootrootansible-1.5.4/docsite/man/ansible.1.html0000664000000000000000000001564012316627017016520 0ustar rootroot ansible

Name

ansible — run a command somewhere else

Synopsis

ansible <host-pattern> [-f forks] [-m module_name] [-a args]

DESCRIPTION

Ansible is an extra-simple tool/framework/API for doing 'remote things' over SSH.

ARGUMENTS

host-pattern
A name of a group in the inventory file, a shell-like glob selecting hosts in the inventory file, or any combination of the two separated by semicolons.
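
For example, each of the forms above can be exercised with the ping module (the group and host names here are illustrative):

    ansible webservers -m ping
    ansible 'web*.example.com' -m ping
    ansible 'webservers;dbservers' -m ping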

OPTIONS

-i PATH, --inventory=PATH
The PATH to the inventory hosts file, which defaults to /etc/ansible/hosts.
-f NUM, --forks=NUM
Level of parallelism. NUM is specified as an integer; the default is 5.
-m NAME, --module-name=NAME
Execute the module called NAME.
-M DIRECTORY, --module-path=DIRECTORY
The DIRECTORY to load modules from. The default is /usr/share/ansible.
-a 'ARGUMENTS', --args='ARGUMENTS'
The ARGUMENTS to pass to the module.
-k, --ask-pass
Prompt for the SSH password instead of assuming key-based authentication with ssh-agent.
-o, --one-line
Try to output everything on one line.
-t DIRECTORY, --tree=DIRECTORY
Save contents in this output DIRECTORY, with the results saved in a file named after each host.
-T SECONDS, --timeout=SECONDS
Connection timeout to use when trying to talk to hosts, in SECONDS.
-B NUM, --background=NUM
Run commands in the background, killing the task after NUM seconds.
-P NUM, --poll=NUM
Poll a background job every NUM seconds. Requires -B.
-u USERNAME, --remote-user=USERNAME
Use this remote USERNAME instead of root.
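
For example, the following command combines several of these options to check uptime across ten parallel forks as a non-root user (the pattern, inventory path, and username are illustrative):

    ansible 'web*' -i /etc/ansible/hosts -f 10 -u deploy -m command -a '/usr/bin/uptime'

A long-running command can be backgrounded and polled with -B and -P (the script path here is hypothetical):

    ansible 'web*' -B 3600 -P 60 -m command -a '/usr/local/bin/long_job'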

INVENTORY

Ansible stores the hosts it can potentially operate on in an inventory file. The syntax is one host per line. Group headers are allowed and are included on their own line, enclosed in square brackets.
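
For example, a minimal inventory file with one group header might look like this (the hostnames are illustrative):

    [webservers]
    web1.example.com
    web2.example.com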

FILES

/etc/ansible/hosts — Default inventory file

/usr/share/ansible/ — Default module library

ENVIRONMENT

The following environment variables may be specified.

ANSIBLE_HOSTS  — Override the default ansible hosts file

ANSIBLE_LIBRARY — Override the default ansible module library path

AUTHOR

Ansible was originally written by Michael DeHaan. See the AUTHORS file for a complete list of contributors.

COPYRIGHT

Copyright © 2012, Michael DeHaan

Ansible is released under the terms of the GPLv3 License.

SEE ALSO

ansible-playbook(1)

Extensive documentation as well as IRC and mailing list info is available on the ansible home page: https://ansible.github.com/

ansible-1.5.4/docsite/man/ansible-playbook.1.html0000664000000000000000000001040112316627017020324 0ustar rootroot ansible-playbook

Name

ansible-playbook — run an ansible playbook

Synopsis

ansible-playbook <filename.yml> … [options]

DESCRIPTION

Ansible playbooks are a configuration and multinode deployment system. Ansible-playbook is the tool used to run them. See the project home page (link below) for more information.

ARGUMENTS

filename.yml
The names of one or more YAML format files to run as ansible playbooks.
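
A minimal playbook file might look like this (the group name and task are illustrative):

    ---
    - hosts: webservers
      remote_user: root
      tasks:
        - name: make sure ntpd is running
          service: name=ntpd state=started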

OPTIONS

-i PATH, --inventory=PATH
The PATH to the inventory hosts file, which defaults to /etc/ansible/hosts.
-M DIRECTORY, --module-path=DIRECTORY
The DIRECTORY to load modules from. The default is /usr/share/ansible.
-f NUM, --forks=NUM
Level of parallelism. NUM is specified as an integer; the default is 5.
-k, --ask-pass
Prompt for the SSH password instead of assuming key-based authentication with ssh-agent.
-T SECONDS, --timeout=SECONDS
Connection timeout to use when trying to talk to hosts, in SECONDS.
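
For example, the following runs a playbook against an alternate inventory file with ten parallel forks (the filenames are illustrative):

    ansible-playbook site.yml -i /etc/ansible/hosts -f 10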

ENVIRONMENT

The following environment variables may be specified.

ANSIBLE_HOSTS  — Override the default ansible hosts file

ANSIBLE_LIBRARY — Override the default ansible module library path

AUTHOR

Ansible was originally written by Michael DeHaan. See the AUTHORS file for a complete list of contributors.

COPYRIGHT

Copyright © 2012, Michael DeHaan

Ansible is released under the terms of the GPLv3 License.

SEE ALSO

ansible(1)

Extensive documentation as well as IRC and mailing list info is available on the ansible home page: https://ansible.github.com/

ansible-1.5.4/docsite/modules.js0000664000000000000000000000013212316627017015277 0ustar rootrootfunction AnsibleModules($scope) { $scope.modules = []; $scope.orderProp = "module"; }ansible-1.5.4/docsite/js/0000775000000000000000000000000012316627017013711 5ustar rootrootansible-1.5.4/docsite/js/ansible/0000775000000000000000000000000012316627017015326 5ustar rootrootansible-1.5.4/docsite/js/ansible/application.js0000664000000000000000000000552612316627017020177 0ustar rootrootangular.module('ansibleApp', []).filter('moduleVersion', function() { return function(modules, version) { var parseVersionString = function (str) { if (typeof(str) != 'string') { return false; } var x = str.split('.'); // parse from string or default to 0 if can't parse var maj = parseInt(x[0]) || 0; var min = parseInt(x[1]) || 0; var pat = parseInt(x[2]) || 0; return { major: maj, minor: min, patch: pat } } var vMinMet = function(vmin, vcurrent) { minimum = parseVersionString(vmin); running = parseVersionString(vcurrent); if (running.major != minimum.major) return (running.major > minimum.major); else { if (running.minor != minimum.minor) return (running.minor > minimum.minor); else { if (running.patch != minimum.patch) return (running.patch > minimum.patch); else return true; } } }; var result = []; if (!version) { return modules; } for (var i = 0; i < modules.length; i++) { if (vMinMet(modules[i].version_added, version)) { result[result.length] = modules[i]; } } return result; }; }).filter('uniqueVersion', function() { return function(modules) { var result = []; var inArray = function (needle, haystack) { var length = haystack.length; for(var i = 0; i < length; i++) { if(haystack[i] == needle) return true; } return false; } var parseVersionString = function (str) { if (typeof(str) != 'string') { return false; } var x = str.split('.'); // parse from string or default to 0 if can't parse var maj = parseInt(x[0]) || 0; var min = parseInt(x[1]) || 0; var pat = parseInt(x[2]) || 0; return { major: maj, minor: min, patch: pat } } for (var i = 0; i < modules.length; i++) { if (!inArray(modules[i].version_added, result)) { // Some module do not define version if (modules[i].version_added) { result[result.length] = "" + modules[i].version_added; } } } result.sort( function (a, b) { ao = parseVersionString(a); bo = parseVersionString(b); if (ao.major == bo.major) { if (ao.minor == bo.minor) { if (ao.patch == bo.patch) { return 0; } else { return (ao.patch > bo.patch) ? 1 : -1; } } else { return (ao.minor > bo.minor) ? 1 : -1; } } else { return (ao.major > bo.major) ? 1 : -1; } }); return result; }; }); ansible-1.5.4/docsite/_themes/0000775000000000000000000000000012316627017014721 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/0000775000000000000000000000000012316627017015675 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/versions.html0000664000000000000000000000225212316627017020434 0ustar rootroot{% if READTHEDOCS %} {# Add rst-badge after rst-versions for small badge style. #}
Read the Docs v: {{ current_version }}
Versions
{% for slug, url in versions %}
{{ slug }}
{% endfor %}
Downloads
{% for type, url in downloads %}
{{ type }}
{% endfor %}
On Read the Docs
Project Home
Builds

Free document hosting provided by Read the Docs.
{% endif %} ansible-1.5.4/docsite/_themes/srtd/layout_old.html0000664000000000000000000001625512316627017020747 0ustar rootroot{# basic/layout.html ~~~~~~~~~~~~~~~~~ Master layout template for Sphinx themes. :copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- block doctype -%} {%- endblock %} {%- set reldelim1 = reldelim1 is not defined and ' »' or reldelim1 %} {%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %} {%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and (sidebars != []) %} {%- set url_root = pathto('', 1) %} {# XXX necessary? #} {%- if url_root == '#' %}{% set url_root = '' %}{% endif %} {%- if not embedded and docstitle %} {%- set titlesuffix = " — "|safe + docstitle|e %} {%- else %} {%- set titlesuffix = "" %} {%- endif %} {%- macro relbar() %} {%- endmacro %} {%- macro sidebar() %} {%- if render_sidebar %}
{%- block sidebarlogo %} {%- if logo %} {%- endif %} {%- endblock %} {%- if sidebars != None %} {#- new style sidebar: explicitly include/exclude templates #} {%- for sidebartemplate in sidebars %} {%- include sidebartemplate %} {%- endfor %} {%- else %} {#- old style sidebars: using blocks -- should be deprecated #} {%- block sidebartoc %} {%- include "localtoc.html" %} {%- endblock %} {%- block sidebarrel %} {%- include "relations.html" %} {%- endblock %} {%- block sidebarsourcelink %} {%- include "sourcelink.html" %} {%- endblock %} {%- if customsidebar %} {%- include customsidebar %} {%- endif %} {%- block sidebarsearch %} {%- include "searchbox.html" %} {%- endblock %} {%- endif %}
{%- endif %} {%- endmacro %} {%- macro script() %} {%- for scriptfile in script_files %} {%- endfor %} {%- endmacro %} {%- macro css() %} {%- for cssfile in css_files %} {%- endfor %} {%- endmacro %} {{ metatags }} {%- block htmltitle %} {{ title|striptags|e }}{{ titlesuffix }} {%- endblock %} {{ css() }} {%- if not embedded %} {{ script() }} {%- if use_opensearch %} {%- endif %} {%- if favicon %} {%- endif %} {%- endif %} {%- block linktags %} {%- if hasdoc('about') %} {%- endif %} {%- if hasdoc('genindex') %} {%- endif %} {%- if hasdoc('search') %} {%- endif %} {%- if hasdoc('copyright') %} {%- endif %} {%- if parents %} {%- endif %} {%- if next %} {%- endif %} {%- if prev %} {%- endif %} {%- endblock %} {%- block extrahead %} {% endblock %} {%- block header %}{% endblock %} {%- block relbar1 %}{{ relbar() }}{% endblock %} {%- block content %} {%- block sidebar1 %} {# possible location for sidebar #} {% endblock %}
{%- block document %}
{%- if render_sidebar %}
{%- endif %}
{% block body %} {% endblock %}
{%- if render_sidebar %}
{%- endif %}
{%- endblock %} {%- block sidebar2 %}{{ sidebar() }}{% endblock %}
{%- endblock %} {%- block relbar2 %}{{ relbar() }}{% endblock %} {%- block footer %}

asdf asdf asdf asdf 22

{%- endblock %} ansible-1.5.4/docsite/_themes/srtd/static/0000775000000000000000000000000012316627017017164 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/static/js/0000775000000000000000000000000012316627017017600 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/static/js/theme.js0000664000000000000000000000130512316627017021237 0ustar rootroot$( document ).ready(function() { // Shift nav in mobile when clicking the menu. $("[data-toggle='wy-nav-top']").click(function() { $("[data-toggle='wy-nav-shift']").toggleClass("shift"); $("[data-toggle='rst-versions']").toggleClass("shift"); }); // Close menu when you click a link. $(".wy-menu-vertical .current ul li a").click(function() { $("[data-toggle='wy-nav-shift']").removeClass("shift"); $("[data-toggle='rst-versions']").toggleClass("shift"); }); $("[data-toggle='rst-current-version']").click(function() { $("[data-toggle='rst-versions']").toggleClass("shift-up"); }); $("table.docutils:not(.field-list").wrap("
"); }); ansible-1.5.4/docsite/_themes/srtd/static/css/0000775000000000000000000000000012316627017017754 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/static/css/theme.css0000664000000000000000000030753712316627017021607 0ustar rootroot* { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } article, aside, details, figcaption, figure, footer, header, hgroup, nav, section { display: block; } audio, canvas, video { display: inline-block; *display: inline; *zoom: 1; } audio:not([controls]) { display: none; } [hidden] { display: none; } * { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } html { font-size: 100%; -webkit-text-size-adjust: 100%; -ms-text-size-adjust: 100%; } body { margin: 0; } a:hover, a:active { outline: 0; } abbr[title] { border-bottom: 1px dotted; } b, strong { font-weight: bold; } blockquote { margin: 0; } dfn { font-style: italic; } hr { display: block; height: 1px; border: 0; border-top: 1px solid #ccc; margin: 20px 0; padding: 0; } ins { background: #ff9; color: #000; text-decoration: none; } mark { background: #ff0; color: #000; font-style: italic; font-weight: bold; } pre, code, .rst-content tt, kbd, samp { font-family: monospace, serif; _font-family: "courier new", monospace; font-size: 1em; } pre { white-space: pre; } q { quotes: none; } q:before, q:after { content: ""; content: none; } small { font-size: 85%; } sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; } sup { top: -0.5em; } sub { bottom: -0.25em; } ul, ol, dl { margin: 0; padding: 0; list-style: none; list-style-image: none; } li { list-style: none; } dd { margin: 0; } img { border: 0; -ms-interpolation-mode: bicubic; vertical-align: middle; max-width: 100%; } svg:not(:root) { overflow: hidden; } figure { margin: 0; } form { margin: 0; } fieldset { border: 0; margin: 0; padding: 0; } label { cursor: pointer; } legend { border: 0; *margin-left: -7px; padding: 0; white-space: normal; } button, input, select, textarea { font-size: 100%; margin: 0; vertical-align: baseline; *vertical-align: middle; } button, input { line-height: normal; } button, input[type="button"], input[type="reset"], input[type="submit"] { cursor: pointer; -webkit-appearance: button; *overflow: visible; } button[disabled], input[disabled] { cursor: default; } input[type="checkbox"], input[type="radio"] { box-sizing: border-box; padding: 0; *width: 13px; *height: 13px; } input[type="search"] { -webkit-appearance: textfield; -moz-box-sizing: content-box; -webkit-box-sizing: content-box; box-sizing: content-box; } input[type="search"]::-webkit-search-decoration, input[type="search"]::-webkit-search-cancel-button { -webkit-appearance: none; } button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; } textarea { overflow: auto; vertical-align: top; resize: vertical; } table { border-collapse: collapse; border-spacing: 0; } td { vertical-align: top; } .chromeframe { margin: 0.2em 0; background: #ccc; color: #000; padding: 0.2em 0; } .ir { display: block; border: 0; text-indent: -999em; overflow: hidden; background-color: transparent; background-repeat: no-repeat; text-align: left; direction: ltr; *line-height: 0; } .ir br { display: none; } .hidden { display: none !important; visibility: hidden; } .visuallyhidden { border: 0; clip: rect(0 0 0 0); height: 1px; margin: -1px; overflow: hidden; padding: 0; position: absolute; width: 1px; } .visuallyhidden.focusable:active, .visuallyhidden.focusable:focus { clip: auto; 
height: auto; margin: 0; overflow: visible; position: static; width: auto; } .invisible { visibility: hidden; } .relative { position: relative; } big, small { font-size: 100%; } @media print { html, body, section { background: none !important; } * { box-shadow: none !important; text-shadow: none !important; filter: none !important; -ms-filter: none !important; } a, a:visited { text-decoration: underline; } .ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { content: ""; } pre, blockquote { page-break-inside: avoid; } thead { display: table-header-group; } tr, img { page-break-inside: avoid; } img { max-width: 100% !important; } @page { margin: 0.5cm; } p, h2, h3 { orphans: 3; widows: 3; } h2, h3 { page-break-after: avoid; } } .font-smooth, .icon:before, .wy-inline-validate.wy-inline-validate-success .wy-input-context:before, .wy-inline-validate.wy-inline-validate-danger .wy-input-context:before, .wy-inline-validate.wy-inline-validate-warning .wy-input-context:before, .wy-inline-validate.wy-inline-validate-info .wy-input-context:before, .wy-tag-input-group .wy-tag .wy-tag-remove:before, .rst-content .admonition-title:before, .rst-content h1 .headerlink:before, .rst-content h2 .headerlink:before, .rst-content h3 .headerlink:before, .rst-content h4 .headerlink:before, .rst-content h5 .headerlink:before, .rst-content h6 .headerlink:before, .rst-content dl dt .headerlink:before, .wy-alert, .rst-content .note, .rst-content .attention, .rst-content .caution, .rst-content .danger, .rst-content .error, .rst-content .hint, .rst-content .important, .rst-content .tip, .rst-content .warning, .btn, input[type="text"], input[type="password"], input[type="email"], input[type="url"], input[type="date"], input[type="month"], input[type="time"], input[type="datetime"], input[type="datetime-local"], input[type="week"], input[type="number"], input[type="search"], input[type="tel"], input[type="color"], select, textarea, .wy-tag-input-group, .wy-menu-vertical li.on a, .wy-menu-vertical li.current>a, .wy-side-nav-search>a, .wy-side-nav-search .wy-dropdown>a, .wy-nav-top a { -webkit-font-smoothing: antialiased; } .clearfix { *zoom: 1; } .clearfix:before, .clearfix:after { display: table; content: ""; } .clearfix:after { clear: both; } @font-face { font-family: fontawesome-webfont; font-weight: normal; font-style: normal; src: url("../font/fontawesome_webfont.eot"); src: url("../font/fontawesome_webfont.eot?#iefix") format("embedded-opentype"), url("../font/fontawesome_webfont.woff") format("woff"), url("../font/fontawesome_webfont.ttf") format("truetype"), url("../font/fontawesome_webfont.svg#fontawesome-webfont") format("svg"); } .icon:before, .wy-inline-validate.wy-inline-validate-success .wy-input-context:before, .wy-inline-validate.wy-inline-validate-danger .wy-input-context:before, .wy-inline-validate.wy-inline-validate-warning .wy-input-context:before, .wy-inline-validate.wy-inline-validate-info .wy-input-context:before, .wy-tag-input-group .wy-tag .wy-tag-remove:before, .rst-content .admonition-title:before, .rst-content h1 .headerlink:before, .rst-content h2 .headerlink:before, .rst-content h3 .headerlink:before, .rst-content h4 .headerlink:before, .rst-content h5 .headerlink:before, .rst-content h6 .headerlink:before, .rst-content dl dt .headerlink:before { display: inline-block; font-family: fontawesome-webfont; font-style: normal; font-weight: normal; line-height: 1; text-decoration: inherit; } a .icon, a .wy-inline-validate.wy-inline-validate-success .wy-input-context, 
.wy-inline-validate.wy-inline-validate-success a .wy-input-context, a .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger a .wy-input-context, a .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning a .wy-input-context, a .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info a .wy-input-context, a .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag a .wy-tag-remove, a .rst-content .admonition-title, .rst-content a .admonition-title, a .rst-content h1 .headerlink, .rst-content h1 a .headerlink, a .rst-content h2 .headerlink, .rst-content h2 a .headerlink, a .rst-content h3 .headerlink, .rst-content h3 a .headerlink, a .rst-content h4 .headerlink, .rst-content h4 a .headerlink, a .rst-content h5 .headerlink, .rst-content h5 a .headerlink, a .rst-content h6 .headerlink, .rst-content h6 a .headerlink, a .rst-content dl dt .headerlink, .rst-content dl dt a .headerlink { display: inline-block; text-decoration: inherit; } .icon-large:before { vertical-align: -10%; font-size: 1.33333em; } .btn .icon, .btn .wy-inline-validate.wy-inline-validate-success .wy-input-context, .wy-inline-validate.wy-inline-validate-success .btn .wy-input-context, .btn .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger .btn .wy-input-context, .btn .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning .btn .wy-input-context, .btn .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info .btn .wy-input-context, .btn .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag .btn .wy-tag-remove, .btn .rst-content .admonition-title, .rst-content .btn .admonition-title, .btn .rst-content h1 .headerlink, .rst-content h1 .btn .headerlink, .btn .rst-content h2 .headerlink, .rst-content h2 .btn .headerlink, .btn .rst-content h3 .headerlink, .rst-content h3 .btn .headerlink, .btn .rst-content h4 .headerlink, .rst-content h4 .btn .headerlink, .btn .rst-content h5 .headerlink, .rst-content h5 .btn .headerlink, .btn .rst-content h6 .headerlink, .rst-content h6 .btn .headerlink, .btn .rst-content dl dt .headerlink, .rst-content dl dt .btn .headerlink, .nav .icon, .nav .wy-inline-validate.wy-inline-validate-success .wy-input-context, .wy-inline-validate.wy-inline-validate-success .nav .wy-input-context, .nav .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger .nav .wy-input-context, .nav .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning .nav .wy-input-context, .nav .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info .nav .wy-input-context, .nav .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag .nav .wy-tag-remove, .nav .rst-content .admonition-title, .rst-content .nav .admonition-title, .nav .rst-content h1 .headerlink, .rst-content h1 .nav .headerlink, .nav .rst-content h2 .headerlink, .rst-content h2 .nav .headerlink, .nav .rst-content h3 .headerlink, .rst-content h3 .nav .headerlink, .nav .rst-content h4 .headerlink, .rst-content h4 .nav .headerlink, .nav .rst-content h5 .headerlink, .rst-content h5 .nav .headerlink, .nav .rst-content h6 .headerlink, .rst-content h6 .nav 
.headerlink, .nav .rst-content dl dt .headerlink, .rst-content dl dt .nav .headerlink { display: inline; } .btn .icon.icon-large, .btn .wy-inline-validate.wy-inline-validate-success .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-success .btn .icon-large.wy-input-context, .btn .wy-inline-validate.wy-inline-validate-danger .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-danger .btn .icon-large.wy-input-context, .btn .wy-inline-validate.wy-inline-validate-warning .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-warning .btn .icon-large.wy-input-context, .btn .wy-inline-validate.wy-inline-validate-info .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-info .btn .icon-large.wy-input-context, .btn .wy-tag-input-group .wy-tag .icon-large.wy-tag-remove, .wy-tag-input-group .wy-tag .btn .icon-large.wy-tag-remove, .btn .rst-content .icon-large.admonition-title, .rst-content .btn .icon-large.admonition-title, .btn .rst-content h1 .icon-large.headerlink, .rst-content h1 .btn .icon-large.headerlink, .btn .rst-content h2 .icon-large.headerlink, .rst-content h2 .btn .icon-large.headerlink, .btn .rst-content h3 .icon-large.headerlink, .rst-content h3 .btn .icon-large.headerlink, .btn .rst-content h4 .icon-large.headerlink, .rst-content h4 .btn .icon-large.headerlink, .btn .rst-content h5 .icon-large.headerlink, .rst-content h5 .btn .icon-large.headerlink, .btn .rst-content h6 .icon-large.headerlink, .rst-content h6 .btn .icon-large.headerlink, .btn .rst-content dl dt .icon-large.headerlink, .rst-content dl dt .btn .icon-large.headerlink, .nav .icon.icon-large, .nav .wy-inline-validate.wy-inline-validate-success .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-success .nav .icon-large.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-danger .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-danger .nav .icon-large.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-warning .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-warning .nav .icon-large.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-info .icon-large.wy-input-context, .wy-inline-validate.wy-inline-validate-info .nav .icon-large.wy-input-context, .nav .wy-tag-input-group .wy-tag .icon-large.wy-tag-remove, .wy-tag-input-group .wy-tag .nav .icon-large.wy-tag-remove, .nav .rst-content .icon-large.admonition-title, .rst-content .nav .icon-large.admonition-title, .nav .rst-content h1 .icon-large.headerlink, .rst-content h1 .nav .icon-large.headerlink, .nav .rst-content h2 .icon-large.headerlink, .rst-content h2 .nav .icon-large.headerlink, .nav .rst-content h3 .icon-large.headerlink, .rst-content h3 .nav .icon-large.headerlink, .nav .rst-content h4 .icon-large.headerlink, .rst-content h4 .nav .icon-large.headerlink, .nav .rst-content h5 .icon-large.headerlink, .rst-content h5 .nav .icon-large.headerlink, .nav .rst-content h6 .icon-large.headerlink, .rst-content h6 .nav .icon-large.headerlink, .nav .rst-content dl dt .icon-large.headerlink, .rst-content dl dt .nav .icon-large.headerlink { line-height: 0.9em; } .btn .icon.icon-spin, .btn .wy-inline-validate.wy-inline-validate-success .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-success .btn .icon-spin.wy-input-context, .btn .wy-inline-validate.wy-inline-validate-danger .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-danger .btn .icon-spin.wy-input-context, .btn 
.wy-inline-validate.wy-inline-validate-warning .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-warning .btn .icon-spin.wy-input-context, .btn .wy-inline-validate.wy-inline-validate-info .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-info .btn .icon-spin.wy-input-context, .btn .wy-tag-input-group .wy-tag .icon-spin.wy-tag-remove, .wy-tag-input-group .wy-tag .btn .icon-spin.wy-tag-remove, .btn .rst-content .icon-spin.admonition-title, .rst-content .btn .icon-spin.admonition-title, .btn .rst-content h1 .icon-spin.headerlink, .rst-content h1 .btn .icon-spin.headerlink, .btn .rst-content h2 .icon-spin.headerlink, .rst-content h2 .btn .icon-spin.headerlink, .btn .rst-content h3 .icon-spin.headerlink, .rst-content h3 .btn .icon-spin.headerlink, .btn .rst-content h4 .icon-spin.headerlink, .rst-content h4 .btn .icon-spin.headerlink, .btn .rst-content h5 .icon-spin.headerlink, .rst-content h5 .btn .icon-spin.headerlink, .btn .rst-content h6 .icon-spin.headerlink, .rst-content h6 .btn .icon-spin.headerlink, .btn .rst-content dl dt .icon-spin.headerlink, .rst-content dl dt .btn .icon-spin.headerlink, .nav .icon.icon-spin, .nav .wy-inline-validate.wy-inline-validate-success .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-success .nav .icon-spin.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-danger .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-danger .nav .icon-spin.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-warning .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-warning .nav .icon-spin.wy-input-context, .nav .wy-inline-validate.wy-inline-validate-info .icon-spin.wy-input-context, .wy-inline-validate.wy-inline-validate-info .nav .icon-spin.wy-input-context, .nav .wy-tag-input-group .wy-tag .icon-spin.wy-tag-remove, .wy-tag-input-group .wy-tag .nav .icon-spin.wy-tag-remove, .nav .rst-content .icon-spin.admonition-title, .rst-content .nav .icon-spin.admonition-title, .nav .rst-content h1 .icon-spin.headerlink, .rst-content h1 .nav .icon-spin.headerlink, .nav .rst-content h2 .icon-spin.headerlink, .rst-content h2 .nav .icon-spin.headerlink, .nav .rst-content h3 .icon-spin.headerlink, .rst-content h3 .nav .icon-spin.headerlink, .nav .rst-content h4 .icon-spin.headerlink, .rst-content h4 .nav .icon-spin.headerlink, .nav .rst-content h5 .icon-spin.headerlink, .rst-content h5 .nav .icon-spin.headerlink, .nav .rst-content h6 .icon-spin.headerlink, .rst-content h6 .nav .icon-spin.headerlink, .nav .rst-content dl dt .icon-spin.headerlink, .rst-content dl dt .nav .icon-spin.headerlink { display: inline-block; } .btn.icon:before, .wy-inline-validate.wy-inline-validate-success .btn.wy-input-context:before, .wy-inline-validate.wy-inline-validate-danger .btn.wy-input-context:before, .wy-inline-validate.wy-inline-validate-warning .btn.wy-input-context:before, .wy-inline-validate.wy-inline-validate-info .btn.wy-input-context:before, .wy-tag-input-group .wy-tag .btn.wy-tag-remove:before, .rst-content .btn.admonition-title:before, .rst-content h1 .btn.headerlink:before, .rst-content h2 .btn.headerlink:before, .rst-content h3 .btn.headerlink:before, .rst-content h4 .btn.headerlink:before, .rst-content h5 .btn.headerlink:before, .rst-content h6 .btn.headerlink:before, .rst-content dl dt .btn.headerlink:before { opacity: 0.5; -webkit-transition: opacity 0.05s ease-in; -moz-transition: opacity 0.05s ease-in; transition: opacity 0.05s ease-in; } .btn.icon:hover:before, 
.wy-inline-validate.wy-inline-validate-success .btn.wy-input-context:hover:before, .wy-inline-validate.wy-inline-validate-danger .btn.wy-input-context:hover:before, .wy-inline-validate.wy-inline-validate-warning .btn.wy-input-context:hover:before, .wy-inline-validate.wy-inline-validate-info .btn.wy-input-context:hover:before, .wy-tag-input-group .wy-tag .btn.wy-tag-remove:hover:before, .rst-content .btn.admonition-title:hover:before, .rst-content h1 .btn.headerlink:hover:before, .rst-content h2 .btn.headerlink:hover:before, .rst-content h3 .btn.headerlink:hover:before, .rst-content h4 .btn.headerlink:hover:before, .rst-content h5 .btn.headerlink:hover:before, .rst-content h6 .btn.headerlink:hover:before, .rst-content dl dt .btn.headerlink:hover:before { opacity: 1; } .btn-mini .icon:before, .btn-mini .wy-inline-validate.wy-inline-validate-success .wy-input-context:before, .wy-inline-validate.wy-inline-validate-success .btn-mini .wy-input-context:before, .btn-mini .wy-inline-validate.wy-inline-validate-danger .wy-input-context:before, .wy-inline-validate.wy-inline-validate-danger .btn-mini .wy-input-context:before, .btn-mini .wy-inline-validate.wy-inline-validate-warning .wy-input-context:before, .wy-inline-validate.wy-inline-validate-warning .btn-mini .wy-input-context:before, .btn-mini .wy-inline-validate.wy-inline-validate-info .wy-input-context:before, .wy-inline-validate.wy-inline-validate-info .btn-mini .wy-input-context:before, .btn-mini .wy-tag-input-group .wy-tag .wy-tag-remove:before, .wy-tag-input-group .wy-tag .btn-mini .wy-tag-remove:before, .btn-mini .rst-content .admonition-title:before, .rst-content .btn-mini .admonition-title:before, .btn-mini .rst-content h1 .headerlink:before, .rst-content h1 .btn-mini .headerlink:before, .btn-mini .rst-content h2 .headerlink:before, .rst-content h2 .btn-mini .headerlink:before, .btn-mini .rst-content h3 .headerlink:before, .rst-content h3 .btn-mini .headerlink:before, .btn-mini .rst-content h4 .headerlink:before, .rst-content h4 .btn-mini .headerlink:before, .btn-mini .rst-content h5 .headerlink:before, .rst-content h5 .btn-mini .headerlink:before, .btn-mini .rst-content h6 .headerlink:before, .rst-content h6 .btn-mini .headerlink:before, .btn-mini .rst-content dl dt .headerlink:before, .rst-content dl dt .btn-mini .headerlink:before { font-size: 14px; vertical-align: -15%; } li .icon, li .wy-inline-validate.wy-inline-validate-success .wy-input-context, .wy-inline-validate.wy-inline-validate-success li .wy-input-context, li .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger li .wy-input-context, li .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning li .wy-input-context, li .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info li .wy-input-context, li .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag li .wy-tag-remove, li .rst-content .admonition-title, .rst-content li .admonition-title, li .rst-content h1 .headerlink, .rst-content h1 li .headerlink, li .rst-content h2 .headerlink, .rst-content h2 li .headerlink, li .rst-content h3 .headerlink, .rst-content h3 li .headerlink, li .rst-content h4 .headerlink, .rst-content h4 li .headerlink, li .rst-content h5 .headerlink, .rst-content h5 li .headerlink, li .rst-content h6 .headerlink, .rst-content h6 li .headerlink, li .rst-content dl dt .headerlink, .rst-content dl dt li .headerlink { display: 
inline-block; } li .icon-large:before, li .icon-large:before { width: 1.875em; } ul.icons { list-style-type: none; margin-left: 2em; text-indent: -0.8em; } ul.icons li .icon, ul.icons li .wy-inline-validate.wy-inline-validate-success .wy-input-context, .wy-inline-validate.wy-inline-validate-success ul.icons li .wy-input-context, ul.icons li .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger ul.icons li .wy-input-context, ul.icons li .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning ul.icons li .wy-input-context, ul.icons li .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info ul.icons li .wy-input-context, ul.icons li .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag ul.icons li .wy-tag-remove, ul.icons li .rst-content .admonition-title, .rst-content ul.icons li .admonition-title, ul.icons li .rst-content h1 .headerlink, .rst-content h1 ul.icons li .headerlink, ul.icons li .rst-content h2 .headerlink, .rst-content h2 ul.icons li .headerlink, ul.icons li .rst-content h3 .headerlink, .rst-content h3 ul.icons li .headerlink, ul.icons li .rst-content h4 .headerlink, .rst-content h4 ul.icons li .headerlink, ul.icons li .rst-content h5 .headerlink, .rst-content h5 ul.icons li .headerlink, ul.icons li .rst-content h6 .headerlink, .rst-content h6 ul.icons li .headerlink, ul.icons li .rst-content dl dt .headerlink, .rst-content dl dt ul.icons li .headerlink { width: 0.8em; } ul.icons li .icon-large:before, ul.icons li .icon-large:before { vertical-align: baseline; } .icon-glass:before { content: "\f000"; } .icon-music:before { content: "\f001"; } .icon-search:before { content: "\f002"; } .icon-envelope-alt:before { content: "\f003"; } .icon-heart:before { content: "\f004"; } .icon-star:before { content: "\f005"; } .icon-star-empty:before { content: "\f006"; } .icon-user:before { content: "\f007"; } .icon-film:before { content: "\f008"; } .icon-th-large:before { content: "\f009"; } .icon-th:before { content: "\f00a"; } .icon-th-list:before { content: "\f00b"; } .icon-ok:before { content: "\f00c"; } .icon-remove:before, .wy-tag-input-group .wy-tag .wy-tag-remove:before { content: "\f00d"; } .icon-zoom-in:before { content: "\f00e"; } .icon-zoom-out:before { content: "\f010"; } .icon-power-off:before, .icon-off:before { content: "\f011"; } .icon-signal:before { content: "\f012"; } .icon-gear:before, .icon-cog:before { content: "\f013"; } .icon-trash:before { content: "\f014"; } .icon-home:before { content: "\f015"; } .icon-file-alt:before { content: "\f016"; } .icon-time:before { content: "\f017"; } .icon-road:before { content: "\f018"; } .icon-download-alt:before { content: "\f019"; } .icon-download:before { content: "\f01a"; } .icon-upload:before { content: "\f01b"; } .icon-inbox:before { content: "\f01c"; } .icon-play-circle:before { content: "\f01d"; } .icon-rotate-right:before, .icon-repeat:before { content: "\f01e"; } .icon-refresh:before { content: "\f021"; } .icon-list-alt:before { content: "\f022"; } .icon-lock:before { content: "\f023"; } .icon-flag:before { content: "\f024"; } .icon-headphones:before { content: "\f025"; } .icon-volume-off:before { content: "\f026"; } .icon-volume-down:before { content: "\f027"; } .icon-volume-up:before { content: "\f028"; } .icon-qrcode:before { content: "\f029"; } .icon-barcode:before { content: "\f02a"; } .icon-tag:before { content: "\f02b"; } .icon-tags:before 
{ content: "\f02c"; } .icon-book:before { content: "\f02d"; } .icon-bookmark:before { content: "\f02e"; } .icon-print:before { content: "\f02f"; } .icon-camera:before { content: "\f030"; } .icon-font:before { content: "\f031"; } .icon-bold:before { content: "\f032"; } .icon-italic:before { content: "\f033"; } .icon-text-height:before { content: "\f034"; } .icon-text-width:before { content: "\f035"; } .icon-align-left:before { content: "\f036"; } .icon-align-center:before { content: "\f037"; } .icon-align-right:before { content: "\f038"; } .icon-align-justify:before { content: "\f039"; } .icon-list:before { content: "\f03a"; } .icon-indent-left:before { content: "\f03b"; } .icon-indent-right:before { content: "\f03c"; } .icon-facetime-video:before { content: "\f03d"; } .icon-picture:before { content: "\f03e"; } .icon-pencil:before { content: "\f040"; } .icon-map-marker:before { content: "\f041"; } .icon-adjust:before { content: "\f042"; } .icon-tint:before { content: "\f043"; } .icon-edit:before { content: "\f044"; } .icon-share:before { content: "\f045"; } .icon-check:before { content: "\f046"; } .icon-move:before { content: "\f047"; } .icon-step-backward:before { content: "\f048"; } .icon-fast-backward:before { content: "\f049"; } .icon-backward:before { content: "\f04a"; } .icon-play:before { content: "\f04b"; } .icon-pause:before { content: "\f04c"; } .icon-stop:before { content: "\f04d"; } .icon-forward:before { content: "\f04e"; } .icon-fast-forward:before { content: "\f050"; } .icon-step-forward:before { content: "\f051"; } .icon-eject:before { content: "\f052"; } .icon-chevron-left:before { content: "\f053"; } .icon-chevron-right:before { content: "\f054"; } .icon-plus-sign:before { content: "\f055"; } .icon-minus-sign:before { content: "\f056"; } .icon-remove-sign:before, .wy-inline-validate.wy-inline-validate-danger .wy-input-context:before { content: "\f057"; } .icon-ok-sign:before { content: "\f058"; } .icon-question-sign:before { content: "\f059"; } .icon-info-sign:before { content: "\f05a"; } .icon-screenshot:before { content: "\f05b"; } .icon-remove-circle:before { content: "\f05c"; } .icon-ok-circle:before { content: "\f05d"; } .icon-ban-circle:before { content: "\f05e"; } .icon-arrow-left:before { content: "\f060"; } .icon-arrow-right:before { content: "\f061"; } .icon-arrow-up:before { content: "\f062"; } .icon-arrow-down:before { content: "\f063"; } .icon-mail-forward:before, .icon-share-alt:before { content: "\f064"; } .icon-resize-full:before { content: "\f065"; } .icon-resize-small:before { content: "\f066"; } .icon-plus:before { content: "\f067"; } .icon-minus:before { content: "\f068"; } .icon-asterisk:before { content: "\f069"; } .icon-exclamation-sign:before, .wy-inline-validate.wy-inline-validate-warning .wy-input-context:before, .wy-inline-validate.wy-inline-validate-info .wy-input-context:before, .rst-content .admonition-title:before { content: "\f06a"; } .icon-gift:before { content: "\f06b"; } .icon-leaf:before { content: "\f06c"; } .icon-fire:before { content: "\f06d"; } .icon-eye-open:before { content: "\f06e"; } .icon-eye-close:before { content: "\f070"; } .icon-warning-sign:before { content: "\f071"; } .icon-plane:before { content: "\f072"; } .icon-calendar:before { content: "\f073"; } .icon-random:before { content: "\f074"; } .icon-comment:before { content: "\f075"; } .icon-magnet:before { content: "\f076"; } .icon-chevron-up:before { content: "\f077"; } .icon-chevron-down:before { content: "\f078"; } .icon-retweet:before { content: "\f079"; } 
.icon-shopping-cart:before { content: "\f07a"; } .icon-folder-close:before { content: "\f07b"; } .icon-folder-open:before { content: "\f07c"; } .icon-resize-vertical:before { content: "\f07d"; } .icon-resize-horizontal:before { content: "\f07e"; } .icon-bar-chart:before { content: "\f080"; } .icon-twitter-sign:before { content: "\f081"; } .icon-facebook-sign:before { content: "\f082"; } .icon-camera-retro:before { content: "\f083"; } .icon-key:before { content: "\f084"; } .icon-gears:before, .icon-cogs:before { content: "\f085"; } .icon-comments:before { content: "\f086"; } .icon-thumbs-up-alt:before { content: "\f087"; } .icon-thumbs-down-alt:before { content: "\f088"; } .icon-star-half:before { content: "\f089"; } .icon-heart-empty:before { content: "\f08a"; } .icon-signout:before { content: "\f08b"; } .icon-linkedin-sign:before { content: "\f08c"; } .icon-pushpin:before { content: "\f08d"; } .icon-external-link:before { content: "\f08e"; } .icon-signin:before { content: "\f090"; } .icon-trophy:before { content: "\f091"; } .icon-github-sign:before { content: "\f092"; } .icon-upload-alt:before { content: "\f093"; } .icon-lemon:before { content: "\f094"; } .icon-phone:before { content: "\f095"; } .icon-unchecked:before, .icon-check-empty:before { content: "\f096"; } .icon-bookmark-empty:before { content: "\f097"; } .icon-phone-sign:before { content: "\f098"; } .icon-twitter:before { content: "\f099"; } .icon-facebook:before { content: "\f09a"; } .icon-github:before { content: "\f09b"; } .icon-unlock:before { content: "\f09c"; } .icon-credit-card:before { content: "\f09d"; } .icon-rss:before { content: "\f09e"; } .icon-hdd:before { content: "\f0a0"; } .icon-bullhorn:before { content: "\f0a1"; } .icon-bell:before { content: "\f0a2"; } .icon-certificate:before { content: "\f0a3"; } .icon-hand-right:before { content: "\f0a4"; } .icon-hand-left:before { content: "\f0a5"; } .icon-hand-up:before { content: "\f0a6"; } .icon-hand-down:before { content: "\f0a7"; } .icon-circle-arrow-left:before { content: "\f0a8"; } .icon-circle-arrow-right:before { content: "\f0a9"; } .icon-circle-arrow-up:before { content: "\f0aa"; } .icon-circle-arrow-down:before { content: "\f0ab"; } .icon-globe:before { content: "\f0ac"; } .icon-wrench:before { content: "\f0ad"; } .icon-tasks:before { content: "\f0ae"; } .icon-filter:before { content: "\f0b0"; } .icon-briefcase:before { content: "\f0b1"; } .icon-fullscreen:before { content: "\f0b2"; } .icon-group:before { content: "\f0c0"; } .icon-link:before { content: "\f0c1"; } .icon-cloud:before { content: "\f0c2"; } .icon-beaker:before { content: "\f0c3"; } .icon-cut:before { content: "\f0c4"; } .icon-copy:before { content: "\f0c5"; } .icon-paperclip:before, .icon-paper-clip:before { content: "\f0c6"; } .icon-save:before { content: "\f0c7"; } .icon-sign-blank:before { content: "\f0c8"; } .icon-reorder:before { content: "\f0c9"; } .icon-list-ul:before { content: "\f0ca"; } .icon-list-ol:before { content: "\f0cb"; } .icon-strikethrough:before { content: "\f0cc"; } .icon-underline:before { content: "\f0cd"; } .icon-table:before { content: "\f0ce"; } .icon-magic:before { content: "\f0d0"; } .icon-truck:before { content: "\f0d1"; } .icon-pinterest:before { content: "\f0d2"; } .icon-pinterest-sign:before { content: "\f0d3"; } .icon-google-plus-sign:before { content: "\f0d4"; } .icon-google-plus:before { content: "\f0d5"; } .icon-money:before { content: "\f0d6"; } .icon-caret-down:before { content: "\f0d7"; } .icon-caret-up:before { content: "\f0d8"; } .icon-caret-left:before { 
content: "\f0d9"; } .icon-caret-right:before { content: "\f0da"; } .icon-columns:before { content: "\f0db"; } .icon-sort:before { content: "\f0dc"; } .icon-sort-down:before { content: "\f0dd"; } .icon-sort-up:before { content: "\f0de"; } .icon-envelope:before { content: "\f0e0"; } .icon-linkedin:before { content: "\f0e1"; } .icon-rotate-left:before, .icon-undo:before { content: "\f0e2"; } .icon-legal:before { content: "\f0e3"; } .icon-dashboard:before { content: "\f0e4"; } .icon-comment-alt:before { content: "\f0e5"; } .icon-comments-alt:before { content: "\f0e6"; } .icon-bolt:before { content: "\f0e7"; } .icon-sitemap:before { content: "\f0e8"; } .icon-umbrella:before { content: "\f0e9"; } .icon-paste:before { content: "\f0ea"; } .icon-lightbulb:before { content: "\f0eb"; } .icon-exchange:before { content: "\f0ec"; } .icon-cloud-download:before { content: "\f0ed"; } .icon-cloud-upload:before { content: "\f0ee"; } .icon-user-md:before { content: "\f0f0"; } .icon-stethoscope:before { content: "\f0f1"; } .icon-suitcase:before { content: "\f0f2"; } .icon-bell-alt:before { content: "\f0f3"; } .icon-coffee:before { content: "\f0f4"; } .icon-food:before { content: "\f0f5"; } .icon-file-text-alt:before { content: "\f0f6"; } .icon-building:before { content: "\f0f7"; } .icon-hospital:before { content: "\f0f8"; } .icon-ambulance:before { content: "\f0f9"; } .icon-medkit:before { content: "\f0fa"; } .icon-fighter-jet:before { content: "\f0fb"; } .icon-beer:before { content: "\f0fc"; } .icon-h-sign:before { content: "\f0fd"; } .icon-plus-sign-alt:before { content: "\f0fe"; } .icon-double-angle-left:before { content: "\f100"; } .icon-double-angle-right:before { content: "\f101"; } .icon-double-angle-up:before { content: "\f102"; } .icon-double-angle-down:before { content: "\f103"; } .icon-angle-left:before { content: "\f104"; } .icon-angle-right:before { content: "\f105"; } .icon-angle-up:before { content: "\f106"; } .icon-angle-down:before { content: "\f107"; } .icon-desktop:before { content: "\f108"; } .icon-laptop:before { content: "\f109"; } .icon-tablet:before { content: "\f10a"; } .icon-mobile-phone:before { content: "\f10b"; } .icon-circle-blank:before { content: "\f10c"; } .icon-quote-left:before { content: "\f10d"; } .icon-quote-right:before { content: "\f10e"; } .icon-spinner:before { content: "\f110"; } .icon-circle:before { content: "\f111"; } .icon-mail-reply:before, .icon-reply:before { content: "\f112"; } .icon-github-alt:before { content: "\f113"; } .icon-folder-close-alt:before { content: "\f114"; } .icon-folder-open-alt:before { content: "\f115"; } .icon-expand-alt:before { content: "\f116"; } .icon-collapse-alt:before { content: "\f117"; } .icon-smile:before { content: "\f118"; } .icon-frown:before { content: "\f119"; } .icon-meh:before { content: "\f11a"; } .icon-gamepad:before { content: "\f11b"; } .icon-keyboard:before { content: "\f11c"; } .icon-flag-alt:before { content: "\f11d"; } .icon-flag-checkered:before { content: "\f11e"; } .icon-terminal:before { content: "\f120"; } .icon-code:before { content: "\f121"; } .icon-reply-all:before { content: "\f122"; } .icon-mail-reply-all:before { content: "\f122"; } .icon-star-half-full:before, .icon-star-half-empty:before { content: "\f123"; } .icon-location-arrow:before { content: "\f124"; } .icon-crop:before { content: "\f125"; } .icon-code-fork:before { content: "\f126"; } .icon-unlink:before { content: "\f127"; } .icon-question:before { content: "\f128"; } .icon-info:before { content: "\f129"; } .icon-exclamation:before { content: 
"\f12a"; } .icon-superscript:before { content: "\f12b"; } .icon-subscript:before { content: "\f12c"; } .icon-eraser:before { content: "\f12d"; } .icon-puzzle-piece:before { content: "\f12e"; } .icon-microphone:before { content: "\f130"; } .icon-microphone-off:before { content: "\f131"; } .icon-shield:before { content: "\f132"; } .icon-calendar-empty:before { content: "\f133"; } .icon-fire-extinguisher:before { content: "\f134"; } .icon-rocket:before { content: "\f135"; } .icon-maxcdn:before { content: "\f136"; } .icon-chevron-sign-left:before { content: "\f137"; } .icon-chevron-sign-right:before { content: "\f138"; } .icon-chevron-sign-up:before { content: "\f139"; } .icon-chevron-sign-down:before { content: "\f13a"; } .icon-html5:before { content: "\f13b"; } .icon-css3:before { content: "\f13c"; } .icon-anchor:before { content: "\f13d"; } .icon-unlock-alt:before { content: "\f13e"; } .icon-bullseye:before { content: "\f140"; } .icon-ellipsis-horizontal:before { content: "\f141"; } .icon-ellipsis-vertical:before { content: "\f142"; } .icon-rss-sign:before { content: "\f143"; } .icon-play-sign:before { content: "\f144"; } .icon-ticket:before { content: "\f145"; } .icon-minus-sign-alt:before { content: "\f146"; } .icon-check-minus:before { content: "\f147"; } .icon-level-up:before { content: "\f148"; } .icon-level-down:before { content: "\f149"; } .icon-check-sign:before, .wy-inline-validate.wy-inline-validate-success .wy-input-context:before { content: "\f14a"; } .icon-edit-sign:before { content: "\f14b"; } .icon-external-link-sign:before { content: "\f14c"; } .icon-share-sign:before { content: "\f14d"; } .icon-compass:before { content: "\f14e"; } .icon-collapse:before { content: "\f150"; } .icon-collapse-top:before { content: "\f151"; } .icon-expand:before { content: "\f152"; } .icon-euro:before, .icon-eur:before { content: "\f153"; } .icon-gbp:before { content: "\f154"; } .icon-dollar:before, .icon-usd:before { content: "\f155"; } .icon-rupee:before, .icon-inr:before { content: "\f156"; } .icon-yen:before, .icon-jpy:before { content: "\f157"; } .icon-renminbi:before, .icon-cny:before { content: "\f158"; } .icon-won:before, .icon-krw:before { content: "\f159"; } .icon-bitcoin:before, .icon-btc:before { content: "\f15a"; } .icon-file:before { content: "\f15b"; } .icon-file-text:before { content: "\f15c"; } .icon-sort-by-alphabet:before { content: "\f15d"; } .icon-sort-by-alphabet-alt:before { content: "\f15e"; } .icon-sort-by-attributes:before { content: "\f160"; } .icon-sort-by-attributes-alt:before { content: "\f161"; } .icon-sort-by-order:before { content: "\f162"; } .icon-sort-by-order-alt:before { content: "\f163"; } .icon-thumbs-up:before { content: "\f164"; } .icon-thumbs-down:before { content: "\f165"; } .icon-youtube-sign:before { content: "\f166"; } .icon-youtube:before { content: "\f167"; } .icon-xing:before { content: "\f168"; } .icon-xing-sign:before { content: "\f169"; } .icon-youtube-play:before { content: "\f16a"; } .icon-dropbox:before { content: "\f16b"; } .icon-stackexchange:before { content: "\f16c"; } .icon-instagram:before { content: "\f16d"; } .icon-flickr:before { content: "\f16e"; } .icon-adn:before { content: "\f170"; } .icon-bitbucket:before { content: "\f171"; } .icon-bitbucket-sign:before { content: "\f172"; } .icon-tumblr:before { content: "\f173"; } .icon-tumblr-sign:before { content: "\f174"; } .icon-long-arrow-down:before { content: "\f175"; } .icon-long-arrow-up:before { content: "\f176"; } .icon-long-arrow-left:before { content: "\f177"; } 
.icon-long-arrow-right:before { content: "\f178"; } .icon-apple:before { content: "\f179"; } .icon-windows:before { content: "\f17a"; } .icon-android:before { content: "\f17b"; } .icon-linux:before { content: "\f17c"; } .icon-dribbble:before { content: "\f17d"; } .icon-skype:before { content: "\f17e"; } .icon-foursquare:before { content: "\f180"; } .icon-trello:before { content: "\f181"; } .icon-female:before { content: "\f182"; } .icon-male:before { content: "\f183"; } .icon-gittip:before { content: "\f184"; } .icon-sun:before { content: "\f185"; } .icon-moon:before { content: "\f186"; } .icon-archive:before { content: "\f187"; } .icon-bug:before { content: "\f188"; } .icon-vk:before { content: "\f189"; } .icon-weibo:before { content: "\f18a"; } .icon-renren:before { content: "\f18b"; } .wy-alert, .rst-content .note, .rst-content .attention, .rst-content .caution, .rst-content .danger, .rst-content .error, .rst-content .hint, .rst-content .important, .rst-content .tip, .rst-content .warning { padding: 24px; line-height: 24px; margin-bottom: 24px; border-left: solid 3px transparent; } .wy-alert strong, .rst-content .note strong, .rst-content .attention strong, .rst-content .caution strong, .rst-content .danger strong, .rst-content .error strong, .rst-content .hint strong, .rst-content .important strong, .rst-content .tip strong, .rst-content .warning strong, .wy-alert a, .rst-content .note a, .rst-content .attention a, .rst-content .caution a, .rst-content .danger a, .rst-content .error a, .rst-content .hint a, .rst-content .important a, .rst-content .tip a, .rst-content .warning a { color: #fff; } .wy-alert.wy-alert-danger, .rst-content .wy-alert-danger.note, .rst-content .wy-alert-danger.attention, .rst-content .wy-alert-danger.caution, .rst-content .danger, .rst-content .error, .rst-content .wy-alert-danger.hint, .rst-content .wy-alert-danger.important, .rst-content .wy-alert-danger.tip, .rst-content .wy-alert-danger.warning { background: #e74c3c; color: #fff; border-color: #d62c1a; } .wy-alert.wy-alert-warning, .rst-content .wy-alert-warning.note, .rst-content .attention, .rst-content .caution, .rst-content .wy-alert-warning.danger, .rst-content .wy-alert-warning.error, .rst-content .wy-alert-warning.hint, .rst-content .wy-alert-warning.important, .rst-content .wy-alert-warning.tip, .rst-content .warning { background: #e67e22; color: #fff; border-color: #bf6516; } .wy-alert.wy-alert-info, .rst-content .note, .rst-content .wy-alert-info.attention, .rst-content .wy-alert-info.caution, .rst-content .wy-alert-info.danger, .rst-content .wy-alert-info.error, .rst-content .hint, .rst-content .important, .rst-content .tip, .rst-content .wy-alert-info.warning { background: #2980b9; color: #fff; border-color: #20638f; } .wy-alert.wy-alert-success, .rst-content .wy-alert-success.note, .rst-content .wy-alert-success.attention, .rst-content .wy-alert-success.caution, .rst-content .wy-alert-success.danger, .rst-content .wy-alert-success.error, .rst-content .wy-alert-success.hint, .rst-content .wy-alert-success.important, .rst-content .wy-alert-success.tip, .rst-content .wy-alert-success.warning { background: #27ae60; color: #fff; border-color: #1e8449; } .wy-alert.wy-alert-neutral, .rst-content .wy-alert-neutral.note, .rst-content .wy-alert-neutral.attention, .rst-content .wy-alert-neutral.caution, .rst-content .wy-alert-neutral.danger, .rst-content .wy-alert-neutral.error, .rst-content .wy-alert-neutral.hint, .rst-content .wy-alert-neutral.important, .rst-content .wy-alert-neutral.tip, .rst-content 
.wy-alert-neutral.warning { background: #f3f6f6; border-color: #e1e4e5; } .wy-alert.wy-alert-neutral strong, .rst-content .wy-alert-neutral.note strong, .rst-content .wy-alert-neutral.attention strong, .rst-content .wy-alert-neutral.caution strong, .rst-content .wy-alert-neutral.danger strong, .rst-content .wy-alert-neutral.error strong, .rst-content .wy-alert-neutral.hint strong, .rst-content .wy-alert-neutral.important strong, .rst-content .wy-alert-neutral.tip strong, .rst-content .wy-alert-neutral.warning strong { color: #404040; } .wy-alert.wy-alert-neutral a, .rst-content .wy-alert-neutral.note a, .rst-content .wy-alert-neutral.attention a, .rst-content .wy-alert-neutral.caution a, .rst-content .wy-alert-neutral.danger a, .rst-content .wy-alert-neutral.error a, .rst-content .wy-alert-neutral.hint a, .rst-content .wy-alert-neutral.important a, .rst-content .wy-alert-neutral.tip a, .rst-content .wy-alert-neutral.warning a { color: #2980b9; } .wy-tray-container { position: fixed; top: -50px; left: 0; width: 100%; -webkit-transition: top 0.2s ease-in; -moz-transition: top 0.2s ease-in; transition: top 0.2s ease-in; } .wy-tray-container.on { top: 0; } .wy-tray-container li { display: none; width: 100%; background: #343131; padding: 12px 24px; color: #fff; margin-bottom: 6px; text-align: center; box-shadow: 0 5px 5px 0 rgba(0, 0, 0, 0.1), 0px -1px 2px -1px rgba(255, 255, 255, 0.5) inset; } .wy-tray-container li.wy-tray-item-success { background: #27ae60; } .wy-tray-container li.wy-tray-item-info { background: #2980b9; } .wy-tray-container li.wy-tray-item-warning { background: #e67e22; } .wy-tray-container li.wy-tray-item-danger { background: #e74c3c; } .btn { display: inline-block; *display: inline; zoom: 1; line-height: normal; white-space: nowrap; vertical-align: baseline; text-align: center; cursor: pointer; -webkit-user-drag: none; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; font-size: 100%; padding: 6px 12px; color: #fff; border: 1px solid rgba(0, 0, 0, 0.1); border-bottom: solid 3px rgba(0, 0, 0, 0.1); background-color: #27ae60; text-decoration: none; font-weight: 500; box-shadow: 0px 1px 2px -1px rgba(255, 255, 255, 0.5) inset; -webkit-transition: all 0.1s linear; -moz-transition: all 0.1s linear; transition: all 0.1s linear; outline-none: false; } .btn-hover { background: #2e8ece; color: #fff; } .btn:hover { background: #2cc36b; color: #fff; } .btn:focus { background: #2cc36b; color: #fff; outline: 0; } .btn:active { border-top: solid 3px rgba(0, 0, 0, 0.1); border-bottom: solid 1px rgba(0, 0, 0, 0.1); box-shadow: 0px 1px 2px -1px rgba(0, 0, 0, 0.5) inset; } .btn[disabled] { background-image: none; filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); filter: alpha(opacity=40); opacity: 0.4; cursor: not-allowed; box-shadow: none; } .btn-disabled { background-image: none; filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); filter: alpha(opacity=40); opacity: 0.4; cursor: not-allowed; box-shadow: none; } .btn-disabled:hover, .btn-disabled:focus, .btn-disabled:active { background-image: none; filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); filter: alpha(opacity=40); opacity: 0.4; cursor: not-allowed; box-shadow: none; } .btn::-moz-focus-inner { padding: 0; border: 0; } .btn-small { font-size: 80%; } .btn-info { background-color: #2980b9 !important; } .btn-info:hover { background-color: #2e8ece !important; } .btn-neutral { background-color: #f3f6f6 !important; color: #404040 
!important; } .btn-neutral:hover { background-color: #e5ebeb !important; color: #404040; } .btn-danger { background-color: #e74c3c !important; } .btn-danger:hover { background-color: #ea6153 !important; } .btn-warning { background-color: #e67e22 !important; } .btn-warning:hover { background-color: #e98b39 !important; } .btn-invert { background-color: #343131; } .btn-invert:hover { background-color: #413d3d !important; } .btn-link { background-color: transparent !important; color: #2980b9; border-color: transparent; } .btn-link:hover { background-color: transparent !important; color: #409ad5; border-color: transparent; } .btn-link:active { background-color: transparent !important; border-color: transparent; border-top: solid 1px transparent; border-bottom: solid 3px transparent; } .wy-btn-group .btn, .wy-control .btn { vertical-align: middle; } .wy-btn-group { margin-bottom: 24px; *zoom: 1; } .wy-btn-group:before, .wy-btn-group:after { display: table; content: ""; } .wy-btn-group:after { clear: both; } .wy-dropdown { position: relative; display: inline-block; } .wy-dropdown:hover .wy-dropdown-menu { display: block; } .wy-dropdown .caret:after { font-family: fontawesome-webfont; content: "\f0d7"; font-size: 70%; } .wy-dropdown-menu { position: absolute; top: 100%; left: 0; display: none; float: left; min-width: 100%; background: #fcfcfc; z-index: 100; border: solid 1px #cfd7dd; box-shadow: 0 5px 5px 0 rgba(0, 0, 0, 0.1); padding: 12px; } .wy-dropdown-menu>dd>a { display: block; clear: both; color: #404040; white-space: nowrap; font-size: 90%; padding: 0 12px; } .wy-dropdown-menu>dd>a:hover { background: #2980b9; color: #fff; } .wy-dropdown-menu>dd.divider { border-top: solid 1px #cfd7dd; margin: 6px 0; } .wy-dropdown-menu>dd.search { padding-bottom: 12px; } .wy-dropdown-menu>dd.search input[type="search"] { width: 100%; } .wy-dropdown-menu>dd.call-to-action { background: #e3e3e3; text-transform: uppercase; font-weight: 500; font-size: 80%; } .wy-dropdown-menu>dd.call-to-action:hover { background: #e3e3e3; } .wy-dropdown-menu>dd.call-to-action .btn { color: #fff; } .wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu { background: #fcfcfc; margin-top: 2px; } .wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a { padding: 6px 12px; } .wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover { background: #2980b9; color: #fff; } .wy-dropdown.wy-dropdown-left .wy-dropdown-menu { right: 0; text-align: right; } .wy-dropdown-arrow:before { content: " "; border-bottom: 5px solid #f5f5f5; border-left: 5px solid transparent; border-right: 5px solid transparent; position: absolute; display: block; top: -4px; left: 50%; margin-left: -3px; } .wy-dropdown-arrow.wy-dropdown-arrow-left:before { left: 11px; } .wy-form-stacked select { display: block; } .wy-form-aligned input, .wy-form-aligned textarea, .wy-form-aligned select, .wy-form-aligned .wy-help-inline, .wy-form-aligned label { display: inline-block; *display: inline; *zoom: 1; vertical-align: middle; } .wy-form-aligned .wy-control-group>label { display: inline-block; vertical-align: middle; width: 10em; margin: 0.5em 1em 0 0; float: left; } .wy-form-aligned .wy-control { float: left; } .wy-form-aligned .wy-control label { display: block; } .wy-form-aligned .wy-control select { margin-top: 0.5em; } fieldset { border: 0; margin: 0; padding: 0; } legend { display: block; width: 100%; border: 0; padding: 0; white-space: normal; margin-bottom: 24px; font-size: 150%; *margin-left: -7px; } label { display: block; margin: 0 0 0.3125em 0; color: #999; font-size: 
90%; } button, input, select, textarea { font-size: 100%; margin: 0; vertical-align: baseline; *vertical-align: middle; } button, input { line-height: normal; } button { -webkit-appearance: button; cursor: pointer; *overflow: visible; } button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; } button[disabled] { cursor: default; } input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; cursor: pointer; *overflow: visible; } input[type="text"], input[type="password"], input[type="email"], input[type="url"], input[type="date"], input[type="month"], input[type="time"], input[type="datetime"], input[type="datetime-local"], input[type="week"], input[type="number"], input[type="search"], input[type="tel"], input[type="color"] { -webkit-appearance: none; padding: 6px; display: inline-block; border: 1px solid #ccc; font-size: 80%; font-family: "Lato", "proxima-nova", "Helvetica Neue", Arial, sans-serif; box-shadow: inset 0 1px 3px #ddd; border-radius: 0; -webkit-transition: border 0.3s linear; -moz-transition: border 0.3s linear; transition: border 0.3s linear; } input[type="datetime-local"] { padding: 0.34375em 0.625em; } input[disabled] { cursor: default; } input[type="checkbox"], input[type="radio"] { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; padding: 0; margin-right: 0.3125em; *height: 13px; *width: 13px; } input[type="search"] { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; } input[type="text"]:focus, input[type="password"]:focus, input[type="email"]:focus, input[type="url"]:focus, input[type="date"]:focus, input[type="month"]:focus, input[type="time"]:focus, input[type="datetime"]:focus, input[type="datetime-local"]:focus, input[type="week"]:focus, input[type="number"]:focus, input[type="search"]:focus, input[type="tel"]:focus, input[type="color"]:focus { outline: 0; outline: thin dotted \9; border-color: #2980b9; } input.no-focus:focus { border-color: #ccc !important; } input[type="file"]:focus, input[type="radio"]:focus, input[type="checkbox"]:focus { outline: thin dotted #333; outline: 1px auto #129fea; } input[type="text"][disabled], input[type="password"][disabled], input[type="email"][disabled], input[type="url"][disabled], input[type="date"][disabled], input[type="month"][disabled], input[type="time"][disabled], input[type="datetime"][disabled], input[type="datetime-local"][disabled], input[type="week"][disabled], input[type="number"][disabled], input[type="search"][disabled], input[type="tel"][disabled], input[type="color"][disabled] { cursor: not-allowed; background-color: #f3f6f6; color: #cad2d3; } input:focus:invalid, textarea:focus:invalid, select:focus:invalid { color: #e74c3c; border: 1px solid #e74c3c; } input:focus:invalid:focus, textarea:focus:invalid:focus, select:focus:invalid:focus { border-color: #e9322d; } input[type="file"]:focus:invalid:focus, input[type="radio"]:focus:invalid:focus, input[type="checkbox"]:focus:invalid:focus { outline-color: #e9322d; } input.wy-input-large { padding: 12px; font-size: 100%; } textarea { overflow: auto; vertical-align: top; width: 100%; } select, textarea { padding: 0.5em 0.625em; display: inline-block; border: 1px solid #ccc; font-size: 0.8em; box-shadow: inset 0 1px 3px #ddd; -webkit-transition: border 0.3s linear; -moz-transition: border 0.3s linear; transition: border 
0.3s linear; } select { border: 1px solid #ccc; background-color: #fff; } select[multiple] { height: auto; } select:focus, textarea:focus { outline: 0; } select[disabled], textarea[disabled], input[readonly], select[readonly], textarea[readonly] { cursor: not-allowed; background-color: #fff; color: #cad2d3; border-color: transparent; } .wy-checkbox, .wy-radio { margin: 0.5em 0; color: #404040 !important; display: block; } .wy-form-message-inline { display: inline-block; *display: inline; *zoom: 1; vertical-align: middle; } .wy-input-prefix, .wy-input-suffix { white-space: nowrap; } .wy-input-prefix .wy-input-context, .wy-input-suffix .wy-input-context { padding: 6px; display: inline-block; font-size: 80%; background-color: #f3f6f6; border: solid 1px #ccc; color: #999; } .wy-input-suffix .wy-input-context { border-left: 0; } .wy-input-prefix .wy-input-context { border-right: 0; } .wy-inline-validate { white-space: nowrap; } .wy-inline-validate .wy-input-context { padding: 0.5em 0.625em; display: inline-block; font-size: 80%; } .wy-inline-validate.wy-inline-validate-success .wy-input-context { color: #27ae60; } .wy-inline-validate.wy-inline-validate-danger .wy-input-context { color: #e74c3c; } .wy-inline-validate.wy-inline-validate-warning .wy-input-context { color: #e67e22; } .wy-inline-validate.wy-inline-validate-info .wy-input-context { color: #2980b9; } .wy-control-group { margin-bottom: 24px; *zoom: 1; } .wy-control-group:before, .wy-control-group:after { display: table; content: ""; } .wy-control-group:after { clear: both; } .wy-control-group.wy-control-group-error .wy-form-message, .wy-control-group.wy-control-group-error label { color: #e74c3c; } .wy-control-group.wy-control-group-error input[type="text"], .wy-control-group.wy-control-group-error input[type="password"], .wy-control-group.wy-control-group-error input[type="email"], .wy-control-group.wy-control-group-error input[type="url"], .wy-control-group.wy-control-group-error input[type="date"], .wy-control-group.wy-control-group-error input[type="month"], .wy-control-group.wy-control-group-error input[type="time"], .wy-control-group.wy-control-group-error input[type="datetime"], .wy-control-group.wy-control-group-error input[type="datetime-local"], .wy-control-group.wy-control-group-error input[type="week"], .wy-control-group.wy-control-group-error input[type="number"], .wy-control-group.wy-control-group-error input[type="search"], .wy-control-group.wy-control-group-error input[type="tel"], .wy-control-group.wy-control-group-error input[type="color"] { border: solid 2px #e74c3c; } .wy-control-group.wy-control-group-error textarea { border: solid 2px #e74c3c; } .wy-control-group.fluid-input input[type="text"], .wy-control-group.fluid-input input[type="password"], .wy-control-group.fluid-input input[type="email"], .wy-control-group.fluid-input input[type="url"], .wy-control-group.fluid-input input[type="date"], .wy-control-group.fluid-input input[type="month"], .wy-control-group.fluid-input input[type="time"], .wy-control-group.fluid-input input[type="datetime"], .wy-control-group.fluid-input input[type="datetime-local"], .wy-control-group.fluid-input input[type="week"], .wy-control-group.fluid-input input[type="number"], .wy-control-group.fluid-input input[type="search"], .wy-control-group.fluid-input input[type="tel"], .wy-control-group.fluid-input input[type="color"] { width: 100%; } .wy-form-message-inline { display: inline-block; padding-left: 0.3em; color: #666; vertical-align: middle; font-size: 90%; } .wy-form-message { 
display: block; color: #ccc; font-size: 70%; margin-top: 0.3125em; font-style: italic; } .wy-tag-input-group { padding: 4px 4px 0px 4px; display: inline-block; border: 1px solid #ccc; font-size: 80%; font-family: "Lato", "proxima-nova", "Helvetica Neue", Arial, sans-serif; box-shadow: inset 0 1px 3px #ddd; -webkit-transition: border 0.3s linear; -moz-transition: border 0.3s linear; transition: border 0.3s linear; } .wy-tag-input-group .wy-tag { display: inline-block; background-color: rgba(0, 0, 0, 0.1); padding: 0.5em 0.625em; border-radius: 2px; position: relative; margin-bottom: 4px; } .wy-tag-input-group .wy-tag .wy-tag-remove { color: #ccc; margin-left: 5px; } .wy-tag-input-group .wy-tag .wy-tag-remove:hover { color: #e74c3c; } .wy-tag-input-group label { margin-left: 5px; display: inline-block; margin-bottom: 0; } .wy-tag-input-group input { border: none; font-size: 100%; margin-bottom: 4px; box-shadow: none; } .wy-form-upload { border: solid 1px #ccc; border-bottom: solid 3px #ccc; background-color: #fff; padding: 24px; display: inline-block; text-align: center; cursor: pointer; color: #404040; -webkit-transition: border-color 0.1s ease-in; -moz-transition: border-color 0.1s ease-in; transition: border-color 0.1s ease-in; *zoom: 1; } .wy-form-upload:before, .wy-form-upload:after { display: table; content: ""; } .wy-form-upload:after { clear: both; } @media screen and (max-width: 480px) { .wy-form-upload { width: 100%; } } .wy-form-upload .image-drop { display: none; } .wy-form-upload .image-desktop { display: none; } .wy-form-upload .image-loading { display: none; } .wy-form-upload .wy-form-upload-icon { display: block; font-size: 32px; color: #b3b3b3; } .wy-form-upload .image-drop .wy-form-upload-icon { color: #27ae60; } .wy-form-upload p { font-size: 90%; } .wy-form-upload .wy-form-upload-image { float: left; margin-right: 24px; } @media screen and (max-width: 480px) { .wy-form-upload .wy-form-upload-image { width: 100%; margin-bottom: 24px; } } .wy-form-upload img { max-width: 125px; max-height: 125px; opacity: 0.9; -webkit-transition: opacity 0.1s ease-in; -moz-transition: opacity 0.1s ease-in; transition: opacity 0.1s ease-in; } .wy-form-upload .wy-form-upload-content { float: left; } @media screen and (max-width: 480px) { .wy-form-upload .wy-form-upload-content { width: 100%; } } .wy-form-upload:hover { border-color: #b3b3b3; color: #404040; } .wy-form-upload:hover .image-desktop { display: block; } .wy-form-upload:hover .image-drag { display: none; } .wy-form-upload:hover img { opacity: 1; } .wy-form-upload:active { border-top: solid 3px #ccc; border-bottom: solid 1px #ccc; } .wy-form-upload.wy-form-upload-big { width: 100%; text-align: center; padding: 72px; } .wy-form-upload.wy-form-upload-big .wy-form-upload-content { float: none; } .wy-form-upload.wy-form-upload-file p { margin-bottom: 0; } .wy-form-upload.wy-form-upload-file .wy-form-upload-icon { display: inline-block; font-size: inherit; } .wy-form-upload.wy-form-upload-drop { background-color: #ddf7e8; } .wy-form-upload.wy-form-upload-drop .image-drop { display: block; } .wy-form-upload.wy-form-upload-drop .image-desktop { display: none; } .wy-form-upload.wy-form-upload-drop .image-drag { display: none; } .wy-form-upload.wy-form-upload-loading .image-drag { display: none; } .wy-form-upload.wy-form-upload-loading .image-desktop { display: none; } .wy-form-upload.wy-form-upload-loading .image-loading { display: block; } .wy-form-upload.wy-form-upload-loading .wy-input-prefix { display: none; } 
.wy-form-upload.wy-form-upload-loading p { margin-bottom: 0; } .rotate-90 { -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -ms-transform: rotate(90deg); -o-transform: rotate(90deg); transform: rotate(90deg); } .rotate-180 { -webkit-transform: rotate(180deg); -moz-transform: rotate(180deg); -ms-transform: rotate(180deg); -o-transform: rotate(180deg); transform: rotate(180deg); } .rotate-270 { -webkit-transform: rotate(270deg); -moz-transform: rotate(270deg); -ms-transform: rotate(270deg); -o-transform: rotate(270deg); transform: rotate(270deg); } .mirror { -webkit-transform: scaleX(-1); -moz-transform: scaleX(-1); -ms-transform: scaleX(-1); -o-transform: scaleX(-1); transform: scaleX(-1); } .mirror.rotate-90 { -webkit-transform: scaleX(-1) rotate(90deg); -moz-transform: scaleX(-1) rotate(90deg); -ms-transform: scaleX(-1) rotate(90deg); -o-transform: scaleX(-1) rotate(90deg); transform: scaleX(-1) rotate(90deg); } .mirror.rotate-180 { -webkit-transform: scaleX(-1) rotate(180deg); -moz-transform: scaleX(-1) rotate(180deg); -ms-transform: scaleX(-1) rotate(180deg); -o-transform: scaleX(-1) rotate(180deg); transform: scaleX(-1) rotate(180deg); } .mirror.rotate-270 { -webkit-transform: scaleX(-1) rotate(270deg); -moz-transform: scaleX(-1) rotate(270deg); -ms-transform: scaleX(-1) rotate(270deg); -o-transform: scaleX(-1) rotate(270deg); transform: scaleX(-1) rotate(270deg); } .wy-form-gallery-manage { margin-left: -12px; margin-right: -12px; } .wy-form-gallery-manage li { float: left; padding: 12px; width: 20%; cursor: pointer; } @media screen and (max-width: 768px) { .wy-form-gallery-manage li { width: 25%; } } @media screen and (max-width: 480px) { .wy-form-gallery-manage li { width: 50%; } } .wy-form-gallery-manage li:active { cursor: move; } .wy-form-gallery-manage li>a { padding: 12px; background-color: #fff; border: solid 1px #e1e4e5; border-bottom: solid 3px #e1e4e5; display: inline-block; -webkit-transition: all 0.1s ease-in; -moz-transition: all 0.1s ease-in; transition: all 0.1s ease-in; } .wy-form-gallery-manage li>a:active { border: solid 1px #ccc; border-top: solid 3px #ccc; } .wy-form-gallery-manage img { width: 100%; -webkit-transition: all 0.05s ease-in; -moz-transition: all 0.05s ease-in; transition: all 0.05s ease-in; } li.wy-form-gallery-edit { position: relative; color: #fff; padding: 24px; width: 100%; display: block; background-color: #343131; border-radius: 4px; } li.wy-form-gallery-edit .arrow { position: absolute; display: block; top: -50px; left: 50%; margin-left: -25px; z-index: 500; height: 0; width: 0; border-color: transparent; border-style: solid; border-width: 25px; border-bottom-color: #343131; } @media only screen and (max-width: 480px) { .wy-form button[type="submit"] { margin: 0.7em 0 0; } .wy-form input[type="text"], .wy-form input[type="password"], .wy-form input[type="email"], .wy-form input[type="url"], .wy-form input[type="date"], .wy-form input[type="month"], .wy-form input[type="time"], .wy-form input[type="datetime"], .wy-form input[type="datetime-local"], .wy-form input[type="week"], .wy-form input[type="number"], .wy-form input[type="search"], .wy-form input[type="tel"], .wy-form input[type="color"] { margin-bottom: 0.3em; display: block; } .wy-form label { margin-bottom: 0.3em; display: block; } .wy-form input[type="password"], .wy-form input[type="email"], .wy-form input[type="url"], .wy-form input[type="date"], .wy-form input[type="month"], .wy-form input[type="time"], .wy-form input[type="datetime"], .wy-form 
input[type="datetime-local"], .wy-form input[type="week"], .wy-form input[type="number"], .wy-form input[type="search"], .wy-form input[type="tel"], .wy-form input[type="color"] { margin-bottom: 0; } .wy-form-aligned .wy-control-group label { margin-bottom: 0.3em; text-align: left; display: block; width: 100%; } .wy-form-aligned .wy-controls { margin: 1.5em 0 0 0; } .wy-form .wy-help-inline, .wy-form-message-inline, .wy-form-message { display: block; font-size: 80%; padding: 0.2em 0 0.8em; } } @media screen and (max-width: 768px) { .tablet-hide { display: none; } } @media screen and (max-width: 480px) { .mobile-hide { display: none; } } .float-left { float: left; } .float-right { float: right; } .full-width { width: 100%; } .wy-grid-one-col { *zoom: 1; max-width: 68em; margin-left: auto; margin-right: auto; max-width: 1066px; margin-top: 1.618em; } .wy-grid-one-col:before, .wy-grid-one-col:after { display: table; content: ""; } .wy-grid-one-col:after { clear: both; } .wy-grid-one-col section { display: block; float: left; margin-right: 2.35765%; width: 100%; background: #fff; padding: 1.618em; margin-right: 0; } .wy-grid-one-col section:last-child { margin-right: 0; } .wy-grid-index-card { *zoom: 1; max-width: 68em; margin-left: auto; margin-right: auto; max-width: 460px; margin-top: 1.618em; background: #fff; padding: 1.618em; } .wy-grid-index-card:before, .wy-grid-index-card:after { display: table; content: ""; } .wy-grid-index-card:after { clear: both; } .wy-grid-index-card header, .wy-grid-index-card section, .wy-grid-index-card aside { display: block; float: left; margin-right: 2.35765%; width: 100%; } .wy-grid-index-card header:last-child, .wy-grid-index-card section:last-child, .wy-grid-index-card aside:last-child { margin-right: 0; } .wy-grid-index-card.twocol { max-width: 768px; } .wy-grid-index-card.twocol section { display: block; float: left; margin-right: 2.35765%; width: 48.82117%; } .wy-grid-index-card.twocol section:last-child { margin-right: 0; } .wy-grid-index-card.twocol aside { display: block; float: left; margin-right: 2.35765%; width: 48.82117%; } .wy-grid-index-card.twocol aside:last-child { margin-right: 0; } .wy-grid-search-filter { *zoom: 1; max-width: 68em; margin-left: auto; margin-right: auto; margin-bottom: 24px; } .wy-grid-search-filter:before, .wy-grid-search-filter:after { display: table; content: ""; } .wy-grid-search-filter:after { clear: both; } .wy-grid-search-filter .wy-grid-search-filter-input { display: block; float: left; margin-right: 2.35765%; width: 74.41059%; } .wy-grid-search-filter .wy-grid-search-filter-input:last-child { margin-right: 0; } .wy-grid-search-filter .wy-grid-search-filter-btn { display: block; float: left; margin-right: 2.35765%; width: 23.23176%; } .wy-grid-search-filter .wy-grid-search-filter-btn:last-child { margin-right: 0; } .wy-table, .rst-content table.docutils, .rst-content table.field-list { border-collapse: collapse; border-spacing: 0; empty-cells: show; margin-bottom: 24px; } .wy-table caption, .rst-content table.docutils caption, .rst-content table.field-list caption { color: #000; font: italic 85%/1 arial, sans-serif; padding: 1em 0; text-align: center; } .wy-table td, .rst-content table.docutils td, .rst-content table.field-list td, .wy-table th, .rst-content table.docutils th, .rst-content table.field-list th { font-size: 90%; margin: 0; overflow: visible; padding: 8px 16px; } .wy-table td:first-child, .rst-content table.docutils td:first-child, .rst-content table.field-list td:first-child, .wy-table 
th:first-child, .rst-content table.docutils th:first-child, .rst-content table.field-list th:first-child { border-left-width: 0; } .wy-table thead, .rst-content table.docutils thead, .rst-content table.field-list thead { color: #000; text-align: left; vertical-align: bottom; white-space: nowrap; } .wy-table thead th, .rst-content table.docutils thead th, .rst-content table.field-list thead th { font-weight: bold; border-bottom: solid 2px #e1e4e5; } .wy-table td, .rst-content table.docutils td, .rst-content table.field-list td { background-color: transparent; vertical-align: middle; } .wy-table td p, .rst-content table.docutils td p, .rst-content table.field-list td p { line-height: 18px; margin-bottom: 0; } .wy-table .wy-table-cell-min, .rst-content table.docutils .wy-table-cell-min, .rst-content table.field-list .wy-table-cell-min { width: 1%; padding-right: 0; } .wy-table .wy-table-cell-min input[type=checkbox], .rst-content table.docutils .wy-table-cell-min input[type=checkbox], .rst-content table.field-list .wy-table-cell-min input[type=checkbox] { margin: 0; } .wy-table-secondary { color: gray; font-size: 90%; } .wy-table-tertiary { color: gray; font-size: 80%; } .wy-table-odd td, .wy-table-striped tr:nth-child(2n-1) td, .rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td { background-color: #f3f6f6; } .wy-table-backed { background-color: #f3f6f6; } .wy-table-bordered-all, .rst-content table.docutils { border: 1px solid #e1e4e5; } .wy-table-bordered-all td, .rst-content table.docutils td { border-bottom: 1px solid #e1e4e5; border-left: 1px solid #e1e4e5; } .wy-table-bordered-all tbody>tr:last-child td, .rst-content table.docutils tbody>tr:last-child td { border-bottom-width: 0; } .wy-table-bordered { border: 1px solid #e1e4e5; } .wy-table-bordered-rows td { border-bottom: 1px solid #e1e4e5; } .wy-table-bordered-rows tbody>tr:last-child td { border-bottom-width: 0; } .wy-table-horizontal td, .wy-table-horizontal th { border-width: 0 0 1px 0; border-bottom: 1px solid #e1e4e5; } .wy-table-horizontal tbody>tr:last-child td { border-bottom-width: 0; } .wy-table-responsive { margin-bottom: 24px; max-width: 100%; overflow: auto; } .wy-table-responsive table { margin-bottom: 0 !important; } .wy-table-responsive table td, .wy-table-responsive table th { white-space: nowrap; }
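/* Base document styles follow: html/body defaults, link colors, heading faces (Roboto Slab), body text (Lato), and inline code. */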
html { height: 100%; overflow-x: hidden; } body { font-family: "Lato", "proxima-nova", "Helvetica Neue", Arial, sans-serif; font-weight: normal; color: #404040; min-height: 100%; overflow-x: hidden; background: #edf0f2; } a { color: #2980b9; text-decoration: none; } a:hover { color: #3091d1; } .link-danger { color: #e74c3c; } .link-danger:hover { color: #d62c1a; } .text-left { text-align: left; } .text-center { text-align: center; } .text-right { text-align: right; } h1, h2, h3, h4, h5, h6, legend { margin-top: 0; font-weight: 700; font-family: "Roboto Slab", "ff-tisa-web-pro", "Georgia", Arial, sans-serif; } p { line-height: 24px; margin: 0; font-size: 16px; margin-bottom: 24px; } h1 { font-size: 175%; } h2 { font-size: 150%; } h3 { font-size: 125%; } h4 { font-size: 115%; } h5 { font-size: 110%; } h6 { font-size: 100%; } small { font-size: 80%; } code, .rst-content tt { white-space: nowrap; max-width: 100%; background: #fff; border: solid 1px #e1e4e5; font-size: 75%; padding: 0 5px; font-family: "Inconsolata", "Consolas", "Monaco", monospace; color: #e74c3c; overflow-x: auto; } code.code-large, .rst-content tt.code-large { font-size: 90%; } .wy-plain-list-disc, .rst-content .section ul, .rst-content .toctree-wrapper ul { list-style: disc; line-height: 24px; margin-bottom: 24px; } .wy-plain-list-disc li, .rst-content .section ul li, .rst-content .toctree-wrapper ul li { list-style: disc; margin-left: 24px; } .wy-plain-list-disc li ul, .rst-content .section ul li ul, .rst-content .toctree-wrapper ul li ul { margin-bottom: 0; } .wy-plain-list-disc li li, .rst-content .section ul li li, .rst-content .toctree-wrapper ul li li { list-style: circle; } .wy-plain-list-disc li li li, .rst-content .section ul li li li, .rst-content .toctree-wrapper ul li li li { list-style: square; } .wy-plain-list-decimal, .rst-content .section ol, .rst-content ol.arabic { list-style: decimal; line-height: 24px; margin-bottom: 24px; } .wy-plain-list-decimal li, .rst-content .section ol li, .rst-content ol.arabic li { list-style: decimal; margin-left: 24px; } .wy-type-large { font-size: 120%; } .wy-type-normal { font-size: 100%; } .wy-type-small { font-size: 100%; } .wy-type-strike { text-decoration: line-through; } .wy-text-warning { color: #e67e22 !important; } a.wy-text-warning:hover { color: #eb9950 !important; } .wy-text-info { color: #2980b9 !important; } a.wy-text-info:hover { color: #409ad5 !important; } .wy-text-success { color: #27ae60 !important; } a.wy-text-success:hover { color: #36d278 !important; } .wy-text-danger { color: #e74c3c !important; } a.wy-text-danger:hover { color: #ed7669 !important; } .wy-text-neutral { color: #404040 !important; } a.wy-text-neutral:hover { color: #595959 !important; } .codeblock-example { border: 1px solid #e1e4e5; border-bottom: none; padding: 24px; padding-top: 48px; font-weight: 500; background: #fff; position: relative; } .codeblock-example:after { content: "Example"; position: absolute; top: 0px; left: 0px; background: #9b59b6; color: #fff; padding: 6px 12px; } .codeblock-example.prettyprint-example-only { border: 1px solid #e1e4e5; margin-bottom: 24px; } .codeblock, div[class^='highlight'] { border: 1px solid #e1e4e5; padding: 0px; overflow-x: auto; background: #fff; margin: 1px 0 24px 0; } .codeblock div[class^='highlight'], div[class^='highlight'] div[class^='highlight'] { border: none; background: none; margin: 0; } .linenodiv pre { border-right: solid 1px #e6e9ea; margin: 0; padding: 12px 12px; font-family: "Inconsolata", "Consolas", "Monaco", monospace; font-size: 12px; line-height: 1.5; color: #d9d9d9; } div[class^='highlight'] pre { white-space: pre; margin: 0; padding: 12px 12px; font-family: "Inconsolata", "Consolas", "Monaco", monospace; font-size: 12px; line-height: 1.5; display: block; overflow: auto; color: #404040; } pre.literal-block { border: 1px solid #e1e4e5; padding: 0px; overflow-x: auto; background: #fff; margin: 1px 0 24px 0; } @media print { .codeblock, div[class^='highlight'], div[class^='highlight'] pre { white-space: pre-wrap; } }
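/* Pygments syntax-highlighting token classes below (standard Pygments short names: .c comments, .k keywords, .s strings, .n names, .g generic/diff output); the palette appears to match the common GitHub-style scheme. */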
.hll { background-color: #f8f8f8; border: 1px solid #ccc; padding: 1.5px 5px; } .c { color: #998; font-style: italic; } .err { color: #a61717; background-color: #e3d2d2; } .k { font-weight: bold; } .o { font-weight: bold; } .cm { color: #998; font-style: italic; } .cp { color: #999; font-weight: bold; } .c1 { color: #998; font-style: italic; } .cs { color: #999; font-weight: bold; font-style: italic; } .gd { color: #000; background-color: #fdd; } .gd .x { color: #000; background-color: #faa; } .ge { font-style: italic; } .gr { color: #a00; } .gh { color: #999; } .gi { color: #000; background-color: #dfd; } .gi .x { color: #000; background-color: #afa; } .go { color: #888; } .gp { color: #555; } .gs { font-weight: bold; } .gu { color: purple; font-weight: bold; } .gt { color: #a00; } .kc { font-weight: bold; } .kd { font-weight: bold; } .kn { font-weight: bold; } .kp { font-weight: bold; } .kr { font-weight: bold; } .kt { color: #458; font-weight: bold; } .m { color: #099; } .s { color: #d14; } .n { color: #333; } .na { color: teal; } .nb { color: #0086b3; } .nc { color: #458; font-weight: bold; } .no { color: teal; } .ni { color: purple; } .ne { color: #900; font-weight: bold; } .nf { color: #900; font-weight: bold; } .nn { color: #555; } .nt { color: navy; } .nv { color: teal; } .ow { font-weight: bold; } .w { color: #bbb; } .mf { color: #099; } .mh { color: #099; } .mi { color: #099; } .mo { color: #099; } .sb { color: #d14; } .sc { color: #d14; } .sd { color: #d14; } .s2 { color: #d14; } .se { color: #d14; } .sh { color: #d14; } .si { color: #d14; } .sx { color: #d14; } .sr { color: #009926; } .s1 { color: #d14; } .ss { color: #990073; } .bp { color: #999; } .vc { color: teal; } .vg { color: teal; } .vi { color: teal; } .il { color: #099; } .gc { color: #999; background-color: #eaf2f5; } .wy-breadcrumbs li { display: inline-block; } .wy-breadcrumbs li.wy-breadcrumbs-aside { float: right; } .wy-breadcrumbs li a { display: inline-block; padding: 5px; } .wy-breadcrumbs li a:first-child { padding-left: 0; } .wy-breadcrumbs-extra { margin-bottom: 0; color: #b3b3b3; font-size: 80%; display: inline-block; } @media screen and (max-width: 480px) { .wy-breadcrumbs-extra { display: none; } .wy-breadcrumbs li.wy-breadcrumbs-aside { display: none; } } @media print { .wy-breadcrumbs li.wy-breadcrumbs-aside { display: none; } } .wy-affix { position: fixed; top: 1.618em; } .wy-menu a:hover { text-decoration: none; } .wy-menu-horiz { *zoom: 1; } .wy-menu-horiz:before, .wy-menu-horiz:after { display: table; content: ""; } .wy-menu-horiz:after { clear: both; } .wy-menu-horiz ul, .wy-menu-horiz li { display: inline-block; } .wy-menu-horiz li:hover { background: rgba(255, 255, 255, 0.1); } .wy-menu-horiz li.divide-left { border-left: solid 1px #404040; } .wy-menu-horiz li.divide-right { border-right: solid 1px #404040; } .wy-menu-horiz a { height: 32px; display: inline-block; line-height: 32px; padding: 0 16px; } .wy-menu-vertical header { height: 32px; display: inline-block; line-height: 32px; padding: 0 1.618em; display: block; font-weight: bold; text-transform: uppercase; font-size: 80%; color: #2980b9; white-space: nowrap; } .wy-menu-vertical ul { margin-bottom: 0; } .wy-menu-vertical li.divide-top { border-top: solid 1px #404040; } .wy-menu-vertical li.divide-bottom { border-bottom: solid 1px #404040; } .wy-menu-vertical li.current { background: #e3e3e3; } .wy-menu-vertical li.current a { color: gray; border-right: solid 1px #c9c9c9; padding: 0.4045em 2.427em; } .wy-menu-vertical li.current a:hover { background: #d6d6d6; } .wy-menu-vertical li.on a, .wy-menu-vertical li.current>a { color: #404040; padding: 0.4045em 1.618em; font-weight: bold; position: relative; background: #fcfcfc; border: none; border-bottom: solid 1px #c9c9c9; border-top: solid 1px #c9c9c9; } .wy-menu-vertical li.on a:hover, .wy-menu-vertical li.current>a:hover { background: #fcfcfc; } .wy-menu-vertical li.toctree-l2.current>a { background: #c9c9c9; } .wy-menu-vertical li.current ul { display: block; }
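/* Collapsible side navigation: nested toctree lists default to display: none (li ul below) and are revealed when an ancestor li gains the .current class (li.current ul above). */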
.wy-menu-vertical li ul { margin-bottom: 0; display: none; } .wy-menu-vertical li ul li a { margin-bottom: 0; color: #b3b3b3; font-weight: normal; } .wy-menu-vertical a { display: inline-block; line-height: 18px; padding: 0.4045em 1.618em; display: block; position: relative; font-size: 90%; color: #b3b3b3; } .wy-menu-vertical a:hover { background-color: #4e4a4a; cursor: pointer; } .wy-menu-vertical a:active { background-color: #2980b9; cursor: pointer; color: #fff; } .wy-side-nav-search { z-index: 200; background-color: #2980b9; text-align: center; padding: 0.809em; display: block; color: #fcfcfc; margin-bottom: 0.809em; } .wy-side-nav-search input[type=text] { width: 100%; border-radius: 50px; padding: 6px 12px; border-color: #2472a4; } .wy-side-nav-search img { display: block; margin: auto auto 0.809em auto; height: 214px; width: 26px; background-color: #2980b9; padding: 5px; /* border-radius: 100%; */ } .wy-side-nav-search>a, .wy-side-nav-search .wy-dropdown>a { color: #fcfcfc; font-size: 100%; font-weight: bold; display: inline-block; padding: 4px 6px; margin-bottom: 0.809em; } .wy-side-nav-search>a:hover, .wy-side-nav-search .wy-dropdown>a:hover { background: rgba(255, 255, 255, 0.1); } .wy-nav .wy-menu-vertical header { color: #2980b9; } .wy-nav .wy-menu-vertical a { color: #b3b3b3; } .wy-nav .wy-menu-vertical a:hover { background-color: #2980b9; color: #fff; } [data-menu-wrap] { -webkit-transition: all 0.2s ease-in; -moz-transition: all 0.2s ease-in; transition: all 0.2s ease-in; position: absolute; width: 100%; opacity: 0; } [data-menu-wrap].move-center { left: 0; right: auto; opacity: 1; } [data-menu-wrap].move-left { right: auto; left: -100%; opacity: 0; } [data-menu-wrap].move-right { right: -100%; left: auto; opacity: 0; } .wy-body-for-nav { background: left repeat-y #fff; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNiAoTWFjaW50b3NoKSIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDoxOERBMTRGRDBFMUUxMUUzODUwMkJCOThDMEVFNURFMCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDoxOERBMTRGRTBFMUUxMUUzODUwMkJCOThDMEVFNURFMCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE4REExNEZCMEUxRTExRTM4NTAyQkI5OEMwRUU1REUwIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjE4REExNEZDMEUxRTExRTM4NTAyQkI5OEMwRUU1REUwIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+EwrlwAAAAA5JREFUeNpiMDU0BAgwAAE2AJgB9BnaAAAAAElFTkSuQmCC); background-size: 300px 1px; } .wy-grid-for-nav { position: absolute; width: 100%; height: 100%; } .wy-nav-side { position: absolute; top: 0; left: 0; width: 300px; overflow: hidden; min-height: 100%; background: #343131; z-index: 200; } .wy-nav-top { display: none; background: #2980b9; color: #fff; padding: 0.4045em 0.809em; position: relative; line-height: 50px; text-align: center; font-size: 100%; *zoom: 1; } .wy-nav-top:before, 
.wy-nav-top:after { display: table; content: ""; } .wy-nav-top:after { clear: both; } .wy-nav-top a { color: #fff; font-weight: bold; } .wy-nav-top img { margin-right: 12px; height: 45px; width: 45px; background-color: #2980b9; padding: 5px; border-radius: 100%; } .wy-nav-top i { font-size: 30px; float: left; cursor: pointer; } .wy-nav-content-wrap { margin-left: 300px; background: #fff; min-height: 100%; } .wy-nav-content { padding: 1.618em 3.236em; height: 100%; max-width: 1140px; margin: auto; } .wy-body-mask { position: fixed; width: 100%; height: 100%; background: rgba(0, 0, 0, 0.2); display: none; z-index: 499; } .wy-body-mask.on { display: block; } footer { color: #999; } footer p { margin-bottom: 12px; } .rst-footer-buttons { *zoom: 1; } .rst-footer-buttons:before, .rst-footer-buttons:after { display: table; content: ""; } .rst-footer-buttons:after { clear: both; } #search-results .search li { margin-bottom: 24px; border-bottom: solid 1px #e1e4e5; padding-bottom: 24px; } #search-results .search li:first-child { border-top: solid 1px #e1e4e5; padding-top: 24px; } #search-results .search li a { font-size: 120%; margin-bottom: 12px; display: inline-block; } #search-results .context { color: gray; font-size: 90%; } @media screen and (max-width: 768px) { .wy-body-for-nav { background: #fff; } .wy-nav-top { display: block; } .wy-nav-side { left: -300px; } .wy-nav-side.shift { width: 85%; left: 0; } .wy-nav-content-wrap { margin-left: 0; } .wy-nav-content-wrap .wy-nav-content { padding: 1.618em; } .wy-nav-content-wrap.shift { position: fixed; min-width: 100%; left: 85%; top: 0; height: 100%; overflow: hidden; } } @media screen and (min-width: 1400px) { .wy-nav-content-wrap { background: #fff; } .wy-nav-content { margin: 0; background: #fff; } } @media print { .wy-nav-side { display: none; } .wy-nav-content-wrap { margin-left: 0; } } .rst-versions { position: fixed; bottom: 0; left: 0; width: 300px; color: #fcfcfc; background: #1f1d1d; border-top: solid 10px #343131; font-family: "Lato", "proxima-nova", "Helvetica Neue", Arial, sans-serif; z-index: 400; } .rst-versions a { color: #2980b9; text-decoration: none; } .rst-versions .rst-badge-small { display: none; } .rst-versions .rst-current-version { padding: 12px; background-color: #272525; display: block; text-align: right; font-size: 90%; cursor: pointer; color: #27ae60; *zoom: 1; } .rst-versions .rst-current-version:before, .rst-versions .rst-current-version:after { display: table; content: ""; } .rst-versions .rst-current-version:after { clear: both; } .rst-versions .rst-current-version .icon, .rst-versions .rst-current-version .wy-inline-validate.wy-inline-validate-success .wy-input-context, .wy-inline-validate.wy-inline-validate-success .rst-versions .rst-current-version .wy-input-context, .rst-versions .rst-current-version .wy-inline-validate.wy-inline-validate-danger .wy-input-context, .wy-inline-validate.wy-inline-validate-danger .rst-versions .rst-current-version .wy-input-context, .rst-versions .rst-current-version .wy-inline-validate.wy-inline-validate-warning .wy-input-context, .wy-inline-validate.wy-inline-validate-warning .rst-versions .rst-current-version .wy-input-context, .rst-versions .rst-current-version .wy-inline-validate.wy-inline-validate-info .wy-input-context, .wy-inline-validate.wy-inline-validate-info .rst-versions .rst-current-version .wy-input-context, .rst-versions .rst-current-version .wy-tag-input-group .wy-tag .wy-tag-remove, .wy-tag-input-group .wy-tag .rst-versions .rst-current-version .wy-tag-remove, 
.rst-versions .rst-current-version .rst-content .admonition-title, .rst-content .rst-versions .rst-current-version .admonition-title, .rst-versions .rst-current-version .rst-content h1 .headerlink, .rst-content h1 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content h2 .headerlink, .rst-content h2 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content h3 .headerlink, .rst-content h3 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content h4 .headerlink, .rst-content h4 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content h5 .headerlink, .rst-content h5 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content h6 .headerlink, .rst-content h6 .rst-versions .rst-current-version .headerlink, .rst-versions .rst-current-version .rst-content dl dt .headerlink, .rst-content dl dt .rst-versions .rst-current-version .headerlink { color: #fcfcfc; } .rst-versions .rst-current-version .icon-book { float: left; } .rst-versions .rst-current-version.rst-out-of-date { background-color: #e74c3c; color: #fff; } .rst-versions.shift-up .rst-other-versions { display: block; } .rst-versions .rst-other-versions { font-size: 90%; padding: 12px; color: gray; display: none; } .rst-versions .rst-other-versions hr { display: block; height: 1px; border: 0; margin: 20px 0; padding: 0; border-top: solid 1px #413d3d; } .rst-versions .rst-other-versions dd { display: inline-block; margin: 0; } .rst-versions .rst-other-versions dd a { display: inline-block; padding: 6px; color: #fcfcfc; } .rst-versions.rst-badge { width: auto; bottom: 20px; right: 20px; left: auto; border: none; max-width: 300px; } .rst-versions.rst-badge .icon-book { float: none; } .rst-versions.rst-badge.shift-up .rst-current-version { text-align: right; } .rst-versions.rst-badge.shift-up .rst-current-version .icon-book { float: left; } .rst-versions.rst-badge .rst-current-version { width: auto; height: 30px; line-height: 30px; padding: 0 6px; display: block; text-align: center; } @media screen and (max-width: 768px) { .rst-versions { width: 85%; display: none; } .rst-versions.shift { display: block; } img { width: 100%; height: auto; } } .rst-content img { max-width: 100%; height: auto !important; } .rst-content .section>img { margin-bottom: 24px; } .rst-content a.reference.external:after { font-family: fontawesome-webfont; content: " \f08e "; color: #b3b3b3; vertical-align: super; font-size: 60%; } .rst-content blockquote { margin-left: 24px; line-height: 24px; margin-bottom: 24px; } .rst-content .note .last, .rst-content .note p.first, .rst-content .attention .last, .rst-content .attention p.first, .rst-content .caution .last, .rst-content .caution p.first, .rst-content .danger .last, .rst-content .danger p.first, .rst-content .error .last, .rst-content .error p.first, .rst-content .hint .last, .rst-content .hint p.first, .rst-content .important .last, .rst-content .important p.first, .rst-content .tip .last, .rst-content .tip p.first, .rst-content .warning .last, .rst-content .warning p.first { margin-bottom: 0; } .rst-content .admonition-title { font-weight: bold; } .rst-content .admonition-title:before { margin-right: 4px; } .rst-content .admonition table { border-color: rgba(0, 0, 0, 0.1); } .rst-content .admonition table td, .rst-content .admonition table th { background: transparent !important; border-color: rgba(0, 0, 0, 0.1) !important; } 
.rst-content .section ol.loweralpha, .rst-content .section ol.loweralpha li { list-style: lower-alpha; } .rst-content .section ol.upperalpha, .rst-content .section ol.upperalpha li { list-style: upper-alpha; } .rst-content .section ol p, .rst-content .section ul p { margin-bottom: 12px; } .rst-content .line-block { margin-left: 24px; } .rst-content .topic-title { font-weight: bold; margin-bottom: 12px; } .rst-content .toc-backref { color: #404040; } .rst-content .align-right { float: right; margin: 0px 0px 24px 24px; } .rst-content .align-left { float: left; margin: 0px 24px 24px 0px; } .rst-content h1 .headerlink, .rst-content h2 .headerlink, .rst-content h3 .headerlink, .rst-content h4 .headerlink, .rst-content h5 .headerlink, .rst-content h6 .headerlink, .rst-content dl dt .headerlink { display: none; visibility: hidden; font-size: 14px; } .rst-content h1 .headerlink:after, .rst-content h2 .headerlink:after, .rst-content h3 .headerlink:after, .rst-content h4 .headerlink:after, .rst-content h5 .headerlink:after, .rst-content h6 .headerlink:after, .rst-content dl dt .headerlink:after { visibility: visible; content: "\f0c1"; font-family: fontawesome-webfont; display: inline-block; } .rst-content h1:hover .headerlink, .rst-content h2:hover .headerlink, .rst-content h3:hover .headerlink, .rst-content h4:hover .headerlink, .rst-content h5:hover .headerlink, .rst-content h6:hover .headerlink, .rst-content dl dt:hover .headerlink { display: inline-block; } .rst-content .sidebar { float: right; width: 40%; display: block; margin: 0 0 24px 24px; padding: 24px; background: #f3f6f6; border: solid 1px #e1e4e5; } .rst-content .sidebar p, .rst-content .sidebar ul, .rst-content .sidebar dl { font-size: 90%; } .rst-content .sidebar .last { margin-bottom: 0; } .rst-content .sidebar .sidebar-title { display: block; font-family: "Roboto Slab", "ff-tisa-web-pro", "Georgia", Arial, sans-serif; font-weight: bold; background: #e1e4e5; padding: 6px 12px; margin: -24px; margin-bottom: 24px; font-size: 100%; } .rst-content .highlighted { background: #f1c40f; display: inline-block; font-weight: bold; padding: 0 6px; } .rst-content .footnote-reference, .rst-content .citation-reference { vertical-align: super; font-size: 90%; } .rst-content table.docutils.citation, .rst-content table.docutils.footnote { background: none; border: none; color: #999; } .rst-content table.docutils.citation td, .rst-content table.docutils.citation tr, .rst-content table.docutils.footnote td, .rst-content table.docutils.footnote tr { border: none; background-color: transparent !important; white-space: normal; } .rst-content table.docutils.citation td.label, .rst-content table.docutils.footnote td.label { padding-left: 0; padding-right: 0; vertical-align: top; } .rst-content table.field-list { border: none; } .rst-content table.field-list td { border: none; } .rst-content table.field-list .field-name { padding-right: 10px; text-align: left; } .rst-content table.field-list .field-body { text-align: left; padding-left: 0; } .rst-content tt { color: #000; } .rst-content tt big, .rst-content tt em { font-size: 100% !important; line-height: normal; } .rst-content tt .xref, a .rst-content tt { font-weight: bold; } .rst-content dl { margin-bottom: 24px; } .rst-content dl dt { font-weight: bold; } .rst-content dl p, .rst-content dl table, .rst-content dl ul, .rst-content dl ol { margin-bottom: 12px !important; } .rst-content dl dd { margin: 0 0 12px 24px; } .rst-content dl:not(.docutils) { margin-bottom: 24px; } .rst-content dl:not(.docutils) dt { 
display: inline-block; margin: 6px 0; font-size: 90%; line-height: normal; background: #e7f2fa; color: #2980b9; border-top: solid 3px #6ab0de; padding: 6px; position: relative; } .rst-content dl:not(.docutils) dt:before { color: #6ab0de; } .rst-content dl:not(.docutils) dt .headerlink { color: #404040; font-size: 100% !important; } .rst-content dl:not(.docutils) dl dt { margin-bottom: 6px; border: none; border-left: solid 3px #ccc; background: #f0f0f0; color: gray; } .rst-content dl:not(.docutils) dl dt .headerlink { color: #404040; font-size: 100% !important; } .rst-content dl:not(.docutils) dt:first-child { margin-top: 0; } .rst-content dl:not(.docutils) tt { font-weight: bold; } .rst-content dl:not(.docutils) tt.descname, .rst-content dl:not(.docutils) tt.descclassname { background-color: transparent; border: none; padding: 0; font-size: 100% !important; } .rst-content dl:not(.docutils) tt.descname { font-weight: bold; } .rst-content dl:not(.docutils) .viewcode-link { display: inline-block; color: #27ae60; font-size: 80%; padding-left: 24px; } .rst-content dl:not(.docutils) .optional { display: inline-block; padding: 0 4px; color: #000; font-weight: bold; } .rst-content dl:not(.docutils) .property { display: inline-block; padding-right: 8px; } @media screen and (max-width: 480px) { .rst-content .sidebar { width: 100%; } } span[id*='MathJax-Span'] { color: #404040; } .admonition.note span[id*='MathJax-Span'] { color: #fff; } .admonition.warning span[id*='MathJax-Span'] { color: #fff; } .search-reset-start { color: #463E3F; float: right; position: relative; top: -25px; left: -10px; z-index: 10; } .search-reset-start:hover { cursor: pointer; color: #2980B9; } #search-box-id { padding-right: 25px; } ansible-1.5.4/docsite/_themes/srtd/static/css/badge_only.css0000664000000000000000000000566012316627017022600 0ustar rootroot.font-smooth,.icon:before{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:before,.clearfix:after{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:fontawesome-webfont;font-weight:normal;font-style:normal;src:url("../font/fontawesome_webfont.eot");src:url("../font/fontawesome_webfont.eot?#iefix") format("embedded-opentype"),url("../font/fontawesome_webfont.woff") format("woff"),url("../font/fontawesome_webfont.ttf") format("truetype"),url("../font/fontawesome_webfont.svg#fontawesome-webfont") format("svg")}.icon:before{display:inline-block;font-family:fontawesome-webfont;font-style:normal;font-weight:normal;line-height:1;text-decoration:inherit}a .icon{display:inline-block;text-decoration:inherit}li .icon{display:inline-block}li .icon-large:before,li .icon-large:before{width:1.875em}ul.icons{list-style-type:none;margin-left:2em;text-indent:-0.8em}ul.icons li .icon{width:0.8em}ul.icons li .icon-large:before,ul.icons li .icon-large:before{vertical-align:baseline}.icon-book:before{content:"\f02d"}.icon-caret-down:before{content:"\f0d7"}.icon-caret-up:before{content:"\f0d8"}.icon-caret-left:before{content:"\f0d9"}.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;border-top:solid 10px #343131;font-family:"Lato","proxima-nova","Helvetica Neue",Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions 
.rst-current-version:before,.rst-versions .rst-current-version:after{display:table;content:""}.rst-versions .rst-current-version:after{clear:both}.rst-versions .rst-current-version .icon{color:#fcfcfc}.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:gray;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:solid 1px #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px}.rst-versions.rst-badge .icon-book{float:none}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge .rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width: 768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}img{width:100%;height:auto}} ansible-1.5.4/docsite/_themes/srtd/static/font/0000775000000000000000000000000012316627017020132 5ustar rootrootansible-1.5.4/docsite/_themes/srtd/static/font/fontawesome_webfont.eot0000664000000000000000000011103512316627017024717 0ustar rootroot[binary EOT webfont data omitted — the embedded name table reads "FontAwesome Regular", Version 3.2.0 (2013)]
wm+Ȍeո3/&*:*`mo<2IG>Dw LB_s*#S<@0jڗ7; ۧԸ+4G0mW+T,INv8&KXl33FZ,ԯmO$ e!_,ˌJ+$ni>z9f'Kt 4V#S҄ E< 0eGvSe|z`"$*%r :XWxT-wi =t%vˤ<.lK6QJ̪te ?Zr;*|9ۿi)?>n+;!-`c'-C?}/޵41Lf[ə~I&9nʈkaSv`}+sIv?D'Spqz aYF"lx*Jo o=^B<5Ȍfty=Z_+릲6y9u3TpJm!6(83N(4ħ08عhQg'j,/FF T<6;Ty9*Y'5(8]B:.%Ӳ5.ml  j2+ERWןg?fzj<5g|}S_F?ufoFW/=]/Q I^vk]?]9df,?qɻfZ!:W1i,Ym|_9+WyyCH:bѽB([bpH:> NB5=ļ޼qC]2װ̸i009dgG:쌘A_o9w"a{HZgtvvtBƺ5Ii d\֐KmظmrY+4[]]Nc5L|oܲe[y[0n8yUy˝JM31UGD|иmHEd~(OqY*[-63}L{.}F~qQךgVvǂXA)Rzew[Fiܳ`S~oGZ^B[ڷtڕ ]4La˦V Zu-䚳E+8?펕k?z{eJYw97>B-8tsT쌸@?/<;mnENh#AVl^W5D¿{Q.ZPpJjsWgղ&.k?Oxg8w.Jčg:Nv{(F1u܇W }l1[f{/umBgcoQ<^Q[GMtP니_S=dԞZ-Ag[T)Z,.Xƾ1.8Xj!CaJͦuY-* ]$3 R B)Vb33{.NTgMQўy6va&b!.N_ _]ڈA3]W{rj߀tG\H`/;ذ򙧵PI+q5pM{&sy'~c&T6 Zm}mm}^U+>Pc}Z#_GcCf78}z6.Y΂-xmv龾tex+b@)+q1+-V d+Vσhx':)Oy),m׬>!3}Ucs/-p%g|%Bs:ꚽw]jGfc5HF5i5W6TÄ;OW”&PAthg gDm{]2Yv'>Eg+c؃,-xg : -icvgdq薀bnvPtd\7onRڟno0hVLHTMMӐlCƲ5ݤn5`Rƥ|4e" Qt@iU4|,iՖOj~CpǗv h5}5E }~>q!ĸ71xku2'bi ƯyW3]3+[v:͝[vf1U3sfF%Y\dy].qRIvZv˅(jn{Jr=0az/:Ξ ?$ 9]ɢFk N۰ bc@ uNpU/< [t?̳$@7\[rϣ{Rcsx$_y,yŽs[Vg +&x\.^"M8sw۳?Y.q«e|m[By$V;U١:^fώ2ؖNVDd2˰2 N !O8?vUT#@hIq]M;VA"t0/Wܛ$v?%zhP-ªP.Wdn'֙cs/Qĥ̡sYrM暭I!qQ p `ѡfC[j@pP-xaAL/d*j_!$LU;a+:qާiNizL\8Ẕm\~$b*,d5D7FSF~u=X&6msXC16d] ]WBE}!:^oW*VZ뜢/S WEHSWty5kM9IϝlTNn׹c< p'>L&5 w>z R^e(&U^Iֆ$I67Y#6"UFQ6ND0)bEEAEq`[%IAJ2vH&jKϐEQ/:;,ixxky D|6Hv#! !rT%I&OĄ.)F" y DLfUzE-*|7 6&DkE{&*d @dECpK8FYEY^&!wZ;o6b8t7N d'`d:U6l4I"$6U;:"p<8@1l$J<`Akx$"xBp\ ]"ɨU]0%"1=Z`X"Xmk3 6RJN#E fl1PUp b`A &*vKY8Ι ydĈ$<\+ndC:bJER} ,N/9%A4# 5 # `*o3< v%Oi Q2Y!9yGE҈.I6ީA4݈5x;1+,KGU4I 4 $$փȂրL! pYID(p嚈^48jmA30- t-TKTn4`D 8;ǾAEW ZWS89yRάګhsc2Y#>%Bg"6~BW-N]ֲOH!(N ׉~QQ.jB#Yfdo4WfsQ3oNU"xbv*oDuW94Y;J?$ro5c8o_cv)Q9616"HB okkQX>Yv0ԷQEN aڵaۨXmg! eǾdd'Pє9rE +%Yk9wa K{@{.:@`3}K`&/,r<_ 1!5Esn-";Ln[ϸ@܄,"B0׫KܸW &n1/) 9WKW̞' ?w-e}Jsף<~uIm_G}sg_/] Ko֢uOcLOk(}!Wny[a+,%mB\$#ې3rX,N{3%m }ItKJ\e.I4rggnCs//5ML_w=+rK+׮]U/\#̾Z[c:A?3:u LT0>QwW\ykb%\.臰+M9rpc'i<0{CM{W3~+0;zm pY{ZwgP%K)-A"8?;o=:+rڻڻ3 , q혧/6NmGLv$(sK V n?wۘq 9:ٓL4ٷc.öPmi)1׎fˢ>j %J[f&ѝ: .~^ -vb8y:iž+-{XVܷb,6~}+7ɘp1a"?cs$WĒs7/xxүԼUC⿾BLD?$}ڥhĞCoo3oOZꪁ;ҫ,ŋU;f_8mU ^G[%gG8;9dx)2x;E$Ӗ2A >÷~ \qIEq.%vd2R!R4g*SHOAPy]yRdA,*,}s3c7|^袺NfV]e(݂FI/ҋW?sG㵇t-Jzp7ȭ-> k~nVz/]?<7\~bw+C[/g܊^2&^ø^R,l8y8+=_cߍӯw$,$! ]'ݦqyn-_aφ>JPVqsU樣CU_'v񪯩}*i|\KM宲dNvdjAOKpYє5ǜ[ sSl7kV_f.pŰ1!*tXmL\WrWvVAV_rda)>`1~gaLl%7yc-^vեoi3P<[&F^޶d6@zc=glO^O/3𖺮Ծ; ].`GמeMgK#p6p˹VŁ+֟X,xUr|%.:Į]V\}>] Gߚ3k[ڟ^:;wY?}ڦ{VϞW+-rL'7T^4:g```۵c7qJEfc;t }wLb}'<1)*:vSrTb[81'o, _p݉OBLZ@e+%1/)zx Cl*iv;8_p:ExӦ@pՅ?w% ͋Y34Yd`6u}B%=*n٩mfJUX."N!"SLLn򞉿k?w.2'xJRJ6'!j@jyAdty"ļBayF KozmԶ!i c!?,C%J3S}YKKtEvDo~zɶ٪qkkn;.u[  L$GG>q%VC̛Y]{CxI+[\3yCYC쟻9(K^WO3{?ΚȮ3osGq̖{wO^TONwSqorml;16m060ѫLI@BBztQnz6\9K}m % IⅾM _s9ݫnjh;}hAow}צ9+oG8Ċ5Qj5mX]*)\X9RO2 aBžh{FeӋpxZڿ #WoFZ%ÛW? ?Zp^tIruk;<mj5={j"]X=gPiޚv\R!ŵ]>2>a0E1|"8g`=W2M۽iVW :a#<~!DrQEU hh(6&tceiuϸ^j@κh'HMMDWƀXkƹ[$[-M>wu\rI&ΈtwGd2?;w-m zRw8-pjX]Yk7`Gͣ14-m]ǖw3PƝsf lp4D'hxn=S[nTx;nS.jy*ƍТ!dB窰qRDЀ[Fyta@fos68JuU3JGOĤ~rhc8&i_2c%T)|D~d(?`ڏn*CR#&EBDCEshIMI^\& )!qҀ|oJA!˳ g֣ &YN$STf `Y~>}Xrzhsd;a&%dR_LԠN0R% XvM32>Uvơf}QKц9qg}w#2KS2ѥ-&;0Bran2Vl/ fg! 
^ wJ6"4'^ z哥+V$۫ԋ}ΙΦ-OG}+v1 <N%uA)۪n1'XVg7,2$⋅C$d+= s앫_\'=3k~jX гxhP":nXT*Eԡ4Nen1YRti%L@djh>S qyAWG'0_9P"- &fl c |Q߶`P+|f]u x FϮ2RF 8jyCNo~ozf^NlӥS,/pOLLz*7.ȸ`yzɼ+gF#󂴖Q :%c3I}\dDW&۩]vȖFoynT尘dH]1ZzR1$G5GH B](hZ#IcE?Mat.Gҹ{tSA6ILjQV.ʹccA%%3aO;PrH&3 NYˀ9SԱ׎C3Vv^Q/lQ03åWn'mYXDPK~t5  E?99pafhNTG"Sh)ZԱ`?.B)9))hmLQ$o>&}$ lH6 )$3D ӨނTT=)ldz\ːw>(0IGA^$5ZB$L]`L42\Zq H_gIys?k#KDa:a|RҽKmojumkSg:#}HncZkW/=s"h# (7ϴI̬yɤgjj4\-h36OP:r-PXYxYÖin[ȦM_ux/dKJb( e5׃N~R$č AD/#]~T Oav$F7hIyn!QXu,gd7YJHC8g.SÜ%ꋰ%v@N\h`?/~b$lZ˴o޷.ݛGNp>a-]}Y3?]~Cy\^1P< j!_vy?:Rz HOOud8q 7bTwzLHvi[z--6nl&췯=ikLF[`ށFJg]\~Ow:w-/Vm׾R{`ݑ/pǧ52lm?qN,hĦ=O]6ǥWAeyGrc,a0&RL&?\2cj m9O)TV{#4y)UZ40*OxL"vǩSR>EA-h͙+9fKcpd?[7nyjhyNOg8bzx# RRn8*f+^t.1N`BXoC@V5( (Q\ck-L˶(LIAK0Kmc*˖%*YZ2@6)n N")P`f2c[ jZ ӠVKAph9~$ۥO\n\+UQ(Q8 DO 7P: e%Gl#OrMa̜*r2¬'cdg**~>hӔ\\[۫^$*;ix|E)p8"Կqr΅ބ:R6~C+;MmR}ɞfɵG*6ҩ3\I|zcLcS?E¯|Ԙa2zd.F45`@ t&YZb@J|TmRQFʞwMsVi<6Prkm;oK/]{Iw \}d#0!ó& H1w_d ߴ4 ^oZO*? 2{G|hd\JW~EoG.0." vSfc#RZUh z#;oWDfD810xgϡ2*?כ8pۼ[I2^ cR;⑟h$L~EV5e1%-ݎ-]TyG'Pr+f?8#˓C7]q3u7}5:}s{{dǃf|}@%7UXٱkk9{U3=gں`ȊBv_72Zzg1&E4ɵd~P$senc1WfGF07VZ0Pe lN"o"v*]!^1Ɋ%vN]u2#.4jwGzNF+tvK7 7XgyjPG@kl=-M,cL1P<}1x9NueĔwS:L`xW,L ,{ɩTKb}A|pDM )hE%/orY2*@5&bNXJ)CzxhpA4k Ť%*9uJؕlZ+yU|, *Ǐ_2wxNʦoŊ>P^-":HUlTW2]7 w} z)m 㲵;lJ\W)-j3䪥D٠BnQ=r>C*B YÔC {5 ĿU6rdJϷ.1%n m/ȑ1]V&|3}#77A͛tMu]Mv{SZ(J N k~%R92jGPO }p>JG9Yϟi-7cX /e2#2 #a7d}JTEn7 (Sc"Yl$)>+|64AvM4Hp2&<'zN UEǒnOJa$qCSV\%ve(# ,Deꡃ;W>_L"(iq9oiZN(KNزJb^0\{cf$e*F&-&1\0cKS㥫Cc6y^3fc_n62Vw2cW'ʌ9c+M `v],IdFO@t$/]+QtX4qVߔ|뮑im'?Qx?^ڌ"ܼ{wp3uJI_,ǹ=*PjQF+KU@$XA'٤L fDnv :ڴ jJR'Xҍ]Sy]Z|ݨnn#iAGi^CI(0 U$ !+~|V^795hyXW&UFna%I䷿q]zPuEx}Q#VzStca_;F{mfdNq ѣåL'Q_Dnk^#SrQEY7aЏ>an˭X<%Yq2μihlk• t7pn{WNձA0-w N/mmsZsFwPsRW,cPVt@1psNw7ed|\:r: bF[ܞ f'SnjE&YBE3+'/w@}ҿ~܋\AW$ط*;H][RmO~PX5ًӳK,5Ա`oʙϮom*5[昫iYYIn((9O`OCd,!¤@Tfu ;2 h4uR$?wleG60y= *~w1e< Q9@*VaXLA 8DY#|r"HyGggd+]U W ]ͩ'h"UJjtg(\Ë~2O Aߥh4,!3I)}=+n3h˴؞WB0 S,DоKfOv(8N<}͗B ;ďC x>. wX4|> k^Zڍ :$4"8ELz,"L4n4s7_B~#8Kʕ;8Ľb@ Aٳ_2 ˓֍M ԵɾtRDx H y6 U' }ȵuWWhljԮBU ׭`?Pp=lǭ-b5a=\D xh<׊yG /'hS=zRzD-^^klc|zu,3ZjpN'YꢭEv[zt8KV`Ue_FCق2)=Thr1_Fg{>Hs>2=5E  (-,h1ڀ4*vTPN4B h$1U,H^"Eoxi6Ȭ5 _]WZ^x^e%zM2?~̇bWK+xwMt,WEEބ?L*SZB&% +o.0T8V2[ǡ:V (vkRhi 2ʤdtsKFisU<4ߢTW|q,\L8 -M_+ dl-J:\/DŻMmG'lCNdvv/^iM?ʾ_Q.tb21)yA`R$8I.9~Fذu=V (!gx(;D7l8O"J, g|xo95PȞ v=O5UvJcٱiOno̟}0Fxr ekgIA9Z}0)  @df - _́ ekNzd.eW}һ*5X\PkؓTF4}& ][uFVwW@&gKZ'U˥ 儸%F1h|`ԏbU,L@R`&*.cK쯈,ޥ-xqԶpN&/ń m Ș&sk8}MA*t 2mËjĩ]~x ip#J.H "KCԘ޾Jaw/=;n Y|Rrhyh*斅Cy@gbY4N_Re/@L-Tq Y :+wT0  eH)ːA&5YKL+i 1r銀SJLÈ)y͠Nb)kGB;)M7#e@,Y`gxL-~ WeeFN|H_Q> >FvsZ؉KSF%ҏ)XcS?x7Ju)葆2CtjIYٱ%ppu^o$oSKʖ黷c䬗sؤ@V !冀ÂS t~rJ'FXU\B98V\tʁgeE"(Z>*-1G/e}5y B׬^#ܭ/bKFD25OSh\ϋ _YXJ#Jl0zYQ D;Nkg ^7 gruݕ;D^h^&pF,Dǂe;UXSb|qjj6j5pC9fYj6~gMesCOp>oLGVӂ:QvEd ut-̾udp(,Y0;['L('GUŊ3{A?ZiA%N6"#:;z\4|3>VUxSIA 6ZT'Ο-&Qw9Nv$Y|wE"Gj4̅gGQ 1#oSR~*#Օ+?6UJ!}'oQU_ NA7f~(xa[ :6Mj9+C6u>;<7 urj!hf7/bʔE9UknF]aUNS nCD%m)5bx`E8Z͚R]+P3}OgciڎK6w <  sj"IʎF QEh1YbkF~|-&2c9N c-{^q֢ YwNlKo{]W';g= ~CpߺWaӦVcj<o.XĪՆv-C~Fwhlңk>4riPb_h0 EL&쑮-3On`p\UUQe~ZU[Vg(T2+Zn=1CwNM+׳5 ?1 Uϥ[A~f]b]厽-Py 8f/nʑz_u/h, j7uz21#z,zQAlzx-C#F^^0'DGfL¨<`u+<(rQ1~^pJZ,0}9-׬E\Nȸ882-7hX< ' Ο_u [8mYsgdرjJ.Xv"x,8'TV,3":b@ NGZz#ޣKf~>2^Yo /}a)̚qs._zf0 <61SEߘY>lZ7~&3I;O9z BwÙ̖t- I{.1?)#,N^'Ćpn!5"> 穿t>5jTO+4gOV[>I[iJć+kJyd = <6_#b:iΊ @ G0@bufT(w?R) ϹE:0%JePap#4xǧ y^gi0~po+GQ2o!z|iۗ cROQAE@ 4=inoaZ  g 7=s'8Dw޴f8Gkvo0]w0މԥ'N\W?V^A~:ԡY[b.Bxeg.togfO~_=o>tDž[=(^Of{{0[paGKGA$OČ0"P 18#AtW0G"b? 
N-./ &'6?_9fnXLjw#I$۝%z]M`/L4='2sylMwX#zU@P徦kI TfiFAj\l)W)j̯R)aq~dӸ:ڨ4b o|MH6WcqOȹQ[^lT5IZKi*+%E[ZetX/ UG%%'< 6*FXnXLkkB풞&lbvp)&Ѧ@;ghEά]iom匬_, oSӛƹzUm44}:S 4d|Ԍl  3S<]#p?F^nٶeմkfhk1[^LlJEWVE!߹3 ƲO61iM Qv Gj OՔ׏hn >,nKO4Ye |in#Q]LaG }: 3,c/tZǫ;tJ)I)VA }*A &>oI9݂|>f,Ź:FO'kf(d(s)eoZAg?oB.AIg,I |`4)ێG"c4.BI>U@ ߄3D ~ cYg’9^AW&*,RwϚ=v 6>:ìvۗ66w-wuz'fݻx>t~ޑ.HDnSv wufży452uH[pf>R۰2ۖZߙ `,}&=c{?d5E5ɼd<ɝ\$xyIab{V/$4X;c'=:|u~[33Y;m~ۡ60%K HlBUz[UuSn^mfWUǪ̪Ho2o7%^ض>گM[[Am/ |@L D)9cک+RJ-;6Bsj j t˝d ų < 4g5#!4QLA s #$hZ<֐` V‚,]G$gFlq<5E;|cXZj1 nDG>S'hKD?'AC . )4gRtXn?n"HQn.Zayb-}EBNȳf@(i-!4nlvvfaEv W۔tð4PpHK0HEb m^MحY[Ƥ [9%gFŰNJ; |erTFc4 #&E -j.Vai4YUUƐ 8BG͌[UuuNi0cX&a+Ӄ*v)Hq6#;.j=3VТh 54b9HUJZK yFhJ!(y%00a@teeU!jL5Ѳ{b2Pmc>]6stԲ Ѵt߶Z]M &q |K ZB y-],V$Q;qT<IT7S>96Nx4xd"PU8i6PYVuCV; jwy ^ - X]>%vQ`¬{*Q4w׸!QrUFkwJ^hUFϡМ$2 U , 9V;s6 6LlZc=G*h}k}֫4(!R4BJ.bﱖqǼfCz4tzO ՜@CƨqGSB=8Fr4j6|nX&S*n^R@ukQc]1 TZF-|&8֢FgWrj5NlK9fb_5]׸6rg՝˶oZڂѕ svwMd~;]2Q뎺}/Ec`c9@CaƋghLF4LxؐpxnG X\5>ÿj9{p#R: 5phW(w\r(7~rÍG:Wf2m#@eꙺ{]*:0}Bo[ڥdmwr 9 o8W_E-uϷOܟ?/\6{@kO6O.UqVԽhb`cKfۯZ$`h!ͲHCƋLJYt@ 0^BVct="`l=p*A"PkhojCOh#X<}]YU yhw;0}uҩn0eklBa08m^-YXqwnܾ-,"zj Nvw ~&O+HA "۷+9 Oe9D ͻLUY7@?K72'b185v4TUͭ;QW+k u0wgBFwjaءT^6YR)J;ULHW;aة Qޓgiv_"#E44r|ML,M3Ğ:L"yH,^JD .kxf[rqҨdn7TFiNJ2w_FRפqxKn!۸\(GN>XEcڃHXŊ,5*m[H,,T>Lw;RQQw˵c 5h=&{XJH h(g {fn0*\]6? as¦ևoݶXހ Az[祷R{OIozaUO=b ʧz E^B6hӳ+[NRN6~Vo~΀V+_^AW fz?b/C PAp<$L'4 1μDf>7=-ꩁ_^{I2T|vL{ RcƋ.t󊿢ğN&=?j^ůo_`e{ƫ[nHTp2ڌI \eo,B`tkr\0|0A VNp `ixNrl&j#!Mnf"u䵛5Hhqr!4#H&uhaElgF At K,W:DPY`Z Zhݢm=n5I,'\Ӏ"ʪiʖ4\}]6Zm@0ģε״q4²Qo(WW ',M'ͱώHGr3I5_?ZoXf*zl6uŵ`a9ɘ 17++o[X_ee`rENSAq@enoK㢐WVAQc`ZO^%s?}NdԫQkK_cb&j` x ֺhd1:ċ@v Æo%t,xg)d#xA+/=ؼ:lzgmUt|]'jkúYuuݰ G$W pO BSDHKޫRESjsU,6/:.*`*uѰ!P::]P6N.owx}=tN"on@GgoL+$絨-:^zTl&tT~zlq:zT9|)BeB#f oMx?.ڙ`8P'k-[ 7{qb޿ c|bgAb~Ļ<]oeGw_u]=7ǽ~]/HHt@2N/1( 84s5Z{/`>un{h)Erc{?exc``#10LbXɆYŃ/VeB؎Ӄs? \ugT5]·_߇@@#A-UB.Bۄ+D,D|]"'v@HRJZ([("{S 9rrrs˫[,&!RHA\OUBw蕬*VSLlQR %Uejjj6:ՖIUU_R?DZZ^h}fVݢYǚκG0Dxc`d``lgda& fB0xmN@+C\6Ƹpm1a*^B AR!J!mc.\ t'9{:E0d2g "9cV2y8ќ£ylEk^%'gVo?U_?a\e`!|G1|')|g9|_q?/+*&o;.!~')~%~_7-{ğgwĿoR@DU2FujPZԦuG4IS4MhڕviA'E{>/G::::i6,&L[h6:Nd:Nh;NgЙtMйtOЅt]LХt]NWЕt]Mеt]O7ЍtL&Bٕ!i@PHIi'%RF9-Tk?3}FٙXRΕrS)7rK)KJgrg>A`i-SߩJ)X<3#ͬSq62T$Ff晁 e^C~X24̳?A-K3<&&L+ȌHr1 XQͱQc%j~d˥NX#'g,~"R\ 0nH0ٟN Z#]Ѱ--50/5l)MՄV2ʼneu Eb}e=p~fRfzxY[닾ym6@n:"D)D B+Th~TDZoȡP$fP.L/$yd/3|ň7O--M<աit9e^)7YZ]e4xVhv)8e~4dplyz="a0gM!~Tco]`M-`ppUw4ċ&[m=جLN|wTr< dže1u[XU'*ckZ c !#N&'3FE3/%`@4;_#;O iV8w(FmIIZ(: UNJ)ma5]Rfj-jZCnXeaq|q1HϩUC5u64.-FNhES mrzY^ - &W$ &X5fP 0guTNsFr1: m򲊝/T-8"p'ƅfZ(&LvB%e*M:~lXS.EPkmMP^ 'MMv%:xO镾q2̹~Nt.(_w6I`X(wдSœ8.J֪YE CW<츚nՏxTuQu,VQdZ)CFY6g97mJotjHw.u޸[s:Ks97I5#]4%2F2XQ͔/1yAb>iIr:6oUk`& N!td y*[cگ WSk"+i"}Ɖ/jK:\d;s1~0TdϴC5g~\Msu[4?7EےqϪfN5 jwQQansible-1.5.4/docsite/_themes/srtd/static/font/fontawesome_webfont.svg0000664000000000000000000060230512316627017024734 0ustar rootroot ansible-1.5.4/docsite/_themes/srtd/static/font/fontawesome_webfont.ttf0000664000000000000000000023234412316627017024734 0ustar rootroot`FFTMepaGDEF OS/2z(`cmap5jgaspglyf2head\"6hhea $hmtxlocaq4maxp!D name<e!dpost2$webfRQ4=T0<33sZ3pyrs@ # dHN@ / _!"""`>N^n~.>N^n~ / _!"""`!@P`p 0@P`pd]YTC ߷ݹ   p7!!!@pp p1]!2#!"&463!&54>3!2+@&&&&@+$(($F#+&4&&4&x+#+".4>32".4>32467632DhgZghDDhg-iWDhgZghDDhg-iW&@ (8 2N++NdN+';2N++NdN+'3 8!  
#"'#"$&6$ rL46$܏ooo|W%r4L&V|oooܳ%=M%+".'&%&'3!26<.#!";2>767>7#!"&5463!2 %3@m00m@3%    @ :"7..7":6]^B@B^^BB^ $΄+0110+$ (   t1%%1+`B^^B@B^^"'.54632>324 #LoP$$Po>Z$_dC+I@$$@I+"#"'%#"&547&547%62V??V8<8y   b% I))9I  + % %#"'%#"&547&547%62q2ZZ2IzyV)??V8<8)>~>[   2 b% I))9I '%#!"&54>322>32 &6 yy 6Fe= BSSB =eF6 >xx5eud_C(+5++5+(C_due> /?O_o54&+";2654&+";2654&+";264&#!"3!2654&+";2654&+";264&#!"3!2654&+";2654&+";2654&+";267#!"&5463!2&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&^BB^^B@B^@&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&B^^B@B^^/?#!"&5463!2#!"&5463!2#!"&5463!2#!"&5463!2L44LL44LL44LL44LL44LL44LL44LL44L4LL44LL4LL44LL4LL44LL4LL44LL /?O_o#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!28((88(@(88((88(@(88((88(@(88((88(@(88((88(@(88((88(@(88((88(@(88((88(@(88((88(@(8 (88((88(88((88(88((88(88((88(88((88(88((88(88((88(88((88(88((88/?O_#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!28((88(@(88((88(@(88(@(88((88((88(@(88(@(88((88(@(88((8 (88((88(88((88(88((88(88((88(88((88(88((88y"/&4?62 62,PP&PP,jPn#$"' "/&47 &4?62 62 PP&P&&P&P&P&&P&P#+D++"&=#"&=46;546;232  #"'#"$&6$   @    @  rK56$܏ooo|W@    @   rjK&V|oooܳ0#!"&=463!2  #"'#"$&6$   @ rK56$܏ooo|W@  @ rjK&V|oooܳ)5 $&54762>54&'.7>"&5462zz+i *bkQнQkb* j*LhLLhLzzBm +*i JyhQQhyJ i*+ mJ4LL44LL/?O%+"&=46;2%+"&546;2%+"&546;2+"&546;2+"&546;2`r@@r@@n4&"2#"/+"&/&'#"'&'&547>7&/.=46?67&'&547>3267676;27632Ԗ #H  ,/ 1)  ~'H  (C  ,/ 1)  $H ԖԖm 6%2X  % l2 k r6 [21 ..9Q $ k2 k w3 [20/;Cg+"&546;2+"&546;2+"&546;2!3!2>!'&'!+#!"&5#"&=463!7>3!2!2@@@@@@@`0 o`^BB^`5FN(@(NF5 @@@L%%Ju  @LSyuS@%44%f5#!!!"&5465 7#"' '&/&6762546;2&&??>  LL >  X   &&&AJ A J Wh#3!!"&5!!&'&'#!"&5463!2`(8x 8((88((`8(8( 9 h(88(@(8(` ,#!"&=46;46;2.  6 $$ @(r^aa@@`(_^aa2NC5.+";26#!26'.#!"3!"547>3!";26/.#!2W  .@   @.$S   S$@   9I   I6>  >%=$4&"2$4&"2#!"&5463!2?!2"'&763!463!2!2&4&&4&&4&&48(@(88(ч::(8@6@*&&*4&&4&&4&&4& (88(@(8888)@)'&&@$0"'&76;46;232  >& $$ `  (r^aa` @`2(^aa$0++"&5#"&54762  >& $$ ^ ?  @(r^aa` ? (^aa #!.'!!!%#!"&547>3!2<<<_@`&& 5@5 @  &&>=(""='#"'&5476.  6 $$   ! (r^aaJ %%(_^aa3#!"'&?&#"3267672#"$&6$3276&@*hQQhwI mʬzzk)'@&('QнQh_   z8zoe$G!"$'"&5463!23267676;2#!"&4?&#"+"&=!2762@hk4&&&GaF * &@&ɆF * Ak4&nf&&&4BHrd@&&4rd  Moe&/?O_o+"&=46;25+"&=46;25+"&=46;2#!"&=463!25#!"&=463!25#!"&=463!24&#!"3!26#!"&5463!2 @  @  @  @  @  @  @    @    @    @   ^B@B^^BB^`@  @ @  @ @  @ @  @ @  @ @  @ 3@  MB^^B@B^^!54&"#!"&546;54 32@Ԗ@8(@(88( p (8jj(88(@(88@7+"&5&5462#".#"#"&5476763232>32@@ @ @KjKך=}\I&:k~&26]S &H&  &H5KKut,4, & x:;*4*&K#+"&546;227654$ >3546;2+"&="&/&546$ <X@@Gv"DװD"vG@@X<4L41!Sk @ G< _bb_ 4.54632&4&&M4&UF &""""& F&M&&M&%.D.%G-Ik"'!"&5463!62#"&54>4.54632#"&54767>4&'&'&54632#"&547>7676'&'.'&54632&4&&M4&UF &""""& FU &'8JSSJ8'&  &'.${{$.'& &M&&M&%.D.%7;&'66'&;4[&$ [2[ $&[  #/37#5#5!#5!!!!!!!#5!#5!5##!35!!! #'+/37;?3#3#3#3#3#3#3#3#3#3#3#3#3#3#3#3#3???? ^>>~??????~??~??^??^^? ^??4&"2#"'.5463!2KjKKjv%'45%5&5L45&% jKKjK@5%%%%54L5&6'k54&"2#"'.5463!2#"&'654'.#32KjKKjv%'45%5&5L45&%%'4$.%%5&55&% jKKjK@5%%%%54L5&6'45%%%54'&55&6' yTdt#!"&'&74676&7>7>76&7>7>76&7>7>76&7>7>63!2#!"3!2676'3!26?6&#!"3!26?6&#!"g(sAeM ,*$/ !'& JP$G] x6,& `   h `   "9Hv@WkNC<.  &k& ( "$p" . #u&#  %!' pJvwEF#  @   @  2#"' #"'.546763!''!0#GG$/!''! 8""8  X! 8" "8  <)!!#"&=!4&"27+#!"&=#"&546;463!232(8&4&&4 8(@(8 qO@8((`(@Oq8(&4&&4&@` (88( Oq (8(`(q!)2"&42#!"&546;7>3!2  Ijjjj3e55e3gr`Ijjjj1GG1r Q37&'&#7676767;"'&#"4?6764/%2"%ժIM <5:YK5 g'9') //8Pp]`O8:8 /\>KM'B0Q>_O 4h 7f:jCR1'  -! rA@   %e%3267654'&'&#"32654'&#"767676765'&'&'&'&/-72632;2/&+L@%&):SPJ+BUT4N-M.   3T|-)JXg+59-,*@?|Z\2BJIRtT! 
RHForB^ pKK ,!zb+e^  BW  S  //rAFt/9)ijLU>7H$$ J767676?7>5?5&'&'7327>3"#"'&/&IL( 8  )g ='"B!F76@% ,=&+ @7$ ~)J~U%@ @,Q5(?2&g  9,&kɞ-   i;?!6?2&'.'&'&"#"2#"'&#"#&5'56767676'&64&'&'&#"#&'52"/&6;#"&?62+Q6s%"* ' G+"!  1( 8nMHX0: &n+r  , !~:~!PP!~:~!P5d: +UM6a'.'  -   !& #>q\ 0f!)V%%%%h;?!6?2&'.'&'&"#"52#"'&#"#&5'56767676''&'&'&#"#&'5&=!/&4?6!546Q6s>"* ' g)^!  1( 8nMHR-: &n2  , %%%%5d: +UM6a'4.'  -    !& #(,  0f!)V:~!PP!~:~!PP!/?%#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2&&&&&&&&&&&&&&&&&&&&f&&&&f&&&&f&&&&/?%#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2&&&&&&&&&&&&&&&&&&&&f&&&&f&&&&f&&&&/?%#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2&&&&&&&&&&&&&&&&&&&&f&&&&f&&&&f&&&&/?%#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2&&&&&&&&&&&&&&&&&&&&f&&&&f&&&&f&&&&/?O_o%+"&=46;2+"&=46;2+"&=46;2#!"&=463!2+"&=46;2#!"&=463!2#!"&=463!2#!"&=463!2        @     @   @   @   s  s    s    s  s  /?O#"'&47632#!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2     @     @   @  @          s  s  s  /?O#"&54632 #!"&=463!2#!"&=463!2#!"&=463!2#!"&=463!2`      @     @   @  @     @   s  s  s  #"'#!"&5463!2632' mw@www '*wwww."&462!5 !"3!2654&#!"&5463!2pppp@  @ ^BB^^B@B^ppp@@  @    @B^^BB^^k%!7'34#"3276' !7632k[[v  6`%`$65&%[[k `5%&&'4&"2"&'&54 Ԗ!?H?!,,ԖԖmF!&&!Fm,%" $$ ^aa`@^aa-4'.'&"26% 547>7>2"KjKXQqYn 243nYqQ$!+!77!+!$5KK,ԑ ]""]ً 9>H7'3&7#!"&5463!2'&#!"3!26=4?6 !762xtt`  ^Qwww@?6 1B^^B@B^ @(` `\\\P`tt8`  ^Ͼww@w 1^BB^^B~ @` \ \P+Z#!"&5463!12+"3!26=47676#"'&=# #"'.54>;547632www M8 pB^^B@B^ 'sw- 9*##;Noj' #ww@w "^BB^^B  *  "g`81T`PSA:'*4/D#!"&5463!2#"'&#!"3!26=4?632"'&4?62 62www@?6 1 B^^B@B^ @ BRnBBn^ww@w 1 ^BB^^B @ BnnBC"&=!32"'&46;!"'&4762!#"&4762+!54624&&4&&44&&4&&44&&44&&4&&44&&6'&'+"&546;267: &&&& s @  Z&&&&Z +6'&''&'+"&546;267667: : &&&&  s @  :  Z&&&&Z  : z6'&''&47667S::s @  : 4 : | &546h!!0a   $#!"&5463!2#!"&5463!2&&&&&&&&@&&&&&&&&#!"&5463!2&&&&@&&&&&54646&5-:s  :  :4:  +&5464646;2+"&5&5-&&&&:s  :  : &&&& :  &54646;2+"&5-&&&&s  : &&&&  62#!"&!"&5463!24 @ &&&&-:&&&&5 &4762 "t%%%k%K%%%%K%k%%k%%%K%k%&j%K%uK"/&547 &54?62K%t%j%L%%%%L$l$%4'u%%K'45%'45%K&&u%#/54&#!4&+"!"3!;265!26 $$ &&&&&&&&@^aa@&&&&&&&&+^aa54&#!"3!26 $$ &&&&@^aa@&&&&+^aa+74/7654/&#"'&#"32?32?6 $$ }ZZZZ^aaZZZZ^aa#4/&"'&"327> $$ [4h4[j^aa"ZiZJ^aa:F%54&+";264.#"32767632;265467>$ $$ oW  5!"40K(0?i+! ":^aaXRd D4!&.uC$=1/J=^aa.:%54&+4&#!";#"3!2654&+";26 $$ ```^aa^aa/_#"&=46;.'+"&=32+546;2>++"&=.'#"&=46;>7546;232m&&m l&&l m&&m l&&ls&%&&%&&%&&%&&&l m&&m l&&l m&&m ,&%&&%&&%&&%&#/;"/"/&4?'&4?627626.  6 $$ I     ͒(r^aaɒ    (_^aa , "'&4?6262.  6 $$ Z4f44fz(r^aaZ&4ff4(_^aa "4'32>&#" $&6$  WoɒV󇥔 zzz8YW˼[?zz:zz@5K #!#"'&547632!2A4@%&&K%54'u%%&54&K&&4A5K$l$L%%%54'&&J&j&K5K #"/&47!"&=463!&4?632%u'43'K&&%@4AA4&&K&45&%@6%u%%K&j&%K55K&$l$K&&u#5K@!#"'+"&5"/&547632K%K&56$K55K$l$K&&#76%%53'K&&%@4AA4&&K&45&%%u'5K"#"'&54?63246;2632K%u'45%u&&J'45%&L44L&%54'K%5%t%%$65&K%%4LL4@&%%K',"&5#"#"'.'547!34624&bqb>#  5&44& 6Uue7D#  "dž&/#!"&546262"/"/&47'&463!2 &@&&4L  r&4  r L&& 4&&&L rI@& r  L4&& s/"/"/&47'&463!2#!"&546262&4  r L&& &@&&4L  r@@& r  L4&& 4&&&L r##!+"&5!"&=463!46;2!28(`8((8`(88(8((8(8 (8`(88(8((8(88(`8#!"&=463!28(@(88((8 (88((88z5'%+"&5&/&67-.?>46;2%6.@g.L44L.g@. .@g. L44L .g@.g.n.4LL43.n.gg.n.34LL4͙.n.g -  $54&+";264'&+";26/a^    ^aa fm  @ J%55!;263'&#"$4&#"32+#!"&5#"&5463!"&46327632#!2$$8~+(888(+}(`8((8`]]k==k]]8,8e8P88P8`(88(@MMO4&#"327>76$32#"'.#"#".'.54>54&'&54>7>7>32&z&^&./+>*>J> Wm7' '"''? 
&4&c&^|h_bml/J@L@ #M6:D 35sҟw$ '% ' \t3#!"&=463!2'.54>54''  @ 1O``O1CZZ71O``O1BZZ7@  @ N]SHH[3`)TtbN]SHH[3^)Tt!1&' 547 $4&#"2654632 '&476 ==嘅}(zVl''ٌ@uhyyhu9(}VzD##D# =CU%7.5474&#"2654632%#"'&547.'&476!27632#76$7&'7+NWb=嘧}(zVi\j1  z,X Y[6 $!%'FuJiys?_9ɍ?kyhun(}Vz YF  KA؉La  02-F"@Qsp@_!3%54&+";264'&+";26#!"&'&7>2    #%;"";%#`,@L 5 `   `  L`4LH` `   a 5 L@ #37;?Os!!!!%!!!!%!!!!!!!!%!!4&+";26!!%!!!!74&+";26%#!"&546;546;2!546;232 `@ `@ @@ @ @  @  @  @  @ L44LL4^B@B^^B@B^4L  @@@@    @@   @@    M4LL44L`B^^B``B^^B`L7q.+"&=46;2#"&=".'673!54632#"&=!"+"&=46;2>767>3!546327>7&54>$32dFK1A  0) L.٫C58.H(Ye#3C $=463!22>=463!2#!"&5463!2#!"&5463!2H&&/7#"&463!2!2KjKKjKjKKj &&&%&& &5jKKjKKjKKjK%z 0&4&&3D7&4& %&#!"&5463!2!2\@\\@\\@\\\\ W*#!"&547>3!2!"4&5463!2!2W+B"5P+B@"5^=\@\ \H#t3G#3G:_Ht\\ @+32"'&46;#"&4762&&4&&44&&44&&4@"&=!"'&4762!54624&&44&&44&&4&& /!!!!4&#!"3!26#!"&5463!2  @ ^BB^^B@B^  @ @B^^BB^^0@67&#".'&'#"'#"'32>54'6#!"&5463!28ADAE=\W{O[/5dI kDtpČe1?*w@www (M& B{Wta28r=Ku?RZ^GwT -@www#7#546;5#"#3!#!"&5463!28nw@wwwjm1'ې{@www#'.>4&#"26546326"&462!5!&  !5!!=!!%#!"&5463!2B^8(Ԗ>@|K55KK55K^B(8ԖԖ€>v5KK55KKHG4&"&#"2654'32#".'#"'#"&54$327.54632@pp)*Pppp)*Pb '"+`N*(a;2̓c`." b PTY9ppP*)pppP*)b ".`(*Nͣ2ͣ`+"' b MRZB4&"24&"264&"26#"/+"&/&'#"'&547>7&/.=46?67&'&547>3267676;27632#"&'"'#"'&547&'&=4767&547>32626?2#"&'"'#"'&547&'&=4767&547>32626?2ԖLhLKjKLhLKjK "8w s%(  ")v  >  "8x s"+  ")v  <  3zLLz3 3>8L3)x3 3zLLz3 3>8L3)x3 ԖԖ4LL45KK54LL45KK #)0C wZ l/ Y N,& #)0C vZl. Y L0"qG^^Gqq$ ]G)FqqG^^Gqq$ ]G)Fq%O#"'#"&'&4>7>7.546$ '&'&'# '32$7>54'VZ|$2 $ |E~E<| $ 2$|ZV:(t}X(  &%(Hw쉉xH(%& (XZT\MKG<m$4&"24&#!4654&#+32;254'>4'654&'>7+"&'&#!"&5463!6767>763232&4&&4N2`@`%)7&,$)' %/0Ӄy#5 +1 &<$]`{t5KK5$e:1&+'3TF0h4&&4&3M:;b^v+D2 5#$IIJ 2E=\$YJ!$MCeM-+(K55KK5y*%Au]c=p4&"24&'>54'64&'654&+"+322654&5!267+#"'.'&'&'!"&5463!27>;2&4&&4+ 5#bW0/% ')$,&7)%`@``2Nh0##T3'"( 0;e$5KK5 tip<& 1&4&&4&#\=E2 JIURI$#5 2D+v^b;:M2gc]vDEA%!bSV2MK55K(,,MeCM$!J@#"&547&547%6@?V8 b% I)94.""'." 67"'.54632>32+C`\hxeH>Hexh\`C+ED4 #LoP$$Po>Q|I.3MCCM3.I|Q/Z$_dC+I@$$@I+ (@%#!"&5463!2#!"3!:"&5!"&5463!462 ww@  B^^B  4&@&&&4 `  ww   ^B@B^ 24& && &%573#7.";2634&#"35#347>32#!"&5463!2FtIG9;HIxI<,tԩw@wwwz4DD43EEueB&#1s@www .4&"26#!+"'!"&5463"&463!2#2&S3 Ll&c4LL44LL4c@& &{LhLLhL'?#!"&5463!2#!"3!26546;2"/"/&47'&463!2www@B^^B@B^@&4t  r &&`ww@w@^BB^^B@R&t r  4&&@"&5!"&5463!462 #!"&54&>3!2654&#!*.54&>3!24&@&&&4 sw  @B^^B  @w4& && &3@w   ^BB^    I&5!%5!>732#!"&=4632654&'&'.=463!5463!2!2JJSq*5&=CKuuKC=&5*q͍S8( ^B@B^ (8`N`Ѣ΀GtO6)"M36J[E@@E[J63M")6OtG(8`B^^B`8%-3%'&76'&76''&76'&76'&6#5436&76+".=4'>54'6'&&"."&'./"?+"&5463!2  2  5    z<: Ʃw 49[aA)O%-j'&]]5r,%O)@a[9( 0BA; + >HCwww  5 /)  u    @wa-6OUyU[q ( - q[UyUP6$C +) (  8&/ &ww'?$4&"2$4&"2#!"&5463!3!267!2#!#!"&5!"'&762&4&&4&&4&&48(@(88(c==c(8*&&*6&4&&4&&4&&4& (88(@(88HH88`(@&&('@1d4&'.54654'&#"#"&#"32632327>7#"&#"#"&54654&54>76763232632   N<;+gC8A`1a99gw|98aIe$IVNz<:LQJ  ,-[% 061I()W,$-7,oIX()oζA;=N0 eTZ  (O#".'&'&'&'.54767>3232>32 e^\3@P bMO0# 382W# & 9C9 Lĉ" 82<*9FF(W283 #0OMb P@3\^e FF9*<28 "L 9C9 & #!"3!2654&#!"&5463!2`B^^B@B^^ީwww@w^BB^^B@B^ww@w#!72#"' #"'.546763YY !''!0#GG$/!''!&UUjZ 8""8  X! 8" "8 EU4'./.#"#".'.'.54>54.'.#"32676#!"&5463!2G55 :8 c7 )1)  05.D <90)$9w@wwwW + AB 7c  )$+ -.1 9$)0< D.59@www,T1# '327.'327.=.547&54632676TC_LҬ#+i!+*pDNBN,y[`m`%i]hbEm}a u&,SXK &$f9s? 
!#!#3546;#"'/8 "# R&=4'>54'6'&&"."&'./"?'&54$ 49[aA)O%-j'&]]5r,%O)@a[9( 0BA; + >HCaaoMa-6OUyU[q ( - q[UyUP6$C +) (  8&/ &fMa%+"&54&"32#!"&5463!54 &@&Ԗ`(88(@(88(r&&jj8((88(@(8#'+2#!"&5463"!54&#265!375!35!B^^BB^^B   `^B@B^^BB^  ` !="&462+"&'&'.=476;+"&'&$'.=476; pppp$!$qr % }#ߺppp!E$ rqܢ# % ֻ!)?"&462"&4624&#!"3!26!.#!"#!"&547>3!2/B//B//B//B @   2^B@B^\77\aB//B//B//B/@    ~B^^B@2^5BB52.42##%&'.67#"&=463! 25KK5L4_u:B&1/&.- zB^^B4LvyKjK4L[!^k'!A3;):2*54&#"+323254'>4'654&'!267+#"'&#!"&5463!2>767>32!2&4&&4N2$YGB (HGEG HQ#5K4Li!<;5KK5 A# ("/?&}vh4&&4&3M95S+C=,@QQ9@@IJ 2E=L5i>9eME;K55K J7R>@#zD<7?s%3#".'.'&'&'.#"!"3!32>$4&"2#!"#"&?&547&'#"&5463!&546323!2` #A<(H(GY$2NL4K5#aWTƾh&4&&4K5;=!ihv}&?/"( #A  5K2*!Q@.'!&=C+S59M34L=E2 JI UR@@&4&&4&5K;ELf9>igR7J K5h4&"24#"."&#"4&#"".#"!54>7#!"&54.'&'.5463246326326&4&&4IJ 2E=L43M95S+C=,@QQ9@@E;K55K J7R>@#zD9eMZ4&&4&<#5K4LN2$YGB (HGEG HV;5KK5 A# ("/?&}vhi!<4<p4.=!32>332653272673264&"2/#"'#"&5#"&54>767>5463!2@@2*! Q@.'!&=C+S59M34L.9E2 JI UR&4&&4&Lf6Aig6Jy#@>R7J K55K;E@TƾH #A<(H(GY$2NL4K#5#a=4&&4&D=ihv}&?/"( #A  5KK5;+54&#!764/&"2?64/!26 $$ & [6[[j6[&^aa@&4[[6[[6&+^aa+4/&"!"3!277$ $$ [6[ &&[6j[ ^aae6[j[6&&4[j[^aa+4''&"2?;2652?$ $$ [6[[6&&4[^aaf6j[[6[ &&[^aa+4/&"4&+"'&"2? $$ [6&&4[j[6[j^aad6[&& [6[[j^aa   $2>767676&67>?&'4&'.'.'."#&6'&6&'3.'.&'&'&&'&6'&>567>#7>7636''&'&&'.'"6&'6'..'/"&'&76.'7>767&.'"76.7"7"#76'&'.'2#22676767765'4.6326&'.'&'"'>7>&&'.54>'>7>67&'&#674&7767>&/45'.67>76'27".#6'>776'>7647>?6#76'6&'676'&67.'&'6.'.#&'.&6'&.5/a^D&"      4   $!   #          .0"Y +  !       $     "  +       Α      ^aa                        P   ' -( # * $  "  !     * !   (         $      2 ~/$4&"2 #"/&547#"32>32&4&&4V%54'j&&'/덹:,{ &4&&4&V%%l$65&b'Cr! " k[G +;%!5!!5!!5!#!"&5463!2#!"&5463!2#!"&5463!2&&&&&&&&&&&&@&&&&&&&&&&&&{#"'&5&763!2{' **)*)'/!5!#!"&5!3!26=#!5!463!5463!2!2^B@B^&@&`^B`8(@(8`B^ B^^B&&B^(88(^G 76#!"'&? #!"&5476 #"'&5463!2 '&763!2#"'c)'&@**@&('c (&*cc*&' *@&('c'(&*cc*&('c'(&@*19AS[#"&532327#!"&54>322>32"&462 &6 +&'654'32>32"&462QgRp|Kx;CByy 6Fe= BPPB =eF6 ԖV>!pRgQBC;xK|Ԗ{QNa*+%xx5eud_C(+5++5+(C_due2ԖԖ>NQ{u%+*jԖԖp!Ci4/&#"#".'32?64/&#"327.546326#"/&547'#"/&4?632632(* 8( !)(A(')* 8( !USxySSXXVzxTTUSxySSXXVzxT@(  (8 *(('( (8 SSUSx{VXXTTSSUSx{VXXT#!"5467&5432632t,Ԟ;F`j)6,>jK?s !%#!"&7#"&463!2+!'5#8EjjE8@&&&&@XYY&4&&4&qDS%q%N\jx2"&4#"'#"'&7>76326?'&'#"'.'&676326326&'&#"32>'&#"3254?''74&&4&l NnbSVZ bRSD zz DSRb)+USbn \.2Q\dJ'.2Q\dJ.Q2.'Jd\Q2.'Jd`!O` ` &4&&4r$#@B10M5TNT{L5T II T5L;l'OT4M01B@#$*3;$*3;;3*$;3*$: $/ @@Qq`@"%3<2#!"&5!"&5467>3!263! 
!!#!!46!#!(88(@(8(8(`((8D<++<8(`(8(`8(@(88( 8((`(8((<`(8(``(8||?%#"'&54632#"'&#"32654'&#"#"'&54632|udqܟs] = OfjL?R@T?"& > f?rRX=Edudsq = _MjiL?T@R?E& f > =XRr?b!1E)!34&'.##!"&5#3463!24&+";26#!"&5463!2 08((88(@(8  8((88((`(1  `(88((88(@  `(88(@(8(`#!"&5463!2w@www`@www/%#!"&=463!2#!"&=463!2#!"&=463!2&&&&&&&&&&&&&&&&&&&&&&&&@'7G$"&462"&462#!"&=463!2"&462#!"&=463!2#!"&=463!2ppppppp @   ppp @    @   Рpppppp  ppp    <L\l|#"'732654'>75"##5!!&54>54&#"'>3235#!"&=463!2!5346=#'73#!"&=463!2#!"&=463!2}mQjB919+i1$AjM_3</BB/.#U_:IdDRE @  k*Gj @   @   TP\BX-@8 C)5Xs J@$3T4+,:;39SG2S.7<  vcc)( %Ll}    5e2#!"&=463%&'&5476!2/&'&#"!#"/&'&=4'&?5732767654'&@02uBo  T25XzrDCBBEh:%)0%HPIP{rQ9f#-+>;I@KM-/Q"@@@#-a[ $&P{<8[;:XICC>.'5oe71#.0(  l0&%,"J&9%$<=DTIcs&/6323276727#"327676767654./&'&'737#"'&'&'&54'&54&#!"3!260% <4"VRt8<@< -#=XYhW8+0$"+dTLx-'I&JKkmuw<=V@!X@ v '|N;!/!$8:IObV;C#V  &   ( mL.A:9 !./KLwPM$@@ /?O_o%54&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!2654&#!"3!26#!"&5463!2@@@@@@@@@^BB^^B@B^NB^^B@B^^#+3 '$"/&4762%/?/?/?/?%k*66bbbb|<<<bbbbbbbb%k66Ƒbbb<<<<^bbbbbb@M$4&"2!#"4&"2&#"&5!"&5#".54634&>?>;5463!2LhLLh LhLLhL! 'ԖԖ@' !&  ?&&LhLLhL hLLhL jjjj &@6/" &&J#"'676732>54.#"7>76'&54632#"&7>54&#"&54$ ok; -j=yhwi[+PM 3ѩk=J%62>VcaaQ^ ]G"'9r~:`}Ch 0=Z٤W=#uY2BrUI1^Fk[|aL2#!67673254.#"67676'&54632#"&7>54&#"#"&5463ww+U ,iXբW<"uW1AqSH1bdww"3g!"&'>32 327#".54632%#!654.54>4&'.'37!"463!2!#!!3 _Znh7 1-$ g &Wa3\@0g]Bj> ҩw,',CMC,.BA.51 KL~w9&!q[-A"" ""$!'JNv=Cdy4Shh/`R~ wITBqIE2;$@;Ft.  @M_~w`-co%4.'&#"32>4.#"326!#!".547>7&54>7#"&54676!#!5!3l $-1!6hpT6Gs~@;k^7x!=kB]f0@\3aWGN.BB.!5@@5!;y{^<% L@ (վ^lG'!$"" "$8^2"&5!#2!46#!"&5463!2rM* *M~~M**M~~M*jjj&&&&`P%挐|NN||NN|*jjjj@&&&&@ "'&463!2@4@&Z4@4&@ #!"&4762&&4Z4&&4@@ "'&4762&4@4&@&4&@ "&5462@@4&&44@&&@ 3!!%!!26#!"&5463!2`m` ^BB^^B@B^  `@B^^BB^^@ "'&463!2#!"&4762@4@&&&&44@4&Z4&&4@ "'&463!2@4@&4@4&@ #!"&4762&&4Z4&&4@:#!"&5;2>76%6+".'&$'.5463!2^B@B^,9j9Gv33vG9H9+bI\ A+=66=+A [">nSMA_:B^^B1&c*/11/*{'VO3@/$$/@*?Nh^l+!+"&5462!4&#"!/!#>32]_gTRdgdQV?U I*Gg?!2IbbIJaaiwE3300 084#"$'&6?6332>4.#"#!"&54766$32z䜬m IwhQQhbF*@&('kz   _hQнQGB'(&*eoz(q!#"'&547"'#"'&54>7632&4762.547>32#".'632%k'45%&+~(  (h  &  \(  (  &  ~+54'k%5%l%%l$65+~  &  (  (\  &  h(  (~+%'!)19K4&"24&"26.676&$4&"24&"24&"2#!"'&46$ KjKKj KjKKje2.e<^P,bKjKKjKjKKj KjKKj##LlLKjKKjK jKKjK~-M7>7&54$ LhяW.{+9E=cQdFK1A  0) pJ2`[Q?l&٫C58.H(Y':d 6?32$64&$ #"'#"&'&4>7>7.546'&'&'# '32$7>54'Yj`a#",5NK ~EVZ|$2 $ |: $ 2$|ZV:(t}hfR88T h̲X(  &%(Hw(%& (XZT\MKG{x|!#"'.7#"'&7>3!2%632u  j H{(e 9 1bU#!"&546;5!32#!"&546;5!32#!"&546;5463!5#"&5463!2+!2328((88(``(88((88(``(88((88(`L4`(88(@(88(`4L`(8 (88(@(88((88(@(88((88(@(84L8(@(88((8L48OY"&546226562#"'.#"#"'.'."#"'.'.#"#"&5476$32&"5462И&4&NdN!>! 1X:Dx+  +ww+  +xD:X1 -U !*,*&4&hh&&2NN2D &  ..J< $$ 767#"&'"&547&547&547.'&54>2l4  2cKEooED ) ) Dg-;</- ?.P^P.? -/<;-gYY  .2 L4H|O--O|HeO , , Oeq1Ls26%%4.2,44,2.4%%62sL1qcqAAq4#!#"'&547632!2#"&=!"&=463!54632  @  `     ` ?`   @  @  !    54&+4&+"#"276#!"5467&5432632   `  _ v,Ԝ;G_j)``    _ ԟ7 ,>jL>54'&";;265326#!"5467&5432632    v,Ԝ;G_j) `   `7 ,>jL>X`$"&462#!"&54>72654&'547 7"2654'54622654'54&'46.' 
&6 &4&&4&yy %:hD:FppG9Fj 8P8 LhL 8P8 E; Dh:% >4&&4&}yyD~s[4Dd=PppP=d>hh>@jY*(88(*Y4LL4Y*(88(*YDw" A4*[s~>M4&"27 $=.54632>32#"' 65#"&4632632 65.5462&4&&4G9& <#5KK5!!5KK5#< &ܤ9Gpp&4&&4&@>buោؐ&$KjKnjjKjK$&jjb>Ppp %!5!#"&5463!!35463!2+32@\\8(@(8\@@\\@\(88(\ -4#"&54"3#!"&5!"&56467&5462P;U gI@L4@Ԗ@4L8P8° U;Ig04LjjL4(88(¥'@"4&+32!#!"&+#!"&5463!2pP@@Pjj@@\@\&0pj \\&-B+"&5.5462265462265462+"&5#"&5463!2G9L44L9G&4&&4&&4&&4&&4&L44L &=d4LL4 d=&&`&&&&`&&&&4LL4  &(/C#!"&=463!25#!"&=463!2!!"&5!!&'&'#!"&5463!2@@`(8x 8((88((`8(`@@@@8( 9 h(88(@(8(`/?O_o-=%+"&=46;25+"&=46;2+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2%+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2%+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2+"&=46;2!!!5463!2#!"&5463!2 @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @ &&&&@  @ @  @  @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @ @  @  @  @   `&&&& /?O_o%+"&=46;25+"&=46;2+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2%+"&=46;2+"&=46;2%+"&=46;2+"&=46;2!!#!"&=!!5463!24&+"#54&+";26=3;26%#!"&5463!463!2!2 @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @  @ 8(@(8 @  @  @  @  @ &&&@8((8@&@  @ @  @  @  @ @  @ @  @ @  @ @  @ @  @ @  @  @  @  (88(  @  ``   `` -&&& (88(&@<c$4&"2!#4&"254&+54&+"#";;26=326+"&5!"&5#"&46346?>;463!2KjKKjKjKKj&ԖԖ&&@&&KjKKjK jKKjK .&jjjj&4&@@&&#'1?I54&+54&+"#";;26=326!5!#"&5463!!35463!2+32 \\8(@(8\ \\@\(88(\: #32+53##'53535'575#5#5733#5;2+3@E&&`@@` `@@`&&E%@`@ @ @      @ :#@!3!57#"&5'7!7!K5@   @5K@@@ #3%4&+"!4&+";265!;26#!"&5463!2&&&&&&&&w@www&&@&&&&@&&@www#354&#!4&+"!"3!;265!26#!"&5463!2&&&&&@&&@&w@www@&@&&&&&&@&:@www-M3)$"'&4762 "'&4762 s 2  .   2 w 2  .   2 w 2    2  ww  2    2  ww M3)"/&47 &4?62"/&47 &4?62S .  2 w 2   .  2 w 2  M . 2    2 .  . 2    2 .M3S)$"' "/&4762"' "/&47623 2  ww  2    2  ww  2    2 w 2   .v 2 w 2   .M3s)"'&4?62 62"'&4?62 623 .  . 2    2 .  . 2    2 .   2 w 2v .   2 w 2-Ms3 "'&4762s w 2  .   2 ww  2    2 MS3"/&47 &4?62S .  2 w 2  M . 2    2 .M 3S"' "/&47623 2  ww  2   m 2 w 2   .M-3s"'&4?62 623 .  . 2    2- .   2 w 2/4&#!"3!26#!#!"&54>5!"&5463!2  @ ^B && B^^B@B^ @  MB^%Q= &&& $$ (r^aa(^aa!C#!"&54>;2+";2#!"&54>;2+";2pPPpQh@&&@j8(PppPPpQh@&&@j8(Pp@PppPhQ&&j (8pPPppPhQ&&j (8p!C+"&=46;26=4&+"&5463!2+"&=46;26=4&+"&5463!2Qh@&&@j8(PppPPpQh@&&@j8(PppPPp@hQ&&j (8pPPppP@hQ&&j (8pPPpp !)19A$#"&4632"&462"&462"&462"&462$"&462"&462"&462U;bqb&44&ɢ5"  #D7euU6 &4&m 1X".4>2".4>24&#""'&#";2>#".'&547&5472632>3=T==T==T==T=v)GG+v@bRRb@=&\Nj!>3lkik3hPTDDTPTDDTPTDDTPTDD|x xXK--K|Mp<# )>dA{RXtfOT# RNftWQ,%4&#!"&=4&#!"3!26#!"&5463!2!28(@(88((88((8\@\\@\\(88(@(88(@(88@\\\\ u'E4#!"3!2676%!54&#!"&=4&#!">#!"&5463!2!2325([5@(\&8((88((8,9.+C\\@\ \6Z]#+#,k(88(@(88(;5E>:5E\\\ \1. #3C++"&=#"&=46;546;2324&#!"3!26#!"&5463!2@@8(@(88((8]@]]]`@@r(88(@(88@\\]/2#!"&54634&#!"3!262#!"&=463]]@]] 8(@(88((8]@\\]`(88(@(88@@$4@"&'&676267>"&462"&462.  > $$ n%%/02 KjKKjKKjKKjKfff^aayy/PccP/jKKjKKjKKjKffff@^aa$4@&'."'.7>2"&462"&462.  > $$ n20/%7KjKKjKKjKKjKfff^aa3/PccP/y jKKjKKjKKjKffff@^aa +7#!"&463!2"&462"&462.  > $$ &&&&KjKKjKKjKKjKfff^aa4&&4&jKKjKKjKKjKffff@^aa#+3C54&+54&+"#";;26=3264&"24&"2$#"'##"3!2@@KjKKjKKjKKjKܒ,gjKKjKKjKKjKXԀ,, #/;GS_kw+"=4;27+"=4;2'+"=4;2#!"=43!2%+"=4;2'+"=4;2+"=4;2'+"=4;2+"=4;2+"=4;2+"=4;2+"=4;2+"=4;54;2!#!"&5463!2`````````````````````p`K55KK55Kp`````````````````````````5KK55KK@*V#"'.#"63232+"&5.5462#"/.#"#"'&547>32327676R?d^7ac77,9xm#@#KjK# ڗXF@Fp:f_ #WIpp&3z h[ 17q%q#::#5KKu't#!X: %#+=&>7p @ *2Fr56565'5&'. 
#"32325#"'+"&5.5462#"/.#"#"'&547>32327676@ͳ8 2.,#,fk*1x-!#@#KjK# ڗXF@Fp:f_ #WIpp&3z e`vo8t-  :5 [*#::#5KKu't#!X: %#+=&>7p  3$ "/&47 &4?62#!"&=463!2I.  2 w 2   -@). 2    2 . -@@-S$9%"'&4762  /.7> "/&47 &4?62i2  .   2 w E > u > .  2 w 2   2    2  ww !   h. 2    2 . ;#"'&476#"'&7'.'#"'&476' )'s "+5+@ա' )'F*4*Er4M:}}8 GO *4*~ (-/' #"'%#"&7&67%632B;>< V??V --C4 <B=cB5 !% %!b 7I))9I7 #"'.5!".67632y( #  ##@,( )8! !++"&=!"&5#"&=46;546;2!76232-SSS  SS``  K$4&"24&"24&"27"&5467.546267>5.5462 8P88P88P88P8P88P4,DS,4pp4,,4pp4,6d7AL*',4ppP88P8P88P8HP88P8`4Y&+(>EY4PppP4Y4Y4PppP4Y%*54&#"#"/.7!2<'G,')7N;2]=A+#H  0PRH6^;<T%-S#:/*@Z}   >h.%#!"&=46;#"&=463!232#!"&=463!2&&&@@&&&@&&&&&&&&&&&&f&&&&b#!"&=463!2#!"&'&63!2&&&&''%@% &&&&&&&&k"G%#/&'#!53#5!36?!#!'&54>54&#"'6763235 Ź}4NZN4;)3.i%Sin1KXL7觧* #& *@jC?.>!&1' \%Awc8^;:+54&#"'6763235 Ź}4NZN4;)3.i%PlnEcdJ觧* #& *-@jC?.>!&1' \%AwcBiC:D'P%! #!"&'&6763!2P &:&? &:&?5"K,)""K,)h#".#""#"&54>54&#"#"'./"'"5327654.54632326732>32YO)I-D%n  "h.=T#)#lQTv%.%P_ % %_P%.%vUPl#)#T=@/#,-91P+R[Ql#)#|'' 59%D-I)OY[R+P19-,##,-91P+R[YO)I-D%95%_P%.%v'3!2#!"&463!5&=462 =462 &546 &&&&&4&r&4&@&4&&4&G݀&&&&f s CK&=462 #"'32=462!2#!"&463!5&'"/&4762%4632e*&4&i76`al&4&&&&&}n  R   R zfOego&&5`3&&&4&&4& D R   R zv"!676"'.5463!2@@w^Cct~5  5~tcC&&@?JV|RIIR|V&&#G!!%4&+";26%4&+";26%#!"&546;546;2!546;232@@@@L44LL4^B@B^^B@B^4L  N4LL44L`B^^B``B^^B`LL4&"2%#"'%.5!#!"&54675#"#"'.7>7&5462!467%632&4&&4  @ o&&}c ;pG=(  8Ai8^^.   &4&&4&` ` fs&& jo/;J!# 2 KAE*,B^^B! ` $ -4&"2#"/&7#"/&767%676$!28P88PQr @ U @ {`PTP88P8P`  @U @rQ!6'&'&'&+!!!!2е sXVqQ @@vt %764' 64/&"2 $$ f3f4:4^aaf4334f:4:^aa %64'&" 2 $$ :4f3f4F^aa4f44f^aa 764'&"27 2 $$ f:4:f4334^aaf4:4f3^aa %64/&" &"2 $$ -f44f4^aa4f3f4:w^aa@7!!/#35%!'!%j/d jg2|855dc b @! !%!!7!FG)DH:&H dS)U4&"2#"/ $'#"'&5463!2#"&=46;5.546232+>7'&763!2&4&&4f ]wq4qw] `dC&&:FԖF:&&Cd`4&&4& ]] `d[}&&"uFjjFu"&&y}[d#2#!"&546;4 +"&54&" (88(@(88( r&@&Ԗ8((88(@(8@&&jj'3"&462&    .  > $$ Ԗ>aX,fff^aaԖԖa>TX,,~ffff@^aa/+"&=46;2+"&=46;2+"&=46;28((88((88((88((88((88((8 (88((88((88((88((88((88/+"&=46;2+"&=46;2+"&=46;28((88((88((88((88((88((8 (88((88(88((88(88((885E$4&"2%&'&;26%&.$'&;276#!"&5463!2KjKKj   f  \ w@wwwjKKjK"H   ܚ  f   @www   $64'&327/a^ ! ^aaJ@%% 65/ 64'&"2 "/64&"'&476227<ij6j6u%k%~8p8}%%%k%}8p8~%<@% %% !232"'&76;!"/&76  ($>( J &% $%64/&"'&"2#!"&5463!2ff4-4ff4fw@wwwf4f-f4@www/#5#5'&76 764/&"%#!"&5463!248` # \P\w@www4`8  #@  `\P\`@www)4&#!"273276#!"&5463!2& *f4 'w@www`&')4f*@www%5 64'&"3276'7>332#!"&5463!2`'(wƒa8! ,j.( &w@www`4`*'?_`ze<  bw4/*@www-.  
6 $$  (r^aaO(_^aa -"'&763!24&#!"3!26#!"&5463!2yB(( @   w@www]#@##   @ @www -#!"'&7624&#!"3!26#!"&5463!2y((@B@u @   w@www###@  @ @www -'&54764&#!"3!26#!"&5463!2@@####@w@wwwB((@@www`%#"'#"&=46;&7#"&=46;632/.#"!2#!!2#!32>?6#  !"'?_  BCbCaf\ + ~2   }0$  q 90r p r%D p u?#!"&=46;#"&=46;54632'.#"!2#!!546;2D a__ g *`-Uh1    ߫}   $^L  4b+"&=.'&?676032654.'.5467546;2'.#"ǟ B{PDg q%%Q{%P46'-N/B).ĝ 9kC< Q 7>W*_x*%K./58`7E%_ ,-3  cVO2")#,)9;J) "!* #VD,'#/&>AX>++"''&=46;267!"&=463!&+"&=463!2+32Ԫ$   pU9ӑ @/*f o  VRfq f=SE!#"&5!"&=463!5!"&=46;&76;2>76;232#!!2#![       % )   "  Jg Uh BW&WX hU g L\+"&5##"/&67>7> 7!"&=463!2+;26=46;2#!"&=463!2$=5R9[/*G :!3' &&@` fvQJ+-    (->K\rB&&@ n#467!!3'##467!++"'#+"&'#"&=46;'#"&=46;&76;2!6;2!6;232+32QKt# #FNQo!"դѧ !mY Zga~bm] [o"U+, @h h@@X hh @83H\#5"'#"&+73273&#&+5275363534."#22>4.#2>ut 3NtRP*Ho2 Lo@!R(Ozh=,GID2F 8PuE>.'%&TeQ,jm{+>R{?jJrL6V @`7>wmR1q uWei/rr :Vr" $7V4&#"326#"'&76;46;232!5346=#'73#"'&'73267##"&54632BX;4>ID2F +>R{8PuE>.'%&TeQ,jm{?jJrL6 @`rr :Vr3>wmR1q uWei@ \%4&#"326#!"&5463!2+".'&'.5467>767>7>7632!2&%%&&&& &7.' :@$LBWM{#&$h1D!  .I/! Nr&&%%&&&&V?, L=8=9%pEL+%%r@W!<%*',<2(<&L,"r@ \#"&546324&#!"3!26%#!#"'.'.'&'.'.546767>;&%%&&&& &i7qN !/I.  !D1h$&#{MWBL$@: '.&&%%&&&&=XNr%(M&<(2<,'*%<!W@r%%+LEp%9=8=L  +=\d%54#"327354"%###5#5#"'&53327#"'#3632#"'&=4762#3274645"=424'.'&!  7>76#'#3%54'&#"32763##"'&5#327#!"&5463!2BBPJNC'%! B? )#!CC $)  54f"@@ B+,A  A+&+A  ZK35N # J!1331CCC $)w@www2"33FYF~(-&"o4*)$(* (&;;&&:LA3  8334S,;;,WT+<<+T;(\g7x:&&::&&<r%-@www  +=[c}#"'632#542%35!33!3##"'&5#327%54'&#"5#353276%5##"=354'&#"32767654"2 '.'&547>76 3#&'&'3#"'&=47632%#5#"'&53327''RZZ:kid YYY .06 62+YY-06 R[!.'CD''EH$VVX::Y X;:Y fyd/%jG%EC&&CE%O[52. [$C-D..D^^* ly1%=^I86i077S 3 $EWgO%33%OO%35 EEFWt;PP;pt;PP;pqJgTFQ%33&PP%33%R 7>%3!+}{'+"&72'&76;2+"'66;2U &  ( P *'eJ."-dZ-n -'74'&+";27&+";276'56#!"&5463!2~} 7e  ۩w@www"  $Q #'!# @www/4'&327$ '.'.4>7>76 "!!jG~GkjGGk[J@&& @lAIddIAllAIddIA@ '5557 ,VWQV.RW=?l%l`~0~#%5!'#3! %% %=#y ?R'UaM|qByy[C#jXAAҷhUHG/?%##"547#3!264&#"3254&+";267#!"&5463!2R܂#-$䵀((((tQQttQvQtn?D~|D?x##))((QttQvQtt2#!"&54634&"2$4&"2ww@ww||||||w@www||||||| !3 37! $$ n6^55^h ^aaM1^aaP *Cg'.676.7>.'$7>&'.'&'? 7%&'.'.'>767$/u5'&$I7ob?K\[zH,1+.@\7':Yi4&67&'&676'.'>7646&' '7>6'&'&7>7#!"&5463!2PR$++'TJXj7-FC',,&C ."!$28 h /" +p^&+3$ i0(w@www+.i6=Bn \C1XR:#"'jj 8Q.cAj57!? "0D$4" P[ & 2@wwwN#3!!327#"'&'&'&5#567676l '2CusfLM`iQN<:[@@''|v$%L02366k67MN#3%5#"'&'&5!5!#33276#!"&5463!2cXV3%  10D*+=>NC>9w@www8c'#Z99*(lN+*$% @www@#"'&76;46;23   &  ++"&5#"&7632  ^  c  & @#!'&5476!2 &  ^  b '&=!"&=463!546  &    q&8#"'&#"#"5476323276326767q'T1[VA=QQ3qpHih"-bfGw^44O#A?66%CKJA}} !"䒐""A$@C3^q|z=KK?6 lk)  %!%!VVuuu^-m5w}n~7M[264&"264&"2"&546+"&=##"&5'#"&5!467'&766276#"&54632    *<;V<<O@-K<&4'>&4.'.'.'.'.'&6&'.'.6767645.'#.'6&'&7676"&'&627>76'&7>'&'&'&'&766'.7>7676>76&6763>6&'&232.'.6'4.?4.'&#>7626'.'&#"'.'.'&676.67>7>5'&7>.'&'&'&7>7>767&'&67636'.'&67>7>.'.67 \ U7  J#!W! '  " ';%  k )"    '   /7*   I ,6 *&"!   O6* O $.( *.'  .x,  $CN      * 8   7%&&_f& ",VL,G$3@@$+ "  V5 3"  ""#dA++ y0D- %&n 4P'A5j$9E#"c7Y 6" & 8Z(;=I50 ' !!e  R   "+0n?t(-z.'< >R$A"24B@( ~ 9B9, *$        < > ?0D9f?Ae  .(;1.D 4H&.Ct iY% *  7      J  <    W 0%$  ""I! 
*  D  ,4A'4J" .0f6D4pZ{+*D_wqi;W1G("% %T7F}AG!1#%  JG 3  '.2>Vb%&#'32&'!>?>'&' &>"6&#">&'>26 $$ *b6~#= XP2{&%gx| .W)oOLOsEzG< CK}E $MFD<5+ z^aa$MWM 1>]|YY^D եA<KmE6<" @9I5*^aa>^4./.543232654.#"#".#"32>#"'#"$&547&54632632':XM1h*+D($,/9p`DoC&JV*55K55K55q*)y(;:*h )k5=x*& *x?/%4&#!"3!264&#!"3!26#!"&5463!2  &&&&&&&&19#"'#++"&5#"&5475##"&54763!2"&4628(3- &B..B& -3(8IggI`(8+Ue&.BB.&+8(kk`%-"&5#"&5#"&5#"&5463!2"&4628P8@B\B@B\B@8P8pPPp@`(88(`p.BB.0.BB.(88(Pppͺ!%>&'&#"'.$ $$ ^/(V=$<;$=V).X^aaJ`"(("`J^aa,I4."2>%'%"/'&5%&'&?'&767%476762%6[՛[[՛o ܴ   $ $ " $ $  ՛[[՛[[5` ^ ^ 2` `2 ^ ^ ` 1%#"$54732$%#"$&546$76327668ʴhf킐&^zs,!V[vn) 6<ׂf{z}))Ns3(@ +4&#!"3!2#!"&5463!2#!"&5463!2@&&&f&&&&@&&&&4&&4&@&&&&&&&& `BH+"/##"./#"'.?&5#"&46;'&462!76232!46 `&C6@Bb03eI;:&&&4L4&F Z4&w4) '' 5r&4&&4&&4}G3#&/.#./.'&4?63%27>'./&'&7676>767>?>%6})N @2*&@P9A #sGq] #lh<* 46+(  < 5R5"*>%"/ +[>hy  K !/Ui%6&'&676&'&6'.7>%.$76$% $.5476$6?62'.76&&'&676%.76&'..676#"NDQt -okQ//jo_  %&JՂYJA-.-- 9\DtT+X?*<UW3' 26$>>W0 {"F!"E    ^f`$"_]\<`F`FDh>CwlsJ@ ;=?s  :i_^{8+?` ) O`s2RDE58/K` &1:%#"'>7&54&5#"'>71654'6&5%zxb(zxbACC=ggF0ɖF(U!,CC=ggFÜ ɖF(U f5B_< <<pU3U3]yn2@ z5u@55 z55@,s@@(@@- MM- MM @@ -`b $ 648""""""@N@ ,@ PBp<$H<TfT H R , D x 6 \ DLX*Dx8JN2f$P`"VtLv$~*h 6 n !&!v!!""p"#&##$8$%%f& &&'`''(*((()")X)* *B*+,n,-z..:../@//0D01~12l233R344>4458566V67"78P89|9::b:=>>l>>?R?@l@@ABBxBCCDCDZDEFrFGDGHHIFIIIIJJLJJJKK\KLJLM*MMNhNOFOPPjPQDQQR2RjRS2TVVVWLWxWXX\XXY@YjYYYZ0Z~Z[[6[[\V\t\]6]x]^@^^_d_`$aab(bhbc2cccdle?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopq rstuvwxyz{|}~     " !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvuni00A0uni2000uni2001uni2002uni2003uni2004uni2005uni2006uni2007uni2008uni2009uni200Auni202Funi205FuniE000glassmusicsearchenvelopeheartstar star_emptyuserfilmth_largethth_listokremovezoom_inzoom_outoffsignalcogtrashhomefile_alttimeroad download_altdownloaduploadinbox play_circlerepeatrefreshlist_altlockflag headphones volume_off volume_down volume_upqrcodebarcodetagtagsbookbookmarkprintcamerafontbolditalic text_height text_width align_left align_center align_right align_justifylist indent_left indent_rightfacetime_videopicturepencil map_markeradjusttinteditsharecheckmove step_backward fast_backwardbackwardplaypausestopforward fast_forward step_forwardeject chevron_left chevron_right plus_sign minus_sign remove_signok_sign question_sign info_sign screenshot remove_circle ok_circle ban_circle arrow_left arrow_rightarrow_up arrow_down share_alt resize_full resize_smallexclamation_signgiftleaffireeye_open eye_close warning_signplanecalendarrandomcommentmagnet chevron_up chevron_downretweet shopping_cart folder_close folder_openresize_verticalresize_horizontal bar_chart twitter_sign facebook_sign camera_retrokeycogscomments thumbs_up_altthumbs_down_alt star_half heart_emptysignout linkedin_signpushpin external_linksignintrophy github_sign upload_altlemonphone check_emptybookmark_empty phone_signtwitterfacebookgithubunlock credit_cardrsshddbullhornbell certificate hand_right hand_lefthand_up hand_downcircle_arrow_leftcircle_arrow_rightcircle_arrow_upcircle_arrow_downglobewrenchtasksfilter briefcase fullscreengrouplinkcloudbeakercutcopy paper_clipsave sign_blankreorderulol strikethrough underlinetablemagictruck pinterestpinterest_signgoogle_plus_sign google_plusmoney caret_downcaret_up caret_left caret_rightcolumnssort sort_downsort_up envelope_altlinkedinundolegal dashboard comment_alt comments_altboltsitemapumbrellapaste light_bulbexchangecloud_download cloud_uploaduser_md 
stethoscopesuitcasebell_altcoffeefood file_text_altbuildinghospital ambulancemedkit fighter_jetbeerh_signf0fedouble_angle_leftdouble_angle_rightdouble_angle_updouble_angle_down angle_left angle_rightangle_up angle_downdesktoplaptoptablet mobile_phone circle_blank quote_left quote_rightspinnercirclereply github_altfolder_close_altfolder_open_alt expand_alt collapse_altsmilefrownmehgamepadkeyboardflag_altflag_checkeredterminalcode reply_allstar_half_emptylocation_arrowcrop code_forkunlink_279 exclamation superscript subscript_283 puzzle_piece microphonemicrophone_offshieldcalendar_emptyfire_extinguisherrocketmaxcdnchevron_sign_leftchevron_sign_rightchevron_sign_upchevron_sign_downhtml5css3anchor unlock_altbullseyeellipsis_horizontalellipsis_vertical_303 play_signticketminus_sign_alt check_minuslevel_up level_down check_sign edit_sign_312 share_signcompasscollapse collapse_top_317eurgbpusdinrjpycnykrwbtcfile file_textsort_by_alphabet_329sort_by_attributessort_by_attributes_alt sort_by_ordersort_by_order_alt_334_335 youtube_signyoutubexing xing_sign youtube_playdropbox stackexchange instagramflickradnf171bitbucket_signtumblr tumblr_signlong_arrow_down long_arrow_uplong_arrow_leftlong_arrow_rightwindowsandroidlinuxdribbleskype foursquaretrellofemalemalegittipsun_366archivebugvkweiborenren_372_373_374QQansible-1.5.4/docsite/_themes/srtd/breadcrumbs.html0000664000000000000000000000070312316627017021054 0ustar rootroot
<ul class="wy-breadcrumbs">
  <li><a href="{{ pathto(master_doc) }}">Docs</a> &raquo;</li>
  <li>{{ title }}</li>
  {% if not pagename.endswith('_module') and (not 'list_of' in pagename) and (not 'category' in pagename) %}
  <li class="wy-breadcrumbs-aside">
    <a href="https://github.com/ansible/ansible/edit/devel/docsite/rst/{{ pagename }}.rst"> Edit on GitHub</a>
  </li>
  {% endif %}
</ul>
<hr/>
ansible-1.5.4/docsite/_themes/srtd/theme.conf0000664000000000000000000000014712316627017017650 0ustar rootroot[theme] inherit = basic stylesheet = css/theme.min.css [options] typekit_id = hiw1hhg analytics_id = ansible-1.5.4/docsite/_themes/srtd/search.html0000664000000000000000000000277512316627017020043 0ustar rootroot{# basic/search.html ~~~~~~~~~~~~~~~~~ Template for the search page. :copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- extends "layout.html" %} {% set title = _('Search') %} {% set script_files = script_files + ['_static/searchtools.js'] %} {% block extrahead %} {# this is used when loading the search index using $.ajax fails, such as on Chrome for documents on localhost #} {{ super() }} {% endblock %} {% block body %} {% if search_performed %}

  <h2>{{ _('Search Results') }}</h2>
  {% if not search_results %}
  <p>{{ _('Your search did not match any documents. Please make sure that all words are spelled correctly and that you\'ve selected enough categories.') }}</p>
  {% endif %}
{% endif %}
<div id="search-results">
{% if search_results %}
  <ul class="search">
  {% for href, caption, context in search_results %}
    <li><a href="{{ pathto(href) }}">{{ caption }}</a>
      <div class="context">{{ context|e }}</div>
    </li>
  {% endfor %}
  </ul>
{% endif %}
</div>
{% endblock %} ansible-1.5.4/docsite/_themes/srtd/searchbox.html0000664000000000000000000000426312316627017020546 0ustar rootroot
ansible-1.5.4/docsite/_themes/srtd/footer.html0000664000000000000000000000240212316627017020057 0ustar rootroot
{% if next or prev %} {% endif %}

© Copyright 2014 Ansible, Inc. {%- if last_updated %} {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %} {%- endif %}

Ansible docs are generated from GitHub sources using Sphinx, with a theme provided by Read the Docs. {% if pagename.endswith("_module") %} Module documentation is not edited directly, but is generated from the source code for the modules. To submit an update to module docs, edit the 'DOCUMENTATION' metadata in the module source tree. {% endif %}
ansible-1.5.4/docsite/_themes/srtd/__init__.py0000664000000000000000000000056312316627017020012 0ustar rootroot"""Sphinx ReadTheDocs theme. From https://github.com/ryan-roemer/sphinx-bootstrap-theme. """ import os VERSION = (0, 1, 5) __version__ = ".".join(str(v) for v in VERSION) __version_full__ = __version__ def get_html_theme_path(): """Return list of HTML theme paths.""" cur_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) return cur_dir ansible-1.5.4/docsite/_themes/srtd/layout.html0000664000000000000000000001635512316627017020112 0ustar rootroot{# TEMPLATE VAR SETTINGS #} {%- set url_root = pathto('', 1) %} {%- if url_root == '#' %}{% set url_root = '' %}{% endif %} {%- if not embedded and docstitle %} {%- set titlesuffix = " — "|safe + docstitle|e %} {%- else %} {%- set titlesuffix = "" %} {%- endif %} {% block htmltitle %} {{ title|striptags|e }}{{ titlesuffix }} {% endblock %} {# FAVICON #} {% if favicon %} {% endif %} {# CSS #} {# JS #} {% if not embedded %} {%- for scriptfile in script_files %} {%- endfor %} {% if use_opensearch %} {% endif %} {% endif %} {# RTD hosts these file themselves, so just load on non RTD builds #} {% if not READTHEDOCS %} {% endif %} {% for cssfile in css_files %} {% endfor %} {%- block linktags %} {%- if hasdoc('about') %} {%- endif %} {%- if hasdoc('genindex') %} {%- endif %} {%- if hasdoc('search') %} {%- endif %} {%- if hasdoc('copyright') %} {%- endif %} {%- if parents %} {%- endif %} {%- if next %} {%- endif %} {%- if prev %} {%- endif %} {%- endblock %} {%- block extrahead %} {% endblock %}
{# SIDE NAV, TOGGLES ON MOBILE #}
{# MOBILE NAV, TRIGGLES SIDE NAV ON TOGGLE #} {# PAGE CONTENT #}
{% include "breadcrumbs.html" %}
{% block body %}{% endblock %}
{% include "footer.html" %}
{% include "versions.html" %} ansible-1.5.4/docsite/README.md0000664000000000000000000000253412316627017014560 0ustar rootrootHomepage and documentation source for Ansible ============================================= This project hosts the source behind [docs.ansible.com](http://docs.ansible.com/) Contributions to the documentation are welcome. To make changes, submit a pull request that changes the reStructuredText files in the "rst/" directory only, and Michael can do a docs build and push the static files. If you wish to verify output from the markup such as link references, you may install sphinx and build the documentation by running `make viewdocs` from the `ansible/docsite` directory. To include module documentation you'll need to run `make webdocs` at the top level of the repository. The generated html files are in docsite/htmlout/. If you do not want to learn the reStructuredText format, you can also [file issues] about documentation problems on the Ansible GitHub project. Note that module documentation can actually be [generated from a DOCUMENTATION docstring][module-docs] in the modules directory, so corrections to modules written as such need to be made in the module source, rather than in docsite source. To install sphinx and the required theme, install pip and then "pip install sphinx sphinx_rtd_theme" [file issues]: https://github.com/ansible/ansible/issues [module-docs]: http://docs.ansible.com/developing_modules.html#documenting-your-module ansible-1.5.4/docsite/_static/0000775000000000000000000000000012316665545014733 5ustar rootrootansible-1.5.4/docsite/_static/ansible-local.css0000664000000000000000000000015212316627017020140 0ustar rootroot/* Local CSS tweaks for ansible */ .dropdown-menu { overflow-y: auto; } h2 { padding-top: 40px; }ansible-1.5.4/docsite/_static/minus.png0000664000000000000000000000030712316627017016564 0ustar rootrootPNG  IHDR &q pHYs  tIME <8tEXtComment̖RIDATc= 0 && !jQuery(node.parentNode).hasClass(className)) { var span = document.createElement("span"); span.className = className; span.appendChild(document.createTextNode(val.substr(pos, text.length))); node.parentNode.insertBefore(span, node.parentNode.insertBefore( document.createTextNode(val.substr(pos + text.length)), node.nextSibling)); node.nodeValue = val.substr(0, pos); } } else if (!jQuery(node).is("button, select, textarea")) { jQuery.each(node.childNodes, function() { highlight(this); }); } } return this.each(function() { highlight(this); }); }; /** * Small JavaScript module for the documentation. */ var Documentation = { init : function() { this.fixFirefoxAnchorBug(); this.highlightSearchWords(); this.initIndexTable(); }, /** * i18n support */ TRANSLATIONS : {}, PLURAL_EXPR : function(n) { return n == 1 ? 0 : 1; }, LOCALE : 'unknown', // gettext and ngettext don't access this so that the functions // can safely bound to a different name (_ = Documentation.gettext) gettext : function(string) { var translated = Documentation.TRANSLATIONS[string]; if (typeof translated == 'undefined') return string; return (typeof translated == 'string') ? translated : translated[0]; }, ngettext : function(singular, plural, n) { var translated = Documentation.TRANSLATIONS[singular]; if (typeof translated == 'undefined') return (n == 1) ? 
singular : plural; return translated[Documentation.PLURAL_EXPR(n)]; }, addTranslations : function(catalog) { for (var key in catalog.messages) this.TRANSLATIONS[key] = catalog.messages[key]; this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')'); this.LOCALE = catalog.locale; }, /** * add context elements like header anchor links */ addContextElements : function() { $('div[id] > :header:first').each(function() { $('<a class="headerlink">\u00B6</a>'). attr('href', '#' + this.id). attr('title', _('Permalink to this headline')). appendTo(this); }); $('dt[id]').each(function() { $('<a class="headerlink">\u00B6</a>'). attr('href', '#' + this.id). attr('title', _('Permalink to this definition')). appendTo(this); }); }, /** * workaround a firefox stupidity */ fixFirefoxAnchorBug : function() { if (document.location.hash && $.browser.mozilla) window.setTimeout(function() { document.location.href += ''; }, 10); }, /** * highlight the search words provided in the url in the text */ highlightSearchWords : function() { var params = $.getQueryParameters(); var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : []; if (terms.length) { var body = $('div.body'); window.setTimeout(function() { $.each(terms, function() { body.highlightText(this.toLowerCase(), 'highlighted'); }); }, 10); $('<p class="highlight-link"><a href="javascript:Documentation.hideSearchWords()">' + _('Hide Search Matches') + '</a></p>') .appendTo($('#searchbox')); } }, /** * init the domain index toggle buttons */ initIndexTable : function() { var togglers = $('img.toggler').click(function() { var src = $(this).attr('src'); var idnum = $(this).attr('id').substr(7); $('tr.cg-' + idnum).toggle(); if (src.substr(-9) == 'minus.png') $(this).attr('src', src.substr(0, src.length-9) + 'plus.png'); else $(this).attr('src', src.substr(0, src.length-8) + 'minus.png'); }).css('display', ''); if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) { togglers.click(); } }, /** * helper function to hide the search marks again */ hideSearchWords : function() { $('#searchbox .highlight-link').fadeOut(300); $('span.highlighted').removeClass('highlighted'); }, /** * make the url absolute */ makeURL : function(relativeURL) { return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL; }, /** * get the current relative url */ getCurrentURL : function() { var path = document.location.pathname; var parts = path.split(/\//); $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() { if (this == '..') parts.pop(); }); var url = parts.join('/'); return path.substring(url.lastIndexOf('/') + 1, path.length - 1); } }; // quick alias for translations _ = Documentation.gettext; $(document).ready(function() { Documentation.init(); }); ansible-1.5.4/docsite/_static/basic.css0000664000000000000000000002041712316627017016522 0ustar rootroot/* * basic.css * ~~~~~~~~~ * * Sphinx stylesheet -- basic theme. * * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details.
* */ /* -- main layout ----------------------------------------------------------- */ div.clearer { clear: both; } /* -- relbar ---------------------------------------------------------------- */ div.related { width: 100%; font-size: 90%; } div.related h3 { display: none; } div.related ul { margin: 0; padding: 0 0 0 10px; list-style: none; } div.related li { display: inline; } div.related li.right { float: right; margin-right: 5px; } /* -- sidebar --------------------------------------------------------------- */ div.sphinxsidebarwrapper { padding: 10px 5px 0 10px; } div.sphinxsidebar { float: left; width: 230px; margin-left: -100%; font-size: 90%; } div.sphinxsidebar ul { list-style: none; } div.sphinxsidebar ul ul, div.sphinxsidebar ul.want-points { margin-left: 20px; list-style: square; } div.sphinxsidebar ul ul { margin-top: 0; margin-bottom: 0; } div.sphinxsidebar form { margin-top: 10px; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } div.sphinxsidebar #searchbox input[type="text"] { width: 170px; } div.sphinxsidebar #searchbox input[type="submit"] { width: 30px; } img { border: 0; } /* -- search page ----------------------------------------------------------- */ ul.search { margin: 10px 0 0 20px; padding: 0; } ul.search li { padding: 5px 0 5px 20px; background-image: url(file.png); background-repeat: no-repeat; background-position: 0 7px; } ul.search li a { font-weight: bold; } ul.search li div.context { color: #888; margin: 2px 0 0 30px; text-align: left; } ul.keywordmatches li.goodmatch a { font-weight: bold; } /* -- index page ------------------------------------------------------------ */ table.contentstable { width: 90%; } table.contentstable p.biglink { line-height: 150%; } a.biglink { font-size: 1.3em; } span.linkdescr { font-style: italic; padding-top: 5px; font-size: 90%; } /* -- general index --------------------------------------------------------- */ table.indextable { width: 100%; } table.indextable td { text-align: left; vertical-align: top; } table.indextable dl, table.indextable dd { margin-top: 0; margin-bottom: 0; } table.indextable tr.pcap { height: 10px; } table.indextable tr.cap { margin-top: 10px; background-color: #f2f2f2; } img.toggler { margin-right: 3px; margin-top: 3px; cursor: pointer; } div.modindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } div.genindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } /* -- general body styles --------------------------------------------------- */ a.headerlink { visibility: hidden; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink { visibility: visible; } div.body p.caption { text-align: inherit; } div.body td { text-align: left; } .field-list ul { padding-left: 1em; } .first { margin-top: 0 !important; } p.rubric { margin-top: 30px; font-weight: bold; } img.align-left, .figure.align-left, object.align-left { clear: left; float: left; margin-right: 1em; } img.align-right, .figure.align-right, object.align-right { clear: right; float: right; margin-left: 1em; } img.align-center, .figure.align-center, object.align-center { display: block; margin-left: auto; margin-right: auto; } .align-left { text-align: left; } .align-center { text-align: center; } .align-right { text-align: right; } /* -- sidebars 
-------------------------------------------------------------- */ div.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px 7px 0 7px; background-color: #ffe; width: 40%; float: right; } p.sidebar-title { font-weight: bold; } /* -- topics ---------------------------------------------------------------- */ div.topic { border: 1px solid #ccc; padding: 7px 7px 0 7px; margin: 10px 0 10px 0; } p.topic-title { font-size: 1.1em; font-weight: bold; margin-top: 10px; } /* -- admonitions ----------------------------------------------------------- */ div.admonition { margin-top: 10px; margin-bottom: 10px; padding: 7px; } div.admonition dt { font-weight: bold; } div.admonition dl { margin-bottom: 0; } p.admonition-title { margin: 0px 10px 5px 0px; font-weight: bold; } div.body p.centered { text-align: center; margin-top: 25px; } /* -- tables ---------------------------------------------------------------- */ table.docutils { border: 0; border-collapse: collapse; } table.docutils td, table.docutils th { padding: 1px 8px 1px 5px; border-top: 0; border-left: 0; border-right: 0; border-bottom: 1px solid #aaa; } table.field-list td, table.field-list th { border: 0 !important; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } table.citation { border-left: solid 1px gray; margin-left: 1px; } table.citation td { border-bottom: none; } /* -- other body styles ----------------------------------------------------- */ ol.arabic { list-style: decimal; } ol.loweralpha { list-style: lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, .highlighted { background-color: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .refcount { color: #060; } .optional { font-size: 1.3em; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa; } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .guilabel, .menuselection { font-family: sans-serif; } .accelerator { text-decoration: underline; } .classifier { font-style: oblique; } abbr, acronym { border-bottom: dotted 1px; cursor: help; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; overflow-y: hidden; /* fixes display issues on Chrome browsers */ } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } tt.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } tt.descclassname { background-color: transparent; } tt.xref, a tt { background-color: transparent; font-weight: bold; } h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { background-color: transparent; } .viewcode-link { float: right; } .viewcode-back { float: right; font-family: sans-serif; } div.viewcode-block:target { margin: -1px -10px; padding: 0 10px; } /* -- math display ---------------------------------------------------------- */ img.math { vertical-align: middle; } div.body 
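/* The math-display rules that follow assume Sphinx's image-math output:
   formulas rendered to img.math images, block math inside div.math
   paragraphs, and equation numbers in span.eqno floated to the right. */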
div.math p { text-align: center; } span.eqno { float: right; } /* -- printout stylesheet --------------------------------------------------- */ @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } }ansible-1.5.4/docsite/_static/plus.png0000664000000000000000000000030712316627017016414 0ustar rootrootansible-1.5.4/docsite/_static/solar.css0000664000000000000000000001446412316627017016562 0ustar rootroot/* solar.css * Modified from sphinxdoc.css of the sphinxdoc theme. */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Open Sans', sans-serif; font-size: 14px; line-height: 150%; text-align: center; color: #002b36; padding: 0; margin: 0px 80px 0px 80px; min-width: 740px; -moz-box-shadow: 0px 0px 10px #93a1a1; -webkit-box-shadow: 0px 0px 10px #93a1a1; box-shadow: 0px 0px 10px #93a1a1; background: url("subtle_dots.png") repeat; } div.document { background-color: #fcfcfc; text-align: left; background-repeat: repeat-x; } div.bodywrapper { margin: 0 240px 0 0; border-right: 1px dotted #eee8d5; } div.body { background-color: white; margin: 0; padding: 0.5em 20px 20px 20px; } div.related { font-size: 1em; background: #002b36; color: #839496; padding: 5px 0px; } div.related ul { height: 2em; margin: 2px; } div.related ul li { margin: 0; padding: 0; height: 2em; float: left; } div.related ul li.right { float: right; margin-right: 5px; } div.related ul li a { margin: 0; padding: 2px 5px; line-height: 2em; text-decoration: none; color: #839496; } div.related ul li a:hover { background-color: #073642; -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; } div.sphinxsidebarwrapper { padding: 0; } div.sphinxsidebar { margin: 0; padding: 0.5em 15px 15px 0; width: 210px; float: right; font-size: 0.9em; text-align: left; } div.sphinxsidebar h3, div.sphinxsidebar h4 { margin: 1em 0 0.5em 0; font-size: 1em; padding: 0.7em; background-color: #eeeff1; } div.sphinxsidebar h3 a { color: #2E3436; } div.sphinxsidebar ul { padding-left: 1.5em; margin-top: 7px; padding: 0; line-height: 150%; color: #586e75; } div.sphinxsidebar ul ul { margin-left: 20px; } div.sphinxsidebar input { border: 1px solid #eee8d5; } div.footer { background-color: #93a1a1; color: #eee; padding: 3px 8px 3px 0; clear: both; font-size: 0.8em; text-align: right; } div.footer a { color: #eee; text-decoration: none; } /* -- body styles ----------------------------------------------------------- */ p { margin: 0.8em 0 0.5em 0; } div.body a, div.sphinxsidebarwrapper a { color: #268bd2; text-decoration: none; } div.body a:hover, div.sphinxsidebarwrapper a:hover { border-bottom: 1px solid #268bd2; } h1, h2, h3, h4, h5, h6 { font-family: "Open Sans", sans-serif; font-weight: 300; } h1 { margin: 0; padding: 0.7em 0 0.3em 0; line-height: 1.2em; color: #002b36; text-shadow: #eee 0.1em 0.1em 0.1em; } h2 { margin: 1.3em 0 0.2em 0; padding: 0 0 10px 0; color: #073642; border-bottom: 1px solid #eee; } h3 { margin: 1em 0 -0.3em 0; padding-bottom: 5px; } h3, h4, h5, h6 { color: #073642; border-bottom: 1px dotted #eee; } div.body h1 a, div.body h2 a, div.body h3 a, div.body h4 a, div.body h5 a, div.body h6 a { color: #657B83!important; } h1 a.anchor, h2 a.anchor, h3 a.anchor, h4 a.anchor, h5 a.anchor, h6 a.anchor { display: none; margin: 0 0 0 0.3em; padding: 0 0.2em 0 0.2em; color: 
#aaa!important; } h1:hover a.anchor, h2:hover a.anchor, h3:hover a.anchor, h4:hover a.anchor, h5:hover a.anchor, h6:hover a.anchor { display: inline; } h1 a.anchor:hover, h2 a.anchor:hover, h3 a.anchor:hover, h4 a.anchor:hover, h5 a.anchor:hover, h6 a.anchor:hover { color: #777; background-color: #eee; } a.headerlink { color: #c60f0f!important; font-size: 1em; margin-left: 6px; padding: 0 4px 0 4px; text-decoration: none!important; } a.headerlink:hover { background-color: #ccc; color: white!important; } cite, code, tt { font-family: 'Source Code Pro', monospace; font-size: 0.9em; letter-spacing: 0.01em; background-color: #eeeff2; font-style: normal; } hr { border: 1px solid #eee; margin: 2em; } .highlight { -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; } pre { font-family: 'Source Code Pro', monospace; font-style: normal; font-size: 0.9em; letter-spacing: 0.015em; line-height: 120%; padding: 0.7em; white-space: pre-wrap; /* css-3 */ white-space: -moz-pre-wrap; /* Mozilla, since 1999 */ white-space: -pre-wrap; /* Opera 4-6 */ white-space: -o-pre-wrap; /* Opera 7 */ word-wrap: break-word; /* Internet Explorer 5.5+ */ } pre a { color: inherit; text-decoration: underline; } td.linenos pre { padding: 0.5em 0; } div.quotebar { background-color: #f8f8f8; max-width: 250px; float: right; padding: 2px 7px; border: 1px solid #ccc; } div.topic { background-color: #f8f8f8; } table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; } table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; } div.admonition { font-size: 0.9em; margin: 1em 0 1em 0; border: 1px solid #eee; background-color: #f7f7f7; padding: 0; -moz-box-shadow: 0px 8px 6px -8px #93a1a1; -webkit-box-shadow: 0px 8px 6px -8px #93a1a1; box-shadow: 0px 8px 6px -8px #93a1a1; } div.admonition p { margin: 0.5em 1em 0.5em 1em; padding: 0.2em; } div.admonition pre { margin: 0.4em 1em 0.4em 1em; } div.admonition p.admonition-title { margin: 0; padding: 0.2em 0 0.2em 0.6em; color: white; border-bottom: 1px solid #eee8d5; font-weight: bold; background-color: #268bd2; } div.warning p.admonition-title, div.important p.admonition-title { background-color: #cb4b16; } div.hint p.admonition-title, div.tip p.admonition-title { background-color: #859900; } div.caution p.admonition-title, div.attention p.admonition-title, div.danger p.admonition-title, div.error p.admonition-title { background-color: #dc322f; } div.admonition ul, div.admonition ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; } div.versioninfo { margin: 1em 0 0 0; border: 1px solid #eee; background-color: #DDEAF0; padding: 8px; line-height: 1.3em; font-size: 0.9em; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #eee; border-bottom: 1px solid #eee; } ansible-1.5.4/docsite/_static/sidebar.js0000664000000000000000000001103212316627017016667 0ustar rootroot/* * sidebar.js * ~~~~~~~~~~ * * This script makes the Sphinx sidebar collapsible. * * .sphinxsidebar contains .sphinxsidebarwrapper. This script adds * in .sphixsidebar, after .sphinxsidebarwrapper, the #sidebarbutton * used to collapse and expand the sidebar. * * When the sidebar is collapsed the .sphinxsidebarwrapper is hidden * and the width of the sidebar and the margin-left of the document * are decreased. When the sidebar is expanded the opposite happens. * This script saves a per-browser/per-session cookie used to * remember the position of the sidebar among the pages. * Once the browser is closed the cookie is deleted and the position * reset to the default (expanded). 
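 * A sketch of the cookie round-trip described above (illustrative; the real
 * writer/readers are collapse_sidebar(), expand_sidebar() and
 * set_position_from_cookie() below -- note that no expires attribute is set,
 * which is what keeps the remembered state per-session):
 *
 *   document.cookie = 'sidebar=collapsed';   // persisted when toggled
 *   var items = document.cookie.split(';');  // read back on the next page load
 *   // each item is a 'name=value' pair such as 'sidebar=collapsed'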
* * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ $(function() { // global elements used by the functions. // the 'sidebarbutton' element is defined as global after its // creation, in the add_sidebar_button function var bodywrapper = $('.bodywrapper'); var sidebar = $('.sphinxsidebar'); var sidebarwrapper = $('.sphinxsidebarwrapper'); // original margin-left of the bodywrapper and width of the sidebar // with the sidebar expanded var bw_margin_expanded = bodywrapper.css('margin-left'); var ssb_width_expanded = sidebar.width(); // margin-left of the bodywrapper and width of the sidebar // with the sidebar collapsed var bw_margin_collapsed = '.8em'; var ssb_width_collapsed = '.8em'; // colors used by the current theme var dark_color = $('.related').css('background-color'); var light_color = $('.document').css('background-color'); function sidebar_is_collapsed() { return sidebarwrapper.is(':not(:visible)'); } function toggle_sidebar() { if (sidebar_is_collapsed()) expand_sidebar(); else collapse_sidebar(); } function collapse_sidebar() { sidebarwrapper.hide(); sidebar.css('width', ssb_width_collapsed); bodywrapper.css('margin-left', bw_margin_collapsed); sidebarbutton.css({ 'margin-left': '0', 'height': bodywrapper.height() }); sidebarbutton.find('span').text('»'); sidebarbutton.attr('title', _('Expand sidebar')); document.cookie = 'sidebar=collapsed'; } function expand_sidebar() { bodywrapper.css('margin-left', bw_margin_expanded); sidebar.css('width', ssb_width_expanded); sidebarwrapper.show(); sidebarbutton.css({ 'margin-left': ssb_width_expanded-12, 'height': bodywrapper.height() }); sidebarbutton.find('span').text('«'); sidebarbutton.attr('title', _('Collapse sidebar')); document.cookie = 'sidebar=expanded'; } function add_sidebar_button() { sidebarwrapper.css({ 'float': 'left', 'margin-right': '0', 'width': ssb_width_expanded - 28 }); // create the button sidebar.append( '
<div title="Collapse sidebar" id="sidebarbutton"><span>«</span></div>
' ); var sidebarbutton = $('#sidebarbutton'); light_color = sidebarbutton.css('background-color'); // find the height of the viewport to center the '<<' in the page var viewport_height; if (window.innerHeight) viewport_height = window.innerHeight; else viewport_height = $(window).height(); sidebarbutton.find('span').css({ 'display': 'block', 'margin-top': (viewport_height - sidebar.position().top - 20) / 2 }); sidebarbutton.click(toggle_sidebar); sidebarbutton.attr('title', _('Collapse sidebar')); sidebarbutton.css({ 'color': '#FFFFFF', 'border-left': '1px solid ' + dark_color, 'font-size': '1.2em', 'cursor': 'pointer', 'height': bodywrapper.height(), 'padding-top': '1px', 'margin-left': ssb_width_expanded - 12 }); sidebarbutton.hover( function () { $(this).css('background-color', dark_color); }, function () { $(this).css('background-color', light_color); } ); } function set_position_from_cookie() { if (!document.cookie) return; var items = document.cookie.split(';'); for(var k=0; k -1) start = i; }); start = Math.max(start - 120, 0); var excerpt = ((start > 0) ? '...' : '') + $.trim(text.substr(start, 240)) + ((start + 240 - text.length) ? '...' : ''); var rv = $('
<div class="context"></div>').text(excerpt); $.each(hlwords, function() { rv = rv.highlightText(this, 'highlighted'); }); return rv; } /** * Porter Stemmer */ var Stemmer = function() { var step2list = { ational: 'ate', tional: 'tion', enci: 'ence', anci: 'ance', izer: 'ize', bli: 'ble', alli: 'al', entli: 'ent', eli: 'e', ousli: 'ous', ization: 'ize', ation: 'ate', ator: 'ate', alism: 'al', iveness: 'ive', fulness: 'ful', ousness: 'ous', aliti: 'al', iviti: 'ive', biliti: 'ble', logi: 'log' }; var step3list = { icate: 'ic', ative: '', alize: 'al', iciti: 'ic', ical: 'ic', ful: '', ness: '' }; var c = "[^aeiou]"; // consonant var v = "[aeiouy]"; // vowel var C = c + "[^aeiouy]*"; // consonant sequence var V = v + "[aeiou]*"; // vowel sequence var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 var s_v = "^(" + C + ")?" + v; // vowel in stem this.stemWord = function (w) { var stem; var suffix; var firstch; var origword = w; if (w.length < 3) return w; var re; var re2; var re3; var re4; firstch = w.substr(0,1); if (firstch == "y") w = firstch.toUpperCase() + w.substr(1); // Step 1a re = /^(.+?)(ss|i)es$/; re2 = /^(.+?)([^s])s$/; if (re.test(w)) w = w.replace(re,"$1$2"); else if (re2.test(w)) w = w.replace(re2,"$1$2"); // Step 1b re = /^(.+?)eed$/; re2 = /^(.+?)(ed|ing)$/; if (re.test(w)) { var fp = re.exec(w); re = new RegExp(mgr0); if (re.test(fp[1])) { re = /.$/; w = w.replace(re,""); } } else if (re2.test(w)) { var fp = re2.exec(w); stem = fp[1]; re2 = new RegExp(s_v); if (re2.test(stem)) { w = stem; re2 = /(at|bl|iz)$/; re3 = new RegExp("([^aeiouylsz])\\1$"); re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); if (re2.test(w)) w = w + "e"; else if (re3.test(w)) { re = /.$/; w = w.replace(re,""); } else if (re4.test(w)) w = w + "e"; } } // Step 1c re = /^(.+?)y$/; if (re.test(w)) { var fp = re.exec(w); stem = fp[1]; re = new RegExp(s_v); if (re.test(stem)) w = stem + "i"; } // Step 2 re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; if (re.test(w)) { var fp = re.exec(w); stem = fp[1]; suffix = fp[2]; re = new RegExp(mgr0); if (re.test(stem)) w = stem + step2list[suffix]; } // Step 3 re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; if (re.test(w)) { var fp = re.exec(w); stem = fp[1]; suffix = fp[2]; re = new RegExp(mgr0); if (re.test(stem)) w = stem + step3list[suffix]; } // Step 4 re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; re2 = /^(.+?)(s|t)(ion)$/; if (re.test(w)) { var fp = re.exec(w); stem = fp[1]; re = new RegExp(mgr1); if (re.test(stem)) w = stem; } else if (re2.test(w)) { var fp = re2.exec(w); stem = fp[1] + fp[2]; re2 = new RegExp(mgr1); if (re2.test(stem)) w = stem; } // Step 5 re = /^(.+?)e$/; if (re.test(w)) { var fp = re.exec(w); stem = fp[1]; re = new RegExp(mgr1); re2 = new RegExp(meq1); re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) w = stem; } re = /ll$/; re2 = new RegExp(mgr1); if (re.test(w) && re2.test(w)) { re = /.$/; w = w.replace(re,""); } // and turn initial Y back to y if (firstch == "y") w = firstch.toLowerCase() + w.substr(1); return w; } } /** * Search Module */ var Search = { _index : null, _queued_query : null, _pulse_status : -1, init : function() { var params = $.getQueryParameters(); if (params.q) { var query = params.q[0]; 
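// A usage sketch for the Porter Stemmer defined above: the search code creates
// one instance and folds every indexed word and query word to its stem, so a
// query for 'relations' also matches 'relational'. The outputs shown are the
// standard Porter-algorithm results, given here purely for illustration:
//
//   var stemmer = new Stemmer();
//   stemmer.stemWord('caresses');    // -> 'caress'
//   stemmer.stemWord('ponies');      // -> 'poni'
//   stemmer.stemWord('relational');  // -> 'relat'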
$('input[name="q"]')[0].value = query; this.performSearch(query); } }, loadIndex : function(url) { $.ajax({type: "GET", url: url, data: null, success: null, dataType: "script", cache: true}); }, setIndex : function(index) { var q; this._index = index; if ((q = this._queued_query) !== null) { this._queued_query = null; Search.query(q); } }, hasIndex : function() { return this._index !== null; }, deferQuery : function(query) { this._queued_query = query; }, stopPulse : function() { this._pulse_status = 0; }, startPulse : function() { if (this._pulse_status >= 0) return; function pulse() { Search._pulse_status = (Search._pulse_status + 1) % 4; var dotString = ''; for (var i = 0; i < Search._pulse_status; i++) dotString += '.'; Search.dots.text(dotString); if (Search._pulse_status > -1) window.setTimeout(pulse, 500); }; pulse(); }, /** * perform a search for something */ performSearch : function(query) { // create the required interface elements this.out = $('#search-results'); this.title = $('
<h2>' + _('Searching') + '</h2>
').appendTo(this.out); this.dots = $('<span></span>').appendTo(this.title); this.status = $('
<p style="display: none"></p>
').appendTo(this.out); this.output = $('