pax_global_header00006660000000000000000000000064126502306050014511gustar00rootroot0000000000000052 comment=43c79ad94ca9c564c406b772819fa3adf145c0dd heat-cfntools-1.4.2/000077500000000000000000000000001265023060500142635ustar00rootroot00000000000000heat-cfntools-1.4.2/.gitignore000066400000000000000000000001431265023060500162510ustar00rootroot00000000000000*.pyc *.swp build dist heat_cfntools.egg-info/ .testrepository/ subunit.log .tox AUTHORS ChangeLog heat-cfntools-1.4.2/.gitreview000066400000000000000000000001221265023060500162640ustar00rootroot00000000000000[gerrit] host=review.openstack.org port=29418 project=openstack/heat-cfntools.git heat-cfntools-1.4.2/.testr.conf000066400000000000000000000003461265023060500163540ustar00rootroot00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ./heat_cfntools/tests $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list heat-cfntools-1.4.2/CONTRIBUTING.rst000066400000000000000000000010271265023060500167240ustar00rootroot00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps in this page: http://docs.openstack.org/infra/manual/developers.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: http://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/heat heat-cfntools-1.4.2/LICENSE000066400000000000000000000236371265023060500153030ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. heat-cfntools-1.4.2/MANIFEST.in000066400000000000000000000002001265023060500160210ustar00rootroot00000000000000include CONTRIBUTING.rst include MANIFEST.in include README.rst include AUTHORS LICENSE include ChangeLog graft doc graft tools heat-cfntools-1.4.2/README.rst000066400000000000000000000016701265023060500157560ustar00rootroot00000000000000========================= Heat CloudFormation Tools ========================= There are several bootstrap methods for CloudFormation: 1. Create an image with the application ready to go 2. Use cloud-init to run a startup script passed as userdata to the nova server create call 3. Use the CloudFormation instance helper scripts This package contains the files required for choice #3. cfn-init - Reads the AWS::CloudFormation::Init metadata for the instance resource, installs packages, and starts services cfn-signal - Waits for an application to be ready before continuing, i.e. supporting the WaitCondition feature cfn-hup - Handles updates from the UpdateStack CloudFormation API call * Free software: Apache license * Source: http://git.openstack.org/cgit/openstack/heat-cfntools * Bugs: http://bugs.launchpad.net/heat-cfntools Related projects ---------------- * http://wiki.openstack.org/Heat heat-cfntools-1.4.2/bin/000077500000000000000000000000001265023060500150335ustar00rootroot00000000000000heat-cfntools-1.4.2/bin/cfn-create-aws-symlinks000077500000000000000000000055541265023060500214400ustar00rootroot00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
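# Usage sketch (flags match the argparse definitions below; the paths shown
# are the script's defaults, not assumptions):
#   cfn-create-aws-symlinks                        # link /usr/bin/cfn-* into /opt/aws/bin
#   cfn-create-aws-symlinks -s /usr/local/bin -f   # alternate source dir, replace existing links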
""" Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin """ import argparse import glob import os import os.path def create_symlink(source_file, target_file, override=False): if os.path.exists(target_file): if (override): os.remove(target_file) else: print('%s already exists, will not replace with symlink' % target_file) return print('%s -> %s' % (source_file, target_file)) os.symlink(source_file, target_file) def check_dirs(source_dir, target_dir): print('%s -> %s' % (source_dir, target_dir)) if source_dir == target_dir: print('Source and target are the same %s' % target_dir) return False if not os.path.exists(target_dir): try: os.makedirs(target_dir) except OSError as exc: print('Could not create target directory %s: %s' % (target_dir, exc)) return False return True def create_symlinks(source_dir, target_dir, glob_pattern, override): source_files = glob.glob(os.path.join(source_dir, glob_pattern)) for source_file in source_files: target_file = os.path.join(target_dir, os.path.basename(source_file)) create_symlink(source_file, target_file, override=override) if __name__ == '__main__': description = 'Creates symlinks for the cfn-* scripts to /opt/aws/bin' parser = argparse.ArgumentParser(description=description) parser.add_argument( '-t', '--target', dest="target_dir", help="Target directory to create symlinks", default='/opt/aws/bin', required=False) parser.add_argument( '-s', '--source', dest="source_dir", help="Source directory to create symlinks from. " "Defaults to the directory where this script is", default='/usr/bin', required=False) parser.add_argument( '-f', '--force', dest="force", action='store_true', help="If specified, will create symlinks even if " "there is already a target file", required=False) args = parser.parse_args() if not check_dirs(args.source_dir, args.target_dir): exit(1) create_symlinks(args.source_dir, args.target_dir, 'cfn-*', args.force) heat-cfntools-1.4.2/bin/cfn-get-metadata000077500000000000000000000057201265023060500200660ustar00rootroot00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Implements cfn-get-metadata CloudFormation functionality """ import argparse import logging from heat_cfntools.cfntools import cfn_helper description = " " parser = argparse.ArgumentParser(description=description) parser.add_argument('-s', '--stack', dest="stack_name", help="A Heat stack name", required=True) parser.add_argument('-r', '--resource', dest="logical_resource_id", help="A Heat logical resource ID", required=True) parser.add_argument('--access-key', dest="access_key", help="A Keystone access key", required=False) parser.add_argument('--secret-key', dest="secret_key", help="A Keystone secret key", required=False) parser.add_argument('--region', dest="region", help="Openstack region", required=False) parser.add_argument('--credential-file', dest="credential_file", help="credential-file", required=False) parser.add_argument('-u', '--url', dest="url", help="service url", required=False) parser.add_argument('-k', '--key', dest="key", help="key", required=False) args = parser.parse_args() if not args.stack_name: print('The Stack name must not be empty.') exit(1) if not args.logical_resource_id: print('The Resource ID must not be empty') exit(1) log_format = '%(levelname)s [%(asctime)s] %(message)s' logging.basicConfig(format=log_format, level=logging.DEBUG) LOG = logging.getLogger('cfntools') log_file_name = "/var/log/cfn-get-metadata.log" file_handler = logging.FileHandler(log_file_name) file_handler.setFormatter(logging.Formatter(log_format)) LOG.addHandler(file_handler) metadata = cfn_helper.Metadata(args.stack_name, args.logical_resource_id, access_key=args.access_key, secret_key=args.secret_key, region=args.region, credentials_file=args.credential_file) metadata.retrieve() LOG.debug(str(metadata)) metadata.display(args.key) heat-cfntools-1.4.2/bin/cfn-hup000077500000000000000000000066641265023060500163350ustar00rootroot00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Implements cfn-hup CloudFormation functionality """ import argparse import logging import os import os.path from heat_cfntools.cfntools import cfn_helper description = " " parser = argparse.ArgumentParser(description=description) parser.add_argument('-c', '--config', dest="config_dir", help="Hook Config Directory", required=False, default='/etc/cfn/hooks.d') parser.add_argument('-f', '--no-daemon', dest="no_daemon", action="store_true", help="Do not run as a daemon", required=False) parser.add_argument('-v', '--verbose', action="store_true", dest="verbose", help="Verbose logging", required=False) args = parser.parse_args() # Setup logging log_format = '%(levelname)s [%(asctime)s] %(message)s' log_file_name = "/var/log/cfn-hup.log" log_level = logging.INFO if args.verbose: log_level = logging.DEBUG logging.basicConfig(filename=log_file_name, format=log_format, level=log_level) LOG = logging.getLogger('cfntools') main_conf_path = '/etc/cfn/cfn-hup.conf' try: main_config_file = open(main_conf_path) except IOError as exc: LOG.error('Could not open main configuration at %s' % main_conf_path) exit(1) config_files = [] hooks_conf_path = '/etc/cfn/hooks.conf' if os.path.exists(hooks_conf_path): try: config_files.append(open(hooks_conf_path)) except IOError as exc: LOG.exception(exc) if args.config_dir and os.path.exists(args.config_dir): try: for f in os.listdir(args.config_dir): config_files.append(open(os.path.join(args.config_dir, f))) except OSError as exc: LOG.exception(exc) if not config_files: LOG.error('No hook files found at %s or %s' % (hooks_conf_path, args.config_dir)) exit(1) try: mainconfig = cfn_helper.HupConfig([main_config_file] + config_files) except Exception as ex: LOG.error('Cannot load configuration: %s' % str(ex)) exit(1) if not mainconfig.unique_resources_get(): LOG.error('No hooks were found. Add some to %s or %s' % (hooks_conf_path, args.config_dir)) exit(1) for r in mainconfig.unique_resources_get(): LOG.debug('Checking resource %s' % r) metadata = cfn_helper.Metadata(mainconfig.stack, r, credentials_file=mainconfig.credential_file, region=mainconfig.region) metadata.retrieve() try: metadata.cfn_hup(mainconfig.hooks) except Exception as e: LOG.exception("Error processing metadata") exit(1) heat-cfntools-1.4.2/bin/cfn-init000077500000000000000000000047751265023060500165050ustar00rootroot00000000000000#!/usr/bin/python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Implements cfn-init CloudFormation functionality """ import argparse import logging from heat_cfntools.cfntools import cfn_helper description = " " parser = argparse.ArgumentParser(description=description) parser.add_argument('-s', '--stack', dest="stack_name", help="A Heat stack name", required=False) parser.add_argument('-r', '--resource', dest="logical_resource_id", help="A Heat logical resource ID", required=False) parser.add_argument('--access-key', dest="access_key", help="A Keystone access key", required=False) parser.add_argument('--secret-key', dest="secret_key", help="A Keystone secret key", required=False) parser.add_argument('--region', dest="region", help="Openstack region", required=False) parser.add_argument('-c', '--configsets', dest="configsets", help="An optional list of configSets (default: default)", required=False) args = parser.parse_args() log_format = '%(levelname)s [%(asctime)s] %(message)s' log_file_name = "/var/log/cfn-init.log" logging.basicConfig(filename=log_file_name, format=log_format, level=logging.DEBUG) LOG = logging.getLogger('cfntools') metadata = cfn_helper.Metadata(args.stack_name, args.logical_resource_id, access_key=args.access_key, secret_key=args.secret_key, region=args.region, configsets=args.configsets) metadata.retrieve() try: metadata.cfn_init() except Exception as e: LOG.exception("Error processing metadata") exit(1) heat-cfntools-1.4.2/bin/cfn-push-stats000077500000000000000000000254011265023060500176420ustar00rootroot00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Implements cfn-push-stats CloudFormation functionality """ import argparse import logging import os import subprocess # Override BOTO_CONFIG, which makes boto look only at the specified # config file, instead of the default locations os.environ['BOTO_CONFIG'] = '/var/lib/heat-cfntools/cfn-boto-cfg' from boto.ec2 import cloudwatch log_format = '%(levelname)s [%(asctime)s] %(message)s' log_file_name = "/var/log/cfn-push-stats.log" logging.basicConfig(filename=log_file_name, format=log_format, level=logging.DEBUG) LOG = logging.getLogger('cfntools') try: import psutil except ImportError: LOG.warn("psutil not available. 
If you want process and memory " "statistics, you need to install it.") from heat_cfntools.cfntools import cfn_helper KILO = 1024 MEGA = 1048576 GIGA = 1073741824 unit_map = {'bytes': 1, 'kilobytes': KILO, 'megabytes': MEGA, 'gigabytes': GIGA} description = " " parser = argparse.ArgumentParser(description=description) parser.add_argument('-v', '--verbose', action="store_true", help="Verbose logging", required=False) parser.add_argument('--credential-file', dest="credential_file", help="credential-file", required=False, default='/etc/cfn/cfn-credentials') parser.add_argument('--service-failure', required=False, action="store_true", help='Reports a service failure.') parser.add_argument('--mem-util', required=False, action="store_true", help='Reports memory utilization in percentages.') parser.add_argument('--mem-used', required=False, action="store_true", help='Reports memory used (excluding cache/buffers) ' 'in megabytes.') parser.add_argument('--mem-avail', required=False, action="store_true", help='Reports available memory (including cache/buffers) ' 'in megabytes.') parser.add_argument('--swap-util', required=False, action="store_true", help='Reports swap utilization in percentages.') parser.add_argument('--swap-used', required=False, action="store_true", help='Reports allocated swap space in megabytes.') parser.add_argument('--disk-space-util', required=False, action="store_true", help='Reports disk space utilization in percentages.') parser.add_argument('--disk-space-used', required=False, action="store_true", help='Reports allocated disk space in gigabytes.') parser.add_argument('--disk-space-avail', required=False, action="store_true", help='Reports available disk space in gigabytes.') parser.add_argument('--memory-units', required=False, default='megabytes', help='Specifies units for memory metrics.') parser.add_argument('--disk-units', required=False, default='megabytes', help='Specifies units for disk metrics.') parser.add_argument('--disk-path', required=False, default='/', help='Selects the disk by the path on which to report.') parser.add_argument('--cpu-util', required=False, action="store_true", help='Reports cpu utilization in percentages.') parser.add_argument('--haproxy', required=False, action='store_true', help='Reports HAProxy loadbalancer usage.') parser.add_argument('--haproxy-latency', required=False, action='store_true', help='Reports HAProxy latency.') parser.add_argument('--heartbeat', required=False, action='store_true', help='Sends a Heartbeat.') parser.add_argument('--watch', required=False, help='the name of the watch to post to.') parser.add_argument('--metric', required=False, help='name of the metric to post to.') parser.add_argument('--units', required=False, help='name of the units to be used for the specified ' 'metric') parser.add_argument('--value', required=False, help='value to post to the specified metric') args = parser.parse_args() LOG.debug('cfn-push-stats called %s ' % (str(args))) credentials = cfn_helper.parse_creds_file(args.credential_file) namespace = 'system/linux' data = {} # Generic user-specified metric # ============================= if args.metric and args.units and args.value: data[args.metric] = { 'Value': args.value, 'Units': args.units} # service failure # =============== if args.service_failure: data['ServiceFailure'] = { 'Value': 1, 'Units': 'Counter'} # heartbeat # ========= if args.heartbeat: data['Heartbeat'] = { 'Value': 1, 'Units': 'Counter'} # memory space # ============ if args.mem_util or args.mem_used or args.mem_avail:
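# NOTE: phymem_usage()/virtmem_usage() used below are the legacy psutil
# (pre-2.0) API that this script targets; on modern psutil the equivalents
# are psutil.virtual_memory() and psutil.swap_memory().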
mem = psutil.phymem_usage() if args.mem_util: data['MemoryUtilization'] = { 'Value': mem.percent, 'Units': 'Percent'} if args.mem_used: data['MemoryUsed'] = { 'Value': mem.used / unit_map[args.memory_units], 'Units': args.memory_units} if args.mem_avail: data['MemoryAvailable'] = { 'Value': mem.free / unit_map[args.memory_units], 'Units': args.memory_units} # swap space # ========== if args.swap_util or args.swap_used: swap = psutil.virtmem_usage() if args.swap_util: data['SwapUtilization'] = { 'Value': swap.percent, 'Units': 'Percent'} if args.swap_used: data['SwapUsed'] = { 'Value': swap.used / unit_map[args.memory_units], 'Units': args.memory_units} # disk space # ========== if args.disk_space_util or args.disk_space_used or args.disk_space_avail: disk = psutil.disk_usage(args.disk_path) if args.disk_space_util: data['DiskSpaceUtilization'] = { 'Value': disk.percent, 'Units': 'Percent'} if args.disk_space_used: data['DiskSpaceUsed'] = { 'Value': disk.used / unit_map[args.disk_units], 'Units': args.disk_units} if args.disk_space_avail: data['DiskSpaceAvailable'] = { 'Value': disk.free / unit_map[args.disk_units], 'Units': args.disk_units} # cpu utilization # =============== if args.cpu_util: # blocks for 1 second. cpu_percent = psutil.cpu_percent(interval=1) data['CPUUtilization'] = { 'Value': cpu_percent, 'Units': 'Percent'} # HAProxy # ======= def parse_haproxy_unix_socket(res, latency_only=False): # http://docs.amazonwebservices.com/ElasticLoadBalancing/latest # /DeveloperGuide/US_MonitoringLoadBalancerWithCW.html type_map = {'FRONTEND': '0', 'BACKEND': '1', 'SERVER': '2', 'SOCKET': '3'} num_map = {'status': 17, 'svname': 1, 'check_duration': 38, 'type': 32, 'req_tot': 48, 'hrsp_2xx': 40, 'hrsp_3xx': 41, 'hrsp_4xx': 42, 'hrsp_5xx': 43} def add_stat(key, value, unit='Counter'): res[key] = {'Value': value, 'Units': unit} echo = subprocess.Popen(['echo', 'show stat'], stdout=subprocess.PIPE) socat = subprocess.Popen(['socat', 'stdio', '/tmp/.haproxy-stats'], stdin=echo.stdout, stdout=subprocess.PIPE) end_pipe = socat.stdout raw = [l.strip('\n').split(',') for l in end_pipe if l[0] != '#' and len(l) > 2] latency = 0 up_count = 0 down_count = 0 for f in raw: if latency_only is False: if f[num_map['type']] == type_map['FRONTEND']: add_stat('RequestCount', f[num_map['req_tot']]) add_stat('HTTPCode_ELB_4XX', f[num_map['hrsp_4xx']]) add_stat('HTTPCode_ELB_5XX', f[num_map['hrsp_5xx']]) elif f[num_map['type']] == type_map['BACKEND']: add_stat('HTTPCode_Backend_2XX', f[num_map['hrsp_2xx']]) add_stat('HTTPCode_Backend_3XX', f[num_map['hrsp_3xx']]) add_stat('HTTPCode_Backend_4XX', f[num_map['hrsp_4xx']]) add_stat('HTTPCode_Backend_5XX', f[num_map['hrsp_5xx']]) else: if f[num_map['status']] == 'UP': up_count = up_count + 1 else: down_count = down_count + 1 if f[num_map['check_duration']] != '': latency = max(float(f[num_map['check_duration']]), latency) # note: haproxy's check_duration is in ms, but Latency is in seconds add_stat('Latency', str(latency / 1000), unit='Seconds') if latency_only is False: add_stat('HealthyHostCount', str(up_count)) add_stat('UnHealthyHostCount', str(down_count)) def send_stats(info): # Create boto connection, need the hard-coded port/path as boto # can't read these from config values in BOTO_CONFIG # FIXME : currently only http due to is_secure=False client = cloudwatch.CloudWatchConnection( aws_access_key_id=credentials['AWSAccessKeyId'], aws_secret_access_key=credentials['AWSSecretKey'], is_secure=False, port=8003, path="/v1", debug=0) # Then we send the metric 
datapoints passed in "info", note this could # contain multiple keys as the options parsed above are not exclusive # The alarm name is passed as a dimension so the metric datapoint can # be associated with the alarm/watch in the engine metadata = cfn_helper.Metadata('not-used', None) metric_dims = metadata.get_tags() if args.watch: metric_dims['AlarmName'] = args.watch for key in info: LOG.info("Sending metric %s, Units %s, Value %s" % (key, info[key]['Units'], info[key]['Value'])) client.put_metric_data(namespace=namespace, name=key, value=info[key]['Value'], timestamp=None, # means use "now" in the engine unit=info[key]['Units'], dimensions=metric_dims, statistics=None) if args.haproxy: namespace = 'AWS/ELB' lb_data = {} parse_haproxy_unix_socket(lb_data) send_stats(lb_data) elif args.haproxy_latency: namespace = 'AWS/ELB' lb_data = {} parse_haproxy_unix_socket(lb_data, latency_only=True) send_stats(lb_data) else: send_stats(data) heat-cfntools-1.4.2/bin/cfn-signal000077500000000000000000000073311265023060500170060ustar00rootroot00000000000000#!/usr/bin/env python # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Implements cfn-signal CloudFormation functionality """ import argparse import logging import sys from heat_cfntools.cfntools import cfn_helper description = " " parser = argparse.ArgumentParser(description=description) parser.add_argument('-s', '--success', dest="success", help="signal status to report", default='true', required=False) parser.add_argument('-r', '--reason', dest="reason", help="The reason for the failure", default="Configuration Complete", required=False) parser.add_argument('-d', '--data', dest="data", default="Application has completed configuration.", help="The data to send", required=False) parser.add_argument('-i', '--id', dest="unique_id", help="the unique id to send back to the WaitCondition", default=None, required=False) parser.add_argument('-e', '--exit-code', dest="exit_code", help="The exit code from a process to interpret", default=None, required=False) parser.add_argument('--exit', dest="exit", help="DEPRECATED! Use -e or --exit-code instead.", default=None, required=False) parser.add_argument('url', help='the url to post to') parser.add_argument('-k', '--insecure', help="This will make insecure https request to cfn-api.", action='store_true') args = parser.parse_args() log_format = '%(levelname)s [%(asctime)s] %(message)s' log_file_name = "/var/log/cfn-signal.log" logging.basicConfig(filename=log_file_name, format=log_format, level=logging.DEBUG) LOG = logging.getLogger('cfntools') LOG.debug('cfn-signal called %s ' % (str(args))) if args.exit: LOG.warning('--exit DEPRECATED! Use -e or --exit-code instead.') status = 'FAILURE' exit_code = args.exit_code or args.exit if exit_code: # "exit_code" takes precedence over "success". 
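# For example, a wait condition is typically signalled with something like
#   cfn-signal -e $? "$WAIT_CONDITION_URL"
# (hypothetical shell snippet): exit code '0' from the watched command maps
# to Status=SUCCESS below, anything else to FAILURE.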
if exit_code == '0': status = 'SUCCESS' else: if args.success == 'true': status = 'SUCCESS' unique_id = args.unique_id if unique_id is None: LOG.debug('No id passed from the command line') md = cfn_helper.Metadata('not-used', None) unique_id = md.get_instance_id() if unique_id is None: LOG.error('Could not get the instance id from metadata!') import socket unique_id = socket.getfqdn() LOG.debug('id: %s' % (unique_id)) body = { "Status": status, "Reason": args.reason, "UniqueId": unique_id, "Data": args.data } data = cfn_helper.json.dumps(body) cmd = ['curl'] if args.insecure: cmd.append('--insecure') cmd.extend([ '-X', 'PUT', '-H', 'Content-Type:', '--data-binary', data, args.url ]) command = cfn_helper.CommandRunner(cmd).run() if command.status != 0: LOG.error(command.stderr) sys.exit(command.status) heat-cfntools-1.4.2/doc/000077500000000000000000000000001265023060500150305ustar00rootroot00000000000000heat-cfntools-1.4.2/doc/.gitignore000066400000000000000000000000171265023060500170160ustar00rootroot00000000000000target/ build/ heat-cfntools-1.4.2/doc/Makefile000066400000000000000000000126751265023060500165030ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files."
json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Heat.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Heat.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Heat" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Heat" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." heat-cfntools-1.4.2/doc/README.rst000066400000000000000000000006631265023060500165240ustar00rootroot00000000000000====================== Building the man pages ====================== Dependencies ============ Sphinx_ You'll need Sphinx (the Python one). If you are using a virtualenv, you'll need to install Sphinx in the virtualenv specifically so that it can load the heat-cfntools modules.
:: sudo yum install python-sphinx sudo pip-python install sphinxcontrib-httpdomain Use `make` ========== To build the man pages: make man heat-cfntools-1.4.2/doc/source/000077500000000000000000000000001265023060500163305ustar00rootroot00000000000000heat-cfntools-1.4.2/doc/source/_static/000077500000000000000000000000001265023060500177565ustar00rootroot00000000000000heat-cfntools-1.4.2/doc/source/_static/basic.css000066400000000000000000000146251265023060500215610ustar00rootroot00000000000000/** * Sphinx stylesheet -- basic theme * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ /* -- main layout ----------------------------------------------------------- */ div.clearer { clear: both; } /* -- relbar ---------------------------------------------------------------- */ div.related { width: 100%; font-size: 90%; } div.related h3 { display: none; } div.related ul { margin: 0; padding: 0 0 0 10px; list-style: none; } div.related li { display: inline; } div.related li.right { float: right; margin-right: 5px; } /* -- sidebar --------------------------------------------------------------- */ div.sphinxsidebarwrapper { padding: 10px 5px 0 10px; } div.sphinxsidebar { float: left; width: 230px; margin-left: -100%; font-size: 90%; } div.sphinxsidebar ul { list-style: none; } div.sphinxsidebar ul ul, div.sphinxsidebar ul.want-points { margin-left: 20px; list-style: square; } div.sphinxsidebar ul ul { margin-top: 0; margin-bottom: 0; } div.sphinxsidebar form { margin-top: 10px; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } img { border: 0; } /* -- search page ----------------------------------------------------------- */ ul.search { margin: 10px 0 0 20px; padding: 0; } ul.search li { padding: 5px 0 5px 20px; background-image: url(file.png); background-repeat: no-repeat; background-position: 0 7px; } ul.search li a { font-weight: bold; } ul.search li div.context { color: #888; margin: 2px 0 0 30px; text-align: left; } ul.keywordmatches li.goodmatch a { font-weight: bold; } /* -- index page ------------------------------------------------------------ */ table.contentstable { width: 90%; } table.contentstable p.biglink { line-height: 150%; } a.biglink { font-size: 1.3em; } span.linkdescr { font-style: italic; padding-top: 5px; font-size: 90%; } /* -- general index --------------------------------------------------------- */ table.indextable td { text-align: left; vertical-align: top; } table.indextable dl, table.indextable dd { margin-top: 0; margin-bottom: 0; } table.indextable tr.pcap { height: 10px; } table.indextable tr.cap { margin-top: 10px; background-color: #f2f2f2; } img.toggler { margin-right: 3px; margin-top: 3px; cursor: pointer; } /* -- general body styles --------------------------------------------------- */ a.headerlink { visibility: hidden; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink { visibility: visible; } div.body p.caption { text-align: inherit; } div.body td { text-align: left; } .field-list ul { padding-left: 1em; } .first { } p.rubric { margin-top: 30px; font-weight: bold; } /* -- sidebars -------------------------------------------------------------- */ div.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px 7px 0 7px; background-color: #ffe; width: 40%; float: right; } p.sidebar-title { font-weight: bold; } /* -- topics ---------------------------------------------------------------- */ div.topic 
{ border: 1px solid #ccc; padding: 7px 7px 0 7px; margin: 10px 0 10px 0; } p.topic-title { font-size: 1.1em; font-weight: bold; margin-top: 10px; } /* -- admonitions ----------------------------------------------------------- */ div.admonition { margin-top: 10px; margin-bottom: 10px; padding: 7px; } div.admonition dt { font-weight: bold; } div.admonition dl { margin-bottom: 0; } p.admonition-title { margin: 0px 10px 5px 0px; font-weight: bold; } div.body p.centered { text-align: center; margin-top: 25px; } /* -- tables ---------------------------------------------------------------- */ table.docutils { border: 0; border-collapse: collapse; } table.docutils td, table.docutils th { padding: 1px 8px 1px 0; border-top: 0; border-left: 0; border-right: 0; border-bottom: 1px solid #aaa; } table.field-list td, table.field-list th { border: 0 !important; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } /* -- other body styles ----------------------------------------------------- */ dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, .highlight { background-color: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .refcount { color: #060; } .optional { font-size: 1.3em; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } tt.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } tt.descclassname { background-color: transparent; } tt.xref, a tt { background-color: transparent; font-weight: bold; } h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { background-color: transparent; } /* -- math display ---------------------------------------------------------- */ img.math { vertical-align: middle; } div.body div.math p { text-align: center; } span.eqno { float: right; } /* -- printout stylesheet --------------------------------------------------- */ @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } } heat-cfntools-1.4.2/doc/source/_static/default.css000066400000000000000000000070771265023060500221270ustar00rootroot00000000000000/** * Sphinx stylesheet -- default theme * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: sans-serif; font-size: 100%; background-color: #11303d; color: #000; margin: 0; padding: 0; } div.document { background-color: #1c4e63; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 230px; } div.body { background-color: #ffffff; color: #000000; padding: 0 20px 30px 20px; } div.footer { color: #ffffff; width: 100%; padding: 9px 0 9px 0; text-align: center; font-size: 75%; } 
div.footer a { color: #ffffff; text-decoration: underline; } div.related { background-color: #133f52; line-height: 30px; color: #ffffff; } div.related a { color: #ffffff; } div.sphinxsidebar { } div.sphinxsidebar h3 { font-family: 'Trebuchet MS', sans-serif; color: #ffffff; font-size: 1.4em; font-weight: normal; margin: 0; padding: 0; } div.sphinxsidebar h3 a { color: #ffffff; } div.sphinxsidebar h4 { font-family: 'Trebuchet MS', sans-serif; color: #ffffff; font-size: 1.3em; font-weight: normal; margin: 5px 0 0 0; padding: 0; } div.sphinxsidebar p { color: #ffffff; } div.sphinxsidebar p.topless { margin: 5px 10px 10px 10px; } div.sphinxsidebar ul { margin: 10px; padding: 0; color: #ffffff; } div.sphinxsidebar a { color: #98dbcc; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #355f7c; text-decoration: none; } a:hover { text-decoration: underline; } div.body p, div.body dd, div.body li { text-align: left; line-height: 130%; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Trebuchet MS', sans-serif; background-color: #f2f2f2; font-weight: normal; color: #20435c; border-bottom: 1px solid #ccc; margin: 20px -20px 10px -20px; padding: 3px 0 3px 10px; } div.body h1 { margin-top: 0; font-size: 200%; } div.body h2 { font-size: 160%; } div.body h3 { font-size: 140%; } div.body h4 { font-size: 120%; } div.body h5 { font-size: 110%; } div.body h6 { font-size: 100%; } a.headerlink { color: #c60f0f; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: #c60f0f; color: white; } div.body p, div.body dd, div.body li { text-align: left; line-height: 130%; } div.admonition p.admonition-title + p { display: inline; } div.admonition p { margin-bottom: 5px; } div.admonition pre { margin-bottom: 5px; } div.admonition ul, div.admonition ol { margin-bottom: 5px; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre { padding: 5px; background-color: #eeffcc; color: #333333; line-height: 120%; border: 1px solid #ac9; border-left: none; border-right: none; } tt { background-color: #ecf0f3; padding: 0 1px 0 1px; font-size: 0.95em; } .warning tt { background: #efc2c2; } .note tt { background: #d6d6d6; }
heat-cfntools-1.4.2/doc/source/_static/header-line.gif000066400000000000000000000000601265023060500226160ustar00rootroot00000000000000[binary GIF image data omitted]
heat-cfntools-1.4.2/doc/source/_static/header_bg.jpg000066400000000000000000000072321265023060500223640ustar00rootroot00000000000000[binary JPEG image data omitted]
heat-cfntools-1.4.2/doc/source/_static/jquery.tweet.js000066400000000000000000000163531265023060500227700ustar00rootroot00000000000000(function($) {
  $.fn.tweet = function(o){
    var s = {
      username: ["seaofclouds"],              // [string]   required, unless you want to display our tweets. :) it can be an array, just do ["username1","username2","etc"]
      list: null,                             // [string]   optional name of list belonging to username
      avatar_size: null,                      // [integer]  height and width of avatar if displayed (48px max)
      count: 3,                               // [integer]  how many tweets to display?
      intro_text: null,                       // [string]   do you want text BEFORE your tweets?
      outro_text: null,                       // [string]   do you want text AFTER your tweets?
      join_text: null,                        // [string]   optional text in between date and tweet, try setting to "auto"
      auto_join_text_default: "i said,",      // [string]   auto text for non verb: "i said" bullocks
      auto_join_text_ed: "i",                 // [string]   auto text for past tense: "i" surfed
      auto_join_text_ing: "i am",             // [string]   auto tense for present tense: "i was" surfing
      auto_join_text_reply: "i replied to",   // [string]   auto tense for replies: "i replied to" @someone "with"
      auto_join_text_url: "i was looking at", // [string]   auto tense for urls: "i was looking at" http:...
loading_text: null, // [string] optional loading text, displayed while tweets load query: null // [string] optional search query }; if(o) $.extend(s, o); $.fn.extend({ linkUrl: function() { var returning = []; var regexp = /((ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?)/gi; this.each(function() { returning.push(this.replace(regexp,"$1")); }); return $(returning); }, linkUser: function() { var returning = []; var regexp = /[\@]+([A-Za-z0-9-_]+)/gi; this.each(function() { returning.push(this.replace(regexp,"@$1")); }); return $(returning); }, linkHash: function() { var returning = []; var regexp = / [\#]+([A-Za-z0-9-_]+)/gi; this.each(function() { returning.push(this.replace(regexp, ' #$1')); }); return $(returning); }, capAwesome: function() { var returning = []; this.each(function() { returning.push(this.replace(/\b(awesome)\b/gi, '$1')); }); return $(returning); }, capEpic: function() { var returning = []; this.each(function() { returning.push(this.replace(/\b(epic)\b/gi, '$1')); }); return $(returning); }, makeHeart: function() { var returning = []; this.each(function() { returning.push(this.replace(/(<)+[3]/gi, "")); }); return $(returning); } }); function relative_time(time_value) { var parsed_date = Date.parse(time_value); var relative_to = (arguments.length > 1) ? arguments[1] : new Date(); var delta = parseInt((relative_to.getTime() - parsed_date) / 1000); var pluralize = function (singular, n) { return '' + n + ' ' + singular + (n == 1 ? '' : 's'); }; if(delta < 60) { return 'less than a minute ago'; } else if(delta < (45*60)) { return 'about ' + pluralize("minute", parseInt(delta / 60)) + ' ago'; } else if(delta < (24*60*60)) { return 'about ' + pluralize("hour", parseInt(delta / 3600)) + ' ago'; } else { return 'about ' + pluralize("day", parseInt(delta / 86400)) + ' ago'; } } function build_url() { var proto = ('https:' == document.location.protocol ? 'https:' : 'http:'); if (s.list) { return proto+"//api.twitter.com/1/"+s.username[0]+"/lists/"+s.list+"/statuses.json?per_page="+s.count+"&callback=?"; } else if (s.query == null && s.username.length == 1) { return proto+'//twitter.com/status/user_timeline/'+s.username[0]+'.json?count='+s.count+'&callback=?'; } else { var query = (s.query || 'from:'+s.username.join('%20OR%20from:')); return proto+'//search.twitter.com/search.json?&q='+query+'&rpp='+s.count+'&callback=?'; } } return this.each(function(){ var list = $('
    ').appendTo(this); var intro = '

    '+s.intro_text+'

    '; var outro = '

    '+s.outro_text+'

    '; var loading = $('

    '+s.loading_text+'

    '); if(typeof(s.username) == "string"){ s.username = [s.username]; } if (s.loading_text) $(this).append(loading); $.getJSON(build_url(), function(data){ if (s.loading_text) loading.remove(); if (s.intro_text) list.before(intro); $.each((data.results || data), function(i,item){ // auto join text based on verb tense and content if (s.join_text == "auto") { if (item.text.match(/^(@([A-Za-z0-9-_]+)) .*/i)) { var join_text = s.auto_join_text_reply; } else if (item.text.match(/(^\w+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+) .*/i)) { var join_text = s.auto_join_text_url; } else if (item.text.match(/^((\w+ed)|just) .*/im)) { var join_text = s.auto_join_text_ed; } else if (item.text.match(/^(\w*ing) .*/i)) { var join_text = s.auto_join_text_ing; } else { var join_text = s.auto_join_text_default; } } else { var join_text = s.join_text; }; var from_user = item.from_user || item.user.screen_name; var profile_image_url = item.profile_image_url || item.user.profile_image_url; var join_template = ' '+join_text+' '; var join = ((s.join_text) ? join_template : ' '); var avatar_template = ''+from_user+'\'s avatar'; var avatar = (s.avatar_size ? avatar_template : ''); var date = ''+relative_time(item.created_at)+''; var text = '' +$([item.text]).linkUrl().linkUser().linkHash().makeHeart().capAwesome().capEpic()[0]+ ''; // until we create a template option, arrange the items below to alter a tweet's display. list.append('
<li>' + avatar + date + join + text + '</li>'); list.children('li:first').addClass('tweet_first'); list.children('li:odd').addClass('tweet_even'); list.children('li:even').addClass('tweet_odd'); }); if (s.outro_text) list.after(outro); }); }); }; })(jQuery);heat-cfntools-1.4.2/doc/source/_static/nature.css000066400000000000000000000101111265023060500217600ustar00rootroot00000000000000/* * nature.css_t * ~~~~~~~~~~~~ * * Sphinx stylesheet -- nature theme. * * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: Arial, sans-serif; font-size: 100%; background-color: #111; color: #555; margin: 0; padding: 0; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ theme_sidebarwidth|toint }}px; } hr { border: 1px solid #B1B4B6; } div.document { background-color: #eee; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 30px 30px; font-size: 0.9em; } div.footer { color: #555; width: 100%; padding: 13px 0; text-align: center; font-size: 75%; } div.footer a { color: #444; text-decoration: underline; } div.related { background-color: #6BA81E; line-height: 32px; color: #fff; text-shadow: 0px 1px 0 #444; font-size: 0.9em; } div.related a { color: #E2F3CC; } div.sphinxsidebar { font-size: 0.75em; line-height: 1.5em; } div.sphinxsidebarwrapper{ padding: 20px 0; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: Arial, sans-serif; color: #222; font-size: 1.2em; font-weight: normal; margin: 0; padding: 5px 10px; background-color: #ddd; text-shadow: 1px 1px 0 white } div.sphinxsidebar h4{ font-size: 1.1em; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p { color: #888; padding: 5px 20px; } div.sphinxsidebar p.topless { } div.sphinxsidebar ul { margin: 10px 20px; padding: 0; color: #000; } div.sphinxsidebar a { color: #444; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: sans-serif; font-size: 1em; } div.sphinxsidebar input[type=text]{ margin-left: 20px; } /* -- body styles ----------------------------------------------------------- */ a { color: #005B81; text-decoration: none; } a:hover { color: #E32E00; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: Arial, sans-serif; background-color: #BED4EB; font-weight: normal; color: #212224; margin: 30px 0px 10px 0px; padding: 5px 0 5px 10px; text-shadow: 0px 1px 0 white } div.body h1 { border-top: 20px solid white; margin-top: 0; font-size: 200%; } div.body h2 { font-size: 150%; background-color: #C8D5E3; } div.body h3 { font-size: 120%; background-color: #D8DEE3; } div.body h4 { font-size: 110%; background-color: #D8DEE3; } div.body h5 { font-size: 100%; background-color: #D8DEE3; } div.body h6 { font-size: 100%; background-color: #D8DEE3; } a.headerlink { color: #c60f0f; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: #c60f0f; color: white; } div.body p, div.body dd, div.body li { line-height: 1.5em; } div.admonition p.admonition-title + p { display: inline; } div.highlight{ background-color: white; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { content:
":"; } pre { padding: 10px; background-color: White; color: #222; line-height: 1.2em; border: 1px solid #C6C9CB; font-size: 1.1em; margin: 1.5em 0 1.5em 0; -webkit-box-shadow: 1px 1px 1px #d8d8d8; -moz-box-shadow: 1px 1px 1px #d8d8d8; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ font-size: 1.1em; font-family: monospace; } .viewcode-back { font-family: Arial, sans-serif; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; } heat-cfntools-1.4.2/doc/source/_static/openstack_logo.png000066400000000000000000000071261265023060500235010ustar00rootroot00000000000000‰PNG  IHDR§8˜&èztEXtSoftwareAdobe ImageReadyqÉe< øIDATxÚìOP×Çß®VB ‹Á8c–¤cÒÄÅÄžtÚ¤ˆ:=¤ É%G‹c;“1\zèè¥ö`39tz2LO½4v;íøÐyòo:q‚ìq¦qÒ96c0âø+iûû­~O<–•´RfÌÎ>½}ÿvßg¿¿÷Þ®„¤ë:s̱J4Ù¹ŽUªIûÝ€Î=>òp=l «B˜Íol²éµµ¾Ÿ5?êtŸg¡°i°Óè£>C 1‚M-¶ŽGìÁÚzß/@8ÑÂg;´”Î4,åU·oÃ@8 ûG =Tí’/ãÔÉ%á&UDã@ŸlsÙIôZc͸"É^·,±ŸIU >EEîíö»ïþýñZÄéÎC¦œ¿×üÁjEo¬ªbªÛü.™à,Ÿ‚ʲ‹yª¼FXQÜÌ­x2aÅí6Â×H'X_ëŸÿæ(èdJ¾:ü­&¶2ŸU–4¥§ÚԀɕ†©ÚëËÄ{«k¶ôx÷r.—§Þúi:ÝzHàä– PY–Y]­ß®º•ÛpeÀó°Á)ª íÇZ™ª6:Wѱý‡“Špž<Úâ€éXY­¨Ç—k¨ZWï\=ÇvÙÄ™–!ØtÜöN´†šZ§'«<åṯJsÆö¡]‘‡K1ãqèÂê ÛJ&˜ fýϵ´9=Vœ« Â.ÐuóÁ¥' Î_F—#¿Óêú xyõ à!|“ÂQ¶½ô×(¡Î]c––úg8Q8˜ï®¶0lOœ *Øy‚ª7‰¶’^ŒÍDb; jº_1‰­ØÉ#GG˜³ÆieêAm¸bóä‚…\j0Ñ>þú ßXÏÖÆvPù‚á|ûí·5:?,#üÎ;ïD²¤Á- Ç£BFy¢6ëÈšÒÇáX˜>£Ú¡0D îJeÔHöƒë ñš§JîÝð^."” º4òpXN,"s.”'y¢yÒó7×¢˜öÆ!wÃv—ÊÐΞ={ âû± jB„`úÜÖ ®·p󧛎ÛË­Zè“WÁÁ¼»I²`x⬆/xÇa„ã!!n“—­ŽË6ÓcùãpL…mÂèŽqŸ1Þ¢Ãyû/RÚqˆ»lj3"Àë ô0°aþ jÛ Õ; ñ¡\ËE8ÛÛ…<—»Ÿê~—ÏcÄQöC×Miq¯*Å‚6»Ûñ™+Ãù[÷zÊ'Bø|ã±’”=D >@ÑØi!8µ°È¤NÓ"eº y¢\õL`†(í02Hé#V*M ¨4>䪠¶õ €ô›Ú¥á×uÞ´?϶ëŽ c}†êå`•ù*¥»Àëƒca³ü!¡¬¯]¨7›ºÒÇá,ªŒiÆHMkU4œÿüOÖ7Ô‚0N½.v R˜TˆßéÉ(Å3R~X¸_Ür¥&åâå…Mn:Dðô@Ü_øvŠ¿ Âfª£‹ƒ éù„¦×”þ˜¶YfˆÊŒӾà g Di²A/æ½X¦s¸DªiL²hlùF¾% €4ÎÎw¾A“ë·c ÙlγõòÛ(Í>­¶¢Ç‹aÐñAZJR S–›Â·T,L9nÖÕUÊ©•¨SCOœcæ%¦N^ @iYé¢Õ’S»HÈyÖ^"&õäKI}…d–+Î1V¾—NøÀÜê¥ÔúB\‘iq<œc¢U6£å™aA¥ìªg¯y"‡…MCbÚ¥•ÎØ>)Î8+ßsáhŽqe ×y[<© ˜¯-ÅÊ8›Î6µ=^e2›Ã‹h1ãqZrŠšfî¥ßJ*TŽ€ñ埠…ƒæ%¢,jïgFMë“W²¤/—z–RD4‹ò£‚G,°¼‘bÔ3/œ¦·’Ênœ{Z½ÖÕÞ_ëöä—ô56»76R»Œ+Âùô ñý‚f{ÙâšÅ{…aˆU‡hô¨´¤fîdáYxŒ¿üabEwOib¦Œ+ª–2Mx†Ïó©yæ±BÁ¶œhi‘tIøÛIopñV’ 8Í´ÌÂO¤Ûj©¢Æåb'}>v˜[ÞÚÌ$˜\\0¶¸'˜j÷­$t»Ì-¡ Ò#E&LÀúLOu̪‚/oD„6GLnëˆÐãÍAªãuRãzʇ¥½¼¹5AÏűÌv¡í#^‚ßtøB¶u€ÄGé*¿)Ct“[©ç(-Y…(O/»Fõ^ʦêv„®æ¶³•ódë ‰é:éÒ™î¡crL—k¥T1@N;1Â_ñ+ k3Z­ªLY^b'¼uˆk”ô)T»\¡/Ž{ö£;a›€ŽÒâù Ð±xÁr½¢FË7ã˜ÆŒÌ7„/‘°í—)¥Ç“jFMeâçó³kTHH‰ äÃfèe?Û~`„€{7 h}ç.“8 Ê7?¸$¨&®¼‘ï<%Ý»àTN/ÀéѬ†°çÇU›ßNÞúîF6gjU5{´–~áw3™7»Î|Š‹µUW³D*¹g×…?Ú°œH°oâqÐ`×Yæ8+K]8M“œW»,LJø……4ÅiœµYŸÑvò2cVí¦åÇLÃ;õ„¥¨’û$C1LÆê΂³àÄ߆ñ¾ìMýèE¶ZИ©Ñïgß­¯g©õõ’úâéʧ °vÁéXùMÜ9«B ™ŽñºK–e7«ò6°Õ]ÛjjÒnUp­èŠÝ²Ì\õ*{¼•Ø3œ._sc™nøN';’Ød §ÏœßLÝ×A=±Ï×`«Ò™¡˜«¤ÓÞ*ãI€³Ñ“cF­ìžkI§øÒãEÙëenou:¬¸™‹†.H#'Óa ¿’A_ËHÍN³DÂAó0*gŠàÄ_Ýr”pëU,•ÒAý– ýqîäbŒÕkϰ¤?ý°Å ð±$µ3ŠÓºÔã9kW^æ“^X‰O°"‰ù׿ù­óïGJc‘†Z_W^8I=Q®â´öé6TT×S,•XgÅüP\ô™l©Ê»0‡ÊïƒÌ"< ;”7œ~Ï£ÆcMÎ0%–tþW‘cû`;žÝ›žB ·Ò`²9˜-ÍTÕ­Ü®RÿužgËñíÉà\liWš­-Ëx‡ù1åŠqV†éÌåZÕ…qXžX&ÖÉãxz<ÆÏEl“¹<ÌcuÌñVõ–N´û¦9 ¸n5“J&§oyÔ÷oy|œÔS•Ì’ø«"{2ì¼k|B°= ûð³Û»Ò}øÙçÆþ½﬚§½3yŸÍ/,튿öþ'–P£Ý¸ý%túÜŽ#[]Xö“÷,à^ÜÏóa›¢÷gv–}õ½vÔk¤™¸mȯI®zË 'ÚÔÌt3™LÎonn~Ç?½)û®Ýt×'R èÀ³Ý)ÉBð­/'ÙÙÓìxs#kRý9Ó;}Šu´>•é\;†å¾úR€ÝºóuÕ\dm™róÕå÷ÕeVÑ/J×Áç&µÞˆçíÇc›¤pÚyL+*ä98o³Jz /Ö»±¹S!Íõ–k¶¾[ÊÎè-ÍÇà<¶¦t]ÖS)WÄ['ë&ŸÙ\ìÞÙÛ~ìwݤ–Q3ZªŠæÙ»ž·•Žw.vÔr|mGgç²:#½5Ìxcü¬ç¥® \¹êZ"7ÞØàÏ Š¹NTí›}n TD¼9¸!Ôw ®©açë¯S î „·³ã„íó. 
œhfêO57¯'¶¶èÆsw¦D¼µŠîÖå7=žWJ膬»B˜ƒ)•$C òò9ÃþìÕ”L™PMÑ¥cù"ºfíĶ:σb3Öf9îDµ5·£µ¹ n¤Se»y¿¦13;«;z”e²ÎôÔM¯?þ—u¶ø¦—½&¦]L¦îJŒ=öHÒ†,±OÝ’4GK¤a$ÀVñÿ//:vº¬œ JÂÓ¡Šdë(g7\“S3¬CÃJå:Z[ €ŒáEŽº¸{µ3œÀñ#‚°Ýº3™)Õ“Tïpùÿ©§wŒ9¿ ³õ¢‡éÍMMè}Ibq·Ûýß?­º¯ÿqnõW+++ëÉD’áëuð·ûpA?±_9Süïuv“ƒØñ8YI»ÞSÆ–ž¼¤Ý¡ð'>7Æiæq+æA8²ÁŒ.Ý6Š×—­.»†eñáª(B(Ž7±\Lƒ7 ~îì¨É© Ü­›¯Q©¬ g?G›ð•:YmPeEQ\ëëë©ÅX,…«L'Zjy¥²¡\X‰; ·` µ>©dp>)æÀy0à<¬ß[wþOæè㫎9V‰æüâ‡cœŽ9æÀ阧cŽ•Ûþ/Àf÷¡Î¢0IEND®B`‚heat-cfntools-1.4.2/doc/source/_static/tweaks.css000066400000000000000000000031031265023060500217630ustar00rootroot00000000000000body { background: #fff url(../_static/header_bg.jpg) top left no-repeat; } #header { width: 950px; margin: 0 auto; height: 102px; } #header h1#logo { background: url(../_static/openstack_logo.png) top left no-repeat; display: block; float: left; text-indent: -9999px; width: 175px; height: 55px; } #navigation { background: url(../_static/header-line.gif) repeat-x 0 bottom; display: block; float: left; margin: 27px 0 0 25px; padding: 0; } #navigation li{ float: left; display: block; margin-right: 25px; } #navigation li a { display: block; font-weight: normal; text-decoration: none; background-position: 50% 0; padding: 20px 0 5px; color: #353535; font-size: 14px; } #navigation li a.current, #navigation li a.section { border-bottom: 3px solid #cf2f19; color: #cf2f19; } div.related { background-color: #cde2f8; border: 1px solid #b0d3f8; } div.related a { color: #4078ba; text-shadow: none; } div.sphinxsidebarwrapper { padding-top: 0; } pre { color: #555; } div.documentwrapper h1, div.documentwrapper h2, div.documentwrapper h3, div.documentwrapper h4, div.documentwrapper h5, div.documentwrapper h6 { font-family: 'PT Sans', sans-serif !important; color: #264D69; border-bottom: 1px dotted #C5E2EA; padding: 0; background: none; padding-bottom: 5px; } div.documentwrapper h3 { color: #CF2F19; } a.headerlink { color: #fff !important; margin-left: 5px; background: #CF2F19 !important; } div.body { margin-top: -25px; margin-left: 230px; } div.document { width: 960px; margin: 0 auto; }heat-cfntools-1.4.2/doc/source/_theme/000077500000000000000000000000001265023060500175715ustar00rootroot00000000000000heat-cfntools-1.4.2/doc/source/_theme/layout.html000066400000000000000000000073051265023060500220010ustar00rootroot00000000000000{% extends "basic/layout.html" %} {% set css_files = css_files + ['_static/tweaks.css'] %} {% set script_files = script_files + ['_static/jquery.tweet.js'] %} {%- macro sidebar() %} {%- if not embedded %}{% if not theme_nosidebar|tobool %}
    {%- block sidebarlogo %} {%- if logo %} {%- endif %} {%- endblock %} {%- block sidebartoc %} {%- if display_toc %}

    {{ _('Table Of Contents') }}

    {{ toc }} {%- endif %} {%- endblock %} {%- block sidebarrel %} {%- if prev %}

    {{ _('Previous topic') }}

    {{ prev.title }}

    {%- endif %} {%- if next %}

    {{ _('Next topic') }}

    {{ next.title }}

    {%- endif %} {%- endblock %} {%- block sidebarsourcelink %} {%- if show_source and has_source and sourcename %}

    {{ _('This Page') }}

    {%- endif %} {%- endblock %} {%- if customsidebar %} {% include customsidebar %} {%- endif %} {%- block sidebarsearch %} {%- if pagename != "search" %} {%- endif %} {%- endblock %}
    {%- endif %}{% endif %} {%- endmacro %} {% block relbar1 %}{% endblock relbar1 %} {% block header %} {% endblock %}heat-cfntools-1.4.2/doc/source/_theme/theme.conf000066400000000000000000000001071265023060500215400ustar00rootroot00000000000000[theme] inherit = basic stylesheet = nature.css pygments_style = tango heat-cfntools-1.4.2/doc/source/conf.py000066400000000000000000000210111265023060500176220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # Heat documentation build configuration file, created by # sphinx-quickstart on Thu Dec 13 11:23:35 2012. # # This file is execfile()d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) # -- General configuration ---------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.ifconfig', 'sphinx.ext.viewcode'] # Add any paths that contain templates here, relative to this directory. #templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'man/index' # General information about the project. project = u'Heat cfntools' copyright = u'2012,2013 Heat Developers' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. #from heat import version as heat_version # The full version, including alpha/beta/rc tags. #release = heat_version.version_info.release_string() # The short X.Y version #version = heat_version.version_info.version_string() # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. 
#modindex_common_prefix = [] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme_path = ['.'] html_theme = '_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "nosidebar": "false" } # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Heatdoc' # -- Options for LaTeX output ------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]) #latex_documents = [ # ('index', 'Heat.tex', u'Heat Documentation', # u'Heat Developers', 'manual'), #] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. 
#latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output ------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('man/cfn-create-aws-symlinks', 'cfn-create-aws-symlinks', u'Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin', [u'Heat Developers'], 1), ('man/cfn-get-metadata', 'cfn-get-metadata', u'Implements cfn-get-metadata CloudFormation functionality', [u'Heat Developers'], 1), ('man/cfn-hup', 'cfn-hup', u'Implements cfn-hup CloudFormation functionality', [u'Heat Developers'], 1), ('man/cfn-init', 'cfn-init', u'Implements cfn-init CloudFormation functionality', [u'Heat Developers'], 1), ('man/cfn-push-stats', 'cfn-push-stats', u'Implements cfn-push-stats CloudFormation functionality', [u'Heat Developers'], 1), ('man/cfn-signal', 'cfn-signal', u'Implements cfn-signal CloudFormation functionality', [u'Heat Developers'], 1), ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ----------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) #texinfo_documents = [ # ('index', 'Heat', u'Heat Documentation', # u'Heat Developers', 'Heat', 'One line description of project.', # 'Miscellaneous'), #] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' heat-cfntools-1.4.2/doc/source/man/000077500000000000000000000000001265023060500171035ustar00rootroot00000000000000heat-cfntools-1.4.2/doc/source/man/cfn-create-aws-symlinks.rst000066400000000000000000000012521265023060500243030ustar00rootroot00000000000000======================= cfn-create-aws-symlinks ======================= .. program:: cfn-create-aws-symlinks SYNOPSIS ======== ``cfn-create-aws-symlinks`` DESCRIPTION =========== Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin OPTIONS ======= .. cmdoption:: -t, --target Target directory to create symlinks, defaults to /opt/aws/bin .. cmdoption:: -s, --source Source directory to create symlinks from. Defaults to the directory where this script is .. cmdoption:: -f, --force If specified, will create symlinks even if there is already a target file BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/cfn-get-metadata.rst000066400000000000000000000012721265023060500227400ustar00rootroot00000000000000================ cfn-get-metadata ================ .. program:: cfn-get-metadata SYNOPSIS ======== ``cfn-get-metadata`` DESCRIPTION =========== Implements cfn-get-metadata CloudFormation functionality OPTIONS ======= .. cmdoption:: -s --stack A Heat stack name .. cmdoption:: -r --resource A Heat logical resource ID .. cmdoption:: --access-key A Keystone access key .. cmdoption:: --secret-key A Keystone secret key .. cmdoption:: --region Openstack region .. cmdoption:: --credential-file credential-file .. 
cmdoption:: -u --url service url .. cmdoption:: -k --key key BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/cfn-hup.rst000066400000000000000000000007011265023060500211730ustar00rootroot00000000000000======= cfn-hup ======= .. program:: cfn-hup SYNOPSIS ======== ``cfn-hup`` DESCRIPTION =========== Implements cfn-hup CloudFormation functionality OPTIONS ======= .. cmdoption:: -c, --config Hook Config Directory, defaults to /etc/cfn/hooks.d .. cmdoption:: -f, --no-daemon Do not run as a daemon .. cmdoption:: -v, --verbose Verbose logging BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/cfn-init.rst000066400000000000000000000011441265023060500213440ustar00rootroot00000000000000======== cfn-init ======== .. program:: cfn-init SYNOPSIS ======== ``cfn-init`` DESCRIPTION =========== Implements cfn-init CloudFormation functionality OPTIONS ======= .. cmdoption:: -s, --stack A Heat stack name .. cmdoption:: -r, --resource A Heat logical resource ID .. cmdoption:: --access-key A Keystone access key .. cmdoption:: --secret-key A Keystone secret key .. cmdoption:: --region Openstack region .. cmdoption:: -c, --configsets An optional list of configSets (default: default) BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/cfn-push-stats.rst000066400000000000000000000031551265023060500225200ustar00rootroot00000000000000============== cfn-push-stats ============== .. program:: cfn-push-stats SYNOPSIS ======== ``cfn-push-stats`` DESCRIPTION =========== Implements cfn-push-stats CloudFormation functionality OPTIONS ======= .. cmdoption:: -v, --verbose Verbose logging .. cmdoption:: --credential-file credential-file .. cmdoption:: --service-failure Reports a service failure. .. cmdoption:: --mem-util Reports memory utilization in percentages. .. cmdoption:: --mem-used Reports memory used (excluding cache and buffers) in megabytes. .. cmdoption:: --mem-avail Reports available memory (including cache and buffers) in megabytes. .. cmdoption:: --swap-util Reports swap utilization in percentages. .. cmdoption:: --swap-used Reports allocated swap space in megabytes. .. cmdoption:: --disk-space-util Reports disk space utilization in percentages. .. cmdoption:: --disk-space-used Reports allocated disk space in gigabytes. .. cmdoption:: --disk-space-avail Reports available disk space in gigabytes. .. cmdoption:: --memory-units Specifies units for memory metrics. .. cmdoption:: --disk-units Specifies units for disk metrics. .. cmdoption:: --disk-path Selects the disk by the path on which to report. .. cmdoption:: --cpu-util Reports cpu utilization in percentages. .. cmdoption:: --haproxy Reports HAProxy loadbalancer usage. .. cmdoption:: --haproxy-latency Reports HAProxy latency. .. cmdoption:: --heartbeat Sends a Heartbeat. .. cmdoption:: --watch the name of the watch to post to. BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/cfn-signal.rst000066400000000000000000000011061265023060500216540ustar00rootroot00000000000000========== cfn-signal ========== .. program:: cfn-signal SYNOPSIS ======== ``cfn-signal`` DESCRIPTION =========== Implements cfn-signal CloudFormation functionality OPTIONS ======= .. cmdoption:: -s, --success signal status to report .. cmdoption:: -r, --reason The reason for the failure .. cmdoption:: --data The data to send .. cmdoption:: -i, --id the unique id to send back to the WaitCondition ..
cmdoption:: -e, --exit The exit code from a process to interpret BUGS ==== Heat bugs are managed through Launchpad heat-cfntools-1.4.2/doc/source/man/index.rst000066400000000000000000000004361265023060500207470ustar00rootroot00000000000000=================================== Man pages for Heat cfntools utilities =================================== ------------- Heat cfntools ------------- .. toctree:: :maxdepth: 2 cfn-create-aws-symlinks cfn-get-metadata cfn-hup cfn-init cfn-push-stats cfn-signal heat-cfntools-1.4.2/heat_cfntools/000077500000000000000000000000001265023060500171135ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/__init__.py000066400000000000000000000000001265023060500212120ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/cfntools/000077500000000000000000000000001265023060500207425ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/cfntools/__init__.py000066400000000000000000000000001265023060500230410ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/cfntools/cfn_helper.py000066400000000000000000001514501265023060500234270ustar00rootroot00000000000000 # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Implements cfn metadata handling Not implemented yet: * command line args - placeholders are ignored """ import atexit import contextlib import errno import functools import grp import json import logging import os import os.path import pwd try: import rpmUtils.miscutils as rpmutils import rpmUtils.updates as rpmupdates rpmutils_present = True except ImportError: rpmutils_present = False import re import shutil import six import six.moves.configparser as ConfigParser import subprocess import tempfile # Override BOTO_CONFIG, which makes boto look only at the specified # config file, instead of the default locations os.environ['BOTO_CONFIG'] = '/var/lib/heat-cfntools/cfn-boto-cfg' from boto import cloudformation LOG = logging.getLogger(__name__) def to_boolean(b): val = b.lower().strip() if isinstance(b, six.string_types) else b return val in [True, 'true', 'yes', '1', 1] def parse_creds_file(path='/etc/cfn/cfn-credentials'): '''Parse the cfn credentials file.
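For example, a well-formed file holds one "key = value" pair per line
(the values below are illustrative placeholders, not real credentials):

    AWSAccessKeyId=0123456789ABCDEF
    AWSSecretKey=0123456789abcdef0123456789abcdef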
Default location is as specified, and it is expected to contain exactly two keys "AWSAccessKeyId" and "AWSSecretKey". The two keys are returned as a dict (if found). ''' creds = {'AWSAccessKeyId': None, 'AWSSecretKey': None} for line in open(path): for key in creds: match = re.match("^%s *= *(.*)$" % key, line) if match: creds[key] = match.group(1) return creds class HupConfig(object): def __init__(self, fp_list): self.config = ConfigParser.SafeConfigParser() for fp in fp_list: self.config.readfp(fp) self.load_main_section() self.hooks = [] for s in self.config.sections(): if s != 'main': self.hooks.append(Hook( s, self.config.get(s, 'triggers'), self.config.get(s, 'path'), self.config.get(s, 'runas'), self.config.get(s, 'action'))) def load_main_section(self): # required values self.stack = self.config.get('main', 'stack') self.credential_file = self.config.get('main', 'credential-file') try: with open(self.credential_file) as f: self.credentials = f.read() except Exception: raise Exception("invalid credentials file %s" % self.credential_file) # optional values try: self.region = self.config.get('main', 'region') except ConfigParser.NoOptionError: self.region = 'nova' try: self.interval = self.config.getint('main', 'interval') except ConfigParser.NoOptionError: self.interval = 10 def __str__(self): return '{stack: %s, credential_file: %s, region: %s, interval:%d}' % \ (self.stack, self.credential_file, self.region, self.interval) def unique_resources_get(self): resources = [] for h in self.hooks: r = h.resource_name_get() if r not in resources: resources.append(h.resource_name_get()) return resources class Hook(object): def __init__(self, name, triggers, path, runas, action): self.name = name self.triggers = triggers self.path = path self.runas = runas self.action = action def resource_name_get(self): sp = self.path.split('.') return sp[1] def event(self, ev_name, ev_object, ev_resource): if self.resource_name_get() == ev_resource and \ ev_name in self.triggers: CommandRunner(self.action, shell=True).run(user=self.runas) else: LOG.debug('event: {%s, %s, %s} did not match %s' % (ev_name, ev_object, ev_resource, self.__str__())) def __str__(self): return '{%s, %s, %s, %s, %s}' % \ (self.name, self.triggers, self.path, self.runas, self.action) class ControlledPrivilegesFailureException(Exception): pass @contextlib.contextmanager def controlled_privileges(user): orig_euid = None try: real = pwd.getpwnam(user) if os.geteuid() != real.pw_uid: orig_euid = os.geteuid() os.seteuid(real.pw_uid) LOG.debug("Privileges set for user %s" % user) except Exception as e: raise ControlledPrivilegesFailureException(e) try: yield finally: if orig_euid is not None: try: os.seteuid(orig_euid) LOG.debug("Original privileges restored.") except Exception as e: LOG.error("Error restoring privileges %s" % e) class CommandRunner(object): """Helper class to run a command and store the output.""" def __init__(self, command, shell=False, nextcommand=None): self._command = command self._shell = shell self._next = nextcommand self._stdout = None self._stderr = None self._status = None def __str__(self): s = "CommandRunner:" s += "\n\tcommand: %s" % self._command if self._status: s += "\n\tstatus: %s" % self.status if self._stdout: s += "\n\tstdout: %s" % self.stdout if self._stderr: s += "\n\tstderr: %s" % self.stderr return s def run(self, user='root', cwd=None, env=None): """Run the Command and return the output.
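Illustrative usage (the commands shown are placeholders, not part of
the API): list commands run without a shell, while string commands
must be constructed with shell=True:

    CommandRunner(['/bin/systemctl', 'status', 'httpd.service']).run()
    CommandRunner('echo hello > /tmp/out', shell=True).run(user='root')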
Returns: self """ LOG.debug("Running command: %s" % self._command) cmd = self._command shell = self._shell # Ensure commands that are given as string are run on shell assert isinstance(cmd, six.string_types) is bool(shell) try: with controlled_privileges(user): subproc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd, env=env, shell=shell) output = subproc.communicate() self._status = subproc.returncode self._stdout = output[0] self._stderr = output[1] except ControlledPrivilegesFailureException as e: LOG.error("Error setting privileges for user '%s': %s" % (user, e)) self._status = 126 self._stderr = six.text_type(e) if self._status: LOG.debug("Return code of %d after executing: '%s'\n" "stdout: '%s'\n" "stderr: '%s'" % (self._status, cmd, self._stdout, self._stderr)) if self._next: self._next.run() return self @property def stdout(self): return self._stdout @property def stderr(self): return self._stderr @property def status(self): return self._status class RpmHelper(object): if rpmutils_present: _rpm_util = rpmupdates.Updates([], []) @classmethod def compare_rpm_versions(cls, v1, v2): """Compare two RPM version strings. Arguments: v1 -- a version string v2 -- a version string Returns: 0 -- the versions are equal 1 -- v1 is greater -1 -- v2 is greater """ if v1 and v2: return rpmutils.compareVerOnly(v1, v2) elif v1: return 1 elif v2: return -1 else: return 0 @classmethod def newest_rpm_version(cls, versions): """Returns the highest (newest) version from a list of versions. Arguments: versions -- A list of version strings e.g., ['2.0', '2.2', '2.2-1.fc16', '2.2.22-1.fc16'] """ if versions: if isinstance(versions, six.string_types): return versions versions = sorted(versions, rpmutils.compareVerOnly, reverse=True) return versions[0] else: return None @classmethod def rpm_package_version(cls, pkg): """Returns the version of an installed RPM. Arguments: pkg -- A package name """ cmd = "rpm -q --queryformat '%{VERSION}-%{RELEASE}' %s" % pkg command = CommandRunner(cmd).run() return command.stdout @classmethod def rpm_package_installed(cls, pkg): """Indicates whether pkg is in rpm database. Arguments: pkg -- A package name (with optional version and release spec). e.g., httpd e.g., httpd-2.2.22 e.g., httpd-2.2.22-1.fc16 """ cmd = ['rpm', '-q', pkg] command = CommandRunner(cmd).run() return command.status == 0 @classmethod def yum_package_available(cls, pkg): """Indicates whether pkg is available via yum. Arguments: pkg -- A package name (with optional version and release spec). e.g., httpd e.g., httpd-2.2.22 e.g., httpd-2.2.22-1.fc16 """ cmd = ['yum', '-y', '--showduplicates', 'list', 'available', pkg] command = CommandRunner(cmd).run() return command.status == 0 @classmethod def dnf_package_available(cls, pkg): """Indicates whether pkg is available via dnf. Arguments: pkg -- A package name (with optional version and release spec). e.g., httpd e.g., httpd-2.2.22 e.g., httpd-2.2.22-1.fc21 """ cmd = ['dnf', '-y', '--showduplicates', 'list', 'available', pkg] command = CommandRunner(cmd).run() return command.status == 0 @classmethod def zypper_package_available(cls, pkg): """Indicates whether pkg is available via zypper. Arguments: pkg -- A package name (with optional version and release spec). 
e.g., httpd e.g., httpd-2.2.22 e.g., httpd-2.2.22-1.fc16 """ cmd = ['zypper', '-n', '--no-refresh', 'search', pkg] command = CommandRunner(cmd).run() return command.status == 0 @classmethod def install(cls, packages, rpms=True, zypper=False, dnf=False): """Installs (or upgrades) packages via RPM, yum, dnf, or zypper. Arguments: packages -- a list of packages to install rpms -- if True: * use RPM to install the packages * packages must be a list of URLs to retrieve RPMs if False: * use Yum to install packages * packages is a list of: - pkg name (httpd), or - pkg name with version spec (httpd-2.2.22), or - pkg name with version-release spec (httpd-2.2.22-1.fc16) zypper -- if True: * overrides use of yum, use zypper instead dnf -- if True: * overrides use of yum, use dnf instead * packages must be in same format as yum pkg list """ if rpms: cmd = ['rpm', '-U', '--force', '--nosignature'] elif zypper: cmd = ['zypper', '-n', 'install'] elif dnf: # use dnf --best to upgrade outdated-but-installed packages cmd = ['dnf', '-y', '--best', 'install'] else: cmd = ['yum', '-y', 'install'] cmd.extend(packages) LOG.info("Installing packages: %s" % cmd) command = CommandRunner(cmd).run() if command.status: LOG.warn("Failed to install packages: %s" % cmd) @classmethod def downgrade(cls, packages, rpms=True, zypper=False, dnf=False): """Downgrades a set of packages via RPM, yum, dnf, or zypper. Arguments: packages -- a list of packages to downgrade rpms -- if True: * use RPM to downgrade (replace) the packages * packages must be a list of URLs to retrieve the RPMs if False: * use Yum to downgrade packages * packages is a list of: - pkg name with version spec (httpd-2.2.22), or - pkg name with version-release spec (httpd-2.2.22-1.fc16) dnf -- if True: * Use dnf instead of RPM/yum """ if rpms: cls.install(packages) elif zypper: cmd = ['zypper', '-n', 'install', '--oldpackage'] cmd.extend(packages) LOG.info("Downgrading packages: %s", cmd) command = CommandRunner(cmd).run() if command.status: LOG.warn("Failed to downgrade packages: %s" % cmd) elif dnf: cmd = ['dnf', '-y', 'downgrade'] cmd.extend(packages) LOG.info("Downgrading packages: %s", cmd) command = CommandRunner(cmd).run() if command.status: LOG.warn("Failed to downgrade packages: %s" % cmd) else: cmd = ['yum', '-y', 'downgrade'] cmd.extend(packages) LOG.info("Downgrading packages: %s" % cmd) command = CommandRunner(cmd).run() if command.status: LOG.warn("Failed to downgrade packages: %s" % cmd) class PackagesHandler(object): _packages = {} _package_order = ["dpkg", "rpm", "apt", "yum", "dnf"] @staticmethod def _pkgsort(pkg1, pkg2): order = PackagesHandler._package_order p1_name = pkg1[0] p2_name = pkg2[0] if p1_name in order and p2_name in order: return cmp(order.index(p1_name), order.index(p2_name)) elif p1_name in order: return -1 elif p2_name in order: return 1 else: return cmp(p1_name.lower(), p2_name.lower()) def __init__(self, packages): self._packages = packages def _handle_gem_packages(self, packages): """very basic support for gems.""" # TODO(asalkeld) support versions # -b == local & remote install # -y == install deps opts = ['-b', '-y'] for pkg_name, versions in packages.items(): if len(versions) > 0: cmd = ['gem', 'install'] + opts cmd.extend(['--version', versions[0], pkg_name]) CommandRunner(cmd).run() else: cmd = ['gem', 'install'] + opts cmd.append(pkg_name) CommandRunner(cmd).run() def _handle_python_packages(self, packages): """very basic support for easy_install.""" # TODO(asalkeld) support versions for pkg_name, versions in 
packages.items(): cmd = ['easy_install', pkg_name] CommandRunner(cmd).run() def _handle_zypper_packages(self, packages): """Handle installation, upgrade, or downgrade of packages via zypper. Arguments: packages -- a package entries map of the form: "pkg_name" : "version", "pkg_name" : ["v1", "v2"], "pkg_name" : [] For each package entry: * if no version is supplied and the package is already installed, do nothing * if no version is supplied and the package is _not_ already installed, install it * if a version string is supplied, and the package is already installed, determine whether to downgrade or upgrade (or do nothing if version matches installed package) * if a version array is supplied, choose the highest version from the array and follow same logic for version string above """ # collect pkgs for batch processing at end installs = [] downgrades = [] for pkg_name, versions in packages.items(): ver = RpmHelper.newest_rpm_version(versions) pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name if RpmHelper.rpm_package_installed(pkg): # FIXME:print non-error, but skipping pkg pass elif not RpmHelper.zypper_package_available(pkg): LOG.warn("Skipping package '%s' - unavailable via zypper", pkg) elif not ver: installs.append(pkg) else: current_ver = RpmHelper.rpm_package_version(pkg) rc = RpmHelper.compare_rpm_versions(current_ver, ver) if rc < 0: installs.append(pkg) elif rc > 0: downgrades.append(pkg) if installs: RpmHelper.install(installs, rpms=False, zypper=True) if downgrades: RpmHelper.downgrade(downgrades, zypper=True) def _handle_dnf_packages(self, packages): """Handle installation, upgrade, or downgrade of packages via dnf. Arguments: packages -- a package entries map of the form: "pkg_name" : "version", "pkg_name" : ["v1", "v2"], "pkg_name" : [] For each package entry: * if no version is supplied and the package is already installed, do nothing * if no version is supplied and the package is _not_ already installed, install it * if a version string is supplied, and the package is already installed, determine whether to downgrade or upgrade (or do nothing if version matches installed package) * if a version array is supplied, choose the highest version from the array and follow same logic for version string above """ # collect pkgs for batch processing at end installs = [] downgrades = [] for pkg_name, versions in packages.items(): ver = RpmHelper.newest_rpm_version(versions) pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name if RpmHelper.rpm_package_installed(pkg): # FIXME:print non-error, but skipping pkg pass elif not RpmHelper.dnf_package_available(pkg): LOG.warn("Skipping package '%s'. Not available via dnf" % pkg) elif not ver: installs.append(pkg) else: current_ver = RpmHelper.rpm_package_version(pkg) rc = RpmHelper.compare_rpm_versions(current_ver, ver) if rc < 0: installs.append(pkg) elif rc > 0: downgrades.append(pkg) if installs: RpmHelper.install(installs, rpms=False, dnf=True) if downgrades: RpmHelper.downgrade(downgrades, rpms=False, dnf=True) def _handle_yum_packages(self, packages): """Handle installation, upgrade, or downgrade of packages via yum.
Arguments: packages -- a package entries map of the form: "pkg_name" : "version", "pkg_name" : ["v1", "v2"], "pkg_name" : [] For each package entry: * if no version is supplied and the package is already installed, do nothing * if no version is supplied and the package is _not_ already installed, install it * if a version string is supplied, and the package is already installed, determine whether to downgrade or upgrade (or do nothing if version matches installed package) * if a version array is supplied, choose the highest version from the array and follow same logic for version string above """ cmd = CommandRunner(['which', 'yum']).run() if cmd.status == 1: # yum not available, use DNF if available self._handle_dnf_packages(packages) return elif cmd.status == 127: # `which` command not found LOG.info("`which` not found. Using yum without checking if dnf " "is available") # collect pkgs for batch processing at end installs = [] downgrades = [] for pkg_name, versions in packages.items(): ver = RpmHelper.newest_rpm_version(versions) pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name if RpmHelper.rpm_package_installed(pkg): # FIXME:print non-error, but skipping pkg pass elif not RpmHelper.yum_package_available(pkg): LOG.warn("Skipping package '%s'. Not available via yum" % pkg) elif not ver: installs.append(pkg) else: current_ver = RpmHelper.rpm_package_version(pkg) rc = RpmHelper.compare_rpm_versions(current_ver, ver) if rc < 0: installs.append(pkg) elif rc > 0: downgrades.append(pkg) if installs: RpmHelper.install(installs, rpms=False) if downgrades: RpmHelper.downgrade(downgrades) def _handle_rpm_packages(self, packages): """Handle installation, upgrade, or downgrade of packages via rpm. Arguments: packages -- a package entries map of the form: "pkg_name" : "url" For each package entry: * if the EXACT package is already installed, skip it * if a different version of the package is installed, overwrite it * if the package isn't installed, install it """ #FIXME: handle rpm installs pass def _handle_apt_packages(self, packages): """very basic support for apt.""" # TODO(asalkeld) support versions pkg_list = list(packages) env = {'DEBIAN_FRONTEND': 'noninteractive'} cmd = ['apt-get', '-y', 'install'] + pkg_list CommandRunner(cmd).run(env=env) # map of function pointers to handle different package managers _package_handlers = {"yum": _handle_yum_packages, "dnf": _handle_dnf_packages, "zypper": _handle_zypper_packages, "rpm": _handle_rpm_packages, "apt": _handle_apt_packages, "rubygems": _handle_gem_packages, "python": _handle_python_packages} def _package_handler(self, manager_name): handler = None if manager_name in self._package_handlers: handler = self._package_handlers[manager_name] return handler def apply_packages(self): """Install, upgrade, or downgrade packages listed. 
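An illustrative (placeholder) value for self._packages, keyed by
package manager:

    {
        "yum": {"httpd": ["2.2.22"], "wget": []},
        "python": {"boto": []}
    }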
Each package is a dict containing package name and a list of versions Install order: * dpkg * rpm * apt * yum * dnf """ if not self._packages: return try: packages = sorted( self._packages.items(), cmp=PackagesHandler._pkgsort) except TypeError: # On Python 3, we have to use key instead of cmp # This could also work on Python 2.7, but not on 2.6 packages = sorted( self._packages.items(), key=functools.cmp_to_key(PackagesHandler._pkgsort)) for manager, package_entries in packages: handler = self._package_handler(manager) if not handler: LOG.warn("Skipping invalid package type: %s" % manager) else: handler(self, package_entries) class FilesHandler(object): def __init__(self, files): self._files = files def apply_files(self): if not self._files: return for fdest, meta in self._files.items(): dest = fdest.encode() try: os.makedirs(os.path.dirname(dest)) except OSError as e: if e.errno == errno.EEXIST: LOG.debug(str(e)) else: LOG.exception(e) if 'content' in meta: if isinstance(meta['content'], six.string_types): f = open(dest, 'w+') f.write(meta['content']) f.close() else: f = open(dest, 'w+') f.write(json.dumps(meta['content'], indent=4) .encode('UTF-8')) f.close() elif 'source' in meta: CommandRunner(['curl', '-o', dest, meta['source']]).run() else: LOG.error('%s %s' % (dest, str(meta))) continue uid = -1 gid = -1 if 'owner' in meta: try: user_info = pwd.getpwnam(meta['owner']) uid = user_info[2] except KeyError: pass if 'group' in meta: try: group_info = grp.getgrnam(meta['group']) gid = group_info[2] except KeyError: pass os.chown(dest, uid, gid) if 'mode' in meta: os.chmod(dest, int(meta['mode'], 8)) class SourcesHandler(object): '''tar, tar+gzip,tar+bz2 and zip.''' _sources = {} def __init__(self, sources): self._sources = sources def _url_to_tmp_filename(self, url): tempdir = tempfile.mkdtemp() atexit.register(lambda: shutil.rmtree(tempdir, True)) name = os.path.basename(url) return os.path.join(tempdir, name) def _splitext(self, path): (r, ext) = os.path.splitext(path) return (r, ext.lower()) def _github_ball_type(self, url): ext = "" if url.endswith('/'): url = url[0:-1] sp = url.split('/') if len(sp) > 2: http = sp[0].startswith('http') github = sp[2].endswith('github.com') btype = sp[-2] if http and github: if 'zipball' == btype: ext = '.zip' elif 'tarball' == btype: ext = '.tgz' return ext def _source_type(self, url): (r, ext) = self._splitext(url) if ext == '.gz': (r, ext2) = self._splitext(r) if ext2 == '.tar': ext = '.tgz' elif ext == '.bz2': (r, ext2) = self._splitext(r) if ext2 == '.tar': ext = '.tbz2' elif ext == "": ext = self._github_ball_type(url) return ext def _apply_source_cmd(self, dest, url): cmd = "" basename = os.path.basename(url) stype = self._source_type(url) if stype == '.tgz': cmd = "curl -s '%s' | gunzip | tar -xvf -" % url elif stype == '.tbz2': cmd = "curl -s '%s' | bunzip2 | tar -xvf -" % url elif stype == '.zip': tmp = self._url_to_tmp_filename(url) cmd = "curl -s -o '%s' '%s' && unzip -o '%s'" % (tmp, url, tmp) elif stype == '.tar': cmd = "curl -s '%s' | tar -xvf -" % url elif stype == '.gz': (r, ext) = self._splitext(basename) cmd = "curl -s '%s' | gunzip > '%s'" % (url, r) elif stype == '.bz2': (r, ext) = self._splitext(basename) cmd = "curl -s '%s' | bunzip2 > '%s'" % (url, r) if cmd != '': cmd = "mkdir -p '%s'; cd '%s'; %s" % (dest, dest, cmd) return cmd def _apply_source(self, dest, url): cmd = self._apply_source_cmd(dest, url) #FIXME bug 1498298 if cmd != '': runner = CommandRunner(cmd, shell=True) runner.run() def apply_sources(self): if not 
self._sources: return for dest, url in self._sources.items(): self._apply_source(dest, url) class ServicesHandler(object): _services = {} def __init__(self, services, resource=None, hooks=None): self._services = services self.resource = resource self.hooks = hooks def _handle_sysv_command(self, service, command): if os.path.exists("/bin/systemctl"): service_exe = "/bin/systemctl" service = '%s.service' % service service_start = [service_exe, 'start', service] service_status = [service_exe, 'status', service] service_stop = [service_exe, 'stop', service] elif os.path.exists("/sbin/service"): service_exe = "/sbin/service" service_start = [service_exe, service, 'start'] service_status = [service_exe, service, 'status'] service_stop = [service_exe, service, 'stop'] else: service_exe = "/usr/sbin/service" service_start = [service_exe, service, 'start'] service_status = [service_exe, service, 'status'] service_stop = [service_exe, service, 'stop'] if os.path.exists("/bin/systemctl"): enable_exe = "/bin/systemctl" enable_on = [enable_exe, 'enable', service] enable_off = [enable_exe, 'disable', service] elif os.path.exists("/sbin/chkconfig"): enable_exe = "/sbin/chkconfig" enable_on = [enable_exe, service, 'on'] enable_off = [enable_exe, service, 'off'] else: enable_exe = "/usr/sbin/update-rc.d" enable_on = [enable_exe, service, 'enable'] enable_off = [enable_exe, service, 'disable'] cmd = None if "enable" == command: cmd = enable_on elif "disable" == command: cmd = enable_off elif "start" == command: cmd = service_start elif "stop" == command: cmd = service_stop elif "status" == command: cmd = service_status if cmd is not None: command = CommandRunner(cmd) command.run() return command else: LOG.error("Unknown sysv command %s" % command) def _initialize_service(self, handler, service, properties): if "enabled" in properties: enable = to_boolean(properties["enabled"]) if enable: LOG.info("Enabling service %s" % service) handler(self, service, "enable") else: LOG.info("Disabling service %s" % service) handler(self, service, "disable") if "ensureRunning" in properties: ensure_running = to_boolean(properties["ensureRunning"]) command = handler(self, service, "status") running = command.status == 0 if ensure_running and not running: LOG.info("Starting service %s" % service) handler(self, service, "start") elif not ensure_running and running: LOG.info("Stopping service %s" % service) handler(self, service, "stop") def _monitor_service(self, handler, service, properties): if "ensureRunning" in properties: ensure_running = to_boolean(properties["ensureRunning"]) command = handler(self, service, "status") running = command.status == 0 if ensure_running and not running: LOG.warn("Restarting service %s" % service) start_cmd = handler(self, service, "start") if start_cmd.status != 0: LOG.warning('Service %s did not start. 
STDERR: %s' % (service, start_cmd.stderr)) for h in self.hooks: h.event('service.restarted', service, self.resource) def _monitor_services(self, handler, services): for service, properties in services.items(): self._monitor_service(handler, service, properties) def _initialize_services(self, handler, services): for service, properties in services.items(): self._initialize_service(handler, service, properties) # map of function pointers to various service handlers _service_handlers = { "sysvinit": _handle_sysv_command, "systemd": _handle_sysv_command } def _service_handler(self, manager_name): handler = None if manager_name in self._service_handlers: handler = self._service_handlers[manager_name] return handler def apply_services(self): """Starts, stops, enables, disables services.""" if not self._services: return for manager, service_entries in self._services.items(): handler = self._service_handler(manager) if not handler: LOG.warn("Skipping invalid service type: %s" % manager) else: self._initialize_services(handler, service_entries) def monitor_services(self): """Restarts failed services, and runs hooks.""" if not self._services: return for manager, service_entries in self._services.items(): handler = self._service_handler(manager) if not handler: LOG.warn("Skipping invalid service type: %s" % manager) else: self._monitor_services(handler, service_entries) class ConfigsetsHandler(object): def __init__(self, configsets, selectedsets): self.configsets = configsets self.selectedsets = selectedsets def expand_sets(self, list, executionlist): for elem in list: if isinstance(elem, dict): dictkeys = elem.keys() if len(dictkeys) != 1 or dictkeys.pop() != 'ConfigSet': raise Exception('invalid ConfigSets metadata') dictkey = elem.values().pop() try: self.expand_sets(self.configsets[dictkey], executionlist) except KeyError: raise Exception("Undefined ConfigSet '%s' referenced" % dictkey) else: executionlist.append(elem) def get_configsets(self): """Returns a list of Configsets to execute in template.""" if not self.configsets: if self.selectedsets: raise Exception('Template has no configSets') return if not self.selectedsets: if 'default' not in self.configsets: raise Exception('Template has no default configSet, must' ' specify') self.selectedsets = 'default' selectedlist = [x.strip() for x in self.selectedsets.split(',')] executionlist = [] for item in selectedlist: if item not in self.configsets: raise Exception("Requested configSet '%s' not in configSets" " section" % item) self.expand_sets(self.configsets[item], executionlist) if not executionlist: raise Exception( "Requested configSet %s empty?" % self.selectedsets) return executionlist def metadata_server_port( datafile='/var/lib/heat-cfntools/cfn-metadata-server'): """Return the metadata server port.
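For example, if the cfn-metadata-server file contains the
(illustrative) URL http://127.0.0.1:8000/ then 8000 is returned.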
Reads the :NNNN from the end of the URL in cfn-metadata-server """ try: f = open(datafile) server_url = f.read().strip() f.close() except IOError: return None if len(server_url) < 1: return None if server_url[-1] == '/': server_url = server_url[:-1] try: return int(server_url.split(':')[-1]) except ValueError: return None class CommandsHandlerRunError(Exception): pass class CommandsHandler(object): def __init__(self, commands): self.commands = commands def apply_commands(self): """Execute commands on the instance in alphabetical order by name.""" if not self.commands: return for command_label in sorted(self.commands): LOG.debug("%s is being processed" % command_label) self._initialize_command(command_label, self.commands[command_label]) def _initialize_command(self, command_label, properties): command_status = None cwd = None env = properties.get("env", None) if "cwd" in properties: cwd = os.path.expanduser(properties["cwd"]) if not os.path.exists(cwd): LOG.error("%s has failed. " % command_label + "%s path does not exist" % cwd) return if "test" in properties: test = CommandRunner(properties["test"], shell=True) test_status = test.run('root', cwd, env).status if test_status != 0: LOG.info("%s test returns false, skipping command" % command_label) return else: LOG.debug("%s test returns true, proceeding" % command_label) if "command" in properties: try: command = properties["command"] shell = isinstance(command, six.string_types) command = CommandRunner(command, shell=shell) command.run('root', cwd, env) command_status = command.status except OSError as e: if e.errno == errno.EEXIST: LOG.debug(str(e)) else: LOG.exception(e) else: LOG.error("%s has failed. " % command_label + "'command' property missing") return if command_status == 0: LOG.info("%s has been successfully executed" % command_label) else: if "ignoreErrors" in properties and \ to_boolean(properties["ignoreErrors"]): LOG.info("%s has failed (status=%d). Explicitly ignoring" % (command_label, command_status)) else: raise CommandsHandlerRunError("%s has failed."
% command_label) class GroupsHandler(object): def __init__(self, groups): self.groups = groups def apply_groups(self): """Create Linux/UNIX groups and assign group IDs.""" if not self.groups: return for group, properties in self.groups.items(): LOG.debug("%s group is being created" % group) self._initialize_group(group, properties) def _initialize_group(self, group, properties): gid = properties.get("gid", None) cmd = ['groupadd', group] if gid is not None: cmd.extend(['--gid', str(gid)]) command = CommandRunner(cmd) command.run() command_status = command.status if command_status == 0: LOG.info("%s has been successfully created" % group) elif command_status == 9: LOG.error("An error occurred creating %s group : " % group + "group name not unique") elif command_status == 4: LOG.error("An error occurred creating %s group : " % group + "GID not unique") elif command_status == 3: LOG.error("An error occurred creating %s group : " % group + "GID not valid") elif command_status == 2: LOG.error("An error occurred creating %s group : " % group + "Invalid syntax") else: LOG.error("An error occurred creating %s group" % group) class UsersHandler(object): def __init__(self, users): self.users = users def apply_users(self): """Create Linux/UNIX users and assign user IDs, groups and homedir.""" if not self.users: return for user, properties in self.users.items(): LOG.debug("%s user is being created" % user) self._initialize_user(user, properties) def _initialize_user(self, user, properties): uid = properties.get("uid", None) homeDir = properties.get("homeDir", None) cmd = ['useradd', user] if uid is not None: cmd.extend(['--uid', six.text_type(uid)]) if homeDir is not None: cmd.extend(['--home', six.text_type(homeDir)]) if "groups" in properties: groups = ','.join(properties["groups"]) cmd.extend(['--groups', groups]) # Users are created as non-interactive system users with a shell # of /sbin/nologin. This is by design and cannot be modified. cmd.extend(['--shell', '/sbin/nologin']) command = CommandRunner(cmd) command.run() command_status = command.status if command_status == 0: LOG.info("%s has been successfully created" % user) elif command_status == 9: LOG.error("An error occurred creating %s user : " % user + "user name not unique") elif command_status == 6: LOG.error("An error occurred creating %s user : " % user + "group does not exist") elif command_status == 4: LOG.error("An error occurred creating %s user : " % user + "UID not unique") elif command_status == 3: LOG.error("An error occurred creating %s user : " % user + "Invalid argument") elif command_status == 2: LOG.error("An error occurred creating %s user : " % user + "Invalid syntax") else: LOG.error("An error occurred creating %s user" % user) class MetadataServerConnectionError(Exception): pass class Metadata(object): _metadata = None _init_key = "AWS::CloudFormation::Init" DEFAULT_PORT = 8000 def __init__(self, stack, resource, access_key=None, secret_key=None, credentials_file=None, region=None, configsets=None): self.stack = stack self.resource = resource self.access_key = access_key self.secret_key = secret_key self.region = region self.credentials_file = credentials_file self.configsets = configsets # TODO(asalkeld) is this metadata for the local resource?
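        # Illustrative usage of this class (the stack and resource names
        # below are placeholders, not defaults):
        #   md = Metadata('teststack', 'MyResource',
        #                 credentials_file='/etc/cfn/cfn-credentials')
        #   md.retrieve()
        #   md.display()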
self._is_local_metadata = True self._metadata = None self._has_changed = False def remote_metadata(self): """Connect to the metadata server and retrieve the metadata.""" if self.credentials_file: credentials = parse_creds_file(self.credentials_file) access_key = credentials['AWSAccessKeyId'] secret_key = credentials['AWSSecretKey'] elif self.access_key and self.secret_key: access_key = self.access_key secret_key = self.secret_key else: raise MetadataServerConnectionError("No credentials!") port = metadata_server_port() or self.DEFAULT_PORT client = cloudformation.CloudFormationConnection( aws_access_key_id=access_key, aws_secret_access_key=secret_key, is_secure=False, port=port, path="/v1", debug=0) res = client.describe_stack_resource(self.stack, self.resource) # Note: a pending upstream patch will make this response a # boto.cloudformation.stack.StackResourceDetail object, # which aligns better with all the existing calls; # see https://github.com/boto/boto/pull/857 resource_detail = res['DescribeStackResourceResponse'][ 'DescribeStackResourceResult']['StackResourceDetail'] return resource_detail['Metadata'] def get_nova_meta(self, cache_path='/var/lib/heat-cfntools/nova_meta.json'): """Get nova's meta_data.json and cache it. Since this is called repeatedly, return the cached metadata if we have it. """ url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' if not os.path.exists(cache_path): cmd = ['curl', '-o', cache_path, url] CommandRunner(cmd).run() try: with open(cache_path) as fd: try: return json.load(fd) except ValueError: pass except IOError: pass return None def get_instance_id(self): """Get the unique identifier for this server.""" instance_id = None md = self.get_nova_meta() if md is not None: instance_id = md.get('uuid') return instance_id def get_tags(self): """Get the tags for this server.""" tags = {} md = self.get_nova_meta() if md is not None: tags.update(md.get('meta', {})) tags['InstanceId'] = md['uuid'] return tags def retrieve( self, meta_str=None, default_path='/var/lib/heat-cfntools/cfn-init-data', last_path='/var/cache/heat-cfntools/last_metadata'): """Read the metadata from the given filename or from the remote server.
Returns: True -- success False -- error """ if self.resource is not None: res_last_path = last_path + '_' + self.resource else: res_last_path = last_path if meta_str: self._data = meta_str else: try: self._data = self.remote_metadata() except MetadataServerConnectionError as ex: LOG.warn("Unable to retrieve remote metadata: %s" % str(ex)) # If reading remote metadata fails, we fall back on local files. # In order to get the most up-to-date version, we try # /var/cache/heat-cfntools/last_metadata, followed by # /var/lib/heat-cfntools/cfn-init-data. # This should allow us to do the right thing both during the # first cfn-init run (when we only have cfn-init-data), and # in the event of a temporary interruption to connectivity # affecting cfn-hup, in which case we want to use the locally # cached metadata, or the logic below could re-run a stale # cfn-init-data. fd = None for filepath in [res_last_path, last_path, default_path]: try: fd = open(filepath) except IOError: LOG.warn("Unable to open local metadata: %s" % filepath) continue else: LOG.info("Opened local metadata %s" % filepath) break if fd: self._data = fd.read() fd.close() else: LOG.error("Unable to read any valid metadata!") return if isinstance(self._data, str): self._metadata = json.loads(self._data) else: self._metadata = self._data last_data = "" for metadata_file in [res_last_path, last_path]: try: with open(metadata_file) as lm: try: last_data = json.load(lm) except ValueError: pass except IOError: LOG.warn("Unable to open local metadata: %s" % metadata_file) continue if self._metadata != last_data: self._has_changed = True # if the cache dir does not exist, try to create it cache_dir = os.path.dirname(last_path) if not os.path.isdir(cache_dir): try: os.makedirs(cache_dir, mode=0o700) except OSError as e: LOG.warn('could not create metadata cache dir %s [%s]' % (cache_dir, e)) return # save the current metadata to file tmp_dir = os.path.dirname(last_path) with tempfile.NamedTemporaryFile(dir=tmp_dir, mode='wb', delete=False) as cf: os.chmod(cf.name, 0o600) cf.write(json.dumps(self._metadata).encode('UTF-8')) os.rename(cf.name, last_path) if res_last_path != last_path: shutil.copy(last_path, res_last_path) return True def __str__(self): return json.dumps(self._metadata) def display(self, key=None): """Print the metadata to the standard output stream. By default the full metadata is displayed, but the output can be limited to a specific key with the argument. Arguments: key -- the metadata's key to display; nested keys can be specified by separating them with the dot character, e.g., "foo.bar". If the key contains a dot, it should be surrounded by single quotes, e.g., "foo.'bar.1'" """ if self._metadata is None: return if key is None: print(str(self)) return value = None md = self._metadata while True: key_match = re.match(r'^(?:(?:\'([^\']+)\')|([^\.]+))(?:\.|$)', key) if not key_match: break k = key_match.group(1) or key_match.group(2) if isinstance(md, dict) and k in md: key = key.replace(key_match.group(), '') value = md = md[k] else: break if key != '': value = None if value is not None: print(json.dumps(value)) return def _is_valid_metadata(self): """Should find the AWS::CloudFormation::Init json key.""" is_valid = self._metadata and \ self._init_key in self._metadata and \ self._metadata[self._init_key] if is_valid: self._metadata = self._metadata[self._init_key] return is_valid def _process_config(self, config="config"): """Parse and process a config section.
* packages * sources * groups * users * files * commands * services """ try: self._config = self._metadata[config] except KeyError: raise Exception("Could not find '%s' set in template, may need to" " specify another set." % config) PackagesHandler(self._config.get("packages")).apply_packages() SourcesHandler(self._config.get("sources")).apply_sources() GroupsHandler(self._config.get("groups")).apply_groups() UsersHandler(self._config.get("users")).apply_users() FilesHandler(self._config.get("files")).apply_files() CommandsHandler(self._config.get("commands")).apply_commands() ServicesHandler(self._config.get("services")).apply_services() def cfn_init(self): """Process the resource metadata.""" if not self._is_valid_metadata(): raise Exception("invalid metadata") else: executionlist = ConfigsetsHandler(self._metadata.get("configSets"), self.configsets).get_configsets() if not executionlist: self._process_config() else: for item in executionlist: self._process_config(item) def cfn_hup(self, hooks): """Process the resource metadata.""" if not self._is_valid_metadata(): LOG.debug( 'Metadata does not contain a %s section' % self._init_key) if self._is_local_metadata: self._config = self._metadata.get("config", {}) s = self._config.get("services") sh = ServicesHandler(s, resource=self.resource, hooks=hooks) sh.monitor_services() if self._has_changed: for h in hooks: h.event('post.update', self.resource, self.resource) heat-cfntools-1.4.2/heat_cfntools/tests/000077500000000000000000000000001265023060500202555ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/tests/__init__.py000066400000000000000000000000001265023060500223540ustar00rootroot00000000000000heat-cfntools-1.4.2/heat_cfntools/tests/test_cfn_helper.py000066400000000000000000001547571265023060500240160ustar00rootroot00000000000000# # Copyright 2013 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
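# For orientation before the tests below: a minimal metadata document of the # shape cfn_helper.Metadata consumes (the values are illustrative, drawn from # the test cases in this file; each key under "config" maps to one of the # handlers invoked by _process_config): # {"AWS::CloudFormation::Init": {"config": { # "packages": {"yum": {"httpd": []}}, # "files": {"/tmp/foo": {"content": "bar"}}, # "commands": {"00_foo": {"command": "/bin/command1"}}, # "services": {"sysvinit": {"httpd": {"enabled": "true", "ensureRunning": "true"}}}}}}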
import boto.cloudformation as cfn import fixtures import json import mock import os import tempfile import testtools import testtools.matchers as ttm from heat_cfntools.cfntools import cfn_helper def popen_root_calls(calls, shell=False): kwargs = {'env': None, 'cwd': None, 'stderr': -1, 'stdout': -1, 'shell': shell} return [ mock.call(call, **kwargs) for call in calls ] class FakePOpen(): def __init__(self, stdout='', stderr='', returncode=0): self.returncode = returncode self.stdout = stdout self.stderr = stderr def communicate(self): return (self.stdout, self.stderr) def wait(self): pass @mock.patch.object(cfn_helper.pwd, 'getpwnam') @mock.patch.object(cfn_helper.os, 'seteuid') @mock.patch.object(cfn_helper.os, 'geteuid') class TestCommandRunner(testtools.TestCase): def test_command_runner(self, mock_geteuid, mock_seteuid, mock_getpwnam): def returns(*args, **kwargs): if args[0][0] == '/bin/command1': return FakePOpen('All good') elif args[0][0] == '/bin/command2': return FakePOpen('Doing something', 'error', -1) else: raise Exception('This should never happen') with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns cmd2 = cfn_helper.CommandRunner(['/bin/command2']) cmd1 = cfn_helper.CommandRunner(['/bin/command1'], nextcommand=cmd2) cmd1.run('root') self.assertEqual( 'CommandRunner:\n\tcommand: [\'/bin/command1\']\n\tstdout: ' 'All good', str(cmd1)) self.assertEqual( 'CommandRunner:\n\tcommand: [\'/bin/command2\']\n\tstatus: ' '-1\n\tstdout: Doing something\n\tstderr: error', str(cmd2)) calls = popen_root_calls([['/bin/command1'], ['/bin/command2']]) mock_popen.assert_has_calls(calls) def test_privileges_are_lowered_for_non_root_user(self, mock_geteuid, mock_seteuid, mock_getpwnam): pw_entry = mock.MagicMock() pw_entry.pw_uid = 1001 mock_getpwnam.return_value = pw_entry mock_geteuid.return_value = 0 calls = [mock.call(1001), mock.call(0)] with mock.patch('subprocess.Popen') as mock_popen: command = ['/bin/command', '--option=value', 'arg1', 'arg2'] cmd = cfn_helper.CommandRunner(command) cmd.run(user='nonroot') self.assertTrue(mock_geteuid.called) mock_getpwnam.assert_called_once_with('nonroot') mock_seteuid.assert_has_calls(calls) self.assertTrue(mock_popen.called) def test_run_returns_when_cannot_set_privileges(self, mock_geteuid, mock_seteuid, mock_getpwnam): msg = '[Error 1] Permission Denied' mock_seteuid.side_effect = Exception(msg) with mock.patch('subprocess.Popen') as mock_popen: command = ['/bin/command2'] cmd = cfn_helper.CommandRunner(command) cmd.run(user='nonroot') self.assertTrue(mock_getpwnam.called) self.assertTrue(mock_seteuid.called) self.assertFalse(mock_popen.called) self.assertEqual(126, cmd.status) self.assertEqual(msg, cmd.stderr) def test_privileges_are_restored_for_command_failure(self, mock_geteuid, mock_seteuid, mock_getpwnam): pw_entry = mock.MagicMock() pw_entry.pw_uid = 1001 mock_getpwnam.return_value = pw_entry mock_geteuid.return_value = 0 calls = [mock.call(1001), mock.call(0)] with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = ValueError('Something wrong') command = ['/bin/command', '--option=value', 'arg1', 'arg2'] cmd = cfn_helper.CommandRunner(command) self.assertRaises(ValueError, cmd.run, user='nonroot') self.assertTrue(mock_geteuid.called) mock_getpwnam.assert_called_once_with('nonroot') mock_seteuid.assert_has_calls(calls) self.assertTrue(mock_popen.called) @mock.patch.object(cfn_helper, 'controlled_privileges') class TestPackages(testtools.TestCase): def test_yum_install(self, mock_cp): def 
returns(*args, **kwargs): if args[0][0] == 'rpm' and args[0][1] == '-q': return FakePOpen(returncode=1) else: return FakePOpen(returncode=0) calls = [['which', 'yum']] for pack in ('httpd', 'wordpress', 'mysql-server'): calls.append(['rpm', '-q', pack]) calls.append(['yum', '-y', '--showduplicates', 'list', 'available', pack]) calls = popen_root_calls(calls) packages = { "yum": { "mysql-server": [], "httpd": [], "wordpress": [] } } with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns cfn_helper.PackagesHandler(packages).apply_packages() mock_popen.assert_has_calls(calls, any_order=True) def test_dnf_install_yum_unavailable(self, mock_cp): def returns(*args, **kwargs): if ((args[0][0] == 'rpm' and args[0][1] == '-q') or (args[0][0] == 'which' and args[0][1] == 'yum')): return FakePOpen(returncode=1) else: return FakePOpen(returncode=0) calls = [['which', 'yum']] for pack in ('httpd', 'wordpress', 'mysql-server'): calls.append(['rpm', '-q', pack]) calls.append(['dnf', '-y', '--showduplicates', 'list', 'available', pack]) calls = popen_root_calls(calls) packages = { "yum": { "mysql-server": [], "httpd": [], "wordpress": [] } } with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns cfn_helper.PackagesHandler(packages).apply_packages() mock_popen.assert_has_calls(calls, any_order=True) def test_dnf_install(self, mock_cp): def returns(*args, **kwargs): if args[0][0] == 'rpm' and args[0][1] == '-q': return FakePOpen(returncode=1) else: return FakePOpen(returncode=0) calls = [] for pack in ('httpd', 'wordpress', 'mysql-server'): calls.append(['rpm', '-q', pack]) calls.append(['dnf', '-y', '--showduplicates', 'list', 'available', pack]) calls = popen_root_calls(calls) packages = { "dnf": { "mysql-server": [], "httpd": [], "wordpress": [] } } with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns cfn_helper.PackagesHandler(packages).apply_packages() mock_popen.assert_has_calls(calls, any_order=True) def test_zypper_install(self, mock_cp): def returns(*args, **kwargs): if args[0][0].startswith('rpm') and args[0][1].startswith('-q'): return FakePOpen(returncode=1) else: return FakePOpen(returncode=0) calls = [] for pack in ('httpd', 'wordpress', 'mysql-server'): calls.append(['rpm', '-q', pack]) calls.append(['zypper', '-n', '--no-refresh', 'search', pack]) calls = popen_root_calls(calls) packages = { "zypper": { "mysql-server": [], "httpd": [], "wordpress": [] } } with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns cfn_helper.PackagesHandler(packages).apply_packages() mock_popen.assert_has_calls(calls, any_order=True) def test_apt_install(self, mock_cp): packages = { "apt": { "mysql-server": [], "httpd": [], "wordpress": [] } } with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen(returncode=0) cfn_helper.PackagesHandler(packages).apply_packages() self.assertTrue(mock_popen.called) @mock.patch.object(cfn_helper, 'controlled_privileges') class TestServicesHandler(testtools.TestCase): def test_services_handler_systemd(self, mock_cp): calls = [] returns = [] # apply_services calls.append(['/bin/systemctl', 'enable', 'httpd.service']) returns.append(FakePOpen()) calls.append(['/bin/systemctl', 'status', 'httpd.service']) returns.append(FakePOpen(returncode=-1)) calls.append(['/bin/systemctl', 'start', 'httpd.service']) returns.append(FakePOpen()) calls.append(['/bin/systemctl', 'enable', 'mysqld.service']) returns.append(FakePOpen()) 
calls.append(['/bin/systemctl', 'status', 'mysqld.service']) returns.append(FakePOpen(returncode=-1)) calls.append(['/bin/systemctl', 'start', 'mysqld.service']) returns.append(FakePOpen()) # monitor_services not running calls.append(['/bin/systemctl', 'status', 'httpd.service']) returns.append(FakePOpen(returncode=-1)) calls.append(['/bin/systemctl', 'start', 'httpd.service']) returns.append(FakePOpen()) calls = popen_root_calls(calls) calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) returns.append(FakePOpen()) calls.extend(popen_root_calls([['/bin/systemctl', 'status', 'mysqld.service']])) returns.append(FakePOpen(returncode=-1)) calls.extend(popen_root_calls([['/bin/systemctl', 'start', 'mysqld.service']])) returns.append(FakePOpen()) calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) returns.append(FakePOpen()) # monitor_services running calls.extend(popen_root_calls([['/bin/systemctl', 'status', 'httpd.service']])) returns.append(FakePOpen()) calls.extend(popen_root_calls([['/bin/systemctl', 'status', 'mysqld.service']])) returns.append(FakePOpen()) #calls = popen_root_calls(calls) services = { "systemd": { "mysqld": {"enabled": "true", "ensureRunning": "true"}, "httpd": {"enabled": "true", "ensureRunning": "true"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = True with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() # services not running sh.monitor_services() # services running sh.monitor_services() mock_popen.assert_has_calls(calls, any_order=True) mock_exists.assert_called_with('/bin/systemctl') def test_services_handler_systemd_disabled(self, mock_cp): calls = [] # apply_services calls.append(['/bin/systemctl', 'disable', 'httpd.service']) calls.append(['/bin/systemctl', 'status', 'httpd.service']) calls.append(['/bin/systemctl', 'stop', 'httpd.service']) calls.append(['/bin/systemctl', 'disable', 'mysqld.service']) calls.append(['/bin/systemctl', 'status', 'mysqld.service']) calls.append(['/bin/systemctl', 'stop', 'mysqld.service']) calls = popen_root_calls(calls) services = { "systemd": { "mysqld": {"enabled": "false", "ensureRunning": "false"}, "httpd": {"enabled": "false", "ensureRunning": "false"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = True with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen() sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() mock_popen.assert_has_calls(calls, any_order=True) mock_exists.assert_called_with('/bin/systemctl') def test_services_handler_sysv_service_chkconfig(self, mock_cp): def exists(*args, **kwargs): return args[0] != '/bin/systemctl' calls = [] returns = [] # apply_services calls.append(['/sbin/chkconfig', 'httpd', 'on']) returns.append(FakePOpen()) calls.append(['/sbin/service', 'httpd', 'status']) returns.append(FakePOpen(returncode=-1)) calls.append(['/sbin/service', 'httpd', 'start']) returns.append(FakePOpen()) # monitor_services not running calls.append(['/sbin/service', 'httpd', 'status']) returns.append(FakePOpen(returncode=-1)) calls.append(['/sbin/service', 'httpd', 'start']) 
returns.append(FakePOpen()) calls = popen_root_calls(calls) calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) returns.append(FakePOpen()) # monitor_services running calls.extend(popen_root_calls([['/sbin/service', 'httpd', 'status']])) returns.append(FakePOpen()) services = { "sysvinit": { "httpd": {"enabled": "true", "ensureRunning": "true"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.side_effect = exists with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() # services not running sh.monitor_services() # services running sh.monitor_services() mock_popen.assert_has_calls(calls) mock_exists.assert_any_call('/bin/systemctl') mock_exists.assert_any_call('/sbin/service') mock_exists.assert_any_call('/sbin/chkconfig') def test_services_handler_sysv_disabled_service_chkconfig(self, mock_cp): def exists(*args, **kwargs): return args[0] != '/bin/systemctl' calls = [] # apply_services calls.append(['/sbin/chkconfig', 'httpd', 'off']) calls.append(['/sbin/service', 'httpd', 'status']) calls.append(['/sbin/service', 'httpd', 'stop']) calls = popen_root_calls(calls) services = { "sysvinit": { "httpd": {"enabled": "false", "ensureRunning": "false"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.side_effect = exists with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen() sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() mock_popen.assert_has_calls(calls) mock_exists.assert_any_call('/bin/systemctl') mock_exists.assert_any_call('/sbin/service') mock_exists.assert_any_call('/sbin/chkconfig') def test_services_handler_sysv_systemctl(self, mock_cp): calls = [] returns = [] # apply_services calls.append(['/bin/systemctl', 'enable', 'httpd.service']) returns.append(FakePOpen()) calls.append(['/bin/systemctl', 'status', 'httpd.service']) returns.append(FakePOpen(returncode=-1)) calls.append(['/bin/systemctl', 'start', 'httpd.service']) returns.append(FakePOpen()) # monitor_services not running calls.append(['/bin/systemctl', 'status', 'httpd.service']) returns.append(FakePOpen(returncode=-1)) calls.append(['/bin/systemctl', 'start', 'httpd.service']) returns.append(FakePOpen()) shell_calls = [] shell_calls.append('/bin/services_restarted') returns.append(FakePOpen()) calls = popen_root_calls(calls) calls.extend(popen_root_calls(shell_calls, shell=True)) # monitor_services running calls.extend(popen_root_calls([['/bin/systemctl', 'status', 'httpd.service']])) returns.append(FakePOpen()) services = { "sysvinit": { "httpd": {"enabled": "true", "ensureRunning": "true"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = True with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() # services not running sh.monitor_services() # services running sh.monitor_services() mock_popen.assert_has_calls(calls) mock_exists.assert_called_with('/bin/systemctl') def 
test_services_handler_sysv_disabled_systemctl(self, mock_cp): calls = [] # apply_services calls.append(['/bin/systemctl', 'disable', 'httpd.service']) calls.append(['/bin/systemctl', 'status', 'httpd.service']) calls.append(['/bin/systemctl', 'stop', 'httpd.service']) calls = popen_root_calls(calls) services = { "sysvinit": { "httpd": {"enabled": "false", "ensureRunning": "false"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = True with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen() sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() mock_popen.assert_has_calls(calls) mock_exists.assert_called_with('/bin/systemctl') def test_services_handler_sysv_service_updaterc(self, mock_cp): calls = [] returns = [] # apply_services calls.append(['/usr/sbin/update-rc.d', 'httpd', 'enable']) returns.append(FakePOpen()) calls.append(['/usr/sbin/service', 'httpd', 'status']) returns.append(FakePOpen(returncode=-1)) calls.append(['/usr/sbin/service', 'httpd', 'start']) returns.append(FakePOpen()) # monitor_services not running calls.append(['/usr/sbin/service', 'httpd', 'status']) returns.append(FakePOpen(returncode=-1)) calls.append(['/usr/sbin/service', 'httpd', 'start']) returns.append(FakePOpen()) shell_calls = [] shell_calls.append('/bin/services_restarted') returns.append(FakePOpen()) calls = popen_root_calls(calls) calls.extend(popen_root_calls(shell_calls, shell=True)) # monitor_services running calls.extend(popen_root_calls([['/usr/sbin/service', 'httpd', 'status']])) returns.append(FakePOpen()) services = { "sysvinit": { "httpd": {"enabled": "true", "ensureRunning": "true"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = False with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() # services not running sh.monitor_services() # services running sh.monitor_services() mock_popen.assert_has_calls(calls) mock_exists.assert_any_call('/bin/systemctl') mock_exists.assert_any_call('/sbin/service') mock_exists.assert_any_call('/sbin/chkconfig') def test_services_handler_sysv_disabled_service_updaterc(self, mock_cp): calls = [] returns = [] # apply_services calls.append(['/usr/sbin/update-rc.d', 'httpd', 'disable']) returns.append(FakePOpen()) calls.append(['/usr/sbin/service', 'httpd', 'status']) returns.append(FakePOpen()) calls.append(['/usr/sbin/service', 'httpd', 'stop']) returns.append(FakePOpen()) calls = popen_root_calls(calls) services = { "sysvinit": { "httpd": {"enabled": "false", "ensureRunning": "false"} } } hooks = [ cfn_helper.Hook( 'hook1', 'service.restarted', 'Resources.resource1.Metadata', 'root', '/bin/services_restarted') ] with mock.patch('os.path.exists') as mock_exists: mock_exists.return_value = False with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) sh.apply_services() mock_popen.assert_has_calls(calls) mock_exists.assert_any_call('/bin/systemctl') mock_exists.assert_any_call('/sbin/service') mock_exists.assert_any_call('/sbin/chkconfig') class TestHupConfig(testtools.TestCase): def 
test_load_main_section(self): fcreds = tempfile.NamedTemporaryFile() fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8')) fcreds.flush() main_conf = tempfile.NamedTemporaryFile() main_conf.write(('''[main] stack=teststack credential-file=%s''' % fcreds.name).encode('UTF-8')) main_conf.flush() mainconfig = cfn_helper.HupConfig([open(main_conf.name)]) self.assertEqual( '{stack: teststack, credential_file: %s, ' 'region: nova, interval:10}' % fcreds.name, str(mainconfig)) main_conf.close() main_conf = tempfile.NamedTemporaryFile() main_conf.write(('''[main] stack=teststack region=region1 credential-file=%s-invalid interval=120''' % fcreds.name).encode('UTF-8')) main_conf.flush() e = self.assertRaises(Exception, cfn_helper.HupConfig, [open(main_conf.name)]) self.assertIn('invalid credentials file', str(e)) fcreds.close() @mock.patch.object(cfn_helper, 'controlled_privileges') def test_hup_config(self, mock_cp): hooks_conf = tempfile.NamedTemporaryFile() def write_hook_conf(f, name, triggers, path, action): f.write(( '[%s]\ntriggers=%s\npath=%s\naction=%s\nrunas=root\n\n' % ( name, triggers, path, action)).encode('UTF-8')) write_hook_conf( hooks_conf, 'hook2', 'service2.restarted', 'Resources.resource2.Metadata', '/bin/hook2') write_hook_conf( hooks_conf, 'hook1', 'service1.restarted', 'Resources.resource1.Metadata', '/bin/hook1') write_hook_conf( hooks_conf, 'hook3', 'service3.restarted', 'Resources.resource3.Metadata', '/bin/hook3') write_hook_conf( hooks_conf, 'cfn-http-restarted', 'service.restarted', 'Resources.resource.Metadata', '/bin/cfn-http-restarted') hooks_conf.flush() fcreds = tempfile.NamedTemporaryFile() fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8')) fcreds.flush() main_conf = tempfile.NamedTemporaryFile() main_conf.write(('''[main] stack=teststack credential-file=%s region=region1 interval=120''' % fcreds.name).encode('UTF-8')) main_conf.flush() mainconfig = cfn_helper.HupConfig([ open(main_conf.name), open(hooks_conf.name)]) unique_resources = mainconfig.unique_resources_get() self.assertThat([ 'resource', 'resource1', 'resource2', 'resource3', ], ttm.Equals(sorted(unique_resources))) hooks = sorted(mainconfig.hooks, key=lambda hook: hook.resource_name_get()) self.assertEqual(len(hooks), 4) self.assertEqual( '{cfn-http-restarted, service.restarted,' ' Resources.resource.Metadata, root, /bin/cfn-http-restarted}', str(hooks[0])) self.assertEqual( '{hook1, service1.restarted, Resources.resource1.Metadata,' ' root, /bin/hook1}', str(hooks[1])) self.assertEqual( '{hook2, service2.restarted, Resources.resource2.Metadata,' ' root, /bin/hook2}', str(hooks[2])) self.assertEqual( '{hook3, service3.restarted, Resources.resource3.Metadata,' ' root, /bin/hook3}', str(hooks[3])) calls = [] calls.extend(popen_root_calls(['/bin/cfn-http-restarted'], shell=True)) calls.extend(popen_root_calls(['/bin/hook1'], shell=True)) calls.extend(popen_root_calls(['/bin/hook2'], shell=True)) calls.extend(popen_root_calls(['/bin/hook3'], shell=True)) #calls = popen_root_calls(calls) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen('All good') for hook in hooks: hook.event(hook.triggers, None, hook.resource_name_get()) hooks_conf.close() fcreds.close() main_conf.close() mock_popen.assert_has_calls(calls) class TestCfnHelper(testtools.TestCase): def _check_metadata_content(self, content, value): with tempfile.NamedTemporaryFile() as metadata_info: metadata_info.write(content.encode('UTF-8')) metadata_info.flush() port = 
cfn_helper.metadata_server_port(metadata_info.name) self.assertEqual(value, port) def test_metadata_server_port(self): self._check_metadata_content("http://172.20.42.42:8000\n", 8000) def test_metadata_server_port_https(self): self._check_metadata_content("https://abc.foo.bar:6969\n", 6969) def test_metadata_server_port_noport(self): self._check_metadata_content("http://172.20.42.42\n", None) def test_metadata_server_port_justip(self): self._check_metadata_content("172.20.42.42", None) def test_metadata_server_port_weird(self): self._check_metadata_content("::::", None) self._check_metadata_content("beforecolons:aftercolons", None) def test_metadata_server_port_emptyfile(self): self._check_metadata_content("\n", None) self._check_metadata_content("", None) def test_metadata_server_nofile(self): random_filename = self.getUniqueString() self.assertIsNone(cfn_helper.metadata_server_port(random_filename)) def test_to_boolean(self): self.assertTrue(cfn_helper.to_boolean(True)) self.assertTrue(cfn_helper.to_boolean('true')) self.assertTrue(cfn_helper.to_boolean('yes')) self.assertTrue(cfn_helper.to_boolean('1')) self.assertTrue(cfn_helper.to_boolean(1)) self.assertFalse(cfn_helper.to_boolean(False)) self.assertFalse(cfn_helper.to_boolean('false')) self.assertFalse(cfn_helper.to_boolean('no')) self.assertFalse(cfn_helper.to_boolean('0')) self.assertFalse(cfn_helper.to_boolean(0)) self.assertFalse(cfn_helper.to_boolean(None)) self.assertFalse(cfn_helper.to_boolean('fingle')) def test_parse_creds_file(self): def parse_creds_test(file_contents, creds_match): with tempfile.NamedTemporaryFile(mode='w') as fcreds: fcreds.write(file_contents) fcreds.flush() creds = cfn_helper.parse_creds_file(fcreds.name) self.assertThat(creds_match, ttm.Equals(creds)) parse_creds_test( 'AWSAccessKeyId=foo\nAWSSecretKey=bar\n', {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} ) parse_creds_test( 'AWSAccessKeyId =foo\nAWSSecretKey= bar\n', {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} ) parse_creds_test( 'AWSAccessKeyId = foo\nAWSSecretKey = bar\n', {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} ) class TestMetadataRetrieve(testtools.TestCase): def setUp(self): super(TestMetadataRetrieve, self).setUp() self.tdir = self.useFixture(fixtures.TempDir()) self.last_file = os.path.join(self.tdir.path, 'last_metadata') def test_metadata_retrieve_files(self): md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) with tempfile.NamedTemporaryFile(mode='w+') as default_file: default_file.write(md_str) default_file.flush() self.assertThat(default_file.name, ttm.FileContains(md_str)) self.assertTrue( md.retrieve(default_path=default_file.name, last_path=self.last_file)) self.assertThat(self.last_file, ttm.FileContains(md_str)) self.assertThat(md_data, ttm.Equals(md._metadata)) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(default_path=default_file.name, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) def test_metadata_retrieve_none(self): md = cfn_helper.Metadata('teststack', None) default_file = os.path.join(self.tdir.path, 'default_file') self.assertFalse(md.retrieve(default_path=default_file, last_path=self.last_file)) self.assertIsNone(md._metadata) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display() fake_stdout.flush() 
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") def test_metadata_retrieve_passed(self): md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display() fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), "{\"AWS::CloudFormation::Init\": {\"config\": {" "\"files\": {\"/tmp/foo\": {\"content\": \"bar\"}" "}}}}\n") def test_metadata_retrieve_by_key_passed(self): md_data = {"foo": {"bar": {"fred.1": "abcd"}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display("foo") fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), "{\"bar\": {\"fred.1\": \"abcd\"}}\n") def test_metadata_retrieve_by_nested_key_passed(self): md_data = {"foo": {"bar": {"fred.1": "abcd"}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display("foo.bar.'fred.1'") fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), '"abcd"\n') def test_metadata_retrieve_key_none(self): md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display("no_key") fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") def test_metadata_retrieve_by_nested_key_none(self): md_data = {"foo": {"bar": {"fred.1": "abcd"}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display("foo.fred") fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") def test_metadata_retrieve_by_nested_key_none_with_matching_string(self): md_data = {"foo": "bar"} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertTrue(md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, 
ttm.Equals(md._metadata)) self.assertEqual(md_str, str(md)) displayed = self.useFixture(fixtures.StringStream('stdout')) fake_stdout = displayed.stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) md.display("foo.bar") fake_stdout.flush() self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") def test_metadata_creates_cache(self): temp_home = tempfile.mkdtemp() def cleanup_temp_home(thome): os.unlink(os.path.join(thome, 'cache', 'last_metadata')) os.rmdir(os.path.join(thome, 'cache')) os.rmdir(os.path.join(thome)) self.addCleanup(cleanup_temp_home, temp_home) last_path = os.path.join(temp_home, 'cache', 'last_metadata') md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} md_str = json.dumps(md_data) md = cfn_helper.Metadata('teststack', None) self.assertFalse(os.path.exists(last_path), "last_metadata file already exists") self.assertTrue(md.retrieve(meta_str=md_str, last_path=last_path)) self.assertTrue(os.path.exists(last_path), "last_metadata file should exist") # Ensure created dirs and file have right perms self.assertTrue(os.stat(last_path).st_mode & 0o600 == 0o600) self.assertTrue( os.stat(os.path.dirname(last_path)).st_mode & 0o700 == 0o700) def test_is_valid_metadata(self): md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} md = cfn_helper.Metadata('teststack', None) self.assertTrue( md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) self.assertTrue(md._is_valid_metadata()) self.assertThat( md_data['AWS::CloudFormation::Init'], ttm.Equals(md._metadata)) def test_remote_metadata(self): md_data = {"AWS::CloudFormation::Init": {"config": {"files": { "/tmp/foo": {"content": "bar"}}}}} with mock.patch.object( cfn.CloudFormationConnection, 'describe_stack_resource' ) as mock_dsr: mock_dsr.return_value = { 'DescribeStackResourceResponse': { 'DescribeStackResourceResult': { 'StackResourceDetail': {'Metadata': md_data}}}} md = cfn_helper.Metadata( 'teststack', None, access_key='foo', secret_key='bar') self.assertTrue(md.retrieve(last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) with tempfile.NamedTemporaryFile(mode='w') as fcreds: fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n') fcreds.flush() md = cfn_helper.Metadata( 'teststack', None, credentials_file=fcreds.name) self.assertTrue(md.retrieve(last_path=self.last_file)) self.assertThat(md_data, ttm.Equals(md._metadata)) def test_nova_meta_with_cache(self): meta_in = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f", "availability_zone": "nova", "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", "launch_index": 0, "meta": {}, "public_keys": {"heat_key": "ssh-rsa etc...\n"}, "name": "as-WikiDatabase-4ykioj3lgi57"} md_str = json.dumps(meta_in) md = cfn_helper.Metadata('teststack', None) with tempfile.NamedTemporaryFile(mode='w+') as default_file: default_file.write(md_str) default_file.flush() self.assertThat(default_file.name, ttm.FileContains(md_str)) meta_out = md.get_nova_meta(cache_path=default_file.name) self.assertEqual(meta_in, meta_out) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_nova_meta_curl(self, mock_cp): url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' temp_home = tempfile.mkdtemp() cache_path = os.path.join(temp_home, 'meta_data.json') def cleanup_temp_home(thome): os.unlink(cache_path) os.rmdir(thome) self.addCleanup(cleanup_temp_home, temp_home) meta_in = {"uuid": 
"f9431d18-d971-434d-9044-5b38f5b4646f", "availability_zone": "nova", "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", "launch_index": 0, "meta": {"freddy": "is hungry"}, "public_keys": {"heat_key": "ssh-rsa etc...\n"}, "name": "as-WikiDatabase-4ykioj3lgi57"} md_str = json.dumps(meta_in) def write_cache_file(*params, **kwargs): with open(cache_path, 'w+') as cache_file: cache_file.write(md_str) cache_file.flush() self.assertThat(cache_file.name, ttm.FileContains(md_str)) return FakePOpen('Downloaded', '', 0) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = write_cache_file md = cfn_helper.Metadata('teststack', None) meta_out = md.get_nova_meta(cache_path=cache_path) self.assertEqual(meta_in, meta_out) mock_popen.assert_has_calls( popen_root_calls([['curl', '-o', cache_path, url]])) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_nova_meta_curl_corrupt(self, mock_cp): url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' temp_home = tempfile.mkdtemp() cache_path = os.path.join(temp_home, 'meta_data.json') def cleanup_temp_home(thome): os.unlink(cache_path) os.rmdir(thome) self.addCleanup(cleanup_temp_home, temp_home) md_str = "this { is not really json" def write_cache_file(*params, **kwargs): with open(cache_path, 'w+') as cache_file: cache_file.write(md_str) cache_file.flush() self.assertThat(cache_file.name, ttm.FileContains(md_str)) return FakePOpen('Downloaded', '', 0) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = write_cache_file md = cfn_helper.Metadata('teststack', None) meta_out = md.get_nova_meta(cache_path=cache_path) self.assertIsNone(meta_out) mock_popen.assert_has_calls( popen_root_calls([['curl', '-o', cache_path, url]])) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_nova_meta_curl_failed(self, mock_cp): url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' temp_home = tempfile.mkdtemp() cache_path = os.path.join(temp_home, 'meta_data.json') def cleanup_temp_home(thome): os.rmdir(thome) self.addCleanup(cleanup_temp_home, temp_home) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen('Failed', '', 1) md = cfn_helper.Metadata('teststack', None) meta_out = md.get_nova_meta(cache_path=cache_path) self.assertIsNone(meta_out) mock_popen.assert_has_calls( popen_root_calls([['curl', '-o', cache_path, url]])) def test_get_tags(self): fake_tags = {'foo': 'fee', 'apple': 'red'} md_data = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f", "availability_zone": "nova", "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", "launch_index": 0, "meta": fake_tags, "public_keys": {"heat_key": "ssh-rsa etc...\n"}, "name": "as-WikiDatabase-4ykioj3lgi57"} tags_expect = fake_tags tags_expect['InstanceId'] = md_data['uuid'] md = cfn_helper.Metadata('teststack', None) with mock.patch.object(md, 'get_nova_meta') as mock_method: mock_method.return_value = md_data tags = md.get_tags() mock_method.assert_called_once_with() self.assertEqual(tags_expect, tags) def test_get_instance_id(self): uuid = "f9431d18-d971-434d-9044-5b38f5b4646f" md_data = {"uuid": uuid, "availability_zone": "nova", "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", "launch_index": 0, "public_keys": {"heat_key": "ssh-rsa etc...\n"}, "name": "as-WikiDatabase-4ykioj3lgi57"} md = cfn_helper.Metadata('teststack', None) with mock.patch.object(md, 'get_nova_meta') as mock_method: mock_method.return_value = md_data self.assertEqual(md.get_instance_id(), uuid) 
mock_method.assert_called_once_with() class TestCfnInit(testtools.TestCase): def setUp(self): super(TestCfnInit, self).setUp() self.tdir = self.useFixture(fixtures.TempDir()) self.last_file = os.path.join(self.tdir.path, 'last_metadata') def test_cfn_init(self): with tempfile.NamedTemporaryFile(mode='w+') as foo_file: md_data = {"AWS::CloudFormation::Init": {"config": {"files": { foo_file.name: {"content": "bar"}}}}} md = cfn_helper.Metadata('teststack', None) self.assertTrue( md.retrieve(meta_str=md_data, last_path=self.last_file)) md.cfn_init() self.assertThat(foo_file.name, ttm.FileContains('bar')) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_cfn_init_with_ignore_errors_false(self, mock_cp): md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { "00_foo": {"command": "/bin/command1", "ignoreErrors": "false"}}}}} with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen('Doing something', 'error', -1) md = cfn_helper.Metadata('teststack', None) self.assertTrue( md.retrieve(meta_str=md_data, last_path=self.last_file)) self.assertRaises(cfn_helper.CommandsHandlerRunError, md.cfn_init) mock_popen.assert_has_calls(popen_root_calls(['/bin/command1'], shell=True)) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_cfn_init_with_ignore_errors_true(self, mock_cp): calls = [] returns = [] calls.extend(popen_root_calls(['/bin/command1'], shell=True)) returns.append(FakePOpen('Doing something', 'error', -1)) calls.extend(popen_root_calls(['/bin/command2'], shell=True)) returns.append(FakePOpen('All good')) #calls = popen_root_calls(calls) md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { "00_foo": {"command": "/bin/command1", "ignoreErrors": "true"}, "01_bar": {"command": "/bin/command2", "ignoreErrors": "false"} }}}} with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns md = cfn_helper.Metadata('teststack', None) self.assertTrue( md.retrieve(meta_str=md_data, last_path=self.last_file)) md.cfn_init() mock_popen.assert_has_calls(calls) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_cfn_init_runs_list_commands_without_shell(self, mock_cp): calls = [] returns = [] # command supplied as list shouldn't run on shell calls.extend(popen_root_calls([['/bin/command1', 'arg']], shell=False)) returns.append(FakePOpen('Doing something')) # command supplied as string should run on shell calls.extend(popen_root_calls(['/bin/command2'], shell=True)) returns.append(FakePOpen('All good')) md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { "00_foo": {"command": ["/bin/command1", "arg"]}, "01_bar": {"command": "/bin/command2"} }}}} with mock.patch('subprocess.Popen') as mock_popen: mock_popen.side_effect = returns md = cfn_helper.Metadata('teststack', None) self.assertTrue( md.retrieve(meta_str=md_data, last_path=self.last_file)) md.cfn_init() mock_popen.assert_has_calls(calls) class TestSourcesHandler(testtools.TestCase): def test_apply_sources_empty(self): sh = cfn_helper.SourcesHandler({}) sh.apply_sources() def _test_apply_sources(self, url, end_file): dest = tempfile.mkdtemp() self.addCleanup(os.rmdir, dest) sources = {dest: url} td = os.path.dirname(end_file) er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" calls = popen_root_calls([er % (dest, dest, url)], shell=True) with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp: mock_mkdtemp.return_value = td with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = 
FakePOpen('Curl good') sh = cfn_helper.SourcesHandler(sources) sh.apply_sources() mock_popen.assert_has_calls(calls) mock_mkdtemp.assert_called_with() @mock.patch.object(cfn_helper, 'controlled_privileges') def test_apply_sources_github(self, mock_cp): url = "https://github.com/NoSuchProject/tarball/NoSuchTarball" dest = tempfile.mkdtemp() self.addCleanup(os.rmdir, dest) sources = {dest: url} er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" calls = popen_root_calls([er % (dest, dest, url)], shell=True) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen('Curl good') sh = cfn_helper.SourcesHandler(sources) sh.apply_sources() mock_popen.assert_has_calls(calls) @mock.patch.object(cfn_helper, 'controlled_privileges') def test_apply_sources_general(self, mock_cp): url = "https://website.no.existe/a/b/c/file.tar.gz" dest = tempfile.mkdtemp() self.addCleanup(os.rmdir, dest) sources = {dest: url} er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" calls = popen_root_calls([er % (dest, dest, url)], shell=True) with mock.patch('subprocess.Popen') as mock_popen: mock_popen.return_value = FakePOpen('Curl good') sh = cfn_helper.SourcesHandler(sources) sh.apply_sources() mock_popen.assert_has_calls(calls) def test_apply_source_cmd(self): sh = cfn_helper.SourcesHandler({}) er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s | tar -xvf -" dest = '/tmp' # test tgz url = 'http://www.example.com/a.tgz' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) # test tar.gz url = 'http://www.example.com/a.tar.gz' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) # test github - tarball 1 url = 'https://github.com/openstack/heat-cfntools/tarball/master' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) # test github - tarball 2 url = 'https://github.com/openstack/heat-cfntools/tarball/master/' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) # test tbz2 url = 'http://www.example.com/a.tbz2' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd) # test tar.bz2 url = 'http://www.example.com/a.tar.bz2' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd) # test zip er = "mkdir -p '%s'; cd '%s'; curl -s -o '%s' '%s' && unzip -o '%s'" url = 'http://www.example.com/a.zip' d = "/tmp/tmp2I0yNK" tmp = "%s/a.zip" % d with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp: mock_mkdtemp.return_value = d cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, tmp, url, tmp), cmd) # test gz er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s > '%s'" url = 'http://www.example.com/a.sh.gz' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "gunzip", "a.sh"), cmd) # test bz2 url = 'http://www.example.com/a.sh.bz2' cmd = sh._apply_source_cmd(dest, url) self.assertEqual(er % (dest, dest, url, "bunzip2", "a.sh"), cmd) # test other url = 'http://www.example.com/a.sh' cmd = sh._apply_source_cmd(dest, url) self.assertEqual("", cmd) mock_mkdtemp.assert_called_with() heat-cfntools-1.4.2/heat_cfntools/tests/test_cfn_hup.py000066400000000000000000000064651265023060500233230ustar00rootroot00000000000000# # Copyright 2013 Hewlett-Packard Development Company, L.P. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import mock import tempfile import testtools from heat_cfntools.cfntools import cfn_helper class TestCfnHup(testtools.TestCase): def setUp(self): super(TestCfnHup, self).setUp() self.logger = self.useFixture(fixtures.FakeLogger()) self.stack_name = self.getUniqueString() self.resource = self.getUniqueString() self.region = self.getUniqueString() self.creds = tempfile.NamedTemporaryFile() self.metadata = cfn_helper.Metadata(self.stack_name, self.resource, credentials_file=self.creds.name, region=self.region) self.init_content = self.getUniqueString() self.init_temp = tempfile.NamedTemporaryFile() self.service_name = self.getUniqueString() self.init_section = {'AWS::CloudFormation::Init': { 'config': { 'services': { 'sysvinit': { self.service_name: { 'enabled': True, 'ensureRunning': True, } } }, 'files': { self.init_temp.name: { 'content': self.init_content } } } } } def _mock_retrieve_metadata(self, desired_metadata): with mock.patch.object( cfn_helper.Metadata, 'remote_metadata') as mock_method: mock_method.return_value = desired_metadata with tempfile.NamedTemporaryFile() as last_md: self.metadata.retrieve(last_path=last_md.name) def _test_cfn_hup_metadata(self, metadata): self._mock_retrieve_metadata(metadata) FakeServicesHandler = mock.Mock() FakeServicesHandler.monitor_services.return_value = None self.useFixture( fixtures.MonkeyPatch( 'heat_cfntools.cfntools.cfn_helper.ServicesHandler', FakeServicesHandler)) section = self.getUniqueString() triggers = 'post.add,post.delete,post.update' path = 'Resources.%s.Metadata' % self.resource runas = 'root' action = '/bin/sh -c "true"' hook = cfn_helper.Hook(section, triggers, path, runas, action) with mock.patch.object(cfn_helper.Hook, 'event') as mock_method: mock_method.return_value = None self.metadata.cfn_hup([hook]) def test_cfn_hup_empty_metadata(self): self._test_cfn_hup_metadata({}) def test_cfn_hup_cfn_init_metadata(self): self._test_cfn_hup_metadata(self.init_section) heat-cfntools-1.4.2/requirements.txt000066400000000000000000000001121265023060500175410ustar00rootroot00000000000000pbr>=0.6,!=0.7,<1.0 boto>=2.12.0,!=2.13.0 psutil>=1.1.1,<2.0.0 six>=1.9.0 heat-cfntools-1.4.2/setup.cfg000066400000000000000000000015211265023060500161030ustar00rootroot00000000000000[metadata] name = heat-cfntools summary = Tools required to be installed on Heat provisioned cloud instances description-file = README.rst author = OpenStack author-email = openstack-dev@lists.openstack.org home-page = http://www.openstack.org/ classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 [files] packages = heat_cfntools scripts = bin/cfn-create-aws-symlinks bin/cfn-get-metadata bin/cfn-hup bin/cfn-init bin/cfn-push-stats bin/cfn-signal [global] setup-hooks = 
pbr.hooks.setup_hook [wheel] universal = 1 heat-cfntools-1.4.2/setup.py000077500000000000000000000014151265023060500160010ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools setuptools.setup( setup_requires=['pbr'], pbr=True) heat-cfntools-1.4.2/test-requirements.txt000066400000000000000000000002071265023060500205230ustar00rootroot00000000000000# Hacking already pins down pep8, pyflakes and flake8 hacking>=0.8.0,<0.9 mock>=1.0 discover testrepository>=0.0.18 testtools>=0.9.34 heat-cfntools-1.4.2/tools/000077500000000000000000000000001265023060500154235ustar00rootroot00000000000000heat-cfntools-1.4.2/tools/lintstack.py000077500000000000000000000144471265023060500200060ustar00rootroot00000000000000#!/usr/bin/env python # Copyright (c) 2012, AT&T Labs, Yun Mao # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """pylint error checking.""" import cStringIO as StringIO import json import re import sys from pylint import lint from pylint.reporters import text # Note(maoy): E1103 is error code related to partial type inference ignore_codes = ["E1103"] # Note(maoy): the error message is the pattern of E0202. It should be ignored # for nova.tests modules ignore_messages = ["An attribute affected in nova.tests"] # Note(maoy): we ignore all errors in openstack.common because it should be # checked elsewhere. We also ignore nova.tests for now due to high false # positive rate. ignore_modules = ["nova/openstack/common/", "nova/tests/"] KNOWN_PYLINT_EXCEPTIONS_FILE = "tools/pylint_exceptions" class LintOutput(object): _cached_filename = None _cached_content = None def __init__(self, filename, lineno, line_content, code, message, lintoutput): self.filename = filename self.lineno = lineno self.line_content = line_content self.code = code self.message = message self.lintoutput = lintoutput @classmethod def from_line(cls, line): m = re.search(r"(\S+):(\d+): \[(\S+)(, \S+)?] 
(.*)", line) matched = m.groups() filename, lineno, code, message = (matched[0], int(matched[1]), matched[2], matched[-1]) if cls._cached_filename != filename: with open(filename) as f: cls._cached_content = list(f.readlines()) cls._cached_filename = filename line_content = cls._cached_content[lineno - 1].rstrip() return cls(filename, lineno, line_content, code, message, line.rstrip()) @classmethod def from_msg_to_dict(cls, msg): """From the output of pylint msg, to a dict, where each key is a unique error identifier, value is a list of LintOutput """ result = {} for line in msg.splitlines(): obj = cls.from_line(line) if obj.is_ignored(): continue key = obj.key() if key not in result: result[key] = [] result[key].append(obj) return result def is_ignored(self): if self.code in ignore_codes: return True if any(self.filename.startswith(name) for name in ignore_modules): return True if any(msg in self.message for msg in ignore_messages): return True return False def key(self): if self.code in ["E1101", "E1103"]: # These two types of errors are like Foo class has no member bar. # We discard the source code so that the error will be ignored # next time another Foo.bar is encountered. return self.message, "" return self.message, self.line_content.strip() def json(self): return json.dumps(self.__dict__) def review_str(self): return ("File %(filename)s\nLine %(lineno)d:%(line_content)s\n" "%(code)s: %(message)s" % self.__dict__) class ErrorKeys(object): @classmethod def print_json(cls, errors, output=sys.stdout): print >>output, "# automatically generated by tools/lintstack.py" for i in sorted(errors.keys()): print >>output, json.dumps(i) @classmethod def from_file(cls, filename): keys = set() for line in open(filename): if line and line[0] != "#": d = json.loads(line) keys.add(tuple(d)) return keys def run_pylint(): buff = StringIO.StringIO() reporter = text.ParseableTextReporter(output=buff) args = ["--include-ids=y", "-E", "nova"] lint.Run(args, reporter=reporter, exit=False) val = buff.getvalue() buff.close() return val def generate_error_keys(msg=None): print "Generating", KNOWN_PYLINT_EXCEPTIONS_FILE if msg is None: msg = run_pylint() errors = LintOutput.from_msg_to_dict(msg) with open(KNOWN_PYLINT_EXCEPTIONS_FILE, "w") as f: ErrorKeys.print_json(errors, output=f) def validate(newmsg=None): print "Loading", KNOWN_PYLINT_EXCEPTIONS_FILE known = ErrorKeys.from_file(KNOWN_PYLINT_EXCEPTIONS_FILE) if newmsg is None: print "Running pylint. Be patient..." newmsg = run_pylint() errors = LintOutput.from_msg_to_dict(newmsg) print "Unique errors reported by pylint: was %d, now %d." \ % (len(known), len(errors)) passed = True for err_key, err_list in errors.items(): for err in err_list: if err_key not in known: print err.lintoutput print passed = False if passed: print "Congrats! pylint check passed." redundant = known - set(errors.keys()) if redundant: print "Extra credit: some known pylint exceptions disappeared." for i in sorted(redundant): print json.dumps(i) print "Consider regenerating the exception file if you will." else: print ("Please fix the errors above. 


def usage():
    print """Usage: tools/lintstack.py [generate|validate]
    To generate pylint_exceptions file: tools/lintstack.py generate
    To validate the current commit: tools/lintstack.py
    """


def main():
    option = "validate"
    if len(sys.argv) > 1:
        option = sys.argv[1]
    if option == "generate":
        generate_error_keys()
    elif option == "validate":
        validate()
    else:
        usage()


if __name__ == "__main__":
    main()

heat-cfntools-1.4.2/tools/lintstack.sh000077500000000000000000000042061265023060500177600ustar00rootroot00000000000000
#!/usr/bin/env bash
# Copyright (c) 2012-2013, AT&T Labs, Yun Mao
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Use lintstack.py to compare pylint errors.
# We run pylint twice, once on HEAD, once on the code before the latest
# commit for review.
set -e

TOOLS_DIR=$(cd $(dirname "$0") && pwd)
# Get the current branch name.
GITHEAD=`git rev-parse --abbrev-ref HEAD`
if [[ "$GITHEAD" == "HEAD" ]]; then
    # In detached head mode, get revision number instead
    GITHEAD=`git rev-parse HEAD`
    echo "Currently we are at commit $GITHEAD"
else
    echo "Currently we are at branch $GITHEAD"
fi

cp -f $TOOLS_DIR/lintstack.py $TOOLS_DIR/lintstack.head.py

if git rev-parse HEAD^2 2>/dev/null; then
    # The HEAD is a Merge commit. Here, the patch to review is
    # HEAD^2, the master branch is at HEAD^1, and the patch was
    # written based on HEAD^2~1.
    PREV_COMMIT=`git rev-parse HEAD^2~1`
    git checkout HEAD~1
    # The git merge is necessary for reviews with a series of patches.
    # If not, this is a no-op so won't hurt either.
    git merge $PREV_COMMIT
else
    # The HEAD is not a merge commit. This won't happen on gerrit.
    # Most likely you are running against your own patch locally.
    # We assume the patch to examine is HEAD, and we compare it against
    # HEAD~1
    git checkout HEAD~1
fi

# First generate tools/pylint_exceptions from HEAD~1
$TOOLS_DIR/lintstack.head.py generate

# Then use that as a reference to compare against HEAD
git checkout $GITHEAD
$TOOLS_DIR/lintstack.head.py
echo "Check passed. FYI: the pylint exceptions are:"
cat $TOOLS_DIR/pylint_exceptions

heat-cfntools-1.4.2/tox.ini000066400000000000000000000012331265023060500155750ustar00rootroot00000000000000
[tox]
envlist = py34,py27,pep8

[testenv]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands =
    flake8
    flake8 --filename=cfn-* bin

[testenv:pylint]
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       pylint==0.26.0
commands = bash tools/lintstack.sh

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[testenv:venv]
commands = {posargs}

[flake8]
show-source = true
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,tools
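
# Note: given the [testenv:pylint] section above, the whole HEAD-vs-HEAD~1
# pylint comparison can be run locally with "tox -e pylint" (assuming tox
# is installed and the checkout has git history for HEAD~1).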