slapos.core-1.3.18/0000755000000000000000000000000013006632706013775 5ustar rootroot00000000000000slapos.core-1.3.18/setup.cfg0000644000000000000000000000032213006632706015613 0ustar rootroot00000000000000[build_sphinx] source-dir = documentation/source build-dir = documentation/build all_files = 1 [upload_sphinx] upload-dir = documentation/build/html [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 slapos.core-1.3.18/slapos/0000755000000000000000000000000013006632706015276 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/README.proxy.txt0000644000000000000000000000075312752436134020165 0ustar rootroot00000000000000proxy ===== Implement minimalist SlapOS Master server without any security, designed to work only from localhost with one SlapOS Node (a.k.a Computer). It implements (or should implement) the SLAP API, as currently implemented in the SlapOS Master (see slaptool.py in Master). The only behavioral difference from the SlapOS Master is: When the proxy doesn't find any free partition (and/or in case of slave instance, any compatible master instance), it will throw a NotFoundError (404). slapos.core-1.3.18/slapos/cli/0000755000000000000000000000000013006632706016045 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/cli/list.py0000644000000000000000000000500612752436134017377 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import logging import sys from slapos.cli.config import ClientConfigCommand from slapos.client import init, ClientConfig def resetLogger(logger): """Remove all formatters, log files, etc.""" if not getattr(logger, 'parent', None): return handler = logger.parent.handlers[0] logger.parent.removeHandler(handler) logger.addHandler(logging.StreamHandler(sys.stdout)) class ListCommand(ClientConfigCommand): """request an instance and get status and parameters of instance""" def get_parser(self, prog_name): ap = super(ListCommand, self).get_parser(prog_name) return ap def take_action(self, args): configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) do_list(self.app.log, conf, local) def do_list(logger, conf, local): resetLogger(logger) # XXX catch exception instance_dict = local['slap'].getOpenOrderDict() if instance_dict == {}: logger.info('No existing service.') return logger.info('List of services:') for title, instance in instance_dict.iteritems(): logger.info('%s %s', title, instance._software_release_url) slapos.core-1.3.18/slapos/cli/boot.py0000644000000000000000000001074412752436134017374 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import subprocess from time import sleep import glob import os from slapos.cli.command import must_be_root from slapos.cli.entry import SlapOSApp from slapos.cli.config import ConfigCommand def _removeTimestamp(instancehome): """ Remove .timestamp from all partitions """ timestamp_glob_path = "%s/slappart*/.timestamp" % instancehome for timestamp_path in glob.glob(timestamp_glob_path): print "Removing %s" % timestamp_path os.remove(timestamp_path) def _runBang(app): """ Launch slapos node format. """ print "[BOOT] Invoking slapos node bang..." result = app.run(['node', 'bang', '-m', 'Reboot']) if result == 1: return 0 return 1 def _runFormat(app): """ Launch slapos node format. """ print "[BOOT] Invoking slapos node format..." result = app.run(['node', 'format', '--now', '--verbose']) if result == 1: return 0 return 1 def _ping(): """ Ping a hostname """ print "[BOOT] Invoking ping to ipv4 network..." 
p = subprocess.Popen( ["ping", "-c", "2", "www.google.com"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() if p.returncode == 0: print "[BOOT] IPv4 network reachable..." return 1 print "[BOOT] [ERROR] IPv4 network unreachable..." return 0 def _ping6(): """ Ping an ipv6 address """ print "[BOOT] Invoking ping to ipv6 network..." p = subprocess.Popen( ["ping6", "-c", "2", "ipv6.google.com"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() if p.returncode == 0: print "[BOOT] IPv6 network reachable..." return 1 print "[BOOT] [ERROR] IPv6 network unreachable..." return 0 class BootCommand(ConfigCommand): """ Test network and invoke simple format and bang (Use on Linux startup) """ command_group = 'node' def get_parser(self, prog_name): ap = super(BootCommand, self).get_parser(prog_name) ap.add_argument('-m', '--message', default="Reboot", help='Message for bang') return ap @must_be_root def take_action(self, args): configp = self.fetch_config(args) # Make sure ipv4 is working instance_root = configp.get('slapos','instance_root') is_ready = _ping() while is_ready == 0: sleep(5) is_ready = _ping() # Make sure ipv6 is working is_ready = _ping6() while is_ready == 0: sleep(5) is_ready = _ping6() app = SlapOSApp() # Make sure slapos node format returns ok is_ready = _runFormat(app) while is_ready == 0: print "[BOOT] [ERROR] Fail to format, try again in 15 seconds..." sleep(15) is_ready = _runFormat(app) # Make sure slapos node bang returns ok is_ready = _runBang(app) while is_ready == 0: print "[BOOT] [ERROR] Fail to bang, try again in 15 seconds..." sleep(15) is_ready = _runBang(app) _removeTimestamp(instance_root) slapos.core-1.3.18/slapos/cli/request.py0000644000000000000000000001125612752436134020120 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import pprint from slapos.cli.config import ClientConfigCommand from slapos.client import init, ClientConfig, _getSoftwareReleaseFromSoftwareString from slapos.slap import ResourceNotReady def parse_option_dict(options): """ Parse a list of option strings like foo=bar baz=qux and return a dictionary. Will raise if keys are repeated. 
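    For illustration (example values, not taken from this module): calling
    parse_option_dict(['computer_guid=COMP-1234', 'state=started']) returns
    {'computer_guid': 'COMP-1234', 'state': 'started'}, while a repeated key
    such as ['a=1', 'a=2'] raises ValueError.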
""" ret = {} for option_pair in (options or []): key, value = option_pair.split('=', 1) if key in ret: raise ValueError("Multiple values provided for the same key '%s'" % key) ret[key] = value return ret class RequestCommand(ClientConfigCommand): """request an instance and get status and parameters of instance""" def get_parser(self, prog_name): ap = super(RequestCommand, self).get_parser(prog_name) ap.add_argument('reference', help='Your instance reference') ap.add_argument('software_url', help='Your software url') # XXX TODO can we do a minimal check for correctness of this argument? # the alternative is a silent failure for mistyped/obsolete/invalid URL ap.add_argument('--node', nargs='+', help="Node request option 'option1=value1 option2=value2' (i.e. computer_guid=COMP-1234)") ap.add_argument('--type', help='Software type to be requested') ap.add_argument('--state', help='State of the requested instance') ap.add_argument('--slave', action='store_true', help='Ask for a slave instance') ap.add_argument('--parameters', nargs='+', help="Give your configuration 'option1=value1 option2=value2'") return ap def take_action(self, args): args.node = parse_option_dict(args.node) args.parameters = parse_option_dict(args.parameters) configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) do_request(self.app.log, conf, local) def do_request(logger, conf, local): logger.info('Requesting %s as instance of %s...', conf.reference, conf.software_url) conf.software_url = _getSoftwareReleaseFromSoftwareString( logger, conf.software_url, local['product']) if conf.software_url in local: conf.software_url = local[conf.software_url] try: partition = local['slap'].registerOpenOrder().request( software_release=conf.software_url, partition_reference=conf.reference, partition_parameter_kw=conf.parameters, software_type=conf.type, filter_kw=conf.node, state=conf.state, shared=conf.slave ) logger.info('Instance requested.\nState is : %s.', partition.getState()) logger.info('Connection parameters of instance are:') logger.info(pprint.pformat(partition.getConnectionParameterDict())) logger.info('You can rerun the command to get up-to-date information.') except ResourceNotReady: logger.warning('Instance requested. Master is provisioning it. Please rerun in a ' 'couple of minutes to get connection information.') exit(2) slapos.core-1.3.18/slapos/cli/register.py0000644000000000000000000003427713006625060020253 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import getpass import os import re import shutil import stat import sys import pkg_resources import requests from slapos.cli.command import Command, must_be_root from slapos.util import parse_certificate_key_pair class RegisterCommand(Command): """ register a node in the SlapOS cloud """ command_group = 'node' def get_parser(self, prog_name): ap = super(RegisterCommand, self).get_parser(prog_name) ap.add_argument('node_name', help='Name of the node') ap.add_argument('--interface-name', default='eth0', help='Primary network interface. IP of Partitions ' 'will be added to it' ' (default: %(default)s)') ap.add_argument('--master-url', default='https://slap.vifib.com', help='URL of SlapOS Master REST API' ' (default: %(default)s)') ap.add_argument('--master-url-web', default='https://slapos.vifib.com', help='URL of SlapOS Master webservice to register certificates' ' (default: %(default)s)') ap.add_argument('--partition-number', default=10, type=int, help='Number of partitions to create in the SlapOS Node' ' (default: %(default)s)') ap.add_argument('--ipv4-local-network', default='10.0.0.0/16', help='Subnetwork used to assign local IPv4 addresses. ' 'It should be a not used network in order to avoid conflicts' ' (default: %(default)s)') ap.add_argument('--ipv6-interface', help='Interface name to get ipv6') ap.add_argument('--login-auth', action='store_true', help='Force login and password authentication') ap.add_argument('--login', help='Your SlapOS Master login. ' 'Asks it interactively, then password.') ap.add_argument('--password', help='Your SlapOS Master password. If not provided, ' 'asks it interactively. NOTE: giving password as parameter ' 'should be avoided for security reasons.') ap.add_argument('--token', help="SlapOS 'computer security' authentication token") ap.add_argument('--create-tap', '-t', action='store_true', help='Will trigger creation of one virtual "tap" interface per ' 'Partition and attach it to primary interface. Requires ' 'primary interface to be a bridge. ' 'Needed to host virtual machines' ' (default: %(default)s)') ap.add_argument('--dry-run', '-n', action='store_true', help='Simulate the execution steps' ' (default: %(default)s)') return ap @must_be_root def take_action(self, args): try: conf = RegisterConfig(logger=self.app.log) conf.setConfig(args) return_code = do_register(conf) except SystemExit as err: return_code = err sys.exit(return_code) # XXX dry_run will happily register a new node on the slapos master. Isn't it supposed to be no-op? 
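# A hedged usage sketch for this command (the node name and token below are
# made-up placeholders, not values from this repository):
#
#   slapos node register COMP-EXAMPLE --interface-name eth0 \
#       --partition-number 10 --token SECURITY-TOKEN
#
# Each option shown maps to an argument declared in get_parser() above; when
# --token is given, do_register() skips the interactive login/password prompt.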
def check_credentials(url, login, password): """Check if login and password are correct""" req = requests.get(url, auth=(login, password), verify=False) return 'Logout' in req.text def get_certificate_key_pair(logger, master_url_web, node_name, token=None, login=None, password=None): """Download certificates from SlapOS Master""" if token: req = requests.post('/'.join([master_url_web, 'add-a-server/WebSection_registerNewComputer']), data={'title': node_name}, headers={'X-Access-Token': token}, verify=False) else: register_server_url = '/'.join([master_url_web, ("add-a-server/WebSection_registerNewComputer?dialog_id=WebSection_viewServerInformationDialog&dialog_method=WebSection_registerNewComputer&title={}&object_path=/erp5/web_site_module/hosting/add-a-server&update_method=&cancel_url=https%3A//www.vifib.net/add-a-server/WebSection_viewServerInformationDialog&Base_callDialogMethod=&field_your_title=Essai1&dialog_category=None&form_id=view".format(node_name))]) req = requests.get(register_server_url, auth=(login, password), verify=False) if not req.ok and 'Certificate still active.' in req.text: # raise a readable exception if the computer name is already used, # instead of an opaque 500 Internal Error. # this will not work with the new API. logger.error('The node name "%s" is already in use. ' 'Please change the name, or revoke the active ' 'certificate if you want to replace the node.', node_name) sys.exit(1) if req.status_code == 403: if token: msg = 'Please check the authentication token or require a new one.' else: msg = 'Please check username and password.' logger.critical('Access denied to the SlapOS Master. %s', msg) sys.exit(1) elif not req.ok and 'NotImplementedError' in req.text and not token: logger.critical('This SlapOS server does not support login/password ' 'authentication. 
Please use the token.') sys.exit(1) else: req.raise_for_status() return parse_certificate_key_pair(req.text) def get_computer_name(certificate): """Parse certificate to get computer name and return it""" k = certificate.find("COMP-") i = certificate.find("/email", k) return certificate[k:i] def save_former_config(conf): """Save former configuration if found""" former = '/etc/opt/slapos' if not os.path.exists(os.path.join(former, 'slapos.cfg')): return saved = former + '.old' while os.path.exists(saved): conf.logger.info('Slapos configuration detected in %s', saved) if saved[-1] == 'd': saved += '.1' else: # XXX this goes from 19 to 110 saved = saved[:-1] + str(int(saved[-1]) + 1) conf.logger.info('Former slapos configuration detected ' 'in %s moving to %s', former, saved) shutil.move(former, saved) def fetch_configuration_template(): template_arg_list = (__name__.split('.')[0], 'slapos.cfg.example') with pkg_resources.resource_stream(*template_arg_list) as fout: slapos_node_configuration_template = fout.read() return slapos_node_configuration_template def slapconfig(conf): """Base Function to configure slapos in /etc/opt/slapos""" dry_run = conf.dry_run # Create slapos configuration directory if needed slap_conf_dir = os.path.normpath(conf.slapos_configuration) # Make sure everybody can read slapos configuration directory: # Add +x to directories in path directory = os.path.dirname(slap_conf_dir) while True: if os.path.dirname(directory) == directory: break # Do "chmod g+xro+xr" os.chmod(directory, os.stat(directory).st_mode | stat.S_IXGRP | stat.S_IRGRP | stat.S_IXOTH | stat.S_IROTH) directory = os.path.dirname(directory) if not os.path.exists(slap_conf_dir): conf.logger.info('Creating directory: %s', slap_conf_dir) if not dry_run: os.mkdir(slap_conf_dir, 0o711) user_certificate_repository_path = os.path.join(slap_conf_dir, 'ssl') if not os.path.exists(user_certificate_repository_path): conf.logger.info('Creating directory: %s', user_certificate_repository_path) if not dry_run: os.mkdir(user_certificate_repository_path, 0o711) key_file = os.path.join(user_certificate_repository_path, 'key') cert_file = os.path.join(user_certificate_repository_path, 'certificate') for src, dst in [(conf.key, key_file), (conf.certificate, cert_file)]: conf.logger.info('Copying to %r, and setting minimum privileges', dst) if not dry_run: with open(dst, 'w') as destination: destination.write(''.join(src)) os.chmod(dst, 0o600) os.chown(dst, 0, 0) certificate_repository_path = os.path.join(slap_conf_dir, 'ssl', 'partition_pki') if not os.path.exists(certificate_repository_path): conf.logger.info('Creating directory: %s', certificate_repository_path) if not dry_run: os.mkdir(certificate_repository_path, 0o711) # Put slapos configuration file config_path = os.path.join(slap_conf_dir, 'slapos.cfg') # XXX: We should actually get the template from the egg, not from git cfg = fetch_configuration_template() to_replace = [ ('computer_id', conf.computer_id), ('master_url', conf.master_url), ('key_file', key_file), ('cert_file', cert_file), ('certificate_repository_path', certificate_repository_path), ('interface_name', conf.interface_name), ('ipv4_local_network', conf.ipv4_local_network), ('partition_amount', conf.partition_number), ('create_tap', conf.create_tap) ] if conf.ipv6_interface: to_replace.append(('ipv6_interface', conf.ipv6_interface)) for key, value in to_replace: cfg = re.sub('\n\s*%s\s*=.*' % key, '\n%s = %s' % (key, value), cfg) if not dry_run: with open(config_path, 'w') as fout: 
fout.write(cfg.encode('utf8')) conf.logger.info('SlapOS configuration written to %s', config_path) class RegisterConfig(object): """ Class containing all parameters needed for configuration """ def __init__(self, logger): self.logger = logger def setConfig(self, options): """ Set options given by parameters. """ # Set options parameters for option, value in options.__dict__.items(): setattr(self, option, value) def COMPConfig(self, slapos_configuration, computer_id, certificate, key): self.slapos_configuration = slapos_configuration self.computer_id = computer_id self.certificate = certificate self.key = key def displayUserConfig(self): self.logger.debug('Computer Name: %s', self.node_name) self.logger.debug('Master URL: %s', self.master_url) self.logger.debug('Number of partition: %s', self.partition_number) self.logger.info('Using Interface %s', self.interface_name) self.logger.debug('Ipv4 sub network: %s', self.ipv4_local_network) self.logger.debug('Ipv6 Interface: %s', self.ipv6_interface) def gen_auth(conf): ask = True if conf.login: if conf.password: yield conf.login, conf.password ask = False else: yield conf.login, getpass.getpass() while ask: yield raw_input('SlapOS Master Login: '), getpass.getpass() def do_register(conf): """Register new computer on SlapOS Master and generate slapos.cfg""" if conf.login or conf.login_auth: for login, password in gen_auth(conf): if check_credentials(conf.master_url_web, login, password): break conf.logger.warning('Wrong login/password') else: return 1 certificate, key = get_certificate_key_pair(conf.logger, conf.master_url_web, conf.node_name, login=login, password=password) else: while not conf.token: conf.token = raw_input('Computer security token: ').strip() certificate, key = get_certificate_key_pair(conf.logger, conf.master_url_web, conf.node_name, token=conf.token) # get computer id COMP = get_computer_name(certificate) # Getting configuration parameters conf.COMPConfig(slapos_configuration='/etc/opt/slapos/', computer_id=COMP, certificate=certificate, key=key) # Save former configuration if not conf.dry_run: save_former_config(conf) # Prepare Slapos Configuration slapconfig(conf) conf.logger.info('Node has successfully been configured as %s.', COMP) conf.logger.info('Now please invoke slapos node boot on your site.') return 0 slapos.core-1.3.18/slapos/cli/bang.py0000644000000000000000000000362412752436134017337 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from slapos.cli.command import must_be_root from slapos.cli.config import ConfigCommand from slapos.bang import do_bang class BangCommand(ConfigCommand): """ request update on all partitions """ command_group = 'node' def get_parser(self, prog_name): ap = super(BangCommand, self).get_parser(prog_name) ap.add_argument('-m', '--message', help='Message for bang') return ap @must_be_root def take_action(self, args): configp = self.fetch_config(args) do_bang(configp, args.message) slapos.core-1.3.18/slapos/cli/supervisord.py0000644000000000000000000000456112752436134021016 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from slapos.cli.config import ConfigCommand from slapos.grid.svcbackend import (launchSupervisord, createSupervisordConfiguration) class SupervisordCommand(ConfigCommand): """ launch, if not already running, supervisor daemon """ command_group = 'node' def get_parser(self, prog_name): ap = super(SupervisordCommand, self).get_parser(prog_name) ap.add_argument('-n', '--nodaemon', action='store_true', help='Do not daemonize supervisord') return ap def take_action(self, args): configp = self.fetch_config(args) instance_root = configp.get('slapos', 'instance_root') if args.nodaemon: supervisord_additional_argument_list = ['--nodaemon'] else: supervisord_additional_argument_list = [] createSupervisordConfiguration(instance_root) launchSupervisord( instance_root=instance_root, logger=self.app.log, supervisord_additional_argument_list=supervisord_additional_argument_list ) slapos.core-1.3.18/slapos/cli/collect.py0000644000000000000000000000350512752436134020053 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from slapos.collect import do_collect from slapos.cli.command import must_be_root from slapos.cli.config import ConfigCommand class CollectCommand(ConfigCommand): """ Collect system consumption and data and store. """ command_group = 'node' def get_parser(self, prog_name): ap = super(CollectCommand, self).get_parser(prog_name) return ap @must_be_root def take_action(self, args): configp = self.fetch_config(args) do_collect(configp) slapos.core-1.3.18/slapos/cli/proxy_show.py0000644000000000000000000001605012752436134020646 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import collections import hashlib import lxml.etree import prettytable import sqlite3 from slapos.cli.config import ConfigCommand from slapos.proxy import ProxyConfig from slapos.proxy.db_version import DB_VERSION from slapos.util import sqlite_connect class ProxyShowCommand(ConfigCommand): """ display proxy instances and parameters """ def get_parser(self, prog_name): ap = super(ProxyShowCommand, self).get_parser(prog_name) ap.add_argument('-u', '--database-uri', help='URI for sqlite database') ap.add_argument('--computers', help='view computer information', action='store_true') ap.add_argument('--software', help='view software releases', action='store_true') ap.add_argument('--partitions', help='view partitions', action='store_true') ap.add_argument('--slaves', help='view slave instances', action='store_true') ap.add_argument('--params', help='view published parameters', action='store_true') ap.add_argument('--network', help='view network settings', action='store_true') return ap def take_action(self, args): configp = self.fetch_config(args) conf = ProxyConfig(logger=self.app.log) conf.mergeConfig(args, configp) conf.setConfig() do_show(conf=conf) tbl_partition = 'partition' + DB_VERSION def coalesce(*seq): el = None for el in seq: if el is not None: return el return el def log_table(logger, qry, tablename, skip=None): if skip is None: skip = set() columns = [c[0] for c in qry.description if c[0] not in skip] rows = [] for row in qry.fetchall(): rows.append([coalesce(row[col], '-') for col in columns]) pt = prettytable.PrettyTable(columns) # https://code.google.com/p/prettytable/wiki/Tutorial for row in rows: pt.add_row(row) if rows: if skip: logger.info('table %s: skipping %s', tablename, ', '.join(skip)) else: logger.info('table %s', tablename) else: logger.info('table %s: empty', tablename) return for line in pt.get_string(border=True, padding_width=0, vrules=prettytable.NONE).split('\n'): logger.info(line) def log_params(logger, conn): cur = conn.cursor() qry = cur.execute("SELECT reference, partition_reference, software_type, connection_xml FROM %s" % tbl_partition) for row in qry.fetchall(): if not row['connection_xml']: continue xml = str(row['connection_xml']) logger.info('%s: %s (type %s)', row['reference'], row['partition_reference'], row['software_type']) instance = lxml.etree.fromstring(xml) for parameter in list(instance): name = parameter.get('id') text = parameter.text if text and name in ('ssh-key', 'ssh-public-key'): text = text[:20] + '...' 
+ text[-20:] logger.info(' %s = %s', name, text) def log_computer_table(logger, conn): tbl_computer = 'computer' + DB_VERSION cur = conn.cursor() qry = cur.execute("SELECT * FROM %s" % tbl_computer) log_table(logger, qry, tbl_computer) def log_software_table(logger, conn): tbl_software = 'software' + DB_VERSION cur = conn.cursor() qry = cur.execute("SELECT *, md5(url) as md5 FROM %s" % tbl_software) log_table(logger, qry, tbl_software) def log_partition_table(logger, conn): cur = conn.cursor() qry = cur.execute("SELECT * FROM %s WHERE slap_state<>'free'" % tbl_partition) log_table(logger, qry, tbl_partition, skip=['xml', 'connection_xml', 'slave_instance_list']) def log_slave_table(logger, conn): tbl_slave = 'slave' + DB_VERSION cur = conn.cursor() qry = cur.execute("SELECT * FROM %s" % tbl_slave) log_table(logger, qry, tbl_slave, skip=['connection_xml']) def log_network(logger, conn): tbl_partition_network = 'partition_network' + DB_VERSION cur = conn.cursor() addr = collections.defaultdict(list) qry = cur.execute(""" SELECT * FROM %s WHERE partition_reference NOT IN ( SELECT reference FROM %s WHERE slap_state='free') """ % (tbl_partition_network, tbl_partition)) for row in qry: addr[row['partition_reference']].append(row['address']) for partition_reference in sorted(addr.keys()): addresses = addr[partition_reference] logger.info('%s: %s', partition_reference, ', '.join(addresses)) def do_show(conf): conf.logger.debug('Using database: %s', conf.database_uri) conn = sqlite_connect(conf.database_uri) conn.row_factory = sqlite3.Row conn.create_function('md5', 1, lambda s: hashlib.md5(s).hexdigest()) call_table = [ (conf.computers, log_computer_table), (conf.software, log_software_table), (conf.partitions, log_partition_table), (conf.slaves, log_slave_table), (conf.params, log_params), (conf.network, log_network) ] if not any(flag for flag, func in call_table): to_call = [func for flag, func in call_table] else: to_call = [func for flag, func in call_table if flag] for idx, func in enumerate(to_call): func(conf.logger, conn) if idx < len(to_call) - 1: conf.logger.info(' ') slapos.core-1.3.18/slapos/cli/coloredlogs/0000755000000000000000000000000013006632706020361 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/cli/coloredlogs/LICENSE.txt0000644000000000000000000000204012752436134022204 0ustar rootroot00000000000000Copyright (c) 2013 Peter Odding Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
slapos.core-1.3.18/slapos/cli/coloredlogs/__init__.py0000644000000000000000000002155012752436134022501 0ustar rootroot00000000000000""" Colored terminal output for Python's logging module. Author: Peter Odding Last Change: May 10, 2014 URL: https://github.com/xolox/python-coloredlogs """ # Semi-standard module versioning. __version__ = '0.5' # Standard library modules. import copy import logging import os import re import socket import sys import time # Portable color codes from http://en.wikipedia.org/wiki/ANSI_escape_code#Colors. ansi_color_codes = dict(black=0, red=1, green=2, yellow=3, blue=4, magenta=5, cyan=6, white=7) # The logging handler attached to the root logger (initialized by install()). root_handler = None def ansi_text(message, color='black', bold=False): """ Wrap text in ANSI escape sequences for the given color and/or style. :param message: The text message (a string). :param color: The name of a color (one of the strings black, red, green, yellow, blue, magenta, cyan or white). :param bold: ``True`` if the text should be bold, ``False`` otherwise. :returns: The text message wrapped in ANSI escape sequences. """ return '\x1b[%i;3%im%s\x1b[0m' % (bold and 1 or 0, ansi_color_codes[color], message) def install(level=logging.INFO, **kw): """ Install a :py:class:`ColoredStreamHandler` for the root logger. Calling this function multiple times will never install more than one handler. :param level: The logging level to filter on (defaults to ``INFO``). :param kw: Optional keyword arguments for :py:class:`ColoredStreamHandler`. """ global root_handler if not root_handler: # Create the root handler. root_handler = ColoredStreamHandler(level=level, **kw) # Install the root handler. root_logger = logging.getLogger() root_logger.setLevel(logging.NOTSET) root_logger.addHandler(root_handler) # TODO Move these functions into ColoredStreamHandler? def increase_verbosity(): """ Increase the verbosity of the root handler by one defined level. Understands custom logging levels like defined by my ``verboselogs`` module. """ defined_levels = find_defined_levels() current_level = get_level() closest_level = min(defined_levels, key=lambda l: abs(l - current_level)) set_level(defined_levels[max(0, defined_levels.index(closest_level) - 1)]) def is_verbose(): """ Check whether the log level of the root handler is set to a verbose level. :returns: ``True`` if the root handler is verbose, ``False`` if not. """ return get_level() < logging.INFO def get_level(): """ Get the logging level of the root handler. :returns: The logging level of the root handler (an integer). """ install() return root_handler.level def set_level(level): """ Set the logging level of the root handler. :param level: The logging level to filter on (an integer). """ install() root_handler.level = level def find_defined_levels(): """ Find the defined logging levels. """ defined_levels = set() for name in dir(logging): if name.isupper(): value = getattr(logging, name) if isinstance(value, int): defined_levels.add(value) return sorted(defined_levels) class ColoredStreamHandler(logging.StreamHandler): """ The :py:class:`ColoredStreamHandler` class enables colored terminal output for a logger created with Python's :py:mod:`logging` module. The log handler formats log messages including timestamps, logger names and severity levels. It uses `ANSI escape sequences`_ to highlight timestamps and debug messages in green and error and warning messages in red. 
The handler does not use ANSI escape sequences when output redirection applies, for example when the standard error stream is being redirected to a file. Here's an example of its use:: # Create a logger object. import logging logger = logging.getLogger('your-module') # Initialize coloredlogs. import coloredlogs coloredlogs.install() coloredlogs.set_level(logging.DEBUG) # Some examples. logger.debug("this is a debugging message") logger.info("this is an informational message") logger.warn("this is a warning message") logger.error("this is an error message") logger.fatal("this is a fatal message") logger.critical("this is a critical message") .. _ANSI escape sequences: http://en.wikipedia.org/wiki/ANSI_escape_code#Colors """ def __init__(self, stream=sys.stderr, level=logging.NOTSET, isatty=None, show_name=True, show_severity=True, show_timestamps=True, show_hostname=True, use_chroot=True): logging.StreamHandler.__init__(self, stream) self.level = level self.show_timestamps = show_timestamps self.show_hostname = show_hostname self.show_name = show_name self.show_severity = show_severity if isatty is not None: self.isatty = isatty else: # Protect against sys.stderr.isatty() not being defined (e.g. in # the Python Interface to Vim). try: self.isatty = stream.isatty() except Exception: self.isatty = False if show_hostname: chroot_file = '/etc/debian_chroot' if use_chroot and os.path.isfile(chroot_file): with open(chroot_file) as handle: self.hostname = handle.read().strip() else: self.hostname = re.sub(r'\.local$', '', socket.gethostname()) if show_name: self.pid = os.getpid() def emit(self, record): """ Called by the :py:mod:`logging` module for each log record. Formats the log message and passes it onto :py:func:`logging.StreamHandler.emit()`. """ # If the message doesn't need to be rendered we take a shortcut. if record.levelno < self.level: return # Make sure the message is a string. message = record.msg try: if not isinstance(message, basestring): message = unicode(message) except NameError: if not isinstance(message, str): message = str(message) # Colorize the log message text. severity = record.levelname if severity == 'CRITICAL': message = self.wrap_color('red', message, bold=True) elif severity == 'ERROR': message = self.wrap_color('red', message) elif severity == 'WARNING': message = self.wrap_color('yellow', message) elif severity == 'VERBOSE': # The "VERBOSE" logging level is not defined by Python's logging # module; I've extended the logging module to support this level. message = self.wrap_color('blue', message) elif severity == 'DEBUG': message = self.wrap_color('green', message) # Compose the formatted log message as: # timestamp hostname name severity message # Everything except the message text is optional. parts = [] if self.show_timestamps: parts.append(self.wrap_color('green', self.render_timestamp(record.created))) if self.show_hostname: parts.append(self.wrap_color('magenta', self.hostname)) if self.show_name: parts.append(self.wrap_color('blue', self.render_name(record.name))) if self.show_severity: parts.append(self.wrap_color('black', severity, bold=True)) parts.append(message) message = ' '.join(parts) # Copy the original record so we don't break other handlers. record = copy.copy(record) record.msg = message # Use the built-in stream handler to handle output. logging.StreamHandler.emit(self, record) def render_timestamp(self, created): """ Format the time stamp of the log record. 
Receives the time when the LogRecord was created (as returned by :py:func:`time.time()`). By default this returns a string in the format ``YYYY-MM-DD HH:MM:SS``. """ return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(created)) def render_name(self, name): """ Format the name of the logger. Receives the name of the logger used to log the call. By default this returns a string in the format ``NAME[PID]`` (where PID is the process ID reported by :py:func:`os.getpid()`). """ return '%s[%s]' % (name, self.pid) def wrap_color(self, colorname, message, bold=False): """ Wrap text in ANSI escape sequences for the given color [and optionally to enable bold font]. """ if self.isatty: return ansi_text(message, color=colorname, bold=bold) else: return message # vim: ts=4 sw=4 et slapos.core-1.3.18/slapos/cli/coloredlogs/demo.py0000644000000000000000000000230412752436134021662 0ustar rootroot00000000000000# Demonstration of the coloredlogs package. # # Author: Peter Odding # Last Change: May 10, 2014 # URL: https://github.com/xolox/python-coloredlogs # Standard library modules. import logging import time # Modules included in our package. import coloredlogs # If my verbose logger is installed, we'll use that for the demo. try: from verboselogs import VerboseLogger as DemoLogger except ImportError: from logging import getLogger as DemoLogger # Initialize the logger and handler. logger = DemoLogger('coloredlogs') def main(): # Initialize colored output to the terminal. coloredlogs.install(level=logging.DEBUG) # Print some examples with different timestamps. for level in ['debug', 'verbose', 'info', 'warn', 'error', 'critical']: if hasattr(logger, level): getattr(logger, level)("message with level %r", level) time.sleep(1) # Show how exceptions are logged. try: class RandomException(Exception): pass raise RandomException("Something went horribly wrong!") except Exception as e: logger.exception(e) logger.info("Done, exiting ..") if __name__ == '__main__': main() slapos.core-1.3.18/slapos/cli/coloredlogs/converter.py0000644000000000000000000001000712752436134022744 0ustar rootroot00000000000000""" Program to convert text with ANSI escape sequences to HTML. Author: Peter Odding Last Change: May 10, 2014 URL: https://github.com/xolox/python-coloredlogs """ # Standard library modules. import pipes import re import subprocess import sys import tempfile import webbrowser # Portable color codes from http://en.wikipedia.org/wiki/ANSI_escape_code#Colors. EIGHT_COLOR_PALETTE = ('black', 'red', 'rgb(78, 154, 6)', # green 'rgb(196, 160, 0)', # yellow 'blue', 'rgb(117, 80, 123)', # magenta 'cyan', 'white') # Regular expression that matches strings we want to convert. Used to separate # all special strings and literal output in a single pass (this allows us to # properly encode the output without resorting to nasty hacks). token_pattern = re.compile('(https?://\\S+|www\\.\\S+|\x1b\\[.*?m)', re.UNICODE) def main(): """ Command line interface for the ``ansi2html`` program. Takes a command (and its arguments) and runs the program under ``script`` (emulating an interactive terminal), intercepts the output of the command and converts ANSI escape sequences in the output to HTML. 
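    A hypothetical invocation (the ``ansi2html`` name comes from this
    docstring, the wrapped command is only an example): running
    ``ansi2html ls --color=always`` executes ``ls`` under ``script``,
    converts the colored output to HTML, and opens the result in a web
    browser when stdout is a terminal; otherwise the HTML is printed to
    stdout.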
""" command = ['script', '-qe'] command.extend(['-c', ' '.join(pipes.quote(a) for a in sys.argv[1:])]) command.append('/dev/null') program = subprocess.Popen(command, stdout=subprocess.PIPE) stdout, stderr = program.communicate() html_output = convert(stdout) if sys.stdout.isatty(): fd, filename = tempfile.mkstemp(suffix='.html') with open(filename, 'w') as handle: handle.write(html_output) webbrowser.open(filename) else: print(html_output) def convert(text): """ Convert text with ANSI escape sequences to HTML. :param text: The text with ANSI escape sequences (a string). :returns: The text converted to HTML (a string). """ output = [] for token in token_pattern.split(text): if token.startswith(('http://', 'https://', 'www.')): url = token if '://' not in token: url = 'http://' + url text = url.partition('://')[2] token = u'%s' % (html_encode(url), html_encode(text)) elif token.startswith('\x1b['): ansi_codes = token[2:-1].split(';') if ansi_codes == ['0']: token = '' else: styles = [] for code in ansi_codes: if code == '1': styles.append('font-weight: bold;') elif code.startswith('3') and len(code) == 2: styles.append('color: %s;' % EIGHT_COLOR_PALETTE[int(code[1])]) if styles: token = '' % ' '.join(styles) else: token = '' else: token = html_encode(token) token = encode_whitespace(token) output.append(token) return ''.join(output) def encode_whitespace(text): """ Encode whitespace in text as HTML so that all whitespace (specifically indentation and line breaks) is preserved when the text is rendered in a web browser. :param text: The plain text (a string). :returns: The text converted to HTML (a string). """ text = text.replace('\r\n', '\n') text = text.replace('\n', '
\n') text = text.replace(' ', ' ') return text def html_encode(text): """ Encode special characters as HTML so that web browsers render the characters literally instead of messing up the rendering :-). :param text: The plain text (a string). :returns: The text converted to HTML (a string). """ text = text.replace('&', '&') text = text.replace('<', '<') text = text.replace('>', '>') text = text.replace('"', '"') return text # vim: ts=4 sw=4 et slapos.core-1.3.18/slapos/cli/proxy_start.py0000644000000000000000000000502412752436134021022 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging from slapos.cli.config import ConfigCommand from slapos.proxy import do_proxy, ProxyConfig class ProxyStartCommand(ConfigCommand): """ minimalist, stand-alone SlapOS Master """ def get_parser(self, prog_name): ap = super(ProxyStartCommand, self).get_parser(prog_name) ap.add_argument('-u', '--database-uri', help='URI for sqlite database') ap.add_argument('--port', help='Port to use') ap.add_argument('--host', help='Host to use') return ap def take_action(self, args): configp = self.fetch_config(args) conf = ProxyConfig(logger=self.app.log) conf.mergeConfig(args, configp) if not self.app.options.log_file and hasattr(conf, 'log_file'): # no log file is provided by argparser, # we set up the one from config file_handler = logging.FileHandler(conf.log_file) formatter = logging.Formatter(self.app.LOG_FILE_MESSAGE_FORMAT) file_handler.setFormatter(formatter) self.app.log.addHandler(file_handler) conf.setConfig() do_proxy(conf=conf) slapos.core-1.3.18/slapos/cli/info.py0000644000000000000000000000616112752436134017362 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import pprint import sys from slapos.cli.config import ClientConfigCommand from slapos.client import init, ClientConfig from slapos.slap import ResourceNotReady, NotFoundError def resetLogger(logger): """Remove all formatters, log files, etc.""" if not getattr(logger, 'parent', None): return handler = logger.parent.handlers[0] logger.parent.removeHandler(handler) logger.addHandler(logging.StreamHandler(sys.stdout)) class InfoCommand(ClientConfigCommand): """get status, software_release and parameters of an instance""" def get_parser(self, prog_name): ap = super(InfoCommand, self).get_parser(prog_name) ap.add_argument('reference', help='Your instance reference') return ap def take_action(self, args): configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) exit_code = do_info(self.app.log, conf, local) if exit_code != 0: exit(exit_code) def do_info(logger, conf, local): resetLogger(logger) try: instance = local['slap'].registerOpenOrder().getInformation( partition_reference=conf.reference, ) except ResourceNotReady: logger.warning('Instance does not exist or is not ready yet.') return(2) except NotFoundError: logger.warning('Instance %s does not exist.', conf.reference) return(2) logger.info('Software Release URL: %s', instance._software_release_url) logger.info('Instance state: %s', instance._requested_state) logger.info('Instance parameters:') logger.info(pprint.pformat(instance._parameter_dict)) logger.info('Connection parameters:') logger.info(pprint.pformat(instance._connection_dict)) slapos.core-1.3.18/slapos/cli/entry.py0000644000000000000000000002711512752436134017572 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import argparse import codecs import collections import locale import logging import sys import os # hack to avoid a bug in cmd2: https://bitbucket.org/catherinedevlin/cmd2/issue/1/silent-editor-check # must be done before importing cliff os.environ.setdefault('EDITOR', 'vi') from cliff.app import App from cliff.commandmanager import CommandManager, LOG from requests.packages import urllib3 import slapos.version # silence messages like 'Starting connection' that are logged with INFO urllib3_logger = logging.getLogger('requests.packages.urllib3') urllib3_logger.setLevel(logging.WARNING) urllib3.disable_warnings() class SlapOSCommandManager(CommandManager): def find_command(self, argv): """Given an argument list, find a command and return the processor and any remaining arguments. """ # a little cheating, 'slapos node' is not documented by the help command if argv == ['node']: argv = ['node', 'status'] search_args = argv[:] name = '' while search_args: if search_args[0].startswith('-'): LOG.critical('slapos: invalid option %r' % search_args[0]) sys.exit(5) next_val = search_args.pop(0) name = '%s %s' % (name, next_val) if name else next_val if name in self.commands: cmd_ep = self.commands[name] cmd_factory = cmd_ep.load() return (cmd_factory, name, search_args) else: LOG.critical('slapos: the command %r does not exist or is not yet implemented.\n' '\n' 'Available commands: %s\n\n' 'Please find documentation and forum at http://community.slapos.org\n' 'Please also make sure that the SlapOS Node package is up to date.', ' '.join(argv), ', '.join(sorted(repr(c) for c in self.commands))) sys.exit(5) class SlapOSHelpAction(argparse.Action): """ Adapted from cliff.help.HelpAction, this class detects and outputs command groups, via the .command_group attribute of the Command class. Must be a class attribute in case the class cannot be instantiated ('Could not load' message). 
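    As an illustration (behaviour inferred from _help_line() below): commands
    whose factory defines ``command_group = 'node'``, such as bang or boot,
    are listed together under a "node commands:" heading, while commands
    without the attribute fall back into the "other" group.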
""" def __call__(self, parser, namespace, values, option_string=None): app = self.default parser.print_help(app.stdout) command_manager = app.command_manager groups = collections.defaultdict(list) for name, ep in sorted(command_manager): command_group, help_line = self._help_line(ep, name) groups[command_group].append(help_line) for group in sorted(groups): app.stdout.write('\n%s commands:\n' % group) for line in sorted(groups[group]): app.stdout.write(line) sys.exit(0) def _help_line(self, ep, name): try: factory = ep.load() except Exception as err: return 'Could not load %r\n' % ep try: cmd = factory(self, None) except Exception as err: return 'Could not instantiate %r: %s\n' % (ep, err) one_liner = cmd.get_description().split('\n')[0] group = getattr(factory, 'command_group', 'other') return group, ' %-13s %s\n' % (name, one_liner) class SlapOSApp(App): # # self.options.verbose_level: # -q -> 0 (WARNING) # -> 1 (INFO) # -v -> 2 (DEBUG) # -vv -> 3 (...) # etc. # log = logging.getLogger('slapos') def __init__(self): super(SlapOSApp, self).__init__( description='SlapOS client %s' % slapos.version.version, version=slapos.version.version, command_manager=SlapOSCommandManager('slapos.cli'), ) def _set_streams(self, stdin, stdout, stderr): try: # SlapOS: might fail in some systems locale.setlocale(locale.LC_ALL, '') except locale.Error: pass if sys.version_info[:2] == (2, 6): # Configure the input and output streams. If a stream is # provided, it must be configured correctly by the # caller. If not, make sure the versions of the standard # streams used by default are wrapped with encodings. This # works around a problem with Python 2.6 fixed in 2.7 and # later (http://hg.python.org/cpython/rev/e60ef17561dc/). lang, encoding = locale.getdefaultlocale() encoding = getattr(sys.stdout, 'encoding', None) or encoding self.stdin = stdin or codecs.getreader(encoding)(sys.stdin) self.stdout = stdout or codecs.getwriter(encoding)(sys.stdout) self.stderr = stderr or codecs.getwriter(encoding)(sys.stderr) else: self.stdin = stdin or sys.stdin self.stdout = stdout or sys.stdout self.stderr = stderr or sys.stderr def build_option_parser(self, *args, **kw): kw.setdefault('argparse_kwargs', {}) kw['argparse_kwargs']['conflict_handler'] = 'resolve' parser = super(SlapOSApp, self).build_option_parser(*args, **kw) # add two aliases for --log-file (for compatibility with old commands) parser.add_argument( '--log-file', '--logfile', '--log_file', action='store', default=None, help='Specify a file to log output (default: console only)', ) parser.add_argument( '--log-color', action='store_true', help='Colored log output in console (stripped if redirected)', default=True, ) parser.add_argument( '--log-time', action='store_false', default=True, help='Include timestamp in console log', ) parser.add_argument( '-h', '--help', action=SlapOSHelpAction, nargs=0, default=self, # tricky help="show this help message and exit", ) return parser def initialize_app(self, argv): if self.options.verbose_level > 2: self.log.debug('initialize_app') def prepare_to_run_command(self, cmd): if self.options.verbose_level > 2: self.log.debug('prepare_to_run_command %s', cmd.__class__.__name__) def clean_up(self, cmd, result, err): if self.options.verbose_level > 2: self.log.debug('clean_up %s', cmd.__class__.__name__) @property def CONSOLE_MESSAGE_FORMAT(self): fmt = [] if self.options.log_time and not self.options.log_color: fmt.append('[%(asctime)s]') if not self.options.log_color: fmt.append('%(levelname)s') fmt.append('%(message)s') 
return ' '.join(fmt) @property def LOG_FILE_MESSAGE_FORMAT(self): return '[%(asctime)s] %(levelname)-8s %(message)s' def configure_logging(self): """Create logging handlers for any log output. """ root_logger = logging.getLogger('') root_logger.setLevel(logging.DEBUG) # Set up logging to a file if self.options.log_file: file_handler = logging.FileHandler( filename=self.options.log_file, ) formatter = logging.Formatter(self.LOG_FILE_MESSAGE_FORMAT) file_handler.setFormatter(formatter) root_logger.addHandler(file_handler) # Always send higher-level messages to the console via stderr if self.options.log_color: import coloredlogs console = coloredlogs.ColoredStreamHandler(show_name=True, # logger name (slapos) and PID show_severity=True, show_timestamps=self.options.log_time, show_hostname=False) else: console = logging.StreamHandler(self.stderr) console_level = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG, }.get(self.options.verbose_level, logging.DEBUG) console.setLevel(console_level) formatter = logging.Formatter(self.CONSOLE_MESSAGE_FORMAT) console.setFormatter(formatter) root_logger.addHandler(console) return def run(self, argv): # same as cliff.app.App.run except that it won't re-raise # a logged exception, and doesn't use --debug self.options, remainder = self.parser.parse_known_args(argv) self.configure_logging() self.interactive_mode = not remainder try: self.initialize_app(remainder) except Exception as err: self.log.exception(err) return 1 if self.interactive_mode: result = self.interact() else: result = self.run_subcommand(remainder) return result def run_subcommand(self, argv): # same as cliff.app.App.run_subcommand except that it won't re-raise # a logged exception, and doesn't use --debug subcommand = self.command_manager.find_command(argv) cmd_factory, cmd_name, sub_argv = subcommand cmd = cmd_factory(self, self.options) err = None result = 1 try: self.prepare_to_run_command(cmd) full_name = (cmd_name if self.interactive_mode else ' '.join([self.NAME, cmd_name]) ) cmd_parser = cmd.get_parser(full_name) parsed_args = cmd_parser.parse_args(sub_argv) result = cmd.run(parsed_args) except Exception as err: self.log.exception(err) try: self.clean_up(cmd, result, err) except Exception as err2: self.log.exception(err2) else: try: self.clean_up(cmd, result, None) except Exception as err3: self.log.exception(err3) return result def main(argv=sys.argv[1:]): app = SlapOSApp() if not argv: argv = ['-h'] return app.run(argv) if __name__ == '__main__': sys.exit(main(sys.argv[1:])) slapos.core-1.3.18/slapos/cli/supervisorctl.py0000644000000000000000000001020212752436134021342 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import argparse from slapos.cli.command import check_root_user from slapos.cli.config import ConfigCommand import supervisor.supervisorctl class SupervisorctlCommand(ConfigCommand): """ open supervisor console, for process management """ command_group = 'node' def get_parser(self, prog_name): ap = super(SupervisorctlCommand, self).get_parser(prog_name) ap.add_argument('supervisor_args', nargs=argparse.REMAINDER, help='parameters passed to supervisorctl') return ap def _should_check_current_user_is_root(self, configp): if not configp.has_option('slapos', 'root_check'): return True return configp.getboolean('slapos', 'root_check') def _should_forbid_supervisord_launch(self, configp): if not configp.has_option('slapos', 'forbid_supervisord_automatic_launch'): return False return configp.getboolean('slapos', 'forbid_supervisord_automatic_launch') def take_action(self, args): configp = self.fetch_config(args) # Parse if we have to check if running from root # XXX document this feature. if self._should_check_current_user_is_root(configp): check_root_user(self) instance_root = configp.get('slapos', 'instance_root') forbid_supervisord_launch = self._should_forbid_supervisord_launch(configp) do_supervisorctl( self.app.log, instance_root, args.supervisor_args, forbid_supervisord_launch ) def do_supervisorctl(logger, instance_root, supervisor_args, forbid_supervisord_launch=False): from slapos.grid.svcbackend import (launchSupervisord, _getSupervisordConfigurationFilePath) if forbid_supervisord_launch is False: launchSupervisord(instance_root=instance_root, logger=logger) supervisor.supervisorctl.main( args=['-c', _getSupervisordConfigurationFilePath(instance_root)] + supervisor_args ) class SupervisorctlAliasCommand(SupervisorctlCommand): def take_action(self, args): args.supervisor_args = [self.alias] + args.supervisor_args super(SupervisorctlAliasCommand, self).take_action(args) class SupervisorctlStatusCommand(SupervisorctlAliasCommand): """alias for 'node supervisorctl status'""" alias = 'status' class SupervisorctlStartCommand(SupervisorctlAliasCommand): """alias for 'node supervisorctl start'""" alias = 'start' class SupervisorctlStopCommand(SupervisorctlAliasCommand): """alias for 'node supervisorctl stop'""" alias = 'stop' class SupervisorctlRestartCommand(SupervisorctlAliasCommand): """alias for 'node supervisorctl restart'""" alias = 'restart' class SupervisorctlTailCommand(SupervisorctlAliasCommand): """alias for 'node supervisorctl tail'""" alias = 'tail' slapos.core-1.3.18/slapos/cli/console.py0000644000000000000000000001100712752436134020064 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import textwrap from slapos.cli.config import ClientConfigCommand from slapos.client import init, do_console, ClientConfig class ShellNotFound(Exception): pass class ConsoleCommand(ClientConfigCommand): """ open python console with slap library imported You can play with the global "slap" object and with the global "request" method. examples : >>> # Request instance >>> request(kvm, "myuniquekvm") >>> # Request software installation on owned computer >>> supply(kvm, "mycomputer") >>> # Fetch instance informations on already launched instance >>> request(kvm, "myuniquekvm").getConnectionParameter("url") """ def get_parser(self, prog_name): ap = super(ConsoleCommand, self).get_parser(prog_name) ap.add_argument('-u', '--master_url', help='Url of SlapOS Master to use') ap.add_argument('-k', '--key_file', help='SSL Authorisation key file') ap.add_argument('-c', '--cert_file', help='SSL Authorisation certificate file') shell = ap.add_mutually_exclusive_group() shell.add_argument('-i', '--ipython', action='store_true', help='Use IPython shell if available (default)') shell.add_argument('-b', '--bpython', action='store_true', help='Use BPython shell if available') shell.add_argument('-p', '--python', action='store_true', help='Use plain Python shell') return ap def take_action(self, args): configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) if not any([args.python, args.ipython, args.bpython]): args.ipython = True if args.ipython: try: do_ipython_console(local) except ShellNotFound: self.app.log.info('IPython not available - using plain Python shell') do_console(local) elif args.bpython: try: do_bpython_console(local) except ShellNotFound: self.app.log.info('bpython not available - using plain Python shell') do_console(local) else: do_console(local) console_banner = """\ slapos console allows you interact with slap API. You can play with the global "slap" object and with the global request() and supply() methods. 
examples : >>> # Request instance >>> request(kvm, "myuniquekvm") >>> # Request software installation on owned computer >>> supply(kvm, "mycomputer") >>> # Fetch instance informations on already launched instance >>> request(kvm, "myuniquekvm").getConnectionParameter("url") """ def do_bpython_console(local): try: from bpython import embed except ImportError: raise ShellNotFound embed(banner=console_banner, locals_=local) def do_ipython_console(local): try: from IPython import embed except ImportError: raise ShellNotFound embed(banner1=console_banner, user_ns=local) slapos.core-1.3.18/slapos/cli/__init__.py0000644000000000000000000000000012752436134020150 0ustar rootroot00000000000000slapos.core-1.3.18/slapos/cli/supply.py0000644000000000000000000000512412752436134017761 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from slapos.cli.config import ClientConfigCommand from slapos.client import init, ClientConfig, _getSoftwareReleaseFromSoftwareString class SupplyCommand(ClientConfigCommand): """ supply a Software to a node """ def get_parser(self, prog_name): ap = super(SupplyCommand, self).get_parser(prog_name) ap.add_argument('software_url', help='Your software url') ap.add_argument('node', help='Target node') return ap def take_action(self, args): configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) do_supply(self.app.log, args.software_url, args.node, local) def do_supply(logger, software_release, computer_id, local): """ Request installation of Software Release 'software_release' on computer 'computer_id'. """ logger.info('Requesting software installation of %s...', software_release) software_release = _getSoftwareReleaseFromSoftwareString( logger, software_release, local['product']) local['supply']( software_release=software_release, computer_guid=computer_id, state='available' ) logger.info('Done.') slapos.core-1.3.18/slapos/cli/format.py0000644000000000000000000001124512752436134017716 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import sys from slapos.cli.command import check_root_user from slapos.cli.config import ConfigCommand from slapos.format import do_format, FormatConfig, tracing_monkeypatch, UsageError from slapos.util import string_to_boolean class FormatCommand(ConfigCommand): """ create users, partitions and network configuration """ command_group = 'node' def get_parser(self, prog_name): ap = super(FormatCommand, self).get_parser(prog_name) ap.add_argument('-x', '--computer_xml', help="Path to file with computer's XML. If does not exists, will be created") ap.add_argument('--computer_json', help="Path to a JSON version of the computer's XML (for development only)") ap.add_argument('-i', '--input_definition_file', help="Path to file to read definition of computer instead of " "declaration. Using definition file allows to disable " "'discovery' of machine services and allows to define computer " "configuration in fully controlled manner.") ap.add_argument('-o', '--output_definition_file', help="Path to file to write definition of computer from " "declaration.") ap.add_argument('--alter_user', choices=['True', 'False'], help='Shall slapformat alter user database' ' (default: %(default)s)') ap.add_argument('--alter_network', choices=['True', 'False'], help='Shall slapformat alter network configuration' ' (default: %(default)s)') ap.add_argument('--now', default=False, action="store_true", help='Launch slapformat without delay' ' (default: %(default)s)') ap.add_argument('-n', '--dry_run', default=False, action="store_true", help="Don't actually do anything" " (default: %(default)s)") ap.add_argument('-c', '--console', help="Console output (obsolete)") return ap def take_action(self, args): configp = self.fetch_config(args) conf = FormatConfig(logger=self.app.log) conf.mergeConfig(args, configp) # Parse if we have to check if running from root # XXX document this feature. 
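        # 'root_check' is read from the configuration file (the other node commands
        # read it from the [slapos] section); set it to "false" to allow running
        # 'slapos node format' as a non-root user.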
if string_to_boolean(getattr(conf, 'root_check', 'True').lower()): check_root_user(self) if not self.app.options.log_file and conf.log_file: # no log file is provided by argparser, # we set up the one from config file_handler = logging.FileHandler(conf.log_file) formatter = logging.Formatter(self.app.LOG_FILE_MESSAGE_FORMAT) file_handler.setFormatter(formatter) self.app.log.addHandler(file_handler) try: conf.setConfig() except UsageError as err: sys.stderr.write(err.message + '\n') sys.stderr.write("For help use --help\n") sys.exit(1) tracing_monkeypatch(conf) do_format(conf=conf) slapos.core-1.3.18/slapos/cli/cache.py0000644000000000000000000001006612752436134017471 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import ast import hashlib import json import re import requests import sys import prettytable from slapos.grid import networkcache from slapos.grid.distribution import distribution_tuple from slapos.cli.config import ConfigCommand class CacheLookupCommand(ConfigCommand): """ perform a query to the networkcache You can provide either a complete URL to the software release, or a corresponding MD5 hash value. The command will report which OS distribution/version have a binary cache of the software release, and which ones are compatible with the OS you are currently running. """ def get_parser(self, prog_name): ap = super(CacheLookupCommand, self).get_parser(prog_name) ap.add_argument('software_url', help='Your software url or MD5 hash') return ap def take_action(self, args): configp = self.fetch_config(args) cache_dir = configp.get('networkcache', 'download-binary-dir-url') do_lookup(self.app.log, cache_dir, args.software_url) def looks_like_md5(s): """ Return True if the parameter looks like an hashed value. Not 100% precise, but we're actually more interested in filtering out URLs and pathnames. 
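    For example, "d41d8cd98f00b204e9800998ecf8427e" is treated as a hash,
    while URLs and pathnames are not, since they do not start with 32
    consecutive hexadecimal characters.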
""" return re.match('[0-9a-f]{32}', s) def do_lookup(logger, cache_dir, software_url): if looks_like_md5(software_url): md5 = software_url else: md5 = hashlib.md5(software_url).hexdigest() try: url = '%s/%s' % (cache_dir, md5) logger.debug('Connecting to %s', url) req = requests.get(url, timeout=5) except (requests.Timeout, requests.ConnectionError): logger.critical('Cannot connect to cache server at %s', url) sys.exit(10) if not req.ok: if req.status_code == 404: logger.critical('Object not in cache: %s', software_url) else: logger.critical('Error while looking object %s: %s', software_url, req.reason) sys.exit(10) entries = req.json() if not entries: logger.info('Object found in cache, but has no binary entries.') return ostable = sorted(ast.literal_eval(json.loads(entry[0])['os']) for entry in entries) pt = prettytable.PrettyTable(['distribution', 'version', 'id', 'compatible?']) linux_distribution = distribution_tuple() for os in ostable: compatible = 'yes' if networkcache.os_matches(os, linux_distribution) else 'no' pt.add_row([os[0], os[1], os[2], compatible]) meta = json.loads(entries[0][0]) logger.info('Software URL: %s', meta['software_url']) logger.info('MD5: %s', md5) for line in pt.get_string(border=True, padding_width=0, vrules=prettytable.NONE).split('\n'): logger.info(line) slapos.core-1.3.18/slapos/cli/configure_client.py0000644000000000000000000001333613006625060021737 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import re import os import sys import requests from slapos.cli.config import ClientConfigCommand from slapos.util import mkdir_p, parse_certificate_key_pair class ConfigureClientCommand(ClientConfigCommand): """ configure slapos client with an existing account """ def get_parser(self, prog_name): ap = super(ConfigureClientCommand, self).get_parser(prog_name) ap.add_argument('--master-url', default='https://slap.vifib.com', help='URL of SlapOS Master REST API' ' (default: %(default)s)') ap.add_argument('--master-url-web', default='https://slapos.vifib.com', help='URL of SlapOS Master webservice to register certificates' ' (default: %(default)s)') ap.add_argument('--token', help="SlapOS 'credential security' authentication token " "(use '--token ask' for interactive prompt)") return ap def take_action(self, args): do_configure_client(logger=self.app.log, master_url_web=args.master_url_web, token=args.token, config_path=self.config_path(args), master_url=args.master_url) def get_certificate_key_pair(logger, master_url_web, token): req = requests.post('/'.join([master_url_web, 'myspace/my_account/request-a-certificate/WebSection_requestNewCertificate']), data={}, headers={'X-Access-Token': token}, verify=False) if req.status_code == 403: logger.critical('Access denied to the SlapOS Master. ' 'Please check the authentication token or require a new one.') sys.exit(1) req.raise_for_status() return parse_certificate_key_pair(req.text) def fetch_configuration_template(): # XXX: change to local version. req = requests.get('http://git.erp5.org/gitweb/slapos.core.git/blob_plain/HEAD:/slapos-client.cfg.example') req.raise_for_status() return req.text def do_configure_client(logger, master_url_web, token, config_path, master_url): while not token: token = raw_input('Credential security token: ').strip() # Check for existence of previous configuration, certificate or key files # where we expect to create them. If so, ask the use to manually remove them. if os.path.exists(config_path): logger.critical('There is a file in %s. ' 'Please remove it before creating a new configuration.', config_path) sys.exit(1) basedir = os.path.dirname(config_path) if not os.path.isdir(basedir): logger.debug('Creating directory %s', basedir) mkdir_p(basedir, mode=0o700) cert_path = os.path.join(basedir, 'client.crt') if os.path.exists(cert_path): logger.critical('There is a file in %s. ' 'Please remove it before creating a new certificate.', cert_path) sys.exit(1) key_path = os.path.join(basedir, 'client.key') if os.path.exists(key_path): logger.critical('There is a file in %s. 
' 'Please remove it before creating a new key.', key_path) sys.exit(1) # retrieve a template for the configuration file cfg = fetch_configuration_template() cfg = re.sub('master_url = .*', 'master_url = %s' % master_url, cfg) cfg = re.sub('cert_file = .*', 'cert_file = %s' % cert_path, cfg) cfg = re.sub('key_file = .*', 'key_file = %s' % key_path, cfg) # retrieve and parse the certicate and key certificate, key = get_certificate_key_pair(logger, master_url_web, token) # write everything with os.fdopen(os.open(config_path, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o600), 'w') as fout: logger.debug('Writing configuration to %s', config_path) fout.write(cfg) with os.fdopen(os.open(cert_path, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o600), 'w') as fout: logger.debug('Writing certificate to %s', cert_path) fout.write(certificate) with os.fdopen(os.open(key_path, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o600), 'w') as fout: logger.debug('Writing key to %s', key_path) fout.write(key) logger.info('SlapOS client configuration written to %s', config_path) slapos.core-1.3.18/slapos/cli/slapgrid.py0000644000000000000000000001556512752436134020244 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from slapos.cli.command import check_root_user from slapos.cli.config import ConfigCommand from slapos.grid.utils import setRunning, setFinished from slapos.grid.slapgrid import (merged_options, check_missing_parameters, check_missing_files, random_delay, create_slapgrid_object) from slapos.util import string_to_boolean class SlapgridCommand(ConfigCommand): command_group = 'node' method_name = NotImplemented default_pidfile = NotImplemented def get_parser(self, prog_name): ap = super(SlapgridCommand, self).get_parser(prog_name) # TODO move more options to the instance, software and report subclasses ap.add_argument('--instance-root', help='The instance root directory location.') ap.add_argument('--software-root', help='The software_root directory location.') ap.add_argument('--master-url', help='The master server URL. 
Mandatory.') ap.add_argument('--computer-id', help='The computer id defined in the server.') ap.add_argument('--supervisord-socket', help='The socket supervisor will use.') ap.add_argument('--supervisord-configuration-path', help='The location where supervisord configuration will be stored.') ap.add_argument('--buildout', help='Location of buildout binary.') ap.add_argument('--pidfile', help='The location where pidfile will be created. ' 'Can be provided by configuration file, or defaults ' 'to %s' % self.default_pidfile) ap.add_argument('--key_file', help='SSL Authorisation key file.') ap.add_argument('--cert_file', help='SSL Authorisation certificate file.') ap.add_argument('--signature_private_key_file', help='Signature private key file.') ap.add_argument('--master_ca_file', help='Root certificate of SlapOS master key.') ap.add_argument('--certificate_repository_path', help='Path to directory where downloaded certificates would be stored.') ap.add_argument('--maximum-periodicity', type=int, help='Periodicity at which buildout should be run in instance.') ap.add_argument('--promise-timeout', default=3, type=int, help='Promise timeout in seconds' ' (default: %(default)s)') ap.add_argument('--now', action='store_true', help='Launch slapgrid without delay. Default behavior.') ap.add_argument('--maximal_delay', help='Deprecated. Will only work from configuration file in the future.') return ap def take_action(self, args): configp = self.fetch_config(args) options = merged_options(args, configp) # Parse if we have to check if running from root # XXX document this feature. if string_to_boolean(options.get('root_check', 'True').lower()): check_root_user(self) check_missing_parameters(options) check_missing_files(options) random_delay(options, logger=self.app.log) slapgrid_object = create_slapgrid_object(options, logger=self.app.log) pidfile = options.get('pidfile') or self.default_pidfile if pidfile: setRunning(logger=self.app.log, pidfile=pidfile) try: return getattr(slapgrid_object, self.method_name)() finally: if pidfile: setFinished(pidfile) class SoftwareCommand(SlapgridCommand): """run software installation/deletion""" method_name = 'processSoftwareReleaseList' default_pidfile = '/opt/slapos/slapgrid-sr.pid' def get_parser(self, prog_name): ap = super(SoftwareCommand, self).get_parser(prog_name) only = ap.add_mutually_exclusive_group() only.add_argument('--all', action='store_true', help='Process all Software Releases, even if already installed.') only.add_argument('--only-sr', '--only', help='Force the update of a single software release (can be full URL or MD5 hash), ' 'even if is already installed. 
This option will make all other ' 'sofware releases be ignored.') return ap class InstanceCommand(SlapgridCommand): """run instance deployment""" method_name = 'processComputerPartitionList' default_pidfile = '/opt/slapos/slapgrid-cp.pid' def get_parser(self, prog_name): ap = super(InstanceCommand, self).get_parser(prog_name) only = ap.add_mutually_exclusive_group() only.add_argument('--all', action='store_true', help='Process all Computer Partitions.') only.add_argument('--only-cp', '--only', help='Update a single or a list of computer partitions ' '(ie.:slappartX, slappartY), ' 'this option will make all other computer partitions be ignored.') return ap class ReportCommand(SlapgridCommand): """run instance reports and garbage collection""" method_name = 'agregateAndSendUsage' default_pidfile = '/opt/slapos/slapgrid-ur.pid' slapos.core-1.3.18/slapos/cli/configure_local/0000755000000000000000000000000013006632706021200 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/cli/configure_local/__init__.py0000644000000000000000000002460112752436134023320 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2013 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import pkg_resources import re import subprocess import sys from slapos.cli.command import must_be_root from slapos.format import FormatConfig from slapos.cli.config import ConfigCommand from slapos.grid.slapgrid import create_slapgrid_object from slapos.grid.utils import updateFile, createPrivateDirectory from slapos.grid.svcbackend import launchSupervisord DEFAULT_COMPUTER_ID = 'local_computer' class ConfigureLocalCommand(ConfigCommand): """ Configure a slapos node, from scratch to ready-ro-use, using slapproxy. """ def get_parser(self, prog_name): ap = super(self.__class__, self).get_parser(prog_name) ap.add_argument('--interface-name', default='lo', help='Primary network interface. IP of Partitions ' 'will be added to it' ' (default: %(default)s)') ap.add_argument('--partition-number', default=20, type=int, help='Number of partitions to create in the SlapOS Node' ' (default: %(default)s)') ap.add_argument('--ipv4-local-network', default='10.0.0.0/16', help='Subnetwork used to assign local IPv4 addresses. 
' 'It should be a not used network in order to' ' avoid conflicts (default: %(default)s)') ap.add_argument('--daemon-listen-ip', default='127.0.0.1', help='Listening IP of the "slapproxy" daemon' ' (default: %(default)s)') ap.add_argument('--daemon-listen-port', default='8080', help='Listening port of the "slapproxy" daemon' ' (default: %(default)s)') ap.add_argument('--slapos-instance-root', default='/srv/slapgrid', help='Target location of the SlapOS configuration' ' directory (default: %(default)s)') ap.add_argument('--slapos-software-root', default='/opt/slapgrid', help='Target location of the SlapOS configuration' ' directory (default: %(default)s)') ap.add_argument('--slapos-buildout-directory', default='/opt/slapos', help='Target location of the SlapOS configuration' ' directory (default: %(default)s)') ap.add_argument('--slapos-configuration-directory', default='/etc/opt/slapos', help='Target location of the SlapOS configuration' ' directory (default: %(default)s)') return ap @must_be_root def take_action(self, args): try: return_code = do_configure(args, self.fetch_config, self.app.log) except SystemExit as err: return_code = err sys.exit(return_code) def _createConfigurationDirectory(target_directory): target_directory = os.path.normpath(target_directory) if not os.path.exists(target_directory): os.makedirs(target_directory) def _replaceParameterValue(original_content, to_replace): """ Replace in a .ini-like file the value of all parameters specified in to_replace by their value. """ for key, value in to_replace: original_content = re.sub('%s\s+=.*' % key, '%s = %s' % (key, value), original_content) return original_content def _generateSlaposNodeConfigurationFile(slapos_node_config_path, args): template_arg_list = (__name__, '../../slapos.cfg.example') with pkg_resources.resource_stream(*template_arg_list) as fout: slapos_node_configuration_template = fout.read() master_url = 'http://%s:%s' % (args.daemon_listen_ip, args.daemon_listen_port) slapos_home = args.slapos_buildout_directory to_replace = [ ('computer_id', DEFAULT_COMPUTER_ID), ('master_url', master_url), ('interface_name', args.interface_name), ('ipv4_local_network', args.ipv4_local_network), ('partition_amount', args.partition_number), ('instance_root', args.slapos_instance_root), ('software_root', args.slapos_software_root), ('computer_xml', '%s/slapos.xml' % slapos_home), ('log_file', '%s/log/slapos-node-format.log' % slapos_home), ('use_unique_local_address_block', 'false') ] slapos_node_configuration_content = _replaceParameterValue( slapos_node_configuration_template, to_replace) slapos_node_configuration_content = re.sub( '(key_file|cert_file|certificate_repository_path).*=.*\n', '', slapos_node_configuration_content) with open(slapos_node_config_path, 'w') as fout: fout.write(slapos_node_configuration_content.encode('utf8')) def _generateSlaposProxyConfigurationFile(conf): template_arg_list = (__name__, '../../slapos-proxy.cfg.example') with pkg_resources.resource_stream(*template_arg_list) as fout: slapos_proxy_configuration_template = fout.read() slapos_proxy_configuration_path = os.path.join( conf.slapos_configuration_directory, 'slapos-proxy.cfg') listening_ip, listening_port = \ conf.daemon_listen_ip, conf.daemon_listen_port to_replace = [ ('host', listening_ip), ('port', listening_port), ('master_url', 'http://%s:%s/' % (listening_ip, listening_port)), ('computer_id', DEFAULT_COMPUTER_ID), ('instance_root', conf.instance_root), ('software_root', conf.software_root) ] slapos_proxy_configuration_content = 
_replaceParameterValue( slapos_proxy_configuration_template, to_replace) with open(slapos_proxy_configuration_path, 'w') as fout: fout.write(slapos_proxy_configuration_content.encode('utf8')) return slapos_proxy_configuration_path def _addProxyToSupervisor(conf): """ Create a supervisord configuration file containing informations to run slapproxy as daemon """ program_partition_template = """\ [program:slapproxy] directory=%(slapos_buildout_directory)s command=%(program_command)s process_name=slapproxy autostart=true autorestart=true startsecs=0 startretries=0 exitcodes=0 stopsignal=TERM stopwaitsecs=60 user=0 group=0 serverurl=AUTO redirect_stderr=true stdout_logfile=%(log_file)s stdout_logfile_maxbytes=100KB stdout_logfile_backups=1 stderr_logfile=%(log_file)s stderr_logfile_maxbytes=100KB stderr_logfile_backups=1 """ % {'log_file': '%s/log/slapos-proxy.log' % conf.slapos_buildout_directory, 'slapos_buildout_directory': conf.slapos_buildout_directory, 'program_command': '%s/bin/slapos proxy start --cfg %s' % \ (conf.slapos_buildout_directory, conf.proxy_configuration_file)} supervisord_conf_folder_path = os.path.join(conf.instance_root, 'etc', 'supervisord.conf.d') _createConfigurationDirectory(supervisord_conf_folder_path) updateFile( os.path.join(supervisord_conf_folder_path, 'slapproxy.conf'), program_partition_template) def _runFormat(slapos_directory): """ Launch slapos node format. """ subprocess.Popen( ["%s/bin/slapos" % slapos_directory, "node", "format", "--now"]).communicate() def do_configure(args, fetch_config_func, logger): """ Generate configuration files, Create the instance path by running slapformat (but will crash), Add proxy to supervisor, Run supervisor, which will run the proxy, Run format, which will finish correctly. """ slapos_node_config_path = os.path.join( args.slapos_configuration_directory, 'slapos.cfg') if os.path.exists(slapos_node_config_path): logger.error('A SlapOS configuration directory already exist at' ' %s. Aborting.' % slapos_node_config_path) raise SystemExit(1) if not getattr(args, 'cfg', None): args.cfg = slapos_node_config_path _createConfigurationDirectory(args.slapos_configuration_directory) _generateSlaposNodeConfigurationFile(slapos_node_config_path, args) configp = fetch_config_func(args) conf = FormatConfig(logger=logger) conf.mergeConfig(args, configp) slapgrid = create_slapgrid_object(conf.__dict__, logger) createPrivateDirectory(os.path.join(conf.slapos_buildout_directory, 'log')) _runFormat(conf.slapos_buildout_directory) slapgrid.checkEnvironmentAndCreateStructure() proxy_configuration_file = _generateSlaposProxyConfigurationFile(conf) conf.proxy_configuration_file = proxy_configuration_file _addProxyToSupervisor(conf) home_folder_path = os.environ['HOME'] createPrivateDirectory("%s/.slapos" % home_folder_path) slapos_client_cfg_path = '%s/.slapos/slapos-client.cfg' % home_folder_path if not os.path.exists(slapos_client_cfg_path): os.symlink(slapos_node_config_path, slapos_client_cfg_path) launchSupervisord(instance_root=conf.instance_root, logger=logger) _runFormat(conf.slapos_buildout_directory) return 0 slapos.core-1.3.18/slapos/cli/command.py0000644000000000000000000000421212752436134020040 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import argparse import functools import os import sys from cliff import command class Command(command.Command): def get_parser(self, prog_name): parser = argparse.ArgumentParser( description=self.get_description(), prog=prog_name, formatter_class=argparse.RawDescriptionHelpFormatter ) return parser def run(self, parsed_args): return self.take_action(parsed_args) def check_root_user(config_command_instance): if sys.platform != 'cygwin' and os.getuid() != 0: config_command_instance.app.log.error('This slapos command must be run as root.') sys.exit(5) def must_be_root(func): @functools.wraps(func) def inner(self, *args, **kw): check_root_user(self) return func(self, *args, **kw) return inner slapos.core-1.3.18/slapos/cli/config.py0000644000000000000000000000650112752436134017672 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import ConfigParser import os from slapos.cli.command import Command class ConfigError(Exception): pass class ConfigCommand(Command): """ Base class for commands that require a configuration file """ default_config_var = 'SLAPOS_CONFIGURATION' # use this if default_config_var does not exist default_config_path = '/etc/opt/slapos/slapos.cfg' def get_parser(self, prog_name): ap = super(ConfigCommand, self).get_parser(prog_name) ap.add_argument('--cfg', help='SlapOS configuration file' ' (default: $%s or %s)' % (self.default_config_var, self.default_config_path)) return ap def config_path(self, args): if args.cfg: cfg_path = args.cfg else: cfg_path = os.environ.get(self.default_config_var, self.default_config_path) return os.path.expanduser(cfg_path) def fetch_config(self, args): """ Returns a configuration object if file exists/readable/valid, will raise an error otherwise. The exception may come from the configparser itself if the configuration content is very broken, and will clearly show what is wrong with the file. """ cfg_path = self.config_path(args) self.app.log.debug('Loading config: %s', cfg_path) if not os.path.exists(cfg_path): raise ConfigError('Configuration file does not exist: %s' % cfg_path) configp = ConfigParser.SafeConfigParser() if configp.read(cfg_path) != [cfg_path]: # bad permission, etc. raise ConfigError('Cannot parse configuration file: %s' % cfg_path) return configp class ClientConfigCommand(ConfigCommand): """ Base class for client commands, that use the client configuration file """ default_config_var = 'SLAPOS_CLIENT_CONFIGURATION' default_config_path = '~/.slapos/slapos-client.cfg' command_group = 'client' slapos.core-1.3.18/slapos/cli/remove.py0000644000000000000000000000477612752436134017736 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## from slapos.cli.config import ClientConfigCommand from slapos.client import init, ClientConfig class RemoveCommand(ClientConfigCommand): """ remove a Software from a node """ def get_parser(self, prog_name): ap = super(RemoveCommand, self).get_parser(prog_name) ap.add_argument('software_url', help='Your software url') ap.add_argument('node', help="Target node") return ap def take_action(self, args): configp = self.fetch_config(args) conf = ClientConfig(args, configp) local = init(conf, self.app.log) do_remove(self.app.log, args.software_url, args.node, local) def do_remove(logger, software_url, computer_id, local): """ Request deletion of Software Release 'software_url' from computer 'computer_id'. """ logger.info('Requesting deletion of %s Software Release...', software_url) if software_url in local: software_url = local[software_url] local['slap'].registerSupply().supply( software_release=software_url, computer_guid=computer_id, state='destroyed' ) logger.info('Done.') slapos.core-1.3.18/slapos/bang.py0000644000000000000000000000377012752436134016572 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import slapos.slap.slap def do_bang(configp, message): computer_id = configp.get('slapos', 'computer_id') master_url = configp.get('slapos', 'master_url') if configp.has_option('slapos', 'key_file'): key_file = configp.get('slapos', 'key_file') else: key_file = None if configp.has_option('slapos', 'cert_file'): cert_file = configp.get('slapos', 'cert_file') else: cert_file = None slap = slapos.slap.slap() slap.initializeConnection(master_url, key_file=key_file, cert_file=cert_file) computer = slap.registerComputer(computer_id) print 'Banging to %r' % master_url computer.bang(message) print 'Bang with message %r' % message slapos.core-1.3.18/slapos/README.slap.txt0000644000000000000000000000271412752436134017742 0ustar rootroot00000000000000slap ==== Simple Language for Accounting and Provisioning python library. Developer note - python version ------------------------------- This library is used on client (slapgrid) and server side. 
The server runs python2.4 and the client runs python2.6.

Having this in mind, the code of this library *has* to work on python2.4.

How it works
------------

The SLAP main server, which is in charge of service coordination, receives
from participating servers the number of computer partitions which are
available, the type of resource which a party is ready to provide, and the
requests from parties for resources which are needed.

Each participating server is identified by a unique ID and runs a slap-server
daemon. This daemon collects the installation tasks from the main server and
installs the resources, then notifies the main server of completion whenever
a resource is configured, installed and available.

The data structure on the main server is the following:

A - Action: an action which can happen to provide a resource or account its usage
CP - Computer Partition: provides a URL to access a cloud resource
RI - Resource Item: describes a resource
CI - Contract Item: describes the contract to attach the DL to (this is still unclear)
R - Resource: describes a type of cloud resource (e.g. a MySQL table); it is published on slapgrid.org
DL - Delivery Line: describes an action happening on a resource item on a computer partition
D - Delivery: groups multiple Delivery Lines
slapos.core-1.3.18/slapos/README.grid.txt0000644000000000000000000000527012752436134017730 0ustar rootroot00000000000000grid
====

slapgrid is a client of SLAPos. SLAPos provides support for deploying a SaaS
system in a minute. Slapgrid allows you to easily deploy instances of software
based on buildout profiles.

For more information about SLAP and SLAPos, please see the SLAP documentation.

Requirements
------------

A working SLAP server with information about your computer, so that slapgrid
can retrieve it.

As Vifib servers use IPv6 only, we strongly recommend an IPv6-enabled UNIX box.
For the same reason, Python >= 2.6 with development headers is also strongly
recommended (IPv6 support is not complete in previous releases).

For now, gcc and glibc development headers are required to build most software
releases.

Concepts
--------

Here are the fundamental concepts of slapgrid:

A Software Release (SR) is just a piece of software.

A Computer Partition (CP) is an instance of a Software Release.

Imagine you want to install some software with slapgrid and run it. You will
have to install the software as a Software Release, and then instantiate it,
i.e. configure it for your needs, as a Computer Partition.

How it works
------------

When run, slapgrid will authenticate to the SLAP library with a computer_id
and fetch the list of Software Releases to install or remove and Computer
Partitions to start or stop.

Then, it will process each Software Release, and each Computer Partition.

It will also periodically send to SLAP the usage report of each Computer
Partition.

Installation
------------

With easy_install::

  $ easy_install slapgrid

slapgrid needs several directories to be created and configured before being
able to run: a software releases directory, and an instances directory with
configured computer partition directory(ies).

For each Computer Partition directory created, you should create a specific
user and associate it with its Computer Partition directory. Each Computer
Partition directory should belong to this specific user, with permissions of
0750.

Usage
-----

slapgrid needs several pieces of information in order to run. You can specify
them by adding arguments to the slapgrid command line, or by putting them in a
configuration file.
Beware : you need a valid computer resource on server side. Examples -------- simple example : Just run slapgrid: $ slapgrid --instance-root /path/to/instance/root --software-root /path/to/software_root --master-url https://some.server/some.resource --computer-id my.computer.id configuration file example:: [slapgrid] instance_root = /path/to/instance/root software_root = /path/to/software/root master_url = https://slapos.server/slap_service computer_id = my.computer.id then run slapgrid:: $ slapgrid --configuration-file = path/to/configuration/file slapos.core-1.3.18/slapos/slapos-proxy.cfg.example0000644000000000000000000000564613006632705022103 0ustar rootroot00000000000000# This is an example configuration file for a standalone micro slapos master # a.k.a slapproxy [slapos] instance_root = /srv/slapgrid software_root = /opt/slapgrid computer_id = local_computer [slapproxy] host = 127.0.0.1 port = 5000 database_uri = /opt/slapos/slapproxy.db ############################### # Optional, advanced parameters ############################### # Below is the list of software maintained by slapos.org and contributors # It is used to simulate a proper configuration of a real slapos master. software_product_list = erp5 http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.195:/software/erp5/software.cfg erp5_branch http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/heads/erp5:/software/erp5/software.cfg kumofs http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.141:/software/kumofs/software.cfg kvm http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.193:/software/kvm/software.cfg maarch http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.159:/software/maarch/software.cfg mariadb http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.152:/software/mariadb/software.cfg memcached http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.82:/software/memcached/software.cfg slaposwebrunner http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.160:/software/slaprunner/software.cfg wordpress http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.163:/software/wordpress/software.cfg zabbixagent http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/tags/slapos-0.162:/software/zabbix-agent/software.cfg # Here goes the list of slapos masters that slapos.proxy is authorized to contact to forward an instance request. # Each section beginning by "multimaster/" is a different SlapOS Master, represented by its URL. # For each section, you need to specify the URL of the SlapOS Master in the section name itself. # For each section, you can specify if needed the location of key/certificate used to authenticate to this slapOS Master. # For each section, you can specify if needed a list of Software Releases, separated by carrier return. Any instance request matching one of those Software Releases will be automatically forwarded to this SlapOS Master and will not be allocated locally. # When doing an instance request, you can specify in filter (a.k.a SLA) a "master_url" that will be used by the slapproxy to forward the request. [multimaster/https://slap.vifib.com] key = key file path coming from your slapos master account cert = certificate file path coming from your slapos master account software_release_list = http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD:/software/apache-frontend/software.cfg [multimaster/http://imaginary-slapos-master.com] # No certificate here: it is http. 
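# For illustration (hedged, based on the comments above): with this section
# enabled, an instance request made from slapconsole such as
#   request("http://mywebsite.me/my_software_release.cfg", "myinstance")
# or any request whose SLA filter carries "master_url", e.g.
#   request(..., filter_kw={"master_url": "http://imaginary-slapos-master.com"})
# is expected to be forwarded to this master instead of being allocated locally.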
software_release_list = http://mywebsite.me/my_software_release.cfg
  /some/arbitrary/local/unix/path
slapos.core-1.3.18/slapos/README.console.txt0000644000000000000000000000546612752436134020444 0ustar rootroot00000000000000console
-------

The slapconsole tool allows you to interact with a SlapOS Master through the SLAP library.

For more information about SlapOS or slapconsole usage, please go to http://community.slapos.org.

The slapconsole tool is only a bare Python console with several global variables defined and initialized.

Initialization and configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Slapconsole automatically connects to a Master using the URL and SSL certificate from the given slapos.cfg. The certificate has to be a *USER* certificate, manually obtained from the SlapOS master web interface.

The slapconsole tool reads the given slapos.cfg configuration file and uses the following information:

* Master URL is read from [slapos] in the "master_url" parameter.
* SSL Certificate is read from [slapconsole] in the "cert_file" parameter.
* SSL Key is read from [slapconsole] in the "key_file" parameter.

See slapos.cfg.example for examples.

Global functions/variables
~~~~~~~~~~~~~~~~~~~~~~~~~~

* "request()" is a shorthand for slap.registerOpenOrder().request(), allowing you to request instances.

* "supply()" is a shorthand for slap.registerSupply().supply(), allowing you to request software installation.

For more information about those methods, please read the SLAP library documentation.

* "product" is an instance of slap.SoftwareProductCollection whose only goal is to retrieve the URL of the best Software Release of a given Software Product as an attribute. For each attribute call, it will retrieve the best available Software Release URL from the SlapOS Master and return it.

This allows you to request instances in a few words, i.e.::

  request("mykvm", "http://www.url.com/path/to/current/best/known/kvm/software.cfg")

can be simplified into::

  request("mykvm", product.kvm)

* "slap" is an instance of the SLAP library. It is only used for advanced usage. The "slap" instance is obtained by doing::

  slap = slapos.slap.slap()
  slap.initializeConnection(config.master_url, key_file=config.key_file, cert_file=config.cert_file)

Examples
~~~~~~~~
::

  >>> # Request instance
  >>> request(product.kvm, "myuniquekvm")

  >>> # Request instance on specific computer
  >>> request(product.kvm, "myotheruniquekvm", filter_kw={ "computer_guid": "COMP-12345" })

  >>> # Request instance, specifying parameters (here nbd_ip and nbd_port)
  >>> request(product.kvm, "mythirduniquekvm", partition_parameter_kw={"nbd_ip":"2a01:e35:2e27:460:e2cb:4eff:fed9:48dc", "nbd_port":"1024"})

  >>> # Request software installation on owned computer
  >>> supply(product.kvm, "mycomputer")

  >>> # Fetch existing instance status
  >>> request(product.kvm, "myuniquekvm").getState()

  >>> # Fetch instance information on already launched instance
  >>> request(product.kvm, "myuniquekvm").getConnectionParameter("url")
slapos.core-1.3.18/slapos/human.py0000644000000000000000000000701612752436135016771 0ustar rootroot00000000000000#!/usr/bin/env python
import sys
"""
Bytes-to-human / human-to-bytes converter.
Based on: http://goo.gl/kTQMs
Working with Python 2.x and 3.x.
Author: Giampaolo Rodola' License: MIT """ # see: http://goo.gl/kTQMs SYMBOLS = { 'customary' : ('B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y'), 'slapos' : ('', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'), 'customary_ext' : ('byte', 'kilo', 'mega', 'giga', 'tera', 'peta', 'exa', 'zetta', 'iotta'), 'iec' : ('Bi', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi'), 'iec_ext' : ('byte', 'kibi', 'mebi', 'gibi', 'tebi', 'pebi', 'exbi', 'zebi', 'yobi'), } def bytes2human(n, format='%(value).1f %(symbol)s', symbols='slapos'): """ Convert n bytes into a human readable string based on format. symbols can be either "customary", "customary_ext", "iec" or "iec_ext", see: http://goo.gl/kTQMs >>> bytes2human(0) '0.0 B' >>> bytes2human(0.9) '0.0 B' >>> bytes2human(1) '1.0 B' >>> bytes2human(1.9) '1.0 B' >>> bytes2human(1024) '1.0 K' >>> bytes2human(1048576) '1.0 M' >>> bytes2human(1099511627776127398123789121) '909.5 Y' >>> bytes2human(9856, symbols="customary") '9.6 K' >>> bytes2human(9856, symbols="customary_ext") '9.6 kilo' >>> bytes2human(9856, symbols="iec") '9.6 Ki' >>> bytes2human(9856, symbols="iec_ext") '9.6 kibi' >>> bytes2human(10000, "%(value).1f %(symbol)s/sec") '9.8 K/sec' >>> # precision can be adjusted by playing with %f operator >>> bytes2human(10000, format="%(value).5f %(symbol)s") '9.76562 K' """ n = int(n) if n < 0: raise ValueError("n < 0") symbols = SYMBOLS[symbols] prefix = {} for i, s in enumerate(symbols[1:]): prefix[s] = 1 << (i+1)*10 for symbol in reversed(symbols[1:]): if n >= prefix[symbol]: value = float(n) / prefix[symbol] return format % locals() return format % dict(symbol=symbols[0], value=n) def human2bytes(s): """ Attempts to guess the string format based on default symbols set and return the corresponding bytes as an integer. When unable to recognize the format ValueError is raised. >>> human2bytes('0 B') 0 >>> human2bytes('1 K') 1024 >>> human2bytes('1 M') 1048576 >>> human2bytes('1 Gi') 1073741824 >>> human2bytes('1 tera') 1099511627776 >>> human2bytes('0.5kilo') 512 >>> human2bytes('0.1 byte') 0 >>> human2bytes('1 k') # k is an alias for K 1024 >>> human2bytes('12 foo') Traceback (most recent call last): ... ValueError: can't interpret '12 foo' """ init = s num = "" while s and s[0:1].isdigit() or s[0:1] == '.': num += s[0] s = s[1:] num = float(num) letter = s.strip() for name, sset in SYMBOLS.items(): if letter in sset: break else: if letter == 'k': # treat 'k' as an alias for 'K' as per: http://goo.gl/kTQMs sset = SYMBOLS['customary'] letter = letter.upper() else: raise ValueError("can't interpret %r" % init) prefix = {sset[0]:1} for i, s in enumerate(sset[1:]): prefix[s] = 1 << (i+1)*10 return int(num * prefix[letter]) if __name__ == "__main__": import doctest doctest.testmod() slapos.core-1.3.18/slapos/grid/0000755000000000000000000000000013006632706016223 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/grid/networkcache.py0000644000000000000000000001402312752436134021256 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010, 2011, 2012 ViFiB SARL and Contributors. # All Rights Reserved. # # This software is subject to the provisions of the Zope Public License, # Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution. # THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED # WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS # FOR A PARTICULAR PURPOSE. 
# ############################################################################## import ast import json import platform import shutil import traceback from slapos.grid.distribution import os_matches, distribution_tuple try: try: from slapos.libnetworkcache import NetworkcacheClient, UploadError, \ DirectoryNotFound except ImportError: LIBNETWORKCACHE_ENABLED = False else: LIBNETWORKCACHE_ENABLED = True except: print 'There was problem while trying to import slapos.libnetworkcache:'\ '\n%s' % traceback.format_exc() LIBNETWORKCACHE_ENABLED = False print 'Networkcache forced to be disabled.' def fallback_call(function): """Decorator which disallow to have any problem while calling method""" def wrapper(self, *args, **kwd): """ Log the call, and the result of the call """ try: return function(self, *args, **kwd) except: # indeed, *any* exception is swallowed print 'There was problem while calling method %r:\n%s' % ( function.__name__, traceback.format_exc()) return False wrapper.__doc__ = function.__doc__ return wrapper @fallback_call def download_network_cached(cache_url, dir_url, software_url, software_root, key, path, logger, signature_certificate_list, download_from_binary_cache_url_blacklist=None): """Downloads from a network cache provider return True if download succeeded. """ if not LIBNETWORKCACHE_ENABLED: return False if not(cache_url and dir_url and software_url and software_root): return False for url in download_from_binary_cache_url_blacklist: if software_url.startswith(url): return False try: nc = NetworkcacheClient(cache_url, dir_url, signature_certificate_list=signature_certificate_list or None) except TypeError: logger.warning('Incompatible version of networkcache, not using it.') return False logger.info('Downloading %s binary from network cache.' % software_url) try: file_descriptor = None json_entry_list = nc.select_generic(key) for entry in json_entry_list: json_information, _ = entry try: tags = json.loads(json_information) if tags.get('machine') != platform.machine(): continue if not os_matches(ast.literal_eval(tags.get('os')), distribution_tuple()): continue if tags.get('software_url') != software_url: continue if tags.get('software_root') != software_root: continue sha512 = tags.get('sha512') file_descriptor = nc.download(sha512) break except Exception: continue if file_descriptor is not None: f = open(path, 'w+b') try: shutil.copyfileobj(file_descriptor, f) finally: f.close() file_descriptor.close() return True except (IOError, DirectoryNotFound), e: logger.info('Failed to download from network cache %s: %s' % \ (software_url, str(e))) return False @fallback_call def upload_network_cached(software_root, software_url, cached_key, cache_url, dir_url, path, logger, signature_private_key_file, shacache_ca_file, shacache_cert_file, shacache_key_file, shadir_ca_file, shadir_cert_file, shadir_key_file): """Upload file to a network cache server""" if not LIBNETWORKCACHE_ENABLED: return False if not (software_root and software_url and cached_key \ and cache_url and dir_url): return False logger.info('Uploading %s binary into network cache.' 
% software_url) # YXU: "file" and "urlmd5" should be removed when server side is ready kw = dict( file="file", urlmd5="urlmd5", software_url=software_url, software_root=software_root, machine=platform.machine(), os=str(distribution_tuple()) ) f = open(path, 'r') # convert '' into None in order to call nc nicely if not signature_private_key_file: signature_private_key_file = None if not shacache_ca_file: shacache_ca_file = None if not shacache_cert_file: shacache_cert_file = None if not shacache_key_file: shacache_key_file = None if not shadir_ca_file: shadir_ca_file = None if not shadir_cert_file: shadir_cert_file = None if not shadir_key_file: shadir_key_file = None try: nc = NetworkcacheClient(cache_url, dir_url, signature_private_key_file=signature_private_key_file, shacache_ca_file=shacache_ca_file, shacache_cert_file=shacache_cert_file, shacache_key_file=shacache_key_file, shadir_ca_file=shadir_ca_file, shadir_cert_file=shadir_cert_file, shadir_key_file=shadir_key_file) except TypeError: logger.warning('Incompatible version of networkcache, not using it.') return False try: return nc.upload_generic(f, cached_key, **kw) except (IOError, UploadError), e: logger.info('Failed to upload file. %s' % (str(e))) return False finally: f.close() return True slapos.core-1.3.18/slapos/grid/distribution.py0000644000000000000000000000731512752436134021326 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## """ Provides helper functions to check if two binary caches are compatible. os_matches(...): returns True if the arguments reference compatible platforms. patched_linux_distribution(...): a patched version of platform.linux_distribution() this is the same function provided with the python package in Debian and Ubuntu: see http://bugs.python.org/issue9514 otherwise, Ubuntu will always be reported as an unstable Debian, regardless of the version. distribution_tuple() returns a (distname, version, id) tuple under linux or cygwin """ import platform import re def _debianize(os): """ keep only the major release number in case of debian, otherwise minor releases would be seen as not compatible to each other. """ distname, version, id_ = os if distname == 'debian' and '.' 
in version: version = version.split('.')[0] return distname, version, id_ def os_matches(os1, os2): return _debianize(os1) == _debianize(os2) _distributor_id_file_re = re.compile("(?:DISTRIB_ID\s*=)\s*(.*)", re.I) _release_file_re = re.compile("(?:DISTRIB_RELEASE\s*=)\s*(.*)", re.I) _codename_file_re = re.compile("(?:DISTRIB_CODENAME\s*=)\s*(.*)", re.I) def patched_linux_distribution(distname='', version='', id='', supported_dists=platform._supported_dists, full_distribution_name=1): # check for the Debian/Ubuntu /etc/lsb-release file first, needed so # that the distribution doesn't get identified as Debian. try: etclsbrel = open("/etc/lsb-release", "rU") for line in etclsbrel: m = _distributor_id_file_re.search(line) if m: _u_distname = m.group(1).strip() m = _release_file_re.search(line) if m: _u_version = m.group(1).strip() m = _codename_file_re.search(line) if m: _u_id = m.group(1).strip() if _u_distname and _u_version: return (_u_distname, _u_version, _u_id) except (EnvironmentError, UnboundLocalError): pass return platform.linux_distribution(distname, version, id, supported_dists, full_distribution_name) def distribution_tuple(): if platform.system().startswith('CYGWIN_'): return (platform.system(), platform.platform(), '') else: return patched_linux_distribution() slapos.core-1.3.18/slapos/grid/svcbackend.py0000644000000000000000000001767113003671621020710 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import os import pkg_resources import socket as socketlib import subprocess import stat import sys import time import xmlrpclib from slapos.grid.utils import (createPrivateDirectory, SlapPopen, updateFile) from supervisor import xmlrpc, states def getSupervisorRPC(socket): supervisor_transport = xmlrpc.SupervisorTransport('', '', 'unix://' + socket) server_proxy = xmlrpclib.ServerProxy('http://127.0.0.1', supervisor_transport) return getattr(server_proxy, 'supervisor') def _getSupervisordSocketPath(instance_root): return os.path.join(instance_root, 'supervisord.socket') def _getSupervisordConfigurationFilePath(instance_root): return os.path.join(instance_root, 'etc', 'supervisord.conf') def _getSupervisordConfigurationDirectory(instance_root): return os.path.join(instance_root, 'etc', 'supervisord.conf.d') def createSupervisordConfiguration(instance_root, watchdog_command=''): """ Create supervisord related files and directories. """ if not os.path.isdir(instance_root): raise OSError('%s does not exist.' % instance_root) supervisord_configuration_file_path = _getSupervisordConfigurationFilePath(instance_root) supervisord_configuration_directory = _getSupervisordConfigurationDirectory(instance_root) supervisord_socket = _getSupervisordSocketPath(instance_root) # Create directory accessible for the instances. var_directory = os.path.join(instance_root, 'var') if not os.path.isdir(var_directory): os.mkdir(var_directory) os.chmod(var_directory, stat.S_IRWXU | stat.S_IROTH | stat.S_IXOTH | \ stat.S_IRGRP | stat.S_IXGRP ) etc_directory = os.path.join(instance_root, 'etc') if not os.path.isdir(etc_directory): os.mkdir(etc_directory) # Creates instance_root structure createPrivateDirectory(os.path.join(instance_root, 'var', 'log')) createPrivateDirectory(os.path.join(instance_root, 'var', 'run')) createPrivateDirectory(os.path.join(instance_root, 'etc')) createPrivateDirectory(supervisord_configuration_directory) # Creates supervisord configuration updateFile(supervisord_configuration_file_path, pkg_resources.resource_stream(__name__, 'templates/supervisord.conf.in').read() % { 'supervisord_configuration_directory': supervisord_configuration_directory, 'supervisord_socket': os.path.abspath(supervisord_socket), 'supervisord_loglevel': 'info', 'supervisord_logfile': os.path.abspath( os.path.join(instance_root, 'var', 'log', 'supervisord.log')), 'supervisord_logfile_maxbytes': '50MB', 'supervisord_nodaemon': 'false', 'supervisord_pidfile': os.path.abspath( os.path.join(instance_root, 'var', 'run', 'supervisord.pid')), 'supervisord_logfile_backups': '10', 'watchdog_command': watchdog_command, } ) def _updateWatchdog(socket): """ In special cases, supervisord can be started using configuration with empty watchdog parameter. Then, when running slapgrid, the real watchdog configuration is generated. We thus need to reload watchdog configuration if needed and start it. """ supervisor = getSupervisorRPC(socket) if supervisor.getProcessInfo('watchdog')['state'] not in states.RUNNING_STATES: # XXX workaround for https://github.com/Supervisor/supervisor/issues/339 # In theory, only reloadConfig is needed. 
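    # Removing and re-adding the 'watchdog' process group makes supervisord
    # pick up the regenerated watchdog configuration and start the listener again.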
supervisor.removeProcessGroup('watchdog') supervisor.reloadConfig() supervisor.addProcessGroup('watchdog') def launchSupervisord(instance_root, logger, supervisord_additional_argument_list=None): configuration_file = _getSupervisordConfigurationFilePath(instance_root) socket = _getSupervisordSocketPath(instance_root) if os.path.exists(socket): trynum = 1 while trynum < 6: try: supervisor = getSupervisorRPC(socket) status = supervisor.getState() except xmlrpclib.Fault as e: if e.faultCode == 6 and e.faultString == 'SHUTDOWN_STATE': logger.info('Supervisor in shutdown procedure, will check again later.') trynum += 1 time.sleep(2 * trynum) except Exception: # In case if there is problem with connection, assume that supervisord # is not running and try to run it break else: if status['statename'] == 'RUNNING' and status['statecode'] == 1: logger.debug('Supervisord already running.') _updateWatchdog(socket) return elif status['statename'] == 'SHUTDOWN_STATE' and status['statecode'] == 6: logger.info('Supervisor in shutdown procedure, will check again later.') trynum += 1 time.sleep(2 * trynum) else: log_message = 'Unknown supervisord state %r. Will try to start.' % status logger.warning(log_message) break supervisord_argument_list = ['-c', configuration_file] if supervisord_additional_argument_list is not None: supervisord_argument_list.extend(supervisord_additional_argument_list) logger.info("Launching supervisord with clean environment.") # Extract python binary to prevent shebang size limit invocation_list = [sys.executable, '-c'] invocation_list.append( "import sys ; sys.path=" + str(sys.path) + " ; " + "import supervisor.supervisord ; " + "sys.argv[1:1]=" + str(supervisord_argument_list) + " ; " + "supervisor.supervisord.main()") supervisord_popen = SlapPopen(invocation_list, env={}, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=logger) result = supervisord_popen.communicate()[0] if supervisord_popen.returncode: logger.warning('Supervisord unknown problem: %s' % result) raise RuntimeError('Failed to launch supervisord : %s' % result) try: default_timeout = socketlib.getdefaulttimeout() current_timeout = 1 trynum = 1 while trynum < 6: try: socketlib.setdefaulttimeout(current_timeout) supervisor = getSupervisorRPC(socket) status = supervisor.getState() if status['statename'] == 'RUNNING' and status['statecode'] == 1: return logger.warning('Wrong status name %(statename)r and code ' '%(statecode)r, trying again' % status) trynum += 1 except Exception: current_timeout = 5 * trynum trynum += 1 else: logger.info('Supervisord started correctly in try %s.' 
% trynum) return logger.warning('Issue while checking supervisord.') finally: socketlib.setdefaulttimeout(default_timeout) slapos.core-1.3.18/slapos/grid/templates/0000755000000000000000000000000013006632706020221 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/grid/templates/supervisord.conf.in0000644000000000000000000000121412752436134024064 0ustar rootroot00000000000000[rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [include] files = %(supervisord_configuration_directory)s/*.conf [supervisorctl] serverurl = unix://%(supervisord_socket)s [supervisord] loglevel = %(supervisord_loglevel)s logfile = %(supervisord_logfile)s logfile_maxbytes = %(supervisord_logfile_maxbytes)s nodaemon = %(supervisord_nodaemon)s pidfile = %(supervisord_pidfile)s logfile-backups = %(supervisord_logfile_backups)s [unix_http_server] file=%(supervisord_socket)s chmod=0700 [eventlistener:watchdog] command=%(watchdog_command)s events=PROCESS_STATE_EXITED, PROCESS_STATE_FATAL slapos.core-1.3.18/slapos/grid/templates/group_partition_supervisord.conf.in0000644000000000000000000000006212752436134027371 0ustar rootroot00000000000000[group:%(instance_id)s] programs=%(program_list)s slapos.core-1.3.18/slapos/grid/templates/buildout-tail.cfg.in0000644000000000000000000000155712752436134024101 0ustar rootroot00000000000000# This is beginning of zc.builodout profile's tail added by slapgrid [buildout] # put buildout generated binaries in specific directory bin-directory = ${buildout:directory}/sbin # protect software and run parts offline offline = true [slap-connection] computer-id = %(computer_id)s partition-id = %(partition_id)s server-url = %(server_url)s software-release-url = %(software_release_url)s key-file = %(key_file)s cert-file = %(cert_file)s [slap_connection] # Kept for backward compatiblity computer_id = %(computer_id)s partition_id = %(partition_id)s server_url = %(server_url)s software_release_url = %(software_release_url)s key_file = %(key_file)s cert_file = %(cert_file)s [storage-configuration] storage-home = %(storage_home)s [network-information] global-ipv4-network = %(global_ipv4_network_prefix)s # This is end of zc.builodout profile's tail added by slapgrid slapos.core-1.3.18/slapos/grid/templates/program_partition_supervisord.conf.in0000644000000000000000000000113512752436134027706 0ustar rootroot00000000000000[program:%(program_id)s] directory=%(program_directory)s command=%(program_command)s process_name=%(program_name)s autostart=false autorestart=false startsecs=0 startretries=0 exitcodes=0 stopsignal=TERM stopwaitsecs=60 stopasgroup=true killasgroup=true user=%(user_id)s group=%(group_id)s serverurl=AUTO redirect_stderr=true stdout_logfile=%(instance_path)s/.%(program_id)s.log stdout_logfile_maxbytes=100KB stdout_logfile_backups=1 stderr_logfile=%(instance_path)s/.%(program_id)s.log stderr_logfile_maxbytes=100KB stderr_logfile_backups=1 environment=USER="%(USER)s",LOGNAME="%(USER)s",HOME="%(HOME)s" slapos.core-1.3.18/slapos/grid/templates/iptables-ipv4-firewall-add.in0000644000000000000000000000000012752436134025557 0ustar rootroot00000000000000slapos.core-1.3.18/slapos/grid/exception.py0000644000000000000000000000300412752436134020574 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## """Exposed exceptions""" class PathDoesNotExistError(Exception): pass class WrongPermissionError(Exception): pass class BuildoutFailedError(Exception): pass class DiskSpaceError(Exception): pass slapos.core-1.3.18/slapos/grid/watchdog.py0000644000000000000000000001617112752436134020407 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import argparse import os.path import sys import slapos.slap.slap from slapos.grid.slapgrid import COMPUTER_PARTITION_TIMESTAMP_FILENAME, \ COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME from slapos.grid.SlapObject import WATCHDOG_MARK def parseArgumentTuple(): parser = argparse.ArgumentParser() parser.add_argument("--master-url", help="The master server URL. 
Mandatory.", required=True) parser.add_argument("--computer-id", help="The computer id defined in the server.", required=True) parser.add_argument("--certificate-repository-path", help="Path to partition certificates.", default=None) parser.add_argument("--instance-root-path", help="Path to instance root directory.", default=None) option = parser.parse_args() # Build option_dict option_dict = {} for argument_key, argument_value in vars(option).iteritems(): option_dict.update({argument_key: argument_value}) return option_dict class Watchdog(object): process_state_events = ['PROCESS_STATE_EXITED', 'PROCESS_STATE_FATAL'] def __init__(self, master_url, computer_id, certificate_repository_path=None, instance_root_path=None): self.master_url = master_url self.computer_id = computer_id self.certificate_repository_path = certificate_repository_path self.instance_root_path = instance_root_path self.stdin = sys.stdin self.stdout = sys.stdout self.stderr = sys.stderr self.slap = slapos.slap.slap() def initialize_connection(self, partition_id): cert_file = None key_file = None if self.certificate_repository_path: cert_file = os.path.join(self.certificate_repository_path, "%s.crt" % partition_id) key_file = os.path.join(self.certificate_repository_path, "%s.key" % partition_id) self.slap.initializeConnection( slapgrid_uri=self.master_url, key_file=key_file, cert_file=cert_file) def write_stdout(self, s): self.stdout.write(s) self.stdout.flush() def write_stderr(self, s): self.stderr.write(s) self.stderr.flush() def run(self): while True: self.write_stdout('READY\n') line = self.stdin.readline() # read header line from stdin headers = dict([x.split(':') for x in line.split()]) data = sys.stdin.read(int(headers['len'])) # read the event payload self.handle_event(headers, data) self.write_stdout('RESULT 2\nOK') # transition from READY to ACKNOWLEDGED def handle_event(self, headers, payload): if headers['eventname'] in self.process_state_events: payload_dict = dict([x.split(':') for x in payload.split()]) if WATCHDOG_MARK in payload_dict['processname'] and \ not self.has_bang_already_been_called(payload_dict['groupname']): self.handle_process_state_change_event(headers, payload_dict) def has_bang_already_been_called(self, partition_name): """ Checks if bang has already been called since last successful deployment """ if not self.instance_root_path: # Backward compatibility return False partition_home_path = os.path.join( self.instance_root_path, partition_name ) partition_timestamp_file_path = os.path.join( partition_home_path, COMPUTER_PARTITION_TIMESTAMP_FILENAME ) slapos_last_bang_timestamp_file_path = os.path.join( partition_home_path, COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME ) if not os.path.exists(slapos_last_bang_timestamp_file_path): # Never heard of any previous bang return False if not os.path.exists(partition_timestamp_file_path): # Partition never managed to deploy successfully, ignore bang return True last_bang_timestamp = int(open(slapos_last_bang_timestamp_file_path, 'r').read()) deployment_timestamp = int(open(partition_timestamp_file_path, 'r').read()) if deployment_timestamp > last_bang_timestamp: # It previously banged BEFORE latest successful deployment # i.e it haven't banged since last successful deployment return False # It previously banged AFTER latest successful deployment: ignore return True def create_partition_bang_timestamp_file(self, partition_name): """ Copy the timestamp file of the partition to a bang timestamp file. 
If timestamp file does not exist, create a dummy bang timestamp file. """ if not self.instance_root_path: # Backward compatibility return partition_home_path = os.path.join( self.instance_root_path, partition_name ) partition_timestamp_file_path = os.path.join( partition_home_path, COMPUTER_PARTITION_TIMESTAMP_FILENAME ) slapos_last_bang_timestamp_file_path = os.path.join( partition_home_path, COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME ) if os.path.exists(partition_timestamp_file_path): timestamp = open(partition_timestamp_file_path, 'r').read() else: timestamp = '0' open(slapos_last_bang_timestamp_file_path, 'w').write(timestamp) def handle_process_state_change_event(self, headers, payload_dict): partition_id = payload_dict['groupname'] self.initialize_connection(partition_id) partition = slapos.slap.ComputerPartition( computer_id=self.computer_id, connection_helper=self.slap._connection_helper, partition_id=partition_id) partition.bang("%s process in partition %s encountered a problem" % (payload_dict['processname'], partition_id)) self.create_partition_bang_timestamp_file(payload_dict['groupname']) def main(): watchdog = Watchdog(**parseArgumentTuple()) watchdog.run() if __name__ == '__main__': main() slapos.core-1.3.18/slapos/grid/__init__.py0000644000000000000000000000245212752436134020343 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## slapos.core-1.3.18/slapos/grid/utils.py0000644000000000000000000003020412752436134017740 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import grp import hashlib import os import pkg_resources import pwd import stat import subprocess import sys from slapos.grid.exception import BuildoutFailedError, WrongPermissionError # Such umask by default will create paths with full permission # for user, non writable by group and not accessible by others SAFE_UMASK = 0o27 PYTHON_ENVIRONMENT_REMOVE_LIST = [ 'PYTHONHOME', 'PYTHONPATH', 'PYTHONSTARTUP', 'PYTHONY2K', 'PYTHONOPTIMIZE', 'PYTHONDEBUG', 'PYTHONDONTWRITEBYTECODE', 'PYTHONINSPECT', 'PYTHONNOUSERSITE', 'PYTHONNOUSERSITE', 'PYTHONUNBUFFERED', 'PYTHONVERBOSE', ] SYSTEM_ENVIRONMENT_REMOVE_LIST = [ 'CONFIG_SITE', 'ENV', 'LOGNAME', 'TEMP', 'TMP', 'TMPDIR', 'USER', ] LOCALE_ENVIRONMENT_REMOVE_LIST = [ 'LANG', 'LANGUAGE', 'LC_ADDRESS', 'LC_COLLATE', 'LC_CTYPE', 'LC_IDENTIFICATION', 'LC_MEASUREMENT', 'LC_MESSAGES', 'LC_MONETARY', 'LC_NAME', 'LC_NUMERIC', 'LC_PAPER', 'LC_SOURCED', 'LC_TELEPHONE', 'LC_TIME', ] class SlapPopen(subprocess.Popen): """ Almost normal subprocess with greedish features and logging. Each line is logged "live", and self.output is a string containing the whole log. """ def __init__(self, *args, **kwargs): logger = kwargs.pop('logger') kwargs.update(stdin=subprocess.PIPE) if sys.platform == 'cygwin' and kwargs.get('env') == {}: kwargs['env'] = None subprocess.Popen.__init__(self, *args, **kwargs) self.stdin.flush() self.stdin.close() self.stdin = None # XXX-Cedric: this algorithm looks overkill for simple logging. output_lines = [] while True: line = self.stdout.readline() if line == '' and self.poll() is not None: break if line: output_lines.append(line) logger.info(line.rstrip('\n')) self.output = ''.join(output_lines) def md5digest(url): return hashlib.md5(url).hexdigest() def getCleanEnvironment(logger, home_path='/tmp'): changed_env = {} removed_env = [] env = os.environ.copy() # Clean python related environment variables for k in PYTHON_ENVIRONMENT_REMOVE_LIST + SYSTEM_ENVIRONMENT_REMOVE_LIST \ + LOCALE_ENVIRONMENT_REMOVE_LIST: old = env.pop(k, None) if old is not None: removed_env.append(k) changed_env['HOME'] = env['HOME'] = home_path for k in sorted(changed_env.iterkeys()): logger.debug('Overridden %s = %r' % (k, changed_env[k])) if removed_env: logger.debug('Removed from environment: %s' % ', '.join(sorted(removed_env))) return env def setRunning(logger, pidfile): """Creates a pidfile. 
If a pidfile already exists, we exit""" # XXX might use http://code.activestate.com/recipes/577911-context-manager-for-a-daemon-pid-file/ if os.path.exists(pidfile): try: pid = int(open(pidfile, 'r').readline()) except ValueError: pid = None # XXX This could use psutil library. if pid and os.path.exists("/proc/%s" % pid): logger.info('New slapos process started, but another slapos ' 'process is aleady running with pid %s, exiting.' % pid) sys.exit(10) logger.info('Existing pid file %r was stale, overwritten' % pidfile) # Start new process write_pid(logger, pidfile) def setFinished(pidfile): try: os.remove(pidfile) except OSError: pass def write_pid(logger, pidfile): try: with open(pidfile, 'w') as fout: fout.write('%s' % os.getpid()) except (IOError, OSError): logger.critical('slapgrid could not write pidfile %s' % pidfile) raise def dropPrivileges(uid, gid, logger): """Drop privileges to uid, gid if current uid is 0 Do tests to check if dropping was successful and that no system call is able to re-raise dropped privileges Does nothing if uid and gid are not 0 """ # XXX-Cedric: remove format / just do a print, otherwise formatting is done # twice current_uid, current_gid = os.getuid(), os.getgid() if uid == 0 or gid == 0: raise OSError('Dropping privileges to uid = %r or ' 'gid = %r is too dangerous' % (uid, gid)) if current_uid or current_gid: logger.debug('Running as uid = %r, gid = %r, dropping ' 'not needed and not possible' % (current_uid, current_gid)) return # drop privileges user_name = pwd.getpwuid(uid)[0] group_list = set(x.gr_gid for x in grp.getgrall() if user_name in x.gr_mem) group_list.add(gid) os.initgroups(pwd.getpwuid(uid)[0], gid) os.setgid(gid) os.setuid(uid) # assert that privileges are dropped message_pre = 'After dropping to uid = %r and gid = %r ' \ 'and group_list = %s' % (uid, gid, group_list) new_uid, new_gid, new_group_list = os.getuid(), os.getgid(), os.getgroups() if not (new_uid == uid and new_gid == gid and set(new_group_list) == group_list): raise OSError('%s new_uid = %r and new_gid = %r and ' 'new_group_list = %r which is fatal.' % (message_pre, new_uid, new_gid, new_group_list)) # assert that it is not possible to go back to running one try: try: os.setuid(current_uid) except OSError: try: os.setgid(current_gid) except OSError: try: os.setgroups([current_gid]) except OSError: raise except OSError: pass else: raise ValueError('%s it was possible to go back to uid = %r and gid = ' '%r which is fatal.' % (message_pre, current_uid, current_gid)) logger.debug('Succesfully dropped privileges to uid=%r gid=%r' % (uid, gid)) def bootstrapBuildout(path, logger, buildout=None, additional_buildout_parameter_list=None): if additional_buildout_parameter_list is None: additional_buildout_parameter_list = [] # Reads uid/gid of path, launches buildout with thoses privileges stat_info = os.stat(path) uid = stat_info.st_uid gid = stat_info.st_gid invocation_list = [sys.executable, '-S'] if buildout is not None: invocation_list.append(buildout) invocation_list.extend(additional_buildout_parameter_list) else: try: __import__('zc.buildout') except ImportError: logger.warning('Using old style bootstrap of included bootstrap file. 
' 'Consider having zc.buildout available in search path.') invocation_list.append(pkg_resources.resource_filename(__name__, 'zc.buildout-bootstrap.py')) invocation_list.extend(additional_buildout_parameter_list) else: # buildout is importable, so use this one invocation_list.extend(["-c", "import sys ; sys.path=" + str(sys.path) + " ; import zc.buildout.buildout ; sys.argv[1:1]=" + repr(additional_buildout_parameter_list + ['bootstrap']) + " ; " "zc.buildout.buildout.main()"]) if buildout is not None: invocation_list.append('bootstrap') try: umask = os.umask(SAFE_UMASK) logger.debug('Set umask from %03o to %03o' % (umask, SAFE_UMASK)) logger.debug('Invoking: %r in directory %r' % (' '.join(invocation_list), path)) process_handler = SlapPopen(invocation_list, preexec_fn=lambda: dropPrivileges(uid, gid, logger=logger), cwd=path, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=logger) if process_handler.returncode is None or process_handler.returncode != 0: message = 'Failed to run buildout profile in directory %r' % path logger.error(message) raise BuildoutFailedError('%s:\n%s\n' % (message, process_handler.output)) except OSError as exc: logger.exception(exc) raise BuildoutFailedError(exc) finally: old_umask = os.umask(umask) logger.debug('Restore umask from %03o to %03o' % (old_umask, umask)) def launchBuildout(path, buildout_binary, logger, additional_buildout_parameter_list=None): """ Launches buildout.""" if additional_buildout_parameter_list is None: additional_buildout_parameter_list = [] # Reads uid/gid of path, launches buildout with thoses privileges stat_info = os.stat(path) uid = stat_info.st_uid gid = stat_info.st_gid # Extract python binary to prevent shebang size limit line = open(buildout_binary, 'r').readline() invocation_list = [] if line.startswith('#!'): line = line[2:] # Prepares parameters for buildout invocation_list = line.split() + [buildout_binary] # Run buildout without reading user defaults invocation_list.append('-U') invocation_list.extend(additional_buildout_parameter_list) try: umask = os.umask(SAFE_UMASK) logger.debug('Set umask from %03o to %03o' % (umask, SAFE_UMASK)) logger.debug('Invoking: %r in directory %r' % (' '.join(invocation_list), path)) process_handler = SlapPopen(invocation_list, preexec_fn=lambda: dropPrivileges(uid, gid, logger=logger), cwd=path, env=getCleanEnvironment(logger=logger, home_path=path), stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=logger) if process_handler.returncode is None or process_handler.returncode != 0: message = 'Failed to run buildout profile in directory %r' % path logger.error(message) raise BuildoutFailedError('%s:\n%s\n' % (message, process_handler.output)) except OSError as exc: logger.exception(exc) raise BuildoutFailedError(exc) finally: old_umask = os.umask(umask) logger.debug('Restore umask from %03o to %03o' % (old_umask, umask)) def updateFile(file_path, content, mode=0o600): """Creates or updates a file with "content" as content.""" altered = False if not (os.path.isfile(file_path)) or \ not (hashlib.md5(open(file_path).read()).digest() == hashlib.md5(content).digest()): with open(file_path, 'w') as fout: fout.write(content) altered = True os.chmod(file_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) if stat.S_IMODE(os.stat(file_path).st_mode) != mode: os.chmod(file_path, mode) altered = True return altered def updateExecutable(executable_path, content): """Creates or updates an executable file with "content" as content.""" return updateFile(executable_path, content, 0o700) def 
createPrivateDirectory(path): """Creates a directory belonging to root with umask 077""" if not os.path.isdir(path): os.mkdir(path) os.chmod(path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC) permission = stat.S_IMODE(os.stat(path).st_mode) if permission != 0o700: raise WrongPermissionError('Wrong permissions in %s: ' 'is 0%o, should be 0700' % (path, permission)) slapos.core-1.3.18/slapos/grid/slapgrid.py0000644000000000000000000017722013003675561020416 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import pkg_resources import random import socket import StringIO import subprocess import sys import tempfile import time import traceback import warnings import logging import json import shutil if sys.version_info < (2, 6): warnings.warn('Used python version (%s) is old and has problems with' ' IPv6 connections' % sys.version.split('\n')[0]) from lxml import etree from slapos.slap.slap import NotFoundError from slapos.slap.slap import ServerError from slapos.slap.slap import COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME from slapos.util import mkdir_p, chownDirectory, string_to_boolean from slapos.grid.exception import BuildoutFailedError from slapos.grid.SlapObject import Software, Partition from slapos.grid.svcbackend import (launchSupervisord, createSupervisordConfiguration, _getSupervisordConfigurationDirectory, _getSupervisordSocketPath) from slapos.grid.utils import (md5digest, dropPrivileges, SlapPopen, updateFile) from slapos.human import human2bytes import slapos.slap from netaddr import valid_ipv4, valid_ipv6 # XXX: should be moved to SLAP library COMPUTER_PARTITION_DESTROYED_STATE = 'destroyed' COMPUTER_PARTITION_STARTED_STATE = 'started' COMPUTER_PARTITION_STOPPED_STATE = 'stopped' # Global variables about return state of slapgrid SLAPGRID_SUCCESS = 0 SLAPGRID_FAIL = 1 SLAPGRID_PROMISE_FAIL = 2 PROMISE_TIMEOUT = 3 COMPUTER_PARTITION_TIMESTAMP_FILENAME = '.timestamp' COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME = '.slapos_latest_bang_timestamp' COMPUTER_PARTITION_INSTALL_ERROR_FILENAME = '.slapgrid-%s-error.log' # XXX hardcoded watchdog_path WATCHDOG_PATH = '/opt/slapos/bin/slapos-watchdog' class _formatXMLError(Exception): pass class 
FPopen(subprocess.Popen): def __init__(self, *args, **kwargs): kwargs['stdin'] = subprocess.PIPE kwargs['stderr'] = subprocess.STDOUT kwargs.setdefault('stdout', subprocess.PIPE) kwargs.setdefault('close_fds', True) kwargs.setdefault('shell', True) subprocess.Popen.__init__(self, *args, **kwargs) self.stdin.flush() self.stdin.close() self.stdin = None def check_missing_parameters(options): required = set([ 'computer_id', # XXX: instance_root is better named "partition_root" 'instance_root', 'master_url', 'software_root', ]) if 'key_file' in options: required.add('certificate_repository_path') required.add('cert_file') if 'cert_file' in options: required.add('certificate_repository_path') required.add('key_file') missing = required.difference(options) if missing: raise RuntimeError('Missing mandatory parameters: %s' % ', '.join(sorted(missing))) # parameter can NOT be empty string or None for option in required: if not options.get(option): missing.add(option) if missing: raise RuntimeError('Mandatory parameters present but empty: %s' % ', '.join(sorted(missing))) def check_missing_files(options): req_files = [ options.get('key_file'), options.get('cert_file'), options.get('master_ca_file'), options.get('shacache-ca-file'), options.get('shacache-cert-file'), options.get('shacache-key-file'), options.get('shadir-ca-file'), options.get('shadir-cert-file'), options.get('shadir-key-file'), options.get('signature-private-key-file', options.get('signature_private_key_file')), ] req_dirs = [ options.get('certificate_repository_path') ] for f in req_files: if f and not os.path.exists(f): raise RuntimeError('File %r does not exist.' % f) for d in req_dirs: if d and not os.path.isdir(d): raise RuntimeError('Directory %r does not exist' % d) def merged_options(args, configp): options = dict(configp.items('slapos')) if configp.has_section('networkcache'): options.update(dict(configp.items('networkcache'))) for key, value in vars(args).iteritems(): if value is not None: options[key] = value if options.get('all'): options['develop'] = True # Parse cache / binary cache options # Backward compatibility about "binary-cache-url-blacklist" deprecated option if (options.get("binary-cache-url-blacklist") and not options.get("download-from-binary-cache-url-blacklist")): options["download-from-binary-cache-url-blacklist"] = \ options["binary-cache-url-blacklist"] options["download-from-binary-cache-url-blacklist"] = [ url.strip() for url in options.get( "download-from-binary-cache-url-blacklist", "").split('\n') if url] options["upload-to-binary-cache-url-blacklist"] = [ url.strip() for url in options.get( "upload-to-binary-cache-url-blacklist", "").split('\n') if url] options['firewall'] = {} if configp.has_section('firewall'): options['firewall'] = dict(configp.items('firewall')) options['firewall']["authorized_sources"] = [ source.strip() for source in options['firewall'].get( "authorized_sources", "").split('\n') if source] options['firewall']['firewall_cmd'] = options['firewall'].get( "firewall_cmd", "firewall-cmd") options['firewall']['firewall_executable'] = options['firewall'].get( "firewall_executable", "") options['firewall']['dbus_executable'] = options['firewall'].get( "dbus_executable", "") options['firewall']['reload_config_cmd'] = options['firewall'].get( "reload_config_cmd", "slapos node restart firewall") return options def random_delay(options, logger): """ Sleep for a random time to avoid SlapOS Master being DDOSed by an army of SlapOS Nodes configured with cron. 
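  The delay is drawn uniformly (in whole seconds) between 1 and the configured
  maximal_delay; with --now, or when maximal_delay is 0, no sleep happens.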
""" if options['now']: # XXX-Cedric: deprecate '--now' return maximal_delay = int(options.get('maximal_delay', '0')) if maximal_delay: duration = random.randint(1, maximal_delay) logger.info('Sleeping for %s seconds. To disable this feature, ' 'check --now parameter in slapgrid help.', duration) time.sleep(duration) def create_slapgrid_object(options, logger): signature_certificate_list = None if 'signature-certificate-list' in options: cert_marker = '-----BEGIN CERTIFICATE-----' signature_certificate_list = [ cert_marker + '\n' + q.strip() for q in options['signature-certificate-list'].split(cert_marker) if q.strip() ] op = options software_min_free_space = human2bytes(op.get('software_min_free_space', '1000M')) instance_min_free_space = human2bytes(op.get('instance_min_free_space', '1000M')) return Slapgrid(software_root=op['software_root'], instance_root=op['instance_root'], master_url=op['master_url'], computer_id=op['computer_id'], buildout=op.get('buildout'), logger=logger, maximum_periodicity = op.get('maximum_periodicity', 86400), key_file=op.get('key_file'), cert_file=op.get('cert_file'), signature_private_key_file=op.get( 'signature-private-key-file', op.get('signature_private_key_file')), signature_certificate_list=signature_certificate_list, download_binary_cache_url=op.get('download-binary-cache-url'), upload_binary_cache_url=op.get('upload-binary-cache-url'), download_from_binary_cache_url_blacklist= op.get('download-from-binary-cache-url-blacklist', []), upload_to_binary_cache_url_blacklist= op.get('upload-to-binary-cache-url-blacklist', []), upload_cache_url=op.get('upload-cache-url'), download_binary_dir_url=op.get('download-binary-dir-url'), upload_binary_dir_url=op.get('upload-binary-dir-url'), upload_dir_url=op.get('upload-dir-url'), master_ca_file=op.get('master_ca_file'), certificate_repository_path=op.get('certificate_repository_path'), promise_timeout=op.get('promise_timeout', PROMISE_TIMEOUT), shacache_ca_file=op.get('shacache-ca-file'), shacache_cert_file=op.get('shacache-cert-file'), shacache_key_file=op.get('shacache-key-file'), shadir_ca_file=op.get('shadir-ca-file'), shadir_cert_file=op.get('shadir-cert-file'), shadir_key_file=op.get('shadir-key-file'), forbid_supervisord_automatic_launch=string_to_boolean(op.get('forbid_supervisord_automatic_launch', 'false')), develop=op.get('develop', False), # Try to fetch from deprecated argument software_release_filter_list=op.get('only-sr', op.get('only_sr')), # Try to fetch from deprecated argument computer_partition_filter_list=op.get('only-cp', op.get('only_cp')), software_min_free_space=software_min_free_space, instance_min_free_space=instance_min_free_space, instance_storage_home=op.get('instance_storage_home'), ipv4_global_network=op.get('ipv4_global_network'), firewall_conf=op.get('firewall')) def check_required_only_partitions(existing, required): """ Verify the existence of partitions specified by the --only parameter """ missing = set(required) - set(existing) if missing: plural = ['s', ''][len(missing) == 1] raise ValueError('Unknown partition%s: %s' % (plural, ', '.join(sorted(missing)))) class Slapgrid(object): """ Main class for SlapGrid. Fetches and processes informations from master server and pushes usage information to master server. 
""" class PromiseError(Exception): pass def __init__(self, software_root, instance_root, master_url, computer_id, buildout, logger, maximum_periodicity=86400, key_file=None, cert_file=None, signature_private_key_file=None, signature_certificate_list=None, download_binary_cache_url=None, upload_binary_cache_url=None, download_from_binary_cache_url_blacklist=None, upload_to_binary_cache_url_blacklist=None, upload_cache_url=None, download_binary_dir_url=None, upload_binary_dir_url=None, upload_dir_url=None, master_ca_file=None, certificate_repository_path=None, promise_timeout=3, shacache_ca_file=None, shacache_cert_file=None, shacache_key_file=None, shadir_ca_file=None, shadir_cert_file=None, shadir_key_file=None, forbid_supervisord_automatic_launch=False, develop=False, software_release_filter_list=None, computer_partition_filter_list=None, software_min_free_space=None, instance_min_free_space=None, instance_storage_home=None, ipv4_global_network=None, firewall_conf={}, ): """Makes easy initialisation of class parameters""" # Parses arguments self.software_root = os.path.abspath(software_root) self.instance_root = os.path.abspath(instance_root) self.master_url = master_url self.computer_id = computer_id self.supervisord_socket = _getSupervisordSocketPath(instance_root) self.key_file = key_file self.cert_file = cert_file self.master_ca_file = master_ca_file self.certificate_repository_path = certificate_repository_path self.signature_private_key_file = signature_private_key_file self.signature_certificate_list = signature_certificate_list self.download_binary_cache_url = download_binary_cache_url self.upload_binary_cache_url = upload_binary_cache_url self.download_from_binary_cache_url_blacklist = \ download_from_binary_cache_url_blacklist self.upload_to_binary_cache_url_blacklist = \ upload_to_binary_cache_url_blacklist self.upload_cache_url = upload_cache_url self.download_binary_dir_url = download_binary_dir_url self.upload_binary_dir_url = upload_binary_dir_url self.upload_dir_url = upload_dir_url self.shacache_ca_file = shacache_ca_file self.shacache_cert_file = shacache_cert_file self.shacache_key_file = shacache_key_file self.shadir_ca_file = shadir_ca_file self.shadir_cert_file = shadir_cert_file self.shadir_key_file = shadir_key_file self.forbid_supervisord_automatic_launch = forbid_supervisord_automatic_launch self.logger = logger # Creates objects from slap module self.slap = slapos.slap.slap() self.slap.initializeConnection(self.master_url, key_file=self.key_file, cert_file=self.cert_file, master_ca_file=self.master_ca_file) self.computer = self.slap.registerComputer(self.computer_id) # Defines all needed paths self.buildout = buildout self.promise_timeout = promise_timeout self.develop = develop if software_release_filter_list is not None: self.software_release_filter_list = \ software_release_filter_list.split(",") else: self.software_release_filter_list = [] self.computer_partition_filter_list = [] if computer_partition_filter_list is not None: self.computer_partition_filter_list = \ computer_partition_filter_list.split(",") self.maximum_periodicity = maximum_periodicity self.software_min_free_space = software_min_free_space self.instance_min_free_space = instance_min_free_space if instance_storage_home: self.instance_storage_home = os.path.abspath(instance_storage_home) else: self.instance_storage_home = "" if ipv4_global_network: self.ipv4_global_network = ipv4_global_network else: self.ipv4_global_network= "" self.firewall_conf = firewall_conf def _getWatchdogLine(self): 
invocation_list = [WATCHDOG_PATH] invocation_list.append("--master-url '%s' " % self.master_url) if self.certificate_repository_path: invocation_list.append("--certificate-repository-path '%s'" % self.certificate_repository_path) invocation_list.append("--computer-id '%s'" % self.computer_id) invocation_list.append("--instance-root '%s'" % self.instance_root) return ' '.join(invocation_list) def _generateFirewallSupervisorConf(self): """If firewall section is defined in slapos configuration, generate supervisor configuration entry for firewall process. """ supervisord_conf_folder_path = os.path.join(self.instance_root, 'etc', 'supervisord.conf.d') supervisord_firewall_conf = os.path.join(supervisord_conf_folder_path, 'firewall.conf') if not self.firewall_conf or not self.firewall_conf.get('firewall_executable') \ or self.firewall_conf.get('testing', False): if os.path.exists(supervisord_firewall_conf): os.unlink(supervisord_firewall_conf) return supervisord_firewall_program_conf = """\ [program:firewall] directory=/opt/slapos command=%(firewall_executable)s process_name=firewall priority=5 autostart=true autorestart=true startsecs=0 startretries=0 exitcodes=0 stopsignal=TERM stopwaitsecs=60 user=0 group=0 serverurl=AUTO redirect_stderr=true stdout_logfile=%(log_file)s stdout_logfile_maxbytes=100KB stdout_logfile_backups=1 stderr_logfile=%(log_file)s stderr_logfile_maxbytes=100KB stderr_logfile_backups=1 """ % {'firewall_executable': self.firewall_conf['firewall_executable'], 'log_file': self.firewall_conf.get('log_file', '/var/log/firewall.log')} if not os.path.exists(supervisord_conf_folder_path): os.makedirs(supervisord_conf_folder_path) updateFile(supervisord_firewall_conf, supervisord_firewall_program_conf) def _generateDbusSupervisorConf(self): """If dbus command is defined in slapos configuration, generate supervisor configuration entry for dbus daemon. """ supervisord_conf_folder_path = os.path.join(self.instance_root, 'etc', 'supervisord.conf.d') supervisord_dbus_conf = os.path.join(supervisord_conf_folder_path, 'dbus.conf') if not self.firewall_conf or not self.firewall_conf.get('dbus_executable') \ or self.firewall_conf.get('testing', False): if os.path.exists(supervisord_dbus_conf): os.unlink(supervisord_dbus_conf) return supervisord_dbus_program_conf = """\ [program:dbus] directory=/opt/slapos command=%(dbus_executable)s process_name=dbus priority=1 autostart=true autorestart=true startsecs=0 startretries=0 exitcodes=0 stopsignal=TERM stopwaitsecs=60 user=0 group=0 serverurl=AUTO redirect_stderr=true stdout_logfile=%(dbus_log_file)s stdout_logfile_maxbytes=100KB stdout_logfile_backups=1 stderr_logfile=%(dbus_log_file)s stderr_logfile_maxbytes=100KB stderr_logfile_backups=1 """ % {'dbus_executable': self.firewall_conf['dbus_executable'], 'dbus_log_file': self.firewall_conf.get('dbus_log_file', '/var/log/dbus.log')} if not os.path.exists(supervisord_conf_folder_path): os.makedirs(supervisord_conf_folder_path) updateFile(supervisord_dbus_conf, supervisord_dbus_program_conf) def checkEnvironmentAndCreateStructure(self): """Checks for software_root and instance_root existence, then creates needed files and directories. """ # Checks for software_root and instance_root existence if not os.path.isdir(self.software_root): raise OSError('%s does not exist.' 
% self.software_root) createSupervisordConfiguration(self.instance_root, self._getWatchdogLine()) self._generateFirewallSupervisorConf() self._generateDbusSupervisorConf() def _launchSupervisord(self): if not self.forbid_supervisord_automatic_launch: launchSupervisord(instance_root=self.instance_root, logger=self.logger) def getComputerPartitionList(self): try: return self.computer.getComputerPartitionList() except socket.error as exc: self.logger.fatal(exc) raise def processSoftwareReleaseList(self): """Will process each Software Release. """ self.checkEnvironmentAndCreateStructure() self.logger.info('Processing software releases...') # Boolean to know if every instance has correctly been deployed clean_run = True for software_release in self.computer.getSoftwareReleaseList(): state = software_release.getState() try: software_release_uri = software_release.getURI() url_hash = md5digest(software_release_uri) software_path = os.path.join(self.software_root, url_hash) software = Software(url=software_release_uri, software_root=self.software_root, buildout=self.buildout, logger=self.logger, signature_private_key_file=self.signature_private_key_file, signature_certificate_list=self.signature_certificate_list, download_binary_cache_url=self.download_binary_cache_url, upload_binary_cache_url=self.upload_binary_cache_url, download_from_binary_cache_url_blacklist= self.download_from_binary_cache_url_blacklist, upload_to_binary_cache_url_blacklist= self.upload_to_binary_cache_url_blacklist, upload_cache_url=self.upload_cache_url, download_binary_dir_url=self.download_binary_dir_url, upload_binary_dir_url=self.upload_binary_dir_url, upload_dir_url=self.upload_dir_url, shacache_ca_file=self.shacache_ca_file, shacache_cert_file=self.shacache_cert_file, shacache_key_file=self.shacache_key_file, shadir_ca_file=self.shadir_ca_file, shadir_cert_file=self.shadir_cert_file, shadir_key_file=self.shadir_key_file, software_min_free_space=self.software_min_free_space) if state == 'available': completed_tag = os.path.join(software_path, '.completed') if (self.develop or (not os.path.exists(completed_tag) and len(self.software_release_filter_list) == 0) or url_hash in self.software_release_filter_list or url_hash in (md5digest(uri) for uri in self.software_release_filter_list)): try: software_release.building() except NotFoundError: pass software.install() with open(completed_tag, 'w') as fout: fout.write(time.asctime()) elif state == 'destroyed': if os.path.exists(software_path): self.logger.info('Destroying %r...' % software_release_uri) software.destroy() self.logger.info('Destroyed %r.' % software_release_uri) # Send log before exiting except (SystemExit, KeyboardInterrupt): software_release.error(traceback.format_exc(), logger=self.logger) raise # Buildout failed: send log but don't print it to output (already done) except BuildoutFailedError as exc: clean_run = False try: software_release.error(exc, logger=self.logger) except (SystemExit, KeyboardInterrupt): raise except Exception: self.logger.exception('Problem while reporting error, continuing:') # For everything else: log it, send it, continue. 
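    # Editor's illustration (hedged sketch, not upstream code): a software
    # release URL is mapped to its on-disk location through the md5 hash of
    # the URL, which is what md5digest() is expected to compute, e.g. for a
    # hypothetical URL:
    #
    #   import hashlib, os.path
    #   url = 'http://example.com/software.cfg'            # hypothetical URL
    #   url_hash = hashlib.md5(url).hexdigest()            # same role as md5digest(url)
    #   software_path = os.path.join('/opt/slapgrid', url_hash)
    #   completed_tag = os.path.join(software_path, '.completed')
    #
    # The '.completed' marker written after a successful install is what lets
    # an already-built release be skipped on later runs, unless --develop or an
    # explicit software release filter forces a rebuild.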
except Exception: self.logger.exception('') software_release.error(traceback.format_exc(), logger=self.logger) clean_run = False else: if state == 'available': try: software_release.available() except (NotFoundError, ServerError): pass elif state == 'destroyed': try: software_release.destroyed() except (NotFoundError, ServerError): self.logger.exception('') self.logger.info('Finished software releases.') # Return success value if not clean_run: return SLAPGRID_FAIL return SLAPGRID_SUCCESS def _checkPromises(self, computer_partition): self.logger.info("Checking promises...") instance_path = os.path.join(self.instance_root, computer_partition.getId()) uid, gid = None, None stat_info = os.stat(instance_path) #stat sys call to get statistics informations uid = stat_info.st_uid gid = stat_info.st_gid promise_present = False # Get the list of promises promise_dir = os.path.join(instance_path, 'etc', 'promise') if os.path.exists(promise_dir) and os.path.isdir(promise_dir): # Check whether every promise is kept for promise in os.listdir(promise_dir): promise_present = True command = [os.path.join(promise_dir, promise)] promise = os.path.basename(command[0]) self.logger.info("Checking promise '%s'.", promise) process_handler = subprocess.Popen(command, preexec_fn=lambda: dropPrivileges(uid, gid, logger=self.logger), cwd=instance_path, env=None if sys.platform == 'cygwin' else {}, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) process_handler.stdin.flush() process_handler.stdin.close() process_handler.stdin = None # Check if the promise finished every tenth of second, # but timeout after promise_timeout. sleep_time = 0.1 increment_limit = int(self.promise_timeout / sleep_time) for current_increment in range(0, increment_limit): if process_handler.poll() is None: time.sleep(sleep_time) continue if process_handler.poll() == 0: # Success! break else: stderr = process_handler.communicate()[1] if stderr is None: stderr = "No error output from '%s'." % promise else: stderr = "Promise '%s':" % promise + stderr raise Slapgrid.PromiseError(stderr) else: process_handler.terminate() raise Slapgrid.PromiseError("The promise '%s' timed out" % promise) if not promise_present: self.logger.info("No promise.") def _endInstallationTransaction(self, computer_partition): partition_id = computer_partition.getId() transaction_file_name = COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME % partition_id transaction_file_path = os.path.join(self.instance_root, partition_id, transaction_file_name) if os.path.exists(transaction_file_path): with open(transaction_file_path, 'r') as tf: try: computer_partition.setComputerPartitionRelatedInstanceList( [reference for reference in tf.read().split('\n') if reference] ) except NotFoundError, e: # Master doesn't implement this feature ? self.logger.warning("NotFoundError: %s. \nCannot send requested instance "\ "list to master. Please check if this feature is"\ "implemented on SlapOS Master." 
% str(e)) def _addFirewallRule(self, rule_command): """ """ query_cmd = rule_command.replace('--add-rule', '--query-rule') process = FPopen(query_cmd) result, stderr = process.communicate() if result.strip() == 'no': # rule doesn't exist add to firewall self.logger.debug(rule_command) process = FPopen(rule_command) rule_result, stderr = process.communicate() if process.returncode == 0: if rule_result.strip() != 'success': raise Exception(rule_result) else: raise Exception("Failed to add firewalld rule %s\n%s.\n%s" % ( rule_command, rule_result, stderr)) elif result.strip() != 'no' and process.returncode != 0: raise Exception("Failed to run firewalld rule %s\n%s.\n%s" % ( query_cmd, result, stderr)) return result.strip() == 'no' def _removeFirewallRule(self, rule_command): """ """ query_cmd = rule_command.replace('--add-rule', '--query-rule') process = FPopen(query_cmd) result, stderr = process.communicate() if result.strip() == 'yes': # The rule really exist, remove it remove_command = rule_command.replace('--add-rule', '--remove-rule') self.logger.debug(remove_command) process = FPopen(remove_command) rule_result, stderr = process.communicate() if process.returncode == 0: if rule_result.strip() != 'success': raise Exception(rule_result) else: raise Exception("Failed to add firewalld rule %s\n%s.\n%s" % ( rule_command, rule_result, stderr)) elif result.strip() != 'no' and process.returncode != 0: raise Exception("Failed to run firewalld rule %s\n%s.\n%s" % ( query_cmd, result, stderr)) return result.strip() == 'yes' def _checkAddFirewallRules(self, partition_id, command_list, add=True): """ Process Firewall rules from and save rules to firewall_rules_path """ instance_path = os.path.join(self.instance_root, partition_id) firewall_rules_path = os.path.join(instance_path, Partition.partition_firewall_rules_name) reload_rules = False fw_base_cmd = self.firewall_conf['firewall_cmd'] json_list = [] if os.path.exists(firewall_rules_path): with open(firewall_rules_path, 'r') as frules: rules_list = json.loads(frules.read()) for command in rules_list: skip_remove = False if add: for new_cmd in command_list: if command == new_cmd: skip_remove = True break if not skip_remove: state = self._removeFirewallRule('%s %s' % (fw_base_cmd, command)) reload_rules = reload_rules or state if add: json_list = command_list for command in command_list: state = self._addFirewallRule('%s %s' % (fw_base_cmd, command)) reload_rules = reload_rules or state if reload_rules: # Apply changes: reload configuration # XXX - need to check firewalld reload instead of restart self.logger.info("Reloading firewall configuration...") reload_cmd = self.firewall_conf['reload_config_cmd'] reload_process = FPopen(reload_cmd) stdout, stderr = reload_process.communicate() if reload_process.returncode != 0: raise Exception("Failed to load firewalld rules with command %s.\n%" % ( stderr, reload_cmd)) with open(firewall_rules_path, 'w') as frules: frules.write(json.dumps(json_list)) def _getFirewallAcceptRules(self, ip, hosting_ip_list, source_ip_list, ip_type='ipv4'): """ Generate rules for firewall based on list of IP that should have access to `ip` """ if ip_type not in ['ipv4', 'ipv6', 'eb']: raise NotImplementedError("firewall-cmd has not rules with tables %s." 
% ip_type) command = '--permanent --direct --add-rule %s filter' % ip_type cmd_list = [] ip_list = hosting_ip_list + source_ip_list for other_ip in ip_list: # Configure INPUT rules cmd_list.append('%s INPUT 0 -s %s -d %s -j ACCEPT' % (command, other_ip, ip)) # Configure FORWARD rules cmd_list.append('%s FORWARD 0 -s %s -d %s -j ACCEPT' % (command, other_ip, ip)) # Reject all other requests cmd_list.append('%s INPUT 1000 -d %s -j REJECT' % (command, ip)) cmd_list.append('%s FORWARD 1000 -d %s -j REJECT' % (command, ip)) cmd_list.append('%s INPUT 900 -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % ( command, ip)) cmd_list.append('%s FORWARD 900 -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % ( command, ip)) return cmd_list def _getFirewallRejectRules(self, ip, hosting_ip_list, source_ip_list, ip_type='ipv4'): """ Generate rules for firewall based on list of IP that should not have access to `ip` """ if ip_type not in ['ipv4', 'ipv6', 'eb']: raise NotImplementedError("firewall-cmd has not rules with tables %s." % ip_type) command = '--permanent --direct --add-rule %s filter' % ip_type cmd_list = [] # Accept all other requests #cmd_list.append('%s INPUT 1000 -d %s -j ACCEPT' % (command, ip)) #cmd_list.append('%s FORWARD 1000 -d %s -j ACCEPT' % (command, ip)) # Reject all other requests from the list for other_ip in source_ip_list: cmd_list.append('%s INPUT 800 -s %s -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % ( command, other_ip, ip)) cmd_list.append('%s FORWARD 800 -s %s -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % ( command, other_ip, ip)) cmd_list.append('%s INPUT 900 -s %s -d %s -j REJECT' % (command, other_ip, ip)) cmd_list.append('%s FORWARD 900 -s %s -d %s -j REJECT' % (command, other_ip, ip)) # Accept on this hosting subscription for other_ip in hosting_ip_list: cmd_list.append('%s INPUT 0 -s %s -d %s -j ACCEPT' % (command, other_ip, ip)) cmd_list.append('%s FORWARD 0 -s %s -d %s -j ACCEPT' % (command, other_ip, ip)) return cmd_list def _getValidIpv4FromList(self, ipv4_list, warn=False): """ Return the list containing only valid ipv4 or network address. """ valid_list = [] for ip in ipv4_list: if not ip: continue the_ip = ip.split('/')[0] if valid_ipv4(the_ip): valid_list.append(ip) elif warn: self.logger.warn("IP/Network address %s is not valid. ignored.." 
% ip) return valid_list def _setupComputerPartitionFirewall(self, computer_partition, ip_list, drop_entries=False): """ Using linux iptables, limit access to IP of this partition to all others partitions of the same Hosting Subscription """ ipv4_list = [] ipv6_list = [] source_ipv4_list = [] source_ipv6_list = [] hosting_ipv4_list = [] hosting_ipv6_list = [] getFirewallRules = getattr(self, '_getFirewallAcceptRules') if not drop_entries: self.logger.info("Configuring firewall...") add_rules = True else: add_rules = False self.logger.info("Removing firewall configuration...") for net_ip in ip_list: iface, ip = (net_ip[0], net_ip[1]) if not iface.startswith('route_'): continue if valid_ipv4(ip): ipv4_list.append(ip) elif valid_ipv6(ip): ipv6_list.append(ip) hosting_ip_list = computer_partition.getFullHostingIpAddressList() for iface, ip in hosting_ip_list: if valid_ipv4(ip): if not ip in ipv4_list: hosting_ipv4_list.append(ip) elif valid_ipv6(ip): if not ip in ipv6_list: hosting_ipv6_list.append(ip) filter_dict = getattr(computer_partition, '_filter_dict', None) extra_list = [] accept_ip_list = [] if filter_dict is not None: if filter_dict.get('fw_restricted_access', 'on') == 'off': extra_list = filter_dict.get('fw_rejected_sources', '').split(' ') getFirewallRules = getattr(self, '_getFirewallRejectRules') accept_ip_list.extend(self.firewall_conf.get('authorized_sources', [])) accept_ip_list.extend(filter_dict.get('fw_authorized_sources', '').split(' ')) else: extra_list = filter_dict.get('fw_authorized_sources', '').split(' ') extra_list.extend(self.firewall_conf.get('authorized_sources', [])) source_ipv4_list = self._getValidIpv4FromList(extra_list, True) hosting_ipv4_list.extend(self._getValidIpv4FromList(accept_ip_list, True)) # XXX - ipv6_list and source_ipv6_list ignored for the moment for ip in ipv4_list: cmd_list = getFirewallRules(ip, hosting_ipv4_list, source_ipv4_list, ip_type='ipv4') self._checkAddFirewallRules(computer_partition.getId(), cmd_list, add=add_rules) def processComputerPartition(self, computer_partition): """ Process a Computer Partition, depending on its state """ computer_partition_id = computer_partition.getId() # Sanity checks before processing # Those values should not be None or empty string or any falsy value if not computer_partition_id: raise ValueError('Computer Partition id is empty.') # Check if we defined explicit list of partitions to process. # If so, if current partition not in this list, skip. if len(self.computer_partition_filter_list) > 0 and \ (computer_partition_id not in self.computer_partition_filter_list): return self.logger.debug('Check if %s requires processing...' 
% computer_partition_id) instance_path = os.path.join(self.instance_root, computer_partition_id) os.environ['SLAPGRID_INSTANCE_ROOT'] = self.instance_root # Check if transaction file of this partition exists, if the file was created, # remove it so it will be generate with this new transaction transaction_file_name = COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME % computer_partition_id transaction_file_path = os.path.join(instance_path, transaction_file_name) if os.path.exists(transaction_file_path): os.unlink(transaction_file_path) # Try to get partition timestamp (last modification date) timestamp_path = os.path.join( instance_path, COMPUTER_PARTITION_TIMESTAMP_FILENAME ) parameter_dict = computer_partition.getInstanceParameterDict() if 'timestamp' in parameter_dict: timestamp = parameter_dict['timestamp'] else: timestamp = None error_output_file = os.path.join( instance_path, COMPUTER_PARTITION_INSTALL_ERROR_FILENAME % computer_partition_id ) try: software_url = computer_partition.getSoftwareRelease().getURI() except NotFoundError: # Problem with instance: SR URI not set. # Try to process it anyway, it may need to be deleted. software_url = None try: software_path = os.path.join(self.software_root, md5digest(software_url)) except TypeError: # Problem with instance: SR URI not set. # Try to process it anyway, it may need to be deleted. software_path = None periodicity = self.maximum_periodicity if software_path: periodicity_path = os.path.join(software_path, 'periodicity') if os.path.exists(periodicity_path): try: periodicity = int(open(periodicity_path).read()) except ValueError: os.remove(periodicity_path) self.logger.exception('') # Check if timestamp from server is more recent than local one. # If not: it's not worth processing this partition (nothing has # changed). if (computer_partition_id not in self.computer_partition_filter_list and not self.develop and os.path.exists(timestamp_path)): old_timestamp = open(timestamp_path).read() last_runtime = int(os.path.getmtime(timestamp_path)) if timestamp: try: if periodicity == 0: os.remove(timestamp_path) elif int(timestamp) <= int(old_timestamp): # Check periodicity, i.e if periodicity is one day, partition # should be processed at least every day. if int(time.time()) <= (last_runtime + periodicity) or periodicity < 0: self.logger.debug('Partition already up-to-date, skipping.') return else: # Periodicity forced processing this partition. Removing # the timestamp file in case it fails. os.remove(timestamp_path) except ValueError: os.remove(timestamp_path) self.logger.exception('') # Include Partition Logging log_folder_path = "%s/.slapgrid/log" % instance_path mkdir_p(log_folder_path) partition_file_handler = logging.FileHandler( filename="%s/instance.log" % (log_folder_path) ) stat_info = os.stat(instance_path) chownDirectory("%s/.slapgrid" % instance_path, uid=stat_info.st_uid, gid=stat_info.st_gid) formatter = logging.Formatter( '[%(asctime)s] %(levelname)-8s %(name)s %(message)s') partition_file_handler.setFormatter(formatter) self.logger.addHandler(partition_file_handler) try: self.logger.info('Processing Computer Partition %s.' 
% computer_partition_id) self.logger.info(' Software URL: %s' % software_url) self.logger.info(' Software path: %s' % software_path) self.logger.info(' Instance path: %s' % instance_path) filter_dict = getattr(computer_partition, '_filter_dict', None) if filter_dict: retention_delay = filter_dict.get('retention_delay', '0') else: retention_delay = '0' local_partition = Partition( software_path=software_path, instance_path=instance_path, supervisord_partition_configuration_path=os.path.join( _getSupervisordConfigurationDirectory(self.instance_root), '%s.conf' % computer_partition_id), supervisord_socket=self.supervisord_socket, computer_partition=computer_partition, computer_id=self.computer_id, partition_id=computer_partition_id, server_url=self.master_url, software_release_url=software_url, certificate_repository_path=self.certificate_repository_path, buildout=self.buildout, logger=self.logger, retention_delay=retention_delay, instance_min_free_space=self.instance_min_free_space, instance_storage_home=self.instance_storage_home, ipv4_global_network=self.ipv4_global_network, ) computer_partition_state = computer_partition.getState() # XXX this line breaks 37 tests # self.logger.info(' Instance type: %s' % computer_partition.getType()) self.logger.info(' Instance status: %s' % computer_partition_state) partition_ip_list = full_hosting_ip_list = [] if self.firewall_conf: partition_ip_list = parameter_dict['ip_list'] + parameter_dict.get( 'full_ip_list', []) if computer_partition_state == COMPUTER_PARTITION_STARTED_STATE: local_partition.install() computer_partition.available() local_partition.start() if self.firewall_conf: self._setupComputerPartitionFirewall(computer_partition, partition_ip_list) self._checkPromises(computer_partition) computer_partition.started() self._endInstallationTransaction(computer_partition) elif computer_partition_state == COMPUTER_PARTITION_STOPPED_STATE: try: # We want to process the partition, even if stopped, because it should # propagate the state to children if any. local_partition.install() computer_partition.available() if self.firewall_conf: self._setupComputerPartitionFirewall(computer_partition, partition_ip_list) finally: # Instance has to be stopped even if buildout/reporting is wrong. local_partition.stop() computer_partition.stopped() self._endInstallationTransaction(computer_partition) elif computer_partition_state == COMPUTER_PARTITION_DESTROYED_STATE: local_partition.stop() if self.firewall_conf: self._setupComputerPartitionFirewall(computer_partition, partition_ip_list, drop_entries=True) try: computer_partition.stopped() except (SystemExit, KeyboardInterrupt): computer_partition.error(traceback.format_exc(), logger=self.logger) raise except Exception: pass else: error_string = "Computer Partition %r has unsupported state: %s" % \ (computer_partition_id, computer_partition_state) computer_partition.error(error_string, logger=self.logger) raise NotImplementedError(error_string) except Exception, e: with open(error_output_file, 'w') as error_file: # Write error message in a log file assible to computer partition user error_file.write(str(e)) raise else: self.logger.removeHandler(partition_file_handler) if os.path.exists(error_output_file): os.unlink(error_output_file) # If partition has been successfully processed, write timestamp if timestamp: open(timestamp_path, 'w').write(timestamp) def FilterComputerPartitionList(self, computer_partition_list): """ Try to filter valid partitions to be processed from free partitions. 
""" filtered_computer_partition_list = [] for computer_partition in computer_partition_list: try: computer_partition_path = os.path.join(self.instance_root, computer_partition.getId()) if not os.path.exists(computer_partition_path): raise NotFoundError('Partition directory %s does not exist.' % computer_partition_path) # Check state of partition. If it is in "destroyed" state, check if it # partition is actually installed in the Computer or if it is "free" # partition, and check if it has some Software information. # XXX-Cedric: Temporary AND ugly solution to check if an instance # is in the partition. Dangerous because not 100% sure it is empty computer_partition_state = computer_partition.getState() try: software_url = computer_partition.getSoftwareRelease().getURI() except (NotFoundError, TypeError, NameError): software_url = None if computer_partition_state == COMPUTER_PARTITION_DESTROYED_STATE and \ not software_url: # Exclude files which may come from concurrent processing # ie.: slapos ndoe report and slapos node instance commands # can create a .timestamp file. file_list = os.listdir(computer_partition_path) for garbage_file in [".slapgrid", ".timestamp"]: if garbage_file in file_list: shutil.rmtree("/".join([computer_partition_path, garbage_file])) if os.listdir(computer_partition_path) != []: self.logger.warning("Free partition %s contains file(s) in %s." % ( computer_partition.getId(), computer_partition_path)) continue # Everything seems fine filtered_computer_partition_list.append(computer_partition) # XXX-Cedric: factor all this error handling # Send log before exiting except (SystemExit, KeyboardInterrupt): computer_partition.error(traceback.format_exc(), logger=self.logger) raise except Exception as exc: # if Buildout failed: send log but don't print it to output (already done) if not isinstance(exc, BuildoutFailedError): # For everything else: log it, send it, continue. self.logger.exception('') try: computer_partition.error(exc, logger=self.logger) except (SystemExit, KeyboardInterrupt): raise except Exception: self.logger.exception('Problem while reporting error, continuing:') return filtered_computer_partition_list def processComputerPartitionList(self): """ Will start supervisord and process each Computer Partition. """ self.logger.info('Processing computer partitions...') # Prepares environment self.checkEnvironmentAndCreateStructure() self._launchSupervisord() # Boolean to know if every instance has correctly been deployed clean_run = True # Boolean to know if every promises correctly passed clean_run_promise = True check_required_only_partitions([cp.getId() for cp in self.getComputerPartitionList()], self.computer_partition_filter_list) # Filter all dummy / empty partitions computer_partition_list = self.FilterComputerPartitionList( self.getComputerPartitionList()) for computer_partition in computer_partition_list: # Nothing should raise outside of the current loop iteration, so that # even if something is terribly wrong while processing an instance, it # won't prevent processing other ones. 
try: # Process the partition itself self.processComputerPartition(computer_partition) # Send log before exiting except (SystemExit, KeyboardInterrupt): computer_partition.error(traceback.format_exc(), logger=self.logger) raise except Slapgrid.PromiseError as exc: clean_run_promise = False try: self.logger.error(exc) computer_partition.error(exc, logger=self.logger) except (SystemExit, KeyboardInterrupt): raise except Exception: self.logger.exception('Problem while reporting error, continuing:') except Exception as exc: clean_run = False # if Buildout failed: send log but don't print it to output (already done) if not isinstance(exc, BuildoutFailedError): # For everything else: log it, send it, continue. self.logger.exception('') try: computer_partition.error(exc, logger=self.logger) except (SystemExit, KeyboardInterrupt): raise except Exception: self.logger.exception('Problem while reporting error, continuing:') self.logger.info('Finished computer partitions.') # Return success value if not clean_run: return SLAPGRID_FAIL if not clean_run_promise: return SLAPGRID_PROMISE_FAIL return SLAPGRID_SUCCESS def validateXML(self, to_be_validated, xsd_model): """Validates a given xml file""" #We retrieve the xsd model xsd_model = StringIO.StringIO(xsd_model) xmlschema_doc = etree.parse(xsd_model) xmlschema = etree.XMLSchema(xmlschema_doc) try: document = etree.fromstring(to_be_validated) except (etree.XMLSyntaxError, etree.DocumentInvalid) as exc: self.logger.info('Failed to parse this XML report : %s\n%s' % (to_be_validated, _formatXMLError(exc))) self.logger.error(_formatXMLError(exc)) return False if xmlschema.validate(document): return True return False def asXML(self, computer_partition_usage_list): """Generates a XML report from computer partition usage list """ xml = ['', '', '', 'Resource consumptions', '', '%s' % time.strftime("%Y-%m-%d at %H:%M:%S"), '%s' % self.computer_id, '', '', '', '', '', '', ''] for computer_partition_usage in computer_partition_usage_list: try: root = etree.fromstring(computer_partition_usage.usage) except UnicodeError as exc: self.logger.info("Failed to read %s." % computer_partition_usage.usage) self.logger.error(UnicodeError) raise UnicodeError("Failed to read %s: %s" % (computer_partition_usage.usage, exc)) except (etree.XMLSyntaxError, etree.DocumentInvalid) as exc: self.logger.info("Failed to parse %s." % (computer_partition_usage.usage)) self.logger.error(exc) raise _formatXMLError(exc) except Exception as exc: raise Exception("Failed to generate XML report: %s" % exc) for movement in root.findall('movement'): xml.append('') for child in movement.getchildren(): if child.tag == "reference": xml.append('<%s>%s' % (child.tag, computer_partition_usage.getId(), child.tag)) else: xml.append('<%s>%s' % (child.tag, child.text, child.tag)) xml.append('') xml.append('') return ''.join(xml) def agregateAndSendUsage(self): """Will agregate usage from each Computer Partition. 
""" # Prepares environment self.checkEnvironmentAndCreateStructure() self._launchSupervisord() slap_computer_usage = self.slap.registerComputer(self.computer_id) computer_partition_usage_list = [] self.logger.info('Aggregating and sending usage reports...') #We retrieve XSD models try: computer_consumption_model = \ pkg_resources.resource_string( 'slapos.slap', 'doc/computer_consumption.xsd') except IOError: computer_consumption_model = \ pkg_resources.resource_string( __name__, '../../../../slapos/slap/doc/computer_consumption.xsd') try: partition_consumption_model = \ pkg_resources.resource_string( 'slapos.slap', 'doc/partition_consumption.xsd') except IOError: partition_consumption_model = \ pkg_resources.resource_string( __name__, '../../../../slapos/slap/doc/partition_consumption.xsd') clean_run = True # Loop over the different computer partitions computer_partition_list = self.FilterComputerPartitionList( slap_computer_usage.getComputerPartitionList()) for computer_partition in computer_partition_list: try: computer_partition_id = computer_partition.getId() # We want to execute all the script in the report folder instance_path = os.path.join(self.instance_root, computer_partition.getId()) report_path = os.path.join(instance_path, 'etc', 'report') if os.path.isdir(report_path): script_list_to_run = os.listdir(report_path) else: script_list_to_run = [] # We now generate the pseudorandom name for the xml file # and we add it in the invocation_list f = tempfile.NamedTemporaryFile() name_xml = '%s.%s' % ('slapreport', os.path.basename(f.name)) path_to_slapreport = os.path.join(instance_path, 'var', 'xml_report', name_xml) failed_script_list = [] for script in script_list_to_run: invocation_list = [] invocation_list.append(os.path.join(instance_path, 'etc', 'report', script)) # We add the xml_file name to the invocation_list #f = tempfile.NamedTemporaryFile() #name_xml = '%s.%s' % ('slapreport', os.path.basename(f.name)) #path_to_slapreport = os.path.join(instance_path, 'var', name_xml) invocation_list.append(path_to_slapreport) # Dropping privileges uid, gid = None, None stat_info = os.stat(instance_path) #stat sys call to get statistics informations uid = stat_info.st_uid gid = stat_info.st_gid process_handler = SlapPopen(invocation_list, preexec_fn=lambda: dropPrivileges(uid, gid, logger=self.logger), cwd=os.path.join(instance_path, 'etc', 'report'), env=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=self.logger) if process_handler.returncode is None: process_handler.kill() if process_handler.returncode != 0: clean_run = False failed_script_list.append("Script %r failed." 
% script) self.logger.warning('Failed to run %r' % invocation_list) if len(failed_script_list): computer_partition.error('\n'.join(failed_script_list), logger=self.logger) # Whatever happens, don't stop processing other instances except Exception: self.logger.exception('Cannot run usage script(s) for %r:' % computer_partition.getId()) # Now we loop through the different computer partitions to report report_usage_issue_cp_list = [] for computer_partition in computer_partition_list: try: filename_delete_list = [] computer_partition_id = computer_partition.getId() instance_path = os.path.join(self.instance_root, computer_partition_id) dir_report_list = [os.path.join(instance_path, 'var', 'xml_report'), os.path.join(self.instance_root, 'var', 'xml_report', computer_partition_id)] for dir_reports in dir_report_list: # The directory xml_report contain a number of files equal # to the number of software instance running inside the same partition if os.path.isdir(dir_reports): filename_list = os.listdir(dir_reports) else: filename_list = [] # self.logger.debug('name List %s' % filename_list) for filename in filename_list: file_path = os.path.join(dir_reports, filename) if os.path.exists(file_path): usage = open(file_path, 'r').read() # We check the validity of xml content of each reports if not self.validateXML(usage, partition_consumption_model): self.logger.info('WARNING: The XML file %s generated by slapreport is ' 'not valid - This report is left as is at %s where you can ' 'inspect what went wrong ' % (filename, dir_reports)) # Warn the SlapOS Master that a partition generates corrupted xml # report else: computer_partition_usage = self.slap.registerComputerPartition( self.computer_id, computer_partition_id) computer_partition_usage.setUsage(usage) computer_partition_usage_list.append(computer_partition_usage) filename_delete_list.append(filename) else: self.logger.debug('Usage report %r not found, ignored' % file_path) # After sending the aggregated file we remove all the valid xml reports for filename in filename_delete_list: os.remove(os.path.join(dir_reports, filename)) # Whatever happens, don't stop processing other instances except Exception: self.logger.exception('Cannot run usage script(s) for %r:' % computer_partition.getId()) for computer_partition_usage in computer_partition_usage_list: self.logger.info('computer_partition_usage_list: %s - %s' % (computer_partition_usage.usage, computer_partition_usage.getId())) filename_delete_list = [] computer_report_dir = os.path.join(self.instance_root, 'var', 'xml_report', self.computer_id) # The directory xml_report contain a number of files equal # to the number of software instance running inside the same partition if os.path.isdir(computer_report_dir): filename_list = os.listdir(computer_report_dir) else: filename_list = [] for filename in filename_list: file_path = os.path.join(computer_report_dir, filename) if os.path.exists(file_path): usage = open(file_path, 'r').read() if self.validateXML(usage, computer_consumption_model): self.logger.info('XML file generated by asXML is valid') slap_computer_usage.reportUsage(usage) filename_delete_list.append(filename) else: self.logger.info('XML file is invalid %s' % filename) # After sending the aggregated file we remove all the valid xml reports for filename in filename_delete_list: os.remove(os.path.join(computer_report_dir, filename)) # If there is, at least, one report if computer_partition_usage_list != []: try: # We generate the final XML report with asXML method computer_consumption = 
self.asXML(computer_partition_usage_list) self.logger.info('Final xml report: %s' % computer_consumption) # We test the XML report before sending it if self.validateXML(computer_consumption, computer_consumption_model): self.logger.info('XML file generated by asXML is valid') slap_computer_usage.reportUsage(computer_consumption) else: self.logger.info('XML file generated by asXML is not valid !') raise ValueError('XML file generated by asXML is not valid !') except Exception: issue = "Cannot report usage for %r: %s" % ( computer_partition.getId(), traceback.format_exc()) self.logger.info(issue) computer_partition.error(issue, logger=self.logger) report_usage_issue_cp_list.append(computer_partition_id) for computer_partition in computer_partition_list: if computer_partition.getState() == COMPUTER_PARTITION_DESTROYED_STATE: destroyed = False try: computer_partition_id = computer_partition.getId() try: software_url = computer_partition.getSoftwareRelease().getURI() software_path = os.path.join(self.software_root, md5digest(software_url)) except (NotFoundError, TypeError): software_url = None software_path = None local_partition = Partition( software_path=software_path, instance_path=os.path.join(self.instance_root, computer_partition.getId()), supervisord_partition_configuration_path=os.path.join( _getSupervisordConfigurationDirectory(self.instance_root), '%s.conf' % computer_partition_id), supervisord_socket=self.supervisord_socket, computer_partition=computer_partition, computer_id=self.computer_id, partition_id=computer_partition_id, server_url=self.master_url, software_release_url=software_url, certificate_repository_path=self.certificate_repository_path, buildout=self.buildout, logger=self.logger, instance_storage_home=self.instance_storage_home, ipv4_global_network=self.ipv4_global_network, ) local_partition.stop() try: computer_partition.stopped() except (SystemExit, KeyboardInterrupt): computer_partition.error(traceback.format_exc(), logger=self.logger) raise except Exception: pass if computer_partition.getId() in report_usage_issue_cp_list: self.logger.info('Ignoring destruction of %r, as no report usage was sent' % computer_partition.getId()) continue destroyed = local_partition.destroy() except (SystemExit, KeyboardInterrupt): computer_partition.error(traceback.format_exc(), logger=self.logger) raise except Exception: clean_run = False self.logger.exception('') exc = traceback.format_exc() computer_partition.error(exc, logger=self.logger) try: if destroyed: computer_partition.destroyed() except NotFoundError: self.logger.debug('Ignored slap error while trying to inform about ' 'destroying not fully configured Computer Partition %r' % computer_partition.getId()) except ServerError as server_error: self.logger.debug('Ignored server error while trying to inform about ' 'destroying Computer Partition %r. Error is:\n%r' % (computer_partition.getId(), server_error.args[0])) self.logger.info('Finished usage reports.') # Return success value if not clean_run: return SLAPGRID_FAIL return SLAPGRID_SUCCESS slapos.core-1.3.18/slapos/grid/zc.buildout-bootstrap.py0000644000000000000000000002442412752436135023065 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2006 Zope Foundation and Contributors. # All Rights Reserved. # # This software is subject to the provisions of the Zope Public License, # Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution. 
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED # WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS # FOR A PARTICULAR PURPOSE. # ############################################################################## """Bootstrap a buildout-based project Simply run this script in a directory containing a buildout.cfg. The script accepts buildout command-line options, so you can use the -c option to specify an alternate configuration file. """ import os, shutil, sys, tempfile, urllib, urllib2, subprocess from optparse import OptionParser if sys.platform == 'win32': def quote(c): if ' ' in c: return '"%s"' % c # work around spawn lamosity on windows else: return c else: quote = str # See zc.buildout.easy_install._has_broken_dash_S for motivation and comments. stdout, stderr = subprocess.Popen( [sys.executable, '-Sc', 'try:\n' ' import ConfigParser\n' 'except ImportError:\n' ' print 1\n' 'else:\n' ' print 0\n'], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() has_broken_dash_S = bool(int(stdout.strip())) # In order to be more robust in the face of system Pythons, we want to # run without site-packages loaded. This is somewhat tricky, in # particular because Python 2.6's distutils imports site, so starting # with the -S flag is not sufficient. However, we'll start with that: if not has_broken_dash_S and 'site' in sys.modules: # We will restart with python -S. args = sys.argv[:] args[0:0] = [sys.executable, '-S'] args = map(quote, args) os.execv(sys.executable, args) # Now we are running with -S. We'll get the clean sys.path, import site # because distutils will do it later, and then reset the path and clean # out any namespace packages from site-packages that might have been # loaded by .pth files. clean_path = sys.path[:] import site # imported because of its side effects sys.path[:] = clean_path for k, v in sys.modules.items(): if k in ('setuptools', 'pkg_resources') or ( hasattr(v, '__path__') and len(v.__path__) == 1 and not os.path.exists(os.path.join(v.__path__[0], '__init__.py'))): # This is a namespace package. Remove it. sys.modules.pop(k) is_jython = sys.platform.startswith('java') setuptools_source = 'https://bootstrap.pypa.io/ez_setup.py' distribute_source = 'http://python-distribute.org/distribute_setup.py' # parsing arguments def normalize_to_url(option, opt_str, value, parser): if value: if '://' not in value: # It doesn't smell like a URL. value = 'file://%s' % ( urllib.pathname2url( os.path.abspath(os.path.expanduser(value))),) if opt_str == '--download-base' and not value.endswith('/'): # Download base needs a trailing slash to make the world happy. value += '/' else: value = None name = opt_str[2:].replace('-', '_') setattr(parser.values, name, value) usage = '''\ [DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options] Bootstraps a buildout-based project. Simply run this script in a directory containing a buildout.cfg, using the Python that you want bin/buildout to use. Note that by using --setup-source and --download-base to point to local resources, you can keep this script from going over the network. 
''' parser = OptionParser(usage=usage) parser.add_option("-v", "--version", dest="version", help="use a specific zc.buildout version") parser.add_option("-d", "--distribute", action="store_true", dest="use_distribute", default=False, help="Use Distribute rather than Setuptools.") parser.add_option("--setup-source", action="callback", dest="setup_source", callback=normalize_to_url, nargs=1, type="string", help=("Specify a URL or file location for the setup file. " "If you use Setuptools, this will default to " + setuptools_source + "; if you use Distribute, this " "will default to " + distribute_source + ".")) parser.add_option("--download-base", action="callback", dest="download_base", callback=normalize_to_url, nargs=1, type="string", help=("Specify a URL or directory for downloading " "zc.buildout and either Setuptools or Distribute. " "Defaults to PyPI.")) parser.add_option("--eggs", help=("Specify a directory for storing eggs. Defaults to " "a temporary directory that is deleted when the " "bootstrap script completes.")) parser.add_option("-t", "--accept-buildout-test-releases", dest='accept_buildout_test_releases', action="store_true", default=False, help=("Normally, if you do not specify a --version, the " "bootstrap script and buildout gets the newest " "*final* versions of zc.buildout and its recipes and " "extensions for you. If you use this flag, " "bootstrap and buildout will get the newest releases " "even if they are alphas or betas.")) parser.add_option("-c", None, action="store", dest="config_file", help=("Specify the path to the buildout configuration " "file to be used.")) options, args = parser.parse_args() if options.eggs: eggs_dir = os.path.abspath(os.path.expanduser(options.eggs)) else: eggs_dir = tempfile.mkdtemp() if options.setup_source is None: if options.use_distribute: options.setup_source = distribute_source else: options.setup_source = setuptools_source if options.accept_buildout_test_releases: args.insert(0, 'buildout:accept-buildout-test-releases=true') try: import pkg_resources import setuptools # A flag. Sometimes pkg_resources is installed alone. if not hasattr(pkg_resources, '_distribute'): raise ImportError except ImportError: ez_code = urllib2.urlopen( options.setup_source).read().replace('\r\n', '\n') ez = {} exec ez_code in ez setup_args = dict(to_dir=eggs_dir, download_delay=0) if options.download_base: setup_args['download_base'] = options.download_base if options.use_distribute: setup_args['no_fake'] = True if sys.version_info[:2] == (2, 4): setup_args['version'] = '0.6.32' ez['use_setuptools'](**setup_args) if 'pkg_resources' in sys.modules: reload(sys.modules['pkg_resources']) import pkg_resources # This does not (always?) update the default working set. We will # do it. 
for path in sys.path: if path not in pkg_resources.working_set.entries: pkg_resources.working_set.add_entry(path) cmd = [quote(sys.executable), '-c', quote('from setuptools.command.easy_install import main; main()'), '-mqNxd', quote(eggs_dir)] if not has_broken_dash_S: cmd.insert(1, '-S') find_links = options.download_base if not find_links: find_links = os.environ.get('bootstrap-testing-find-links') if not find_links and options.accept_buildout_test_releases: find_links = 'http://downloads.buildout.org/' if find_links: cmd.extend(['-f', quote(find_links)]) if options.use_distribute: setup_requirement = 'distribute' else: setup_requirement = 'setuptools' ws = pkg_resources.working_set setup_requirement_path = ws.find( pkg_resources.Requirement.parse(setup_requirement)).location env = dict( os.environ, PYTHONPATH=setup_requirement_path) requirement = 'zc.buildout' version = options.version if version is None and not options.accept_buildout_test_releases: # Figure out the most recent final version of zc.buildout. import setuptools.package_index _final_parts = '*final-', '*final' def _final_version(parsed_version): for part in parsed_version: if (part[:1] == '*') and (part not in _final_parts): return False return True index = setuptools.package_index.PackageIndex( search_path=[setup_requirement_path]) if find_links: index.add_find_links((find_links,)) req = pkg_resources.Requirement.parse(requirement) if index.obtain(req) is not None: best = [] bestv = None for dist in index[req.project_name]: distv = dist.parsed_version if distv >= pkg_resources.parse_version('2dev'): continue if _final_version(distv): if bestv is None or distv > bestv: best = [dist] bestv = distv elif distv == bestv: best.append(dist) if best: best.sort() version = best[-1].version if version: requirement += '=='+version else: requirement += '<2dev' cmd.append(requirement) if is_jython: import subprocess exitcode = subprocess.Popen(cmd, env=env).wait() else: # Windows prefers this, apparently; otherwise we would prefer subprocess exitcode = os.spawnle(*([os.P_WAIT, sys.executable] + cmd + [env])) if exitcode != 0: sys.stdout.flush() sys.stderr.flush() print ("An error occurred when trying to install zc.buildout. " "Look above this message for any errors that " "were output by easy_install.") sys.exit(exitcode) ws.add_entry(eggs_dir) ws.require(requirement) import zc.buildout.buildout # If there isn't already a command in the args, add bootstrap if not [a for a in args if '=' not in a]: args.append('bootstrap') # if -c was provided, we push it back into args for buildout's main function if options.config_file is not None: args[0:0] = ['-c', options.config_file] zc.buildout.buildout.main(args) if not options.eggs: # clean up temporary egg directory shutil.rmtree(eggs_dir) slapos.core-1.3.18/slapos/grid/SlapObject.py0000644000000000000000000010742212752436134020635 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import pkg_resources import pwd import shutil import stat import subprocess import tarfile import tempfile import textwrap import time import xmlrpclib from supervisor import xmlrpc from slapos.grid.utils import (md5digest, getCleanEnvironment, SlapPopen, dropPrivileges, updateFile) from slapos.grid import utils # for methods that could be mocked, access them through the module from slapos.slap.slap import NotFoundError from slapos.grid.svcbackend import getSupervisorRPC from slapos.grid.exception import (BuildoutFailedError, WrongPermissionError, PathDoesNotExistError, DiskSpaceError) from slapos.grid.networkcache import download_network_cached, upload_network_cached from slapos.human import bytes2human WATCHDOG_MARK = '-on-watch' REQUIRED_COMPUTER_PARTITION_PERMISSION = 0o750 CP_STORAGE_FOLDER_NAME = 'DATA' # XXX not very clean. this is changed when testing PROGRAM_PARTITION_TEMPLATE = pkg_resources.resource_stream(__name__, 'templates/program_partition_supervisord.conf.in').read() def free_space(path, fn): while True: try: disk = os.statvfs(path) return fn(disk) except OSError: pass if os.sep not in path: break path = os.path.split(path)[0] def free_space_root(path): """ Returns free space available to the root user, in bytes. A non-existent path can be provided, and the ancestors will be queried instead. """ return free_space(path, lambda d: d.bsize * d.f_bfree) def free_space_nonroot(path): """ Returns free space available to non-root users, in bytes. A non-existent path can be provided, and the ancestors will be queried instead. """ return free_space(path, lambda d: d.f_bsize * d.f_bavail) class Software(object): """This class is responsible for installing a software release""" # XXX: "url" parameter should be named "key", "target" or alike to be more generic. # The key is an url in the case of Buildout. 
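# Editor's note (illustration with hypothetical values, not upstream code):
# free_space_nonroot() above is the usual statvfs computation, e.g.:
#
#   import os
#   st = os.statvfs('/opt/slapgrid')              # hypothetical mount point
#   non_root_bytes = st.f_bsize * st.f_bavail     # space usable by normal users
#   root_bytes     = st.f_bsize * st.f_bfree      # space usable by root
#
# Note that free_space_root() above multiplies d.bsize * d.f_bfree; statvfs
# results only expose f_bsize/f_frsize, so "bsize" there appears to be a typo
# for "f_bsize" and would raise AttributeError if that code path were hit.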
def __init__(self, url, software_root, buildout, logger, signature_private_key_file=None, signature_certificate_list=None, upload_cache_url=None, upload_dir_url=None, shacache_ca_file=None, shacache_cert_file=None, shacache_key_file=None, shadir_ca_file=None, shadir_cert_file=None, shadir_key_file=None, download_binary_cache_url=None, upload_binary_cache_url=None, download_binary_dir_url=None, upload_binary_dir_url=None, download_from_binary_cache_url_blacklist=None, upload_to_binary_cache_url_blacklist=None, software_min_free_space=None): """Initialisation of class parameters """ if download_from_binary_cache_url_blacklist is None: download_from_binary_cache_url_blacklist = [] if upload_to_binary_cache_url_blacklist is None: upload_to_binary_cache_url_blacklist = [] self.url = url self.software_root = software_root self.software_url_hash = md5digest(self.url) self.software_path = os.path.join(self.software_root, self.software_url_hash) self.buildout = buildout self.logger = logger self.signature_private_key_file = signature_private_key_file self.signature_certificate_list = signature_certificate_list self.upload_cache_url = upload_cache_url self.upload_dir_url = upload_dir_url self.shacache_ca_file = shacache_ca_file self.shacache_cert_file = shacache_cert_file self.shacache_key_file = shacache_key_file self.shadir_ca_file = shadir_ca_file self.shadir_cert_file = shadir_cert_file self.shadir_key_file = shadir_key_file self.download_binary_cache_url = download_binary_cache_url self.upload_binary_cache_url = upload_binary_cache_url self.download_binary_dir_url = download_binary_dir_url self.upload_binary_dir_url = upload_binary_dir_url self.download_from_binary_cache_url_blacklist = \ download_from_binary_cache_url_blacklist self.upload_to_binary_cache_url_blacklist = \ upload_to_binary_cache_url_blacklist self.software_min_free_space = software_min_free_space def check_free_space(self): required = self.software_min_free_space available = free_space_nonroot(self.software_path) if available < required: msg = "Not enough space for {path}: available {available}, required {required} (option 'software_min_free_space')" raise DiskSpaceError(msg.format(path=self.software_path, available=bytes2human(available), required=bytes2human(required))) def install(self): """ Fetches binary cache if possible. Installs from buildout otherwise. """ self.logger.info("Installing software release %s..." 
% self.url) cache_dir = tempfile.mkdtemp() self.check_free_space() try: tarpath = os.path.join(cache_dir, self.software_url_hash) # Check if we can download from cache if (not os.path.exists(self.software_path)) \ and download_network_cached( self.download_binary_cache_url, self.download_binary_dir_url, self.url, self.software_root, self.software_url_hash, tarpath, self.logger, self.signature_certificate_list, self.download_from_binary_cache_url_blacklist): tar = tarfile.open(tarpath) try: self.logger.info("Extracting archive of cached software release...") tar.extractall(path=self.software_root) finally: tar.close() else: self._install_from_buildout() # Upload to binary cache if possible and allowed if all([self.software_root, self.url, self.software_url_hash, self.upload_binary_cache_url, self.upload_binary_dir_url]): blacklisted = False for url in self.upload_to_binary_cache_url_blacklist: if self.url.startswith(url): blacklisted = True self.logger.info("Can't upload to binary cache: " "Software Release URL is blacklisted.") break if not blacklisted: self.uploadSoftwareRelease(tarpath) finally: shutil.rmtree(cache_dir) def _set_ownership(self, path): """ If running as root: copy ownership of software_root to path If not running as root: do nothing """ if os.getuid(): return root_stat = os.stat(self.software_root) path_stat = os.stat(path) if (root_stat.st_uid != path_stat.st_uid or root_stat.st_gid != path_stat.st_gid): os.chown(path, root_stat.st_uid, root_stat.st_gid) def _additional_buildout_parameters(self, extends_cache): yield 'buildout:extends-cache=%s' % extends_cache yield 'buildout:directory=%s' % self.software_path if (self.signature_private_key_file or self.upload_cache_url or self.upload_dir_url): yield 'buildout:networkcache-section=networkcache' for networkcache_option, value in [ ('signature-private-key-file', self.signature_private_key_file), ('upload-cache-url', self.upload_cache_url), ('upload-dir-url', self.upload_dir_url), ('shacache-ca-file', self.shacache_ca_file), ('shacache-cert-file', self.shacache_cert_file), ('shacache-key-file', self.shacache_key_file), ('shadir-ca-file', self.shadir_ca_file), ('shadir-cert-file', self.shadir_cert_file), ('shadir-key-file', self.shadir_key_file) ]: if value: yield 'networkcache:%s=%s' % (networkcache_option, value) def _install_from_buildout(self): """ Fetches buildout configuration from the server, run buildout with it. If it fails, we notify the server. 
""" root_stat = os.stat(self.software_root) os.environ = getCleanEnvironment(logger=self.logger, home_path=pwd.getpwuid(root_stat.st_uid).pw_dir) if not os.path.isdir(self.software_path): os.mkdir(self.software_path) self._set_ownership(self.software_path) extends_cache = tempfile.mkdtemp() self._set_ownership(extends_cache) try: buildout_cfg = os.path.join(self.software_path, 'buildout.cfg') if not os.path.exists(buildout_cfg): self._create_buildout_profile(buildout_cfg, self.url) additional_parameters = list(self._additional_buildout_parameters(extends_cache)) additional_parameters.extend(['-c', buildout_cfg]) utils.bootstrapBuildout(path=self.software_path, buildout=self.buildout, logger=self.logger, additional_buildout_parameter_list=additional_parameters) utils.launchBuildout(path=self.software_path, buildout_binary=os.path.join(self.software_path, 'bin', 'buildout'), logger=self.logger, additional_buildout_parameter_list=additional_parameters) finally: shutil.rmtree(extends_cache) def _create_buildout_profile(self, buildout_cfg, url): with open(buildout_cfg, 'wb') as fout: fout.write(textwrap.dedent("""\ # Created by slapgrid. extends {url} # but you can change it for development purposes. [buildout] extends = {url} """.format(url=url))) self._set_ownership(buildout_cfg) def uploadSoftwareRelease(self, tarpath): """ Try to tar and upload an installed Software Release. """ self.logger.info("Creating archive of software release...") tar = tarfile.open(tarpath, "w:gz") try: tar.add(self.software_path, arcname=self.software_url_hash) finally: tar.close() self.logger.info("Trying to upload archive of software release...") upload_network_cached( self.software_root, self.url, self.software_url_hash, self.upload_binary_cache_url, self.upload_binary_dir_url, tarpath, self.logger, self.signature_private_key_file, self.shacache_ca_file, self.shacache_cert_file, self.shacache_key_file, self.shadir_ca_file, self.shadir_cert_file, self.shadir_key_file) def destroy(self): """Removes software release.""" def retry(func, path, exc): # inspired by slapos.buildout hard remover if func == os.path.islink: os.unlink(path) else: os.chmod(path, 0o600) func(path) try: if os.path.exists(self.software_path): self.logger.info('Removing path %r' % self.software_path) shutil.rmtree(self.software_path, onerror=retry) else: self.logger.info('Path %r does not exists, no need to remove.' % self.software_path) except IOError as exc: raise IOError("I/O error while removing software (%s): %s" % (self.url, exc)) class Partition(object): """This class is responsible of the installation of an instance """ retention_lock_delay_filename = '.slapos-retention-lock-delay' retention_lock_date_filename = '.slapos-retention-lock-date' partition_firewall_rules_name = '.slapos-firewalld-rules' # XXX: we should give the url (or the "key") instead of the software_path # then compute the path from it, like in Software. 
def __init__(self, software_path, instance_path, supervisord_partition_configuration_path, supervisord_socket, computer_partition, computer_id, partition_id, server_url, software_release_url, buildout, logger, certificate_repository_path=None, retention_delay='0', instance_min_free_space=None, instance_storage_home='', ipv4_global_network='', ): """Initialisation of class parameters""" self.buildout = buildout self.logger = logger self.software_path = software_path self.instance_path = instance_path self.run_path = os.path.join(self.instance_path, 'etc', 'run') self.service_path = os.path.join(self.instance_path, 'etc', 'service') self.supervisord_partition_configuration_path = \ supervisord_partition_configuration_path self.supervisord_socket = supervisord_socket self.computer_partition = computer_partition self.computer_id = computer_id self.partition_id = partition_id self.server_url = server_url self.software_release_url = software_release_url self.instance_storage_home = instance_storage_home self.ipv4_global_network = ipv4_global_network self.key_file = '' self.cert_file = '' if certificate_repository_path is not None: self.key_file = os.path.join(certificate_repository_path, self.partition_id + '.key') self.cert_file = os.path.join(certificate_repository_path, self.partition_id + '.crt') self._updateCertificate() try: self.retention_delay = float(retention_delay) except ValueError: self.logger.warn('Retention delay value (%s) is not valid, ignoring.' \ % self.retention_delay) self.retention_delay = 0 self.retention_lock_delay_file_path = os.path.join( self.instance_path, self.retention_lock_delay_filename ) self.retention_lock_date_file_path = os.path.join( self.instance_path, self.retention_lock_date_filename ) self.firewall_rules_path = os.path.join( self.instance_path, self.partition_firewall_rules_name ) self.instance_min_free_space = instance_min_free_space def check_free_space(self): required = self.instance_min_free_space available = free_space_nonroot(self.instance_path) if available < required: msg = "Not enough space for {path}: available {available}, required {required} (option 'instance_min_free_space')" raise DiskSpaceError(msg.format(path=self.instance_path, available=bytes2human(available), required=bytes2human(required))) def _updateCertificate(self): try: partition_certificate = self.computer_partition.getCertificate() except NotFoundError: raise NotFoundError('Partition %s is not known by SlapOS Master.' % self.partition_id) uid, gid = self.getUserGroupId() for name, path in [('certificate', self.cert_file), ('key', self.key_file)]: new_content = partition_certificate[name] old_content = None if os.path.exists(path): old_content = open(path).read() if old_content != new_content: if old_content is None: self.logger.info('Missing %s file. Creating %r' % (name, path)) else: self.logger.info('Changed %s content. 
Updating %r' % (name, path)) with os.fdopen(os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o400), 'wb') as fout: fout.write(new_content) os.chown(path, uid, gid) def getUserGroupId(self): """Returns tuple of (uid, gid) of partition""" stat_info = os.stat(self.instance_path) uid = stat_info.st_uid gid = stat_info.st_gid return (uid, gid) def addServiceToGroup(self, partition_id, runner_list, path, extension=''): uid, gid = self.getUserGroupId() for runner in runner_list: self.partition_supervisor_configuration += '\n' + \ PROGRAM_PARTITION_TEMPLATE % { 'program_id': '_'.join([partition_id, runner]), 'program_directory': self.instance_path, 'program_command': os.path.join(path, runner), 'program_name': runner + extension, 'instance_path': self.instance_path, 'user_id': uid, 'group_id': gid, # As supervisord has no environment to inherit, setup a minimalistic one 'HOME': pwd.getpwuid(uid).pw_dir, 'USER': pwd.getpwuid(uid).pw_name, } def updateSymlink(self, sr_symlink, software_path): if os.path.lexists(sr_symlink): if not os.path.islink(sr_symlink): self.logger.debug('Not a symlink: %s, has been ignored' % sr_symlink) return os.unlink(sr_symlink) os.symlink(software_path, sr_symlink) os.lchown(sr_symlink, *self.getUserGroupId()) def install(self): """ Creates configuration file from template in software_path, then installs the software partition with the help of buildout """ self.logger.info("Installing Computer Partition %s..." % self.computer_partition.getId()) self.check_free_space() # Checks existence and permissions of Partition directory # Note : Partitions have to be created and configured before running slapgrid if not os.path.isdir(self.instance_path): raise PathDoesNotExistError('Please create partition directory %s' % self.instance_path) sr_symlink = os.path.join(self.instance_path, 'software_release') self.updateSymlink(sr_symlink, self.software_path) instance_stat_info = os.stat(self.instance_path) permission = stat.S_IMODE(instance_stat_info.st_mode) if permission != REQUIRED_COMPUTER_PARTITION_PERMISSION: raise WrongPermissionError('Wrong permissions in %s: actual ' 'permissions are: 0%o, wanted are 0%o' % (self.instance_path, permission, REQUIRED_COMPUTER_PARTITION_PERMISSION)) os.environ = getCleanEnvironment(logger=self.logger, home_path=pwd.getpwuid(instance_stat_info.st_uid).pw_dir) # Check that Software Release directory is present if not os.path.exists(self.software_path): # XXX What should it raise? raise IOError('Software Release %s is not present on system.\n' 'Cannot deploy instance.' % self.software_release_url) # Generate buildout instance profile from template in Software Release template_location = os.path.join(self.software_path, 'instance.cfg') if not os.path.exists(template_location): # Backward compatibility: "instance.cfg" file was named "template.cfg". if os.path.exists(os.path.join(self.software_path, 'template.cfg')): template_location = os.path.join(self.software_path, 'template.cfg') else: # No template: Software Release is either inconsistent or not correctly installed. # XXX What should it raise? 
raise IOError('Software Release %s is not correctly installed.\nMissing file: %s' % ( self.software_release_url, template_location)) config_location = os.path.join(self.instance_path, 'buildout.cfg') self.logger.debug("Copying %r to %r" % (template_location, config_location)) shutil.copy(template_location, config_location) # fill generated buildout with additional information buildout_text = open(config_location).read() buildout_text += '\n\n' + pkg_resources.resource_string(__name__, 'templates/buildout-tail.cfg.in') % { 'computer_id': self.computer_id, 'partition_id': self.partition_id, 'server_url': self.server_url, 'software_release_url': self.software_release_url, 'key_file': self.key_file, 'cert_file': self.cert_file, 'storage_home': self.instance_storage_home, 'global_ipv4_network_prefix': self.ipv4_global_network, } open(config_location, 'w').write(buildout_text) os.chmod(config_location, 0o640) # Try to find the best possible buildout: # *) if software_root/bin/bootstrap exists use this one to bootstrap # locally # *) as last resort fallback to buildout binary from software_path bootstrap_candidate_dir = os.path.abspath(os.path.join(self.software_path, 'bin')) if os.path.isdir(bootstrap_candidate_dir): bootstrap_candidate_list = [q for q in os.listdir(bootstrap_candidate_dir) if q.startswith('bootstrap')] else: bootstrap_candidate_list = [] uid, gid = self.getUserGroupId() os.chown(config_location, -1, int(gid)) if len(bootstrap_candidate_list) == 0: buildout_binary = os.path.join(self.software_path, 'bin', 'buildout') self.logger.info("Falling back to default buildout %r" % buildout_binary) else: if len(bootstrap_candidate_list) != 1: raise ValueError('More than one bootstrap candidate found.') # Reads uid/gid of path, launches buildout with thoses privileges bootstrap_file = os.path.abspath(os.path.join(bootstrap_candidate_dir, bootstrap_candidate_list[0])) first_line = open(bootstrap_file, 'r').readline() invocation_list = [] if first_line.startswith('#!'): invocation_list = first_line[2:].split() invocation_list.append(bootstrap_file) self.logger.debug('Invoking %r in %r' % (' '.join(invocation_list), self.instance_path)) process_handler = SlapPopen(invocation_list, preexec_fn=lambda: dropPrivileges(uid, gid, logger=self.logger), cwd=self.instance_path, env=getCleanEnvironment(logger=self.logger, home_path=pwd.getpwuid(uid).pw_dir), stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=self.logger) if process_handler.returncode is None or process_handler.returncode != 0: message = 'Failed to bootstrap buildout in %r.' % (self.instance_path) self.logger.error(message) raise BuildoutFailedError('%s:\n%s\n' % (message, process_handler.output)) buildout_binary = os.path.join(self.instance_path, 'sbin', 'buildout') if not os.path.exists(buildout_binary): # use own buildout generation utils.bootstrapBuildout(path=self.instance_path, buildout=self.buildout, logger=self.logger, additional_buildout_parameter_list= ['buildout:bin-directory=%s' % os.path.join(self.instance_path, 'sbin')]) buildout_binary = os.path.join(self.instance_path, 'sbin', 'buildout') # Launches buildout utils.launchBuildout(path=self.instance_path, buildout_binary=buildout_binary, logger=self.logger) self.generateSupervisorConfigurationFile() self.createRetentionLockDelay() def generateSupervisorConfigurationFile(self): """ Generates supervisord configuration file from template. 
check if CP/etc/run exists and it is a directory iterate over each file in CP/etc/run iterate over each file in CP/etc/service adding WatchdogID to their name if at least one is not 0o750 raise -- partition has something funny """ runner_list = [] service_list = [] if os.path.exists(self.run_path): if os.path.isdir(self.run_path): runner_list = os.listdir(self.run_path) if os.path.exists(self.service_path): if os.path.isdir(self.service_path): service_list = os.listdir(self.service_path) if len(runner_list) == 0 and len(service_list) == 0: self.logger.warning('No runners nor services found for partition %r' % self.partition_id) if os.path.exists(self.supervisord_partition_configuration_path): os.unlink(self.supervisord_partition_configuration_path) else: partition_id = self.computer_partition.getId() group_partition_template = pkg_resources.resource_stream(__name__, 'templates/group_partition_supervisord.conf.in').read() self.partition_supervisor_configuration = group_partition_template % { 'instance_id': partition_id, 'program_list': ','.join(['_'.join([partition_id, runner]) for runner in runner_list + service_list]) } # Same method to add to service and run self.addServiceToGroup(partition_id, runner_list, self.run_path) self.addServiceToGroup(partition_id, service_list, self.service_path, extension=WATCHDOG_MARK) updateFile(self.supervisord_partition_configuration_path, self.partition_supervisor_configuration) self.updateSupervisor() def start(self): """Asks supervisord to start the instance. If this instance is not installed, we install it. """ supervisor = self.getSupervisorRPC() partition_id = self.computer_partition.getId() try: supervisor.startProcessGroup(partition_id, False) except xmlrpclib.Fault as exc: if exc.faultString.startswith('BAD_NAME:'): self.logger.info("Nothing to start on %s..." % self.computer_partition.getId()) else: self.logger.info("Requested start of %s..." % self.computer_partition.getId()) def stop(self): """Asks supervisord to stop the instance.""" partition_id = self.computer_partition.getId() try: supervisor = self.getSupervisorRPC() supervisor.stopProcessGroup(partition_id, False) except xmlrpclib.Fault as exc: if exc.faultString.startswith('BAD_NAME:'): self.logger.info('Partition %s not known in supervisord, ignoring' % partition_id) else: self.logger.info("Requested stop of %s..." % self.computer_partition.getId()) def destroy(self): """Destroys the partition and makes it available for subsequent use." """ self.logger.info("Destroying Computer Partition %s..." % self.computer_partition.getId()) self.createRetentionLockDate() if not self.checkRetentionIsAuthorized(): return False # Launches "destroy" binary if exists destroy_executable_location = os.path.join(self.instance_path, 'sbin', 'destroy') if os.path.exists(destroy_executable_location): uid, gid = self.getUserGroupId() self.logger.debug('Invoking %r' % destroy_executable_location) process_handler = SlapPopen([destroy_executable_location], preexec_fn=lambda: dropPrivileges(uid, gid, logger=self.logger), cwd=self.instance_path, env=getCleanEnvironment(logger=self.logger, home_path=pwd.getpwuid(uid).pw_dir), stdout=subprocess.PIPE, stderr=subprocess.STDOUT, logger=self.logger) if process_handler.returncode is None or process_handler.returncode != 0: message = 'Failed to destroy Computer Partition in %r.' 
% \ self.instance_path self.logger.error(message) raise subprocess.CalledProcessError(message, process_handler.output) # Manually cleans what remains try: for f in [self.key_file, self.cert_file]: if f: if os.path.exists(f): os.unlink(f) # better to manually remove symlinks because rmtree might choke on them sr_symlink = os.path.join(self.instance_path, 'software_release') if os.path.islink(sr_symlink): os.unlink(sr_symlink) data_base_link = os.path.join(self.instance_path, CP_STORAGE_FOLDER_NAME) if self.instance_storage_home and os.path.exists(data_base_link) and \ os.path.isdir(data_base_link): for filename in os.listdir(data_base_link): data_symlink = os.path.join(data_base_link, filename) partition_data_path = os.path.join(self.instance_storage_home, filename, self.partition_id) if os.path.lexists(data_symlink): os.unlink(data_symlink) if os.path.exists(partition_data_path): self.cleanupFolder(partition_data_path) self.cleanupFolder(self.instance_path) # Cleanup all Data storage location of this partition if os.path.exists(self.supervisord_partition_configuration_path): os.remove(self.supervisord_partition_configuration_path) self.updateSupervisor() except IOError as exc: raise IOError("I/O error while freeing partition (%s): %s" % (self.instance_path, exc)) return True def cleanupFolder(self, folder_path): """Delete all files and folders in a specified directory """ for root, dirs, file_list in os.walk(folder_path): for directory in dirs: shutil.rmtree(os.path.join(folder_path, directory)) for file in file_list: os.remove(os.path.join(folder_path, file)) def fetchInformations(self): """Fetch usage informations with buildout, returns it. """ raise NotImplementedError def getSupervisorRPC(self): return getSupervisorRPC(self.supervisord_socket) def updateSupervisor(self): """Forces supervisord to reload its configuration""" # Note: This method shall wait for results from supervisord # In future it will not be needed, as update command # is going to be implemented on server side. self.logger.debug('Updating supervisord') supervisor = self.getSupervisorRPC() # took from supervisord.supervisorctl.do_update result = supervisor.reloadConfig() added, changed, removed = result[0] for gname in removed: results = supervisor.stopProcessGroup(gname) fails = [res for res in results if res['status'] == xmlrpc.Faults.FAILED] if fails: self.logger.warning('Problem while stopping process %r, will try later' % gname) else: self.logger.info('Stopped %r' % gname) for i in xrange(0, 10): # Some process may be still running, be nice and wait for them to be stopped. 
try: supervisor.removeProcessGroup(gname) break except: if i == 9: raise time.sleep(1) self.logger.info('Removed %r' % gname) for gname in changed: results = supervisor.stopProcessGroup(gname) self.logger.info('Stopped %r' % gname) supervisor.removeProcessGroup(gname) supervisor.addProcessGroup(gname) self.logger.info('Updated %r' % gname) for gname in added: supervisor.addProcessGroup(gname) self.logger.info('Updated %r' % gname) self.logger.debug('Supervisord updated') def _set_ownership(self, path): """ If running as root: copy ownership of software_path to path If not running as root: do nothing """ if os.getuid(): return root_stat = os.stat(self.software_path) path_stat = os.stat(path) if (root_stat.st_uid != path_stat.st_uid or root_stat.st_gid != path_stat.st_gid): os.chown(path, root_stat.st_uid, root_stat.st_gid) def checkRetentionIsAuthorized(self): """ Check if retention is authorized by checking retention lock delay or retention lock date. A retention lock delay is a delay which is: * Defined by the user/machine who requested the instance * Hardcoded the first time the instance is deployed, then is read-only during the whole lifetime of the instance * Triggered the first time the instance is requested to be destroyed (retention will be ignored). From this point, it is not possible to destroy the instance until the delay is over. * Accessible in read-only mode from the partition A retention lock date is the date computed from (date of first retention request + retention lock delay in days). Example: * User requests an instance with delay as 10 (days) to a SlapOS Master * SlapOS Master transmits this information to the SlapOS Node (current code) * SlapOS Node hardcodes this delay at first deployment * User requests retention of instance * SlapOS Node tries to destroy for the first time: it doesn't actually destroy, but it triggers the creation of a retention lock date from from the hardcoded delay. At this point it is not possible to destroy instance until current date + 10 days. * SlapOS Node continues to try to destroy: it doesn't do anything until retention lock date is reached. """ retention_lock_date = self.getExistingRetentionLockDate() now = time.time() if not retention_lock_date: if self.getExistingRetentionLockDelay() > 0: self.logger.info('Impossible to destroy partition yet because of retention lock.') return False # Else: OK to destroy else: if now < retention_lock_date: self.logger.info('Impossible to destroy partition yet because of retention lock.') return False # Else: OK to destroy return True def createRetentionLockDelay(self): """ Create a retention lock delay for the current partition. If retention delay is not specified, create it wth "0" as value """ if os.path.exists(self.retention_lock_delay_file_path): return with open(self.retention_lock_delay_file_path, 'w') as delay_file_path: delay_file_path.write(str(self.retention_delay)) self._set_ownership(self.retention_lock_delay_file_path) def getExistingRetentionLockDelay(self): """ Return the retention lock delay of current partition (created at first deployment) if exist. Return -1 otherwise. """ retention_delay = -1 if os.path.exists(self.retention_lock_delay_file_path): with open(self.retention_lock_delay_file_path) as delay_file_path: retention_delay = float(delay_file_path.read()) return retention_delay def createRetentionLockDate(self): """ If retention lock delay > 0: Create a retention lock date for the current partition from the retention lock delay. Do nothing otherwise. 
""" if os.path.exists(self.retention_lock_date_file_path): return retention_delay = self.getExistingRetentionLockDelay() if retention_delay <= 0: return now = int(time.time()) retention_date = now + retention_delay * 24 * 3600 with open(self.retention_lock_date_file_path, 'w') as date_file_path: date_file_path.write(str(retention_date)) self._set_ownership(self.retention_lock_date_file_path) def getExistingRetentionLockDate(self): """ Return the retention lock delay of current partition if exist. Return None otherwise. """ if os.path.exists(self.retention_lock_date_file_path): with open(self.retention_lock_date_file_path) as date_file_path: return float(date_file_path.read()) else: return None slapos.core-1.3.18/slapos/collect/0000755000000000000000000000000013006632706016723 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/collect/reporter.py0000644000000000000000000003104012752436134021141 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## from lxml import etree as ElementTree from slapos.util import mkdir_p import csv import glob import json import os import os.path import shutil import tarfile import time import psutil log_file = False class Dumper(object): def __init__(self, database): self.db = database def dump(self, folder): raise NotImplemented("Implemented on Subclass") def writeFile(self, **kw): raise NotImplemented("Implemented on Subclass") class SystemReporter(Dumper): def dump(self, folder): """ Dump data """ _date = time.strftime("%Y-%m-%d") self.db.connect() for item, collected_item_list in self.db.exportSystemAsDict(_date).iteritems(): self.writeFile(item, folder, collected_item_list) for partition, collected_item_list in self.db.exportDiskAsDict(_date).iteritems(): partition_id = "_".join(partition.split("-")[:-1]).replace("/", "_") item = "memory_%s" % partition.split("-")[-1] self.writeFile("disk_%s_%s" % (item, partition_id), folder, collected_item_list) self.db.close() class SystemJSONReporterDumper(SystemReporter): def writeFile(self, name, folder, collected_entry_list=[]): """ Dump data as json """ file_io = open(os.path.join(folder, "system_%s.json" % name), "w") json.dump(collected_entry_list, file_io, sort_keys=True, indent=2) file_io.close() class SystemCSVReporterDumper(SystemReporter): def writeFile(self, name, folder, collected_entry_list=[]): """ Dump data as json """ file_io = open(os.path.join(folder, "system_%s.csv" % name), "w") csv_output = csv.writer(file_io) csv_output.writerow(["time", "entry"]) for collected_entry in collected_entry_list: csv_output.writerow([collected_entry["time"], collected_entry["entry"]]) file_io.close() class RawDumper(Dumper): """ Dump raw data in a certain format """ def dump(self, folder): date = time.strftime("%Y-%m-%d") self.db.connect() table_list = self.db.getTableList() for date_scope, amount in self.db.getDateScopeList(ignore_date=date): for table in table_list: self.writeFile(table, folder, date_scope, self.db.select(table, date_scope)) self.db.markDayAsReported(date_scope, table_list=table_list) self.db.commit() self.db.close() class RawCSVDumper(RawDumper): def writeFile(self, name, folder, date_scope, rows): mkdir_p(os.path.join(folder, date_scope), 0o755) file_io = open(os.path.join(folder, "%s/dump_%s.csv" % (date_scope, name)), "w") csv_output = csv.writer(file_io) csv_output.writerows(rows) file_io.close() def compressLogFolder(log_directory): initial_folder = os.getcwd() os.chdir(log_directory) try: for backup_to_archive in glob.glob("*-*-*/"): filename = '%s.tar.gz' % backup_to_archive.strip("/") with tarfile.open(filename, 'w:gz') as tfile: tfile.add(backup_to_archive) tfile.close() shutil.rmtree(backup_to_archive) finally: os.chdir(initial_folder) class ConsumptionReport(object): def __init__(self, database, computer_id, location, user_list): self.computer_id = computer_id self.db = database self.user_list = user_list self.location = location def buildXMLReport(self, date_scope): xml_report_path = "%s/%s.xml" % (self.location, date_scope) if os.path.exists(xml_report_path): return if os.path.exists('%s.uploaded' % xml_report_path): return journal = Journal() transaction = journal.newTransaction() journal.setProperty(transaction, "title", "Eco Information for %s " % self.computer_id) journal.setProperty(transaction, "start_date", "%s 00:00:00" % date_scope) journal.setProperty(transaction, "stop_date", "%s 23:59:59" % date_scope) journal.setProperty(transaction, 
"reference", "%s-global" % date_scope) journal.setProperty(transaction, "currency", "") journal.setProperty(transaction, "payment_mode", "") journal.setProperty(transaction, "category", "") arrow = ElementTree.SubElement(transaction, "arrow") arrow.set("type", "Destination") cpu_load_percent = self._getCpuLoadAverageConsumption(date_scope) if cpu_load_percent is not None: journal.newMovement(transaction, resource="service_module/cpu_load_percent", title="CPU Load Percent Average", quantity=str(cpu_load_percent), reference=self.computer_id, category="") memory_used = self._getMemoryAverageConsumption(date_scope) if memory_used is not None: journal.newMovement(transaction, resource="service_module/memory_used", title="Used Memory", quantity=str(memory_used), reference=self.computer_id, category="") if self._getZeroEmissionContribution() is not None: journal.newMovement(transaction, resource="service_module/zero_emission_ratio", title="Zero Emission Ratio", quantity=str(self._getZeroEmissionContribution()), reference=self.computer_id, category="") for user in self.user_list: partition_cpu_load_percent = self._getPartitionCPULoadAverage(user, date_scope) if partition_cpu_load_percent is not None: journal.newMovement(transaction, resource="service_module/cpu_load_percent", title="CPU Load Percent Average for %s" % (user), quantity=str(partition_cpu_load_percent), reference=user, category="") mb = float(2 ** 20) for user in self.user_list: partition_memory_used = self._getPartitionUsedMemoryAverage(user, date_scope) if partition_memory_used is not None: journal.newMovement(transaction, resource="service_module/memory_used", title="Memory Used Average for %s" % (user), quantity=str(partition_memory_used/mb), reference=user, category="") for user in self.user_list: partition_disk_used = self._getPartitionDiskUsedAverage(user, date_scope) if partition_disk_used is not None: journal.newMovement(transaction, resource="service_module/disk_used", title="Partition Disk Used Average for %s" % (user), quantity=str(partition_disk_used/1024.0), reference=user, category="") with open(xml_report_path, 'w') as f: f.write(journal.getXML()) f.close() return xml_report_path def _getAverageFromList(self, data_list): return sum(data_list)/len(data_list) def _getCpuLoadAverageConsumption(self, date_scope): self.db.connect() query_result_cursor = self.db.select("system", date_scope, columns="SUM(cpu_percent)/COUNT(cpu_percent)") cpu_load_percent_list = zip(*query_result_cursor) self.db.close() if len(cpu_load_percent_list): return cpu_load_percent_list[0][0] def _getMemoryAverageConsumption(self, date_scope): self.db.connect() query_result_cursor = self.db.select("system", date_scope, columns="SUM(memory_used)/COUNT(memory_used)") memory_used_list = zip(*query_result_cursor) self.db.close() if len(memory_used_list): return memory_used_list[0][0] def _getZeroEmissionContribution(self): self.db.connect() zer = self.db.getLastZeroEmissionRatio() self.db.close() return zer def _getPartitionCPULoadAverage(self, partition_id, date_scope): self.db.connect() query_result_cursor = self.db.select("user", date_scope, columns="SUM(cpu_percent)", where="partition = '%s'" % partition_id) cpu_percent_sum = zip(*query_result_cursor) if len(cpu_percent_sum) and cpu_percent_sum[0][0] is None: return query_result_cursor = self.db.select("user", date_scope, columns="COUNT(DISTINCT time)", where="partition = '%s'" % partition_id) sample_amount = zip(*query_result_cursor) self.db.close() if len(sample_amount) and len(cpu_percent_sum): 
return cpu_percent_sum[0][0]/sample_amount[0][0] def _getPartitionUsedMemoryAverage(self, partition_id, date_scope): self.db.connect() query_result_cursor = self.db.select("user", date_scope, columns="SUM(memory_rss)", where="partition = '%s'" % partition_id) memory_sum = zip(*query_result_cursor) if len(memory_sum) and memory_sum[0][0] is None: return query_result_cursor = self.db.select("user", date_scope, columns="COUNT(DISTINCT time)", where="partition = '%s'" % partition_id) sample_amount = zip(*query_result_cursor) self.db.close() if len(sample_amount) and len(memory_sum): return memory_sum[0][0]/sample_amount[0][0] def _getPartitionDiskUsedAverage(self, partition_id, date_scope): self.db.connect() query_result_cursor = self.db.select("folder", date_scope, columns="SUM(disk_used)", where="partition = '%s'" % partition_id) disk_used_sum = zip(*query_result_cursor) if len(disk_used_sum) and disk_used_sum[0][0] is None: return query_result_cursor = self.db.select("folder", date_scope, columns="COUNT(DISTINCT time)", where="partition = '%s'" % partition_id) collect_amount = zip(*query_result_cursor) self.db.close() if len(collect_amount) and len(disk_used_sum): return disk_used_sum[0][0]/collect_amount[0][0] class Journal(object): def __init__(self): self.root = ElementTree.Element("journal") def getXML(self): report = ElementTree.tostring(self.root) return "%s" % report def newTransaction(self, portal_type="Sale Packing List"): transaction = ElementTree.SubElement(self.root, "transaction") transaction.set("type", portal_type) return transaction def setProperty(self, element, name, value): property_element = ElementTree.SubElement(element, name) property_element.text = value def newMovement(self, transaction, resource, title, quantity, reference, category): movement = ElementTree.SubElement(transaction, "movement") self.setProperty(movement, "resource", resource) self.setProperty(movement, "title", title) self.setProperty(movement, "reference", reference) self.setProperty(movement, "quantity", quantity) self.setProperty(movement, "price", "0.0") self.setProperty(movement, "VAT", "") # Provide units self.setProperty(movement, "category", category) return movement slapos.core-1.3.18/slapos/collect/snapshot.py0000644000000000000000000002106412752436134021143 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import psutil import os import subprocess from temperature import collectComputerTemperature, \ launchTemperatureTest from temperature.heating import get_contribution_ratio MEASURE_INTERVAL = 5 class _Snapshot(object): def get(self, property, default=None): return getattr(self, property, default) class ProcessSnapshot(_Snapshot): """ Take a snapshot from the running process """ def __init__(self, process=None): assert type(process) is psutil.Process ui_counter_list = process.io_counters() self.username = process.username() self.process_object = process self.pid = process.pid # Identify the process by its pid and creation time. self.process = "%s-%s" % (process.pid, process.create_time()) # CPU percentage, we will have to get actual absolute value self.cpu_percent = self.process_object.cpu_percent(None) # CPU Time self.cpu_time = sum(process.cpu_times()) # Thread number, might not be really relevant self.cpu_num_threads = process.num_threads() # Memory percentage self.memory_percent = process.memory_percent() # Resident Set Size, virtual memory size is not accounted for self.memory_rss = process.memory_info()[0] # Byte count, Read and write. OSX NOT SUPPORTED self.io_rw_counter = ui_counter_list[2] + ui_counter_list[3] # Read + write IO cycles self.io_cycles_counter = ui_counter_list[0] + ui_counter_list[1] def update_cpu_percent(self): if self.process_object.is_running(): # CPU percentage, we will have to get actual absolute value self.cpu_percent = self.process_object.cpu_percent() class FolderSizeSnapshot(_Snapshot): """Calculate partition folder size.
""" def __init__(self, folder_path, pid_file=None, use_quota=False): # slapos computer partition size self.folder_path = folder_path self.pid_file = pid_file self.disk_usage = 0 self.use_quota = use_quota def update_folder_size(self): """Return 0 if the process du is still running """ if self.pid_file and os.path.exists(self.pid_file): with open(self.pid_file, 'r') as fpid: pid_str = fpid.read() if pid_str: pid = int(pid_str) try: os.kill(pid, 0) except OSError: pass else: return self.disk_usage = self._getSize(self.folder_path) # If extra disk added to partition data_dir = os.path.join(self.folder_path, 'DATA') if os.path.exists(data_dir): for filename in os.listdir(data_dir): extra_path = os.path.join(data_dir, filename) if os.path.islink(extra_path) and os.path.isdir('%s/' % extra_path): self.disk_usage += self._getSize('%s/' % extra_path) def _getSize(self, file_path): size = 0 command = 'du -s %s' % file_path process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) if self.pid_file: with open(self.pid_file, 'w') as fpid: pid = fpid.write(str(process.pid)) result = process.communicate()[0] if process.returncode == 0: size, _ = result.strip().split() return float(size) class SystemSnapshot(_Snapshot): """ Take a snapshot from current system usage """ def __init__(self, interval=MEASURE_INTERVAL): cpu_idle_percentage = psutil.cpu_times_percent(interval=interval).idle load_percent = 100 - cpu_idle_percentage memory = psutil.virtual_memory() net_io = psutil.net_io_counters() self.memory_used = memory.used self.memory_free = memory.free self.memory_percent = memory.percent #self.cpu_percent = psutil.cpu_percent() self.cpu_percent = load_percent self.load = os.getloadavg()[0] self.net_in_bytes = net_io.bytes_recv self.net_in_errors = net_io.errin self.net_in_dropped = net_io.dropin self.net_out_bytes = net_io.bytes_sent self.net_out_errors = net_io.errout self.net_out_dropped = net_io.dropout class TemperatureSnapshot(_Snapshot): """ Take a snapshot from the current temperature on all available sensors """ def __init__(self, sensor_id, temperature, alarm): self.sensor_id = sensor_id self.temperature = temperature self.alarm = alarm class HeatingContributionSnapshot(_Snapshot): def __init__(self, sensor_id, model_id): self.initial_temperature = None result = launchTemperatureTest(sensor_id) if result is None: print "Impossible to test sensor: %s " % sensor_id initial_temperature, final_temperature, duration = result self.initial_temperature = initial_temperature self.final_temperature = final_temperature self.delta_time = duration self.model_id = model_id self.sensor_id = sensor_id self.zero_emission_ratio = self._get_contribution_ratio() def _get_contribution_ratio(self): delta_temperature = (self.final_temperature-self.initial_temperature) contribution_value = delta_temperature/self.delta_time return get_contribution_ratio(self.model_id, contribution_value) def _get_uptime(self): # Linux only if os.path.exists('/proc/uptime'): with open('/proc/uptime', 'r') as f: return float(f.readline().split()[0]) return -1 class DiskPartitionSnapshot(_Snapshot): """ Take Snapshot from general disk partitions usage """ def __init__(self, partition, mountpoint): self.partition = partition self.mountpoint_list = [ mountpoint ] disk = psutil.disk_usage(mountpoint) disk_io = psutil.disk_io_counters() self.disk_size_used = disk.used self.disk_size_free = disk.free self.disk_size_percent = disk.percent class ComputerSnapshot(_Snapshot): """ Take a snapshot from computer 
informations """ def __init__(self, model_id=None, sensor_id=None, test_heating=False): self.cpu_num_core = psutil.cpu_count() self.cpu_frequency = 0 self.cpu_type = 0 self.memory_size = psutil.virtual_memory().total self.memory_type = 0 # # Include a SystemSnapshot and a list DiskPartitionSnapshot # on a Computer Snapshot # self.system_snapshot = SystemSnapshot() self.temperature_snapshot_list = self._get_temperature_snapshot_list() self.disk_snapshot_list = [] self.partition_list = self._get_physical_disk_info() if test_heating and model_id is not None \ and sensor_id is not None: self.heating_contribution_snapshot = HeatingContributionSnapshot(sensor_id, model_id) def _get_temperature_snapshot_list(self): temperature_snapshot_list = [] for sensor_entry in collectComputerTemperature(): sensor_id, temperature, maximal, critical, alarm = sensor_entry temperature_snapshot_list.append( TemperatureSnapshot(sensor_id, temperature, alarm)) return temperature_snapshot_list def _get_physical_disk_info(self): partition_dict = {} for partition in psutil.disk_partitions(): if partition.device not in partition_dict: usage = psutil.disk_usage(partition.mountpoint) partition_dict[partition.device] = usage.total self.disk_snapshot_list.append( DiskPartitionSnapshot(partition.device, partition.mountpoint)) return [(k, v) for k, v in partition_dict.iteritems()] slapos.core-1.3.18/slapos/collect/temperature/0000755000000000000000000000000013006632706021260 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/collect/temperature/heating.py0000644000000000000000000000072312752436134023257 0ustar rootroot00000000000000 CONTRIBUTION_MAPPING = { "shuttle_ds61_i7" : 0.045, "nuc_i7": 0.055 } def get_contribution_ratio(model_id, contribution): zero_emission_ratio_limit = CONTRIBUTION_MAPPING.get(model_id) if zero_emission_ratio_limit is None: raise ValueError("Unknown heating contibution") if contribution < zero_emission_ratio_limit: # The machine don't contribute for heating return 0 else: # The machine contributes for the heating return 100 slapos.core-1.3.18/slapos/collect/temperature/__init__.py0000644000000000000000000000761512752436134023406 0ustar rootroot00000000000000 from multiprocessing import Process, active_children, cpu_count, Pipe import subprocess import os import signal import sys import time FIB_N = 100 DEFAULT_TIME = 60 try: DEFAULT_CPU = cpu_count() except NotImplementedError: DEFAULT_CPU = 1 def collectComputerTemperature(sensor_bin="sensors"): cmd = ["%s -u" % sensor_bin] sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) stdout, stderr = sp.communicate() sensor_output_list = stdout.splitlines() adapter_name = "" sensor_temperature_list = [] for line_number in range(len(sensor_output_list)): found_sensor = None stripped_line = sensor_output_list[line_number].strip() if stripped_line.startswith("Adapter:"): adapter_name = sensor_output_list[line_number-1] elif stripped_line.startswith("temp") and "_input" in stripped_line: temperature = sensor_output_list[line_number].split()[-1] found_sensor = ["%s %s" % (adapter_name, sensor_output_list[line_number-1]), float(temperature)] if found_sensor is not None: critical = '1000' maximal = '1000' for next_line in sensor_output_list[line_number+1:line_number+3]: stripped_next_line = next_line.strip() if stripped_next_line.startswith("temp") and "_max" in stripped_next_line: maximal = stripped_next_line.split()[-1] elif stripped_next_line.startswith("temp") and "_crit" in stripped_next_line: critical = 
stripped_next_line.split()[-1] found_sensor.extend([float(maximal), float(critical)]) found_sensor.append(checkAlarm(float(temperature), float(maximal), float(critical))) sensor_temperature_list.append(found_sensor) return sensor_temperature_list def checkAlarm(temperature, maximal, critical): """ Returns: 0 if the temperature is below the maximal limit. 1 if the temperature is above the maximal limit. 2 if the temperature is above the critical limit. """ alarm = 0 if temperature >= maximal: alarm += 1 if temperature >= critical: alarm += 1 return alarm def loop(connection): connection.send(os.getpid()) connection.close() while True: fib(FIB_N) def fib(n): if n < 2: return 1 else: return fib(n - 1) + fib(n - 2) def sigint_handler(signum, frame): procs = active_children() for p in procs: p.terminate() os._exit(1) def launchTemperatureTest(sensor_id, sensor_bin="sensors", timeout=600, interval=30): signal.signal(signal.SIGINT, sigint_handler) def getTemperatureForSensor(s_id): for collected_temperature in collectComputerTemperature(sensor_bin): if collected_temperature[0] == sensor_id: return collected_temperature[1], collected_temperature[4] return None, None process_list = [] process_connection_list = [] begin_time = time.time() initial_temperature, alarm = getTemperatureForSensor(sensor_id) if initial_temperature is None: return if alarm > 0: # Skip the test if the temperature is too high, because we cannot # measure appropriately. return candidate_temperature = initial_temperature for i in range(DEFAULT_CPU): parent_connection, child_connection = Pipe() process = Process(target=loop, args=(child_connection,)) process.start() process_list.append(process) process_connection_list.append(parent_connection) for connection in process_connection_list: try: print connection.recv() except EOFError: continue time.sleep(interval) current_temperature, _ = getTemperatureForSensor(sensor_id) while current_temperature > candidate_temperature: candidate_temperature = current_temperature time.sleep(interval) current_temperature, _ = getTemperatureForSensor(sensor_id) for process in process_list: process.terminate() return initial_temperature, current_temperature, time.time() - begin_time slapos.core-1.3.18/slapos/collect/db.py0000644000000000000000000004224712752436134017677 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details.
# # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os from time import strftime import datetime from slapos.util import sqlite_connect class Database: database_name = "collector.db" table_list = ["user", "computer", "system", "disk", \ "temperature", "heating"] preserve_table_list = ["heating"] CREATE_USER_TABLE = "create table if not exists user " \ "(partition text, pid real, process text, " \ " cpu_percent real, cpu_time real, " \ " cpu_num_threads real, memory_percent real, " \ " memory_rss real, io_rw_counter real, " \ " io_cycles_counter real, date text, time text, " \ " reported integer NULL DEFAULT 0)" CREATE_FOLDER_TABLE = "create table if not exists folder "\ "(partition text, disk_used real, date text, " \ " time text, reported integer NULL DEFAULT 0)" CREATE_COMPUTER_TABLE = "create table if not exists computer "\ "(cpu_num_core real, cpu_frequency real, cpu_type text," \ " memory_size real, memory_type text, partition_list text," \ " date text, time text, reported integer NULL DEFAULT 0)" CREATE_SYSTEM_TABLE = "create table if not exists system " \ "(loadavg real, cpu_percent real, memory_used real, "\ " memory_free real, net_in_bytes real, net_in_errors real, "\ " net_in_dropped real, net_out_bytes real, net_out_errors real, "\ " net_out_dropped real, date text, time text, " \ " reported integer NULL DEFAULT 0)" CREATE_DISK_PARTITION = "create table if not exists disk "\ "(partition text, used text, free text, mountpoint text, " \ " date text, time text, reported integer NULL DEFAULT 0)" CREATE_TEMPERATURE_TABLE = "create table if not exists temperature " \ "(sensor_id name, temperature real, alarm integer, "\ "date text, time text, reported integer NULL DEFAULT 0)" CREATE_HEATING_TABLE = "create table if not exists heating " \ "(model_id name, sensor_id name, initial_temperature real, "\ " final_temperature real, delta_time real, zero_emission_ratio real, "\ "date text, time text, reported integer NULL DEFAULT 0)" INSERT_USER_TEMPLATE = "insert into user(" \ "partition, pid, process, cpu_percent, cpu_time, " \ "cpu_num_threads, memory_percent," \ "memory_rss, io_rw_counter, io_cycles_counter, " \ "date, time) values " \ "('%s', %s, '%s', %s, %s, %s, %s, %s, %s, %s, '%s', '%s' )" INSERT_FOLDER_TEMPLATE = "insert into folder(" \ "partition, disk_used, date, time) values " \ "('%s', %s, '%s', '%s' )" INSERT_COMPUTER_TEMPLATE = "insert into computer("\ " cpu_num_core, cpu_frequency, cpu_type," \ "memory_size, memory_type, partition_list," \ "date, time) values "\ "(%s, %s, '%s', %s, '%s', '%s', '%s', '%s' )" INSERT_DISK_TEMPLATE = "insert into disk("\ " partition, used, free, mountpoint," \ " date, time) "\ "values ('%s', %s, %s, '%s', '%s', '%s' )" INSERT_SYSTEM_TEMPLATE = "insert into system("\ " loadavg, cpu_percent, memory_used, memory_free," \ " net_in_bytes, net_in_errors, net_in_dropped," \ " net_out_bytes, net_out_errors, net_out_dropped, " \ " date, time) values "\ "( %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, '%s', '%s' )" INSERT_TEMPERATURE_TEMPLATE = "insert into temperature("\ " sensor_id, temperature, alarm," \ " date, time) values "\ "( '%s', %s, %s, '%s', '%s' )" INSERT_HEATING_TEMPLATE = "insert into heating("\ " model_id, sensor_id, initial_temperature, final_temperature, "\ " delta_time, zero_emission_ratio," \ " 
date, time) values "\ "( '%s', '%s', %s, %s, %s, %s, '%s', '%s' )" def __init__(self, directory = None): assert self.database_name is not None self.uri = os.path.join(directory, self.database_name) self.connection = None self.cursor = None self._bootstrap() def connect(self): self.connection = sqlite_connect(self.uri) self.cursor = self.connection.cursor() def commit(self): assert self.connection is not None self.connection.commit() def close(self): assert self.connection is not None self.cursor.close() self.connection.close() def _execute(self, sql): assert self.connection is not None return self.cursor.execute(sql) def _bootstrap(self): assert self.CREATE_USER_TABLE is not None self.connect() self._execute(self.CREATE_USER_TABLE) self._execute(self.CREATE_FOLDER_TABLE) self._execute(self.CREATE_COMPUTER_TABLE) self._execute(self.CREATE_SYSTEM_TABLE) self._execute(self.CREATE_DISK_PARTITION) self._execute(self.CREATE_TEMPERATURE_TABLE) self._execute(self.CREATE_HEATING_TABLE) self.commit() self.close() def _getInsertionDateTuple(self): return strftime("%Y-%m-d -- %H:%M:%S").split(" -- ") ################### # Insertion methods ################### def insertUserSnapshot(self, partition, pid, process, cpu_percent, cpu_time, cpu_num_threads, memory_percent, memory_rss, io_rw_counter, io_cycles_counter, insertion_date, insertion_time): """ Insert user processes snapshots information on a database """ insertion_sql = self.INSERT_USER_TEMPLATE % \ ( partition, pid, process, cpu_percent, cpu_time, cpu_num_threads, memory_percent, memory_rss, io_rw_counter, io_cycles_counter, insertion_date, insertion_time) self._execute(insertion_sql) return insertion_sql def inserFolderSnapshot(self, partition, disk_usage, insertion_date, insertion_time): """ Insert folder disk usage snapshots information on a database """ insertion_sql = self.INSERT_FOLDER_TEMPLATE % \ ( partition, disk_usage, insertion_date, insertion_time) self._execute(insertion_sql) return insertion_sql def insertComputerSnapshot(self, cpu_num_core, cpu_frequency, cpu_type, memory_size, memory_type, partition_list, insertion_date, insertion_time): """Insert Computer general informations snapshots informations on the database """ insertion_sql = self.INSERT_COMPUTER_TEMPLATE % \ ( cpu_num_core, cpu_frequency, cpu_type, memory_size, memory_type, partition_list, insertion_date, insertion_time) self._execute(insertion_sql) return insertion_sql def insertDiskPartitionSnapshot(self, partition, used, free, mountpoint, insertion_date, insertion_time): """ Insert Disk Partitions informations on the database """ insertion_sql = self.INSERT_DISK_TEMPLATE % \ ( partition, used, free, mountpoint, insertion_date, insertion_time ) self._execute(insertion_sql) return insertion_sql def insertSystemSnapshot(self, loadavg, cpu_percent, memory_used, memory_free, net_in_bytes, net_in_errors, net_in_dropped, net_out_bytes, net_out_errors, net_out_dropped, insertion_date, insertion_time): """ Include System general Snapshot on the database """ insertion_sql = self.INSERT_SYSTEM_TEMPLATE % \ ( loadavg, cpu_percent, memory_used, memory_free, net_in_bytes, net_in_errors, net_in_dropped, net_out_bytes, net_out_errors, net_out_dropped, insertion_date, insertion_time ) self._execute(insertion_sql) return insertion_sql def insertTemperatureSnapshot(self, sensor_id, temperature, alarm, insertion_date, insertion_time): """ Include Temperature information Snapshot on the database """ insertion_sql = self.INSERT_TEMPERATURE_TEMPLATE % \ (sensor_id, temperature, alarm, 
insertion_date, insertion_time) self._execute(insertion_sql) return insertion_sql def insertHeatingSnapshot(self, model_id, sensor_id, initial_temperature, final_temperature, delta_time, zero_emission_ratio, insertion_date, insertion_time): """ Include Heating information Snapshot on the database """ insertion_sql = self.INSERT_HEATING_TEMPLATE % \ (model_id, sensor_id, initial_temperature, final_temperature, delta_time, zero_emission_ratio, insertion_date, insertion_time) self._execute(insertion_sql) return insertion_sql def getTableList(self): """ Get the list of tables from the database """ return [i[0] for i in self._execute( "SELECT name FROM sqlite_master WHERE type='table'")] def _getGarbageCollectionDateList(self, days_to_preserve=3): """ Return the list of dates to Preserve when data collect """ base = datetime.datetime.today() date_list = [] for x in range(0, days_to_preserve): date_list.append((base - datetime.timedelta(days=x)).strftime("%Y-%m-%d")) return date_list def garbageCollect(self): """ Garbase collect the database, by removing older records already reported. """ date_list = self._getGarbageCollectionDateList() where_clause = "reported = 1" for _date in date_list: where_clause += " AND date != '%s' " % _date delete_sql = "DELETE FROM %s WHERE %s" self.connect() for table in self.table_list: if table not in self.preserve_table_list: self._execute(delete_sql % (table, where_clause)) self.commit() self.close() def getDateScopeList(self, ignore_date=None, reported=0): """ Get from the present unique dates from the system Use a smaller table to sabe time. """ if ignore_date is not None: where_clause = " AND date != '%s'" % ignore_date else: where_clause = "" select_sql = "SELECT date, count(time) FROM system "\ " WHERE reported = %s %s GROUP BY date" % \ (reported, where_clause) return self._execute(select_sql) def markDayAsReported(self, date_scope, table_list): """ Mark all registers from a certain date as reported """ update_sql = "UPDATE %s SET reported = 1 " \ "WHERE date = '%s' AND reported = 0" for table in table_list: self._execute(update_sql % (table, date_scope)) def select(self, table, date=None, columns="*", where=None, order=None, limit=0): """ Query database for a full table information """ if date is not None: where_clause = " WHERE date = '%s' " % date else: where_clause = "" if where is not None: if where_clause == "": where_clause += " WHERE 1 = 1 " where_clause += " AND %s " % where select_sql = "SELECT %s FROM %s %s " % (columns, table, where_clause) if order is not None: select_sql += " ORDER BY %s" % order if limit: select_sql += " limit %s" % limit return self._execute(select_sql) ##################################################### # Export Tables as Dict for handle realtime plotting ##################################################### def exportSystemAsDict(self, date): """ Export system table as dictionally, formatting the output for present it in a nicer presentation. 
""" collected_entry_dict = {} collected_entry_dict["loadavg"] = [] collected_entry_dict["cpu_percent"] = [] collected_entry_dict["memory_used"] = [] collected_entry_dict["memory_free"] = [] collected_entry_dict["net_in_bytes"] = [] collected_entry_dict["net_in_errors"] = [] collected_entry_dict["net_in_dropped"] = [] collected_entry_dict["net_out_bytes"] = [] collected_entry_dict["net_out_errors"] = [] collected_entry_dict["net_out_dropped"] = [] first_entry = 1 last_entry_in = 0 last_entry_out = 0 entry_list = self._execute( "SELECT loadavg, cpu_percent, memory_used, memory_free," \ " net_in_bytes, net_in_errors, net_in_dropped," \ " net_out_bytes, net_out_errors, net_out_dropped, " \ " date, time FROM system WHERE date = '%s'" % date) for entry in entry_list: entry_time = "%s %s" % (entry[10], str(entry[11])) if not first_entry: _entry_in = entry[4] - last_entry_in last_entry_in = entry[4] entry_in = _entry_in _entry_out = entry[7] - last_entry_out last_entry_out = entry[7] entry_out = _entry_out else: first_entry = 0 last_entry_in = entry[4] last_entry_out = entry[7] continue collected_entry_dict["loadavg"].append( {'entry': entry[0], 'time': entry_time }) collected_entry_dict["cpu_percent"].append( {'entry': entry[1], 'time': entry_time }) collected_entry_dict["memory_used"].append( {'entry': entry[2]/1024, 'time': entry_time }) collected_entry_dict["memory_free"].append( {'entry': entry[3]/1024, 'time': entry_time }) collected_entry_dict["net_in_bytes"].append( {'entry': entry_in/1024, 'time': entry_time }) collected_entry_dict["net_in_errors"].append( {'entry': entry[5], 'time': entry_time }) collected_entry_dict["net_in_dropped"].append( {'entry': entry[6], 'time': entry_time }) collected_entry_dict["net_out_bytes"].append( {'entry': entry_out/1024, 'time': entry_time }) collected_entry_dict["net_out_errors"].append( {'entry': entry[8], 'time': entry_time }) collected_entry_dict["net_out_dropped"].append( {'entry': entry[9], 'time': entry_time }) return collected_entry_dict def exportDiskAsDict(self, date): """ Export a column from a table for a given date. 
""" collected_entry_dict = {} entry_list = self._execute( "SELECT partition, used, free, date, time "\ "from disk WHERE date = '%s'" % (date)) for partition, used, free, __date, __time in entry_list: partition_used = "%s-used" % partition partition_free = "%s-free" % partition if partition_used not in collected_entry_dict: collected_entry_dict[partition_used] = [] if partition_free not in collected_entry_dict: collected_entry_dict[partition_free] = [] collected_entry_dict[partition_used].append( {'entry': int(used)/1024, 'time': "%s %s" % (__date, str(__time))}) collected_entry_dict[partition_free].append( {'entry': int(free)/1024, 'time': "%s %s" % (__date, str(__time))}) return collected_entry_dict def getLastHeatingTestTime(self): select_sql = "SELECT date, time FROM heating ORDER BY date, time DESC LIMIT 1" for __date, __time in self._execute(select_sql): _date = datetime.datetime.strptime("%s %s" % (__date, __time), "%Y-%m-%d %H:%M:%S") return datetime.datetime.now() - _date return datetime.timedelta(weeks=520) def getLastZeroEmissionRatio(self): select_sql = "SELECT zero_emission_ratio FROM heating ORDER BY date, time DESC LIMIT 1" for entry in self._execute(select_sql): return entry[0] return -1 def getCollectedTemperatureList(self, sensor_id=None, limit=1): """ Query database for a full table information """ if limit > 0: limit_clause = "LIMIT %s" % (limit,) else: limit_clause = "" if sensor_id is not None: where_clause = "WHERE sensor_id = '%s'" % (sensor_id) else: where_clause = "" select_sql = "SELECT * FROM temperature %s ORDER BY time DESC %s" % (where_clause, limit_clause) return self._execute(select_sql) slapos.core-1.3.18/slapos/collect/entity.py0000644000000000000000000002222012752436134020613 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import os from datetime import datetime, timedelta from slapos.collect.snapshot import FolderSizeSnapshot def get_user_list(config): nb_user = int(config.get("slapformat", "partition_amount")) name_prefix = config.get("slapformat", "user_base_name") path_prefix = config.get("slapformat", "partition_base_name") instance_root = config.get("slapos", "instance_root") # By default, enable disk snapshot, # and set time_cycle to 24hours after the first disk snapshot run disk_snapshot_params = {'enable': True, 'time_cycle': 86400} if config.has_section('collect'): collect_section = dict(config.items("collect")) disk_snapshot_params = dict( enable=eval(collect_section.get("report_disk_usage", "True")), pid_folder=collect_section.get("disk_snapshot_process_pid_foder", None), time_cycle=int(collect_section.get("disk_snapshot_time_cycle", 86400)), use_quota=eval(collect_section.get("disk_snapshot_use_quota", "True")) ) user_dict = {name: User(name, path, disk_snapshot_params) for name, path in [ ( "%s%s" % (name_prefix, nb), "%s/%s%s" % (instance_root, path_prefix, nb) ) for nb in range(nb_user) ] } #user_dict['root'] = User("root", "/opt/slapgrid") return user_dict class User(object): def __init__(self, name, path, disk_snapshot_params={}): self.name = str(name) self.path = str(path) self.disk_snapshot_params = disk_snapshot_params self.snapshot_list = [] def append(self, value): self.snapshot_list.append(value) def _insertDiskSnapShot(self, database, collected_date, collected_time): if self.disk_snapshot_params['enable']: time_cycle = self.disk_snapshot_params.get('time_cycle', 0) database.connect() if time_cycle: order = 'date DESC, time DESC' limit = 1 query = database.select(table="folder", columns="date, time", order=order, limit=limit, where="partition='%s'" % self.name) query_result = zip(*query) if len(query_result): date, time = (query_result[0][0], query_result[1][0]) latest_date = datetime.strptime('%s %s' % (date, time), "%Y-%m-%d %H:%M:%S") if (datetime.now() - latest_date).seconds < time_cycle: # wait the time cycle return pid_file = self.disk_snapshot_params.get('pid_folder', None) if pid_file is not None: pid_file = os.path.join(pid_file, '%s_disk_size.pid' % self.name) disk_snapshot = FolderSizeSnapshot(self.path, pid_file) disk_snapshot.update_folder_size() # Skeep insert empty partition: size <= 1Mb if disk_snapshot.disk_usage <= 1024.0 and \ not self.disk_snapshot_params.get('testing', False): return database.inserFolderSnapshot(self.name, disk_usage=disk_snapshot.get("disk_usage"), insertion_date=collected_date, insertion_time=collected_time) database.commit() database.close() def save(self, database, collected_date, collected_time): """ Insert collected data on user collector """ database.connect() snapshot_counter = len(self.snapshot_list) for snapshot_item in self.snapshot_list: snapshot_item.update_cpu_percent() database.insertUserSnapshot(self.name, pid=snapshot_item.get("pid"), process=snapshot_item.get("process"), cpu_percent=snapshot_item.get("cpu_percent"), cpu_time=snapshot_item.get("cpu_time"), cpu_num_threads=snapshot_item.get("cpu_num_threads"), memory_percent=snapshot_item.get("memory_percent"), memory_rss=snapshot_item.get("memory_rss"), io_rw_counter=snapshot_item.get("io_rw_counter"), io_cycles_counter=snapshot_item.get("io_cycles_counter"), insertion_date=collected_date, insertion_time=collected_time) database.commit() database.close() # Inser disk snapshot in a new transaction, it 
can take long self._insertDiskSnapShot(database, collected_date, collected_time) class Computer(dict): def __init__(self, computer_snapshot): self.computer_snapshot = computer_snapshot def save(self, database, collected_date, collected_time): database.connect() self._save_computer_snapshot(database, collected_date, collected_time) self._save_system_snapshot(database, collected_date, collected_time) self._save_disk_partition_snapshot(database, collected_date, collected_time) self._save_temperature_snapshot(database, collected_date, collected_time) self._save_heating_snapshot(database, collected_date, collected_time) database.commit() database.close() def _save_computer_snapshot(self, database, collected_date, collected_time): partition_list = ";".join(["%s=%s" % (x,y) for x,y in \ self.computer_snapshot.get("partition_list")]) database.insertComputerSnapshot( cpu_num_core=self.computer_snapshot.get("cpu_num_core"), cpu_frequency=self.computer_snapshot.get("cpu_frequency"), cpu_type=self.computer_snapshot.get("cpu_type"), memory_size=self.computer_snapshot.get("memory_size"), memory_type=self.computer_snapshot.get("memory_type"), partition_list=partition_list, insertion_date=collected_date, insertion_time=collected_time) def _save_system_snapshot(self, database, collected_date, collected_time): snapshot = self.computer_snapshot.get("system_snapshot") database.insertSystemSnapshot( loadavg=snapshot.get("load"), cpu_percent=snapshot.get("cpu_percent"), memory_used=snapshot.get("memory_used"), memory_free=snapshot.get("memory_free"), net_in_bytes=snapshot.get("net_in_bytes"), net_in_errors=snapshot.get("net_in_errors"), net_in_dropped=snapshot.get("net_in_dropped"), net_out_bytes=snapshot.get("net_out_bytes"), net_out_errors= snapshot.get("net_out_errors"), net_out_dropped=snapshot.get("net_out_dropped"), insertion_date=collected_date, insertion_time=collected_time) def _save_disk_partition_snapshot(self, database, collected_date, collected_time): for disk_partition in self.computer_snapshot.get("disk_snapshot_list"): database.insertDiskPartitionSnapshot( partition=disk_partition.partition, used=disk_partition.disk_size_used, free=disk_partition.disk_size_free, mountpoint=';'.join(disk_partition.mountpoint_list), insertion_date=collected_date, insertion_time=collected_time) def _save_temperature_snapshot(self, database, collected_date, collected_time): for temperature_snapshot in self.computer_snapshot.get("temperature_snapshot_list"): database.insertTemperatureSnapshot( sensor_id=temperature_snapshot.sensor_id, temperature=temperature_snapshot.temperature, alarm=temperature_snapshot.alarm, insertion_date=collected_date, insertion_time=collected_time) def _save_heating_snapshot(self, database, collected_date, collected_time): heating_snapshot = self.computer_snapshot.get("heating_contribution_snapshot") if heating_snapshot is not None and \ heating_snapshot.initial_temperature is not None: database.insertHeatingSnapshot( initial_temperature=heating_snapshot.initial_temperature, final_temperature=heating_snapshot.final_temperature, delta_time=heating_snapshot.delta_time, model_id=heating_snapshot.model_id, sensor_id=heating_snapshot.sensor_id, zero_emission_ratio=heating_snapshot.zero_emission_ratio, insertion_date=collected_date, insertion_time=collected_time) slapos.core-1.3.18/slapos/collect/__init__.py0000644000000000000000000001350112752436134021040 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # 
Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## from psutil import process_iter, NoSuchProcess, AccessDenied from time import strftime import shutil import datetime from slapos.collect.db import Database from slapos.util import mkdir_p import os import stat from slapos.collect.snapshot import ProcessSnapshot, ComputerSnapshot from slapos.collect.reporter import RawCSVDumper, \ SystemCSVReporterDumper, \ compressLogFolder, \ ConsumptionReport from entity import get_user_list, Computer def _get_time(): return strftime("%Y-%m-%d -- %H:%M:%S").split(" -- ") def build_snapshot(proc): try: return ProcessSnapshot(proc) except NoSuchProcess: return None def _get_uptime(): # Linux only if os.path.exists('/proc/uptime'): with open('/proc/uptime', 'r') as f: return datetime.timedelta(seconds=float(f.readline().split()[0])) def current_state(user_dict): """ Iterator used to apply build_snapshot(...) on every single relevant process. A process is considered relevant if its user matches our user list, i.e. its user is a slapos user """ process_list = [p for p in process_iter() if p.username() in user_dict] for i, process in enumerate(process_list): yield build_snapshot(process) def do_collect(conf): """ Main function The idea here is to poll system every so many seconds For each poll, we get a list of Snapshots, holding informations about processes. We iterate over that list to store datas on a per user basis: Each user object is a dict, indexed on timestamp. 
We add every snapshot matching the user so that we get informations for each users """ try: collected_date, collected_time = _get_time() user_dict = get_user_list(conf) try: for snapshot in current_state(user_dict): if snapshot: user_dict[snapshot.username].append(snapshot) except (KeyboardInterrupt, SystemExit, NoSuchProcess): raise log_directory = "%s/var/data-log" % conf.get("slapos", "instance_root") mkdir_p(log_directory, 0o755) consumption_report_directory = "%s/var/consumption-report" % \ conf.get("slapos", "instance_root") mkdir_p(consumption_report_directory, 0o755) xml_report_directory = "%s/var/xml_report/%s" % \ (conf.get("slapos", "instance_root"), conf.get("slapos", "computer_id")) mkdir_p(xml_report_directory, 0o755) if stat.S_IMODE(os.stat(log_directory).st_mode) != 0o755: os.chmod(log_directory, 0o755) database = Database(log_directory) if conf.has_option("slapformat", "computer_model_id"): computer_model_id = conf.get("slapformat", "computer_model_id") else: computer_model_id = "no_model" uptime = _get_uptime() if conf.has_option("slapformat", "heating_sensor_id"): heating_sensor_id = conf.get("slapformat", "heating_sensor_id") database.connect() test_heating = uptime is not None and \ uptime > datetime.timedelta(seconds=86400) and \ database.getLastHeatingTestTime() > uptime database.close() else: heating_sensor_id = "no_sensor" test_heating = False computer = Computer(ComputerSnapshot(model_id=computer_model_id, sensor_id = heating_sensor_id, test_heating=test_heating)) computer.save(database, collected_date, collected_time) for user in user_dict.values(): user.save(database, collected_date, collected_time) SystemCSVReporterDumper(database).dump(log_directory) RawCSVDumper(database).dump(log_directory) consumption_report = ConsumptionReport( computer_id=conf.get("slapos", "computer_id"), user_list=get_user_list(conf), database=database, location=consumption_report_directory) base = datetime.datetime.today() for x in range(1, 3): report_file = consumption_report.buildXMLReport( (base - datetime.timedelta(days=x)).strftime("%Y-%m-%d")) if report_file is not None: shutil.copy(report_file, xml_report_directory) compressLogFolder(log_directory) # Drop older entries already reported database.garbageCollect() except AccessDenied: print "You HAVE TO execute this script with root permission." slapos.core-1.3.18/slapos/collect/README.txt0000644000000000000000000001731112752436134020430 0ustar rootroot00000000000000 Collecting Data ================ The "slapos node collect" command collects data from a computer taking a few snapshot on different scopes and storing it (currently on sqllite3). Scopes of Snapshots are: - User Processes: Collects data from all user's process related to SlapOS (ie.: slapuser*) - System Information: Collects data from the System Usage and Computer Hardware. So on every slapos node collect calls (perfomed by cron on every minute), the slapos stores the all snapshots for future analizes. User's Processes Snapshot ========================== Collect command search for all process launched by all users related to the slapos [1]. After this, for each process it uses psutil (or similars tools) to collect all available information for every process pid [2]. Once Collected, every Process information is stored on sqllite3 [3], in other words, we have 1 line per pid for a giving time. 
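For example, such rows can be read back with the Database helper shipped in slapos.collect.db. This is a minimal sketch: the data-log path, the date and the partition name below are placeholders; the collector writes its sqlite file under <instance_root>/var/data-log (the directory used by "slapos node collect").

    from slapos.collect.db import Database

    # Placeholder path: point this at the data-log directory of your node.
    database = Database("/srv/slapgrid/var/data-log")
    database.connect()
    # One row per pid and per collection time for the given partition/day.
    for row in database.select(
            table="user",
            date="2016-01-01",
            columns="partition, pid, process, cpu_percent, memory_rss, time",
            where="partition = 'slapuser0'"):
        print row
    database.close()
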
The pid number and the process creation date are used to build a UID for each process, and the command name is omitted in order to anonymize the data (so the risk of an information leak is reduced). Process measurement currently only covers CPU, memory and io operations (rw and cycles); we are studying how to measure network usage without being intrusive.

System Information Snapshot
============================

These snapshots have 2 different goals: the first is to collect the current load of the computer (cpu, memory, disk, network...), the second is to collect the resources the computer has installed [4]. We use 3 types of snapshots to determine the load and the available resources (all mostly use psutil to collect data):

 - System Snapshot [5]: collects general computer usage such as CPU, Memory and Network IO usage.
 - Computer Snapshot [6]: collects, for now, the number of CPU cores and the available memory; we wish to collect more details later.
 - Disk Snapshot [7]: collects the information related to a disk (1 snapshot per disk), including total size, usage and io information.

"Real-time" Partial dump (Dygraph)
===================================

On every run, we dump the data of the current day as csv [8] (2 axes), so that it can easily be plotted with dygraph. The following files are made available:

 - system_cpu_percent.csv
 - system_disk_memory_free__dev_sda1.csv
 - system_disk_memory_free__dev_sdb1.csv
 - system_disk_memory_used__dev_sda1.csv
 - system_disk_memory_used__dev_sdb1.csv
 - system_loadavg.csv
 - system_memory_free.csv
 - system_memory_used.csv
 - system_net_in_bytes.csv
 - system_net_in_dropped.csv
 - system_net_in_errors.csv
 - system_net_out_bytes.csv
 - system_net_out_dropped.csv
 - system_net_out_errors.csv

They only contain global computer usage information (for now). It is perfectly acceptable to keep a real-time csv copy of the most recent data.

Logrotate
=========

SlapOS collect has its own log rotation policy [9] and garbage collection [10]:

 - All data which is not from the current day is dumped into YYYY-MM-DD folders.
 - Every table generates 1 csv carrying the date of the dumped day.
 - All dumped data is marked as reported in sqlite (column "reported").
 - All data which is older than 3 days and already reported is removed.
 - Every folder containing dumped data is compressed into a tar.gz file.

Data Structure
===============

The header of the CSVs is not included in the dumped files (which is probably a mistake), but it corresponds to the columns of the sqlite tables, which can be described as below [11]:

 - user: partition (text), pid (real), process (text), cpu_percent (real), cpu_time (real), cpu_num_threads (real), memory_percent (real), memory_rss (real), io_rw_counter (real), io_cycles_counter (real), date (text), time (text), reported (integer)
 - computer: cpu_num_core (real), cpu_frequency (real), cpu_type (text), memory_size (real), memory_type (text), partition_list (text), date (text), time (text), reported (integer)
 - system: loadavg (real), cpu_percent (real), memory_used (real), memory_free (real), net_in_bytes (real), net_in_errors (real), net_in_dropped (real), net_out_bytes (real), net_out_errors (real), net_out_dropped (real), date (text), time (text), reported (integer)
 - disk: partition (text), used (text), free (text), mountpoint (text), date (text), time (text), reported (integer)

A more formal way to collect the data could probably be introduced.
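As an illustration of this schema, the tables can also be queried directly with the standard sqlite3 module. This is only a sketch: the database file name and its location are assumptions and depend on your instance_root; check the data-log directory of your node for the actual file.

    import sqlite3

    # Assumed location and name of the collector database.
    connection = sqlite3.connect("/srv/slapgrid/var/data-log/collector.db")
    cursor = connection.cursor()
    # Daily average CPU load and peak memory usage, computed from the
    # "system" table columns listed above.
    cursor.execute(
        "SELECT date, AVG(cpu_percent), MAX(memory_used)"
        " FROM system GROUP BY date ORDER BY date")
    for date, average_cpu, peak_memory in cursor.fetchall():
        print date, average_cpu, peak_memory
    connection.close()
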
Download Collected Data
========================

The data is normally available on the server file system. We use a simple piece of software, "slapmonitor", which can be deployed on any machine and allows us to download the data over HTTP. Slapmonitor can also be used to determine the availability of the machine (it returns "OK" when its "/" address is accessed), and it serves the data at URLs like:

 - https:// / -> just return "OK"
 - https://
//server-log/ -> you can see all files The slapmonitoring can be easily extented to include more sensors (like temperature, benchmarks...) which normally requires more speficic software configurations. Planned Non core extensions and benchmarking ============================================= It is planned to include 4 simple benchmarks measure machines performance degradation overtime: - CPU benchmark with Pystone - SQL Benchmark on SQLlite (for now) - Network Uplink Benchmark - Network Download Benchmark This part is not included or coded, but we intent to measure performance degradation in future, to stop to allocate if the machine is working but cannot mantain a minimal Service Quality (even if it is not looks like overloaded). Servers Availability ===================== All servers contacts the slapos master on regular bases (several times a minute), it is possible to determinate the general availability of a server by looking at apache log using this script: - http://git.erp5.org/gitweb/cloud-quote.git/blob/HEAD:/py/my.py It produces a json like this: - http://git.erp5.org/gitweb/cloud-quote.git/blob/HEAD:/data/stats.json However, this is a bit draft and rudimentar to determinate problems on the machine, as the machine completly "death" is rare, normally most of failures are pure network problems or human/environmental problem (normally not depends of the machine load). [1] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/entity.py?js=1#l58 [2] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/snapshot.py?js=1#l37 [3] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/db.py?js=1#l130 [4] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/entity.py?js=1#l77 [5] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/snapshot.py?js=1#l62 [6] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/snapshot.py?js=1#l95 [7] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/snapshot.py?js=1#l81 [8] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/reporter.py?js=1#l75 [9] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/reporter.py?js=1 [10] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/db.py?js=1#l192 [11] http://git.erp5.org/gitweb/slapos.core.git/blob/HEAD:/slapos/collect/db.py?js=1#l39 slapos.core-1.3.18/slapos/__init__.py0000644000000000000000000000036412752436134017416 0ustar rootroot00000000000000# See http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages try: __import__('pkg_resources').declare_namespace(__name__) except ImportError: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) slapos.core-1.3.18/slapos/slapos.xsd0000644000000000000000000000167112752436135017331 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos/slapos-client.cfg.example0000644000000000000000000000073713006632705022174 0ustar rootroot00000000000000[slapos] master_url = https://slap.vifib.com/ [slapconsole] # Put here the certificate retrieved from SlapOS Master. # Beware: put certificate from YOUR account, not the one from your node. # You (as identified person from SlapOS Master) will request an instance, not your node. # Conclusion: node certificate != person certificate. 
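# For example (hypothetical paths, any location readable by your user works):
# cert_file = /etc/opt/slapos/client/certificate.crt
# key_file = /etc/opt/slapos/client/certificate.key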
cert_file = certificate file location coming from your slapos master account key_file = key file location coming from your slapos master account slapos.core-1.3.18/slapos/slapos.cfg.example0000644000000000000000000001621213006632705020713 0ustar rootroot00000000000000[slapos] # Replace computer_id by the unique identifier of your computer on your SlapOS, # Master, usually starting by COMP- computer_id = COMP-123456789 master_url = https://slap.vifib.com/ key_file = /etc/opt/slapos/ssl/computer.key cert_file = /etc/opt/slapos/ssl/computer.crt certificate_repository_path = /etc/opt/slapos/ssl/partition_pki software_root = /opt/slapgrid instance_root = /srv/slapgrid [slapformat] # Replace by your network interface providing IPv6 if you don't use re6st interface_name = lo # Change "create_tap" into "true" if you need to host KVM services create_tap = false partition_amount = 10 computer_xml = /opt/slapos/slapos.xml log_file = /opt/slapos/log/slapos-node-format.log partition_base_name = slappart user_base_name = slapuser tap_base_name = slaptap # You can choose any other local network which does not conflict with your # current machine configuration ipv4_local_network = 10.0.0.0/16 # Change to true if you want slapos to use local-only IPv6 use_unique_local_address = False [networkcache] # Define options for binary cache, used to download already compiled software. download-binary-cache-url = http://download.shacache.org/ download-cache-url = https://www.shacache.org/shacache download-binary-dir-url = http://dir.shacache.org/ # Configuration to Upload Configuration for Binary cache #upload-binary-dir-url = https://www.shacache.org/shadir #upload-binary-cache-url = https://www.shacache.org/shacache #signature_private_key_file = /etc/opt/slapos/shacache/signature.key #signature_certificate_file = /etc/opt/slapos/shacache/signature.cert #upload-cache-url = https://www.shacache.org/shacache #shacache-ca-file = /etc/opt/slapos/shacache/ca.cert #shacache-cert-file = /etc/opt/slapos/shacache/shacache.cert #shacache-key-file = /etc/opt/slapos/shacache/shacache.key #upload-binary-dir-url = https://www.shacache.org/shadir #upload-binary-cache-url = https://www.shacache.org/shacache #upload-dir-url = https://www.shacache.org/shadir #shadir-ca-file = /etc/opt/slapos/shacache/ca.cert #shadir-cert-file = /etc/opt/slapos/shacache/shacache.cert #shadir-key-file = /etc/opt/slapos/shacache/shacache.key # List of signatures of uploaders we trust: # Romain Courteaud # Sebastien Robin # Kazuhiko Shiozaki # Gabriel Monnerat # Test Agent Signature signature-certificate-list = -----BEGIN CERTIFICATE----- MIIB4DCCAUkCADANBgkqhkiG9w0BAQsFADA5MQswCQYDVQQGEwJGUjEZMBcGA1UE CBMQRGVmYXVsdCBQcm92aW5jZTEPMA0GA1UEChMGTmV4ZWRpMB4XDTExMDkxNTA5 MDAwMloXDTEyMDkxNTA5MDAwMlowOTELMAkGA1UEBhMCRlIxGTAXBgNVBAgTEERl ZmF1bHQgUHJvdmluY2UxDzANBgNVBAoTBk5leGVkaTCBnzANBgkqhkiG9w0BAQEF AAOBjQAwgYkCgYEApYZv6OstoqNzxG1KI6iE5U4Ts2Xx9lgLeUGAMyfJLyMmRLhw boKOyJ9Xke4dncoBAyNPokUR6iWOcnPHtMvNOsBFZ2f7VA28em3+E1JRYdeNUEtX Z0s3HjcouaNAnPfjFTXHYj4um1wOw2cURSPuU5dpzKBbV+/QCb5DLheynisCAwEA ATANBgkqhkiG9w0BAQsFAAOBgQBCZLbTVdrw3RZlVVMFezSHrhBYKAukTwZrNmJX mHqi2tN8tNo6FX+wmxUUAf3e8R2Ymbdbn2bfbPpcKQ2fG7PuKGvhwMG3BlF9paEC q7jdfWO18Zp/BG7tagz0jmmC4y/8akzHsVlruo2+2du2freE8dK746uoMlXlP93g QUUGLQ== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIB8jCCAVugAwIBAgIJAPu2zchZ2BxoMA0GCSqGSIb3DQEBBQUAMBIxEDAOBgNV BAMMB3RzeGRldjMwHhcNMTExMDE0MTIxNjIzWhcNMTIxMDEzMTIxNjIzWjASMRAw DgYDVQQDDAd0c3hkZXYzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrPbh+ 
YGmo6mWmhVb1vTqX0BbeU0jCTB8TK3i6ep3tzSw2rkUGSx3niXn9LNTFNcIn3MZN XHqbb4AS2Zxyk/2tr3939qqOrS4YRCtXBwTCuFY6r+a7pZsjiTNddPsEhuj4lEnR L8Ax5mmzoi9nE+hiPSwqjRwWRU1+182rzXmN4QIDAQABo1AwTjAdBgNVHQ4EFgQU /4XXREzqBbBNJvX5gU8tLWxZaeQwHwYDVR0jBBgwFoAU/4XXREzqBbBNJvX5gU8t LWxZaeQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQA07q/rKoE7fAda FED57/SR00OvY9wLlFEF2QJ5OLu+O33YUXDDbGpfUSF9R8l0g9dix1JbWK9nQ6Yd R/KCo6D0sw0ZgeQv1aUXbl/xJ9k4jlTxmWbPeiiPZEqU1W9wN5lkGuLxV4CEGTKU hJA/yXa1wbwIPGvX3tVKdOEWPRXZLg== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIB7jCCAVegAwIBAgIJAJWA0jQ4o9DGMA0GCSqGSIb3DQEBBQUAMA8xDTALBgNV BAMMBHg2MXMwIBcNMTExMTI0MTAyNDQzWhgPMjExMTEwMzExMDI0NDNaMA8xDTAL BgNVBAMMBHg2MXMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANdJNiFsRlkH vq2kHP2zdxEyzPAWZH3CQ3Myb3F8hERXTIFSUqntPXDKXDb7Y/laqjMXdj+vptKk 3Q36J+8VnJbSwjGwmEG6tym9qMSGIPPNw1JXY1R29eF3o4aj21o7DHAkhuNc5Tso 67fUSKgvyVnyH4G6ShQUAtghPaAwS0KvAgMBAAGjUDBOMB0GA1UdDgQWBBSjxFUE RfnTvABRLAa34Ytkhz5vPzAfBgNVHSMEGDAWgBSjxFUERfnTvABRLAa34Ytkhz5v PzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAFLDS7zNhlrQYSQO5KIj z2RJe3fj4rLPklo3TmP5KLvendG+LErE2cbKPqnhQ2oVoj6u9tWVwo/g03PMrrnL KrDm39slYD/1KoE5kB4l/p6KVOdeJ4I6xcgu9rnkqqHzDwI4v7e8/D3WZbpiFUsY vaZhjNYKWQf79l6zXfOvphzJ -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIB9jCCAV+gAwIBAgIJAO4V/jiMoICoMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV BAMMCENPTVAtMjMyMCAXDTEyMDIxNjExMTAyM1oYDzIxMTIwMTIzMTExMDIzWjAT MREwDwYDVQQDDAhDT01QLTIzMjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA wi/3Z8W9pUiegUXIk/AiFDQ0UJ4JFAwjqr+HSRUirlUsHHT+8DzH/hfcTDX1I5BB D1ADk+ydXjMm3OZrQcXjn29OUfM5C+g+oqeMnYQImN0DDQIOcUyr7AJc4xhvuXQ1 P2pJ5NOd3tbd0kexETa1LVhR6EgBC25LyRBRae76qosCAwEAAaNQME4wHQYDVR0O BBYEFMDmW9aFy1sKTfCpcRkYnP6zUd1cMB8GA1UdIwQYMBaAFMDmW9aFy1sKTfCp cRkYnP6zUd1cMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAskbFizHr b6d3iIyN+wffxz/V9epbKIZVEGJd/6LrTdLiUfJPec7FaxVCWNyKBlCpINBM7cEV Gn9t8mdVQflNqOlAMkOlUv1ZugCt9rXYQOV7rrEYJBWirn43BOMn9Flp2nibblby If1a2ZoqHRxoNo2yTmm7TSYRORWVS+vvfjY= -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIB9jCCAV+gAwIBAgIJAKRvzcy7OH0UMA0GCSqGSIb3DQEBBQUAMBMxETAPBgNV BAMMCENPTVAtNzcyMCAXDTEyMDgxMDE1NDI1MVoYDzIxMTIwNzE3MTU0MjUxWjAT MREwDwYDVQQDDAhDT01QLTc3MjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA o7aipd6MbnuGDeR1UJUjuMLQUariAyQ2l2ZDS6TfOwjHiPw/mhzkielgk73kqN7A sUREx41eTcYCXzTq3WP3xCLE4LxLg1eIhd4nwNHj8H18xR9aP0AGjo4UFl5BOMa1 mwoyBt3VtfGtUmb8whpeJgHhqrPPxLoON+i6fIbXDaUCAwEAAaNQME4wHQYDVR0O BBYEFEfjy3OopT2lOksKmKBNHTJE2hFlMB8GA1UdIwQYMBaAFEfjy3OopT2lOksK mKBNHTJE2hFlMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAaNRx6YN2 M/p3R8/xS6zvH1EqJ3FFD7XeAQ52WuQnKSREzuw0dsw12ClxjcHiQEFioyTiTtjs 5pW18Ry5Ie7iFK4cQMerZwWPxBodEbAteYlRsI6kePV7Gf735Y1RpuN8qZ2sYL6e x2IMeSwJ82BpdEI5niXxB+iT0HxhmR+XaMI= -----END CERTIFICATE----- # List of URL(s) which shouldn't be downloaded from binary cache. # Any URL beginning by a blacklisted URL will be blacklisted as well. download-from-binary-cache-url-blacklist = https://lab.nexedi.cn/nexedi/slapos/raw/master https://lab.nexedi.cn/nexedi/slapos/raw/1.0/ https://lab.nexedi.cn/nexedi/slapos/raw/erp5 https://lab.nexedi.com/nexedi/slapos/raw/master https://lab.nexedi.com/nexedi/slapos/raw/1.0/ https://lab.nexedi.com/nexedi/slapos/raw/erp5 http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/heads / # List of URL(s) which shouldn't be uploaded into binary cache. # Any URL beginning by a blacklisted URL will be blacklisted as well. 
upload-to-binary-cache-url-blacklist = https://lab.nexedi.cn/nexedi/slapos/raw/master https://lab.nexedi.cn/nexedi/slapos/raw/1.0/ https://lab.nexedi.cn/nexedi/slapos/raw/erp5 https://lab.nexedi.com/nexedi/slapos/raw/master https://lab.nexedi.com/nexedi/slapos/raw/1.0/ https://lab.nexedi.com/nexedi/slapos/raw/erp5 http://git.erp5.org/gitweb/slapos.git/blob_plain/HEAD http://git.erp5.org/gitweb/slapos.git/blob_plain/refs/heads / slapos.core-1.3.18/slapos/slap/0000755000000000000000000000000013006632706016235 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/slap/__init__.py0000644000000000000000000000304412752436135020354 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import sys if sys.version_info < (2, 6): import warnings warnings.warn('Used python version (%s) is old and has problems with' ' IPv6 connections' % '.'.join([str(q) for q in sys.version_info[:3]])) from slap import * slapos.core-1.3.18/slapos/slap/doc/0000755000000000000000000000000013006632706017002 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/slap/doc/software_instance.xsd0000644000000000000000000000123312752436135023244 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos/slap/doc/computer_consumption.xsd0000644000000000000000000000422012752436135024021 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos/slap/doc/partition_consumption.xsd0000644000000000000000000000171612752436135024203 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos/slap/slap.py0000644000000000000000000011651113003671621017547 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## """ Simple, easy to (un)marshall classes for slap client/server communication """ __all__ = ["slap", "ComputerPartition", "Computer", "SoftwareRelease", "SoftwareInstance", "SoftwareProductCollection", "Supply", "OpenOrder", "NotFoundError", "ResourceNotReady", "ServerError", "ConnectionError"] import os import json import logging import re import urlparse import hashlib from util import xml2dict import netaddr from xml.sax import saxutils import zope.interface from interface import slap as interface from xml_marshaller import xml_marshaller from uritemplate import expand import requests # silence messages like 'Unverified HTTPS request is being made' requests.packages.urllib3.disable_warnings() # silence messages like 'Starting connection' that are logged with INFO urllib3_logger = logging.getLogger('requests.packages.urllib3') urllib3_logger.setLevel(logging.WARNING) # XXX fallback_logger to be deprecated together with the old CLI entry points. fallback_logger = logging.getLogger(__name__) fallback_handler = logging.StreamHandler() fallback_logger.setLevel(logging.INFO) fallback_logger.addHandler(fallback_handler) DEFAULT_SOFTWARE_TYPE = 'RootSoftwareInstance' COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME = '.slapos-request-transaction-%s' class SlapDocument: def __init__(self, connection_helper=None, hateoas_navigator=None): if connection_helper is not None: # Do not require connection_helper to be provided, but when it's not, # cause failures when accessing _connection_helper property. 
self._connection_helper = connection_helper self._hateoas_navigator = hateoas_navigator class SlapRequester(SlapDocument): """ Abstract class that allow to factor method for subclasses that use "request()" """ def _requestComputerPartition(self, request_dict): try: xml = self._connection_helper.POST('requestComputerPartition', data=request_dict) except ResourceNotReady: return ComputerPartition( request_dict=request_dict, connection_helper=self._connection_helper, ) if type(xml) is unicode: xml = str(xml) xml.encode('utf-8') software_instance = xml_marshaller.loads(xml) computer_partition = ComputerPartition( software_instance.slap_computer_id.encode('UTF-8'), software_instance.slap_computer_partition_id.encode('UTF-8'), connection_helper=self._connection_helper, ) # Hack to give all object attributes to the ComputerPartition instance # XXX Should be removed by correctly specifying difference between # ComputerPartition and SoftwareInstance computer_partition.__dict__ = dict(computer_partition.__dict__.items() + software_instance.__dict__.items()) # XXX not generic enough. if xml_marshaller.loads(request_dict['shared_xml']): computer_partition._synced = True computer_partition._connection_dict = software_instance._connection_dict computer_partition._parameter_dict = software_instance._parameter_dict return computer_partition class SoftwareRelease(SlapDocument): """ Contains Software Release information """ zope.interface.implements(interface.ISoftwareRelease) def __init__(self, software_release=None, computer_guid=None, **kw): """ Makes easy initialisation of class parameters XXX **kw args only kept for compatibility """ SlapDocument.__init__(self, kw.pop('connection_helper', None), kw.pop('hateoas_navigator', None)) self._software_instance_list = [] if software_release is not None: software_release = software_release.encode('UTF-8') self._software_release = software_release self._computer_guid = computer_guid def __getinitargs__(self): return (self._software_release, self._computer_guid, ) def getComputerId(self): if not self._computer_guid: raise NameError('computer_guid has not been defined.') else: return self._computer_guid def getURI(self): if not self._software_release: raise NameError('software_release has not been defined.') else: return self._software_release def error(self, error_log, logger=None): try: # Does not follow interface self._connection_helper.POST('softwareReleaseError', data={ 'url': self.getURI(), 'computer_id': self.getComputerId(), 'error_log': error_log}) except Exception: (logger or fallback_logger).exception('') def available(self): self._connection_helper.POST('availableSoftwareRelease', data={ 'url': self.getURI(), 'computer_id': self.getComputerId()}) def building(self): self._connection_helper.POST('buildingSoftwareRelease', data={ 'url': self.getURI(), 'computer_id': self.getComputerId()}) def destroyed(self): self._connection_helper.POST('destroyedSoftwareRelease', data={ 'url': self.getURI(), 'computer_id': self.getComputerId()}) def getState(self): return getattr(self, '_requested_state', 'available') class SoftwareProductCollection(object): zope.interface.implements(interface.ISoftwareProductCollection) def __init__(self, logger, slap): self.logger = logger self.slap = slap self.get = self.__getattr__ def __getattr__(self, software_product): self.logger.info('Getting best Software Release corresponding to ' 'this Software Product...') software_release_list = \ self.slap.getSoftwareReleaseListFromSoftwareProduct(software_product) try: 
software_release_url = software_release_list[0] # First is best one. self.logger.info('Found as %s.' % software_release_url) return software_release_url except IndexError: raise AttributeError('No Software Release corresponding to this ' 'Software Product has been found.') # XXX What is this SoftwareInstance class? class SoftwareInstance(SlapDocument): """ Contains Software Instance information """ zope.interface.implements(interface.ISoftwareInstance) def __init__(self, **kwargs): """ Makes easy initialisation of class parameters """ for k, v in kwargs.iteritems(): setattr(self, k, v) """Exposed exceptions""" class ResourceNotReady(Exception): zope.interface.implements(interface.IResourceNotReady) class ServerError(Exception): zope.interface.implements(interface.IServerError) class NotFoundError(Exception): zope.interface.implements(interface.INotFoundError) class AuthenticationError(Exception): pass class ConnectionError(Exception): zope.interface.implements(interface.IConnectionError) class Supply(SlapDocument): zope.interface.implements(interface.ISupply) def supply(self, software_release, computer_guid=None, state='available'): try: self._connection_helper.POST('supplySupply', data={ 'url': software_release, 'computer_id': computer_guid, 'state': state}) except NotFoundError: raise NotFoundError("Computer %s has not been found by SlapOS Master." % computer_guid) class OpenOrder(SlapRequester): zope.interface.implements(interface.IOpenOrder) def request(self, software_release, partition_reference, partition_parameter_kw=None, software_type=None, filter_kw=None, state=None, shared=False): if partition_parameter_kw is None: partition_parameter_kw = {} if filter_kw is None: filter_kw = {} request_dict = { 'software_release': software_release, 'partition_reference': partition_reference, 'partition_parameter_xml': xml_marshaller.dumps(partition_parameter_kw), 'filter_xml': xml_marshaller.dumps(filter_kw), # XXX Cedric: Why state and shared are marshalled? First is a string # And second is a boolean. 'state': xml_marshaller.dumps(state), 'shared_xml': xml_marshaller.dumps(shared), } if software_type is not None: request_dict['software_type'] = software_type else: # Let's enforce a default software type request_dict['software_type'] = DEFAULT_SOFTWARE_TYPE return self._requestComputerPartition(request_dict) def getInformation(self, partition_reference): if not getattr(self, '_hateoas_navigator', None): raise Exception('SlapOS Master Hateoas API required for this operation is not availble.') raw_information = self._hateoas_navigator.getHostingSubscriptionRootSoftwareInstanceInformation(partition_reference) software_instance = SoftwareInstance() # XXX redefine SoftwareInstance to be more consistent for key, value in raw_information.iteritems(): if key in ['_links']: continue setattr(software_instance, '_%s' % key, value) setattr(software_instance, '_software_release_url', raw_information['_links']['software_release']) return software_instance def requestComputer(self, computer_reference): """ Requests a computer. 
""" xml = self._connection_helper.POST('requestComputer', data={'computer_title': computer_reference}) computer = xml_marshaller.loads(xml) computer._connection_helper = self._connection_helper computer._hateoas_navigator = self._hateoas_navigator return computer def _syncComputerInformation(func): """ Synchronize computer object with server information """ def decorated(self, *args, **kw): if getattr(self, '_synced', 0): return func(self, *args, **kw) computer = self._connection_helper.getFullComputerInformation(self._computer_id) for key, value in computer.__dict__.items(): if isinstance(value, unicode): # convert unicode to utf-8 setattr(self, key, value.encode('utf-8')) else: setattr(self, key, value) setattr(self, '_synced', True) for computer_partition in self.getComputerPartitionList(): setattr(computer_partition, '_synced', True) return func(self, *args, **kw) return decorated class Computer(SlapDocument): zope.interface.implements(interface.IComputer) def __init__(self, computer_id, connection_helper=None, hateoas_navigator=None): SlapDocument.__init__(self, connection_helper, hateoas_navigator) self._computer_id = computer_id def __getinitargs__(self): return (self._computer_id, ) @_syncComputerInformation def getSoftwareReleaseList(self): """ Returns the list of software release which has to be supplied by the computer. Raise an INotFoundError if computer_guid doesn't exist. """ for software_relase in self._software_release_list: software_relase._connection_helper = self._connection_helper software_relase._hateoas_navigator = self._hateoas_navigator return self._software_release_list @_syncComputerInformation def getComputerPartitionList(self): for computer_partition in self._computer_partition_list: computer_partition._connection_helper = self._connection_helper computer_partition._hateoas_navigator = self._hateoas_navigator return [x for x in self._computer_partition_list] def reportUsage(self, computer_usage): if computer_usage == "": return self._connection_helper.POST('useComputer', data={ 'computer_id': self._computer_id, 'use_string': computer_usage}) def updateConfiguration(self, xml): return self._connection_helper.POST('loadComputerConfigurationFromXML', data={'xml': xml}) def bang(self, message): self._connection_helper.POST('computerBang', data={ 'computer_id': self._computer_id, 'message': message}) def getStatus(self): xml = self._connection_helper.GET('getComputerStatus', params={'computer_id': self._computer_id}) return xml_marshaller.loads(xml) def revokeCertificate(self): self._connection_helper.POST('revokeComputerCertificate', data={ 'computer_id': self._computer_id}) def generateCertificate(self): xml = self._connection_helper.POST('generateComputerCertificate', data={ 'computer_id': self._computer_id}) return xml_marshaller.loads(xml) def parsed_error_message(status, body, path): m = re.search('(Error Value:\n.*)', body, re.MULTILINE) if m: match = ' '.join(line.strip() for line in m.group(0).split('\n')) return '%s (status %s while calling %s)' % ( saxutils.unescape(match), status, path ) else: return 'Server responded with wrong code %s with %s' % (status, path) class ComputerPartition(SlapRequester): zope.interface.implements(interface.IComputerPartition) def __init__(self, computer_id=None, partition_id=None, request_dict=None, connection_helper=None, hateoas_navigator=None): SlapDocument.__init__(self, connection_helper, hateoas_navigator) if request_dict is not None and (computer_id is not None or partition_id is not None): raise 
TypeError('request_dict conflicts with computer_id and ' 'partition_id') if request_dict is None and (computer_id is None or partition_id is None): raise TypeError('computer_id and partition_id or request_dict are ' 'required') self._computer_id = computer_id self._partition_id = partition_id self._request_dict = request_dict # Just create an empty file (for nothing requested yet) self._updateTransactionFile(partition_reference=None) def __getinitargs__(self): return (self._computer_id, self._partition_id, ) def _updateTransactionFile(self, partition_reference=None): """ Store reference to all Instances requested by this Computer Parition """ # Environ variable set by Slapgrid while processing this partition instance_root = os.environ.get('SLAPGRID_INSTANCE_ROOT', '') if not instance_root or not self._partition_id: return transaction_file_name = COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME % self._partition_id transaction_file_path = os.path.join(instance_root, self._partition_id, transaction_file_name) try: if partition_reference is None: if os.access(os.path.join(instance_root, self._partition_id), os.W_OK): if os.path.exists(transaction_file_path): return transac_file = open(transaction_file_path, 'w') transac_file.close() else: with open(transaction_file_path, 'a') as transac_file: transac_file.write('%s\n' % partition_reference) except OSError: return def request(self, software_release, software_type, partition_reference, shared=False, partition_parameter_kw=None, filter_kw=None, state=None): if partition_parameter_kw is None: partition_parameter_kw = {} elif not isinstance(partition_parameter_kw, dict): raise ValueError("Unexpected type of partition_parameter_kw '%s'" % partition_parameter_kw) if filter_kw is None: filter_kw = {} elif not isinstance(filter_kw, dict): raise ValueError("Unexpected type of filter_kw '%s'" % filter_kw) # Let enforce a default software type if software_type is None: software_type = DEFAULT_SOFTWARE_TYPE request_dict = { 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, 'software_release': software_release, 'software_type': software_type, 'partition_reference': partition_reference, 'shared_xml': xml_marshaller.dumps(shared), 'partition_parameter_xml': xml_marshaller.dumps( partition_parameter_kw), 'filter_xml': xml_marshaller.dumps(filter_kw), 'state': xml_marshaller.dumps(state), } self._updateTransactionFile(partition_reference) return self._requestComputerPartition(request_dict) def building(self): self._connection_helper.POST('buildingComputerPartition', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId()}) def available(self): self._connection_helper.POST('availableComputerPartition', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId()}) def destroyed(self): self._connection_helper.POST('destroyedComputerPartition', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), }) def started(self): self._connection_helper.POST('startedComputerPartition', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), }) def stopped(self): self._connection_helper.POST('stoppedComputerPartition', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), }) def error(self, error_log, logger=None): try: self._connection_helper.POST('softwareInstanceError', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), 'error_log': error_log}) except Exception: (logger or 
fallback_logger).exception('') def bang(self, message): self._connection_helper.POST('softwareInstanceBang', data={ 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), 'message': message}) def rename(self, new_name, slave_reference=None): post_dict = { 'computer_id': self._computer_id, 'computer_partition_id': self.getId(), 'new_name': new_name, } if slave_reference: post_dict['slave_reference'] = slave_reference self._connection_helper.POST('softwareInstanceRename', data=post_dict) def getInformation(self, partition_reference): """ Return all needed informations about an existing Computer Partition in the Instance tree of the current Computer Partition. """ if not getattr(self, '_hateoas_navigator', None): raise Exception('SlapOS Master Hateoas API required for this operation is not availble.') raw_information = self._hateoas_navigator.getRelatedInstanceInformation(partition_reference) software_instance = SoftwareInstance() # XXX redefine SoftwareInstance to be more consistent for key, value in raw_information.iteritems(): if key in ['_links']: continue setattr(software_instance, '_%s' % key, value) setattr(software_instance, '_software_release_url', raw_information['_links']['software_release']) return software_instance def getId(self): if not getattr(self, '_partition_id', None): raise ResourceNotReady() return self._partition_id def getInstanceGuid(self): """Return instance_guid. Raise ResourceNotReady if it doesn't exist.""" if not getattr(self, '_instance_guid', None): raise ResourceNotReady() return self._instance_guid def getState(self): """return _requested_state. Raise ResourceNotReady if it doesn't exist.""" if not getattr(self, '_requested_state', None): raise ResourceNotReady() return self._requested_state def getType(self): """ return the Software Type of the instance. Raise RessourceNotReady if not present. """ # XXX: software type should not belong to the parameter dict. software_type = self.getInstanceParameterDict().get( 'slap_software_type', None) if not software_type: raise ResourceNotReady() return software_type def getInstanceParameterDict(self): return getattr(self, '_parameter_dict', None) or {} def getConnectionParameterDict(self): connection_dict = getattr(self, '_connection_dict', None) if connection_dict is None: # XXX Backward compatibility for older slapproxy (<= 1.0.0) connection_dict = xml2dict(getattr(self, 'connection_xml', '')) return connection_dict or {} def getSoftwareRelease(self): """ Returns the software release associate to the computer partition. """ if not getattr(self, '_software_release_document', None): raise NotFoundError("No software release information for partition %s" % self.getId()) else: return self._software_release_document def setConnectionDict(self, connection_dict, slave_reference=None): if self.getConnectionParameterDict() == connection_dict: return if slave_reference is not None: # check the connection parameters from the slave # Should we check existence? 
slave_parameter_list = self.getInstanceParameter("slave_instance_list") slave_connection_dict = {} for slave_parameter_dict in slave_parameter_list: if slave_parameter_dict.get("slave_reference") == slave_reference: connection_parameter_hash = slave_parameter_dict.get("connection-parameter-hash", None) break # Skip as nothing changed for the slave if connection_parameter_hash is not None and \ connection_parameter_hash == hashlib.sha256(str(connection_dict)).hexdigest(): return self._connection_helper.POST('setComputerPartitionConnectionXml', data={ 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, 'connection_xml': xml_marshaller.dumps(connection_dict), 'slave_reference': slave_reference}) def getInstanceParameter(self, key): parameter_dict = getattr(self, '_parameter_dict', None) or {} if key in parameter_dict: return parameter_dict[key] else: raise NotFoundError("%s not found" % key) def getConnectionParameter(self, key): connection_dict = self.getConnectionParameterDict() if key in connection_dict: return connection_dict[key] else: raise NotFoundError("%s not found" % key) def setUsage(self, usage_log): # XXX: this implementation has not been reviewed self.usage = usage_log def getCertificate(self): xml = self._connection_helper.GET('getComputerPartitionCertificate', params={ 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, } ) return xml_marshaller.loads(xml) def getStatus(self): xml = self._connection_helper.GET('getComputerPartitionStatus', params={ 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, } ) return xml_marshaller.loads(xml) def getFullHostingIpAddressList(self): xml = self._connection_helper.GET('getHostingSubscriptionIpList', params={ 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, } ) return xml_marshaller.loads(xml) def setComputerPartitionRelatedInstanceList(self, instance_reference_list): self._connection_helper.POST('updateComputerPartitionRelatedInstanceList', data={ 'computer_id': self._computer_id, 'computer_partition_id': self._partition_id, 'instance_reference_xml': xml_marshaller.dumps(instance_reference_list) } ) def _addIpv6Brackets(url): # if master_url contains an ipv6 without bracket, add it # Note that this is mostly to limit specific issues with # backward compatiblity, not to ensure generic detection. 
api_scheme, api_netloc, api_path, api_query, api_fragment = urlparse.urlsplit(url) try: ip = netaddr.IPAddress(api_netloc) port = None except netaddr.AddrFormatError: try: ip = netaddr.IPAddress(':'.join(api_netloc.split(':')[:-1])) port = api_netloc.split(':')[-1] except netaddr.AddrFormatError: ip = port = None if ip and ip.version == 6: api_netloc = '[%s]' % ip if port: api_netloc = '%s:%s' % (api_netloc, port) url = urlparse.urlunsplit((api_scheme, api_netloc, api_path, api_query, api_fragment)) return url class ConnectionHelper: def __init__(self, master_url, key_file=None, cert_file=None, master_ca_file=None, timeout=None): master_url = _addIpv6Brackets(master_url) if master_url.endswith('/'): self.slapgrid_uri = master_url else: # add a slash or the last path segment will be ignored by urljoin self.slapgrid_uri = master_url + '/' self.key_file = key_file self.cert_file = cert_file self.master_ca_file = master_ca_file self.timeout = timeout def getComputerInformation(self, computer_id): xml = self.GET('getComputerInformation', params={'computer_id': computer_id}) return xml_marshaller.loads(xml) def getFullComputerInformation(self, computer_id): """ Retrieve from SlapOS Master Computer instance containing all needed informations (Software Releases, Computer Partitions, ...). """ path = 'getFullComputerInformation' params = {'computer_id': computer_id} if not computer_id: # XXX-Cedric: should raise something smarter than "NotFound". raise NotFoundError('%r %r' % (path, params)) try: xml = self.GET(path, params=params) except NotFoundError: # XXX: This is a ugly way to keep backward compatibility, # We should stablise slap library soon. xml = self.GET('getComputerInformation', params=params) if type(xml) is unicode: xml = str(xml) xml.encode('utf-8') return xml_marshaller.loads(xml) def do_request(self, method, path, params=None, data=None, headers=None): url = urlparse.urljoin(self.slapgrid_uri, path) if headers is None: headers = {} headers.setdefault('Accept', '*/*') if path.startswith('/'): path = path[1:] # raise ValueError('method path should be relative: %s' % path) try: if url.startswith('https'): cert = (self.cert_file, self.key_file) else: cert = None # XXX TODO: handle host cert verify # Old behavior was to pass empty parameters as "None" value. # Behavior kept for compatibility with old slapproxies (< v1.3.3). # Can be removed when old slapproxies are no longer in use. if data: for k, v in data.iteritems(): if v is None: data[k] = 'None' req = method(url=url, params=params, cert=cert, verify=False, data=data, headers=headers, timeout=self.timeout) req.raise_for_status() except (requests.Timeout, requests.ConnectionError) as exc: raise ConnectionError("Couldn't connect to the server. Please " "double check given master-url argument, and make sure that IPv6 is " "enabled on your machine and that the server is available. The " "original error was:\n%s" % exc) except requests.HTTPError as exc: if exc.response.status_code == requests.status_codes.codes.not_found: msg = url if params: msg += ' - %s' % params raise NotFoundError(msg) elif exc.response.status_code == requests.status_codes.codes.request_timeout: # this is explicitly returned by SlapOS master, and does not really mean timeout raise ResourceNotReady(path) # XXX TODO test request timeout and resource not found else: # we don't know how or don't want to handle these (including Unauthorized) req.raise_for_status() except requests.exceptions.SSLError as exc: raise AuthenticationError("%s\nCouldn't authenticate computer. 
Please " "check that certificate and key exist and are valid." % exc) # XXX TODO parse server messages for client configure and node register # elif response.status != httplib.OK: # message = parsed_error_message(response.status, # response.read(), # path) # raise ServerError(message) return req def GET(self, path, params=None, headers=None): req = self.do_request(requests.get, path=path, params=params, headers=headers) return req.text.encode('utf-8') def POST(self, path, params=None, data=None, content_type='application/x-www-form-urlencoded'): req = self.do_request(requests.post, path=path, params=params, data=data, headers={'Content-type': content_type}) return req.text.encode('utf-8') class slap: zope.interface.implements(interface.slap) def initializeConnection(self, slapgrid_uri, key_file=None, cert_file=None, master_ca_file=None, timeout=60, slapgrid_rest_uri=None): if master_ca_file: raise NotImplementedError('Master certificate not verified in this version: %s' % master_ca_file) self._connection_helper = ConnectionHelper(slapgrid_uri, key_file, cert_file, master_ca_file, timeout) if not slapgrid_rest_uri: try: slapgrid_rest_uri = self._connection_helper.GET('getHateoasUrl') except: pass if slapgrid_rest_uri: self._hateoas_navigator = SlapHateoasNavigator( slapgrid_rest_uri, key_file, cert_file, master_ca_file, timeout ) else: self._hateoas_navigator = None # XXX-Cedric: this method is never used and thus should be removed. def registerSoftwareRelease(self, software_release): """ Registers connected representation of software release and returns SoftwareRelease class object """ return SoftwareRelease(software_release=software_release, connection_helper=self._connection_helper, hateoas_navigator=self._hateoas_navigator ) def registerComputer(self, computer_guid): """ Registers connected representation of computer and returns Computer class object """ return Computer(computer_guid, connection_helper=self._connection_helper, hateoas_navigator=self._hateoas_navigator ) def registerComputerPartition(self, computer_guid, partition_id): """ Registers connected representation of computer partition and returns Computer Partition class object """ if not computer_guid or not partition_id: # XXX-Cedric: should raise something smarter than NotFound raise NotFoundError xml = self._connection_helper.GET('registerComputerPartition', params = { 'computer_reference': computer_guid, 'computer_partition_reference': partition_id, } ) if type(xml) is unicode: xml = str(xml) xml.encode('utf-8') result = xml_marshaller.loads(xml) # XXX: dirty hack to make computer partition usable. xml_marshaller is too # low-level for our needs here. 
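  # Sketch (comments only) of how a caller is expected to handle the
  # exceptions raised through ConnectionHelper.do_request() above -- 404 is
  # mapped to NotFoundError, 408 to ResourceNotReady, network failures to
  # ConnectionError. Identifiers below are placeholders.
  #
  #   try:
  #       partition = slap_instance.registerComputerPartition('COMP-123',
  #                                                            'slappart0')
  #   except NotFoundError:
  #       ...   # unknown computer or partition on the master
  #   except ResourceNotReady:
  #       ...   # master accepted the call but the resource is not ready yet
  #   except ConnectionError:
  #       ...   # wrong master-url, IPv6 unavailable, or server unreachable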
result._connection_helper = self._connection_helper result._hateoas_navigator = self._hateoas_navigator return result def registerOpenOrder(self): return OpenOrder( connection_helper=self._connection_helper, hateoas_navigator=self._hateoas_navigator ) def registerSupply(self): return Supply( connection_helper=self._connection_helper, hateoas_navigator=self._hateoas_navigator ) def getSoftwareReleaseListFromSoftwareProduct(self, software_product_reference=None, software_release_url=None): url = 'getSoftwareReleaseListFromSoftwareProduct' params = {} if software_product_reference: if software_release_url is not None: raise AttributeError('Both software_product_reference and ' 'software_release_url parameters are specified.') params['software_product_reference'] = software_product_reference else: if software_release_url is None: raise AttributeError('None of software_product_reference and ' 'software_release_url parameters are specified.') params['software_release_url'] = software_release_url xml = self._connection_helper.GET(url, params=params) if type(xml) is unicode: xml = str(xml) xml.encode('utf-8') result = xml_marshaller.loads(xml) assert(type(result) == list) return result def getOpenOrderDict(self): if not getattr(self, '_hateoas_navigator', None): raise Exception('SlapOS Master Hateoas API required for this operation is not availble.') return self._hateoas_navigator.getHostingSubscriptionDict() class HateoasNavigator(object): """ Navigator for HATEOAS-style APIs. Inspired by https://git.erp5.org/gitweb/jio.git/blob/HEAD:/src/jio.storage/erp5storage.js """ # XXX: needs to be designed for real. For now, just a non-maintainable prototype. # XXX: export to a standalone library, independant from slap. def __init__(self, slapgrid_uri, key_file=None, cert_file=None, master_ca_file=None, timeout=60): self.slapos_master_hateoas_uri = slapgrid_uri self.key_file = key_file self.cert_file = cert_file self.master_ca_file = master_ca_file self.timeout = timeout def GET(self, uri, headers=None): connection_helper = ConnectionHelper( uri, self.key_file, self.cert_file, self.master_ca_file, self.timeout) return connection_helper.GET(uri, headers=headers) def hateoasGetLinkFromLinks(self, links, title): if type(links) == dict: if links.get('title') == title: return links['href'] raise NotFoundError('Action %s not found.' % title) for action in links: if action.get('title') == title: return action['href'] else: raise NotFoundError('Action %s not found.' % title) def getRelativeUrlFromUrn(self, urn): urn_schema = 'urn:jio:get:' try: _, url = urn.split(urn_schema) except ValueError: return return str(url) def getSiteDocument(self, url, headers=None): result = self.GET(url, headers) return json.loads(result) def getRootDocument(self): # XXX what about cache? 
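  # The lookup helpers above are plain string/structure operations; their
  # intent, sketched as doctest-style comments (values are documentation-only
  # examples):
  #
  #   getRelativeUrlFromUrn('urn:jio:get:person_module/20150101-ABC')
  #       -> 'person_module/20150101-ABC'
  #   getRelativeUrlFromUrn('https://example.com/foo')
  #       -> None (no 'urn:jio:get:' prefix)
  #
  #   hateoasGetLinkFromLinks(
  #       [{'title': 'getHateoasInstanceList', 'href': '/instance_list'},
  #        {'title': 'other', 'href': '/other'}],
  #       'getHateoasInstanceList')
  #       -> '/instance_list'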
cached_root_document = getattr(self, 'root_document', None) if cached_root_document: return cached_root_document self.root_document = self.getSiteDocument( self.slapos_master_hateoas_uri, headers={'Cache-Control': 'no-cache'} ) return self.root_document def getDocumentAndHateoas(self, relative_url, view='view'): site_document = self.getRootDocument() return expand( site_document['_links']['traverse']['href'], dict(relative_url=relative_url, view=view) ) def getMeDocument(self): person_relative_url = self.getRelativeUrlFromUrn( self.getRootDocument()['_links']['me']['href']) person_url = self.getDocumentAndHateoas(person_relative_url) return json.loads(self.GET(person_url)) class SlapHateoasNavigator(HateoasNavigator): def _hateoas_getHostingSubscriptionDict(self): action_object_slap_list = self.getMeDocument()['_links']['action_object_slap'] for action in action_object_slap_list: if action.get('title') == 'getHateoasHostingSubscriptionList': getter_link = action['href'] break else: raise Exception('Hosting subscription not found.') result = self.GET(getter_link) return json.loads(result)['_links']['content'] # XXX rename me to blablaUrl(self) def _hateoas_getRelatedHostingSubscription(self): action_object_slap_list = self.getMeDocument()['_links']['action_object_slap'] getter_link = self.hateoasGetLinkFromLinks(action_object_slap_list, 'getHateoasRelatedHostingSubscription') result = self.GET(getter_link) return json.loads(result)['_links']['action_object_jump']['href'] def _hateoasGetInformation(self, url): result = self.GET(url) result = json.loads(result) object_link = self.hateoasGetLinkFromLinks( result['_links']['action_object_slap'], 'getHateoasInformation' ) result = self.GET(object_link) return json.loads(result) def getHateoasInstanceList(self, hosting_subscription_url): hosting_subscription = json.loads(self.GET(hosting_subscription_url)) instance_list_url = self.hateoasGetLinkFromLinks(hosting_subscription['_links']['action_object_slap'], 'getHateoasInstanceList') instance_list = json.loads(self.GET(instance_list_url)) return instance_list['_links']['content'] def getHostingSubscriptionDict(self): hosting_subscription_link_list = self._hateoas_getHostingSubscriptionDict() hosting_subscription_dict = {} for hosting_subscription_link in hosting_subscription_link_list: raw_information = self.getHostingSubscriptionRootSoftwareInstanceInformation(hosting_subscription_link['title']) software_instance = SoftwareInstance() # XXX redefine SoftwareInstance to be more consistent for key, value in raw_information.iteritems(): if key in ['_links']: continue setattr(software_instance, '_%s' % key, value) setattr(software_instance, '_software_release_url', raw_information['_links']['software_release']) hosting_subscription_dict[software_instance._title] = software_instance return hosting_subscription_dict def getHostingSubscriptionRootSoftwareInstanceInformation(self, reference): hosting_subscription_list = self._hateoas_getHostingSubscriptionDict() for hosting_subscription in hosting_subscription_list: if hosting_subscription.get('title') == reference: hosting_subscription_url = hosting_subscription['href'] break else: raise NotFoundError('This document does not exist.') hosting_subscription = json.loads(self.GET(hosting_subscription_url)) software_instance_url = self.hateoasGetLinkFromLinks( hosting_subscription['_links']['action_object_slap'], 'getHateoasRootInstance' ) response = self.GET(software_instance_url) response = json.loads(response) software_instance_url = 
response['_links']['content'][0]['href'] return self._hateoasGetInformation(software_instance_url) def getRelatedInstanceInformation(self, reference): related_hosting_subscription_url = self._hateoas_getRelatedHostingSubscription() instance_list = self.getHateoasInstanceList(related_hosting_subscription_url) instance_url = self.hateoasGetLinkFromLinks(instance_list, reference) instance = self._hateoasGetInformation(instance_url) return instance slapos.core-1.3.18/slapos/slap/interface/0000755000000000000000000000000013006632706020175 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/slap/interface/__init__.py0000644000000000000000000000250012752436135022310 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## slapos.core-1.3.18/slapos/slap/interface/slap.py0000644000000000000000000003512113003671621021504 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## from zope.interface import Interface """ Note: all strings accepted/returned by the slap library are encoded in UTF-8. """ class IException(Interface): """ Classes which implement IException are used to report errors. """ class IConnectionError(IException): """ Classes which implement IServerError are used to report a connection problem to the slap server. """ class IServerError(IException): """ Classes which implement IServerError are used to report unexpected error from the slap server. """ class INotFoundError(IException): """ Classes which implement INotFoundError are used to report missing informations on the slap server. """ class IResourceNotReady(IException): """ Classes which implement IResourceNotReady are used to report resource not ready on the slap server. """ class IRequester(Interface): """ Classes which implement IRequester can request software instance to the slapgrid server. """ def request(software_release, software_type, partition_reference, shared=False, partition_parameter_kw=None, filter_kw=None): """ Request software release instantiation to slapgrid server. Returns a new computer partition document, where this sofware release will be installed. software_release -- uri of the software release which has to be instanciated software_type -- type of component provided by software_release partition_reference -- local reference of the instance used by the recipe to identify the instances. shared -- boolean to use a shared service partition_parameter_kw -- dictionary of parameter used to fill the parameter dict of newly created partition. filter_kw -- dictionary of filtering parameter to select the requested computer partition. computer_guid - computer of the requested partition partition_type - virtio, slave, full, limited port - port provided by the requested partition Example: request('http://example.com/toto/titi', 'typeA', 'mysql_1') """ def getInformation(partition_reference): """ Get informations about an existing instance. If it is called from a Computer Partition, get informations about Software Instance of the instance tree. partition_reference -- local reference of the instance used by the recipe to identify the instances. """ class IBuildoutController(Interface): """ Classes which implement IBuildoutController can report the buildout run status to the slapgrid server. """ def available(): """ Notify (to the slapgrid server) that the software instance is available. """ def building(): """ Notify (to the slapgrid server) that the buildout is not available and under creation. """ def error(error_log): """ Notify (to the slapgrid server) that the buildout is not available and reports an error. error_log -- a text describing the error It can be a traceback for example. """ class ISoftwareRelease(IBuildoutController): """ Software release interface specification """ def getURI(): """ Returns a string representing the uri of the software release. """ def getComputerId(): """ Returns a string representing the identifier of the computer where the SR is installed. """ def getState(): """ Returns a string representing the expected state of the software installation. The result can be: available, destroyed """ def destroyed(): """ Notify (to the slapgrid server) that the software installation has been correctly destroyed. """ class ISoftwareProductCollection(Interface): """ Fake object representing the abstract of all Software Products. 
Can be used to call "Product().mysoftwareproduct", or, simpler, "product.mysoftwareproduct", to get the best Software Release URL of the Software Product "mysoftwareproduct". Example: product.kvm will have the value of the latest Software Release URL of KVM. """ class ISoftwareInstance(Interface): """ Classes which implement ISoftwareRelease are used by slap to represent informations about a Software Instance. """ class IComputerPartition(IBuildoutController, IRequester): """ Computer Partition interface specification Classes which implement IComputerPartition can propagate the computer partition state to the SLAPGRID server and request new computer partition creation. """ def stopped(): """ Notify (to the slapgrid server) that the software instance is available and stopped. """ def started(): """ Notify (to the slapgrid server) that the software instance is available and started. """ def destroyed(): """ Notify (to the slapgrid server) that the software instance has been correctly destroyed. """ def getId(): """ Returns a string representing the identifier of the computer partition inside the slapgrid server. """ def getInstanceGuid(): """ Returns a string representing the unique identifier of the instance inside the slapgrid server. """ def getState(): """ Returns a string representing the expected state of the computer partition. The result can be: started, stopped, destroyed """ def getSoftwareRelease(): """ Returns the software release associate to the computer partition. Raise an INotFoundError if no software release is associated. """ def getInstanceParameterDict(): """ Returns a dictionary of instance parameters. The contained values can be used to fill the software instanciation profile. """ def getConnectionParameterDict(): """ Returns a dictionary of connection parameters. The contained values are connection parameters of a compute partition. """ def getType(): """ Returns the Software Type of the instance. """ def setUsage(usage_log): """ Associate a usage log to the computer partition. This method does not report the usage to the slapgrid server. See IComputer.report. usage_log -- a text describing the computer partition usage. It can be an XML for example. """ def bang(log): """ Report a problem detected on a computer partition. This will trigger the reinstanciation of all partitions in the instance tree. log -- a text explaining why the method was called """ def getCertificate(): """ Returns a dictionnary containing the authentification certificates associated to the computer partition. The dictionnary keys are: key -- value is a SSL key certificate -- value is a SSL certificate Raise an INotFoundError if no software release is associated. """ def setConnectionDict(connection_dict, slave_reference=None): """ Store the connection parameters associated to a partition. connection_dict -- dictionary of parameter used to fill the connection dict of the partition. slave_reference -- current reference of the slave instance to modify """ def getInstanceParameter(key): """ Returns the instance parameter associated to the key. Raise an INotFoundError if no key is defined. key -- a string name of the parameter """ def getConnectionParameter(key): """ Return the connection parameter associate to the key. Raise an INotFoundError if no key is defined. 
key -- a string name of the parameter """ def rename(partition_reference, slave_reference=None): """ Change the partition reference of a partition partition_reference -- new local reference of the instance used by the recipe to identify the instances. slave_reference -- current reference of the slave instance to modify """ def getStatus(): """ Returns a dictionnary containing the latest status of the computer partition. The dictionnary keys are: user -- user who reported the latest status created_at -- date of the status text -- message log of the status """ def getFullHostingIpAddressList(): """ Returns a dictionnary containing the latest status of the computer partition. """ def setComputerPartitionRelatedInstanceList(instance_reference_list): """ Set relation between this Instance and all his children. instance_reference_list -- list of instances requested by this Computer Partition. """ class IComputer(Interface): """ Computer interface specification Classes which implement IComputer can fetch informations from the slapgrid server to know which Software Releases and Software Instances have to be installed. """ def getSoftwareReleaseList(): """ Returns the list of software release which has to be supplied by the computer. Raise an INotFoundError if computer_guid doesn't exist. """ def getComputerPartitionList(): """ Returns the list of configured computer partitions associated to this computer. Raise an INotFoundError if computer_guid doesn't exist. """ def reportUsage(computer_partition_list): """ Report the computer usage to the slapgrid server. IComputerPartition.setUsage has to be called on each computer partition to define each usage. computer_partition_list -- a list of computer partition for which the usage needs to be reported. """ def bang(log): """ Report a problem detected on a computer. This will trigger IComputerPartition.bang on all instances hosted by the Computer. log -- a text explaining why the method was called """ def updateConfiguration(configuration_xml): """ Report the current computer configuration. configuration_xml -- computer XML description generated by slapformat """ def getStatus(): """ Returns a dictionnary containing the latest status of the computer. The dictionnary keys are: user -- user who reported the latest status created_at -- date of the status text -- message log of the status """ def generateCertificate(): """ Returns a dictionnary containing the new certificate files for the computer. The dictionnary keys are: key -- key file certificate -- certificate file Raise ValueError is another certificate is already valid. """ def revokeCertificate(): """ Revoke current computer certificate. Raise ValueError is there is not valid certificate. """ class IOpenOrder(IRequester): """ Open Order interface specification Classes which implement Open Order describe which kind of software instances is requested by a given client. """ def requestComputer(computer_reference): """ Request a computer to slapgrid server. Returns a new computer document. computer_reference -- local reference of the computer """ class ISupply(Interface): """ Supply interface specification Classes which implement Supply describe which kind of software releases a given client is ready to supply. """ def supply(software_release, computer_guid=None): """ Tell that given client is ready to supply given sofware release software_release -- uri of the software release which has to be instanciated computer_guid -- the identifier of the computer inside the slapgrid server. 
""" class slap(Interface): """ Initialise slap connection to the slapgrid server Slapgrid server URL is defined during the slap library installation, as recipes should not use another server. """ def initializeConnection(slapgrid_uri, authentification_key=None): """ Initialize the connection parameters to the slapgrid servers. slapgrid_uri -- uri the slapgrid server connector authentification_key -- string the authentificate the agent. Example: https://slapos.server/slap_interface """ def registerComputer(computer_guid): """ Instanciate a computer in the slap library. computer_guid -- the identifier of the computer inside the slapgrid server. """ def registerComputerPartition(computer_guid, partition_id): """ Instanciate a computer partition in the slap library. computer_guid -- the identifier of the computer inside the slapgrid server. partition_id -- the identifier of the computer partition inside the slapgrid server. Raise an INotFoundError if computer_guid doesn't exist. """ def registerSoftwareRelease(software_release): """ Instanciate a software release in the slap library. software_release -- uri of the software release definition """ def registerOpenOrder(): """ Instanciate an open order in the slap library. """ def registerSupply(): """ Instanciate a supply in the slap library. """ def getSoftwareReleaseListFromSoftwareProduct(software_product_reference, software_release_url): """ Get the list of Software Releases from a product or from another related Sofware Release, from a Software Product point of view. """ def getOpenOrderDict(): """ Get the list of existing open orders (services) for the current user. """ slapos.core-1.3.18/slapos/slap/util.py0000644000000000000000000000074212752436135017574 0ustar rootroot00000000000000from lxml import etree def xml2dict(xml): result_dict = {} if xml is not None and xml != '': tree = etree.fromstring(xml.encode('utf-8')) for element in tree.iter(tag=etree.Element): if element.tag == 'parameter': key = element.get('id') value = result_dict.get(key, None) if value is not None: value = value + ' ' + element.text else: value = element.text result_dict[key] = value return result_dict slapos.core-1.3.18/slapos/format.py0000644000000000000000000014454113003671621017145 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import ConfigParser import errno import fcntl import grp import json import logging import netaddr import netifaces import os import glob import pwd import random import shutil import socket import struct import subprocess import sys import threading import time import traceback import zipfile import platform from urllib2 import urlopen import lxml.etree import xml_marshaller.xml_marshaller import slapos.util from slapos.util import mkdir_p import slapos.slap as slap from slapos import version def prettify_xml(xml): root = lxml.etree.fromstring(xml) return lxml.etree.tostring(root, pretty_print=True) class OS(object): """Wrap parts of the 'os' module to provide logging of performed actions.""" _os = os def __init__(self, conf): self._dry_run = conf.dry_run self._logger = conf.logger add = self._addWrapper add('chown') add('chmod') add('makedirs') add('mkdir') def _addWrapper(self, name): def wrapper(*args, **kw): arg_list = [repr(x) for x in args] + [ '%s=%r' % (x, y) for x, y in kw.iteritems() ] self._logger.debug('%s(%s)' % (name, ', '.join(arg_list))) if not self._dry_run: getattr(self._os, name)(*args, **kw) setattr(self, name, wrapper) def __getattr__(self, name): return getattr(self._os, name) class UsageError(Exception): pass class NoAddressOnInterface(Exception): """ Exception raised if there is no address on the interface to construct IPv6 address with. Attributes: brige: String, the name of the interface. """ def __init__(self, interface): super(NoAddressOnInterface, self).__init__( 'No IPv6 found on interface %s to construct IPv6 with.' % interface ) class AddressGenerationError(Exception): """ Exception raised if the generation of an IPv6 based on the prefix obtained from the interface failed. Attributes: addr: String, the invalid address the exception is raised for. """ def __init__(self, addr): super(AddressGenerationError, self).__init__( 'Generated IPv6 %s seems not to be a valid IP.' 
% addr ) def getPublicIPv4Address(): test_list = [ { "url": 'https://api.ipify.org/?format=json' , "json_key": "ip"}, { "url": 'http://httpbin.org/ip', "json_key": "origin"}, { "url": 'http://jsonip.com', "json_key": "ip"}] previous = None ipv4 = None for test in test_list: if ipv4 is not None: previous = ipv4 try: ipv4 = json.load(urlopen(test["url"]))[test["json_key"]] except: ipv4 = None if ipv4 is not None and ipv4 == previous: return ipv4 def callAndRead(argument_list, raise_on_error=True): popen = subprocess.Popen(argument_list, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) result = popen.communicate()[0] if raise_on_error and popen.returncode != 0: raise ValueError('Issue while invoking %r, result was:\n%s' % ( argument_list, result)) return popen.returncode, result def isGlobalScopeAddress(a): """Returns True if a is global scope IP v4/6 address""" ip = netaddr.IPAddress(a) return not ip.is_link_local() and not ip.is_loopback() and \ not ip.is_reserved() and ip.is_unicast() def netmaskToPrefixIPv4(netmask): """Convert string represented netmask to its integer prefix""" return netaddr.strategy.ipv4.netmask_to_prefix[ netaddr.strategy.ipv4.str_to_int(netmask)] def netmaskToPrefixIPv6(netmask): """Convert string represented netmask to its integer prefix""" return netaddr.strategy.ipv6.netmask_to_prefix[ netaddr.strategy.ipv6.str_to_int(netmask)] def getIfaceAddressIPv4(iface): """return dict containing ipv4 address netmask, network and broadcast address of interface""" if not iface in netifaces.interfaces(): raise ValueError('Could not find interface called %s to use as gateway ' \ 'for tap network' % iface) try: addresses_list = netifaces.ifaddresses(iface)[socket.AF_INET] if len (addresses_list) > 0: addresses = addresses_list[0].copy() addresses['network'] = str(netaddr.IPNetwork('%s/%s' % (addresses['addr'], addresses['netmask'])).cidr.network) return addresses else: return {} except KeyError: raise KeyError('Could not find IPv4 adress on interface %s.' % iface) def getIPv4SubnetAddressRange(ip_address, mask, size): """Check if a given ipaddress can be used to create 'size' host ip address, then return list of ip address in the subnet""" ip = netaddr.IPNetwork('%s/%s' % (ip_address, mask)) # Delete network and default ip_address from the list ip_list = [x for x in sorted(list(ip)) if str(x) != ip_address and x.value != ip.cidr.network.value] if len(ip_list) < size: raise ValueError('Could not create %s tap interfaces from address %s.' % ( size, ip_address)) return ip_list def _getDict(obj): """ Serialize an object into dictionaries. List and dict will remains the same, basic type too. But encapsulated object will be returned as dict. Set, collections and other aren't handle for now. Args: obj: an object of any type. Returns: A dictionary if the given object wasn't a list, a list otherwise. """ if isinstance(obj, list): return [_getDict(item) for item in obj] if isinstance(obj, dict): dikt = obj else: try: dikt = obj.__dict__ except AttributeError: return obj return { key: _getDict(value) \ for key, value in dikt.iteritems() \ # do not attempt to serialize logger: it is both useless and recursive. if not isinstance(value, logging.Logger) } class Computer(object): "Object representing the computer" instance_root = None software_root = None instance_storage_home = None def __init__(self, reference, interface=None, addr=None, netmask=None, ipv6_interface=None, software_user='slapsoft', tap_gateway_interface=None): """ Attributes: reference: String, the reference of the computer. 
interface: String, the name of the computer's used interface. """ self.reference = str(reference) self.interface = interface self.partition_list = [] self.address = addr self.netmask = netmask self.ipv6_interface = ipv6_interface self.software_user = software_user self.tap_gateway_interface = tap_gateway_interface # The follow properties are updated on update() method self.public_ipv4_address = None self.os_type = None self.python_version = None self.slapos_version = None def __getinitargs__(self): return (self.reference, self.interface) def getAddress(self, allow_tap=False): """ Return a list of the interface address not attributed to any partition (which are therefore free for the computer itself). Returns: False if the interface isn't available, else the list of the free addresses. """ if self.interface is None: return {'addr': self.address, 'netmask': self.netmask} computer_partition_address_list = [] for partition in self.partition_list: for address in partition.address_list: if netaddr.valid_ipv6(address['addr']): computer_partition_address_list.append(address['addr']) # Going through addresses of the computer's interface for address_dict in self.interface.getGlobalScopeAddressList(): # Comparing with computer's partition addresses if address_dict['addr'] not in computer_partition_address_list: return address_dict if allow_tap: # all addresses on interface are for partition, so let's add new one computer_tap = Tap('compdummy') computer_tap.createWithOwner(User('root'), attach_to_tap=True) self.interface.addTap(computer_tap) return self.interface.addAddr() # Can't find address raise NoAddressOnInterface('No valid IPv6 found on %s.' % self.interface.name) def update(self): """ Collect environmental hardware/network information. """ self.public_ipv4_address = getPublicIPv4Address() self.slapos_version = version.version self.python_version = platform.python_version() self.os_type = platform.platform() def send(self, conf): """ Send a marshalled dictionary of the computer object serialized via_getDict. """ slap_instance = slap.slap() connection_dict = {} if conf.key_file and conf.cert_file: connection_dict['key_file'] = conf.key_file connection_dict['cert_file'] = conf.cert_file slap_instance.initializeConnection(conf.master_url, **connection_dict) slap_computer = slap_instance.registerComputer(self.reference) if conf.dry_run: return try: slap_computer.updateConfiguration(xml_marshaller.xml_marshaller.dumps(_getDict(self))) except slap.NotFoundError as error: raise slap.NotFoundError("%s\nERROR: This SlapOS node is not recognised by " "SlapOS Master and/or computer_id and certificates don't match. " "Please make sure computer_id of slapos.cfg looks " "like 'COMP-123' and is correct.\nError is : 404 Not Found." % error) def dump(self, path_to_xml, path_to_json, logger): """ Dump the computer object to an xml file via xml_marshaller. Args: path_to_xml: String, path to the file to load. path_to_json: String, path to the JSON version to save. 
""" computer_dict = _getDict(self) if path_to_json: with open(path_to_json, 'wb') as fout: fout.write(json.dumps(computer_dict, sort_keys=True, indent=2)) new_xml = xml_marshaller.xml_marshaller.dumps(computer_dict) new_pretty_xml = prettify_xml(new_xml) path_to_archive = path_to_xml + '.zip' if os.path.exists(path_to_archive) and os.path.exists(path_to_xml): # the archive file exists, we only backup if something has changed with open(path_to_xml, 'rb') as fin: if fin.read() == new_pretty_xml: # computer configuration did not change, nothing to write return if os.path.exists(path_to_xml): try: self.backup_xml(path_to_archive, path_to_xml) except: # might be a corrupted zip file. let's move it out of the way and retry. shutil.move(path_to_archive, path_to_archive + time.strftime('_broken_%Y%m%d-%H:%M')) try: self.backup_xml(path_to_archive, path_to_xml) except: # give up trying logger.exception("Can't backup %s:", path_to_xml) with open(path_to_xml, 'wb') as fout: fout.write(new_pretty_xml) def backup_xml(self, path_to_archive, path_to_xml): """ Stores a copy of the current xml file to an historical archive. """ xml_content = open(path_to_xml).read() saved_filename = os.path.basename(path_to_xml) + time.strftime('.%Y%m%d-%H:%M') with zipfile.ZipFile(path_to_archive, 'a') as archive: archive.writestr(saved_filename, xml_content, zipfile.ZIP_DEFLATED) @classmethod def load(cls, path_to_xml, reference, ipv6_interface, tap_gateway_interface): """ Create a computer object from a valid xml file. Arg: path_to_xml: String, a path to a valid file containing a valid configuration. Return: A Computer object. """ dumped_dict = xml_marshaller.xml_marshaller.loads(open(path_to_xml).read()) # Reconstructing the computer object from the xml computer = Computer( reference=reference, addr=dumped_dict['address'], netmask=dumped_dict['netmask'], ipv6_interface=ipv6_interface, software_user=dumped_dict.get('software_user', 'slapsoft'), tap_gateway_interface=tap_gateway_interface, ) for partition_dict in dumped_dict['partition_list']: if partition_dict['user']: user = User(partition_dict['user']['name']) else: user = User('root') if partition_dict['tap']: tap = Tap(partition_dict['tap']['name']) if tap_gateway_interface: tap.ipv4_addr = partition_dict['tap'].get('ipv4_addr', '') tap.ipv4_netmask = partition_dict['tap'].get('ipv4_netmask', '') tap.ipv4_gateway = partition_dict['tap'].get('ipv4_gateway', '') tap.ipv4_network = partition_dict['tap'].get('ipv4_network', '') else: tap = Tap(partition_dict['reference']) address_list = partition_dict['address_list'] external_storage_list = partition_dict.get('external_storage_list', []) partition = Partition( reference=partition_dict['reference'], path=partition_dict['path'], user=user, address_list=address_list, tap=tap, external_storage_list=external_storage_list, ) computer.partition_list.append(partition) return computer def _speedHackAddAllOldIpsToInterface(self): """ Speed hack: Blindly add all IPs from existing configuration, just to speed up actual computer configuration later on. """ # XXX-TODO: only add an address if it doesn't already exist. 
if self.ipv6_interface: interface_name = self.ipv6_interface elif self.interface: interface_name = self.interface.name else: return for partition in self.partition_list: try: for address in partition.address_list: try: netmask = netmaskToPrefixIPv6(address['netmask']) except: continue callAndRead(['ip', 'addr', 'add', '%s/%s' % (address['addr'], netmask), 'dev', interface_name]) except ValueError: pass def _addUniqueLocalAddressIpv6(self, interface_name): """ Create a unique local address in the interface interface_name, so that slapformat can build upon this. See https://en.wikipedia.org/wiki/Unique_local_address. """ command = 'ip address add dev %s fd00::1/64' % interface_name callAndRead(command.split()) def construct(self, alter_user=True, alter_network=True, create_tap=True, use_unique_local_address_block=False): """ Construct the computer object as it is. """ if alter_network and self.address is not None: self.interface.addAddr(self.address, self.netmask) if use_unique_local_address_block and alter_network: if self.ipv6_interface: network_interface_name = self.ipv6_interface else: network_interface_name = self.interface.name self._addUniqueLocalAddressIpv6(network_interface_name) for path in self.instance_root, self.software_root: if not os.path.exists(path): os.makedirs(path, 0o755) else: os.chmod(path, 0o755) # own self.software_root by software user slapsoft = User(self.software_user) slapsoft.path = self.software_root if alter_user: slapsoft.create() slapsoft_pw = pwd.getpwnam(slapsoft.name) os.chown(slapsoft.path, slapsoft_pw.pw_uid, slapsoft_pw.pw_gid) os.chmod(self.software_root, 0o755) # get list of instance external storage if exist instance_external_list = [] if self.instance_storage_home: # get all /XXX/dataN where N is a digit data_list = glob.glob(os.path.join(self.instance_storage_home, 'data*')) for i in range(0, len(data_list)): data_path = data_list.pop() the_digit = os.path.basename(data_path).split('data')[-1] if the_digit.isdigit(): instance_external_list.append(data_path) tap_address_list = [] if alter_network and self.tap_gateway_interface and create_tap: gateway_addr_dict = getIfaceAddressIPv4(self.tap_gateway_interface) tap_address_list = getIPv4SubnetAddressRange(gateway_addr_dict['addr'], gateway_addr_dict['netmask'], len(self.partition_list)) assert(len(self.partition_list) <= len(tap_address_list)) if alter_network: self._speedHackAddAllOldIpsToInterface() try: for partition_index, partition in enumerate(self.partition_list): # Reconstructing User's partition.path = os.path.join(self.instance_root, partition.reference) partition.user.setPath(partition.path) partition.user.additional_group_list = [slapsoft.name] partition.external_storage_list = ['%s/%s' % (path, partition.reference) for path in instance_external_list] if alter_user: partition.user.create() # Reconstructing Tap if partition.user and partition.user.isAvailable(): owner = partition.user else: owner = User('root') if alter_network and create_tap: # In case it has to be attached to the TAP network device, only one # is necessary for the interface to assert carrier if self.interface.attach_to_tap and partition_index == 0: partition.tap.createWithOwner(owner, attach_to_tap=True) else: partition.tap.createWithOwner(owner) # If tap_gateway_interface is specified, we don't add tap to bridge # but we create route for this tap if not self.tap_gateway_interface: self.interface.addTap(partition.tap) else: next_ipv4_addr = '%s' % tap_address_list.pop(0) if not partition.tap.ipv4_addr: # define new ipv4 
address for this tap partition.tap.ipv4_addr = next_ipv4_addr partition.tap.ipv4_netmask = gateway_addr_dict['netmask'] partition.tap.ipv4_gateway = gateway_addr_dict['addr'] partition.tap.ipv4_network = gateway_addr_dict['network'] partition.tap.createRoutes() # Reconstructing partition's directory partition.createPath(alter_user) partition.createExternalPath(alter_user) # Reconstructing partition's address # There should be two addresses on each Computer Partition: # * global IPv6 # * local IPv4, took from slapformat:ipv4_local_network if not partition.address_list: # regenerate partition.address_list.append(self.interface.addIPv4LocalAddress()) partition.address_list.append(self.interface.addAddr()) elif alter_network: # regenerate list of addresses old_partition_address_list = partition.address_list partition.address_list = [] if len(old_partition_address_list) != 2: raise ValueError( 'There should be exactly 2 stored addresses. Got: %r' % (old_partition_address_list,)) if not any(netaddr.valid_ipv6(q['addr']) for q in old_partition_address_list): raise ValueError('Not valid ipv6 addresses loaded') if not any(netaddr.valid_ipv4(q['addr']) for q in old_partition_address_list): raise ValueError('Not valid ipv6 addresses loaded') for address in old_partition_address_list: if netaddr.valid_ipv6(address['addr']): partition.address_list.append(self.interface.addAddr( address['addr'], address['netmask'])) elif netaddr.valid_ipv4(address['addr']): partition.address_list.append(self.interface.addIPv4LocalAddress( address['addr'])) else: raise ValueError('Address %r is incorrect' % address['addr']) finally: if alter_network and create_tap and self.interface.attach_to_tap: try: self.partition_list[0].tap.detach() except IndexError: pass class Partition(object): "Represent a computer partition" def __init__(self, reference, path, user, address_list, tap, external_storage_list=[]): """ Attributes: reference: String, the name of the partition. path: String, the path to the partition folder. user: User, the user linked to this partition. address_list: List of associated IP addresses. tap: Tap, the tap interface linked to this partition. external_storage_list: Base path list of folder to format for data storage """ self.reference = str(reference) self.path = str(path) self.user = user self.address_list = address_list or [] self.tap = tap self.external_storage_list = [] def __getinitargs__(self): return (self.reference, self.path, self.user, self.address_list, self.tap) def createPath(self, alter_user=True): """ Create the directory of the partition, assign to the partition user and give it the 750 permission. In case if path exists just modifies it. """ self.path = os.path.abspath(self.path) owner = self.user if self.user else User('root') if not os.path.exists(self.path): os.mkdir(self.path, 0o750) if alter_user: owner_pw = pwd.getpwnam(owner.name) os.chown(self.path, owner_pw.pw_uid, owner_pw.pw_gid) os.chmod(self.path, 0o750) def createExternalPath(self, alter_user=True): """ Create and external directory of the partition, assign to the partition user and give it the 750 permission. In case if path exists just modifies it. 
""" for path in self.external_storage_list: storage_path = os.path.abspath(path) owner = self.user if self.user else User('root') if not os.path.exists(storage_path): os.mkdir(storage_path, 0o750) if alter_user: owner_pw = pwd.getpwnam(owner.name) os.chown(storage_path, owner_pw.pw_uid, owner_pw.pw_gid) os.chmod(storage_path, 0o750) class User(object): """User: represent and manipulate a user on the system.""" path = None def __init__(self, user_name, additional_group_list=None): """ Attributes: user_name: string, the name of the user, who will have is home in """ self.name = str(user_name) self.shell = '/bin/sh' self.additional_group_list = additional_group_list def __getinitargs__(self): return (self.name,) def setPath(self, path): self.path = path def create(self): """ Create a user on the system who will be named after the self.name with its own group and directory. Returns: True: if the user creation went right """ # XXX: This method shall be no-op in case if all is correctly setup # This method shall check if all is correctly done # This method shall not reset groups, just add them grpname = 'grp_' + self.name if sys.platform == 'cygwin' else self.name try: grp.getgrnam(grpname) except KeyError: callAndRead(['groupadd', grpname]) user_parameter_list = ['-d', self.path, '-g', self.name, '-s', self.shell] if self.additional_group_list is not None: user_parameter_list.extend(['-G', ','.join(self.additional_group_list)]) user_parameter_list.append(self.name) try: pwd.getpwnam(self.name) except KeyError: user_parameter_list.append('-r') callAndRead(['useradd'] + user_parameter_list) else: callAndRead(['usermod'] + user_parameter_list) # lock the password of user callAndRead(['passwd', '-l', self.name]) return True def isAvailable(self): """ Determine the availability of a user on the system Return: True: if available False: otherwise """ try: pwd.getpwnam(self.name) return True except KeyError: return False class Tap(object): "Tap represent a tap interface on the system" IFF_TAP = 0x0002 TUNSETIFF = 0x400454ca KEEP_TAP_ATTACHED_EVENT = threading.Event() def __init__(self, tap_name): """ Attributes: tap_name: String, the name of the tap interface. ipv4_address: String, local ipv4 to route to this tap ipv4_network: String, netmask to use when configure route for this tap gateway_ipv4: String, ipv4 of gateway to be used to reach local network """ self.name = str(tap_name) self.ipv4_addr = "" self.ipv4_netmask = "" self.ipv4_gateway = "" self.ipv4_network = "" def __getinitargs__(self): return (self.name,) def attach(self): """ Attach to the TAP interface, meaning that it just opens the TAP interface and waits for the caller to notify that it can be safely detached. Linux distinguishes administrative and operational state of an network interface. The former can be set manually by running ``ip link set dev up|down'', whereas the latter states that the interface can actually transmit data (for a wired network interface, it basically means that there is carrier, e.g. the network cable is plugged into a switch for example). In case of bridge: In order to be able to check the uniqueness of IPv6 address assigned to the bridge, the network interface must be up from an administrative *and* operational point of view. However, from Linux 2.6.39, the bridge reflects the state of the underlying device (e.g. the bridge asserts carrier if at least one of its ports has carrier) whereas it always asserted carrier before. 
    This should work fine for "real" network interfaces, but will not work
    properly if the bridge only binds TAP interfaces, which, from 2.6.36,
    report carrier if and only if a userspace program is attached.
    """
    tap_fd = os.open("/dev/net/tun", os.O_RDWR)
    try:
      # Attach to the TAP interface which has previously been created
      fcntl.ioctl(tap_fd, self.TUNSETIFF,
                  struct.pack("16sI", self.name, self.IFF_TAP))
    except IOError as error:
      # If EBUSY, it means another program is already attached, thus just
      # ignore it...
      if error.errno != errno.EBUSY:
        os.close(tap_fd)
        raise
    else:
      # Block until the caller sends an event stating that the program can now
      # be detached safely, thus bringing down the TAP device (from 2.6.36)
      # and the bridge at the same time (from 2.6.39)
      self.KEEP_TAP_ATTACHED_EVENT.wait()
    finally:
      os.close(tap_fd)

  def detach(self):
    """
    Detach from the TAP network interface by notifying the thread which
    attached to the TAP and closing the TAP file descriptor
    """
    self.KEEP_TAP_ATTACHED_EVENT.set()

  def createWithOwner(self, owner, attach_to_tap=False):
    """
    Create a tap interface on the system.
    """
    # some systems do not have the -p switch for tunctl
    #callAndRead(['tunctl', '-p', '-t', self.name, '-u', owner.name])
    check_file = '/sys/devices/virtual/net/%s/owner' % self.name
    owner_id = None
    if os.path.exists(check_file):
      owner_id = open(check_file).read().strip()
      try:
        owner_id = int(owner_id)
      except ValueError:
        pass
    if owner_id != pwd.getpwnam(owner.name).pw_uid:
      callAndRead(['tunctl', '-t', self.name, '-u', owner.name])
    callAndRead(['ip', 'link', 'set', self.name, 'up'])
    if attach_to_tap:
      threading.Thread(target=self.attach).start()

  def createRoutes(self):
    """
    Configure ipv4 route to reach this interface from local network
    """
    if self.ipv4_addr:
      # Check if this route exists
      code, result = callAndRead(['ip', 'route', 'show', self.ipv4_addr])
      if code == 0 and self.ipv4_addr in result and self.name in result:
        return
      callAndRead(['route', 'add', '-host', self.ipv4_addr, 'dev', self.name])
    else:
      raise ValueError("ipv4_addr is empty: no IPv4 address assigned to %s"
                       % self.name)


class Interface(object):
  """Represent a network interface on the system"""

  def __init__(self, logger, name, ipv4_local_network, ipv6_interface=None):
    """
    Attributes:
        name: String, the name of the interface
    """
    self.logger = logger
    self.name = str(name)
    self.ipv4_local_network = ipv4_local_network
    self.ipv6_interface = ipv6_interface

    # Attach to the TAP network interface, only if the interface does not
    # report carrier
    _, result = callAndRead(['ip', 'addr', 'list', self.name])
    self.attach_to_tap = 'DOWN' in result.split('\n', 1)[0]

  # XXX no __getinitargs__, as instances of this class are never deserialized.
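  # For reference, the shape of the netifaces data consumed by the address
  # listing methods below (interface name and addresses are documentation-only
  # examples; the '%eth0' scope suffix is what the .split('%')[0] calls strip):
  #
  #   netifaces.ifaddresses('eth0') ~= {
  #       socket.AF_INET:  [{'addr': '10.0.0.5', 'netmask': '255.255.255.0'}],
  #       socket.AF_INET6: [{'addr': 'fe80::1%eth0',
  #                          'netmask': 'ffff:ffff:ffff:ffff::'},
  #                         {'addr': '2001:db8::5',
  #                          'netmask': 'ffff:ffff:ffff:ffff::'}],
  #   }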
def getIPv4LocalAddressList(self): """ Returns currently configured local IPv4 addresses which are in ipv4_local_network """ if not socket.AF_INET in netifaces.ifaddresses(self.name): return [] return [ { 'addr': q['addr'], 'netmask': q['netmask'] } for q in netifaces.ifaddresses(self.name)[socket.AF_INET] if netaddr.IPAddress(q['addr'], 4) in netaddr.glob_to_iprange( netaddr.cidr_to_glob(self.ipv4_local_network)) ] def getGlobalScopeAddressList(self): """Returns currently configured global scope IPv6 addresses""" if self.ipv6_interface: interface_name = self.ipv6_interface else: interface_name = self.name try: address_list = [ q for q in netifaces.ifaddresses(interface_name)[socket.AF_INET6] if isGlobalScopeAddress(q['addr'].split('%')[0]) ] except KeyError: raise ValueError("%s must have at least one IPv6 address assigned" % interface_name) if sys.platform == 'cygwin': for q in address_list: q.setdefault('netmask', 'FFFF:FFFF:FFFF:FFFF::') # XXX: Missing implementation of Unique Local IPv6 Unicast Addresses as # defined in http://www.rfc-editor.org/rfc/rfc4193.txt # XXX: XXX: XXX: IT IS DISALLOWED TO IMPLEMENT link-local addresses as # Linux and BSD are possibly wrongly implementing it -- it is "too local" # it is impossible to listen or access it on same node # XXX: IT IS DISALLOWED to implement ad hoc solution like inventing node # local addresses or anything which does not exists in RFC! return address_list def getInterfaceList(self): """Returns list of interfaces already present on bridge""" interface_list = [] _, result = callAndRead(['brctl', 'show']) in_interface = False for line in result.split('\n'): if len(line.split()) > 1: if self.name in line: interface_list.append(line.split()[-1]) in_interface = True continue if in_interface: break elif in_interface: if line.strip(): interface_list.append(line.strip()) return interface_list def addTap(self, tap): """ Add the tap interface tap to the bridge. Args: tap: Tap, the tap interface. """ if tap.name not in self.getInterfaceList(): callAndRead(['brctl', 'addif', self.name, tap.name]) def _addSystemAddress(self, address, netmask, ipv6=True): """Adds system address to interface Returns True if address was added successfully. Returns False if there was issue. 
""" if ipv6: address_string = '%s/%s' % (address, netmaskToPrefixIPv6(netmask)) af = socket.AF_INET6 if self.ipv6_interface: interface_name = self.ipv6_interface else: interface_name = self.name else: af = socket.AF_INET address_string = '%s/%s' % (address, netmaskToPrefixIPv4(netmask)) interface_name = self.name # check if address is already took by any other interface for interface in netifaces.interfaces(): if interface != interface_name: address_dict = netifaces.ifaddresses(interface) if af in address_dict: if address in [q['addr'].split('%')[0] for q in address_dict[af]]: return False if not af in netifaces.ifaddresses(interface_name) \ or not address in [q['addr'].split('%')[0] for q in netifaces.ifaddresses(interface_name)[af] ]: # add an address callAndRead(['ip', 'addr', 'add', address_string, 'dev', interface_name]) # Fake success for local ipv4 if not ipv6: return True # wait few moments time.sleep(2) # Fake success for local ipv4 if not ipv6: return True # check existence on interface for ipv6 _, result = callAndRead(['ip', '-6', 'addr', 'list', interface_name]) for l in result.split('\n'): if address in l: if 'tentative' in l: # duplicate, remove callAndRead(['ip', 'addr', 'del', address_string, 'dev', interface_name]) return False # found and clean return True # even when added not found, this is bad... return False def _generateRandomIPv4Address(self, netmask): # no addresses found, generate new one # Try 10 times to add address, raise in case if not possible try_num = 10 while try_num > 0: addr = random.choice([q for q in netaddr.glob_to_iprange( netaddr.cidr_to_glob(self.ipv4_local_network))]).format() if (dict(addr=addr, netmask=netmask) not in self.getIPv4LocalAddressList()): # Checking the validity of the IPv6 address if self._addSystemAddress(addr, netmask, False): return dict(addr=addr, netmask=netmask) try_num -= 1 raise AddressGenerationError(addr) def addIPv4LocalAddress(self, addr=None): """Adds local IPv4 address in ipv4_local_network""" netmask = str(netaddr.IPNetwork(self.ipv4_local_network).netmask) if sys.platform == 'cygwin' \ else '255.255.255.255' local_address_list = self.getIPv4LocalAddressList() if addr is None: return self._generateRandomIPv4Address(netmask) elif dict(addr=addr, netmask=netmask) not in local_address_list: if self._addSystemAddress(addr, netmask, False): return dict(addr=addr, netmask=netmask) else: self.logger.warning('Impossible to add old local IPv4 %s. Generating ' 'new IPv4 address.' % addr) return self._generateRandomIPv4Address(netmask) else: # confirmed to be configured return dict(addr=addr, netmask=netmask) def addAddr(self, addr=None, netmask=None): """ Adds IP address to interface. If addr is specified and exists already on interface does nothing. If addr is specified and does not exists on interface, tries to add given address. If it is not possible (ex. because network changed) calculates new address. Args: addr: Wished address to be added to interface. netmask: Wished netmask to be used. Returns: Tuple of (address, netmask). Raises: AddressGenerationError: Couldn't construct valid address with existing one's on the interface. NoAddressOnInterface: There's no address on the interface to construct an address with. 
""" # Getting one address of the interface as base of the next addresses if self.ipv6_interface: interface_name = self.ipv6_interface else: interface_name = self.name interface_addr_list = self.getGlobalScopeAddressList() # No address found if len(interface_addr_list) == 0: raise NoAddressOnInterface(interface_name) address_dict = interface_addr_list[0] if addr is not None: if dict(addr=addr, netmask=netmask) in interface_addr_list: # confirmed to be configured return dict(addr=addr, netmask=netmask) if netmask == address_dict['netmask']: # same netmask, so there is a chance to add good one interface_network = netaddr.ip.IPNetwork('%s/%s' % (address_dict['addr'], netmaskToPrefixIPv6(address_dict['netmask']))) requested_network = netaddr.ip.IPNetwork('%s/%s' % (addr, netmaskToPrefixIPv6(netmask))) if interface_network.network == requested_network.network: # same network, try to add if self._addSystemAddress(addr, netmask): # succeed, return it return dict(addr=addr, netmask=netmask) else: self.logger.warning('Impossible to add old public IPv6 %s. ' 'Generating new IPv6 address.' % addr) # Try 10 times to add address, raise in case if not possible try_num = 10 netmask = address_dict['netmask'] while try_num > 0: addr = ':'.join(address_dict['addr'].split(':')[:-1] + ['%x' % ( random.randint(1, 65000), )]) socket.inet_pton(socket.AF_INET6, addr) if (dict(addr=addr, netmask=netmask) not in self.getGlobalScopeAddressList()): # Checking the validity of the IPv6 address if self._addSystemAddress(addr, netmask): return dict(addr=addr, netmask=netmask) try_num -= 1 raise AddressGenerationError(addr) def parse_computer_definition(conf, definition_path): conf.logger.info('Using definition file %r' % definition_path) computer_definition = ConfigParser.RawConfigParser({ 'software_user': 'slapsoft', }) computer_definition.read(definition_path) interface = None address = None netmask = None if computer_definition.has_option('computer', 'address'): address, netmask = computer_definition.get('computer', 'address').split('/') if (conf.alter_network and conf.interface_name is not None and conf.ipv4_local_network is not None): interface = Interface(logger=conf.logger, name=conf.interface_name, ipv4_local_network=conf.ipv4_local_network, ipv6_interface=conf.ipv6_interface) computer = Computer( reference=conf.computer_id, interface=interface, addr=address, netmask=netmask, ipv6_interface=conf.ipv6_interface, software_user=computer_definition.get('computer', 'software_user'), tap_gateway_interface=conf.tap_gateway_interface, ) partition_list = [] for partition_number in range(int(conf.partition_amount)): section = 'partition_%s' % partition_number user = User(computer_definition.get(section, 'user')) address_list = [] for a in computer_definition.get(section, 'address').split(): address, netmask = a.split('/') address_list.append(dict(addr=address, netmask=netmask)) tap = Tap(computer_definition.get(section, 'network_interface')) partition = Partition(reference=computer_definition.get(section, 'pathname'), path=os.path.join(conf.instance_root, computer_definition.get(section, 'pathname')), user=user, address_list=address_list, tap=tap) partition_list.append(partition) computer.partition_list = partition_list return computer def parse_computer_xml(conf, xml_path): interface = Interface(logger=conf.logger, name=conf.interface_name, ipv4_local_network=conf.ipv4_local_network, ipv6_interface=conf.ipv6_interface) if os.path.exists(xml_path): conf.logger.debug('Loading previous computer data from %r' % xml_path) 
computer = Computer.load(xml_path, reference=conf.computer_id, ipv6_interface=conf.ipv6_interface, tap_gateway_interface=conf.tap_gateway_interface) # Connect to the interface defined by the configuration computer.interface = interface else: # If no pre-existent configuration found, create a new computer object conf.logger.warning('Creating new computer data with id %r', conf.computer_id) computer = Computer( reference=conf.computer_id, interface=interface, addr=None, netmask=None, ipv6_interface=conf.ipv6_interface, software_user=conf.software_user, tap_gateway_interface=conf.tap_gateway_interface, ) partition_amount = int(conf.partition_amount) existing_partition_amount = len(computer.partition_list) if partition_amount < existing_partition_amount: conf.logger.critical('Requested amount of computer partitions (%s) is lower ' 'than already configured (%s), cannot continue', partition_amount, existing_partition_amount) sys.exit(1) elif partition_amount > existing_partition_amount: conf.logger.info('Adding %s new partitions', partition_amount - existing_partition_amount) for i in range(existing_partition_amount, partition_amount): # add new partitions partition = Partition( reference='%s%s' % (conf.partition_base_name, i), path=os.path.join(conf.instance_root, '%s%s' % ( conf.partition_base_name, i)), user=User('%s%s' % (conf.user_base_name, i)), address_list=None, tap=Tap('%s%s' % (conf.tap_base_name, i)) ) computer.partition_list.append(partition) return computer def write_computer_definition(conf, computer): computer_definition = ConfigParser.RawConfigParser() computer_definition.add_section('computer') if computer.address is not None and computer.netmask is not None: computer_definition.set('computer', 'address', '/'.join( [computer.address, computer.netmask])) for partition_number, partition in enumerate(computer.partition_list): section = 'partition_%s' % partition_number computer_definition.add_section(section) address_list = [] for address in partition.address_list: address_list.append('/'.join([address['addr'], address['netmask']])) computer_definition.set(section, 'address', ' '.join(address_list)) computer_definition.set(section, 'user', partition.user.name) computer_definition.set(section, 'network_interface', partition.tap.name) computer_definition.set(section, 'pathname', partition.reference) computer_definition.write(open(conf.output_definition_file, 'w')) conf.logger.info('Stored computer definition in %r' % conf.output_definition_file) def random_delay(conf): # Add delay between 0 and 1 hour # XXX should be the contrary: now by default, and cron should have # --maximal-delay=3600 if not conf.now: duration = float(60 * 60) * random.random() conf.logger.info('Sleeping for %s seconds. To disable this feature, ' 'use with --now parameter in manual.' 
% duration) time.sleep(duration) def do_format(conf): random_delay(conf) if conf.input_definition_file: computer = parse_computer_definition(conf, conf.input_definition_file) else: # no definition file, figure out computer computer = parse_computer_xml(conf, conf.computer_xml) computer.instance_root = conf.instance_root computer.software_root = conf.software_root computer.instance_storage_home = conf.instance_storage_home conf.logger.info('Updating computer') address = computer.getAddress(conf.create_tap) computer.address = address['addr'] computer.netmask = address['netmask'] if conf.output_definition_file: write_computer_definition(conf, computer) computer.construct(alter_user=conf.alter_user, alter_network=conf.alter_network, create_tap=conf.create_tap, use_unique_local_address_block=conf.use_unique_local_address_block) if getattr(conf, 'certificate_repository_path', None): mkdir_p(conf.certificate_repository_path, mode=0o700) computer.update() # Dumping and sending to the erp5 the current configuration if not conf.dry_run: computer.dump(path_to_xml=conf.computer_xml, path_to_json=conf.computer_json, logger=conf.logger) conf.logger.info('Posting information to %r' % conf.master_url) computer.send(conf) conf.logger.info('slapos successfully prepared the computer.') class FormatConfig(object): key_file = None cert_file = None alter_network = None alter_user = None create_tap = None computer_xml = None computer_json = None input_definition_file = None log_file = None output_definition_file = None dry_run = None software_user = None tap_gateway_interface = None use_unique_local_address_block = None instance_storage_home = None def __init__(self, logger): self.logger = logger @staticmethod def checkRequiredBinary(binary_list): missing_binary_list = [] for b in binary_list: if type(b) != type([]): b = [b] try: callAndRead(b) except ValueError: pass except OSError: missing_binary_list.append(b[0]) if missing_binary_list: raise UsageError('Some required binaries are missing or not ' 'functional: %s' % (','.join(missing_binary_list), )) def mergeConfig(self, args, configp): """ Set options given by parameters. Must be executed before setting up the logger. 
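    Command-line arguments take precedence over the [slapformat] and [slapos]
    sections of the configuration file. Expected call order (a sketch; local
    variable names are illustrative):

        conf = FormatConfig(logger=logger)
        conf.mergeConfig(args, configp)  # CLI arguments first, then config file
        conf.setConfig()                 # defaults, coercions and sanity checks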
""" self.key_file = None self.cert_file = None # Set argument parameters for key, value in args.__dict__.items(): setattr(self, key, value) # Merges the arguments and configuration for section in ("slapformat", "slapos"): configuration_dict = dict(configp.items(section)) for key in configuration_dict: if not getattr(self, key, None): setattr(self, key, configuration_dict[key]) def setConfig(self): # setup some nones for parameter in ['interface_name', 'partition_base_name', 'user_base_name', 'tap_base_name', 'ipv4_local_network', 'ipv6_interface']: if getattr(self, parameter, None) is None: setattr(self, parameter, None) # Backward compatibility if not getattr(self, "interface_name", None) \ and getattr(self, "bridge_name", None): setattr(self, "interface_name", self.bridge_name) self.logger.warning('bridge_name option is deprecated and should be ' 'replaced by interface_name.') if not getattr(self, "create_tap", None) \ and getattr(self, "no_bridge", None): setattr(self, "create_tap", not self.no_bridge) self.logger.warning('no_bridge option is deprecated and should be ' 'replaced by create_tap.') # Set defaults lately if self.alter_network is None: self.alter_network = 'True' if self.alter_user is None: self.alter_user = 'True' if self.software_user is None: self.software_user = 'slapsoft' if self.create_tap is None: self.create_tap = True if self.tap_gateway_interface is None: self.tap_gateway_interface = '' if self.use_unique_local_address_block is None: self.use_unique_local_address_block = False # Convert strings to booleans for option in ['alter_network', 'alter_user', 'create_tap', 'use_unique_local_address_block']: attr = getattr(self, option) if isinstance(attr, str): if attr.lower() == 'true': root_needed = True setattr(self, option, True) elif attr.lower() == 'false': setattr(self, option, False) else: message = 'Option %r needs to be "True" or "False", wrong value: ' \ '%r' % (option, getattr(self, option)) self.logger.error(message) raise UsageError(message) if not self.dry_run: if self.alter_user: self.checkRequiredBinary(['groupadd', 'useradd', 'usermod', ['passwd', '-h']]) if self.create_tap: self.checkRequiredBinary([['tunctl', '-d']]) if self.tap_gateway_interface: self.checkRequiredBinary(['route']) if self.alter_network: self.checkRequiredBinary(['ip']) # Required, even for dry run if self.alter_network and self.create_tap: self.checkRequiredBinary(['brctl']) # Check mandatory options for parameter in ('computer_id', 'instance_root', 'master_url', 'software_root', 'computer_xml'): if not getattr(self, parameter, None): raise UsageError("Parameter '%s' is not defined." % parameter) # Check existence of SSL certificate files, if defined for attribute in ['key_file', 'cert_file', 'master_ca_file']: file_location = getattr(self, attribute, None) if file_location is not None: if not os.path.exists(file_location): self.logger.fatal('File %r does not exist or is not readable.' 
% file_location) sys.exit(1) self.logger.debug('Started.') if self.dry_run: self.logger.info("Dry-run mode enabled.") if self.create_tap: self.logger.info("Tap creation mode enabled.") # Calculate path once self.computer_xml = os.path.abspath(self.computer_xml) if self.input_definition_file: self.input_definition_file = os.path.abspath(self.input_definition_file) if self.output_definition_file: self.output_definition_file = os.path.abspath(self.output_definition_file) def tracing_monkeypatch(conf): """Substitute os module and callAndRead function with tracing wrappers.""" global os global callAndRead real_callAndRead = callAndRead os = OS(conf) if conf.dry_run: def dry_callAndRead(argument_list, raise_on_error=True): if argument_list == ['brctl', 'show']: return real_callAndRead(argument_list, raise_on_error) else: return 0, '' callAndRead = dry_callAndRead def fake_getpwnam(user): class result(object): pw_uid = 12345 pw_gid = 54321 return result pwd.getpwnam = fake_getpwnam else: dry_callAndRead = real_callAndRead def logging_callAndRead(argument_list, raise_on_error=True): conf.logger.debug(' '.join(argument_list)) return dry_callAndRead(argument_list, raise_on_error) callAndRead = logging_callAndRead slapos.core-1.3.18/slapos/proxy/0000755000000000000000000000000013006632706016457 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/proxy/views.py0000644000000000000000000007640013003671621020171 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2010, 2011, 2012, 2013, 2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## from lxml import etree import random import string from slapos.slap.slap import Computer, ComputerPartition, \ SoftwareRelease, SoftwareInstance, NotFoundError from slapos.proxy.db_version import DB_VERSION import slapos.slap from slapos.util import sqlite_connect from flask import g, Flask, request, abort import xml_marshaller from xml_marshaller.xml_marshaller import loads from xml_marshaller.xml_marshaller import dumps app = Flask(__name__) EMPTY_DICT_XML = dumps({}) class UnauthorizedError(Exception): pass # cast everything to string, utf-8 encoded def to_str(v): if isinstance(v, str): return v if not isinstance(v, unicode): v = unicode(v) return v.encode('utf-8') def xml2dict(xml): result_dict = {} if xml is not None and xml != '': tree = etree.fromstring(to_str(xml)) for element in tree.iter(tag=etree.Element): if element.tag == 'parameter': key = element.get('id') value = result_dict.get(key, None) if value is not None: value = value + ' ' + element.text else: value = element.text result_dict[key] = value return result_dict def dict2xml(dictionary): instance = etree.Element('instance') for parameter_id, parameter_value in dictionary.iteritems(): # cast everything to string parameter_value = unicode(parameter_value) etree.SubElement(instance, "parameter", attrib={'id': parameter_id}).text = parameter_value return etree.tostring(instance, pretty_print=True, xml_declaration=True, encoding='utf-8') def partitiondict2partition(partition): for key, value in partition.iteritems(): if type(value) is unicode: partition[key] = value.encode() slap_partition = ComputerPartition(partition['computer_reference'], partition['reference']) slap_partition._software_release_document = None slap_partition._requested_state = 'destroyed' slap_partition._need_modification = 0 slap_partition._instance_guid = '%s-%s' % (partition['computer_reference'], partition['reference']) if partition['software_release']: slap_partition._need_modification = 1 slap_partition._requested_state = partition['requested_state'] slap_partition._parameter_dict = xml2dict(partition['xml']) address_list = [] full_address_list = [] for address in execute_db('partition_network', 'SELECT * FROM %s WHERE partition_reference=? 
AND computer_reference=?', [partition['reference'], partition['computer_reference']]): address_list.append((address['reference'], address['address'])) slap_partition._parameter_dict['ip_list'] = address_list slap_partition._parameter_dict['full_address_list'] = full_address_list slap_partition._parameter_dict['slap_software_type'] = \ partition['software_type'] if partition['slave_instance_list'] is not None: slap_partition._parameter_dict['slave_instance_list'] = \ xml_marshaller.xml_marshaller.loads(partition['slave_instance_list']) else: slap_partition._parameter_dict['slave_instance_list'] = [] slap_partition._connection_dict = xml2dict(partition['connection_xml']) slap_partition._software_release_document = SoftwareRelease( software_release=partition['software_release'], computer_guid=partition['computer_reference']) return slap_partition def execute_db(table, query, args=(), one=False, db_version=None, log=False, db=None): if not db: db = g.db if not db_version: db_version = DB_VERSION query = query % (table + db_version,) if log: print query try: cur = db.execute(query, args) except: app.logger.error('There was some issue during processing query %r on table %r with args %r' % (query, table, args)) raise rv = [dict((cur.description[idx][0], value) for idx, value in enumerate(row)) for row in cur.fetchall()] return (rv[0] if rv else None) if one else rv def connect_db(): return sqlite_connect(app.config['DATABASE_URI']) def _getTableList(): return g.db.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY Name").fetchall() def _getCurrentDatabaseSchemaVersion(): """ Return version of database schema. As there is no actual definition of version, analyse name of all tables (containing version) and take the highest version (as several versions can live in the db). """ # XXX: define an actual version and proper migration/repair procedure. version = -1 for table_name in _getTableList(): try: table_version = int(table_name[0][-2:]) except ValueError: table_version = int(table_name[0][-1:]) if table_version > version: version = table_version return str(version) def _upgradeDatabaseIfNeeded(): """ Analyses current database compared to defined schema, and adapt tables/data it if needed. 
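  Table names are suffixed with the schema version (e.g. 'partition11' for
  version 11), so the newer schema can be created next to the old one and the
  rows copied over without touching the previous tables.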
""" current_schema_version = _getCurrentDatabaseSchemaVersion() # If version of current database is not old, do nothing if current_schema_version == DB_VERSION: return schema = app.open_resource('schema.sql') schema = schema.read() % dict(version=DB_VERSION, computer=app.config['computer_id']) g.db.cursor().executescript(schema) g.db.commit() if current_schema_version == '-1': return # Migrate all data to new tables app.logger.info('Old schema detected: Migrating old tables...') app.logger.info('Note that old tables are not alterated.') for table in ('software', 'computer', 'partition', 'slave', 'partition_network'): for row in execute_db(table, 'SELECT * from %s', db_version=current_schema_version): columns = ', '.join(row.keys()) placeholders = ':'+', :'.join(row.keys()) query = 'INSERT INTO %s (%s) VALUES (%s)' % ('%s', columns, placeholders) execute_db(table, query, row, log=True) g.db.commit() is_schema_already_executed = False @app.before_request def before_request(): g.db = connect_db() global is_schema_already_executed if not is_schema_already_executed: _upgradeDatabaseIfNeeded() is_schema_already_executed = True @app.after_request def after_request(response): g.db.commit() g.db.close() return response @app.route('/getComputerInformation', methods=['GET']) def getComputerInformation(): # Kept only for backward compatiblity return getFullComputerInformation() @app.route('/getFullComputerInformation', methods=['GET']) def getFullComputerInformation(): computer_id = request.args['computer_id'] computer_list = execute_db('computer', 'SELECT * FROM %s WHERE reference=?', [computer_id]) if len(computer_list) != 1: # Backward compatibility if computer_id != app.config['computer_id']: raise NotFoundError('%s is not registered.' % computer_id) slap_computer = Computer(computer_id) slap_computer._software_release_list = [] for sr in execute_db('software', 'select * from %s WHERE computer_reference=?', [computer_id]): slap_computer._software_release_list.append(SoftwareRelease( software_release=sr['url'], computer_guid=computer_id)) slap_computer._computer_partition_list = [] for partition in execute_db('partition', 'SELECT * FROM %s WHERE computer_reference=?', [computer_id]): slap_computer._computer_partition_list.append(partitiondict2partition( partition)) return xml_marshaller.xml_marshaller.dumps(slap_computer) @app.route('/setComputerPartitionConnectionXml', methods=['POST']) def setComputerPartitionConnectionXml(): slave_reference = request.form.get('slave_reference', None) computer_partition_id = request.form['computer_partition_id'].encode() computer_id = request.form['computer_id'].encode() connection_xml = request.form['connection_xml'].encode() connection_dict = xml_marshaller.xml_marshaller.loads( connection_xml) connection_xml = dict2xml(connection_dict) if not slave_reference or slave_reference == 'None': query = 'UPDATE %s SET connection_xml=? WHERE reference=? AND computer_reference=?' argument_list = [connection_xml, computer_partition_id, computer_id] execute_db('partition', query, argument_list) return 'done' else: slave_reference = slave_reference.encode() query = 'UPDATE %s SET connection_xml=? , hosted_by=? WHERE reference=?' 
argument_list = [connection_xml, computer_partition_id, slave_reference] execute_db('slave', query, argument_list) return 'done' @app.route('/buildingSoftwareRelease', methods=['POST']) def buildingSoftwareRelease(): return 'Ignored' @app.route('/availableSoftwareRelease', methods=['POST']) def availableSoftwareRelease(): return 'Ignored' @app.route('/softwareReleaseError', methods=['POST']) def softwareReleaseError(): return 'Ignored' @app.route('/buildingComputerPartition', methods=['POST']) def buildingComputerPartition(): return 'Ignored' @app.route('/availableComputerPartition', methods=['POST']) def availableComputerPartition(): return 'Ignored' @app.route('/softwareInstanceError', methods=['POST']) def softwareInstanceError(): return 'Ignored' @app.route('/softwareInstanceBang', methods=['POST']) def softwareInstanceBang(): return 'Ignored' @app.route('/startedComputerPartition', methods=['POST']) def startedComputerPartition(): return 'Ignored' @app.route('/stoppedComputerPartition', methods=['POST']) def stoppedComputerPartition(): return 'Ignored' @app.route('/destroyedComputerPartition', methods=['POST']) def destroyedComputerPartition(): return 'Ignored' @app.route('/useComputer', methods=['POST']) def useComputer(): return 'Ignored' @app.route('/loadComputerConfigurationFromXML', methods=['POST']) def loadComputerConfigurationFromXML(): xml = request.form['xml'] computer_dict = xml_marshaller.xml_marshaller.loads(str(xml)) execute_db('computer', 'INSERT OR REPLACE INTO %s values(:reference, :address, :netmask)', computer_dict) for partition in computer_dict['partition_list']: partition['computer_reference'] = computer_dict['reference'] execute_db('partition', 'INSERT OR IGNORE INTO %s (reference, computer_reference) values(:reference, :computer_reference)', partition) execute_db('partition_network', 'DELETE FROM %s WHERE partition_reference = ? AND computer_reference = ?', [partition['reference'], partition['computer_reference']]) for address in partition['address_list']: address['reference'] = partition['tap']['name'] address['partition_reference'] = partition['reference'] address['computer_reference'] = partition['computer_reference'] execute_db('partition_network', 'INSERT OR REPLACE INTO %s (reference, partition_reference, computer_reference, address, netmask) values(:reference, :partition_reference, :computer_reference, :addr, :netmask)', address) return 'done' @app.route('/registerComputerPartition', methods=['GET']) def registerComputerPartition(): computer_reference = request.args['computer_reference'].encode() computer_partition_reference = request.args['computer_partition_reference'].encode() partition = execute_db('partition', 'SELECT * FROM %s WHERE reference=? and computer_reference=?', [computer_partition_reference, computer_reference], one=True) if partition is None: raise UnauthorizedError return xml_marshaller.xml_marshaller.dumps( partitiondict2partition(partition)) @app.route('/supplySupply', methods=['POST']) def supplySupply(): url = request.form['url'] computer_id = request.form['computer_id'] if request.form['state'] == 'destroyed': execute_db('software', 'DELETE FROM %s WHERE url = ? AND computer_reference=?', [url, computer_id]) else: execute_db('software', 'INSERT OR REPLACE INTO %s VALUES(?, ?)', [url, computer_id]) return '%r added' % url @app.route('/requestComputerPartition', methods=['POST']) def requestComputerPartition(): parsed_request_dict = parseRequestComputerPartitionForm(request.form) # Is it a slave instance? 
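  # 'shared_xml' is an xml-marshalled value: the default (an empty dict) means
  # a regular instance request, anything truthy means the caller asks for a
  # slave instance hosted by an already allocated master partition.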
slave = loads(request.form.get('shared_xml', EMPTY_DICT_XML).encode()) # Check first if instance is already allocated if slave: # XXX: change schema to include a simple "partition_reference" which # is name of the instance. Then, no need to do complex search here. slave_reference = parsed_request_dict['partition_id'] + '_' + parsed_request_dict['partition_reference'] requested_computer_id = parsed_request_dict['filter_kw'].get('computer_guid', app.config['computer_id']) matching_partition = getAllocatedSlaveInstance(slave_reference, requested_computer_id) else: matching_partition = getAllocatedInstance(parsed_request_dict['partition_reference']) if matching_partition: # Then the instance is already allocated, just update it # XXX: split request and request slave into different update/allocate functions and simplify. # By default, ALWAYS request instance on default computer parsed_request_dict['filter_kw'].setdefault('computer_guid', app.config['computer_id']) if slave: software_instance = requestSlave(**parsed_request_dict) else: software_instance = requestNotSlave(**parsed_request_dict) else: # Instance is not yet allocated: try to do it. external_master_url = isRequestToBeForwardedToExternalMaster(parsed_request_dict) if external_master_url: return forwardRequestToExternalMaster(external_master_url, request.form) # XXX add support for automatic deployment on specific node depending on available SR and partitions on each Node. # Note: It only deploys on default node if SLA not specified # XXX: split request and request slave into different update/allocate functions and simplify. # By default, ALWAYS request instance on default computer parsed_request_dict['filter_kw'].setdefault('computer_guid', app.config['computer_id']) if slave: software_instance = requestSlave(**parsed_request_dict) else: software_instance = requestNotSlave(**parsed_request_dict) return dumps(software_instance) def parseRequestComputerPartitionForm(form): """ Parse without intelligence a form from a request(), return it. """ parsed_dict = {} parsed_dict['software_release'] = form['software_release'].encode() parsed_dict['software_type'] = form.get('software_type').encode() parsed_dict['partition_reference'] = form.get('partition_reference', '').encode() parsed_dict['partition_id'] = form.get('computer_partition_id', '').encode() parsed_dict['partition_parameter_kw'] = loads(form.get('partition_parameter_xml', EMPTY_DICT_XML).encode()) parsed_dict['filter_kw'] = loads(form.get('filter_xml', EMPTY_DICT_XML).encode()) # Note: currently ignored for slave instance (slave instances # are always started). parsed_dict['requested_state'] = loads(form.get('state').encode()) return parsed_dict run_id = ''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(32)]) def checkIfMasterIsCurrentMaster(master_url): """ Because there are several ways to contact this server, we can't easily check in a request() if master_url is ourself or not. 
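  Comparing master_url with the host and port we are listening on only covers
  the most obvious case.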
So we contact master_url, and if it returns an ID we know: it is ourself """ # Dumb way: compare with listening host/port host = request.host port = request.environ['SERVER_PORT'] if master_url == 'http://%s:%s/' % (host, port): return True # Hack way: call ourself slap = slapos.slap.slap() slap.initializeConnection(master_url) try: master_run_id = slap._connection_helper.GET('/getRunId') except: return False if master_run_id == run_id: return True return False @app.route('/getRunId', methods=['GET']) def getRunId(): return run_id def checkMasterUrl(master_url): """ Check if master_url doesn't represent ourself, and check if it is whitelisted in multimaster configuration. """ if not master_url: return False if checkIfMasterIsCurrentMaster(master_url): # master_url is current server: don't forward return False master_entry = app.config.get('multimaster').get(master_url, None) # Check if this master is known if not master_entry: # Check if it is ourself if not master_url.startswith('https') and checkIfMasterIsCurrentMaster(master_url): return False app.logger.warning('External SlapOS Master URL %s is not listed in multimaster list.' % master_url) abort(404) return True def isRequestToBeForwardedToExternalMaster(parsed_request_dict): """ Check if we HAVE TO forward the request. Several cases: * The request specifies a master_url in filter_kw * The software_release of the request is in a automatic forward list """ master_url = parsed_request_dict['filter_kw'].get('master_url') if checkMasterUrl(master_url): # Don't allocate the instance locally, but forward to specified master return master_url software_release = parsed_request_dict['software_release'] for mutimaster_url, mutimaster_entry in app.config.get('multimaster').iteritems(): if software_release in mutimaster_entry['software_release_list']: # Don't allocate the instance locally, but forward to specified master return mutimaster_url return None def forwardRequestToExternalMaster(master_url, request_form): """ Forward instance request to external SlapOS Master. """ master_entry = app.config.get('multimaster').get(master_url, {}) key_file = master_entry.get('key') cert_file = master_entry.get('cert') if master_url.startswith('https') and (not key_file or not cert_file): app.logger.warning('External master %s configuration did not specify key or certificate.' % master_url) abort(404) if master_url.startswith('https') and not master_url.startswith('https') and (key_file or cert_file): app.logger.warning('External master %s configurqtion specifies key or certificate but is using plain http.' 
% master_url) abort(404) slap = slapos.slap.slap() if key_file: slap.initializeConnection(master_url, key_file=key_file, cert_file=cert_file) else: slap.initializeConnection(master_url) partition_reference = request_form['partition_reference'].encode() # Store in database execute_db('forwarded_partition_request', 'INSERT OR REPLACE INTO %s values(:partition_reference, :master_url)', {'partition_reference':partition_reference, 'master_url': master_url}) new_request_form = request_form.copy() filter_kw = loads(new_request_form['filter_xml'].encode()) filter_kw['source_instance_id'] = partition_reference new_request_form['filter_xml'] = dumps(filter_kw) xml = slap._connection_helper.POST('/requestComputerPartition', data=new_request_form) if type(xml) is unicode: xml = str(xml) xml.encode('utf-8') partition = loads(xml) # XXX move to other end partition._master_url = master_url return dumps(partition) def getAllocatedInstance(partition_reference): """ Look for existence of instance, if so return the corresponding partition dict, else return None """ args = [] a = args.append table = 'partition' q = 'SELECT * FROM %s WHERE partition_reference=?' a(partition_reference) return execute_db(table, q, args, one=True) def getAllocatedSlaveInstance(slave_reference, requested_computer_id): """ Look for existence of instance, if so return the corresponding partition dict, else return None """ args = [] a = args.append # XXX: Scope currently depends on instance which requests slave. # Meaning that two different instances requesting the same slave will # result in two different allocated slaves. table = 'slave' q = 'SELECT * FROM %s WHERE reference=? and computer_reference=?' a(slave_reference) a(requested_computer_id) # XXX: check there is only one result return execute_db(table, q, args, one=True) def getRootPartition(reference): p = 'SELECT * FROM %s WHERE reference=?' parent_partition = execute_db('partition', p, [reference], one=True) while parent_partition is not None: parent_reference = parent_partition['requested_by'] if not parent_reference or parent_reference == reference: break reference = parent_reference parent_partition = execute_db('partition', p, [reference], one=True) return parent_partition def requestNotSlave(software_release, software_type, partition_reference, partition_id, partition_parameter_kw, filter_kw, requested_state): instance_xml = dict2xml(partition_parameter_kw) requested_computer_id = filter_kw['computer_guid'] instance_xml = dict2xml(partition_parameter_kw) args = [] a = args.append q = 'SELECT * FROM %s WHERE partition_reference=?' a(partition_reference) partition = execute_db('partition', q, args, one=True) args = [] a = args.append q = 'UPDATE %s SET slap_state="busy"' if partition is None: partition = execute_db('partition', 'SELECT * FROM %s WHERE slap_state="free" and computer_reference=?', [requested_computer_id], one=True) if partition is None: app.logger.warning('No more free computer partition') abort(404) q += ' ,software_release=?' a(software_release) if partition_reference: q += ' ,partition_reference=?' a(partition_reference) if partition_id: q += ' ,requested_by=?' a(partition_id) if not software_type: software_type = 'RootSoftwareInstance' else: # XXX Check if software_release should be updated if partition['software_release'].encode() != software_release: q += ' ,software_release=?' 
a(software_release) if partition['requested_by']: root_partition = getRootPartition(partition['requested_by']) if root_partition and root_partition['requested_state'] != "started": # propagate parent state to child # child can be stopped or destroyed while parent is started requested_state = root_partition['requested_state'] if requested_state: q += ', requested_state=?' a(requested_state) # # XXX change software_type when requested # if software_type: q += ' ,software_type=?' a(software_type) # Else: only update partition parameters if instance_xml: q += ' ,xml=?' a(instance_xml) q += ' WHERE reference=? AND computer_reference=?' a(partition['reference'].encode()) a(partition['computer_reference'].encode()) execute_db('partition', q, args) args = [] partition = execute_db('partition', 'SELECT * FROM %s WHERE reference=? and computer_reference=?', [partition['reference'].encode(), partition['computer_reference'].encode()], one=True) address_list = [] for address in execute_db('partition_network', 'SELECT * FROM %s WHERE partition_reference=?', [partition['reference']]): address_list.append((address['reference'], address['address'])) if not requested_state: requested_state = 'started' # XXX it should be ComputerPartition, not a SoftwareInstance software_instance = SoftwareInstance(_connection_dict=xml2dict(partition['connection_xml']), _parameter_dict=xml2dict(partition['xml']), connection_xml=partition['connection_xml'], slap_computer_id=partition['computer_reference'].encode(), slap_computer_partition_id=partition['reference'], slap_software_release_url=partition['software_release'], slap_server_url='slap_server_url', slap_software_type=partition['software_type'], _instance_guid='%s-%s' % (partition['computer_reference'].encode(), partition['reference']), _requested_state=requested_state, ip_list=address_list) return software_instance def requestSlave(software_release, software_type, partition_reference, partition_id, partition_parameter_kw, filter_kw, requested_state): """ Function to organise link between slave and master. Slave information are stored in places: 1. slave table having information such as slave reference, connection information to slave (given by slave master), hosted_by and asked_by reference. 2. A dictionary in slave_instance_list of selected slave master in which are stored slave_reference, software_type, slave_title and partition_parameter_kw stored as individual keys. """ requested_computer_id = filter_kw['computer_guid'] instance_xml = dict2xml(partition_parameter_kw) # We will search for a master corresponding to request args = [] a = args.append q = 'SELECT * FROM %s WHERE software_release=? and computer_reference=?' a(software_release) a(requested_computer_id) if software_type: q += ' AND software_type=?' a(software_type) if 'instance_guid' in filter_kw: q += ' AND reference=?' 
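    # The 'instance_guid' SLA filter lets the requester pin the exact
    # partition that must host the slave.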
# instance_guid should be like: %s-%s % (requested_computer_id, partition_id) # But code is convoluted here, so we check instance_guid = filter_kw['instance_guid'] if instance_guid.startswith(requested_computer_id): a(instance_guid[len(requested_computer_id) + 1:]) else: a(instance_guid) partition = execute_db('partition', q, args, one=True) if partition is None: app.logger.warning('No partition corresponding to slave request: %s' % args) abort(404) # We set slave dictionary as described in docstring new_slave = {} slave_reference = partition_id + '_' + partition_reference new_slave['slave_title'] = slave_reference new_slave['slap_software_type'] = software_type new_slave['slave_reference'] = slave_reference for key in partition_parameter_kw: if partition_parameter_kw[key] is not None: new_slave[key] = partition_parameter_kw[key] # Add slave to partition slave_list if not present else replace information slave_instance_list = partition['slave_instance_list'] if slave_instance_list is None: slave_instance_list = [] else: slave_instance_list = xml_marshaller.xml_marshaller.loads(slave_instance_list.encode()) for x in slave_instance_list: if x['slave_reference'] == slave_reference: slave_instance_list.remove(x) slave_instance_list.append(new_slave) # Update slave_instance_list in database args = [] a = args.append q = 'UPDATE %s SET slave_instance_list=?' a(xml_marshaller.xml_marshaller.dumps(slave_instance_list)) q += ' WHERE reference=? and computer_reference=?' a(partition['reference'].encode()) a(requested_computer_id) execute_db('partition', q, args) args = [] partition = execute_db('partition', 'SELECT * FROM %s WHERE reference=? and computer_reference=?', [partition['reference'].encode(), requested_computer_id], one=True) # Add slave to slave table if not there slave = execute_db('slave', 'SELECT * FROM %s WHERE reference=? and computer_reference=?', [slave_reference, requested_computer_id], one=True) if slave is None: execute_db('slave', 'INSERT OR IGNORE INTO %s (reference,computer_reference,asked_by,hosted_by) values(:reference,:computer_reference,:asked_by,:hosted_by)', [slave_reference, requested_computer_id, partition_id, partition['reference']]) slave = execute_db('slave', 'SELECT * FROM %s WHERE reference=? and computer_reference=?', [slave_reference, requested_computer_id], one=True) address_list = [] for address in execute_db('partition_network', 'SELECT * FROM %s WHERE partition_reference=? and computer_reference=?', [partition['reference'], partition['computer_reference']]): address_list.append((address['reference'], address['address'])) # XXX it should be ComputerPartition, not a SoftwareInstance software_instance = SoftwareInstance(_connection_dict=xml2dict(slave['connection_xml']), _parameter_dict=xml2dict(instance_xml), slap_computer_id=partition['computer_reference'], slap_computer_partition_id=slave['hosted_by'], slap_software_release_url=partition['software_release'], slap_server_url='slap_server_url', slap_software_type=partition['software_type'], ip_list=address_list) return software_instance @app.route('/softwareInstanceRename', methods=['POST']) def softwareInstanceRename(): new_name = request.form['new_name'].encode() computer_partition_id = request.form['computer_partition_id'].encode() computer_id = request.form['computer_id'].encode() q = 'UPDATE %s SET partition_reference = ? WHERE reference = ? AND computer_reference = ?' 
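  # Only the human readable title ('partition_reference' column) changes;
  # 'reference' is the partition id (e.g. 'slappartN') and stays untouched.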
execute_db('partition', q, [new_name, computer_partition_id, computer_id]) return 'done' @app.route('/getComputerPartitionStatus', methods=['GET']) def getComputerPartitionStatus(): return xml_marshaller.xml_marshaller.dumps('Not implemented.') @app.route('/computerBang', methods=['POST']) def computerBang(): return xml_marshaller.xml_marshaller.dumps('') @app.route('/getComputerPartitionCertificate', methods=['GET']) def getComputerPartitionCertificate(): # proxy does not use partition certificate, but client calls this. return xml_marshaller.xml_marshaller.dumps({'certificate': '', 'key': ''}) @app.route('/getSoftwareReleaseListFromSoftwareProduct', methods=['GET']) def getSoftwareReleaseListFromSoftwareProduct(): software_product_reference = request.args.get('software_product_reference') software_release_url = request.args.get('software_release_url') if software_release_url: assert(software_product_reference is None) raise NotImplementedError('software_release_url parameter is not supported yet.') else: assert(software_product_reference is not None) if app.config['software_product_list'].has_key(software_product_reference): software_release_url_list =\ [app.config['software_product_list'][software_product_reference]] else: software_release_url_list = [] return xml_marshaller.xml_marshaller.dumps(software_release_url_list) slapos.core-1.3.18/slapos/proxy/db_version.py0000644000000000000000000000023212752436135021165 0ustar rootroot00000000000000# -*- coding: utf-8 -*- import pkg_resources DB_VERSION = pkg_resources.resource_stream('slapos.proxy', 'schema.sql').readline().strip().split(':')[1] slapos.core-1.3.18/slapos/proxy/schema.sql0000644000000000000000000000327612752436135020455 0ustar rootroot00000000000000--version:11 CREATE TABLE IF NOT EXISTS software%(version)s ( url VARCHAR(255), computer_reference VARCHAR(255) DEFAULT '%(computer)s', CONSTRAINT uniq PRIMARY KEY (url, computer_reference) ); CREATE TABLE IF NOT EXISTS computer%(version)s ( reference VARCHAR(255) DEFAULT '%(computer)s', address VARCHAR(255), netmask VARCHAR(255), CONSTRAINT uniq PRIMARY KEY (reference) ); CREATE TABLE IF NOT EXISTS partition%(version)s ( reference VARCHAR(255), computer_reference VARCHAR(255) DEFAULT '%(computer)s', slap_state VARCHAR(255) DEFAULT 'free', software_release VARCHAR(255), xml TEXT, connection_xml TEXT, slave_instance_list TEXT, software_type VARCHAR(255), partition_reference VARCHAR(255), -- name of the instance requested_by VARCHAR(255), -- only used for debugging, -- slapproxy does not support proper scope requested_state VARCHAR(255) NOT NULL DEFAULT 'started', CONSTRAINT uniq PRIMARY KEY (reference, computer_reference) ); CREATE TABLE IF NOT EXISTS slave%(version)s ( reference VARCHAR(255), -- unique slave reference computer_reference VARCHAR(255) DEFAULT '%(computer)s', connection_xml TEXT, hosted_by VARCHAR(255), asked_by VARCHAR(255) -- only used for debugging, -- slapproxy does not support proper scope ); CREATE TABLE IF NOT EXISTS partition_network%(version)s ( partition_reference VARCHAR(255), computer_reference VARCHAR(255) DEFAULT '%(computer)s', reference VARCHAR(255), address VARCHAR(255), netmask VARCHAR(255) ); CREATE TABLE IF NOT EXISTS forwarded_partition_request%(version)s ( partition_reference VARCHAR(255), -- a.k.a source_instance_id master_url VARCHAR(255) ); slapos.core-1.3.18/slapos/proxy/__init__.py0000644000000000000000000000755312752436135020607 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: 
############################################################################## # # Copyright (c) 2010, 2011, 2012, 2013 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging from slapos.proxy.views import app def _generateSoftwareProductListFromString(software_product_list_string): """ Take a string as argument (which usually comes from the software_product_list parameter of the slapproxy configuration file), and parse it to generate list of Software Products that slapproxy will use. """ try: software_product_string_split = software_product_list_string.split('\n') except AttributeError: return {} software_product_list = {} for line in software_product_string_split: if line: software_reference, url = line.split(' ') software_product_list[software_reference] = url return software_product_list class ProxyConfig(object): def __init__(self, logger): self.logger = logger self.multimaster = {} self.software_product_list = [] def mergeConfig(self, args, configp): # Set arguments parameters (from CLI) as members of self for option, value in args.__dict__.items(): setattr(self, option, value) for section in configp.sections(): configuration_dict = dict(configp.items(section)) if section in ("slapproxy", "slapos"): # Merge the arguments and configuration as member of self for key in configuration_dict: if not getattr(self, key, None): setattr(self, key, configuration_dict[key]) elif section.startswith('multimaster/'): # Merge multimaster configuration if any # XXX: check for duplicate SR entries for key, value in configuration_dict.iteritems(): if key == 'software_release_list': # Split multi-lines values configuration_dict[key] = [line.strip() for line in value.strip().split('\n')] self.multimaster[section.split('multimaster/')[1]] = configuration_dict def setConfig(self): if not self.database_uri: raise ValueError('database-uri is required.') # XXX: check for duplicate SR entries. 
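    # 'software_product_list' is read from the configuration file as one
    # "<product reference> <software release URL>" pair per line, e.g.
    # (illustrative values):
    #
    #   software_product_list =
    #     kvm https://example.org/kvm/software.cfg
    #     wordpress https://example.org/wordpress/software.cfg
    #
    # and is converted into a {reference: url} mapping below.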
self.software_product_list = _generateSoftwareProductListFromString( getattr(self, 'software_product_list', '')) def setupFlaskConfiguration(conf): app.config['computer_id'] = conf.computer_id app.config['DATABASE_URI'] = conf.database_uri app.config['software_product_list'] = conf.software_product_list app.config['multimaster'] = conf.multimaster def do_proxy(conf): for handler in conf.logger.handlers: app.logger.addHandler(handler) app.logger.setLevel(logging.INFO) setupFlaskConfiguration(conf) app.run(host=conf.host, port=int(conf.port), threaded=True) slapos.core-1.3.18/slapos/tests/0000755000000000000000000000000013006632706016440 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/cli.py0000644000000000000000000001350712752436135017574 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import pprint import unittest from mock import patch, create_autospec import slapos.cli.list import slapos.cli.info import slapos.cli.supervisorctl from slapos.client import ClientConfig import slapos.grid.svcbackend import slapos.proxy import slapos.slap import supervisor.supervisorctl def raiseNotFoundError(*args, **kwargs): raise slapos.slap.NotFoundError() class CliMixin(unittest.TestCase): def setUp(self): slap = slapos.slap.slap() self.local = {'slap': slap} self.logger = create_autospec(logging.Logger) self.conf = create_autospec(ClientConfig) class TestCliProxy(CliMixin): def test_generateSoftwareProductListFromString(self): """ Test that generateSoftwareProductListFromString correctly parses a parameter coming from the configuration file. """ software_product_list_string = """ product1 url1 product2 url2""" software_release_url_list = { 'product1': 'url1', 'product2': 'url2', } self.assertEqual( slapos.proxy._generateSoftwareProductListFromString( software_product_list_string), software_release_url_list ) def test_generateSoftwareProductListFromString_emptyString(self): self.assertEqual( slapos.proxy._generateSoftwareProductListFromString(''), {} ) class TestCliList(CliMixin): def test_list(self): """ Test "slapos list" command output. 
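        getOpenOrderDict() is patched to return two fake instances; the mocked
        logger must then receive one '<title> <software release URL>' line per
        instance.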
""" return_value = { 'instance1': slapos.slap.SoftwareInstance(_title='instance1', _software_release_url='SR1'), 'instance2': slapos.slap.SoftwareInstance(_title='instance2', _software_release_url='SR2'), } with patch.object(slapos.slap.slap, 'getOpenOrderDict', return_value=return_value) as _: slapos.cli.list.do_list(self.logger, None, self.local) self.logger.info.assert_any_call('%s %s', 'instance1', 'SR1') self.logger.info.assert_any_call('%s %s', 'instance2', 'SR2') def test_emptyList(self): with patch.object(slapos.slap.slap, 'getOpenOrderDict', return_value={}) as _: slapos.cli.list.do_list(self.logger, None, self.local) self.logger.info.assert_called_once_with('No existing service.') @patch.object(slapos.slap.slap, 'registerOpenOrder', return_value=slapos.slap.OpenOrder()) class TestCliInfo(CliMixin): def test_info(self, _): """ Test "slapos info" command output. """ setattr(self.conf, 'reference', 'instance1') instance = slapos.slap.SoftwareInstance( _software_release_url='SR1', _requested_state = 'mystate', _connection_dict = {'myconnectionparameter': 'value1'}, _parameter_dict = {'myinstanceparameter': 'value2'} ) with patch.object(slapos.slap.OpenOrder, 'getInformation', return_value=instance): slapos.cli.info.do_info(self.logger, self.conf, self.local) self.logger.info.assert_any_call(pprint.pformat(instance._parameter_dict)) self.logger.info.assert_any_call('Software Release URL: %s', instance._software_release_url) self.logger.info.assert_any_call('Instance state: %s', instance._requested_state) self.logger.info.assert_any_call(pprint.pformat(instance._parameter_dict)) self.logger.info.assert_any_call(pprint.pformat(instance._connection_dict)) def test_unknownReference(self, _): """ Test "slapos info" command output in case reference of service is not known. """ setattr(self.conf, 'reference', 'instance1') with patch.object(slapos.slap.OpenOrder, 'getInformation', side_effect=raiseNotFoundError): slapos.cli.info.do_info(self.logger, self.conf, self.local) self.logger.warning.assert_called_once_with('Instance %s does not exist.', self.conf.reference) @patch.object(supervisor.supervisorctl, 'main') class TestCliSupervisorctl(CliMixin): def test_allow_supervisord_launch(self, _): """ Test that "slapos node supervisorctl" tries to launch supervisord """ instance_root = '/foo/bar' with patch.object(slapos.grid.svcbackend, 'launchSupervisord') as launchSupervisord: slapos.cli.supervisorctl.do_supervisorctl(self.logger, instance_root, ['status'], False) launchSupervisord.assert_any_call(instance_root=instance_root, logger=self.logger) def test_forbid_supervisord_launch(self, _): """ Test that "slapos node supervisorctl" does not try to launch supervisord """ instance_root = '/foo/bar' with patch.object(slapos.grid.svcbackend, 'launchSupervisord') as launchSupervisord: slapos.cli.supervisorctl.do_supervisorctl(self.logger, instance_root, ['status'], True) self.assertFalse(launchSupervisord.called) slapos.core-1.3.18/slapos/tests/distribution.py0000644000000000000000000000653612752436135021550 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import unittest from slapos.grid import distribution class TestDebianize(unittest.TestCase): def test_debian_major(self): """ On debian, we only care about major release. All the other tuples are unchanged. """ for provided, expected in [ (('CentOS', '6.3', 'Final'), None), (('Ubuntu', '12.04', 'precise'), None), (('Ubuntu', '13.04', 'raring'), None), (('Fedora', '17', 'Beefy Miracle'), None), (('debian', '6.0.6', ''), ('debian', '6', '')), (('debian', '7.0', ''), ('debian', '7', '')), ]: self.assertEqual(distribution._debianize(provided), expected or provided) class TestOSMatches(unittest.TestCase): def test_centos(self): self.assertFalse(distribution.os_matches(('CentOS', '6.3', 'Final'), ('Ubuntu', '13.04', 'raring'))) self.assertFalse(distribution.os_matches(('CentOS', '6.3', 'Final'), ('debian', '6.3', ''))) def test_ubuntu(self): self.assertFalse(distribution.os_matches(('Ubuntu', '12.04', 'precise'), ('Ubuntu', '13.04', 'raring'))) self.assertTrue(distribution.os_matches(('Ubuntu', '13.04', 'raring'), ('Ubuntu', '13.04', 'raring'))) self.assertTrue(distribution.os_matches(('Ubuntu', '12.04', 'precise'), ('Ubuntu', '12.04', 'precise'))) def test_debian(self): self.assertFalse(distribution.os_matches(('debian', '6.0.6', ''), ('debian', '7.0', ''))) self.assertTrue(distribution.os_matches(('debian', '6.0.6', ''), ('debian', '6.0.5', ''))) self.assertTrue(distribution.os_matches(('debian', '6.0.6', ''), ('debian', '6.1', ''))) slapos.core-1.3.18/slapos/tests/interface.py0000644000000000000000000001006012752436135020754 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import unittest from zope.interface.verify import verifyClass import zope.interface import types from slapos import slap def getOnlyImplementationAssertionMethod(klass, method_list): """Returns method which verifies if a klass only implements its interfaces""" def testMethod(self): implemented_method_list = [x for x in dir(klass) \ if ((not x.startswith('_')) and callable(getattr(klass, x)))] for interface_method in method_list: if interface_method in implemented_method_list: implemented_method_list.remove(interface_method) if implemented_method_list: raise AssertionError("Unexpected methods %s" % implemented_method_list) return testMethod def getImplementationAssertionMethod(klass, interface): """Returns method which verifies if interface is properly implemented by klass""" def testMethod(self): verifyClass(interface, klass) return testMethod def getDeclarationAssertionMethod(klass): """Returns method which verifies if klass is declaring interface""" def testMethod(self): if len(list(zope.interface.implementedBy(klass))) == 0: self.fail('%s class does not respect its interface(s).' % klass.__name__) return testMethod def generateTestMethodListOnClass(klass, module): """Generate test method on klass""" for class_id in dir(module): implementing_class = getattr(module, class_id) if type(implementing_class) not in (types.ClassType, types.TypeType): continue # add methods to assert that publicly available classes are defining # interfaces method_name = 'test_%s_declares_interface' % (class_id,) setattr(klass, method_name, getDeclarationAssertionMethod( implementing_class)) implemented_method_list = [] for interface in list(zope.interface.implementedBy(implementing_class)): # for each interface which class declares add a method which verify # implementation method_name = 'test_%s_implements_%s' % (class_id, interface.__identifier__) setattr(klass, method_name, getImplementationAssertionMethod( implementing_class, interface)) for interface_klass in interface.__iro__: implemented_method_list.extend(interface_klass.names()) # for each interface which class declares, check that no other method are # available method_name = 'test_%s_only_implements' % class_id setattr(klass, method_name, getOnlyImplementationAssertionMethod( implementing_class, implemented_method_list)) class TestInterface(unittest.TestCase): """Tests all publicly available classes of slap Classes are checked *if* they implement interface and if the implementation is correct. """ # add methods to test class generateTestMethodListOnClass(TestInterface, slap) if __name__ == '__main__': unittest.main() slapos.core-1.3.18/slapos/tests/collect.py0000644000000000000000000006364512752436135020462 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2014 Vifib SARL and Contributors. All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import glob import unittest import shutil import tarfile import tempfile import slapos.slap import psutil from time import strftime from slapos.collect import entity, snapshot, db, reporter from slapos.cli.entry import SlapOSApp from ConfigParser import ConfigParser class FakeDatabase(object): def __init__(self): self.invoked_method_list = [] def connect(self): self.invoked_method_list.append(("connect", "")) def close(self): self.invoked_method_list.append(("close", "")) def commit(self): self.invoked_method_list.append(("commit", "")) def insertUserSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertUserSnapshot", (args, kw))) def inserFolderSnapshot(self, *args, **kw): self.invoked_method_list.append(("inserFolderSnapshot", (args, kw))) def insertSystemSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertSystemSnapshot", (args, kw))) def insertComputerSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertComputerSnapshot", (args, kw))) def insertDiskPartitionSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertDiskPartitionSnapshot", (args, kw))) def insertTemperatureSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertTemperatureSnapshot", (args, kw))) def insertHeatingSnapshot(self, *args, **kw): self.invoked_method_list.append(("insertHeatingSnapshot", (args, kw))) class FakeDatabase2(FakeDatabase): def select(self, *args, **kw): self.invoked_method_list.append(("select", (args, kw))) return [] class TestCollectDatabase(unittest.TestCase): def setUp(self): self.instance_root = tempfile.mkdtemp() def tearDown(self): if os.path.exists(self.instance_root): shutil.rmtree(self.instance_root) def test_database_bootstrap(self): self.assertFalse(os.path.exists( "%s/collector.db" % self.instance_root )) database = db.Database(self.instance_root) database.connect() try: self.assertEquals( [u'user', u'folder', u'computer', u'system', u'disk', u'temperature', u'heating'], database.getTableList()) finally: database.close() self.assertTrue(os.path.exists( "%s/collector.db" % self.instance_root )) def test_insert_user_snapshot(self): database = db.Database(self.instance_root) database.connect() try: database.insertUserSnapshot( 'fakeuser0', 10, '10-12345', '0.1', '10.0', '1', '10.0', '10.0', '0.1', '0.1', 'DATE', 'TIME') database.commit() self.assertEquals([i for i in 
database.select('user')], [(u'fakeuser0', 10.0, u'10-12345', 0.1, 10.0, 1.0, 10.0, 10.0, 0.1, 0.1, u'DATE', u'TIME', 0)]) finally: database.close() def test_insert_folder_snapshot(self): database = db.Database(self.instance_root) database.connect() try: database.inserFolderSnapshot( 'fakeuser0', '0.1', 'DATE', 'TIME') database.commit() self.assertEquals([i for i in database.select('folder')], [(u'fakeuser0', 0.1, u'DATE', u'TIME', 0)]) finally: database.close() def test_insert_computer_snapshot(self): database = db.Database(self.instance_root) database.connect() try: database.insertComputerSnapshot( '1', '0', '0', '100', '0', '/dev/sdx1', 'DATE', 'TIME') database.commit() self.assertEquals([i for i in database.select('computer')], [(1.0, 0.0, u'0', 100.0, u'0', u'/dev/sdx1', u'DATE', u'TIME', 0)]) finally: database.close() def test_insert_disk_partition_snapshot(self): database = db.Database(self.instance_root) database.connect() try: database.insertDiskPartitionSnapshot( '/dev/sdx1', '10', '20', '/mnt', 'DATE', 'TIME') database.commit() self.assertEquals([i for i in database.select('disk')], [(u'/dev/sdx1', u'10', u'20', u'/mnt', u'DATE', u'TIME', 0)]) finally: database.close() def test_insert_system_snapshot(self): database = db.Database(self.instance_root) database.connect() try: database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', 'DATE', 'TIME') database.commit() self.assertEquals([i for i in database.select('system')], [(0.1, 10.0, 100.0, 100.0, 10.0, 1.0, 2.0, 12.0, 1.0, 1.0, u'DATE', u'TIME', 0)]) finally: database.close() def test_date_scope(self): database = db.Database(self.instance_root) database.connect() try: database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', 'EXPECTED-DATE', 'TIME') database.commit() self.assertEquals([i for i in database.getDateScopeList()], [('EXPECTED-DATE', 1)]) self.assertEquals([i for i in \ database.getDateScopeList(ignore_date='EXPECTED-DATE')], []) self.assertEquals([i for i in \ database.getDateScopeList(reported=1)], []) finally: database.close() def test_garbage_collection_date_list(self): database = db.Database(self.instance_root) self.assertEquals(len(database._getGarbageCollectionDateList()), 3) self.assertEquals(len(database._getGarbageCollectionDateList(1)), 1) self.assertEquals(len(database._getGarbageCollectionDateList(0)), 0) self.assertEquals(database._getGarbageCollectionDateList(1), [strftime("%Y-%m-%d")]) def test_garbage(self): database = db.Database(self.instance_root) database.connect() database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', '1983-01-10', 'TIME') database.insertDiskPartitionSnapshot( '/dev/sdx1', '10', '20', '/mnt', '1983-01-10', 'TIME') database.insertComputerSnapshot( '1', '0', '0', '100', '0', '/dev/sdx1', '1983-01-10', 'TIME') database.commit() database.markDayAsReported(date_scope="1983-01-10", table_list=database.table_list) database.commit() self.assertEquals(len([x for x in database.select('system')]), 1) self.assertEquals(len([x for x in database.select('computer')]), 1) self.assertEquals(len([x for x in database.select('disk')]), 1) database.close() database.garbageCollect() database.connect() self.assertEquals(len([x for x in database.select('system')]), 0) self.assertEquals(len([x for x in database.select('computer')]), 0) self.assertEquals(len([x for x in database.select('disk')]), 0) def test_mark_day_as_reported(self): database = db.Database(self.instance_root) 
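    # Two system snapshots are inserted for different dates; after
    # markDayAsReported() only the row for the targeted date should have its
    # trailing "reported" flag set to 1, while the other row must stay at 0.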
database.connect() try: database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', 'EXPECTED-DATE', 'TIME') database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', 'NOT-EXPECTED-DATE', 'TIME') database.commit() self.assertEquals([i for i in database.select('system')], [(0.1, 10.0, 100.0, 100.0, 10.0, 1.0, 2.0, 12.0, 1.0, 1.0, u'EXPECTED-DATE', u'TIME', 0), (0.1, 10.0, 100.0, 100.0, 10.0, 1.0, 2.0, 12.0, 1.0, 1.0, u'NOT-EXPECTED-DATE', u'TIME', 0)]) database.markDayAsReported(date_scope="EXPECTED-DATE", table_list=["system"]) database.commit() self.assertEquals([i for i in database.select('system')], [(0.1, 10.0, 100.0, 100.0, 10.0, 1.0, 2.0, 12.0, 1.0, 1.0, u'EXPECTED-DATE', u'TIME', 1), (0.1, 10.0, 100.0, 100.0, 10.0, 1.0, 2.0, 12.0, 1.0, 1.0, u'NOT-EXPECTED-DATE', u'TIME', 0)]) finally: database.close() class TestCollectReport(unittest.TestCase): def setUp(self): self.instance_root = tempfile.mkdtemp() def tearDown(self): if os.path.exists(self.instance_root): shutil.rmtree(self.instance_root) def test_raw_csv_report(self): database = db.Database(self.instance_root) database.connect() database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', '1983-01-10', 'TIME') database.insertDiskPartitionSnapshot( '/dev/sdx1', '10', '20', '/mnt', '1983-01-10', 'TIME') database.insertComputerSnapshot( '1', '0', '0', '100', '0', '/dev/sdx1', '1983-01-10', 'TIME') database.commit() database.close() reporter.RawCSVDumper(database).dump(self.instance_root) self.assertTrue(os.path.exists("%s/1983-01-10" % self.instance_root)) csv_path_list = ['%s/1983-01-10/dump_disk.csv' % self.instance_root, '%s/1983-01-10/dump_computer.csv' % self.instance_root, '%s/1983-01-10/dump_user.csv' % self.instance_root, '%s/1983-01-10/dump_folder.csv' % self.instance_root, '%s/1983-01-10/dump_heating.csv' % self.instance_root, '%s/1983-01-10/dump_temperature.csv' % self.instance_root, '%s/1983-01-10/dump_system.csv' % self.instance_root] self.assertEquals(set(glob.glob("%s/1983-01-10/*.csv" % self.instance_root)), set(csv_path_list)) def test_system_csv_report(self): database = db.Database(self.instance_root) database.connect() database.insertSystemSnapshot("0.1", '10.0', '100.0', '100.0', '10.0', '1', '2', '12.0', '1', '1', strftime("%Y-%m-%d"), 'TIME') database.insertDiskPartitionSnapshot( '/dev/sdx1', '10', '20', '/mnt', strftime("%Y-%m-%d"), 'TIME') database.insertComputerSnapshot( '1', '0', '0', '100', '0', '/dev/sdx1', strftime("%Y-%m-%d"), 'TIME') database.commit() database.close() reporter.SystemCSVReporterDumper(database).dump(self.instance_root) csv_path_list = ['%s/system_memory_used.csv' % self.instance_root, '%s/system_cpu_percent.csv' % self.instance_root, '%s/system_net_out_bytes.csv' % self.instance_root, '%s/system_net_in_bytes.csv' % self.instance_root, '%s/system_disk_memory_free__dev_sdx1.csv' % self.instance_root, '%s/system_net_out_errors.csv' % self.instance_root, '%s/system_disk_memory_used__dev_sdx1.csv' % self.instance_root, '%s/system_net_out_dropped.csv' % self.instance_root, '%s/system_memory_free.csv' % self.instance_root, '%s/system_net_in_errors.csv' % self.instance_root, '%s/system_net_in_dropped.csv' % self.instance_root, '%s/system_loadavg.csv' % self.instance_root] self.assertEquals(set(glob.glob("%s/*.csv" % self.instance_root)), set(csv_path_list)) def test_compress_log_directory(self): log_directory = "%s/test_compress" % self.instance_root dump_folder = 
"%s/1990-01-01" % log_directory if not os.path.exists(log_directory): os.mkdir(log_directory) if os.path.exists(dump_folder): shutil.rmtree(dump_folder) os.mkdir("%s/1990-01-01" % log_directory) with open("%s/test.txt" % dump_folder, "w") as dump_file: dump_file.write("hi") dump_file.close() reporter.compressLogFolder(log_directory) self.assertFalse(os.path.exists(dump_folder)) self.assertTrue(os.path.exists("%s.tar.gz" % dump_folder)) with tarfile.open("%s.tar.gz" % dump_folder) as tf: self.assertEquals(tf.getmembers()[0].name, "1990-01-01") self.assertEquals(tf.getmembers()[1].name, "1990-01-01/test.txt") self.assertEquals(tf.extractfile(tf.getmembers()[1]).read(), 'hi') class TestCollectSnapshot(unittest.TestCase): def setUp(self): self.slap = slapos.slap.slap() self.app = SlapOSApp() self.temp_dir = tempfile.mkdtemp() os.environ["HOME"] = self.temp_dir self.instance_root = tempfile.mkdtemp() self.software_root = tempfile.mkdtemp() if os.path.exists(self.temp_dir): shutil.rmtree(self.temp_dir) def test_process_snapshot(self): process = psutil.Process(os.getpid()) process_snapshot = snapshot.ProcessSnapshot(process) self.assertNotEquals(process_snapshot.username, None) self.assertEquals(int(process_snapshot.pid), os.getpid()) self.assertEquals(int(process_snapshot.process.split("-")[0]), os.getpid()) self.assertNotEquals(process_snapshot.cpu_percent , None) self.assertNotEquals(process_snapshot.cpu_time , None) self.assertNotEquals(process_snapshot.cpu_num_threads, None) self.assertNotEquals(process_snapshot.memory_percent , None) self.assertNotEquals(process_snapshot.memory_rss, None) self.assertNotEquals(process_snapshot.io_rw_counter, None) self.assertNotEquals(process_snapshot.io_cycles_counter, None) def test_folder_size_snapshot(self): disk_snapshot = snapshot.FolderSizeSnapshot(self.instance_root) self.assertEqual(disk_snapshot.disk_usage, 0) for i in range(0, 10): folder = 'folder%s' % i os.mkdir(os.path.join(self.instance_root, folder)) with open(os.path.join(self.instance_root, folder, 'toto'), 'w') as f: f.write('toto text') disk_snapshot.update_folder_size() self.assertNotEquals(disk_snapshot.disk_usage, 0) pid_file = os.path.join(self.instance_root, 'disk_snap.pid') disk_snapshot = snapshot.FolderSizeSnapshot(self.instance_root, pid_file) disk_snapshot.update_folder_size() self.assertNotEquals(disk_snapshot.disk_usage, 0) pid_file = os.path.join(self.instance_root, 'disk_snap.pid') disk_snapshot = snapshot.FolderSizeSnapshot(self.instance_root, pid_file, use_quota=True) disk_snapshot.update_folder_size() self.assertNotEquals(disk_snapshot.disk_usage, 0) def test_process_snapshot_broken_process(self): self.assertRaises(AssertionError, snapshot.ProcessSnapshot, None) def test_computer_snapshot(self): computer_snapshot = snapshot.ComputerSnapshot() self.assertNotEquals(computer_snapshot.cpu_num_core , None) self.assertNotEquals(computer_snapshot.cpu_frequency , None) self.assertNotEquals(computer_snapshot.cpu_type , None) self.assertNotEquals(computer_snapshot.memory_size , None) self.assertNotEquals(computer_snapshot.memory_type , None) self.assertEquals(type(computer_snapshot.system_snapshot), snapshot.SystemSnapshot) self.assertNotEquals(computer_snapshot.disk_snapshot_list, []) self.assertNotEquals(computer_snapshot.partition_list, []) self.assertEquals(type(computer_snapshot.disk_snapshot_list[0]), snapshot.DiskPartitionSnapshot) def test_system_snapshot(self): system_snapshot = snapshot.SystemSnapshot() self.assertNotEquals(system_snapshot.memory_used , None) 
self.assertNotEquals(system_snapshot.memory_free , None) self.assertNotEquals(system_snapshot.memory_percent , None) self.assertNotEquals(system_snapshot.cpu_percent , None) self.assertNotEquals(system_snapshot.load , None) self.assertNotEquals(system_snapshot.net_in_bytes , None) self.assertNotEquals(system_snapshot.net_in_errors, None) self.assertNotEquals(system_snapshot.net_in_dropped , None) self.assertNotEquals(system_snapshot.net_out_bytes , None) self.assertNotEquals(system_snapshot.net_out_errors, None) self.assertNotEquals(system_snapshot.net_out_dropped , None) class TestCollectEntity(unittest.TestCase): def setUp(self): self.temp_dir = tempfile.mkdtemp() os.environ["HOME"] = self.temp_dir self.instance_root = tempfile.mkdtemp() self.software_root = tempfile.mkdtemp() if os.path.exists(self.temp_dir): shutil.rmtree(self.temp_dir) def tearDown(self): pass def getFakeUser(self, disk_snapshot_params={}): os.mkdir("%s/fakeuser0" % self.instance_root) return entity.User("fakeuser0", "%s/fakeuser0" % self.instance_root, disk_snapshot_params ) def test_get_user_list(self): config = ConfigParser() config.add_section('slapformat') config.set('slapformat', 'partition_amount', '3') config.set('slapformat', 'user_base_name', 'slapuser') config.set('slapformat', 'partition_base_name', 'slappart') config.add_section('slapos') config.set('slapos', 'instance_root', self.instance_root) user_dict = entity.get_user_list(config) username_list = ['slapuser0', 'slapuser1', 'slapuser2'] self.assertEquals(username_list, user_dict.keys()) for name in username_list: self.assertEquals(user_dict[name].name, name) self.assertEquals(user_dict[name].snapshot_list, []) expected_path = "%s/slappart%s" % (self.instance_root, name.strip("slapuser")) self.assertEquals(user_dict[name].path, expected_path) def test_user_add_snapshot(self): user = self.getFakeUser() self.assertEquals(user.snapshot_list, []) user.append("SNAPSHOT") self.assertEquals(user.snapshot_list, ["SNAPSHOT"]) def test_user_save(self): disk_snapshot_params = {'enable': False} user = self.getFakeUser(disk_snapshot_params) process = psutil.Process(os.getpid()) user.append(snapshot.ProcessSnapshot(process)) database = FakeDatabase() user.save(database, "DATE", "TIME") self.assertEquals(database.invoked_method_list[0], ("connect", "")) self.assertEquals(database.invoked_method_list[1][0], "insertUserSnapshot") self.assertEquals(database.invoked_method_list[1][1][0], ("fakeuser0",)) self.assertEquals(database.invoked_method_list[1][1][1].keys(), ['cpu_time', 'cpu_percent', 'process', 'memory_rss', 'pid', 'memory_percent', 'io_rw_counter', 'insertion_date', 'insertion_time', 'io_cycles_counter', 'cpu_num_threads']) self.assertEquals(database.invoked_method_list[2], ("commit", "")) self.assertEquals(database.invoked_method_list[3], ("close", "")) def test_user_save_disk_snapshot(self): disk_snapshot_params = {'enable': True, 'testing': True} user = self.getFakeUser(disk_snapshot_params) process = psutil.Process(os.getpid()) user.append(snapshot.ProcessSnapshot(process)) database = FakeDatabase2() user.save(database, "DATE", "TIME") self.assertEquals(database.invoked_method_list[0], ("connect", "")) self.assertEquals(database.invoked_method_list[1][0], "insertUserSnapshot") self.assertEquals(database.invoked_method_list[1][1][0], ("fakeuser0",)) self.assertEquals(database.invoked_method_list[1][1][1].keys(), ['cpu_time', 'cpu_percent', 'process', 'memory_rss', 'pid', 'memory_percent', 'io_rw_counter', 'insertion_date', 'insertion_time', 
'io_cycles_counter', 'cpu_num_threads']) self.assertEquals(database.invoked_method_list[2], ("commit", "")) self.assertEquals(database.invoked_method_list[3], ("close", "")) self.assertEquals(database.invoked_method_list[4], ("connect", "")) self.assertEquals(database.invoked_method_list[5][0], "inserFolderSnapshot") self.assertEquals(database.invoked_method_list[5][1][0], ("fakeuser0",)) self.assertEquals(database.invoked_method_list[5][1][1].keys(), ['insertion_date', 'disk_usage', 'insertion_time']) self.assertEquals(database.invoked_method_list[6], ("commit", "")) self.assertEquals(database.invoked_method_list[7], ("close", "")) def test_user_save_disk_snapshot_cycle(self): disk_snapshot_params = {'enable': True, 'time_cycle': 3600, 'testing': True} user = self.getFakeUser(disk_snapshot_params) process = psutil.Process(os.getpid()) user.append(snapshot.ProcessSnapshot(process)) database = FakeDatabase2() user.save(database, "DATE", "TIME") self.assertEquals(database.invoked_method_list[0], ("connect", "")) self.assertEquals(database.invoked_method_list[1][0], "insertUserSnapshot") self.assertEquals(database.invoked_method_list[1][1][0], ("fakeuser0",)) self.assertEquals(database.invoked_method_list[1][1][1].keys(), ['cpu_time', 'cpu_percent', 'process', 'memory_rss', 'pid', 'memory_percent', 'io_rw_counter', 'insertion_date', 'insertion_time', 'io_cycles_counter', 'cpu_num_threads']) self.assertEquals(database.invoked_method_list[2], ("commit", "")) self.assertEquals(database.invoked_method_list[3], ("close", "")) self.assertEquals(database.invoked_method_list[4], ("connect", "")) self.assertEquals(database.invoked_method_list[5][0], "select") self.assertEquals(database.invoked_method_list[5][1][0], ()) self.assertEquals(database.invoked_method_list[5][1][1].keys(), ['table', 'where', 'limit', 'order', 'columns']) self.assertEquals(database.invoked_method_list[6][0], "inserFolderSnapshot") self.assertEquals(database.invoked_method_list[6][1][0], ("fakeuser0",)) self.assertEquals(database.invoked_method_list[6][1][1].keys(), ['insertion_date', 'disk_usage', 'insertion_time']) self.assertEquals(database.invoked_method_list[7], ("commit", "")) self.assertEquals(database.invoked_method_list[8], ("close", "")) def test_computer_entity(self): computer = entity.Computer(snapshot.ComputerSnapshot()) database = FakeDatabase() computer.save(database, "DATE", "TIME") self.assertEquals(database.invoked_method_list[0], ("connect", "")) self.assertEquals(database.invoked_method_list[1][0], "insertComputerSnapshot") self.assertEquals(database.invoked_method_list[1][1][0], ()) self.assertEquals(database.invoked_method_list[1][1][1].keys(), ['insertion_time', 'insertion_date', 'cpu_num_core', 'partition_list', 'cpu_frequency', 'memory_size', 'cpu_type', 'memory_type']) self.assertEquals(database.invoked_method_list[2][0], "insertSystemSnapshot") self.assertEquals(database.invoked_method_list[2][1][0], ()) self.assertEquals(set(database.invoked_method_list[2][1][1].keys()), set([ 'memory_used', 'cpu_percent', 'insertion_date', 'insertion_time', 'loadavg', 'memory_free', 'net_in_bytes', 'net_in_dropped', 'net_in_errors', 'net_out_bytes', 'net_out_dropped', 'net_out_errors'])) self.assertEquals(database.invoked_method_list[3][0], "insertDiskPartitionSnapshot") self.assertEquals(database.invoked_method_list[3][1][0], ()) self.assertEquals(set(database.invoked_method_list[3][1][1].keys()), set([ 'used', 'insertion_date', 'partition', 'free', 'mountpoint', 'insertion_time' ])) 
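    # Regardless of how many snapshots were written, Computer.save() must end
    # its call sequence with a commit followed by closing the database
    # connection, as checked below.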
self.assertEquals(database.invoked_method_list[-2], ("commit", "")) self.assertEquals(database.invoked_method_list[-1], ("close", "")) slapos.core-1.3.18/slapos/tests/slapformat.py0000644000000000000000000006113412773033553021174 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import slapos.format import slapos.util import unittest import netaddr import socket # for mocking import grp import netifaces import os import pwd import time USER_LIST = [] GROUP_LIST = [] INTERFACE_DICT = {} class FakeConfig: pass class TestLoggerHandler(logging.Handler): def __init__(self, *args, **kwargs): self.bucket = [] logging.Handler.__init__(self, *args, **kwargs) def emit(self, record): self.bucket.append(record.msg) class FakeCallAndRead: def __init__(self): self.external_command_list = [] def __call__(self, argument_list, raise_on_error=True): retval = 0, 'UP' global INTERFACE_DICT if 'useradd' in argument_list: print argument_list global USER_LIST username = argument_list[-1] if username == '-r': username = argument_list[-2] USER_LIST.append(username) elif 'groupadd' in argument_list: global GROUP_LIST GROUP_LIST.append(argument_list[-1]) elif argument_list[:3] == ['ip', 'addr', 'add']: ip, interface = argument_list[3], argument_list[5] if ':' not in ip: netmask = netaddr.strategy.ipv4.int_to_str( netaddr.strategy.ipv4.prefix_to_netmask[int(ip.split('/')[1])]) ip = ip.split('/')[0] INTERFACE_DICT[interface][socket.AF_INET].append({'addr': ip, 'netmask': netmask}) else: netmask = netaddr.strategy.ipv6.int_to_str( netaddr.strategy.ipv6.prefix_to_netmask[int(ip.split('/')[1])]) ip = ip.split('/')[0] INTERFACE_DICT[interface][socket.AF_INET6].append({'addr': ip, 'netmask': netmask}) # stabilise by mangling ip to just ip string argument_list[3] = 'ip/%s' % netmask elif argument_list[:3] == ['ip', 'addr', 'list'] or \ argument_list[:4] == ['ip', '-6', 'addr', 'list']: retval = 0, str(INTERFACE_DICT) elif argument_list[:3] == ['ip', 'route', 'show']: retval = 0, 'OK' elif argument_list[:3] == ['route', 'add', '-host']: retval = 0, 'OK' self.external_command_list.append(' '.join(argument_list)) return retval class LoggableWrapper: def __init__(self, logger, name): self.__logger = logger self.__name = 
name def __call__(self, *args, **kwargs): arg_list = [repr(x) for x in args] + [ '%s=%r' % (x, y) for x, y in kwargs.iteritems()] self.__logger.debug('%s(%s)' % (self.__name, ', '.join(arg_list))) class TimeMock: @classmethod def sleep(self, seconds): return class GrpMock: @classmethod def getgrnam(self, name): global GROUP_LIST if name in GROUP_LIST: return True raise KeyError class PwdMock: @classmethod def getpwnam(self, name): global USER_LIST if name in USER_LIST: class result: pw_uid = 0 pw_gid = 0 return result raise KeyError class NetifacesMock: @classmethod def ifaddresses(self, name): global INTERFACE_DICT if name in INTERFACE_DICT: return INTERFACE_DICT[name] raise ValueError @classmethod def interfaces(self): global INTERFACE_DICT return INTERFACE_DICT.keys() class SlaposUtilMock: @classmethod def chownDirectory(*args, **kw): pass class SlapformatMixin(unittest.TestCase): # keep big diffs maxDiff = None def patchNetifaces(self): self.netifaces = NetifacesMock() self.saved_netifaces = {} for fake in vars(NetifacesMock): self.saved_netifaces[fake] = getattr(netifaces, fake, None) setattr(netifaces, fake, getattr(self.netifaces, fake)) def restoreNetifaces(self): for name, original_value in self.saved_netifaces.items(): setattr(netifaces, name, original_value) del self.saved_netifaces def patchPwd(self): self.saved_pwd = {} for fake in vars(PwdMock): self.saved_pwd[fake] = getattr(pwd, fake, None) setattr(pwd, fake, getattr(PwdMock, fake)) def restorePwd(self): for name, original_value in self.saved_pwd.items(): setattr(pwd, name, original_value) del self.saved_pwd def patchTime(self): self.saved_time = {} for fake in vars(TimeMock): self.saved_time[fake] = getattr(time, fake, None) setattr(time, fake, getattr(TimeMock, fake)) def restoreTime(self): for name, original_value in self.saved_time.items(): setattr(time, name, original_value) del self.saved_time def patchGrp(self): self.saved_grp = {} for fake in vars(GrpMock): self.saved_grp[fake] = getattr(grp, fake, None) setattr(grp, fake, getattr(GrpMock, fake)) def restoreGrp(self): for name, original_value in self.saved_grp.items(): setattr(grp, name, original_value) del self.saved_grp def patchOs(self, logger): self.saved_os = {} for fake in ['mkdir', 'chown', 'chmod', 'makedirs']: self.saved_os[fake] = getattr(os, fake, None) f = LoggableWrapper(logger, fake) setattr(os, fake, f) def restoreOs(self): for name, original_value in self.saved_os.items(): setattr(os, name, original_value) del self.saved_os def patchSlaposUtil(self): self.saved_slapos_util = {} for fake in ['chownDirectory']: self.saved_slapos_util[fake] = getattr(slapos.util, fake, None) setattr(slapos.util, fake, getattr(SlaposUtilMock, fake)) def restoreSlaposUtil(self): for name, original_value in self.saved_slapos_util.items(): setattr(slapos.util, name, original_value) del self.saved_slapos_util def setUp(self): config = FakeConfig() config.dry_run = True config.verbose = True logger = logging.getLogger('testcatch') logger.setLevel(logging.DEBUG) self.test_result = TestLoggerHandler() logger.addHandler(self.test_result) config.logger = logger self.partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) global USER_LIST USER_LIST = [] global GROUP_LIST GROUP_LIST = [] global INTERFACE_DICT INTERFACE_DICT = {} self.real_callAndRead = slapos.format.callAndRead self.fakeCallAndRead = FakeCallAndRead() slapos.format.callAndRead = self.fakeCallAndRead self.patchOs(logger) self.patchGrp() self.patchTime() 
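    # pwd, netifaces and slapos.util are patched as well, so user lookups,
    # network interface queries and ownership changes performed by
    # slapos.format never touch the real system during the tests.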
self.patchPwd() self.patchNetifaces() self.patchSlaposUtil() def tearDown(self): self.restoreOs() self.restoreGrp() self.restoreTime() self.restorePwd() self.restoreNetifaces() self.restoreSlaposUtil() slapos.format.callAndRead = self.real_callAndRead class TestComputer(SlapformatMixin): def test_getAddress_empty_computer(self): computer = slapos.format.Computer('computer') self.assertEqual(computer.getAddress(), {'netmask': None, 'addr': None}) @unittest.skip("Not implemented") def test_construct_empty(self): computer = slapos.format.Computer('computer') computer.construct() @unittest.skip("Not implemented") def test_construct_empty_prepared(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' computer.construct() self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chown('/software_root', 0, 0)", "chmod('/software_root', 493)"], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', 'groupadd slapsoft', 'useradd -d /software_root -g slapsoft slapsoft -r' ], self.fakeCallAndRead.external_command_list) def test_construct_empty_prepared_no_alter_user(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' computer.construct(alter_user=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 493)"], self.test_result.bucket) self.assertEqual(['ip addr list bridge'], self.fakeCallAndRead.external_command_list) @unittest.skip("Not implemented") def test_construct_empty_prepared_no_alter_network(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' computer.construct(alter_network=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chown('/software_root', 0, 0)", "chmod('/software_root', 493)"], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', 'groupadd slapsoft', 'useradd -d /software_root -g slapsoft slapsoft -r' ], self.fakeCallAndRead.external_command_list) def test_construct_empty_prepared_no_alter_network_user(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' computer.construct(alter_network=False, alter_user=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 493)"], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', ], self.fakeCallAndRead.external_command_list) @unittest.skip("Not implemented") def test_construct_prepared(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) 
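    # Attach a tap interface to the partition so that construct() is also
    # expected to create the tap and plug it into the bridge (see the
    # tunctl/brctl commands asserted below).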
partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['bridge'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } computer.construct() self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chown('/software_root', 0, 0)", "chmod('/software_root', 493)", "mkdir('/instance_root/partition', 488)", "chown('/instance_root/partition', 0, 0)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', 'groupadd slapsoft', 'useradd -d /software_root -g slapsoft slapsoft -r', 'groupadd testuser', 'useradd -d /instance_root/partition -g testuser -G slapsoft testuser -r', 'tunctl -t tap -u testuser', 'ip link set tap up', 'brctl show', 'brctl addif bridge tap', 'ip addr add ip/255.255.255.255 dev bridge', 'ip addr list bridge', 'ip addr add ip/ffff:ffff:ffff:ffff:: dev bridge', 'ip addr list bridge', ], self.fakeCallAndRead.external_command_list) def test_construct_prepared_no_alter_user(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) global USER_LIST USER_LIST = ['testuser'] partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['bridge'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } computer.construct(alter_user=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 493)", "mkdir('/instance_root/partition', 488)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', 'tunctl -t tap -u testuser', 'ip link set tap up', 'brctl show', 'brctl addif bridge tap', 'ip addr add ip/255.255.255.255 dev bridge', # 'ip addr list bridge', 'ip addr add ip/ffff:ffff:ffff:ffff:: dev bridge', 'ip -6 addr list bridge', ], self.fakeCallAndRead.external_command_list) def test_construct_prepared_tap_no_alter_user(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='iface', ipv4_local_network='127.0.0.1/16'), tap_gateway_interface='eth1') computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) global USER_LIST USER_LIST = ['testuser'] partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['iface'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } INTERFACE_DICT['eth1'] = { socket.AF_INET: [{'addr': '10.8.0.1', 'broadcast': '10.8.0.254', 'netmask': '255.255.255.0'}] } computer.construct(alter_user=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 
493)", "mkdir('/instance_root/partition', 488)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ 'ip addr list iface', 'tunctl -t tap -u testuser', 'ip link set tap up', 'ip route show 10.8.0.2', 'route add -host 10.8.0.2 dev tap', 'ip addr add ip/255.255.255.255 dev iface', 'ip addr add ip/ffff:ffff:ffff:ffff:: dev iface', 'ip -6 addr list iface' ], self.fakeCallAndRead.external_command_list) self.assertEqual(partition.tap.ipv4_addr, '10.8.0.2') self.assertEqual(partition.tap.ipv4_netmask, '255.255.255.0') self.assertEqual(partition.tap.ipv4_gateway, '10.8.0.1') self.assertEqual(partition.tap.ipv4_network, '10.8.0.0') @unittest.skip("Not implemented") def test_construct_prepared_no_alter_network(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['bridge'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } computer.construct(alter_network=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chown('/software_root', 0, 0)", "chmod('/software_root', 493)", "mkdir('/instance_root/partition', 488)", "chown('/instance_root/partition', 0, 0)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ # 'ip addr list bridge', 'groupadd slapsoft', 'useradd -d /software_root -g slapsoft slapsoft -r', 'groupadd testuser', 'useradd -d /instance_root/partition -g testuser -G slapsoft testuser -r', # 'ip addr add ip/255.255.255.255 dev bridge', # 'ip addr list bridge', # 'ip addr add ip/ffff:ffff:ffff:ffff:: dev bridge', # 'ip addr list bridge', ], self.fakeCallAndRead.external_command_list) def test_construct_prepared_no_alter_network_user(self): computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='bridge', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['bridge'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } computer.construct(alter_network=False, alter_user=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 493)", "mkdir('/instance_root/partition', 488)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ 'ip addr list bridge', 'ip addr add ip/255.255.255.255 dev bridge', # 'ip addr list bridge', 'ip addr add ip/ffff:ffff:ffff:ffff:: dev bridge', 'ip -6 addr list bridge', ], self.fakeCallAndRead.external_command_list) def test_construct_use_unique_local_address_block(self): """ Test that slapformat creates a unique local address in the interface. 
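    The expected command list checks that the fd00::1/64 unique local
    address is added to the interface.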
""" global USER_LIST USER_LIST = ['root'] computer = slapos.format.Computer('computer', interface=slapos.format.Interface(logger=self.test_result, name='myinterface', ipv4_local_network='127.0.0.1/16')) computer.instance_root = '/instance_root' computer.software_root = '/software_root' partition = slapos.format.Partition('partition', '/part_path', slapos.format.User('testuser'), [], None) partition.tap = slapos.format.Tap('tap') computer.partition_list = [partition] global INTERFACE_DICT INTERFACE_DICT['myinterface'] = { socket.AF_INET: [{'addr': '192.168.242.77', 'broadcast': '127.0.0.1', 'netmask': '255.255.255.0'}], socket.AF_INET6: [{'addr': '2a01:e35:2e27::e59c', 'netmask': 'ffff:ffff:ffff:ffff::'}] } computer.construct(use_unique_local_address_block=True, alter_user=False, create_tap=False) self.assertEqual([ "makedirs('/instance_root', 493)", "makedirs('/software_root', 493)", "chmod('/software_root', 493)", "mkdir('/instance_root/partition', 488)", "chmod('/instance_root/partition', 488)" ], self.test_result.bucket) self.assertEqual([ 'ip addr list myinterface', 'ip address add dev myinterface fd00::1/64', 'ip addr add ip/255.255.255.255 dev myinterface', 'ip addr add ip/ffff:ffff:ffff:ffff:: dev myinterface', 'ip -6 addr list myinterface' ], self.fakeCallAndRead.external_command_list) class TestPartition(SlapformatMixin): def test_createPath_no_alter_user(self): self.partition.createPath(False) self.assertEqual( [ "mkdir('/part_path', 488)", "chmod('/part_path', 488)" ], self.test_result.bucket ) class TestUser(SlapformatMixin): def test_create(self): user = slapos.format.User('doesnotexistsyet') user.setPath('/doesnotexistsyet') user.create() self.assertEqual([ 'groupadd doesnotexistsyet', 'useradd -d /doesnotexistsyet -g doesnotexistsyet -s /bin/sh '\ 'doesnotexistsyet -r', 'passwd -l doesnotexistsyet' ], self.fakeCallAndRead.external_command_list) def test_create_additional_groups(self): user = slapos.format.User('doesnotexistsyet', ['additionalgroup1', 'additionalgroup2']) user.setPath('/doesnotexistsyet') user.create() self.assertEqual([ 'groupadd doesnotexistsyet', 'useradd -d /doesnotexistsyet -g doesnotexistsyet -s /bin/sh -G '\ 'additionalgroup1,additionalgroup2 doesnotexistsyet -r', 'passwd -l doesnotexistsyet' ], self.fakeCallAndRead.external_command_list) def test_create_group_exists(self): global GROUP_LIST GROUP_LIST = ['testuser'] user = slapos.format.User('testuser') user.setPath('/testuser') user.create() self.assertEqual([ 'useradd -d /testuser -g testuser -s /bin/sh testuser -r', 'passwd -l testuser' ], self.fakeCallAndRead.external_command_list) def test_create_user_exists_additional_groups(self): global USER_LIST USER_LIST = ['testuser'] user = slapos.format.User('testuser', ['additionalgroup1', 'additionalgroup2']) user.setPath('/testuser') user.create() self.assertEqual([ 'groupadd testuser', 'usermod -d /testuser -g testuser -s /bin/sh -G '\ 'additionalgroup1,additionalgroup2 testuser', 'passwd -l testuser' ], self.fakeCallAndRead.external_command_list) def test_create_user_exists(self): global USER_LIST USER_LIST = ['testuser'] user = slapos.format.User('testuser') user.setPath('/testuser') user.create() self.assertEqual([ 'groupadd testuser', 'usermod -d /testuser -g testuser -s /bin/sh testuser', 'passwd -l testuser' ], self.fakeCallAndRead.external_command_list) def test_create_user_group_exists(self): global USER_LIST USER_LIST = ['testuser'] global GROUP_LIST GROUP_LIST = ['testuser'] user = slapos.format.User('testuser') user.setPath('/testuser') 
user.create() self.assertEqual([ 'usermod -d /testuser -g testuser -s /bin/sh testuser', 'passwd -l testuser' ], self.fakeCallAndRead.external_command_list) def test_isAvailable(self): global USER_LIST USER_LIST = ['testuser'] user = slapos.format.User('testuser') self.assertTrue(user.isAvailable()) def test_isAvailable_notAvailable(self): user = slapos.format.User('doesnotexistsyet') self.assertFalse(user.isAvailable()) if __name__ == '__main__': unittest.main() slapos.core-1.3.18/slapos/tests/slapproxy/0000755000000000000000000000000013006632706020501 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/slapproxy/slapos_multimaster.cfg.in0000644000000000000000000000224712752436135025530 0ustar rootroot00000000000000[slapos] software_root = %(tempdir)s/opt/slapgrid instance_root = %(tempdir)s/srv/slapgrid master_url = %(proxyaddr)s computer_id = computer [slapproxy] host = 127.0.0.1 port = 8080 database_uri = %(tempdir)s/lib/proxy.db # Here goes the list of slapos masters that slapproxy can contact # Each section beginning by multimaster is a different SlapOS Master, represented by arbitrary name. # For each section, you need to specify the URL of the SlapOS Master. # For each section, you can specify if needed the location of key/certificate used to authenticate to this slapOS Master. # For each section, you can specify a list of Software Releases. Any instance request matching this Softwrare Release will be automatically forwarded to this SlapOS Master and will not be allocated locally. [multimaster/https://slap.vifib.com] key = /path/to/cert.key cert = /path/to/cert.cert # XXX add wildcard support for SR list. software_release_list = http://something.io/software.cfg /some/arbitrary/local/unix/path [multimaster/http://%(external_proxy_host)s:%(external_proxy_port)s] # No certificate here: it is http. software_release_list = http://mywebsite.me/exteral_software_release.cfg slapos.core-1.3.18/slapos/tests/slapproxy/__init__.py0000644000000000000000000016525613003671621022625 0ustar rootroot00000000000000# -*- coding: utf-8 -*- # vim: set et sts=2: ############################################################################## # # Copyright (c) 2012, 2013, 2014 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## import ConfigParser import os import logging import shutil import socket import subprocess import sys import tempfile import time import unittest import xml_marshaller from xml_marshaller.xml_marshaller import loads, dumps import slapos.proxy import slapos.proxy.views as views import slapos.slap import slapos.slap.slap from slapos.util import sqlite_connect import sqlite3 import pkg_resources class WrongFormat(Exception): pass class ProxyOption(object): """ Will simulate options given to slapproxy """ def __init__(self, proxy_db): self.verbose = True self.database_uri = proxy_db self.console = False self.log_file = None class BasicMixin(object): def setUp(self): """ Will set files and start slapproxy """ self._tempdir = tempfile.mkdtemp() logging.basicConfig(level=logging.DEBUG) self.setFiles() self.startProxy() def createSlapOSConfigurationFile(self): open(self.slapos_cfg, 'w').write("""[slapos] software_root = %(tempdir)s/opt/slapgrid instance_root = %(tempdir)s/srv/slapgrid master_url = %(proxyaddr)s computer_id = computer [slapproxy] host = 127.0.0.1 port = 8080 database_uri = %(tempdir)s/lib/proxy.db """ % {'tempdir': self._tempdir, 'proxyaddr': self.proxyaddr}) def setFiles(self): """ Set environment to run slapproxy """ self.slapos_cfg = os.path.join(self._tempdir, 'slapos.cfg') self.proxy_db = os.path.join(self._tempdir, 'lib', 'proxy.db') self.proxyaddr = 'http://localhost:80/' self.computer_id = 'computer' self.createSlapOSConfigurationFile() for directory in ['opt', 'srv', 'lib']: path = os.path.join(self._tempdir, directory) os.mkdir(path) def startProxy(self): """ Set config for slapproxy and start it """ conf = slapos.proxy.ProxyConfig(logger=logging.getLogger()) configp = ConfigParser.SafeConfigParser() configp.read(self.slapos_cfg) conf.mergeConfig(ProxyOption(self.proxy_db), configp) conf.setConfig() views.app.config['TESTING'] = True slapos.proxy.setupFlaskConfiguration(conf) self.app_config = views.app.config self.app = views.app.test_client() def add_free_partition(self, partition_amount, computer_id=None): """ Will simulate a slapformat first run and create "partition_amount" partitions """ if not computer_id: computer_id = self.computer_id computer_dict = { 'reference': computer_id, 'address': '123.456.789', 'netmask': 'fffffffff', 'partition_list': [], } for i in range(partition_amount): partition_example = { 'reference': 'slappart%s' % i, 'address_list': [ {'addr': '1.2.3.4', 'netmask': '255.255.255.255'}, {'addr': '4.3.2.1', 'netmask': '255.255.255.255'} ], 'tap': {'name': 'tap0'}, } computer_dict['partition_list'].append(partition_example) request_dict = { 'computer_id': self.computer_id, 'xml': xml_marshaller.xml_marshaller.dumps(computer_dict), } rv = self.app.post('/loadComputerConfigurationFromXML', data=request_dict) self.assertEqual(rv._status_code, 200) def tearDown(self): """ Remove files generated for test """ shutil.rmtree(self._tempdir, True) views.is_schema_already_executed = False class TestInformation(BasicMixin, unittest.TestCase): """ Test Basic response of slapproxy """ def test_getComputerInformation(self): """ Check that getComputerInformation return a Computer and database is generated """ rv = self.app.get('/getComputerInformation?computer_id=%s' % self.computer_id) self.assertIsInstance( xml_marshaller.xml_marshaller.loads(rv.data), slapos.slap.Computer) self.assertTrue(os.path.exists(self.proxy_db)) def test_getFullComputerInformation(self): """ Check that 
getFullComputerInformation return a Computer and database is generated """ rv = self.app.get('/getFullComputerInformation?computer_id=%s' % self.computer_id) self.assertIsInstance( xml_marshaller.xml_marshaller.loads(rv.data), slapos.slap.Computer) self.assertTrue(os.path.exists(self.proxy_db)) def test_getComputerInformation_wrong_computer(self): """ Test that computer information won't be given to a requester different from the one specified """ with self.assertRaises(slapos.slap.NotFoundError): self.app.get('/getComputerInformation?computer_id=%s42' % self.computer_id) def test_partition_are_empty(self): """ Test that empty partition are empty :) """ self.add_free_partition(10) rv = self.app.get('/getFullComputerInformation?computer_id=%s' % self.computer_id) computer = xml_marshaller.xml_marshaller.loads(rv.data) for slap_partition in computer._computer_partition_list: self.assertIsNone(slap_partition._software_release_document) self.assertEqual(slap_partition._requested_state, 'destroyed') self.assertEqual(slap_partition._need_modification, 0) def test_getSoftwareReleaseListFromSoftwareProduct_software_product_reference(self): """ Check that calling getSoftwareReleaseListFromSoftwareProduct() in slapproxy using a software_product_reference as parameter behaves correctly. """ software_product_reference = 'my_product' software_release_url = 'my_url' self.app_config['software_product_list'] = { software_product_reference: software_release_url } response = self.app.get('/getSoftwareReleaseListFromSoftwareProduct' '?software_product_reference=%s' %\ software_product_reference) software_release_url_list = xml_marshaller.xml_marshaller.loads( response.data) self.assertEqual( software_release_url_list, [software_release_url] ) def test_getSoftwareReleaseListFromSoftwareProduct_noSoftwareProduct(self): """ Check that calling getSoftwareReleaseListFromSoftwareProduct() in slapproxy using a software_product_reference that doesn't exist as parameter returns empty list. """ self.app_config['software_product_list'] = {'random': 'random'} response = self.app.get('/getSoftwareReleaseListFromSoftwareProduct' '?software_product_reference=idonotexist') software_release_url_list = xml_marshaller.xml_marshaller.loads( response.data) self.assertEqual( software_release_url_list, [] ) def test_getSoftwareReleaseListFromSoftwareProduct_bothParameter(self): """ Test that a call to getSoftwareReleaseListFromSoftwareProduct with no parameter raises """ self.assertRaises( AssertionError, self.app.get, '/getSoftwareReleaseListFromSoftwareProduct' '?software_product_reference=foo' '&software_release_url=bar' ) def test_getSoftwareReleaseListFromSoftwareProduct_noParameter(self): """ Test that a call to getSoftwareReleaseListFromSoftwareProduct with both software_product_reference and software_release_url parameters raises """ self.assertRaises( AssertionError, self.app.get, '/getSoftwareReleaseListFromSoftwareProduct' ) def test_getComputerPartitionCertificate(self): """ Tests that getComputerPartitionCertificate method is implemented in slapproxy. """ rv = self.app.get( '/getComputerPartitionCertificate?computer_id=%s&computer_partition_id=%s' % ( self.computer_id, 'slappart0')) response = xml_marshaller.xml_marshaller.loads(rv.data) self.assertEquals({'certificate': '', 'key': ''}, response) def test_computerBang(self): """ Tests that computerBang method is implemented in slapproxy. 
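    The proxy simply acknowledges the bang: the marshalled response
    decodes to an empty string.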
""" rv = self.app.post( '/computerBang?computer_id=%s' % ( self.computer_id)) response = xml_marshaller.xml_marshaller.loads(rv.data) self.assertEquals('', response) class MasterMixin(BasicMixin, unittest.TestCase): """ Define advanced tool for test proxy simulating behavior slap library tools """ def _requestComputerPartition(self, software_release, software_type, partition_reference, partition_id=None, shared=False, partition_parameter_kw=None, filter_kw=None, state=None): """ Check parameters, call requestComputerPartition server method and return result """ if partition_parameter_kw is None: partition_parameter_kw = {} if filter_kw is None: filter_kw = {} # Let's enforce a default software type if software_type is None: software_type = 'default' request_dict = { 'computer_id': self.computer_id, 'computer_partition_id': partition_id, 'software_release': software_release, 'software_type': software_type, 'partition_reference': partition_reference, 'shared_xml': xml_marshaller.xml_marshaller.dumps(shared), 'partition_parameter_xml': xml_marshaller.xml_marshaller.dumps( partition_parameter_kw), 'filter_xml': xml_marshaller.xml_marshaller.dumps(filter_kw), 'state': xml_marshaller.xml_marshaller.dumps(state), } return self.app.post('/requestComputerPartition', data=request_dict) def request(self, *args, **kwargs): """ Simulate a request with above parameters Return response by server (a computer partition or an error) """ rv = self._requestComputerPartition(*args, **kwargs) self.assertEqual(rv._status_code, 200) xml = rv.data software_instance = xml_marshaller.xml_marshaller.loads(xml) computer_partition = slapos.slap.ComputerPartition( software_instance.slap_computer_id, software_instance.slap_computer_partition_id) computer_partition.__dict__.update(software_instance.__dict__) return computer_partition def supply(self, url, computer_id=None, state=''): if not computer_id: computer_id = self.computer_id request_dict = {'url':url, 'computer_id': computer_id, 'state':state} rv = self.app.post('/supplySupply', data=request_dict) # XXX return a Software Release def setConnectionDict(self, partition_id, connection_dict, slave_reference=None): self.app.post('/setComputerPartitionConnectionXml', data={ 'computer_id': self.computer_id, 'computer_partition_id': partition_id, 'connection_xml': xml_marshaller.xml_marshaller.dumps(connection_dict), 'slave_reference': slave_reference}) def getPartitionInformation(self, computer_partition_id): """ Return computer information as stored in proxy for corresponding id """ rv = self.app.get('/getFullComputerInformation?computer_id=%s' % self.computer_id) computer = xml_marshaller.xml_marshaller.loads(rv.data) for instance in computer._computer_partition_list: if instance._partition_id == computer_partition_id: return instance class TestRequest(MasterMixin): """ Set of tests for requests """ def test_request_consistent_parameters(self): """ Check that all different parameters related to requests (like instance_guid, state) are set and consistent """ self.add_free_partition(1) partition = self.request('http://sr//', None, 'MyFirstInstance', 'slappart0') self.assertEqual(partition.getState(), 'started') self.assertEqual(partition.getInstanceGuid(), 'computer-slappart0') def test_two_request_one_partition_free(self): """ Since slapproxy does not implement scope, providing two partition_id values will still succeed, even if only one partition is available. 
""" self.add_free_partition(1) self.assertIsInstance(self.request('http://sr//', None, 'MyFirstInstance', 'slappart2'), slapos.slap.ComputerPartition) self.assertIsInstance(self.request('http://sr//', None, 'MyFirstInstance', 'slappart3'), slapos.slap.ComputerPartition) def test_two_request_two_partition_free(self): """ If two requests are made with two available partition both will succeed """ self.add_free_partition(2) self.assertIsInstance(self.request('http://sr//', None, 'MyFirstInstance', 'slappart2'), slapos.slap.ComputerPartition) self.assertIsInstance(self.request('http://sr//', None, 'MyFirstInstance', 'slappart3'), slapos.slap.ComputerPartition) def test_two_same_request_from_one_partition(self): """ Request will return same partition for two equal requests """ self.add_free_partition(2) self.assertEqual( self.request('http://sr//', None, 'MyFirstInstance', 'slappart2').__dict__, self.request('http://sr//', None, 'MyFirstInstance', 'slappart2').__dict__) def test_request_propagate_partition_state(self): """ Request will return same partition for two equal requests """ self.add_free_partition(2) partition_parent = self.request('http://sr//', None, 'MyFirstInstance') parent_dict = partition_parent.__dict__ partition_child = self.request('http://sr//', None, 'MySubInstance', parent_dict['_partition_id']) self.assertEqual(partition_parent.getState(), 'started') self.assertEqual(partition_child.getState(), 'started') partition_parent = self.request('http://sr//', None, 'MyFirstInstance', state='stopped') partition_child = self.request('http://sr//', None, 'MySubInstance', parent_dict['_partition_id']) self.assertEqual(partition_parent.getState(), 'stopped') self.assertEqual(partition_child.getState(), 'stopped') partition_parent = self.request('http://sr//', None, 'MyFirstInstance', state='started') partition_child = self.request('http://sr//', None, 'MySubInstance', parent_dict['_partition_id']) self.assertEqual(partition_parent.getState(), 'started') self.assertEqual(partition_child.getState(), 'started') def test_request_parent_started_children_stopped(self): """ Request will return same partition for two equal requests """ self.add_free_partition(2) partition_parent = self.request('http://sr//', None, 'MyFirstInstance') parent_dict = partition_parent.__dict__ partition_child = self.request('http://sr//', None, 'MySubInstance', parent_dict['_partition_id']) self.assertEqual(partition_parent.getState(), 'started') self.assertEqual(partition_child.getState(), 'started') partition_parent = self.request('http://sr//', None, 'MyFirstInstance') partition_child = self.request('http://sr//', None, 'MySubInstance', parent_dict['_partition_id'], state='stopped') self.assertEqual(partition_parent.getState(), 'started') self.assertEqual(partition_child.getState(), 'stopped') def test_two_requests_with_different_parameters_but_same_reference(self): """ Request will return same partition for two different requests but will only update parameters """ self.add_free_partition(2) wanted_domain1 = 'fou.org' wanted_domain2 = 'carzy.org' request1 = self.request('http://sr//', None, 'MyFirstInstance', 'slappart2', partition_parameter_kw={'domain': wanted_domain1}) request1_dict = request1.__dict__ requested_result1 = self.getPartitionInformation( request1_dict['_partition_id']) request2 = self.request('http://sr//', 'Papa', 'MyFirstInstance', 'slappart2', partition_parameter_kw={'domain': wanted_domain2}) request2_dict = request2.__dict__ requested_result2 = self.getPartitionInformation( 
request2_dict['_partition_id']) # Test we received same partition for key in ['_partition_id', '_computer_id']: self.assertEqual(request1_dict[key], request2_dict[key]) # Test that only parameters changed for key in requested_result2.__dict__: if key not in ['_parameter_dict', '_software_release_document']: self.assertEqual(requested_result2.__dict__[key], requested_result1.__dict__[key]) elif key in ['_software_release_document']: self.assertEqual(requested_result2.__dict__[key].__dict__, requested_result1.__dict__[key].__dict__) #Test parameters where set correctly self.assertEqual(wanted_domain1, requested_result1._parameter_dict['domain']) self.assertEqual(wanted_domain2, requested_result2._parameter_dict['domain']) def test_two_requests_with_different_parameters_and_sr_url_but_same_reference(self): """ Request will return same partition for two different requests but will only update parameters """ self.add_free_partition(2) wanted_domain1 = 'fou.org' wanted_domain2 = 'carzy.org' request1 = self.request('http://sr//', None, 'MyFirstInstance', 'slappart2', partition_parameter_kw={'domain': wanted_domain1}) request1_dict = request1.__dict__ requested_result1 = self.getPartitionInformation( request1_dict['_partition_id']) request2 = self.request('http://sr1//', 'Papa', 'MyFirstInstance', 'slappart2', partition_parameter_kw={'domain': wanted_domain2}) request2_dict = request2.__dict__ requested_result2 = self.getPartitionInformation( request2_dict['_partition_id']) # Test we received same partition for key in ['_partition_id', '_computer_id']: self.assertEqual(request1_dict[key], request2_dict[key]) # Test that parameters and software_release url changed for key in requested_result2.__dict__: if key not in ['_parameter_dict', '_software_release_document']: self.assertEqual(requested_result2.__dict__[key], requested_result1.__dict__[key]) elif key in ['_software_release_document']: # software_release will be updated self.assertEqual(requested_result2.__dict__[key].__dict__['_software_release'], 'http://sr1//') self.assertEqual(requested_result1.__dict__[key].__dict__['_software_release'], 'http://sr//') #Test parameters where set correctly self.assertEqual(wanted_domain1, requested_result1._parameter_dict['domain']) self.assertEqual(wanted_domain2, requested_result2._parameter_dict['domain']) def test_two_different_request_from_two_partition(self): """ Since slapproxy does not implement scope, two request with different partition_id will still return the same partition. """ self.add_free_partition(2) self.assertEqual( self.request('http://sr//', None, 'MyFirstInstance', 'slappart2').__dict__, self.request('http://sr//', None, 'MyFirstInstance', 'slappart3').__dict__) def test_two_different_request_from_one_partition(self): """ Two different request from same partition will return two different partitions """ self.add_free_partition(2) self.assertNotEqual( self.request('http://sr//', None, 'MyFirstInstance', 'slappart2').__dict__, self.request('http://sr//', None, 'frontend', 'slappart2').__dict__) def test_request_with_nonascii_parameters(self): """ Verify that request with non-ascii parameters is correctly accepted """ self.add_free_partition(1) request = self.request('http://sr//', None, 'myinstance', 'slappart0', partition_parameter_kw={'text': u'Привет Мир!'}) self.assertIsInstance(request, slapos.slap.ComputerPartition) class TestSlaveRequest(MasterMixin): """ Test requests related to slave instances. 
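    A slave request is an ordinary request with shared=True: slapproxy
    attaches it to an already allocated partition running the same software
    release. The slave's parameters then appear in that master partition's
    'slave_instance_list' parameter, and connection parameters published by
    the master (via setComputerPartitionConnectionXml, see
    MasterMixin.setConnectionDict) are returned to the slave requester.
    Rough shape of the flow exercised below:

        master = self.request('http://sr//', None, 'MyMasterInstance', 'slappart4')
        self.request('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True)
        info = self.getPartitionInformation(master._partition_id)
        info._parameter_dict['slave_instance_list']  # [{'slave_reference': ..., ...}]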
""" def test_slave_request_no_corresponding_partition(self): """ Slave instance request will fail if no corresponding are found """ self.add_free_partition(2) rv = self._requestComputerPartition('http://sr//', None, 'MyFirstInstance', 'slappart2', shared=True) self.assertEqual(rv._status_code, 404) def test_slave_request_set_parameters(self): """ Parameters sent in slave request must be put in slave master slave instance list. 1. We request a slave instance we defined parameters 2. We check parameters are in the dictionnary defining slave in slave master slave_instance_list """ self.add_free_partition(6) # Provide partition master_partition_id = self.request('http://sr//', None, 'MyFirstInstance', 'slappart4')._partition_id # First request of slave instance wanted_domain = 'fou.org' self.request('http://sr//', None, 'MyFirstInstance', 'slappart2', shared=True, partition_parameter_kw={'domain': wanted_domain}) # Get updated information for master partition master_partition = self.getPartitionInformation(master_partition_id) our_slave = master_partition._parameter_dict['slave_instance_list'][0] self.assertEqual(our_slave.get('domain'), wanted_domain) def test_master_instance_with_no_slave(self): """ Test that a master instance with no requested slave has an empty slave_instance_list parameter. """ self.add_free_partition(6) # Provide partition master_partition_id = self.request('http://sr//', None, 'MyMasterInstance', 'slappart4')._partition_id master_partition = self.getPartitionInformation(master_partition_id) self.assertEqual(len(master_partition._parameter_dict['slave_instance_list']), 0) def test_slave_request_set_parameters_are_updated(self): """ Parameters sent in slave request must be put in slave master slave instance list and updated when they change. 1. We request a slave instance we defined parameters 2. We check parameters are in the dictionnary defining slave in slave master slave_instance_list 3. We request same slave instance with changed parameters 4. We check parameters are in the dictionnary defining slave in slave master slave_instance_list have changed """ self.add_free_partition(6) # Provide partition master_partition_id = self.request('http://sr//', None, 'MyFirstInstance', 'slappart4')._partition_id # First request of slave instance wanted_domain_1 = 'crazy.org' self.request('http://sr//', None, 'MyFirstInstance', 'slappart2', shared=True, partition_parameter_kw={'domain': wanted_domain_1}) # Get updated information for master partition master_partition = self.getPartitionInformation(master_partition_id) our_slave = master_partition._parameter_dict['slave_instance_list'][0] self.assertEqual(our_slave.get('domain'), wanted_domain_1) # Second request of slave instance wanted_domain_2 = 'maluco.org' self.request('http://sr//', None, 'MyFirstInstance', 'slappart2', shared=True, partition_parameter_kw={'domain': wanted_domain_2}) # Get updated information for master partition master_partition = self.getPartitionInformation(master_partition_id) our_slave = master_partition._parameter_dict['slave_instance_list'][0] self.assertNotEqual(our_slave.get('domain'), wanted_domain_1) self.assertEqual(our_slave.get('domain'), wanted_domain_2) def test_slave_request_set_connection_parameters(self): """ Parameters set in slave instance by master instance must be put in slave instance connection parameters. 1. We request a slave instance 2. We set connection parameters for this slave instance 2. We check parameter is present when we do request() for the slave. 
""" self.add_free_partition(6) # Provide partition master_partition_id = self.request('http://sr//', None, 'MyMasterInstance', 'slappart4')._partition_id # First request of slave instance self.request('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True) # Set connection parameter master_partition = self.getPartitionInformation(master_partition_id) # XXX change slave reference to be compatible with multiple nodes self.setConnectionDict(partition_id=master_partition._partition_id, connection_dict={'foo': 'bar'}, slave_reference=master_partition._parameter_dict['slave_instance_list'][0]['slave_reference']) # Get updated information for slave partition slave_partition = self.request('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True) self.assertEqual(slave_partition.getConnectionParameter('foo'), 'bar') def test_slave_request_one_corresponding_partition(self): """ Successfull request slave instance follow these steps: 1. Provide one corresponding partition 2. Ask for Slave instance. But no connection parameters But slave is added to Master Instance slave list 3. Master Instance get updated information (including slave list) 4. Master instance post information about slave connection parameters 5. Ask for slave instance is successfull and return a computer instance with connection information """ self.add_free_partition(6) # Provide partition master_partition_id = self.request('http://sr//', None, 'MyFirstInstance', 'slappart4')._partition_id # First request of slave instance name = 'MyFirstInstance' requester = 'slappart2' our_slave = self.request('http://sr//', None, name, requester, shared=True) self.assertIsInstance(our_slave, slapos.slap.ComputerPartition) self.assertEqual(our_slave._connection_dict, {}) # Get updated information for master partition master_partition = self.getPartitionInformation(master_partition_id) slave_for_master = master_partition._parameter_dict['slave_instance_list'][0] # Send information about slave slave_address = {'url': '%s.master.com'} self.setConnectionDict(partition_id=master_partition._partition_id, connection_dict=slave_address, slave_reference=slave_for_master['slave_reference']) # Successfull slave request with connection parameters our_slave = self.request('http://sr//', None, name, requester, shared=True) self.assertIsInstance(our_slave, slapos.slap.ComputerPartition) self.assertEqual(slave_address, our_slave._connection_dict) def test_slave_request_instance_guid(self): """ Test that instance_guid support behaves correctly. Warning: proxy doesn't gives unique id of instance, but gives instead unique id of partition. """ self.add_free_partition(1) partition = self.request('http://sr//', None, 'MyInstance', 'slappart1') slave = self.request('http://sr//', None, 'MySlaveInstance', 'slappart1', shared=True, filter_kw=dict(instance_guid=partition._instance_guid)) self.assertEqual(slave._partition_id, partition._partition_id) class TestMultiNodeSupport(MasterMixin): def test_multi_node_support_different_software_release_list(self): """ Test that two different registered computers have their own Software Release list. 
""" self.add_free_partition(6, computer_id='COMP-0') self.add_free_partition(6, computer_id='COMP-1') software_release_1_url = 'http://sr1' software_release_2_url = 'http://sr2' software_release_3_url = 'http://sr3' self.supply(software_release_1_url, 'COMP-0') self.supply(software_release_2_url, 'COMP-1') self.supply(software_release_3_url, 'COMP-0') self.supply(software_release_3_url, 'COMP-1') computer_default = loads(self.app.get('/getFullComputerInformation?computer_id=%s' % self.computer_id).data) computer_0 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-0').data) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertEqual(len(computer_default._software_release_list), 0) self.assertEqual(len(computer_0._software_release_list), 2) self.assertEqual(len(computer_1._software_release_list), 2) self.assertEqual( computer_0._software_release_list[0]._software_release, software_release_1_url ) self.assertEqual( computer_0._software_release_list[0]._computer_guid, 'COMP-0' ) self.assertEqual( computer_0._software_release_list[1]._software_release, software_release_3_url ) self.assertEqual( computer_0._software_release_list[1]._computer_guid, 'COMP-0' ) self.assertEqual( computer_1._software_release_list[0]._software_release, software_release_2_url ) self.assertEqual( computer_1._software_release_list[0]._computer_guid, 'COMP-1' ) self.assertEqual( computer_1._software_release_list[1]._software_release, software_release_3_url ) self.assertEqual( computer_1._software_release_list[1]._computer_guid, 'COMP-1' ) def test_multi_node_support_remove_software_release(self): """ Test that removing a software from a Computer doesn't affect other computer """ software_release_url = 'http://sr' self.add_free_partition(6, computer_id='COMP-0') self.add_free_partition(6, computer_id='COMP-1') self.supply(software_release_url, 'COMP-0') self.supply(software_release_url, 'COMP-1') self.supply(software_release_url, 'COMP-0', state='destroyed') computer_0 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-0').data) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertEqual(len(computer_0._software_release_list), 0) self.assertEqual(len(computer_1._software_release_list), 1) self.assertEqual( computer_1._software_release_list[0]._software_release, software_release_url ) self.assertEqual( computer_1._software_release_list[0]._computer_guid, 'COMP-1' ) def test_multi_node_support_instance_default_computer(self): """ Test that instance request behaves correctly with default computer """ software_release_url = 'http://sr' computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' self.add_free_partition(6, computer_id=computer_0_id) self.add_free_partition(6, computer_id=computer_1_id) # Request without SLA -> goes to default computer only. 
# It should fail if we didn't registered partitions for default computer # (default computer is always registered) rv = self._requestComputerPartition('http://sr//', None, 'MyFirstInstance', 'slappart2') self.assertEqual(rv._status_code, 404) rv = self._requestComputerPartition('http://sr//', None, 'MyFirstInstance', 'slappart2', filter_kw={'computer_guid':self.computer_id}) self.assertEqual(rv._status_code, 404) # Register default computer: deployment works self.add_free_partition(1) self.request('http://sr//', None, 'MyFirstInstance', 'slappart0') computer_default = loads(self.app.get( '/getFullComputerInformation?computer_id=%s' % self.computer_id).data) self.assertEqual(len(computer_default._software_release_list), 0) # No free space on default computer: request without SLA fails rv = self._requestComputerPartition('http://sr//', None, 'CanIHasPartition', 'slappart2', filter_kw={'computer_guid':self.computer_id}) self.assertEqual(rv._status_code, 404) def test_multi_node_support_instance(self): """ Test that instance request behaves correctly with several registered computers """ software_release_url = 'http://sr' computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' software_release_1 = 'http://sr//' software_release_2 = 'http://othersr//' self.add_free_partition(2, computer_id=computer_1_id) # Deploy to first non-default computer using SLA # It should fail since computer is not registered rv = self._requestComputerPartition(software_release_1, None, 'MyFirstInstance', 'slappart2', filter_kw={'computer_guid':computer_0_id}) self.assertEqual(rv._status_code, 404) self.add_free_partition(2, computer_id=computer_0_id) # Deploy to first non-default computer using SLA partition = self.request(software_release_1, None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}) self.assertEqual(partition.getState(), 'started') self.assertEqual(partition._partition_id, 'slappart0') self.assertEqual(partition._computer_id, computer_0_id) # All other instances should be empty computer_0 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-0').data) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertEqual(computer_0._computer_partition_list[0]._software_release_document._software_release, software_release_1) self.assertTrue(computer_0._computer_partition_list[1]._software_release_document == None) self.assertTrue(computer_1._computer_partition_list[0]._software_release_document == None) self.assertTrue(computer_1._computer_partition_list[1]._software_release_document == None) # Deploy to second non-default computer using SLA partition = self.request(software_release_2, None, 'MySecondInstance', 'slappart0', filter_kw={'computer_guid':computer_1_id}) self.assertEqual(partition.getState(), 'started') self.assertEqual(partition._partition_id, 'slappart0') self.assertEqual(partition._computer_id, computer_1_id) # The two remaining instances should be free, and MyfirstInstance should still be there computer_0 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-0').data) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertEqual(computer_0._computer_partition_list[0]._software_release_document._software_release, software_release_1) self.assertTrue(computer_0._computer_partition_list[1]._software_release_document == None) self.assertEqual(computer_1._computer_partition_list[0]._software_release_document._software_release, software_release_2) 
self.assertTrue(computer_1._computer_partition_list[1]._software_release_document == None) def test_multi_node_support_change_instance_state(self): """ Test that destroying an instance (i.e change state) from a Computer doesn't affect other computer """ software_release_url = 'http://sr' computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' self.add_free_partition(6, computer_id=computer_0_id) self.add_free_partition(6, computer_id=computer_1_id) partition_first = self.request('http://sr//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}) partition_second = self.request('http://sr//', None, 'MySecondInstance', 'slappart0', filter_kw={'computer_guid':computer_1_id}) partition_first = self.request('http://sr//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}, state='stopped') computer_0 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-0').data) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertEqual(computer_0._computer_partition_list[0].getState(), 'stopped') self.assertEqual(computer_0._computer_partition_list[1].getState(), 'destroyed') self.assertEqual(computer_1._computer_partition_list[0].getState(), 'started') self.assertEqual(computer_1._computer_partition_list[1].getState(), 'destroyed') def test_multi_node_support_same_reference(self): """ Test that requesting an instance with same reference to two different nodes behaves like master: once an instance is assigned to a node, changing SLA will not change node. """ software_release_url = 'http://sr' computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' self.add_free_partition(2, computer_id=computer_0_id) self.add_free_partition(2, computer_id=computer_1_id) partition = self.request('http://sr//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}) partition = self.request('http://sr//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_1_id}) self.assertEqual(partition._computer_id, computer_0_id) computer_1 = loads(self.app.get('/getFullComputerInformation?computer_id=COMP-1').data) self.assertTrue(computer_1._computer_partition_list[0]._software_release_document == None) self.assertTrue(computer_1._computer_partition_list[1]._software_release_document == None) def test_multi_node_support_slave_instance(self): """ Test that slave instances are correctly deployed if SLA is specified but deployed only on default computer if not specified (i.e not deployed if default computer doesn't have corresponding master instance). 
""" computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' self.add_free_partition(2, computer_id=computer_0_id) self.add_free_partition(2, computer_id=computer_1_id) self.add_free_partition(2) self.request('http://sr2//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}) self.request('http://sr//', None, 'MyOtherInstance', 'slappart0', filter_kw={'computer_guid':computer_1_id}) # Request slave without SLA: will fail rv = self._requestComputerPartition('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True) self.assertEqual(rv._status_code, 404) # Request slave with SLA on incorrect computer: will fail rv = self._requestComputerPartition('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True, filter_kw={'computer_guid':computer_0_id}) self.assertEqual(rv._status_code, 404) # Request computer on correct computer: will succeed partition = self.request('http://sr//', None, 'MySlaveInstance', 'slappart2', shared=True, filter_kw={'computer_guid':computer_1_id}) self.assertEqual(partition._computer_id, computer_1_id) def test_multi_node_support_instance_guid(self): """ Test that instance_guid support behaves correctly with multiple nodes. Warning: proxy doesn't gives unique id of instance, but gives instead unique id of partition. """ computer_0_id = 'COMP-0' computer_1_id = 'COMP-1' self.add_free_partition(2, computer_id=computer_0_id) self.add_free_partition(2, computer_id=computer_1_id) self.add_free_partition(2) partition_computer_0 = self.request('http://sr2//', None, 'MyFirstInstance', 'slappart0', filter_kw={'computer_guid':computer_0_id}) partition_computer_1 = self.request('http://sr//', None, 'MyOtherInstance', 'slappart0', filter_kw={'computer_guid':computer_1_id}) partition_computer_default = self.request('http://sr//', None, 'MyThirdInstance', 'slappart0') self.assertEqual(partition_computer_0.getInstanceGuid(), 'COMP-0-slappart0') self.assertEqual(partition_computer_1.getInstanceGuid(), 'COMP-1-slappart0') self.assertEqual(partition_computer_default.getInstanceGuid(), 'computer-slappart0') def test_multi_node_support_getComputerInformation(self): """ Test that computer information will not be given if computer is not registered. Test that it still should work for the 'default' computer specified in slapos config even if not yet registered. Test that computer information is given if computer is registered. """ new_computer_id = '%s42' % self.computer_id with self.assertRaises(slapos.slap.NotFoundError): self.app.get('/getComputerInformation?computer_id=%s42' % new_computer_id) try: self.app.get('/getComputerInformation?computer_id=%s' % self.computer_id) except slapos.slap.NotFoundError: self.fail('Could not fetch informations for default computer.') self.add_free_partition(1, computer_id=new_computer_id) try: self.app.get('/getComputerInformation?computer_id=%s' % new_computer_id) except slapos.slap.NotFoundError: self.fail('Could not fetch informations for registered computer.') class TestMultiMasterSupport(MasterMixin): """ Test multimaster support in slapproxy. 
""" external_software_release = 'http://mywebsite.me/exteral_software_release.cfg' software_release_not_in_list = 'http://mywebsite.me/exteral_software_release_not_listed.cfg' def setUp(self): self.addCleanup(self.stopExternalProxy) # XXX don't use lo self.external_proxy_host = os.environ.get('LOCAL_IPV4', '127.0.0.1') self.external_proxy_port = 8281 self.external_master_url = 'http://%s:%s' % (self.external_proxy_host, self.external_proxy_port) self.external_computer_id = 'external_computer' self.external_proxy_slap = slapos.slap.slap() self.external_proxy_slap.initializeConnection(self.external_master_url) super(TestMultiMasterSupport, self).setUp() self.db = sqlite_connect(self.proxy_db) self.external_slapproxy_configuration_file_location = os.path.join( self._tempdir, 'external_slapos.cfg') self.createExternalProxyConfigurationFile() self.startExternalProxy() def tearDown(self): super(TestMultiMasterSupport, self).tearDown() def createExternalProxyConfigurationFile(self): open(self.external_slapproxy_configuration_file_location, 'w').write("""[slapos] computer_id = %(external_computer_id)s [slapproxy] host = %(host)s port = %(port)s database_uri = %(tempdir)s/lib/external_proxy.db """ % { 'tempdir': self._tempdir, 'host': self.external_proxy_host, 'port': self.external_proxy_port, 'external_computer_id': self.external_computer_id }) def startExternalProxy(self): """ Start external slapproxy """ logging.getLogger().info('Starting external proxy, listening to %s:%s' % (self.external_proxy_host, self.external_proxy_port)) # XXX This uses a hack to run current code of slapos.core import slapos self.external_proxy_process = subprocess.Popen( [ sys.executable, '%s/../cli/entry.py' % os.path.dirname(slapos.tests.__file__), 'proxy', 'start', '--cfg', self.external_slapproxy_configuration_file_location ], env={"PYTHONPATH": ':'.join(sys.path)} ) # Wait a bit for proxy to be started attempts = 0 while (attempts < 20): try: self.external_proxy_slap._connection_helper.GET('/') except slapos.slap.NotFoundError: break except slapos.slap.ConnectionError, socket.error: attempts = attempts + 1 time.sleep(0.1) else: self.fail('Could not start external proxy.') def stopExternalProxy(self): self.external_proxy_process.kill() def createSlapOSConfigurationFile(self): """ Overwrite default slapos configuration file to enable specific multimaster behaviours. 
""" configuration = pkg_resources.resource_stream( 'slapos.tests.slapproxy', 'slapos_multimaster.cfg.in' ).read() % { 'tempdir': self._tempdir, 'proxyaddr': self.proxyaddr, 'external_proxy_host': self.external_proxy_host, 'external_proxy_port': self.external_proxy_port } open(self.slapos_cfg, 'w').write(configuration) def external_proxy_add_free_partition(self, partition_amount, computer_id=None): """ Will simulate a slapformat first run and create "partition_amount" partitions """ if not computer_id: computer_id = self.external_computer_id computer_dict = { 'reference': computer_id, 'address': '123.456.789', 'netmask': 'fffffffff', 'partition_list': [], } for i in range(partition_amount): partition_example = { 'reference': 'slappart%s' % i, 'address_list': [ {'addr': '1.2.3.4', 'netmask': '255.255.255.255'}, {'addr': '4.3.2.1', 'netmask': '255.255.255.255'} ], 'tap': {'name': 'tap0'}, } computer_dict['partition_list'].append(partition_example) request_dict = { 'computer_id': self.computer_id, 'xml': xml_marshaller.xml_marshaller.dumps(computer_dict), } self.external_proxy_slap._connection_helper.POST('/loadComputerConfigurationFromXML', data=request_dict) def _checkInstanceIsFowarded(self, name, partition_parameter_kw, software_release): """ Test there is no instance on local proxy. Test there is instance on external proxy. Test there is instance reference in external table of databse of local proxy. """ # Test it has been correctly added to local database forwarded_instance_list = slapos.proxy.views.execute_db('forwarded_partition_request', 'SELECT * from %s', db=self.db) self.assertEqual(len(forwarded_instance_list), 1) forwarded_instance = forwarded_instance_list[0] self.assertEqual(forwarded_instance['partition_reference'], name) self.assertEqual(forwarded_instance['master_url'], self.external_master_url) # Test there is nothing allocated locally computer = loads(self.app.get( '/getFullComputerInformation?computer_id=%s' % self.computer_id ).data) self.assertEqual( computer._computer_partition_list[0]._software_release_document, None ) # Test there is an instance allocated in external master external_slap = slapos.slap.slap() external_slap.initializeConnection(self.external_master_url) external_computer = external_slap.registerComputer(self.external_computer_id) external_partition = external_computer.getComputerPartitionList()[0] for k, v in partition_parameter_kw.iteritems(): self.assertEqual( external_partition.getInstanceParameter(k), v ) self.assertEqual( external_partition._software_release_document._software_release, software_release ) def _checkInstanceIsAllocatedLocally(self, name, partition_parameter_kw, software_release): """ Test there is one instance on local proxy. Test there NO is instance reference in external table of databse of local proxy. Test there is not instance on external proxy. 
""" # Test it has NOT been added to local database forwarded_instance_list = slapos.proxy.views.execute_db('forwarded_partition_request', 'SELECT * from %s', db=self.db) self.assertEqual(len(forwarded_instance_list), 0) # Test there is an instance allocated locally computer = loads(self.app.get( '/getFullComputerInformation?computer_id=%s' % self.computer_id ).data) partition = computer._computer_partition_list[0] for k, v in partition_parameter_kw.iteritems(): self.assertEqual( partition.getInstanceParameter(k), v ) self.assertEqual( partition._software_release_document._software_release, software_release ) # Test there is NOT instance allocated in external master external_slap = slapos.slap.slap() external_slap.initializeConnection(self.external_master_url) external_computer = external_slap.registerComputer(self.external_computer_id) external_partition = external_computer.getComputerPartitionList()[0] self.assertEqual( external_partition._software_release_document, None ) def testForwardToMasterInList(self): """ Test that explicitely asking a master_url in SLA causes proxy to forward request to this master. """ dummy_parameter_dict = {'foo': 'bar'} instance_reference = 'MyFirstInstance' self.add_free_partition(1) self.external_proxy_add_free_partition(1) filter_kw = {'master_url': self.external_master_url} partition = self.request(self.software_release_not_in_list, None, instance_reference, 'slappart0', filter_kw=filter_kw, partition_parameter_kw=dummy_parameter_dict) self._checkInstanceIsFowarded(instance_reference, dummy_parameter_dict, self.software_release_not_in_list) self.assertEqual( partition._master_url, self.external_master_url ) def testForwardToMasterNotInList(self): """ Test that explicitely asking a master_url in SLA causes proxy to refuse to forward if this master_url is not whitelisted """ self.add_free_partition(1) self.external_proxy_add_free_partition(1) filter_kw = {'master_url': self.external_master_url + 'bad'} rv = self._requestComputerPartition(self.software_release_not_in_list, None, 'MyFirstInstance', 'slappart0', filter_kw=filter_kw) self.assertEqual(rv._status_code, 404) def testForwardRequest_SoftwareReleaseList(self): """ Test that instance request is automatically forwarded if its Software Release matches list. """ dummy_parameter_dict = {'foo': 'bar'} instance_reference = 'MyFirstInstance' self.add_free_partition(1) self.external_proxy_add_free_partition(1) partition = self.request(self.external_software_release, None, instance_reference, 'slappart0', partition_parameter_kw=dummy_parameter_dict) self._checkInstanceIsFowarded(instance_reference, dummy_parameter_dict, self.external_software_release) def testRequestToCurrentMaster(self): """ Explicitely ask deployment of an instance to current master """ self.add_free_partition(1) self.external_proxy_add_free_partition(1) instance_reference = 'MyFirstInstance' dummy_parameter_dict = {'foo': 'bar'} filter_kw = {'master_url': self.proxyaddr} self.request(self.software_release_not_in_list, None, instance_reference, 'slappart0', filter_kw=filter_kw, partition_parameter_kw=dummy_parameter_dict) self._checkInstanceIsAllocatedLocally(instance_reference, dummy_parameter_dict, self.software_release_not_in_list) def testRequestExplicitelyOnExternalMasterThenRequestAgain(self): """ Request an instance that will get forwarded to another an instance. 
Test that subsequent request without SLA doesn't forward """ dummy_parameter_dict = {'foo': 'bar'} self.testForwardToMasterInList() partition = self.request(self.software_release_not_in_list, None, 'MyFirstInstance', 'slappart0', partition_parameter_kw=dummy_parameter_dict) self.assertEqual( getattr(partition, '_master_url', None), None ) # Test it has not been removed from local database (we keep track) forwarded_instance_list = slapos.proxy.views.execute_db('forwarded_partition_request', 'SELECT * from %s', db=self.db) self.assertEqual(len(forwarded_instance_list), 1) # Test there is an instance allocated locally computer = loads(self.app.get( '/getFullComputerInformation?computer_id=%s' % self.computer_id ).data) partition = computer._computer_partition_list[0] for k, v in dummy_parameter_dict.iteritems(): self.assertEqual( partition.getInstanceParameter(k), v ) self.assertEqual( partition._software_release_document._software_release, self.software_release_not_in_list ) # XXX: when testing new schema version, # rename to "TestMigrateVersion10ToLatest" and test accordingly. # Of course, also test version 11 to latest (should be 12). class TestMigrateVersion10To11(TestInformation, TestRequest, TestSlaveRequest, TestMultiNodeSupport): """ Test that old database version are automatically migrated without failure """ def setUp(self): super(TestMigrateVersion10To11, self).setUp() schema = pkg_resources.resource_stream('slapos.tests.slapproxy', 'database_dump_version_10.sql') schema = schema.read() % dict(version='11') self.db = sqlite_connect(self.proxy_db) self.db.cursor().executescript(schema) self.db.commit() def test_automatic_migration(self): table_list = ('software11', 'computer11', 'partition11', 'slave11', 'partition_network11') for table in table_list: self.assertRaises(sqlite3.OperationalError, self.db.execute, "SELECT name FROM computer11") # Run a dummy request to cause migration self.app.get('/getComputerInformation?computer_id=computer') # Check some partition parameters self.assertEqual( loads(self.app.get('/getComputerInformation?computer_id=computer').data)._computer_partition_list[0]._parameter_dict['slap_software_type'], 'production' ) # Lower level tests computer_list = self.db.execute("SELECT * FROM computer11").fetchall() self.assertEqual( computer_list, [(u'computer', u'127.0.0.1', u'255.255.255.255')] ) software_list = self.db.execute("SELECT * FROM software11").fetchall() self.assertEqual( software_list, [(u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u'computer')] ) partition_list = self.db.execute("select * from partition11").fetchall() self.assertEqual( partition_list, [(u'slappart0', u'computer', u'busy', u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u'\n\n {\n "site-id": "erp5"\n }\n}\n\n', None, None, u'production', u'slapos', None, u'started'), (u'slappart1', u'computer', u'busy', u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u"\n\n", u'\n\n mysql://127.0.0.1:45678/erp5\n\n', None, u'mariadb', u'MariaDB DataBase', u'slappart0', u'started'), (u'slappart2', u'computer', u'busy', u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u'\n\n \n\n', u'\n\n cloudooo://127.0.0.1:23000/\n\n', None, u'cloudooo', u'Cloudooo', u'slappart0', u'started'), (u'slappart3', u'computer', u'busy', u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u"\n\n", u'\n\n memcached://127.0.0.1:11000/\n\n', None, u'memcached', u'Memcached', u'slappart0', u'started'), (u'slappart4', u'computer', u'busy', 
u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u"\n\n", u'\n\n memcached://127.0.0.1:13301/\n\n', None, u'kumofs', u'KumoFS', u'slappart0', u'started'), (u'slappart5', u'computer', u'busy', u'/srv/slapgrid//srv//runner/project//slapos/software.cfg', u'\n\n memcached://127.0.0.1:13301/\n memcached://127.0.0.1:11000/\n cloudooo://127.0.0.1:23000/\n\n', u'\n\n https://[fc00::1]:10001\n\n', None, u'tidstorage', u'TidStorage', u'slappart0', u'started'), (u'slappart6', u'computer', u'free', None, None, None, None, None, None, None, u'started'), (u'slappart7', u'computer', u'free', None, None, None, None, None, None, None, u'started'), (u'slappart8', u'computer', u'free', None, None, None, None, None, None, None, u'started'), (u'slappart9', u'computer', u'free', None, None, None, None, None, None, None, u'started')] ) slave_list = self.db.execute("select * from slave11").fetchall() self.assertEqual( slave_list, [] ) partition_network_list = self.db.execute("select * from partition_network11").fetchall() self.assertEqual( partition_network_list, [(u'slappart0', u'computer', u'slappart0', u'127.0.0.1', u'255.255.255.255'), (u'slappart0', u'computer', u'slappart0', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart1', u'computer', u'slappart1', u'127.0.0.1', u'255.255.255.255'), (u'slappart1', u'computer', u'slappart1', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart2', u'computer', u'slappart2', u'127.0.0.1', u'255.255.255.255'), (u'slappart2', u'computer', u'slappart2', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart3', u'computer', u'slappart3', u'127.0.0.1', u'255.255.255.255'), (u'slappart3', u'computer', u'slappart3', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart4', u'computer', u'slappart4', u'127.0.0.1', u'255.255.255.255'), (u'slappart4', u'computer', u'slappart4', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart5', u'computer', u'slappart5', u'127.0.0.1', u'255.255.255.255'), (u'slappart5', u'computer', u'slappart5', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart6', u'computer', u'slappart6', u'127.0.0.1', u'255.255.255.255'), (u'slappart6', u'computer', u'slappart6', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart7', u'computer', u'slappart7', u'127.0.0.1', u'255.255.255.255'), (u'slappart7', u'computer', u'slappart7', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart8', u'computer', u'slappart8', u'127.0.0.1', u'255.255.255.255'), (u'slappart8', u'computer', u'slappart8', u'fc00::1', u'ffff:ffff:ffff::'), (u'slappart9', u'computer', u'slappart9', u'127.0.0.1', u'255.255.255.255'), (u'slappart9', u'computer', u'slappart9', u'fc00::1', u'ffff:ffff:ffff::')] ) # Override several tests that needs an empty database @unittest.skip("Not implemented") def test_multi_node_support_different_software_release_list(self): pass @unittest.skip("Not implemented") def test_multi_node_support_instance_default_computer(self): pass @unittest.skip("Not implemented") def test_multi_node_support_instance_guid(self): pass @unittest.skip("Not implemented") def test_partition_are_empty(self): pass @unittest.skip("Not implemented") def test_request_consistent_parameters(self): pass slapos.core-1.3.18/slapos/tests/slapmock/0000755000000000000000000000000013006632706020251 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/slapmock/__init__.py0000644000000000000000000000000012752436135022355 0ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/slapmock/requests.py0000644000000000000000000000021612752436135022502 0ustar rootroot00000000000000# -*- coding: utf-8 -*- def response_ok(url, request): return { 
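        # httmock-style handler: (url, request) -> a canned HTTP 200 response
        # with an empty body, handy for stubbing network calls in the tests.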
'status_code': 200, 'content': '' } slapos.core-1.3.18/slapos/tests/__init__.py0000644000000000000000000000000012752436135020544 0ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/configure_local.py0000644000000000000000000001276312752436135022163 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2014 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import unittest import shutil import tempfile import slapos.slap import slapos.cli.configure_local from slapos.cli.configure_local import ConfigureLocalCommand, _createConfigurationDirectory from slapos.cli.entry import SlapOSApp from argparse import Namespace from ConfigParser import ConfigParser # Disable any command to launch slapformat and supervisor slapos.cli.configure_local._runFormat = lambda x: "Do nothing" slapos.cli.configure_local.launchSupervisord = lambda instance_root, logger: "Do nothing" class TestConfigureLocal(unittest.TestCase): def setUp(self): self.slap = slapos.slap.slap() self.app = SlapOSApp() self.temp_dir = tempfile.mkdtemp() os.environ["HOME"] = self.temp_dir self.instance_root = tempfile.mkdtemp() self.software_root = tempfile.mkdtemp() if os.path.exists(self.temp_dir): shutil.rmtree(self.temp_dir) def tearDown(self): for temp_path in (self.temp_dir, \ self.instance_root, self.software_root): if os.path.exists(temp_path): shutil.rmtree(temp_path) def test_configure_local_environment_with_default_value(self): config = ConfigureLocalCommand(self.app, Namespace()) config.__dict__.update({i.dest: i.default \ for i in config.get_parser(None)._option_string_actions.values()}) config.slapos_configuration_directory = self.temp_dir config.slapos_buildout_directory = self.temp_dir config.slapos_instance_root = self.instance_root slapos.cli.configure_local.do_configure( config, config.fetch_config, self.app.log) expected_software_root = "/opt/slapgrid" self.assertTrue( os.path.exists("%s/.slapos/slapos-client.cfg" % self.temp_dir)) with open(self.temp_dir + '/slapos-proxy.cfg') as fout: proxy_config = ConfigParser() proxy_config.readfp(fout) self.assertEquals(proxy_config.get('slapos', 'instance_root'), self.instance_root) self.assertEquals(proxy_config.get('slapos', 'software_root'), expected_software_root) with open(self.temp_dir + '/slapos.cfg') as fout: proxy_config = ConfigParser() 
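      # The generated slapos.cfg must expose the same instance_root and the
      # same default software_root as slapos-proxy.cfg checked above.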
proxy_config.readfp(fout) self.assertEquals(proxy_config.get('slapos', 'instance_root'), self.instance_root) self.assertEquals(proxy_config.get('slapos', 'software_root'), expected_software_root) def test_configure_local_environment(self): config = ConfigureLocalCommand(self.app, Namespace()) config.__dict__.update({i.dest: i.default \ for i in config.get_parser(None)._option_string_actions.values()}) config.slapos_configuration_directory = self.temp_dir config.slapos_buildout_directory = self.temp_dir config.slapos_instance_root = self.instance_root config.slapos_software_root = self.software_root slapos.cli.configure_local.do_configure( config, config.fetch_config, self.app.log) log_folder = os.path.join(config.slapos_buildout_directory, 'log') self.assertTrue(os.path.exists(log_folder), "%s not exists" % log_folder) self.assertTrue( os.path.exists("%s/.slapos/slapos-client.cfg" % self.temp_dir)) with open(self.temp_dir + '/slapos-proxy.cfg') as fout: proxy_config = ConfigParser() proxy_config.readfp(fout) self.assertEquals(proxy_config.get('slapos', 'instance_root'), self.instance_root) self.assertEquals(proxy_config.get('slapos', 'software_root'), self.software_root) with open(self.temp_dir + '/slapos.cfg') as fout: proxy_config = ConfigParser() proxy_config.readfp(fout) self.assertEquals(proxy_config.get('slapos', 'instance_root'), self.instance_root) self.assertEquals(proxy_config.get('slapos', 'software_root'), self.software_root) log_file = proxy_config.get('slapformat', 'log_file') self.assertTrue(log_file.startswith(log_folder), "%s don't starts with %s" % (log_file, log_folder)) slapos.core-1.3.18/slapos/tests/slapobject.py0000644000000000000000000004267412752436135021162 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import os import time import unittest from slapos.slap import ComputerPartition as SlapComputerPartition from slapos.grid.SlapObject import Partition, Software from slapos.grid import utils from slapos.grid import networkcache # XXX: BasicMixin should be in a separated module, not in slapgrid test module. 
from slapos.tests.slapgrid import BasicMixin # Mockup # XXX: Ambiguous name # XXX: Factor with common SlapOS tests class FakeCallAndStore(object): """ Used to check if the mocked method has been called. """ def __init__(self): self.called = False def __call__(self, *args, **kwargs): self.called = True class FakeCallAndNoop(object): """ Used to no-op a method. """ def __call__(self, *args, **kwargs): pass # XXX: change name and behavior to be more generic and factor with other tests class FakeNetworkCacheCallAndRead(object): """ Short-circuit normal calls to slapos buildout helpers, get and store 'additional_buildout_parameter_list' for future analysis. """ def __init__(self): self.external_command_list = [] def __call__(self, *args, **kwargs): additional_buildout_parameter_list = \ kwargs.get('additional_buildout_parameter_list') self.external_command_list.extend(additional_buildout_parameter_list) # Backup modules original_install_from_buildout = Software._install_from_buildout original_upload_network_cached = networkcache.upload_network_cached originalBootstrapBuildout = utils.bootstrapBuildout originalLaunchBuildout = utils.launchBuildout originalUploadSoftwareRelease = Software.uploadSoftwareRelease originalPartitionGenerateSupervisorConfigurationFile = Partition.generateSupervisorConfigurationFile class MasterMixin(BasicMixin, unittest.TestCase): """ Master Mixin of slapobject test classes. """ def setUp(self): BasicMixin.setUp(self) os.mkdir(self.software_root) os.mkdir(self.instance_root) def tearDown(self): BasicMixin.tearDown(self) # Un-monkey patch possible modules global originalBootstrapBuildout global originalLaunchBuildout utils.bootstrapBuildout = originalBootstrapBuildout utils.launchBuildout = originalLaunchBuildout # Helper functions def createSoftware(self, url=None, empty=False): """ Create an empty software, and return a Software object from dummy parameters. """ if url is None: url = 'mysoftware' software_path = os.path.join(self.software_root, utils.md5digest(url)) os.mkdir(software_path) if not empty: # Populate the Software Release directory so that it is "complete" and # "working" from a slapos point of view. open(os.path.join(software_path, 'instance.cfg'), 'w').close() return Software( url=url, software_root=self.software_root, buildout=self.buildout, logger=logging.getLogger(), ) def createPartition( self, software_release_url, partition_id=None, slap_computer_partition=None, retention_delay=None, ): """ Create a partition, and return a Partition object created from dummy parameters. 
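    Typical use in the tests below:

        software = self.createSoftware()
        partition = self.createPartition(software.url)
        partition.install()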
""" # XXX dirty, should disappear when Partition is cleaned up software_path = os.path.join( self.software_root, utils.md5digest(software_release_url) ) if partition_id is None: partition_id = 'mypartition' if slap_computer_partition is None: slap_computer_partition = SlapComputerPartition( computer_id='bidon', partition_id=partition_id) instance_path = os.path.join(self.instance_root, partition_id) os.mkdir(instance_path) os.chmod(instance_path, 0o750) supervisor_configuration_path = os.path.join( self.instance_root, 'supervisor') os.mkdir(supervisor_configuration_path) partition = Partition( software_path=software_path, instance_path=instance_path, supervisord_partition_configuration_path=os.path.join( supervisor_configuration_path, partition_id), supervisord_socket=os.path.join( supervisor_configuration_path, 'supervisor.sock'), computer_partition=slap_computer_partition, computer_id='bidon', partition_id=partition_id, server_url='bidon', software_release_url=software_release_url, buildout=self.buildout, logger=logging.getLogger(), ) partition.updateSupervisor = FakeCallAndNoop if retention_delay: partition.retention_delay = retention_delay return partition class TestSoftwareNetworkCacheSlapObject(MasterMixin, unittest.TestCase): """ Test for Network Cache related features in Software class. """ def setUp(self): MasterMixin.setUp(self) self.fakeCallAndRead = FakeNetworkCacheCallAndRead() utils.bootstrapBuildout = self.fakeCallAndRead utils.launchBuildout = self.fakeCallAndRead self.signature_private_key_file = '/signature/private/key_file' self.upload_cache_url = 'http://example.com/uploadcache' self.upload_dir_url = 'http://example.com/uploaddir' self.shacache_ca_file = '/path/to/shacache/ca/file' self.shacache_cert_file = '/path/to/shacache/cert/file' self.shacache_key_file = '/path/to/shacache/key/file' self.shadir_ca_file = '/path/to/shadir/ca/file' self.shadir_cert_file = '/path/to/shadir/cert/file' self.shadir_key_file = '/path/to/shadir/key/file' def tearDown(self): MasterMixin.tearDown(self) Software._install_from_buildout = original_install_from_buildout networkcache.upload_network_cached = original_upload_network_cached Software.uploadSoftwareRelease = originalUploadSoftwareRelease # Test methods def test_software_install_with_networkcache(self): """ Check if the networkcache parameters are propagated. 
""" software = Software( url='http://example.com/software.cfg', software_root=self.software_root, buildout=self.buildout, logger=logging.getLogger(), signature_private_key_file='/signature/private/key_file', upload_cache_url='http://example.com/uploadcache', upload_dir_url='http://example.com/uploaddir', shacache_ca_file=self.shacache_ca_file, shacache_cert_file=self.shacache_cert_file, shacache_key_file=self.shacache_key_file, shadir_ca_file=self.shadir_ca_file, shadir_cert_file=self.shadir_cert_file, shadir_key_file=self.shadir_key_file) software.install() command_list = self.fakeCallAndRead.external_command_list self.assertIn('buildout:networkcache-section=networkcache', command_list) self.assertIn('networkcache:signature-private-key-file=%s' % self.signature_private_key_file, command_list) self.assertIn('networkcache:upload-cache-url=%s' % self.upload_cache_url, command_list) self.assertIn('networkcache:upload-dir-url=%s' % self.upload_dir_url, command_list) self.assertIn('networkcache:shacache-ca-file=%s' % self.shacache_ca_file, command_list) self.assertIn('networkcache:shacache-cert-file=%s' % self.shacache_cert_file, command_list) self.assertIn('networkcache:shacache-key-file=%s' % self.shacache_key_file, command_list) self.assertIn('networkcache:shadir-ca-file=%s' % self.shadir_ca_file, command_list) self.assertIn('networkcache:shadir-cert-file=%s' % self.shadir_cert_file, command_list) self.assertIn('networkcache:shadir-key-file=%s' % self.shadir_key_file, command_list) def test_software_install_without_networkcache(self): """ Check if the networkcache parameters are not propagated if they are not available. """ software = Software(url='http://example.com/software.cfg', software_root=self.software_root, buildout=self.buildout, logger=logging.getLogger()) software.install() command_list = self.fakeCallAndRead.external_command_list self.assertNotIn('buildout:networkcache-section=networkcache', command_list) self.assertNotIn('networkcache:signature-private-key-file=%s' % self.signature_private_key_file, command_list) self.assertNotIn('networkcache:upload-cache-url=%s' % self.upload_cache_url, command_list) self.assertNotIn('networkcache:upload-dir-url=%s' % self.upload_dir_url, command_list) # XXX-Cedric: do the same with upload def test_software_install_networkcache_upload_blacklist(self): """ Check if the networkcache upload blacklist parameters are propagated. """ def fakeBuildout(*args, **kw): pass Software._install_from_buildout = fakeBuildout def fake_upload_network_cached(*args, **kw): self.assertFalse(True) networkcache.upload_network_cached = fake_upload_network_cached upload_to_binary_cache_url_blacklist = ["http://example.com"] software = Software( url='http://example.com/software.cfg', software_root=self.software_root, buildout=self.buildout, logger=logging.getLogger(), signature_private_key_file='/signature/private/key_file', upload_cache_url='http://example.com/uploadcache', upload_dir_url='http://example.com/uploaddir', shacache_ca_file=self.shacache_ca_file, shacache_cert_file=self.shacache_cert_file, shacache_key_file=self.shacache_key_file, shadir_ca_file=self.shadir_ca_file, shadir_cert_file=self.shadir_cert_file, shadir_key_file=self.shadir_key_file, upload_to_binary_cache_url_blacklist= upload_to_binary_cache_url_blacklist, ) software.install() def test_software_install_networkcache_upload_blacklist_side_effect(self): """ Check if the networkcache upload blacklist parameters only prevent blacklisted Software Release to be uploaded. 
""" def fakeBuildout(*args, **kw): pass Software._install_from_buildout = fakeBuildout def fakeUploadSoftwareRelease(*args, **kw): self.uploaded = True Software.uploadSoftwareRelease = fakeUploadSoftwareRelease upload_to_binary_cache_url_blacklist = ["http://anotherexample.com"] software = Software( url='http://example.com/software.cfg', software_root=self.software_root, buildout=self.buildout, logger=logging.getLogger(), signature_private_key_file='/signature/private/key_file', upload_cache_url='http://example.com/uploadcache', upload_dir_url='http://example.com/uploaddir', upload_binary_cache_url='http://example.com/uploadcache', upload_binary_dir_url='http://example.com/uploaddir', shacache_ca_file=self.shacache_ca_file, shacache_cert_file=self.shacache_cert_file, shacache_key_file=self.shacache_key_file, shadir_ca_file=self.shadir_ca_file, shadir_cert_file=self.shadir_cert_file, shadir_key_file=self.shadir_key_file, upload_to_binary_cache_url_blacklist= upload_to_binary_cache_url_blacklist, ) software.install() self.assertTrue(getattr(self, 'uploaded', False)) class TestPartitionSlapObject(MasterMixin, unittest.TestCase): def setUp(self): MasterMixin.setUp(self) Partition.generateSupervisorConfigurationFile = FakeCallAndNoop() utils.bootstrapBuildout = FakeCallAndNoop() utils.launchBuildout = FakeCallAndStore() def tearDown(self): MasterMixin.tearDown(self) Partition.generateSupervisorConfigurationFile = originalPartitionGenerateSupervisorConfigurationFile def test_instance_is_deploying_if_software_release_exists(self): """ Test that slapgrid deploys an instance if its Software Release exists and instance.cfg in the Software Release exists. """ software = self.createSoftware() partition = self.createPartition(software.url) partition.install() self.assertTrue(utils.launchBuildout.called) def test_backward_compatibility_instance_is_deploying_if_template_cfg_is_used(self): """ Backward compatibility test, for old software releases. Test that slapgrid deploys an instance if its Software Release exists and template.cfg in the Software Release exists. """ software = self.createSoftware(empty=True) open(os.path.join(software.software_path, 'template.cfg'), 'w').close() partition = self.createPartition(software.url) partition.install() self.assertTrue(utils.launchBuildout.called) def test_instance_slapgrid_raise_if_software_release_instance_profile_does_not_exist(self): """ Test that slapgrid raises XXX when deploying an instance if the Software Release related to the instance is not correctly installed (i.e there is no instance.cfg in it). """ software = self.createSoftware(empty=True) partition = self.createPartition(software.url) # XXX: What should it raise? self.assertRaises(IOError, partition.install) def test_instance_slapgrid_raise_if_software_release_does_not_exist(self): """ Test that slapgrid raises XXX when deploying an instance if the Software Release related to the instance is not present at all (i.e its directory does not exist at all). """ software = self.createSoftware(empty=True) os.rmdir(software.software_path) partition = self.createPartition(software.url) # XXX: What should it raise? 
self.assertRaises(IOError, partition.install) class TestPartitionDestructionLock(MasterMixin, unittest.TestCase): def setUp(self): MasterMixin.setUp(self) Partition.generateSupervisorConfigurationFile = FakeCallAndNoop() utils.bootstrapBuildout = FakeCallAndNoop() utils.launchBuildout = FakeCallAndStore() def test_retention_lock_delay_creation(self): delay = 42 software = self.createSoftware() partition = self.createPartition(software.url, retention_delay=delay) partition.install() deployed_delay = int(open(partition.retention_lock_delay_file_path).read()) self.assertEqual(delay, deployed_delay) def test_no_retention_lock_delay(self): software = self.createSoftware() partition = self.createPartition(software.url) partition.install() delay = open(partition.retention_lock_delay_file_path).read() self.assertTrue(delay, '0') self.assertTrue(partition.destroy()) def test_retention_lock_delay_does_not_change(self): delay = 42 software = self.createSoftware() partition = self.createPartition(software.url, retention_delay=delay) partition.install() partition.retention_delay = 23 # install/destroy many times partition.install() partition.destroy() partition.destroy() partition.install() partition.destroy() deployed_delay = int(open(partition.retention_lock_delay_file_path).read()) self.assertEqual(delay, deployed_delay) def test_retention_lock_delay_is_respected(self): delay = 2.0 / (3600 * 24) software = self.createSoftware() partition = self.createPartition(software.url, retention_delay=delay) partition.install() deployed_delay = float(open(partition.retention_lock_delay_file_path).read()) self.assertEqual(int(delay), int(deployed_delay)) self.assertFalse(partition.destroy()) time.sleep(1) self.assertFalse(partition.destroy()) time.sleep(1) self.assertTrue(partition.destroy()) def test_retention_lock_date_creation(self): delay = 42 software = self.createSoftware() partition = self.createPartition(software.url, retention_delay=delay) partition.install() self.assertFalse(os.path.exists(partition.retention_lock_date_file_path)) partition.destroy() deployed_date = float(open(partition.retention_lock_date_file_path).read()) self.assertEqual(delay * 3600 * 24 + int(time.time()), int(deployed_date)) def test_retention_lock_date_does_not_change(self): delay = 42 software = self.createSoftware() partition = self.createPartition(software.url, retention_delay=delay) now = time.time() partition.install() partition.destroy() partition.retention_delay = 23 # install/destroy many times partition.install() partition.destroy() partition.destroy() partition.install() partition.destroy() deployed_date = float(open(partition.retention_lock_date_file_path).read()) self.assertEqual(delay * 3600 * 24 + int(now), int(deployed_date)) slapos.core-1.3.18/slapos/tests/slap.py0000644000000000000000000013277613003671621017765 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import os import unittest import urlparse import tempfile import httmock import slapos.slap import xml_marshaller class UndefinedYetException(Exception): """To catch exceptions which are not yet defined""" class SlapMixin(unittest.TestCase): """ Useful methods for slap tests """ def setUp(self): self._server_url = os.environ.get('TEST_SLAP_SERVER_URL', None) if self._server_url is None: self.server_url = 'http://localhost/' else: self.server_url = self._server_url print 'Testing against SLAP server %r' % self.server_url self.slap = slapos.slap.slap() self.partition_id = 'PARTITION_01' if os.environ.has_key('SLAPGRID_INSTANCE_ROOT'): del os.environ['SLAPGRID_INSTANCE_ROOT'] def tearDown(self): pass def _getTestComputerId(self): """ Returns the computer id used by the test """ return self.id() class TestSlap(SlapMixin): """ Test slap against slap server """ def test_slap_initialisation(self): """ Asserts that slap initialisation works properly in case of passing correct url """ slap_instance = slapos.slap.slap() slap_instance.initializeConnection(self.server_url) self.assertEquals(slap_instance._connection_helper.slapgrid_uri, self.server_url) def test_slap_initialisation_ipv6_and_port(self): slap_instance = slapos.slap.slap() slap_instance.initializeConnection("http://1234:1234:1234:1234:1:1:1:1:5000/foo/") self.assertEqual( slap_instance._connection_helper.slapgrid_uri, "http://[1234:1234:1234:1234:1:1:1:1]:5000/foo/" ) def test_slap_initialisation_ipv6_without_port(self): slap_instance = slapos.slap.slap() slap_instance.initializeConnection("http://1234:1234:1234:1234:1:1:1:1/foo/") self.assertEqual( slap_instance._connection_helper.slapgrid_uri, "http://[1234:1234:1234:1234:1:1:1:1]/foo/" ) def test_slap_initialisation_ipv6_with_bracket(self): slap_instance = slapos.slap.slap() slap_instance.initializeConnection("http://[1234:1234:1234:1234:1:1:1:1]:5000/foo/") self.assertEqual( slap_instance._connection_helper.slapgrid_uri, "http://[1234:1234:1234:1234:1:1:1:1]:5000/foo/" ) def test_slap_initialisation_ipv4(self): slap_instance = slapos.slap.slap() slap_instance.initializeConnection("http://127.0.0.1:5000/foo/") self.assertEqual( slap_instance._connection_helper.slapgrid_uri, "http://127.0.0.1:5000/foo/" ) def test_slap_initialisation_hostname(self): # XXX this really opens a connection ! 
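    # Unlike the IPv4/IPv6 tests above, this call is not wrapped in an httmock
    # handler, so initializeConnection may actually try to reach foo.com
    # (hence the XXX above). For reference, the URI normalisation asserted by
    # the preceding tests is, for example:
    #   http://1234:1234:1234:1234:1:1:1:1:5000/foo/
    #     -> http://[1234:1234:1234:1234:1:1:1:1]:5000/foo/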
slap_instance = slapos.slap.slap() slap_instance.initializeConnection("http://foo.com:5000/foo/") self.assertEqual( slap_instance._connection_helper.slapgrid_uri, "http://foo.com:5000/foo/" ) def test_registerComputer_with_new_guid(self): """ Asserts that calling slap.registerComputer with new guid returns Computer object """ computer_guid = self._getTestComputerId() self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) computer = self.slap.registerComputer(computer_guid) self.assertIsInstance(computer, slapos.slap.Computer) def test_registerComputer_with_existing_guid(self): """ Asserts that calling slap.registerComputer with already used guid returns Computer object """ computer_guid = self._getTestComputerId() self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) computer = self.slap.registerComputer(computer_guid) self.assertIsInstance(computer, slapos.slap.Computer) computer2 = self.slap.registerComputer(computer_guid) self.assertIsInstance(computer2, slapos.slap.Computer) # XXX: There is naming conflict in slap library. # SoftwareRelease is currently used as suboject of Slap transmission object def test_registerSoftwareRelease_with_new_uri(self): """ Asserts that calling slap.registerSoftwareRelease with new guid returns SoftwareRelease object """ software_release_uri = 'http://server/' + self._getTestComputerId() self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) software_release = self.slap.registerSoftwareRelease(software_release_uri) self.assertIsInstance(software_release, slapos.slap.SoftwareRelease) def test_registerSoftwareRelease_with_existing_uri(self): """ Asserts that calling slap.registerSoftwareRelease with already used guid returns SoftwareRelease object """ software_release_uri = 'http://server/' + self._getTestComputerId() self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) software_release = self.slap.registerSoftwareRelease(software_release_uri) self.assertIsInstance(software_release, slapos.slap.SoftwareRelease) software_release2 = self.slap.registerSoftwareRelease(software_release_uri) self.assertIsInstance(software_release2, slapos.slap.SoftwareRelease) def test_registerComputerPartition_new_partition_id_known_computer_guid(self): """ Asserts that calling slap.registerComputerPartition on known computer returns ComputerPartition object """ computer_guid = self._getTestComputerId() partition_id = self.partition_id self.slap.initializeConnection(self.server_url) self.slap.registerComputer(computer_guid) def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/registerComputerPartition' and qs == { 'computer_reference': [computer_guid], 'computer_partition_reference': [partition_id] }): partition = slapos.slap.ComputerPartition(computer_guid, partition_id) return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(partition) } else: return {'status_code': 400} self._handler = handler with httmock.HTTMock(handler): partition = self.slap.registerComputerPartition(computer_guid, partition_id) self.assertIsInstance(partition, slapos.slap.ComputerPartition) def test_registerComputerPartition_existing_partition_id_known_computer_guid(self): """ Asserts that calling slap.registerComputerPartition on known computer returns ComputerPartition object """ self.test_registerComputerPartition_new_partition_id_known_computer_guid() with httmock.HTTMock(self._handler): partition = self.slap.registerComputerPartition(self._getTestComputerId(), 
self.partition_id) self.assertIsInstance(partition, slapos.slap.ComputerPartition) def test_registerComputerPartition_unknown_computer_guid(self): """ Asserts that calling slap.registerComputerPartition on unknown computer raises NotFoundError exception """ computer_guid = self._getTestComputerId() self.slap.initializeConnection(self.server_url) partition_id = 'PARTITION_01' def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/registerComputerPartition' and qs == { 'computer_reference': [computer_guid], 'computer_partition_reference': [partition_id] }): return {'status_code': 404} else: return {'status_code': 0} with httmock.HTTMock(handler): self.assertRaises(slapos.slap.NotFoundError, self.slap.registerComputerPartition, computer_guid, partition_id) def test_getFullComputerInformation_empty_computer_guid(self): """ Asserts that calling getFullComputerInformation with empty computer_id raises early, before calling master. """ self.slap.initializeConnection(self.server_url) def handler(url, req): # Shouldn't even be called self.assertFalse(True) with httmock.HTTMock(handler): self.assertRaises(slapos.slap.NotFoundError, self.slap._connection_helper.getFullComputerInformation, None) def test_registerComputerPartition_empty_computer_guid(self): """ Asserts that calling registerComputerPartition with empty computer_id raises early, before calling master. """ self.slap.initializeConnection(self.server_url) def handler(url, req): # Shouldn't even be called self.assertFalse(True) with httmock.HTTMock(handler): self.assertRaises(slapos.slap.NotFoundError, self.slap.registerComputerPartition, None, 'PARTITION_01') def test_registerComputerPartition_empty_computer_partition_id(self): """ Asserts that calling registerComputerPartition with empty computer_partition_id raises early, before calling master. """ self.slap.initializeConnection(self.server_url) def handler(url, req): # Shouldn't even be called self.assertFalse(True) with httmock.HTTMock(handler): self.assertRaises(slapos.slap.NotFoundError, self.slap.registerComputerPartition, self._getTestComputerId(), None) def test_registerComputerPartition_empty_computer_guid_empty_computer_partition_id(self): """ Asserts that calling registerComputerPartition with empty computer_partition_id raises early, before calling master. """ self.slap.initializeConnection(self.server_url) def handler(url, req): # Shouldn't even be called self.assertFalse(True) with httmock.HTTMock(handler): self.assertRaises(slapos.slap.NotFoundError, self.slap.registerComputerPartition, None, None) def test_getSoftwareReleaseListFromSoftwareProduct_software_product_reference(self): """ Check that slap.getSoftwareReleaseListFromSoftwareProduct calls "/getSoftwareReleaseListFromSoftwareProduct" URL with correct parameters, with software_product_reference parameter being specified. 
""" self.slap.initializeConnection(self.server_url) software_product_reference = 'random_reference' software_release_url_list = ['1', '2'] def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/getSoftwareReleaseListFromSoftwareProduct' and qs == {'software_product_reference': [software_product_reference]}): return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(software_release_url_list) } with httmock.HTTMock(handler): self.assertEqual( self.slap.getSoftwareReleaseListFromSoftwareProduct( software_product_reference=software_product_reference), software_release_url_list ) def test_getSoftwareReleaseListFromSoftwareProduct_software_release_url(self): """ Check that slap.getSoftwareReleaseListFromSoftwareProduct calls "/getSoftwareReleaseListFromSoftwareProduct" URL with correct parameters, with software_release_url parameter being specified. """ self.slap.initializeConnection(self.server_url) software_release_url = 'random_url' software_release_url_list = ['1', '2'] def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/getSoftwareReleaseListFromSoftwareProduct' and qs == {'software_release_url': [software_release_url]}): return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(software_release_url_list) } with httmock.HTTMock(handler): self.assertEqual( self.slap.getSoftwareReleaseListFromSoftwareProduct( software_release_url=software_release_url), software_release_url_list ) def test_getSoftwareReleaseListFromSoftwareProduct_too_many_parameters(self): """ Check that slap.getSoftwareReleaseListFromSoftwareProduct raises if both parameters are set. """ self.assertRaises( AttributeError, self.slap.getSoftwareReleaseListFromSoftwareProduct, 'foo', 'bar' ) def test_getSoftwareReleaseListFromSoftwareProduct_no_parameter(self): """ Check that slap.getSoftwareReleaseListFromSoftwareProduct raises if both parameters are either not set or None. """ self.assertRaises( AttributeError, self.slap.getSoftwareReleaseListFromSoftwareProduct ) def test_initializeConnection_getHateoasUrl(self): """ Test that by default, slap will try to fetch Hateoas URL from XML/RPC URL. """ hateoas_url = 'foo' def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/getHateoasUrl'): return { 'status_code': 200, 'content': hateoas_url } with httmock.HTTMock(handler): self.slap.initializeConnection('http://bar') self.assertEqual( self.slap._hateoas_navigator.slapos_master_hateoas_uri, hateoas_url ) def test_initializeConnection_specifiedHateoasUrl(self): """ Test that if rest URL is specified, slap will NOT try to fetch Hateoas URL from XML/RPC URL. """ hateoas_url = 'foo' def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/getHateoasUrl'): self.fail('slap should not have contacted master to get Hateoas URL.') with httmock.HTTMock(handler): self.slap.initializeConnection('http://bar', slapgrid_rest_uri=hateoas_url) self.assertEqual( self.slap._hateoas_navigator.slapos_master_hateoas_uri, hateoas_url ) def test_initializeConnection_noHateoasUrl(self): """ Test that if no rest URL is specified and master does not know about rest, it still work. 
""" hateoas_url = 'foo' def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/getHateoasUrl'): return { 'status_code': 404, } with httmock.HTTMock(handler): self.slap.initializeConnection('http://bar') self.assertEqual(None, getattr(self.slap, '_hateoas_navigator', None)) class TestComputer(SlapMixin): """ Tests slapos.slap.slap.Computer class functionality """ def test_computer_getComputerPartitionList_no_partition(self): """ Asserts that calling Computer.getComputerPartitionList without Computer Partitions returns empty list """ computer_guid = self._getTestComputerId() slap = self.slap slap.initializeConnection(self.server_url) def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/registerComputerPartition' and 'computer_reference' in qs and 'computer_partition_reference' in qs): slap_partition = slapos.slap.ComputerPartition( qs['computer_reference'][0], qs['computer_partition_reference'][0]) return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(slap_partition) } elif (url.path == '/getFullComputerInformation' and 'computer_id' in qs): slap_computer = slapos.slap.Computer(qs['computer_id'][0]) slap_computer._software_release_list = [] slap_computer._computer_partition_list = [] return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(slap_computer) } elif url.path == '/requestComputerPartition': return {'status_code': 408} else: return {'status_code': 404} with httmock.HTTMock(handler): computer = self.slap.registerComputer(computer_guid) self.assertEqual(computer.getComputerPartitionList(), []) def _test_computer_empty_computer_guid(self, computer_method): """ Helper method checking if calling Computer method with empty id raises early. """ self.slap.initializeConnection(self.server_url) def handler(url, req): # Shouldn't even be called self.assertFalse(True) with httmock.HTTMock(handler): computer = self.slap.registerComputer(None) self.assertRaises(slapos.slap.NotFoundError, getattr(computer, computer_method)) def test_computer_getComputerPartitionList_empty_computer_guid(self): """ Asserts that calling getComputerPartitionList with empty computer_guid raises early, before calling master. """ self._test_computer_empty_computer_guid('getComputerPartitionList') def test_computer_getSoftwareReleaseList_empty_computer_guid(self): """ Asserts that calling getSoftwareReleaseList with empty computer_guid raises early, before calling master. 
""" self._test_computer_empty_computer_guid('getSoftwareReleaseList') def test_computer_getComputerPartitionList_only_partition(self): """ Asserts that calling Computer.getComputerPartitionList with only Computer Partitions returns empty list """ self.computer_guid = self._getTestComputerId() partition_id = 'PARTITION_01' self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) def handler(url, req): qs = urlparse.parse_qs(url.query) if (url.path == '/registerComputerPartition' and qs == { 'computer_reference': [self.computer_guid], 'computer_partition_reference': [partition_id] }): partition = slapos.slap.ComputerPartition(self.computer_guid, partition_id) return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(partition) } elif (url.path == '/getFullComputerInformation' and 'computer_id' in qs): slap_computer = slapos.slap.Computer(qs['computer_id'][0]) slap_computer._computer_partition_list = [] return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(slap_computer) } else: return {'status_code': 400} with httmock.HTTMock(handler): self.computer = self.slap.registerComputer(self.computer_guid) self.partition = self.slap.registerComputerPartition(self.computer_guid, partition_id) self.assertEqual(self.computer.getComputerPartitionList(), []) @unittest.skip("Not implemented") def test_computer_reportUsage_non_valid_xml_raises(self): """ Asserts that calling Computer.reportUsage with non DTD (not defined yet) XML raises (not defined yet) exception """ self.computer_guid = self._getTestComputerId() self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) self.computer = self.slap.registerComputer(self.computer_guid) non_dtd_xml = """ value """ self.assertRaises(UndefinedYetException, self.computer.reportUsage, non_dtd_xml) @unittest.skip("Not implemented") def test_computer_reportUsage_valid_xml_invalid_partition_raises(self): """ Asserts that calling Computer.reportUsage with DTD (not defined yet) XML which refers to invalid partition raises (not defined yet) exception """ self.computer_guid = self._getTestComputerId() partition_id = 'PARTITION_01' self.slap = slapos.slap.slap() self.slap.initializeConnection(self.server_url) self.computer = self.slap.registerComputer(self.computer_guid) self.partition = self.slap.registerComputerPartition(self.computer_guid, partition_id) # XXX: As DTD is not defined currently proper XML is not known bad_partition_dtd_xml = """ URL_CONNECTION_PARAMETER """, slap_computer_id=computer_guid, slap_computer_partition_id=requested_partition_id) return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(slap_partition) } with httmock.HTTMock(handler): computer_partition = open_order.request(software_release_uri, 'myrefe') self.assertIsInstance(computer_partition, slapos.slap.ComputerPartition) self.assertEqual(requested_partition_id, computer_partition.getId()) self.assertEqual("URL_CONNECTION_PARAMETER", computer_partition.getConnectionParameter('url')) class TestSoftwareProductCollection(SlapMixin): def setUp(self): SlapMixin.setUp(self) self.real_getSoftwareReleaseListFromSoftwareProduct =\ slapos.slap.slap.getSoftwareReleaseListFromSoftwareProduct def fake_getSoftwareReleaseListFromSoftwareProduct(inside_self, software_product_reference): return self.getSoftwareReleaseListFromSoftwareProduct_response slapos.slap.slap.getSoftwareReleaseListFromSoftwareProduct =\ fake_getSoftwareReleaseListFromSoftwareProduct self.product_collection = 
slapos.slap.SoftwareProductCollection( logging.getLogger(), slapos.slap.slap()) def tearDown(self): slapos.slap.slap.getSoftwareReleaseListFromSoftwareProduct =\ self.real_getSoftwareReleaseListFromSoftwareProduct def test_get_product(self): """ Test that the get method (aliased to __getattr__) returns the first element of the list given by getSoftwareReleaseListFromSoftwareProduct (i.e the best one). """ self.getSoftwareReleaseListFromSoftwareProduct_response = ['0', '1', '2'] self.assertEqual( self.product_collection.get('random_reference'), self.getSoftwareReleaseListFromSoftwareProduct_response[0] ) def test_get_product_empty_product(self): """ Test that the get method (aliased to __getattr__) raises if no Software Release is related to the Software Product, or if the Software Product does not exist. """ self.getSoftwareReleaseListFromSoftwareProduct_response = [] self.assertRaises( AttributeError, self.product_collection.get, 'random_reference', ) def test_get_product_getattr(self): """ Test that __getattr__ method is bound to get() method. """ self.getSoftwareReleaseListFromSoftwareProduct_response = ['0'] self.product_collection.foo self.assertEqual( self.product_collection.__getattr__, self.product_collection.get ) self.assertEqual(self.product_collection.foo, '0') if __name__ == '__main__': print 'You can point to any SLAP server by setting TEST_SLAP_SERVER_URL '\ 'environment variable' unittest.main() slapos.core-1.3.18/slapos/tests/slapgrid.py0000644000000000000000000030740713003671621020626 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
# ############################################################################## from __future__ import absolute_import import logging import os import random import shutil import signal import socket import sys import stat import tempfile import textwrap import time import unittest import urlparse import json import xml_marshaller from mock import patch import slapos.slap.slap import slapos.grid.utils from slapos.grid import slapgrid from slapos.grid.utils import md5digest from slapos.grid.watchdog import Watchdog from slapos.grid import SlapObject from slapos.grid.SlapObject import WATCHDOG_MARK from slapos.slap.slap import COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME import slapos.grid.SlapObject import httmock dummylogger = logging.getLogger() WATCHDOG_TEMPLATE = """#!{python_path} -S import sys sys.path={sys_path} import slapos.slap import slapos.grid.watchdog def bang(self_partition, message): nl = chr(10) with open('{watchdog_banged}', 'w') as fout: for key, value in vars(self_partition).items(): fout.write('%s: %s%s' % (key, value, nl)) if key == '_connection_helper': for k, v in vars(value).items(): fout.write(' %s: %s%s' % (k, v, nl)) fout.write(message) slapos.slap.ComputerPartition.bang = bang slapos.grid.watchdog.main() """ WRAPPER_CONTENT = """#!/bin/sh touch worked && mkdir -p etc/run && echo "#!/bin/sh" > etc/run/wrapper && echo "while true; do echo Working; sleep 0.1; done" >> etc/run/wrapper && chmod 755 etc/run/wrapper """ DAEMON_CONTENT = """#!/bin/sh mkdir -p etc/service && echo "#!/bin/sh" > etc/service/daemon && echo "touch launched if [ -f ./crashed ]; then while true; do echo Working; sleep 0.1; done else touch ./crashed; echo Failing; sleep 1; exit 111; fi" >> etc/service/daemon && chmod 755 etc/service/daemon && touch worked """ class BasicMixin(object): def setUp(self): self._tempdir = tempfile.mkdtemp() self.software_root = os.path.join(self._tempdir, 'software') self.instance_root = os.path.join(self._tempdir, 'instance') if os.environ.has_key('SLAPGRID_INSTANCE_ROOT'): del os.environ['SLAPGRID_INSTANCE_ROOT'] logging.basicConfig(level=logging.DEBUG) self.setSlapgrid() def setSlapgrid(self, develop=False): if getattr(self, 'master_url', None) is None: self.master_url = 'http://127.0.0.1:80/' self.computer_id = 'computer' self.supervisord_socket = os.path.join(self._tempdir, 'supervisord.sock') self.supervisord_configuration_path = os.path.join(self._tempdir, 'supervisord') self.usage_report_periodicity = 1 self.buildout = None self.grid = slapgrid.Slapgrid(self.software_root, self.instance_root, self.master_url, self.computer_id, self.buildout, develop=develop, logger=logging.getLogger()) # monkey patch buildout bootstrap def dummy(*args, **kw): pass slapos.grid.utils.bootstrapBuildout = dummy SlapObject.PROGRAM_PARTITION_TEMPLATE = textwrap.dedent("""\ [program:%(program_id)s] directory=%(program_directory)s command=%(program_command)s process_name=%(program_name)s autostart=false autorestart=false startsecs=0 startretries=0 exitcodes=0 stopsignal=TERM stopwaitsecs=60 stopasgroup=true killasgroup=true user=%(user_id)s group=%(group_id)s serverurl=AUTO redirect_stderr=true stdout_logfile=%(instance_path)s/.%(program_id)s.log stderr_logfile=%(instance_path)s/.%(program_id)s.log environment=USER="%(USER)s",LOGNAME="%(USER)s",HOME="%(HOME)s" """) def launchSlapgrid(self, develop=False): self.setSlapgrid(develop=develop) return self.grid.processComputerPartitionList() def launchSlapgridSoftware(self, develop=False): self.setSlapgrid(develop=develop) return 
self.grid.processSoftwareReleaseList() def assertLogContent(self, log_path, expected, tries=600): for i in range(tries): if expected in open(log_path).read(): return time.sleep(0.1) self.fail('%r not found in %s' % (expected, log_path)) def assertIsCreated(self, path, tries=600): for i in range(tries): if os.path.exists(path): return time.sleep(0.1) self.fail('%s should be created' % path) def assertIsNotCreated(self, path, tries=50): for i in range(tries): if os.path.exists(path): self.fail('%s should not be created' % path) time.sleep(0.01) def assertInstanceDirectoryListEqual(self, instance_list): instance_list.append('etc') instance_list.append('var') instance_list.append('supervisord.socket') self.assertItemsEqual(os.listdir(self.instance_root), instance_list) def tearDown(self): # XXX: Hardcoded pid, as it is not configurable in slapos svc = os.path.join(self.instance_root, 'var', 'run', 'supervisord.pid') if os.path.exists(svc): try: pid = int(open(svc).read().strip()) except ValueError: pass else: os.kill(pid, signal.SIGTERM) shutil.rmtree(self._tempdir, True) class TestRequiredOnlyPartitions(unittest.TestCase): def test_no_errors(self): required = ['one', 'three'] existing = ['one', 'two', 'three'] slapgrid.check_required_only_partitions(existing, required) def test_one_missing(self): required = ['foobar', 'two', 'one'] existing = ['one', 'two', 'three'] self.assertRaisesRegexp(ValueError, 'Unknown partition: foobar', slapgrid.check_required_only_partitions, existing, required) def test_several_missing(self): required = ['foobar', 'barbaz'] existing = ['one', 'two', 'three'] self.assertRaisesRegexp(ValueError, 'Unknown partitions: barbaz, foobar', slapgrid.check_required_only_partitions, existing, required) class TestBasicSlapgridCP(BasicMixin, unittest.TestCase): def test_no_software_root(self): self.assertRaises(OSError, self.grid.processComputerPartitionList) def test_no_instance_root(self): os.mkdir(self.software_root) self.assertRaises(OSError, self.grid.processComputerPartitionList) @unittest.skip('which request handler here?') def test_no_master(self): os.mkdir(self.software_root) os.mkdir(self.instance_root) self.assertRaises(socket.error, self.grid.processComputerPartitionList) class MasterMixin(BasicMixin): def _mock_sleep(self): self.fake_waiting_time = None self.real_sleep = time.sleep def mocked_sleep(secs): if self.fake_waiting_time is not None: secs = self.fake_waiting_time self.real_sleep(secs) time.sleep = mocked_sleep def _unmock_sleep(self): time.sleep = self.real_sleep def setUp(self): self._mock_sleep() BasicMixin.setUp(self) def tearDown(self): self._unmock_sleep() BasicMixin.tearDown(self) class ComputerForTest(object): """ Class to set up environment for tests setting instance, software and server response """ def __init__(self, software_root, instance_root, instance_amount=1, software_amount=1): """ Will set up instances, software and sequence """ self.sequence = [] self.instance_amount = instance_amount self.software_amount = software_amount self.software_root = software_root self.instance_root = instance_root self.ip_address_list = [ ('interface1', '10.0.8.3'), ('interface2', '10.0.8.4'), ('route_interface1', '10.10.8.4') ] if not os.path.isdir(self.instance_root): os.mkdir(self.instance_root) if not os.path.isdir(self.software_root): os.mkdir(self.software_root) self.setSoftwares() self.setInstances() def request_handler(self, url, req): """ Define _callback. 
Will register global sequence of message, sequence by partition and error and error message by partition """ self.sequence.append(url.path) if req.method == 'GET': qs = urlparse.parse_qs(url.query) else: qs = urlparse.parse_qs(req.body) if (url.path == '/getFullComputerInformation' and 'computer_id' in qs): slap_computer = self.getComputer(qs['computer_id'][0]) return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(slap_computer) } elif url.path == '/getHostingSubscriptionIpList': ip_address_list = self.ip_address_list return { 'status_code': 200, 'content': xml_marshaller.xml_marshaller.dumps(ip_address_list) } if req.method == 'POST' and 'computer_partition_id' in qs: instance = self.instance_list[int(qs['computer_partition_id'][0])] instance.sequence.append(url.path) instance.header_list.append(req.headers) if url.path == '/availableComputerPartition': return {'status_code': 200} if url.path == '/startedComputerPartition': instance.state = 'started' return {'status_code': 200} if url.path == '/stoppedComputerPartition': instance.state = 'stopped' return {'status_code': 200} if url.path == '/destroyedComputerPartition': instance.state = 'destroyed' return {'status_code': 200} if url.path == '/softwareInstanceBang': return {'status_code': 200} if url.path == "/updateComputerPartitionRelatedInstanceList": return {'status_code': 200} if url.path == '/softwareInstanceError': instance.error_log = '\n'.join( [ line for line in qs['error_log'][0].splitlines() if 'dropPrivileges' not in line ] ) instance.error = True return {'status_code': 200} elif req.method == 'POST' and 'url' in qs: # XXX hardcoded to first software release! software = self.software_list[0] software.sequence.append(url.path) if url.path == '/buildingSoftwareRelease': return {'status_code': 200} if url.path == '/softwareReleaseError': software.error_log = '\n'.join( [ line for line in qs['error_log'][0].splitlines() if 'dropPrivileges' not in line ] ) software.error = True return {'status_code': 200} else: return {'status_code': 500} def setSoftwares(self): """ Will set requested amount of software """ self.software_list = [ SoftwareForTest(self.software_root, name=str(i)) for i in range(self.software_amount) ] def setInstances(self): """ Will set requested amount of instance giving them by default first software """ if self.software_list: software = self.software_list[0] else: software = None self.instance_list = [ InstanceForTest(self.instance_root, name=str(i), software=software) for i in range(self.instance_amount) ] def getComputer(self, computer_id): """ Will return current requested state of computer """ slap_computer = slapos.slap.Computer(computer_id) slap_computer._software_release_list = [ software.getSoftware(computer_id) for software in self.software_list ] slap_computer._computer_partition_list = [ instance.getInstance(computer_id) for instance in self.instance_list ] return slap_computer class InstanceForTest(object): """ Class containing all needed paramaters and function to simulate instances """ def __init__(self, instance_root, name, software): self.instance_root = instance_root self.software = software self.requested_state = 'stopped' self.state = None self.error = False self.error_log = None self.sequence = [] self.header_list = [] self.name = name self.partition_path = os.path.join(self.instance_root, self.name) os.mkdir(self.partition_path, 0o750) self.timestamp = None self.ip_list = [('interface0', '10.0.8.2')] self.full_ip_list = [('route_interface0', '10.10.2.3', '10.10.0.1', 
'255.0.0.0', '10.0.0.0')] def getInstance(self, computer_id, ): """ Will return current requested state of instance """ partition = slapos.slap.ComputerPartition(computer_id, self.name) partition._software_release_document = self.getSoftwareRelease() partition._requested_state = self.requested_state if getattr(self, 'filter_dict', None): partition._filter_dict = self.filter_dict partition._parameter_dict = {'ip_list': self.ip_list, 'full_ip_list': self.full_ip_list } if self.software is not None: if self.timestamp is not None: partition._parameter_dict['timestamp'] = self.timestamp self.current_partition = partition return partition def getSoftwareRelease(self): """ Return software release for Instance """ if self.software is not None: sr = slapos.slap.SoftwareRelease() sr._software_release = self.software.name return sr else: return None def setPromise(self, promise_name, promise_content): """ This function will set promise and return its path """ promise_path = os.path.join(self.partition_path, 'etc', 'promise') if not os.path.isdir(promise_path): os.makedirs(promise_path) promise = os.path.join(promise_path, promise_name) open(promise, 'w').write(promise_content) os.chmod(promise, 0o777) def setCertificate(self, certificate_repository_path): if not os.path.exists(certificate_repository_path): os.mkdir(certificate_repository_path) self.cert_file = os.path.join(certificate_repository_path, "%s.crt" % self.name) self.certificate = str(random.random()) open(self.cert_file, 'w').write(self.certificate) self.key_file = os.path.join(certificate_repository_path, '%s.key' % self.name) self.key = str(random.random()) open(self.key_file, 'w').write(self.key) class SoftwareForTest(object): """ Class to prepare and simulate software. each instance has a sotfware attributed """ def __init__(self, software_root, name=''): """ Will set file and variable for software """ self.software_root = software_root self.name = 'http://sr%s/' % name self.sequence = [] self.software_hash = md5digest(self.name) self.srdir = os.path.join(self.software_root, self.software_hash) self.requested_state = 'available' os.mkdir(self.srdir) self.setTemplateCfg() self.srbindir = os.path.join(self.srdir, 'bin') os.mkdir(self.srbindir) self.setBuildout() def getSoftware(self, computer_id): """ Will return current requested state of software """ software = slapos.slap.SoftwareRelease(self.name, computer_id) software._requested_state = self.requested_state return software def setTemplateCfg(self, template="""[buildout]"""): """ Set template.cfg """ open(os.path.join(self.srdir, 'template.cfg'), 'w').write(template) def setBuildout(self, buildout="""#!/bin/sh touch worked"""): """ Set a buildout exec in bin """ open(os.path.join(self.srbindir, 'buildout'), 'w').write(buildout) os.chmod(os.path.join(self.srbindir, 'buildout'), 0o755) def setPeriodicity(self, periodicity): """ Set a periodicity file """ with open(os.path.join(self.srdir, 'periodicity'), 'w') as fout: fout.write(str(periodicity)) class TestSlapgridCPWithMaster(MasterMixin, unittest.TestCase): def test_nothing_to_do(self): computer = ComputerForTest(self.software_root, self.instance_root, 0, 0) with httmock.HTTMock(computer.request_handler): self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual([]) self.assertItemsEqual(os.listdir(self.software_root), []) st = os.stat(os.path.join(self.instance_root, 'var')) self.assertEquals(stat.S_IMODE(st.st_mode), 0o755) def test_one_partition(self): computer = 
ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition']) def test_one_partition_instance_cfg(self): """ Check that slapgrid processes instance is profile is not named "template.cfg" but "instance.cfg". """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition']) def test_one_free_partition(self): """ Test if slapgrid cp does not process "free" partition """ computer = ComputerForTest(self.software_root, self.instance_root, software_amount=0) with httmock.HTTMock(computer.request_handler): partition = computer.instance_list[0] partition.requested_state = 'destroyed' self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(partition.partition_path), []) self.assertItemsEqual(os.listdir(self.software_root), []) self.assertEqual(partition.sequence, []) def test_one_partition_started(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): partition = computer.instance_list[0] partition.requested_state = 'started' partition.software.setBuildout(WRAPPER_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(partition.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(partition.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [partition.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual(partition.state, 'started') def test_one_partition_started_fail(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): partition = computer.instance_list[0] partition.requested_state = 'started' partition.software.setBuildout(WRAPPER_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) 
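      # First pass: the partition deploys and starts normally. Further below
      # the buildout script is replaced with one that exits 1, and the second
      # run is expected to return SLAPGRID_FAIL, report /softwareInstanceError
      # to the master and leave the partition state as 'started'.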
self.assertItemsEqual(os.listdir(partition.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(partition.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [partition.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual(partition.state, 'started') instance = computer.instance_list[0] instance.software.setBuildout("""#!/bin/sh exit 1 """) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_FAIL) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay', '.slapgrid-0-error.log']) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition', '/getHateoasUrl', '/getFullComputerInformation', '/softwareInstanceError']) self.assertEqual(instance.state, 'started') def test_one_partition_started_stopped(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' instance.software.setBuildout("""#!/bin/sh touch worked && mkdir -p etc/run && ( cat <<'HEREDOC' #!%(python)s import signal def handler(signum, frame): for i in range(30): print 'Signal handler called with signal', signum raise SystemExit signal.signal(signal.SIGTERM, handler) while True: print "Working" HEREDOC )> etc/run/wrapper && chmod 755 etc/run/wrapper """ % {'python': sys.executable}) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual(instance.state, 'started') computer.sequence = [] instance.requested_state = 'stopped' self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertLogContent(wrapper_log, 'Signal handler called with signal 15') self.assertEqual(computer.sequence, ['/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition']) self.assertEqual(instance.state, 'stopped') def test_one_broken_partition_stopped(self): """ Check that, for, an already started instance if stop is requested, processes will be stopped even if instance is broken (buildout fails to run) but status is still started. 
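    Concretely, the second run is expected to return SLAPGRID_FAIL and to call
    /softwareInstanceError, while the wrapper still receives SIGTERM and the
    state reported to the master remains 'started'.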
""" computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' instance.software.setBuildout("""#!/bin/sh touch worked && mkdir -p etc/run && ( cat <<'HEREDOC' #!%(python)s import signal def handler(signum, frame): for i in range(30): print 'Signal handler called with signal', signum raise SystemExit signal.signal(signal.SIGTERM, handler) while True: print "Working" HEREDOC )> etc/run/wrapper && chmod 755 etc/run/wrapper """ % {'python': sys.executable}) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual(instance.state, 'started') computer.sequence = [] instance.requested_state = 'stopped' instance.software.setBuildout("""#!/bin/sh exit 1 """) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_FAIL) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay', '.slapgrid-0-error.log']) self.assertLogContent(wrapper_log, 'Signal handler called with signal 15') self.assertEqual(computer.sequence, ['/getHateoasUrl', '/getFullComputerInformation', '/softwareInstanceError']) self.assertEqual(instance.state, 'started') def test_one_partition_stopped_started(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'stopped' instance.software.setBuildout(WRAPPER_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition']) self.assertEqual('stopped', instance.state) instance.requested_state = 'started' computer.sequence = [] self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.0_wrapper.log', 'etc', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertEqual(computer.sequence, ['/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) 
self.assertEqual('started', instance.state) def test_one_partition_destroyed(self): """ Test that an existing partition with "destroyed" status will only be stopped by slapgrid-cp, not processed """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'destroyed' dummy_file_name = 'dummy_file' with open(os.path.join(instance.partition_path, dummy_file_name), 'w') as dummy_file: dummy_file.write('dummy') self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', dummy_file_name]) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/stoppedComputerPartition']) self.assertEqual('stopped', instance.state) class TestSlapgridCPWithMasterWatchdog(MasterMixin, unittest.TestCase): def setUp(self): MasterMixin.setUp(self) # Prepare watchdog self.watchdog_banged = os.path.join(self._tempdir, 'watchdog_banged') watchdog_path = os.path.join(self._tempdir, 'watchdog') open(watchdog_path, 'w').write(WATCHDOG_TEMPLATE.format( python_path=sys.executable, sys_path=sys.path, watchdog_banged=self.watchdog_banged )) os.chmod(watchdog_path, 0o755) self.grid.watchdog_path = watchdog_path slapos.grid.slapgrid.WATCHDOG_PATH = watchdog_path def test_one_failing_daemon_in_service_will_bang_with_watchdog(self): """ Check that a failing service watched by watchdog trigger bang 1.Prepare computer and set a service named daemon in etc/service (to be watched by watchdog). This daemon will fail. 2.Prepare file for supervisord to call watchdog -Set sys.path -Monkeypatch computer partition bang 3.Check damemon is launched 4.Wait for it to fail 5.Wait for file generated by monkeypacthed bang to appear """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): partition = computer.instance_list[0] partition.requested_state = 'started' partition.software.setBuildout(DAEMON_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(partition.partition_path), ['.slapgrid', '.0_daemon.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) daemon_log = os.path.join(partition.partition_path, '.0_daemon.log') self.assertLogContent(daemon_log, 'Failing') self.assertIsCreated(self.watchdog_banged) self.assertIn('daemon', open(self.watchdog_banged).read()) def test_one_failing_daemon_in_run_will_not_bang_with_watchdog(self): """ Check that a failing service watched by watchdog does not trigger bang 1.Prepare computer and set a service named daemon in etc/run (not watched by watchdog). This daemon will fail. 
2.Prepare file for supervisord to call watchdog -Set sys.path -Monkeypatch computer partition bang 3.Check damemon is launched 4.Wait for it to fail 5.Check that file generated by monkeypacthed bang do not appear """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): partition = computer.instance_list[0] partition.requested_state = 'started' # Content of run wrapper WRAPPER_CONTENT = textwrap.dedent("""#!/bin/sh touch ./launched touch ./crashed echo Failing sleep 1 exit 111 """) BUILDOUT_RUN_CONTENT = textwrap.dedent("""#!/bin/sh mkdir -p etc/run && echo "%s" >> etc/run/daemon && chmod 755 etc/run/daemon && touch worked """ % WRAPPER_CONTENT) partition.software.setBuildout(BUILDOUT_RUN_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(partition.partition_path), ['.slapgrid', '.0_daemon.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) daemon_log = os.path.join(partition.partition_path, '.0_daemon.log') self.assertLogContent(daemon_log, 'Failing') self.assertIsNotCreated(self.watchdog_banged) def test_watched_by_watchdog_bang(self): """ Test that a process going to fatal or exited mode in supervisord is banged if watched by watchdog Certificates used for the bang are also checked (ie: watchdog id in process name) """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] certificate_repository_path = os.path.join(self._tempdir, 'partition_pki') instance.setCertificate(certificate_repository_path) watchdog = Watchdog( master_url='https://127.0.0.1/', computer_id=self.computer_id, certificate_repository_path=certificate_repository_path ) for event in watchdog.process_state_events: instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) def test_unwanted_events_will_not_bang(self): """ Test that a process going to a mode not watched by watchdog in supervisord is not banged if watched by watchdog """ computer = ComputerForTest(self.software_root, self.instance_root) instance = computer.instance_list[0] watchdog = Watchdog( master_url=self.master_url, computer_id=self.computer_id, certificate_repository_path=None ) for event in ['EVENT', 'PROCESS_STATE', 'PROCESS_STATE_RUNNING', 'PROCESS_STATE_BACKOFF', 'PROCESS_STATE_STOPPED']: computer.sequence = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, []) def test_not_watched_by_watchdog_do_not_bang(self): """ Test that a process going to fatal or exited mode in supervisord is not banged if not watched by watchdog (ie: no watchdog id in process name) """ computer = ComputerForTest(self.software_root, self.instance_root) instance = computer.instance_list[0] watchdog = Watchdog( master_url=self.master_url, computer_id=self.computer_id, certificate_repository_path=None ) for event in watchdog.process_state_events: computer.sequence = [] headers = {'eventname': event} payload = "processname:%s groupname:%s from_state:RUNNING"\ % ('daemon', 
instance.name) watchdog.handle_event(headers, payload) self.assertEqual(computer.sequence, []) def test_watchdog_create_bang_file_after_bang(self): """ For a partition that has been successfully deployed (thus .timestamp file existing), check that bang file is created and contains the timestamp of .timestamp file. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] certificate_repository_path = os.path.join(self._tempdir, 'partition_pki') instance.setCertificate(certificate_repository_path) partition = os.path.join(self.instance_root, '0') timestamp_content = '1234' timestamp_file = open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_TIMESTAMP_FILENAME), 'w') timestamp_file.write(timestamp_content) timestamp_file.close() watchdog = Watchdog( master_url='https://127.0.0.1/', computer_id=self.computer_id, certificate_repository_path=certificate_repository_path, instance_root_path=self.instance_root ) event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) self.assertEqual(open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME)).read(), timestamp_content) def test_watchdog_ignore_bang_if_partition_not_deployed(self): """ For a partition that has never been successfully deployed (buildout is failing, promise is not passing, etc), test that bang is ignored. Practically speaking, .timestamp file in the partition does not exsit. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] certificate_repository_path = os.path.join(self._tempdir, 'partition_pki') instance.setCertificate(certificate_repository_path) partition = os.path.join(self.instance_root, '0') timestamp_content = '1234' watchdog = Watchdog( master_url='https://127.0.0.1/', computer_id=self.computer_id, certificate_repository_path=certificate_repository_path, instance_root_path=self.instance_root ) event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) self.assertNotEqual(open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME)).read(), timestamp_content) def test_watchdog_bang_only_once_if_partition_never_deployed(self): """ For a partition that has been never successfully deployed (promises are not passing, etc), test that: * First bang is transmitted * subsequent bangs are ignored until a deployment is successful. 
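    In other words, once a bang has been transmitted, further bangs are dropped
    until a successful deployment writes a new .timestamp file (the same
    mechanism is exercised by
    test_watchdog_bang_only_once_if_timestamp_did_not_change below).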
""" computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] certificate_repository_path = os.path.join(self._tempdir, 'partition_pki') instance.setCertificate(certificate_repository_path) partition = os.path.join(self.instance_root, '0') watchdog = Watchdog( master_url='https://127.0.0.1/', computer_id=self.computer_id, certificate_repository_path=certificate_repository_path, instance_root_path=self.instance_root ) # First bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) # Second bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, []) def test_watchdog_bang_only_once_if_timestamp_did_not_change(self): """ For a partition that has been successfully deployed (promises are passing, etc), test that: * First bang is transmitted * subsequent bangs are ignored until a new deployment is successful. Scenario: * slapgrid successfully deploys a partition * A process crashes, watchdog calls bang * Another deployment (run of slapgrid) is done, but not successful ( promise is failing) * The process crashes again, but watchdog ignores it * Yet another deployment is done, and it is successful * The process crashes again, watchdog calls bang * The process crashes again, watchdog ignroes it """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] certificate_repository_path = os.path.join(self._tempdir, 'partition_pki') instance.setCertificate(certificate_repository_path) partition = os.path.join(self.instance_root, '0') timestamp_content = '1234' timestamp_file = open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_TIMESTAMP_FILENAME), 'w') timestamp_file.write(timestamp_content) timestamp_file.close() watchdog = Watchdog( master_url='https://127.0.0.1/', computer_id=self.computer_id, certificate_repository_path=certificate_repository_path, instance_root_path=self.instance_root ) # First bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) self.assertEqual(open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME)).read(), timestamp_content) # Second bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, []) # Second successful deployment timestamp_content = '12345' timestamp_file = open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_TIMESTAMP_FILENAME), 'w') 
timestamp_file.write(timestamp_content) timestamp_file.close() # Third bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, ['/softwareInstanceBang']) self.assertEqual(open(os.path.join(partition, slapos.grid.slapgrid.COMPUTER_PARTITION_LATEST_BANG_TIMESTAMP_FILENAME)).read(), timestamp_content) # Fourth bang event = watchdog.process_state_events[0] instance.sequence = [] instance.header_list = [] headers = {'eventname': event} payload = 'processname:%s groupname:%s from_state:RUNNING' % ( 'daemon' + WATCHDOG_MARK, instance.name) watchdog.handle_event(headers, payload) self.assertEqual(instance.sequence, []) class TestSlapgridCPPartitionProcessing(MasterMixin, unittest.TestCase): def test_partition_timestamp(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) timestamp_path = os.path.join(instance.partition_path, '.timestamp') self.setSlapgrid() self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertIn(timestamp, open(timestamp_path).read()) self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_partition_timestamp_develop(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(self.launchSlapgrid(develop=True), slapgrid.SLAPGRID_SUCCESS) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition', '/availableComputerPartition', '/stoppedComputerPartition']) def test_partition_old_timestamp(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) 
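############################################################################
# NOTE (editorial sketch, not part of slapos.core): the watchdog tests above
# expect a bang to be sent at most once per deployment. A minimal sketch of
# that behaviour, assuming the partition stores its deployment stamp in
# ".timestamp" and the last reported bang in a companion file; the helper
# names, the '-on-watch' marker and the companion file name are hypothetical
# placeholders, not the actual slapos.grid constants.
import os

def parse_supervisord_payload(payload):
    # payload looks like "processname:daemon groupname:slappart0 from_state:RUNNING"
    return dict(token.split(':', 1) for token in payload.split())

def should_bang_sketch(partition_path,
                       timestamp_name='.timestamp',
                       bang_record_name='.slapos_latest_bang_timestamp'):
    timestamp_path = os.path.join(partition_path, timestamp_name)
    bang_record_path = os.path.join(partition_path, bang_record_name)
    if not os.path.exists(timestamp_path):
        # Never successfully deployed: bang once, then stay silent until a
        # successful deployment finally writes the timestamp file.
        if os.path.exists(bang_record_path):
            return False
        open(bang_record_path, 'w').close()
        return True
    with open(timestamp_path) as f:
        timestamp = f.read()
    if os.path.exists(bang_record_path):
        with open(bang_record_path) as f:
            if f.read() == timestamp:
                return False  # already banged for this deployment
    with open(bang_record_path, 'w') as f:
        f.write(timestamp)
    return True

def handle_event_sketch(headers, payload, partition_path, mark='-on-watch'):
    # Only processes whose supervisord name carries the watchdog mark are
    # considered; everything else is ignored.
    info = parse_supervisord_payload(payload)
    if mark not in info.get('processname', ''):
        return False
    return should_bang_sketch(partition_path)
############################################################################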
self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) instance.timestamp = str(int(timestamp) - 1) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_partition_timestamp_new_timestamp(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) instance.timestamp = str(int(timestamp) + 1) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertEqual(self.launchSlapgrid(), slapgrid.SLAPGRID_SUCCESS) self.assertEqual(computer.sequence, ['/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition', '/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition', '/getHateoasUrl', '/getFullComputerInformation']) def test_partition_timestamp_no_timestamp(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp self.launchSlapgrid() self.assertInstanceDirectoryListEqual(['0']) partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) instance.timestamp = None self.launchSlapgrid() self.assertEqual(computer.sequence, ['/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition', '/getHateoasUrl', '/getFullComputerInformation', '/availableComputerPartition', '/stoppedComputerPartition']) def test_partition_periodicity_remove_timestamp(self): """ Check that if periodicity forces run of buildout for a partition, it removes the .timestamp file. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] timestamp = str(int(time.time())) instance.timestamp = timestamp instance.requested_state = 'started' instance.software.setPeriodicity(1) self.launchSlapgrid() partition = os.path.join(self.instance_root, '0') self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) time.sleep(2) # dummify install() so that it doesn't actually do anything so that it # doesn't recreate .timestamp. 
instance.install = lambda: None self.launchSlapgrid() self.assertItemsEqual(os.listdir(partition), ['.slapgrid', '.timestamp', 'buildout.cfg', 'software_release', 'worked', '.slapos-retention-lock-delay']) def test_one_partition_periodicity_from_file_does_not_disturb_others(self): """ If the time between the last processing of the instance and now is greater than the periodicity, then the instance should be processed 1. We set a wanted maximum_periodicity in the periodicity file in one software release directory and not the other one 2. We process computer partition and check if wanted_periodicity was used as maximum_periodicity 3. We wait for a time longer than wanted_periodicity 4. We launch processComputerPartition and check that the partition using software with periodicity was run and not the other 5. We check that modification time of .timestamp was modified """ computer = ComputerForTest(self.software_root, self.instance_root, 20, 20) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] timestamp = str(int(time.time() - 5)) instance0.timestamp = timestamp instance0.requested_state = 'started' for instance in computer.instance_list[1:]: instance.software = \ computer.software_list[computer.instance_list.index(instance)] instance.timestamp = timestamp wanted_periodicity = 1 instance0.software.setPeriodicity(wanted_periodicity) self.launchSlapgrid() self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) last_runtime = os.path.getmtime( os.path.join(instance0.partition_path, '.timestamp')) time.sleep(wanted_periodicity + 1) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) time.sleep(1) self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/availableComputerPartition', '/startedComputerPartition', '/availableComputerPartition', '/startedComputerPartition', ]) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) self.assertGreater( os.path.getmtime(os.path.join(instance0.partition_path, '.timestamp')), last_runtime) self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) def test_one_partition_stopped_is_not_processed_after_periodicity(self): """ Check that periodicity forces processing a partition even if it is not started.
""" computer = ComputerForTest(self.software_root, self.instance_root, 20, 20) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] timestamp = str(int(time.time() - 5)) instance0.timestamp = timestamp for instance in computer.instance_list[1:]: instance.software = \ computer.software_list[computer.instance_list.index(instance)] instance.timestamp = timestamp wanted_periodicity = 1 instance0.software.setPeriodicity(wanted_periodicity) self.launchSlapgrid() self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) last_runtime = os.path.getmtime( os.path.join(instance0.partition_path, '.timestamp')) time.sleep(wanted_periodicity + 1) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) time.sleep(1) self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/availableComputerPartition', '/stoppedComputerPartition', '/availableComputerPartition', '/stoppedComputerPartition']) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) self.assertNotEqual(os.path.getmtime(os.path.join(instance0.partition_path, '.timestamp')), last_runtime) self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) def test_one_partition_destroyed_is_not_processed_after_periodicity(self): """ Check that periodicity forces processing a partition even if it is not started. """ computer = ComputerForTest(self.software_root, self.instance_root, 20, 20) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] timestamp = str(int(time.time() - 5)) instance0.timestamp = timestamp instance0.requested_state = 'stopped' for instance in computer.instance_list[1:]: instance.software = \ computer.software_list[computer.instance_list.index(instance)] instance.timestamp = timestamp wanted_periodicity = 1 instance0.software.setPeriodicity(wanted_periodicity) self.launchSlapgrid() self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) last_runtime = os.path.getmtime( os.path.join(instance0.partition_path, '.timestamp')) time.sleep(wanted_periodicity + 1) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) time.sleep(1) instance0.requested_state = 'destroyed' self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/availableComputerPartition', '/stoppedComputerPartition', '/stoppedComputerPartition']) for instance in computer.instance_list[1:]: self.assertEqual(instance.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) self.assertNotEqual(os.path.getmtime(os.path.join(instance0.partition_path, '.timestamp')), last_runtime) self.assertNotEqual(wanted_periodicity, self.grid.maximum_periodicity) def test_one_partition_is_never_processed_when_periodicity_is_negative(self): """ Checks that a partition is not processed when its periodicity is negative 1. We setup one instance and set periodicity at -1 2. We mock the install method from slapos.grid.slapgrid.Partition 3. We launch slapgrid once so that .timestamp file is created and check that install method is indeed called (through mocked_method.called 4. 
We launch slapgrid anew and check that install has not been called again """ computer = ComputerForTest(self.software_root, self.instance_root, 1, 1) with httmock.HTTMock(computer.request_handler): timestamp = str(int(time.time())) instance = computer.instance_list[0] instance.software.setPeriodicity(-1) instance.timestamp = timestamp with patch.object(slapos.grid.slapgrid.Partition, 'install', return_value=None) as mock_method: self.launchSlapgrid() self.assertTrue(mock_method.called) self.launchSlapgrid() self.assertEqual(mock_method.call_count, 1) def test_one_partition_is_always_processed_when_periodicity_is_zero(self): """ Checks that a partition is always processed when its periodicity is 0 1. We setup one instance and set periodicity at 0 2. We mock the install method from slapos.grid.slapgrid.Partition 3. We launch slapgrid once so that .timestamp file is created 4. We launch slapgrid anew and check that install has been called twice (one time because of the new setup and one time because of periodicity = 0) """ computer = ComputerForTest(self.software_root, self.instance_root, 1, 1) with httmock.HTTMock(computer.request_handler): timestamp = str(int(time.time())) instance = computer.instance_list[0] instance.software.setPeriodicity(0) instance.timestamp = timestamp with patch.object(slapos.grid.slapgrid.Partition, 'install', return_value=None) as mock_method: self.launchSlapgrid() self.launchSlapgrid() self.assertEqual(mock_method.call_count, 2) def test_one_partition_buildout_fail_does_not_disturb_others(self): """ 1. We set up two instances, one using a corrupted buildout 2. One will fail but the other one will be processed correctly """ computer = ComputerForTest(self.software_root, self.instance_root, 2, 2) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] instance1 = computer.instance_list[1] instance1.software = computer.software_list[1] instance0.software.setBuildout("""#!/bin/sh exit 42""") self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/softwareInstanceError']) self.assertEqual(instance1.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_one_partition_lacking_software_path_does_not_disturb_others(self): """ 1. We set up two instances but remove the software path of one 2. One will fail but the other one will be processed correctly """ computer = ComputerForTest(self.software_root, self.instance_root, 2, 2) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] instance1 = computer.instance_list[1] instance1.software = computer.software_list[1] shutil.rmtree(instance0.software.srdir) self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/softwareInstanceError']) self.assertEqual(instance1.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_one_partition_lacking_software_bin_path_does_not_disturb_others(self): """ 1. We set up two instances but remove the software bin path of one 2.
One will fail but the other one will be processed correctly """ computer = ComputerForTest(self.software_root, self.instance_root, 2, 2) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] instance1 = computer.instance_list[1] instance1.software = computer.software_list[1] shutil.rmtree(instance0.software.srbindir) self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/softwareInstanceError']) self.assertEqual(instance1.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_one_partition_lacking_path_does_not_disturb_others(self): """ 1. We set up two instances but remove path of one 2. One will fail but the other one will be processed correctly """ computer = ComputerForTest(self.software_root, self.instance_root, 2, 2) with httmock.HTTMock(computer.request_handler): instance0 = computer.instance_list[0] instance1 = computer.instance_list[1] instance1.software = computer.software_list[1] shutil.rmtree(instance0.partition_path) self.launchSlapgrid() self.assertEqual(instance0.sequence, ['/softwareInstanceError']) self.assertEqual(instance1.sequence, ['/availableComputerPartition', '/stoppedComputerPartition']) def test_one_partition_buildout_fail_is_correctly_logged(self): """ 1. We set up an instance using a corrupted buildout 2. It will fail, make sure that whole log is sent to master """ computer = ComputerForTest(self.software_root, self.instance_root, 1, 1) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] line1 = "Nerdy kitten: Can I haz a process crash?" line2 = "Cedric: Sure, here it is." instance.software.setBuildout("""#!/bin/sh echo %s; echo %s; exit 42""" % (line1, line2)) self.launchSlapgrid() self.assertEqual(instance.sequence, ['/softwareInstanceError']) # We don't care of actual formatting, we just want to have full log self.assertIn(line1, instance.error_log) self.assertIn(line2, instance.error_log) self.assertIn('Failed to run buildout', instance.error_log) class TestSlapgridUsageReport(MasterMixin, unittest.TestCase): """ Test suite about slapgrid-ur """ def test_slapgrid_destroys_instance_to_be_destroyed(self): """ Test than an instance in "destroyed" state is correctly destroyed """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' instance.software.setBuildout(WRAPPER_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual(instance.state, 'started') # Then destroy the instance computer.sequence = [] instance.requested_state = 'destroyed' self.assertEqual(self.grid.agregateAndSendUsage(), slapgrid.SLAPGRID_SUCCESS) # Assert partition directory is empty self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), []) self.assertItemsEqual(os.listdir(self.software_root), 
[instance.software.software_hash]) # Assert supervisor stopped process wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertIsNotCreated(wrapper_log) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/stoppedComputerPartition', '/destroyedComputerPartition']) self.assertEqual(instance.state, 'destroyed') def test_partition_list_is_complete_if_empty_destroyed_partition(self): """ Test that an empty partition with destroyed state but with SR information is correctly destroyed Axiom: each valid partition has a state and a software_release. Scenario: 1. Simulate computer containing one "destroyed" partition but with valid SR 2. See if it is destroyed """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] computer.sequence = [] instance.requested_state = 'destroyed' self.assertEqual(self.grid.agregateAndSendUsage(), slapgrid.SLAPGRID_SUCCESS) # Assert partition directory is empty self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), []) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) # Assert supervisor stopped process wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertIsNotCreated(wrapper_log) self.assertEqual( computer.sequence, ['/getFullComputerInformation', '/stoppedComputerPartition', '/destroyedComputerPartition']) def test_slapgrid_not_destroy_bad_instance(self): """ Checks that slapgrid-ur doesn't destroy an instance that is not meant to be destroyed. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' instance.software.setBuildout(WRAPPER_CONTENT) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay']) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation', '/availableComputerPartition', '/startedComputerPartition']) self.assertEqual('started', instance.state) # Then run usage report and see if it is still working computer.sequence = [] self.assertEqual(self.grid.agregateAndSendUsage(), slapgrid.SLAPGRID_SUCCESS) # registerComputerPartition will create one more file: from slapos.slap.slap import COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME request_list_file = COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME % instance.name self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay', request_list_file]) wrapper_log = os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), ['.slapgrid', '.0_wrapper.log', 'buildout.cfg', 'etc', 'software_release', 'worked', '.slapos-retention-lock-delay', request_list_file]) wrapper_log =
os.path.join(instance.partition_path, '.0_wrapper.log') self.assertLogContent(wrapper_log, 'Working') self.assertEqual(computer.sequence, ['/getFullComputerInformation']) self.assertEqual('started', instance.state) def test_slapgrid_instance_ignore_free_instance(self): """ Test that a free instance (so in "destroyed" state, but empty, without software_release URI) is ignored by slapgrid-cp. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.software.name = None computer.sequence = [] instance.requested_state = 'destroyed' self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) # Assert partition directory is empty self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), []) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation']) def test_slapgrid_report_ignore_free_instance(self): """ Test that a free instance (so in "destroyed" state, but empty, without software_release URI) is ignored by slapgrid-ur. """ computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.software.name = None computer.sequence = [] instance.requested_state = 'destroyed' self.assertEqual(self.grid.agregateAndSendUsage(), slapgrid.SLAPGRID_SUCCESS) # Assert partition directory is empty self.assertInstanceDirectoryListEqual(['0']) self.assertItemsEqual(os.listdir(instance.partition_path), []) self.assertItemsEqual(os.listdir(self.software_root), [instance.software.software_hash]) self.assertEqual(computer.sequence, ['/getFullComputerInformation']) class TestSlapgridSoftwareRelease(MasterMixin, unittest.TestCase): def test_one_software_buildout_fail_is_correctly_logged(self): """ 1. We set up a software using a corrupted buildout 2. It will fail, make sure that whole log is sent to master """ computer = ComputerForTest(self.software_root, self.instance_root, 1, 1) with httmock.HTTMock(computer.request_handler): software = computer.software_list[0] line1 = "Nerdy kitten: Can I haz a process crash?" line2 = "Cedric: Sure, here it is." software.setBuildout("""#!/bin/sh echo %s; echo %s; exit 42""" % (line1, line2)) self.launchSlapgridSoftware() self.assertEqual(software.sequence, ['/buildingSoftwareRelease', '/softwareReleaseError']) # We don't care of actual formatting, we just want to have full log self.assertIn(line1, software.error_log) self.assertIn(line2, software.error_log) self.assertIn('Failed to run buildout', software.error_log) class SlapgridInitialization(unittest.TestCase): """ "Abstract" class setting setup and teardown for TestSlapgridArgumentTuple and TestSlapgridConfigurationFile. """ def setUp(self): """ Create the minimum default argument and configuration.
""" self.certificate_repository_path = tempfile.mkdtemp() self.fake_file_descriptor = tempfile.NamedTemporaryFile() self.slapos_config_descriptor = tempfile.NamedTemporaryFile() self.slapos_config_descriptor.write(""" [slapos] software_root = /opt/slapgrid instance_root = /srv/slapgrid master_url = https://slap.vifib.com/ computer_id = your computer id buildout = /path/to/buildout/binary """) self.slapos_config_descriptor.seek(0) self.default_arg_tuple = ( '--cert_file', self.fake_file_descriptor.name, '--key_file', self.fake_file_descriptor.name, '--master_ca_file', self.fake_file_descriptor.name, '--certificate_repository_path', self.certificate_repository_path, '-c', self.slapos_config_descriptor.name, '--now') self.signature_key_file_descriptor = tempfile.NamedTemporaryFile() self.signature_key_file_descriptor.seek(0) def tearDown(self): """ Removing the temp file. """ self.fake_file_descriptor.close() self.slapos_config_descriptor.close() self.signature_key_file_descriptor.close() shutil.rmtree(self.certificate_repository_path, True) class TestSlapgridCPWithMasterPromise(MasterMixin, unittest.TestCase): def test_one_failing_promise(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' worked_file = os.path.join(instance.partition_path, 'fail_worked') fail = textwrap.dedent("""\ #!/usr/bin/env sh touch "%s" exit 127""" % worked_file) instance.setPromise('fail', fail) self.assertEqual(self.grid.processComputerPartitionList(), slapos.grid.slapgrid.SLAPGRID_PROMISE_FAIL) self.assertTrue(os.path.isfile(worked_file)) self.assertTrue(instance.error) self.assertNotEqual('started', instance.state) def test_one_succeeding_promise(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' self.fake_waiting_time = 0.1 worked_file = os.path.join(instance.partition_path, 'succeed_worked') succeed = textwrap.dedent("""\ #!/usr/bin/env sh touch "%s" exit 0""" % worked_file) instance.setPromise('succeed', succeed) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.isfile(worked_file)) self.assertFalse(instance.error) self.assertEqual(instance.state, 'started') def test_stderr_has_been_sent(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' self.fake_waiting_time = 0.5 promise_path = os.path.join(instance.partition_path, 'etc', 'promise') os.makedirs(promise_path) succeed = os.path.join(promise_path, 'stderr_writer') worked_file = os.path.join(instance.partition_path, 'stderr_worked') with open(succeed, 'w') as f: f.write(textwrap.dedent("""\ #!/usr/bin/env sh touch "%s" echo Error 1>&2 exit 127""" % worked_file)) os.chmod(succeed, 0o777) self.assertEqual(self.grid.processComputerPartitionList(), slapos.grid.slapgrid.SLAPGRID_PROMISE_FAIL) self.assertTrue(os.path.isfile(worked_file)) self.assertEqual(instance.error_log[-5:], 'Error') self.assertTrue(instance.error) self.assertIsNone(instance.state) def test_timeout_works(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' 
self.fake_waiting_time = 0.1 promise_path = os.path.join(instance.partition_path, 'etc', 'promise') os.makedirs(promise_path) succeed = os.path.join(promise_path, 'timed_out_promise') worked_file = os.path.join(instance.partition_path, 'timed_out_worked') with open(succeed, 'w') as f: f.write(textwrap.dedent("""\ #!/usr/bin/env sh touch "%s" sleep 5 exit 0""" % worked_file)) os.chmod(succeed, 0o777) self.assertEqual(self.grid.processComputerPartitionList(), slapos.grid.slapgrid.SLAPGRID_PROMISE_FAIL) self.assertTrue(os.path.isfile(worked_file)) self.assertTrue(instance.error) self.assertIsNone(instance.state) def test_two_succeeding_promises(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' self.fake_waiting_time = 0.1 for i in range(2): worked_file = os.path.join(instance.partition_path, 'succeed_%s_worked' % i) succeed = textwrap.dedent("""\ #!/usr/bin/env sh touch "%s" exit 0""" % worked_file) instance.setPromise('succeed_%s' % i, succeed) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) for i in range(2): worked_file = os.path.join(instance.partition_path, 'succeed_%s_worked' % i) self.assertTrue(os.path.isfile(worked_file)) self.assertFalse(instance.error) self.assertEqual(instance.state, 'started') def test_one_succeeding_one_failing_promises(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' self.fake_waiting_time = 0.1 for i in range(2): worked_file = os.path.join(instance.partition_path, 'promise_worked_%d' % i) lockfile = os.path.join(instance.partition_path, 'lock') promise = textwrap.dedent("""\ #!/usr/bin/env sh touch "%(worked_file)s" if [ ! -f %(lockfile)s ] then touch "%(lockfile)s" exit 0 else exit 127 fi""" % { 'worked_file': worked_file, 'lockfile': lockfile }) instance.setPromise('promise_%s' % i, promise) self.assertEqual(self.grid.processComputerPartitionList(), slapos.grid.slapgrid.SLAPGRID_PROMISE_FAIL) self.assertEquals(instance.error, 1) self.assertNotEqual('started', instance.state) def test_one_succeeding_one_timing_out_promises(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' self.fake_waiting_time = 0.1 for i in range(2): worked_file = os.path.join(instance.partition_path, 'promise_worked_%d' % i) lockfile = os.path.join(instance.partition_path, 'lock') promise = textwrap.dedent("""\ #!/usr/bin/env sh touch "%(worked_file)s" if [ ! -f %(lockfile)s ] then touch "%(lockfile)s" else sleep 5 fi exit 0""" % { 'worked_file': worked_file, 'lockfile': lockfile} ) instance.setPromise('promise_%d' % i, promise) self.assertEqual(self.grid.processComputerPartitionList(), slapos.grid.slapgrid.SLAPGRID_PROMISE_FAIL) self.assertEquals(instance.error, 1) self.assertNotEqual(instance.state, 'started') class TestSlapgridDestructionLock(MasterMixin, unittest.TestCase): def test_retention_lock(self): """ Higher level test about actual retention (or no-retention) of instance if specifying a retention lock delay. 
""" computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.requested_state = 'started' instance.filter_dict = {'retention_delay': 1.0 / (3600 * 24)} self.grid.processComputerPartitionList() dummy_instance_file_path = os.path.join(instance.partition_path, 'dummy') with open(dummy_instance_file_path, 'w') as dummy_instance_file: dummy_instance_file.write('dummy') self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.retention_lock_delay_filename ))) instance.requested_state = 'destroyed' self.grid.agregateAndSendUsage() self.assertTrue(os.path.exists(dummy_instance_file_path)) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.retention_lock_date_filename ))) self.grid.agregateAndSendUsage() self.assertTrue(os.path.exists(dummy_instance_file_path)) time.sleep(1) self.grid.agregateAndSendUsage() self.assertFalse(os.path.exists(dummy_instance_file_path)) class TestSlapgridCPWithFirewall(MasterMixin, unittest.TestCase): def setFirewallConfig(self, source_ip=""): self.firewall_cmd_add = os.path.join(self._tempdir, 'firewall_cmd_add') with open(self.firewall_cmd_add, 'w') as f: f.write("""#!/bin/sh var="$*" R=$(echo $var | grep "query-rule") > /dev/null if [ $? -eq 0 ]; then echo "no" exit 0 fi R=$(echo $var | grep "add-rule") if [ $? -eq 0 ]; then echo "success" exit 0 fi echo "ERROR: $var" exit 1 """) self.firewall_cmd_remove = os.path.join(self._tempdir, 'firewall_cmd_remove') with open(self.firewall_cmd_remove, 'w') as f: f.write("""#!/bin/sh var="$*" R=$(echo $var | grep "query-rule") if [ $? -eq 0 ]; then echo "yes" exit 0 fi R=$(echo $var | grep "remove-rule") if [ $? 
-eq 0 ]; then echo "success" exit 0 fi echo "ERROR: $var" exit 1 """) os.chmod(self.firewall_cmd_add, 0755) os.chmod(self.firewall_cmd_remove, 0755) firewall_conf= dict( authorized_sources=source_ip, firewall_cmd=self.firewall_cmd_add, firewall_executable='/bin/echo "service firewall started"', reload_config_cmd='/bin/echo "Config reloaded."', log_file='fw-log.log', testing=True, ) self.grid.firewall_conf = firewall_conf def checkRuleFromIpSource(self, ip, accept_ip_list, cmd_list): # XXX - rules for one ip contain 2*len(ip_address_list + accept_ip_list) rules ACCEPT and 4 rules REJECT num_rules = len(self.ip_address_list) * 2 + len(accept_ip_list) * 2 + 4 self.assertEqual(len(cmd_list), num_rules) base_cmd = '--permanent --direct --add-rule ipv4 filter' # Check that there is REJECT rule on INPUT rule = '%s INPUT 1000 -d %s -j REJECT' % (base_cmd, ip) self.assertIn(rule, cmd_list) # Check that there is REJECT rule on FORWARD rule = '%s FORWARD 1000 -d %s -j REJECT' % (base_cmd, ip) self.assertIn(rule, cmd_list) # Check that there is REJECT rule on INPUT, ESTABLISHED,RELATED rule = '%s INPUT 900 -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % (base_cmd, ip) self.assertIn(rule, cmd_list) # Check that there is REJECT rule on FORWARD, ESTABLISHED,RELATED rule = '%s FORWARD 900 -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % (base_cmd, ip) self.assertIn(rule, cmd_list) # Check that there is INPUT ACCEPT on ip_list for _, other_ip in self.ip_address_list: rule = '%s INPUT 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s FORWARD 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) # Check that there is FORWARD ACCEPT on ip_list for other_ip in accept_ip_list: rule = '%s INPUT 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s FORWARD 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) def checkRuleFromIpSourceReject(self, ip, reject_ip_list, cmd_list): # XXX - rules for one ip contain 2 + 2*len(ip_address_list) rules ACCEPT and 4*len(reject_ip_list) rules REJECT num_rules = (len(self.ip_address_list) * 2) + (len(reject_ip_list) * 4) self.assertEqual(len(cmd_list), num_rules) base_cmd = '--permanent --direct --add-rule ipv4 filter' # Check that there is ACCEPT rule on INPUT #rule = '%s INPUT 0 -d %s -j ACCEPT' % (base_cmd, ip) #self.assertIn(rule, cmd_list) # Check that there is ACCEPT rule on FORWARD #rule = '%s FORWARD 0 -d %s -j ACCEPT' % (base_cmd, ip) #self.assertIn(rule, cmd_list) # Check that there is INPUT/FORWARD ACCEPT on ip_list for _, other_ip in self.ip_address_list: rule = '%s INPUT 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s FORWARD 0 -s %s -d %s -j ACCEPT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) # Check that there is INPUT/FORWARD REJECT on ip_list for other_ip in reject_ip_list: rule = '%s INPUT 900 -s %s -d %s -j REJECT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s FORWARD 900 -s %s -d %s -j REJECT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s INPUT 800 -s %s -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) rule = '%s FORWARD 800 -s %s -d %s -m state --state ESTABLISHED,RELATED -j REJECT' % (base_cmd, other_ip, ip) self.assertIn(rule, cmd_list) def test_getFirewallRules(self): computer = ComputerForTest(self.software_root, self.instance_root) 
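############################################################################
# NOTE (editorial sketch, not part of slapos.grid): checkRuleFromIpSource()
# above encodes the shape of the firewalld "direct" rules expected for a
# restricted partition address: ACCEPT entries for each authorized source on
# INPUT and FORWARD, then catch-all REJECT entries. This helper only rebuilds
# those strings for illustration; its name is hypothetical.
def build_accept_rules_sketch(ip, authorized_source_list, ip_type='ipv4'):
    base_cmd = '--permanent --direct --add-rule %s filter' % ip_type
    rule_list = []
    for source_ip in authorized_source_list:
        for chain in ('INPUT', 'FORWARD'):
            rule_list.append('%s %s 0 -s %s -d %s -j ACCEPT'
                             % (base_cmd, chain, source_ip, ip))
    for chain in ('INPUT', 'FORWARD'):
        rule_list.append('%s %s 900 -d %s -m state --state ESTABLISHED,RELATED'
                         ' -j REJECT' % (base_cmd, chain, ip))
        rule_list.append('%s %s 1000 -d %s -j REJECT' % (base_cmd, chain, ip))
    return rule_list

# Example: rules protecting 10.0.8.3 while only 10.32.0.15 may reach it.
# build_accept_rules_sketch('10.0.8.3', ['10.32.0.15'])
############################################################################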
self.setFirewallConfig() self.ip_address_list = computer.ip_address_list ip = computer.instance_list[0].full_ip_list[0][1] source_ip_list = ['10.32.0.15', '10.32.0.0/8'] cmd_list = self.grid._getFirewallAcceptRules(ip, [elt[1] for elt in self.ip_address_list], source_ip_list, ip_type='ipv4') self.checkRuleFromIpSource(ip, source_ip_list, cmd_list) cmd_list = self.grid._getFirewallRejectRules(ip, [elt[1] for elt in self.ip_address_list], source_ip_list, ip_type='ipv4') self.checkRuleFromIpSourceReject(ip, source_ip_list, cmd_list) def test_checkAddFirewallRules(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() # For simulate query rule success self.grid.firewall_conf['firewall_cmd'] = self.firewall_cmd_add self.ip_address_list = computer.ip_address_list instance = computer.instance_list[0] ip = instance.full_ip_list[0][1] name = computer.instance_list[0].name cmd_list = self.grid._getFirewallAcceptRules(ip, [elt[1] for elt in self.ip_address_list], [], ip_type='ipv4') self.grid._checkAddFirewallRules(name, cmd_list, add=True) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSource(ip, [], rules_list) # Remove all rules self.grid.firewall_conf['firewall_cmd'] = self.firewall_cmd_remove self.grid._checkAddFirewallRules(name, cmd_list, add=False) with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.assertEqual(rules_list, []) # Add one more ip in the authorized list self.grid.firewall_conf['firewall_cmd'] = self.firewall_cmd_add self.ip_address_list.append(('interface1', '10.0.8.7')) cmd_list = self.grid._getFirewallAcceptRules(ip, [elt[1] for elt in self.ip_address_list], [], ip_type='ipv4') self.grid._checkAddFirewallRules(name, cmd_list, add=True) with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSource(ip, [], rules_list) def test_partition_no_firewall(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertFalse(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) def test_partition_firewall_restrict(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) self.ip_address_list = computer.ip_address_list with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) ip = instance.full_ip_list[0][1] self.checkRuleFromIpSource(ip, [], rules_list) def test_partition_firewall(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.filter_dict = {'fw_restricted_access': 'off'} self.assertEqual(self.grid.processComputerPartitionList(), 
slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) self.ip_address_list = computer.ip_address_list with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) ip = instance.full_ip_list[0][1] self.checkRuleFromIpSourceReject(ip, [], rules_list) @unittest.skip('Always fail: instance.filter_dict can\'t change') def test_partition_firewall_restricted_access_change(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.filter_dict = {'fw_restricted_access': 'off', 'fw_rejected_sources': '10.0.8.11'} self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) self.ip_address_list = computer.ip_address_list with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) ip = instance.full_ip_list[0][1] self.checkRuleFromIpSourceReject(ip, ['10.0.8.11'], rules_list) # For remove rules self.grid.firewall_conf['firewall_cmd'] = self.firewall_cmd_remove instance.setFilterParameter({'fw_restricted_access': 'on', 'fw_authorized_sources': ''}) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSource(ip, [], rules_list) def test_partition_firewall_ipsource_accept(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() source_ip = ['10.0.8.10', '10.0.8.11'] self.grid.firewall_conf['authorized_sources'] = [source_ip[0]] with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.filter_dict = {'fw_restricted_access': 'on', 'fw_authorized_sources': source_ip[1]} self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) rules_list= [] self.ip_address_list = computer.ip_address_list ip = instance.full_ip_list[0][1] base_cmd = '--permanent --direct --add-rule ipv4 filter' with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) for thier_ip in source_ip: rule_input = '%s INPUT 0 -s %s -d %s -j ACCEPT' % (base_cmd, thier_ip, ip) self.assertIn(rule_input, rules_list) rule_fwd = '%s FORWARD 0 -s %s -d %s -j ACCEPT' % (base_cmd, thier_ip, ip) self.assertIn(rule_fwd, rules_list) self.checkRuleFromIpSource(ip, source_ip, rules_list) def test_partition_firewall_ipsource_reject(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() source_ip = '10.0.8.10' self.grid.firewall_conf['authorized_sources'] = ['10.0.8.15'] with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.filter_dict = {'fw_rejected_sources': source_ip, 'fw_restricted_access': 'off'} 
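############################################################################
# NOTE (editorial sketch, not part of slapos.grid): the firewall tests above
# drive fake firewall_cmd scripts that answer "query-rule" with yes/no and
# accept "add-rule"/"remove-rule", and they read the applied rules back from
# a JSON file inside the partition. A rough sketch of that flow, assuming the
# stored strings are full "--permanent --direct --add-rule ..." argument
# lists; the function name and the JSON file handling are hypothetical.
import json
import subprocess

def apply_rules_sketch(firewall_cmd, rule_list, rules_json_path):
    applied_list = []
    for rule in rule_list:
        # Turn the add-rule string into the corresponding existence query.
        query_args = rule.replace('--add-rule', '--query-rule').split()
        query = subprocess.Popen([firewall_cmd] + query_args,
                                 stdout=subprocess.PIPE)
        answer = query.communicate()[0].strip()
        if answer not in ('yes', b'yes'):
            subprocess.check_call([firewall_cmd] + rule.split())
        applied_list.append(rule)
    # Persist what was applied so a later run can compare or remove rules.
    with open(rules_json_path, 'w') as f:
        json.dump(applied_list, f)
    return applied_list
############################################################################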
self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) rules_list= [] self.ip_address_list = computer.ip_address_list self.ip_address_list.append(('iface', '10.0.8.15')) ip = instance.full_ip_list[0][1] base_cmd = '--permanent --direct --add-rule ipv4 filter' with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSourceReject(ip, source_ip.split(' '), rules_list) def test_partition_firewall_ip_change(self): computer = ComputerForTest(self.software_root, self.instance_root) self.setFirewallConfig() source_ip = ['10.0.8.10', '10.0.8.11'] self.grid.firewall_conf['authorized_sources'] = [source_ip[0]] with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] instance.filter_dict = {'fw_restricted_access': 'on', 'fw_authorized_sources': source_ip[1]} self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertTrue(os.path.exists(os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ))) rules_path = os.path.join( instance.partition_path, slapos.grid.SlapObject.Partition.partition_firewall_rules_name ) rules_list= [] self.ip_address_list = computer.ip_address_list ip = instance.full_ip_list[0][1] with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSource(ip, source_ip, rules_list) instance = computer.instance_list[0] # XXX -- removed #instance.filter_dict = {'fw_restricted_access': 'on', # 'fw_authorized_sources': source_ip[0]} # For simulate query rule exist self.grid.firewall_conf['firewall_cmd'] = self.firewall_cmd_remove self.grid.firewall_conf['authorized_sources'] = [] computer.ip_address_list.append(('route_interface1', '10.10.8.4')) self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.ip_address_list = computer.ip_address_list with open(rules_path, 'r') as frules: rules_list = json.loads(frules.read()) self.checkRuleFromIpSource(ip, [source_ip[1]], rules_list) class TestSlapgridCPWithTransaction(MasterMixin, unittest.TestCase): def test_one_partition(self): computer = ComputerForTest(self.software_root, self.instance_root) with httmock.HTTMock(computer.request_handler): instance = computer.instance_list[0] partition = os.path.join(self.instance_root, '0') request_list_file = os.path.join(partition, COMPUTER_PARTITION_REQUEST_LIST_TEMPLATE_FILENAME % instance.name) with open(request_list_file, 'w') as f: f.write('some partition') self.assertEqual(self.grid.processComputerPartitionList(), slapgrid.SLAPGRID_SUCCESS) self.assertInstanceDirectoryListEqual(['0']) self.assertFalse(os.path.exists(request_list_file)) slapos.core-1.3.18/slapos/tests/client.py0000644000000000000000000000743012752436135020301 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010 Vifib SARL and Contributors. All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import logging import unittest import slapos.slap import slapos.client class TestClient(unittest.TestCase): def setUp(self): self.called_software_product = None class FakeSoftwareProductCollection(object): def __init__(inner_self, *args, **kw_args): inner_self.__getattr__ = inner_self.get def get(inner_self, software_product): self.called_software_product = software_product return self.software_product_reference self.slap = slapos.slap.slap() self.product_collection = FakeSoftwareProductCollection( logging.getLogger(), self.slap) def test_getSoftwareReleaseFromSoftwareString_softwareProduct(self): """ Test that if given software is a Sofwtare Product (i.e matching the magic string), it returns the corresponding value of a call to SoftwareProductCollection. """ self.software_product_reference = 'foo' software_string = '%s%s' % ( slapos.client.SOFTWARE_PRODUCT_NAMESPACE, self.software_product_reference ) slapos.client._getSoftwareReleaseFromSoftwareString( logging.getLogger(), software_string, self.product_collection) self.assertEqual( self.called_software_product, self.software_product_reference ) def test_getSoftwareReleaseFromSoftwareString_softwareProduct_emptySoftwareProduct(self): """ Test that if given software is a Software Product (i.e matching the magic string), but this software product is empty, it exits. """ self.software_product_reference = 'foo' software_string = '%s%s' % ( slapos.client.SOFTWARE_PRODUCT_NAMESPACE, self.software_product_reference ) def fake_get(software_product): raise AttributeError() self.product_collection.__getattr__ = fake_get self.assertRaises( SystemExit, slapos.client._getSoftwareReleaseFromSoftwareString, logging.getLogger(), software_string, self.product_collection ) def test_getSoftwareReleaseFromSoftwareString_softwareRelease(self): """ Test that if given software is a simple Software Release URL (not matching the magic string), it is just returned without modification. 
""" software_string = 'foo' returned_value = slapos.client._getSoftwareReleaseFromSoftwareString( logging.getLogger(), software_string, self.product_collection) self.assertEqual( self.called_software_product, None ) self.assertEqual( returned_value, software_string ) slapos.core-1.3.18/slapos/tests/pyflakes/0000755000000000000000000000000013006632706020256 5ustar rootroot00000000000000slapos.core-1.3.18/slapos/tests/pyflakes/__init__.py0000644000000000000000000000424212752436135022376 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import os import pkg_resources import pyflakes.scripts.pyflakes import sys import unittest class CheckCodeConsistency(unittest.TestCase): """Lints all SlapOS Node and SLAP library code base.""" def setUp(self): self._original_argv = sys.argv sys.argv = [sys.argv[0], os.path.join( pkg_resources.get_distribution('slapos.core').location, 'slapos', ) ] def tearDown(self): sys.argv = self._original_argv @unittest.skip('pyflakes test is disabled') def testCodeConsistency(self): if pyflakes.scripts.pyflakes.main.func_code.co_argcount: pyflakes.scripts.pyflakes.main([ os.path.join( pkg_resources.get_distribution('slapos.core').location, 'slapos', )]) else: pyflakes.scripts.pyflakes.main() slapos.core-1.3.18/slapos/tests/util.py0000644000000000000000000001220412752436135017773 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2013 Vifib SARL and Contributors. All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # # ############################################################################## import os import slapos.util from slapos.util import string_to_boolean import tempfile import unittest import shutil from pwd import getpwnam class TestUtil(unittest.TestCase): """ Tests methods available in the slapos.util module. """ def test_mkdir_p_new_directory(self): """ Test that mkdir_p recursively creates a directory. """ root_directory = tempfile.mkdtemp() wanted_directory = os.path.join(root_directory, 'foo', 'bar') slapos.util.mkdir_p(wanted_directory) self.assertTrue(os.path.isdir(wanted_directory)) shutil.rmtree(root_directory) def test_mkdir_already_existing(self): """ Check that mkdir_p doesn't raise if the directory already exists. """ root_directory = tempfile.mkdtemp() slapos.util.mkdir_p(root_directory) self.assertTrue(os.path.isdir(root_directory)) shutil.rmtree(root_directory) def test_chown_directory(self): """ Test that slapos.util.chownDirectory correctly changes owner. Note: requires root privileges. """ root_slaptest = tempfile.mkdtemp() wanted_directory0 = os.path.join(root_slaptest, 'slap-write0') wanted_directory1 = os.path.join(root_slaptest, 'slap-write0', 'write-slap1') wanted_directory2 = os.path.join(root_slaptest, 'slap-write0', 'write-slap1', 'write-teste2') os.makedirs(wanted_directory0, mode=0777) os.makedirs(wanted_directory1, mode=0777) os.makedirs(wanted_directory2, mode=0777) create_file_txt = tempfile.mkstemp(suffix='.txt', prefix='tmp', dir=wanted_directory2, text=True) user = 'nobody' try: uid = getpwnam(user)[2] gid = getpwnam(user)[3] except KeyError: raise unittest.SkipTest("user %s doesn't exist." % user) if os.getuid() != 0: raise unittest.SkipTest("No root privileges, impossible to chown.") slapos.util.chownDirectory(root_slaptest, uid, gid) uid_check_root_slaptest = os.stat(root_slaptest)[4] gid_check_root_slaptest = os.stat(root_slaptest)[5] self.assertEqual(uid, uid_check_root_slaptest) self.assertEqual(gid, gid_check_root_slaptest) uid_check_wanted_directory0 = os.stat(wanted_directory0)[4] gid_check_wanted_directory0 = os.stat(wanted_directory0)[5] self.assertEqual(uid, uid_check_wanted_directory0) self.assertEqual(gid, gid_check_wanted_directory0) uid_check_wanted_directory1 = os.stat(wanted_directory1)[4] gid_check_wanted_directory1 = os.stat(wanted_directory1)[5] self.assertEqual(uid, uid_check_wanted_directory1) self.assertEqual(gid, gid_check_wanted_directory1) uid_check_wanted_directory2 = os.stat(wanted_directory2)[4] gid_check_wanted_directory2 = os.stat(wanted_directory2)[5] self.assertEqual(uid, uid_check_wanted_directory2) self.assertEqual(gid, gid_check_wanted_directory2) uid_check_file_txt = os.stat(create_file_txt[1])[4] gid_check_file_txt = os.stat(create_file_txt[1])[5] self.assertEqual(uid, uid_check_file_txt) self.assertEqual(gid, gid_check_file_txt) shutil.rmtree(root_slaptest) def test_string_to_boolean_with_true_values(self): """ Check that string_to_boolean returns True for accepted true values. 
""" for value in ['true', 'True', 'TRUE']: self.assertTrue(string_to_boolean(value)) def test_string_to_boolean_with_false_values(self): """ Check that string_to_boolean returns False for accepted false values. """ for value in ['false', 'False', 'FALSE']: self.assertFalse(string_to_boolean(value)) def test_string_to_boolean_with_incorrect_values(self): """ Check that string_to_boolean raises ValueError for any other value. """ for value in [True, False, 1, '1', 't', 'tru', 'truelle', 'f', 'fals', 'falsey']: self.assertRaises(ValueError, string_to_boolean, value) if __name__ == '__main__': unittest.main() slapos.core-1.3.18/slapos/README.format.txt0000644000000000000000000000105212752436134020265 0ustar rootroot00000000000000format ====== slapformat is an application to prepare a SlapOS-ready node (machine). It "formats" the machine by: - creating users and groups - creating bridge interface - creating needed tap interfaces - creating needed directories with proper ownership and permissions At the end, a special report is generated and the information is posted to the configured SlapOS server. This program shall only be run by root. Requirements ------------ Linux with IPv6, bridging and tap interface support. Binaries: * brctl * groupadd * ip * tunctl * useradd slapos.core-1.3.18/slapos/client.py0000644000000000000000000001210012752436134017124 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010, 2011, 2012 Vifib SARL and Contributors. # All Rights Reserved. # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 3 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import atexit import ConfigParser import os import sys import slapos.slap.slap from slapos.slap import SoftwareProductCollection SOFTWARE_PRODUCT_NAMESPACE = "product." class ClientConfig(object): state = None def __init__(self, args, configp=None): # XXX configp cannot possibly be optional """ Set options given by parameters, then merge in values from the [slapconsole] and [slapos] sections of the configuration file; explicit parameters take precedence. 
""" # Set options parameters for key, value in args.__dict__.items(): setattr(self, key, value) # Merges the arguments and configuration try: configuration_dict = dict(configp.items('slapconsole')) except ConfigParser.NoSectionError: pass else: for key in configuration_dict: if not getattr(self, key, None): setattr(self, key, configuration_dict[key]) configuration_dict = dict(configp.items('slapos')) master_url = configuration_dict.get('master_url', None) # Backward compatibility, if no key and certificate given in option # take one from slapos configuration if not getattr(self, 'key_file', None) and \ not getattr(self, 'cert_file', None): self.key_file = configuration_dict.get('key_file') self.cert_file = configuration_dict.get('cert_file') if not master_url: raise ValueError("No option 'master_url'") elif master_url.startswith('https') and \ self.key_file is None and \ self.cert_file is None: raise ValueError("No option 'key_file' and/or 'cert_file'") else: self.master_url = master_url self.master_rest_url = configuration_dict.get('master_rest_url') if self.key_file: self.key_file = os.path.expanduser(self.key_file) if self.cert_file: self.cert_file = os.path.expanduser(self.cert_file) def init(conf, logger): """Initialize Slap instance, connect to server and create aliases to common software releases""" # XXX check certificate and key existence slap = slapos.slap.slap() slap.initializeConnection(conf.master_url, key_file=conf.key_file, cert_file=conf.cert_file, slapgrid_rest_uri=conf.master_rest_url) local = globals().copy() local['slap'] = slap # Create global shortcut functions to request instance and software def shorthandRequest(*args, **kwargs): return slap.registerOpenOrder().request(*args, **kwargs) def shorthandSupply(*args, **kwargs): # XXX-Cedric Implement computer_group support return slap.registerSupply().supply(*args, **kwargs) local['request'] = shorthandRequest local['supply'] = shorthandSupply local['product'] = SoftwareProductCollection(logger, slap) return local def _getSoftwareReleaseFromSoftwareString(logger, software_string, product): """ If Software string is a product: Return the best Software Release URL of the Software Product "X" of the string "product.X". Else, return as is. """ if not software_string.startswith(SOFTWARE_PRODUCT_NAMESPACE): return software_string try: return product.__getattr__(software_string[len(SOFTWARE_PRODUCT_NAMESPACE):]) except AttributeError as e: logger.error('Error: %s Exiting now.' % e.message) sys.exit(1) def do_console(local): # try to enable readline with completion and history try: import readline except ImportError: pass else: try: import rlcompleter readline.set_completer(rlcompleter.Completer(local).complete) except ImportError: pass readline.parse_and_bind("tab: complete") historyPath = os.path.expanduser("~/.slapconsolehistory") def save_history(historyPath=historyPath): readline.write_history_file(historyPath) if os.path.exists(historyPath): readline.read_history_file(historyPath) atexit.register(save_history) __import__("code").interact(banner="", local=local) slapos.core-1.3.18/slapos/util.py0000644000000000000000000000642412752436135016640 0ustar rootroot00000000000000# -*- coding: utf-8 -*- ############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly adviced to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## import errno import os import subprocess import sqlite3 def mkdir_p(path, mode=0o700): """\ Creates a directory and its parents, if needed. NB: If the directory already exists, it does not change its permission. """ try: os.makedirs(path, mode) except OSError as exc: if exc.errno == errno.EEXIST and os.path.isdir(path): pass else: raise def chownDirectory(path, uid, gid): if os.getuid() != 0: # we are probably inside of a webrunner return # find /opt/slapgrid -not -user 1000 -exec chown slapsoft:slapsoft {} \; subprocess.check_call([ '/usr/bin/find', path, '-not', '-user', str(uid), '-exec', '/bin/chown', '%s:%s' % (uid, gid), '{}', ';' ]) def parse_certificate_key_pair(html): """ Extract (certificate, key) pair from an HTML page received by SlapOS Master. """ c_start = html.find("Certificate:") c_end = html.find("", c_start) certificate = html[c_start:c_end] k_start = html.find("-----BEGIN PRIVATE KEY-----") k_end = html.find("", k_start) key = html[k_start:k_end] return certificate, key def string_to_boolean(string): """ Return True if the value of the "string" parameter can be parsed as True. Return False if the value of the "string" parameter can be parsed as False. Otherwise, Raise. The parser is completely arbitrary, see code for actual implementation. """ if not isinstance(string, str) and not isinstance(string, unicode): raise ValueError('Given value is not a string.') acceptable_true_values = ['true'] acceptable_false_values = ['false'] string = string.lower() if string in acceptable_true_values: return True if string in acceptable_false_values: return False else: raise ValueError('%s is neither True nor False.' % string) def sqlite_connect(dburi): conn = sqlite3.connect(dburi) conn.text_factory = str # allow 8-bit strings return conn slapos.core-1.3.18/slapos/version.py0000644000000000000000000000251513006630154017333 0ustar rootroot00000000000000############################################################################## # # Copyright (c) 2010-2014 Vifib SARL and Contributors. # All Rights Reserved. 
# # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public License # as published by the Free Software Foundation; either version 2.1 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # ############################################################################## version = '1.3.18' slapos.core-1.3.18/CHANGES.txt0000644000000000000000000006064613006632432015616 0ustar rootroot00000000000000Changes ======= 1.3.18 (2016-11-03) ------------------- * update default web url of master to slapos.vifib.com 1.3.17 (2016-10-25) ------------------- * slapos.grid: Always remove .timestamp and .slapgrid if partition is destroyed. * slapos.proxy: Propagate parent partition state to children * slapos.grid: Increase min space (1G) * slapos.grid: Save slapgrid state into the partition * slapos.format: Remove passwd call while format. * svcbackend: explicitly call the executable instead of using Popen 'executable' keyword. * slapos.grid: Introduce new garbage collector for instances ignored by buildout 1.3.16 (2016-09-29) ------------------- * slapos.format: Include disk usage report. Do not divide cpu_load by number of cpu cores. * slapos.format: set login shell for slapuser and lock login by password * slapos.slap: Do not post same connection parameters of slaves. * slapos.proxy: allow to update software release of partition 1.3.15 (2015-12-08) ------------------- * slapos.collect: Include disk usage report. Do not divide cpu_load by number of cpu cores. 1.3.14 (2015-10-27) ------------------- * slapos.grid: fix firewall bugs 1.3.13 (2015-10-26) ------------------- * slapos.grid: firewall: accept an option to specify the list of IP addresses/networks to accept and reject. 1.3.12 (2015-10-15) ------------------- * slapos.grid: add support for firewall configuration using firewalld for partitions that use a tap+route interface (for kvm cluster). 1.3.11 (2015-09-25) ------------------- * slapos.grid: support shacache-ca-file and shadir-ca-file options. 1.3.10 (2015-04-28) ------------------- 1.3.9 (2015-02-20) ------------------ * slapos.format: allow to format additional list of folders for each partition to use as data storage location. * slapos.format: allow to create tap without bridge (when using option create_tap and tap_gateway_interface), configure ip route with generated ipv4 for tap to access guest vm from host machine. * slapos.grid: update generated buildout file with information to access partition data storage folder. 1.3.8 (2015-02-04) ------------------ * slapos proxy: allow to specify/override host/port from command line. 
1.3.7 (2015-01-30) ------------------ * slapos.grid: Don't try to process partition if software_release_url is None. Removes noisy errors in log. * slapos node report: retry several time when removing processes from supervisor. 1.3.6.3 (2015-01-23) -------------------- * slapos: make forbid_supervisord_automatic_launch generic. 1.3.6.2 (2015-01-22) -------------------- * slapos.grid.svcbackend: check if watchdog is started before restarting. 1.3.6.1 (2015-01-19) -------------------- * slapos: allow to use supervisorctl without automatically starting supervisord. * slapos: Create supervisor configuration when running CLI. 1.3.6 (2015-01-16) ------------------ * supervisord: allow to start with --nodaemon. * rename : zc.buildout-bootstap.py -> zc.buildout-bootstrap.py. * update bootstrap.py. * slapproxy: add missing getComputerPartitionCertificate method * slapos boot: fix error reporting when ipv6 is not available 1.3.5 (2014-12-03) ------------------ * slapos.grid: do not ALWAYS sleep for promise_timeout. Instead, poll often, and continue if promise finished. This change allows a two-folds speed improvement in processing partitions. * slapos.format: don't chown recursively Software Releases. * slapos.util: use find to chown in chownDirectory. 1.3.4 (2014-11-26) ------------------ * slapos.slap hateoas: get 'me' document with no cache. * slapos.grid: report: fix unbound 'destroyed' variable. * slapos.slap: fix __getattr__ of product collection so that product.foo works. * slapos.cli info/list: use raw print instead of logger. 1.3.3 (2014-11-18) ------------------ * slapos.slap/slapos.proxy: Fix regression: requests library ignores empty parameters. * slapos.proxy: fix slave support (again) 1.3.2 (2014-11-14) ------------------ * slapos.slap: parse ipv6 and adds brackets if missing. Needed for requests, that now NEEDS brackets for ipv6. * slapos.slap: cast xml from unicode to string if it is unicode before parsing it. 1.3.1 (2014-11-13) ------------------ * slapos.proxy: fix slave support. 1.3.0 (2014-11-13) ------------------ * Introduce slapos list and slapos info CLIs. * slapos format: fix use_unique_local_address_block feature, and put default to false in configure_local. 1.2.4.1 (2014-10-09) -------------------- * slapos format: Don't chown partitions. * slapos format: alter_user is true again by default. 1.2.4 (2014-09-23) ------------------ * slapos.grid: add support for retention_delay. 1.2.3.1 (2014-09-15) -------------------- * General: Add compatibility with cliff 1.7.0. * tests: Prevent slap tests to leak its stubs/mocks. 1.2.3 (2014-09-11) ------------------ * slapos.proxy: Add multimaster basic support. 1.2.2 (2014-09-10) ------------------ * slapos.collect: Compress historical logs and fix folder permissions. 1.2.1 (2014-08-21) ------------------ * slapproxy: add automatic migration to new database schema if needed. 1.2.0 (2014-08-18) ------------------ Note: not officially released as egg. * slapproxy: add correct support for slaves, instance_guid, state. * slapproxy: add getComputerPartitionStatus dummy support. 
* slapproxy: add multi-nodes support 1.1.2 (2014-06-02) ------------------ * Minor fixes 1.1.1 (2014-05-23) ------------------ * Drop legacy commands * Introduced SlapOS node Collect 1.0.5 (2014-04-29) ------------------ * Fix slapgrid commands return code * slapos proxy start do not need to be launched as root 1.0.2.1 (2014-01-16) -------------------- Fixes: * Add backward compabitility in slap lib with older slapproxy (<1.0.1) 1.0.1 (2014-01-14) ------------------ New features: * Add configure-local command for standalone slapos [Cedric de Saint Martin/Gabriel Monnerat] Fixes: * Fix slapproxy missing _connection_dict [Rafael Monnerat] 1.0.0 (2014-01-01) ------------------ New features: * slapconsole: Use readline for completion and history. [Jerome Perrin] * slapos console: support for ipython and bpython [Marco Mariani] * Initial windows support. [Jondy Zhao] * Support new/changed parameters in command line tools, defined in documentation. [Marco Mariani] * Register: support for one-time authentication token. [Marco Mariani] * New command: "slapos configure client" [Marco Mariani] * add new "root_check" option in slapos configuration file (true by default) allowing to bypass "am I root" checks in slapos. [Cedric de Saint Martin] * Add support for getSoftwareReleaseListFromSoftwareProduct() SLAP method. [Cedric de Saint Martin] * Add support for Software Product in request, supply and console. [Cedric de Saint Martin] Major Improvements: * Major refactoring of entry points, clearly defining all possible command line parameters, separating logic from arg/conf parsing and logger setup, sanitizing most parameters, and adding help and documentation for each command. [Marco Mariani] * Correct handling of common errors: print error message instead of traceback. [Marco Mariani] * Dramatically speed up slapformat. [Cedric de Saint Martin] * Remove CONFIG_SITE env var from Buildout environment, fixing support of OpenSuse 12.x. [Cedric de Saint Martin] * RootSoftwareInstance is now the default software type. [Cedric de Saint Martin] * Allow to use SlapOS Client for instances deployed in shared SlapOS Nodes. [Cedric de Saint Martin] Other fixes: * Refuse to run 'slapos node' commands as non root. [Marco Mariani] * Register: Replace all reference to vifib by SlapOS Master. [Cedric de Saint Martin] * Watchdog: won't call bang if bang was already called but problem has not been solved. [Cédric de Saint Martin] * Slapgrid: avoid spurious empty lines in Popen() stdout/log. [Marco Mariani] * Slapgrid: Properly include any partition containing any SR informations in the list of partitions to proceed. [Cedric de Saint Martin] * Slapgrid: Remove the timestamp file after defined periodicity. Fixes odd use cases when an instance failing to process after some time is still considered as valid by the node. [Cedric de Saint Martin] * Slapgrid: Fix scary but harmless warnings, fix grammar, remove references to ViFiB. [Cedric de Saint Martin, Jérome Perrin, Marco Mariani] * Slapgrid: Fixes support of Python >= 2.6. [Arnaud Fontaine] * Slapgrid: Check if SR is upload-blacklisted only if we have upload informations. [Cedric de Saint Martin] * Slapgrid: override $HOME to be software_path or instance_path. Fix leaking files like /opt/slapgrid/.npm. [Marco Mariani] * Slapgrid: Always retrieve certificate and key, update files if content changed. Fix "quick&dirty" manual slapos.cfg swaps (change of Node ID). [Marco Mariani] * Slapformat: Make sure everybody can read slapos configuration directory. 
[Cedric de Saint Martin] * Slapformat: Fix support of slapproxy. [Marco Mariani] * Slapformat: slapos.xml backup: handle corrupted zip files. [Marco Mariani] * Slapformat: Don't erase shell information for each user, every time. Allows easy debugging. [Cédric de Saint Martin] 0.35.1 (2013-02-18) ------------------- New features: * Add ComputerPartition._instance_guid getter in SLAP library. [Cedric de Saint Martin] * Add ComputerPartition._instance_guid support in slapproxy. [Cedric de Saint Martin] Fixes: * Fix link existence check when deploying instance if SR is not correctly installed. This fixes a misleading error. [Cedric de Saint Martin] * Improve message shown to user when requesting. [Cedric de Saint Martin] * Raise NotReady when _requested_state doesn't exist when trying to fetch it from getter. [Cedric de Saint Martin] 0.35 (2013-02-08) ----------------- * slapos: display version number with help. [Marco Mariani] * slapformat: backup slapos.xml to a zip archive at every change. [Marco Mariani] * slapformat: Don't check validity of ipv4 when trying to add address that already exists. [Cedric de Saint Martin] * slapgrid: create and run $MD5/buildout.cfg for eaiser debugging. [Marco Mariani] * slapgrid: keep running if cp.error() or sr.error() have issues (fixes 20130119-744D94). [Marco Mariani] * slapgrid does not crash when there are no certificates (fixes #20130121-136C24). [Marco Mariani] * Add slapproxy-query command. [Marco Mariani] * Other minor typo / output fixes. 0.34 (2013-01-23) ----------------- * networkcache: only match major release number in Debian, fixed platform detection for Ubuntu. [Marco Mariani] * symlink to software_release in each partition. [Marco Mariani] * slapos client: Properly expand "~" when giving configuration file location. [Cedric de Saint Martin] * slapgrid: stop instances that should be stopped even if buildout and/or reporting failed. [Cedric de Saint Martin] * slapgrid: Don't periodically force-process a stopped instance. [Cedric de Saint Martin] * slapgrid: Handle pid files of slapgrid launched through different entry points. [Cedric de Saint Martin] * Watchdog: Bang is called with correct instance certificates. [Cedric Le Ninivin] * Watchdog: Fix watchdog call. [Cedric le Ninivin] * Add a symlink of the used software release in each partitions. [Marco Mariani] * slapformat is verbose by default. [Cedric de Saint Martin] * slapproxy: Filter by instance_guid, allow computer partition renames and change of software_type and requested_state. [Marco Mariani] * slapproxy: Stop instance even if buildout/reporting is wrong. [Cedric de Saint Martin] * slapproxy: implement softwareInstanceRename method. [Marco Mariani] * slapproxy: alllow requests to software_type. [Marco Mariani] * Many other minor fixes. See git diff for details. 0.33.1 (2012-11-05) ------------------- * Fix "slapos console" argument parsing. [Cedric de Saint Martin] 0.33 (2012-11-02) ----------------- * Continue to improve new entry points. The following are now functional: - slapos node format - slapos node start/stop/restart/tail - slapos node supervisord/supervisorctl - slapos node supply and add basic usage. [Cedric de Saint Martin] * Add support for "SLAPOS_CONFIGURATION" and SLAPOS_CLIENT_CONFIGURATION environment variables. (commit c72a53b1) [Cédric de Saint Martin] * --only_sr also accepts plain text URIs. [Marco Mariani] 0.32.3 (2012-10-15) ------------------- * slapgrid: Adopt new return value strategy (0=OK, 1=failed, 2=promise failed) (commit 5d4e1522). 
[Cedric de Saint Martin] * slaplib: add requestComputer (commits 6cbe82e0, aafb86eb). [Łukasz Nowak] * slapgrid: Add stopasgroup and killasgroup to supervisor (commit 36e0ccc0). [Cedric de Saint Martin] * slapproxy: don't start in debug mode by default (commit e32259c8). [Cédric Le Ninivin * SlapObject: ALWAYS remove tmpdir (commit a652a610). [Cedric de Saint Martin] 0.32.2 (2012-10-11) ------------------- * slapgrid: Remove default delay, now that SlapOS Master is Fast as Light (tm). (commit 03a85d6b8) [Cedric de Saint Martin] * Fix watchdog entry point name, introduced in v0.31. (commit a8651ba12) [Cedric de Saint Martin] * slapgrid: Better filter of instances, won't process false positives anymore (hopefully). (commit ce0a73b41) [Cedric de Saint Martin] * Various output improvements. [Cedric de Saint Martin] 0.32.1 (2012-10-09) ------------------- * slapgrid: Make sure error logs are sent to SlapOS master. Finish implementation began in 0.32. [Cedric de Saint Martin] * slapgrid: Fix Usage Report in case of not empty partition with no SR. [Cedric de Saint Martin] 0.32 (2012-10-04) ----------------- * Introduce new, simpler "slapos" entry point. See documentation for more informations. Note: some functionnalities of this new entry point don't work yet or is not as simple as it should be. [Cedric de Saint Martin, Cedric Le Ninivin] * Revamped "slapos request" to work like described in documentation. [Cédric Le Ninivin, Cédric de Saint Martin] * Rewrote slapgrid logger to always log into stdout. (commits a4d277c881, 5440626dea)[Cédric de Saint Martin] 0.31.2 (2012-10-02) ------------------- * Update slapproxy behavior: when instance already exist, only update partition_parameter_kw. (commit 317d5c8e0aee) [Cedric de Saint Martin] 0.31.1 (2012-10-02) ------------------- * Fixed Watchdog call in slapgrid. [Cédric Le Ninivin] 0.31 (2012-10-02) ------------------- * Added slapos-watchdog to bang exited and failing serices in instance in supervisord. (commits 16b2e8b8, 1dade5cd7) [Cédric Le Ninivin] * Add safety checks before calling SlapOS Master if mandatory instance members of SLAP classes are not properly set. Will result in less calls to SlapOS Master in dirty cases. (commits 5097e87c9763, 5fad6316a0f6d, f2cd014ea8aa) [Cedric de Saint Martin] * Add "periodicty" functionnality support for instances: if an instance has not been processed by slapgrid after defined time, process it. (commits 7609fc7a3d, 56e1c7bfbd) [Cedric Le Ninivin] * slapproxy: Various improvements in slave support (commits 96c6b78b67, bcac5a397d, fbb680f53b)[Cedric Le Ninivin] * slapgrid: bulletproof slapgrid-cp: in case one instance is bad, still processes all other ones. (commits bac94cdb56, 77bc6c75b3d, bd68b88cc3) [Cedric de Saint Martin] * Add support for "upload to binary cache" URL blacklist [Cedric de Saint Martin] * Request on proxy are identified by requester and name (commit 0c739c3) [Cedric Le Ninivin] 0.30 (2012-09-19) ----------------- * Add initial "slave instances" support in slapproxy. [Cedric Le Ninivin] * slapgrid-ur fix: check for partition informations only if we have to destroy it. [Cedric de Saint Martin] 0.29 (2012-09-18) ----------------- * buildout: Migrate slap_connection magic instance profile part to slap-connection, and use variables names separated with '-'. [Cedric de Saint Martin] * slapgrid: Add support for instance.cfg instance profiles [Cedric de Saint Martin] * slapgrid-ur: much less calls to master. 
[Cedric de Saint Martin] 0.28.9 (2012-09-18) ------------------- * slapgrid: Don't process partitions that were not updated (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.8 (2012-09-18) ------------------- * slapgrid: Don't process free partitions (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.7 (2012-09-14) ------------------- * slapgrid: --maximal_delay reappeared to be used in special cases. [Cedric de Saint Martin] 0.28.6 (2012-09-10) ------------------- * register now uses slapos.cfg.example from master. [Cédric Le Ninivin] 0.28.5 (2012-08-23) ------------------- * Updated slapos.cfg for register [Cédric Le Ninivin] 0.28.4 (2012-08-22) ------------------- * Fixed egg building. 0.28.3 (2012-08-22) ------------------- * Avoid artificial tap creation on system check. [Łukasz Nowak] 0.28.2 (2012-08-17) ------------------- * Resolved path problem in register [Cédric Le Ninivin] 0.28.1 (2012-08-17) ------------------- * Resolved critical naming conflict 0.28 (2012-08-17) ----------------- * Introduce "slapos node register" command, that will register computer to SlapOS Master (vifib.net by default) for you. [Cédric Le Ninivin] * Set .timestamp in partitions ONLY after slapgrid thinks it's okay (promises, ...). [Cedric de Saint Martin] * slapgrid-ur: when destroying (not reporting), only care about instances to destroy, completely ignore others. [Cedric de Saint Martin] 0.27 (2012-08-08) ----------------- * slapformat: Raise correct error when no IPv6 is available on selected interface. [Cedric de Saint Martin] * slapgrid: Introduce --only_sr and --only_cp. - only_sr filters and forces the run of a single SR, and uses url_md5 (folder_id) - only_cp filters which computer partitions will be run. It can be a comma-separated list (slappartX,slappartY ...) [Rafael Monnerat] * slapgrid: Cleanup unused option (--usage-report-periodicity). [Cedric de Saint Martin] * slapgrid: --develop will work also for Computer Partitions. [Cedric de Saint Martin] * slaplib: setConnectionDict won't call Master if parameters haven't changed. [Cedric de Saint Martin] 0.26.2 (2012-07-09) ------------------- * Define UTF-8 encoding in SlapOS Node codebase, as defined in PEP-263. 0.26.1 (2012-07-06) ------------------- * slapgrid-sr: Add --develop option to make it ignore .completed files. * SLAP library: it is now possible to fetch whole dict of connection parameters. * SLAP library: it is now possible to fetch single instance parameter. * SLAP library: change Computer and ComputerPartition behavior to have proper caching of computer partition parameters. 0.26 (2012-07-05) ----------------- * slapformat: no_bridge option becomes 'not create_tap'. create_tap is true by default. So a bridge is used and tap will be created by default. [Cedric de Saint Martin] * Add delay for slapformat. [Cedric Le Ninivin] * If no software_type is given, use default one (i.e. fix "error 500" when requesting new instance). [Cedric de Saint Martin] * slapgrid: promise based software release, new api to fetch full computer information from server. [Yingjie Xu] * slapproxy: new api to mock full computer information [Yingjie Xu] * slapgrid: minor fix randomise delay feature. [Yingjie Xu] * slapgrid: optimise slapgrid-cp, run buildout only if there is an update on server side. [Yingjie Xu] * libslap: Allow accessing ServerError. [Vincent Pelletier] 0.25 (2012-05-16) ----------------- * Fix support for no_bridge option in configuration files for some values: no_bridge = false was stated as true. 
[Cedric de Saint Martin] * Delay a randomized period of time before calling slapgrid. [Yingjie Xu] * slapformat: Don't require tunctl if no_bridge is set [Leonardo Rochael] * slapformat: remove monkey patching when creating address so that it doesn't return false positive. [Cedric de Saint Martin] * Various: clearer error messages. 0.24 (2012-03-29) ----------------- * Handles different errors in a user friendly way [Cedric de Saint Martin] * slapgrid: Supports software destruction. [Łukasz Nowak] * slap: added support to Supply.supply state parameter (available, destroyed) [Łukasz Nowak] 0.23 (2012-02-29) ----------------- * slapgrid : Don't create tarball of sofwtare release when shacache is not configured. [Yingjie Xu] 0.22 (2012-02-09) ----------------- * slapformat : Add no-bridge feature. [Cedric de Saint Martin] * slapgrid : Add binary cache support. [Yingjie Xu] 0.21 (2011-12-23) ----------------- * slap: Add renaming API. [Antoine Catton] 0.20 (2011-11-24) ----------------- * slapgrid: Support service-less parttions. [Antoine Catton] * slapgrid: Avoid gid collision while dropping privileges. [Antoine Catton] * slapgrid: Drop down network usage during usage reporting. [Łukasz Nowak] * general: Add sphinx documentation. [Romain Courteaud] 0.19 (2011-11-07) ----------------- * bang: Executable to be called by being banged computer. [Łukasz Nowak] 0.18 (2011-10-18) ----------------- * Fix 0.17 release: missing change for slap library. [Łukasz Nowak] 0.17 (2011-10-18) ----------------- * slap: Avoid request under the hood. [Łukasz Nowak] * slap: ComputerPartition.bang provided. It allows to update all instances in tree. [Łukasz Nowak] * slap: Computer.bang provided. It allows to bang all instances on computer. [Łukasz Nowak] 0.16 (2011-10-03) ----------------- * slapgrid: Bugfix for slapgrid introduced in 0.15. [Łukasz Nowak] 0.15 (2011-09-27) ----------------- * slapgrid: Sanitize environment variables as early as possible. [Arnaud Fontaine] * slap: Docstring bugfix. [Sebastien Robin] * slap: Make request asynchronous call. [Łukasz Nowak] 0.14 (2011-08-31) ----------------- * slapgrid: Implement SSL based authentication to shadir and shacache. [Łukasz Nowak] * slapgrid, slap: Fix usage report packing list generation. [Nicolas Godbert] 0.13 (2011-08-25) ----------------- * slapgrid: Implement software signing and shacache upload. [Lucas Carvalho] * slap: Support slave instances [Gabriel Monnerat] * slapformat: Generate always address for computer [Łukasz Nowak] * slapgrid: Support promises scripts [Antoine Catton] * general: slapos.core gets tests. [many contributors] 0.12 (2011-07-15) ----------------- * Include modifications that should have been included in 0.11. 0.11 (2011-07-15) ----------------- * Bug fix : slapconsole : shorthand methods request and supply now correctly return an object. [Cedric de Saint Martin] 0.10 (2011-07-13) ----------------- * Fix a bug in slapconsole where request and supply shorthand methods don't accept all needed parameters. [Cedric de Saint Martin] 0.9 (2011-07-11) ---------------- * slapconsole: Simplify usage and use configuration file. You can now just run slapconsole and type things like "request(kvm, 'mykvm')". [Cedric de Saint Martin] * slapformat: Fix issue of bridge not connected with real interface on Linux >= 2.6.39 [Arnaud Fontaine] * slapformat: Allow to have IPv6 only interface, with bridge still supporting local IPv4 stack. [Łukasz Nowak] 0.8 (2011-06-27) ---------------- * slapgrid: Bugfix for temporary extends cache permissions. 
[Łukasz Nowak] 0.7 (2011-06-27) ---------------- * slapgrid: Fallback to buildout in own search path. [Łukasz Nowak] 0.6 (2011-06-27) ---------------- * slap: Fix bug: state shall be XML encapsulated. [Łukasz Nowak] 0.5 (2011-06-24) ---------------- * slapgrid: Use temporary extends-cache directory in order to make faster remote profile refresh. [Łukasz Nowak] 0.4 (2011-06-24) ---------------- * general: Polish requirement versions. [Arnaud Fontaine] * general: Remove libnetworkcache. [Lucas Carvalho] * slap: Remove not needed method from interface. [Romain Courteaud] * slap: state parameter is accepted and transmitted to SlapOS master [Łukasz Nowak] * slapformat: Implement dry run. [Vincent Pelletier] * slapgrid: Allow to select any buildout binary used to bootstrap environment. [Łukasz Nowak] 0.3 (2011-06-14) ---------------- * slap: Implement SLA by filter_kw in OpenOrder.request. [Łukasz Nowak] * slap: Timeout network operations. [Łukasz Nowak] * slapformat: Make slapsoft and slapuser* system users. [Kazuhiko Shiozaki] * slapgrid: Add more tolerance with supervisord. [Łukasz Nowak] 0.2 (2011-06-01) ---------------- * Include required files in distribution [Łukasz Nowak] 0.1 (2011-05-27) ---------------- * Merged slapos.slap, slapos.tool.console, slapos.tool.format, slapos.tool.grid, slapos.tool.libnetworkcache and slapos.tool.proxy into one package: slapos.core slapos.core-1.3.18/MANIFEST.in0000644000000000000000000000032112752436067015540 0ustar rootroot00000000000000include CHANGES.txt include slapos/proxy/schema.sql include slapos/slapos-client.cfg.example include slapos/slapos-proxy.cfg.example include slapos/slapos.cfg.example recursive-include slapos *.in *.txt *.xsd slapos.core-1.3.18/slapos.core.egg-info/0000755000000000000000000000000013006632706017717 5ustar rootroot00000000000000slapos.core-1.3.18/slapos.core.egg-info/SOURCES.txt0000644000000000000000000000557413006632706021616 0ustar rootroot00000000000000CHANGES.txt MANIFEST.in README.txt setup.cfg setup.py slapos/README.console.txt slapos/README.format.txt slapos/README.grid.txt slapos/README.proxy.txt slapos/README.slap.txt slapos/__init__.py slapos/bang.py slapos/client.py slapos/format.py slapos/human.py slapos/slapos-client.cfg.example slapos/slapos-proxy.cfg.example slapos/slapos.cfg.example slapos/slapos.xsd slapos/util.py slapos/version.py slapos.core.egg-info/PKG-INFO slapos.core.egg-info/SOURCES.txt slapos.core.egg-info/dependency_links.txt slapos.core.egg-info/entry_points.txt slapos.core.egg-info/namespace_packages.txt slapos.core.egg-info/not-zip-safe slapos.core.egg-info/requires.txt slapos.core.egg-info/top_level.txt slapos/cli/__init__.py slapos/cli/bang.py slapos/cli/boot.py slapos/cli/cache.py slapos/cli/collect.py slapos/cli/command.py slapos/cli/config.py slapos/cli/configure_client.py slapos/cli/console.py slapos/cli/entry.py slapos/cli/format.py slapos/cli/info.py slapos/cli/list.py slapos/cli/proxy_show.py slapos/cli/proxy_start.py slapos/cli/register.py slapos/cli/remove.py slapos/cli/request.py slapos/cli/slapgrid.py slapos/cli/supervisorctl.py slapos/cli/supervisord.py slapos/cli/supply.py slapos/cli/coloredlogs/LICENSE.txt slapos/cli/coloredlogs/__init__.py slapos/cli/coloredlogs/converter.py slapos/cli/coloredlogs/demo.py slapos/cli/configure_local/__init__.py slapos/collect/README.txt slapos/collect/__init__.py slapos/collect/db.py slapos/collect/entity.py slapos/collect/reporter.py slapos/collect/snapshot.py slapos/collect/temperature/__init__.py slapos/collect/temperature/heating.py 
slapos/grid/SlapObject.py slapos/grid/__init__.py slapos/grid/distribution.py slapos/grid/exception.py slapos/grid/networkcache.py slapos/grid/slapgrid.py slapos/grid/svcbackend.py slapos/grid/utils.py slapos/grid/watchdog.py slapos/grid/zc.buildout-bootstrap.py slapos/grid/templates/buildout-tail.cfg.in slapos/grid/templates/group_partition_supervisord.conf.in slapos/grid/templates/iptables-ipv4-firewall-add.in slapos/grid/templates/program_partition_supervisord.conf.in slapos/grid/templates/supervisord.conf.in slapos/proxy/__init__.py slapos/proxy/db_version.py slapos/proxy/schema.sql slapos/proxy/views.py slapos/slap/__init__.py slapos/slap/slap.py slapos/slap/util.py slapos/slap/doc/computer_consumption.xsd slapos/slap/doc/partition_consumption.xsd slapos/slap/doc/software_instance.xsd slapos/slap/interface/__init__.py slapos/slap/interface/slap.py slapos/tests/__init__.py slapos/tests/cli.py slapos/tests/client.py slapos/tests/collect.py slapos/tests/configure_local.py slapos/tests/distribution.py slapos/tests/interface.py slapos/tests/slap.py slapos/tests/slapformat.py slapos/tests/slapgrid.py slapos/tests/slapobject.py slapos/tests/util.py slapos/tests/pyflakes/__init__.py slapos/tests/slapmock/__init__.py slapos/tests/slapmock/requests.py slapos/tests/slapproxy/__init__.py slapos/tests/slapproxy/slapos_multimaster.cfg.inslapos.core-1.3.18/slapos.core.egg-info/requires.txt0000644000000000000000000000043013006632705022313 0ustar rootroot00000000000000Flask lxml netaddr>=0.7.5 netifaces setuptools supervisor psutil>=2.0.0 xml_marshaller>=0.9.3 zope.interface zc.buildout cliff requests>=2.4.3 uritemplate [bpython_console] bpython [docs] Sphinx repoze.sphinx.autointerface sphinxcontrib.programoutput [ipython_console] ipython slapos.core-1.3.18/slapos.core.egg-info/top_level.txt0000644000000000000000000000000713006632705022445 0ustar rootroot00000000000000slapos slapos.core-1.3.18/slapos.core.egg-info/entry_points.txt0000644000000000000000000000270013006632705023213 0ustar rootroot00000000000000[console_scripts] slapos = slapos.cli.entry:main slapos-watchdog = slapos.grid.watchdog:main [slapos.cli] cache lookup = slapos.cli.cache:CacheLookupCommand configure client = slapos.cli.configure_client:ConfigureClientCommand configure local = slapos.cli.configure_local:ConfigureLocalCommand console = slapos.cli.console:ConsoleCommand info = slapos.cli.info:InfoCommand list = slapos.cli.list:ListCommand node bang = slapos.cli.bang:BangCommand node boot = slapos.cli.boot:BootCommand node collect = slapos.cli.collect:CollectCommand node format = slapos.cli.format:FormatCommand node instance = slapos.cli.slapgrid:InstanceCommand node register = slapos.cli.register:RegisterCommand node report = slapos.cli.slapgrid:ReportCommand node restart = slapos.cli.supervisorctl:SupervisorctlRestartCommand node software = slapos.cli.slapgrid:SoftwareCommand node start = slapos.cli.supervisorctl:SupervisorctlStartCommand node status = slapos.cli.supervisorctl:SupervisorctlStatusCommand node stop = slapos.cli.supervisorctl:SupervisorctlStopCommand node supervisorctl = slapos.cli.supervisorctl:SupervisorctlCommand node supervisord = slapos.cli.supervisord:SupervisordCommand node tail = slapos.cli.supervisorctl:SupervisorctlTailCommand proxy show = slapos.cli.proxy_show:ProxyShowCommand proxy start = slapos.cli.proxy_start:ProxyStartCommand remove = slapos.cli.remove:RemoveCommand request = slapos.cli.request:RequestCommand supply = slapos.cli.supply:SupplyCommand 
slapos.core-1.3.18/slapos.core.egg-info/namespace_packages.txt0000644000000000000000000000000713006632705024246 0ustar rootroot00000000000000slapos slapos.core-1.3.18/slapos.core.egg-info/dependency_links.txt0000644000000000000000000000000113006632705023764 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos.core.egg-info/not-zip-safe0000644000000000000000000000000112752450306022146 0ustar rootroot00000000000000 slapos.core-1.3.18/slapos.core.egg-info/PKG-INFO0000644000000000000000000012052213006632705021015 0ustar rootroot00000000000000Metadata-Version: 1.1 Name: slapos.core Version: 1.3.18 Summary: SlapOS core. Home-page: http://community.slapos.org Author: VIFIB Author-email: UNKNOWN License: GPLv3 Description: slapos.core =========== The core of SlapOS. Contains the SLAP library, and the slapgrid, slapformat, slapproxy tools. For more information, see http://www.slapos.org. Changes ======= 1.3.18 (2016-11-03) ------------------- * update default web url of master to slapos.vifib.com 1.3.17 (2016-10-25) ------------------- * slapos.grid: Always remove .timestamp and .slapgrid if partition is destroyed. * slapos.proxy: Propagate parent partition state to children * slapos.grid: Increase min space (1G) * slapos.grid: Save slapgrid state into the partition * slapos.format: Remove passwd call while format. * svcbackend: explicitely call the executable instead of using Popen 'executable' keyword. * slapos.grid: Introduce new garbage collector for instances ignored by buildout 1.3.16 (2016-09-29) ------------------- * slapos.format: Include disk usage report. Do not divide cpu_load by number of cpu cores. * slapos.format: set login shell for slapuser and lock login by password * slapos.slap: Do not post same connection parameters of slaves. * slapos.proxy: allow to update software release of partition 1.3.15 (2015-12-08) ------------------- * slapos.collect: Include disk usage report. Do not divide cpu_load by number of cpu cores. 1.3.14 (2015-10-27) ------------------- * slapos.grid: firewall fix bugs 1.3.13 (2015-10-26) ------------------- * slapos.grid: firewall accpet option to specify only list of ip address/wetwork to accept and reject. 1.3.12 (2015-10-15) ------------------- * slapos.grid: add support for firewall configuration using firewalld for partition that use tap+route interface (for kvm cluster). 1.3.11 (2015-09-25) ------------------- * slapos.grid: support shacache-ca-file and shadir-ca-file options. 1.3.10 (2015-04-28) ------------------- 1.3.9 (2015-02-20) ------------------ * slapos.format: allow to format additional list of folder for each partition to use as data storage location. * slapos.format: allow to create tap without bridge (when using option create_tap and tap_gateway_interface), configure ip route with generated ipv4 for tap to access guest vm from host machine. * slapos.grid: update generated buildout file with information to acess partition data storage folder. 1.3.8 (2015-02-04) ------------------ * slapos proxy: allow to specify/override host/port from command line. 1.3.7 (2015-01-30) ------------------ * slapos.grid: Don't try to process partition if software_release_url is None. Removes noisy errors in log. * slapos node report: retry several time when removing processes from supervisor. 1.3.6.3 (2015-01-23) -------------------- * slapos: make forbid_supervisord_automatic_launch generic. 1.3.6.2 (2015-01-22) -------------------- * slapos.grid.svcbackend: check if watchdog is started before restarting. 
1.3.6.1 (2015-01-19) -------------------- * slapos: allow to use supervisorctl without automatically starting supervisord. * slapos: Create supervisor configuration when running CLI. 1.3.6 (2015-01-16) ------------------ * supervisord: allow to start with --nodaemon. * rename : zc.buildout-bootstap.py -> zc.buildout-bootstrap.py. * update bootstrap.py. * slapproxy: add missing getComputerPartitionCertificate method * slapos boot: fix error reporting when ipv6 is not available 1.3.5 (2014-12-03) ------------------ * slapos.grid: do not ALWAYS sleep for promise_timeout. Instead, poll often, and continue if promise finished. This change allows a two-folds speed improvement in processing partitions. * slapos.format: don't chown recursively Software Releases. * slapos.util: use find to chown in chownDirectory. 1.3.4 (2014-11-26) ------------------ * slapos.slap hateoas: get 'me' document with no cache. * slapos.grid: report: fix unbound 'destroyed' variable. * slapos.slap: fix __getattr__ of product collection so that product.foo works. * slapos.cli info/list: use raw print instead of logger. 1.3.3 (2014-11-18) ------------------ * slapos.slap/slapos.proxy: Fix regression: requests library ignores empty parameters. * slapos.proxy: fix slave support (again) 1.3.2 (2014-11-14) ------------------ * slapos.slap: parse ipv6 and adds brackets if missing. Needed for requests, that now NEEDS brackets for ipv6. * slapos.slap: cast xml from unicode to string if it is unicode before parsing it. 1.3.1 (2014-11-13) ------------------ * slapos.proxy: fix slave support. 1.3.0 (2014-11-13) ------------------ * Introduce slapos list and slapos info CLIs. * slapos format: fix use_unique_local_address_block feature, and put default to false in configure_local. 1.2.4.1 (2014-10-09) -------------------- * slapos format: Don't chown partitions. * slapos format: alter_user is true again by default. 1.2.4 (2014-09-23) ------------------ * slapos.grid: add support for retention_delay. 1.2.3.1 (2014-09-15) -------------------- * General: Add compatibility with cliff 1.7.0. * tests: Prevent slap tests to leak its stubs/mocks. 1.2.3 (2014-09-11) ------------------ * slapos.proxy: Add multimaster basic support. 1.2.2 (2014-09-10) ------------------ * slapos.collect: Compress historical logs and fix folder permissions. 1.2.1 (2014-08-21) ------------------ * slapproxy: add automatic migration to new database schema if needed. 1.2.0 (2014-08-18) ------------------ Note: not officially released as egg. * slapproxy: add correct support for slaves, instance_guid, state. * slapproxy: add getComputerPartitionStatus dummy support. * slapproxy: add multi-nodes support 1.1.2 (2014-06-02) ------------------ * Minor fixes 1.1.1 (2014-05-23) ------------------ * Drop legacy commands * Introduced SlapOS node Collect 1.0.5 (2014-04-29) ------------------ * Fix slapgrid commands return code * slapos proxy start do not need to be launched as root 1.0.2.1 (2014-01-16) -------------------- Fixes: * Add backward compabitility in slap lib with older slapproxy (<1.0.1) 1.0.1 (2014-01-14) ------------------ New features: * Add configure-local command for standalone slapos [Cedric de Saint Martin/Gabriel Monnerat] Fixes: * Fix slapproxy missing _connection_dict [Rafael Monnerat] 1.0.0 (2014-01-01) ------------------ New features: * slapconsole: Use readline for completion and history. [Jerome Perrin] * slapos console: support for ipython and bpython [Marco Mariani] * Initial windows support. 
[Jondy Zhao] * Support new/changed parameters in command line tools, defined in documentation. [Marco Mariani] * Register: support for one-time authentication token. [Marco Mariani] * New command: "slapos configure client" [Marco Mariani] * add new "root_check" option in slapos configuration file (true by default) allowing to bypass "am I root" checks in slapos. [Cedric de Saint Martin] * Add support for getSoftwareReleaseListFromSoftwareProduct() SLAP method. [Cedric de Saint Martin] * Add support for Software Product in request, supply and console. [Cedric de Saint Martin] Major Improvements: * Major refactoring of entry points, clearly defining all possible command line parameters, separating logic from arg/conf parsing and logger setup, sanitizing most parameters, and adding help and documentation for each command. [Marco Mariani] * Correct handling of common errors: print error message instead of traceback. [Marco Mariani] * Dramatically speed up slapformat. [Cedric de Saint Martin] * Remove CONFIG_SITE env var from Buildout environment, fixing support of OpenSuse 12.x. [Cedric de Saint Martin] * RootSoftwareInstance is now the default software type. [Cedric de Saint Martin] * Allow to use SlapOS Client for instances deployed in shared SlapOS Nodes. [Cedric de Saint Martin] Other fixes: * Refuse to run 'slapos node' commands as non root. [Marco Mariani] * Register: Replace all reference to vifib by SlapOS Master. [Cedric de Saint Martin] * Watchdog: won't call bang if bang was already called but problem has not been solved. [Cédric de Saint Martin] * Slapgrid: avoid spurious empty lines in Popen() stdout/log. [Marco Mariani] * Slapgrid: Properly include any partition containing any SR informations in the list of partitions to proceed. [Cedric de Saint Martin] * Slapgrid: Remove the timestamp file after defined periodicity. Fixes odd use cases when an instance failing to process after some time is still considered as valid by the node. [Cedric de Saint Martin] * Slapgrid: Fix scary but harmless warnings, fix grammar, remove references to ViFiB. [Cedric de Saint Martin, Jérome Perrin, Marco Mariani] * Slapgrid: Fixes support of Python >= 2.6. [Arnaud Fontaine] * Slapgrid: Check if SR is upload-blacklisted only if we have upload informations. [Cedric de Saint Martin] * Slapgrid: override $HOME to be software_path or instance_path. Fix leaking files like /opt/slapgrid/.npm. [Marco Mariani] * Slapgrid: Always retrieve certificate and key, update files if content changed. Fix "quick&dirty" manual slapos.cfg swaps (change of Node ID). [Marco Mariani] * Slapformat: Make sure everybody can read slapos configuration directory. [Cedric de Saint Martin] * Slapformat: Fix support of slapproxy. [Marco Mariani] * Slapformat: slapos.xml backup: handle corrupted zip files. [Marco Mariani] * Slapformat: Don't erase shell information for each user, every time. Allows easy debugging. [Cédric de Saint Martin] 0.35.1 (2013-02-18) ------------------- New features: * Add ComputerPartition._instance_guid getter in SLAP library. [Cedric de Saint Martin] * Add ComputerPartition._instance_guid support in slapproxy. [Cedric de Saint Martin] Fixes: * Fix link existence check when deploying instance if SR is not correctly installed. This fixes a misleading error. [Cedric de Saint Martin] * Improve message shown to user when requesting. [Cedric de Saint Martin] * Raise NotReady when _requested_state doesn't exist when trying to fetch it from getter. 
[Cedric de Saint Martin] 0.35 (2013-02-08) ----------------- * slapos: display version number with help. [Marco Mariani] * slapformat: backup slapos.xml to a zip archive at every change. [Marco Mariani] * slapformat: Don't check validity of ipv4 when trying to add address that already exists. [Cedric de Saint Martin] * slapgrid: create and run $MD5/buildout.cfg for eaiser debugging. [Marco Mariani] * slapgrid: keep running if cp.error() or sr.error() have issues (fixes 20130119-744D94). [Marco Mariani] * slapgrid does not crash when there are no certificates (fixes #20130121-136C24). [Marco Mariani] * Add slapproxy-query command. [Marco Mariani] * Other minor typo / output fixes. 0.34 (2013-01-23) ----------------- * networkcache: only match major release number in Debian, fixed platform detection for Ubuntu. [Marco Mariani] * symlink to software_release in each partition. [Marco Mariani] * slapos client: Properly expand "~" when giving configuration file location. [Cedric de Saint Martin] * slapgrid: stop instances that should be stopped even if buildout and/or reporting failed. [Cedric de Saint Martin] * slapgrid: Don't periodically force-process a stopped instance. [Cedric de Saint Martin] * slapgrid: Handle pid files of slapgrid launched through different entry points. [Cedric de Saint Martin] * Watchdog: Bang is called with correct instance certificates. [Cedric Le Ninivin] * Watchdog: Fix watchdog call. [Cedric le Ninivin] * Add a symlink of the used software release in each partitions. [Marco Mariani] * slapformat is verbose by default. [Cedric de Saint Martin] * slapproxy: Filter by instance_guid, allow computer partition renames and change of software_type and requested_state. [Marco Mariani] * slapproxy: Stop instance even if buildout/reporting is wrong. [Cedric de Saint Martin] * slapproxy: implement softwareInstanceRename method. [Marco Mariani] * slapproxy: alllow requests to software_type. [Marco Mariani] * Many other minor fixes. See git diff for details. 0.33.1 (2012-11-05) ------------------- * Fix "slapos console" argument parsing. [Cedric de Saint Martin] 0.33 (2012-11-02) ----------------- * Continue to improve new entry points. The following are now functional: - slapos node format - slapos node start/stop/restart/tail - slapos node supervisord/supervisorctl - slapos node supply and add basic usage. [Cedric de Saint Martin] * Add support for "SLAPOS_CONFIGURATION" and SLAPOS_CLIENT_CONFIGURATION environment variables. (commit c72a53b1) [Cédric de Saint Martin] * --only_sr also accepts plain text URIs. [Marco Mariani] 0.32.3 (2012-10-15) ------------------- * slapgrid: Adopt new return value strategy (0=OK, 1=failed, 2=promise failed) (commit 5d4e1522). [Cedric de Saint Martin] * slaplib: add requestComputer (commits 6cbe82e0, aafb86eb). [Łukasz Nowak] * slapgrid: Add stopasgroup and killasgroup to supervisor (commit 36e0ccc0). [Cedric de Saint Martin] * slapproxy: don't start in debug mode by default (commit e32259c8). [Cédric Le Ninivin * SlapObject: ALWAYS remove tmpdir (commit a652a610). [Cedric de Saint Martin] 0.32.2 (2012-10-11) ------------------- * slapgrid: Remove default delay, now that SlapOS Master is Fast as Light (tm). (commit 03a85d6b8) [Cedric de Saint Martin] * Fix watchdog entry point name, introduced in v0.31. (commit a8651ba12) [Cedric de Saint Martin] * slapgrid: Better filter of instances, won't process false positives anymore (hopefully). (commit ce0a73b41) [Cedric de Saint Martin] * Various output improvements. 
[Cedric de Saint Martin] 0.32.1 (2012-10-09) ------------------- * slapgrid: Make sure error logs are sent to SlapOS master. Finish implementation began in 0.32. [Cedric de Saint Martin] * slapgrid: Fix Usage Report in case of not empty partition with no SR. [Cedric de Saint Martin] 0.32 (2012-10-04) ----------------- * Introduce new, simpler "slapos" entry point. See documentation for more informations. Note: some functionnalities of this new entry point don't work yet or is not as simple as it should be. [Cedric de Saint Martin, Cedric Le Ninivin] * Revamped "slapos request" to work like described in documentation. [Cédric Le Ninivin, Cédric de Saint Martin] * Rewrote slapgrid logger to always log into stdout. (commits a4d277c881, 5440626dea)[Cédric de Saint Martin] 0.31.2 (2012-10-02) ------------------- * Update slapproxy behavior: when instance already exist, only update partition_parameter_kw. (commit 317d5c8e0aee) [Cedric de Saint Martin] 0.31.1 (2012-10-02) ------------------- * Fixed Watchdog call in slapgrid. [Cédric Le Ninivin] 0.31 (2012-10-02) ------------------- * Added slapos-watchdog to bang exited and failing serices in instance in supervisord. (commits 16b2e8b8, 1dade5cd7) [Cédric Le Ninivin] * Add safety checks before calling SlapOS Master if mandatory instance members of SLAP classes are not properly set. Will result in less calls to SlapOS Master in dirty cases. (commits 5097e87c9763, 5fad6316a0f6d, f2cd014ea8aa) [Cedric de Saint Martin] * Add "periodicty" functionnality support for instances: if an instance has not been processed by slapgrid after defined time, process it. (commits 7609fc7a3d, 56e1c7bfbd) [Cedric Le Ninivin] * slapproxy: Various improvements in slave support (commits 96c6b78b67, bcac5a397d, fbb680f53b)[Cedric Le Ninivin] * slapgrid: bulletproof slapgrid-cp: in case one instance is bad, still processes all other ones. (commits bac94cdb56, 77bc6c75b3d, bd68b88cc3) [Cedric de Saint Martin] * Add support for "upload to binary cache" URL blacklist [Cedric de Saint Martin] * Request on proxy are identified by requester and name (commit 0c739c3) [Cedric Le Ninivin] 0.30 (2012-09-19) ----------------- * Add initial "slave instances" support in slapproxy. [Cedric Le Ninivin] * slapgrid-ur fix: check for partition informations only if we have to destroy it. [Cedric de Saint Martin] 0.29 (2012-09-18) ----------------- * buildout: Migrate slap_connection magic instance profile part to slap-connection, and use variables names separated with '-'. [Cedric de Saint Martin] * slapgrid: Add support for instance.cfg instance profiles [Cedric de Saint Martin] * slapgrid-ur: much less calls to master. [Cedric de Saint Martin] 0.28.9 (2012-09-18) ------------------- * slapgrid: Don't process not updated partitions (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.8 (2012-09-18) ------------------- * slapgrid: Don't process free partitions (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.7 (2012-09-14) ------------------- * slapgrid: --maximal_delay reappeared to be used in special cases. [Cedric de Saint Martin] 0.28.6 (2012-09-10) ------------------- * register now use slapos.cfg.example from master. [Cédric Le Ninivin] 0.28.5 (2012-08-23) ------------------- * Updated slapos.cfg for register [Cédric Le Ninivin] 0.28.4 (2012-08-22) ------------------- * Fixed egg building. 0.28.3 (2012-08-22) ------------------- * Avoid artificial tap creation on system check. 
[Łukasz Nowak] 0.28.2 (2012-08-17) ------------------- * Resolved path problem in register [Cédric Le Ninivin] 0.28.1 (2012-08-17) ------------------- * Resolved critical naming conflict 0.28 (2012-08-17) ----------------- * Introduce "slapos node register" command, that will register computer to SlapOS Master (vifib.net by default) for you. [Cédric Le Ninivin] * Set .timestamp in partitions ONLY after slapgrid thinks it's okay (promises, ...). [Cedric de Saint Martin] * slapgrid-ur: when destroying (not reporting), only care about instances to destroy, completely ignore others. [Cedric de Saint Martin] 0.27 (2012-08-08) ----------------- * slapformat: Raise correct error when no IPv6 is available on selected interface. [Cedric de Saint Martin] * slapgrid: Introduce --only_sr and --only_cp. - only_sr filter and force the run of a single SR, and uses url_md5 (folder_id) - only_cp filter which computer patition, will be runned. it can be a list, splited by comman (slappartX,slappartY ...) [Rafael Monnerat] * slapgrid: Cleanup unused option (--usage-report-periodicity). [Cedric de Saint Martin] * slapgrid: --develop will work also for Computer Partitions. [Cedric de Saint Martin] * slaplib: setConnectionDict won't call Master if parameters haven't changed. [Cedric de Saint Martin] 0.26.2 (2012-07-09) ------------------- * Define UTF-8 encoding in SlapOS Node codebase, as defined in PEP-263. 0.26.1 (2012-07-06) ------------------- * slapgrid-sr: Add --develop option to make it ignore .completed files. * SLAP library: it is now possible to fetch whole dict of connection parameters. * SLAP library: it is now possible to fetch single instance parameter. * SLAP library: change Computer and ComputerPartition behavior to have proper caching of computer partition parameters. 0.26 (2012-07-05) ----------------- * slapformat: no_bridge option becomes 'not create_tap'. create_tap is true by default. So a bridge is used and tap will be created by default. [Cedric de Saint Martin] * Add delay for slapformat. [Cedric Le Ninivin] * If no software_type is given, use default one (i.e fix "error 500" when requesting new instance). [Cedric de Saint Martin] * slapgrid: promise based software release, new api to fetch full computer information from server. [Yingjie Xu] * slapproxy: new api to mock full computer information [Yingjie Xu] * slapgrid: minor fix randomise delay feature. [Yingjie Xu] * slapgrid: optimise slapgrid-cp, run buildout only if there is an update on server side. [Yingjie Xu] * libslap: Allow accessing ServerError. [Vincent Pelletier] 0.25 (2012-05-16) ----------------- * Fix support for no_bridge option in configuration files for some values: no_bridge = false was stated as true. [Cedric de Saint Martin] * Delay a randomized period of time before calling slapgrid. [Yingjie Xu] * slapformat: Don't require tunctl if no_bridge is set [Leonardo Rochael] * slapformat: remove monkey patching when creating address so that it doesn't return false positive. [Cedric de Saint Martin] * Various: clearer error messages. 0.24 (2012-03-29) ----------------- * Handles different errors in a user friendly way [Cedric de Saint Martin] * slapgrid: Supports software destruction. [Łukasz Nowak] * slap: added support to Supply.supply state parameter (available, destroyed) [Łukasz Nowak] 0.23 (2012-02-29) ----------------- * slapgrid : Don't create tarball of sofwtare release when shacache is not configured. [Yingjie Xu] 0.22 (2012-02-09) ----------------- * slapformat : Add no-bridge feature. 
[Cedric de Saint Martin] * slapgrid : Add binary cache support. [Yingjie Xu] 0.21 (2011-12-23) ----------------- * slap: Add renaming API. [Antoine Catton] 0.20 (2011-11-24) ----------------- * slapgrid: Support service-less parttions. [Antoine Catton] * slapgrid: Avoid gid collision while dropping privileges. [Antoine Catton] * slapgrid: Drop down network usage during usage reporting. [Łukasz Nowak] * general: Add sphinx documentation. [Romain Courteaud] 0.19 (2011-11-07) ----------------- * bang: Executable to be called by being banged computer. [Łukasz Nowak] 0.18 (2011-10-18) ----------------- * Fix 0.17 release: missing change for slap library. [Łukasz Nowak] 0.17 (2011-10-18) ----------------- * slap: Avoid request under the hood. [Łukasz Nowak] * slap: ComputerPartition.bang provided. It allows to update all instances in tree. [Łukasz Nowak] * slap: Computer.bang provided. It allows to bang all instances on computer. [Łukasz Nowak] 0.16 (2011-10-03) ----------------- * slapgrid: Bugfix for slapgrid introduced in 0.15. [Łukasz Nowak] 0.15 (2011-09-27) ----------------- * slapgrid: Sanitize environment variables as early as possible. [Arnaud Fontaine] * slap: Docstring bugfix. [Sebastien Robin] * slap: Make request asynchronous call. [Łukasz Nowak] 0.14 (2011-08-31) ----------------- * slapgrid: Implement SSL based authentication to shadir and shacache. [Łukasz Nowak] * slapgrid, slap: Fix usage report packing list generation. [Nicolas Godbert] 0.13 (2011-08-25) ----------------- * slapgrid: Implement software signing and shacache upload. [Lucas Carvalho] * slap: Support slave instances [Gabriel Monnerat] * slapformat: Generate always address for computer [Łukasz Nowak] * slapgrid: Support promises scripts [Antoine Catton] * general: slapos.core gets tests. [many contributors] 0.12 (2011-07-15) ----------------- * Include modifications that should have been included in 0.11. 0.11 (2011-07-15) ----------------- * Bug fix : slapconsole : shorthand methods request and supply now correctly return an object. [Cedric de Saint Martin] 0.10 (2011-07-13) ----------------- * Fix a bug in slapconsole where request and supply shorthand methods don't accept all needed parameters. [Cedric de Saint Martin] 0.9 (2011-07-11) ---------------- * slapconsole: Simplify usage and use configuration file. You can now just run slapconsole and type things like "request(kvm, 'mykvm')". [Cedric de Saint Martin] * slapformat: Fix issue of bridge not connected with real interface on Linux >= 2.6.39 [Arnaud Fontaine] * slapformat: Allow to have IPv6 only interface, with bridge still supporting local IPv4 stack. [Łukasz Nowak] 0.8 (2011-06-27) ---------------- * slapgrid: Bugfix for temporary extends cache permissions. [Łukasz Nowak] 0.7 (2011-06-27) ---------------- * slapgrid: Fallback to buildout in own search path. [Łukasz Nowak] 0.6 (2011-06-27) ---------------- * slap: Fix bug: state shall be XML encapsulated. [Łukasz Nowak] 0.5 (2011-06-24) ---------------- * slapgrid: Use temporary extends-cache directory in order to make faster remote profile refresh. [Łukasz Nowak] 0.4 (2011-06-24) ---------------- * general: Polish requirement versions. [Arnaud Fontaine] * general: Remove libnetworkcache. [Lucas Carvalho] * slap: Remove not needed method from interface. [Romain Courteaud] * slap: state parameter is accepted and transmitted to SlapOS master [Łukasz Nowak] * slapformat: Implement dry run. [Vincent Pelletier] * slapgrid: Allow to select any buildout binary used to bootstrap environment. 
[Łukasz Nowak] 0.3 (2011-06-14) ---------------- * slap: Implement SLA by filter_kw in OpenOrder.request. [Łukasz Nowak] * slap: Timeout network operations. [Łukasz Nowak] * slapformat: Make slapsoft and slapuser* system users. [Kazuhiko Shiozaki] * slapgrid: Add more tolerance with supervisord. [Łukasz Nowak] 0.2 (2011-06-01) ---------------- * Include required files in distribution [Łukasz Nowak] 0.1 (2011-05-27) ---------------- * Merged slapos.slap, slapos.tool.console, slapos.tool.format, slapos.tool.grid, slapos.tool.libnetworkcache and slapos.tool.proxy into one package: slapos.core console ------- The slapconsole tool allows interaction with a SlapOS Master through the SLAP library. For more information about SlapOS or slapconsole usage, please go to http://community.slapos.org. The slapconsole tool is only a bare Python console with several global variables defined and initialized. Initialization and configuration file ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Slapconsole automatically connects to a Master using the URL and SSL certificate from the given slapos.cfg. The certificate has to be a *USER* certificate, manually obtained from the SlapOS Master web interface. The slapconsole tool reads the given slapos.cfg configuration file and uses the following information: * Master URL is read from [slapos] in the "master_url" parameter. * SSL Certificate is read from [slapconsole] in the "cert_file" parameter. * SSL Key is read from [slapconsole] in the "key_file" parameter. See slapos.cfg.example for examples. Global functions/variables ~~~~~~~~~~~~~~~~~~~~~~~~~~ * "request()" is a shorthand for slap.registerOpenOrder().request(), used to request instances. * "supply()" is a shorthand for slap.registerSupply().supply(), used to request software installation. For more information about those methods, please read the SLAP library documentation. * "product" is an instance of slap.SoftwareProductCollection whose only goal is to retrieve the URL of the best Software Release of a given Software Product as an attribute. For each attribute call, it will retrieve from the SlapOS Master the best available Software Release URL and return it. This allows requesting instances in a few words, e.g.:: request("http://www.url.com/path/to/current/best/known/kvm/software.cfg", "mykvm") can be simplified into :: request(product.kvm, "mykvm") * "slap" is an instance of the SLAP library. It is only used for advanced usage. The "slap" instance is obtained by doing :: slap = slapos.slap.slap() slap.initializeConnection(config.master_url, key_file=config.key_file, cert_file=config.cert_file) Examples ~~~~~~~~ :: >>> # Request instance >>> request(product.kvm, "myuniquekvm") >>> # Request instance on specific computer >>> request(product.kvm, "myotheruniquekvm", filter_kw={ "computer_guid": "COMP-12345" }) >>> # Request instance, specifying parameters (here nbd_ip and nbd_port) >>> request(product.kvm, "mythirduniquekvm", partition_parameter_kw={"nbd_ip":"2a01:e35:2e27:460:e2cb:4eff:fed9:48dc", "nbd_port":"1024"}) >>> # Request software installation on owned computer >>> supply(product.kvm, "mycomputer") >>> # Fetch existing instance status >>> request(product.kvm, "myuniquekvm").getState() >>> # Fetch instance information on already launched instance >>> request(product.kvm, "myuniquekvm").getConnectionParameter("url") format ====== slapformat is an application to prepare a SlapOS-ready node (machine).
It "formats" the machine by: - creating users and groups - creating bridge interface - creating needed tap interfaces - creating needed directories with proper ownership and permissions In the end special report is generated and information are posted to configured SlapOS server. This program shall be only run by root. Requirements ------------ Linux with IPv6, bridging and tap interface support. Binaries: * brctl * groupadd * ip * tunctl * useradd grid ==== slapgrid is a client of SLAPos. SLAPos provides support for deploying a SaaS system in a minute. Slapgrid allows you to easily deploy instances of softwares based on buildout profiles. For more informations about SLAP and SLAPos, please see the SLAP documentation. Requirements ------------ A working SLAP server with informations about your computer, in order to retrieve them. As Vifib servers use IPv6 only, we strongly recommend an IPv6 enabled UNIX box. For the same reasons, Python >= 2.6 with development headers is also strongly recommended (IPv6 support is not complete in previous releases). For now, gcc and glibc development headers are required to build most software releases. Concepts -------- Here are the fundamental concepts of slapgrid : A Software Release (SR) is just a software. A Computer Partition (CP) is an instance of a Software Release. Imagine you want to install with slapgrid some software and run it. You will have to install the software as a Software Release, and then instantiate it, i.e configuring it for your needs, as a Computer Partition. How it works ------------ When run, slapgrid will authenticate to the SLAP library with a computer_id and fetch the list of Software Releases to install or remove and Computer Partitions to start or stop. Then, it will process each Software Release, and each Computer Partition. It will also periodically send to SLAP the usage report of each Computer Partition. Installation ------------ With easy_install:: $ easy_install slapgrid slapgrid needs several directories to be created and configured before being able to run : a software releases directory, and an instances directory with configured computer partition directory(ies). You should create for each Computer Partition directory created a specific user and associate it with its Computer Partition directory. Each Computer Partition directory should belongs to this specific user, with permissions of 0750. Usage ----- slapgrid needs several informations in order to run. You can specify them by adding arguments to the slapgrid command line, or by putting then in a configuration file. Beware : you need a valid computer resource on server side. Examples -------- simple example : Just run slapgrid: $ slapgrid --instance-root /path/to/instance/root --software-root /path/to/software_root --master-url https://some.server/some.resource --computer-id my.computer.id configuration file example:: [slapgrid] instance_root = /path/to/instance/root software_root = /path/to/software/root master_url = https://slapos.server/slap_service computer_id = my.computer.id then run slapgrid:: $ slapgrid --configuration-file = path/to/configuration/file proxy ===== Implement minimalist SlapOS Master server without any security, designed to work only from localhost with one SlapOS Node (a.k.a Computer). It implements (or should implement) the SLAP API, as currently implemented in the SlapOS Master (see slaptool.py in Master). 
The only behavioral difference from the SlapOS Master is: When the proxy doesn't find any free partition (and/or in case of slave instance, any compatible master instance), it will throw a NotFoundError (404). slap ==== Simple Language for Accounting and Provisioning Python library. Developer note - python version ------------------------------- This library is used on the client (slapgrid) and server side. The server is using python2.4 and the client is using python2.6. Having this in mind, the code of this library *has* to work on python2.4. How it works ------------ The SLAP main server, which is in charge of service coordination, receives from participating servers the number of computer partitions which are available, the type of resource which a party is ready to provide, and requests from parties for resources which are needed. Each participating server is identified by a unique ID and runs a slap-server daemon. This daemon collects from the main server the installation tasks and does the installation of resources, then notifies the main server of completion whenever a resource is configured, installed and available. The data structure on the main server is the following: A - Action: an action which can happen to provide a resource or account its usage CP - Computer Partition: provides a URL to access a Cloud Resource RI - Resource Item: describes a resource CI - Contract Item: describes the contract to attach the DL to (This is unclear still) R - Resource: describes a type of cloud resource (ex. MySQL Table) is published on slapgrid.org DL - Delivery Line: Describes an action happening on a resource item on a computer partition D - Delivery: groups multiple Delivery Lines Keywords: slapos core Platform: UNKNOWN Classifier: Programming Language :: Python slapos.core-1.3.18/setup.py0000644000000000000000000001100613006625060015500 0ustar rootroot00000000000000from setuptools import setup, find_packages from shutil import copyfile import glob import os from slapos.version import version name = 'slapos.core' long_description = open("README.txt").read() + "\n" + \ open("CHANGES.txt").read() + "\n" for f in sorted(glob.glob(os.path.join('slapos', 'README.*.txt'))): long_description += '\n' + open(f).read() + '\n' slapos_folder_path = os.path.dirname(__file__) for template_name in ('slapos-client.cfg.example', 'slapos-proxy.cfg.example', 'slapos.cfg.example'): template_path = os.path.join(slapos_folder_path, template_name) if os.path.exists(template_path): copyfile(template_path, os.path.join(slapos_folder_path, 'slapos', template_name)) additional_install_requires = [] # Even if argparse is available in python2.7, some python2.7 installations # do not have it, so checking python version is dangerous try: import argparse except ImportError: additional_install_requires.append('argparse') setup(name=name, version=version, description="SlapOS core.", long_description=long_description, classifiers=[ "Programming Language :: Python", ], keywords='slapos core', license='GPLv3', url='http://community.slapos.org', author='VIFIB', namespace_packages=['slapos'], packages=find_packages(), include_package_data=True, install_requires=[ 'Flask', # used by proxy 'lxml', # needed to play with XML trees 'netaddr>=0.7.5', # to play safely with IPv6 prefixes 'netifaces', # to fetch information about network devices 'setuptools', # namespaces 'supervisor', # slapgrid uses supervisor to manage processes 'psutil>=2.0.0', 'xml_marshaller>=0.9.3', # to unmarshall/marshall python objects to/from # XML 'zope.interface', # slap library
implements interfaces 'zc.buildout', 'cliff', 'requests>=2.4.3', 'uritemplate', # used by hateoas navigator ] + additional_install_requires, extras_require={ 'docs': ( 'Sphinx', 'repoze.sphinx.autointerface', 'sphinxcontrib.programoutput' ), 'ipython_console': ('ipython',), 'bpython_console': ('bpython',)}, tests_require=[ 'pyflakes', 'mock', 'httmock', ], zip_safe=False, # proxy depends on Flask, which has issues with # accessing templates entry_points={ 'console_scripts': [ 'slapos-watchdog = slapos.grid.watchdog:main', 'slapos = slapos.cli.entry:main', ], 'slapos.cli': [ # Utilities 'cache lookup = slapos.cli.cache:CacheLookupCommand', # SlapOS Node commands 'node bang = slapos.cli.bang:BangCommand', 'node format = slapos.cli.format:FormatCommand', 'node register = slapos.cli.register:RegisterCommand', 'node supervisord = slapos.cli.supervisord:SupervisordCommand', 'node supervisorctl = slapos.cli.supervisorctl:SupervisorctlCommand', 'node status = slapos.cli.supervisorctl:SupervisorctlStatusCommand', 'node start = slapos.cli.supervisorctl:SupervisorctlStartCommand', 'node stop = slapos.cli.supervisorctl:SupervisorctlStopCommand', 'node restart = slapos.cli.supervisorctl:SupervisorctlRestartCommand', 'node tail = slapos.cli.supervisorctl:SupervisorctlTailCommand', 'node report = slapos.cli.slapgrid:ReportCommand', 'node software = slapos.cli.slapgrid:SoftwareCommand', 'node instance = slapos.cli.slapgrid:InstanceCommand', 'node boot = slapos.cli.boot:BootCommand', 'node collect = slapos.cli.collect:CollectCommand', # SlapOS client commands 'console = slapos.cli.console:ConsoleCommand', 'configure local = slapos.cli.configure_local:ConfigureLocalCommand', 'configure client = slapos.cli.configure_client:ConfigureClientCommand', 'info = slapos.cli.info:InfoCommand', 'list = slapos.cli.list:ListCommand', 'supply = slapos.cli.supply:SupplyCommand', 'remove = slapos.cli.remove:RemoveCommand', 'request = slapos.cli.request:RequestCommand', # SlapOS Proxy commands 'proxy start = slapos.cli.proxy_start:ProxyStartCommand', 'proxy show = slapos.cli.proxy_show:ProxyShowCommand', ] }, test_suite="slapos.tests", ) slapos.core-1.3.18/PKG-INFO0000644000000000000000000012052213006632706015074 0ustar rootroot00000000000000Metadata-Version: 1.1 Name: slapos.core Version: 1.3.18 Summary: SlapOS core. Home-page: http://community.slapos.org Author: VIFIB Author-email: UNKNOWN License: GPLv3 Description: slapos.core =========== The core of SlapOS. Contains the SLAP library, and the slapgrid, slapformat, slapproxy tools. For more information, see http://www.slapos.org. Changes ======= 1.3.18 (2016-11-03) ------------------- * update default web url of master to slapos.vifib.com 1.3.17 (2016-10-25) ------------------- * slapos.grid: Always remove .timestamp and .slapgrid if partition is destroyed. * slapos.proxy: Propagate parent partition state to children * slapos.grid: Increase min space (1G) * slapos.grid: Save slapgrid state into the partition * slapos.format: Remove passwd call while format. * svcbackend: explicitly call the executable instead of using Popen 'executable' keyword. * slapos.grid: Introduce new garbage collector for instances ignored by buildout 1.3.16 (2016-09-29) ------------------- * slapos.format: Include disk usage report. Do not divide cpu_load by number of cpu cores. * slapos.format: set login shell for slapuser and lock login by password * slapos.slap: Do not post same connection parameters of slaves.
* slapos.proxy: allow to update software release of partition 1.3.15 (2015-12-08) ------------------- * slapos.collect: Include disk usage report. Do not divide cpu_load by number of cpu cores. 1.3.14 (2015-10-27) ------------------- * slapos.grid: firewall fix bugs 1.3.13 (2015-10-26) ------------------- * slapos.grid: firewall accept option to specify only list of ip address/network to accept and reject. 1.3.12 (2015-10-15) ------------------- * slapos.grid: add support for firewall configuration using firewalld for partition that use tap+route interface (for kvm cluster). 1.3.11 (2015-09-25) ------------------- * slapos.grid: support shacache-ca-file and shadir-ca-file options. 1.3.10 (2015-04-28) ------------------- 1.3.9 (2015-02-20) ------------------ * slapos.format: allow to format additional list of folder for each partition to use as data storage location. * slapos.format: allow to create tap without bridge (when using option create_tap and tap_gateway_interface), configure ip route with generated ipv4 for tap to access guest vm from host machine. * slapos.grid: update generated buildout file with information to access partition data storage folder. 1.3.8 (2015-02-04) ------------------ * slapos proxy: allow to specify/override host/port from command line. 1.3.7 (2015-01-30) ------------------ * slapos.grid: Don't try to process partition if software_release_url is None. Removes noisy errors in log. * slapos node report: retry several times when removing processes from supervisor. 1.3.6.3 (2015-01-23) -------------------- * slapos: make forbid_supervisord_automatic_launch generic. 1.3.6.2 (2015-01-22) -------------------- * slapos.grid.svcbackend: check if watchdog is started before restarting. 1.3.6.1 (2015-01-19) -------------------- * slapos: allow to use supervisorctl without automatically starting supervisord. * slapos: Create supervisor configuration when running CLI. 1.3.6 (2015-01-16) ------------------ * supervisord: allow to start with --nodaemon. * rename : zc.buildout-bootstap.py -> zc.buildout-bootstrap.py. * update bootstrap.py. * slapproxy: add missing getComputerPartitionCertificate method * slapos boot: fix error reporting when ipv6 is not available 1.3.5 (2014-12-03) ------------------ * slapos.grid: do not ALWAYS sleep for promise_timeout. Instead, poll often, and continue if promise finished. This change allows a two-fold speed improvement in processing partitions. * slapos.format: don't chown recursively Software Releases. * slapos.util: use find to chown in chownDirectory. 1.3.4 (2014-11-26) ------------------ * slapos.slap hateoas: get 'me' document with no cache. * slapos.grid: report: fix unbound 'destroyed' variable. * slapos.slap: fix __getattr__ of product collection so that product.foo works. * slapos.cli info/list: use raw print instead of logger. 1.3.3 (2014-11-18) ------------------ * slapos.slap/slapos.proxy: Fix regression: requests library ignores empty parameters. * slapos.proxy: fix slave support (again) 1.3.2 (2014-11-14) ------------------ * slapos.slap: parse ipv6 and adds brackets if missing. Needed for requests, that now NEEDS brackets for ipv6. * slapos.slap: cast xml from unicode to string if it is unicode before parsing it. 1.3.1 (2014-11-13) ------------------ * slapos.proxy: fix slave support. 1.3.0 (2014-11-13) ------------------ * Introduce slapos list and slapos info CLIs. * slapos format: fix use_unique_local_address_block feature, and put default to false in configure_local.
1.2.4.1 (2014-10-09) -------------------- * slapos format: Don't chown partitions. * slapos format: alter_user is true again by default. 1.2.4 (2014-09-23) ------------------ * slapos.grid: add support for retention_delay. 1.2.3.1 (2014-09-15) -------------------- * General: Add compatibility with cliff 1.7.0. * tests: Prevent slap tests to leak its stubs/mocks. 1.2.3 (2014-09-11) ------------------ * slapos.proxy: Add multimaster basic support. 1.2.2 (2014-09-10) ------------------ * slapos.collect: Compress historical logs and fix folder permissions. 1.2.1 (2014-08-21) ------------------ * slapproxy: add automatic migration to new database schema if needed. 1.2.0 (2014-08-18) ------------------ Note: not officially released as egg. * slapproxy: add correct support for slaves, instance_guid, state. * slapproxy: add getComputerPartitionStatus dummy support. * slapproxy: add multi-nodes support 1.1.2 (2014-06-02) ------------------ * Minor fixes 1.1.1 (2014-05-23) ------------------ * Drop legacy commands * Introduced SlapOS node Collect 1.0.5 (2014-04-29) ------------------ * Fix slapgrid commands return code * slapos proxy start do not need to be launched as root 1.0.2.1 (2014-01-16) -------------------- Fixes: * Add backward compabitility in slap lib with older slapproxy (<1.0.1) 1.0.1 (2014-01-14) ------------------ New features: * Add configure-local command for standalone slapos [Cedric de Saint Martin/Gabriel Monnerat] Fixes: * Fix slapproxy missing _connection_dict [Rafael Monnerat] 1.0.0 (2014-01-01) ------------------ New features: * slapconsole: Use readline for completion and history. [Jerome Perrin] * slapos console: support for ipython and bpython [Marco Mariani] * Initial windows support. [Jondy Zhao] * Support new/changed parameters in command line tools, defined in documentation. [Marco Mariani] * Register: support for one-time authentication token. [Marco Mariani] * New command: "slapos configure client" [Marco Mariani] * add new "root_check" option in slapos configuration file (true by default) allowing to bypass "am I root" checks in slapos. [Cedric de Saint Martin] * Add support for getSoftwareReleaseListFromSoftwareProduct() SLAP method. [Cedric de Saint Martin] * Add support for Software Product in request, supply and console. [Cedric de Saint Martin] Major Improvements: * Major refactoring of entry points, clearly defining all possible command line parameters, separating logic from arg/conf parsing and logger setup, sanitizing most parameters, and adding help and documentation for each command. [Marco Mariani] * Correct handling of common errors: print error message instead of traceback. [Marco Mariani] * Dramatically speed up slapformat. [Cedric de Saint Martin] * Remove CONFIG_SITE env var from Buildout environment, fixing support of OpenSuse 12.x. [Cedric de Saint Martin] * RootSoftwareInstance is now the default software type. [Cedric de Saint Martin] * Allow to use SlapOS Client for instances deployed in shared SlapOS Nodes. [Cedric de Saint Martin] Other fixes: * Refuse to run 'slapos node' commands as non root. [Marco Mariani] * Register: Replace all reference to vifib by SlapOS Master. [Cedric de Saint Martin] * Watchdog: won't call bang if bang was already called but problem has not been solved. [Cédric de Saint Martin] * Slapgrid: avoid spurious empty lines in Popen() stdout/log. [Marco Mariani] * Slapgrid: Properly include any partition containing any SR informations in the list of partitions to proceed. 
[Cedric de Saint Martin] * Slapgrid: Remove the timestamp file after defined periodicity. Fixes odd use cases when an instance failing to process after some time is still considered as valid by the node. [Cedric de Saint Martin] * Slapgrid: Fix scary but harmless warnings, fix grammar, remove references to ViFiB. [Cedric de Saint Martin, Jérome Perrin, Marco Mariani] * Slapgrid: Fixes support of Python >= 2.6. [Arnaud Fontaine] * Slapgrid: Check if SR is upload-blacklisted only if we have upload informations. [Cedric de Saint Martin] * Slapgrid: override $HOME to be software_path or instance_path. Fix leaking files like /opt/slapgrid/.npm. [Marco Mariani] * Slapgrid: Always retrieve certificate and key, update files if content changed. Fix "quick&dirty" manual slapos.cfg swaps (change of Node ID). [Marco Mariani] * Slapformat: Make sure everybody can read slapos configuration directory. [Cedric de Saint Martin] * Slapformat: Fix support of slapproxy. [Marco Mariani] * Slapformat: slapos.xml backup: handle corrupted zip files. [Marco Mariani] * Slapformat: Don't erase shell information for each user, every time. Allows easy debugging. [Cédric de Saint Martin] 0.35.1 (2013-02-18) ------------------- New features: * Add ComputerPartition._instance_guid getter in SLAP library. [Cedric de Saint Martin] * Add ComputerPartition._instance_guid support in slapproxy. [Cedric de Saint Martin] Fixes: * Fix link existence check when deploying instance if SR is not correctly installed. This fixes a misleading error. [Cedric de Saint Martin] * Improve message shown to user when requesting. [Cedric de Saint Martin] * Raise NotReady when _requested_state doesn't exist when trying to fetch it from getter. [Cedric de Saint Martin] 0.35 (2013-02-08) ----------------- * slapos: display version number with help. [Marco Mariani] * slapformat: backup slapos.xml to a zip archive at every change. [Marco Mariani] * slapformat: Don't check validity of ipv4 when trying to add address that already exists. [Cedric de Saint Martin] * slapgrid: create and run $MD5/buildout.cfg for eaiser debugging. [Marco Mariani] * slapgrid: keep running if cp.error() or sr.error() have issues (fixes 20130119-744D94). [Marco Mariani] * slapgrid does not crash when there are no certificates (fixes #20130121-136C24). [Marco Mariani] * Add slapproxy-query command. [Marco Mariani] * Other minor typo / output fixes. 0.34 (2013-01-23) ----------------- * networkcache: only match major release number in Debian, fixed platform detection for Ubuntu. [Marco Mariani] * symlink to software_release in each partition. [Marco Mariani] * slapos client: Properly expand "~" when giving configuration file location. [Cedric de Saint Martin] * slapgrid: stop instances that should be stopped even if buildout and/or reporting failed. [Cedric de Saint Martin] * slapgrid: Don't periodically force-process a stopped instance. [Cedric de Saint Martin] * slapgrid: Handle pid files of slapgrid launched through different entry points. [Cedric de Saint Martin] * Watchdog: Bang is called with correct instance certificates. [Cedric Le Ninivin] * Watchdog: Fix watchdog call. [Cedric le Ninivin] * Add a symlink of the used software release in each partitions. [Marco Mariani] * slapformat is verbose by default. [Cedric de Saint Martin] * slapproxy: Filter by instance_guid, allow computer partition renames and change of software_type and requested_state. [Marco Mariani] * slapproxy: Stop instance even if buildout/reporting is wrong. 
[Cedric de Saint Martin] * slapproxy: implement softwareInstanceRename method. [Marco Mariani] * slapproxy: alllow requests to software_type. [Marco Mariani] * Many other minor fixes. See git diff for details. 0.33.1 (2012-11-05) ------------------- * Fix "slapos console" argument parsing. [Cedric de Saint Martin] 0.33 (2012-11-02) ----------------- * Continue to improve new entry points. The following are now functional: - slapos node format - slapos node start/stop/restart/tail - slapos node supervisord/supervisorctl - slapos node supply and add basic usage. [Cedric de Saint Martin] * Add support for "SLAPOS_CONFIGURATION" and SLAPOS_CLIENT_CONFIGURATION environment variables. (commit c72a53b1) [Cédric de Saint Martin] * --only_sr also accepts plain text URIs. [Marco Mariani] 0.32.3 (2012-10-15) ------------------- * slapgrid: Adopt new return value strategy (0=OK, 1=failed, 2=promise failed) (commit 5d4e1522). [Cedric de Saint Martin] * slaplib: add requestComputer (commits 6cbe82e0, aafb86eb). [Łukasz Nowak] * slapgrid: Add stopasgroup and killasgroup to supervisor (commit 36e0ccc0). [Cedric de Saint Martin] * slapproxy: don't start in debug mode by default (commit e32259c8). [Cédric Le Ninivin * SlapObject: ALWAYS remove tmpdir (commit a652a610). [Cedric de Saint Martin] 0.32.2 (2012-10-11) ------------------- * slapgrid: Remove default delay, now that SlapOS Master is Fast as Light (tm). (commit 03a85d6b8) [Cedric de Saint Martin] * Fix watchdog entry point name, introduced in v0.31. (commit a8651ba12) [Cedric de Saint Martin] * slapgrid: Better filter of instances, won't process false positives anymore (hopefully). (commit ce0a73b41) [Cedric de Saint Martin] * Various output improvements. [Cedric de Saint Martin] 0.32.1 (2012-10-09) ------------------- * slapgrid: Make sure error logs are sent to SlapOS master. Finish implementation began in 0.32. [Cedric de Saint Martin] * slapgrid: Fix Usage Report in case of not empty partition with no SR. [Cedric de Saint Martin] 0.32 (2012-10-04) ----------------- * Introduce new, simpler "slapos" entry point. See documentation for more informations. Note: some functionnalities of this new entry point don't work yet or is not as simple as it should be. [Cedric de Saint Martin, Cedric Le Ninivin] * Revamped "slapos request" to work like described in documentation. [Cédric Le Ninivin, Cédric de Saint Martin] * Rewrote slapgrid logger to always log into stdout. (commits a4d277c881, 5440626dea)[Cédric de Saint Martin] 0.31.2 (2012-10-02) ------------------- * Update slapproxy behavior: when instance already exist, only update partition_parameter_kw. (commit 317d5c8e0aee) [Cedric de Saint Martin] 0.31.1 (2012-10-02) ------------------- * Fixed Watchdog call in slapgrid. [Cédric Le Ninivin] 0.31 (2012-10-02) ------------------- * Added slapos-watchdog to bang exited and failing serices in instance in supervisord. (commits 16b2e8b8, 1dade5cd7) [Cédric Le Ninivin] * Add safety checks before calling SlapOS Master if mandatory instance members of SLAP classes are not properly set. Will result in less calls to SlapOS Master in dirty cases. (commits 5097e87c9763, 5fad6316a0f6d, f2cd014ea8aa) [Cedric de Saint Martin] * Add "periodicty" functionnality support for instances: if an instance has not been processed by slapgrid after defined time, process it. 
(commits 7609fc7a3d, 56e1c7bfbd) [Cedric Le Ninivin] * slapproxy: Various improvements in slave support (commits 96c6b78b67, bcac5a397d, fbb680f53b)[Cedric Le Ninivin] * slapgrid: bulletproof slapgrid-cp: in case one instance is bad, still processes all other ones. (commits bac94cdb56, 77bc6c75b3d, bd68b88cc3) [Cedric de Saint Martin] * Add support for "upload to binary cache" URL blacklist [Cedric de Saint Martin] * Request on proxy are identified by requester and name (commit 0c739c3) [Cedric Le Ninivin] 0.30 (2012-09-19) ----------------- * Add initial "slave instances" support in slapproxy. [Cedric Le Ninivin] * slapgrid-ur fix: check for partition informations only if we have to destroy it. [Cedric de Saint Martin] 0.29 (2012-09-18) ----------------- * buildout: Migrate slap_connection magic instance profile part to slap-connection, and use variables names separated with '-'. [Cedric de Saint Martin] * slapgrid: Add support for instance.cfg instance profiles [Cedric de Saint Martin] * slapgrid-ur: much less calls to master. [Cedric de Saint Martin] 0.28.9 (2012-09-18) ------------------- * slapgrid: Don't process not updated partitions (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.8 (2012-09-18) ------------------- * slapgrid: Don't process free partitions (regression introduced in 0.28.7). [Cedric de Saint Martin] 0.28.7 (2012-09-14) ------------------- * slapgrid: --maximal_delay reappeared to be used in special cases. [Cedric de Saint Martin] 0.28.6 (2012-09-10) ------------------- * register now use slapos.cfg.example from master. [Cédric Le Ninivin] 0.28.5 (2012-08-23) ------------------- * Updated slapos.cfg for register [Cédric Le Ninivin] 0.28.4 (2012-08-22) ------------------- * Fixed egg building. 0.28.3 (2012-08-22) ------------------- * Avoid artificial tap creation on system check. [Łukasz Nowak] 0.28.2 (2012-08-17) ------------------- * Resolved path problem in register [Cédric Le Ninivin] 0.28.1 (2012-08-17) ------------------- * Resolved critical naming conflict 0.28 (2012-08-17) ----------------- * Introduce "slapos node register" command, that will register computer to SlapOS Master (vifib.net by default) for you. [Cédric Le Ninivin] * Set .timestamp in partitions ONLY after slapgrid thinks it's okay (promises, ...). [Cedric de Saint Martin] * slapgrid-ur: when destroying (not reporting), only care about instances to destroy, completely ignore others. [Cedric de Saint Martin] 0.27 (2012-08-08) ----------------- * slapformat: Raise correct error when no IPv6 is available on selected interface. [Cedric de Saint Martin] * slapgrid: Introduce --only_sr and --only_cp. - only_sr filter and force the run of a single SR, and uses url_md5 (folder_id) - only_cp filter which computer patition, will be runned. it can be a list, splited by comman (slappartX,slappartY ...) [Rafael Monnerat] * slapgrid: Cleanup unused option (--usage-report-periodicity). [Cedric de Saint Martin] * slapgrid: --develop will work also for Computer Partitions. [Cedric de Saint Martin] * slaplib: setConnectionDict won't call Master if parameters haven't changed. [Cedric de Saint Martin] 0.26.2 (2012-07-09) ------------------- * Define UTF-8 encoding in SlapOS Node codebase, as defined in PEP-263. 0.26.1 (2012-07-06) ------------------- * slapgrid-sr: Add --develop option to make it ignore .completed files. * SLAP library: it is now possible to fetch whole dict of connection parameters. * SLAP library: it is now possible to fetch single instance parameter. 
* SLAP library: change Computer and ComputerPartition behavior to have proper caching of computer partition parameters. 0.26 (2012-07-05) ----------------- * slapformat: no_bridge option becomes 'not create_tap'. create_tap is true by default. So a bridge is used and tap will be created by default. [Cedric de Saint Martin] * Add delay for slapformat. [Cedric Le Ninivin] * If no software_type is given, use default one (i.e fix "error 500" when requesting new instance). [Cedric de Saint Martin] * slapgrid: promise based software release, new api to fetch full computer information from server. [Yingjie Xu] * slapproxy: new api to mock full computer information [Yingjie Xu] * slapgrid: minor fix randomise delay feature. [Yingjie Xu] * slapgrid: optimise slapgrid-cp, run buildout only if there is an update on server side. [Yingjie Xu] * libslap: Allow accessing ServerError. [Vincent Pelletier] 0.25 (2012-05-16) ----------------- * Fix support for no_bridge option in configuration files for some values: no_bridge = false was stated as true. [Cedric de Saint Martin] * Delay a randomized period of time before calling slapgrid. [Yingjie Xu] * slapformat: Don't require tunctl if no_bridge is set [Leonardo Rochael] * slapformat: remove monkey patching when creating address so that it doesn't return false positive. [Cedric de Saint Martin] * Various: clearer error messages. 0.24 (2012-03-29) ----------------- * Handles different errors in a user friendly way [Cedric de Saint Martin] * slapgrid: Supports software destruction. [Łukasz Nowak] * slap: added support to Supply.supply state parameter (available, destroyed) [Łukasz Nowak] 0.23 (2012-02-29) ----------------- * slapgrid : Don't create tarball of sofwtare release when shacache is not configured. [Yingjie Xu] 0.22 (2012-02-09) ----------------- * slapformat : Add no-bridge feature. [Cedric de Saint Martin] * slapgrid : Add binary cache support. [Yingjie Xu] 0.21 (2011-12-23) ----------------- * slap: Add renaming API. [Antoine Catton] 0.20 (2011-11-24) ----------------- * slapgrid: Support service-less parttions. [Antoine Catton] * slapgrid: Avoid gid collision while dropping privileges. [Antoine Catton] * slapgrid: Drop down network usage during usage reporting. [Łukasz Nowak] * general: Add sphinx documentation. [Romain Courteaud] 0.19 (2011-11-07) ----------------- * bang: Executable to be called by being banged computer. [Łukasz Nowak] 0.18 (2011-10-18) ----------------- * Fix 0.17 release: missing change for slap library. [Łukasz Nowak] 0.17 (2011-10-18) ----------------- * slap: Avoid request under the hood. [Łukasz Nowak] * slap: ComputerPartition.bang provided. It allows to update all instances in tree. [Łukasz Nowak] * slap: Computer.bang provided. It allows to bang all instances on computer. [Łukasz Nowak] 0.16 (2011-10-03) ----------------- * slapgrid: Bugfix for slapgrid introduced in 0.15. [Łukasz Nowak] 0.15 (2011-09-27) ----------------- * slapgrid: Sanitize environment variables as early as possible. [Arnaud Fontaine] * slap: Docstring bugfix. [Sebastien Robin] * slap: Make request asynchronous call. [Łukasz Nowak] 0.14 (2011-08-31) ----------------- * slapgrid: Implement SSL based authentication to shadir and shacache. [Łukasz Nowak] * slapgrid, slap: Fix usage report packing list generation. [Nicolas Godbert] 0.13 (2011-08-25) ----------------- * slapgrid: Implement software signing and shacache upload. 
[Lucas Carvalho] * slap: Support slave instances [Gabriel Monnerat] * slapformat: Generate always address for computer [Łukasz Nowak] * slapgrid: Support promises scripts [Antoine Catton] * general: slapos.core gets tests. [many contributors] 0.12 (2011-07-15) ----------------- * Include modifications that should have been included in 0.11. 0.11 (2011-07-15) ----------------- * Bug fix : slapconsole : shorthand methods request and supply now correctly return an object. [Cedric de Saint Martin] 0.10 (2011-07-13) ----------------- * Fix a bug in slapconsole where request and supply shorthand methods don't accept all needed parameters. [Cedric de Saint Martin] 0.9 (2011-07-11) ---------------- * slapconsole: Simplify usage and use configuration file. You can now just run slapconsole and type things like "request(kvm, 'mykvm')". [Cedric de Saint Martin] * slapformat: Fix issue of bridge not connected with real interface on Linux >= 2.6.39 [Arnaud Fontaine] * slapformat: Allow to have IPv6 only interface, with bridge still supporting local IPv4 stack. [Łukasz Nowak] 0.8 (2011-06-27) ---------------- * slapgrid: Bugfix for temporary extends cache permissions. [Łukasz Nowak] 0.7 (2011-06-27) ---------------- * slapgrid: Fallback to buildout in own search path. [Łukasz Nowak] 0.6 (2011-06-27) ---------------- * slap: Fix bug: state shall be XML encapsulated. [Łukasz Nowak] 0.5 (2011-06-24) ---------------- * slapgrid: Use temporary extends-cache directory in order to make faster remote profile refresh. [Łukasz Nowak] 0.4 (2011-06-24) ---------------- * general: Polish requirement versions. [Arnaud Fontaine] * general: Remove libnetworkcache. [Lucas Carvalho] * slap: Remove not needed method from interface. [Romain Courteaud] * slap: state parameter is accepted and transmitted to SlapOS master [Łukasz Nowak] * slapformat: Implement dry run. [Vincent Pelletier] * slapgrid: Allow to select any buildout binary used to bootstrap environment. [Łukasz Nowak] 0.3 (2011-06-14) ---------------- * slap: Implement SLA by filter_kw in OpenOrder.request. [Łukasz Nowak] * slap: Timeout network operations. [Łukasz Nowak] * slapformat: Make slapsoft and slapuser* system users. [Kazuhiko Shiozaki] * slapgrid: Add more tolerance with supervisord. [Łukasz Nowak] 0.2 (2011-06-01) ---------------- * Include required files in distribution [Łukasz Nowak] 0.1 (2011-05-27) ---------------- * Merged slapos.slap, slapos.tool.console, slapos.tool.format, slapos.tool.grid, slapos.tool.libnetworkcache and slapos.tool.proxy into one package: slapos.core console ------- The slapconsole tool allows interaction with a SlapOS Master through the SLAP library. For more information about SlapOS or slapconsole usage, please go to http://community.slapos.org. The slapconsole tool is only a bare Python console with several global variables defined and initialized. Initialization and configuration file ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Slapconsole automatically connects to a Master using the URL and SSL certificate from the given slapos.cfg. The certificate has to be a *USER* certificate, manually obtained from the SlapOS Master web interface. The slapconsole tool reads the given slapos.cfg configuration file and uses the following information: * Master URL is read from [slapos] in the "master_url" parameter. * SSL Certificate is read from [slapconsole] in the "cert_file" parameter. * SSL Key is read from [slapconsole] in the "key_file" parameter. See slapos.cfg.example for examples.
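For illustration only, here is a minimal sketch of such a configuration file (it is not a copy of slapos.cfg.example; the URL and file paths are placeholders to adapt to your setup)::

    [slapos]
    # Placeholder Master URL, replace with the URL of your SlapOS Master.
    master_url = https://slap.example.com/

    [slapconsole]
    # Placeholder paths to the *USER* certificate and key obtained from the
    # SlapOS Master web interface.
    cert_file = /path/to/user/certificate
    key_file = /path/to/user/key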
Global functions/variables ~~~~~~~~~~~~~~~~~~~~~~~~~~ * "request()" is a shorthand for slap.registerOpenOrder().request(), used to request instances. * "supply()" is a shorthand for slap.registerSupply().supply(), used to request software installation. For more information about those methods, please read the SLAP library documentation. * "product" is an instance of slap.SoftwareProductCollection whose only goal is to retrieve the URL of the best Software Release of a given Software Product as an attribute. For each attribute call, it will retrieve from the SlapOS Master the best available Software Release URL and return it. This allows requesting instances in a few words, e.g.:: request("http://www.url.com/path/to/current/best/known/kvm/software.cfg", "mykvm") can be simplified into :: request(product.kvm, "mykvm") * "slap" is an instance of the SLAP library. It is only used for advanced usage. The "slap" instance is obtained by doing :: slap = slapos.slap.slap() slap.initializeConnection(config.master_url, key_file=config.key_file, cert_file=config.cert_file) Examples ~~~~~~~~ :: >>> # Request instance >>> request(product.kvm, "myuniquekvm") >>> # Request instance on specific computer >>> request(product.kvm, "myotheruniquekvm", filter_kw={ "computer_guid": "COMP-12345" }) >>> # Request instance, specifying parameters (here nbd_ip and nbd_port) >>> request(product.kvm, "mythirduniquekvm", partition_parameter_kw={"nbd_ip":"2a01:e35:2e27:460:e2cb:4eff:fed9:48dc", "nbd_port":"1024"}) >>> # Request software installation on owned computer >>> supply(product.kvm, "mycomputer") >>> # Fetch existing instance status >>> request(product.kvm, "myuniquekvm").getState() >>> # Fetch instance information on already launched instance >>> request(product.kvm, "myuniquekvm").getConnectionParameter("url") format ====== slapformat is an application to prepare a SlapOS-ready node (machine). It "formats" the machine by: - creating users and groups - creating a bridge interface - creating the needed tap interfaces - creating the needed directories with proper ownership and permissions In the end, a report is generated and the information is posted to the configured SlapOS server. This program shall only be run by root. Requirements ------------ Linux with IPv6, bridging and tap interface support. Binaries: * brctl * groupadd * ip * tunctl * useradd grid ==== slapgrid is a client of SlapOS. SlapOS provides support for deploying a SaaS system in a minute. Slapgrid allows you to easily deploy instances of software based on buildout profiles. For more information about SLAP and SlapOS, please see the SLAP documentation. Requirements ------------ A working SLAP server with information about your computer, in order to retrieve it. As Vifib servers use IPv6 only, we strongly recommend an IPv6-enabled UNIX box. For the same reasons, Python >= 2.6 with development headers is also strongly recommended (IPv6 support is not complete in previous releases). For now, gcc and glibc development headers are required to build most software releases. Concepts -------- Here are the fundamental concepts of slapgrid: A Software Release (SR) is just a piece of software. A Computer Partition (CP) is an instance of a Software Release. Imagine you want to install some software with slapgrid and run it. You will have to install the software as a Software Release, and then instantiate it, i.e. configure it for your needs, as a Computer Partition.
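To make the Software Release / Computer Partition distinction concrete, here is a minimal client-side sketch using the SLAP library described in the console section above (the master URL, software release URL and computer id are hypothetical placeholders, and a real setup would also pass the key_file/cert_file shown earlier)::

    import slapos.slap

    # Connect to the (placeholder) SlapOS Master.
    slap = slapos.slap.slap()
    slap.initializeConnection("https://slap.example.com/")

    # Placeholder Software Release URL.
    software_release_url = "http://example.com/path/to/software.cfg"

    # Ask the Master to install the Software Release on a given computer...
    slap.registerSupply().supply(software_release_url, "COMP-12345")

    # ...then request a Computer Partition, i.e. an instance of that
    # Software Release.
    partition = slap.registerOpenOrder().request(software_release_url,
                                                 "my-instance")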
How it works ------------ When run, slapgrid will authenticate to the SLAP library with a computer_id and fetch the list of Software Releases to install or remove and Computer Partitions to start or stop. Then, it will process each Software Release, and each Computer Partition. It will also periodically send to SLAP the usage report of each Computer Partition. Installation ------------ With easy_install:: $ easy_install slapgrid slapgrid needs several directories to be created and configured before being able to run: a software releases directory, and an instances directory with configured computer partition directory(ies). For each Computer Partition directory created, you should create a specific user and associate it with its Computer Partition directory. Each Computer Partition directory should belong to this specific user, with permissions of 0750. Usage ----- slapgrid needs several pieces of information in order to run. You can specify them by adding arguments to the slapgrid command line, or by putting them in a configuration file. Beware: you need a valid computer resource on the server side. Examples -------- Simple example, just run slapgrid: $ slapgrid --instance-root /path/to/instance/root --software-root /path/to/software_root --master-url https://some.server/some.resource --computer-id my.computer.id configuration file example:: [slapgrid] instance_root = /path/to/instance/root software_root = /path/to/software/root master_url = https://slapos.server/slap_service computer_id = my.computer.id then run slapgrid:: $ slapgrid --configuration-file path/to/configuration/file proxy ===== Implement minimalist SlapOS Master server without any security, designed to work only from localhost with one SlapOS Node (a.k.a Computer). It implements (or should implement) the SLAP API, as currently implemented in the SlapOS Master (see slaptool.py in Master). The only behavioral difference from the SlapOS Master is: When the proxy doesn't find any free partition (and/or in case of slave instance, any compatible master instance), it will throw a NotFoundError (404). slap ==== Simple Language for Accounting and Provisioning Python library. Developer note - python version ------------------------------- This library is used on the client (slapgrid) and server side. The server is using python2.4 and the client is using python2.6. Having this in mind, the code of this library *has* to work on python2.4. How it works ------------ The SLAP main server, which is in charge of service coordination, receives from participating servers the number of computer partitions which are available, the type of resource which a party is ready to provide, and requests from parties for resources which are needed. Each participating server is identified by a unique ID and runs a slap-server daemon. This daemon collects from the main server the installation tasks and does the installation of resources, then notifies the main server of completion whenever a resource is configured, installed and available. The data structure on the main server is the following: A - Action: an action which can happen to provide a resource or account its usage CP - Computer Partition: provides a URL to access a Cloud Resource RI - Resource Item: describes a resource CI - Contract Item: describes the contract to attach the DL to (This is unclear still) R - Resource: describes a type of cloud resource (ex.
MySQL Table) is published on slapgrid.org DL - Delivery Line: Describes an action happening on a resource item on a computer partition D - Delivery: groups multiple Delivery Lines Keywords: slapos core Platform: UNKNOWN Classifier: Programming Language :: Python slapos.core-1.3.18/README.txt0000644000000000000000000000025012752436067015501 0ustar rootroot00000000000000slapos.core =========== The core of SlapOS. Contains the SLAP library, and the slapgrid, slapformat, slapproxy tools. For more information, see http://www.slapos.org.