bootstrap-vz-0.9.11+20180121git/.gitignore

*.pyc

# Jekyll-generated files
/Gemfile.lock
/_site/

# When developing for ec2 `vagrant provision' is quite handy
/Vagrantfile
/.vagrant/

# Building the package
/build/
/dist/
/bootstrap_vz.egg-info/

# Testing
/.coverage
/.tox/
/build-servers.yml
/integration.html

bootstrap-vz-0.9.11+20180121git/.travis.yml

sudo: False
language: python
python:
  - "2.7"
install:
  - "pip install tox"
script:
  - "tox"

bootstrap-vz-0.9.11+20180121git/.yamllint

---
extends: default

rules:
  line-length:
    max: 160

bootstrap-vz-0.9.11+20180121git/CHANGELOG.rst

Changelog
=========

2017-02-20
----------
Hugo Antoniio Sepulveda Manriquez:
  * Updated puppet plugin module:
    * Installs Puppetlabs 4 PC1 agent software from apt.puppetlabs.com
    * Enables you to install modules from forge.puppetlabs.com in the image
  * Important limitations:
    * Only works for Wheezy and Jessie for now.
    * If you need puppet 3, just add 'puppet' to the packages provider list.
  * modules: When installing from forge, it assumes 'install --force'
  * modules: When installing from forge, it assumes the master version on forge

2016-06-04
----------
Anders Ingemann:
  * Disable persistent network interface names for >=stretch (by @apolloclark)
  * grub defaults and linux boot options are now easier to configure
  * Source ixgbevf driver from Intel, not SourceForge (by @justinsb)
  * Use systemd on jessie (by @JamesBromberger)
  * Tune ec2 images (sysctl settings, module blacklisting, nofail in fstab) (by @JamesBromberger)
  * Add enable_modules option for cloud-init (by @JamesBromberger)

2016-06-02
----------
Peter Wagner:
  * Added ec2_publish plugin

2016-06-02
----------
Zach Marano:
  * Fix expand-root script to work with newer versions of growpart (in jessie-backports and beyond).
  * Overhaul the Google Compute Engine image build:
    * Add support for Google Cloud repositories.
    * Google Cloud SDK install uses a deb package from a Google Cloud repository.
    * Google Compute Engine guest software is installed from a Google Cloud repository.
    * Google Compute Engine guest software for Debian 8 is updated to the new refactor.
    * Google Compute Engine wheezy and wheezy-backports manifests are deprecated.
2016-03-03
----------
Anders Ingemann:
  * Rename integration tests to system tests

2016-02-23
----------
Nicolas Braud-Santoni:
  * #282, #290: Added 'debconf' plugin
  * #290: Relaxed requirements on plugins manifests

2016-02-10
----------
Manoj Srivastava:
  * #252: Added support for password and static pubkey auth

2016-02-06
----------
Tiago Ilieve:
  * Added Oracle Compute Cloud provider
  * #280: Declared Squeeze as unsupported

2016-01-14
----------
Jesse Szwedko:
  * #269: EC2: Added growpart script extension

2016-01-10
----------
Clark Laughlin:
  * Enabled support for KVM on arm64

2015-12-19
----------
Tim Sattarov:
  * #263: Ignore loopback interface in udev rules (reduces startup of networking by a factor of 10)

2015-12-13
----------
Anders Ingemann:
  * Docker provider implemented (including integration testing harness & tests)
  * minimize_size: Added various size reduction options for dpkg and apt
  * Removed image section in manifest. Provider specific options have been moved to
    the provider section. The image name is now specified on the top level of the manifest with "name"
  * Provider docs have been greatly improved. All now list their special options.
  * All manifest option documentation is now accompanied by an example.
  * Added documentation for the integration test providers

2015-11-13
----------
Marcin Kulisz:
  * Exclude docs from binary package

2015-10-20
----------
Max Illfelder:
  * Remove support for the GCE Debian mirror

2015-10-14
----------
Anders Ingemann:
  * Bootstrap azure images directly to VHD

2015-09-28
----------
Rick Wright:
  * Change GRUB_HIDDEN_TIMEOUT to 0 from true and set GRUB_HIDDEN_TIMEOUT_QUIET to true.

2015-09-24
----------
Rick Wright:
  * Fix a problem with Debian 8 on GCE with >2TB disks

2015-09-04
----------
Emmanuel Kasper:
  * Set Virtualbox memory to 512 MB

2015-08-07
----------
Tiago Ilieve:
  * Change default Debian mirror

2015-08-06
----------
Stephen A. Zarkos:
  * Azure: Change default shell in /etc/default/useradd for Azure images
  * Azure: Add boot parameters to Azure config to ease local debugging
  * Azure: Add apt import for backports
  * Azure: Comment GRUB_HIDDEN_TIMEOUT so we can set GRUB_TIMEOUT
  * Azure: Wheezy images use wheezy-backports kernel by default
  * Azure: Change Wheezy image to use single partition
  * Azure: Update WALinuxAgent to use 2.0.14
  * Azure: Make sure we can override grub.ConfigureGrub for Azure images
  * Azure: Add console=tty0 to see kernel/boot messages on local console
  * Azure: Set serial port speed to 115200
  * Azure: Fix error with applying azure/assets/udev.diff

2015-07-30
----------
James Bromberger:
  * AWS: Support multiple ENI
  * AWS: PVGRUB AKIs for Frankfurt region

2015-06-29
----------
Alex Adriaanse:
  * Fix DKMS kernel version error
  * Add support for Btrfs
  * Add EC2 Jessie HVM manifest

2015-05-08
----------
Alexandre Derumier:
  * Fix #219: ^PermitRootLogin regex

2015-05-02
----------
Anders Ingemann:
  * Fix #32: Add image_commands example
  * Fix #99: rename image_commands to commands
  * Fix #139: Vagrant / Virtualbox provider should set ostype when 32 bits selected
  * Fix #204: Create a new phase where user modification tasks can run

2015-04-29
----------
Anders Ingemann:
  * Fix #104: Don't verify default target when adding packages
  * Fix #217: Implement get_version() function in common.tools

2015-04-28
----------
Jonh Wendell:
  * root_password: Enable SSH root login

2015-04-27
----------
John Kristensen:
  * Add authentication support to the apt proxy plugin

2015-04-25
----------
Anders Ingemann (work started 2014-08-31, merged on 2015-04-25):
  * Introduce `remote bootstrapping `__
  * Introduce `integration testing `__ (for VirtualBox and EC2)
  * Merge the end-user documentation into the sphinx docs
    (plugin & provider docs are now located in their respective folders as READMEs)
  * Include READMEs in sphinx docs and transform their links
  * Docs for integration testing
  * Document the remote bootstrapping procedure
  * Add documentation about the documentation
  * Add list of supported builds to the docs
  * Add html output to integration tests
  * Implement PR #201 by @jszwedko (bump required euca2ools version)
  * grub now works on jessie
  * extlinux is now running on jessie
  * Issue warning when specifying pre/successors across phases
    (but still error out if it's a conflict)
  * Add salt dependencies in the right phase
  * extlinux now works with GPT on HVM instances
  * Take @ssgelm's advice in #155 and copy the mount table -- df warnings no more
  * Generally deny installing grub on squeeze (too much of a hassle to get working, PRs welcome)
  * Add 1 sector gap between partitions on GPT
  * Add new task: DetermineKernelVersion, this can potentially fix a lot of small problems
  * Disable getty processes on jessie through logind config
  * Partition volumes by sectors instead of bytes.
    This allows for finer grained control over the partition sizes and gaps.
    Add new Sectors unit, enhance Bytes unit, add unit tests for both
  * Don't require qemu for raw volumes, use `truncate` instead
  * Fix #179: Disabling getty processes task fails half the time
  * Split grub and extlinux installs into separate modules
  * Fix extlinux config for squeeze
  * Fix #136: Make extlinux output boot messages to the serial console
  * Extend sed_i to raise Exceptions when the expected amount of replacements is not met

Jonas Bergler:
  * Fixes #145: Fix installation of vbox guest additions.
Tiago Ilieve:
  * Fixes #142: msdos partition type incorrect for swap partition (Linux)

2015-04-23
----------
Tiago Ilieve:
  * Fixes #212: Sparse file is created on the current directory

2014-11-23
----------
Noah Fontes:
  * Add support for enhanced networking on EC2 images

2014-07-12
----------
Tiago Ilieve:
  * Fixes #96: AddBackports is now a common task

2014-07-09
----------
Anders Ingemann:
  * Allow passing data into the manifest
  * Refactor logging setup to be more modular
  * Convert every JSON file to YAML
  * Convert "provider" into provider specific section

2014-07-02
----------
Vladimir Vitkov:
  * Improve grub options to work better with virtual machines

2014-06-30
----------
Tomasz Rybak:
  * Return information about created image

2014-06-22
----------
Victor Marmol:
  * Enable the memory cgroup for the Docker plugin

2014-06-19
----------
Tiago Ilieve:
  * Fixes #94: allow stable/oldstable as release name on manifest

Vladimir Vitkov:
  * Improve ami listing performance

2014-06-07
----------
Tiago Ilieve:
  * Download `gsutil` tarball to workspace instead of working directory
  * Fixes #97: remove raw disk image created by GCE after build

2014-06-06
----------
Ilya Margolin:
  * pip_install plugin

2014-05-23
----------
Tiago Ilieve:
  * Fixes #95: check if the specified APT proxy server can be reached

2014-05-04
----------
Dhananjay Balan:
  * Salt minion installation & configuration plugin
  * Expose debootstrap --include-packages and --exclude-packages options to manifest

2014-05-03
----------
Anders Ingemann:
  * Require hostname setting for vagrant plugin
  * Fixes #14: S3 images can now be bootstrapped outside EC2.
  * Added enable_agent option to puppet plugin

2014-05-02
----------
Tomasz Rybak:
  * Added Google Compute Engine Provider

bootstrap-vz-0.9.11+20180121git/CONTRIBUTING.rst

Contributing
============

Sending pull requests
---------------------

Do you want to contribute to the bootstrap-vz project?
Nice! Here is the basic workflow:

* Read the `development guidelines <#development-guidelines>`__
* Fork this repository.
* Make any changes you want/need.
* Check the coding style of your changes using `tox `__ by running
  ``tox -e flake8`` and fix any warnings that may appear.
  This check will be repeated by `Travis CI `__ once you send a pull request,
  so it's better if you check this beforehand.
* If the change is significant (e.g. a new plugin, manifest setting or security fix)
  add your name and contribution to the `changelog `__.
* Commit your changes.
* Squash the commits if needed. For instance, it is fine if you have multiple commits
  describing atomic units of work, but there's no reason to have many little commits
  just because of corrected typos.
* Push to your fork, preferably on a topic branch.
* Send a pull request to the ``master`` branch.

Please try to be very descriptive about your changes when you write a pull request,
stating what it does, why it is needed, which use cases this change covers, etc.
You may be asked to rebase your work on the current branch state, so it can be
merged cleanly. If you push a new commit to your pull request you will have to add
a new comment to the PR, provided that you want us notified. Github will otherwise
not send a notification.

Be aware that your modifications need to be properly documented. Please take a look
at the `documentation section <#documentation>`__ to see how to do that.

Happy hacking! :-)

Development guidelines
----------------------

The following guidelines should serve as general advice when developing providers
or plugins for bootstrap-vz. Keep in mind that these guidelines are not rules,
they are advice on how to better add value to the bootstrap-vz codebase.

The manifest should always fully describe the resulting image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The outcome of a bootstrapping process should never depend on settings
specified elsewhere.
This allows others to easily reproduce any setup other people are running and
makes it possible to share manifests. `The official debian EC2 images`__ for
example can be reproduced using the manifests available in the manifest
directory of bootstrap-vz.

__ https://aws.amazon.com/marketplace/seller-profile?id=890be55d-32d8-4bc8-9042-2b4fd83064d5

The bootstrapper should always be able to run fully unattended
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For end users, this guideline minimizes the risk of errors. Any required input
would also be in direct conflict with the previous guideline that the manifest
should always fully describe the resulting image. Additionally, developers may
have to run the bootstrap process multiple times; any prompts in the middle of
that process may significantly slow down the development speed.

The bootstrapper should only need as much setup as the manifest requires
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Having to shuffle specific paths on the host into place (e.g. ``/target`` has
to be created manually) to get the bootstrapper running is going to increase
the rate of errors made by users. Aim for minimal setup. Exceptions are of
course things such as the path to the VirtualBox Guest Additions ISO or tools
like ``parted`` that need to be installed on the host.

Roll complexity into which tasks are added to the tasklist
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a ``run()`` function checks whether it should do any work or simply be
skipped, consider doing that check in ``resolve_tasks()`` instead and avoid
adding that task altogether. This allows people looking at the tasklist in the
logfile to determine what work has been performed. If a task says it will
modify a file but then bails, a developer may get confused when looking at
that file after bootstrapping.
He could conclude that the file has either been overwritten or that the
search & replace does not work correctly.

Control flow should be directed from the task graph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Avoid creating complicated ``run()`` functions. If necessary, split up a
function into two semantically separate tasks. This allows other tasks to
interleave with the control-flow and add extended functionality (e.g. because
volume creation and mounting are two separate tasks, `the prebootstrapped
plugin `__ can replace the volume creation task with a task of its own that
creates a volume from a snapshot instead, but still reuse the mount task).

Task classes should be treated as decorated run() functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tasks should not have any state, that's what the BootstrapInformation object
is for.

Only add stuff to the BootstrapInformation object when really necessary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is mainly to avoid clutter.

Use a json-schema to check for allowed settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The json-schema may be verbose but it keeps the bulk of check work outside the
python code, which is a big plus when it comes to readability. This only
applies as long as the checks are simple. You can of course fall back to doing
the check in python when that solution is considerably less complex.

When invoking external programs, use long options whenever possible
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This makes the commands a lot easier to understand, since the option names
usually hint at what they do.

When invoking external programs, don't use full paths, rely on ``$PATH``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This increases robustness when executable locations change. Example: Use
``log_call(['wget', ...])`` instead of ``log_call(['/usr/bin/wget', ...])``.
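The ``resolve_tasks()`` advice above can be sketched in a few lines of Python. Note that the task and function names here are hypothetical illustrations, not the bootstrap-vz API:

.. code-block:: python

    # Hypothetical sketch: decide *whether* a task runs when the taskset is
    # built, instead of checking inside run(). The logged tasklist then
    # reflects the work that was actually performed.

    class InstallGuestAdditions(object):
        @classmethod
        def run(cls, info):
            info.setdefault('log', []).append('guest additions installed')

    def resolve_tasks(taskset, manifest):
        # The conditional lives here, not inside InstallGuestAdditions.run()
        if manifest.get('guest_additions'):
            taskset.add(InstallGuestAdditions)

    taskset = set()
    resolve_tasks(taskset, {'guest_additions': '/path/to/iso'})
    assert InstallGuestAdditions in taskset

    taskset = set()
    resolve_tasks(taskset, {})
    assert InstallGuestAdditions not in taskset

With this shape, a task that appears in the tasklist always did its work, and a skipped task simply never shows up.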
Coding style
------------

bootstrap-vz is coded to comply closely with the PEP8 style guidelines. There
are however a few exceptions:

* Max line length is 110 chars, not 80.
* Multiple assignments may be aligned with spaces so that the = match vertically.
* Ignore ``E221 & E241``: Alignment of assignments
* Ignore ``E501``: The max line length is not 80 characters

The codebase can be checked for any violations quite easily, since those rules
are already specified in the `tox `__ configuration file.

::

    tox -e flake8

Documentation
-------------

When developing a provider or plugin, make sure to update/create the
README.rst located in the provider/plugin folder. Any links to other rst files
should be relative and work when viewed on github. For information on `how to
build the documentation `_ and how the various parts fit together, refer to
`the documentation about the documentation `__ :-)

bootstrap-vz-0.9.11+20180121git/LICENSE

Copyright 2013-2014 Anders Ingemann

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
bootstrap-vz-0.9.11+20180121git/MANIFEST.in

include LICENSE
include manifests/*
recursive-include bootstrapvz *.json
recursive-include bootstrapvz *.yml
recursive-include bootstrapvz/common/assets *
recursive-include bootstrapvz/providers/*/assets *
recursive-include bootstrapvz/plugins/*/assets *

bootstrap-vz-0.9.11+20180121git/README.rst

bootstrap-vz is looking for a new home
======================================

bootstrap-vz is looking for a new home. The reason is that I am simply not
using bootstrap-vz myself very much lately, so any bugfixes or improvements
are currently introduced via PRs only. If you are willing to take over the
project and have a track record with the Debian community or with the
development of software like bootstrap-vz, kindly let me know by opening an
issue that includes some references.

There is a considerable amount of people using the software daily, and I would
love seeing my work being continued. I will be happy to answer any questions
regarding the code future maintainers might have, now and also in the coming
years, so nothing will just be dumped in your lap :-)

bootstrap-vz
============

bootstrap-vz is a bootstrapping framework for Debian that creates
ready-to-boot images able to run on a number of cloud providers and virtual
machines.
bootstrap-vz runs without any user intervention and generates images for the
following virtualization platforms:

- `Amazon AWS EC2 `__ (supports both HVM and PVM; S3 and EBS backed;
  `used for official Debian images `__; `Quick start <#amazon-ec2-ebs-backed-ami>`__)
- `Docker `__ (`Quick start <#docker>`__)
- `Google Compute Engine `__ (`used by Google for official Debian images `__)
- `KVM `__ (Kernel-based Virtual Machine)
- `Microsoft Azure `__
- `Oracle Compute Cloud Service `__ (`used for official Debian images `__)
- `Oracle VirtualBox `__ (`with Vagrant support <#virtualbox-vagrant>`__)

Its aim is to provide a reproducible bootstrapping process using `manifests `__
as well as supporting a high degree of customizability through plugins.

Documentation
-------------

The documentation for bootstrap-vz is available at
`bootstrap-vz.readthedocs.org `__. There, you can discover `what the
dependencies <#dependencies>`__ for a specific cloud provider are, `see a list
of available plugins `__ and learn `how you create a manifest `__.

Note to developers: The shared documentation links on github and readthedocs
are transformed in `a rather peculiar and nifty way`__.

__ https://github.com/andsens/bootstrap-vz/blob/master/docs/transform_github_links.py

Installation
------------

bootstrap-vz has a master branch into which stable feature branches are
merged. After checking out the branch of your choice you can install the
python dependencies by running ``python setup.py install``. However, depending
on what kind of image you'd like to bootstrap, there are other debian package
dependencies as well; at the very least you will need ``debootstrap``.
`The documentation `__ explains this in more detail.

Note that bootstrap-vz will tell you which tools it requires when they aren't
present (the different packages are mentioned in the error message), so you
can simply run bootstrap-vz once to get a list of the packages, install them,
and then re-run.
Quick start
-----------

Here are a few quickstart tutorials for the most common images. If you plan on
partitioning your volume, you will need the ``parted`` package and ``kpartx``:

.. code-block:: sh

    root@host:~# apt-get install parted kpartx

Note that you can always abort a bootstrapping process by pressing ``Ctrl+C``.
bootstrap-vz will then initiate a cleanup/rollback process, where volumes are
detached/deleted and temporary files removed; pressing ``Ctrl+C`` a second
time shortcuts that procedure, halts the cleanup and quits the process.

Docker
~~~~~~

.. code-block:: sh

    user@host:~$ sudo -i  # become root
    root@host:~# git clone https://github.com/andsens/bootstrap-vz.git  # Clone the repo
    root@host:~# apt-get install debootstrap python-pip docker.io  # Install dependencies from aptitude
    root@host:~# pip install termcolor jsonschema fysom docopt pyyaml pyrfc3339  # Install python dependencies
    root@host:~# bootstrap-vz/bootstrap-vz bootstrap-vz/manifests/examples/docker/jessie-minimized.yml

The resulting image should be no larger than 82 MB (81.95 MB to be exact). The
manifest ``jessie-minimized.yml`` uses the `minimize_size `__ plugin to reduce
the image size considerably. Rather than installing docker from the debian
main repo it is recommended to install `the latest docker version `__.

VirtualBox Vagrant
~~~~~~~~~~~~~~~~~~

.. code-block:: sh

    user@host:~$ sudo -i  # become root
    root@host:~# git clone https://github.com/andsens/bootstrap-vz.git  # Clone the repo
    root@host:~# apt-get install qemu-utils debootstrap python-pip  # Install dependencies from aptitude
    root@host:~# pip install termcolor jsonschema fysom docopt pyyaml  # Install python dependencies
    root@host:~# modprobe nbd max_part=16
    root@host:~# bootstrap-vz/bootstrap-vz bootstrap-vz/manifests/examples/virtualbox/jessie-vagrant.yml

(The ``modprobe nbd max_part=16`` part enables the network block device driver
to support up to 16 partitions on a device)

If you want to use the `minimize_size `__ plugin, you will have to install the
``zerofree`` package and `VMWare Workstation`__ as well.

__ https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/10_0

Amazon EC2 EBS backed AMI
~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

    user@host:~$ sudo -i  # become root
    root@host:~# git clone https://github.com/andsens/bootstrap-vz.git  # Clone the repo
    root@host:~# apt-get install debootstrap python-pip  # Install dependencies from aptitude
    root@host:~# pip install termcolor jsonschema fysom docopt pyyaml boto3  # Install python dependencies
    root@host:~# bootstrap-vz/bootstrap-vz bootstrap-vz/manifests/official/ec2/ebs-jessie-amd64-hvm.yml

To bootstrap S3 backed AMIs, bootstrap-vz will also need the ``euca2ools``
package. However, version 3.2.0 is required, meaning you must install it
directly from the eucalyptus repository like this:

.. code-block:: sh

    apt-get install --no-install-recommends python-dev libxml2-dev libxslt-dev gcc zlib1g-dev
    pip install git+git://github.com/eucalyptus/euca2ools.git@v3.2.0

Cleanup
-------

bootstrap-vz tries very hard to clean up after itself, both if a run was
successful but also if it failed. This ensures that you are not left with
volumes still attached to the host which are useless.
If an error occurred you can simply correct the problem that caused it and
rerun everything; there will be no leftovers from the previous run (as always
there are of course rare/unlikely exceptions to that rule). The error messages
should always give you a strong hint at what is wrong; if that is not the case
please consider `opening an issue`__ and attach both the error message and
your manifest (preferably as a gist or similar).

__ https://github.com/andsens/bootstrap-vz/issues

Dependencies
------------

bootstrap-vz has a number of dependencies depending on the target platform and
`the selected plugins `__. At a bare minimum the following python libraries
are needed:

* `termcolor `__
* `fysom `__
* `jsonschema `__
* `docopt `__
* `pyyaml `__

To bootstrap Debian itself `debootstrap`__ is needed as well.

__ https://packages.debian.org/wheezy/debootstrap

Any other requirements are dependent upon the manifest configuration and are
detailed in the corresponding sections of the documentation. Before the
bootstrapping process begins however, bootstrap-vz will warn you if a
requirement has not been met.

Developers
----------

The API documentation, development guidelines and an explanation of
bootstrap-vz internals can be found at `bootstrap-vz.readthedocs.org`__.

__ http://bootstrap-vz.readthedocs.org/en/master/developers

Contributing
------------

Contribution guidelines are described in the documentation under
`Contributing `__. There's also a topic regarding `the coding style `__.

Before bootstrap-vz
-------------------

bootstrap-vz was coded from scratch in python once the bash script
architecture that was used in the `build-debian-cloud `__ bootstrapper reached
its limits. The project has since grown well beyond its original goal, but has
kept the focus on Debian images.
bootstrap-vz-0.9.11+20180121git/bootstrap-vz

#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.base.main import main
    main()

bootstrap-vz-0.9.11+20180121git/bootstrap-vz-remote

#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.main import main
    main()

bootstrap-vz-0.9.11+20180121git/bootstrap-vz-server

#!/usr/bin/env python
if __name__ == '__main__':
    from bootstrapvz.remote.server import main
    main()

bootstrap-vz-0.9.11+20180121git/bootstrapvz/README.rst

How bootstrap-vz works
----------------------

Tasks
~~~~~

At its core bootstrap-vz is based on tasks that perform units of work. By
keeping those tasks small and with a solid structure built around them, a high
degree of flexibility can be achieved. To ensure that tasks are executed in
the right order, each task is placed in a dependency graph where directed
edges dictate precedence. Each task is a simple class that defines its
predecessor tasks and successor tasks via attributes. Here is an example:

.. code-block:: python

    class MapPartitions(Task):
        description = 'Mapping volume partitions'
        phase = phases.volume_preparation
        predecessors = [PartitionVolume]
        successors = [filesystem.Format]

        @classmethod
        def run(cls, info):
            info.volume.partition_map.map(info.volume)

In this case the attributes define that the task at hand should run after the
``PartitionVolume`` task — i.e. after the volume has been partitioned
(``predecessors``) — but before formatting each partition (``successors``).
It is also placed in the ``volume_preparation`` phase.
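The ordering implied by these attributes can be sketched with a toy topological sort. This is a simplified, hypothetical illustration, not the real bootstrap-vz algorithm (which lives in the ``base/`` package); it uses the ``graphlib`` module from Python 3.9+:

.. code-block:: python

    from graphlib import TopologicalSorter  # Python 3.9+ stdlib

    # Hypothetical phases and tasks mirroring the example above
    phases = ['volume_creation', 'volume_preparation', 'system_modification']

    class PartitionVolume:
        phase = 'volume_preparation'
        predecessors, successors = [], []

    class Format:
        phase = 'volume_preparation'
        predecessors, successors = [], []

    class MapPartitions:
        phase = 'volume_preparation'
        predecessors = [PartitionVolume]
        successors = [Format]

    class InstallPackages:
        phase = 'system_modification'
        predecessors, successors = [], []

    def order_tasks(tasks):
        # Map every task to the set of tasks that must run before it
        graph = {task: set(task.predecessors) for task in tasks}
        for task in tasks:
            # A 'successors' declaration is just a reversed predecessor edge
            for succ in task.successors:
                graph[succ].add(task)
            # Every task in an earlier phase precedes tasks in later phases
            for other in tasks:
                if phases.index(other.phase) < phases.index(task.phase):
                    graph[task].add(other)
        return list(TopologicalSorter(graph).static_order())

    order = order_tasks([Format, InstallPackages, MapPartitions, PartitionVolume])
    assert order.index(PartitionVolume) < order.index(MapPartitions)
    assert order.index(MapPartitions) < order.index(Format)
    assert order.index(Format) < order.index(InstallPackages)

Note how ``InstallPackages`` ends up last without declaring any explicit edges: its later phase alone is enough, which is exactly why phases spare tasks from listing dozens of predecessors.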
Phases are ordered and group tasks together. All tasks in a phase are run
before proceeding with the tasks in the next phase. They are a way of avoiding
the need to list 50 different tasks as predecessors and successors.

The final task list that will be executed is computed by enumerating all tasks
in the package, placing them in the graph and `sorting them topologically `_.
Subsequently the list returned is filtered to contain only the tasks the
provider and the plugins added to the taskset.

System abstractions
~~~~~~~~~~~~~~~~~~~

There are several abstractions in bootstrap-vz that make it possible to
generalize things like volume creation, partitioning, mounting and package
installation. As a rule these abstractions are located in the ``base/``
folder, where the manifest parsing and task ordering algorithm are placed as
well.

bootstrap-vz-0.9.11+20180121git/bootstrapvz/__init__.py

__version__ = '0.9.11'

bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/__init__.py

from phase import Phase
from task import Task
from main import main

__all__ = ['Phase', 'Task', 'main']


def validate_manifest(data, validator, error):
    """Validates the manifest using the base manifest

    :param dict data: The data of the manifest
    :param function validator: The function that validates the manifest given the data and a path
    :param function error: The function that raises an error when the validation fails
    """
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))

    from bootstrapvz.common.releases import get_release, squeeze
    release = get_release(data['system']['release'])

    if release < squeeze:
        error('Only Debian squeeze and later is supported', ['system', 'release'])

    # Check the bootloader/partitioning configuration.
    # Doing this via the schema is a pain and does not output a useful error message.
    if data['system']['bootloader'] == 'grub':
        if data['volume']['partitions']['type'] == 'none':
            error('Grub cannot boot from unpartitioned disks', ['system', 'bootloader'])
        if release == squeeze:
            error('Grub installation on squeeze is not supported', ['system', 'bootloader'])

    # Check the provided apt.conf(5) options
    if 'packages' in data:
        for name, val in data['packages'].get('apt.conf.d', {}).iteritems():
            from bootstrapvz.common.tools import log_call
            status, _, _ = log_call(['apt-config', '-c=/dev/stdin', 'dump'], stdin=val + '\n')
            if status != 0:
                error('apt.conf(5) syntax error', ['packages', 'apt.conf.d', name])

bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/bootstrapinfo.py

class BootstrapInformation(object):
    """The BootstrapInformation class holds all information about the bootstrapping process.

    The nature of the attributes of this class are rather diverse.
    Tasks may set their own attributes on this class for later retrieval by another task.
    Information that becomes invalid (e.g. a path to a file that has been deleted)
    must be removed.
    """

    def __init__(self, manifest=None, debug=False):
        """Instantiates a new bootstrap info object.

        :param Manifest manifest: The manifest
        :param bool debug: Whether debugging is turned on
        """
        # Set the manifest attribute.
        self.manifest = manifest
        self.debug = debug

        # Create a run_id. This id may be used to uniquely identify
        # the current bootstrapping process
        import random
        self.run_id = '{id:08x}'.format(id=random.randrange(16 ** 8))

        # Define the path to our workspace
        import os.path
        self.workspace = os.path.join(manifest.bootstrapper['workspace'], self.run_id)

        # Load all the volume information
        from fs import load_volume
        self.volume = load_volume(self.manifest.volume, manifest.system['bootloader'])

        # The default apt mirror
        self.apt_mirror = self.manifest.packages.get('mirror', 'http://deb.debian.org/debian/')
        # The default apt security mirror
        self.apt_security = self.manifest.packages.get('security', 'http://security.debian.org/')

        # Create the manifest_vars dictionary
        self.manifest_vars = self.__create_manifest_vars(self.manifest,
                                                         {'apt_security': self.apt_security,
                                                          'apt_mirror': self.apt_mirror})

        # Keep a list of apt sources,
        # so that tasks may add to that list without having to fiddle with apt source list files.
        from pkg.sourceslist import SourceLists
        self.source_lists = SourceLists(self.manifest_vars)
        # Keep a list of apt preferences
        from pkg.preferenceslist import PreferenceLists
        self.preference_lists = PreferenceLists(self.manifest_vars)
        # Keep a list of packages that should be installed,
        # tasks can add and remove things from this list
        from pkg.packagelist import PackageList
        self.packages = PackageList(self.manifest_vars, self.source_lists)

        # These sets should rarely be used and specify which packages
        # the debootstrap invocation should be called with.
        self.include_packages = set()
        self.exclude_packages = set()

        # Dictionary to specify which commands are required on the host.
        # The keys are commands, while the values are either package names or urls
        # that hint at how a command may be made available.
        self.host_dependencies = {}

        # Path to optional bootstrapping script for modifying the behaviour of debootstrap
        # (will be used instead of e.g. /usr/share/debootstrap/scripts/jessie)
        self.bootstrap_script = None

        # Lists of startup scripts that should be installed and disabled
        self.initd = {'install': {}, 'disable': []}

        # Add a dictionary that can be accessed via info._pluginname
        # for the provider and every plugin.
        # Information specific to the module can be added to that 'namespace',
        # this avoids clutter.
        providername = manifest.modules['provider'].__name__.split('.')[-1]
        setattr(self, '_' + providername, {})
        for plugin in manifest.modules['plugins']:
            pluginname = plugin.__name__.split('.')[-1]
            setattr(self, '_' + pluginname, {})

    def __create_manifest_vars(self, manifest, additional_vars={}):
        """Creates the manifest variables dictionary,
        based on the manifest contents and additional data.

        :param Manifest manifest: The Manifest
        :param dict additional_vars: Additional values
                                     (they will take precedence and overwrite anything else)
        :return: The manifest_vars dictionary
        :rtype: dict
        """
        def set_manifest_vars(obj, data):
            """Runs through the manifest and creates DictClasses for every key

            :param dict obj: dictionary to set the values on
            :param dict data: dictionary of values to set on the obj
            """
            for key, value in data.iteritems():
                if isinstance(value, dict):
                    obj[key] = DictClass()
                    set_manifest_vars(obj[key], value)
                    continue
                # Lists are not supported
                if not isinstance(value, list):
                    obj[key] = value

        # manifest_vars is a dictionary of all the manifest values,
        # with it users can cross-reference values in the manifest,
        # so that they do not need to be written twice
        manifest_vars = {}
        set_manifest_vars(manifest_vars, manifest.data)

        # Populate the manifest_vars with datetime information
        # and map the datetime variables directly to the dictionary
        from datetime import datetime
        now = datetime.now()
        time_vars = ['%a', '%A', '%b', '%B', '%c', '%d', '%f', '%H',
                     '%I', '%j', '%m', '%M', '%p', '%S', '%U', '%w',
                     '%W', '%x', '%X', '%y', '%Y', '%z', '%Z']
        for key in time_vars:
            manifest_vars[key] = now.strftime(key)
        # Add any additional
manifest variables # They are added last so that they may override previous variables set_manifest_vars(manifest_vars, additional_vars) return manifest_vars def __getstate__(self): from bootstrapvz.remote import supported_classes def can_serialize(obj): if hasattr(obj, '__class__') and hasattr(obj, '__module__'): class_name = obj.__module__ + '.' + obj.__class__.__name__ return class_name in supported_classes or isinstance(obj, (BaseException, Exception)) return True def filter_state(state): if isinstance(state, dict): return {key: filter_state(val) for key, val in state.items() if can_serialize(val)} if isinstance(state, (set, tuple, list, frozenset)): return type(state)(filter_state(val) for val in state if can_serialize(val)) return state state = filter_state(self.__dict__) state['__class__'] = self.__module__ + '.' + self.__class__.__name__ return state def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] class DictClass(dict): """Tiny extension of dict to allow setting and getting keys via attributes """ def __getattr__(self, name): return self[name] def __setattr__(self, name, value): self[name] = value def __delattr__(self, name): del self[name] def __getstate__(self): return self.__dict__ def __setstate__(self, state): for key in state: self[key] = state[key] bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/000077500000000000000000000000001323112141500215625ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/__init__.py000066400000000000000000000043571323112141500237040ustar00rootroot00000000000000 def load_volume(data, bootloader): """Instantiates a volume that corresponds to the data in the manifest :param dict data: The 'volume' section from the manifest :param str bootloader: Name of the bootloader the system will boot with :return: The volume that represents all information pertaining to the volume we bootstrap on. 
:rtype: Volume """ # Map valid partition maps in the manifest and their corresponding classes from partitionmaps.gpt import GPTPartitionMap from partitionmaps.msdos import MSDOSPartitionMap from partitionmaps.none import NoPartitions partition_map = {'none': NoPartitions, 'gpt': GPTPartitionMap, 'msdos': MSDOSPartitionMap, }.get(data['partitions']['type']) # Map valid volume backings in the manifest and their corresponding classes from bootstrapvz.common.fs.loopbackvolume import LoopbackVolume from bootstrapvz.providers.ec2.ebsvolume import EBSVolume from bootstrapvz.common.fs.virtualdiskimage import VirtualDiskImage from bootstrapvz.common.fs.virtualharddisk import VirtualHardDisk from bootstrapvz.common.fs.virtualmachinedisk import VirtualMachineDisk from bootstrapvz.common.fs.folder import Folder from bootstrapvz.common.fs.logicalvolume import LogicalVolume from bootstrapvz.common.fs.qcow2volume import Qcow2Volume volume_backing = {'raw': LoopbackVolume, 's3': LoopbackVolume, 'vdi': VirtualDiskImage, 'vhd': VirtualHardDisk, 'vmdk': VirtualMachineDisk, 'ebs': EBSVolume, 'folder': Folder, 'lvm': LogicalVolume, 'qcow2': Qcow2Volume }.get(data['backing']) # Instantiate the partition map from bootstrapvz.common.bytes import Bytes # Only operate with a physical sector size of 512 bytes for now, # not sure if we can change that for some of the virtual disks sector_size = Bytes('512B') partition_map = partition_map(data['partitions'], sector_size, bootloader) # Create the volume with the partition map as an argument return volume_backing(partition_map) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/exceptions.py000066400000000000000000000004011323112141500243100ustar00rootroot00000000000000 class VolumeError(Exception): """Raised when an error occurs while interacting with the volume """ pass class PartitionError(Exception): """Raised when an error occurs while interacting with the partitions on the volume """ pass 
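The `load_volume` factory above resolves both the partition map and the volume backing through plain dict lookups keyed on manifest strings, then chains the two objects together. A minimal self-contained sketch of that dispatch pattern follows; the stand-in classes here are hypothetical placeholders, not the real bootstrap-vz implementations (which require root privileges and external tools such as parted and kpartx):

```python
# Sketch of the dict-dispatch used by load_volume: strings from the manifest
# select a partition-map class and a volume-backing class.
# All classes below are simplified stand-ins for illustration only.

class NoPartitions(object):
    def __init__(self, data, sector_size, bootloader):
        self.type = 'none'


class GPTPartitionMap(object):
    def __init__(self, data, sector_size, bootloader):
        self.type = 'gpt'


class LoopbackVolume(object):
    def __init__(self, partition_map):
        self.partition_map = partition_map


def load_volume(data, bootloader, sector_size=512):
    # .get() returns None for unknown keys, so guard with an explicit error
    partition_map_class = {'none': NoPartitions,
                           'gpt': GPTPartitionMap,
                           }.get(data['partitions']['type'])
    volume_backing_class = {'raw': LoopbackVolume,
                            's3': LoopbackVolume,
                            }.get(data['backing'])
    if partition_map_class is None or volume_backing_class is None:
        raise ValueError('Unsupported partition map or volume backing')
    # The volume wraps the partition map, mirroring the real factory
    partition_map = partition_map_class(data['partitions'], sector_size, bootloader)
    return volume_backing_class(partition_map)


volume = load_volume({'backing': 'raw', 'partitions': {'type': 'gpt'}}, 'grub')
print(volume.partition_map.type)  # gpt
```

The indirection keeps the factory flat: adding a new backing or partition scheme means adding one dictionary entry rather than another `if`/`elif` branch.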
bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/000077500000000000000000000000001323112141500244545ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/__init__.py000066400000000000000000000000001323112141500265530ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/abstract.py000066400000000000000000000110451323112141500266320ustar00rootroot00000000000000from abc import ABCMeta from abc import abstractmethod from bootstrapvz.common.tools import log_check_call from bootstrapvz.common.fsm_proxy import FSMProxy from ..exceptions import PartitionError class AbstractPartitionMap(FSMProxy): """Abstract representation of a partition map This class is a finite state machine and represents the state of the real partition map """ __metaclass__ = ABCMeta # States the partition map can be in events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'}, {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'}, {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'}, ] def __init__(self, bootloader): """ :param str bootloader: Name of the bootloader we will use for bootstrapping """ # Create the configuration for the state machine cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}} super(AbstractPartitionMap, self).__init__(cfg) def is_blocking(self): """Returns whether the partition map is blocking volume detach operations :rtype: bool """ return self.fsm.current == 'mapped' def get_total_size(self): """Returns the total size the partitions occupy :return: The size of all partitions :rtype: Sectors """ # We just need the endpoint of the last partition return self.partitions[-1].get_end() def create(self, volume): """Creates the partition map :param Volume volume: The volume to create the partition map on """ self.fsm.create(volume=volume) @abstractmethod def _before_create(self, event): pass def map(self, volume): """Maps the partition map to device nodes
:param Volume volume: The volume the partition map resides on """ self.fsm.map(volume=volume) def _before_map(self, event): """ :raises PartitionError: In case a partition could not be mapped. """ volume = event.volume try: # Ask kpartx how the partitions will be mapped before actually attaching them. mappings = log_check_call(['kpartx', '-l', volume.device_path]) import re regexp = re.compile('^(?P<name>.+[^\d](?P<p_idx>\d+)) : ' '(?P<start_blk>\d) (?P<num_blks>\d+) ' '{device_path} (?P<blk_offset>\d+)$' .format(device_path=volume.device_path)) log_check_call(['kpartx', '-as', volume.device_path]) import os.path # Run through the kpartx output and map the paths to the partitions for mapping in mappings: match = regexp.match(mapping) if match is None: raise PartitionError('Unable to parse kpartx output: ' + mapping) partition_path = os.path.join('/dev/mapper', match.group('name')) p_idx = int(match.group('p_idx')) - 1 self.partitions[p_idx].map(partition_path) # Check if any partition was not mapped for idx, partition in enumerate(self.partitions): if partition.fsm.current not in ['mapped', 'formatted']: raise PartitionError('kpartx did not map partition #' + str(partition.get_index())) except PartitionError: # Revert any mapping and reraise the error for partition in self.partitions: if partition.fsm.can('unmap'): partition.unmap() log_check_call(['kpartx', '-ds', volume.device_path]) raise def unmap(self, volume): """Unmaps the partition :param Volume volume: The volume to unmap the partition map from """ self.fsm.unmap(volume=volume) def _before_unmap(self, event): """ :raises PartitionError: If a partition cannot be unmapped """ volume = event.volume # Run through all partitions before unmapping and make sure they can all be unmapped for partition in self.partitions: if partition.fsm.cannot('unmap'): msg = 'The partition {partition} prevents the unmap procedure'.format(partition=partition) raise PartitionError(msg) # Actually unmap the partitions log_check_call(['kpartx', '-ds', volume.device_path]) # 
Call unmap on all partitions for partition in self.partitions: partition.unmap() bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/gpt.py000066400000000000000000000120471323112141500256240ustar00rootroot00000000000000from abstract import AbstractPartitionMap from ..partitions.gpt import GPTPartition from ..partitions.gpt_swap import GPTSwapPartition from bootstrapvz.common.tools import log_check_call class GPTPartitionMap(AbstractPartitionMap): """Represents a GPT partition map """ def __init__(self, data, sector_size, bootloader): """ :param dict data: volume.partitions part of the manifest :param int sector_size: Sectorsize of the volume :param str bootloader: Name of the bootloader we will use for bootstrapping """ from bootstrapvz.common.sectors import Sectors # List of partitions self.partitions = [] # Returns the last partition unless there is none def last_partition(): return self.partitions[-1] if len(self.partitions) > 0 else None if bootloader == 'grub': # If we are using the grub bootloader we need to create an unformatted partition # at the beginning of the map. Its size is 1007kb, which seems to be chosen so that # primary gpt + grub = 1024KiB # The 1 MiB will be subtracted later on, once we know what the subsequent partition is from ..partitions.unformatted import UnformattedPartition self.grub_boot = UnformattedPartition(Sectors('1MiB', sector_size), last_partition()) self.partitions.append(self.grub_boot) # Offset all partitions by 1 sector. # parted in jessie has changed and no longer allows # partitions to be right next to each other. 
partition_gap = Sectors(1, sector_size) # The boot and swap partitions are optional if 'boot' in data: self.boot = GPTPartition(Sectors(data['boot']['size'], sector_size), data['boot']['filesystem'], data['boot'].get('format_command', None), data['boot'].get('mountopts', None), 'boot', last_partition()) if self.boot.previous is not None: # No need to pad if this is the first partition self.boot.pad_start += partition_gap self.boot.size -= partition_gap self.partitions.append(self.boot) if 'swap' in data: self.swap = GPTSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition()) if self.swap.previous is not None: self.swap.pad_start += partition_gap self.swap.size -= partition_gap self.partitions.append(self.swap) self.root = GPTPartition(Sectors(data['root']['size'], sector_size), data['root']['filesystem'], data['root'].get('format_command', None), data['root'].get('mountopts', None), 'root', last_partition()) if self.root.previous is not None: self.root.pad_start += partition_gap self.root.size -= partition_gap self.partitions.append(self.root) # Create all additional partitions for partition in data: if partition not in ["boot", "swap", "root", "type"] and not None: part_tmp = GPTPartition(Sectors(data[partition]['size'], sector_size), data[partition]['filesystem'], data[partition].get('format_command', None), data[partition].get('mountopts', None), partition, last_partition()) part_tmp.pad_start += partition_gap part_tmp.size -= partition_gap setattr(self, partition, part_tmp) self.partitions.append(part_tmp) if hasattr(self, 'grub_boot'): # Mark the grub partition as a bios_grub partition self.grub_boot.flags.append('bios_grub') # Subtract the grub partition size from the subsequent partition self.partitions[1].size -= self.grub_boot.size else: # Not using grub, mark the boot partition or root as bootable getattr(self, 'boot', self.root).flags.append('legacy_boot') # The first and last 34 sectors are reserved for the primary/secondary GPT 
primary_gpt_size = Sectors(34, sector_size) self.partitions[0].pad_start += primary_gpt_size self.partitions[0].size -= primary_gpt_size secondary_gpt_size = Sectors(34, sector_size) self.partitions[-1].pad_end += secondary_gpt_size self.partitions[-1].size -= secondary_gpt_size super(GPTPartitionMap, self).__init__(bootloader) def _before_create(self, event): """Creates the partition map """ volume = event.volume # Disk alignment still plays a role in virtualized environment, # but I honestly have no clue as to what best practice is here, so we choose 'none' log_check_call(['parted', '--script', '--align', 'none', volume.device_path, '--', 'mklabel', 'gpt']) # Create the partitions for partition in self.partitions: partition.create(volume) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/msdos.py000066400000000000000000000104621323112141500261560ustar00rootroot00000000000000from abstract import AbstractPartitionMap from ..exceptions import PartitionError from ..partitions.msdos import MSDOSPartition from ..partitions.msdos_swap import MSDOSSwapPartition from bootstrapvz.common.tools import log_check_call class MSDOSPartitionMap(AbstractPartitionMap): """Represents a MS-DOS partition map Sometimes also called MBR (but that confuses the hell out of me, so ms-dos it is) """ def __init__(self, data, sector_size, bootloader): """ :param dict data: volume.partitions part of the manifest :param int sector_size: Sectorsize of the volume :param str bootloader: Name of the bootloader we will use for bootstrapping """ from bootstrapvz.common.sectors import Sectors # List of partitions self.partitions = [] # Returns the last partition unless there is none def last_partition(): return self.partitions[-1] if len(self.partitions) > 0 else None # The boot and swap partitions are optional if 'boot' in data: self.boot = MSDOSPartition(Sectors(data['boot']['size'], sector_size), data['boot']['filesystem'], data['boot'].get('format_command', None), 
data['boot'].get('mountopts', None), 'boot', last_partition()) self.partitions.append(self.boot) # Offset all partitions by 1 sector. # parted in jessie has changed and no longer allows # partitions to be right next to each other. partition_gap = Sectors(1, sector_size) if 'swap' in data: self.swap = MSDOSSwapPartition(Sectors(data['swap']['size'], sector_size), last_partition()) if self.swap.previous is not None: # No need to pad if this is the first partition self.swap.pad_start += partition_gap self.swap.size -= partition_gap self.partitions.append(self.swap) self.root = MSDOSPartition(Sectors(data['root']['size'], sector_size), data['root']['filesystem'], data['root'].get('format_command', None), data['root'].get('mountopts', None), 'root', last_partition()) if self.root.previous is not None: self.root.pad_start += partition_gap self.root.size -= partition_gap self.partitions.append(self.root) # Raise exception while trying to create additional partitions # as its hard to calculate the actual size of the extended partition ATM # And anyhow - we should go with GPT... for partition in data: if partition not in ["boot", "swap", "root", "type"]: raise PartitionError("If you want to have additional partitions please use GPT partition scheme") # Mark boot as the boot partition, or root, if boot does not exist getattr(self, 'boot', self.root).flags.append('boot') # If we are using the grub bootloader, we will need to add a 2 MB offset # at the beginning of the partitionmap and steal it from the first partition. # The MBR offset is included in the grub offset, so if we don't use grub # we should reduce the size of the first partition and move it by only 512 bytes. 
if bootloader == 'grub': mbr_offset = Sectors('2MiB', sector_size) else: mbr_offset = Sectors('512B', sector_size) self.partitions[0].pad_start += mbr_offset self.partitions[0].size -= mbr_offset # Leave the last sector unformatted # parted in jessie thinks that a partition 10 sectors in size # goes from sector 0 to sector 9 (instead of 0 to 10) self.partitions[-1].pad_end += 1 self.partitions[-1].size -= 1 super(MSDOSPartitionMap, self).__init__(bootloader) def _before_create(self, event): volume = event.volume # Disk alignment still plays a role in virtualized environment, # but I honestly have no clue as to what best practice is here, so we choose 'none' log_check_call(['parted', '--script', '--align', 'none', volume.device_path, '--', 'mklabel', 'msdos']) # Create the partitions for partition in self.partitions: partition.create(volume) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitionmaps/none.py000066400000000000000000000031661323112141500257730ustar00rootroot00000000000000from ..partitions.single import SinglePartition class NoPartitions(object): """Represents a virtual 'NoPartitions' partitionmap. This virtual partition map exists because it is easier for tasks to simply always deal with partition maps and then let the base abstract that away. 
""" def __init__(self, data, sector_size, bootloader): """ :param dict data: volume.partitions part of the manifest :param int sector_size: Sectorsize of the volume :param str bootloader: Name of the bootloader we will use for bootstrapping """ from bootstrapvz.common.sectors import Sectors # In the NoPartitions partitions map we only have a single 'partition' self.root = SinglePartition(Sectors(data['root']['size'], sector_size), data['root']['filesystem'], data['root'].get('format_command', None), data['root'].get('mount_opts', None)) self.partitions = [self.root] def is_blocking(self): """Returns whether the partition map is blocking volume detach operations :rtype: bool """ return self.root.fsm.current == 'mounted' def get_total_size(self): """Returns the total size the partitions occupy :return: The size of all the partitions :rtype: Sectors """ return self.root.get_end() def __getstate__(self): state = self.__dict__.copy() state['__class__'] = self.__module__ + '.' + self.__class__.__name__ return state def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/000077500000000000000000000000001323112141500237565ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/__init__.py000066400000000000000000000000001323112141500260550ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/abstract.py000066400000000000000000000125541323112141500261420ustar00rootroot00000000000000from abc import ABCMeta from abc import abstractmethod from bootstrapvz.common.sectors import Sectors from bootstrapvz.common.tools import log_check_call from bootstrapvz.common.fsm_proxy import FSMProxy class AbstractPartition(FSMProxy): """Abstract representation of a partiton This class is a finite state machine and represents the state of the real partition """ __metaclass__ = ABCMeta # Our states events = [{'name': 'create', 'src': 
'nonexistent', 'dst': 'created'}, {'name': 'format', 'src': 'created', 'dst': 'formatted'}, {'name': 'mount', 'src': 'formatted', 'dst': 'mounted'}, {'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'}, ] def __init__(self, size, filesystem, format_command, mountopts): """ :param Bytes size: Size of the partition :param str filesystem: Filesystem the partition should be formatted with :param list format_command: Optional format command, valid variables are fs, device_path and size """ self.size = size self.filesystem = filesystem self.format_command = format_command # List of mount options self.mountopts = mountopts # Initialize the start & end padding to 0 sectors, may be changed later self.pad_start = Sectors(0, size.sector_size) self.pad_end = Sectors(0, size.sector_size) # Path to the partition self.device_path = None # Dictionary with mount points as keys and Mount objects as values self.mounts = {} # Create the configuration for our state machine cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': {}} super(AbstractPartition, self).__init__(cfg) def get_uuid(self): """Gets the UUID of the partition :return: The UUID of the partition :rtype: str """ [uuid] = log_check_call(['blkid', '-s', 'UUID', '-o', 'value', self.device_path]) return uuid @abstractmethod def get_start(self): pass def get_end(self): """Gets the end of the partition :return: The end of the partition :rtype: Sectors """ return self.get_start() + self.pad_start + self.size + self.pad_end def _before_format(self, e): """Formats the partition """ # If there is no explicit format_command define we simply call mkfs.fstype if self.format_command is None: format_command = ['mkfs.{fs}', '{device_path}'] else: format_command = self.format_command variables = {'fs': self.filesystem, 'device_path': self.device_path, 'size': self.size, } command = map(lambda part: part.format(**variables), format_command) # Format the partition log_check_call(command) def _before_mount(self, e): 
"""Mount the partition """ if self.mountopts is None: mount_command = ['mount', '--types', self.filesystem, self.device_path, e.destination] else: mount_command = ['mount', '--options', ",".join(self.mountopts), '--types', self.filesystem, self.device_path, e.destination] # Mount the partition log_check_call(mount_command) self.mount_dir = e.destination def _after_mount(self, e): """Mount any mounts associated with this partition """ # Make sure we mount in ascending order of mountpoint path length # This ensures that we don't mount /dev/pts before we mount /dev for destination in sorted(self.mounts.iterkeys(), key=len): self.mounts[destination].mount(self.mount_dir) def _before_unmount(self, e): """Unmount any mounts associated with this partition """ # Unmount the mounts in descending order of mounpoint path length # You cannot unmount /dev before you have unmounted /dev/pts for destination in sorted(self.mounts.iterkeys(), key=len, reverse=True): self.mounts[destination].unmount() log_check_call(['umount', self.mount_dir]) del self.mount_dir def add_mount(self, source, destination, opts=[]): """Associate a mount with this partition Automatically mounts it :param str,AbstractPartition source: The source of the mount :param str destination: The path to the mountpoint :param list opts: Any options that should be passed to the mount command """ # Create a new mount object, mount it if the partition is mounted and put it in the mounts dict from mount import Mount mount = Mount(source, destination, opts) if self.fsm.current == 'mounted': mount.mount(self.mount_dir) self.mounts[destination] = mount def remove_mount(self, destination): """Remove a mount from this partition Automatically unmounts it :param str destination: The mountpoint path of the mount that should be removed """ # Unmount the mount if the partition is mounted and delete it from the mounts dict # If the mount is already unmounted and the source is a partition, this will raise an exception if 
self.fsm.current == 'mounted': self.mounts[destination].unmount() del self.mounts[destination] bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/base.py000066400000000000000000000134451323112141500252510ustar00rootroot00000000000000import os from abstract import AbstractPartition from bootstrapvz.common.sectors import Sectors class BasePartition(AbstractPartition): """Represents a partition that is actually a partition (and not a virtual one like 'Single') """ # Override the states of the abstract partition # A real partition can be mapped and unmapped events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'}, {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'}, {'name': 'format', 'src': 'mapped', 'dst': 'formatted'}, {'name': 'mount', 'src': 'formatted', 'dst': 'mounted'}, {'name': 'unmount', 'src': 'mounted', 'dst': 'formatted'}, {'name': 'unmap', 'src': 'formatted', 'dst': 'unmapped_fmt'}, {'name': 'map', 'src': 'unmapped_fmt', 'dst': 'formatted'}, {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'}, ] def __init__(self, size, filesystem, format_command, mountopts, previous): """ :param Bytes size: Size of the partition :param str filesystem: Filesystem the partition should be formatted with :param list format_command: Optional format command, valid variables are fs, device_path and size :param BasePartition previous: The partition that preceeds this one """ # By saving the previous partition we have a linked list # that partitions can go backwards in to find the first partition. 
self.previous = previous # List of flags that parted should put on the partition self.flags = [] # Path to symlink in /dev/disk/by-uuid (manually maintained by this class) self.disk_by_uuid_path = None super(BasePartition, self).__init__(size, filesystem, format_command, mountopts) def create(self, volume): """Creates the partition :param Volume volume: The volume to create the partition on """ self.fsm.create(volume=volume) def get_index(self): """Gets the index of this partition in the partition map :return: The index of the partition in the partition map :rtype: int """ if self.previous is None: # Partitions are 1 indexed return 1 else: # Recursive call to the previous partition, walking up the chain... return self.previous.get_index() + 1 def get_start(self): """Gets the starting byte of this partition :return: The starting byte of this partition :rtype: Sectors """ if self.previous is None: return Sectors(0, self.size.sector_size) else: return self.previous.get_end() def map(self, device_path): """Maps the partition to a device_path :param str device_path: The device path this partition should be mapped to """ self.fsm.map(device_path=device_path) def link_uuid(self): # /lib/udev/rules.d/60-kpartx.rules does not create symlinks in /dev/disk/by-{uuid,label} # This patch would fix that: http://www.redhat.com/archives/dm-devel/2013-July/msg00080.html # For now we just do the uuid part ourselves. # This is mainly to fix a problem in update-grub where /etc/grub.d/10_linux # checks if the $GRUB_DEVICE_UUID exists in /dev/disk/by-uuid and falls # back to $GRUB_DEVICE if it doesn't. # $GRUB_DEVICE is /dev/mapper/xvd{f,g...}# (on ec2), opposed to /dev/xvda# when booting. 
# Creating the symlink ensures that grub consistently uses # $GRUB_DEVICE_UUID when creating /boot/grub/grub.cfg self.disk_by_uuid_path = os.path.join('/dev/disk/by-uuid', self.get_uuid()) if not os.path.exists(self.disk_by_uuid_path): os.symlink(self.device_path, self.disk_by_uuid_path) def unlink_uuid(self): if os.path.isfile(self.disk_by_uuid_path): os.remove(self.disk_by_uuid_path) self.disk_by_uuid_path = None def _before_create(self, e): """Creates the partition """ from bootstrapvz.common.tools import log_check_call # The create command is fairly simple: # - fs_type is the partition filesystem, as defined by parted: # fs-type can be one of "fat16", "fat32", "ext2", "HFS", "linux-swap", # "NTFS", "reiserfs", or "ufs". # - start and end are just Bytes objects coerced into strings if self.filesystem == 'swap': fs_type = 'linux-swap' else: fs_type = 'ext2' create_command = ('mkpart primary {fs_type} {start} {end}' .format(fs_type=fs_type, start=str(self.get_start() + self.pad_start), end=str(self.get_end() - self.pad_end))) # Create the partition log_check_call(['parted', '--script', '--align', 'none', e.volume.device_path, '--', create_command]) # Set any flags on the partition for flag in self.flags: log_check_call(['parted', '--script', e.volume.device_path, '--', ('set {idx} {flag} on' .format(idx=str(self.get_index()), flag=flag))]) def _before_map(self, e): # Set the device path self.device_path = e.device_path if e.src == 'unmapped_fmt': # Only link the uuid if the partition is formatted self.link_uuid() def _after_format(self, e): # We do this after formatting because there otherwise would be no UUID self.link_uuid() def _before_unmap(self, e): # When unmapped, the device_path information becomes invalid, so we delete it self.device_path = None if e.src == 'formatted': self.unlink_uuid() bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/gpt.py000066400000000000000000000023251323112141500251240ustar00rootroot00000000000000from 
bootstrapvz.common.tools import log_check_call from base import BasePartition class GPTPartition(BasePartition): """Represents a GPT partition """ def __init__(self, size, filesystem, format_command, mountopts, name, previous): """ :param Bytes size: Size of the partition :param str filesystem: Filesystem the partition should be formatted with :param list format_command: Optional format command, valid variables are fs, device_path and size :param str name: The name of the partition :param BasePartition previous: The partition that preceeds this one """ self.name = name super(GPTPartition, self).__init__(size, filesystem, format_command, mountopts, previous) def _before_create(self, e): # Create the partition and then set the name of the partition afterwards super(GPTPartition, self)._before_create(e) # partition name only works for gpt, for msdos that becomes the part-type (primary, extended, logical) name_command = 'name {idx} {name}'.format(idx=self.get_index(), name=self.name) log_check_call(['parted', '--script', e.volume.device_path, '--', name_command]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/gpt_swap.py000066400000000000000000000010301323112141500261460ustar00rootroot00000000000000from bootstrapvz.common.tools import log_check_call from gpt import GPTPartition class GPTSwapPartition(GPTPartition): """Represents a GPT swap partition """ def __init__(self, size, previous): """ :param Bytes size: Size of the partition :param BasePartition previous: The partition that preceeds this one """ super(GPTSwapPartition, self).__init__(size, 'swap', None, None, 'swap', previous) def _before_format(self, e): log_check_call(['mkswap', self.device_path]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/mount.py000066400000000000000000000033401323112141500254720ustar00rootroot00000000000000from abstract import AbstractPartition import os.path from bootstrapvz.common.tools import log_check_call class Mount(object): """Represents a mount 
into the partition """ def __init__(self, source, destination, opts): """ :param str,AbstractPartition source: The path from where we mount or a partition :param str destination: The path of the mountpoint :param list opts: List of options to pass to the mount command """ self.source = source self.destination = destination self.opts = opts def mount(self, prefix): """Performs the mount operation or forwards it to another partition :param str prefix: Path prefix of the mountpoint """ mount_dir = os.path.join(prefix, self.destination) # If the source is another partition, we tell that partition to mount itself if isinstance(self.source, AbstractPartition): self.source.mount(destination=mount_dir) else: log_check_call(['mount'] + self.opts + [self.source, mount_dir]) self.mount_dir = mount_dir def unmount(self): """Performs the unmount operation or asks the partition to unmount itself """ # If its a partition, it can unmount itself if isinstance(self.source, AbstractPartition): self.source.unmount() else: log_check_call(['umount', self.mount_dir]) del self.mount_dir def __getstate__(self): state = self.__dict__.copy() state['__class__'] = self.__module__ + '.' 
+ self.__class__.__name__ return state def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/msdos.py000066400000000000000000000013111323112141500254510ustar00rootroot00000000000000from base import BasePartition class MSDOSPartition(BasePartition): """Represents an MS-DOS partition """ def __init__(self, size, filesystem, format_command, mountopts, name, previous): """ :param Bytes size: Size of the partition :param str filesystem: Filesystem the partition should be formatted with :param list format_command: Optional format command, valid variables are fs, device_path and size :param str name: The name of the partition :param BasePartition previous: The partition that precedes this one """ self.name = name super(MSDOSPartition, self).__init__(size, filesystem, format_command, mountopts, previous) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/msdos_swap.py000066400000000000000000000010451323112141500265070ustar00rootroot00000000000000from bootstrapvz.common.tools import log_check_call from msdos import MSDOSPartition class MSDOSSwapPartition(MSDOSPartition): """Represents an MS-DOS swap partition """ def __init__(self, size, previous): """ :param Bytes size: Size of the partition :param BasePartition previous: The partition that precedes this one """ super(MSDOSSwapPartition, self).__init__(size, 'swap', None, None, 'swap', previous) def _before_format(self, e): log_check_call(['mkswap', self.device_path]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/single.py000066400000000000000000000006631323112141500256160ustar00rootroot00000000000000from abstract import AbstractPartition class SinglePartition(AbstractPartition): """Represents a single virtual partition on an unpartitioned volume """ def get_start(self): """Gets the starting byte of this partition :return: The starting byte of this partition :rtype: Sectors """ from
bootstrapvz.common.sectors import Sectors return Sectors(0, self.size.sector_size) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/partitions/unformatted.py000066400000000000000000000013331323112141500266600ustar00rootroot00000000000000from base import BasePartition class UnformattedPartition(BasePartition): """Represents an unformatted partition It cannot be mounted """ # The states for our state machine. It can only be mapped, not mounted. events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'unmapped'}, {'name': 'map', 'src': 'unmapped', 'dst': 'mapped'}, {'name': 'unmap', 'src': 'mapped', 'dst': 'unmapped'}, ] def __init__(self, size, previous): """ :param Bytes size: Size of the partition :param BasePartition previous: The partition that precedes this one """ super(UnformattedPartition, self).__init__(size, None, None, None, previous) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/fs/volume.py000066400000000000000000000140701323112141500234450ustar00rootroot00000000000000from abc import ABCMeta from bootstrapvz.common.fsm_proxy import FSMProxy from bootstrapvz.common.tools import log_check_call from .exceptions import VolumeError from partitionmaps.none import NoPartitions class Volume(FSMProxy): """Represents an abstract volume. This class is a finite state machine and represents the state of the real volume.
""" __metaclass__ = ABCMeta # States this volume can be in events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'detached'}, {'name': 'attach', 'src': 'detached', 'dst': 'attached'}, {'name': 'link_dm_node', 'src': 'attached', 'dst': 'linked'}, {'name': 'unlink_dm_node', 'src': 'linked', 'dst': 'attached'}, {'name': 'detach', 'src': 'attached', 'dst': 'detached'}, {'name': 'delete', 'src': 'detached', 'dst': 'deleted'}, ] def __init__(self, partition_map): """ :param PartitionMap partition_map: The partition map for the volume """ # Path to the volume self.device_path = None # The partition map self.partition_map = partition_map # The size of the volume as reported by the partition map self.size = self.partition_map.get_total_size() # Before detaching, check that nothing would block the detachment callbacks = {'onbeforedetach': self._check_blocking} if isinstance(self.partition_map, NoPartitions): # When the volume has no partitions, the virtual root partition path is equal to that of the volume # Update that path whenever the path to the volume changes def set_dev_path(e): self.partition_map.root.device_path = self.device_path callbacks['onafterattach'] = set_dev_path callbacks['onafterdetach'] = set_dev_path # Will become None callbacks['onlink_dm_node'] = set_dev_path callbacks['onunlink_dm_node'] = set_dev_path # Create the configuration for our finite state machine cfg = {'initial': 'nonexistent', 'events': self.events, 'callbacks': callbacks} super(Volume, self).__init__(cfg) def _after_create(self, e): if isinstance(self.partition_map, NoPartitions): # When the volume has no partitions, the virtual root partition # is essentially created when the volume is created, forward that creation event. 
self.partition_map.root.create() def _check_blocking(self, e): """Checks whether the volume is blocked :raises VolumeError: When the volume is blocked from being detached """ # Only the partition map can block the volume if self.partition_map.is_blocking(): raise VolumeError('The partitionmap prevents the detach procedure') def _before_link_dm_node(self, e): """Links the volume using the device mapper This allows us to create a 'window' into the volume that acts like a volume in itself. Mainly it is used to fool grub into thinking that it is working with a real volume, rather than a loopback device or a network block device. :param _e_obj e: Event object containing arguments to create() Keyword arguments to link_dm_node() are: :param int logical_start_sector: The sector the volume should start at in the new volume :param int start_sector: The offset at which the volume should begin to be mapped in the new volume :param int sectors: The number of sectors that should be mapped Read more at: http://manpages.debian.org/cgi-bin/man.cgi?query=dmsetup&apropos=0&sektion=0&manpath=Debian+7.0+wheezy&format=html&locale=en :raises VolumeError: When a free block device cannot be found. """ import os.path from bootstrapvz.common.fs import get_partitions # Fetch information from /proc/partitions proc_partitions = get_partitions() device_name = os.path.basename(self.device_path) device_partition = proc_partitions[device_name] # The sector the volume should start at in the new volume logical_start_sector = getattr(e, 'logical_start_sector', 0) # The offset at which the volume should begin to be mapped in the new volume start_sector = getattr(e, 'start_sector', 0) # The number of sectors that should be mapped sectors = getattr(e, 'sectors', int(self.size) - start_sector) # This is the table we send to dmsetup, so that it may create a device mapping for us. 
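As an aside, the table handed to `dmsetup create` is a single line of the form `<logical start sector> <number of sectors> linear <major>:<minor> <start sector>`. A minimal standalone sketch of that string construction follows; the major/minor device numbers (254:0) and sector count are hypothetical examples, not values read from a real /proc/partitions:

```python
# Sketch: the one-line linear mapping table passed to `dmsetup create`.
# The device numbers (254:0) and sector count below are hypothetical examples.
def build_linear_table(logical_start_sector, sectors, major, minor, start_sector):
    # Format: <logical start sector> <num sectors> linear <major>:<minor> <start sector>
    return ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}'
            .format(log_start_sec=logical_start_sector, sectors=sectors,
                    major=major, minor=minor, start_sec=start_sector))

print(build_linear_table(0, 2097152, 254, 0, 0))  # 0 2097152 linear 254:0 0
```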
table = ('{log_start_sec} {sectors} linear {major}:{minor} {start_sec}' .format(log_start_sec=logical_start_sector, sectors=sectors, major=device_partition['major'], minor=device_partition['minor'], start_sec=start_sector)) import string import os.path # Figure out the device letter and path for letter in string.ascii_lowercase: dev_name = 'vd' + letter dev_path = os.path.join('/dev/mapper', dev_name) if not os.path.exists(dev_path): self.dm_node_name = dev_name self.dm_node_path = dev_path break if not hasattr(self, 'dm_node_name'): raise VolumeError('Unable to find a free block device path for mounting the bootstrap volume') # Create the device mapping log_check_call(['dmsetup', 'create', self.dm_node_name], table) # Update the device_path but remember the old one for when we unlink the volume again self.unlinked_device_path = self.device_path self.device_path = self.dm_node_path def _before_unlink_dm_node(self, e): """Unlinks the device mapping """ log_check_call(['dmsetup', 'remove', self.dm_node_name]) # Reset the device_path self.device_path = self.unlinked_device_path # Delete the no longer valid information del self.unlinked_device_path del self.dm_node_name del self.dm_node_path bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/log.py000066400000000000000000000075331323112141500223150ustar00rootroot00000000000000"""This module holds functions and classes responsible for formatting the log output both to a file and to the console. 
""" import logging def get_console_handler(debug, colorize): """Returns a log handler for the console The handler color codes the different log levels :params bool debug: Whether to set the log level to DEBUG (otherwise INFO) :params bool colorize: Whether to colorize console output :return: The console logging handler """ # Create a console log handler import sys console_handler = logging.StreamHandler(sys.stderr) if colorize: # We want to colorize the output to the console, so we add a formatter console_handler.setFormatter(ColorFormatter()) # Set the log level depending on the debug argument if debug: console_handler.setLevel(logging.DEBUG) else: console_handler.setLevel(logging.INFO) return console_handler def get_file_handler(path, debug): """Returns a log handler for the given path If the parent directory of the logpath does not exist it will be created The handler outputs relative timestamps (to when it was created) :params str path: The full path to the logfile :params bool debug: Whether to set the log level to DEBUG (otherwise INFO) :return: The file logging handler """ import os.path if not os.path.exists(os.path.dirname(path)): os.makedirs(os.path.dirname(path)) # Create the log handler file_handler = logging.FileHandler(path) # Absolute timestamps are rather useless when bootstrapping, it's much more interesting # to see how long things take, so we log in a relative format instead file_handler.setFormatter(FileFormatter('[%(relativeCreated)s] %(levelname)s: %(message)s')) # The file log handler always logs everything file_handler.setLevel(logging.DEBUG) return file_handler def get_log_filename(manifest_path): """Returns the path to a logfile given a manifest The logfile name is constructed from the current timestamp and the basename of the manifest :param str manifest_path: The path to the manifest :return: The path to the logfile :rtype: str """ import os.path from datetime import datetime manifest_basename = os.path.basename(manifest_path) 
manifest_name, _ = os.path.splitext(manifest_basename) timestamp = datetime.now().strftime('%Y%m%d%H%M%S') filename = '{timestamp}_{name}.log'.format(timestamp=timestamp, name=manifest_name) return filename class SourceFormatter(logging.Formatter): """Adds a [source] tag to the log message if it exists The python docs suggest using a LoggingAdapter, but that would mean we'd have to use it everywhere we log something (and only when called remotely), which is not feasible. """ def format(self, record): extra = getattr(record, 'extra', {}) if 'source' in extra: record.msg = '[{source}] {message}'.format(source=record.extra['source'], message=record.msg) return super(SourceFormatter, self).format(record) class ColorFormatter(SourceFormatter): """Colorizes log messages depending on the loglevel """ level_colors = {logging.ERROR: 'red', logging.WARNING: 'magenta', logging.INFO: 'blue', } def format(self, record): # Colorize the message if we have a color for it (DEBUG has no color) from termcolor import colored record.msg = colored(record.msg, self.level_colors.get(record.levelno, None)) return super(ColorFormatter, self).format(record) class FileFormatter(SourceFormatter): """Formats log statements for output to file Currently this is just a stub """ def format(self, record): return super(FileFormatter, self).format(record) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/main.py000066400000000000000000000126071323112141500224560ustar00rootroot00000000000000"""Main module containing all the setup necessary for running the bootstrapping process """ def main(): """Main function for invoking the bootstrap process :raises Exception: When the invoking user is not root and --dry-run isn't specified """ # Get the commandline arguments opts = get_opts() # Require root privileges, except when doing a dry-run where they aren't needed import os if os.geteuid() != 0 and not opts['--dry-run']: raise Exception('This program requires root privileges.') # Set up logging 
setup_loggers(opts) # Load the manifest from manifest import Manifest manifest = Manifest(path=opts['MANIFEST']) # Everything has been set up, begin the bootstrapping process run(manifest, debug=opts['--debug'], pause_on_error=opts['--pause-on-error'], dry_run=opts['--dry-run']) def get_opts(): """Creates an argument parser and returns the arguments it has parsed """ import docopt usage = """bootstrap-vz Usage: bootstrap-vz [options] MANIFEST Options: --log Log to given directory [default: /var/log/bootstrap-vz] If is `-' file logging will be disabled. --pause-on-error Pause on error, before rollback --dry-run Don't actually run the tasks --color=auto|always|never Colorize the console output [default: auto] --debug Print debugging information -h, --help show this help """ opts = docopt.docopt(usage) if opts['--color'] not in ('auto', 'always', 'never'): raise docopt.DocoptExit('Value of --color must be one of auto, always or never.') return opts def setup_loggers(opts): """Sets up the file and console loggers :params dict opts: Dictionary of options from the commandline """ import logging root = logging.getLogger() root.setLevel(logging.NOTSET) import log # Log to file unless --log is a single dash if opts['--log'] != '-': import os.path log_filename = log.get_log_filename(opts['MANIFEST']) logpath = os.path.join(opts['--log'], log_filename) file_handler = log.get_file_handler(path=logpath, debug=True) root.addHandler(file_handler) if opts['--color'] == 'never': colorize = False elif opts['--color'] == 'always': colorize = True else: # If --color=auto (default), decide whether to colorize by whether stderr is a tty. 
import os colorize = os.isatty(2) console_handler = log.get_console_handler(debug=opts['--debug'], colorize=colorize) root.addHandler(console_handler) def run(manifest, debug=False, pause_on_error=False, dry_run=False): """Runs the bootstrapping process :params Manifest manifest: The manifest to run the bootstrapping process for :params bool debug: Whether to turn debugging mode on :params bool pause_on_error: Whether to pause on error, before rollback :params bool dry_run: Don't actually run the tasks """ import logging log = logging.getLogger(__name__) # Get the tasklist from tasklist import load_tasks from tasklist import TaskList log.info('Generating tasklist') tasks = load_tasks('resolve_tasks', manifest) tasklist = TaskList(tasks) # 'resolve_tasks' is the name of the function to call on the provider and plugins # Create the bootstrap information object that'll be used throughout the bootstrapping process from bootstrapinfo import BootstrapInformation bootstrap_info = BootstrapInformation(manifest=manifest, debug=debug) try: # Run all the tasks the tasklist has gathered tasklist.run(info=bootstrap_info, dry_run=dry_run) # We're done! :-) log.info('Successfully completed bootstrapping') except (Exception, KeyboardInterrupt) as e: # When an error occurs, log it and begin rollback log.exception(e) if pause_on_error: # The --pause-on-error is useful when the user wants to inspect the volume before rollback raw_input('Press Enter to commence rollback') log.error('Rolling back') # Create a useful little function for the provider and plugins to use, # when figuring out what tasks should be added to the rollback list. 
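The rollback bookkeeping rule is: schedule a counter task only when its forward task completed and the counter itself has not already run. A standalone sketch of that rule, using hypothetical task names in place of real Task objects:

```python
# Standalone sketch of the counter_task() rule: add the rollback counter only
# if the forward task completed and the counter has not itself run yet.
# The task names below are hypothetical stand-ins for real Task objects.
def add_counter(taskset, tasks_completed, task, counter):
    if task in tasks_completed and counter not in tasks_completed:
        taskset.add(counter)

rollback = set()
completed = ['CreateVolume', 'AttachVolume']
add_counter(rollback, completed, 'AttachVolume', 'DetachVolume')
add_counter(rollback, completed, 'MountRoot', 'UnmountRoot')  # MountRoot never ran
assert rollback == set(['DetachVolume'])
```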
def counter_task(taskset, task, counter): """counter_task() adds the third argument to the rollback tasklist if the second argument is present in the list of completed tasks :param set taskset: The taskset to add the rollback task to :param Task task: The task to look for in the completed tasks list :param Task counter: The task to add to the rollback tasklist """ if task in tasklist.tasks_completed and counter not in tasklist.tasks_completed: taskset.add(counter) # Ask the provider and plugins for tasks they'd like to add to the rollback tasklist # Any additional arguments beyond the first two are passed directly to the provider and plugins rollback_tasks = load_tasks('resolve_rollback_tasks', manifest, tasklist.tasks_completed, counter_task) rollback_tasklist = TaskList(rollback_tasks) # Run the rollback tasklist rollback_tasklist.run(info=bootstrap_info, dry_run=dry_run) log.info('Successfully completed rollback') raise return bootstrap_info bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/manifest-schema.yml000066400000000000000000000117611323112141500247470ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Generic manifest type: object required: [name, provider, bootstrapper, system, volume] properties: name: type: string provider: type: object properties: name: {type: string} required: [name] additionalProperties: true bootstrapper: type: object properties: exclude_packages: type: array items: type: string pattern: '^[^/]+$' minItems: 1 include_packages: type: array items: type: string pattern: '^[^/]+$' minItems: 1 mirror: type: string format: uri tarball: {type: boolean} workspace: $ref: '#/definitions/path' variant: type: string enum: [minbase] required: [workspace] additionalProperties: false system: type: object properties: architecture: enum: [i386, amd64, arm64] userspace_architecture: enum: [i386] bootloader: enum: - pvgrub - grub - extlinux - none charmap: {type: string} hostname: type: string pattern: ^\S+$ 
locale: {type: string} release: {type: string} timezone: {type: string} required: - release - architecture - bootloader - timezone - locale - charmap additionalProperties: false packages: type: object properties: components: type: array items: {type: string} minItems: 1 install: type: array items: anyOf: - pattern: ^[^/]+(/[^/]+)?$ - $ref: '#/definitions/absolute_path' minItems: 1 install_standard: {type: boolean} mirror: type: string format: uri security: type: string format: uri preferences: type: object patternProperties: ^[^/\0]+$: type: array items: type: object properties: package: {type: string} pin: {type: string} pin-priority: {type: integer} required: [pin, package, pin-priority] additionalProperties: false minItems: 1 minProperties: 1 additionalProperties: false apt.conf.d: type: object patternProperties: ^[0-9A-Za-z][0-9A-Za-z-_.]+$: type: string minProperties: 1 additionalProperties: false sources: type: object patternProperties: ^[^/\0]+$: items: type: string pattern: ^(deb|deb-src)\s+(\[\s*(.+\S)?\s*\]\s+)?\S+\s+\S+(\s+(.+\S))?\s*$ minItems: 1 type: array minProperties: 1 additionalProperties: false trusted-keys: type: array items: $ref: '#/definitions/path' minItems: 1 include-source-type: {type: boolean} additionalProperties: false plugins: type: object patternProperties: ^\w+$: {} additionalProperties: false volume: type: object oneOf: - $ref: '#/definitions/standardvolume' - $ref: '#/definitions/logicalvolume' definitions: standardvolume: type: object properties: backing: {type: string} partitions: type: object oneOf: - $ref: '#/definitions/no_partitions' - $ref: '#/definitions/partition_table' required: [partitions] additionalProperties: false logicalvolume: type: object properties: backing: enum: [lvm] volumegroup : {type: string} logicalvolume: {type: string} partitions: type: object oneOf: - $ref: '#/definitions/no_partitions' - $ref: '#/definitions/partition_table' required: [partitions] additionalProperties: false absolute_path: type: 
string pattern: ^/[^\0]+$ bytes: pattern: ^\d+([KMGT]i?B|B)$ type: string no_partitions: type: object properties: root: {$ref: '#/definitions/partition'} type: {enum: [none]} required: [type, root] additionalProperties: false partition: type: object properties: filesystem: enum: [ext2, ext3, ext4, xfs, btrfs] format_command: items: {type: string} minItems: 1 type: array mountopts: items: {type: string} minItems: 1 type: array size: {$ref: '#/definitions/bytes'} required: [size, filesystem] additionalProperties: false partition_table: type: object patternProperties: ^(?!type|swap).*: {$ref: '#/definitions/partition'} properties: swap: type: object properties: size: {$ref: '#/definitions/bytes'} required: [size] type: {enum: [msdos, gpt]} required: [type, root] additionalProperties: false path: type: string pattern: ^[^\0]+$ bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/manifest.py000066400000000000000000000161101323112141500233310ustar00rootroot00000000000000"""The Manifest module contains the manifest that providers and plugins use to determine which tasks should be added to the tasklist, what arguments various invocations should have etc.. """ from bootstrapvz.common.exceptions import ManifestError from bootstrapvz.common.tools import load_data, rel_path import logging log = logging.getLogger(__name__) class Manifest(object): """This class holds all the information that providers and plugins need to perform the bootstrapping process. All actions that are taken originate from here. The manifest shall not be modified after it has been loaded. Currently, immutability is not enforced and it would require a fair amount of code to enforce it, instead we just rely on tasks behaving properly. """ def __init__(self, path=None, data=None): """Initializer: Given a path we load, validate and parse the manifest. To create the manifest from dynamic data instead of the contents of a file, provide a properly constructed dict as the data argument. 
:param str path: The path to the manifest (ignored when `data' is provided) :param str data: The manifest data, if it is not None, it will be used instead of the contents of `path' """ if path is None and data is None: raise ManifestError('`path\' or `data\' must be provided') self.path = path self.metaschema = load_data(rel_path(__file__, 'metaschema.json')) self.load_data(data) self.load_modules() self.validate() self.parse() def load_data(self, data=None): """Loads the manifest and performs a basic validation. This function reads the manifest and performs some basic validation of the manifest itself to ensure that the properties required for initialization are accessible (otherwise the user would be presented with some cryptic error messages). """ if data is None: self.data = load_data(self.path) else: self.data = data from . import validate_manifest # Validate the manifest with the base validation function in __init__ validate_manifest(self.data, self.schema_validator, self.validation_error) def load_modules(self): """Loads the provider and the plugins. """ # Get the provider name from the manifest and load the corresponding module provider_modname = 'bootstrapvz.providers.' + self.data['provider']['name'] log.debug('Loading provider ' + self.data['provider']['name']) # Create a modules dict that contains the loaded provider and plugins import importlib self.modules = {'provider': importlib.import_module(provider_modname), 'plugins': [], } # Run through all the plugins mentioned in the manifest and load them from pkg_resources import iter_entry_points if 'plugins' in self.data: for plugin_name in self.data['plugins'].keys(): log.debug('Loading plugin ' + plugin_name) try: # Internal bootstrap-vz plugins take precedence wrt. plugin name modname = 'bootstrapvz.plugins.'
+ plugin_name plugin = importlib.import_module(modname) except ImportError: entry_points = list(iter_entry_points('bootstrapvz.plugins', name=plugin_name)) num_entry_points = len(entry_points) if num_entry_points < 1: raise if num_entry_points > 1: msg = ('Unable to load plugin {name}, ' 'there are {num} entry points to choose from.' .format(name=plugin_name, num=num_entry_points)) raise ImportError(msg) plugin = entry_points[0].load() self.modules['plugins'].append(plugin) def validate(self): """Validates the manifest using the provider and plugin validation functions. Plugins are not required to have a validate_manifest function """ # Run the provider validation self.modules['provider'].validate_manifest(self.data, self.schema_validator, self.validation_error) # Run the validation function for any plugin that has it for plugin in self.modules['plugins']: validate = getattr(plugin, 'validate_manifest', None) if callable(validate): validate(self.data, self.schema_validator, self.validation_error) def parse(self): """Parses the manifest. Well... "parsing" is a big word. The function really just sets up some convenient attributes so that tasks don't have to access information with info.manifest.data['section'] but can do it with info.manifest.section. """ self.name = self.data['name'] self.provider = self.data['provider'] self.bootstrapper = self.data['bootstrapper'] self.volume = self.data['volume'] self.system = self.data['system'] from bootstrapvz.common.releases import get_release self.release = get_release(self.system['release']) # The packages and plugins sections are not required self.packages = self.data['packages'] if 'packages' in self.data else {} self.plugins = self.data['plugins'] if 'plugins' in self.data else {} def schema_validator(self, data, schema_path): """This convenience function is passed around to all the validation functions so that they may run a json-schema validation by giving it the data and a path to the schema. 
:param dict data: Data to validate (normally the manifest data) :param str schema_path: Path to the json-schema to use for validation """ import jsonschema schema = load_data(schema_path) try: jsonschema.validate(schema, self.metaschema) jsonschema.validate(data, schema) except jsonschema.ValidationError as e: self.validation_error(e.message, e.path) def validation_error(self, message, data_path=None): """This function is passed to all validation functions so that they may raise a validation error because a custom validation of the manifest failed. :param str message: Message to user about the error :param list data_path: A path to the location in the manifest where the error occurred :raises ManifestError: With absolute certainty """ raise ManifestError(message, self.path, data_path) def __getstate__(self): return {'__class__': self.__module__ + '.' + self.__class__.__name__, 'path': self.path, 'metaschema': self.metaschema, 'data': self.data} def __setstate__(self, state): self.path = state['path'] self.metaschema = state['metaschema'] self.load_data(state['data']) self.load_modules() self.validate() self.parse() bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/metaschema.json000066400000000000000000000105571323112141500241640ustar00rootroot00000000000000// Core/Validation metaschema: Describes JSON schemata // http://json-schema.org/schema { "id": "http://json-schema.org/draft-04/schema#", "$schema": "http://json-schema.org/draft-04/schema#", "description": "Core schema meta-schema", "definitions": { "schemaArray": { "type": "array", "minItems": 1, "items": { "$ref": "#" } }, "positiveInteger": { "type": "integer", "minimum": 0 }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "simpleTypes": { "enum": [ "array", "boolean", "integer", "null", "number", "object", "string" ] }, "stringArray": { "type": "array", "items": { "type": "string" }, "minItems": 1, "uniqueItems": true } }, "type": "object", 
"properties": { "id": { "type": "string", "format": "uri" }, "$schema": { "type": "string", "format": "uri" }, "title": { "type": "string" }, "description": { "type": "string" }, "default": {}, "multipleOf": { "type": "number", "minimum": 0, "exclusiveMinimum": true }, "maximum": { "type": "number" }, "exclusiveMaximum": { "type": "boolean", "default": false }, "minimum": { "type": "number" }, "exclusiveMinimum": { "type": "boolean", "default": false }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "pattern": { "type": "string", "format": "regex" }, "additionalItems": { "anyOf": [ { "type": "boolean" }, { "$ref": "#" } ], "default": {} }, "items": { "anyOf": [ { "$ref": "#" }, { "$ref": "#/definitions/schemaArray" } ], "default": {} }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "uniqueItems": { "type": "boolean", "default": false }, "maxProperties": { "$ref": "#/definitions/positiveInteger" }, "minProperties": { "$ref": "#/definitions/positiveIntegerDefault0" }, "required": { "$ref": "#/definitions/stringArray" }, "additionalProperties": { "anyOf": [ { "type": "boolean" }, { "$ref": "#" } ], "default": {} }, "definitions": { "type": "object", "additionalProperties": { "$ref": "#" }, "default": {} }, "properties": { "type": "object", "additionalProperties": { "$ref": "#" }, "default": {} }, "patternProperties": { "type": "object", "additionalProperties": { "$ref": "#" }, "default": {} }, "dependencies": { "type": "object", "additionalProperties": { "anyOf": [ { "$ref": "#" }, { "$ref": "#/definitions/stringArray" } ] } }, "enum": { "type": "array", "minItems": 1, "uniqueItems": true }, "type": { "anyOf": [ { "$ref": "#/definitions/simpleTypes" }, { "type": "array", "items": { "$ref": "#/definitions/simpleTypes" }, "minItems": 1, "uniqueItems": true } ] }, "allOf": { "$ref": "#/definitions/schemaArray" }, 
"anyOf": { "$ref": "#/definitions/schemaArray" }, "oneOf": { "$ref": "#/definitions/schemaArray" }, "not": { "$ref": "#" } }, "dependencies": { "exclusiveMaximum": [ "maximum" ], "exclusiveMinimum": [ "minimum" ] }, "default": {} } bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/phase.py000066400000000000000000000020221323112141500226200ustar00rootroot00000000000000 class Phase(object): """The Phase class represents a phase a task may be in. It has no function other than to act as an anchor in the task graph. All phases are instantiated in common.phases """ def __init__(self, name, description): # The name of the phase self.name = name # The description of the phase (currently not used anywhere) self.description = description def pos(self): """Gets the position of the phase :return: The positional index of the phase in relation to the other phases :rtype: int """ from bootstrapvz.common.phases import order return next(i for i, phase in enumerate(order) if phase is self) def __cmp__(self, other): """Compares the phase order in relation to the other phases :return int: """ return self.pos() - other.pos() def __str__(self): """ :return: String representation of the phase :rtype: str """ return self.name bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/000077500000000000000000000000001323112141500217335ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/__init__.py000066400000000000000000000000001323112141500240320ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/exceptions.py000066400000000000000000000003511323112141500244650ustar00rootroot00000000000000 class PackageError(Exception): """Raised when an error occurrs while handling the packageslist """ pass class SourceError(Exception): """Raised when an error occurs while handling the sourceslist """ pass bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/packagelist.py000066400000000000000000000113001323112141500245670ustar00rootroot00000000000000 
class PackageList(object): """Represents a list of packages """ class Remote(object): """A remote package with an optional target """ def __init__(self, name, target): """ :param str name: The name of the package :param str target: The name of the target release """ self.name = name self.target = target def __str__(self): """Converts the package into something that apt-get install can parse :rtype: str """ if self.target is None: return self.name else: return self.name + '/' + self.target class Local(object): """A local package """ def __init__(self, path): """ :param str path: The path to the local package """ self.path = path def __str__(self): """ :return: The path to the local package :rtype: str """ return self.path def __init__(self, manifest_vars, source_lists): """ :param dict manifest_vars: The manifest variables :param SourceLists source_lists: The sourcelists for apt """ self.manifest_vars = manifest_vars self.source_lists = source_lists # The default_target is the release we are bootstrapping self.default_target = '{system.release}'.format(**self.manifest_vars) # The list of packages that should be installed; this is not a set. # We want to preserve the order in which the packages were added so that local # packages may be installed in the correct order. self.install = [] # A function that filters the install list and only returns remote packages self.remote = lambda: filter(lambda x: isinstance(x, self.Remote), self.install) def add(self, name, target=None): """Adds a package to the install list :param str name: The name of the package to install, may contain manifest vars references :param str target: The name of the target release for the package, may contain manifest vars references :raises PackageError: When a package of the same name but with a different target has already been added. :raises PackageError: When the specified target release could not be found.
""" from exceptions import PackageError name = name.format(**self.manifest_vars) if target is not None: target = target.format(**self.manifest_vars) # Check if the package has already been added. # If so, make sure it's the same target and raise a PackageError otherwise package = next((pkg for pkg in self.remote() if pkg.name == name), None) if package is not None: # It's the same target if the target names match or one of the targets is None # and the other is the default target. same_target = package.target == target same_target = same_target or package.target is None and target == self.default_target same_target = same_target or package.target == self.default_target and target is None if not same_target: msg = ('The package {name} was already added to the package list, ' 'but with target release `{target}\' instead of `{add_target}\'' .format(name=name, target=package.target, add_target=target)) raise PackageError(msg) # The package has already been added, skip the checks below return # Check if the target exists (unless it's the default target) in the sources list # raise a PackageError if does not if target not in (None, self.default_target) and not self.source_lists.target_exists(target): msg = ('The target release {target} was not found in the sources list').format(target=target) raise PackageError(msg) # Note that we maintain the target value even if it is none. # This allows us to preserve the semantics of the default target when calling apt-get install # Why? Try installing nfs-client/wheezy, you can't. It's a virtual package for which you cannot define # a target release. Only `apt-get install nfs-client` works. 
self.install.append(self.Remote(name, target)) def add_local(self, package_path): """Adds a local package to the installation list :param str package_path: Path to the local package, may contain manifest vars references """ package_path = package_path.format(**self.manifest_vars) self.install.append(self.Local(package_path)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/preferenceslist.py000066400000000000000000000024371323112141500255100ustar00rootroot00000000000000 class PreferenceLists(object): """Represents a list of preferences lists for apt """ def __init__(self, manifest_vars): """ :param dict manifest_vars: The manifest variables """ # A dictionary with the name of the file in preferences.d as the key # The values are lists of Preference objects self.preferences = {} # Save the manifest variables, we need them later on self.manifest_vars = manifest_vars def add(self, name, preferences): """Adds a preference to the apt preferences list :param str name: Name of the file in preferences.d, may contain manifest vars references :param object preferences: The preferences """ name = name.format(**self.manifest_vars) self.preferences[name] = [Preference(p) for p in preferences] class Preference(object): """Represents a single preference """ def __init__(self, preference): """ :param dict preference: An apt preference dictionary """ self.preference = preference def __str__(self): """Convert the object into a preference block :rtype: str """ return "Package: {package}\nPin: {pin}\nPin-Priority: {pin-priority}\n".format(**self.preference) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/pkg/sourceslist.py000066400000000000000000000070711323112141500246710ustar00rootroot00000000000000 class SourceLists(object): """Represents a list of sources lists for apt """ def __init__(self, manifest_vars): """ :param dict manifest_vars: The manifest variables """ # A dictionary with the name of the file in sources.list.d as the key # The values are lists of Source 
objects self.sources = {} # Save the manifest variables, we need them later on self.manifest_vars = manifest_vars def add(self, name, line): """Adds a source to the apt sources list :param str name: Name of the file in sources.list.d, may contain manifest vars references :param str line: The line for the source file, may contain manifest vars references """ name = name.format(**self.manifest_vars) line = line.format(**self.manifest_vars) if name not in self.sources: self.sources[name] = [] self.sources[name].append(Source(line)) def target_exists(self, target): """Checks whether the target exists in the sources list :param str target: Name of the target to check for, may contain manifest vars references :return: Whether the target exists :rtype: bool """ target = target.format(**self.manifest_vars) # Run through all the sources and return True if the target exists for lines in self.sources.itervalues(): if target in (source.distribution for source in lines): return True return False class Source(object): """Represents a single source line """ def __init__(self, line): """ :param str line: An apt source line :raises SourceError: When the source line cannot be parsed """ # Parse the source line and populate the class attributes with it # The format is taken from `man sources.list` # or: http://manpages.debian.org/cgi-bin/man.cgi?sektion=5&query=sources.list&apropos=0&manpath=sid&locale=en import re regexp = re.compile('^(?P<type>deb|deb-src)\s+' '(\[\s*(?P<options>.+\S)?\s*\]\s+)?' 
'(?P<uri>\S+)\s+' '(?P<distribution>\S+)' '(\s+(?P<components>.+\S))?\s*$') match = regexp.match(line) if match is None: from exceptions import SourceError raise SourceError('Unable to parse source line: ' + line) match = match.groupdict() self.type = match['type'] self.options = [] if match['options'] is not None: self.options = re.sub(' +', ' ', match['options']).split(' ') self.uri = match['uri'] self.distribution = match['distribution'] self.components = [] if match['components'] is not None: self.components = re.sub(' +', ' ', match['components']).split(' ') def __str__(self): """Convert the object into a source line This is pretty much the reverse of what we're doing in the initialization function. :rtype: str """ options = '' if len(self.options) > 0: options = ' [{options}]'.format(options=' '.join(self.options)) components = '' if len(self.components) > 0: components = ' {components}'.format(components=' '.join(self.components)) return ('{type}{options} {uri} {distribution}{components}' .format(type=self.type, options=options, uri=self.uri, distribution=self.distribution, components=components)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/task.py000066400000000000000000000020261323112141500224660ustar00rootroot00000000000000 class Task(object): """The task class represents a task that can be run. It is merely a wrapper for the run function and should never be instantiated. """ # The phase this task is located in. phase = None # List of tasks that should run before this task is run predecessors = [] # List of tasks that should run after this task has run successors = [] class __metaclass__(type): """Metaclass to control how the class is coerced into a string """ def __repr__(cls): """ :return str: The full module path to the Task """ return cls.__module__ + '.' 
+ cls.__name__ def __str__(cls): """ :return: The full module path to the Task :rtype: str """ return repr(cls) @classmethod def run(cls, info): """The run function, all work is done inside this function :param BootstrapInformation info: The bootstrap info object. """ pass bootstrap-vz-0.9.11+20180121git/bootstrapvz/base/tasklist.py000066400000000000000000000306401323112141500233650ustar00rootroot00000000000000"""The tasklist module contains the TaskList class. """ from bootstrapvz.common.exceptions import TaskListError import logging log = logging.getLogger(__name__) class TaskList(object): """The tasklist class aggregates all tasks that should be run and orders them according to their dependencies. """ def __init__(self, tasks): self.tasks = tasks self.tasks_completed = [] def run(self, info, dry_run=False): """Converts the taskgraph into a list and runs all tasks in that list :param dict info: The bootstrap information object :param bool dry_run: Whether to actually run the tasks or simply step through them """ # Get a hold of every task we can find, so that we can topologically sort # all tasks, rather than just the subset we are going to run. # We pass in the modules which the manifest has already loaded in order # to support third-party plugins, which are not in the bootstrapvz package # and therefore wouldn't be discovered. all_tasks = set(get_all_tasks([info.manifest.modules['provider']] + info.manifest.modules['plugins'])) # Create a list for us to run task_list = create_list(self.tasks, all_tasks) # Output the tasklist log.debug('Tasklist:\n\t' + ('\n\t'.join(map(repr, task_list)))) for task in task_list: # Tasks are not required to have a description if hasattr(task, 'description'): log.info(task.description) else: # If there is no description, simply coerce the task into a string and print its name log.info('Running ' + str(task)) if not dry_run: # Run the task task.run(info) # Remember which tasks have been run for later use (e.g. 
when rolling back, because of an error) self.tasks_completed.append(task) def load_tasks(function, manifest, *args): """Calls ``function`` on the provider and all plugins that have been loaded by the manifest. Any additional arguments are passed directly to ``function``. The function that is called shall accept the taskset as its first argument and the manifest as its second argument. :param str function: Name of the function to call :param Manifest manifest: The manifest :param list args: Additional arguments that should be passed to the function that is called """ tasks = set() # Call 'function' on the provider getattr(manifest.modules['provider'], function)(tasks, manifest, *args) for plugin in manifest.modules['plugins']: # Plugins are not required to have whatever function we call fn = getattr(plugin, function, None) if callable(fn): fn(tasks, manifest, *args) return tasks def create_list(taskset, all_tasks): """Creates a list of all the tasks that should be run. """ from bootstrapvz.common.phases import order # Make sure all_tasks is a superset of the resolved taskset if not all_tasks >= taskset: msg = ('bootstrap-vz generated a list of all available tasks. ' 'That list is not a superset of the tasks required for bootstrapping. 
' 'The tasks that were not found are: {tasks} ' '(This is an error in the code and not the manifest, please report this issue.)' .format(tasks=', '.join(map(str, taskset - all_tasks))) ) raise TaskListError(msg) # Create a graph over all tasks by creating a map of each task's successors graph = {} for task in all_tasks: # Do a sanity check first check_ordering(task) successors = set() # Add all successors mentioned in the task successors.update(task.successors) # Add all tasks that mention this task as a predecessor successors.update(filter(lambda succ: task in succ.predecessors, all_tasks)) # Create a list of phases that succeed the phase of this task succeeding_phases = order[order.index(task.phase) + 1:] # Add all tasks that occur in above mentioned succeeding phases successors.update(filter(lambda succ: succ.phase in succeeding_phases, all_tasks)) # Map the successors to the task graph[task] = successors # Use the strongly connected components algorithm to check for cycles in our task graph components = strongly_connected_components(graph) cycles_found = 0 for component in components: # A component of a single node is also a strongly connected component but hardly a cycle, so we filter those out if len(component) > 1: cycles_found += 1 log.debug('Cycle: {list}'.format(list=', '.join(map(repr, component)))) if cycles_found > 0: msg = ('{num} cycles were found in the tasklist, ' 'consult the logfile for more information.'.format(num=cycles_found)) raise TaskListError(msg) # Run a topological sort on the graph, returning an ordered list sorted_tasks = topological_sort(graph) # Filter out any tasks not in the tasklist # We want to maintain ordering, so we don't use set intersection sorted_tasks = filter(lambda task: task in taskset, sorted_tasks) return sorted_tasks def get_all_tasks(loaded_modules): """Gets a list of all task classes in the package :return: A list of all tasks in the package :rtype: list """ import pkgutil import os.path import bootstrapvz from bootstrapvz.common.tools 
import rel_path module_paths = set([(rel_path(bootstrapvz.__file__, 'common/tasks'), 'bootstrapvz.common.tasks.')]) for module in loaded_modules: module_path = os.path.dirname(module.__file__) module_prefix = module.__name__ + '.' module_paths.add((module_path, module_prefix)) providers = rel_path(bootstrapvz.__file__, 'providers') for module_loader, module_name, ispkg in pkgutil.iter_modules([providers, 'bootstrapvz.providers']): module_path = os.path.join(module_loader.path, module_name) # The prefix param seems to do nothing, so we prefix the module name ourselves module_prefix = 'bootstrapvz.providers.{}.'.format(module_name) module_paths.add((module_path, module_prefix)) plugins = rel_path(bootstrapvz.__file__, 'plugins') for module_loader, module_name, ispkg in pkgutil.iter_modules([plugins, 'bootstrapvz.plugins']): module_path = os.path.join(module_loader.path, module_name) module_prefix = 'bootstrapvz.plugins.{}.'.format(module_name) module_paths.add((module_path, module_prefix)) # Get generators that return all classes in a module generators = [] for (module_path, module_prefix) in module_paths: generators.append(get_all_classes(module_path, module_prefix)) import itertools classes = itertools.chain(*generators) # lambda function to check whether a class is a task (excluding the superclass Task) def is_task(obj): from task import Task return issubclass(obj, Task) and obj is not Task return filter(is_task, classes) # Only return classes that are tasks def get_all_classes(path=None, prefix='', excludes=[]): """ Given a path to a package, this function retrieves all the classes in it :param str path: Path to the package :param str prefix: Name of the package followed by a dot :param list excludes: List of str matching module names that should be ignored :return: A generator that yields classes :rtype: generator :raises Exception: If a module cannot be inspected. 
""" import pkgutil import importlib import inspect def walk_error(module_name): if not any(map(lambda excl: module_name.startswith(excl), excludes)): raise TaskListError('Unable to inspect module ' + module_name) walker = pkgutil.walk_packages([path], prefix, walk_error) for _, module_name, _ in walker: if any(map(lambda excl: module_name.startswith(excl), excludes)): continue module = importlib.import_module(module_name) classes = inspect.getmembers(module, inspect.isclass) for class_name, obj in classes: # We only want classes that are defined in the module, and not imported ones if obj.__module__ == module_name: yield obj def check_ordering(task): """Checks the ordering of a task in relation to other tasks and their phases. This function checks for a subset of what the strongly connected components algorithm does, but can deliver a more precise error message, namely that there is a conflict between what a task has specified as its predecessors or successors and in which phase it is placed. :param Task task: The task to check the ordering for :raises TaskListError: If there is a conflict between task precedence and phase precedence """ for successor in task.successors: # Run through all successors and throw an error if the phase of the task # lies before the phase of a successor, log a warning if it lies after. 
if task.phase > successor.phase: msg = ("The task {task} is specified as running before {other}, " "but its phase '{phase}' lies after the phase '{other_phase}'" .format(task=task, other=successor, phase=task.phase, other_phase=successor.phase)) raise TaskListError(msg) if task.phase < successor.phase: log.warn("The task {task} is specified as running before {other} " "although its phase '{phase}' already lies before the phase '{other_phase}' " "(or the task has been placed in the wrong phase)" .format(task=task, other=successor, phase=task.phase, other_phase=successor.phase)) for predecessor in task.predecessors: # Run through all successors and throw an error if the phase of the task # lies after the phase of a predecessor, log a warning if it lies before. if task.phase < predecessor.phase: msg = ("The task {task} is specified as running after {other}, " "but its phase '{phase}' lies before the phase '{other_phase}'" .format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase)) raise TaskListError(msg) if task.phase > predecessor.phase: log.warn("The task {task} is specified as running after {other} " "although its phase '{phase}' already lies after the phase '{other_phase}' " "(or the task has been placed in the wrong phase)" .format(task=task, other=predecessor, phase=task.phase, other_phase=predecessor.phase)) def strongly_connected_components(graph): """Find the strongly connected components in a graph using Tarjan's algorithm. 
Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py :param dict graph: mapping of tasks to lists of successor tasks :return: List of tuples that are strongly connected components :rtype: list """ result = [] stack = [] low = {} def visit(node): if node in low: return num = len(low) low[node] = num stack_pos = len(stack) stack.append(node) for successor in graph[node]: visit(successor) low[node] = min(low[node], low[successor]) if num == low[node]: component = tuple(stack[stack_pos:]) del stack[stack_pos:] result.append(component) for item in component: low[item] = len(graph) for node in graph: visit(node) return result def topological_sort(graph): """Runs a topological sort on a graph. Source: http://www.logarithmic.net/pfh-files/blog/01208083168/sort.py :param dict graph: mapping of tasks to lists of successor tasks :return: A list of all tasks in the graph sorted according to their dependencies :rtype: list """ count = {} for node in graph: count[node] = 0 for node in graph: for successor in graph[node]: count[successor] += 1 ready = [node for node in graph if count[node] == 0] result = [] while ready: node = ready.pop(-1) result.append(node) for successor in graph[node]: count[successor] -= 1 if count[successor] == 0: ready.append(successor) return result 
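The two graph helpers above are small enough to try in isolation. Below is a minimal standalone sketch (not part of the repository) of how create_list combines them: first Tarjan's algorithm flags any cycle as a multi-node strongly connected component, then the topological sort produces a valid run order. The three task names in the sample graph are invented for illustration.

```python
# Standalone sketch of the cycle check + ordering done by create_list above.


def strongly_connected_components(graph):
    # Tarjan's algorithm, same structure as the repository version.
    result, stack, low = [], [], {}

    def visit(node):
        if node in low:
            return
        num = len(low)
        low[node] = num
        stack_pos = len(stack)
        stack.append(node)
        for successor in graph[node]:
            visit(successor)
            low[node] = min(low[node], low[successor])
        if num == low[node]:
            component = tuple(stack[stack_pos:])
            del stack[stack_pos:]
            result.append(component)
            for item in component:
                low[item] = len(graph)

    for node in graph:
        visit(node)
    return result


def topological_sort(graph):
    # Kahn-style sort: repeatedly emit nodes with no unprocessed predecessors.
    count = {node: 0 for node in graph}
    for node in graph:
        for successor in graph[node]:
            count[successor] += 1
    ready = [node for node in graph if count[node] == 0]
    result = []
    while ready:
        node = ready.pop()
        result.append(node)
        for successor in graph[node]:
            count[successor] -= 1
            if count[successor] == 0:
                ready.append(successor)
    return result


# Hypothetical dependency graph: each task maps to the tasks that must run after it.
graph = {'partition': {'format', 'mount'}, 'format': {'mount'}, 'mount': set()}

# All components are single nodes, so the graph is acyclic and safe to sort.
assert all(len(c) == 1 for c in strongly_connected_components(graph))
order = topological_sort(graph)
assert order.index('partition') < order.index('format') < order.index('mount')
```

A two-node cycle such as `{'a': {'b'}, 'b': {'a'}}` would instead surface as a component of length 2, which is exactly what create_list counts before raising TaskListError.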
bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/000077500000000000000000000000001323112141500215305ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/__init__.py000066400000000000000000000000001323112141500236270ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/000077500000000000000000000000001323112141500230325ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/extlinux/000077500000000000000000000000001323112141500247125ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/extlinux/boot.txt000066400000000000000000000000421323112141500264120ustar00rootroot00000000000000Wait 5 seconds or press ENTER to bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/extlinux/extlinux.conf000066400000000000000000000013001323112141500274330ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
default l0 prompt 1 timeout 50 label l0 menu label Debian GNU/Linux, kernel {kernel_version} linux {boot_prefix}/vmlinuz-{kernel_version} append initrd={boot_prefix}/initrd.img-{kernel_version} root=UUID={root_uuid} ro quiet console=ttyS0 label l0r menu label Debian GNU/Linux, kernel {kernel_version} (recovery mode) linux {boot_prefix}/vmlinuz-{kernel_version} append initrd={boot_prefix}/initrd.img-{kernel_version} root=UUID={root_uuid} ro console=ttyS0 single text help This option boots the system into recovery mode (single-user) endtext bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/000077500000000000000000000000001323112141500242175ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/expand-root000066400000000000000000000024621323112141500264060ustar00rootroot00000000000000#!/bin/bash ### BEGIN INIT INFO # Provides: expand-root # Required-Start: # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: # Description-Short: Expand the filesystem of the mounted root volume/partition to its maximum possible size # Description: Expand the filesystem of the mounted root volume/partition to its maximum possible size # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. ### END INIT INFO prog=$(basename $0) logger="logger -t $prog" growpart="growpart" hash $growpart 2> /dev/null || { $logger "$growpart was not found on PATH. Unable to expand size." exit 1 } root_device_path="/dev/xvda" root_index="0" # Growpart can fail if the partition is already resized. $growpart $root_device_path $root_index || { $logger "growpart failed. Unable to expand size." 
} device_path="${root_device_path}${root_index}" filesystem=$(blkid -s TYPE -o value ${device_path}) case $filesystem in xfs) xfs_growfs / ;; ext2) resize2fs $device_path ;; ext3) resize2fs $device_path ;; ext4) resize2fs $device_path ;; *) $logger "The filesystem $filesystem was not recognized. Unable to expand size." ;; esac bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/jessie/000077500000000000000000000000001323112141500255015ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/jessie/generate-ssh-hostkeys000066400000000000000000000030351323112141500316610ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: generate-ssh-hostkeys # Required-Start: $local_fs # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: S # Default-Stop: # Description-Short: Generate ssh host keys if they do not exist # Description: Generate ssh host keys if they do not exist. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
### END INIT INFO prog=$(basename $0) logger="logger -t $prog" rsa_key="/etc/ssh/ssh_host_rsa_key" dsa_key="/etc/ssh/ssh_host_dsa_key" ecdsa_key="/etc/ssh/ssh_host_ecdsa_key" ed25519_key="/etc/ssh/ssh_host_ed25519_key" # Exit if the hostkeys already exist if [ -f $rsa_key -a -f $dsa_key -a -f $ecdsa_key -a -f $ed25519_key ]; then exit fi # Generate the ssh host keys [ -f $rsa_key ] || ssh-keygen -f $rsa_key -t rsa -C 'host' -N '' [ -f $dsa_key ] || ssh-keygen -f $dsa_key -t dsa -C 'host' -N '' [ -f $ecdsa_key ] || ssh-keygen -f $ecdsa_key -t ecdsa -C 'host' -N '' [ -f $ed25519_key ] || ssh-keygen -f $ed25519_key -t ed25519 -C 'host' -N '' # Output the public keys to the console # This allows user to get host keys securely through console log echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | $logger ssh-keygen -l -f $rsa_key.pub | $logger ssh-keygen -l -f $dsa_key.pub | $logger ssh-keygen -l -f $ecdsa_key.pub | $logger ssh-keygen -l -f $ed25519_key.pub | $logger echo "------END SSH HOST KEY FINGERPRINTS------" | $logger bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/squeeze/000077500000000000000000000000001323112141500257005ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/squeeze/generate-ssh-hostkeys000066400000000000000000000022731323112141500320630ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: generate-ssh-hostkeys # Required-Start: $local_fs # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: S # Default-Stop: # Description-Short: Generate ssh host keys if they do not exist # Description: Generate ssh host keys if they do not exist. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
### END INIT INFO prog=$(basename $0) logger="logger -t $prog" rsa_key="/etc/ssh/ssh_host_rsa_key" dsa_key="/etc/ssh/ssh_host_dsa_key" # Exit if the hostkeys already exist if [ -f $rsa_key -a -f $dsa_key ]; then exit fi # Generate the ssh host keys [ -f $rsa_key ] || ssh-keygen -f $rsa_key -t rsa -C 'host' -N '' [ -f $dsa_key ] || ssh-keygen -f $dsa_key -t dsa -C 'host' -N '' # Output the public keys to the console # This allows user to get host keys securely through console log echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | $logger ssh-keygen -l -f $rsa_key.pub | $logger ssh-keygen -l -f $dsa_key.pub | $logger echo "------END SSH HOST KEY FINGERPRINTS------" | $logger bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/ssh-generate-hostkeys000066400000000000000000000015311323112141500303760ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: ssh-generate-hostkeys # Required-Start: $local_fs $syslog # Required-Stop: # X-Start-Before: ssh # Default-Start: 2 3 4 5 # Default-Stop: # Short-Description: Generate ssh host keys if they do not exist # Description: Generate ssh host keys if they do not exist. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. ### END INIT INFO set -e DAEMON=/usr/local/sbin/ssh-generate-hostkeys [ -x "$DAEMON" ] || exit 0 . /lib/lsb/init-functions case "$1" in start) $DAEMON exit $? 
;; stop|restart|reload|force-reload|status) ;; *) echo "Usage: $0 {start|stop|restart|force-reload|status}" >&2 exit 1 esac exit 0 bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/wheezy/000077500000000000000000000000001323112141500255325ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/init.d/wheezy/generate-ssh-hostkeys000066400000000000000000000025451323112141500317170ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: generate-ssh-hostkeys # Required-Start: $local_fs # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: S # Default-Stop: # Description-Short: Generate ssh host keys if they do not exist # Description: Generate ssh host keys if they do not exist. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. ### END INIT INFO prog=$(basename $0) logger="logger -t $prog" rsa_key="/etc/ssh/ssh_host_rsa_key" dsa_key="/etc/ssh/ssh_host_dsa_key" ecdsa_key="/etc/ssh/ssh_host_ecdsa_key" # Exit if the hostkeys already exist if [ -f $rsa_key -a -f $dsa_key -a -f $ecdsa_key ]; then exit fi # Generate the ssh host keys [ -f $rsa_key ] || ssh-keygen -f $rsa_key -t rsa -C 'host' -N '' [ -f $dsa_key ] || ssh-keygen -f $dsa_key -t dsa -C 'host' -N '' [ -f $ecdsa_key ] || ssh-keygen -f $ecdsa_key -t ecdsa -C 'host' -N '' # Output the public keys to the console # This allows user to get host keys securely through console log echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | $logger ssh-keygen -l -f $rsa_key.pub | $logger ssh-keygen -l -f $dsa_key.pub | $logger ssh-keygen -l -f $ecdsa_key.pub | $logger echo "------END SSH HOST KEY FINGERPRINTS------" | $logger bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/ssh-generate-hostkeys000066400000000000000000000013521323112141500272120ustar00rootroot00000000000000#!/bin/sh # This file was created by bootstrap-vz. 
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # Generate ssh host keys if they do not exist. # Output the public keys to the console. This allows user to get host # keys securely through console log. set -eu prog="$(basename $0)" logger="logger -t ${prog}" echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | ${logger} for key in ecdsa ed25519 rsa; do keyfile="/etc/ssh/ssh_host_${key}_key" if [ ! -f "${keyfile}" ]; then /usr/bin/ssh-keygen -f "${keyfile}" -t "${key}" -C 'host' -N '' fi /usr/bin/ssh-keygen -l -f "${keyfile}.pub" | ${logger} done echo "------END SSH HOST KEY FINGERPRINTS------" | ${logger} bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/systemd/000077500000000000000000000000001323112141500245225ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/systemd/logind.conf000066400000000000000000000003121323112141500266410ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. [Login] # Disable all TTY getters NAutoVTs=0 ReserveVT=0 bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/assets/systemd/ssh-generate-hostkeys.service000066400000000000000000000007611323112141500323440ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
[Unit] Description=OpenBSD Secure Shell server Host Key Generation ConditionFileNotEmpty=|!/etc/ssh/ssh_host_ecdsa_key ConditionFileNotEmpty=|!/etc/ssh/ssh_host_ed25519_key ConditionFileNotEmpty=|!/etc/ssh/ssh_host_rsa_key Before=ssh.service [Service] ExecStart=/usr/local/sbin/ssh-generate-hostkeys Type=oneshot [Install] WantedBy=multi-user.target bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/bytes.py000066400000000000000000000111271323112141500232320ustar00rootroot00000000000000from exceptions import UnitError def onlybytes(msg): def decorator(func): def check_other(self, other): if not isinstance(other, Bytes): raise UnitError(msg) return func(self, other) return check_other return decorator class Bytes(object): units = {'B': 1, 'KiB': 1024, 'MiB': 1024 * 1024, 'GiB': 1024 * 1024 * 1024, 'TiB': 1024 * 1024 * 1024 * 1024, } def __init__(self, qty): if isinstance(qty, (int, long)): self.qty = qty else: self.qty = Bytes.parse(qty) @staticmethod def parse(qty_str): import re regex = re.compile('^(?P<qty>\d+)(?P<unit>[KMGT]i?B|B)$') parsed = regex.match(qty_str) if parsed is None: raise UnitError('Unable to parse ' + qty_str) qty = int(parsed.group('qty')) unit = parsed.group('unit') if unit[0] in 'KMGT': unit = unit[0] + 'iB' byte_qty = qty * Bytes.units[unit] return byte_qty def get_qty_in(self, unit): if unit[0] in 'KMGT': unit = unit[0] + 'iB' if unit not in Bytes.units: raise UnitError('Unrecognized unit: ' + unit) if self.qty % Bytes.units[unit] != 0: msg = 'Unable to convert {qty} bytes to a whole number in {unit}'.format(qty=self.qty, unit=unit) raise UnitError(msg) return self.qty / Bytes.units[unit] def __repr__(self): converted = str(self.get_qty_in('B')) + 'B' if self.qty == 0: return converted for unit in ['TiB', 'GiB', 'MiB', 'KiB']: try: converted = str(self.get_qty_in(unit)) + unit break except UnitError: pass return converted def __str__(self): return self.__repr__() def __int__(self): return self.qty def __long__(self): return self.qty @onlybytes('Can only 
compare Bytes to Bytes') def __lt__(self, other): return self.qty < other.qty @onlybytes('Can only compare Bytes to Bytes') def __le__(self, other): return self.qty <= other.qty @onlybytes('Can only compare Bytes to Bytes') def __eq__(self, other): return self.qty == other.qty @onlybytes('Can only compare Bytes to Bytes') def __ne__(self, other): return self.qty != other.qty @onlybytes('Can only compare Bytes to Bytes') def __ge__(self, other): return self.qty >= other.qty @onlybytes('Can only compare Bytes to Bytes') def __gt__(self, other): return self.qty > other.qty @onlybytes('Can only add Bytes to Bytes') def __add__(self, other): return Bytes(self.qty + other.qty) @onlybytes('Can only add Bytes to Bytes') def __iadd__(self, other): self.qty += other.qty return self @onlybytes('Can only subtract Bytes from Bytes') def __sub__(self, other): return Bytes(self.qty - other.qty) @onlybytes('Can only subtract Bytes from Bytes') def __isub__(self, other): self.qty -= other.qty return self def __mul__(self, other): if not isinstance(other, (int, long)): raise UnitError('Can only multiply Bytes with integers') return Bytes(self.qty * other) def __imul__(self, other): if not isinstance(other, (int, long)): raise UnitError('Can only multiply Bytes with integers') self.qty *= other return self def __div__(self, other): if isinstance(other, Bytes): return self.qty / other.qty if not isinstance(other, (int, long)): raise UnitError('Can only divide Bytes with integers or Bytes') return Bytes(self.qty / other) def __idiv__(self, other): if isinstance(other, Bytes): self.qty /= other.qty else: if not isinstance(other, (int, long)): raise UnitError('Can only divide Bytes with integers or Bytes') self.qty /= other return self @onlybytes('Can only take modulus of Bytes with Bytes') def __mod__(self, other): return Bytes(self.qty % other.qty) @onlybytes('Can only take modulus of Bytes with Bytes') def __imod__(self, other): self.qty %= other.qty return self def 
__getstate__(self): return {'__class__': self.__module__ + '.' + self.__class__.__name__, 'qty': self.qty, } def __setstate__(self, state): self.qty = state['qty'] bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/exceptions.py000066400000000000000000000021031323112141500242570ustar00rootroot00000000000000 class ManifestError(Exception): def __init__(self, message, manifest_path=None, data_path=None): super(ManifestError, self).__init__(message) self.message = message self.manifest_path = manifest_path self.data_path = data_path self.args = (self.message, self.manifest_path, self.data_path) def __str__(self): if self.data_path is not None: path = '.'.join(map(str, self.data_path)) return ('{msg}\n File path: {file}\n Data path: {datapath}' .format(msg=self.message, file=self.manifest_path, datapath=path)) return '{file}: {msg}'.format(msg=self.message, file=self.manifest_path) class TaskListError(Exception): def __init__(self, message): super(TaskListError, self).__init__(message) self.message = message self.args = (self.message,) def __str__(self): return 'Error in tasklist: ' + self.message class TaskError(Exception): pass class UnexpectedNumMatchesError(Exception): pass class UnitError(Exception): pass bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/000077500000000000000000000000001323112141500221405ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/__init__.py000066400000000000000000000017171323112141500242570ustar00rootroot00000000000000from contextlib import contextmanager def get_partitions(): import re regexp = re.compile('^ *(?P\d+) *(?P\d+) *(?P\d+) (?P\S+)$') matches = {} path = '/proc/partitions' with open(path) as partitions: next(partitions) next(partitions) for line in partitions: match = regexp.match(line) if match is None: raise RuntimeError('Unable to parse {line} in {path}'.format(line=line, path=path)) matches[match.group('dev_name')] = match.groupdict() return matches @contextmanager def unmounted(volume): 
from bootstrapvz.base.fs.partitionmaps.none import NoPartitions p_map = volume.partition_map root_dir = p_map.root.mount_dir p_map.root.unmount() if not isinstance(p_map, NoPartitions): p_map.unmap(volume) yield p_map.map(volume) else: yield p_map.root.mount(destination=root_dir) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/folder.py000066400000000000000000000011721323112141500237660ustar00rootroot00000000000000from bootstrapvz.base.fs.volume import Volume class Folder(Volume): # Override the states this volume can be in (i.e. we can't "format" or "attach" it) events = [{'name': 'create', 'src': 'nonexistent', 'dst': 'attached'}, {'name': 'delete', 'src': 'attached', 'dst': 'deleted'}, ] extension = 'chroot' def create(self, path): self.fsm.create(path=path) def _before_create(self, e): import os self.path = e.path os.mkdir(self.path) def _before_delete(self, e): from shutil import rmtree rmtree(self.path) del self.path bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/logicalvolume.py000066400000000000000000000025541323112141500253620ustar00rootroot00000000000000from bootstrapvz.base.fs.volume import Volume from bootstrapvz.common.tools import log_check_call import os class LogicalVolume(Volume): def __init__(self, partitionmap): super(LogicalVolume, self).__init__(partitionmap) self.vg = '' self.lv = '' def create(self, volumegroup, logicalvolume): self.vg = volumegroup self.lv = logicalvolume image_path = os.path.join(os.sep, 'dev', self.vg, self.lv) self.fsm.create(image_path=image_path) def _before_create(self, e): self.image_path = e.image_path lv_size = str(self.size.bytes.get_qty_in('MiB')) log_check_call(['lvcreate', '--size', '{mib}M'.format(mib=lv_size), '--name', self.lv, self.vg]) def _before_attach(self, e): log_check_call(['lvchange', '--activate', 'y', self.image_path]) [self.loop_device_path] = log_check_call(['losetup', '--show', '--find', '--partscan', self.image_path]) self.device_path = self.loop_device_path def 
_before_detach(self, e): log_check_call(['losetup', '--detach', self.loop_device_path]) log_check_call(['lvchange', '--activate', 'n', self.image_path]) del self.loop_device_path self.device_path = None def delete(self): log_check_call(['lvremove', '-f', self.image_path]) del self.image_path bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/loopbackvolume.py000066400000000000000000000016371323112141500255430ustar00rootroot00000000000000from bootstrapvz.base.fs.volume import Volume from ..tools import log_check_call class LoopbackVolume(Volume): extension = 'raw' def create(self, image_path): self.fsm.create(image_path=image_path) def _before_create(self, e): self.image_path = e.image_path size_opt = '--size={mib}M'.format(mib=self.size.bytes.get_qty_in('MiB')) log_check_call(['truncate', size_opt, self.image_path]) def _before_attach(self, e): [self.loop_device_path] = log_check_call(['losetup', '--show', '--find', '--partscan', self.image_path]) self.device_path = self.loop_device_path def _before_detach(self, e): log_check_call(['losetup', '--detach', self.loop_device_path]) del self.loop_device_path self.device_path = None def _before_delete(self, e): from os import remove remove(self.image_path) del self.image_path bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/qcow2volume.py000066400000000000000000000001661323112141500250000ustar00rootroot00000000000000from qemuvolume import QEMUVolume class Qcow2Volume(QEMUVolume): extension = 'qcow2' qemu_format = 'qcow2' bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/qemuvolume.py000066400000000000000000000072541323112141500247210ustar00rootroot00000000000000from loopbackvolume import LoopbackVolume from bootstrapvz.base.fs.exceptions import VolumeError from ..tools import log_check_call from . 
import get_partitions class QEMUVolume(LoopbackVolume): def _before_create(self, e): self.image_path = e.image_path vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M' log_check_call(['qemu-img', 'create', '-f', self.qemu_format, self.image_path, vol_size]) def _check_nbd_module(self): from bootstrapvz.base.fs.partitionmaps.none import NoPartitions if isinstance(self.partition_map, NoPartitions): if not self._module_loaded('nbd'): msg = ('The kernel module `nbd\' must be loaded ' '(`modprobe nbd\') to attach .{extension} images' .format(extension=self.extension)) raise VolumeError(msg) else: num_partitions = len(self.partition_map.partitions) if not self._module_loaded('nbd'): msg = ('The kernel module `nbd\' must be loaded ' '(run `modprobe nbd max_part={num_partitions}\') ' 'to attach .{extension} images' .format(num_partitions=num_partitions, extension=self.extension)) raise VolumeError(msg) nbd_max_part = int(self._module_param('nbd', 'max_part')) if nbd_max_part < num_partitions: # Found here: http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/ msg = ('The kernel module `nbd\' was loaded with the max_part ' 'parameter set to {max_part}, which is below ' 'the amount of partitions for this volume ({num_partitions}). ' 'Reload the nbd kernel module with max_part set to at least {num_partitions} ' '(`rmmod nbd; modprobe nbd max_part={num_partitions}\').' 
.format(max_part=nbd_max_part, num_partitions=num_partitions)) raise VolumeError(msg) def _before_attach(self, e): self._check_nbd_module() self.loop_device_path = self._find_free_nbd_device() log_check_call(['qemu-nbd', '--connect', self.loop_device_path, self.image_path]) self.device_path = self.loop_device_path def _before_detach(self, e): log_check_call(['qemu-nbd', '--disconnect', self.loop_device_path]) del self.loop_device_path self.device_path = None def _module_loaded(self, module): import re regexp = re.compile('^{module} +'.format(module=module)) with open('/proc/modules') as loaded_modules: for line in loaded_modules: match = regexp.match(line) if match is not None: return True return False def _module_param(self, module, param): import os.path param_path = os.path.join('/sys/module', module, 'parameters', param) with open(param_path) as param: return param.read().strip() # From http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg02201.html # Apparently it's not in the current qemu-nbd shipped with wheezy def _is_nbd_used(self, device_name): return device_name in get_partitions() def _find_free_nbd_device(self): import os.path for i in xrange(0, 15): device_name = 'nbd' + str(i) if not self._is_nbd_used(device_name): return os.path.join('/dev', device_name) raise VolumeError('Unable to find free nbd device.') def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/virtualdiskimage.py000066400000000000000000000006471323112141500260650ustar00rootroot00000000000000from qemuvolume import QEMUVolume class VirtualDiskImage(QEMUVolume): extension = 'vdi' qemu_format = 'vdi' # VDI format does not have an URI (check here: https://forums.virtualbox.org/viewtopic.php?p=275185#p275185) ovf_uri = None def get_uuid(self): import uuid with open(self.image_path) as image: image.seek(392) return uuid.UUID(bytes_le=image.read(16)) 
bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/virtualharddisk.py000066400000000000000000000014341323112141500257140ustar00rootroot00000000000000from qemuvolume import QEMUVolume from ..tools import log_check_call class VirtualHardDisk(QEMUVolume): extension = 'vhd' qemu_format = 'vpc' ovf_uri = 'http://go.microsoft.com/fwlink/?LinkId=137171' # Azure requires the image size to be a multiple of 1 MiB. # VHDs are dynamic by default, so we add the option # to make the image size fixed (subformat=fixed) def _before_create(self, e): self.image_path = e.image_path vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M' log_check_call(['qemu-img', 'create', '-o', 'subformat=fixed', '-f', self.qemu_format, self.image_path, vol_size]) def get_uuid(self): if not hasattr(self, 'uuid'): import uuid self.uuid = uuid.uuid4() return self.uuid bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fs/virtualmachinedisk.py000066400000000000000000000016361323112141500264060ustar00rootroot00000000000000from qemuvolume import QEMUVolume class VirtualMachineDisk(QEMUVolume): extension = 'vmdk' qemu_format = 'vmdk' ovf_uri = 'http://www.vmware.com/specifications/vmdk.html#sparse' def get_uuid(self): if not hasattr(self, 'uuid'): import uuid self.uuid = uuid.uuid4() return self.uuid # import uuid # with open(self.image_path) as image: # line = '' # lines_read = 0 # while 'ddb.uuid.image="' not in line: # line = image.read() # lines_read += 1 # if lines_read > 100: # from common.exceptions import VolumeError # raise VolumeError('Unable to find UUID in VMDK file.') # import re # matches = re.search('ddb.uuid.image="(?P[^"]+)"', line) # return uuid.UUID(hex=matches.group('uuid')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/fsm_proxy.py000066400000000000000000000037611323112141500241370ustar00rootroot00000000000000 class FSMProxy(object): def __init__(self, cfg): from fysom import Fysom events = set([event['name'] for event in cfg['events']]) cfg['callbacks'] = 
self.collect_event_listeners(events, cfg['callbacks']) self.fsm = Fysom(cfg) self.attach_proxy_methods(self.fsm, events) def collect_event_listeners(self, events, callbacks): callbacks = callbacks.copy() callback_names = [] for event in events: callback_names.append(('_before_' + event, 'onbefore' + event)) callback_names.append(('_after_' + event, 'onafter' + event)) for fn_name, listener in callback_names: fn = getattr(self, fn_name, None) if callable(fn): if listener in callbacks: old_fn = callbacks[listener] def wrapper(e, old_fn=old_fn, fn=fn): old_fn(e) fn(e) callbacks[listener] = wrapper else: callbacks[listener] = fn return callbacks def attach_proxy_methods(self, fsm, events): def make_proxy(fsm, event): fn = getattr(fsm, event) def proxy(*args, **kwargs): if len(args) > 0: raise FSMProxyError('FSMProxy event listeners only accept named arguments.') fn(**kwargs) return proxy for event in events: if not hasattr(self, event): setattr(self, event, make_proxy(fsm, event)) def __getstate__(self): state = {} for key, value in self.__dict__.iteritems(): if callable(value) or key == 'fsm': continue state[key] = value state['__class__'] = self.__module__ + '.' 
+ self.__class__.__name__ return state def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] class FSMProxyError(Exception): pass bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/minify_json.py000066400000000000000000000072271323112141500244360ustar00rootroot00000000000000''' Created on 20/01/2011 v0.1 (C) Gerald Storer MIT License Based on JSON.minify.js: https://github.com/getify/JSON.minify ''' import re def json_minify(json,strip_space=True): tokenizer=re.compile('"|(/\*)|(\*/)|(//)|\n|\r') in_string = False in_multiline_comment = False in_singleline_comment = False new_str = [] from_index = 0 # from is a keyword in Python for match in re.finditer(tokenizer,json): if not in_multiline_comment and not in_singleline_comment: tmp2 = json[from_index:match.start()] if not in_string and strip_space: tmp2 = re.sub('[ \t\n\r]*','',tmp2) # replace only white space defined in standard new_str.append(tmp2) from_index = match.end() if match.group() == '"' and not in_multiline_comment and not in_singleline_comment: escaped = re.search('(\\\\)*$',json[:match.start()]) if not in_string or escaped is None or len(escaped.group()) % 2 == 0: # start of string with ", or unescaped " character found to end string in_string = not in_string from_index -= 1 # include " character in next catch elif match.group() == '/*' and not in_string and not in_multiline_comment and not in_singleline_comment: in_multiline_comment = True elif match.group() == '*/' and not in_string and in_multiline_comment and not in_singleline_comment: in_multiline_comment = False elif match.group() == '//' and not in_string and not in_multiline_comment and not in_singleline_comment: in_singleline_comment = True elif (match.group() == '\n' or match.group() == '\r') and not in_string and not in_multiline_comment and in_singleline_comment: in_singleline_comment = False elif not in_multiline_comment and not in_singleline_comment and ( match.group() not in ['\n','\r',' ','\t'] or not 
strip_space): new_str.append(match.group()) new_str.append(json[from_index:]) return ''.join(new_str) if __name__ == '__main__': import json # requires Python 2.6+ to run tests def test_json(s): return json.loads(json_minify(s)) test1 = '''// this is a JSON file with comments { "foo": "bar", // this is cool "bar": [ "baz", "bum", "zam" ], /* the rest of this document is just fluff in case you are interested. */ "something": 10, "else": 20 } /* NOTE: You can easily strip the whitespace and comments from such a file with the JSON.minify() project hosted here on github at http://github.com/getify/JSON.minify */ ''' test1_res = '''{"foo":"bar","bar":["baz","bum","zam"],"something":10,"else":20}''' test2 = ''' {"/*":"*/","//":"",/*"//"*/"/*/":// "//"} ''' test2_res = '''{"/*":"*/","//":"","/*/":"//"}''' test3 = r'''/* this is a multi line comment */{ "foo" : "bar/*"// something , "b\"az":/* something else */"blah" } ''' test3_res = r'''{"foo":"bar/*","b\"az":"blah"}''' test4 = r'''{"foo": "ba\"r//", "bar\\": "b\\\"a/*z", "baz\\\\": /* yay */ "fo\\\\\"*/o" } ''' test4_res = r'''{"foo":"ba\"r//","bar\\":"b\\\"a/*z","baz\\\\":"fo\\\\\"*/o"}''' assert test_json(test1) == json.loads(test1_res),'Failed test 1' assert test_json(test2) == json.loads(test2_res),'Failed test 2' assert test_json(test3) == json.loads(test3_res),'Failed test 3' assert test_json(test4) == json.loads(test4_res),'Failed test 4' if __debug__: # Don't print passed message if the asserts didn't run print 'Passed all tests' bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/phases.py000066400000000000000000000026761323112141500234000ustar00rootroot00000000000000from bootstrapvz.base.phase import Phase validation = Phase('Validation', 'Validating data, files, etc.') preparation = Phase('Preparation', 'Initializing connections, fetching data etc.') volume_creation = Phase('Volume creation', 'Creating the volume to bootstrap onto') volume_preparation = Phase('Volume preparation', 'Formatting the bootstrap 
volume') volume_mounting = Phase('Volume mounting', 'Mounting bootstrap volume') os_installation = Phase('OS installation', 'Installing the operating system') package_installation = Phase('Package installation', 'Installing software') system_modification = Phase('System modification', 'Modifying configuration files, adding resources, etc.') user_modification = Phase('User modification', 'Running user specified modifications') system_cleaning = Phase('System cleaning', 'Removing sensitive data, temporary files and other leftovers') volume_unmounting = Phase('Volume unmounting', 'Unmounting the bootstrap volume') image_registration = Phase('Image registration', 'Uploading/Registering with the provider') cleaning = Phase('Cleaning', 'Removing temporary files') order = [validation, preparation, volume_creation, volume_preparation, volume_mounting, os_installation, package_installation, system_modification, user_modification, system_cleaning, volume_unmounting, image_registration, cleaning, ] bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/releases.py000066400000000000000000000036251323112141500237130ustar00rootroot00000000000000 class _Release(object): def __init__(self, codename, version): self.codename = codename self.version = version def __cmp__(self, other): return self.version - other.version def __str__(self): return self.codename def __getstate__(self): state = self.__dict__.copy() state['__class__'] = self.__module__ + '.' 
+ self.__class__.__name__ return state def __setstate__(self, state): for key in state: self.__dict__[key] = state[key] class _ReleaseAlias(_Release): def __init__(self, alias, release): self.alias = alias self.release = release super(_ReleaseAlias, self).__init__(self.release.codename, self.release.version) def __str__(self): return self.alias sid = _Release('sid', 11) buster = _Release('buster', 10) stretch = _Release('stretch', 9) jessie = _Release('jessie', 8) wheezy = _Release('wheezy', 7) squeeze = _Release('squeeze', 6.0) lenny = _Release('lenny', 5.0) etch = _Release('etch', 4.0) sarge = _Release('sarge', 3.1) woody = _Release('woody', 3.0) potato = _Release('potato', 2.2) slink = _Release('slink', 2.1) hamm = _Release('hamm', 2.0) bo = _Release('bo', 1.3) rex = _Release('rex', 1.2) buzz = _Release('buzz', 1.1) unstable = _ReleaseAlias('unstable', sid) testing = _ReleaseAlias('testing', buster) stable = _ReleaseAlias('stable', stretch) oldstable = _ReleaseAlias('oldstable', jessie) def get_release(release_name): """Normalizes the release codenames This allows tasks to query for release codenames rather than 'stable', 'unstable' etc. """ from . 
import releases
    release = getattr(releases, release_name, None)
    if release is None or not isinstance(release, _Release):
        raise UnknownReleaseException('The release `{name}\' is unknown'.format(name=release_name))
    return release


class UnknownReleaseException(Exception):
    pass

bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/sectors.py
from exceptions import UnitError
from bytes import Bytes


def onlysectors(msg):
    def decorator(func):
        def check_other(self, other):
            if not isinstance(other, Sectors):
                raise UnitError(msg)
            return func(self, other)
        return check_other
    return decorator


class Sectors(object):
    def __init__(self, quantity, sector_size):
        if isinstance(sector_size, Bytes):
            self.sector_size = sector_size
        else:
            self.sector_size = Bytes(sector_size)
        if isinstance(quantity, Bytes):
            self.bytes = quantity
        else:
            if isinstance(quantity, (int, long)):
                self.bytes = self.sector_size * quantity
            else:
                self.bytes = Bytes(quantity)

    def get_sectors(self):
        return self.bytes / self.sector_size

    def __repr__(self):
        return str(self.get_sectors()) + 's'

    def __str__(self):
        return self.__repr__()

    def __int__(self):
        return self.get_sectors()

    def __long__(self):
        return self.get_sectors()

    @onlysectors('Can only compare sectors with sectors')
    def __lt__(self, other):
        return self.bytes < other.bytes

    @onlysectors('Can only compare sectors with sectors')
    def __le__(self, other):
        return self.bytes <= other.bytes

    @onlysectors('Can only compare sectors with sectors')
    def __eq__(self, other):
        return self.bytes == other.bytes

    @onlysectors('Can only compare sectors with sectors')
    def __ne__(self, other):
        return self.bytes != other.bytes

    @onlysectors('Can only compare sectors with sectors')
    def __ge__(self, other):
        return self.bytes >= other.bytes

    @onlysectors('Can only compare sectors with sectors')
    def __gt__(self, other):
        return self.bytes > other.bytes

    def __add__(self, other):
        if isinstance(other, (int, long)):
            return 
Sectors(self.bytes + self.sector_size * other, self.sector_size) if isinstance(other, Bytes): return Sectors(self.bytes + other, self.sector_size) if isinstance(other, Sectors): if self.sector_size != other.sector_size: raise UnitError('Cannot sum sectors with different sector sizes') return Sectors(self.bytes + other.bytes, self.sector_size) raise UnitError('Can only add sectors, bytes or integers to sectors') def __iadd__(self, other): if isinstance(other, (int, long)): self.bytes += self.sector_size * other return self if isinstance(other, Bytes): self.bytes += other return self if isinstance(other, Sectors): if self.sector_size != other.sector_size: raise UnitError('Cannot sum sectors with different sector sizes') self.bytes += other.bytes return self raise UnitError('Can only add sectors, bytes or integers to sectors') def __sub__(self, other): if isinstance(other, (int, long)): return Sectors(self.bytes - self.sector_size * other, self.sector_size) if isinstance(other, Bytes): return Sectors(self.bytes - other, self.sector_size) if isinstance(other, Sectors): if self.sector_size != other.sector_size: raise UnitError('Cannot subtract sectors with different sector sizes') return Sectors(self.bytes - other.bytes, self.sector_size) raise UnitError('Can only subtract sectors, bytes or integers from sectors') def __isub__(self, other): if isinstance(other, (int, long)): self.bytes -= self.sector_size * other return self if isinstance(other, Bytes): self.bytes -= other return self if isinstance(other, Sectors): if self.sector_size != other.sector_size: raise UnitError('Cannot subtract sectors with different sector sizes') self.bytes -= other.bytes return self raise UnitError('Can only subtract sectors, bytes or integers from sectors') def __mul__(self, other): if isinstance(other, (int, long)): return Sectors(self.bytes * other, self.sector_size) else: raise UnitError('Can only multiply sectors with integers') def __imul__(self, other): if isinstance(other, (int, 
long)): self.bytes *= other return self else: raise UnitError('Can only multiply sectors with integers') def __div__(self, other): if isinstance(other, (int, long)): return Sectors(self.bytes / other, self.sector_size) if isinstance(other, Sectors): if self.sector_size == other.sector_size: return self.bytes / other.bytes else: raise UnitError('Cannot divide sectors with different sector sizes') raise UnitError('Can only divide sectors with integers or sectors') def __idiv__(self, other): if isinstance(other, (int, long)): self.bytes /= other return self if isinstance(other, Sectors): if self.sector_size == other.sector_size: self.bytes /= other.bytes return self else: raise UnitError('Cannot divide sectors with different sector sizes') raise UnitError('Can only divide sectors with integers or sectors') @onlysectors('Can only take modulus of sectors with sectors') def __mod__(self, other): if self.sector_size == other.sector_size: return Sectors(self.bytes % other.bytes, self.sector_size) else: raise UnitError('Cannot take modulus of sectors with different sector sizes') @onlysectors('Can only take modulus of sectors with sectors') def __imod__(self, other): if self.sector_size == other.sector_size: self.bytes %= other.bytes return self else: raise UnitError('Cannot take modulus of sectors with different sector sizes') def __getstate__(self): return {'__class__': self.__module__ + '.' 
+ self.__class__.__name__, 'sector_size': self.sector_size, 'bytes': self.bytes, } def __setstate__(self, state): self.sector_size = state['sector_size'] self.bytes = state['bytes'] bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/task_groups.py000066400000000000000000000172471323112141500244560ustar00rootroot00000000000000from tasks import workspace from tasks import packages from tasks import host from tasks import grub from tasks import extlinux from tasks import bootstrap from tasks import volume from tasks import loopback from tasks import filesystem from tasks import partitioning from tasks import cleanup from tasks import apt from tasks import security from tasks import locale from tasks import network from tasks import initd from tasks import ssh from tasks import kernel from tasks import folder def get_standard_groups(manifest): group = [] group.extend(get_base_group(manifest)) group.extend(volume_group) if manifest.volume['partitions']['type'] != 'none': group.extend(partitioning_group) if 'boot' in manifest.volume['partitions']: group.extend(boot_partition_group) group.extend(mounting_group) group.extend(kernel_group) group.extend(get_fs_specific_group(manifest)) group.extend(get_network_group(manifest)) group.extend(get_apt_group(manifest)) group.extend(security_group) group.extend(get_locale_group(manifest)) group.extend(get_bootloader_group(manifest)) group.extend(get_cleanup_group(manifest)) return group def get_base_group(manifest): group = [workspace.CreateWorkspace, bootstrap.AddRequiredCommands, host.CheckExternalCommands, bootstrap.Bootstrap, workspace.DeleteWorkspace, ] if manifest.bootstrapper.get('tarball', False): group.append(bootstrap.MakeTarball) if manifest.bootstrapper.get('include_packages', False): group.append(bootstrap.IncludePackagesInBootstrap) if manifest.bootstrapper.get('exclude_packages', False): group.append(bootstrap.ExcludePackagesInBootstrap) return group volume_group = [volume.Attach, volume.Detach, 
filesystem.AddRequiredCommands, filesystem.Format, filesystem.FStab, ] partitioning_group = [partitioning.AddRequiredCommands, partitioning.PartitionVolume, partitioning.MapPartitions, partitioning.UnmapPartitions, ] boot_partition_group = [filesystem.CreateBootMountDir, filesystem.MountBoot, ] mounting_group = [filesystem.CreateMountDir, filesystem.MountRoot, filesystem.MountAdditional, filesystem.MountSpecials, filesystem.CopyMountTable, filesystem.RemoveMountTable, filesystem.UnmountRoot, filesystem.DeleteMountDir, ] kernel_group = [kernel.DetermineKernelVersion, kernel.UpdateInitramfs, ] ssh_group = [ssh.AddOpenSSHPackage, ssh.DisableSSHPasswordAuthentication, ssh.DisableSSHDNSLookup, ssh.AddSSHKeyGeneration, initd.InstallInitScripts, ssh.ShredHostkeys, ] def get_network_group(manifest): if ( manifest.bootstrapper.get('variant', None) == 'minbase' and 'netbase' not in manifest.bootstrapper.get('include_packages', []) ): # minbase has no networking return [] group = [network.ConfigureNetworkIF, network.RemoveDNSInfo] if manifest.system.get('hostname', False): group.append(network.SetHostname) else: group.append(network.RemoveHostname) return group def get_apt_group(manifest): group = [apt.AddDefaultSources, apt.WriteSources, apt.DisableDaemonAutostart, apt.AptUpdate, apt.AptUpgrade, packages.InstallPackages, apt.PurgeUnusedPackages, apt.AptClean, apt.EnableDaemonAutostart, ] if 'sources' in manifest.packages: group.append(apt.AddManifestSources) if 'trusted-keys' in manifest.packages: group.append(apt.ValidateTrustedKeys) group.append(apt.InstallTrustedKeys) if 'preferences' in manifest.packages: group.append(apt.AddManifestPreferences) group.append(apt.WritePreferences) if 'apt.conf.d' in manifest.packages: group.append(apt.WriteConfiguration) if 'install' in manifest.packages: group.append(packages.AddManifestPackages) if manifest.packages.get('install_standard', False): group.append(packages.AddTaskselStandardPackages) return group security_group = 
[security.EnableShadowConfig] def get_locale_group(manifest): from bootstrapvz.common.releases import jessie group = [ locale.LocaleBootstrapPackage, locale.GenerateLocale, locale.SetTimezone, ] if manifest.release > jessie: group.append(locale.SetLocalTimeLink) else: group.append(locale.SetLocalTimeCopy) return group def get_bootloader_group(manifest): from bootstrapvz.common.releases import jessie from bootstrapvz.common.releases import stretch group = [] if manifest.system['bootloader'] == 'grub': group.extend([grub.AddGrubPackage, grub.InitGrubConfig, grub.SetGrubTerminalToConsole, grub.SetGrubConsolOutputDeviceToSerial, grub.RemoveGrubTimeout, grub.DisableGrubRecovery, grub.WriteGrubConfig]) if manifest.release < jessie: group.append(grub.InstallGrub_1_99) else: group.append(grub.InstallGrub_2) if manifest.release >= stretch: group.append(grub.DisablePNIN) if manifest.system['bootloader'] == 'extlinux': group.append(extlinux.AddExtlinuxPackage) if manifest.release < jessie: group.extend([extlinux.ConfigureExtlinux, extlinux.InstallExtlinux]) else: group.extend([extlinux.ConfigureExtlinuxJessie, extlinux.InstallExtlinuxJessie]) return group def get_fs_specific_group(manifest): partitions = manifest.volume['partitions'] fs_specific_tasks = {'ext2': [filesystem.TuneVolumeFS], 'ext3': [filesystem.TuneVolumeFS], 'ext4': [filesystem.TuneVolumeFS], 'xfs': [filesystem.AddXFSProgs], } group = set() if 'boot' in partitions: group.update(fs_specific_tasks.get(partitions['boot']['filesystem'], [])) if 'root' in partitions: group.update(fs_specific_tasks.get(partitions['root']['filesystem'], [])) return list(group) def get_cleanup_group(manifest): from bootstrapvz.common.releases import jessie group = [cleanup.ClearMOTD, cleanup.CleanTMP, ] if manifest.release >= jessie: group.append(cleanup.ClearMachineId) return group rollback_map = {workspace.CreateWorkspace: workspace.DeleteWorkspace, loopback.Create: volume.Delete, volume.Attach: volume.Detach, 
partitioning.MapPartitions: partitioning.UnmapPartitions, filesystem.CreateMountDir: filesystem.DeleteMountDir, filesystem.MountRoot: filesystem.UnmountRoot, folder.Create: folder.Delete, } def get_standard_rollback_tasks(completed): rollback_tasks = set() for task in completed: if task not in rollback_map: continue counter = rollback_map[task] if task in completed and counter not in completed: rollback_tasks.add(counter) return rollback_tasks bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/000077500000000000000000000000001323112141500226555ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/__init__.py000066400000000000000000000001301323112141500247600ustar00rootroot00000000000000from bootstrapvz.common.tools import rel_path assets = rel_path(__file__, '../assets') bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/apt.py000066400000000000000000000235401323112141500240170ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tools import log_check_call from bootstrapvz.common.tools import rel_path import locale import logging import os class ValidateTrustedKeys(Task): description = 'Validate apt trusted keys' phase = phases.validation @classmethod def run(cls, info): from bootstrapvz.common.tools import log_call for i, rel_key_path in enumerate(info.manifest.packages.get('trusted-keys', {})): key_path = rel_path(info.manifest.path, rel_key_path) if not os.path.isfile(key_path): info.manifest.validation_error('File not found: {}'.format(key_path), ['packages', 'trusted-keys', i]) from tempfile import mkdtemp from shutil import rmtree tempdir = mkdtemp() status, _, _ = log_call( ['gpg', '--quiet', '--homedir', tempdir, '--keyring', key_path, '-k'] ) rmtree(tempdir) if status != 0: info.manifest.validation_error('Invalid GPG keyring: {}'.format(key_path), ['packages', 'trusted-keys', i]) class AddManifestSources(Task): description = 'Adding sources 
from the manifest' phase = phases.preparation @classmethod def run(cls, info): for name, lines in info.manifest.packages['sources'].iteritems(): for line in lines: info.source_lists.add(name, line) class AddDefaultSources(Task): description = 'Adding default release sources' phase = phases.preparation predecessors = [AddManifestSources] @classmethod def run(cls, info): from bootstrapvz.common.releases import sid, wheezy include_src = info.manifest.packages.get('include-source-type', False) components = ' '.join(info.manifest.packages.get('components', ['main'])) info.source_lists.add('main', 'deb {apt_mirror} {system.release} ' + components) if include_src: info.source_lists.add('main', 'deb-src {apt_mirror} {system.release} ' + components) if info.manifest.release != sid and info.manifest.release >= wheezy: info.source_lists.add('main', 'deb {apt_security} {system.release}/updates ' + components) if include_src: info.source_lists.add('main', 'deb-src {apt_security} {system.release}/updates ' + components) info.source_lists.add('main', 'deb {apt_mirror} {system.release}-updates ' + components) if include_src: info.source_lists.add('main', 'deb-src {apt_mirror} {system.release}-updates ' + components) class AddBackports(Task): description = 'Adding backports to the apt sources' phase = phases.preparation predecessors = [AddDefaultSources] @classmethod def run(cls, info): from bootstrapvz.common.releases import testing from bootstrapvz.common.releases import unstable if info.source_lists.target_exists('{system.release}-backports'): msg = ('{system.release}-backports target already exists').format(**info.manifest_vars) logging.getLogger(__name__).info(msg) elif info.manifest.release == testing: logging.getLogger(__name__).info('There are no backports for testing') elif info.manifest.release == unstable: logging.getLogger(__name__).info('There are no backports for sid/unstable') else: info.source_lists.add('backports', 'deb {apt_mirror} {system.release}-backports 
main') info.source_lists.add('backports', 'deb-src {apt_mirror} {system.release}-backports main') class AddManifestPreferences(Task): description = 'Adding preferences from the manifest' phase = phases.preparation @classmethod def run(cls, info): for name, preferences in info.manifest.packages['preferences'].iteritems(): info.preference_lists.add(name, preferences) class InstallTrustedKeys(Task): description = 'Installing trusted keys' phase = phases.package_installation @classmethod def run(cls, info): from shutil import copy for rel_key_path in info.manifest.packages['trusted-keys']: key_path = rel_path(info.manifest.path, rel_key_path) key_name = os.path.basename(key_path) destination = os.path.join(info.root, 'etc/apt/trusted.gpg.d', key_name) copy(key_path, destination) class WriteConfiguration(Task): description = 'Write configuration to apt.conf.d from the manifest' phase = phases.package_installation @classmethod def run(cls, info): for name, val in info.manifest.packages.get('apt.conf.d', {}).iteritems(): if name == 'main': path = os.path.join(info.root, 'etc/apt/apt.conf') else: path = os.path.join(info.root, 'etc/apt/apt.conf.d', name) with open(path, 'w') as conf_file: conf_file.write(val + '\n') class WriteSources(Task): description = 'Writing aptitude sources to disk' phase = phases.package_installation predecessors = [InstallTrustedKeys] @classmethod def run(cls, info): if not info.source_lists.target_exists(info.manifest.system['release']): import logging log = logging.getLogger(__name__) log.warn('No default target has been specified in the sources list, ' 'installing packages may fail') for name, sources in info.source_lists.sources.iteritems(): if name == 'main': list_path = os.path.join(info.root, 'etc/apt/sources.list') else: list_path = os.path.join(info.root, 'etc/apt/sources.list.d/', name + '.list') with open(list_path, 'w') as source_list: for source in sources: source_list.write(str(source) + '\n') class WritePreferences(Task): description
= 'Writing aptitude preferences to disk' phase = phases.package_installation @classmethod def run(cls, info): for name, preferences in info.preference_lists.preferences.iteritems(): if name == 'main': list_path = os.path.join(info.root, 'etc/apt/preferences') else: list_path = os.path.join(info.root, 'etc/apt/preferences.d/', name) with open(list_path, 'w') as preference_list: for preference in preferences: preference_list.write(str(preference) + '\n') class DisableDaemonAutostart(Task): description = 'Disabling daemon autostart' phase = phases.package_installation @classmethod def run(cls, info): rc_policy_path = os.path.join(info.root, 'usr/sbin/policy-rc.d') with open(rc_policy_path, 'w') as rc_policy: rc_policy.write(('#!/bin/sh\n' 'exit 101')) os.chmod(rc_policy_path, 0755) initictl_path = os.path.join(info.root, 'sbin/initctl') with open(initictl_path, 'w') as initctl: initctl.write(('#!/bin/sh\n' 'exit 0')) os.chmod(initictl_path, 0755) class AptUpdate(Task): description = 'Updating the package cache' phase = phases.package_installation predecessors = [locale.GenerateLocale, WriteConfiguration, WriteSources, WritePreferences] @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'apt-get', 'update']) class AptUpgrade(Task): description = 'Upgrading packages and fixing broken dependencies' phase = phases.package_installation predecessors = [AptUpdate, DisableDaemonAutostart] @classmethod def run(cls, info): from subprocess import CalledProcessError try: log_check_call(['chroot', info.root, 'apt-get', 'install', '--fix-broken', '--no-install-recommends', '--assume-yes']) log_check_call(['chroot', info.root, 'apt-get', 'upgrade', '--no-install-recommends', '--assume-yes']) except CalledProcessError as e: if e.returncode == 100: msg = ('apt exited with status code 100. ' 'This can sometimes occur when package retrieval times out or a package extraction failed. 
' 'apt might succeed if you try bootstrapping again.') logging.getLogger(__name__).warn(msg) raise class PurgeUnusedPackages(Task): description = 'Removing unused packages' phase = phases.system_cleaning @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'apt-get', 'autoremove', '--purge', '--assume-yes']) class AptClean(Task): description = 'Clearing the aptitude cache' phase = phases.system_cleaning @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'apt-get', 'clean']) lists = os.path.join(info.root, 'var/lib/apt/lists') for list_file in [os.path.join(lists, f) for f in os.listdir(lists)]: if os.path.isfile(list_file): os.remove(list_file) class EnableDaemonAutostart(Task): description = 'Re-enabling daemon autostart after installation' phase = phases.system_cleaning @classmethod def run(cls, info): os.remove(os.path.join(info.root, 'usr/sbin/policy-rc.d')) os.remove(os.path.join(info.root, 'sbin/initctl')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/boot.py000066400000000000000000000034521323112141500241760ustar00rootroot00000000000000from bootstrapvz.base import Task from .. import phases import os.path from . 
import assets class UpdateInitramfs(Task): description = 'Updating initramfs' phase = phases.system_modification @classmethod def run(cls, info): from ..tools import log_check_call log_check_call(['chroot', info.root, 'update-initramfs', '-u']) class BlackListModules(Task): description = 'Blacklisting kernel modules' phase = phases.system_modification successors = [UpdateInitramfs] @classmethod def run(cls, info): blacklist_path = os.path.join(info.root, 'etc/modprobe.d/blacklist.conf') with open(blacklist_path, 'a') as blacklist: blacklist.write(('# disable pc speaker and floppy\n' 'blacklist pcspkr\n' 'blacklist floppy\n')) class DisableGetTTYs(Task): description = 'Disabling getty processes' phase = phases.system_modification @classmethod def run(cls, info): # Forward compatible check for jessie from bootstrapvz.common.releases import jessie if info.manifest.release < jessie: from ..tools import sed_i inittab_path = os.path.join(info.root, 'etc/inittab') tty1 = '1:2345:respawn:/sbin/getty 38400 tty1' sed_i(inittab_path, '^' + tty1, '#' + tty1) ttyx = ':23:respawn:/sbin/getty 38400 tty' for i in range(2, 7): i = str(i) sed_i(inittab_path, '^' + i + ttyx + i, '#' + i + ttyx + i) else: from shutil import copy logind_asset_path = os.path.join(assets, 'systemd/logind.conf') logind_destination = os.path.join(info.root, 'etc/systemd/logind.conf') copy(logind_asset_path, logind_destination) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/bootstrap.py000066400000000000000000000104441323112141500252470ustar00rootroot00000000000000from bootstrapvz.base import Task from .. 
import phases from ..exceptions import TaskError import host import logging import os.path log = logging.getLogger(__name__) class AddRequiredCommands(Task): description = 'Adding commands required for bootstrapping Debian' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): info.host_dependencies['debootstrap'] = 'debootstrap' def get_bootstrap_args(info): executable = ['debootstrap'] arch = info.manifest.system.get('userspace_architecture', info.manifest.system.get('architecture')) options = ['--arch=' + arch] if 'variant' in info.manifest.bootstrapper: options.append('--variant=' + info.manifest.bootstrapper['variant']) if len(info.include_packages) > 0: options.append('--include=' + ','.join(info.include_packages)) if len(info.exclude_packages) > 0: options.append('--exclude=' + ','.join(info.exclude_packages)) mirror = info.manifest.bootstrapper.get('mirror', info.apt_mirror) arguments = [info.manifest.system['release'], info.root, mirror] return executable, options, arguments def get_tarball_filename(info): from hashlib import sha1 executable, options, arguments = get_bootstrap_args(info) # Filter info.root which points at /target/volume-id, we won't ever hit anything with that in there. 
hash_args = [arg for arg in arguments if arg != info.root] tarball_id = sha1(repr(frozenset(options + hash_args))).hexdigest()[0:8] tarball_filename = 'debootstrap-' + tarball_id + '.tar' return os.path.join(info.manifest.bootstrapper['workspace'], tarball_filename) class MakeTarball(Task): description = 'Creating bootstrap tarball' phase = phases.os_installation @classmethod def run(cls, info): executable, options, arguments = get_bootstrap_args(info) tarball = get_tarball_filename(info) if os.path.isfile(tarball): log.debug('Found matching tarball, skipping creation') else: from ..tools import log_call status, out, err = log_call(executable + options + ['--make-tarball=' + tarball] + arguments) if status not in [0, 1]: # variant=minbase exits with 0 msg = 'debootstrap exited with status {status}, it should exit with status 0 or 1'.format(status=status) raise TaskError(msg) class Bootstrap(Task): description = 'Installing Debian' phase = phases.os_installation predecessors = [MakeTarball] @classmethod def run(cls, info): executable, options, arguments = get_bootstrap_args(info) tarball = get_tarball_filename(info) if os.path.isfile(tarball): if not info.manifest.bootstrapper.get('tarball', False): # Only show this message if it hasn't tried to create the tarball log.debug('Found matching tarball, skipping download') options.extend(['--unpack-tarball=' + tarball]) if info.bootstrap_script is not None: # Optional bootstrapping script to modify the bootstrapping process arguments.append(info.bootstrap_script) try: from ..tools import log_check_call log_check_call(executable + options + arguments) except KeyboardInterrupt: # Sometimes ../root/sys and ../root/proc are still mounted when # quitting debootstrap prematurely. This breaks the cleanup process, # so we unmount manually (ignore the exit code, the dirs may not be mounted).
from ..tools import log_call log_call(['umount', os.path.join(info.root, 'sys')]) log_call(['umount', os.path.join(info.root, 'proc')]) raise class IncludePackagesInBootstrap(Task): description = 'Add packages in the bootstrap phase' phase = phases.preparation @classmethod def run(cls, info): info.include_packages.update( set(info.manifest.bootstrapper['include_packages']) ) class ExcludePackagesInBootstrap(Task): description = 'Remove packages from bootstrap phase' phase = phases.preparation @classmethod def run(cls, info): info.exclude_packages.update( set(info.manifest.bootstrapper['exclude_packages']) ) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/cleanup.py000066400000000000000000000026321323112141500246610ustar00rootroot00000000000000from bootstrapvz.base import Task from .. import phases import os import shutil class ClearMOTD(Task): description = 'Clearing the MOTD' phase = phases.system_cleaning @classmethod def run(cls, info): with open('/var/run/motd', 'w'): pass class CleanTMP(Task): description = 'Removing temporary files' phase = phases.system_cleaning @classmethod def run(cls, info): tmp = os.path.join(info.root, 'tmp') for tmp_file in [os.path.join(tmp, f) for f in os.listdir(tmp)]: if os.path.isfile(tmp_file): os.remove(tmp_file) else: shutil.rmtree(tmp_file) log = os.path.join(info.root, 'var/log/') os.remove(os.path.join(log, 'bootstrap.log')) os.remove(os.path.join(log, 'dpkg.log')) class ClearMachineId(Task): description = 'Clearing the Machine ID' phase = phases.system_cleaning @classmethod def run(cls, info): import logging log = logging.getLogger(__name__) for machineid_file in [os.path.join(info.root, f) for f in ['etc/machine-id', 'var/lib/dbus/machine-id']]: if os.path.isfile(machineid_file): log.debug(machineid_file + ' found, clearing') with open(machineid_file, 'w'): pass else: log.debug(machineid_file + ' not found, not clearing') 
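The rollback machinery earlier in this section pairs each forward task with a counter-task in `rollback_map` and, on failure, runs only the counters whose forward task actually completed. A minimal standalone sketch of that lookup (the task names here are illustrative placeholders, not the real task classes):

```python
# Sketch of the rollback-map pattern: each forward step maps to a
# counter-step, and a rollback only schedules counters whose forward
# step completed and whose counter has not already run.
rollback_map = {
    'create_workspace': 'delete_workspace',
    'attach_volume': 'detach_volume',
    'mount_root': 'unmount_root',
}


def get_rollback_tasks(completed):
    rollback = set()
    for task in completed:
        counter = rollback_map.get(task)
        if counter is not None and counter not in completed:
            rollback.add(counter)
    return rollback
```

For example, a failure after the workspace was created and the volume attached (but before mounting) yields only `delete_workspace` and `detach_volume`; counters that already ran are skipped.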
bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/development.py000066400000000000000000000004701323112141500255520ustar00rootroot00000000000000from bootstrapvz.base import Task from .. import phases class TriggerRollback(Task): phase = phases.cleaning description = 'Triggering a rollback by throwing an exception' @classmethod def run(cls, info): from ..exceptions import TaskError raise TaskError('Trigger rollback') bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/extlinux.py000066400000000000000000000103301323112141500251040ustar00rootroot00000000000000from bootstrapvz.base import Task from .. import phases from ..tools import log_check_call import filesystem import kernel from bootstrapvz.base.fs import partitionmaps import os class AddExtlinuxPackage(Task): description = 'Adding extlinux package' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('extlinux') if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap): info.packages.add('syslinux-common') class ConfigureExtlinux(Task): description = 'Configuring extlinux' phase = phases.system_modification predecessors = [filesystem.FStab] @classmethod def run(cls, info): from bootstrapvz.common.releases import squeeze if info.manifest.release == squeeze: # On squeeze /etc/default/extlinux is generated when running extlinux-update log_check_call(['chroot', info.root, 'extlinux-update']) from bootstrapvz.common.tools import sed_i extlinux_def = os.path.join(info.root, 'etc/default/extlinux') sed_i(extlinux_def, r'^EXTLINUX_PARAMETERS="([^"]+)"$', r'EXTLINUX_PARAMETERS="\1 console=ttyS0"') class InstallExtlinux(Task): description = 'Installing extlinux' phase = phases.system_modification predecessors = [filesystem.FStab, ConfigureExtlinux] @classmethod def run(cls, info): if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap): bootloader = '/usr/lib/syslinux/gptmbr.bin' else: bootloader = '/usr/lib/extlinux/mbr.bin' 
log_check_call(['chroot', info.root, 'dd', 'bs=440', 'count=1', 'if=' + bootloader, 'of=' + info.volume.device_path]) log_check_call(['chroot', info.root, 'extlinux', '--install', '/boot/extlinux']) log_check_call(['chroot', info.root, 'extlinux-update']) class ConfigureExtlinuxJessie(Task): description = 'Configuring extlinux' phase = phases.system_modification @classmethod def run(cls, info): extlinux_path = os.path.join(info.root, 'boot/extlinux') os.mkdir(extlinux_path) from . import assets with open(os.path.join(assets, 'extlinux/extlinux.conf')) as template: extlinux_config_tpl = template.read() config_vars = {'root_uuid': info.volume.partition_map.root.get_uuid(), 'kernel_version': info.kernel_version} # Check if / and /boot are on the same partition # If not, /boot will actually be / when booting if hasattr(info.volume.partition_map, 'boot'): config_vars['boot_prefix'] = '' else: config_vars['boot_prefix'] = '/boot' extlinux_config = extlinux_config_tpl.format(**config_vars) with open(os.path.join(extlinux_path, 'extlinux.conf'), 'w') as extlinux_conf_handle: extlinux_conf_handle.write(extlinux_config) # Copy the boot message from shutil import copy boot_txt_path = os.path.join(assets, 'extlinux/boot.txt') copy(boot_txt_path, os.path.join(extlinux_path, 'boot.txt')) class InstallExtlinuxJessie(Task): description = 'Installing extlinux' phase = phases.system_modification predecessors = [filesystem.FStab, ConfigureExtlinuxJessie] # Make sure the kernel image is updated after we have installed the bootloader successors = [kernel.UpdateInitramfs] @classmethod def run(cls, info): if isinstance(info.volume.partition_map, partitionmaps.gpt.GPTPartitionMap): # Yeah, somebody saw it fit to uppercase that folder in jessie. Why? 
BECAUSE bootloader = '/usr/lib/EXTLINUX/gptmbr.bin' else: bootloader = '/usr/lib/EXTLINUX/mbr.bin' log_check_call(['chroot', info.root, 'dd', 'bs=440', 'count=1', 'if=' + bootloader, 'of=' + info.volume.device_path]) log_check_call(['chroot', info.root, 'extlinux', '--install', '/boot/extlinux']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/filesystem.py000066400000000000000000000201701323112141500254130ustar00rootroot00000000000000from bootstrapvz.base import Task from .. import phases from ..tools import log_check_call import bootstrap import host import volume class AddRequiredCommands(Task): description = 'Adding commands required for formatting' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): if 'xfs' in (p.filesystem for p in info.volume.partition_map.partitions): info.host_dependencies['mkfs.xfs'] = 'xfsprogs' class Format(Task): description = 'Formatting the volume' phase = phases.volume_preparation @classmethod def run(cls, info): from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition for partition in info.volume.partition_map.partitions: if isinstance(partition, UnformattedPartition): continue partition.format() class TuneVolumeFS(Task): description = 'Tuning the bootstrap volume filesystem' phase = phases.volume_preparation predecessors = [Format] @classmethod def run(cls, info): from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition import re # Disable the time based filesystem check for partition in info.volume.partition_map.partitions: if isinstance(partition, UnformattedPartition): continue if re.match('^ext[2-4]$', partition.filesystem) is not None: log_check_call(['tune2fs', '-i', '0', partition.device_path]) class AddXFSProgs(Task): description = 'Adding `xfsprogs\' to the image packages' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('xfsprogs') class CreateMountDir(Task): description = 'Creating mountpoint for 
the root partition' phase = phases.volume_mounting @classmethod def run(cls, info): import os info.root = os.path.join(info.workspace, 'root') os.makedirs(info.root) class MountRoot(Task): description = 'Mounting the root partition' phase = phases.volume_mounting predecessors = [CreateMountDir] @classmethod def run(cls, info): info.volume.partition_map.root.mount(destination=info.root) class CreateBootMountDir(Task): description = 'Creating mountpoint for the boot partition' phase = phases.volume_mounting predecessors = [MountRoot] @classmethod def run(cls, info): import os.path os.makedirs(os.path.join(info.root, 'boot')) class MountBoot(Task): description = 'Mounting the boot partition' phase = phases.volume_mounting predecessors = [CreateBootMountDir] @classmethod def run(cls, info): p_map = info.volume.partition_map p_map.root.add_mount(p_map.boot, 'boot') class MountAdditional(Task): description = 'Mounting additional partitions' phase = phases.volume_mounting predecessors = [MountRoot] @classmethod def run(cls, info): import os from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition from bootstrapvz.base.fs.partitions.single import SinglePartition def is_additional(partition): return (not isinstance(partition, (UnformattedPartition, SinglePartition)) and partition.name not in ["boot", "swap", "root"]) p_map = info.volume.partition_map partitions = p_map.partitions for partition in sorted( filter(is_additional, partitions), key=lambda partition: len(partition.name)): partition = getattr(p_map, partition.name) os.makedirs(os.path.join(info.root, partition.name)) if partition.mountopts is None: p_map.root.add_mount(getattr(p_map, partition.name), partition.name) else: p_map.root.add_mount(getattr(p_map, partition.name), partition.name, ['--options'] + partition.mountopts) class MountSpecials(Task): description = 'Mounting special block devices' phase = phases.os_installation predecessors = [bootstrap.Bootstrap] @classmethod def run(cls, 
info): root = info.volume.partition_map.root root.add_mount('/dev', 'dev', ['--bind']) root.add_mount('none', 'proc', ['--types', 'proc']) root.add_mount('none', 'sys', ['--types', 'sysfs']) root.add_mount('none', 'dev/pts', ['--types', 'devpts']) class CopyMountTable(Task): description = 'Copying mtab from host system' phase = phases.os_installation predecessors = [MountSpecials] @classmethod def run(cls, info): import shutil import os.path shutil.copy('/proc/mounts', os.path.join(info.root, 'etc/mtab')) class UnmountRoot(Task): description = 'Unmounting the bootstrap volume' phase = phases.volume_unmounting successors = [volume.Detach] @classmethod def run(cls, info): info.volume.partition_map.root.unmount() class RemoveMountTable(Task): description = 'Removing mtab' phase = phases.volume_unmounting successors = [UnmountRoot] @classmethod def run(cls, info): import os os.remove(os.path.join(info.root, 'etc/mtab')) class DeleteMountDir(Task): description = 'Deleting mountpoint for the bootstrap volume' phase = phases.volume_unmounting predecessors = [UnmountRoot] @classmethod def run(cls, info): import os os.rmdir(info.root) del info.root class FStab(Task): description = 'Adding partitions to the fstab' phase = phases.system_modification @classmethod def run(cls, info): import os.path from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition from bootstrapvz.base.fs.partitions.single import SinglePartition def is_additional(partition): return (not isinstance(partition, (UnformattedPartition, SinglePartition)) and partition.name not in ["boot", "swap", "root"]) p_map = info.volume.partition_map partitions = p_map.partitions mount_points = [{'path': '/', 'partition': p_map.root, 'dump': '1', 'pass_num': '1', }] if hasattr(p_map, 'boot'): mount_points.append({'path': '/boot', 'partition': p_map.boot, 'dump': '1', 'pass_num': '2', }) if hasattr(p_map, 'swap'): mount_points.append({'path': 'none', 'partition': p_map.swap, 'dump': '1', 'pass_num': '0', 
}) for partition in sorted( filter(is_additional, partitions), key=lambda partition: len(partition.name)): mount_points.append({'path': "/" + partition.name, 'partition': getattr(p_map, partition.name), 'dump': '1', 'pass_num': '2', }) fstab_lines = [] for mount_point in mount_points: partition = mount_point['partition'] if partition.mountopts is None: mount_opts = ['defaults'] else: mount_opts = partition.mountopts fstab_lines.append('UUID={uuid} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}' .format(uuid=partition.get_uuid(), mountpoint=mount_point['path'], filesystem=partition.filesystem, mount_opts=','.join(mount_opts), dump=mount_point['dump'], pass_num=mount_point['pass_num'])) fstab_path = os.path.join(info.root, 'etc/fstab') with open(fstab_path, 'w') as fstab: fstab.write('\n'.join(fstab_lines)) fstab.write('\n') bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/folder.py000066400000000000000000000011631323112141500245030ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import volume import workspace class Create(Task): description = 'Creating volume folder' phase = phases.volume_creation successors = [volume.Attach] @classmethod def run(cls, info): import os.path info.root = os.path.join(info.workspace, 'root') info.volume.create(info.root) class Delete(Task): description = 'Deleting volume folder' phase = phases.cleaning successors = [workspace.DeleteWorkspace] @classmethod def run(cls, info): info.volume.delete() del info.root bootstrap-vz-0.9.11+20180121git/bootstrapvz/common/tasks/grub.py000066400000000000000000000372141323112141500241750ustar00rootroot00000000000000from bootstrapvz.base import Task from ..exceptions import TaskError from .. 
import phases from ..tools import log_check_call import filesystem import kernel from bootstrapvz.base.fs import partitionmaps import os.path class AddGrubPackage(Task): description = 'Adding grub package' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('grub-pc') class InitGrubConfig(Task): description = 'Initializing grub standard configuration' phase = phases.preparation @classmethod def run(cls, info): # The default values and documentation below was fetched from # https://www.gnu.org/software/grub/manual/html_node/Simple-configuration.html # Some explanations have been shortened info.grub_config = { # The default menu entry. This may be a number, in which case it identifies the Nth entry # in the generated menu counted from zero, or the title of a menu entry, # or the special string `saved'. Using the title may be useful if you want to set a menu entry # as the default even though there may be a variable number of entries before it. # If you set this to `saved', then the default menu entry will be that # saved by `GRUB_SAVEDEFAULT', grub-set-default, or grub-reboot. # The default is `0'. 'GRUB_DEFAULT': 0, # If this option is set to `true', then, when an entry is selected, # save it as a new default entry for use by future runs of GRUB. # This is only useful if `GRUB_DEFAULT=saved'; # it is a separate option because `GRUB_DEFAULT=saved' is useful without this option, # in conjunction with grub-set-default or grub-reboot. Unset by default. 'GRUB_SAVEDEFAULT': None, # Boot the default entry this many seconds after the menu is displayed, unless a key is pressed. # The default is `5'. Set to `0' to boot immediately without displaying the menu, # or to `-1' to wait indefinitely. 'GRUB_TIMEOUT': 5, # Wait this many seconds for a key to be pressed before displaying the menu. # If no key is pressed during that time, display the menu for the number of seconds specified # in GRUB_TIMEOUT before booting the default entry. 
# We expect that most people who use GRUB_HIDDEN_TIMEOUT will want to have GRUB_TIMEOUT set to `0' # so that the menu is not displayed at all unless a key is pressed. Unset by default. 'GRUB_HIDDEN_TIMEOUT': None, # In conjunction with `GRUB_HIDDEN_TIMEOUT', set this to `true' to suppress the verbose countdown # while waiting for a key to be pressed before displaying the menu. Unset by default. 'GRUB_HIDDEN_TIMEOUT_QUIET': None, # Variants of the corresponding variables without the `_BUTTON' suffix, # used to support vendor-specific power buttons. See Vendor power-on keys. 'GRUB_DEFAULT_BUTTON': None, 'GRUB_TIMEOUT_BUTTON': None, 'GRUB_HIDDEN_TIMEOUT_BUTTON': None, 'GRUB_BUTTON_CMOS_ADDRESS': None, # Set by distributors of GRUB to their identifying name. # This is used to generate more informative menu entry titles. 'GRUB_DISTRIBUTOR': None, # Select the terminal input device. You may select multiple devices here, separated by spaces. # Valid terminal input names depend on the platform, but may include # `console' (PC BIOS and EFI consoles), # `serial' (serial terminal), # `ofconsole' (Open Firmware console), # `at_keyboard' (PC AT keyboard, mainly useful with Coreboot), # or `usb_keyboard' (USB keyboard using the HID Boot Protocol, # for cases where the firmware does not handle this). # The default is to use the platform's native terminal input. 'GRUB_TERMINAL_INPUT': None, # Select the terminal output device. You may select multiple devices here, separated by spaces. # Valid terminal output names depend on the platform, but may include # `console' (PC BIOS and EFI consoles), # `serial' (serial terminal), # `gfxterm' (graphics-mode output), # `ofconsole' (Open Firmware console), # or `vga_text' (VGA text output, mainly useful with Coreboot). # The default is to use the platform's native terminal output. 'GRUB_TERMINAL_OUTPUT': None, # If this option is set, it overrides both `GRUB_TERMINAL_INPUT' and `GRUB_TERMINAL_OUTPUT' # to the same value. 
    'GRUB_TERMINAL': None,
    # A command to configure the serial port when using the serial console.
    # See serial. Defaults to `serial'.
    'GRUB_SERIAL_COMMAND': 'serial',
    # Command-line arguments to add to menu entries for the Linux kernel.
    'GRUB_CMDLINE_LINUX': [],
    # Unless `GRUB_DISABLE_RECOVERY' is set to `true',
    # two menu entries will be generated for each Linux kernel:
    # one default entry and one entry for recovery mode.
    # This option lists command-line arguments to add only to the default menu entry,
    # after those listed in `GRUB_CMDLINE_LINUX'.
    'GRUB_CMDLINE_LINUX_DEFAULT': [],
    # As `GRUB_CMDLINE_LINUX' and `GRUB_CMDLINE_LINUX_DEFAULT', but for NetBSD.
    'GRUB_CMDLINE_NETBSD': [],
    'GRUB_CMDLINE_NETBSD_DEFAULT': [],
    # As `GRUB_CMDLINE_LINUX', but for GNU Mach.
    'GRUB_CMDLINE_GNUMACH': [],
    # The values of these options are appended to the values of
    # `GRUB_CMDLINE_LINUX' and `GRUB_CMDLINE_LINUX_DEFAULT'
    # for Linux and Xen menu entries.
    'GRUB_CMDLINE_XEN': [],
    'GRUB_CMDLINE_XEN_DEFAULT': [],
    # The values of these options replace the values of
    # `GRUB_CMDLINE_LINUX' and `GRUB_CMDLINE_LINUX_DEFAULT' for Linux and Xen menu entries.
    'GRUB_CMDLINE_LINUX_XEN_REPLACE': [],
    'GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT': [],
    # Normally, grub-mkconfig will generate menu entries that use
    # universally-unique identifiers (UUIDs) to identify the root filesystem
    # to the Linux kernel, using a `root=UUID=...' kernel parameter.
    # This is usually more reliable, but in some cases it may not be appropriate.
    # To disable the use of UUIDs, set this option to `true'.
    'GRUB_DISABLE_LINUX_UUID': None,
    # If this option is set to `true', disable the generation of recovery mode menu entries.
    'GRUB_DISABLE_RECOVERY': None,
    # If graphical video support is required, either because the `gfxterm'
    # graphical terminal is in use or because `GRUB_GFXPAYLOAD_LINUX' is set,
    # then grub-mkconfig will normally load all available GRUB video drivers
    # and use the one most appropriate for your hardware.
    # If you need to override this for some reason, then you can set this option.
    # After grub-install has been run, the available video drivers are listed in
    # /boot/grub/video.lst.
    'GRUB_VIDEO_BACKEND': None,
    # Set the resolution used on the `gfxterm' graphical terminal.
    # Note that you can only use modes which your graphics card supports
    # via VESA BIOS Extensions (VBE), so for example native LCD panel
    # resolutions may not be available.
    # The default is `auto', which tries to select a preferred resolution. See gfxmode.
    'GRUB_GFXMODE': 'auto',
    # Set a background image for use with the `gfxterm' graphical terminal.
    # The value of this option must be a file readable by GRUB at boot time,
    # and it must end with .png, .tga, .jpg, or .jpeg.
    # The image will be scaled if necessary to fit the screen.
    'GRUB_BACKGROUND': None,
    # Set a theme for use with the `gfxterm' graphical terminal.
    'GRUB_THEME': None,
    # Set to `text' to force the Linux kernel to boot in normal text mode,
    # `keep' to preserve the graphics mode set using `GRUB_GFXMODE',
    # `widthxheight'[`xdepth'] to set a particular graphics mode,
    # or a sequence of these separated by commas or semicolons
    # to try several modes in sequence. See gfxpayload.
    # Depending on your kernel, your distribution, your graphics card,
    # and the phase of the moon, note that using this option may cause
    # GNU/Linux to suffer from various display problems,
    # particularly during the early part of the boot sequence.
    # If you have problems, set this option to `text' and GRUB will
    # tell Linux to boot in normal text mode.
    'GRUB_GFXPAYLOAD_LINUX': None,
    # Normally, grub-mkconfig will try to use the external os-prober program,
    # if installed, to discover other operating systems installed on the
    # same system and generate appropriate menu entries for them.
    # Set this option to `true' to disable this.
    'GRUB_DISABLE_OS_PROBER': None,
    # Play a tune on the speaker when GRUB starts.
    # This is particularly useful for users unable to see the screen.
    # The value of this option is passed directly to play.
    'GRUB_INIT_TUNE': None,
    # If this option is set, GRUB will issue a badram command to filter
    # out specified regions of RAM.
    'GRUB_BADRAM': None,
    # This option may be set to a list of GRUB module names separated by spaces.
    # Each module will be loaded as early as possible, at the start of grub.cfg.
    'GRUB_PRELOAD_MODULES': [],
}


class WriteGrubConfig(Task):
    description = 'Writing grub defaults configuration'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        grub_config_contents = """# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.
"""
        for key, value in info.grub_config.items():
            if isinstance(value, str):
                grub_config_contents += '{}="{}"\n'.format(key, value)
            # bool must be checked before int, since bool is a subclass of int
            # and the int branch would otherwise shadow it
            elif isinstance(value, bool):
                grub_config_contents += '{}="{}"\n'.format(key, str(value).lower())
            elif isinstance(value, int):
                grub_config_contents += '{}={}\n'.format(key, value)
            elif isinstance(value, list):
                if len(value) > 0:
                    args_list = ' '.join(map(str, value))
                    grub_config_contents += '{}="{}"\n'.format(key, args_list)
            elif value is not None:
                raise TaskError('Don\'t know how to handle type {}, '
                                'when creating grub config'.format(type(value)))

        grub_defaults = os.path.join(info.root, 'etc/default/grub')
        with open(grub_defaults, 'w') as grub_defaults_handle:
            grub_defaults_handle.write(grub_config_contents)


class DisablePNIN(Task):
    description = 'Disabling Predictable Network Interfaces'
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        # See issue #245 for more details
        info.grub_config['GRUB_CMDLINE_LINUX'].append('net.ifnames=0')
        info.grub_config['GRUB_CMDLINE_LINUX'].append('biosdevname=0')


class SetGrubTerminalToConsole(Task):
    description = 'Setting the grub terminal to `console\''
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        # See issue #245 for more details
        info.grub_config['GRUB_TERMINAL'] = 'console'


class SetGrubConsolOutputDeviceToSerial(Task):
    description = 'Setting the grub terminal output device to `ttyS0\''
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        # See issue #245 for more details
        info.grub_config['GRUB_CMDLINE_LINUX'].append('console=ttyS0')
        info.grub_config['GRUB_CMDLINE_LINUX'].append('earlyprintk=ttyS0')


class RemoveGrubTimeout(Task):
    description = 'Setting grub menu timeout to 0'
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        info.grub_config['GRUB_TIMEOUT'] = 0
        info.grub_config['GRUB_HIDDEN_TIMEOUT'] = 0
        info.grub_config['GRUB_HIDDEN_TIMEOUT_QUIET'] = True


class DisableGrubRecovery(Task):
    description = 'Disabling the grub recovery menu entry'
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        info.grub_config['GRUB_DISABLE_RECOVERY'] = True


class EnableSystemd(Task):
    description = 'Enabling systemd'
    phase = phases.system_modification
    successors = [WriteGrubConfig]

    @classmethod
    def run(cls, info):
        info.grub_config['GRUB_CMDLINE_LINUX'].append('init=/bin/systemd')


class InstallGrub_1_99(Task):
    description = 'Installing grub 1.99'
    phase = phases.system_modification
    predecessors = [filesystem.FStab, WriteGrubConfig]

    @classmethod
    def run(cls, info):
        p_map = info.volume.partition_map

        # GRUB screws up when installing in chrooted environments
        # so we fake a real harddisk with dmsetup.
        # Guide here: http://ebroder.net/2009/08/04/installing-grub-onto-a-disk-image/
        from ..fs import unmounted
        with unmounted(info.volume):
            info.volume.link_dm_node()
            if isinstance(p_map, partitionmaps.none.NoPartitions):
                p_map.root.device_path = info.volume.device_path
        try:
            [device_path] = log_check_call(['readlink', '-f', info.volume.device_path])
            device_map_path = os.path.join(info.root, 'boot/grub/device.map')
            partition_prefix = 'msdos'
            if isinstance(p_map, partitionmaps.gpt.GPTPartitionMap):
                partition_prefix = 'gpt'
            with open(device_map_path, 'w') as device_map:
                device_map.write('(hd0) {device_path}\n'.format(device_path=device_path))
                if not isinstance(p_map, partitionmaps.none.NoPartitions):
                    for idx, partition in enumerate(info.volume.partition_map.partitions):
                        device_map.write('(hd0,{prefix}{idx}) {device_path}\n'
                                         .format(device_path=partition.device_path,
                                                 prefix=partition_prefix,
                                                 idx=idx + 1))

            # Install grub
            log_check_call(['chroot', info.root, 'grub-install', device_path])
            log_check_call(['chroot', info.root, 'update-grub'])
        finally:
            with unmounted(info.volume):
                info.volume.unlink_dm_node()
                if isinstance(p_map, partitionmaps.none.NoPartitions):
                    p_map.root.device_path = info.volume.device_path


class InstallGrub_2(Task):
    description = 'Installing grub 2'
    phase = phases.system_modification
    predecessors = [filesystem.FStab, WriteGrubConfig]
    # Make sure the kernel image is updated after we have installed the bootloader
    successors = [kernel.UpdateInitramfs]

    @classmethod
    def run(cls, info):
        log_check_call(['chroot', info.root, 'grub-install', info.volume.device_path])
        log_check_call(['chroot', info.root, 'update-grub'])


# --- bootstrapvz/common/tasks/host.py ---
from bootstrapvz.base import Task
from .. import phases
from ..exceptions import TaskError


class CheckExternalCommands(Task):
    description = 'Checking availability of external commands'
    phase = phases.validation

    @classmethod
    def run(cls, info):
        import re
        import os
        import logging
        from distutils.spawn import find_executable
        missing_packages = []
        log = logging.getLogger(__name__)
        for command, package in info.host_dependencies.items():
            log.debug('Checking availability of ' + command)
            path = find_executable(command)
            if path is None or not os.access(path, os.X_OK):
                if re.match('^https?:\/\/', package):
                    msg = ('The command `{command}\' is not available, '
                           'you can download the software at `{package}\'.'
                           .format(command=command, package=package))
                else:
                    msg = ('The command `{command}\' is not available, '
                           'it is located in the package `{package}\'.'
                           .format(command=command, package=package))
                missing_packages.append(msg)
        if len(missing_packages) > 0:
            msg = '\n'.join(missing_packages)
            raise TaskError(msg)


# --- bootstrapvz/common/tasks/image.py ---
from bootstrapvz.base import Task
from bootstrapvz.common import phases


class MoveImage(Task):
    description = 'Moving volume image'
    phase = phases.image_registration

    @classmethod
    def run(cls, info):
        image_name = info.manifest.name.format(**info.manifest_vars)
        filename = image_name + '.' + info.volume.extension

        import os.path
        destination = os.path.join(info.manifest.bootstrapper['workspace'], filename)
        import shutil
        shutil.move(info.volume.image_path, destination)
        info.volume.image_path = destination
        import logging
        log = logging.getLogger(__name__)
        log.info('The volume image has been moved to ' + destination)


# --- bootstrapvz/common/tasks/initd.py ---
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
from . import assets
import os.path


class InstallInitScripts(Task):
    description = 'Installing startup scripts'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import stat
        from bootstrapvz.common.releases import jessie
        rwxr_xr_x = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
                     stat.S_IRGRP | stat.S_IXGRP |
                     stat.S_IROTH | stat.S_IXOTH)
        from shutil import copy
        for name, src in info.initd['install'].iteritems():
            dst = os.path.join(info.root, 'etc/init.d', name)
            copy(src, dst)
            os.chmod(dst, rwxr_xr_x)
            if info.manifest.release > jessie:
                log_check_call(['chroot', info.root, 'systemctl', 'enable', name])
            else:
                log_check_call(['chroot', info.root, 'insserv', '--default', name])
        for name in info.initd['disable']:
            if info.manifest.release > jessie:
                log_check_call(['chroot', info.root, 'systemctl', 'mask', name])
            else:
                log_check_call(['chroot', info.root, 'insserv', '--remove', name])


class AddExpandRoot(Task):
    description = 'Adding init script to expand the root volume'
    phase = phases.system_modification
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        info.initd['install']['expand-root'] = os.path.join(init_scripts_dir, 'expand-root')


class RemoveHWClock(Task):
    description = 'Removing hardware clock init scripts'
    phase = phases.system_modification
    successors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.releases import squeeze
        info.initd['disable'].append('hwclock.sh')
        if info.manifest.release == squeeze:
            info.initd['disable'].append('hwclockfirst.sh')


class AdjustExpandRootScript(Task):
    description = 'Adjusting the expand-root script'
    phase = phases.system_modification
    predecessors = [InstallInitScripts]

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')
        root_idx = info.volume.partition_map.root.get_index()
        root_index_line = 'root_index="{idx}"'.format(idx=root_idx)
        sed_i(script, '^root_index="0"$', root_index_line)
        root_device_path = 'root_device_path="{device}"'.format(device=info.volume.device_path)
        sed_i(script, '^root_device_path="/dev/xvda"$', root_device_path)


class AdjustGrowpartWorkaround(Task):
    description = 'Adjusting expand-root for growpart-workaround'
    phase = phases.system_modification
    predecessors = [AdjustExpandRootScript]

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        script = os.path.join(info.root, 'etc/init.d/expand-root')
        # The replacement must keep the assignment form, so that the script
        # still defines the `growpart' variable
        sed_i(script, '^growpart="growpart"$', 'growpart="growpart-workaround"')


# --- bootstrapvz/common/tasks/kernel.py ---
from bootstrapvz.base import Task
from .. import phases
from ..tasks import packages
import logging


class AddDKMSPackages(Task):
    description = 'Adding DKMS and kernel header packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('dkms')
        kernel_pkg_arch = {'i386': '686-pae', 'amd64': 'amd64'}[info.manifest.system['architecture']]
        info.packages.add('linux-headers-' + kernel_pkg_arch)


class UpdateInitramfs(Task):
    description = 'Rebuilding initramfs'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        # Update initramfs (-u) for all currently installed kernel versions (-k all)
        log_check_call(['chroot', info.root, 'update-initramfs', '-u', '-k', 'all'])


class DetermineKernelVersion(Task):
    description = 'Determining kernel version'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages]

    @classmethod
    def run(cls, info):
        # Snatched from `extlinux-update' in wheezy
        # list the files in boot/ that match vmlinuz-*
        # sort what the * matches, the first entry is the kernel version
        import os.path
        import re
        regexp = re.compile('^vmlinuz-(?P<version>.+)$')

        def get_kernel_version(vmlinuz_path):
            vmlinux_basename = os.path.basename(vmlinuz_path)
            return regexp.match(vmlinux_basename).group('version')
        from glob import glob
        boot = os.path.join(info.root, 'boot')
        vmlinuz_paths = glob('{boot}/vmlinuz-*'.format(boot=boot))
        kernels = map(get_kernel_version, vmlinuz_paths)
        info.kernel_version = sorted(kernels, reverse=True)[0]
        logging.getLogger(__name__).debug('Kernel version is {version}'.format(version=info.kernel_version))


# --- bootstrapvz/common/tasks/locale.py ---
from bootstrapvz.base import Task
from .. import phases
import os.path


class LocaleBootstrapPackage(Task):
    description = 'Adding locale package to bootstrap installation'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        # We could bootstrap without locales, but things just suck without them
        # eg. error messages when running apt
        info.include_packages.add('locales')


class GenerateLocale(Task):
    description = 'Generating system locale'
    phase = phases.package_installation

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        from ..tools import log_check_call
        lang = '{locale}.{charmap}'.format(locale=info.manifest.system['locale'],
                                           charmap=info.manifest.system['charmap'])
        locale_str = '{locale}.{charmap} {charmap}'.format(locale=info.manifest.system['locale'],
                                                           charmap=info.manifest.system['charmap'])
        search = '# ' + locale_str
        locale_gen = os.path.join(info.root, 'etc/locale.gen')
        sed_i(locale_gen, search, locale_str)

        log_check_call(['chroot', info.root, 'locale-gen'])
        log_check_call(['chroot', info.root, 'update-locale', 'LANG=' + lang])


class SetTimezone(Task):
    description = 'Setting the selected timezone'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        tz_path = os.path.join(info.root, 'etc/timezone')
        timezone = info.manifest.system['timezone']
        with open(tz_path, 'w') as tz_file:
            tz_file.write(timezone)


class SetLocalTimeLink(Task):
    description = 'Setting the selected local timezone (link)'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        timezone = info.manifest.system['timezone']
        localtime_path = os.path.join(info.root, 'etc/localtime')
        os.unlink(localtime_path)
        os.symlink(os.path.join('/usr/share/zoneinfo', timezone), localtime_path)


class SetLocalTimeCopy(Task):
    description = 'Setting the selected local timezone (copy)'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from shutil import copy
        timezone = info.manifest.system['timezone']
        zoneinfo_path = os.path.join(info.root, '/usr/share/zoneinfo', timezone)
        localtime_path = os.path.join(info.root, 'etc/localtime')
        copy(zoneinfo_path, localtime_path)


# --- bootstrapvz/common/tasks/logicalvolume.py ---
import bootstrapvz.common.tasks.host as host
import bootstrapvz.common.tasks.volume as volume
from bootstrapvz.base import Task
from bootstrapvz.common import phases


class AddRequiredCommands(Task):
    description = 'Adding commands required for creating and mounting logical volumes'
    phase = phases.validation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.fs.logicalvolume import LogicalVolume
        if type(info.volume) is LogicalVolume:
            info.host_dependencies['lvcreate'] = 'lvm2'
            info.host_dependencies['losetup'] = 'mount'


class Create(Task):
    description = 'Creating a Logical volume'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        info.volume.create(volumegroup=info.manifest.volume['volumegroup'],
                           logicalvolume=info.manifest.volume['logicalvolume'])


class Delete(Task):
    description = 'Deleting a Logical volume'
    phase = phases.cleaning

    @classmethod
    def run(cls, info):
        info.volume.delete()


# --- bootstrapvz/common/tasks/loopback.py ---
from bootstrapvz.base import Task
from bootstrapvz.common import phases
import host
import volume


class AddRequiredCommands(Task):
    description = 'Adding commands required for creating loopback volumes'
    phase = phases.validation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        from ..fs.loopbackvolume import LoopbackVolume
        from ..fs.qemuvolume import QEMUVolume
        if type(info.volume) is LoopbackVolume:
            info.host_dependencies['losetup'] = 'mount'
            info.host_dependencies['truncate'] = 'coreutils'
        if isinstance(info.volume, QEMUVolume):
            info.host_dependencies['qemu-img'] = 'qemu-utils'


class Create(Task):
    description = 'Creating a loopback volume'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        import os.path
        image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        info.volume.create(image_path)


# --- bootstrapvz/common/tasks/network-configuration.yml ---
---
# This is a mapping of Debian release codenames to NIC configurations
squeeze: |
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp
wheezy: |
    auto eth0
    iface eth0 inet dhcp
jessie: |
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp
stretch: |
    auto eth0
    iface eth0 inet dhcp
buster: |
    auto eth0
    iface eth0 inet dhcp
sid: |
    auto eth0
    iface eth0 inet dhcp


# --- bootstrapvz/common/tasks/network.py ---
from bootstrapvz.base import Task
from .. import phases
import os


class RemoveDNSInfo(Task):
    description = 'Removing resolv.conf'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/resolv.conf')):
            os.remove(os.path.join(info.root, 'etc/resolv.conf'))


class RemoveHostname(Task):
    description = 'Removing the hostname file'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        if os.path.isfile(os.path.join(info.root, 'etc/hostname')):
            os.remove(os.path.join(info.root, 'etc/hostname'))


class SetHostname(Task):
    description = 'Writing hostname into the hostname file'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        hostname = info.manifest.system['hostname'].format(**info.manifest_vars)
        hostname_file_path = os.path.join(info.root, 'etc/hostname')
        with open(hostname_file_path, 'w') as hostname_file:
            hostname_file.write(hostname)

        hosts_path = os.path.join(info.root, 'etc/hosts')
        from bootstrapvz.common.tools import sed_i
        sed_i(hosts_path, '^127.0.0.1\tlocalhost$', '127.0.0.1\tlocalhost\n127.0.1.1\t' + hostname)


class ConfigureNetworkIF(Task):
    description = 'Configuring network interfaces'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from ..tools import config_get, rel_path
        network_config_path = rel_path(__file__, 'network-configuration.yml')
        if_config = config_get(network_config_path, [info.manifest.release.codename])
        interfaces_path = os.path.join(info.root, 'etc/network/interfaces')
        with open(interfaces_path, 'a') as interfaces:
            interfaces.write(if_config + '\n')


# --- bootstrapvz/common/tasks/packages.py ---
from bootstrapvz.base import Task
from .. import phases
import apt
from ..tools import log_check_call


class AddManifestPackages(Task):
    description = 'Adding packages from the manifest'
    phase = phases.preparation
    predecessors = [apt.AddManifestSources, apt.AddDefaultSources, apt.AddBackports]

    @classmethod
    def run(cls, info):
        import re
        remote = re.compile('^(?P<name>[^/]+)(/(?P<target>[^/]+))?$')
        for package in info.manifest.packages['install']:
            match = remote.match(package)
            if match is not None:
                info.packages.add(match.group('name'), match.group('target'))
            else:
                info.packages.add_local(package)


class InstallPackages(Task):
    description = 'Installing packages'
    phase = phases.package_installation
    predecessors = [apt.AptUpgrade]

    @classmethod
    def run(cls, info):
        batch = []
        actions = {info.packages.Remote: cls.install_remote,
                   info.packages.Local: cls.install_local}
        for i, package in enumerate(info.packages.install):
            batch.append(package)
            next_package = info.packages.install[i + 1] if i + 1 < len(info.packages.install) else None
            if next_package is None or package.__class__ is not next_package.__class__:
                actions[package.__class__](info, batch)
                batch = []

    @classmethod
    def install_remote(cls, info, remote_packages):
        import os
        from ..tools import log_check_call
        from subprocess import CalledProcessError
        try:
            env = os.environ.copy()
            env['DEBIAN_FRONTEND'] = 'noninteractive'
            log_check_call(['chroot', info.root,
                            'apt-get', 'install',
                            '--no-install-recommends',
                            '--assume-yes'] + map(str, remote_packages),
                           env=env)
        except CalledProcessError as e:
            import logging
            disk_stat = os.statvfs(info.root)
            root_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            disk_stat = os.statvfs(os.path.join(info.root, 'boot'))
            boot_free_mb = disk_stat.f_bsize * disk_stat.f_bavail / 1024 / 1024
            free_mb = min(root_free_mb, boot_free_mb)
            if free_mb < 50:
                msg = ('apt exited with a non-zero status, '
                       'this may be because\nthe image volume is '
                       'running out of disk space ({free}MB left)').format(free=free_mb)
                logging.getLogger(__name__).warn(msg)
            else:
                if e.returncode == 100:
                    msg = ('apt exited with status code 100. '
                           'This can sometimes occur when package retrieval times out or a package extraction failed. '
                           'apt might succeed if you try bootstrapping again.')
                    logging.getLogger(__name__).warn(msg)
            raise

    @classmethod
    def install_local(cls, info, local_packages):
        from shutil import copy
        import os

        absolute_package_paths = []
        chrooted_package_paths = []
        for package_src in local_packages:
            pkg_name = os.path.basename(package_src.path)
            package_rel_dst = os.path.join('tmp', pkg_name)
            package_dst = os.path.join(info.root, package_rel_dst)
            copy(package_src.path, package_dst)
            absolute_package_paths.append(package_dst)
            package_path = os.path.join('/', package_rel_dst)
            chrooted_package_paths.append(package_path)

        env = os.environ.copy()
        env['DEBIAN_FRONTEND'] = 'noninteractive'
        log_check_call(['chroot', info.root,
                        'dpkg', '--install'] + chrooted_package_paths,
                       env=env)

        for path in absolute_package_paths:
            os.remove(path)


class AddTaskselStandardPackages(Task):
    description = 'Adding standard packages from tasksel'
    phase = phases.package_installation
    predecessors = [apt.AptUpdate]
    successors = [InstallPackages]

    @classmethod
    def run(cls, info):
        tasksel_packages = log_check_call(['chroot', info.root, 'tasksel', '--task-packages', 'standard'])
        for pkg in tasksel_packages:
            info.packages.add(pkg)


# --- bootstrapvz/common/tasks/partitioning.py ---
from bootstrapvz.base import Task
from bootstrapvz.common import phases
import filesystem
import host
import volume


class AddRequiredCommands(Task):
    description = 'Adding commands required for partitioning the volume'
    phase = phases.validation
    successors = [host.CheckExternalCommands]

    @classmethod
    def run(cls, info):
        from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
        if not isinstance(info.volume.partition_map, NoPartitions):
            info.host_dependencies['parted'] = 'parted'
            info.host_dependencies['kpartx'] = 'kpartx'


class PartitionVolume(Task):
    description = 'Partitioning the volume'
    phase = phases.volume_preparation

    @classmethod
    def run(cls, info):
        info.volume.partition_map.create(info.volume)


class MapPartitions(Task):
    description = 'Mapping volume partitions'
    phase = phases.volume_preparation
    predecessors = [PartitionVolume]
    successors = [filesystem.Format]

    @classmethod
    def run(cls, info):
        info.volume.partition_map.map(info.volume)


class UnmapPartitions(Task):
    description = 'Removing volume partitions mapping'
    phase = phases.volume_unmounting
    predecessors = [filesystem.UnmountRoot]
    successors = [volume.Detach]

    @classmethod
    def run(cls, info):
        info.volume.partition_map.unmap(info.volume)


# --- bootstrapvz/common/tasks/security.py ---
from bootstrapvz.base import Task
from .. import phases


class EnableShadowConfig(Task):
    description = 'Enabling shadowconfig'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from ..tools import log_check_call
        log_check_call(['chroot', info.root, 'shadowconfig', 'on'])


# --- bootstrapvz/common/tasks/ssh.py ---
from bootstrapvz.base import Task
from .. import phases
from ..tools import log_check_call
import os.path
from . import assets
import initd
import shutil


class AddOpenSSHPackage(Task):
    description = 'Adding openssh package'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('openssh-server')


class AddSSHKeyGeneration(Task):
    description = 'Adding SSH private key generation init scripts'
    phase = phases.system_modification
    successors = [initd.InstallInitScripts]

    @classmethod
    def run(cls, info):
        init_scripts_dir = os.path.join(assets, 'init.d')
        systemd_dir = os.path.join(assets, 'systemd')
        install = info.initd['install']
        from subprocess import CalledProcessError
        try:
            log_check_call(['chroot', info.root,
                            'dpkg-query', '-W', 'openssh-server'])
            from bootstrapvz.common.releases import squeeze
            from bootstrapvz.common.releases import wheezy
            from bootstrapvz.common.releases import jessie
            if info.manifest.release == squeeze:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'squeeze/generate-ssh-hostkeys')
            elif info.manifest.release == wheezy:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'wheezy/generate-ssh-hostkeys')
            elif info.manifest.release == jessie:
                install['generate-ssh-hostkeys'] = os.path.join(init_scripts_dir, 'jessie/generate-ssh-hostkeys')
            else:
                install['ssh-generate-hostkeys'] = os.path.join(init_scripts_dir, 'ssh-generate-hostkeys')
                ssh_keygen_host_service = os.path.join(systemd_dir, 'ssh-generate-hostkeys.service')
                ssh_keygen_host_service_dest = os.path.join(info.root, 'etc/systemd/system/ssh-generate-hostkeys.service')
                ssh_keygen_host_script = os.path.join(assets, 'ssh-generate-hostkeys')
                ssh_keygen_host_script_dest = os.path.join(info.root, 'usr/local/sbin/ssh-generate-hostkeys')
                # Copy files over
                shutil.copy(ssh_keygen_host_service, ssh_keygen_host_service_dest)
                shutil.copy(ssh_keygen_host_script, ssh_keygen_host_script_dest)
                os.chmod(ssh_keygen_host_script_dest, 0750)
                # Enable systemd service
                log_check_call(['chroot', info.root, 'systemctl', 'enable', 'ssh-generate-hostkeys.service'])
        except CalledProcessError:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not installing SSH host key generation script.')


class DisableSSHPasswordAuthentication(Task):
    description = 'Disabling SSH password authentication'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from ..tools import sed_i
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        sed_i(sshd_config_path, '^#PasswordAuthentication yes', 'PasswordAuthentication no')


class EnableRootLogin(Task):
    description = 'Enabling SSH login for root'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^#?PermitRootLogin .*', 'PermitRootLogin yes')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not enabling SSH root login.')


class DisableRootLogin(Task):
    description = 'Disabling SSH login for root'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshdconfig_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        if os.path.exists(sshdconfig_path):
            from bootstrapvz.common.tools import sed_i
            sed_i(sshdconfig_path, '^#?PermitRootLogin .*', 'PermitRootLogin no')
        else:
            import logging
            logging.getLogger(__name__).warn('The OpenSSH server has not been installed, '
                                             'not disabling SSH root login.')


class DisableSSHDNSLookup(Task):
    description = 'Disabling sshd remote host name lookup'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sshd_config_path = os.path.join(info.root, 'etc/ssh/sshd_config')
        with open(sshd_config_path, 'a') as sshd_config:
            sshd_config.write('UseDNS no')


class ShredHostkeys(Task):
    description = 'Securely deleting ssh hostkeys'
    phase = phases.system_cleaning

    @classmethod
    def run(cls, info):
        ssh_hostkeys = ['ssh_host_dsa_key', 'ssh_host_rsa_key']
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release >= wheezy:
            ssh_hostkeys.append('ssh_host_ecdsa_key')
        from bootstrapvz.common.releases import jessie
        if info.manifest.release >= jessie:
            ssh_hostkeys.append('ssh_host_ed25519_key')

        private = [os.path.join(info.root, 'etc/ssh', name) for name in ssh_hostkeys]
        public = [path + '.pub' for path in private]

        from ..tools import log_check_call
        log_check_call(['shred', '--remove'] + [key for key in private + public if os.path.isfile(key)])


# --- bootstrapvz/common/tasks/volume.py ---
from bootstrapvz.base import Task
from .. import phases
import workspace


class Attach(Task):
    description = 'Attaching the volume'
    phase = phases.volume_creation

    @classmethod
    def run(cls, info):
        info.volume.attach()


class Detach(Task):
    description = 'Detaching the volume'
    phase = phases.volume_unmounting

    @classmethod
    def run(cls, info):
        info.volume.detach()


class Delete(Task):
    description = 'Deleting the volume'
    phase = phases.cleaning
    successors = [workspace.DeleteWorkspace]

    @classmethod
    def run(cls, info):
        info.volume.delete()


# --- bootstrapvz/common/tasks/workspace.py ---
from bootstrapvz.base import Task
from .. import phases


class CreateWorkspace(Task):
    description = 'Creating workspace'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        import os
        os.makedirs(info.workspace)


class DeleteWorkspace(Task):
    description = 'Deleting workspace'
    phase = phases.cleaning

    @classmethod
    def run(cls, info):
        import os
        os.rmdir(info.workspace)


# --- bootstrapvz/common/tools.py ---
import os


def log_check_call(command, stdin=None, env=None, shell=False, cwd=None):
    status, stdout, stderr = log_call(command, stdin, env, shell, cwd)
    from subprocess import CalledProcessError
    if status != 0:
        e = CalledProcessError(status, ' '.join(command), '\n'.join(stderr))
        # Fix Pyro4's fixIronPythonExceptionForPickle() by setting the args property,
        # even though we use our own serialization (at least I think that's the problem).
        # See bootstrapvz.remote.serialize_called_process_error for more info.
        setattr(e, 'args', (status, ' '.join(command), '\n'.join(stderr)))
        raise e
    return stdout


def log_call(command, stdin=None, env=None, shell=False, cwd=None):
    import subprocess
    import logging
    from multiprocessing.dummy import Pool as ThreadPool
    from os.path import realpath

    command_log = realpath(command[0]).replace('/', '.')
    log = logging.getLogger(__name__ + command_log)
    if type(command) is list:
        log.debug('Executing: {command}'.format(command=' '.join(command)))
    else:
        log.debug('Executing: {command}'.format(command=command))

    process = subprocess.Popen(args=command, env=env, shell=shell, cwd=cwd,
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)

    if stdin is not None:
        log.debug('  stdin: ' + stdin)
        process.stdin.write(stdin + "\n")
        process.stdin.flush()
    process.stdin.close()

    stdout = []
    stderr = []

    def handle_stdout(line):
        log.debug(line)
        stdout.append(line)

    def handle_stderr(line):
        log.error(line)
        stderr.append(line)

    handlers = {process.stdout: handle_stdout,
                process.stderr: handle_stderr}

    def stream_readline(stream):
        for line in iter(stream.readline, ''):
            handlers[stream](line.strip())

    pool = ThreadPool(2)
    pool.map(stream_readline, [process.stdout, process.stderr])
    pool.close()
    pool.join()
    process.wait()
    return process.returncode, stdout, stderr


def sed_i(file_path, pattern, subst, expected_replacements=1):
    replacement_count = inline_replace(file_path, pattern, subst)
    if replacement_count != expected_replacements:
        from exceptions import UnexpectedNumMatchesError
        msg = ('There were {real} instead of {expected} matches for '
               'the expression `{exp}\' in the file `{path}\''
               .format(real=replacement_count, expected=expected_replacements,
                       exp=pattern, path=file_path))
        raise UnexpectedNumMatchesError(msg)


def inline_replace(file_path, pattern, subst):
    import fileinput
    import re
    replacement_count = 0
    for line in fileinput.input(files=file_path, inplace=True):
        (replacement, count) = re.subn(pattern, subst, line)
        replacement_count += count
        print replacement,
    return replacement_count


def load_json(path):
    import json
    from minify_json import json_minify
    with open(path) as stream:
        return json.loads(json_minify(stream.read(), False))


def load_yaml(path):
    import yaml
    with open(path, 'r') as stream:
        return yaml.safe_load(stream)


def load_data(path):
    filename, extension = os.path.splitext(path)
    if not os.path.isfile(path):
        raise Exception('The path {path} does not point to a file.'.format(path=path))
    if extension == '.json':
        return load_json(path)
    elif extension == '.yml' or extension == '.yaml':
        return load_yaml(path)
    else:
        raise Exception('Unrecognized extension: {ext}'.format(ext=extension))


def config_get(path, config_path):
    config = load_data(path)
    for key in config_path:
        config = config.get(key)
    return config


def copy_tree(from_path, to_path):
    from shutil import copy
    for abs_prefix, dirs, files in os.walk(from_path):
        prefix = os.path.normpath(os.path.relpath(abs_prefix, from_path))
        for path in dirs:
            full_path = os.path.join(to_path, prefix, path)
if os.path.exists(full_path): if os.path.isdir(full_path): continue else: os.remove(full_path) os.mkdir(full_path) for path in files: copy(os.path.join(abs_prefix, path), os.path.join(to_path, prefix, path)) def rel_path(base, path): import os.path return os.path.normpath(os.path.join(os.path.dirname(base), path)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/000077500000000000000000000000001323112141500217215ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/README.rst000066400000000000000000000006451323112141500234150ustar00rootroot00000000000000Plugins are a key feature of bootstrap-vz. Despite their small size (most plugins do not exceed 100 source lines of code) they can modify the behavior of bootstrapped systems to a great extent. Below you will find documentation for all plugins available for bootstrap-vz. If you cannot find what you are looking for, consider `developing it yourself `__ and contribute to this list! bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/__init__.py000066400000000000000000000000001323112141500240200ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/admin_user/000077500000000000000000000000001323112141500240475ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/admin_user/README.rst000066400000000000000000000025641323112141500255450ustar00rootroot00000000000000Admin user ---------- This plugin creates a user with passwordless sudo privileges. It also disables the SSH root login. There are three ways to grant access to the admin user: - Use the EC2 public key (EC2 machines only) - Set a password for the user - Provide a SSH public key to allow remote SSH login If the EC2 init scripts are installed, the script for fetching the SSH authorized keys will be adjusted to match the username specified in ``username``. 
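If the EC2 key injection path is used, no key material needs to appear in the manifest at all; a minimal example (the username here is purely illustrative):

.. code-block:: yaml

    ---
    plugins:
      admin_user:
        username: admin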
If a password is provided (the ``password`` setting), this plugin sets the admin password, which also re-enables SSH password login (off by default in Jessie or newer). If the optional setting ``pubkey`` is present (it should be a full path to a SSH public key), you will be able to log in to the admin user account using the corresponding private key (this disables the EC2 public key injection mechanism). The ``password`` and ``pubkey`` settings can be used at the same time. Settings ~~~~~~~~ - ``username``: The username of the account to create. ``required`` - ``password``: An optional password for the account to create. ``optional`` - ``pubkey``: The full path to an SSH public key to allow remote access into the admin account. ``optional`` Example: .. code-block:: yaml --- plugins: admin_user: username: admin password: s3cr3t pubkey: /home/bootstrap-vz/.ssh/id_rsa bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/admin_user/__init__.py000066400000000000000000000022361323112141500261630ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import logging import tasks from bootstrapvz.common.tasks import ssh from bootstrapvz.common.releases import jessie if manifest.release < jessie: taskset.update([ssh.DisableRootLogin]) if 'password' in manifest.plugins['admin_user']: taskset.discard(ssh.DisableSSHPasswordAuthentication) taskset.add(tasks.AdminUserPassword) if 'pubkey' in manifest.plugins['admin_user']: taskset.add(tasks.CheckPublicKeyFile) taskset.add(tasks.AdminUserPublicKey) elif manifest.provider['name'] == 'ec2': logging.getLogger(__name__).info("The SSH key will be obtained from EC2") taskset.add(tasks.AdminUserPublicKeyEC2) elif 'password' not in manifest.plugins['admin_user']: logging.getLogger(__name__).warn("No SSH key and no password set") taskset.update([tasks.AddSudoPackage, 
tasks.CreateAdminUser, tasks.PasswordlessSudo, ]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/admin_user/manifest-schema.yml000066400000000000000000000007111323112141500276350ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Admin user plugin manifest type: object properties: plugins: type: object properties: admin_user: type: object properties: username: {type: string} password: {type: string} pubkey: {$ref: '#/definitions/path'} required: [username] additionalProperties: false definitions: path: pattern: ^[^\0]+$ type: string bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/admin_user/tasks.py000066400000000000000000000125361323112141500255550ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks.initd import InstallInitScripts from bootstrapvz.providers.ec2.tasks.initd import AddEC2InitScripts import os import logging log = logging.getLogger(__name__) class CheckPublicKeyFile(Task): description = 'Check that the public key is a valid file' phase = phases.validation @classmethod def run(cls, info): from bootstrapvz.common.tools import log_call, rel_path pubkey = info.manifest.plugins['admin_user'].get('pubkey', None) if pubkey is not None: abs_pubkey = rel_path(info.manifest.path, pubkey) if not os.path.isfile(abs_pubkey): msg = 'Could not find public key at %s' % pubkey info.manifest.validation_error(msg, ['plugins', 'admin_user', 'pubkey']) ret, _, stderr = log_call(['ssh-keygen', '-l', '-f', abs_pubkey]) if ret != 0: msg = 'Invalid public key file at %s' % pubkey info.manifest.validation_error(msg, ['plugins', 'admin_user', 'pubkey']) class AddSudoPackage(Task): description = 'Adding `sudo\' to the image packages' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('sudo') class CreateAdminUser(Task): description = 'Creating the admin user' phase = phases.system_modification @classmethod def run(cls, info): 
from bootstrapvz.common.tools import log_check_call log_check_call(['chroot', info.root, 'useradd', '--create-home', '--shell', '/bin/bash', info.manifest.plugins['admin_user']['username']]) class PasswordlessSudo(Task): description = 'Allowing the admin user to use sudo without a password' phase = phases.system_modification @classmethod def run(cls, info): sudo_admin_path = os.path.join(info.root, 'etc/sudoers.d/99_admin') username = info.manifest.plugins['admin_user']['username'] with open(sudo_admin_path, 'w') as sudo_admin: sudo_admin.write('{username} ALL=(ALL) NOPASSWD:ALL\n'.format(username=username)) import stat ug_read_only = (stat.S_IRUSR | stat.S_IRGRP) os.chmod(sudo_admin_path, ug_read_only) class AdminUserPassword(Task): description = 'Setting the admin user password' phase = phases.system_modification predecessors = [InstallInitScripts, CreateAdminUser] @classmethod def run(cls, info): from bootstrapvz.common.tools import log_check_call log_check_call(['chroot', info.root, 'chpasswd'], info.manifest.plugins['admin_user']['username'] + ':' + info.manifest.plugins['admin_user']['password']) class AdminUserPublicKey(Task): description = 'Installing the public key for the admin user' phase = phases.system_modification predecessors = [AddEC2InitScripts, CreateAdminUser] successors = [InstallInitScripts] @classmethod def run(cls, info): if 'ec2-get-credentials' in info.initd['install']: log.warn('You are using a static public key for the admin account. ' 'This will conflict with the ec2 public key injection mechanism. '
'The ec2-get-credentials startup script will therefore not be enabled.') del info.initd['install']['ec2-get-credentials'] # Get the stuff we need (username & public key) username = info.manifest.plugins['admin_user']['username'] from bootstrapvz.common.tools import rel_path pubkey_path = rel_path(info.manifest.path, info.manifest.plugins['admin_user']['pubkey']) with open(pubkey_path) as pubkey_handle: pubkey = pubkey_handle.read() # paths from os.path import join ssh_dir_rel = join('home', username, '.ssh') auth_keys_rel = join(ssh_dir_rel, 'authorized_keys') ssh_dir_abs = join(info.root, ssh_dir_rel) auth_keys_abs = join(info.root, auth_keys_rel) # Create the ssh dir if nobody has created it yet if not os.path.exists(ssh_dir_abs): os.mkdir(ssh_dir_abs, 0700) # Create (or append to) the authorized keys file (and chmod u=rw,go=) import stat with open(auth_keys_abs, 'a') as auth_keys_handle: auth_keys_handle.write(pubkey + '\n') os.chmod(auth_keys_abs, (stat.S_IRUSR | stat.S_IWUSR)) # Set the owner of the authorized keys file # (must be through chroot, the host system doesn't know about the user) from bootstrapvz.common.tools import log_check_call log_check_call(['chroot', info.root, 'chown', '-R', (username + ':' + username), ssh_dir_rel]) class AdminUserPublicKeyEC2(Task): description = 'Modifying ec2-get-credentials to copy the ssh public key to the admin user' phase = phases.system_modification predecessors = [InstallInitScripts, CreateAdminUser] @classmethod def run(cls, info): from bootstrapvz.common.tools import sed_i getcreds_path = os.path.join(info.root, 'etc/init.d/ec2-get-credentials') username = info.manifest.plugins['admin_user']['username'] sed_i(getcreds_path, "username='root'", "username='{username}'".format(username=username)) 
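The ``sed_i`` call above relies on the inline replacement helper defined in ``bootstrapvz.common.tools`` (shown earlier in this dump). Below is a minimal standalone sketch of that behavior, with a throwaway file standing in for the init script; the real helper additionally raises when the match count differs from the expected one:

```python
import os
import re
import tempfile

def inline_replace(file_path, pattern, subst):
    # Rewrite the file line by line, returning the total number of substitutions
    count = 0
    out = []
    with open(file_path) as handle:
        for line in handle:
            replaced, n = re.subn(pattern, subst, line)
            count += n
            out.append(replaced)
    with open(file_path, 'w') as handle:
        handle.writelines(out)
    return count

# Demo on a throwaway file mimicking the ec2-get-credentials edit above
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as handle:
    handle.write("username='root'\n")
assert inline_replace(path, "username='root'", "username='admin'") == 1
with open(path) as handle:
    assert handle.read() == "username='admin'\n"
os.remove(path)
```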
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ansible/000077500000000000000000000000001323112141500233365ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ansible/__init__.py000066400000000000000000000006371323112141500254550ustar00rootroot00000000000000import tasks def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.update([tasks.AddPackages, tasks.AddRequiredCommands, tasks.CheckPlaybookPath, tasks.RunAnsiblePlaybook, ]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ansible/manifest-schema.yml000066400000000000000000000014671323112141500271350ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Ansible plugin manifest type: object properties: plugins: type: object properties: ansible: type: object properties: extra_vars: type: object minItems: 1 tags: type: array items: {type: string} minItems: 1 skip_tags: type: array items: {type: string} minItems: 1 opt_flags: type: array items: {type: string} minItems: 1 groups: type: array host: {type: string} minItems: 1 playbook: type: string pattern: ^[^\0]+$ required: [playbook] additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ansible/tasks.py000066400000000000000000000060271323112141500250420ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common.tasks import host from bootstrapvz.common import phases from bootstrapvz.common.tools import rel_path import os import json class AddRequiredCommands(Task): description = 'Adding commands required for provisioning with ansible' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): info.host_dependencies['ansible'] = 'ansible' class CheckPlaybookPath(Task): description = 'Checking whether the playbook path exist' phase = phases.validation @classmethod 
def run(cls, info): from bootstrapvz.common.exceptions import TaskError playbook = rel_path(info.manifest.path, info.manifest.plugins['ansible']['playbook']) if not os.path.exists(playbook): msg = 'The playbook file {playbook} does not exist.'.format(playbook=playbook) raise TaskError(msg) if not os.path.isfile(playbook): msg = 'The playbook path {playbook} does not point to a file.'.format(playbook=playbook) raise TaskError(msg) class AddPackages(Task): description = 'Making sure python is installed' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('python') class RunAnsiblePlaybook(Task): description = 'Running ansible playbook' phase = phases.user_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import log_check_call # Extract playbook and directory playbook = rel_path(info.manifest.path, info.manifest.plugins['ansible']['playbook']) # build the inventory file inventory = os.path.join(info.root, 'tmp/bootstrap-inventory') with open(inventory, 'w') as handle: conn = '{} ansible_connection=chroot'.format(info.root) content = "" if 'groups' in info.manifest.plugins['ansible']: for group in info.manifest.plugins['ansible']['groups']: content += '[{}]\n{}\n'.format(group, conn) else: content = conn handle.write(content) # build the ansible command cmd = ['ansible-playbook', '-i', inventory, playbook] if 'extra_vars' in info.manifest.plugins['ansible']: cmd.extend(['--extra-vars', json.dumps(info.manifest.plugins['ansible']['extra_vars'])]) if 'tags' in info.manifest.plugins['ansible']: cmd.extend(['--tags', ','.join(info.manifest.plugins['ansible']['tags'])]) if 'skip_tags' in info.manifest.plugins['ansible']: cmd.extend(['--skip-tags', ','.join(info.manifest.plugins['ansible']['skip_tags'])]) if 'opt_flags' in info.manifest.plugins['ansible']: # Should probably do proper validation on these, but I don't think it should be used very often. 
cmd.extend(info.manifest.plugins['ansible']['opt_flags']) # Run and remove the inventory file log_check_call(cmd) os.remove(inventory) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/apt_proxy/000077500000000000000000000000001323112141500237465ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/apt_proxy/README.rst000066400000000000000000000016651323112141500254450ustar00rootroot00000000000000APT Proxy --------- This plugin creates a proxy configuration file for APT, so you could enjoy the benefits of using cached packages instead of downloading them from the mirror every time. You could just install ``apt-cacher-ng`` on the host machine and then add ``"address": "127.0.0.1"`` and ``"port": 3142`` to the manifest file. Settings ~~~~~~~~ - ``address``: The IP or host of the proxy server. ``required`` - ``port``: The port (integer) of the proxy server. ``required`` - ``username``: The username for authentication against the proxy server. This is ignored if ``password`` is not also set. ``optional`` - ``password``: The password for authentication against the proxy server. This is ignored if ``username`` is not also set. ``optional`` - ``persistent``: Whether the proxy configuration file should remain on the machine or not. Valid values: ``true``, ``false`` Default: ``false``. 
``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/apt_proxy/__init__.py000066400000000000000000000006161323112141500260620ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import tasks taskset.add(tasks.CheckAptProxy) taskset.add(tasks.SetAptProxy) if not manifest.plugins['apt_proxy'].get('persistent', False): taskset.add(tasks.RemoveAptProxy) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/apt_proxy/manifest-schema.yml000066400000000000000000000007111323112141500275340ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: APT proxy plugin manifest type: object properties: plugins: type: object properties: apt_proxy: type: object properties: address: {type: string} password: {type: string} port: {type: integer} persistent: {type: boolean} username: {type: string} required: [address, port] additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/apt_proxy/tasks.py000066400000000000000000000043171323112141500254520ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt import os import urllib2 class CheckAptProxy(Task): description = 'Checking reachability of APT proxy server' phase = phases.validation @classmethod def run(cls, info): proxy_address = info.manifest.plugins['apt_proxy']['address'] proxy_port = info.manifest.plugins['apt_proxy']['port'] proxy_url = 'http://{address}:{port}'.format(address=proxy_address, port=proxy_port) try: urllib2.urlopen(proxy_url, timeout=5) except Exception as e: # Default response from `apt-cacher-ng` if isinstance(e, urllib2.HTTPError) and e.code in [404, 406] and e.msg == 'Usage Information': pass else: import logging log = logging.getLogger(__name__) log.warning('The APT proxy server couldn\'t be 
reached. `apt-get\' commands may fail.') class SetAptProxy(Task): description = 'Setting proxy for APT' phase = phases.package_installation successors = [apt.AptUpdate] @classmethod def run(cls, info): proxy_path = os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy') proxy_username = info.manifest.plugins['apt_proxy'].get('username') proxy_password = info.manifest.plugins['apt_proxy'].get('password') proxy_address = info.manifest.plugins['apt_proxy']['address'] proxy_port = info.manifest.plugins['apt_proxy']['port'] if None not in (proxy_username, proxy_password): proxy_auth = '{username}:{password}@'.format( username=proxy_username, password=proxy_password) else: proxy_auth = '' with open(proxy_path, 'w') as proxy_file: proxy_file.write( 'Acquire::http {{ Proxy "http://{auth}{address}:{port}"; }};\n' .format(auth=proxy_auth, address=proxy_address, port=proxy_port)) class RemoveAptProxy(Task): description = 'Removing APT proxy configuration file' phase = phases.system_cleaning @classmethod def run(cls, info): os.remove(os.path.join(info.root, 'etc/apt/apt.conf.d/02proxy')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/chef/000077500000000000000000000000001323112141500226265ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/chef/__init__.py000066400000000000000000000005741323112141500247450ustar00rootroot00000000000000import tasks def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.add(tasks.AddPackages) if 'assets' in manifest.plugins['chef']: taskset.add(tasks.CheckAssetsPath) taskset.add(tasks.CopyChefAssets) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/chef/manifest-schema.yml000066400000000000000000000006221323112141500264150ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Chef plugin manifest type: object properties: plugins: type: 
object properties: chef: type: object properties: assets: $ref: '#/definitions/absolute_path' required: [assets] additionalProperties: false definitions: absolute_path: pattern: ^/[^\0]+$ type: string bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/chef/tasks.py000066400000000000000000000022131323112141500243230ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import os class CheckAssetsPath(Task): description = 'Checking whether the assets path exists' phase = phases.validation @classmethod def run(cls, info): from bootstrapvz.common.exceptions import TaskError assets = info.manifest.plugins['chef']['assets'] if not os.path.exists(assets): msg = 'The assets directory {assets} does not exist.'.format(assets=assets) raise TaskError(msg) if not os.path.isdir(assets): msg = 'The assets path {assets} does not point to a directory.'.format(assets=assets) raise TaskError(msg) class AddPackages(Task): description = 'Adding the chef package' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('chef') class CopyChefAssets(Task): description = 'Copying chef assets' phase = phases.system_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import copy_tree copy_tree(info.manifest.plugins['chef']['assets'], os.path.join(info.root, 'etc/chef'))
``required`` - ``groups``: A list of strings specifying which additional groups the account should be added to. ``optional`` - ``disable_modules``: A list of strings specifying which cloud-init modules should be disabled. ``optional`` - ``metadata_sources``: A string that sets the `datasources `__ that cloud-init should try fetching metadata from (corresponds to debconf-set-selections values). The source is automatically set when using the ec2 provider. ``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/__init__.py000066400000000000000000000027151323112141500261700ustar00rootroot00000000000000from bootstrapvz.common.tools import rel_path assets = rel_path(__file__, 'assets') def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import tasks import bootstrapvz.providers.ec2.tasks.initd as initd_ec2 from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import initd from bootstrapvz.common.tasks import ssh from bootstrapvz.common.releases import wheezy from bootstrapvz.common.releases import jessie if manifest.release == wheezy: taskset.add(apt.AddBackports) if manifest.release >= jessie: taskset.add(tasks.SetCloudInitMountOptions) taskset.update([tasks.SetMetadataSource, tasks.AddCloudInitPackages, ]) options = manifest.plugins['cloud_init'] if 'username' in options: taskset.add(tasks.SetUsername) if 'groups' in options and len(options['groups']): taskset.add(tasks.SetGroups) if 'enable_modules' in options: taskset.add(tasks.EnableModules) if 'disable_modules' in options: taskset.add(tasks.DisableModules) taskset.discard(initd_ec2.AddEC2InitScripts) taskset.discard(initd.AddExpandRoot) taskset.discard(initd.AdjustExpandRootScript) taskset.discard(initd.AdjustGrowpartWorkaround) taskset.discard(ssh.AddSSHKeyGeneration) 
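Guards like ``manifest.release >= jessie`` in the plugin's ``resolve_tasks`` work because release objects are ordered; a hypothetical stand-in (not the actual ``bootstrapvz.common.releases`` implementation) illustrates the comparison semantics:

```python
from functools import total_ordering

# Hypothetical stand-in for bootstrapvz.common.releases: the real module
# exposes release objects that support ordered comparisons like these.
@total_ordering
class Release(object):
    def __init__(self, name, version):
        self.name = name
        self.version = version

    def __eq__(self, other):
        return self.version == other.version

    def __lt__(self, other):
        return self.version < other.version

wheezy = Release('wheezy', 7)
jessie = Release('jessie', 8)
stretch = Release('stretch', 9)

# This is what guards like `manifest.release >= jessie` above rely on
assert wheezy < jessie <= stretch
assert jessie >= jessie
```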
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/assets/000077500000000000000000000000001323112141500253545ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/assets/cloud-init/000077500000000000000000000000001323112141500274235ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/assets/cloud-init/debian_cloud.cfg000066400000000000000000000003101323112141500325060ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. apt_preserve_sources_list: true manage_etc_hosts: true bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/manifest-schema.yml000066400000000000000000000021151323112141500276400ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: cloud-init plugin manifest type: object properties: system: type: object properties: release: type: string enum: - wheezy - oldstable - jessie - stable - stretch - testing - sid - unstable plugins: type: object properties: cloud_init: type: object properties: username: {type: string} groups: type: array items: {type: string} uniqueItems: true metadata_sources: {type: string} disable_modules: type: array items: {type: string} uniqueItems: true enable_modules: type: object properties: cloud_init_modules: type: array properties: module: {type: string} position: {type: number} additionalProperties: false required: [username] additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/cloud_init/tasks.py000066400000000000000000000117311323112141500255540ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tools import log_check_call from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import locale from . 
import assets from shutil import copy import logging import os class AddCloudInitPackages(Task): description = 'Adding cloud-init package and sudo' phase = phases.preparation predecessors = [apt.AddBackports] @classmethod def run(cls, info): target = None from bootstrapvz.common.releases import wheezy if info.manifest.release == wheezy: target = '{system.release}-backports' info.packages.add('cloud-init', target) info.packages.add('sudo') class SetUsername(Task): description = 'Setting username in cloud.cfg' phase = phases.system_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import sed_i cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg') username = info.manifest.plugins['cloud_init']['username'] search = '^ name: debian$' replace = (' name: {username}\n' ' sudo: ALL=(ALL) NOPASSWD:ALL\n' ' shell: /bin/bash').format(username=username) sed_i(cloud_cfg, search, replace) class SetGroups(Task): description = 'Setting groups in cloud.cfg' phase = phases.system_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import sed_i cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg') groups = info.manifest.plugins['cloud_init']['groups'] search = ('^ groups: \[adm, audio, cdrom, dialout, floppy, video,' ' plugdev, dip\]$') replace = (' groups: [adm, audio, cdrom, dialout, floppy, video,' ' plugdev, dip, {groups}]').format(groups=', '.join(groups)) sed_i(cloud_cfg, search, replace) class SetMetadataSource(Task): description = 'Setting metadata source' phase = phases.package_installation predecessors = [locale.GenerateLocale] successors = [apt.AptUpdate] @classmethod def run(cls, info): if 'metadata_sources' in info.manifest.plugins['cloud_init']: sources = info.manifest.plugins['cloud_init']['metadata_sources'] else: source_mapping = {'ec2': 'Ec2'} sources = source_mapping.get(info.manifest.provider['name'], None) if sources is None: msg = ('No cloud-init metadata source mapping found for provider `{provider}\', 
' 'skipping selections setting.').format(provider=info.manifest.provider['name']) logging.getLogger(__name__).warn(msg) return sources = "cloud-init cloud-init/datasources multiselect " + sources log_check_call(['chroot', info.root, 'debconf-set-selections'], sources) class DisableModules(Task): description = 'Disabling cloud.cfg modules' phase = phases.system_modification @classmethod def run(cls, info): import re patterns = "" for pattern in info.manifest.plugins['cloud_init']['disable_modules']: if patterns != "": patterns = patterns + "|" + pattern else: patterns = "^\s+-\s+(" + pattern patterns = patterns + ")$" regex = re.compile(patterns) cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg') import fileinput for line in fileinput.input(files=cloud_cfg, inplace=True): if not regex.match(line): print line, class EnableModules(Task): description = 'Enabling cloud.cfg modules' phase = phases.system_modification @classmethod def run(cls, info): import fileinput import re cloud_cfg = os.path.join(info.root, 'etc/cloud/cloud.cfg') for section in info.manifest.plugins['cloud_init']['enable_modules']: regex = re.compile("^%s:" % section) for entry in info.manifest.plugins['cloud_init']['enable_modules'][section]: count = 0 counting = 0 for line in fileinput.input(files=cloud_cfg, inplace=True): if regex.match(line) and not counting: counting = True if counting: count = count + 1 if int(entry['position']) == int(count): print(" - %s" % entry['module']) print line, class SetCloudInitMountOptions(Task): description = 'Setting cloud-init default mount options' phase = phases.system_modification @classmethod def run(cls, info): cloud_init_src = os.path.join(assets, 'cloud-init/debian_cloud.cfg') cloud_init_dst = os.path.join(info.root, 'etc/cloud/cloud.cfg.d/01_debian_cloud.cfg') copy(cloud_init_src, cloud_init_dst) os.chmod(cloud_init_dst, 0644) 
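The ``DisableModules`` task above joins the configured module names into one alternation regex of the shape ``^\s+-\s+(mod1|mod2)$`` and drops matching list entries from ``cloud.cfg``. A condensed illustration of that filtering, with made-up module names and a made-up config snippet:

```python
import re

# Hypothetical manifest values, for illustration only
disable_modules = ['ssh-import-id', 'landscape']

# Same shape of pattern as DisableModules builds: ^\s+-\s+(mod1|mod2)$
pattern = re.compile(r'^\s+-\s+(' + '|'.join(map(re.escape, disable_modules)) + r')$')

cloud_cfg = """cloud_init_modules:
 - migrator
 - ssh-import-id
 - landscape
 - ssh
"""

# Keep every line that does not name a disabled module
kept = [line for line in cloud_cfg.splitlines() if not pattern.match(line)]
assert kept == ['cloud_init_modules:', ' - migrator', ' - ssh']
```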
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/commands/000077500000000000000000000000001323112141500235225ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/commands/README.rst000066400000000000000000000021421323112141500252100ustar00rootroot00000000000000Commands -------- This plugin allows you to run arbitrary commands during the bootstrap process. The commands are run at an indeterminate point *after* packages have been installed, but *before* the volume has been unmounted. Settings ~~~~~~~~ - ``commands``: A list of lists containing strings. Each top-level item is a single command, while the strings inside each list comprise parts of a command. This allows for proper shell argument escaping. To circumvent escaping, simply put the entire command in a single string; the command will then additionally be evaluated in a shell (e.g. globbing will work). In addition to the manifest variables, ``{root}`` is also available. It points at the root of the image volume. ``chroot {root}`` should be used for the command to run in the image's environment. ``required`` ``manifest vars`` Example ~~~~~~~ Create an empty ``index.html`` in ``/var/www`` and delete all locales except English. ..
code-block:: yaml commands: commands: - [touch, '{root}/var/www/index.html'] - ['chroot {root} rm -rf /usr/share/locale/[^en]*'] bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/commands/__init__.py000066400000000000000000000004311323112141500256310ustar00rootroot00000000000000 def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): from tasks import ImageExecuteCommand taskset.add(ImageExecuteCommand) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/commands/manifest-schema.yml000066400000000000000000000007521323112141500273150ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Commands plugin manifest type: object properties: plugins: type: object properties: commands: type: object properties: commands: items: items: type: string minItems: 1 type: array minItems: 1 type: array required: [commands] additionalProperties: false required: [commands] bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/commands/tasks.py000066400000000000000000000013421323112141500252210ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.plugins.file_copy.tasks import MkdirCommand from bootstrapvz.plugins.file_copy.tasks import FileCopyCommand class ImageExecuteCommand(Task): description = 'Executing commands in the image' phase = phases.user_modification predecessors = [MkdirCommand, FileCopyCommand] @classmethod def run(cls, info): from bootstrapvz.common.tools import log_check_call for raw_command in info.manifest.plugins['commands']['commands']: command = map(lambda part: part.format(root=info.root, **info.manifest_vars), raw_command) shell = len(command) == 1 log_check_call(command, shell=shell) 
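``ImageExecuteCommand`` substitutes ``{root}`` (plus any manifest variables) into each command part and only routes the command through a shell when it consists of a single string. A simplified sketch of just that substitution logic (``build_command`` is a name invented here, not part of bootstrap-vz):

```python
def build_command(raw_command, root, manifest_vars):
    # Substitute {root} and manifest variables into every part,
    # mirroring the formatting done by ImageExecuteCommand
    command = [part.format(root=root, **manifest_vars) for part in raw_command]
    # A single-string command is handed to a shell, so globbing etc. works
    return command, len(command) == 1

cmd, use_shell = build_command(['touch', '{root}/var/www/index.html'], '/mnt/target', {})
assert cmd == ['touch', '/mnt/target/var/www/index.html']
assert use_shell is False

cmd, use_shell = build_command(['chroot {root} rm -rf /usr/share/locale/[^en]*'], '/mnt/target', {})
assert use_shell is True
```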
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/debconf/000077500000000000000000000000001323112141500233215ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/debconf/README.rst000066400000000000000000000012121323112141500250060ustar00rootroot00000000000000debconf ------- ``debconf(7)`` is the configuration system for Debian packages. It enables you to preconfigure packages before their installation. This plugin lets you specify debconf answers directly in the manifest. You should only specify answers for packages that will be installed; the plugin does not check that this is the case. Settings ~~~~~~~~ The ``debconf`` plugin directly takes an inline string:: plugins: debconf: >- d-i pkgsel/install-language-support boolean false popularity-contest popularity-contest/participate boolean false Consult ``debconf-set-selections(1)`` for a description of the data format. bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/debconf/__init__.py000066400000000000000000000006161323112141500254350ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import log_check_call, rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) log_check_call(['debconf-set-selections', '--checkonly'], stdin=data['plugins']['debconf']) def resolve_tasks(taskset, manifest): import tasks taskset.update([tasks.DebconfSetSelections]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/debconf/manifest-schema.yml000066400000000000000000000006031323112141500271070ustar00rootroot00000000000000$schema: http://json-schema.org/schema# title: Manifest schema for the debconf plugin type: object properties: plugins: type: object properties: debconf: name: Debconf selections to set description: >- This value should be an inline string in the input format of debconf-set-selections(1).
type: string required: [debconf] bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/debconf/tasks.py000066400000000000000000000010231323112141500250140ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import packages from bootstrapvz.common.tools import log_check_call class DebconfSetSelections(Task): description = 'Set debconf(7) selections from the manifest' phase = phases.package_installation successors = [packages.InstallPackages] @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'debconf-set-selections'], stdin=info.manifest.plugins['debconf']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/000077500000000000000000000000001323112141500245135ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/README.rst000066400000000000000000000011141323112141500261770ustar00rootroot00000000000000Docker daemon ------------- Install `docker `__ daemon in the image. Uses init scripts for the official repository. This plugin can only be used if the distribution being bootstrapped is at least ``wheezy``, as Docker needs a kernel version ``3.8`` or higher, which is available at the ``wheezy-backports`` repository. There's also an architecture requirement, as it runs only on ``amd64``. Settings ~~~~~~~~ - ``version``: Selects the docker version to install. To select the latest version simply omit this setting. 
Default: ``latest`` ``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/__init__.py000066400000000000000000000022651323112141500266310ustar00rootroot00000000000000from bootstrapvz.common.tools import rel_path import tasks from bootstrapvz.common.tasks import apt from bootstrapvz.common.releases import wheezy def validate_manifest(data, validator, error): validator(data, rel_path(__file__, 'manifest-schema.yml')) from bootstrapvz.common.releases import get_release if get_release(data['system']['release']) == wheezy: # prefs is a generator of apt preferences across files in the manifest prefs = (item for vals in data.get('packages', {}).get('preferences', {}).values() for item in vals) if not any('linux-image' in item['package'] and 'wheezy-backports' in item['pin'] for item in prefs): msg = 'The backports kernel is required for the docker daemon to function properly' error(msg, ['packages', 'preferences']) def resolve_tasks(taskset, manifest): if manifest.release == wheezy: taskset.add(apt.AddBackports) taskset.add(tasks.AddDockerDeps) taskset.add(tasks.AddDockerBinary) taskset.add(tasks.AddDockerInit) taskset.add(tasks.EnableMemoryCgroup) if len(manifest.plugins['docker_daemon'].get('pull_images', [])) > 0: taskset.add(tasks.PullDockerImages) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/assets/000077500000000000000000000000001323112141500260155ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/assets/default/000077500000000000000000000000001323112141500274415ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/assets/default/docker000066400000000000000000000014641323112141500306400ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
# Docker Upstart and SysVinit configuration file # Customize location of Docker binary (especially for development testing). #DOCKER="/usr/local/bin/docker" # Use DOCKER_OPTS to modify the daemon startup options. #DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4" # Use DOCKER_NOFILE to set ulimit -n before starting Docker. #DOCKER_NOFILE=65536 # Use DOCKER_LOCKEDMEMORY to set ulimit -l before starting Docker. #DOCKER_LOCKEDMEMORY=unlimited # If you need Docker to use an HTTP proxy, it can also be specified here. #export http_proxy="http://127.0.0.1:3128/" # This is also a handy place to tweak where Docker's temporary files go. #export TMPDIR="/mnt/bigdrive/docker-tmp" bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/assets/init.d/000077500000000000000000000000001323112141500272025ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/assets/init.d/docker000066400000000000000000000064031323112141500303770ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: docker # Required-Start: $syslog $remote_fs # Required-Stop: $syslog $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Create lightweight, portable, self-sufficient containers. # Description: # Docker is an open-source project to easily create lightweight, portable, # self-sufficient containers from any application. The same container that a # developer builds and tests on a laptop can run at scale, in production, on # VMs, bare metal, OpenStack clusters, public clouds and more. # # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
### END INIT INFO export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin BASE=$(basename $0) # modify these in /etc/default/$BASE (/etc/default/docker) DOCKER=/usr/bin/$BASE DOCKER_PIDFILE=/var/run/$BASE.pid DOCKER_LOGFILE=/var/log/$BASE.log DOCKER_OPTS= DOCKER_DESC="Docker" # Get lsb functions . /lib/lsb/init-functions if [ -f /etc/default/$BASE ]; then . /etc/default/$BASE fi # see also init_is_upstart in /lib/lsb/init-functions (which isn't available in Ubuntu 12.04, or we'd use it) if [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; then log_failure_msg "$DOCKER_DESC is managed via upstart, try using service $BASE $1" exit 1 fi # Check docker is present if [ ! -x $DOCKER ]; then log_failure_msg "$DOCKER not present or not executable" exit 1 fi fail_unless_root() { if [ "$(id -u)" != '0' ]; then log_failure_msg "$DOCKER_DESC must be run as root" exit 1 fi } cgroupfs_mount() { # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount if grep -v '^#' /etc/fstab | grep -q cgroup \ || [ ! -e /proc/cgroups ] \ || [ ! -d /sys/fs/cgroup ]; then return fi if ! mountpoint -q /sys/fs/cgroup; then mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup fi ( cd /sys/fs/cgroup for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do mkdir -p $sys if ! mountpoint -q $sys; then if ! mount -n -t cgroup -o $sys cgroup $sys; then rmdir $sys || true fi fi done ) } case "$1" in start) fail_unless_root cgroupfs_mount touch "$DOCKER_LOGFILE" chgrp docker "$DOCKER_LOGFILE" if [ -n "$DOCKER_NOFILE" ]; then ulimit -n $DOCKER_NOFILE fi if [ -n "$DOCKER_LOCKEDMEMORY" ]; then ulimit -l $DOCKER_LOCKEDMEMORY fi log_begin_msg "Starting $DOCKER_DESC: $BASE" start-stop-daemon --start --background \ --no-close \ --exec "$DOCKER" \ --pidfile "$DOCKER_PIDFILE" \ -- \ -d -p "$DOCKER_PIDFILE" \ $DOCKER_OPTS \ >> "$DOCKER_LOGFILE" 2>&1 log_end_msg $? 
;; stop) fail_unless_root log_begin_msg "Stopping $DOCKER_DESC: $BASE" start-stop-daemon --stop --pidfile "$DOCKER_PIDFILE" log_end_msg $? ;; restart) fail_unless_root docker_pid=`cat "$DOCKER_PIDFILE" 2>/dev/null` [ -n "$docker_pid" ] \ && ps -p $docker_pid > /dev/null 2>&1 \ && $0 stop $0 start ;; force-reload) fail_unless_root $0 restart ;; status) status_of_proc -p "$DOCKER_PIDFILE" "$DOCKER" docker ;; *) echo "Usage: $0 {start|stop|restart|status}" exit 1 ;; esac exit 0 bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/manifest-schema.yml000066400000000000000000000011361323112141500303030ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Install Docker plugin manifest type: object properties: system: type: object properties: architecture: type: string enum: [amd64] release: not: type: string enum: - squeeze - oldstable plugins: type: object properties: docker_daemon: type: object properties: version: pattern: '^\d\.\d{1,2}\.\d$' type: string docker_opts: type: string additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/docker_daemon/tasks.py000066400000000000000000000105341323112141500262150ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import grub from bootstrapvz.common.tasks import initd from bootstrapvz.common.tools import log_check_call, sed_i, rel_path import os import os.path import shutil import subprocess import time ASSETS_DIR = rel_path(__file__, 'assets') class AddDockerDeps(Task): description = 'Add packages for docker deps' phase = phases.preparation DOCKER_DEPS = ['aufs-tools', 'btrfs-tools', 'git', 'iptables', 'procps', 'xz-utils', 'ca-certificates'] @classmethod def run(cls, info): for pkg in cls.DOCKER_DEPS: info.packages.add(pkg) class AddDockerBinary(Task): description = 'Add docker binary' phase = phases.system_modification @classmethod def run(cls, info): docker_version = 
info.manifest.plugins['docker_daemon'].get('version', False) docker_url = 'https://get.docker.io/builds/Linux/x86_64/docker-' if docker_version: docker_url += docker_version else: docker_url += 'latest' bin_docker = os.path.join(info.root, 'usr/bin/docker') log_check_call(['wget', '-O', bin_docker, docker_url]) os.chmod(bin_docker, 0755) class AddDockerInit(Task): description = 'Add docker init script' phase = phases.system_modification successors = [initd.InstallInitScripts] @classmethod def run(cls, info): init_src = os.path.join(ASSETS_DIR, 'init.d/docker') info.initd['install']['docker'] = init_src default_src = os.path.join(ASSETS_DIR, 'default/docker') default_dest = os.path.join(info.root, 'etc/default/docker') shutil.copy(default_src, default_dest) docker_opts = info.manifest.plugins['docker_daemon'].get('docker_opts') if docker_opts: sed_i(default_dest, r'^#*DOCKER_OPTS=.*$', 'DOCKER_OPTS="%s"' % docker_opts) class EnableMemoryCgroup(Task): description = 'Enable the memory cgroup in the grub config' phase = phases.system_modification successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX'].append('cgroup_enable=memory') class PullDockerImages(Task): description = 'Pull docker images' phase = phases.system_modification predecessors = [AddDockerBinary] @classmethod def run(cls, info): from bootstrapvz.common.exceptions import TaskError from subprocess import CalledProcessError images = info.manifest.plugins['docker_daemon'].get('pull_images', []) retries = info.manifest.plugins['docker_daemon'].get('pull_images_retries', 10) bin_docker = os.path.join(info.root, 'usr/bin/docker') graph_dir = os.path.join(info.root, 'var/lib/docker') socket = 'unix://' + os.path.join(info.workspace, 'docker.sock') pidfile = os.path.join(info.workspace, 'docker.pid') try: # start docker daemon temporarily.
daemon = subprocess.Popen([bin_docker, '-d', '--graph', graph_dir, '-H', socket, '-p', pidfile]) # wait for docker daemon to start. for _ in range(retries): try: log_check_call([bin_docker, '-H', socket, 'version']) break except CalledProcessError: time.sleep(1) for img in images: # docker load if tarball. if img.endswith('.tar.gz') or img.endswith('.tgz'): cmd = [bin_docker, '-H', socket, 'load', '-i', img] try: log_check_call(cmd) except CalledProcessError as e: msg = 'error {e} loading docker image {img}.'.format(img=img, e=e) raise TaskError(msg) # docker pull if image name. else: cmd = [bin_docker, '-H', socket, 'pull', img] try: log_check_call(cmd) except CalledProcessError as e: msg = 'error {e} pulling docker image {img}.'.format(img=img, e=e) raise TaskError(msg) finally: # shutdown docker daemon. daemon.terminate() os.remove(os.path.join(info.workspace, 'docker.sock')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_launch/000077500000000000000000000000001323112141500237245ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_launch/README.rst000066400000000000000000000013271323112141500254160ustar00rootroot00000000000000ec2-launch ---------- This plugin spins up an **AWS classic instance** from the AMI created by the template from which this plugin is invoked. Settings ~~~~~~~~ - ``security_group_ids``: A list of security groups (not VPC) to attach to the instance ``required`` - ``instance_type``: A string with AWS Classic capable instance to run (default: m3.medium) ``optional`` - ``ssh_key``: A string with the ssh key name to apply to the instance.
``required`` - ``print_public_ip``: A string with the path to write instance external IP to ``optional`` - ``tags``: ``optional`` - ``deregister_ami``: A boolean value describing if the AMI should be kept after spinning up the instance or not (default: false) ``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_launch/__init__.py000066400000000000000000000007331323112141500260400ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import tasks taskset.add(tasks.LaunchEC2Instance) if 'print_public_ip' in manifest.plugins['ec2_launch']: taskset.add(tasks.PrintPublicIPAddress) if manifest.plugins['ec2_launch'].get('deregister_ami', False): taskset.add(tasks.DeregisterAMI) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_launch/manifest-schema.yml000066400000000000000000000010561323112141500275150ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: EC2-launch plugin manifest type: object properties: plugins: type: object properties: ec2_launch: type: object properties: security_group_ids: type: array items: {type: string} uniqueItems: true instance_type: {type: string} ssh_key: {type: string} print_public_ip: {type: string} tags: {type: object} deregister_ami: {type: boolean} additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_launch/tasks.py000066400000000000000000000066361323112141500254300ustar00rootroot00000000000000import logging from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.providers.ec2.tasks import ami # TODO: Merge with the method available in wip-integration-tests branch def waituntil(predicate, timeout=5, interval=0.05): import time threshhold = time.time() + timeout while time.time() < threshhold: if predicate(): return True time.sleep(interval) return False class
LaunchEC2Instance(Task): description = 'Launching EC2 instance' phase = phases.image_registration predecessors = [ami.RegisterAMI] @classmethod def run(cls, info): conn = info._ec2['connection'] r = conn.run_instances(ImageId=info._ec2['image']['ImageId'], MinCount=1, MaxCount=1, SecurityGroupIds=info.manifest.plugins['ec2_launch'].get('security_group_ids'), KeyName=info.manifest.plugins['ec2_launch'].get('ssh_key'), InstanceType=info.manifest.plugins['ec2_launch'].get('instance_type', 'm3.medium')) info._ec2['instance'] = r['Instances'][0] if 'tags' in info.manifest.plugins['ec2_launch']: raw_tags = info.manifest.plugins['ec2_launch']['tags'] formatted_tags = {k: v.format(**info.manifest_vars) for k, v in raw_tags.items()} tags = [{'Key': k, 'Value': v} for k, v in formatted_tags.items()] conn.create_tags(Resources=[info._ec2['instance']['InstanceId']], Tags=tags) class PrintPublicIPAddress(Task): description = 'Waiting for the instance to launch' phase = phases.image_registration predecessors = [LaunchEC2Instance] @classmethod def run(cls, info): conn = info._ec2['connection'] logger = logging.getLogger(__name__) filename = info.manifest.plugins['ec2_launch']['print_public_ip'] if not filename: filename = '/dev/null' f = open(filename, 'w') try: waiter = conn.get_waiter('instance_status_ok') waiter.wait(InstanceIds=[info._ec2['instance']['InstanceId']], Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]) info._ec2['instance'] = conn.describe_instances(InstanceIds=[info._ec2['instance']['InstanceId']])['Reservations'][0]['Instances'][0] logger.info('******* EC2 IP ADDRESS: %s *******' % info._ec2['instance']['PublicIpAddress']) f.write(info._ec2['instance']['PublicIpAddress']) except Exception: logger.error('Could not get IP address for the instance') f.write('') f.close() class DeregisterAMI(Task): description = 'Deregistering AMI' phase = phases.image_registration predecessors = [LaunchEC2Instance] @classmethod def run(cls, info): ec2 = info._ec2 
logger = logging.getLogger(__name__) def instance_running(): ec2['instance'].update() return ec2['instance'].state == 'running' if waituntil(instance_running, timeout=120, interval=5): info._ec2['connection'].deregister_image(info._ec2['image']) info._ec2['snapshot'].delete() else: logger.error('Timeout while booting instance') bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_publish/000077500000000000000000000000001323112141500241205ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_publish/README.rst000066400000000000000000000011031323112141500256020ustar00rootroot00000000000000EC2 publish ----------- This plugin lets you publish an EC2 AMI to multiple regions, make AMIs public, and output the AMIs generated in each file. Settings ~~~~~~~~ - ``regions``: EC2 regions to copy the final image to. ``optional`` - ``public``: Whether the AMIs should be made public (i.e. available by ALL users). Valid values: ``true``, ``false`` Default: ``false``. ``optional`` - ``manifest_url``: URL to publish generated AMIs. 
Can be a path on the local filesystem, or a URL to S3 (https://bucket.s3-region.amazonaws.com/amis.json) ``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_publish/__init__.py000066400000000000000000000007421323112141500262340ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import tasks taskset.add(tasks.CopyAmiToRegions) if 'manifest_url' in manifest.plugins['ec2_publish']: taskset.add(tasks.PublishAmiManifest) ami_public = manifest.plugins['ec2_publish'].get('public') if ami_public: taskset.add(tasks.PublishAmi) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_publish/manifest-schema.yml000066400000000000000000000013311323112141500277050ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: EC2-publish plugin manifest type: object properties: plugins: type: object properties: ec2_publish: type: object properties: regions: type: array items: {$ref: '#/definitions/aws-region'} uniqueItems: true manifest_url: {type: string} public: {type: boolean} additionalProperties: false definitions: aws-region: enum: - ap-northeast-1 - ap-northeast-2 - ap-southeast-1 - ap-southeast-2 - ca-central-1 - eu-central-1 - eu-west-1 - sa-east-1 - us-east-1 - us-gov-west-1 - us-west-1 - us-west-2 - cn-north-1 bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ec2_publish/tasks.py000066400000000000000000000072411323112141500256230ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.providers.ec2.tasks import ami import logging class CopyAmiToRegions(Task): description = 'Copy AWS AMI over other regions' phase = phases.image_registration predecessors = [ami.RegisterAMI] @classmethod def run(cls, info): source_region = info._ec2['region'] source_ami = info._ec2['image'] name = info._ec2['ami_name'] 
copy_description = "Copied from %s (%s)" % (source_ami, source_region) connect_args = { 'aws_access_key_id': info.credentials['access-key'], 'aws_secret_access_key': info.credentials['secret-key'] } if 'security-token' in info.credentials: connect_args['security_token'] = info.credentials['security-token'] region_amis = {source_region: source_ami} region_conns = {source_region: info._ec2['connection']} from boto.ec2 import connect_to_region regions = info.manifest.plugins['ec2_publish'].get('regions', ()) for region in regions: conn = connect_to_region(region, **connect_args) region_conns[region] = conn copied_image = conn.copy_image(source_region, source_ami, name=name, description=copy_description) region_amis[region] = copied_image.image_id info._ec2['region_amis'] = region_amis info._ec2['region_conns'] = region_conns class PublishAmiManifest(Task): description = 'Publish a manifest of generated AMIs' phase = phases.image_registration predecessors = [CopyAmiToRegions] @classmethod def run(cls, info): manifest_url = info.manifest.plugins['ec2_publish']['manifest_url'] import json amis_json = json.dumps(info._ec2['region_amis']) from urlparse import urlparse parsed_url = urlparse(manifest_url) parsed_host = parsed_url.netloc if not parsed_url.scheme: with open(parsed_url.path, 'w') as local_out: local_out.write(amis_json) elif parsed_host.endswith('amazonaws.com') and 's3' in parsed_host: region = 'us-east-1' path = parsed_url.path[1:] if 's3-' in parsed_host: loc = parsed_host.find('s3-') + 3 region = parsed_host[loc:parsed_host.find('.', loc)] if '.s3' in parsed_host: bucket = parsed_host[:parsed_host.find('.s3')] else: bucket, path = path.split('/', 1) from boto.s3 import connect_to_region conn = connect_to_region(region) key = conn.get_bucket(bucket, validate=False).new_key(path) headers = {'Content-Type': 'application/json'} key.set_contents_from_string(amis_json, headers=headers, policy='public-read') class PublishAmi(Task): description = 'Make generated 
AMIs public' phase = phases.image_registration predecessors = [CopyAmiToRegions] @classmethod def run(cls, info): region_conns = info._ec2['region_conns'] region_amis = info._ec2['region_amis'] logger = logging.getLogger(__name__) import time for region, region_ami in region_amis.items(): conn = region_conns[region] current_image = conn.get_image(region_ami) while current_image.state == 'pending': logger.debug('Waiting for %s in %s (currently: %s)', region_ami, region, current_image.state) time.sleep(5) current_image = conn.get_image(region_ami) conn.modify_image_attribute(region_ami, attribute='launchPermission', operation='add', groups='all') bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/000077500000000000000000000000001323112141500242435ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/README.rst000066400000000000000000000022001323112141500257260ustar00rootroot00000000000000Expand Root ----------- This plugin adds support to expand the root partition and filesystem dynamically on boot. It adds a shell script to call growpart and the proper filesystem expansion tool for a given device, partition, and filesystem. The growpart script is part of the cloud-guest-utils package in stretch and jessie-backports. The version of this script in jessie is broken in several ways and so this plugin installs the version from jessie-backports which works correctly. This plugin should not be used in conjunction with common.tasks.initd.AddExpandRoot and common.tasks.initd.AdjustExpandRootScript. It is meant to replace the existing internal common version of expand-root. Settings ~~~~~~~~ - ``filesystem_type``: The type of filesystem to grow, one of ext2, ext3, ext4, or xfs. - ``root_device``: The root device we are growing, /dev/sda as an example. - ``root_partition``: The root partition ID we are growing, 1 (which becomes /dev/sda1).
This is specified so you could grow a different partition on the root_device if you have a multi partition setup and because growpart takes the partition number as a separate argument. bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/__init__.py000066400000000000000000000004521323112141500263550ustar00rootroot00000000000000import tasks from bootstrapvz.common.tools import rel_path def validate_manifest(data, validator, error): validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.add(tasks.InstallGrowpart) taskset.add(tasks.InstallExpandRootScripts) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/assets/000077500000000000000000000000001323112141500255455ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/assets/expand-root000077500000000000000000000021261323112141500277340ustar00rootroot00000000000000#!/bin/bash # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # Expands a partition and filesystem using growpart and an appropriate # filesystem tool for live filesystem expansion. 
Takes three arguments: # DEVICE, such as "/dev/sda" # PARTITION, such as "1" # FILESYSTEM, such as "ext4" DEVICE="${1}" PARTITION="${2}" FILESYSTEM="${3}" if [[ -z "${DEVICE}" || -z "${PARTITION}" || -z "${FILESYSTEM}" ]]; then echo "Requires: $0 DEVICE PARTITION FILESYSTEM" exit 1 fi # Grow partition using growpart if [[ -x /usr/bin/growpart ]]; then echo "Growing partition ${DEVICE}${PARTITION}" /usr/bin/growpart "${DEVICE}" "${PARTITION}" else echo "/usr/bin/growpart was not found" exit 1 fi echo "Resizing ${FILESYSTEM} filesystem on ${DEVICE}${PARTITION}" case "${FILESYSTEM}" in xfs) xfs_growfs / ;; ext2) resize2fs "${DEVICE}${PARTITION}" ;; ext3) resize2fs "${DEVICE}${PARTITION}" ;; ext4) resize2fs "${DEVICE}${PARTITION}" ;; *) echo "Unsupported filesystem, unable to expand size." ;; esac bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/assets/expand-root.service000066400000000000000000000006011323112141500313640ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
[Unit] Description=Expand the root partition and filesystem on boot After=local-fs.target Wants=local-fs.target [Service] ExecStart=/usr/local/sbin/expand-root DEVICE PARTITION FILESYSTEM Type=oneshot [Install] WantedBy=multi-user.target bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/manifest-schema.yml000066400000000000000000000010401323112141500300250ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Expand root plugin manifest type: object properties: plugins: type: object properties: expand_root: type: object properties: filesystem_type: enum: - ext2 - ext3 - ext4 - xfs root_device: {type: string} root_partition: {type: integer} required: - filesystem_type - root_device - root_partition additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/expand_root/tasks.py000066400000000000000000000044751323112141500257540ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import initd from bootstrapvz.common.tools import log_check_call from bootstrapvz.common.tools import rel_path from bootstrapvz.common.tools import sed_i import os import shutil ASSETS_DIR = rel_path(__file__, 'assets') class InstallGrowpart(Task): description = 'Adding necessary packages for growpart.' phase = phases.preparation predecessors = [apt.AddBackports] @classmethod def run(cls, info): # Use the cloud-guest-utils package from jessie-backports which has # several significant bug fixes from the mainline growpart script. target = None from bootstrapvz.common.releases import jessie if info.manifest.release == jessie: target = '{system.release}-backports' info.packages.add('cloud-guest-utils', target) class InstallExpandRootScripts(Task): description = 'Installing scripts for expand-root.' 
phase = phases.system_modification successors = [initd.InstallInitScripts] @classmethod def run(cls, info): expand_root_script = os.path.join(ASSETS_DIR, 'expand-root') expand_root_service = os.path.join(ASSETS_DIR, 'expand-root.service') expand_root_script_dest = os.path.join(info.root, 'usr/local/sbin/expand-root') expand_root_service_dest = os.path.join(info.root, 'etc/systemd/system/expand-root.service') filesystem_type = info.manifest.plugins['expand_root'].get('filesystem_type') root_device = info.manifest.plugins['expand_root'].get('root_device') root_partition = info.manifest.plugins['expand_root'].get('root_partition') # Copy files over shutil.copy(expand_root_script, expand_root_script_dest) os.chmod(expand_root_script_dest, 0750) shutil.copy(expand_root_service, expand_root_service_dest) # Expand out options into expand-root script. opts = '%s %s %s' % (root_device, root_partition, filesystem_type) sed_i(expand_root_service_dest, r'^ExecStart=/usr/local/sbin/expand-root.*$', 'ExecStart=/usr/local/sbin/expand-root %s' % opts) # Enable systemd service log_check_call(['chroot', info.root, 'systemctl', 'enable', 'expand-root.service']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/file_copy/000077500000000000000000000000001323112141500236725ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/file_copy/README.rst000066400000000000000000000022521323112141500253620ustar00rootroot00000000000000File copy --------- This plugin lets you copy files from the host to the VM under construction, create directories, and set permissions and ownership. Note that this necessarily violates the `first development guideline`_. .. _first development guideline: https://github.com/andsens/bootstrap-vz/blob/master/CONTRIBUTING.rst#the-manifest-should-always-fully-describe-the-resulting-image Settings ~~~~~~~~ The ``file_copy`` plugin takes a (non-empty) ``files`` list, and optionally a ``mkdirs`` list. 
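A manifest section using both lists might look like this (a purely illustrative sketch; the paths, permissions, and ownership values are made up, not part of the plugin)::

    plugins:
      file_copy:
        mkdirs:
          - dir: /var/www
            permissions: '0755'
        files:
          - src: assets/index.html
            dst: /var/www/index.html
            permissions: '0644'
            owner: www-data
            group: www-data

Here ``src`` is resolved relative to the manifest file, while ``dst`` is an absolute path inside the image.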
Files (items in the ``files`` list) must be objects with the following properties: - ``src`` and ``dst`` (required) are the source and destination paths. ``src`` is relative to the manifest, whereas ``dst`` is a path in the VM. - ``permissions`` (optional) is a permission string in a format appropriate for ``chmod(1)``. - ``owner`` and ``group`` (optional) are respectively a user and group specification, in a format appropriate for ``chown(1)`` and ``chgrp(1)``. Folders (items in the ``mkdirs`` list) must be objects with the following properties: - ``dir`` (required) is the path of the directory. - ``permissions``, ``owner`` and ``group`` are the same as for files. bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/file_copy/__init__.py000066400000000000000000000006731323112141500260110ustar00rootroot00000000000000import tasks def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): if ('mkdirs' in manifest.plugins['file_copy']): taskset.add(tasks.MkdirCommand) if ('files' in manifest.plugins['file_copy']): taskset.add(tasks.ValidateFiles) taskset.add(tasks.FileCopyCommand) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/file_copy/manifest-schema.yml000066400000000000000000000022631323112141500274640ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# properties: plugins: properties: file_copy: properties: mkdirs: items: type: object properties: dir: type: string permissions: type: string owner: type: string group: type: string required: [dir] additionalProperties: false files: type: array minItems: 1 items: type: object properties: src: type: string dst: type: string permissions: type: string owner: type: string group: type: string required: [src, dst] additionalProperties: false required: - files type: object additionalProperties: false required: - file_copy type: object required: - plugins title: 
File copy plugin manifest type: object bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/file_copy/tasks.py000066400000000000000000000051141323112141500253720ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import os import shutil def modify_path(info, path, entry): from bootstrapvz.common.tools import log_check_call if 'permissions' in entry: # We wrap the permissions string in str() in case # the user specified a numeric bitmask chmod_command = ['chroot', info.root, 'chmod', str(entry['permissions']), path] log_check_call(chmod_command) if 'owner' in entry: chown_command = ['chroot', info.root, 'chown', entry['owner'], path] log_check_call(chown_command) if 'group' in entry: chgrp_command = ['chroot', info.root, 'chgrp', entry['group'], path] log_check_call(chgrp_command) class MkdirCommand(Task): description = 'Creating directories requested by user' phase = phases.user_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import log_check_call for dir_entry in info.manifest.plugins['file_copy']['mkdirs']: mkdir_command = ['chroot', info.root, 'mkdir', '-p', dir_entry['dir']] log_check_call(mkdir_command) modify_path(info, dir_entry['dir'], dir_entry) class ValidateFiles(Task): description = 'Check that the required files exist' phase = phases.validation @classmethod def run(cls, info): from bootstrapvz.common.tools import rel_path for i, file_entry in enumerate(info.manifest.plugins['file_copy']['files']): if not os.path.exists(rel_path(info.manifest.path, file_entry['src'])): msg = 'The source file %s does not exist.' 
% file_entry['src'] info.manifest.validation_error(msg, ['plugins', 'file_copy', 'files', i]) class FileCopyCommand(Task): description = 'Copying user specified files into the image' phase = phases.user_modification predecessors = [MkdirCommand] @classmethod def run(cls, info): from bootstrapvz.common.tools import rel_path for file_entry in info.manifest.plugins['file_copy']['files']: # note that we don't use os.path.join because it can't # handle absolute paths, which 'dst' most likely is. final_destination = os.path.normpath("%s/%s" % (info.root, file_entry['dst'])) src_path = rel_path(info.manifest.path, file_entry['src']) if os.path.isfile(src_path): shutil.copy(src_path, final_destination) else: shutil.copytree(src_path, final_destination) modify_path(info, file_entry['dst'], file_entry) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/google_cloud_repo/000077500000000000000000000000001323112141500254105ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/google_cloud_repo/README.rst000066400000000000000000000012031323112141500270730ustar00rootroot00000000000000Google Cloud Repo ----------------- This plugin adds support to use Google Cloud apt repositories for Debian. It adds the public repo key and optionally will add an apt source list file and install a package containing the key in order to maintain the key over time. Settings ~~~~~~~~ - ``cleanup_bootstrap_key``: Deletes the bootstrap key by removing /etc/apt/trusted.gpg in favor of the package maintained version. This is only to avoid having multiple keys around in the apt-key list. This should only be used with enable_keyring_repo. - ``enable_keyring_repo``: Add a repository and package to maintain the repo public key over time. 
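The two boolean settings above independently toggle extra tasks on top of the always-present key-installation task. A standalone sketch of that selection logic (plain strings stand in for the real Task classes registered by the plugin's ``__init__.py``):

```python
# Sketch of google_cloud_repo task selection; task names are strings here
# instead of the real bootstrapvz Task classes.
def select_tasks(settings):
    tasks = ['AddGoogleCloudRepoKey']  # the repo key is always installed
    if settings.get('enable_keyring_repo', False):
        tasks += ['AddGoogleCloudRepoKeyringRepo',
                  'InstallGoogleCloudRepoKeyringPackage']
    if settings.get('cleanup_bootstrap_key', False):
        tasks.append('CleanupBootstrapRepoKey')
    return tasks
```

Note that ``cleanup_bootstrap_key`` only makes sense together with ``enable_keyring_repo``, since it removes the bootstrap key in favor of the package-maintained one.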
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/google_cloud_repo/__init__.py000066400000000000000000000011261323112141500275210ustar00rootroot00000000000000import tasks from bootstrapvz.common.tools import rel_path def validate_manifest(data, validator, error): validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.add(tasks.AddGoogleCloudRepoKey) if manifest.plugins['google_cloud_repo'].get('enable_keyring_repo', False): taskset.add(tasks.AddGoogleCloudRepoKeyringRepo) taskset.add(tasks.InstallGoogleCloudRepoKeyringPackage) if manifest.plugins['google_cloud_repo'].get('cleanup_bootstrap_key', False): taskset.add(tasks.CleanupBootstrapRepoKey) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/google_cloud_repo/manifest-schema.yml000066400000000000000000000005571323112141500312060ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Google Cloud repository plugin manifest type: object properties: plugins: type: object properties: google_cloud_repo: type: object properties: cleanup_bootstrap_key: {type: boolean} enable_keyring_repo: {type: boolean} additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/google_cloud_repo/tasks.py000066400000000000000000000031701323112141500271100ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import packages from bootstrapvz.common.tools import log_check_call import os class AddGoogleCloudRepoKey(Task): description = 'Adding Google Cloud Repo key.' 
phase = phases.package_installation predecessors = [apt.InstallTrustedKeys] successors = [apt.WriteSources] @classmethod def run(cls, info): key_file = os.path.join(info.root, 'google.gpg.key') log_check_call(['wget', 'https://packages.cloud.google.com/apt/doc/apt-key.gpg', '-O', key_file]) log_check_call(['chroot', info.root, 'apt-key', 'add', 'google.gpg.key']) os.remove(key_file) class AddGoogleCloudRepoKeyringRepo(Task): description = 'Adding Google Cloud keyring repository.' phase = phases.preparation predecessors = [apt.AddManifestSources] @classmethod def run(cls, info): info.source_lists.add('google-cloud', 'deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-{system.release} main') class InstallGoogleCloudRepoKeyringPackage(Task): description = 'Installing Google Cloud key package.' phase = phases.preparation successors = [packages.AddManifestPackages] @classmethod def run(cls, info): info.packages.add('google-cloud-packages-archive-keyring') class CleanupBootstrapRepoKey(Task): description = 'Cleaning up bootstrap repo key.' phase = phases.system_cleaning @classmethod def run(cls, info): os.remove(os.path.join(info.root, 'etc', 'apt', 'trusted.gpg')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/000077500000000000000000000000001323112141500245745ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/README.rst000066400000000000000000000067641323112141500263000ustar00rootroot00000000000000minimize size ------------- This plugin can be used to reduce the size of the resulting image. Often virtual volumes are much smaller than their reported size until any data is written to them. During the bootstrapping process temporary data like the aptitude cache is written to the volume only to be removed again. 
The minimize size plugin employs various strategies to keep a low volume footprint: - Mount folders from the host into key locations of the image volume to avoid any unnecessary disk writes. - Use `zerofree `__ to deallocate unused sectors on the volume. On an unpartitioned volume this will be done for the entire volume, while it will only happen on the root partition for partitioned volumes. - Shrink the real volume size. Supported tools are: - `vmware-vdiskmanager `__ (only applicable when using vmdk backing). The tool is part of the `VMWare Workstation `__ package. - `qemu-img` (only applicable when using vmdk, vdi, raw or qcow2 backing). This tool is part of the `QEMU emulator `__. - Tell apt to only download specific language files. See the `apt.conf manpage `__ for more details ("Languages" in the "Acquire group" section). - Configure debootstrap and dpkg to filter out specific paths when installing packages Settings ~~~~~~~~ - ``zerofree``: Specifies if it should mark unallocated blocks as zeroes, so the volume could be better shrunk after this. Valid values: true, false Default: false ``optional`` - ``shrink``: Whether the volume should be shrunk. This setting works best in conjunction with the zerofree tool. Valid values: - false: Do not shrink. - ``vmware-vdiskmanager`` or true: Shrink using the `vmware-vdiskmanager` utility. - ``qemu-img``: Shrink using the `qemu-img` utility. Default: false ``optional`` - ``apt``: Apt specific configurations. ``optional`` - ``autoclean``: Configure apt to clean out the archive and cache after every run. Valid values: true, false Default: false ``optional`` - ``languages``: List of languages apt should download. Use ``[none]`` to not download any languages at all. ``optional`` - ``gzip_indexes``: Gzip apt package indexes. Valid values: true, false Default: false ``optional`` - ``autoremove_suggests``: Suggested packages are removed when running. 
``apt-get purge --auto-remove`` Valid values: true, false Default: false ``optional`` - ``dpkg``: dpkg (and debootstrap) specific configurations. These settings not only affect the behavior of dpkg when installing packages after the image has been created, but also during the bootstrapping process. This includes the behavior of debootstrap. ``optional`` - ``locales``: List of locales that should be kept. When this option is used, all locales (and the manpages in those locales) are excluded from installation excepting the ones in this list. Specify an empty list to not install any locales at all. ``optional`` - ``exclude_docs``: Exclude additional package documentation located in ``/usr/share/doc`` Valid values: true, false Default: false ``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/__init__.py000066400000000000000000000070301323112141500267050ustar00rootroot00000000000000import tasks.mounts import tasks.shrink import tasks.apt import tasks.dpkg from bootstrapvz.common.tasks import locale def get_shrink_type(plugins): """Gets the type of shrinking process requested by the user, taking into account backward compatibility values :param dict plugins: the part of the manifest related to plugins :return: None (if none selected), "vmware-vdiskmanager" or "qemu-img" (tool to be used)""" shrink_type = plugins['minimize_size'].get('shrink') if shrink_type is True: shrink_type = 'vmware-vdiskmanager' elif shrink_type is False: shrink_type = None return shrink_type def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) shrink_type = get_shrink_type(data['plugins']) if shrink_type == 'vmware-vdiskmanager' and data['volume']['backing'] != 'vmdk': error('Can only shrink vmdk images with vmware-vdiskmanager', ['plugins', 'minimize_size', 'shrink']) if shrink_type == 'qemu-img' and data['volume']['backing'] not in ('vmdk', 'vdi', 'raw', 'qcow2'): 
error('Can only shrink vmdk, vdi, raw and qcow2 images with qemu-img', ['plugins', 'minimize_size', 'shrink']) def resolve_tasks(taskset, manifest): taskset.update([tasks.mounts.AddFolderMounts, tasks.mounts.RemoveFolderMounts, ]) if manifest.plugins['minimize_size'].get('zerofree', False): taskset.add(tasks.shrink.AddRequiredZeroFreeCommand) taskset.add(tasks.shrink.Zerofree) if get_shrink_type(manifest.plugins) == 'vmware-vdiskmanager': taskset.add(tasks.shrink.AddRequiredVDiskManagerCommand) taskset.add(tasks.shrink.ShrinkVolumeWithVDiskManager) if get_shrink_type(manifest.plugins) == 'qemu-img': taskset.add(tasks.shrink.AddRequiredQemuImgCommand) taskset.add(tasks.shrink.ShrinkVolumeWithQemuImg) if 'apt' in manifest.plugins['minimize_size']: apt = manifest.plugins['minimize_size']['apt'] if apt.get('autoclean', False): taskset.add(tasks.apt.AutomateAptClean) if 'languages' in apt: taskset.add(tasks.apt.FilterTranslationFiles) if apt.get('gzip_indexes', False): taskset.add(tasks.apt.AptGzipIndexes) if apt.get('autoremove_suggests', False): taskset.add(tasks.apt.AptAutoremoveSuggests) if 'dpkg' in manifest.plugins['minimize_size']: filter_tasks = [tasks.dpkg.CreateDpkgCfg, tasks.dpkg.InitializeBootstrapFilterList, tasks.dpkg.CreateBootstrapFilterScripts, tasks.dpkg.DeleteBootstrapFilterScripts, ] dpkg = manifest.plugins['minimize_size']['dpkg'] if 'locales' in dpkg: taskset.update(filter_tasks) taskset.add(tasks.dpkg.FilterLocales) # If no locales are selected, we don't need the locale package if len(dpkg['locales']) == 0: taskset.discard(locale.LocaleBootstrapPackage) taskset.discard(locale.GenerateLocale) if dpkg.get('exclude_docs', False): taskset.update(filter_tasks) taskset.add(tasks.dpkg.ExcludeDocs) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): counter_task(taskset, tasks.mounts.AddFolderMounts, tasks.mounts.RemoveFolderMounts) counter_task(taskset, tasks.dpkg.CreateBootstrapFilterScripts, tasks.dpkg.DeleteBootstrapFilterScripts) 
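The backward-compatibility mapping that ``get_shrink_type`` performs (the manifest historically accepted booleans for ``shrink``) can be exercised in isolation:

```python
# Same mapping as the plugin's get_shrink_type: True historically meant
# "shrink with vmware-vdiskmanager", False means "do not shrink at all".
def get_shrink_type(plugins):
    shrink_type = plugins['minimize_size'].get('shrink')
    if shrink_type is True:
        shrink_type = 'vmware-vdiskmanager'
    elif shrink_type is False:
        shrink_type = None
    return shrink_type
```

This lets the validation code reason about only three outcomes (``None``, ``'vmware-vdiskmanager'``, ``'qemu-img'``) regardless of how the user spelled the setting.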
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/000077500000000000000000000000001323112141500260765ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/apt-autoremove-suggests000066400000000000000000000016001323112141500326300ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # Since Docker users are looking for the smallest possible final images, the # following emerges as a very common pattern: # RUN apt-get update \ # && apt-get install -y \ # && \ # && apt-get purge -y --auto-remove # By default, APT will actually _keep_ packages installed via Recommends or # Depends if another package Suggests them, even and including if the package # that originally caused them to be installed is removed. Setting this to # "false" ensures that APT is appropriately aggressive about removing the # packages it added. # https://aptitude.alioth.debian.org/doc/en/ch02s05s05.html#configApt-AutoRemove-SuggestsImportant Apt::AutoRemove::SuggestsImportant "false"; bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/apt-clean000066400000000000000000000024271323112141500276720ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # Since for most Docker users, package installs happen in "docker build" steps, # they essentially become individual layers due to the way Docker handles # layering, especially using CoW filesystems. What this means for us is that # the caches that APT keeps end up just wasting space in those layers, making # our layers unnecessarily large (especially since we'll normally never use # these caches again and will instead just "docker build" again and make a brand # new image). 
# Ideally, these would just be invoking "apt-get clean", but in our testing, # that ended up being cyclic and we got stuck on APT's lock, so we get this fun # creation that's essentially just "apt-get clean". DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; }; APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; }; Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache ""; # Note that we do realize this isn't the ideal way to do this, and are always # open to better suggestions (https://github.com/docker/docker/issues). bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/apt-gzip-indexes000066400000000000000000000012111323112141500312040ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # Since Docker users using "RUN apt-get update && apt-get install -y ..." in # their Dockerfiles don't go delete the lists files afterwards, we want them to # be as small as possible on-disk, so we explicitly request "gz" versions and # tell Apt to keep them gzipped on-disk. # For comparison, an "apt-get update" layer without this on a pristine # "debian:wheezy" base image was "29.88 MB", where with this it was only # "8.273 MB". Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz"; bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/apt-languages000066400000000000000000000006631323112141500305560ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # In Docker, we don't often need the "Translations" files, so we're just wasting # time and space by downloading them, and this inhibits that. 
For users that do # need them, it's a simple matter to delete this file and "apt-get update". :) Acquire::Languages { ACQUIRE_LANGUAGES_FILTER }; bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/bootstrap-files-filter.sh000066400000000000000000000005621323112141500330350ustar00rootroot00000000000000#!/bin/sh # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # First we filter out all paths not relating to the stuff we want to filter # After that we take out the paths that we *do* want to keep grep 'EXCLUDE_PATTERN' | grep --invert-match --fixed-strings 'INCLUDE_PATHS' bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/assets/bootstrap-script.sh000066400000000000000000000021741323112141500317550ustar00rootroot00000000000000#!/bin/sh # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # This script does not override anything defined in /usr/share/debootstrap/scripts # Instead we use it to redefine extract_dpkg_deb_data(), so that we may exclude # certain files during bootstrapping. extract_dpkg_deb_data () { local pkg="$1" local excludes_file="DEBOOTSTRAP_EXCLUDES_PATH" # List all files in $pkg and run them through the filter (avoid exit status >0 if no matches are found) dpkg-deb --fsys-tarfile "$pkg" | tar -t | BOOTSTRAP_FILES_FILTER_PATH > "$excludes_file" || true dpkg-deb --fsys-tarfile "$pkg" | tar --exclude-from "$excludes_file" -xf - rm "$excludes_file" } # Direct copypasta from the debootstrap script where it determines # which script to run. We do exactly the same but leave out the # if [ "$4" != "" ] part so that we can source the script that # should've been sourced in this scripts place. SCRIPT="$DEBOOTSTRAP_DIR/scripts/$SUITE" if [ -n "$VARIANT" ] && [ -e "${SCRIPT}.${VARIANT}" ]; then SCRIPT="${SCRIPT}.${VARIANT}" fi . 
$SCRIPT bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/manifest-schema.yml000066400000000000000000000020051323112141500303600ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# properties: plugins: properties: minimize_size: properties: shrink: anyOf: - type: boolean - enum: [vmware-vdiskmanager, qemu-img] zerofree: type: boolean apt: type: object properties: autoclean: type: boolean languages: type: array minItems: 1 items: type: string gzip_indexes: type: boolean autoremove_suggests: type: boolean dpkg: type: object properties: locales: type: array items: type: string exclude_docs: type: boolean type: object additionalProperties: false type: object title: Minimize size plugin manifest type: object bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/000077500000000000000000000000001323112141500257215ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/__init__.py000066400000000000000000000001301323112141500300240ustar00rootroot00000000000000from bootstrapvz.common.tools import rel_path assets = rel_path(__file__, '../assets') bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/apt.py000066400000000000000000000046061323112141500270650ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt from bootstrapvz.common.tools import sed_i import os import shutil from . 
import assets class AutomateAptClean(Task): description = 'Configuring apt to always clean everything out when it\'s done' phase = phases.package_installation successors = [apt.AptUpdate] # Snatched from: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap @classmethod def run(cls, info): shutil.copy(os.path.join(assets, 'apt-clean'), os.path.join(info.root, 'etc/apt/apt.conf.d/90clean')) class FilterTranslationFiles(Task): description = 'Configuring apt to only download and use specific translation files' phase = phases.package_installation successors = [apt.AptUpdate] # Snatched from: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap @classmethod def run(cls, info): langs = info.manifest.plugins['minimize_size']['apt']['languages'] config = '; '.join(map(lambda l: '"' + l + '"', langs)) config_path = os.path.join(info.root, 'etc/apt/apt.conf.d/20languages') shutil.copy(os.path.join(assets, 'apt-languages'), config_path) sed_i(config_path, r'ACQUIRE_LANGUAGES_FILTER', config) class AptGzipIndexes(Task): description = 'Configuring apt to always gzip lists files' phase = phases.package_installation successors = [apt.AptUpdate] # Snatched from: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap @classmethod def run(cls, info): shutil.copy(os.path.join(assets, 'apt-gzip-indexes'), os.path.join(info.root, 'etc/apt/apt.conf.d/20gzip-indexes')) class AptAutoremoveSuggests(Task): description = 'Configuring apt to remove suggested packages when autoremoving' phase = phases.package_installation successors = [apt.AptUpdate] # Snatched from: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap @classmethod def run(cls, info): shutil.copy(os.path.join(assets, 'apt-autoremove-suggests'), os.path.join(info.root, 
'etc/apt/apt.conf.d/20autoremove-suggests')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/dpkg.py000066400000000000000000000142361323112141500272260ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import bootstrap from bootstrapvz.common.tasks import workspace from bootstrapvz.common.tools import sed_i import os import shutil from . import assets class CreateDpkgCfg(Task): description = 'Creating /etc/dpkg/dpkg.cfg.d before bootstrapping' phase = phases.os_installation successors = [bootstrap.Bootstrap] @classmethod def run(cls, info): os.makedirs(os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d')) class InitializeBootstrapFilterList(Task): description = 'Initializing the bootstrapping filter list' phase = phases.preparation @classmethod def run(cls, info): info._minimize_size['bootstrap_filter'] = {'exclude': [], 'include': []} class CreateBootstrapFilterScripts(Task): description = 'Creating the bootstrapping locales filter script' phase = phases.os_installation successors = [bootstrap.Bootstrap] # Inspired by: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap @classmethod def run(cls, info): if info.bootstrap_script is not None: from bootstrapvz.common.exceptions import TaskError raise TaskError('info.bootstrap_script seems to already be set ' 'and is conflicting with this task') bootstrap_script = os.path.join(info.workspace, 'bootstrap_script.sh') filter_script = os.path.join(info.workspace, 'bootstrap_files_filter.sh') excludes_file = os.path.join(info.workspace, 'debootstrap-excludes') shutil.copy(os.path.join(assets, 'bootstrap-script.sh'), bootstrap_script) shutil.copy(os.path.join(assets, 'bootstrap-files-filter.sh'), filter_script) sed_i(bootstrap_script, r'DEBOOTSTRAP_EXCLUDES_PATH', excludes_file) sed_i(bootstrap_script, r'BOOTSTRAP_FILES_FILTER_PATH', filter_script) # We exclude with 
patterns but include with fixed strings # The pattern matching when excluding is needed in order to filter # everything below e.g. /usr/share/locale but not the folder itself filter_lists = info._minimize_size['bootstrap_filter'] exclude_list = '\|'.join(map(lambda p: '.' + p + '.\+', filter_lists['exclude'])) include_list = '\n'.join(map(lambda p: '.' + p, filter_lists['include'])) sed_i(filter_script, r'EXCLUDE_PATTERN', exclude_list) sed_i(filter_script, r'INCLUDE_PATHS', include_list) os.chmod(filter_script, 0755) info.bootstrap_script = bootstrap_script info._minimize_size['filter_script'] = filter_script class FilterLocales(Task): description = 'Configuring dpkg and debootstrap to only include specific locales/manpages when installing packages' phase = phases.os_installation predecessors = [CreateDpkgCfg] successors = [CreateBootstrapFilterScripts] # Snatched from: # https://github.com/docker/docker/blob/1d775a54cc67e27f755c7338c3ee938498e845d7/contrib/mkimage/debootstrap # and # https://raphaelhertzog.com/2010/11/15/save-disk-space-by-excluding-useless-files-with-dpkg/ @classmethod def run(cls, info): # Filter when debootstrapping info._minimize_size['bootstrap_filter']['exclude'].extend([ '/usr/share/locale/', '/usr/share/man/', ]) locales = info.manifest.plugins['minimize_size']['dpkg']['locales'] info._minimize_size['bootstrap_filter']['include'].extend([ '/usr/share/locale/locale.alias', '/usr/share/man/man1', '/usr/share/man/man2', '/usr/share/man/man3', '/usr/share/man/man4', '/usr/share/man/man5', '/usr/share/man/man6', '/usr/share/man/man7', '/usr/share/man/man8', '/usr/share/man/man9', ] + map(lambda l: '/usr/share/locale/' + l + '/', locales) + map(lambda l: '/usr/share/man/' + l + '/', locales) ) # Filter when installing things with dpkg locale_lines = ['path-exclude=/usr/share/locale/*', 'path-include=/usr/share/locale/locale.alias'] manpages_lines = ['path-exclude=/usr/share/man/*', 'path-include=/usr/share/man/man[1-9]'] locales = 
info.manifest.plugins['minimize_size']['dpkg']['locales'] locale_lines.extend(map(lambda l: 'path-include=/usr/share/locale/' + l + '/*', locales)) manpages_lines.extend(map(lambda l: 'path-include=/usr/share/man/' + l + '/*', locales)) locales_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-locales') manpages_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10filter-manpages') with open(locales_path, 'w') as locale_filter: locale_filter.write('\n'.join(locale_lines) + '\n') with open(manpages_path, 'w') as manpages_filter: manpages_filter.write('\n'.join(manpages_lines) + '\n') class ExcludeDocs(Task): description = 'Configuring dpkg and debootstrap to not install additional documentation for packages' phase = phases.os_installation predecessors = [CreateDpkgCfg] successors = [CreateBootstrapFilterScripts] @classmethod def run(cls, info): # "Packages must not require the existence of any files in /usr/share/doc/ in order to function [...]." # Source: https://www.debian.org/doc/debian-policy/ch-docs.html # So doing this should cause no problems. 
info._minimize_size['bootstrap_filter']['exclude'].append('/usr/share/doc/') exclude_docs_path = os.path.join(info.root, 'etc/dpkg/dpkg.cfg.d/10exclude-docs') with open(exclude_docs_path, 'w') as exclude_docs: exclude_docs.write('path-exclude=/usr/share/doc/*\n') class DeleteBootstrapFilterScripts(Task): description = 'Deleting the bootstrapping locales filter script' phase = phases.cleaning successors = [workspace.DeleteWorkspace] @classmethod def run(cls, info): os.remove(info._minimize_size['filter_script']) del info._minimize_size['filter_script'] os.remove(info.bootstrap_script) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/mounts.py000066400000000000000000000030521323112141500276200ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import bootstrap import os folders = ['tmp', 'var/lib/apt/lists'] class AddFolderMounts(Task): description = 'Mounting folders for writing temporary and cache data' phase = phases.os_installation predecessors = [bootstrap.Bootstrap] @classmethod def run(cls, info): info._minimize_size['foldermounts'] = os.path.join(info.workspace, 'minimize_size') os.mkdir(info._minimize_size['foldermounts']) for folder in folders: temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_')) os.mkdir(temp_path) full_path = os.path.join(info.root, folder) os.chmod(temp_path, os.stat(full_path).st_mode) info.volume.partition_map.root.add_mount(temp_path, full_path, ['--bind']) class RemoveFolderMounts(Task): description = 'Removing folder mounts for temporary and cache data' phase = phases.system_cleaning successors = [apt.AptClean] @classmethod def run(cls, info): import shutil for folder in folders: temp_path = os.path.join(info._minimize_size['foldermounts'], folder.replace('/', '_')) full_path = os.path.join(info.root, folder) 
info.volume.partition_map.root.remove_mount(full_path) shutil.rmtree(temp_path) os.rmdir(info._minimize_size['foldermounts']) del info._minimize_size['foldermounts'] bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/minimize_size/tasks/shrink.py000066400000000000000000000047761323112141500276070ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import filesystem from bootstrapvz.common.tasks import host from bootstrapvz.common.tasks import partitioning from bootstrapvz.common.tasks import volume from bootstrapvz.common.tools import log_check_call import os class AddRequiredZeroFreeCommand(Task): description = 'Adding command required for zero-ing volume' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): info.host_dependencies['zerofree'] = 'zerofree' class AddRequiredVDiskManagerCommand(Task): description = 'Adding vmware-vdiskmanager command required for reducing volume size' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): link = 'https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/10_0' info.host_dependencies['vmware-vdiskmanager'] = link class AddRequiredQemuImgCommand(Task): description = 'Adding qemu-img command required for reducing volume size' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): info.host_dependencies['qemu-img'] = 'qemu-img' class Zerofree(Task): description = 'Zeroing unused blocks on the root partition' phase = phases.volume_unmounting predecessors = [filesystem.UnmountRoot] successors = [partitioning.UnmapPartitions, volume.Detach] @classmethod def run(cls, info): log_check_call(['zerofree', info.volume.partition_map.root.device_path]) class ShrinkVolumeWithVDiskManager(Task): description = 'Shrinking the volume with vmware-vdiskmanager' phase = phases.volume_unmounting 
predecessors = [volume.Detach] @classmethod def run(cls, info): perm = os.stat(info.volume.image_path).st_mode & 0777 log_check_call(['/usr/bin/vmware-vdiskmanager', '-k', info.volume.image_path]) os.chmod(info.volume.image_path, perm) class ShrinkVolumeWithQemuImg(Task): description = 'Shrinking the volume with qemu-img' phase = phases.volume_unmounting predecessors = [volume.Detach] @classmethod def run(cls, info): tmp_name = os.path.join(info.workspace, 'shrunk.' + info.volume.extension) log_check_call( ['qemu-img', 'convert', '-O', info.volume.extension, info.volume.image_path, tmp_name]) os.rename(tmp_name, info.volume.image_path) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ntp/000077500000000000000000000000001323112141500225225ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ntp/README.rst000066400000000000000000000004141323112141500242100ustar00rootroot00000000000000NTP --- This plugins installs the Network Time Protocol daemon and optionally defines which time servers it should use. Settings ~~~~~~~~ - ``servers``: A list of strings specifying which servers should be used to synchronize the machine clock. 
``optional`` bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ntp/__init__.py000066400000000000000000000005351323112141500246360ustar00rootroot00000000000000def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): import tasks taskset.add(tasks.AddNtpPackage) if manifest.plugins['ntp'].get('servers', False): taskset.add(tasks.SetNtpServers) bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ntp/manifest-schema.yml000066400000000000000000000005221323112141500263100ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: NTP plugin manifest type: object properties: plugins: type: object properties: ntp: type: object properties: servers: type: array items: {type: string} minItems: 1 additionalProperties: false bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/ntp/tasks.py000066400000000000000000000020411323112141500242160ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class AddNtpPackage(Task): description = 'Adding NTP Package' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('ntp') class SetNtpServers(Task): description = 'Setting NTP servers' phase = phases.system_modification @classmethod def run(cls, info): import fileinput import os import re ntp_path = os.path.join(info.root, 'etc/ntp.conf') servers = list(info.manifest.plugins['ntp']['servers']) debian_ntp_server = re.compile('.*[0-9]\.debian\.pool\.ntp\.org.*') for line in fileinput.input(files=ntp_path, inplace=True): # Will write all the specified servers on the first match, then suppress all other default servers if re.match(debian_ntp_server, line): while servers: print 'server {server_address} iburst'.format(server_address=servers.pop(0)) else: print line, 
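The line-rewriting logic of ``SetNtpServers`` can be expressed as a pure function over the lines of ``ntp.conf`` (a sketch; the real task edits the file in place via ``fileinput``): the first line matching a default Debian pool server is replaced by all user-specified servers, later matching lines are dropped, and everything else is kept verbatim.

```python
import re

def rewrite_ntp_conf(lines, servers):
    # Mirrors SetNtpServers: queue of user servers is flushed on the first
    # default-pool match; subsequent default-pool lines are suppressed.
    debian_pool = re.compile(r'.*[0-9]\.debian\.pool\.ntp\.org.*')
    queue = list(servers)
    result = []
    for line in lines:
        if debian_pool.match(line):
            while queue:
                result.append('server %s iburst' % queue.pop(0))
        else:
            result.append(line)
    return result
```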
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/opennebula/000077500000000000000000000000001323112141500240515ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/opennebula/README.rst000066400000000000000000000014741323112141500255460ustar00rootroot00000000000000Open Nebula
-----------

This plugin adds `OpenNebula contextualization `__ to the image, which
sets up the network configuration and SSH keys.
The virtual machine context should be configured as follows:

.. code-block:: text

    ETH0_DNS $NETWORK[DNS, NETWORK_ID=2]
    ETH0_GATEWAY $NETWORK[GATEWAY, NETWORK_ID=2]
    ETH0_IP $NIC[IP, NETWORK_ID=2]
    ETH0_MASK $NETWORK[MASK, NETWORK_ID=2]
    ETH0_NETWORK $NETWORK[NETWORK, NETWORK_ID=2]
    FILES path_to_my_ssh_public_key.pub

The plugin will install all *.pub* files in the root authorized\_keys
file. When using the ec2 provider, the USER\_EC2\_DATA will be executed
if present.

Settings
~~~~~~~~

This plugin has no settings. To enable it add ``"opennebula":{}`` to
the plugin section of the manifest.
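In a YAML manifest, the settings-free enablement described above corresponds to a fragment like this (illustrative):

```yaml
plugins:
  opennebula: {}
```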
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/opennebula/__init__.py000066400000000000000000000004231323112141500261610ustar00rootroot00000000000000

def resolve_tasks(taskset, manifest):
    import tasks
    from bootstrapvz.common.tasks import apt
    from bootstrapvz.common.releases import wheezy
    if manifest.release == wheezy:
        taskset.add(apt.AddBackports)
    taskset.update([tasks.AddONEContextPackage])
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/opennebula/tasks.py000066400000000000000000000010411323112141500255440ustar00rootroot00000000000000from bootstrapvz.base import Task
from bootstrapvz.common.tasks import apt
from bootstrapvz.common import phases


class AddONEContextPackage(Task):
    description = 'Adding the OpenNebula context package'
    phase = phases.preparation
    predecessors = [apt.AddBackports]

    @classmethod
    def run(cls, info):
        target = None
        from bootstrapvz.common.releases import wheezy
        if info.manifest.release == wheezy:
            target = '{system.release}-backports'
        info.packages.add('opennebula-context', target)
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/pip_install/000077500000000000000000000000001323112141500242375ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/pip_install/README.rst000066400000000000000000000006071323112141500257310ustar00rootroot00000000000000Pip install
-----------

Install packages from the Python Package Index via pip.
Installs ``build-essential`` and ``python-dev`` debian packages, so
Python extension modules can be built.

Settings
~~~~~~~~

-  ``packages``: Python packages to install, a list of strings. The
   list can contain anything that ``pip install`` would accept as an
   argument, for example ``awscli==1.3.13``.
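An illustrative manifest fragment for the ``packages`` setting above (``awscli==1.3.13`` comes from the README; the second entry is a placeholder package name):

```yaml
plugins:
  pip_install:
    packages:
      - awscli==1.3.13
      - some-other-package  # placeholder
```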
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/pip_install/__init__.py000066400000000000000000000004451323112141500263530ustar00rootroot00000000000000import tasks


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.AddPipPackage)
    taskset.add(tasks.PipInstallCommand)
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/pip_install/manifest-schema.yml000066400000000000000000000006151323112141500300300ustar00rootroot00000000000000---
$schema: http://json-schema.org/draft-04/schema#
title: Pip install plugin manifest
type: object
properties:
  plugins:
    type: object
    properties:
      pip_install:
        type: object
        properties:
          packages:
            type: array
            items:
              type: string
            minItems: 1
            uniqueItems: true
        additionalProperties: false
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/pip_install/tasks.py000066400000000000000000000014671323112141500257460ustar00rootroot00000000000000from bootstrapvz.base import Task
from bootstrapvz.common import phases


class AddPipPackage(Task):
    description = 'Adding `pip\' and Co. to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        for package_name in ('python-pip', 'build-essential', 'python-dev'):
            info.packages.add(package_name)


class PipInstallCommand(Task):
    description = 'Install python packages from pypi with pip'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        packages = info.manifest.plugins['pip_install']['packages']
        pip_install_command = ['chroot', info.root, 'pip', 'install']
        pip_install_command.extend(packages)
        log_check_call(pip_install_command)
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/prebootstrapped/000077500000000000000000000000001323112141500251365ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/prebootstrapped/README.rst000066400000000000000000000023111323112141500266220ustar00rootroot00000000000000prebootstrapped
---------------

When developing for bootstrap-vz, testing can be quite tedious since
the bootstrapping process can take a while. The prebootstrapped plugin
solves that problem by creating a snapshot of your volume right after
all the software has been installed. The next time bootstrap-vz is
run, the plugin replaces all volume preparation and bootstrapping
tasks and recreates the volume from the snapshot instead.

The plugin assumes that the user knows what they are doing (e.g. it
doesn't check whether bootstrap-vz is being run with a partitioned
volume configuration while the snapshot is unpartitioned).

When no snapshot or image is specified, the plugin creates one and
outputs its ID/path. Specifying an ID/path enables the second mode of
operation, which recreates the volume from the specified snapshot
instead of creating it from scratch.

Settings
~~~~~~~~

-  ``snapshot``: ID of the EBS snapshot to use. This setting only
   works with the volume backing ``ebs``.
-  ``image``: Path to the loopback volume snapshot.
   This setting works with the volume backings ``raw``, ``s3``,
   ``vdi``, ``vmdk``
-  ``folder``: Path to the folder copy. This setting works with the
   volume backing ``folder``
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/prebootstrapped/__init__.py000066400000000000000000000053241323112141500272530ustar00rootroot00000000000000import tasks
from bootstrapvz.providers.ec2.tasks import ebs
from bootstrapvz.plugins.minimize_size.tasks import dpkg
from bootstrapvz.providers.virtualbox.tasks import guest_additions
from bootstrapvz.common.tasks import loopback
from bootstrapvz.common.tasks import volume
from bootstrapvz.common.tasks import folder
from bootstrapvz.common.tasks import locale
from bootstrapvz.common.tasks import apt
from bootstrapvz.common.tasks import bootstrap
from bootstrapvz.common.tasks import filesystem
from bootstrapvz.common.tasks import partitioning


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    settings = manifest.plugins['prebootstrapped']
    skip_tasks = [ebs.Create,
                  loopback.Create,
                  folder.Create,
                  filesystem.Format,
                  partitioning.PartitionVolume,
                  filesystem.TuneVolumeFS,
                  filesystem.AddXFSProgs,
                  filesystem.CreateBootMountDir,
                  apt.DisableDaemonAutostart,
                  dpkg.InitializeBootstrapFilterList,
                  dpkg.CreateDpkgCfg,
                  dpkg.CreateBootstrapFilterScripts,
                  dpkg.FilterLocales,
                  dpkg.ExcludeDocs,
                  dpkg.DeleteBootstrapFilterScripts,
                  locale.GenerateLocale,
                  bootstrap.MakeTarball,
                  bootstrap.Bootstrap,
                  guest_additions.InstallGuestAdditions,
                  ]
    if manifest.volume['backing'] == 'ebs':
        if settings.get('snapshot', None) is not None:
            taskset.add(tasks.CreateFromSnapshot)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(tasks.Snapshot)
    elif manifest.volume['backing'] == 'folder':
        if settings.get('folder', None) is not None:
            taskset.add(tasks.CreateFromFolder)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(tasks.CopyFolder)
    else:
        if settings.get('image', None) is not None:
            taskset.add(tasks.CreateFromImage)
            [taskset.discard(task) for task in skip_tasks]
        else:
            taskset.add(tasks.CopyImage)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    if manifest.volume['backing'] == 'ebs':
        counter_task(taskset, tasks.CreateFromSnapshot, volume.Delete)
    elif manifest.volume['backing'] == 'folder':
        counter_task(taskset, tasks.CreateFromFolder, folder.Delete)
    else:
        counter_task(taskset, tasks.CreateFromImage, volume.Delete)
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/prebootstrapped/manifest-schema.yml000066400000000000000000000010651323112141500307270ustar00rootroot00000000000000---
$schema: http://json-schema.org/draft-04/schema#
title: Prebootstrapped plugin manifest
type: object
properties:
  volume:
    type: object
    properties:
      backing:
        type: string
        enum:
          - raw
          - ebs
          - s3
          - vdi
          - vmdk
          - folder
    required: [backing]
  plugins:
    type: object
    properties:
      prebootstrapped:
        type: object
        properties:
          image: {type: string}
          snapshot: {type: string}
          folder: {type: string}
        additionalProperties: false
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/prebootstrapped/tasks.py000066400000000000000000000111071323112141500266350ustar00rootroot00000000000000from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import volume
from bootstrapvz.common.tasks import packages
from bootstrapvz.providers.virtualbox.tasks import guest_additions
from bootstrapvz.providers.ec2.tasks import ebs
from bootstrapvz.common.fs import unmounted
from bootstrapvz.common.tools import log_check_call
from shutil import copyfile
import os.path
import time
import logging
log = logging.getLogger(__name__)


class Snapshot(Task):
    description = 'Creating a snapshot of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        snapshot = None
        with unmounted(info.volume):
            snapshot = info.volume.snapshot()
        msg = 'A snapshot of the bootstrapped volume was created. ID: ' + snapshot.id
        log.info(msg)


class CreateFromSnapshot(Task):
    description = 'Creating EBS volume from a snapshot'
    phase = phases.volume_creation
    successors = [ebs.Attach]

    @classmethod
    def run(cls, info):
        snapshot = info.manifest.plugins['prebootstrapped']['snapshot']
        ebs_volume = info._ec2['connection'].create_volume(info.volume.size.bytes.get_qty_in('GiB'),
                                                           info._ec2['host']['availabilityZone'],
                                                           snapshot=snapshot)
        while ebs_volume.volume_state() != 'available':
            time.sleep(5)
            ebs_volume.update()

        info.volume.volume = ebs_volume
        set_fs_states(info.volume)


class CopyImage(Task):
    description = 'Creating a copy of the bootstrapped volume'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        loopback_backup_name = 'volume-{id}.{ext}.backup'.format(id=info.run_id, ext=info.volume.extension)
        destination = os.path.join(info.manifest.bootstrapper['workspace'], loopback_backup_name)
        with unmounted(info.volume):
            copyfile(info.volume.image_path, destination)
        msg = 'A copy of the bootstrapped volume was created. Path: ' + destination
        log.info(msg)


class CreateFromImage(Task):
    description = 'Creating loopback image from a copy'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        info.volume.image_path = os.path.join(info.workspace, 'volume.' + info.volume.extension)
        loopback_backup_path = info.manifest.plugins['prebootstrapped']['image']
        copyfile(loopback_backup_path, info.volume.image_path)
        set_fs_states(info.volume)


class CopyFolder(Task):
    description = 'Creating a copy of the bootstrap folder'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages, guest_additions.InstallGuestAdditions]

    @classmethod
    def run(cls, info):
        folder_backup_name = '{id}.{ext}.backup'.format(id=info.run_id, ext=info.volume.extension)
        destination = os.path.join(info.manifest.bootstrapper['workspace'], folder_backup_name)
        log_check_call(['cp', '-a', info.volume.path, destination])
        msg = 'A copy of the bootstrapped volume was created. Path: ' + destination
        log.info(msg)


class CreateFromFolder(Task):
    description = 'Creating bootstrap folder from a copy'
    phase = phases.volume_creation
    successors = [volume.Attach]

    @classmethod
    def run(cls, info):
        info.root = os.path.join(info.workspace, 'root')
        log_check_call(['cp', '-a', info.manifest.plugins['prebootstrapped']['folder'], info.root])
        info.volume.path = info.root
        info.volume.fsm.current = 'attached'


def set_fs_states(volume):
    volume.fsm.current = 'detached'

    p_map = volume.partition_map
    from bootstrapvz.base.fs.partitionmaps.none import NoPartitions
    if not isinstance(p_map, NoPartitions):
        p_map.fsm.current = 'unmapped'

    from bootstrapvz.base.fs.partitions.unformatted import UnformattedPartition
    from bootstrapvz.base.fs.partitions.single import SinglePartition
    for partition in p_map.partitions:
        if isinstance(partition, UnformattedPartition):
            partition.fsm.current = 'unmapped'
            continue
        if isinstance(partition, SinglePartition):
            partition.fsm.current = 'formatted'
            continue
        partition.fsm.current = 'unmapped_fmt'
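Putting the two modes together, the plugin's settings can be expressed in a manifest fragment like the following (illustrative; the snapshot ID and paths are placeholders, and only the key matching your volume backing should be set):

```yaml
plugins:
  prebootstrapped:
    snapshot: snap-0123456789abcdef0       # ebs backing
    # image: /path/to/volume.raw.backup    # raw/s3/vdi/vmdk backings
    # folder: /path/to/root.folder.backup  # folder backing
```

On the first run, leave all three keys out: the plugin then creates the snapshot/copy and logs its ID or path for use in subsequent runs.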
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/000077500000000000000000000000001323112141500232365ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/README.rst000066400000000000000000000106751323112141500247360ustar00rootroot00000000000000Puppet
------

Installs the `puppet version 4 ` PC1 agent from the repository `` and
optionally applies a manifest inside the chroot. You can also have it
copy your puppet configuration into the image so it is readily
available once the image is booted.

Rationale and use case in a masterless setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You want to use this plugin when you wish to create an image that you
can manage with Puppet. You have a Puppet 4 setup in mind, and thus
you want the image to contain the puppet agent software from the
puppetlabs repo. You want it to contain almost everything you need to
get up and running. This plugin does just that! While you're at it,
throw in some modules from the forge as well! Want to include your own
modules? Include them as assets!

This is primarily useful when you have a very limited collection of
nodes you wish to manage with puppet, without having to set up an
entire puppet infrastructure. It thus allows you to work "masterless".
You can use this to bootstrap any kind of appliance, like a puppet
master!

For now this plugin is only compatible with the Debian versions
Wheezy, Jessie and Stretch. These are the Debian distributions
supported by puppetlabs.

About Master/agent setups
~~~~~~~~~~~~~~~~~~~~~~~~~

If you wish to use this plugin in an infrastructure where a puppet
master is present, you should evaluate what your setup is. In a puppet
OSS server setup it can be useful to just use the plugin without any
manifests, assets or modules included. In a puppet PE environment you
will probably not need this plugin, since the PE server console gives
you a URL that installs the agent corresponding to your PE server.
About Puppet 5
~~~~~~~~~~~~~~

Although Puppet 5 has been available for some time, there is still
heavy development going on in that version. This module does NOT
support the installation of that version at this time. If you think it
should, please open up an issue on ``.

Settings
~~~~~~~~

-  ``manifest``: Path to the puppet manifest that should be applied.
   ``optional``
-  ``assets``: Path to puppet assets. The contents will be copied into
   ``/etc/puppetlabs`` on the image. Any existing files will be
   overwritten. ``optional``
-  ``install_modules``: A list of modules you wish to install from
   `` inside the chroot. It assumes a FORCED install of the modules.
   This is a list of tuples. Every tuple must at least contain the
   module name. A version is optional; when no version is given, it
   will take the latest version available from the forge.
   Format: [module_name (required), version (optional)]
-  ``enable_agent``: Whether the puppet agent daemon should be
   enabled. ``optional - not recommended``. Disabled by default.
   UNTESTED

An example bootstrap-vz manifest is included in the ``KVM`` folder of
the manifests examples directory.

Limitations
~~~~~~~~~~~

(Help is always welcome, feel free to chip in!)

General:

-  This plugin only installs the PC1 package for now; it needs to be
   extended to be able to install the package of choice.

Manifests:

-  Running puppet manifests is not recommended and untested, see below.

Assets:

-  The assets path must be ABSOLUTE to your manifest file.

install_modules:

-  It assumes installing the given list of tuples of modules with the
   following command:
   "... install --force $module_name (--version $version_number)"
   The module name is mandatory, the version is optional. When no
   version is given, it will pick the master version of the module
   from ``.
-  It assumes the modules are installed into the "production"
   environment. Installing into another environment, e.g. develop, is
   currently not implemented.
-  You cannot include local modules this way; to include your
   homebrewed modules, you need to inject them through the assets
   directive.

UNTESTED:

-  Enabling the agent and applying the manifest inside the chrooted
   environment.

Keep in mind that when applying a manifest with the agent option
enabled, the system is in a chrooted environment. This can prevent
daemons from running properly (e.g. listening to ports); they will
also need to be shut down gracefully (which bootstrap-vz cannot do)
before unmounting the volume. It is advisable to avoid starting any
daemons inside the chroot at all.
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/__init__.py000066400000000000000000000015671323112141500253560ustar00rootroot00000000000000import tasks


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.CheckRequestedDebianRelease)
    taskset.add(tasks.AddPuppetlabsPC1SourcesList)
    taskset.add(tasks.InstallPuppetlabsPC1ReleaseKey)
    taskset.add(tasks.InstallPuppetAgent)
    if 'assets' in manifest.plugins['puppet']:
        taskset.add(tasks.CheckAssetsPath)
        taskset.add(tasks.CopyPuppetAssets)
    if 'manifest' in manifest.plugins['puppet']:
        taskset.add(tasks.CheckManifestPath)
        taskset.add(tasks.ApplyPuppetManifest)
    if 'install_modules' in manifest.plugins['puppet']:
        taskset.add(tasks.InstallModules)
    if manifest.plugins['puppet'].get('enable_agent', False):
        taskset.add(tasks.EnableAgent)
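An illustrative manifest fragment exercising the settings documented above (the paths and module names are placeholders; ``install_modules`` entries follow the ``[module_name, version]`` tuple format from the README):

```yaml
plugins:
  puppet:
    assets: /absolute/path/to/puppet/assets   # copied into /etc/puppetlabs
    install_modules:
      - [puppetlabs-stdlib, 4.25.0]  # pinned version
      - [puppetlabs-apt]             # latest version from the forge
    enable_agent: false
```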
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/assets/000077500000000000000000000000001323112141500245405ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/assets/gpg-keyrings-PC1/000077500000000000000000000000001323112141500275275ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/puppet/assets/gpg-keyrings-PC1/jessie/000077500000000000000000000000001323112141500310115ustar00rootroot00000000000000puppetlabs-pc1-keyring.gpg
[binary puppetlabs PC1 GPG keyring data omitted]
Salt
----

Installs the `Salt `__ minion in the image. Uses the
`salt-bootstrap `__ script to install.

Settings
~~~~~~~~

-  ``install_source``: Source to install the salt codebase from.
   ``stable`` for current stable, ``daily`` for installing the daily
   build, and ``git`` to install from git repository. ``required``
-  ``version``: Only needed if you are installing from ``git``.
   ``develop`` to install current development head, or provide any tag
   name or commit hash from `salt repo `__ ``optional``
-  ``master``: Salt master FQDN or IP ``optional``
-  ``grains``: Set `salt grains `__ for this minion. Accepts a map with
   grain name as key and the grain data as value. ``optional``

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/salt/__init__.py

import tasks


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    taskset.add(tasks.InstallSaltDependencies)
    taskset.add(tasks.BootstrapSaltMinion)
    if 'grains' in manifest.plugins['salt']:
        taskset.add(tasks.SetSaltGrains)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/salt/manifest-schema.yml

---
$schema: http://json-schema.org/draft-04/schema#
title: Saltstack plugin manifest
type: object
properties:
  plugins:
    type: object
    properties:
      salt:
        type: object
        properties:
          grains:
            type: object
            patternProperties:
              ^[^/\0]+$: {type: string}
            minItems: 1
          install_source:
            enum:
            - stable
            - daily
            - git
          master: {type: string}
          version: {type: string}
        required: [install_source]
        additionalProperties: false

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/salt/tasks.py

from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import packages
from bootstrapvz.common.tools import log_check_call
from bootstrapvz.common.tools import sed_i
import os
import urllib


class InstallSaltDependencies(Task):
    description = 'Add depended packages for salt-minion'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('curl')
        info.packages.add('ca-certificates')


class BootstrapSaltMinion(Task):
    description = 'Installing salt-minion using the bootstrap script'
    phase = phases.package_installation
    predecessors = [packages.InstallPackages]

    @classmethod
    def run(cls, info):
        # Download bootstrap script
        bootstrap_script = os.path.join(info.root, 'install_salt.sh')
        with open(bootstrap_script, 'w') as f:
            d = urllib.urlopen('http://bootstrap.saltstack.org')
            f.write(d.read())
        # This is needed since bootstrap doesn't handle -X for debian distros properly.
        # We disable checking for running services at end since we do not start them.
        sed_i(bootstrap_script, 'install_debian_check_services', 'disabled_debian_check_services')
        bootstrap_command = ['chroot', info.root, 'bash', 'install_salt.sh', '-X']
        if 'master' in info.manifest.plugins['salt']:
            bootstrap_command.extend(['-A', info.manifest.plugins['salt']['master']])
        install_source = info.manifest.plugins['salt'].get('install_source', 'stable')
        bootstrap_command.append(install_source)
        if install_source == 'git' and ('version' in info.manifest.plugins['salt']):
            bootstrap_command.append(info.manifest.plugins['salt']['version'])
        log_check_call(bootstrap_command)


class SetSaltGrains(Task):
    description = 'Set custom salt grains'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        grains_file = os.path.join(info.root, 'etc/salt/grains')
        grains = info.manifest.plugins['salt']['grains']
        with open(grains_file, 'a') as f:
            for g in grains:
                f.write('%s: %s\n' % (g, grains[g]))

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/tmpfs_workspace/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/tmpfs_workspace/README.rst

tmpfs workspace
---------------

The ``tmpfs workspace`` plugin mounts a tmpfs filesystem for the
workspace temporary files. This is useful when the workspace directory
is placed on a slow medium (e.g.
a hard disk drive), the build process performs lots of local I/O (e.g.
building a vagrant box), and there is enough RAM to store data necessary
for the build process.

For example, the ``stretch-vagrant.yml`` manifest file from the examples
directory takes 33 minutes to build on the plugin author's home server.
Using this plugin reduces this time to 3 minutes at the cost of 1.2GB of
additional RAM usage.

Settings
~~~~~~~~

This plugin has no settings. To enable it add ``"tmpfs_workspace":{}``
to the plugins section of the manifest.

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/tmpfs_workspace/__init__.py

from bootstrapvz.common.tasks.workspace import CreateWorkspace, DeleteWorkspace
from tasks import CreateTmpFsWorkspace, MountTmpFsWorkspace, UnmountTmpFsWorkspace, DeleteTmpFsWorkspace


def resolve_tasks(taskset, manifest):
    taskset.discard(CreateWorkspace)
    taskset.discard(DeleteWorkspace)
    taskset.add(CreateTmpFsWorkspace)
    taskset.add(MountTmpFsWorkspace)
    taskset.add(UnmountTmpFsWorkspace)
    taskset.add(DeleteTmpFsWorkspace)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    counter_task(taskset, MountTmpFsWorkspace, UnmountTmpFsWorkspace)
    counter_task(taskset, CreateTmpFsWorkspace, DeleteTmpFsWorkspace)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/tmpfs_workspace/tasks.py

from os import makedirs, rmdir
from bootstrapvz.base import Task
from bootstrapvz.common.tasks.workspace import CreateWorkspace, DeleteWorkspace
from bootstrapvz.common import phases
from bootstrapvz.common.tools import log_check_call


class CreateTmpFsWorkspace(Task):
    description = 'Creating directory for tmpfs-based workspace'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        makedirs(info.workspace)


class MountTmpFsWorkspace(Task):
    description = 'Mounting tmpfs-based workspace'
    phase = phases.preparation
    # CreateWorkspace is explicitly skipped (see the plugin's resolve_task function). Several other tasks
    # depend on CreateWorkspace to put their own files inside the workspace. We position MountTmpFs before
    # CreateWorkspace to leverage these dependencies. See also UnmountTmpFs/DeleteWorkspace below.
    successors = [CreateWorkspace]
    predecessors = [CreateTmpFsWorkspace]

    @classmethod
    def run(cls, info):
        log_check_call(['mount', '--types', 'tmpfs', 'none', info.workspace])


class UnmountTmpFsWorkspace(Task):
    description = 'Unmounting tmpfs-based workspace'
    phase = phases.cleaning
    predecessors = [DeleteWorkspace]

    @classmethod
    def run(cls, info):
        log_check_call(['umount', info.workspace])


class DeleteTmpFsWorkspace(Task):
    description = 'Deleting directory for tmpfs-based workspace'
    phase = phases.cleaning
    predecessors = [UnmountTmpFsWorkspace]

    @classmethod
    def run(cls, info):
        rmdir(info.workspace)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/unattended_upgrades/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/unattended_upgrades/README.rst

Unattended upgrades
-------------------

Enables the `unattended update/upgrade feature `__ in aptitude. Enable it
to have your system automatically download and install security updates
with a set interval.

Settings
~~~~~~~~

-  ``update_interval``: Days between running ``apt-get update``.
   ``required``
-  ``download_interval``: Days between running
   ``apt-get upgrade --download-only`` ``required``
-  ``upgrade_interval``: Days between installing any security upgrades.
   ``required``

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/unattended_upgrades/__init__.py


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    import tasks
    taskset.add(tasks.AddUnattendedUpgradesPackage)
    taskset.add(tasks.EnablePeriodicUpgrades)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/unattended_upgrades/manifest-schema.yml

---
$schema: http://json-schema.org/draft-04/schema#
title: Unattended upgrades plugin manifest
type: object
properties:
  plugins:
    type: object
    properties:
      unattended_upgrades:
        type: object
        properties:
          download_interval: {type: integer}
          update_interval: {type: integer}
          upgrade_interval: {type: integer}
        required:
        - update_interval
        - download_interval
        - upgrade_interval
        additionalProperties: false

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/unattended_upgrades/tasks.py

from bootstrapvz.base import Task
from bootstrapvz.common import phases


class AddUnattendedUpgradesPackage(Task):
    description = 'Adding `unattended-upgrades\' to the image packages'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('unattended-upgrades')


class EnablePeriodicUpgrades(Task):
    description = 'Writing the periodic upgrades apt config file'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        import os.path
        periodic_path = os.path.join(info.root, 'etc/apt/apt.conf.d/02periodic')
        update_interval = info.manifest.plugins['unattended_upgrades']['update_interval']
        download_interval = info.manifest.plugins['unattended_upgrades']['download_interval']
        upgrade_interval = info.manifest.plugins['unattended_upgrades']['upgrade_interval']
        with open(periodic_path, 'w') as periodic:
            periodic.write(('// Enable the update/upgrade script (0=disable)\n'
                            'APT::Periodic::Enable "1";\n\n'
                            '// Do "apt-get update" automatically every n-days (0=disable)\n'
                            'APT::Periodic::Update-Package-Lists "{update_interval}";\n\n'
                            '// Do "apt-get upgrade --download-only" every n-days (0=disable)\n'
                            'APT::Periodic::Download-Upgradeable-Packages "{download_interval}";\n\n'
                            '// Run the "unattended-upgrade" security upgrade script\n'
                            '// every n-days (0=disabled)\n'
                            '// Requires the package "unattended-upgrades" and will write\n'
                            '// a log in /var/log/unattended-upgrades\n'
                            'APT::Periodic::Unattended-Upgrade "{upgrade_interval}";\n'
                            .format(update_interval=update_interval,
                                    download_interval=download_interval,
                                    upgrade_interval=upgrade_interval)))

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/README.rst

Vagrant
-------

Vagrant is a tool to quickly create virtualized environments. It uses
"boxes" to make downloading and sharing those environments easier. A box
is a tarball containing virtual volumes accompanied by an
`OVF specification `__ of the virtual machine.

This plugin creates a vagrant box that is ready to be shared or deployed.
At the moment it is only compatible with the VirtualBox provider and
doesn't require any additional settings.
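
A minimal manifest excerpt that enables the plugin might look like this
(the name and hostname values are illustrative; per the plugin's manifest
schema, the ``virtualbox`` provider, a ``vmdk``-backed volume and a
``system.hostname`` are required):

.. code-block:: yaml

    ---
    provider:
      name: virtualbox
    system:
      hostname: vagrant-debian
    volume:
      backing: vmdk
    plugins:
      vagrant: {}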
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/__init__.py

import tasks
from bootstrapvz.common import task_groups
from bootstrapvz.common.tasks import image, ssh, volume
from bootstrapvz.common.tools import rel_path


def validate_manifest(data, validator, error):
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    taskset.update(task_groups.ssh_group)
    taskset.discard(image.MoveImage)
    taskset.discard(ssh.DisableSSHPasswordAuthentication)
    taskset.update([tasks.CheckBoxPath,
                    tasks.CreateVagrantBoxDir,
                    tasks.AddPackages,
                    tasks.CreateVagrantUser,
                    tasks.PasswordlessSudo,
                    tasks.SetRootPassword,
                    tasks.AddInsecurePublicKey,
                    tasks.PackageBox,
                    tasks.RemoveVagrantBoxDir,
                    volume.Delete,
                    ])


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    counter_task(taskset, tasks.CreateVagrantBoxDir, tasks.RemoveVagrantBoxDir)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/assets/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/assets/Vagrantfile

# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.

Vagrant::Config.run do |config|
  # This Vagrantfile is auto-generated by `vagrant package` to contain
  # the MAC address of the box. Custom configuration should be placed in
  # the actual `Vagrantfile` in this box.
  config.vm.base_mac = "[MAC_ADDRESS]"
end

# Load include vagrant file if it exists after the auto-generated
# so it can override any of the settings
include_vagrantfile = File.expand_path("../include/_Vagrantfile", __FILE__)
load include_vagrantfile if File.exist?(include_vagrantfile)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/assets/authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/assets/box.ovf

[box.ovf is an OVF envelope whose XML markup was lost in extraction; only element text remains. Recoverable text on this line: "List of the virtual disks used in the package", "Logical networks used in the package", "Logical network used by this appliance."]
[box.ovf element text, continued: a virtual machine with guest OS placeholders [OS_DESCRIPTION]/[OS_TYPE]; virtual hardware family "[BOXNAME]" of type virtualbox-2.2; 1 virtual CPU; 512 MB of memory; two PIIX4 IDE controllers (ideController0/ideController1); an AHCI SATA controller (sataController0); an E1000 Ethernet adapter on 'NAT'; disk image disk1 at /disk/vmdisk1; two CD-ROM drives (cdrom1/cdrom2); and a "Complete VirtualBox machine configuration in VirtualBox format" section.]

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/assets/metadata.json

{"provider": "virtualbox"}

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/manifest-schema.yml

---
$schema: http://json-schema.org/draft-04/schema#
title: Vagrant plugin manifest
type: object
properties:
  provider:
    type: object
    properties:
      name:
        type: string
        enum: [virtualbox]
  system:
    required: [hostname]
  volume:
    type: object
    properties:
      backing:
        type: string
        enum: [vmdk]
    required: [backing]
  plugins:
    type: object
    properties:
      vagrant:
        type: object
        additionalProperties: false

bootstrap-vz-0.9.11+20180121git/bootstrapvz/plugins/vagrant/tasks.py

from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import workspace
from bootstrapvz.common.tools import rel_path
import os
import shutil

assets = rel_path(__file__, 'assets')


class CheckBoxPath(Task):
    description = 'Checking if the vagrant box file already exists'
    phase = phases.validation

    @classmethod
    def run(cls,
            info):
        box_basename = info.manifest.name.format(**info.manifest_vars)
        box_name = box_basename + '.box'
        box_path = os.path.join(info.manifest.bootstrapper['workspace'], box_name)
        if os.path.exists(box_path):
            from bootstrapvz.common.exceptions import TaskError
            msg = 'The vagrant box `{name}\' already exists at `{path}\''.format(name=box_name, path=box_path)
            raise TaskError(msg)
        info._vagrant['box_name'] = box_name
        info._vagrant['box_path'] = box_path


class CreateVagrantBoxDir(Task):
    description = 'Creating directory for the vagrant box'
    phase = phases.preparation
    predecessors = [workspace.CreateWorkspace]

    @classmethod
    def run(cls, info):
        info._vagrant['folder'] = os.path.join(info.workspace, 'vagrant')
        os.mkdir(info._vagrant['folder'])


class AddPackages(Task):
    description = 'Add packages that vagrant depends on'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('openssh-server')
        info.packages.add('sudo')
        info.packages.add('nfs-client')


class CreateVagrantUser(Task):
    description = 'Creating the vagrant user'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'useradd', '--create-home', '--shell', '/bin/bash', 'vagrant'])


class PasswordlessSudo(Task):
    description = 'Allowing the vagrant user to use sudo without a password'
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        sudo_vagrant_path = os.path.join(info.root, 'etc/sudoers.d/vagrant')
        with open(sudo_vagrant_path, 'w') as sudo_vagrant:
            sudo_vagrant.write('vagrant ALL=(ALL) NOPASSWD:ALL')
        import stat
        ug_read_only = (stat.S_IRUSR | stat.S_IRGRP)
        os.chmod(sudo_vagrant_path, ug_read_only)


class AddInsecurePublicKey(Task):
    description = 'Adding vagrant insecure public key'
    phase = phases.system_modification
    predecessors = [CreateVagrantUser]

    @classmethod
    def run(cls, info):
        ssh_dir = os.path.join(info.root, 'home/vagrant/.ssh')
        os.mkdir(ssh_dir)
        authorized_keys_source_path = os.path.join(assets, 'authorized_keys')
        with open(authorized_keys_source_path, 'r') as authorized_keys_source:
            insecure_public_key = authorized_keys_source.read()
        authorized_keys_path = os.path.join(ssh_dir, 'authorized_keys')
        with open(authorized_keys_path, 'a') as authorized_keys:
            authorized_keys.write(insecure_public_key)

        import stat
        os.chmod(ssh_dir, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
        os.chmod(authorized_keys_path, stat.S_IRUSR | stat.S_IWUSR)

        # We can't do this directly with python, since getpwnam gets its info from the host
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root,
                        'chown', 'vagrant:vagrant',
                        '/home/vagrant/.ssh', '/home/vagrant/.ssh/authorized_keys'])


class SetRootPassword(Task):
    description = 'Setting the root password to `vagrant\''
    phase = phases.system_modification

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        log_check_call(['chroot', info.root, 'chpasswd'], 'root:vagrant')


class PackageBox(Task):
    description = 'Packaging the volume as a vagrant box'
    phase = phases.image_registration

    @classmethod
    def run(cls, info):
        vagrantfile_source = os.path.join(assets, 'Vagrantfile')
        vagrantfile = os.path.join(info._vagrant['folder'], 'Vagrantfile')
        shutil.copy(vagrantfile_source, vagrantfile)

        import random
        mac_address = '080027{mac:06X}'.format(mac=random.randrange(16 ** 6))
        from bootstrapvz.common.tools import sed_i
        sed_i(vagrantfile, '\\[MAC_ADDRESS\\]', mac_address)

        metadata_source = os.path.join(assets, 'metadata.json')
        metadata = os.path.join(info._vagrant['folder'], 'metadata.json')
        shutil.copy(metadata_source, metadata)

        from bootstrapvz.common.tools import log_check_call
        disk_name = 'box-disk1.' + info.volume.extension
        disk_link = os.path.join(info._vagrant['folder'], disk_name)
        log_check_call(['ln', '-s', info.volume.image_path, disk_link])

        ovf_path = os.path.join(info._vagrant['folder'], 'box.ovf')
        cls.write_ovf(info, ovf_path, mac_address, disk_name)

        box_files = os.listdir(info._vagrant['folder'])
        log_check_call(['tar', '--create', '--gzip', '--dereference',
                        '--file', info._vagrant['box_path'],
                        '--directory', info._vagrant['folder']] + box_files
                       )
        import logging
        logging.getLogger(__name__).info('The vagrant box has been placed at ' + info._vagrant['box_path'])

    @classmethod
    def write_ovf(cls, info, destination, mac_address, disk_name):
        namespaces = {'ovf': 'http://schemas.dmtf.org/ovf/envelope/1',
                      'rasd': 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData',
                      'vssd': 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData',
                      'xsi': 'http://www.w3.org/2001/XMLSchema-instance',
                      'vbox': 'http://www.virtualbox.org/ovf/machine',
                      }

        def attr(element, name, value=None):
            for prefix, ns in namespaces.iteritems():
                name = name.replace(prefix + ':', '{' + ns + '}')
            if value is None:
                return element.attrib[name]
            else:
                element.attrib[name] = str(value)

        template_path = os.path.join(assets, 'box.ovf')
        import xml.etree.ElementTree as ET
        template = ET.parse(template_path)
        root = template.getroot()

        [disk_ref] = root.findall('./ovf:References/ovf:File', namespaces)
        attr(disk_ref, 'ovf:href', disk_name)

        # List of OVF disk format URIs
        # Snatched from VBox source (src/VBox/Main/src-server/ApplianceImpl.cpp:47)
        # ISOURI = "http://www.ecma-international.org/publications/standards/Ecma-119.htm"
        # VMDKStreamURI = "http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized"
        # VMDKSparseURI = "http://www.vmware.com/specifications/vmdk.html#sparse"
        # VMDKCompressedURI = "http://www.vmware.com/specifications/vmdk.html#compressed"
        # VMDKCompressedURI2 =
"http://www.vmware.com/interfaces/specifications/vmdk.html#compressed" # VHDURI = "http://go.microsoft.com/fwlink/?LinkId=137171" volume_uuid = info.volume.get_uuid() [disk] = root.findall('./ovf:DiskSection/ovf:Disk', namespaces) attr(disk, 'ovf:capacity', info.volume.size.bytes.get_qty_in('B')) attr(disk, 'ovf:format', info.volume.ovf_uri) attr(disk, 'vbox:uuid', volume_uuid) [system] = root.findall('./ovf:VirtualSystem', namespaces) attr(system, 'ovf:id', info._vagrant['box_name']) # Set the operating system [os_section] = system.findall('./ovf:OperatingSystemSection', namespaces) os_info = {'i386': {'id': 96, 'name': 'Debian'}, 'amd64': {'id': 96, 'name': 'Debian_64'} }.get(info.manifest.system['architecture']) attr(os_section, 'ovf:id', os_info['id']) [os_desc] = os_section.findall('./ovf:Description', namespaces) os_desc.text = os_info['name'] [os_type] = os_section.findall('./vbox:OSType', namespaces) os_type.text = os_info['name'] [sysid] = system.findall('./ovf:VirtualHardwareSection/ovf:System/' 'vssd:VirtualSystemIdentifier', namespaces) sysid.text = info._vagrant['box_name'] [machine] = system.findall('./vbox:Machine', namespaces) import uuid attr(machine, 'ovf:uuid', uuid.uuid4()) attr(machine, 'ovf:name', info._vagrant['box_name']) from datetime import datetime attr(machine, 'ovf:lastStateChange', datetime.now().strftime('%Y-%m-%dT%H:%M:%SZ')) [nic] = machine.findall('./ovf:Hardware/ovf:Network/ovf:Adapter', namespaces) attr(machine, 'ovf:MACAddress', mac_address) [device_img] = machine.findall('./ovf:StorageControllers' '/ovf:StorageController[@name="SATA Controller"]' '/ovf:AttachedDevice/ovf:Image', namespaces) attr(device_img, 'uuid', '{' + str(volume_uuid) + '}') template.write(destination, xml_declaration=True) # , default_namespace=namespaces['ovf'] class RemoveVagrantBoxDir(Task): description = 'Removing the vagrant box directory' phase = phases.cleaning successors = [workspace.DeleteWorkspace] @classmethod def run(cls, info): 
        shutil.rmtree(info._vagrant['folder'])
        del info._vagrant['folder']

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/README.rst

Providers in bootstrap-vz represent various cloud providers and virtual
machines. bootstrap-vz is an extensible platform with loose coupling and
a significant amount of tooling, which allows for painless implementation
of new providers. The virtualbox provider for example is implemented in
only 89 lines of python, since most of the building blocks are a part of
the common task library. Only the kernel and guest additions installation
are specific to that provider.

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/__init__.py
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/README.rst

Azure
=====

This provider generates raw images for the Microsoft Azure computing
platform.

Manifest settings
-----------------

Provider
~~~~~~~~

-  ``waagent``: Waagent specific settings. ``required``

   -  ``conf``: Path to ``waagent.conf`` that should override the
      default ``optional``
   -  ``version``: Version of waagent to install. Waagent versions are
      available at: https://github.com/Azure/WALinuxAgent/releases
      ``required``

Example:

.. code-block:: yaml

    ---
    provider:
      name: azure
      waagent:
        conf: /root/waagent.conf
        version: 2.0.4

The Windows Azure Linux Agent can automatically configure swap space
using the local resource disk that is attached to the VM after
provisioning on Azure.
Modify the following parameters in /etc/waagent.conf appropriately:

::

    ResourceDisk.Format=y
    ResourceDisk.Filesystem=ext4
    ResourceDisk.MountPoint=/mnt/resource
    ResourceDisk.EnableSwap=y
    ResourceDisk.SwapSizeMB=2048  ## NOTE: set this to whatever you need it to be.

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/__init__.py

from bootstrapvz.common import task_groups
import tasks.packages
import tasks.boot
from bootstrapvz.common.tasks import image
from bootstrapvz.common.tasks import loopback
from bootstrapvz.common.tasks import initd
from bootstrapvz.common.tasks import ssh
from bootstrapvz.common.tasks import apt
from bootstrapvz.common.tasks import grub


def validate_manifest(data, validator, error):
    from bootstrapvz.common.tools import rel_path
    validator(data, rel_path(__file__, 'manifest-schema.yml'))


def resolve_tasks(taskset, manifest):
    taskset.update(task_groups.get_standard_groups(manifest))
    taskset.update([apt.AddBackports,
                    tasks.packages.DefaultPackages,
                    loopback.AddRequiredCommands,
                    loopback.Create,
                    image.MoveImage,
                    initd.InstallInitScripts,
                    ssh.AddOpenSSHPackage,
                    ssh.ShredHostkeys,
                    ssh.AddSSHKeyGeneration,
                    tasks.packages.Waagent,
                    tasks.boot.ConfigureGrub,
                    tasks.boot.PatchUdev,
                    ])
    taskset.discard(grub.SetGrubConsolOutputDeviceToSerial)


def resolve_rollback_tasks(taskset, manifest, completed, counter_task):
    taskset.update(task_groups.get_standard_rollback_tasks(completed))

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/assets/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/assets/udev.diff

diff --git a/debian/extra/initramfs-tools/scripts/init-top/udev b/debian/extra/initramfs-tools/scripts/init-top/udev
index 687911f..87f1121 100755
--- a/debian/extra/initramfs-tools/scripts/init-top/udev
+++ b/debian/extra/initramfs-tools/scripts/init-top/udev
@@ -31,11 +31,5 @@ if [ -d /sys/bus/scsi ]; then
 	udevadm settle || true
 fi
 
-# If the rootdelay parameter has been set, we wait a bit for devices
-# like usb/firewire disks to settle.
-if [ "$ROOTDELAY" ]; then
-	sleep $ROOTDELAY
-fi
-
 # Leave udev running to process events that come in out-of-band (like USB
 # connections)

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/manifest-schema.yml

---
$schema: http://json-schema.org/draft-04/schema#
title: Azure manifest
type: object
properties:
  provider:
    type: object
    properties:
      waagent:
        type: object
        properties:
          conf: {type: string}
          version: {type: string}
        required: [version]
    required: [waagent]
  system:
    type: object
    properties:
      bootloader:
        type: string
        enum:
        - grub
        - extlinux
  volume:
    type: object
    properties:
      backing:
        type: string
        enum: [vhd]
      partitions:
        type: object
        properties:
          type:
            enum:
            - none
            - msdos
            - gpt
    required: [backing]

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/tasks/
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/tasks/__init__.py

from bootstrapvz.common.tools import rel_path
assets = rel_path(__file__, '../assets')

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/tasks/boot.py

from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks import grub
from bootstrapvz.common.tasks import kernel
import os


class PatchUdev(Task):
    description = 'Patching udev configuration to remove ROOTDELAY sleep'
    phase = phases.system_modification
    successors = [kernel.UpdateInitramfs]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        from . import assets
        # c.f. http://anonscm.debian.org/cgit/pkg-systemd/systemd.git/commit/?id=61e055638cea
        udev_file = os.path.join(info.root, 'usr/share/initramfs-tools/scripts/init-top/udev')
        diff_file = os.path.join(assets, 'udev.diff')
        log_check_call(['patch', '--no-backup-if-mismatch', udev_file, diff_file])


class ConfigureGrub(Task):
    description = 'Change grub configuration to allow for ttyS0 output'
    phase = phases.system_modification
    successors = [grub.WriteGrubConfig]

    @classmethod
    def run(cls, info):
        info.grub_config['GRUB_CMDLINE_LINUX'].extend([
            'console=tty0',
            'console=ttyS0,115200n8',
            'earlyprintk=ttyS0,115200',
            'rootdelay=300',
        ])

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/tasks/packages-kernels.yml

---
# This is a mapping of Debian release codenames to processor architectures to kernel packages
squeeze:
  amd64: linux-image-amd64
  i386: linux-image-686
wheezy:
  amd64: linux-image-amd64
  i386: linux-image-686
jessie:
  amd64: linux-image-amd64
  i386: linux-image-686-pae
stretch:
  amd64: linux-image-amd64
  i386: linux-image-686-pae
sid:
  amd64: linux-image-amd64
  i386: linux-image-686-pae

bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/azure/tasks/packages.py

from bootstrapvz.base import Task
from bootstrapvz.common import phases
from bootstrapvz.common.tasks.packages import InstallPackages


class DefaultPackages(Task):
    description = 'Adding image packages required for Azure'
    phase = phases.preparation

    @classmethod
    def run(cls, info):
        info.packages.add('openssl')
        info.packages.add('python-openssl')
        info.packages.add('python-pyasn1')
        info.packages.add('sudo')
        info.packages.add('parted')

        from bootstrapvz.common.tools import config_get, rel_path
        kernel_packages_path = rel_path(__file__, 'packages-kernels.yml')
        kernel_package = config_get(kernel_packages_path,
                                    [info.manifest.release.codename,
                                     info.manifest.system['architecture']])
        info.packages.add(kernel_package)


class Waagent(Task):
    description = 'Add waagent'
    phase = phases.package_installation
    predecessors = [InstallPackages]

    @classmethod
    def run(cls, info):
        from bootstrapvz.common.tools import log_check_call
        import os
        waagent_version = info.manifest.provider['waagent']['version']
        waagent_file = 'WALinuxAgent-' + waagent_version + '.tar.gz'
        waagent_url = 'https://github.com/Azure/WALinuxAgent/archive/' + waagent_file
        log_check_call(['wget', '-P', info.root, waagent_url])
        waagent_directory = os.path.join(info.root, 'root')
        log_check_call(['tar', 'xaf', os.path.join(info.root, waagent_file), '-C', waagent_directory])
        os.remove(os.path.join(info.root, waagent_file))
        waagent_script = '/root/WALinuxAgent-WALinuxAgent-' + waagent_version + '/waagent'
        log_check_call(['chroot', info.root, 'cp', waagent_script, '/usr/sbin/waagent'])
        log_check_call(['chroot', info.root, 'chmod', '755', '/usr/sbin/waagent'])
        log_check_call(['chroot', info.root, 'waagent', '-install'])
        if info.manifest.provider['waagent'].get('conf', False):
            if os.path.isfile(info.manifest.provider['waagent']['conf']):
                log_check_call(['cp', info.manifest.provider['waagent']['conf'],
                                os.path.join(info.root, 'etc/waagent.conf')])

        # The Azure Linux agent uses 'useradd' to add users, but SHELL
        # is set to /bin/sh by default. Set this to /bin/bash instead.
from bootstrapvz.common.tools import sed_i useradd_config = os.path.join(info.root, 'etc/default/useradd') sed_i(useradd_config, r'^(SHELL=.*)', r'SHELL=/bin/bash') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/000077500000000000000000000000001323112141500235245ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/README.rst000066400000000000000000000032021323112141500252100ustar00rootroot00000000000000Docker ====== The `Docker `__ provider creates a docker image from scratch, creates a Dockerfile for it and imports the image to a repo specified in the manifest. In order to reduce the size of the image, it is highly recommended to make use of the `minimize_size <../../plugins/minimize_size>`__ plugin. With optimal settings a 64-bit jessie image can be whittled down to 81.95 MB (built on Dec 13th 2015 with ``manifests/examples/docker/jessie-minimized.yml``). Manifest settings ----------------- Name ~~~~ - ``name``: The image name is the repository and tag to which the image should be imported. ``required`` ``manifest vars`` Provider ~~~~~~~~ - ``dockerfile``: List of Dockerfile instructions that should be appended to the ones created by the bootstrapper. ``optional`` - ``labels``: Labels that should be added to the dockerfile. The image name specified at the top of the manifest will be added as the label ``name``. Check out the `docker docs `__ for more information about custom labels. `Project atomic `__ also has some `useful recommendations `__ for generic container labels. ``optional`` ``manifest vars`` Example: .. 
code-block:: yaml --- name: bootstrap-vz:latest provider: name: docker dockerfile: - CMD /bin/bash labels: name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d} description: Debian {system.release} {system.architecture} bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/__init__.py000066400000000000000000000027561323112141500256470ustar00rootroot00000000000000from bootstrapvz.common import task_groups from bootstrapvz.common.tasks import apt, folder, filesystem from bootstrapvz.common.tools import rel_path import tasks.commands import tasks.image def validate_manifest(data, validator, error): schema_path = rel_path(__file__, 'manifest-schema.yml') validator(data, schema_path) def resolve_tasks(taskset, manifest): taskset.update(task_groups.get_base_group(manifest)) taskset.update([folder.Create, filesystem.CopyMountTable, filesystem.RemoveMountTable, folder.Delete, ]) taskset.update(task_groups.get_network_group(manifest)) taskset.update(task_groups.get_apt_group(manifest)) taskset.update(task_groups.get_locale_group(manifest)) taskset.update(task_groups.security_group) taskset.update(task_groups.get_cleanup_group(manifest)) # Let the autostart of daemons by apt remain disabled taskset.discard(apt.EnableDaemonAutostart) taskset.update([tasks.commands.AddRequiredCommands, tasks.image.CreateDockerfileEntry, tasks.image.CreateImage, ]) if 'labels' in manifest.provider: taskset.add(tasks.image.PopulateLabels) if 'dockerfile' in manifest.provider: taskset.add(tasks.image.AppendManifestDockerfile) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): taskset.update(task_groups.get_standard_rollback_tasks(completed)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/manifest-schema.yml000066400000000000000000000022241323112141500273130ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Docker manifest type: object properties: provider: type: object properties: labels: type: object 
properties: # https://github.com/projectatomic/ContainerApplicationGenericLabels distribution-scope: type: string enum: - private - authoritative-source-only - restricted - public patternProperties: ^.+$: {type: string} dockerfile: type: array items: # https://github.com/turtlebender/docker/blob/6e2662b3bad319679e17fe25d410f246820ab0e9/builder/job.go#L27 type: string pattern: '^(ENTRYPOINT|CMD|USER|WORKDIR|ENV|VOLUME|EXPOSE|ONBUILD|LABEL|MAINTAINER)' system: type: object properties: bootloader: type: string enum: [none] volume: type: object properties: backing: type: string enum: [folder] partitions: type: object properties: type: type: string enum: [none] required: [backing] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/tasks/000077500000000000000000000000001323112141500246515ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/tasks/__init__.py000066400000000000000000000000001323112141500267500ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/tasks/commands.py000066400000000000000000000005721323112141500270300ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import host class AddRequiredCommands(Task): description = 'Adding commands required for docker' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): info.host_dependencies['docker'] = 'docker.io' bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/docker/tasks/image.py000066400000000000000000000046361323112141500263160ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tools import log_check_call class CreateDockerfileEntry(Task): description = 'Creating the Dockerfile entry' phase = phases.preparation @classmethod def run(cls, info): info._docker['dockerfile'] = [] class CreateImage(Task): description = 'Creating 
docker image' phase = phases.image_registration @classmethod def run(cls, info): from pipes import quote tar_cmd = ['tar', '--create', '--numeric-owner', '--directory', info.volume.path, '.'] docker_cmd = ['docker', 'import'] for instruction in info._docker['dockerfile']: docker_cmd.extend(['--change', instruction]) docker_cmd.extend(['-', info.manifest.name.format(**info.manifest_vars)]) cmd = ' '.join(map(quote, tar_cmd)) + ' | ' + ' '.join(map(quote, docker_cmd)) [info._docker['image_id']] = log_check_call([cmd], shell=True) class PopulateLabels(Task): description = 'Populating docker labels' phase = phases.image_registration successors = [CreateImage] @classmethod def run(cls, info): import pyrfc3339 from datetime import datetime import pytz labels = {} labels['name'] = info.manifest.name.format(**info.manifest_vars) # Inspired by https://github.com/projectatomic/ContainerApplicationGenericLabels # See here for the discussion on the debian-cloud mailing list # https://lists.debian.org/debian-cloud/2015/05/msg00071.html labels['architecture'] = info.manifest.system['architecture'] labels['build-date'] = pyrfc3339.generate(datetime.utcnow().replace(tzinfo=pytz.utc)) if 'labels' in info.manifest.provider: for label, value in info.manifest.provider['labels'].items(): labels[label] = value.format(**info.manifest_vars) from pipes import quote for label, value in labels.items(): info._docker['dockerfile'].append('LABEL {}={}'.format(label, quote(value))) class AppendManifestDockerfile(Task): description = 'Appending Dockerfile instructions from the manifest' phase = phases.image_registration predecessors = [PopulateLabels] successors = [CreateImage] @classmethod def run(cls, info): info._docker['dockerfile'].extend(info.manifest.provider['dockerfile']) 
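The ``CreateImage`` task above never writes the image to disk: it shells out to a ``tar | docker import`` pipeline, attaching every collected Dockerfile instruction as a ``--change`` flag. A standalone sketch of that command assembly (the helper name and paths here are illustrative, not part of bootstrap-vz; the Python 2 code above uses ``pipes.quote`` where this uses ``shlex.quote``):

```python
# Sketch of how CreateImage builds its shell pipeline: tar up the
# bootstrapped volume folder and stream it straight into `docker import`,
# applying each Dockerfile instruction via a --change flag.
from shlex import quote  # pipes.quote in the Python 2 original


def build_import_command(volume_path, instructions, image_name):
    tar_cmd = ['tar', '--create', '--numeric-owner', '--directory', volume_path, '.']
    docker_cmd = ['docker', 'import']
    for instruction in instructions:
        docker_cmd.extend(['--change', instruction])
    docker_cmd.extend(['-', image_name])
    # Each token is quoted individually so the pipe itself stays unquoted
    return ' '.join(map(quote, tar_cmd)) + ' | ' + ' '.join(map(quote, docker_cmd))


print(build_import_command('/tmp/volume', ['CMD /bin/bash'], 'bootstrap-vz:latest'))
# → tar --create --numeric-owner --directory /tmp/volume . | docker import --change 'CMD /bin/bash' - bootstrap-vz:latest
```

Streaming the tarball avoids materializing an intermediate archive, which matters for minimized images built in constrained build environments.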
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/000077500000000000000000000000001323112141500227265ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/README.rst000066400000000000000000000076661323112141500244300ustar00rootroot00000000000000EC2 === The `EC2 `__ provider automatically creates a volume for bootstrapping (be it EBS or S3), makes a snapshot of it once it is done and registers it as an AMI. EBS volume backing only works on an EC2 host while S3 backed volumes *should* work locally (at this time however they do not, a fix is in the works). Unless `the cloud-init plugin <../../plugins/cloud_init>`__ is used, special startup scripts will be installed that automatically fetch the configured authorized\_key from the instance metadata and save or run any userdata supplied (if the userdata begins with ``#!`` it will be run). Set the variable ``install_init_scripts`` to ``False`` in order to disable this behaviour. Manifest settings ----------------- Credentials ~~~~~~~~~~~ The AWS credentials can be configured in two ways: via the manifest or through environment variables. To bootstrap S3 backed instances you will need a user certificate and a private key in addition to the access key and secret key, which are needed for bootstrapping EBS backed instances. The settings described below should be placed in the ``credentials`` key under the ``provider`` section. - ``access-key``: AWS access-key. May also be supplied via the environment variable ``$AWS_ACCESS_KEY`` ``required for EBS & S3 backing`` - ``secret-key``: AWS secret-key. May also be supplied via the environment variable ``$AWS_SECRET_KEY`` ``required for EBS & S3 backing`` - ``certificate``: Path to the AWS user certificate. Used for uploading the image to an S3 bucket. May also be supplied via the environment variable ``$AWS_CERTIFICATE`` ``required for S3 backing`` - ``private-key``: Path to the AWS private key. Used for uploading the image to an S3 bucket. 
May also be supplied via the environment variable ``$AWS_PRIVATE_KEY`` ``required for S3 backing`` - ``user-id``: AWS user ID. Used for uploading the image to an S3 bucket. May also be supplied via the environment variable ``$AWS_USER_ID`` ``required for S3 backing`` Example: .. code-block:: yaml --- provider: name: ec2 credentials: access-key: AFAKEACCESSKEYFORAWS secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva Virtualization ~~~~~~~~~~~~~~ EC2 supports both paravirtual and hardware virtual machines. The virtualization type determines various factors about the virtual machine performance (read more about this `in the EC2 docs`__). __ http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html - ``virtualization``: The virtualization type. Valid values: ``pvm``, ``hvm`` ``required`` Example: .. code-block:: yaml --- provider: name: ec2 virtualization: hvm Enhanced networking ~~~~~~~~~~~~~~~~~~~ Install enhanced networking drivers to take advantage of SR-IOV capabilities on hardware virtual machines. Read more about this in `the EC2 docs`__. __ http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html Example: .. code-block:: yaml --- provider: name: ec2 virtualization: hvm enhanced_networking: simple Image ~~~~~ - ``description``: Description of the AMI. ``manifest vars`` - ``bucket``: When bootstrapping an S3 backed image, this will be the bucket the image is uploaded to. ``required for S3 backing`` - ``region``: Region in which the AMI should be registered. ``required for S3 backing`` Example: .. code-block:: yaml --- provider: name: ec2 description: Debian {system.release} {system.architecture} bucket: debian-amis region: us-west-1 Dependencies ------------ To communicate with the AWS API, `boto3 `__ is required; you can install it with ``pip install boto3`` (on wheezy, the packaged version is too low). S3 images are chopped up and uploaded using `euca2ools `__ (install with ``apt-get install euca2ools``). 
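The credential lookup order described above (manifest value first, then the matching environment variable) can be sketched as follows. This helper is illustrative only, with assumed names; it is not bootstrap-vz's actual implementation:

```python
# Sketch of the credential resolution described in the Credentials section:
# a value set in the manifest wins, otherwise the documented environment
# variable is consulted; keys with neither source are omitted.
import os

ENV_FALLBACKS = {
    'access-key': 'AWS_ACCESS_KEY',
    'secret-key': 'AWS_SECRET_KEY',
    'certificate': 'AWS_CERTIFICATE',
    'private-key': 'AWS_PRIVATE_KEY',
    'user-id': 'AWS_USER_ID',
}


def resolve_credentials(manifest_credentials, environ=None):
    environ = os.environ if environ is None else environ
    resolved = {}
    for key, env_var in ENV_FALLBACKS.items():
        # Manifest setting takes precedence over the environment variable
        value = manifest_credentials.get(key, environ.get(env_var))
        if value is not None:
            resolved[key] = value
    return resolved
```

For EBS backing only ``access-key`` and ``secret-key`` need to resolve; S3 backing additionally requires the certificate, private key and user ID.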
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/__init__.py000066400000000000000000000140601323112141500250400ustar00rootroot00000000000000from bootstrapvz.common import task_groups import tasks.packages import tasks.connection import tasks.host import tasks.ami import tasks.ebs import tasks.filesystem import tasks.boot import tasks.network import tasks.initd import tasks.tuning from bootstrapvz.common.tasks import apt, boot, filesystem, grub, initd from bootstrapvz.common.tasks import kernel, loopback, volume from bootstrapvz.common.tools import rel_path def validate_manifest(data, validator, error): validator(data, rel_path(__file__, 'manifest-schema.yml')) from bootstrapvz.common.bytes import Bytes if data['volume']['backing'] == 'ebs': volume_size = Bytes(0) for key, partition in data['volume']['partitions'].iteritems(): if key != 'type': volume_size += Bytes(partition['size']) if int(volume_size % Bytes('1GiB')) != 0: msg = ('The volume size must be a multiple of 1GiB when using EBS backing') error(msg, ['volume', 'partitions']) else: validator(data, rel_path(__file__, 'manifest-schema-s3.yml')) bootloader = data['system']['bootloader'] virtualization = data['provider']['virtualization'] backing = data['volume']['backing'] partition_type = data['volume']['partitions']['type'] enhanced_networking = data['provider']['enhanced_networking'] if 'enhanced_networking' in data['provider'] else None if virtualization == 'pvm' and bootloader != 'pvgrub': error('Paravirtualized AMIs only support pvgrub as a bootloader', ['system', 'bootloader']) if backing != 'ebs' and virtualization == 'hvm': error('HVM AMIs currently only work when they are EBS backed', ['volume', 'backing']) if backing == 's3' and partition_type != 'none': error('S3 backed AMIs currently only work with unpartitioned volumes', ['system', 'bootloader']) if enhanced_networking == 'simple' and virtualization != 'hvm': error('Enhanced networking only works with HVM virtualization', ['provider', 
'virtualization']) def resolve_tasks(taskset, manifest): """ Function setting up tasks to run for this provider """ from bootstrapvz.common.releases import wheezy, jessie, stable taskset.update(task_groups.get_standard_groups(manifest)) taskset.update(task_groups.ssh_group) taskset.update([tasks.host.AddExternalCommands, tasks.packages.DefaultPackages, tasks.connection.SilenceBotoDebug, tasks.connection.GetCredentials, tasks.ami.AMIName, tasks.connection.Connect, tasks.tuning.TuneSystem, tasks.tuning.BlackListModules, boot.BlackListModules, boot.DisableGetTTYs, tasks.boot.AddXenGrubConsoleOutputDevice, grub.WriteGrubConfig, tasks.boot.UpdateGrubConfig, initd.AddExpandRoot, initd.RemoveHWClock, initd.InstallInitScripts, tasks.ami.RegisterAMI, ]) if manifest.release > wheezy: taskset.add(tasks.network.InstallNetworkingUDevHotplugAndDHCPSubinterface) if manifest.release <= wheezy: # The default DHCP client `isc-dhcp' doesn't work properly on wheezy and earlier taskset.add(tasks.network.InstallDHCPCD) taskset.add(tasks.network.EnableDHCPCDDNS) if manifest.release >= jessie: taskset.add(tasks.packages.AddWorkaroundGrowpart) taskset.add(initd.AdjustGrowpartWorkaround) if manifest.system['bootloader'] == 'grub': taskset.add(grub.EnableSystemd) if manifest.release <= stable: taskset.add(apt.AddBackports) if manifest.provider.get('install_init_scripts', True): taskset.add(tasks.initd.AddEC2InitScripts) if manifest.volume['partitions']['type'] != 'none': taskset.add(initd.AdjustExpandRootScript) if manifest.system['bootloader'] == 'pvgrub': taskset.add(grub.AddGrubPackage) taskset.update([grub.AddGrubPackage, grub.InitGrubConfig, grub.SetGrubTerminalToConsole, grub.SetGrubConsolOutputDeviceToSerial, grub.RemoveGrubTimeout, grub.DisableGrubRecovery, tasks.boot.CreatePVGrubCustomRule, tasks.boot.ConfigurePVGrub, grub.WriteGrubConfig, tasks.boot.UpdateGrubConfig, tasks.boot.LinkGrubConfig]) if manifest.volume['backing'].lower() == 'ebs': 
taskset.update([tasks.host.GetInstanceMetadata, tasks.ebs.Create, tasks.ebs.Snapshot, ]) taskset.add(tasks.ebs.Attach) taskset.discard(volume.Attach) if manifest.volume['backing'].lower() == 's3': taskset.update([loopback.AddRequiredCommands, tasks.host.SetRegion, loopback.Create, tasks.filesystem.S3FStab, tasks.ami.BundleImage, tasks.ami.UploadImage, tasks.ami.RemoveBundle, ]) taskset.discard(filesystem.FStab) if manifest.provider.get('enhanced_networking', None) == 'simple': taskset.update([kernel.AddDKMSPackages, tasks.network.InstallEnhancedNetworking, tasks.network.InstallENANetworking, kernel.UpdateInitramfs]) taskset.update([filesystem.Format, volume.Delete, ]) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): taskset.update(task_groups.get_standard_rollback_tasks(completed)) counter_task(taskset, tasks.ebs.Create, volume.Delete) counter_task(taskset, tasks.ebs.Attach, volume.Detach) counter_task(taskset, tasks.ami.BundleImage, tasks.ami.RemoveBundle) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/000077500000000000000000000000001323112141500242305ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/bin/000077500000000000000000000000001323112141500250005ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/bin/growpart000077500000000000000000000375211323112141500266030ustar00rootroot00000000000000#!/bin/sh # Copyright (C) 2011 Canonical Ltd. # Copyright (C) 2013 Hewlett-Packard Development Company, L.P. # # Authors: Scott Moser # Juerg Haefliger # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, version 3 of the License. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. # the fudge factor. if its within this many 512 byte sectors, dont bother FUDGE=${GROWPART_FUDGE:-$((20*1024))} TEMP_D="" RESTORE_FUNC="" RESTORE_HUMAN="" VERBOSITY=0 DISK="" PART="" PT_UPDATE=false DRY_RUN=0 MBR_CHS="" MBR_BACKUP="" GPT_BACKUP="" _capture="" error() { echo "$@" 1>&2 } fail() { [ $# -eq 0 ] || echo "FAILED:" "$@" exit 2 } nochange() { echo "NOCHANGE:" "$@" exit 1 } changed() { echo "CHANGED:" "$@" exit 0 } change() { echo "CHANGE:" "$@" exit 0 } cleanup() { if [ -n "${RESTORE_FUNC}" ]; then error "***** WARNING: Resize failed, attempting to revert ******" if ${RESTORE_FUNC} ; then error "***** Appears to have gone OK ****" else error "***** FAILED! or original partition table" \ "looked like: ****" cat "${RESTORE_HUMAN}" 1>&2 fi fi [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}" } debug() { local level=${1} shift [ "${level}" -gt "${VERBOSITY}" ] && return if [ "${DEBUG_LOG}" ]; then echo "$@" >>"${DEBUG_LOG}" else error "$@" fi } debugcat() { local level="$1" shift; [ "${level}" -gt "$VERBOSITY" ] && return if [ "${DEBUG_LOG}" ]; then cat "$@" >>"${DEBUG_LOG}" else cat "$@" 1>&2 fi } mktemp_d() { # just a mktemp -d that doens't need mktemp if its not there. 
_RET=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX" 2>/dev/null) && return _RET=$(umask 077 && t="${TMPDIR:-/tmp}/${0##*/}.$$" && mkdir "${t}" && echo "${t}") return } Usage() { cat <&2 error "$@" exit 2 } mbr_restore() { sfdisk --no-reread "${DISK}" ${MBR_CHS} -I "${MBR_BACKUP}" } sfdisk_worked_but_blkrrpart_failed() { local ret="$1" output="$2" # exit code found was just 1, but dont insist on that #[ $ret -eq 1 ] || return 1 # Successfully wrote the new partition table grep -qi "Success.* wrote.* new.* partition" "$output" && grep -qi "BLKRRPART: Device or resource busy" "$output" return } mbr_resize() { RESTORE_HUMAN="${TEMP_D}/recovery" MBR_BACKUP="${TEMP_D}/orig.save" local change_out=${TEMP_D}/change.out local dump_out=${TEMP_D}/dump.out local new_out=${TEMP_D}/new.out local dump_mod=${TEMP_D}/dump.mod local tmp="${TEMP_D}/tmp.out" local err="${TEMP_D}/err.out" local _devc cyl _w1 heads _w2 sectors _w3 tot dpart local pt_start pt_size pt_end max_end new_size change_info # --show-pt-geometry outputs something like # /dev/sda: 164352 cylinders, 4 heads, 32 sectors/track rqe sfd_geom sfdisk "${DISK}" --show-pt-geometry >"${tmp}" && read _devc cyl _w1 heads _w2 sectors _w3 <"${tmp}" && MBR_CHS="-C ${cyl} -H ${heads} -S ${sectors}" || fail "failed to get CHS from ${DISK}" tot=$((${cyl}*${heads}*${sectors})) debug 1 "geometry is ${MBR_CHS}. total size=${tot}" rqe sfd_dump sfdisk ${MBR_CHS} --unit=S --dump "${DISK}" \ >"${dump_out}" || fail "failed to dump sfdisk info for ${DISK}" { echo "## sfdisk ${MBR_CHS} --unit=S --dump ${DISK}" cat "${dump_out}" } >"${RESTORE_HUMAN}" [ $? 
-eq 0 ] || fail "failed to save sfdisk -d output" debugcat 1 "${RESTORE_HUMAN}" sed -e 's/,//g; s/start=/start /; s/size=/size /' "${dump_out}" \ >"${dump_mod}" || fail "sed failed on dump output" dpart="${DISK}${PART}" # disk and partition number if [ -b "${DISK}p${PART}" -a "${DISK%[0-9]}" != "${DISK}" ]; then # for block devices that end in a number (/dev/nbd0) # the partition is "p" (/dev/nbd0p1) dpart="${DISK}p${PART}" elif [ "${DISK#/dev/loop[0-9]}" != "${DISK}" ]; then # for /dev/loop devices, sfdisk output will be p # format also, even though there is not a device there. dpart="${DISK}p${PART}" fi pt_start=$(awk '$1 == pt { print $4 }' "pt=${dpart}" <"${dump_mod}") && pt_size=$(awk '$1 == pt { print $6 }' "pt=${dpart}" <"${dump_mod}") && [ -n "${pt_start}" -a -n "${pt_size}" ] && pt_end=$((${pt_size}+${pt_start})) || fail "failed to get start and end for ${dpart} in ${DISK}" # find the minimal starting location that is >= pt_end max_end=$(awk '$3 == "start" { if($4 >= pt_end && $4 < min) { min = $4 } } END { printf("%s\n",min); }' \ min=${tot} pt_end=${pt_end} "${dump_mod}") && [ -n "${max_end}" ] || fail "failed to get max_end for partition ${PART}" debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}" \ "pt_start=${pt_start} pt_size=${pt_size}" [ $((${pt_end})) -eq ${max_end} ] && nochange "partition ${PART} is size ${pt_size}. 
it cannot be grown" [ $((${pt_end}+${FUDGE})) -gt ${max_end} ] && nochange "partition ${PART} could only be grown by" \ "$((${max_end}-${pt_end})) [fudge=${FUDGE}]" # now, change the size for this partition in ${dump_out} to be the # new size new_size=$((${max_end}-${pt_start})) sed "\|^\s*${dpart} |s/${pt_size},/${new_size},/" "${dump_out}" \ >"${new_out}" || fail "failed to change size in output" change_info="partition=${PART} start=${pt_start} old: size=${pt_size} end=${pt_end} new: size=${new_size},end=${max_end}" if [ ${DRY_RUN} -ne 0 ]; then echo "CHANGE: ${change_info}" { echo "# === old sfdisk -d ===" cat "${dump_out}" echo "# === new sfdisk -d ===" cat "${new_out}" } 1>&2 exit 0 fi LANG=C sfdisk --no-reread "${DISK}" ${MBR_CHS} --force \ -O "${MBR_BACKUP}" <"${new_out}" >"${change_out}" 2>&1 ret=$? [ $ret -eq 0 ] || RESTORE_FUNC="mbr_restore" if [ $ret -eq 0 ]; then : elif $PT_UPDATE && sfdisk_worked_but_blkrrpart_failed "$ret" "${change_out}"; then # if the command failed, but it looks like only because # the device was busy and we have pt_update, then go on debug 1 "sfdisk failed, but likely only because of blkrrpart" else error "attempt to resize ${DISK} failed. 
sfdisk output below:" sed 's,^,| ,' "${change_out}" 1>&2 fail "failed to resize" fi rq pt_update pt_update "$DISK" "$PART" || fail "pt_resize failed" RESTORE_FUNC="" changed "${change_info}" # dump_out looks something like: ## partition table of /tmp/out.img #unit: sectors # #/tmp/out.img1 : start= 1, size= 48194, Id=83 #/tmp/out.img2 : start= 48195, size= 963900, Id=83 #/tmp/out.img3 : start= 1012095, size= 305235, Id=82 #/tmp/out.img4 : start= 1317330, size= 771120, Id= 5 #/tmp/out.img5 : start= 1317331, size= 642599, Id=83 #/tmp/out.img6 : start= 1959931, size= 48194, Id=83 #/tmp/out.img7 : start= 2008126, size= 80324, Id=83 } gpt_restore() { sgdisk -l "${GPT_BACKUP}" "${DISK}" } gpt_resize() { GPT_BACKUP="${TEMP_D}/pt.backup" local pt_info="${TEMP_D}/pt.info" local pt_pretend="${TEMP_D}/pt.pretend" local pt_data="${TEMP_D}/pt.data" local out="${TEMP_D}/out" local dev="disk=${DISK} partition=${PART}" local pt_start pt_end pt_size last pt_max code guid name new_size local old new change_info # Dump the original partition information and details to disk. This is # used in case something goes wrong and human interaction is required # to revert any changes. rqe sgd_info sgdisk "--info=${PART}" --print "${DISK}" >"${pt_info}" || RESTORE_HUMAN="${pt_info}" debug 1 "$dev: original sgdisk info:" debugcat 1 "${pt_info}" # Pretend to move the backup GPT header to the end of the disk and dump # the resulting partition information. We use this info to determine if # we have to resize the partition. 
rqe sgd_pretend sgdisk --pretend --move-second-header \ --print "${DISK}" >"${pt_pretend}" || fail "${dev}: failed to dump pretend sgdisk info" debug 1 "$dev: pretend sgdisk info" debugcat 1 "${pt_pretend}" # Extract the partition data from the pretend dump awk 'found { print } ; $1 == "Number" { found = 1 }' \ "${pt_pretend}" >"${pt_data}" || fail "${dev}: failed to parse pretend sgdisk info" # Get the start and end sectors of the partition to be grown pt_start=$(awk '$1 == '"${PART}"' { print $2 }' "${pt_data}") && [ -n "${pt_start}" ] || fail "${dev}: failed to get start sector" pt_end=$(awk '$1 == '"${PART}"' { print $3 }' "${pt_data}") && [ -n "${pt_end}" ] || fail "${dev}: failed to get end sector" pt_size="$((${pt_end} - ${pt_start}))" # Get the last usable sector last=$(awk '/last usable sector is/ { print $NF }' \ "${pt_pretend}") && [ -n "${last}" ] || fail "${dev}: failed to get last usable sector" # Find the minimal start sector that is >= pt_end pt_max=$(awk '{ if ($2 >= pt_end && $2 < min) { min = $2 } } END \ { print min }' min="${last}" pt_end="${pt_end}" \ "${pt_data}") && [ -n "${pt_max}" ] || fail "${dev}: failed to find max end sector" debug 1 "${dev}: pt_start=${pt_start} pt_end=${pt_end}" \ "pt_size=${pt_size} pt_max=${pt_max} last=${last}" # Check if the partition can be grown [ "${pt_end}" -eq "${pt_max}" ] && nochange "${dev}: size=${pt_size}, it cannot be grown" [ "$((${pt_end} + ${FUDGE}))" -gt "${pt_max}" ] && nochange "${dev}: could only be grown by" \ "$((${pt_max} - ${pt_end})) [fudge=${FUDGE}]" # The partition can be grown if we made it here. Get some more info # about it so we can do it properly. # FIXME: Do we care about the attribute flags? 
code=$(awk '/^Partition GUID code:/ { print $4 }' "${pt_info}") guid=$(awk '/^Partition unique GUID:/ { print $4 }' "${pt_info}") name=$(awk '/^Partition name:/ { gsub(/'"'"'/, "") ; \ if (NF >= 3) print substr($0, index($0, $3)) }' "${pt_info}") [ -n "${code}" -a -n "${guid}" ] || fail "${dev}: failed to parse sgdisk details" debug 1 "${dev}: code=${code} guid=${guid} name='${name}'" # Calculate the new size of the partition new_size=$((${pt_max} - ${pt_start})) old="old: size=${pt_size},end=${pt_end}" new="new: size=${new_size},end=${pt_max}" change_info="${dev}: start=${pt_start} ${old} ${new}" # Dry run [ "${DRY_RUN}" -ne 0 ] && change "${change_info}" # Backup the current partition table, we're about to modify it rq sgd_backup sgdisk "--backup=${GPT_BACKUP}" "${DISK}" || fail "${dev}: failed to backup the partition table" # Modify the partition table. We do it all in one go (the order is # important!): # - move the GPT backup header to the end of the disk # - delete the partition # - recreate the partition with the new size # - set the partition code # - set the partition GUID # - set the partition name rq sgdisk_mod sgdisk --move-second-header "--delete=${PART}" \ "--new=${PART}:${pt_start}:${pt_max}" \ "--typecode=${PART}:${code}" \ "--partition-guid=${PART}:${guid}" \ "--change-name=${PART}:${name}" "${DISK}" && rq pt_update pt_update "$DISK" "$PART" || { RESTORE_FUNC=gpt_restore fail "${dev}: failed to repartition" } changed "${change_info}" } kver_to_num() { local kver="$1" maj="" min="" mic="0" kver=${kver%%-*} maj=${kver%%.*} min=${kver#${maj}.} min=${min%%.*} mic=${kver#${maj}.${min}.} [ "$kver" = "$mic" ] && mic=0 _RET=$(($maj*1000*1000+$min*1000+$mic)) } kver_cmp() { local op="$2" n1="" n2="" kver_to_num "$1" n1="$_RET" kver_to_num "$3" n2="$_RET" [ $n1 $op $n2 ] } rq() { # runquieterror(label, command) # gobble stderr of a command unless it errors local label="$1" ret="" efile="" efile="$TEMP_D/$label.err" shift; debug 2 "running[$label][$_capture]" 
"$@" if [ "${_capture}" = "erronly" ]; then "$@" 2>"$TEMP_D/$label.err" ret=$? else "$@" >"$TEMP_D/$label.err" 2>&1 ret=$? fi if [ $ret -ne 0 ]; then error "failed [$label:$ret]" "$@" cat "$efile" 1>&2 fi return $ret } rqe() { local _capture="erronly" rq "$@" } verify_ptupdate() { local input="$1" found="" reason="" kver="" # we can always satisfy 'off' if [ "$input" = "off" ]; then _RET="false"; return 0; fi if command -v partx >/dev/null 2>&1; then partx --help | grep -q -- --update || { reason="partx has no '--update' flag in usage." found="off" } else reason="no 'partx' command" found="off" fi if [ -z "$found" ]; then if [ "$(uname)" != "Linux" ]; then reason="Kernel is not Linux per uname." found="off" fi fi if [ -z "$found" ]; then kver=$(uname -r) || debug 1 "uname -r failed!" if ! kver_cmp "${kver-0.0.0}" -ge 3.8.0; then reason="Kernel '$kver' < 3.8.0." found="off" fi fi if [ -z "$found" ]; then _RET="true" return 0 fi case "$input" in on) error "$reason"; return 1;; auto) _RET="false"; debug 1 "partition update disabled: $reason" return 0;; force) _RET="true" error "WARNING: ptupdate forced on even though: $reason" return 0;; esac error "unknown input '$input'"; return 1; } pt_update() { local dev="$1" part="$2" update="${3:-$PT_UPDATE}" if ! 
$update; then return 0 fi partx --update "$part" "$dev" } has_cmd() { command -v "${1}" >/dev/null 2>&1 } pt_update="auto" while [ $# -ne 0 ]; do cur=${1} next=${2} case "$cur" in -h|--help) Usage exit 0 ;; --fudge) FUDGE=${next} shift ;; -N|--dry-run) DRY_RUN=1 ;; -u|--update|--update=*) if [ "${cur#--update=}" != "$cur" ]; then next="${cur#--update=}" else shift fi case "$next" in off|auto|force|on) pt_update=$next;; *) fail "unknown --update option: $next";; esac ;; -v|--verbose) VERBOSITY=$(($VERBOSITY+1)) ;; --) shift break ;; -*) fail "unknown option ${cur}" ;; *) if [ -z "${DISK}" ]; then DISK=${cur} else [ -z "${PART}" ] || fail "confused by arg ${cur}" PART=${cur} fi ;; esac shift done [ -n "${DISK}" ] || bad_Usage "must supply disk and partition-number" [ -n "${PART}" ] || bad_Usage "must supply partition-number" has_cmd "sfdisk" || fail "sfdisk not found" [ -e "${DISK}" ] || fail "${DISK}: does not exist" [ "${PART#*[!0-9]}" = "${PART}" ] || fail "partition-number must be a number" verify_ptupdate "$pt_update" || fail PT_UPDATE=$_RET debug 1 "update-partition set to $PT_UPDATE" mktemp_d && TEMP_D="${_RET}" || fail "failed to make temp dir" trap cleanup EXIT # get the ID of the first partition to determine if it's MBR or GPT id=$(sfdisk --id --force "${DISK}" 1 2>/dev/null) || fail "unable to determine partition type" if [ "${id}" = "ee" ] ; then has_cmd "sgdisk" || fail "GPT partition found but no sgdisk" debug 1 "found GPT partition table (id = ${id})" gpt_resize else debug 1 "found MBR partition table (id = ${id})" mbr_resize fi # vi: ts=4 noexpandtab bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/certs/000077500000000000000000000000001323112141500253505ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/certs/cert-ec2.pem000066400000000000000000000025431323112141500274630ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDzjCCAzegAwIBAgIJALDnZV+lpZdSMA0GCSqGSIb3DQEBBQUAMIGhMQswCQYD 
VQQGEwJaQTEVMBMGA1UECBMMV2VzdGVybiBDYXBlMRIwEAYDVQQHEwlDYXBlIFRv d24xJzAlBgNVBAoTHkFtYXpvbiBEZXZlbG9wbWVudCBDZW50cmUgKFNBKTEMMAoG A1UECxMDQUVTMREwDwYDVQQDEwhBRVMgVGVzdDEdMBsGCSqGSIb3DQEJARYOYWVz QGFtYXpvbi5jb20wHhcNMDUwODA5MTYwMTA5WhcNMDYwODA5MTYwMTA5WjCBoTEL MAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2Fw ZSBUb3duMScwJQYDVQQKEx5BbWF6b24gRGV2ZWxvcG1lbnQgQ2VudHJlIChTQSkx DDAKBgNVBAsTA0FFUzERMA8GA1UEAxMIQUVTIFRlc3QxHTAbBgkqhkiG9w0BCQEW DmFlc0BhbWF6b24uY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8v/X5 zZv8CAVfNmvBM0br/RUcf1wU8xC5d2otFQQsQKB3qiWoj3oHeOWskOlTPFVZ8N+/ hEaMjyOUkg2+g6XEagCQtFCEBzUVoMjiQIBPiWj5CWkFtlav2zt33LZ0ErTND4xl j7FQFqbaytHU9xuQcFO2p12bdITiBs5Kwoi9bQIDAQABo4IBCjCCAQYwHQYDVR0O BBYEFPQnsX1kDVzPtX+38ACV8RhoYcw8MIHWBgNVHSMEgc4wgcuAFPQnsX1kDVzP tX+38ACV8RhoYcw8oYGnpIGkMIGhMQswCQYDVQQGEwJaQTEVMBMGA1UECBMMV2Vz dGVybiBDYXBlMRIwEAYDVQQHEwlDYXBlIFRvd24xJzAlBgNVBAoTHkFtYXpvbiBE ZXZlbG9wbWVudCBDZW50cmUgKFNBKTEMMAoGA1UECxMDQUVTMREwDwYDVQQDEwhB RVMgVGVzdDEdMBsGCSqGSIb3DQEJARYOYWVzQGFtYXpvbi5jb22CCQCw52VfpaWX UjAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBAJJlWll4uGlrqBzeIw7u M3RvomlxMESwGKb9gI+ZeORlnHAyZxvd9XngIcjPuU+8uc3wc10LRQUCn45a5hFs zaCp9BSewLCCirn6awZn2tP8JlagSbjrN9YShStt8S3S/Jj+eBoRvc7jJnmEeMkx O0wHOzp5ZHRDK7tGULD6jCfU -----END CERTIFICATE----- bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/ec2/000077500000000000000000000000001323112141500247015ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/ec2/53-ec2-network-interfaces.rules000066400000000000000000000015461323112141500324710ustar00rootroot00000000000000# Copyright (C) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. # A copy of the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. 
This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS # OF ANY KIND, either express or implied. See the License for the # specific language governing permissions and limitations under the # License. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", IMPORT{program}="/bin/sleep 1" SUBSYSTEM=="net", RUN+="/etc/sysconfig/network-scripts/ec2net.hotplug" bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/ec2/ec2dhcp.sh000066400000000000000000000016551323112141500265540ustar00rootroot00000000000000#!/bin/bash # Copyright (C) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. # A copy of the License is located at # # http://aws.amazon.com/apache2.0/ # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS # OF ANY KIND, either express or implied. See the License for the # specific language governing permissions and limitations under the # License. # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. INTERFACE="${interface}" PREFIX="${new_prefix}" . /etc/sysconfig/network-scripts/ec2net-functions ec2dhcp_config() { rewrite_rules rewrite_aliases } ec2dhcp_restore() { remove_aliases remove_rules } bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/ec2/ec2net-functions000066400000000000000000000145021323112141500300140ustar00rootroot00000000000000# -*-Shell-script-*- # Copyright (C) 2012 Amazon.com, Inc. or its affiliates. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). 
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the
# License.

# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.

# This file is not a stand-alone shell script; it provides functions
# to ec2 network scripts that source it.

# Set up a default search path.
PATH="/sbin:/usr/sbin:/bin:/usr/bin"
export PATH

# metadata query requires an interface and hardware address
if [ -z "${INTERFACE}" ]; then
  exit
fi
HWADDR=$(cat /sys/class/net/${INTERFACE}/address 2>/dev/null)
if [ -z "${HWADDR}" ] && [ "${ACTION}" != "remove" ]; then
  exit
fi
export HWADDR

# generate a routing table number
RTABLE=${INTERFACE#eth}
let RTABLE+=10000

metadata_base="http://169.254.169.254/latest/meta-data/network/interfaces/macs"
config_file="/etc/sysconfig/network-scripts/ifcfg-${INTERFACE}"
route_file="/etc/sysconfig/network-scripts/route-${INTERFACE}"
dhclient_file="/etc/dhcp/dhclient-${INTERFACE}.conf"

# make no changes to unmanaged interfaces
if [ -s ${config_file} ]; then
  unmanaged=$(LANG=C grep -l "^[[:space:]]*EC2SYNC=no\([[:space:]#]\|$\)" $config_file)
  if [ "${config_file}" == "${unmanaged}" ]; then
    exit
  fi
fi

get_meta() {
  attempts=10
  false
  while [ "${?}" -gt 0 ]; do
    [ "${attempts}" -eq 0 ] && return
    meta=$(curl -s -f ${metadata_base}/${HWADDR}/${1})
    if [ "${?}" -gt 0 ]; then
      let attempts--
      sleep 3
      false
    fi
  done
  echo "${meta}"
}

get_cidr() {
  cidr=$(get_meta 'subnet-ipv4-cidr-block')
  echo "${cidr}"
}

get_ipv4s() {
  ipv4s=$(get_meta 'local-ipv4s')
  echo "${ipv4s}"
}

get_primary_ipv4() {
  ipv4s=($(get_ipv4s))
  echo "${ipv4s[0]}"
}

get_secondary_ipv4s() {
  ipv4s=($(get_ipv4s))
  echo "${ipv4s[@]:1}"
}

remove_primary() {
  if [ "${INTERFACE}" == "eth0" -o "${INTERFACE}" == "lo" ]; then
    return
  fi
  rm -f ${config_file}
  rm -f ${route_file}
  rm -f ${dhclient_file}
}

rewrite_primary() {
  if [ "${INTERFACE}" == "eth0" -o "${INTERFACE}" == "lo" ]; then
    return
  fi
  cidr=$(get_cidr)
  if [ -z ${cidr} ]; then
    return
  fi
  network=$(echo ${cidr}|cut -d/ -f1)
  router=$(( $(echo ${network}|cut -d. -f4) + 1))
  gateway="$(echo ${network}|cut -d. -f1-3).${router}"
  cat <<- EOF > ${config_file}
	DEVICE=${INTERFACE}
	BOOTPROTO=dhcp
	ONBOOT=yes
	TYPE=Ethernet
	USERCTL=yes
	PEERDNS=no
	IPV6INIT=no
	PERSISTENT_DHCLIENT=yes
	HWADDR=${HWADDR}
	DEFROUTE=no
	EC2SYNC=yes
	EOF
  cat <<- EOF > ${route_file}
	default via ${gateway} dev ${INTERFACE} table ${RTABLE}
	default via ${gateway} dev ${INTERFACE} metric ${RTABLE}
	EOF
  # Use broadcast address instead of unicast dhcp server address.
  # Works around an issue with two interfaces on the same subnet.
  # Unicast lease requests go out the first available interface,
  # and dhclient ignores the response. Broadcast requests go out
  # the expected interface, and dhclient accepts the lease offer.
  cat <<- EOF > ${dhclient_file}
	supersede dhcp-server-identifier 255.255.255.255;
	EOF
}

remove_aliases() {
  /sbin/ip addr flush dev ${INTERFACE} secondary
}

rewrite_aliases() {
  aliases=$(get_secondary_ipv4s)
  if [ ${#aliases[*]} -eq 0 ]; then
    remove_aliases
    return
  fi
  # The network prefix can be provided in the environment by
  # e.g. DHCP, but if it's not available then we need it to
  # correctly configure secondary addresses.
  if [ -z "${PREFIX}" ]; then
    cidr=$(get_cidr)
    PREFIX=$(echo ${cidr}|cut -d/ -f2)
  fi
  [ -n "${PREFIX##*[!0-9]*}" ] || return
  # Retrieve a list of secondary IP addresses on the interface.
  # Treat this as the stale list. For each IP address obtained
  # from metadata, cross it off the stale list if present, or
  # add it to the interface otherwise. Then, remove any address
  # remaining in the stale list.
  declare -A secondaries
  for secondary in $(/sbin/ip addr list dev ${INTERFACE} secondary \
                     |grep "inet .* secondary ${INTERFACE}" \
                     |awk '{print $2}'|cut -d/ -f1); do
    secondaries[${secondary}]=1
  done
  for alias in ${aliases}; do
    if [[ ${secondaries[${alias}]} ]]; then
      unset secondaries[${alias}]
    else
      /sbin/ip addr add ${alias}/${PREFIX} brd + dev ${INTERFACE}
    fi
  done
  for secondary in "${!secondaries[@]}"; do
    /sbin/ip addr del ${secondary}/${PREFIX} dev ${INTERFACE}
  done
}

remove_rules() {
  if [ "${INTERFACE}" == "eth0" ]; then
    return
  fi
  for rule in $(/sbin/ip rule list \
                |grep "from .* lookup ${RTABLE}" \
                |awk -F: '{print $1}'); do
    /sbin/ip rule delete pref "${rule}"
  done
}

rewrite_rules() {
  if [ "${INTERFACE}" == "eth0" ]; then
    return
  fi
  ips=($(get_ipv4s))
  if [ ${#ips[*]} -eq 0 ]; then
    remove_rules
    return
  fi
  # Retrieve a list of IP rules for the route table that belongs
  # to this interface. Treat this as the stale list. For each IP
  # address obtained from metadata, cross the corresponding rule
  # off the stale list if present. Otherwise, add a rule sending
  # outbound traffic from that IP to the interface route table.
  # Then, remove all other rules found in the stale list.
  declare -A rules
  for rule in $(/sbin/ip rule list \
                |grep "from .* lookup ${RTABLE}" \
                |awk '{print $1$3}'); do
    split=(${rule//:/ })
    rules[${split[1]}]=${split[0]}
  done
  for ip in ${ips[@]}; do
    if [[ ${rules[${ip}]} ]]; then
      unset rules[${ip}]
    else
      /sbin/ip rule add from ${ip} lookup ${RTABLE}
    fi
  done
  for rule in "${!rules[@]}"; do
    /sbin/ip rule delete pref "${rules[${rule}]}"
  done
}

plug_interface() {
  rewrite_primary
}

unplug_interface() {
  remove_rules
  remove_aliases
}

activate_primary() {
  /sbin/ifup ${INTERFACE}
}

deactivate_primary() {
  /sbin/ifdown ${INTERFACE}
}
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/ec2/ec2net.hotplug000066400000000000000000000022331323112141500274650ustar00rootroot00000000000000
#!/bin/bash
# Copyright (C) 2012 Amazon.com, Inc. or its affiliates.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the
# License.

# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.

# During init and before the network service is started, metadata is not
# available. Exit without attempting to configure the elastic interface.
if [ `/sbin/runlevel | /usr/bin/cut -d\  -f2` -ne 5 ]; then
  exit
fi
if [ -f /dev/.in_sysinit ]; then
  exit
fi

. /etc/sysconfig/network-scripts/ec2net-functions

case $ACTION in
  add)
    plug_interface
    activate_primary
    ;;
  remove)
    deactivate_primary
    unplug_interface
    ;;
esac
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/grub.d/000077500000000000000000000000001323112141500254115ustar00rootroot00000000000000
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/grub.d/40_custom000066400000000000000000000041211323112141500271470ustar00rootroot00000000000000
#!/bin/sh
# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.

# This file generates the old menu.lst configuration with grub2
# It was copied from tomheadys github repo:
# https://github.com/tomheady/ec2debian/blob/master/src/root/etc/grub.d/40_custom

prefix=/usr
exec_prefix=${prefix}
bindir=${exec_prefix}/bin
libdir=${exec_prefix}/lib

. ${libdir}/grub/grub-mkconfig_lib

export TEXTDOMAIN=grub
export TEXTDOMAINDIR=${prefix}/share/locale

GRUB_DEVICE=/dev/xvda

cat << EOF
default ${GRUB_DEFAULT}
timeout ${GRUB_TIMEOUT}
EOF

if ${GRUB_HIDDEN_TIMEOUT:-false}; then
  printf "hiddenmenu\n"
fi

linux_entry () {
  os="$1"
  version="$2"
  args="$3"
  title="$(gettext_quoted "%s, with Linux %s")"
  cat << EOF
title ${version}
	root (hd0)
	kernel ${rel_dirname}/${basename} root=${GRUB_DEVICE} ro ${args}
	initrd ${rel_dirname}/${initrd}
EOF
}

list=`for i in /boot/vmlinuz-* /boot/vmlinux-* /vmlinuz-* /vmlinux-* ; do
        if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi
      done`

prepare_boot_cache=
while [ "x$list" != "x" ] ; do
  linux=`version_find_latest $list`
  basename=`basename $linux`
  dirname=`dirname $linux`
  rel_dirname=`make_system_path_relative_to_its_root $dirname`
  version=`echo $basename | sed -e "s,^[^0-9]*-,,g"`
  alt_version=`echo $version | sed -e "s,\.old$,,g"`
  linux_root_device_thisversion="${LINUX_ROOT_DEVICE}"

  initrd=
  for i in "initrd.img-${version}" "initrd-${version}.img" \
           "initrd-${version}" "initramfs-${version}.img" \
           "initrd.img-${alt_version}" "initrd-${alt_version}.img" \
           "initrd-${alt_version}" "initramfs-${alt_version}.img"; do
    if test -e "${dirname}/${i}" ; then
      initrd="$i"
      break
    fi
  done

  initramfs=
  for i in "config-${version}" "config-${alt_version}"; do
    if test -e "${dirname}/${i}" ; then
      initramfs=`grep CONFIG_INITRAMFS_SOURCE= "${dirname}/${i}" | cut -f2 -d= | tr -d \"`
      break
    fi
  done

  linux_entry "${OS}" "${version}" \
    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}"

  list=`echo $list | tr ' ' '\n' | grep -vx $linux | tr '\n' ' '`
done
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/init.d/000077500000000000000000000000001323112141500254155ustar00rootroot00000000000000
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/init.d/ec2-get-credentials000066400000000000000000000031311323112141500310570ustar00rootroot00000000000000
#!/bin/bash
### BEGIN INIT INFO
# Provides:          ec2-get-credentials
# Required-Start:    $network
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Description-Short: Retrieve the ssh credentials and add to authorized_keys
# Description:       Retrieve the ssh credentials and add to authorized_keys.
# This file was created by bootstrap-vz.
# See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for
# legal notices and disclaimers.
### END INIT INFO
#
# ec2-get-credentials - Retrieve the ssh credentials and add to authorized_keys
#
# Based on /usr/local/sbin/ec2-get-credentials from Amazon's ami-20b65349
#
prog=$(basename $0)
logger="logger -t $prog"

public_key_url=http://169.254.169.254/1.0/meta-data/public-keys/0/openssh-key
username='root'
# A little bit of nastyness to get the homedir, when the username is a variable
ssh_dir="`eval printf ~$username`/.ssh"
authorized_keys="$ssh_dir/authorized_keys"

# Try to get the ssh public key from instance data.
public_key=`wget -qO - $public_key_url`
if [ -n "$public_key" ]; then
  if [ ! -f $authorized_keys ]; then
    if [ ! -d $ssh_dir ]; then
      mkdir -m 700 $ssh_dir
      chown $username:$username $ssh_dir
    fi
    touch $authorized_keys
    chown $username:$username $authorized_keys
  fi
  if ! grep -s -q "$public_key" $authorized_keys; then
    printf -- "\n%s" "$public_key" >> $authorized_keys
    $logger "New ssh key added to $authorized_keys from $public_key_url"
    chmod 600 $authorized_keys
    chown $username:$username $authorized_keys
  fi
fi
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/init.d/ec2-run-user-data000066400000000000000000000032321323112141500304760ustar00rootroot00000000000000
#!/bin/bash
### BEGIN INIT INFO
# Provides:          ec2-run-user-data
# Required-Start:    ec2-get-credentials
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Description-Short: Run instance user-data if it looks like a script
# Description:       Run instance user-data if it looks like a script.
# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. ### END INIT INFO # # Only retrieves and runs the user-data script once per instance. If # you want the user-data script to run again (e.g., on the next boot) # then readd this script with insserv: # insserv -d ec2-run-user-data # prog=$(basename $0) logger="logger -t $prog" instance_data_url="http://169.254.169.254/2008-02-01" # Retrieve the instance user-data and run it if it looks like a script user_data_file=$(tempfile --prefix ec2 --suffix .user-data --mode 700) $logger "Retrieving user-data" wget -qO $user_data_file $instance_data_url/user-data 2>&1 | $logger if [ $(file -b --mime-type $user_data_file) = 'application/x-gzip' ]; then $logger "Uncompressing gzip'd user-data" mv $user_data_file $user_data_file.gz gunzip $user_data_file.gz fi if [ ! -s $user_data_file ]; then $logger "No user-data available" elif head -1 $user_data_file | egrep -v '^#!'; then $logger "Skipping user-data as it does not begin with #!" else $logger "Running user-data" $user_data_file 2>&1 | logger -t "user-data" $logger "user-data exit code: $?" fi rm -f $user_data_file # Disable this script, it may only run once insserv -r $0 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/sysctl.d/000077500000000000000000000000001323112141500257735ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/assets/sysctl.d/tuning.conf000066400000000000000000000011451323112141500301470ustar00rootroot00000000000000# This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
vm.swappiness = 0 vm.dirty_ratio = 80 vm.dirty_background_ratio = 5 vm.dirty_expire_centisecs = 12000 net.core.somaxconn = 1000 net.core.netdev_max_backlog = 5000 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_wmem = 4096 12582912 16777216 net.ipv4.tcp_rmem = 4096 12582912 16777216 net.ipv4.tcp_max_syn_backlog = 8096 net.ipv4.tcp_slow_start_after_idle = 0 net.ipv4.tcp_tw_reuse = 1 net.ipv4.ip_local_port_range = 10240 65535 kernel.sysrq = 0 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/ebsvolume.py000066400000000000000000000051521323112141500253040ustar00rootroot00000000000000from bootstrapvz.base.fs.volume import Volume from bootstrapvz.base.fs.exceptions import VolumeError class EBSVolume(Volume): def create(self, conn, zone): self.fsm.create(connection=conn, zone=zone) def _before_create(self, e): self.conn = e.connection zone = e.zone size = self.size.bytes.get_qty_in('GiB') self.volume = self.conn.create_volume(Size=size, AvailabilityZone=zone, VolumeType='gp2') self.vol_id = self.volume['VolumeId'] waiter = self.conn.get_waiter('volume_available') waiter.wait(VolumeIds=[self.vol_id], Filters=[{'Name': 'status', 'Values': ['available']}]) def attach(self, instance_id): self.fsm.attach(instance_id=instance_id) def _before_attach(self, e): import os.path import string self.instance_id = e.instance_id for letter in string.ascii_lowercase[5:]: dev_path = os.path.join('/dev', 'xvd' + letter) if not os.path.exists(dev_path): self.device_path = dev_path self.ec2_device_path = os.path.join('/dev', 'sd' + letter) break if self.device_path is None: raise VolumeError('Unable to find a free block device path for mounting the bootstrap volume') self.conn.attach_volume(VolumeId=self.vol_id, InstanceId=self.instance_id, Device=self.ec2_device_path) waiter = self.conn.get_waiter('volume_in_use') waiter.wait(VolumeIds=[self.vol_id], Filters=[{'Name': 'attachment.status', 'Values': ['attached']}]) def _before_detach(self, e): 
self.conn.detach_volume(VolumeId=self.vol_id, InstanceId=self.instance_id, Device=self.ec2_device_path) waiter = self.conn.get_waiter('volume_available') waiter.wait(VolumeIds=[self.vol_id], Filters=[{'Name': 'status', 'Values': ['available']}]) del self.ec2_device_path self.device_path = None def _before_delete(self, e): self.conn.delete_volume(VolumeId=self.vol_id) def snapshot(self): snapshot = self.conn.create_snapshot(VolumeId=self.vol_id) self.snap_id = snapshot['SnapshotId'] waiter = self.conn.get_waiter('snapshot_completed') waiter.wait(SnapshotIds=[self.snap_id], Filters=[{'Name': 'status', 'Values': ['completed']}]) return self.snap_id bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/manifest-schema-s3.yml000066400000000000000000000014571323112141500270470ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: EC2 manifest for instance store AMIs type: object properties: provider: type: object properties: credentials: type: object properties: certificate: {type: string} private-key: {type: string} user-id: type: string pattern: (^arn:aws:iam::\d*:user/\w.*$)|(^\d{4}-\d{4}-\d{4}$) bucket: {type: string} region: {$ref: '#/definitions/aws-region'} required: - bucket - region definitions: aws-region: enum: - ap-northeast-1 - ap-northeast-2 - ap-southeast-1 - ap-southeast-2 - ca-central-1 - eu-central-1 - eu-west-1 - sa-east-1 - us-east-1 - us-gov-west-1 - us-west-1 - us-west-2 - cn-north-1 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/manifest-schema.yml000066400000000000000000000017261323112141500265230ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: EC2 manifest type: object properties: tags: type: object minProperties: 1 provider: type: object properties: description: {type: string} credentials: type: object properties: access-key: {type: string} secret-key: {type: string} virtualization: enum: - pvm - hvm enhanced_networking: enum: - none - simple required: 
[description, virtualization] system: type: object properties: bootloader: type: string enum: - pvgrub - grub - extlinux volume: type: object properties: backing: enum: - ebs - s3 partitions: type: object properties: type: enum: - none - msdos - gpt required: [backing] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/000077500000000000000000000000001323112141500240535ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/__init__.py000066400000000000000000000001301323112141500261560ustar00rootroot00000000000000from bootstrapvz.common.tools import rel_path assets = rel_path(__file__, '../assets') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/ami-akis.yml000066400000000000000000000015221323112141500262710ustar00rootroot00000000000000--- # This is a mapping of EC2 regions to processor architectures to Amazon Kernel Images # Source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedKernels.html#AmazonKernelImageIDs ap-northeast-1: amd64: aki-176bf516 i386: aki-136bf512 ap-southeast-1: amd64: aki-503e7402 i386: aki-ae3973fc ap-southeast-2: amd64: aki-c362fff9 i386: aki-cd62fff7 eu-west-1: amd64: aki-52a34525 i386: aki-68a3451f eu-central-1: i386: aki-3e4c7a23 amd64: aki-184c7a05 sa-east-1: amd64: aki-5553f448 i386: aki-5b53f446 us-east-1: amd64: aki-919dcaf8 i386: aki-8f9dcae6 us-gov-west-1: amd64: aki-1de98d3e i386: aki-1fe98d3c us-west-1: amd64: aki-880531cd i386: aki-8e0531cb us-west-2: amd64: aki-fc8f11cc i386: aki-f08f11c0 cn-north-1: amd64: aki-9e8f1da7 i386: aki-908f1da9 ca-central-1: amd64: aki-320ebd56 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/ami.py000077500000000000000000000137201323112141500252010ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.exceptions import TaskError from bootstrapvz.common.tools import log_check_call, rel_path from ebs import Snapshot from 
bootstrapvz.common.tasks import workspace import connection from . import assets import os.path cert_ec2 = os.path.join(assets, 'certs/cert-ec2.pem') class AMIName(Task): description = 'Determining the AMI name' phase = phases.preparation predecessors = [connection.SilenceBotoDebug, connection.Connect] @classmethod def run(cls, info): ami_name = info.manifest.name.format(**info.manifest_vars) ami_description = info.manifest.provider['description'].format(**info.manifest_vars) images = info._ec2['connection'].describe_images(Owners=['self'])['Images'] for image in images: if 'Name' in image and ami_name == image['Name']: msg = 'An image by the name {ami_name} already exists.'.format(ami_name=ami_name) raise TaskError(msg) info._ec2['ami_name'] = ami_name info._ec2['ami_description'] = ami_description class BundleImage(Task): description = 'Bundling the image' phase = phases.image_registration @classmethod def run(cls, info): bundle_name = 'bundle-' + info.run_id info._ec2['bundle_path'] = os.path.join(info.workspace, bundle_name) arch = {'i386': 'i386', 'amd64': 'x86_64'}.get(info.manifest.system['architecture']) log_check_call(['euca-bundle-image', '--image', info.volume.image_path, '--arch', arch, '--user', info.credentials['user-id'], '--privatekey', info.credentials['private-key'], '--cert', info.credentials['certificate'], '--ec2cert', cert_ec2, '--destination', info._ec2['bundle_path'], '--prefix', info._ec2['ami_name']]) class UploadImage(Task): description = 'Uploading the image bundle' phase = phases.image_registration predecessors = [BundleImage] @classmethod def run(cls, info): manifest_file = os.path.join(info._ec2['bundle_path'], info._ec2['ami_name'] + '.manifest.xml') if info._ec2['region'] == 'us-east-1': s3_url = 'https://s3.amazonaws.com/' elif info._ec2['region'] == 'cn-north-1': s3_url = 'https://s3.cn-north-1.amazonaws.com.cn' else: s3_url = 'https://s3-{region}.amazonaws.com/'.format(region=info._ec2['region']) info._ec2['manifest_location'] = 
info.manifest.provider['bucket'] + '/' + info._ec2['ami_name'] + '.manifest.xml' log_check_call(['euca-upload-bundle', '--bucket', info.manifest.provider['bucket'], '--manifest', manifest_file, '--access-key', info.credentials['access-key'], '--secret-key', info.credentials['secret-key'], '--url', s3_url, '--region', info._ec2['region']]) class RemoveBundle(Task): description = 'Removing the bundle files' phase = phases.cleaning successors = [workspace.DeleteWorkspace] @classmethod def run(cls, info): from shutil import rmtree rmtree(info._ec2['bundle_path']) del info._ec2['bundle_path'] class RegisterAMI(Task): description = 'Registering the image as an AMI' phase = phases.image_registration predecessors = [Snapshot, UploadImage] @classmethod def run(cls, info): registration_params = {'Name': info._ec2['ami_name'], 'Description': info._ec2['ami_description']} registration_params['Architecture'] = {'i386': 'i386', 'amd64': 'x86_64'}.get(info.manifest.system['architecture']) if info.manifest.volume['backing'] == 's3': registration_params['ImageLocation'] = info._ec2['manifest_location'] else: root_dev_name = {'pvm': '/dev/sda', 'hvm': '/dev/xvda'}.get(info.manifest.provider['virtualization']) registration_params['RootDeviceName'] = root_dev_name block_device = [{'DeviceName': root_dev_name, 'Ebs': { 'SnapshotId': info._ec2['snapshot'], 'VolumeSize': info.volume.size.bytes.get_qty_in('GiB'), 'VolumeType': 'gp2', 'DeleteOnTermination': True}}] registration_params['BlockDeviceMappings'] = block_device if info.manifest.provider['virtualization'] == 'hvm': registration_params['VirtualizationType'] = 'hvm' else: registration_params['VirtualizationType'] = 'paravirtual' akis_path = rel_path(__file__, 'ami-akis.yml') from bootstrapvz.common.tools import config_get registration_params['kernel_id'] = config_get(akis_path, [info._ec2['region'], info.manifest.system['architecture']]) if info.manifest.provider.get('enhanced_networking', None) == 'simple': 
registration_params['SriovNetSupport'] = 'simple' registration_params['EnaSupport'] = True info._ec2['image'] = info._ec2['connection'].register_image(**registration_params) # Setting up tags on the AMI if 'tags' in info.manifest.data: raw_tags = info.manifest.data['tags'] formatted_tags = {k: v.format(**info.manifest_vars) for k, v in raw_tags.items()} tags = [{'Key': k, 'Value': v} for k, v in formatted_tags.items()] info._ec2['connection'].create_tags(Resources=[info._ec2['image']['ImageId']], Tags=tags) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/boot.py000066400000000000000000000054521323112141500253760ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import grub from . import assets import os from bootstrapvz.common.tools import log_check_call class AddXenGrubConsoleOutputDevice(Task): description = 'Adding XEN `hvc0\' as output device for grub' phase = phases.system_modification successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX_DEFAULT'].append('console=hvc0') class UpdateGrubConfig(Task): description = 'Updating the grub config' phase = phases.system_modification successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'update-grub']) class CreatePVGrubCustomRule(Task): description = 'Creating special rule for PVGrub' phase = phases.system_modification successors = [UpdateGrubConfig] @classmethod def run(cls, info): import stat x_all = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH grubd = os.path.join(info.root, 'etc/grub.d') for cfg in [os.path.join(grubd, f) for f in os.listdir(grubd)]: os.chmod(cfg, os.stat(cfg).st_mode & ~ x_all) from shutil import copy script_src = os.path.join(assets, 'grub.d/40_custom') script_dst = os.path.join(info.root, 'etc/grub.d/40_custom') copy(script_src, script_dst) os.chmod(script_dst, 0755) from 
bootstrapvz.base.fs.partitionmaps.none import NoPartitions if not isinstance(info.volume.partition_map, NoPartitions): from bootstrapvz.common.tools import sed_i root_idx = info.volume.partition_map.root.get_index() grub_device = 'GRUB_DEVICE=/dev/xvda' + str(root_idx) sed_i(script_dst, '^GRUB_DEVICE=/dev/xvda$', grub_device) grub_root = '\troot (hd0,{idx})'.format(idx=root_idx - 1) sed_i(script_dst, '^\troot \(hd0\)$', grub_root) if info.manifest.volume['backing'] == 's3': from bootstrapvz.common.tools import sed_i sed_i(script_dst, '^GRUB_DEVICE=/dev/xvda$', 'GRUB_DEVICE=/dev/xvda1') class ConfigurePVGrub(Task): description = 'Configuring PVGrub' phase = phases.system_modification successors = [UpdateGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX'].extend([ 'consoleblank=0', 'elevator=noop', ]) class LinkGrubConfig(Task): description = 'Linking the grub config to /boot/grub/menu.lst' phase = phases.system_modification predecessors = [UpdateGrubConfig] @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'ln', '--symbolic', '/boot/grub/grub.cfg', '/boot/grub/menu.lst']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/connection.py000066400000000000000000000055761323112141500266010ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import host class SilenceBotoDebug(Task): description = 'Silence boto debug logging' phase = phases.preparation @classmethod def run(cls, info): # Regardless of of loglevel, we don't want boto debug stuff, it's very noisy import logging logging.getLogger('boto').setLevel(logging.INFO) class GetCredentials(Task): description = 'Getting AWS credentials' phase = phases.preparation successors = [SilenceBotoDebug] @classmethod def run(cls, info): keys = ['access-key', 'secret-key'] if info.manifest.volume['backing'] == 's3': keys.extend(['certificate', 'private-key', 'user-id']) info.credentials = 
cls.get_credentials(info.manifest, keys) @classmethod def get_credentials(cls, manifest, keys): from os import getenv creds = {} if 'credentials' in manifest.provider: if all(key in manifest.provider['credentials'] for key in keys): for key in keys: creds[key] = manifest.provider['credentials'][key] return creds def env_key(key): return ('aws-' + key).upper().replace('-', '_') if all(getenv(env_key(key)) is not None for key in keys): for key in keys: creds[key] = getenv(env_key(key)) return creds def provider_key(key): return key.replace('-', '_') import boto.provider provider = boto.provider.Provider('aws') if all(getattr(provider, provider_key(key)) is not None for key in keys): for key in keys: creds[key] = getattr(provider, provider_key(key)) if hasattr(provider, 'security_token'): creds['security-token'] = provider.security_token return creds raise RuntimeError(('No ec2 credentials found, they must all be specified ' 'exclusively via environment variables or through the manifest.')) class Connect(Task): description = 'Connecting to EC2' phase = phases.preparation predecessors = [GetCredentials, host.GetInstanceMetadata, host.SetRegion] @classmethod def run(cls, info): import boto3 connect_args = { 'aws_access_key_id': info.credentials['access-key'], 'aws_secret_access_key': info.credentials['secret-key'] } if 'security-token' in info.credentials: connect_args['security_token'] = info.credentials['security-token'] info._ec2['connection'] = boto3.Session(info._ec2['region'], info.credentials['access-key'], info.credentials['secret-key']) info._ec2['connection'] = boto3.client('ec2', region_name=info._ec2['region']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/ebs.py000066400000000000000000000022601323112141500251760ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class Create(Task): description = 'Creating the EBS volume' phase = phases.volume_creation @classmethod def run(cls, info): 
info.volume.create(info._ec2['connection'], info._ec2['host']['availabilityZone']) class Attach(Task): description = 'Attaching the volume' phase = phases.volume_creation predecessors = [Create] @classmethod def run(cls, info): info.volume.attach(info._ec2['host']['instanceId']) class Snapshot(Task): description = 'Creating a snapshot of the EBS volume' phase = phases.image_registration @classmethod def run(cls, info): info._ec2['snapshot'] = info.volume.snapshot() # Setting up tags on the snapshot if 'tags' in info.manifest.data: raw_tags = info.manifest.data['tags'] formatted_tags = {k: v.format(**info.manifest_vars) for k, v in raw_tags.items()} tags = [{'Key': k, 'Value': v} for k, v in formatted_tags.items()] info._ec2['connection'].create_tags(Resources=[info._ec2['snapshot']], Tags=tags) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/filesystem.py000066400000000000000000000020111323112141500266030ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class S3FStab(Task): description = 'Adding the S3 root partition to the fstab' phase = phases.system_modification @classmethod def run(cls, info): import os.path root = info.volume.partition_map.root fstab_lines = [] mount_opts = ['defaults'] fstab_lines.append('{device_path}{idx} {mountpoint} {filesystem} {mount_opts} {dump} {pass_num}' .format(device_path='/dev/xvda', idx=1, mountpoint='/', filesystem=root.filesystem, mount_opts=','.join(mount_opts), dump='1', pass_num='1')) fstab_path = os.path.join(info.root, 'etc/fstab') with open(fstab_path, 'w') as fstab: fstab.write('\n'.join(fstab_lines)) fstab.write('\n') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/host.py000066400000000000000000000022651323112141500254070ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import host class AddExternalCommands(Task): description = 'Determining required 
external commands for EC2 bootstrapping' phase = phases.validation successors = [host.CheckExternalCommands] @classmethod def run(cls, info): if info.manifest.volume['backing'] == 's3': info.host_dependencies['euca-bundle-image'] = 'euca2ools' info.host_dependencies['euca-upload-bundle'] = 'euca2ools' class GetInstanceMetadata(Task): description = 'Retrieving instance metadata' phase = phases.preparation @classmethod def run(cls, info): import urllib2 import json metadata_url = 'http://169.254.169.254/latest/dynamic/instance-identity/document' response = urllib2.urlopen(url=metadata_url, timeout=5) info._ec2['host'] = json.load(response) info._ec2['region'] = info._ec2['host']['region'] class SetRegion(Task): description = 'Setting the AWS region' phase = phases.preparation @classmethod def run(cls, info): info._ec2['region'] = info.manifest.provider['region'] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/initd.py000066400000000000000000000012521323112141500255340ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import initd from . 
import assets import os.path class AddEC2InitScripts(Task): description = 'Adding EC2 startup scripts' phase = phases.system_modification successors = [initd.InstallInitScripts] @classmethod def run(cls, info): init_scripts = {'ec2-get-credentials': 'ec2-get-credentials', 'ec2-run-user-data': 'ec2-run-user-data'} init_scripts_dir = os.path.join(assets, 'init.d') for name, path in init_scripts.iteritems(): info.initd['install'][name] = os.path.join(init_scripts_dir, path) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/network.py000066400000000000000000000147331323112141500261260ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import kernel import os.path class InstallDHCPCD(Task): description = 'Replacing isc-dhcp with dhcpcd' phase = phases.preparation @classmethod def run(cls, info): # isc-dhcp-client before jessie doesn't work properly with ec2 info.packages.add('dhcpcd') info.exclude_packages.add('isc-dhcp-client') info.exclude_packages.add('isc-dhcp-common') class EnableDHCPCDDNS(Task): description = 'Configuring the DHCP client to set the nameservers' phase = phases.system_modification @classmethod def run(cls, info): from bootstrapvz.common.tools import sed_i dhcpcd = os.path.join(info.root, 'etc/default/dhcpcd') sed_i(dhcpcd, '^#*SET_DNS=.*', 'SET_DNS=\'yes\'') class AddBuildEssentialPackage(Task): description = 'Adding build-essential package' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('build-essential') class InstallNetworkingUDevHotplugAndDHCPSubinterface(Task): description = 'Setting up udev and DHCPD rules for EC2 networking' phase = phases.system_modification @classmethod def run(cls, info): from . 
import assets script_src = os.path.join(assets, 'ec2') script_dst = os.path.join(info.root, 'etc') import stat rwxr_xr_x = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH) from shutil import copy copy(os.path.join(script_src, '53-ec2-network-interfaces.rules'), os.path.join(script_dst, 'udev/rules.d/53-ec2-network-interfaces.rules')) os.chmod(os.path.join(script_dst, 'udev/rules.d/53-ec2-network-interfaces.rules'), rwxr_xr_x) os.mkdir(os.path.join(script_dst, 'sysconfig'), 0755) os.mkdir(os.path.join(script_dst, 'sysconfig/network-scripts'), 0755) copy(os.path.join(script_src, 'ec2net.hotplug'), os.path.join(script_dst, 'sysconfig/network-scripts/ec2net.hotplug')) os.chmod(os.path.join(script_dst, 'sysconfig/network-scripts/ec2net.hotplug'), rwxr_xr_x) copy(os.path.join(script_src, 'ec2net-functions'), os.path.join(script_dst, 'sysconfig/network-scripts/ec2net-functions')) os.chmod(os.path.join(script_dst, 'sysconfig/network-scripts/ec2net-functions'), rwxr_xr_x) copy(os.path.join(script_src, 'ec2dhcp.sh'), os.path.join(script_dst, 'dhcp/dhclient-exit-hooks.d/ec2dhcp.sh')) os.chmod(os.path.join(script_dst, 'dhcp/dhclient-exit-hooks.d/ec2dhcp.sh'), rwxr_xr_x) with open(os.path.join(script_dst, 'network/interfaces'), "a") as interfaces: interfaces.write("iface eth1 inet dhcp\n") interfaces.write("iface eth2 inet dhcp\n") interfaces.write("iface eth3 inet dhcp\n") interfaces.write("iface eth4 inet dhcp\n") interfaces.write("iface eth5 inet dhcp\n") interfaces.write("iface eth6 inet dhcp\n") interfaces.write("iface eth7 inet dhcp\n") class InstallEnhancedNetworking(Task): description = 'Installing enhanced networking kernel driver using DKMS' phase = phases.system_modification successors = [kernel.UpdateInitramfs] @classmethod def run(cls, info): from bootstrapvz.common.releases import stretch if info.manifest.release >= stretch: version = '4.2.1' drivers_url = 
'https://downloadmirror.intel.com/27160/eng/ixgbevf-4.2.1.tar.gz' else: version = '3.2.2' drivers_url = 'https://downloadmirror.intel.com/26561/eng/ixgbevf-3.2.2.tar.gz' # Sadly the first number in the URL changes: # 2.16.1 => https://downloadmirror.intel.com/25464/eng/ixgbevf-2.16.1.tar.gz archive = os.path.join(info.root, 'tmp', 'ixgbevf-%s.tar.gz' % (version)) module_path = os.path.join(info.root, 'usr', 'src', 'ixgbevf-%s' % (version)) import urllib urllib.urlretrieve(drivers_url, archive) from bootstrapvz.common.tools import log_check_call log_check_call(['tar', '--ungzip', '--extract', '--file', archive, '--directory', os.path.join(info.root, 'usr', 'src')]) with open(os.path.join(module_path, 'dkms.conf'), 'w') as dkms_conf: dkms_conf.write("""PACKAGE_NAME="ixgbevf" PACKAGE_VERSION="%s" CLEAN="cd src/; sed -i '1s/^/EXTRA_CFLAGS := -fno-pie/' Makefile && make clean" MAKE="cd src/; make BUILD_KERNEL=${kernelver}" BUILT_MODULE_LOCATION[0]="src/" BUILT_MODULE_NAME[0]="ixgbevf" DEST_MODULE_LOCATION[0]="/updates" DEST_MODULE_NAME[0]="ixgbevf" AUTOINSTALL="yes" """ % (version)) for task in ['add', 'build', 'install']: # Invoke DKMS task using specified kernel module (-m) and version (-v) log_check_call(['chroot', info.root, 'dkms', task, '-m', 'ixgbevf', '-v', version, '-k', info.kernel_version]) class InstallENANetworking(Task): description = 'Installing ENA networking kernel driver using DKMS' phase = phases.system_modification successors = [kernel.UpdateInitramfs] @classmethod def run(cls, info): version = '1.0.0' drivers_url = 'https://github.com/amzn/amzn-drivers' module_path = os.path.join(info.root, 'usr', 'src', 'amzn-drivers-%s' % (version)) from bootstrapvz.common.tools import log_check_call log_check_call(['git', 'clone', drivers_url, module_path]) with open(os.path.join(module_path, 'dkms.conf'), 'w') as dkms_conf: dkms_conf.write("""PACKAGE_NAME="ena" PACKAGE_VERSION="%s" CLEAN="make -C kernel/linux/ena clean" MAKE="make -C kernel/linux/ena/ 
BUILD_KERNEL=${kernelver}" BUILT_MODULE_NAME[0]="ena" BUILT_MODULE_LOCATION="kernel/linux/ena" DEST_MODULE_LOCATION[0]="/updates" DEST_MODULE_NAME[0]="ena" AUTOINSTALL="yes" """ % (version)) for task in ['add', 'build', 'install']: # Invoke DKMS task using specified kernel module (-m) and version (-v) log_check_call(['chroot', info.root, 'dkms', task, '-m', 'amzn-drivers', '-v', version, '-k', info.kernel_version]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/packages-kernels.yml000066400000000000000000000007221323112141500300160ustar00rootroot00000000000000--- # This is a mapping of Debian release codenames to processor architectures to kernel packages squeeze: # In squeeze, we need a special kernel flavor for xen amd64: linux-image-xen-amd64 i386: linux-image-xen-686 wheezy: amd64: linux-image-amd64 i386: linux-image-686 jessie: amd64: linux-image-amd64 i386: linux-image-686-pae stretch: amd64: linux-image-amd64 i386: linux-image-686-pae sid: amd64: linux-image-amd64 i386: linux-image-686-pae bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/packages.py000066400000000000000000000021271323112141500262050ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import os.path class DefaultPackages(Task): description = 'Adding image packages required for EC2' phase = phases.preparation @classmethod def run(cls, info): from bootstrapvz.common.tools import rel_path info.packages.add('file') # Needed for the init scripts kernel_packages_path = rel_path(__file__, 'packages-kernels.yml') from bootstrapvz.common.tools import config_get kernel_package = config_get(kernel_packages_path, [info.manifest.release.codename, info.manifest.system['architecture']]) info.packages.add(kernel_package) class AddWorkaroundGrowpart(Task): description = 'Adding growpart workaround for jessie' phase = phases.system_modification @classmethod def run(cls, info): from shutil import copy from . 
import assets src = os.path.join(assets, 'bin/growpart') dst = os.path.join(info.root, 'usr/bin/growpart-workaround') copy(src, dst) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/ec2/tasks/tuning.py000066400000000000000000000016001323112141500257260ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from . import assets from shutil import copy import os class TuneSystem(Task): description = 'Tuning system for EC2' phase = phases.system_modification @classmethod def run(cls, info): sysctl_src = os.path.join(assets, 'sysctl.d/tuning.conf') sysctl_dst = os.path.join(info.root, 'etc/sysctl.d/01_ec2.conf') copy(sysctl_src, sysctl_dst) os.chmod(sysctl_dst, 0644) class BlackListModules(Task): description = 'Blacklisting unused kernel modules' phase = phases.system_modification @classmethod def run(cls, info): blacklist_path = os.path.join(info.root, 'etc/modprobe.d/blacklist.conf') with open(blacklist_path, 'a') as blacklist: blacklist.write(('blacklist i2c_piix4\n' 'blacklist psmouse\n')) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/000077500000000000000000000000001323112141500230135ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/README.rst000066400000000000000000000020561323112141500245050ustar00rootroot00000000000000Google Compute Engine ===================== The `GCE `__ provider creates images in the format expected by GCE, i.e. a raw disk image packed into a \*.tar.gz file. It can upload created images to Google Cloud Storage (to a URI provided in the manifest by ``gcs_destination``) and can register images for use by Google Compute Engine in a project provided in the manifest by ``gce_project``. Neither of these functionalities is fully tested yet.
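As a concrete illustration of the naming rules the provider applies when building the tarball (see the ``CreateTarball`` task in ``tasks/image.py``): the formatted manifest name is lowercased and dots are replaced with dashes to satisfy GCE image-name restrictions, and the result also names the ``.tar.gz`` tarball. The helper names below are illustrative, not part of the codebase:

```python
def gce_image_name(manifest_name):
    # Mirrors the sanitization in CreateTarball: GCE image names
    # must be lowercase and may not contain dots.
    name = manifest_name.lower()
    name = name.replace('.', '-')
    return name


def gce_tarball_name(manifest_name):
    # The tarball carries the sanitized image name; its inner file
    # is always renamed to disk.raw, as GCE requires.
    return gce_image_name(manifest_name) + '.tar.gz'


print(gce_image_name('Debian-8.4-AMD64'))    # debian-8-4-amd64
print(gce_tarball_name('Debian-8.4-AMD64'))  # debian-8-4-amd64.tar.gz
```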
Note that to register an image, it must first be uploaded to GCS, so you must specify ``gcs_destination`` (upload to GCS) to use ``gce_project`` (register with GCE) Manifest settings ----------------- Provider ~~~~~~~~ - ``description``: Description of the image. - ``gcs_destination``: Image destination in GCS. - ``gce_project``: GCE project in which to register the image. Example: .. code-block:: yaml --- provider: name: gce description: Debian {system.release} {system.architecture} gcs_destination: gs://my-bucket gce_project: my-project bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/__init__.py000066400000000000000000000033051323112141500251250ustar00rootroot00000000000000from bootstrapvz.common import task_groups import tasks.apt import tasks.boot import tasks.configuration import tasks.image import tasks.packages from bootstrapvz.common.tasks import apt, boot, image, loopback, initd from bootstrapvz.common.tasks import ssh, volume, grub def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.update(task_groups.get_standard_groups(manifest)) taskset.update([apt.AddBackports, apt.AddDefaultSources, loopback.AddRequiredCommands, loopback.Create, tasks.packages.DefaultPackages, tasks.configuration.GatherReleaseInformation, tasks.boot.ConfigureGrub, initd.InstallInitScripts, boot.BlackListModules, boot.UpdateInitramfs, ssh.AddSSHKeyGeneration, ssh.DisableSSHPasswordAuthentication, ssh.DisableRootLogin, tasks.apt.AddBaselineAptCache, image.MoveImage, tasks.image.CreateTarball, volume.Delete, ]) taskset.discard(grub.SetGrubConsolOutputDeviceToSerial) if 'gcs_destination' in manifest.provider: taskset.add(tasks.image.UploadImage) if 'gce_project' in manifest.provider: taskset.add(tasks.image.RegisterImage) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): 
taskset.update(task_groups.get_standard_rollback_tasks(completed)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/manifest-schema.yml000066400000000000000000000011041323112141500265760ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: GCE manifest type: object properties: provider: type: object properties: description: {type: string} gce_project: {type: string} gcs_destination: {type: string} system: type: object properties: bootloader: type: string enum: - grub - extlinux volume: type: object properties: partitions: type: object properties: type: enum: - msdos - gpt required: [partitions] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/000077500000000000000000000000001323112141500241405ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/__init__.py000066400000000000000000000000001323112141500262370ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/apt.py000066400000000000000000000010151323112141500252730ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import apt from bootstrapvz.common.tasks import network from bootstrapvz.common.tools import log_check_call class AddBaselineAptCache(Task): description = 'Add a baseline apt cache into the image.' 
phase = phases.system_cleaning predecessors = [apt.AptClean] successors = [network.RemoveDNSInfo] @classmethod def run(cls, info): log_check_call(['chroot', info.root, 'apt-get', 'update']) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/boot.py000066400000000000000000000013211323112141500254520ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import grub class ConfigureGrub(Task): description = 'Change grub configuration to allow for ttyS0 output' phase = phases.system_modification successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX'].append('console=ttyS0,38400n8') info.grub_config['GRUB_CMDLINE_LINUX'].append('elevator=noop') # Enable SCSI block multiqueue on Stretch. from bootstrapvz.common.releases import stretch if info.manifest.release >= stretch: info.grub_config['GRUB_CMDLINE_LINUX'].append('scsi_mod.use_blk_mq=Y') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/configuration.py000066400000000000000000000013611323112141500273620ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tools import log_check_call class GatherReleaseInformation(Task): description = 'Gathering release information about created image' phase = phases.system_modification @classmethod def run(cls, info): lsb_distribution = log_check_call(['chroot', info.root, 'lsb_release', '-i', '-s']) lsb_description = log_check_call(['chroot', info.root, 'lsb_release', '-d', '-s']) lsb_release = log_check_call(['chroot', info.root, 'lsb_release', '-r', '-s']) info._gce['lsb_distribution'] = lsb_distribution[0] info._gce['lsb_description'] = lsb_description[0] info._gce['lsb_release'] = lsb_release[0] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/image.py000066400000000000000000000045361323112141500256040ustar00rootroot00000000000000from 
bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import image from bootstrapvz.common.tools import log_check_call import os.path class CreateTarball(Task): description = 'Creating tarball with image' phase = phases.image_registration predecessors = [image.MoveImage] @classmethod def run(cls, info): image_name = info.manifest.name.format(**info.manifest_vars) filename = image_name + '.' + info.volume.extension # ensure that we do not use disallowed characters in image name image_name = image_name.lower() image_name = image_name.replace(".", "-") info._gce['image_name'] = image_name tarball_name = image_name + '.tar.gz' tarball_path = os.path.join(info.manifest.bootstrapper['workspace'], tarball_name) info._gce['tarball_name'] = tarball_name info._gce['tarball_path'] = tarball_path # GCE requires that the file in the tar be named disk.raw, hence the transform log_check_call(['tar', '--sparse', '-C', info.manifest.bootstrapper['workspace'], '-caf', tarball_path, '--transform=s|.*|disk.raw|', filename]) class UploadImage(Task): description = 'Uploading image to GCS' phase = phases.image_registration predecessors = [CreateTarball] @classmethod def run(cls, info): log_check_call(['gsutil', 'cp', info._gce['tarball_path'], info.manifest.provider['gcs_destination'] + info._gce['tarball_name']]) class RegisterImage(Task): description = 'Registering image with GCE' phase = phases.image_registration predecessors = [UploadImage] @classmethod def run(cls, info): image_description = info._gce['lsb_description'] if 'description' in info.manifest.provider: image_description = info.manifest.provider['description'] image_description = image_description.format(**info.manifest_vars) log_check_call(['gcloud', 'compute', '--project=' + info.manifest.provider['gce_project'], 'images', 'create', info._gce['image_name'], '--source-uri=' + info.manifest.provider['gcs_destination'] + info._gce['tarball_name'], '--description=' + 
image_description]) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/packages-kernels.yml000066400000000000000000000004171323112141500301040ustar00rootroot00000000000000--- # This is a mapping of Debian release codenames to processor architectures to kernel packages wheezy: amd64: linux-image-amd64 jessie: amd64: linux-image-amd64 stretch: amd64: linux-image-amd64 buster: amd64: linux-image-amd64 sid: amd64: linux-image-amd64 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/gce/tasks/packages.py000066400000000000000000000024471323112141500262770ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import packages from bootstrapvz.common.tools import config_get, rel_path class DefaultPackages(Task): description = 'Adding image packages required for GCE' phase = phases.preparation successors = [packages.AddManifestPackages] @classmethod def run(cls, info): info.packages.add('acpi-support-base') info.packages.add('busybox') info.packages.add('ca-certificates') info.packages.add('curl') info.packages.add('ethtool') info.packages.add('gdisk') info.packages.add('kpartx') info.packages.add('isc-dhcp-client') info.packages.add('lsb-release') info.packages.add('ntp') info.packages.add('parted') info.packages.add('python') info.packages.add('openssh-client') info.packages.add('openssh-server') info.packages.add('sudo') info.packages.add('uuid-runtime') kernel_packages_path = rel_path(__file__, 'packages-kernels.yml') kernel_package = config_get(kernel_packages_path, [info.manifest.release.codename, info.manifest.system['architecture']]) info.packages.add(kernel_package) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/000077500000000000000000000000001323112141500230525ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/README.rst000066400000000000000000000025201323112141500245400ustar00rootroot00000000000000KVM === The `KVM `__ provider 
creates virtual images for Linux Kernel-based Virtual Machines. It supports the installation of `virtio kernel modules `__ (paravirtualized drivers for IO operations). It also supports creating an image with LVM and qcow2 as a disk backend. Manifest settings ----------------- Provider ~~~~~~~~ - ``virtio``: Specifies which virtio kernel modules to install. ``optional`` - ``console``: Specifies which console should be used for stdout and stderr of init process to show startup messages and act as a console in single-user mode. Regardless of this setting output of kernel messages generated by ``printk()`` and seen by ``dmesg`` goes to both virtual and serial console. Valid options: ```virtual``` or ```serial``` (default). ``optional`` - ``logicalvolume``: Specifies the logical volume where the disk image will be built. - ``volumegroup``: Specifies the volume group where the logical volume will be stored. These options should only be used if ``lvm`` was given as a disk backend. Example: .. code-block:: yaml --- provider: name: kvm virtio: - virtio_blk - virtio_net console: virtual volume: backing: lvm logicalvolume: lvtest volumegroup: vgtest bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/__init__.py000066400000000000000000000034461323112141500251720ustar00rootroot00000000000000from bootstrapvz.common import task_groups import tasks.packages import tasks.boot from bootstrapvz.common.tasks import image, loopback, initd, ssh, logicalvolume def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.update(task_groups.get_standard_groups(manifest)) taskset.update([tasks.packages.DefaultPackages, initd.InstallInitScripts, ssh.AddOpenSSHPackage, ssh.ShredHostkeys, ssh.AddSSHKeyGeneration, ]) if manifest.volume.get('logicalvolume', []): taskset.update([logicalvolume.AddRequiredCommands, logicalvolume.Create, ]) else: 
taskset.update([loopback.AddRequiredCommands, loopback.Create, image.MoveImage, ]) if manifest.provider.get('virtio', []): from tasks import virtio taskset.update([virtio.VirtIO]) if manifest.provider.get('console', False): if manifest.provider['console'] == 'virtual': taskset.update([tasks.boot.SetGrubConsolOutputDeviceToVirtual]) from bootstrapvz.common.releases import jessie if manifest.release >= jessie: taskset.update([tasks.boot.SetGrubConsolOutputDeviceToVirtual, tasks.boot.SetSystemdTTYVTDisallocate, ]) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): taskset.update(task_groups.get_standard_rollback_tasks(completed)) counter_task(taskset, logicalvolume.Create, logicalvolume.Delete) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/assets/000077500000000000000000000000001323112141500243545ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/assets/noclear.conf000066400000000000000000000001751323112141500266510ustar00rootroot00000000000000# https://wiki.debian.org/systemd#Missing_startup_messages_on_console.28tty1.29_after_the_boot [Service] TTYVTDisallocate=no bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/manifest-schema.yml000066400000000000000000000020021323112141500266330ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: KVM manifest type: object properties: provider: type: object properties: virtio: type: array items: type: string enum: - virtio - virtio_pci - virtio_balloon - virtio_blk - virtio_net - virtio_ring minItems: 1 console: type: string enum: - serial - virtual system: type: object properties: bootloader: type: string enum: - grub - extlinux - none volume: type: object properties: backing: type: string enum: - qcow2 - raw - lvm logicalvolume: {type: string} volumegroup: {type: string} partitions: type: object properties: type: type: string enum: - none - msdos - gpt required: [backing] 
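To make the schema's ``virtio`` list concrete: the ``VirtIO`` task (``tasks/virtio.py`` below) appends each listed module to ``/etc/initramfs-tools/modules`` inside the image so the initramfs loads the paravirtualized drivers. A minimal sketch of that append logic; the helper name is illustrative, and the real task writes the file under ``info.root`` rather than returning lines:

```python
def virtio_module_lines(provider):
    # Compute the lines the VirtIO task appends to
    # /etc/initramfs-tools/modules for the given provider settings.
    # A leading blank line separates the additions from existing content.
    lines = ['\n']
    for module in provider.get('virtio', []):
        lines.append(module + '\n')
    return lines


provider = {'name': 'kvm', 'virtio': ['virtio_blk', 'virtio_net']}
print(''.join(virtio_module_lines(provider)))
```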
bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/000077500000000000000000000000001323112141500241775ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/__init__.py000066400000000000000000000000001323112141500262760ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/boot.py000066400000000000000000000025111323112141500255130ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import grub from bootstrapvz.common.tools import rel_path assets = rel_path(__file__, '../assets') class SetGrubConsolOutputDeviceToVirtual(Task): description = 'Setting the init process terminal output device to `tty0\'' phase = phases.system_modification predecessors = [grub.SetGrubConsolOutputDeviceToSerial] successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX'].append('console=tty0') class SetGrubSystemdShowStatus(Task): description = 'Setting systemd show_status' phase = phases.system_modification successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX'].append('systemd.show_status=1') class SetSystemdTTYVTDisallocate(Task): description = 'Setting systemd TTYVTDisallocate to no\'' phase = phases.system_modification @classmethod def run(cls, info): import os.path from shutil import copy src = os.path.join(assets, 'noclear.conf') dst_dir = os.path.join(info.root, 'etc/systemd/system/getty@tty1.service.d') dst = os.path.join(dst_dir, 'noclear.conf') os.mkdir(dst_dir, 0755) copy(src, dst) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/packages-kernels.yml000066400000000000000000000007121323112141500301410ustar00rootroot00000000000000--- # This is a mapping of Debian release codenames to processor architectures to kernel packages squeeze: amd64: linux-image-amd64 i386: linux-image-686 wheezy: amd64: 
linux-image-amd64 i386: linux-image-686 jessie: amd64: linux-image-amd64 i386: linux-image-686-pae arm64: linux-image-arm64 stretch: amd64: linux-image-amd64 i386: linux-image-686-pae arm64: linux-image-arm64 sid: amd64: linux-image-amd64 i386: linux-image-686-pae bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/packages.py000066400000000000000000000011361323112141500263300ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class DefaultPackages(Task): description = 'Adding image packages required for kvm' phase = phases.preparation @classmethod def run(cls, info): from bootstrapvz.common.tools import config_get, rel_path kernel_packages_path = rel_path(__file__, 'packages-kernels.yml') kernel_package = config_get(kernel_packages_path, [info.manifest.release.codename, info.manifest.system['architecture']]) info.packages.add(kernel_package) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/kvm/tasks/virtio.py000066400000000000000000000007721323112141500260730ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases import os class VirtIO(Task): description = 'Install virtio modules' phase = phases.system_modification @classmethod def run(cls, info): modules = os.path.join(info.root, '/etc/initramfs-tools/modules') with open(modules, "a") as modules_file: modules_file.write("\n") for module in info.manifest.provider.get('virtio', []): modules_file.write(module + "\n") bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/000077500000000000000000000000001323112141500235225ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/README.rst000066400000000000000000000032731323112141500252160ustar00rootroot00000000000000Oracle ====== The `Oracle `__ provider creates RAW images compressed in a ``.tar.gz`` tarball. 
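The embedded client (``apiclient.py``, further down) uploads the tarball to Oracle Storage in chunks: files over 300 MiB are split into 100 MiB chunks, smaller files into 50 MiB chunks, each chunk is named ``<file>-<NNNN>`` with a four-digit counter, and a manifest object finally joins them. A sketch of that sizing and naming logic, with illustrative helper names:

```python
MiB = 1024 ** 2


def chunk_size(file_size):
    # Mirrors OracleStorageAPIClient.chunk_size: larger uploads
    # use larger chunks to reduce the number of requests.
    return 100 * MiB if file_size > 300 * MiB else 50 * MiB


def chunk_name(file_name, number):
    # Chunks are numbered with four zero-padded digits, starting at 1,
    # so the manifest's "<container>/<file>-" prefix matches them in order.
    return '{name}-{number:04d}'.format(name=file_name, number=number)


print(chunk_size(512 * MiB) // MiB)   # 100
print(chunk_name('image.tar.gz', 3))  # image.tar.gz-0003
```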
Those images can be uploaded using the web interface of the Oracle Compute Cloud Service dashboard or configured to be automatically sent by our Oracle Storage Cloud Service API embedded client. Manifest settings ----------------- Credentials ~~~~~~~~~~~ The settings described below should be placed in the ``credentials`` key under the ``provider`` section, if the image is intended to be uploaded after generation. They will be used to authenticate the API client. - ``username``: the same login used to access the Oracle Compute Cloud dashboard. ``required`` - ``password``: password for the username specified above. ``required`` - ``identity-domain``: this is auto-generated by Oracle and available in the "New Account Information" e-mail message they send after registration. ``required`` Example: .. code-block:: yaml --- provider: name: oracle credentials: username: user@example.com password: qwerty123456 identity-domain: usoracle9999 Provider ~~~~~~~~ If the ``credentials`` have been specified, the following settings are available to customize the process of uploading and verifying an image. - ``container``: the container (folder) to which the image will be uploaded. ``required`` - ``verify``: specifies if the image should be downloaded again and have its checksum compared against the local one. Valid values: ``true``, ``false``. Default: ``false``. ``optional`` .. 
code-block:: yaml --- provider: name: oracle container: compute_images verify: true bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/__init__.py000066400000000000000000000034271323112141500256410ustar00rootroot00000000000000from bootstrapvz.common import task_groups from bootstrapvz.common.tasks import image, loopback, ssh, volume import tasks.api import tasks.image import tasks.network import tasks.packages def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) keys = ['username', 'password', 'identity-domain'] if 'credentials' in data['provider']: if not all(key in data['provider']['credentials'] for key in keys): msg = 'All Oracle Compute Cloud credentials should be specified in the manifest' error(msg, ['provider', 'credentials']) if not data['provider'].get('container'): msg = 'The container to which the image will be uploaded should be specified' error(msg, ['provider']) def resolve_tasks(taskset, manifest): taskset.update(task_groups.get_standard_groups(manifest)) taskset.update(task_groups.ssh_group) taskset.update([loopback.AddRequiredCommands, loopback.Create, image.MoveImage, ssh.DisableRootLogin, volume.Delete, tasks.image.CreateImageTarball, tasks.network.InstallDHCPCD, tasks.packages.DefaultPackages, ]) if 'credentials' in manifest.provider: taskset.add(tasks.api.Connect) taskset.add(tasks.image.UploadImageTarball) if manifest.provider.get('verify', False): taskset.add(tasks.image.DownloadImageTarball) taskset.add(tasks.image.CompareImageTarballs) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): taskset.update(task_groups.get_standard_rollback_tasks(completed)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/apiclient.py000066400000000000000000000110331323112141500260420ustar00rootroot00000000000000import hashlib import logging import os import requests from bootstrapvz.common.bytes import Bytes class 
OracleStorageAPIClient: def __init__(self, username, password, identity_domain, container): self.username = username self.password = password self.identity_domain = identity_domain self.container = container self.base_url = 'https://' + identity_domain + '.storage.oraclecloud.com' self.log = logging.getLogger(__name__) # Avoid 'requests' INFO/DEBUG log messages logging.getLogger('requests').setLevel(logging.WARNING) logging.getLogger('urllib3').setLevel(logging.WARNING) def _fail(self, error): raise RuntimeError('Oracle Storage Cloud API - ' + error) @property def auth_token(self): headers = { 'X-Storage-User': 'Storage-{id_domain}:{user}'.format( id_domain=self.identity_domain, user=self.username, ), 'X-Storage-Pass': self.password, } url = self.base_url + '/auth/v1.0' response = requests.get(url, headers=headers) if response.status_code == 200: return response.headers.get('x-auth-token') else: self._fail(response.text) @property def chunk_size(self): file_size = os.path.getsize(self.file_path) if file_size > int(Bytes('300MiB')): chunk_size = int(Bytes('100MiB')) else: chunk_size = int(Bytes('50MiB')) return chunk_size def compare_files(self): uploaded_file_md5 = hashlib.md5() downloaded_file_md5 = hashlib.md5() files = [self.file_path, self.target_file_path] hashes = [uploaded_file_md5, downloaded_file_md5] for f, h in zip(files, hashes): with open(f, 'rb') as current_file: while True: data = current_file.read(int(Bytes('8MiB'))) if not data: break h.update(data) if uploaded_file_md5.hexdigest() != downloaded_file_md5.hexdigest(): self.log.error('File hashes mismatch') else: self.log.debug('Both files have the same hash') def create_manifest(self): headers = { 'X-Auth-Token': self.auth_token, 'X-Object-Manifest': '{container}/{object_name}-'.format( container=self.container, object_name=self.file_name, ), 'Content-Length': '0', } url = self.object_url self.log.debug('Creating remote manifest to join chunks') response = requests.put(url, headers=headers) if 
response.status_code != 201: self._fail(response.text) def download_file(self): headers = { 'X-Auth-Token': self.auth_token, } url = self.object_url response = requests.get(url, headers=headers, stream=True) if response.status_code != 200: self._fail(response.text) with open(self.target_file_path, 'wb') as f: for chunk in response.iter_content(chunk_size=int(Bytes('8MiB'))): if chunk: f.write(chunk) @property def file_name(self): return os.path.basename(self.file_path) @property def object_url(self): url = '{base}/v1/Storage-{id_domain}/{container}/{object_name}'.format( base=self.base_url, id_domain=self.identity_domain, container=self.container, object_name=self.file_name, ) return url def upload_file(self): f = open(self.file_path, 'rb') n = 1 while True: chunk = f.read(self.chunk_size) if not chunk: break chunk_name = '{name}-{number}'.format( name=self.file_name, number='{0:04d}'.format(n), ) headers = { 'X-Auth-Token': self.auth_token, } url = '{base}/v1/Storage-{id_domain}/{container}/{object_chunk_name}'.format( base=self.base_url, id_domain=self.identity_domain, container=self.container, object_chunk_name=chunk_name, ) self.log.debug('Uploading chunk ' + chunk_name) response = requests.put(url, data=chunk, headers=headers) if response.status_code != 201: self._fail(response.text) n += 1 self.create_manifest() bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/manifest-schema.yml000066400000000000000000000013721323112141500273140ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Oracle manifest type: object properties: provider: type: object properties: credentials: type: object properties: username: {type: string} password: {type: string} identity-domain: {type: string} container: {type: string} verify: {type: boolean} system: type: object properties: bootloader: type: string enum: [grub] volume: type: object properties: backing: type: string enum: [raw] partitions: type: object properties: type: type: string 
enum: - msdos - gpt required: [backing] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/000077500000000000000000000000001323112141500246475ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/__init__.py000066400000000000000000000000001323112141500267460ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/api.py000066400000000000000000000014241323112141500257730ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.providers.oracle.apiclient import OracleStorageAPIClient class Connect(Task): description = 'Connecting to the Oracle Storage Cloud API' phase = phases.preparation @classmethod def run(cls, info): info._oracle['client'] = OracleStorageAPIClient( username=info.manifest.provider['credentials']['username'], password=info.manifest.provider['credentials']['password'], identity_domain=info.manifest.provider['credentials']['identity-domain'], container=info.manifest.provider['container'], ) # Try to fetch the token, so it will fail early if the credentials are wrong info._oracle['client'].auth_token bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/image.py000066400000000000000000000037141323112141500263100ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import image from bootstrapvz.common.tools import log_check_call import os class CreateImageTarball(Task): description = 'Creating tarball with image' phase = phases.image_registration predecessors = [image.MoveImage] @classmethod def run(cls, info): image_name = info.manifest.name.format(**info.manifest_vars) filename = image_name + '.' 
+ info.volume.extension tarball_name = image_name + '.tar.gz' tarball_path = os.path.join(info.manifest.bootstrapper['workspace'], tarball_name) info._oracle['tarball_path'] = tarball_path log_check_call(['tar', '--sparse', '-C', info.manifest.bootstrapper['workspace'], '-caf', tarball_path, filename]) class UploadImageTarball(Task): description = 'Uploading image tarball' phase = phases.image_registration predecessors = [CreateImageTarball] @classmethod def run(cls, info): info._oracle['client'].file_path = info._oracle['tarball_path'] info._oracle['client'].upload_file() class DownloadImageTarball(Task): description = 'Downloading image tarball for integrity verification' phase = phases.image_registration predecessors = [UploadImageTarball] @classmethod def run(cls, info): tmp_tarball_path = '{tarball_path}-{pid}.tmp'.format( tarball_path=info._oracle['tarball_path'], pid=os.getpid(), ) info._oracle['client'].target_file_path = tmp_tarball_path info._oracle['client'].download_file() class CompareImageTarballs(Task): description = 'Comparing uploaded and downloaded image tarballs hashes' phase = phases.image_registration predecessors = [DownloadImageTarball] @classmethod def run(cls, info): info._oracle['client'].compare_files() os.remove(info._oracle['client'].target_file_path) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/network.py000066400000000000000000000005611323112141500267140ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class InstallDHCPCD(Task): description = 'Replacing isc-dhcp with dhcpcd5' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('dhcpcd5') info.exclude_packages.add('isc-dhcp-client') info.exclude_packages.add('isc-dhcp-common') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/packages-kernels.yml000066400000000000000000000004351323112141500306130ustar00rootroot00000000000000--- # This is a mapping of Debian release 
codenames to processor architectures to kernel packages jessie: amd64: linux-image-amd64 i386: linux-image-686-pae stretch: amd64: linux-image-amd64 i386: linux-image-686-pae sid: amd64: linux-image-amd64 i386: linux-image-686-pae bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/oracle/tasks/packages.py000066400000000000000000000011471323112141500270020ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tools import config_get, rel_path class DefaultPackages(Task): description = 'Adding image packages required for Oracle Compute Cloud' phase = phases.preparation @classmethod def run(cls, info): kernel_packages_path = rel_path(__file__, 'packages-kernels.yml') kernel_package = config_get(kernel_packages_path, [info.manifest.release.codename, info.manifest.system['architecture']]) info.packages.add(kernel_package) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/000077500000000000000000000000001323112141500244545ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/README.rst000066400000000000000000000020411323112141500261400ustar00rootroot00000000000000VirtualBox ========== The `VirtualBox `__ provider can bootstrap to both .vdi and .vmdk images (raw images are also supported but do not run in VirtualBox). It's advisable to always use vmdk images for interoperability (e.g. `OVF `__ files *should* support vdi files, but since they have no identifier URL not even VirtualBox itself can import them). VirtualBox Guest Additions can be installed automatically if the ISO is provided in the manifest. VirtualBox Additions iso can be installed from main Debian repo by running: `apt install virtualbox-guest-additions-iso` Manifest settings ----------------- Provider ~~~~~~~~ - ``guest_additions``: Specifies the path to the VirtualBox Guest Additions ISO, which, when specified, will be mounted and used to install the VirtualBox Guest Additions. 
``optional`` Example: .. code-block:: yaml --- provider: name: virtualbox guest_additions: /usr/share/virtualbox/VBoxGuestAdditions.iso bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/__init__.py000066400000000000000000000022201323112141500265610ustar00rootroot00000000000000from bootstrapvz.common import task_groups import tasks.packages import tasks.boot from bootstrapvz.common.tasks import image from bootstrapvz.common.tasks import loopback def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) def resolve_tasks(taskset, manifest): taskset.update(task_groups.get_standard_groups(manifest)) taskset.update([tasks.packages.DefaultPackages, tasks.boot.AddVirtualConsoleGrubOutputDevice, loopback.AddRequiredCommands, loopback.Create, image.MoveImage, ]) if manifest.provider.get('guest_additions', False): from tasks import guest_additions taskset.update([guest_additions.CheckGuestAdditionsPath, guest_additions.AddGuestAdditionsPackages, guest_additions.InstallGuestAdditions, ]) def resolve_rollback_tasks(taskset, manifest, completed, counter_task): taskset.update(task_groups.get_standard_rollback_tasks(completed)) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/assets/000077500000000000000000000000001323112141500257565ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/assets/install_guest_additions.sh000066400000000000000000000005771323112141500332360ustar00rootroot00000000000000#!/bin/bash # This file was created by bootstrap-vz. # See https://github.com/andsens/bootstrap-vz/blob/master/LICENSE for # legal notices and disclaimers. 
function uname { if [[ $1 == '-r' ]]; then echo "KERNEL_VERSION" return 0 elif [[ $1 == '-m' ]]; then echo "KERNEL_ARCH" return 0 else $(which uname) $@ fi } export -f uname INSTALL_SCRIPT --nox11 bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/manifest-schema.yml000066400000000000000000000012551323112141500302460ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: VirtualBox manifest type: object properties: provider: type: object properties: guest_additions: type: string pattern: ^[^\0]+$ system: type: object properties: bootloader: type: string enum: - grub - extlinux volume: type: object properties: backing: type: string enum: - raw - vdi - vmdk partitions: type: object properties: type: type: string enum: - none - msdos - gpt required: [backing] bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/000077500000000000000000000000001323112141500256015ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/__init__.py000066400000000000000000000000001323112141500277000ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/boot.py000066400000000000000000000007431323112141500271220ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from bootstrapvz.common.tasks import grub class AddVirtualConsoleGrubOutputDevice(Task): description = 'Adding `tty0\' as output device for grub' phase = phases.system_modification predecessors = [grub.SetGrubConsolOutputDeviceToSerial] successors = [grub.WriteGrubConfig] @classmethod def run(cls, info): info.grub_config['GRUB_CMDLINE_LINUX_DEFAULT'].append('console=tty0') bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/guest_additions.py000066400000000000000000000065621323112141500313510ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases from 
bootstrapvz.common.tasks.packages import InstallPackages from bootstrapvz.common.exceptions import TaskError from bootstrapvz.common.tools import rel_path import os assets = rel_path(__file__, '../assets') class CheckGuestAdditionsPath(Task): description = 'Checking whether the VirtualBox Guest Additions image exists' phase = phases.validation @classmethod def run(cls, info): guest_additions_path = rel_path(info.manifest.path, info.manifest.provider['guest_additions']) if not os.path.exists(guest_additions_path): msg = 'The file {file} does not exist.'.format(file=guest_additions_path) raise TaskError(msg) class AddGuestAdditionsPackages(Task): description = 'Adding packages to support Guest Additions installation' phase = phases.preparation @classmethod def run(cls, info): info.packages.add('bzip2') info.packages.add('build-essential') info.packages.add('dkms') kernel_headers_pkg = 'linux-headers-' if info.manifest.system['architecture'] == 'i386': arch = 'i686' kernel_headers_pkg += '686-pae' else: arch = 'x86_64' kernel_headers_pkg += 'amd64' info.packages.add(kernel_headers_pkg) info.kernel = { 'arch': arch, 'headers_pkg': kernel_headers_pkg, } class InstallGuestAdditions(Task): description = 'Installing the VirtualBox Guest Additions' phase = phases.package_installation predecessors = [InstallPackages] @classmethod def run(cls, info): from bootstrapvz.common.tools import log_call, log_check_call for line in log_check_call(['chroot', info.root, 'apt-cache', 'show', info.kernel['headers_pkg']]): key, value = line.split(':') if key.strip() == 'Depends': kernel_version = value.strip().split('linux-headers-')[-1] break guest_additions_path = rel_path(info.manifest.path, info.manifest.provider['guest_additions']) mount_dir = 'mnt/guest_additions' mount_path = os.path.join(info.root, mount_dir) os.mkdir(mount_path) root = info.volume.partition_map.root root.add_mount(guest_additions_path, mount_path, ['-o', 'loop']) install_script = os.path.join('/', mount_dir, 
'VBoxLinuxAdditions.run') install_wrapper_name = 'install_guest_additions.sh' install_wrapper = open(os.path.join(assets, install_wrapper_name)) \ .read() \ .replace("KERNEL_VERSION", kernel_version) \ .replace("KERNEL_ARCH", info.kernel['arch']) \ .replace("INSTALL_SCRIPT", install_script) install_wrapper_path = os.path.join(info.root, install_wrapper_name) with open(install_wrapper_path, 'w') as f: f.write(install_wrapper + '\n') # Don't check the return code of the scripts here, because 1 not necessarily means they have failed log_call(['chroot', info.root, 'bash', '/' + install_wrapper_name]) # VBoxService process could be running, as it is not affected by DisableDaemonAutostart log_call(['chroot', info.root, 'service', 'vboxadd-service', 'stop']) root.remove_mount(mount_path) os.rmdir(mount_path) os.remove(install_wrapper_path) bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/packages-kernels.yml000066400000000000000000000006241323112141500315450ustar00rootroot00000000000000--- # This is a mapping of Debian release codenames to processor architectures to kernel packages squeeze: amd64: linux-image-amd64 i386: linux-image-686 wheezy: amd64: linux-image-amd64 i386: linux-image-686 jessie: amd64: linux-image-amd64 i386: linux-image-686-pae stretch: amd64: linux-image-amd64 i386: linux-image-686-pae sid: amd64: linux-image-amd64 i386: linux-image-686-pae bootstrap-vz-0.9.11+20180121git/bootstrapvz/providers/virtualbox/tasks/packages.py000066400000000000000000000011451323112141500277320ustar00rootroot00000000000000from bootstrapvz.base import Task from bootstrapvz.common import phases class DefaultPackages(Task): description = 'Adding image packages required for virtualbox' phase = phases.preparation @classmethod def run(cls, info): from bootstrapvz.common.tools import config_get, rel_path kernel_packages_path = rel_path(__file__, 'packages-kernels.yml') kernel_package = config_get(kernel_packages_path, [info.manifest.release.codename, 
info.manifest.system['architecture']]) info.packages.add(kernel_package) bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/000077500000000000000000000000001323112141500215335ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/README.rst000066400000000000000000000154571323112141500232360ustar00rootroot00000000000000Remote bootstrapping ==================== bootstrap-vz is able to bootstrap images not only on the machine on which it is invoked, but also on remote machines that have bootstrap-vz installed. This is helpful when you create manifests on your own workstation, but have a beefed up remote build server which can create images quickly. There may also be situations where you want to build multiple manifests that have different providers and require the host machines to be running on that provider (e.g. EBS backed AMIs can only be created on EC2 instances), when doing this multiple times SSHing into the machines and copying the manifests can be a hassle. Lastly, the main motivation for supporting remote bootstrapping is the automation of `system testing <../../tests/system>`__. As you will see `further down <#bootstrap-vz-remote>`__, bootstrap-vz is able to select which build server is required for a specific test and run the bootstrapping procedure on said server. bootstrap-vz-remote ------------------- Normally you'd use ``bootstrap-vz`` to start a bootstrapping process. When bootstrapping remotely simply use ``bootstrap-vz-remote`` instead, it takes the same arguments plus a few additional ones: * ``--servers ``: Path to a list of build-servers (see `build-servers.yml <#build-servers-yml>`__ for more info) * ``--name ``: Selects a specific build-server from the list of build-servers * ``--release ``: Restricts the autoselection of build-servers to the ones with the specified release Much like when bootstrapping directly, you can press ``Ctrl+C`` at any time to abort the bootstrapping process. 
The remote process will receive the keyboard interrupt signal and begin cleaning up - pressing ``Ctrl+C`` a second time will abort that as well and kill the connection immediately. Note that there is also a ``bootstrap-vz-server``; this file is not meant to be invoked directly by the user, but is instead launched by bootstrap-vz on the remote server when connecting to it.

Dependencies
------------

For the remote bootstrapping procedure to work, you will need to install bootstrap-vz as well as the ``sudo`` command on the remote machine. Also make sure that all the needed dependencies for bootstrapping your image are installed. Locally the pip package `Pyro4`__ is needed.

__ https://pypi.python.org/pypi/Pyro4

build-servers.yml
-----------------

The file ``build-servers.yml`` informs bootstrap-vz about the different build servers you have at your disposal. In its simplest form you can just add your own machine like this:

.. code-block:: yaml

    local:
      type: local
      can_bootstrap: [virtualbox]
      release: jessie
      build_settings: {}

``type`` specifies how bootstrap-vz should connect to the build-server. ``local`` simply means that it will call the bootstrapping procedure directly; no new process is spawned.

``can_bootstrap`` tells bootstrap-vz for which providers this machine is capable of building images. With the exception of the EC2 provider, the accepted values match the accepted provider names in the manifest. For EC2 you can specify ``ec2-s3`` and/or ``ec2-ebs``. ``ec2-ebs`` specifies that the machine in question can bootstrap EBS backed images and should only be used when it is located on EC2. ``ec2-s3`` signifies that the machine is capable of bootstrapping S3 backed images.

Beyond being a string, the value of ``release`` is not enforced in any way. Its only current use is for ``bootstrap-vz-remote``, where you can restrict which build-server should be autoselected.
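The autoselection mentioned here is essentially a filter over the build-server list. A minimal stand-alone sketch, simplified from ``pick_build_server`` in ``bootstrapvz/remote/build_servers/__init__.py`` (the jsonschema validation step and the construction of the server objects are omitted):

```python
def pick_build_server(build_servers, manifest, preferences={}):
    # Simplified sketch of bootstrap-vz's build-server autoselection.
    # EC2 images are matched on their volume backing ('ec2-ebs'/'ec2-s3'),
    # every other provider simply on its name.
    if manifest['provider']['name'] == 'ec2':
        must_bootstrap = 'ec2-' + manifest['volume']['backing']
    else:
        must_bootstrap = manifest['provider']['name']

    for name, settings in build_servers.items():
        if preferences.get('name', name) != name:
            continue  # --name was given and doesn't match this server
        if preferences.get('release', settings['release']) != settings['release']:
            continue  # --release was given and doesn't match
        if must_bootstrap not in settings['can_bootstrap']:
            continue  # server can't build images for this provider
        return name, settings
    raise Exception('Unable to find a build server that matches your preferences.')
```

This is why ``--release`` only restricts the pool rather than guaranteeing a server: if no entry matches all three checks, selection simply fails.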
Remote settings
~~~~~~~~~~~~~~~

The other (and more interesting) setting for ``type`` is ``ssh``, which requires a few more configuration settings:

.. code-block:: yaml

    local_vm:
      type: ssh
      can_bootstrap:
        - virtualbox
        - ec2-s3
      release: wheezy
      # remote settings below here
      address: 127.0.0.1
      port: 2222
      username: admin
      keyfile: path_to_private_key_file
      server_bin: /root/bootstrap/bootstrap-vz-server

The last 5 settings specify how bootstrap-vz can connect to the remote build-server. While the initial handshake is achieved through SSH, bootstrap-vz mainly communicates with its counterpart through RPC (the communication port is automatically forwarded through an SSH tunnel).

``address``, ``port``, ``username`` and ``keyfile`` are hopefully self explanatory (remote machine address, SSH port, login name and path to private SSH key file). ``server_bin`` refers to the `above-mentioned <#bootstrap-vz-remote>`__ bootstrap-vz-server executable. This is the command bootstrap-vz executes on the remote machine to start the RPC server.

Be aware that there are a few limitations as to what bootstrap-vz is able to deal with, regarding the remote machine setup (in time they may be fixed by a benevolent contributor):

* The login user must be able to execute sudo without a password
* The private key file must be added to the ssh-agent before invocation (alternatively it may not be password protected)
* The server must already be part of the known_hosts list (bootstrap-vz uses ``ssh`` directly and cannot handle interactive prompts)

Build settings
~~~~~~~~~~~~~~

The build settings allow you to override specific manifest properties. This is useful when, for example, the VirtualBox guest additions ISO is located at ``/root/guest_additions.iso`` on server 1, while server 2 has it at ``/root/images/vbox.iso``.

.. code-block:: yaml

    local:
      type: local
      can_bootstrap:
        - virtualbox
        - ec2-s3
      release: jessie
      build_settings:
        guest_additions: /root/images/VBoxGuestAdditions.iso
        apt_proxy:
          address: 127.0.0.1
          port: 3142
        ec2-credentials:
          access-key: AFAKEACCESSKEYFORAWS
          secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
          certificate: /root/manifests/cert.pem
          private-key: /root/manifests/pk.pem
          user-id: 1234-1234-1234
        s3-region: eu-west-1

* ``guest_additions`` specifies the path to the VirtualBox guest additions ISO on the remote machine.
* ``apt_proxy`` sets the configuration for the `apt_proxy plugin <../plugins/apt_proxy>`__.
* ``ec2-credentials`` contains all the settings you know from EC2 manifests.
* ``s3-region`` overrides the s3 bucket region when bootstrapping S3 backed images.

Run settings
~~~~~~~~~~~~

The run settings hold information about how to start a bootstrapped image. This is useful only when running system tests.

.. code-block:: yaml

    local:
      type: local
      can_bootstrap:
        - ec2-s3
      release: jessie
      run_settings:
        ec2-credentials:
          access-key: AFAKEACCESSKEYFORAWS
          secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
        docker:
          machine: default

* ``ec2-credentials`` contains the access key and secret key used to boot an EC2 AMI.
* ``docker.machine`` is the docker machine on which an image built for docker should run.
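The build settings described above reach the manifest through a simple override step before bootstrapping starts. A rough sketch, condensed from ``BuildServer.apply_build_settings`` (only the ``guest_additions`` and ``apt_proxy`` overrides are shown; ``setdefault`` is used here so the ``plugins`` section is created when absent, which is an assumption of this sketch rather than a verbatim copy):

```python
def apply_build_settings(manifest_data, build_settings):
    # Sketch: override manifest properties with per-server build settings.
    provider = manifest_data['provider']
    # Point the virtualbox provider at this server's guest additions ISO
    if provider['name'] == 'virtualbox' and 'guest_additions' in provider:
        provider['guest_additions'] = build_settings['guest_additions']
    # Inject the server's apt proxy configuration as a plugin setting
    if 'apt_proxy' in build_settings:
        manifest_data.setdefault('plugins', {})['apt_proxy'] = build_settings['apt_proxy']
    return manifest_data
```

The same pattern is repeated for ``ec2-credentials`` and ``s3-region`` in the real implementation.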
bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/__init__.py000066400000000000000000000121741323112141500236510ustar00rootroot00000000000000"""Remote module containing methods to bootstrap remotely """ from Pyro4.util import SerializerBase import logging log = logging.getLogger(__name__) supported_classes = ['bootstrapvz.base.manifest.Manifest', 'bootstrapvz.base.bootstrapinfo.BootstrapInformation', 'bootstrapvz.base.bootstrapinfo.DictClass', 'bootstrapvz.common.fs.loopbackvolume.LoopbackVolume', 'bootstrapvz.common.fs.qemuvolume.QEMUVolume', 'bootstrapvz.common.fs.virtualdiskimage.VirtualDiskImage', 'bootstrapvz.common.fs.virtualmachinedisk.VirtualMachineDisk', 'bootstrapvz.base.fs.partitionmaps.gpt.GPTPartitionMap', 'bootstrapvz.base.fs.partitionmaps.msdos.MSDOSPartitionMap', 'bootstrapvz.base.fs.partitionmaps.none.NoPartitions', 'bootstrapvz.base.fs.partitions.mount.Mount', 'bootstrapvz.base.fs.partitions.gpt.GPTPartition', 'bootstrapvz.base.fs.partitions.gpt_swap.GPTSwapPartition', 'bootstrapvz.base.fs.partitions.msdos.MSDOSPartition', 'bootstrapvz.base.fs.partitions.msdos_swap.MSDOSSwapPartition', 'bootstrapvz.base.fs.partitions.single.SinglePartition', 'bootstrapvz.base.fs.partitions.unformatted.UnformattedPartition', 'bootstrapvz.common.bytes.Bytes', 'bootstrapvz.common.sectors.Sectors', ] supported_exceptions = ['bootstrapvz.common.exceptions.ManifestError', 'bootstrapvz.common.exceptions.TaskListError', 'bootstrapvz.common.exceptions.TaskError', 'bootstrapvz.base.fs.exceptions.VolumeError', 'bootstrapvz.base.fs.exceptions.PartitionError', 'bootstrapvz.base.pkg.exceptions.PackageError', 'bootstrapvz.base.pkg.exceptions.SourceError', 'bootstrapvz.common.exceptions.UnitError', 'bootstrapvz.common.fsm_proxy.FSMProxyError', 'subprocess.CalledProcessError', ] def register_deserialization_handlers(): for supported_class in supported_classes: SerializerBase.register_dict_to_class(supported_class, deserialize) for supported_exc in supported_exceptions: 
SerializerBase.register_dict_to_class(supported_exc, deserialize_exception) import subprocess SerializerBase.register_class_to_dict(subprocess.CalledProcessError, serialize_called_process_error) def unregister_deserialization_handlers(): for supported_class in supported_classes: SerializerBase.unregister_dict_to_class(supported_class, deserialize) for supported_exc in supported_exceptions: SerializerBase.unregister_dict_to_class(supported_exc, deserialize_exception) def deserialize_exception(fq_classname, data): class_object = get_class_object(fq_classname) return SerializerBase.make_exception(class_object, data) def deserialize(fq_classname, data): class_object = get_class_object(fq_classname) from Pyro4.util import SerpentSerializer from Pyro4.errors import SecurityError ser = SerpentSerializer() state = {} for key, value in data.items(): try: state[key] = ser.recreate_classes(value) except SecurityError as e: msg = 'Unable to deserialize key `{key}\' on {class_name}'.format(key=key, class_name=fq_classname) raise Exception(msg, e) instance = class_object.__new__(class_object) instance.__setstate__(state) return instance def serialize_called_process_error(obj): # This is by far the weirdest exception serialization. # There is a bug in both Pyro4 and the Python subprocess module. # CalledProcessError does not populate its args property, # although according to https://docs.python.org/2/library/exceptions.html#exceptions.BaseException.args # it should... # So we populate that property during serialization instead # (the code is grabbed directly from Pyro4's class_to_dict()) # However, Pyro4 still cannot figure out to call the deserializer # unless we also use setattr() on the exception to set the args below # (before throwing it). # Mind you, the error "__init__() takes at least 3 arguments (2 given)" # is thrown *on the server* if we don't use setattr(). # It's all very confusing to me and I'm not entirely # sure what the exact problem is. 
Regardless - it works, so there. return {'__class__': obj.__class__.__module__ + '.' + obj.__class__.__name__, '__exception__': True, 'args': (obj.returncode, obj.cmd, obj.output), 'attributes': vars(obj) # add custom exception attributes } def get_class_object(fq_classname): parts = fq_classname.split('.') module_name = '.'.join(parts[:-1]) class_name = parts[-1] import importlib imported_module = importlib.import_module(module_name) return getattr(imported_module, class_name) bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/000077500000000000000000000000001323112141500244035ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/__init__.py000066400000000000000000000030031323112141500265100ustar00rootroot00000000000000 def pick_build_server(build_servers, manifest, preferences={}): # Validate the build servers list from bootstrapvz.common.tools import load_data, rel_path schema = load_data(rel_path(__file__, 'build-servers-schema.yml')) import jsonschema jsonschema.validate(build_servers, schema) if manifest['provider']['name'] == 'ec2': must_bootstrap = 'ec2-' + manifest['volume']['backing'] else: must_bootstrap = manifest['provider']['name'] def matches(name, settings): if preferences.get('name', name) != name: return False if preferences.get('release', settings['release']) != settings['release']: return False if must_bootstrap not in settings['can_bootstrap']: return False return True for name, settings in build_servers.iteritems(): if not matches(name, settings): continue if settings['type'] == 'local': from local import LocalBuildServer return LocalBuildServer(name, settings) else: from remote import RemoteBuildServer return RemoteBuildServer(name, settings) raise Exception('Unable to find a build server that matches your preferences.') def getNPorts(n, port_range=(1024, 65535)): import random ports = [] for i in range(0, n): while True: port = random.randrange(*port_range) if port not in ports: 
ports.append(port) break return ports bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/build-servers-schema.yml000066400000000000000000000042771323112141500311640ustar00rootroot00000000000000--- $schema: http://json-schema.org/draft-04/schema# title: Build server settings list type: object properties: local: type: object properties: type: {enum: [local]} can_bootstrap: {$ref: '#/definitions/can_bootstrap'} release: {type: string} build_settings: {$ref: '#/definitions/build_settings'} required: [type, can_bootstrap, release] patternProperties: ^(?!local).*$: {$ref: '#/definitions/ssh'} definitions: absolute_path: type: string pattern: ^/[^\0]+$ can_bootstrap: type: array items: enum: - virtualbox - ec2-ebs - ec2-s3 - docker build_settings: type: object properties: guest_additions: {$ref: '#/definitions/absolute_path'} ec2-credentials: required: [access-key, secret-key] type: object properties: access-key: {type: string} secret-key: {type: string} certificate: {type: string} private-key: {type: string} user-id: type: string pattern: (^arn:aws:iam::\d*:user/\w.*$)|(^\d{4}-\d{4}-\d{4}$) additional_properties: false apt_proxy: type: object properties: address: {type: string} port: {type: integer} persistent: {type: boolean} required: [address, port] run_settings: type: object properties: ec2-credentials: required: [access-key, secret-key] type: object properties: access-key: {type: string} secret-key: {type: string} additional_properties: false docker: type: object properties: machine: {type: string} additional_properties: false ssh: type: object properties: type: {enum: [ssh]} can_bootstrap: {$ref: '#/definitions/can_bootstrap'} build_settings: {$ref: '#/definitions/build_settings'} release: {type: string} address: {type: string} port: {type: integer} username: {type: string} password: {type: string} keyfile: {$ref: '#/definitions/absolute_path'} server_bin: {$ref: '#/definitions/absolute_path'} required: [type, can_bootstrap, release] 
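In plain Python, the constraints this schema expresses boil down to a few key checks. The following stdlib-only sketch is illustrative: the real validation runs jsonschema against ``build-servers-schema.yml``, and note that the connection details (``address``, ``port``, etc.) are listed as optional by the schema but are expected at runtime by ``RemoteBuildServer``, which is what this sketch enforces.

```python
# Illustrative sanity check mirroring the core of build-servers-schema.yml.
VALID_BOOTSTRAP = {'virtualbox', 'ec2-ebs', 'ec2-s3', 'docker'}

def check_build_server(name, settings):
    errors = []
    # Every server needs these three keys, regardless of type
    for key in ('type', 'can_bootstrap', 'release'):
        if key not in settings:
            errors.append('{0}: missing required key `{1}\''.format(name, key))
    # can_bootstrap entries must be known provider targets
    unknown = set(settings.get('can_bootstrap', [])) - VALID_BOOTSTRAP
    if unknown:
        errors.append('{0}: unknown bootstrap targets {1}'.format(name, sorted(unknown)))
    # ssh servers additionally need connection details at runtime
    if settings.get('type') == 'ssh':
        for key in ('address', 'port', 'username', 'keyfile', 'server_bin'):
            if key not in settings:
                errors.append('{0}: ssh server missing `{1}\''.format(name, key))
    return errors
```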
bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/build_server.py000066400000000000000000000027051323112141500274460ustar00rootroot00000000000000 class BuildServer(object): def __init__(self, name, settings): self.name = name self.settings = settings self.build_settings = settings.get('build_settings', {}) self.run_settings = settings.get('run_settings', {}) self.can_bootstrap = settings['can_bootstrap'] self.release = settings.get('release', None) def apply_build_settings(self, manifest_data): if manifest_data['provider']['name'] == 'virtualbox' and 'guest_additions' in manifest_data['provider']: manifest_data['provider']['guest_additions'] = self.build_settings['guest_additions'] if 'apt_proxy' in self.build_settings: manifest_data.get('plugins', {})['apt_proxy'] = self.build_settings['apt_proxy'] if 'ec2-credentials' in self.build_settings: if 'credentials' not in manifest_data['provider']: manifest_data['provider']['credentials'] = {} for key in ['access-key', 'secret-key', 'certificate', 'private-key', 'user-id']: if key in self.build_settings['ec2-credentials']: manifest_data['provider']['credentials'][key] = self.build_settings['ec2-credentials'][key] if 's3-region' in self.build_settings and manifest_data['volume']['backing'] == 's3': if 'region' not in manifest_data['image']: manifest_data['image']['region'] = self.build_settings['s3-region'] return manifest_data bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/callback.py000066400000000000000000000021351323112141500265120ustar00rootroot00000000000000import Pyro4 import logging Pyro4.config.REQUIRE_EXPOSE = True log = logging.getLogger(__name__) class CallbackServer(object): def __init__(self, listen_port, remote_port): self.daemon = Pyro4.Daemon(host='localhost', port=listen_port, nathost='localhost', natport=remote_port, unixsocket=None) self.daemon.register(self) def __enter__(self): def serve(): self.daemon.requestLoop() from threading import Thread self.thread = 
Thread(target=serve) log.debug('Starting callback server') self.thread.start() return self def __exit__(self, type, value, traceback): log.debug('Shutting down callback server') self.daemon.shutdown() self.thread.join() @Pyro4.expose def handle_log(self, pickled_record): import pickle record = pickle.loads(pickled_record) log = logging.getLogger() record.extra = getattr(record, 'extra', {}) record.extra['source'] = 'remote' log.handle(record) bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/local.py000066400000000000000000000005261323112141500260520ustar00rootroot00000000000000from build_server import BuildServer from contextlib import contextmanager class LocalBuildServer(BuildServer): @contextmanager def connect(self): yield LocalConnection() class LocalConnection(object): def run(self, *args, **kwargs): from bootstrapvz.base.main import run return run(*args, **kwargs) bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/build_servers/remote.py000066400000000000000000000124371323112141500262570ustar00rootroot00000000000000from build_server import BuildServer from bootstrapvz.common.tools import log_check_call from contextlib import contextmanager import logging log = logging.getLogger(__name__) class RemoteBuildServer(BuildServer): def __init__(self, name, settings): super(RemoteBuildServer, self).__init__(name, settings) self.address = settings['address'] self.port = settings['port'] self.username = settings['username'] self.password = settings.get('password', None) self.keyfile = settings['keyfile'] self.server_bin = settings['server_bin'] @contextmanager def connect(self): with self.spawn_server() as forwards: args = {'listen_port': forwards['local_callback_port'], 'remote_port': forwards['remote_callback_port']} from callback import CallbackServer with CallbackServer(**args) as callback_server: with connect_pyro('localhost', forwards['local_server_port']) as connection: connection.set_callback_server(callback_server) yield connection 
@contextmanager def spawn_server(self): from . import getNPorts # We can't use :0 for the forwarding ports because # A: It's quite hard to retrieve the port on the remote after the daemon has started # B: SSH doesn't accept 0:localhost:0 as a port forwarding option [local_server_port, local_callback_port] = getNPorts(2) [remote_server_port, remote_callback_port] = getNPorts(2) server_cmd = ['sudo', self.server_bin, '--listen', str(remote_server_port)] def set_process_group(): # Changes the process group of a command so that any SIGINT # for the main thread will not be propagated to it. # We'd like to handle SIGINT ourselves (i.e. propagate the shutdown to the serverside) import os os.setpgrp() addr_arg = '{user}@{host}'.format(user=self.username, host=self.address) ssh_cmd = ['ssh', '-i', self.keyfile, '-p', str(self.port), '-L' + str(local_server_port) + ':localhost:' + str(remote_server_port), '-R' + str(remote_callback_port) + ':localhost:' + str(local_callback_port), addr_arg] full_cmd = ssh_cmd + ['--'] + server_cmd log.debug('Opening SSH connection to build server `{name}\''.format(name=self.name)) import sys import subprocess ssh_process = subprocess.Popen(args=full_cmd, stdout=sys.stderr, stderr=sys.stderr, preexec_fn=set_process_group) try: yield {'local_server_port': local_server_port, 'local_callback_port': local_callback_port, 'remote_server_port': remote_server_port, 'remote_callback_port': remote_callback_port} finally: log.debug('Waiting for SSH connection to the build server to close') import time start = time.time() while ssh_process.poll() is None: if time.time() - start > 5: log.debug('Forcefully terminating SSH connection to the build server') ssh_process.terminate() break else: time.sleep(0.5) def download(self, src, dst): log.debug('Downloading file `{src}\' from ' 'build server `{name}\' to `{dst}\'' .format(src=src, dst=dst, name=self.name)) # Make sure we can read the file as {user} self.remote_command(['sudo', 'chown', self.username, src]) 
src_arg = '{user}@{host}:{path}'.format(user=self.username, host=self.address, path=src) log_check_call(['scp', '-i', self.keyfile, '-P', str(self.port), src_arg, dst]) def delete(self, path): log.debug('Deleting file `{path}\' on build server `{name}\''.format(path=path, name=self.name)) self.remote_command(['sudo', 'rm', path]) def remote_command(self, command): ssh_cmd = ['ssh', '-i', self.keyfile, '-p', str(self.port), self.username + '@' + self.address, '--'] + command log_check_call(ssh_cmd) @contextmanager def connect_pyro(host, port): import Pyro4 server_uri = 'PYRO:server@{host}:{port}'.format(host=host, port=port) connection = Pyro4.Proxy(server_uri) log.debug('Connecting to RPC daemon') connected = False try: remaining_retries = 5 while not connected: try: connection.ping() connected = True except (Pyro4.errors.ConnectionClosedError, Pyro4.errors.CommunicationError): if remaining_retries > 0: remaining_retries -= 1 from time import sleep sleep(2) else: raise yield connection finally: if connected: log.debug('Stopping RPC daemon') connection.stop() connection._pyroRelease() else: log.warn('Unable to stop RPC daemon, it might still be running on the server') bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/log.py000066400000000000000000000014161323112141500226700ustar00rootroot00000000000000import logging class LogForwarder(logging.Handler): def __init__(self, level=logging.NOTSET): self.server = None super(LogForwarder, self).__init__(level) def set_server(self, server): self.server = server def emit(self, record): if self.server is not None: if record.exc_info is not None: import traceback exc_type, exc_value, exc_traceback = record.exc_info record.extra = getattr(record, 'extra', {}) record.extra['traceback'] = traceback.format_exception(exc_type, exc_value, exc_traceback) record.exc_info = None # TODO: Use serpent instead import pickle self.server.handle_log(pickle.dumps(record)) 
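The `LogForwarder` above pickles whole `LogRecord` objects and ships them to the `CallbackServer`, which re-tags them as remote and re-emits them through the local logging tree. The round trip can be sketched in isolation; the helper names `forward` and `receive` are invented here and merely mirror what `LogForwarder.emit` and `CallbackServer.handle_log` do, without the Pyro4 transport in between:

```python
# Sketch of the log-record round trip used by bootstrapvz.remote
# (helper names are illustrative, not part of the real API).
import logging
import pickle
import traceback


def forward(record):
    """Serialize a LogRecord the way LogForwarder.emit does."""
    # exc_info holds a traceback object, which is not picklable, so it is
    # converted to formatted text and dropped before serialization.
    if record.exc_info is not None:
        record.extra = getattr(record, 'extra', {})
        record.extra['traceback'] = ''.join(traceback.format_exception(*record.exc_info))
        record.exc_info = None
    return pickle.dumps(record)


def receive(pickled_record):
    """Deserialize and tag the record the way CallbackServer.handle_log does."""
    record = pickle.loads(pickled_record)
    record.extra = getattr(record, 'extra', {})
    record.extra['source'] = 'remote'
    return record


record = logging.LogRecord('bootstrapvz', logging.INFO, __file__, 1,
                           'bootstrapping %s', ('ami',), None)
restored = receive(forward(record))
print(restored.getMessage())     # bootstrapping ami
print(restored.extra['source'])  # remote
```

Pickling the record (rather than, say, serializing `record.__dict__` as `logging.handlers.SocketHandler` does) is what the `# TODO: Use serpent instead` comment in log.py refers to: pickle is convenient but requires trust between client and server.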
bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/main.py000066400000000000000000000050651323112141500230370ustar00rootroot00000000000000"""Main module containing all the setup necessary for running the remote bootstrapping process """ def main(): """Main function for invoking the bootstrap process remotely """ # Get the commandline arguments opts = get_opts() from bootstrapvz.common.tools import load_data # load the manifest data, we might want to modify it later on manifest_data = load_data(opts['MANIFEST']) # load the build servers file build_servers = load_data(opts['--servers']) # Pick a build server from build_servers import pick_build_server preferences = {} if opts['--name'] is not None: preferences['name'] = opts['--name'] if opts['--release'] is not None: preferences['release'] = opts['--release'] build_server = pick_build_server(build_servers, manifest_data, preferences) # Apply the build server settings to the manifest (e.g. the virtualbox guest additions path) manifest_data = build_server.apply_build_settings(manifest_data) # Load the manifest from bootstrapvz.base.manifest import Manifest manifest = Manifest(path=opts['MANIFEST'], data=manifest_data) # Set up logging from bootstrapvz.base.main import setup_loggers setup_loggers(opts) # Register deserialization handlers for objects # that will pass between server and client from . 
import register_deserialization_handlers register_deserialization_handlers() # Everything has been set up, connect to the server and begin the bootstrapping process with build_server.connect() as connection: connection.run(manifest, debug=opts['--debug'], dry_run=opts['--dry-run']) def get_opts(): """Creates an argument parser and returns the arguments it has parsed """ from docopt import docopt usage = """bootstrap-vz-remote Usage: bootstrap-vz-remote [options] --servers=<path> MANIFEST Options: --servers <path> Path to list of build servers --name <name> Selects specific server from the build servers list --release <release> Require the build server OS to be a specific release --log <path> Log to given directory [default: /var/log/bootstrap-vz] If <path> is `-' file logging will be disabled. --pause-on-error Pause on error, before rollback --dry-run Don't actually run the tasks --color=auto|always|never Colorize the console output [default: auto] --debug Print debugging information -h, --help show this help """ return docopt(usage) bootstrap-vz-0.9.11+20180121git/bootstrapvz/remote/server.py000066400000000000000000000104601323112141500234140ustar00rootroot00000000000000import Pyro4 import logging Pyro4.config.REQUIRE_EXPOSE = True log = logging.getLogger(__name__) def main(): opts = getopts() from . 
import register_deserialization_handlers register_deserialization_handlers() log_forwarder = setup_logging() server = Server(opts['--listen'], log_forwarder) server.start() def setup_logging(): root = logging.getLogger() root.setLevel(logging.NOTSET) from log import LogForwarder log_forwarder = LogForwarder() root.addHandler(log_forwarder) from datetime import datetime import os.path from bootstrapvz.base.log import get_file_handler timestamp = datetime.now().strftime('%Y%m%d%H%M%S') filename = '{timestamp}_remote.log'.format(timestamp=timestamp) logfile_path = os.path.join('/var/log/bootstrap-vz', filename) file_handler = get_file_handler(logfile_path, True) root.addHandler(file_handler) return log_forwarder def getopts(): from docopt import docopt usage = """bootstrap-vz-server Usage: bootstrap-vz-server [options] Options: --listen <port> Serve on specified port [default: 46675] -h, --help show this help """ return docopt(usage) class Server(object): def __init__(self, listen_port, log_forwarder): self.stop_serving = False self.log_forwarder = log_forwarder self.listen_port = listen_port def start(self): Pyro4.config.COMMTIMEOUT = 0.5 daemon = Pyro4.Daemon('localhost', port=int(self.listen_port), unixsocket=None) daemon.register(self, 'server') daemon.requestLoop(loopCondition=lambda: not self.stop_serving) @Pyro4.expose def set_callback_server(self, server): log.debug('Forwarding logs to the callback server') self.log_forwarder.set_server(server) @Pyro4.expose def ping(self): if hasattr(self, 'connection_timeout'): self.connection_timeout.cancel() del self.connection_timeout return 'pong' @Pyro4.expose def stop(self): if hasattr(self, 'bootstrap_process'): log.warn('Sending SIGINT to bootstrapping process') import os import signal os.killpg(self.bootstrap_process.pid, signal.SIGINT) self.bootstrap_process.join() # We can't send a SIGINT to the server, # for some reason the Pyro4 shutdowns are rather unclean, # throwing exceptions and such. 
self.stop_serving = True @Pyro4.expose def run(self, manifest, debug=False, dry_run=False): def bootstrap(queue): # setsid() creates a new session, making this process the group leader. # We do that, so when the server calls killpg (kill process group) # on us, it won't kill itself (this process was spawned from a # thread under the server, meaning it's part of the same group). # The process hierarchy looks like this: # Pyro server (process - listening on a port) # +- pool thread # +- pool thread # +- pool thread # +- started thread (the one that got the "run()" call) # L bootstrap() process (us) # Calling setsid() also fixes another problem: # SIGINTs sent to this process seem to be redirected # to the process leader. Since there is a thread between # us and the process leader, the signal will not be propagated # (signals are not propagated to threads), this means that any # subprocess we start (i.e. debootstrap) will not get a SIGINT. import os os.setsid() from bootstrapvz.base.main import run try: bootstrap_info = run(manifest, debug=debug, dry_run=dry_run) queue.put(bootstrap_info) except (Exception, KeyboardInterrupt) as e: queue.put(e) from multiprocessing import Queue from multiprocessing import Process queue = Queue() self.bootstrap_process = Process(target=bootstrap, args=(queue,)) self.bootstrap_process.start() self.bootstrap_process.join() del self.bootstrap_process result = queue.get() if isinstance(result, Exception): raise result return result bootstrap-vz-0.9.11+20180121git/docs/000077500000000000000000000000001323112141500165735ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/.gitignore000066400000000000000000000000071323112141500205600ustar00rootroot00000000000000_build bootstrap-vz-0.9.11+20180121git/docs/README.rst000066400000000000000000000031611323112141500202630ustar00rootroot00000000000000:orphan: Documentation ============= Both the end-user and developer documentation is combined into a single sphinx build (the two were 
previously split between github pages and sphinx). Building -------- To build the documentation, simply run ``tox -e docs`` in the project root. Serving the docs through http can be achieved by subsequently running ``(cd docs/_build/html; python -m SimpleHTTPServer 8080)`` and accessing them on ``http://localhost:8080/``. READMEs ------- Many of the folders in the project have a README.rst which describes the purpose of the contents in that folder. These files are automatically included when building the documentation, through use of the `include`__ directive. __ http://docutils.sourceforge.net/docs/ref/rst/directives.html#including-an-external-document-fragment Include files for the providers and plugins are autogenerated through the sphinx conf.py script. Links ----- All links in rst files outside of ``docs/`` (but also ``docs/README.rst``) that link to other rst files are relative and reference folder names when the link would point at a README.rst otherwise. This is done to take advantage of the github feature where README files are displayed when viewing its parent folder. When accessing the ``manifests/`` folder for example, the documentation for how manifests work is displayed at the bottom. When sphinx generates the documentation, these relative links are automatically converted into relative links that work inside the generated html pages instead. If you are interested in how this works, take a look at the link transformation module in ``docs/transform_github_links``. 
bootstrap-vz-0.9.11+20180121git/docs/__init__.py000066400000000000000000000000001323112141500206720ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/_static/000077500000000000000000000000001323112141500202215ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/_static/.gitignore000066400000000000000000000000131323112141500222030ustar00rootroot00000000000000graph.json bootstrap-vz-0.9.11+20180121git/docs/_static/taskoverview.coffee000066400000000000000000000147511323112141500241330ustar00rootroot00000000000000class window.TaskOverview viewBoxHeight = 800 viewBoxWidth = 800 margins = top: 200 left: 50 bottom: 200 right: 50 gravity = lateral: .1 longitudinal: .2 length = ([x,y]) -> Math.sqrt(x*x + y*y) sum = ([x1,y1], [x2,y2]) -> [x1+x2, y1+y2] diff = ([x1,y1], [x2,y2]) -> [x1-x2, y1-y2] prod = ([x,y], scalar) -> [x*scalar, y*scalar] div = ([x,y], scalar) -> [x/scalar, y/scalar] unit = (vector) -> div(vector, length(vector)) scale = (vector, scalar) -> prod(unit(vector), scalar) position = (coord, vector) -> [coord, sum(coord, vector)] free = ([coord1, coord2]) -> diff(coord2, coord1) pmult = (pvector=[coord1, _], scalar) -> position(coord1, prod(free(pvector), scalar)) pdiv = (pvector=[coord1, _], scalar) -> position(coord1, div(free(pvector), scalar)) constructor: ({@selector}) -> @svg = d3.select(@selector) .attr('viewBox', "0 0 #{viewBoxWidth} #{viewBoxHeight}") d3.json '../_static/graph.json', @buildGraph buildGraph: (error, @data) => @createDefinitions() taskLayout = @createNodes() taskLayout.start() createDefinitions: () -> definitions = @svg.append 'defs' arrow = definitions.append('marker') arrow.attr('id', 'right-arrowhead') .attr('refX', arrowHeight = 4) .attr('refY', (arrowWidth = 6) / 2) .attr('markerWidth', arrowHeight) .attr('markerHeight', arrowWidth) .attr('orient', 'auto') .append('path').attr('d', "M0,0 V#{arrowWidth} L#{arrowHeight},#{arrowWidth/2} Z") partitionKey = 'phase' nodeColorKey = 'module' keyMap: phase: 
'phases' module: 'modules' partition: (key, idx) -> return @data[@keyMap[key]] nodeRadius = 10 nodePadding = 10 createNodes: () -> options = gravity: 0 linkDistance: 50 linkStrength: .8 charge: -130 size: [viewBoxWidth, viewBoxHeight] layout = d3.layout.force() layout[option](value) for option, value of options array_sum = (list) -> list.reduce(((a,b) -> a + b), 0) partitioning = nonLinear: (groupCounts, range, k=2) -> ratios = (Math.pow(count, 1/k) for count in groupCounts) fraction = range / (array_sum ratios) return (fraction * ratio for ratio in ratios) linear: (groups, range) -> fraction = range / groups return (fraction for _ in [0..groups]) offset: (ranges, i) -> (array_sum ranges.slice 0, i) + ranges[i] / 2 layout.nodes @data.nodes layout.links @data.links grouping = d3.nest().key((d) -> d[partitionKey]).sortKeys(d3.ascending) widths = partitioning.nonLinear((d.values for d in grouping.rollup((d) -> d.length).entries(@data.nodes)), viewBoxWidth - margins.left - margins.right) heights = partitioning.nonLinear((d.values for d in grouping.rollup((d) -> d.length).entries(@data.nodes)), viewBoxHeight - margins.top - margins.bottom, 4) for node in @data.nodes # node.cx = margins.left + partitioning.offset(widths, node[partitionKey]) # node.cy = viewBoxHeight / 2 + margins.top node.cx = viewBoxWidth / 2 + margins.left node.cy = margins.top + partitioning.offset(heights, node[partitionKey]) node.radius = nodeRadius groups = d3.nest().key((d) -> d[partitionKey]) .sortKeys(d3.ascending) .entries(layout.nodes()) hullColors = d3.scale.category20() nodeColors = d3.scale.category20c() hulls = @svg.append('g').attr('class', 'hulls') .selectAll('path').data(groups).enter() .append('path').attr('id', (d) -> "hull-#{d.key}") .style 'fill': (d, i) -> hullColors(i) 'stroke': (d, i) -> hullColors(i) hullLabels = @svg.append('g').attr('class', 'hull-labels') .selectAll('text').data(groups).enter() .append('text') hullLabels.append('textPath').attr('xlink:href', (d) -> 
"#hull-#{d.key}") .text((d) => @partition(partitionKey)[d.key].name) links = @svg.append('g').attr('class', 'links') .selectAll('line').data(layout.links()).enter() .append('line').attr('marker-end', 'url(#right-arrowhead)') mouseOver = (d) -> labels.classed 'hover', (l) -> d is l nodes.classed 'highlight', (n) -> d.module is n.module mouseOut = (d) -> labels.classed 'hover', no nodes.classed 'highlight', no nodes = @svg.append('g').attr('class', 'nodes') .selectAll('g.partition').data(groups).enter() .append('g').attr('class', 'partition') .selectAll('circle').data((d) -> d.values).enter() .append('circle').attr('r', (d) -> d.radius) .style('fill', (d) -> nodeColors(d[nodeColorKey])) .call(layout.drag) .on('mouseover', mouseOver) .on('mouseout', mouseOut) labels = @svg.append('g').attr('class', 'node-labels') .selectAll('g.partition').data(groups).enter() .append('g').attr('class', 'partition') .selectAll('text').data((d) -> d.values).enter() .append('text').text((d) -> d.name) .attr('transform', (d) -> offset=-(d.radius + 5); "translate(0,#{offset})") rotate = (x, n) -> n = n % x.length x.slice(0,-n).reverse().concat(x.slice(-n).reverse()).reverse() circle_coords = (parts) -> partSize = 2*Math.PI/parts return (for i in [0..parts] theta = partSize*i [(Math.cos theta), (Math.sin theta)]) hullPointMatrix = (prod(v, nodeRadius*2) for v in circle_coords(16)) hullBoundaries = (d) -> nodePoints = d.values.map (i) -> [i.x, i.y] padded_points = [] padded_points.push sum(p, v) for v in hullPointMatrix for p in nodePoints points = d3.geom.hull(padded_points) points = rotate points, Math.floor -points.length / 2.5 "M#{points.join('L')}Z" gravity_fn = (alpha) => (d) -> d.x += (d.cx - d.x) * alpha * gravity.lateral d.y += (d.cy - d.y) * alpha * gravity.longitudinal layout.on 'tick', (e) => hulls.attr('d', hullBoundaries) nodes.each gravity_fn(e.alpha) nodes.attr cx: ({x}) -> x cy: ({y}) -> y labels.each gravity_fn(e.alpha) labels.attr x: ({x}) -> x y: ({y}) -> y links.each 
({source:{x:x1,y:y1},target:{x:x2,y:y2}}, i) -> [x,y] = scale(free([[x1,y1], [x2,y2]]), nodeRadius) @setAttribute 'x1', x1 + x @setAttribute 'y1', y1 + y @setAttribute 'x2', x2 - x @setAttribute 'y2', y2 - y return layout bootstrap-vz-0.9.11+20180121git/docs/_static/taskoverview.less000066400000000000000000000010311323112141500236350ustar00rootroot00000000000000#taskoverview-graph { g.hulls path { opacity: 0.25; } g.hull-labels text { } g.nodes circle { stroke: #000000; &.highlight { stroke: #555599; stroke-width: 2.5px; fill: #EEAAAA !important; } opacity: .9; stroke-width: 1.5px; } g.node-labels text { pointer-events: none; font: 10px sans-serif; text-anchor: middle; text-shadow: 0 0 2px #FFFFFF; font-weight: bold; opacity: 0; &.hover { transition: opacity .5s; opacity: .9; } } g.links line { stroke: #999; stroke-opacity: .6; } } bootstrap-vz-0.9.11+20180121git/docs/api/000077500000000000000000000000001323112141500173445ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/api/base/000077500000000000000000000000001323112141500202565ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/api/base/fs.rst000066400000000000000000000034361323112141500214260ustar00rootroot00000000000000 Filesystem handling =================== Volume ------ .. automodule:: bootstrapvz.base.fs.volume :members: :private-members: Partitionmaps ------------- Abstract Partitionmap ''''''''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitionmaps.abstract :members: :private-members: GPT Partitionmap '''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitionmaps.gpt :members: :private-members: MS-DOS Partitionmap ''''''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitionmaps.msdos :members: :private-members: No Partitionmap ''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitionmaps.none :members: :private-members: Partitions ---------- Abstract partition '''''''''''''''''' .. 
automodule:: bootstrapvz.base.fs.partitions.abstract :members: :private-members: Base partition '''''''''''''' .. automodule:: bootstrapvz.base.fs.partitions.base :members: :private-members: GPT partition ''''''''''''' .. automodule:: bootstrapvz.base.fs.partitions.gpt :members: :private-members: GPT swap partition .................. .. automodule:: bootstrapvz.base.fs.partitions.gpt_swap :members: :private-members: MS-DOS partition '''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitions.msdos :members: :private-members: MS-DOS swap partition ..................... .. automodule:: bootstrapvz.base.fs.partitions.msdos_swap :members: :private-members: Single '''''' .. automodule:: bootstrapvz.base.fs.partitions.single :members: :private-members: Unformatted partition ''''''''''''''''''''' .. automodule:: bootstrapvz.base.fs.partitions.unformatted :members: :private-members: Exceptions ---------- .. automodule:: bootstrapvz.base.fs.exceptions :members: :private-members: bootstrap-vz-0.9.11+20180121git/docs/api/base/index.rst000066400000000000000000000014771323112141500221300ustar00rootroot00000000000000 Base functionality ================== The base module represents concepts of the bootstrapping process that tasks can interact with and handles the gather, sorting and running of tasks. .. toctree:: :maxdepth: 2 :glob: * Bootstrap information --------------------- .. automodule:: bootstrapvz.base.bootstrapinfo :members: :private-members: Manifest -------- .. automodule:: bootstrapvz.base.manifest :members: :private-members: Tasklist -------- .. automodule:: bootstrapvz.base.tasklist :members: :private-members: Logging -------- .. automodule:: bootstrapvz.base.log :members: :private-members: Task -------- .. automodule:: bootstrapvz.base.task :members: :private-members: Phase -------- .. 
automodule:: bootstrapvz.base.phase :members: :private-members: bootstrap-vz-0.9.11+20180121git/docs/api/base/pkg.rst000066400000000000000000000007531323112141500215760ustar00rootroot00000000000000 Package handling ================ Package list ------------ .. automodule:: bootstrapvz.base.pkg.packagelist :members: :private-members: Sources list ------------ .. automodule:: bootstrapvz.base.pkg.sourceslist :members: :private-members: Preferences list ---------------- .. automodule:: bootstrapvz.base.pkg.preferenceslist :members: :private-members: Exceptions ---------- .. automodule:: bootstrapvz.base.pkg.exceptions :members: :private-members: bootstrap-vz-0.9.11+20180121git/docs/api/common/000077500000000000000000000000001323112141500206345ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/api/common/fs.rst000066400000000000000000000000661323112141500220000ustar00rootroot00000000000000 Volume representations ============================= bootstrap-vz-0.9.11+20180121git/docs/api/common/index.rst000066400000000000000000000004151323112141500224750ustar00rootroot00000000000000 Common ====== The common module contains features that are common to multiple providers and plugins. It holds both a large set of shared tasks and also various tools that are used by both the base module and tasks. .. toctree:: :maxdepth: 2 fs tasks/index bootstrap-vz-0.9.11+20180121git/docs/api/common/tasks/000077500000000000000000000000001323112141500217615ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/api/common/tasks/index.rst000066400000000000000000000000711323112141500236200ustar00rootroot00000000000000 Shared tasks ============ .. toctree:: :maxdepth: 2 bootstrap-vz-0.9.11+20180121git/docs/api/index.rst000066400000000000000000000001221323112141500212000ustar00rootroot00000000000000API === .. 
toctree:: :maxdepth: 1 :hidden: base/index common/index bootstrap-vz-0.9.11+20180121git/docs/changelog.rst000066400000000000000000000000361323112141500212530ustar00rootroot00000000000000.. include:: ../CHANGELOG.rst bootstrap-vz-0.9.11+20180121git/docs/conf.py000066400000000000000000000277741323112141500201130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # bootstrap-vz documentation build configuration file, created by # sphinx-quickstart on Sun Mar 23 16:17:28 2014. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath(os.pardir)) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = ['sphinx.ext.coverage', 'sphinx.ext.autodoc', 'sphinx.ext.linkcode', 'docs.transform_github_links', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. 
project = u'bootstrap-vz' copyright = u'2014, Anders Ingemann' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # from bootstrapvz import __version__ # The short X.Y version. version = '.'.join(__version__.split('.')[:2]) # The full version, including alpha/beta/rc tags. release = __version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. 
# on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org on_rtd = os.environ.get('READTHEDOCS', None) == 'True' if not on_rtd: # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. 
#html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'bootstrap-vzdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = {} # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [('index', 'bootstrap-vz.tex', u'bootstrap-vz Documentation', u'Anders Ingemann', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. 
#latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [('index', 'bootstrap-vz', u'bootstrap-vz Documentation', [u'Anders Ingemann'], 1)] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [('index', 'bootstrap-vz', u'bootstrap-vz Documentation', u'Anders Ingemann', 'bootstrap-vz', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # -- Link to rst files scattered throughout the project ------------------- import glob import os.path for readme_path in glob.glob('../bootstrapvz/providers/*/README.rst'): provider_name = os.path.basename(os.path.dirname(readme_path)) include_path = os.path.join('providers', provider_name + '.rst') if not os.path.exists(include_path): path_to_readme = os.path.join('../../bootstrapvz/providers', provider_name, 'README.rst') with open(include_path, 'w') as include: include.write('.. 
include:: ' + path_to_readme) for readme_path in glob.glob('../bootstrapvz/plugins/*/README.rst'): plugin_name = os.path.basename(os.path.dirname(readme_path)) include_path = os.path.join('plugins', plugin_name + '.rst') if not os.path.exists(include_path): path_to_readme = os.path.join('../../bootstrapvz/plugins', plugin_name, 'README.rst') with open(include_path, 'w') as include: include.write('.. include:: ' + path_to_readme) for readme_path in glob.glob('../tests/system/providers/*/README.rst'): provider_name = os.path.basename(os.path.dirname(readme_path)) include_path = os.path.join('testing/system_test_providers', provider_name + '.rst') if not os.path.exists(include_path): path_to_readme = os.path.join('../../../tests/system/providers', provider_name, 'README.rst') with open(include_path, 'w') as include: include.write('.. include:: ' + path_to_readme) # -- Create task overview graph data -------------------------------------- from docs import taskoverview data = taskoverview.generate_graph_data() taskoverview.write_data(data, '_static/graph.json') # -- Substitute links for github with relative links in readthedocs ------- if on_rtd: pass # Snatched from here: # https://sourcegraph.com/github.com/Gallopsled/pwntools@master/.PipPackage/pwntools/.def/docs/source/conf/linkcode_resolve/lines baseurl = 'https://github.com/andsens/bootstrap-vz' import subprocess try: git_head = subprocess.check_output('git describe --tags 2>/dev/null', shell=True) except subprocess.CalledProcessError: try: git_head = subprocess.check_output('git rev-parse HEAD', shell=True).strip()[:10] except subprocess.CalledProcessError: pass def linkcode_resolve(domain, info): if domain != 'py': return None if not info['module']: return None filepath = info['module'].replace('.', '/') + '.py' fmt_args = {'baseurl': baseurl, 'commit': git_head, 'path': filepath} import importlib import inspect import types module = importlib.import_module(info['module']) value = module for part in 
info['fullname'].split('.'): value = getattr(value, part, None) if value is None: break valid_types = (types.ModuleType, types.ClassType, types.MethodType, types.FunctionType, types.TracebackType, types.FrameType, types.CodeType) if isinstance(value, valid_types): try: lines, first = inspect.getsourcelines(value) fmt_args['linestart'] = first fmt_args['lineend'] = first + len(lines) - 1 return '{baseurl}/blob/{commit}/{path}#L{linestart}-L{lineend}'.format(**fmt_args) except IOError: pass return '{baseurl}/blob/{commit}/{path}'.format(**fmt_args) bootstrap-vz-0.9.11+20180121git/docs/developers/000077500000000000000000000000001323112141500207435ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/developers/contributing.rst000066400000000000000000000000441323112141500242020ustar00rootroot00000000000000.. include:: ../../CONTRIBUTING.rst bootstrap-vz-0.9.11+20180121git/docs/developers/documentation.rst000066400000000000000000000000331323112141500243420ustar00rootroot00000000000000.. include:: ../README.rst bootstrap-vz-0.9.11+20180121git/docs/developers/index.rst000066400000000000000000000002641323112141500226060ustar00rootroot00000000000000Developers ========== .. toctree:: :maxdepth: 1 :hidden: contributing plugins documentation switches taskoverview .. include:: ../../bootstrapvz/README.rst bootstrap-vz-0.9.11+20180121git/docs/developers/plugins.rst000066400000000000000000000126221323112141500231610ustar00rootroot00000000000000Developing plugins ================== Developing a plugin for bootstrap-vz is a fairly straightforward process, since there is very little code overhead. The process is the same whether you create an `internal <#internal-plugins>`__ or an `external <#external-plugins>`__ plugin (though you need to add some code for package management when creating an external plugin) Start by creating an ``__init__.py`` in your plugin folder. The only obligatory function you need to implement is ``resolve_tasks()``. 
This function adds tasks to be run to the tasklist: .. code-block:: python def resolve_tasks(taskset, manifest): taskset.add(tasks.DoSomething) The manifest variable holds the manifest the user specified; with it you can determine settings for your plugin and, for example, check which release of Debian bootstrap-vz will create an image. A task is a class with a ``run()`` classmethod and some meta-information: .. code-block:: python class DoSomething(Task): description = 'Doing something' phase = phases.volume_preparation predecessors = [PartitionVolume] successors = [filesystem.Format] @classmethod def run(cls, info): pass To read more about tasks and their ordering, check out the section on `how bootstrap-vz works `__. Besides the ``resolve_tasks()`` function, there is also the ``resolve_rollback_tasks()`` function, which comes into play when something has gone awry while bootstrapping. It should be used to clean up anything that was created during the bootstrapping process. If you created temporary files, for example, you can add a task to the rollback taskset that deletes those files; you might even have such a task already, because you run it after an image has been successfully bootstrapped: .. code-block:: python def resolve_rollback_tasks(taskset, manifest, completed, counter_task): counter_task(taskset, tasks.DoSomething, tasks.UndoSomething) In ``resolve_rollback_tasks()`` you have access to the taskset (this time it contains tasks that will be run during rollback), the manifest, and the tasks that had already been run before the bootstrapping aborted (``completed``). The last parameter is the ``counter_task()`` function; with it you can specify that a specific task (2nd param) has to be in the taskset (1st param) for the rollback task (3rd param) to be added. This saves code and makes it more readable than running through the completed tasklist and checking each completed task. You can also specify a ``validate_manifest()`` function. Typically it looks like this: .. 
code-block:: python def validate_manifest(data, validator, error): from bootstrapvz.common.tools import rel_path validator(data, rel_path(__file__, 'manifest-schema.yml')) This code validates the manifest against a schema in your plugin folder. The schema is a `JSON schema `__; since bootstrap-vz supports `yaml `__, you can avoid a lot of curly braces and quotes: .. code-block:: yaml $schema: http://json-schema.org/draft-04/schema# title: Example plugin manifest type: object properties: plugins: type: object properties: example: type: object properties: message: {type: string} required: [message] additionalProperties: false In the schema above we check that the ``example`` plugin has a single property named ``message`` with a string value (setting ``additionalProperties`` to ``false`` makes sure that users don't misspell optional attributes). Internal plugins ---------------- Internal plugins are part of the bootstrap-vz package and distributed with it. If you have developed a plugin that you think should be part of the package because a lot of people might use it, you can send a pull request to get it included (just remember to `read the guidelines `__ first). External plugins ----------------- External plugins are packages distributed separately from bootstrap-vz. Separate distribution makes sense when your plugin solves a narrow problem scope specific to your use-case or when the plugin contains proprietary code that you would not like to share. They integrate with bootstrap-vz by exposing an entry-point through ``setup.py``: .. code-block:: python setup(name='example-plugin', version='0.9.5', packages=find_packages(), include_package_data=True, entry_points={'bootstrapvz.plugins': ['plugin_name = package_name.module_name']}, install_requires=['bootstrap-vz >= 0.9.5'], ) Beyond ``setup.py`` the package might need a ``MANIFEST.in`` so that assets like ``manifest-schema.yml`` are included when the package is built: .. 
code-block:: text include example/manifest-schema.yml include example/README.rst To test your package from source you can run ``python setup.py develop`` to register the package so that bootstrap-vz can find the entry-point of your plugin. An example plugin is available at ``__, you can use it as a starting point for your own plugin. Installing external plugins ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some plugins may not find their way to the python package index (especially if it's in a private repo). They can of course still be installed using pip: .. code-block:: sh pip install git+ssh://git@github.com/username/repo#egg=plugin_name bootstrap-vz-0.9.11+20180121git/docs/developers/switches.rst000066400000000000000000000011331323112141500233240ustar00rootroot00000000000000 Commandline switches ==================== As a developer, there are commandline switches available which can make your life a lot easier. + ``--debug``: Enables debug output in the console. This includes output from all commands that are invoked during bootstrapping. + ``--pause-on-error``: Pauses the execution when an exception occurs before rolling back. This allows you to debug by inspecting the volume at the time the error occurred. + ``--dry-run``: Prevents the ``run()`` function from being called on all tasks. This is useful if you want to see whether the task order is correct. bootstrap-vz-0.9.11+20180121git/docs/developers/taskoverview.rst000066400000000000000000000013121323112141500242230ustar00rootroot00000000000000 Taskoverview ============ .. raw:: html bootstrap-vz-0.9.11+20180121git/docs/index.rst000066400000000000000000000003731323112141500204370ustar00rootroot00000000000000.. toctree:: :maxdepth: 1 :hidden: self manifests/index providers/index plugins/index supported_builds logging remote_bootstrapping changelog developers/index api/index testing/index .. 
include:: ../README.rst bootstrap-vz-0.9.11+20180121git/docs/logging.rst000066400000000000000000000004331323112141500207530ustar00rootroot00000000000000 Logfile ======= Every run creates a new logfile in the ``logs/`` directory. The filename for each run consists of a timestamp (``%Y%m%d%H%M%S``) and the basename of the manifest used. The log also contains debugging statements regardless of whether the ``--debug`` switch was used. bootstrap-vz-0.9.11+20180121git/docs/manifests/000077500000000000000000000000001323112141500205645ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/manifests/index.rst000066400000000000000000000002341323112141500224240ustar00rootroot00000000000000Manifests ========= .. toctree:: :maxdepth: 1 :hidden: official_ec2_manifests official_gce_manifests .. include:: ../../manifests/README.rst bootstrap-vz-0.9.11+20180121git/docs/manifests/official_ec2_manifests.rst000066400000000000000000000000651323112141500256750ustar00rootroot00000000000000.. include:: ../../manifests/official/ec2/README.rst bootstrap-vz-0.9.11+20180121git/docs/manifests/official_gce_manifests.rst000066400000000000000000000000651323112141500257620ustar00rootroot00000000000000.. include:: ../../manifests/official/gce/README.rst bootstrap-vz-0.9.11+20180121git/docs/plugins/000077500000000000000000000000001323112141500202545ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/plugins/.gitignore000066400000000000000000000000311323112141500222360ustar00rootroot00000000000000* !index.rst !.gitignore bootstrap-vz-0.9.11+20180121git/docs/plugins/index.rst000066400000000000000000000001751323112141500221200ustar00rootroot00000000000000Plugins ======= .. toctree:: :maxdepth: 1 :hidden: :glob: * .. 
include:: ../../bootstrapvz/plugins/README.rst bootstrap-vz-0.9.11+20180121git/docs/providers/000077500000000000000000000000001323112141500206105ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/providers/.gitignore000066400000000000000000000000311323112141500225720ustar00rootroot00000000000000* !index.rst !.gitignore bootstrap-vz-0.9.11+20180121git/docs/providers/index.rst000066400000000000000000000002031323112141500224440ustar00rootroot00000000000000Providers ========= .. toctree:: :maxdepth: 1 :hidden: :glob: * .. include:: ../../bootstrapvz/providers/README.rst bootstrap-vz-0.9.11+20180121git/docs/remote_bootstrapping.rst000066400000000000000000000000561323112141500235740ustar00rootroot00000000000000.. include:: ../bootstrapvz/remote/README.rst bootstrap-vz-0.9.11+20180121git/docs/supported_builds.rst000066400000000000000000000053021323112141500227140ustar00rootroot00000000000000Supported builds ================ The following is a list of supported manifest combinations. Bootloaders and partitions -------------------------- Note that grub cannot boot from unpartitioned volumes. 
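The bootloader/partitioning constraint above can be pictured as a small compatibility table in code. The following is a hypothetical sketch only — ``SUPPORTED`` and ``check_boot_config`` are illustrative names, not part of bootstrap-vz — mirroring the support tables in this section:

```python
# Hypothetical sketch: encode the bootloader/partitioning support rules
# from the tables in this section as a lookup table.
# These names are illustrative only and are not bootstrap-vz API.
SUPPORTED = {
    'pvgrub': {'none', 'msdos', 'gpt'},
    'extlinux': {'none', 'msdos', 'gpt'},
    'grub': {'msdos', 'gpt'},  # grub cannot boot from unpartitioned volumes
}


def check_boot_config(bootloader, partitioning):
    """Return True if the combination is known to work."""
    return partitioning in SUPPORTED.get(bootloader, set())
```

For instance, ``check_boot_config('grub', 'none')`` returns ``False``, matching the *not supported* entries in the tables.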
Azure ~~~~~ TODO EC2 ~~~ EBS ___ ========================== ================= ================= ================= Bootloader / Partitioning none msdos gpt ========================== ================= ================= ================= pvgrub (paravirtualized) supported supported supported extlinux (hvm) supported supported supported grub (hvm) *not supported* supported supported ========================== ================= ================= ================= S3 __ ========================== ================= ================= ================= Bootloader / Partitioning none msdos gpt ========================== ================= ================= ================= pvgrub (paravirtualized) supported *not implemented* *not implemented* extlinux (hvm) *not implemented* *not implemented* *not implemented* grub (hvm) *not supported* *not implemented* *not implemented* ========================== ================= ================= ================= GCE ~~~ TODO KVM ~~~ TODO Oracle ~~~~~~ TODO VirtualBox ~~~~~~~~~~ ========================== ================= ================= ================= Bootloader / Partitioning none msdos gpt ========================== ================= ================= ================= extlinux supported supported supported grub *not supported* supported supported ========================== ================= ================= ================= Known working builds -------------------- The following is a list of supported releases, providers and architectures combination. We know that they are working because there's someone working on them. 
======= ======== ============ =========================================================== Release Provider Architecture Person ======= ======== ============ =========================================================== Jessie EC2 ``amd64`` `James Bromberger `__ Jessie GCE ``amd64`` `Zach Marano `__ (and GCE Team) Jessie KVM ``arm64`` `Clark Laughlin `__ Jessie Oracle ``amd64`` `Tiago Ilieve `__ ======= ======== ============ =========================================================== bootstrap-vz-0.9.11+20180121git/docs/taskoverview.py000077500000000000000000000046571323112141500217150ustar00rootroot00000000000000#!/usr/bin/python import sys import os.path sys.path.append(os.path.join(os.path.dirname(__file__), '..')) def generate_graph_data(): import bootstrapvz.common.tasks import bootstrapvz.providers import bootstrapvz.plugins from bootstrapvz.base.tasklist import get_all_tasks tasks = get_all_tasks([bootstrapvz.common.tasks, bootstrapvz.providers, bootstrapvz.plugins]) def distinct(seq): seen = set() return [x for x in seq if x not in seen and not seen.add(x)] modules = distinct([task.__module__ for task in tasks]) task_links = [] task_links.extend([{'source': task, 'target': succ, 'definer': task, } for task in tasks for succ in task.successors]) task_links.extend([{'source': pre, 'target': task, 'definer': task, } for task in tasks for pre in task.predecessors]) def mk_phase(phase): return {'name': phase.name, 'description': phase.description, } def mk_module(module): return {'name': module, } from bootstrapvz.common import phases def mk_node(task): return {'name': task.__name__, 'module': modules.index(task.__module__), 'phase': (i for i, phase in enumerate(phases.order) if phase is task.phase).next(), } def mk_link(link): for key in ['source', 'target', 'definer']: link[key] = tasks.index(link[key]) return link return {'phases': map(mk_phase, phases.order), 'modules': map(mk_module, modules), 'nodes': map(mk_node, tasks), 'links': map(mk_link, task_links)} 
def write_data(data, output_path=None): import json if output_path is None: import sys json.dump(data, sys.stdout, indent=4, separators=(',', ': ')) else: with open(output_path, 'w') as output: json.dump(data, output) if __name__ == '__main__' and __package__ is None: from docopt import docopt usage = """Usage: taskoverview.py [options] Options: --output output -h, --help show this help """ opts = docopt(usage) data = generate_graph_data() write_data(data, opts.get('--output', None)) bootstrap-vz-0.9.11+20180121git/docs/testing/000077500000000000000000000000001323112141500202505ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/testing/index.rst000066400000000000000000000002351323112141500221110ustar00rootroot00000000000000Testing ======= .. toctree:: :maxdepth: 1 :hidden: unit_tests system_tests system_test_providers/index .. include:: ../../tests/README.rst bootstrap-vz-0.9.11+20180121git/docs/testing/system_test_providers/000077500000000000000000000000001323112141500247305ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/docs/testing/system_test_providers/.gitignore000066400000000000000000000000311323112141500267120ustar00rootroot00000000000000* !index.rst !.gitignore bootstrap-vz-0.9.11+20180121git/docs/testing/system_test_providers/index.rst000066400000000000000000000002231323112141500265660ustar00rootroot00000000000000System test providers ===================== .. toctree:: :maxdepth: 1 :glob: * .. include:: ../../../tests/system/providers/README.rst bootstrap-vz-0.9.11+20180121git/docs/testing/system_tests.rst000066400000000000000000000000531323112141500235460ustar00rootroot00000000000000.. include:: ../../tests/system/README.rst bootstrap-vz-0.9.11+20180121git/docs/testing/unit_tests.rst000066400000000000000000000000511323112141500231770ustar00rootroot00000000000000.. 
include:: ../../tests/unit/README.rst bootstrap-vz-0.9.11+20180121git/docs/transform_github_links.py000066400000000000000000000071221323112141500237240ustar00rootroot00000000000000import re def setup(app): app.connect('doctree-resolved', transform_github_links) return {'version': '0.1'} # Maps from files in docs/ to folders/files in repo includes_mapping = { r'^index$': r'', r'^(providers|plugins)/index$': r'bootstrapvz/\1/', r'^(providers|plugins)/(?!index)([^/]+)$': r'bootstrapvz/\1/\2/', r'^manifests/index$': r'manifest/', r'^manifests/official_([^_]+)_manifests$': r'manifest/official/\1/', r'^testing/index$': r'tests/', r'^testing/(?!index)([^/]+)_tests$': r'tests/\1/', r'^remote_bootstrapping$': r'bootstrapvz/remote/', r'^developers/index$': r'bootstrapvz/', r'^developers/contributing$': r'CONTRIBUTING.rst', r'^developers/documentation$': r'docs/', r'^changelog$': r'CHANGELOG.rst', } # Maps from links in repo to files/folders in docs/ links_mapping = { r'^$': r'', r'^bootstrapvz/(providers|plugins)$': r'\1', r'^bootstrapvz/(providers|plugins)/([^/]+)$': r'\1/\2.html', r'^tests$': r'testing', r'^manifests$': r'manifests', r'^manifests/official/([^/]+)$': r'manifests/official_\1_manifests.html', r'^tests/([^/]+)$': r'testing/\1_tests.html', r'^bootstrapvz/remote$': r'remote_bootstrapping.html', r'^bootstrapvz$': r'developers', r'^CONTRIBUTING\.rst$': r'developers/contributing.html', r'^docs$': r'developers/documentation.html', r'^CHANGELOG\.rst$': r'changelog.html', } for key, val in includes_mapping.items(): del includes_mapping[key] includes_mapping[re.compile(key)] = val for key, val in links_mapping.items(): del links_mapping[key] links_mapping[re.compile(key)] = val def find_original(path): for key, val in includes_mapping.items(): if re.match(key, path): return re.sub(key, val, path) return None def find_docs_link(link): try: # Preserve anchor when doing lookups link, anchor = link.split('#', 1) anchor = '#' + anchor except ValueError: # No anchor, keep 
the original link anchor = '' for key, val in links_mapping.items(): if re.match(key, link): return re.sub(key, val, link) + anchor return None def transform_github_links(app, doctree, fromdocname): # Convert relative links in repo into relative links in docs. # We do this by first figuring out whether the current document # has been included from outside docs/ and only continue if so. # Next we take the repo path matching the current document # (lookup through 'includes_mapping'), tack the link onto the dirname # of that path and normalize it using os.path.normpath. # The result is the path to a document/folder in the repo. # We then convert this path into one that works in the documentation # (lookup through 'links_mapping'). # If a mapping is found we, create a relative link from the current document. from docutils import nodes import os.path original_path = find_original(fromdocname) if original_path is None: return for node in doctree.traverse(nodes.reference): if 'refuri' not in node: continue if node['refuri'].startswith('http'): continue abs_link = os.path.normpath(os.path.join(os.path.dirname(original_path), node['refuri'])) docs_link = find_docs_link(abs_link) if docs_link is None: continue # special handling for when we link inside the same document if docs_link.startswith('#'): node['refuri'] = docs_link else: node['refuri'] = os.path.relpath(docs_link, os.path.dirname(fromdocname)) bootstrap-vz-0.9.11+20180121git/manifests/000077500000000000000000000000001323112141500176345ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/manifests/README.rst000066400000000000000000000307521323112141500213320ustar00rootroot00000000000000The manifest file is the primary way to interact with bootstrap-vz. Every configuration and customization of a Debian installation is specified in this file. The manifest format is YAML or JSON. 
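The dotted manifest-variable substitution this README describes (strings like ``{system.release}``) can be emulated in a few lines of Python. This is a hypothetical sketch for illustration only; ``resolve`` is not bootstrap-vz's actual implementation:

```python
import json
import re

# Hypothetical sketch of how dotted manifest variables such as
# ``{system.release}`` could be resolved against a parsed manifest.
# ``resolve`` is an illustrative name, not bootstrap-vz API.
manifest = json.loads('{"system": {"release": "jessie", "architecture": "amd64"}}')


def resolve(template, data):
    """Replace {a.b} references with values looked up via dotted paths."""
    def lookup(match):
        value = data
        for part in match.group(1).split('.'):
            value = value[part]
        return str(value)
    return re.sub(r'\{([a-z_][a-z_.]*)\}', lookup, template)


print(resolve('debian-{system.release}-{system.architecture}', manifest))
# debian-jessie-amd64
```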
It is near impossible to run the bootstrapper with an invalid configuration, since every part of the framework supplies a `json-schema `__ that specifies exactly which configuration settings are valid in different situations. Manifest variables ------------------ Many of the settings in the example manifests use strings like ``debian-{system.release}-{system.architecture}-{%y}{%m}{%d}``. These strings make use of manifest variables, which can cross-reference other settings in the manifest or specific values supplied by the bootstrapper (e.g. all python date formatting variables are available). Any reference uses dots to specify a path to the desired manifest setting. Not all settings support this though; to see whether embedding a manifest variable in a setting is possible, look for the ``manifest vars`` label. To insert a literal ``{foo}`` use double braces, that is ``{{foo}}``. For example, in a shell command where you may want to use the expression ``${foo}``, use ``${{foo}}`` instead. Sections -------- The manifest is split into 7 sections. Name ~~~~~ Single string property that specifies the name of the image. - ``name``: The name of the resulting image. When bootstrapping cloud images, this would be the name visible in the interface when booting up new instances. When bootstrapping for VirtualBox or kvm, it's the filename of the image. ``required`` ``manifest vars`` Example: .. code:: yaml --- name: debian-{system.release}-{system.architecture}-{%Y}-{%m}-{%d}-ebs Provider ~~~~~~~~ The provider section contains all provider-specific settings and the name of the provider itself. - ``name``: The target virtualization platform of the installation. ``required`` Consult the `providers <../bootstrapvz/providers>`__ section of the documentation for a list of valid values. Example: .. code:: yaml --- provider: name: ec2 Bootstrapper ~~~~~~~~~~~~ This section concerns the bootstrapper itself and its behavior. 
There are 4 possible settings: - ``workspace``: Path to where the bootstrapper should place images and intermediate files. Any volumes will be mounted under that path. ``required`` - ``tarball``: debootstrap has the option to download all the software and pack it up in a tarball. When starting the actual bootstrapping process, debootstrap can then be pointed at that tarball and use it instead of downloading anything from the internet. If you plan on running the bootstrapper multiple times, this option can save you a lot of bandwidth and time. This option just specifies whether it should create a new tarball or not. It will search for and use an available tarball if it already exists, regardless of this setting. ``optional`` Valid values: ``true, false`` Default: ``false`` - ``mirror``: The mirror debootstrap should download software from. It is advisable to specify a mirror close to your location (or the location of the host you are bootstrapping on), to decrease latency and improve bandwidth. If not specified, `the configured aptitude mirror URL <#packages>`__ is used. ``optional`` - ``include_packages``: Extra packages to be installed during bootstrap. Accepts a list of package names. ``optional`` - ``exclude_packages``: Packages to exclude during bootstrap phase. Accepts a list of package names. ``optional`` - ``variant``: Debian variant to install. The only supported value is ``minbase`` and should only be used in conjunction with the Docker provider. Not specifying this option will result in a normal Debian variant being bootstrapped. Example: .. code:: yaml --- bootstrapper: workspace: /target tarball: true mirror: http://deb.debian.org/debian/ include_packages: - whois - psmisc exclude_packages: - isc-dhcp-client - isc-dhcp-common variant: minbase System ~~~~~~ This section defines anything that pertains directly to the bootstrapped system and does not fit under any other section. - ``architecture``: The architecture of the system. 
Valid values: ``i386, amd64`` ``required`` - ``bootloader``: The bootloader for the system. Depending on the bootmethod of the virtualization platform, the options may be restricted. Valid values: ``grub, extlinux, pv-grub`` ``required`` - ``charmap``: The default charmap of the system. Valid values: Any valid charmap like ``UTF-8``, ``ISO-8859-`` or ``GBK``. ``required`` - ``hostname``: hostname to preconfigure the system with. ``optional`` - ``locale``: The default locale of the system. Valid values: Any locale mentioned in ``/etc/locale.gen`` ``required`` - ``release``: Defines which debian release should be bootstrapped. Valid values: ``squeeze``, ``wheezy``, ``jessie``, ``sid``, ``oldstable``, ``stable``, ``testing``, ``unstable`` ``required`` - ``timezone``: Timezone of the system. Valid values: Any filename from ``/usr/share/zoneinfo`` ``required`` Example: .. code:: yaml --- system: release: jessie architecture: amd64 bootloader: extlinux charmap: UTF-8 hostname: jessie x86_64 locale: en_US timezone: UTC Packages ~~~~~~~~ The packages section allows you to install custom packages from a variety of sources. - ``install``: A list of strings that specify which packages should be installed. Valid values: Package names optionally followed by a ``/target`` or paths to local ``.deb`` files. Note that packages are installed in the order they are listed. The installer invocations are bundled by package type (remote or local), meaning if you install two local packages, then two remote packages and then another local package, there will be two calls to ``dpkg -i ...`` and a single call to ``apt-get install ...``. - ``install_standard``: Defines if the packages of the ``"Standard System Utilities"`` option of the Debian installer, provided by `tasksel `__, should be installed or not. The problem is that with just ``debootstrap``, the system ends up with very basic commands. 
This is not a problem for a machine that will not be used interactively, but otherwise it is nice to have at hand tools like ``bash-completion``, ``less``, ``locate``, etc. ``optional`` Valid values: ``true``, ``false`` Default: ``false`` - ``mirror``: The default aptitude mirror. ``optional`` Default: ``http://deb.debian.org/debian/`` - ``security``: The default security mirror. ``optional`` Default: ``http://security.debian.org/`` - ``sources``: A map of additional sources that should be added to the aptitude sources list. The key becomes the filename in ``/etc/apt/sources.list.d/`` (with ``.list`` appended to it), except for ``main``, which designates ``/etc/apt/sources.list``. The value is an array with each entry being a line. ``optional`` - ``components``: A list of components that should be added to the default apt sources. For example ``contrib`` or ``non-free``. ``optional`` Default: ``['main']`` - ``trusted-keys``: List of paths (relative to the manifest) to ``.gpg`` keyrings that should be added to the aptitude keyring of trusted signatures for repositories. ``optional`` - ``apt.conf.d``: A map of ``apt.conf(5)`` configuration snippets. The key becomes the filename in ``/etc/apt/apt.conf.d``, except ``main``, which designates ``/etc/apt/apt.conf``. The value is a string in the ``apt.conf(5)`` syntax. ``optional`` - ``preferences``: Allows you to pin packages through `apt preferences `__. The setting is an object where the key is the preference filename in ``/etc/apt/preferences.d/``. The key ``main`` is special and refers to the file ``/etc/apt/preferences``, which will be overwritten if specified. ``optional`` The values are objects with three keys: - ``package``: The package to pin (wildcards allowed) - ``pin``: The release to pin the package to. - ``pin-priority``: The priority of this pin. Example: .. 
code:: yaml --- packages: install: - /root/packages/custom_app.deb - puppet install_standard: true mirror: http://cloudfront.debian.net/debian security: http://security.debian.org/ sources: puppet: - deb http://apt.puppetlabs.com wheezy main dependencies components: - contrib - non-free trusted-keys: - /root/keys/puppet.gpg apt.conf.d: 00InstallRecommends: >- APT::Install-Recommends "false"; APT::Install-Suggests "false"; 00IPv4: 'Acquire::ForceIPv4 "false";' preferences: main: - package: * pin: release o=Debian, n=wheezy pin-priority: 800 - package: * pin: release o=Debian Backports, a=wheezy-backports, n=wheezy-backports pin-priority: 760 - package: puppet puppet-common pin: version 2.7.25-1puppetlabs1 pin-priority: 840 Volume ~~~~~~ bootstrap-vz allows a wide range of options for configuring the disk layout of the system. It can create unpartitioned as well as partitioned volumes using either the gpt or msdos scheme. At most, there are only three partitions with predefined roles configurable though. They are boot, root and swap. - ``backing``: Specifies the volume backing. This setting is very provider specific. Valid values: ``ebs``, ``s3``, ``vmdk``, ``vdi``, ``raw``, ``qcow2``, ``lvm`` ``required`` - ``partitions``: A map of the partitions that should be created on the volume. - ``type``: The partitioning scheme to use. When using ``none``, only root can be specified as a partition. Valid values: ``none``, ``gpt``, ``msdos`` ``required`` - ``root``: Configuration of the root partition. ``required`` - ``size``: The size of the partition. Valid values: Any datasize specification up to TB (e.g. 5KiB, 1MB, 6TB). ``required`` - ``filesystem``: The filesystem of the partition. When choosing ``xfs``, the ``xfsprogs`` package will need to be installed. Valid values: ``ext2``, ``ext3``, ``ext4``, ``xfs`` ``required`` - ``format_command``: Command to format the partition with. 
      This optional setting overrides the command bootstrap-vz would
      normally use to format the partition. The command is specified as
      a string array where each option/argument is an item in that array
      (much like the `commands <../bootstrapvz/plugins/commands>`__
      plugin).
      ``optional``
      The following variables are available:

      - ``{fs}``: The filesystem of the partition.
      - ``{device_path}``: The device path of the partition.
      - ``{size}``: The size of the partition.

      The default command used by bootstrap-vz is
      ``['mkfs.{fs}', '{device_path}']``.
    - ``mount_opts``: Options to mount the partition with.
      This optional setting overwrites the default option list
      bootstrap-vz would normally use to mount the partition (defaults).
      The list is specified as a string array where each option/argument
      is an item in that array.
      ``optional``
      Here are some examples:

      - ``nodev``
      - ``nosuid``
      - ``noexec``
      - ``journal_ioprio=3``
  - ``boot``: Configuration of the boot partition. All settings equal
    those of the root partition.
    ``optional``
  - ``swap``: Configuration of the swap partition. Since the swap
    partition has its own filesystem you can only specify the size for
    this partition.
    ``optional``
  - ``additional_path`` (e.g. ``/var/tmp``): Configuration of additional
    partitions. All settings equal those of the root partition.
    ``optional``

Example:

.. code:: yaml

   ---
   volume:
     backing: vdi
     partitions:
       type: msdos
       boot:
         filesystem: ext2
         size: 32MiB
       root:
         filesystem: ext4
         size: 864MiB
       swap:
         size: 128MiB

Plugins
~~~~~~~

The plugins section is a map of plugin names to whatever configuration a
plugin requires. Go to the `plugin section <../bootstrapvz/plugins>`__
of the documentation to see the configuration for a specific plugin.

Example:
.. code:: yaml

   ---
   plugins:
     minimize_size:
       zerofree: true
       shrink: true

manifests/examples/azure/jessie.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: azure
  waagent:
    version: 2.0.14
bootstrapper:
  mirror: http://deb.debian.org/debian/
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: vhd
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  components:
    - main
    - contrib
    - non-free
plugins:
  ntp:
    servers:
      - time.windows.com

manifests/examples/azure/wheezy.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: azure
  waagent:
    version: 2.0.14
bootstrapper:
  mirror: http://deb.debian.org/debian/
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: vhd
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  components:
    - main
    - contrib
    - non-free
  preferences:
    backport-kernel:
      - package: linux-image-* initramfs-tools
        pin: release n=wheezy-backports
        pin-priority: 500
plugins:
  ntp:
    servers:
      - time.windows.com

manifests/examples/docker/jessie-minimized.yml

---
name:
  debian-{system.release}-{system.architecture}:latest
provider:
  name: docker
  labels:
    summary: Debian {system.release} {system.architecture}
    description: >
      Minimized version of Debian jessie without any manpages,
      additional documentation or other language files.
      Additional package installs:
      inetutils-ping (dep: netbase) and iproute2
    distribution-scope: public
  dockerfile:
    - CMD /bin/bash
bootstrapper:
  workspace: /target
  variant: minbase
system:
  release: jessie
  architecture: amd64
  bootloader: none
  charmap: UTF-8
  locale: en_US
  timezone: UTC
packages:
  install:
    - inetutils-ping
    - iproute2
volume:
  backing: folder
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 1GiB
plugins:
  minimize_size:
    apt:
      autoclean: true
      languages: [none]
      gzip_indexes: true
      autoremove_suggests: true
    dpkg:
      locales: []
      exclude_docs: true

manifests/examples/ec2/ebs-testing-amd64-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: testing
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/examples/ec2/ebs-unstable-amd64-pvm-contrib.yml

---
name:
  debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: unstable
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
  components:
    - main
    - contrib
    - non-free
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/examples/ec2/ebs-unstable-amd64-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: unstable
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/examples/ec2/s3-wheezy-amd64-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%y}{%m}{%d}
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  #   certificate: /path/to/your/certificate.pem
  #   private-key: /path/to/your/private.key
  #   user-id: arn:aws:iam::123456789012:user/iamuser
  description: Debian {system.release} {system.architecture} AMI
  bucket: debian-amis
  region: us-west-1
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: s3
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 4GiB
packages:
  mirror: http://cloudfront.debian.net/debian

manifests/examples/kvm/jessie-arm64-virtio.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: arm64
  bootloader: none
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 32MiB
    root:
      filesystem: ext4
      size: 864MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  root_password:
    password: test

manifests/examples/kvm/jessie-lvm.yml

---
name: debian-lvm-example
provider:
  name: kvm
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: lvm
  logicalvolume: lvtest
  volumegroup: vgtest
  partitions:
    type: gpt
    root:
      filesystem: ext4
      size: 1GB
packages:
  security: http://security.debian.org/
plugins:
  root_password:
    password: test

manifests/examples/kvm/jessie-puppet.yaml

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  install_standard: true
  mirror: http://httpredir.debian.org/debian
  install:
    # required to be pre-installed for proper puppet functioning of
    # puppetlabs-apt, it is also the primary puppet module
    - lsb-release
plugins:
  # It is advisable to avoid running things as root, use a sudo account instead
  admin_user:
    username: administrator
    password: something
  # puppet plugin
  puppet:
    # The assets path MUST be ABSOLUTE on your project.
    assets: /your/absolute/path/to/etc/puppetlabs
    install_modules:
      - [puppetlabs-accounts]
      - [puppetlabs-apt]
      - [puppetlabs-concat, 3.0.0]
      - [puppetlabs-stdlib]
      - [puppetlabs-apache, 1.11.0]

manifests/examples/kvm/jessie-qcow2.yml

---
name: debian-qcow2-example
provider:
  name: kvm
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: qcow2
  partitions:
    type: gpt
    root:
      filesystem: ext4
      size: 1GB
packages: {}
plugins:
  root_password:
    password: test

manifests/examples/kvm/jessie-virtio.yml

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 8GiB
plugins:
  root_password:
    password: test

manifests/examples/kvm/stretch-console.yml

---
name:
  debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_blk
    - virtio_net
    - virtio_rng
  console: virtual
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 2GiB
      mountopts:
        - defaults
        - noatime
        - errors=remount-ro

manifests/examples/kvm/stretch-puppet.yaml

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  install_standard: true
  mirror: http://httpredir.debian.org/debian
  install:
    # required to be pre-installed for proper puppet functioning of
    # puppetlabs-apt, it is also the primary puppet module
    - lsb-release
plugins:
  # It is advisable to avoid running things as root, use a sudo account instead
  admin_user:
    username: administrator
    password: something
  # puppet plugin
  puppet:
    # The assets path MUST be ABSOLUTE on your project.
    assets: /your/absolute/path/to/etc/puppetlabs
    install_modules:
      - [puppetlabs-accounts]
      - [puppetlabs-apt]
      - [puppetlabs-concat, 3.0.0]
      - [puppetlabs-stdlib]
      - [puppetlabs-apache, 1.11.0]

manifests/examples/kvm/stretch-virtio-partitions.yml

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: gpt
    boot:
      filesystem: ext2
      size: 1GiB
    swap:
      size: 128MiB
    root:
      filesystem: ext4
      size: 8GiB
    tmp:
      mountopts:
        - nodev
        - noexec
        - nosuid
        - journal_ioprio=3
      filesystem: ext4
      size: 1GiB
    var:
      filesystem: ext4
      size: 1GiB
    var/tmp:
      filesystem: ext4
      size: 1GiB
plugins:
  root_password:
    password: test

manifests/examples/kvm/wheezy-virtio.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
    - virtio_pci
    - virtio_blk
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 32MiB
    root:
      filesystem: ext4
      size: 864MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  root_password:
    password: test

manifests/examples/kvm/wheezy.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: kvm
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size:
        32MiB
    root:
      filesystem: ext4
      size: 864MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  root_password:
    password: test

manifests/examples/virtualbox/jessie-vagrant.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: virtualbox
  guest_additions: /usr/share/virtualbox/VBoxGuestAdditions.iso
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  hostname: localhost
  locale: en_US
  timezone: UTC
volume:
  backing: vmdk
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 64MiB
    root:
      filesystem: ext4
      size: 1856MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  vagrant: {}

manifests/examples/virtualbox/stretch-vagrant.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: virtualbox
  guest_additions: /usr/share/virtualbox/VBoxGuestAdditions.iso
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  hostname: localhost
  locale: en_US
  timezone: UTC
volume:
  backing: vmdk
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 64MiB
    root:
      filesystem: ext4
      size: 1856MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  vagrant: {}
  root_password:
    password-crypted: $6$MU3jLtZHS$UHdibqwOJrZw5yI7cqzG.AnzWqOVD9krryd3Y/SgXDSHUEMsaT7iAiQHhuCpjN4Q0tEssbJYoy4H1QFxOY3Tc/

manifests/examples/virtualbox/wheezy-vagrant.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: virtualbox
  guest_additions: /usr/share/virtualbox/VBoxGuestAdditions.iso
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  hostname: localhost
  locale: en_US
  timezone: UTC
volume:
  backing: vmdk
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 64MiB
    root:
      filesystem: ext4
      size: 1856MiB
    swap:
      size: 128MiB
packages: {}
plugins:
  vagrant: {}

manifests/examples/virtualbox/wheezy.yml

---
name: debian-{system.release}-{system.architecture}-{%y}{%m}{%d}
provider:
  name: virtualbox
  guest_additions: /usr/share/virtualbox/VBoxGuestAdditions.iso
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: vdi
  partitions:
    type: msdos
    boot:
      filesystem: ext2
      size: 32MiB
    root:
      filesystem: ext4
      size: 864MiB
    swap:
      size: 128MiB
packages: {}

manifests/official/ec2/README.rst

Official EC2 manifests
======================

The official Debian images for EC2 are built with bootstrap-vz. In the
folder ``manifests/official/ec2`` you will find the various manifests
that are used to create the different flavors of Debian AMIs for EC2.

You can read more about those official images in the `Debian wiki`__.

.. __: https://wiki.debian.org/Cloud/AmazonEC2Image/

The official images can be found on the `AWS marketplace`__.
.. __: https://aws.amazon.com/marketplace/seller-profile?id=890be55d-32d8-4bc8-9042-2b4fd83064d5

manifests/official/ec2/ebs-jessie-amd64-hvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-{%H}{%M}-ebs
tags:
  Name: "Jessie 8.6+1"
  Debian: "8.6+{%Y}{%m}{%d}"
provider:
  name: ec2
  virtualization: hvm
  enhanced_networking: simple
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: gpt
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
  components:
    - main
    - contrib
    - non-free
  preferences:
    backport-cloud-init-cloud-utils:
      - package: cloud-init cloud-utils
        pin: release n=jessie-backports
        pin-priority: 500
  install:
    - awscli
    - python-boto
    - python3-boto
    - apt-transport-https
    - lvm2
    - ncurses-term
    - parted
    - cloud-init
    - cloud-utils
    - gdisk
    - systemd
    - systemd-sysv
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin
    enable_modules:
      cloud_init_modules:
        - {module: growpart, position: 4}

manifests/official/ec2/ebs-squeeze-amd64-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: squeeze
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  admin_user:
    username: admin

manifests/official/ec2/ebs-squeeze-i386-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: squeeze
  architecture: i386
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  admin_user:
    username: admin

manifests/official/ec2/ebs-stretch-amd64-hvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-{%H}{%M}-ebs
tags:
  Name: "Stretch 9.0 alpha"
  Debian: "9.0~{%Y}{%m}{%d}{%H}{%M}"
provider:
  name: ec2
  virtualization: hvm
  enhanced_networking: simple
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: gpt
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
  components:
    - main
    - contrib
    - non-free
  install:
    - awscli
    - python-boto
    - python3-boto
    - apt-transport-https
    - lvm2
    - ncurses-term
    - parted
    - cloud-init
    - cloud-utils
    - gdisk
    - systemd
    - systemd-sysv
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin
    enable_modules:
      cloud_init_modules:
        - {module: growpart, position: 4}
# ec2_launch:
#   security_group_ids:
#     # this has to be a non-VPC SG
#     - sg-398a704f
#   instance_type: m3.medium
#   ssh_key: my_ssh_key
#   print_public_ip: "/tmp/stretch-ami-test-ip"
#   tags:
#     Name: "testing-ami-{system.release}"
#     Debian: "9.0~{%Y}{%m}{%d}{%H}{%M}"
#   deregister_ami: false

manifests/official/ec2/ebs-wheezy-amd64-hvm-cn-north-1.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: hvm
  enhanced_networking: simple
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: extlinux
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://ftp.cn.debian.org/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/official/ec2/ebs-wheezy-amd64-hvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: hvm
  enhanced_networking: simple
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: extlinux
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/official/ec2/ebs-wheezy-amd64-pvm-cn-north-1.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://ftp.cn.debian.org/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/official/ec2/ebs-wheezy-amd64-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/official/ec2/ebs-wheezy-i386-pvm.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}-ebs
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key:
  #     thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: i386
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: ebs
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 8GiB
packages:
  mirror: http://cloudfront.debian.net/debian
plugins:
  cloud_init:
    metadata_sources: Ec2
    username: admin

manifests/official/ec2/s3-wheezy-amd64-pvm-cn-north-1.yml

---
name: debian-{system.release}-{system.architecture}-{provider.virtualization}-{%Y}-{%m}-{%d}
provider:
  name: ec2
  virtualization: pvm
  # credentials:
  #   access-key: AFAKEACCESSKEYFORAWS
  #   secret-key: thes3cr3tkeyf0ryourawsaccount/FS4d8Qdva
  #   certificate: /path/to/your/certificate.pem
  #   private-key: /path/to/your/private.key
  #   user-id: arn:aws:iam::123456789012:user/iamuser
  description: Debian {system.release} {system.architecture} AMI
  bucket: debian-amis-cn-north-1
  region: cn-north-1
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: pvgrub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: s3
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 4GiB
packages:
  mirror: http://ftp.cn.debian.org/debian
plugins:
  cloud_init:
    disable_modules:
      - landscape
      - byobu
      - ssh-import-id
    username: admin

manifests/official/gce/README.rst

Official GCE manifests
======================

These are the official manifests used to build
[Google Compute Engine (GCE) Debian images](https://cloud.google.com/compute/docs/images).
The included packages and configuration changes are necessary for Debian
to run on GCE as a first class citizen of the platform.

Included GCE software is published on github:
[Google Compute Engine guest environment](https://github.com/GoogleCloudPlatform/compute-image-packages)

Debian 8 Package Notes:

* python-crcmod is pulled in from backports as it provides a compiled
  crcmod required for the Google Cloud Storage CLI (gsutil).
* cloud-utils and cloud-guest-utils are pulled in from backports as they
  provide a fixed version of growpart to safely grow the root partition
  on disks >2TB.

Debian 8 and 9 Package Notes:

* google-cloud-sdk is pulled from a Google Cloud repository.
* google-compute-engine is pulled from a Google Cloud repository.
* python-google-compute-engine is pulled from a Google Cloud repository.
* python3-google-compute-engine is pulled from a Google Cloud repository.

jessie-minimal and stretch-minimal:
The only additions are the necessary google-compute-engine,
python-google-compute-engine, and python3-google-compute-engine
packages. This image is not published on GCE; however, the manifest is
provided here for those wanting a minimal GCE Debian image.

buster and buster-minimal:
Buster is for testing only; it should not be used for production images
and may break at any time.

Deprecated manifests:
Debian 7 Wheezy and Backports Debian 7 Wheezy are deprecated images on
GCE and are no longer supported. These manifests are provided here for
historic purposes.
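The GCE manifests that follow parameterize their apt source lines with dotted placeholders such as ``{system.release}``, which bootstrap-vz substitutes from the manifest itself. As a rough, hypothetical sketch of that substitution (this is for illustration only and is not bootstrap-vz's actual implementation):

```python
import re

def resolve(template, manifest):
    """Replace {a.b.c} placeholders with values looked up in a nested dict."""
    def lookup(match):
        node = manifest
        for key in match.group(1).split('.'):
            node = node[key]
        return node
    # Only dotted lowercase references are resolved; date placeholders
    # like {%Y} are intentionally left untouched.
    return re.sub(r'\{([a-z_.]+)\}', lookup, template)

manifest = {'system': {'release': 'buster', 'architecture': 'amd64'}}
line = 'deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main'
print(resolve(line, manifest))
# deb http://packages.cloud.google.com/apt google-compute-engine-buster-stable main
```

bootstrap-vz's own substitution pass also handles the date placeholders (``{%Y}``, ``{%m}``, …) seen in the manifest names above; those are deliberately out of scope here.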
manifests/official/gce/buster-minimal.yml

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: buster
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
plugins:
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal

manifests/official/gce/buster.yml

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: buster
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt cloud-sdk-{system.release} main
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - file
    - google-cloud-sdk
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
    - man
    - net-tools
    - python-crcmod
    - screen
    - vim
plugins:
  expand_root:
    filesystem_type: ext4
    root_device: /dev/sda
    root_partition: 1
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal
  unattended_upgrades:
    download_interval: 1
    update_interval: 1
    upgrade_interval: 1
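All of the manifests above share the same top-level layout. A quick sanity check along these lines can catch a truncated or mis-indented manifest before a long build; note that the required-key list below is merely inferred from the examples in this repository — bootstrap-vz itself validates manifests against a full schema, which this sketch does not replicate:

```python
# Sections that every example manifest in this repository defines at the
# top level. Inferred from the examples, not taken from bootstrap-vz's schema.
REQUIRED = ('name', 'provider', 'bootstrapper', 'system', 'volume')

def check_manifest(manifest):
    """Raise ValueError if any required top-level section is missing."""
    missing = [key for key in REQUIRED if key not in manifest]
    if missing:
        raise ValueError('manifest is missing sections: %s' % ', '.join(missing))
    return True

# Roughly what buster-minimal.yml parses to (abridged), e.g. via PyYAML:
parsed = {
    'name': 'disk',
    'provider': {'name': 'gce'},
    'bootstrapper': {'workspace': '/target'},
    'system': {'release': 'buster', 'architecture': 'amd64'},
    'volume': {'backing': 'raw'},
}
print(check_manifest(parsed))
# True
```

Running the same check on a manifest that lost its ``volume`` section (as can happen when a file is truncated) raises a ``ValueError`` naming the missing section.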
bootstrap-vz-0.9.11+20180121git/manifests/official/gce/deprecated/wheezy-backports.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt cloud-sdk-{system.release} main
      - deb http://packages.cloud.google.com/apt google-cloud-compute-legacy-{system.release} main
  install:
    - google-cloud-sdk
    - google-compute-daemon
    - google-startup-scripts
    - python-gcimagebundle
    - rsync
    - screen
    - vim
  preferences:
    backport-kernel:
      - package: linux-image-* initramfs-tools
        pin: release n=wheezy-backports
        pin-priority: 500
    backport-ssh:
      - package: init-system-helpers openssh-sftp-server openssh-client openssh-server
        pin: release n=wheezy-backports
        pin-priority: 500
    backport-growroot:
      - package: cloud-initramfs-growroot
        pin: release n=wheezy-backports
        pin-priority: 500
plugins:
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal

bootstrap-vz-0.9.11+20180121git/manifests/official/gce/deprecated/wheezy.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: wheezy
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt cloud-sdk-{system.release} main
      - deb http://packages.cloud.google.com/apt google-cloud-compute-legacy-{system.release} main
  install:
    - google-cloud-sdk
    - google-compute-daemon
    - google-startup-scripts
    - python-gcimagebundle
    - rsync
    - screen
    - vim
plugins:
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal

bootstrap-vz-0.9.11+20180121git/manifests/official/gce/jessie-minimal.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
plugins:
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal

bootstrap-vz-0.9.11+20180121git/manifests/official/gce/jessie.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt cloud-sdk-{system.release} main
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - file
    - google-cloud-sdk
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
    - python-crcmod
    - screen
    - vim
  preferences:
    # python-crcmod in backports has a compiled version needed for Google Cloud Storage.
    backport-python-crcmod:
      - package: python-crcmod
        pin: release n=jessie-backports
        pin-priority: 500
plugins:
  expand_root:
    filesystem_type: ext4
    root_device: /dev/sda
    root_partition: 1
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal
  unattended_upgrades:
    download_interval: 1
    update_interval: 1
    upgrade_interval: 1

bootstrap-vz-0.9.11+20180121git/manifests/official/gce/stretch-minimal.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
plugins:
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal

bootstrap-vz-0.9.11+20180121git/manifests/official/gce/stretch.yml:

---
name: disk
provider:
  name: gce
  description: Debian {system.release} {system.architecture}
bootstrapper:
  workspace: /target
system:
  release: stretch
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 10GiB
packages:
  include-source-type: true
  sources:
    google-cloud:
      - deb http://packages.cloud.google.com/apt cloud-sdk-{system.release} main
      - deb http://packages.cloud.google.com/apt google-compute-engine-{system.release}-stable main
  install:
    - file
    - google-cloud-sdk
    - google-compute-engine
    - python-google-compute-engine
    - python3-google-compute-engine
    - man
    - net-tools
    - python-crcmod
    - screen
    - vim
plugins:
  expand_root:
    filesystem_type: ext4
    root_device: /dev/sda
    root_partition: 1
  google_cloud_repo:
    cleanup_bootstrap_key: true
    enable_keyring_repo: true
  ntp:
    servers:
      - metadata.google.internal
  unattended_upgrades:
    download_interval: 1
    update_interval: 1
    upgrade_interval: 1

bootstrap-vz-0.9.11+20180121git/manifests/official/oracle/jessie.yml:

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: oracle
bootstrapper:
  workspace: /target
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    type: msdos
    root:
      filesystem: ext4
      size: 8GiB
plugins:
  cloud_init:
    username: opc
    metadata_sources: Ec2

bootstrap-vz-0.9.11+20180121git/setup.py:

from setuptools import setup
from setuptools import find_packages
import os.path


def find_version(path):
    import re
    version_file = open(path).read()
    version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
    if version_match:
        return version_match.group(1)
    raise RuntimeError("Unable to find version string.")


setup(name='bootstrap-vz',
      version=find_version(os.path.join(os.path.dirname(__file__), 'bootstrapvz/__init__.py')),
      packages=find_packages(exclude=['docs']),
      include_package_data=True,
      entry_points={'console_scripts': ['bootstrap-vz = bootstrapvz.base:main',
                                        'bootstrap-vz-remote = bootstrapvz.remote.main:main',
                                        'bootstrap-vz-server = bootstrapvz.remote.server:main',
                                        ]},
      install_requires=['termcolor >= 1.1.0',
                        'fysom >= 1.0.15',
                        'jsonschema >= 2.3.0',
                        'pyyaml >= 3.10',
                        'boto >= 2.14.0',
                        'boto3 >= 1.4.2',
                        'docopt >= 0.6.1',
                        'pyrfc3339 >= 1.0',
                        'requests >= 2.4.3',
                        'pyro4 >= 4.30',
                        ],
      license='Apache License, Version 2.0',
      description='Bootstrap Debian images for virtualized environments',
      long_description='''bootstrap-vz is a bootstrapping framework for Debian.
It is specifically targeted at bootstrapping systems for virtualized environments.
bootstrap-vz runs without any user intervention and generates ready-to-boot images
for a number of virtualization platforms.
Its aim is to provide a reproducible bootstrapping process using manifests
as well as supporting a high degree of customizability through plugins.''',
      author='Anders Ingemann',
      author_email='anders@ingemann.de',
      url='http://www.github.com/andsens/bootstrap-vz',
      )

bootstrap-vz-0.9.11+20180121git/tests/README.rst:

The testing framework consists of two parts: the unit tests and the integration tests.
The `unit tests`__ are responsible for testing individual parts of bootstrap-vz,
while the `integration tests`__ test entire manifests by bootstrapping and booting them.

Selecting tests
---------------

To run one specific test suite, simply append the module path to tox:

.. code-block:: sh

    $ tox -e unit tests.unit.releases_tests

Specific tests can be selected by appending the function name with a colon to the
module path -- to run more than one test, simply attach more arguments.

.. code-block:: sh

    $ tox -e unit tests.unit.releases_tests:test_lt tests.unit.releases_tests:test_eq

bootstrap-vz-0.9.11+20180121git/tests/__init__.py:

# Snatched from: http://stackoverflow.com/a/2186565/339505
def recursive_glob(path, pattern):
    import fnmatch
    import os
    for path, dirnames, filenames in os.walk(path):
        for filename in fnmatch.filter(filenames, pattern):
            yield os.path.join(path, filename)

bootstrap-vz-0.9.11+20180121git/tests/integration/__init__.py: (empty file)

bootstrap-vz-0.9.11+20180121git/tests/integration/dry_run_tests.py:

def test_manifest_generator():
    """
    manifests_tests - test_manifest_generator.
    Loops through the manifests directory and tests that
    each file can successfully be loaded and validated.
    """
    from bootstrapvz.base.manifest import Manifest
    from bootstrapvz.base.main import run

    def dry_run(path):
        manifest = Manifest(path=path)
        run(manifest, dry_run=True)

    import os.path
    from .. import recursive_glob
    from itertools import chain
    manifests = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../manifests')
    manifest_paths = chain(recursive_glob(manifests, '*.yml'), recursive_glob(manifests, '*.json'))
    for manifest_path in manifest_paths:
        dry_run.description = "Dry-running %s" % os.path.relpath(manifest_path, manifests)
        yield dry_run, manifest_path

bootstrap-vz-0.9.11+20180121git/tests/system/README.rst:

System tests
============

`System tests`__ test bootstrap-vz in its entirety.
This testing includes building images from manifests and creating/booting said images.

__ http://en.wikipedia.org/wiki/System_testing

Since hardcoding manifests for each test, bootstrapping them and booting the resulting
images is too much code for a single test, a testing harness has been developed that
reduces each test to its bare essentials:

* Combine available `manifest partials <#manifest-partials>`__ into a single manifest
* Boot an instance from a manifest
* Run tests on the booted instance

In order for the system testing harness to be able to bootstrap, it must know about your
`build-servers <../../bootstrapvz/remote#build-servers-yml>`__.
Depending on the manifest that is bootstrapped, the harness chooses a fitting
build-server, connects to it and starts the bootstrapping process.
When running system tests, the framework will look for ``build-servers.yml`` at the root
of the repo and raise an error if it is not found.

Manifest combinations
---------------------

The tests mainly focus on varying key parts of an image (e.g. partitioning, Debian
release, bootloader, EC2 backing, EC2 virtualization method) that have been problem
areas. Essentially, the tests are the cartesian product of these key parts.

Aborting a test
---------------

You can press ``Ctrl+C`` at any time during the testing to abort -- the harness will
automatically clean up any temporary resources and shut down running instances.
Pressing ``Ctrl+C`` a second time stops the cleanup and quits immediately.

Manifest partials
-----------------

Instead of creating manifests from scratch for each single test, reusable parts are
factored out into partials in the manifest folder. This allows code like this:

.. code-block:: python

    partials = {'vdi': '{provider: {name: virtualbox}, volume: {backing: vdi}}',
                'vmdk': '{provider: {name: virtualbox}, volume: {backing: vmdk}}',
                }


    def test_unpartitioned_extlinux_oldstable():
        std_partials = ['base', 'stable64', 'extlinux', 'unpartitioned', 'root_password']
        custom_partials = [partials['vmdk']]
        manifest_data = merge_manifest_data(std_partials, custom_partials)

The code above produces a manifest for a Debian stable 64-bit unpartitioned VirtualBox
VMDK image. ``root_password`` is a special partial in that the actual password is
randomly generated on load.

Missing parts
-------------

The system testing harness is in no way complete.

* It still has no support for providers other than VirtualBox, EC2 and Docker.
* Creating an SSH connection to a booted instance is cumbersome and does not happen in
  any of the tests -- this would be particularly useful when manifests are to be tested
  beyond whether they boot up.

bootstrap-vz-0.9.11+20180121git/tests/system/__init__.py: (empty file)

bootstrap-vz-0.9.11+20180121git/tests/system/docker_tests.py:

from manifests import merge_manifest_data
from tools import boot_manifest

partials = {'docker': '''
provider:
  name: docker
  virtualization: hvm
  dockerfile: CMD /bin/bash
bootstrapper:
  variant: minbase
system:
  bootloader: none
volume:
  backing: folder
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 1GiB
''',
}


def test_stable():
    std_partials = ['base', 'stable64']
    custom_partials = [partials['docker']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    with boot_manifest(manifest_data) as instance:
        print('\n'.join(instance.run(['echo', 'test'])))

bootstrap-vz-0.9.11+20180121git/tests/system/ec2_ebs_hvm_tests.py:

from manifests import merge_manifest_data
from tools import boot_manifest

partials = {'ebs_hvm': '''
provider:
  name: ec2
  virtualization: hvm
  description: Debian {system.release} {system.architecture}
volume: {backing: ebs}
''',
            'extlinux': 'system: {bootloader: extlinux}',
            'grub': 'system: {bootloader: grub}',
            }


def test_unpartitioned_extlinux_oldstable():
    std_partials = ['base', 'oldstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_extlinux_oldstable():
    std_partials = ['base', 'oldstable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_extlinux_oldstable():
    std_partials = ['base', 'oldstable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_extlinux_stable():
    std_partials = ['base', 'stable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_extlinux_stable():
    std_partials = ['base', 'stable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_extlinux_stable():
    std_partials = ['base', 'stable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_grub_stable():
    std_partials = ['base', 'stable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['grub']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_grub_stable():
    std_partials = ['base', 'stable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['grub']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_extlinux_unstable():
    std_partials = ['base', 'unstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_extlinux_unstable():
    std_partials = ['base', 'unstable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_extlinux_unstable():
    std_partials = ['base', 'unstable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['extlinux']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_grub_unstable():
    std_partials = ['base', 'unstable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['grub']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_grub_unstable():
    std_partials = ['base', 'unstable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_hvm'], partials['grub']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't2.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)

bootstrap-vz-0.9.11+20180121git/tests/system/ec2_ebs_pvm_tests.py:

from manifests import merge_manifest_data
from tools import boot_manifest

partials = {'ebs_pvm': '''
provider:
  name: ec2
  virtualization: pvm
  description: Debian {system.release} {system.architecture}
system: {bootloader: pvgrub}
volume: {backing: ebs}
'''
}


def test_unpartitioned_oldstable():
    std_partials = ['base', 'oldstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_oldstable():
    std_partials = ['base', 'oldstable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_oldstable():
    std_partials = ['base', 'oldstable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_stable():
    std_partials = ['base', 'stable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_stable():
    std_partials = ['base', 'stable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_stable():
    std_partials = ['base', 'stable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_unstable():
    std_partials = ['base', 'unstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_msdos_unstable():
    std_partials = ['base', 'unstable64', 'msdos', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_gpt_unstable():
    std_partials = ['base', 'unstable64', 'gpt', 'single_partition', 'root_password']
    custom_partials = [partials['ebs_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 't1.micro'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)

bootstrap-vz-0.9.11+20180121git/tests/system/ec2_s3_pvm_tests.py:

from manifests import merge_manifest_data
from tools import boot_manifest
import random

s3_bucket_name = '{id:x}'.format(id=random.randrange(16 ** 16))

partials = {'s3_pvm': '''
provider:
  name: ec2
  virtualization: pvm
  description: Debian {system.release} {system.architecture}
  bucket: ''' + s3_bucket_name + '''
system: {bootloader: pvgrub}
volume: {backing: s3}
'''
}


def test_unpartitioned_oldstable():
    std_partials = ['base', 'oldstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['s3_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 'm1.small'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_stable():
    std_partials = ['base', 'stable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['s3_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 'm1.small'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)


def test_unpartitioned_unstable():
    std_partials = ['base', 'unstable64', 'unpartitioned', 'root_password']
    custom_partials = [partials['s3_pvm']]
    manifest_data = merge_manifest_data(std_partials, custom_partials)
    boot_vars = {'instance_type': 'm1.small'}
    with boot_manifest(manifest_data, boot_vars) as instance:
        print(instance.get_console_output().output)

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/__init__.py:

import os.path
import glob
import random
import string
from bootstrapvz.common.tools import load_data

partial_yaml = glob.glob(os.path.join(os.path.dirname(__file__), '*.yml'))
partial_json = glob.glob(os.path.join(os.path.dirname(__file__), '*.json'))
partials = {}
for path in partial_yaml + partial_json:
    key = os.path.splitext(os.path.basename(path))[0]
    if key in partials:
        msg = 'Error when loading partial manifests: The partial {key} exists twice'.format(key=key)
        raise Exception(msg)
    partials[key] = load_data(path)

pool = string.ascii_uppercase + string.ascii_lowercase + string.digits
random_password = ''.join(random.choice(pool) for _ in range(16))
partials['root_password']['plugins']['root_password']['password'] = random_password


def merge_manifest_data(standard_partials=[], custom=[]):
    import yaml
    manifest_data = [partials[name] for name in standard_partials]
    manifest_data.extend(yaml.load(data) for data in custom)
    return merge_dicts(*manifest_data)


# Snatched from here: http://stackoverflow.com/a/7205107
def merge_dicts(*args):
    def clone(obj):
        copy = obj
        if isinstance(obj, dict):
            copy = {key: clone(value) for key, value in obj.iteritems()}
        if isinstance(obj, list):
            copy = [clone(value) for value in obj]
        if isinstance(obj, set):
            copy = set([clone(value) for value in obj])
        return copy

    def merge(a, b, path=[]):
        for key in b:
            if key in a:
                if isinstance(a[key], dict) and isinstance(b[key], dict):
                    merge(a[key], b[key], path + [str(key)])
                elif a[key] == b[key]:
                    pass
                else:
                    raise Exception('Conflict at `{path}\''.format(path='.'.join(path + [str(key)])))
            else:
                a[key] = clone(b[key])
        return a
    return reduce(merge, args, {})

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/base.yml:

---
name: deb-{system.release}-{system.architecture}-{system.bootloader}-{volume.partitions.type}-{%y}{%m}{%d}
provider: {}
bootstrapper:
  workspace: /target
  tarball: true
system:
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  partitions: {}
packages: {}

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/extlinux.yml:

---
system:
  bootloader: extlinux

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/gpt.yml:

---
volume:
  partitions:
    type: gpt

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/grub.yml:

---
system:
  bootloader: grub

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/msdos.yml:

---
volume:
  partitions:
    type: msdos

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/oldstable64.yml:

---
system:
  release: oldstable
  architecture: amd64

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/partitioned.yml:

---
volume:
  partitions:
    boot:
      filesystem: ext2
      size: 64MiB
    root:
      filesystem: ext4
      size: 832MiB
    swap:
      size: 128MiB

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/root_password.yml:

---
plugins:
  root_password:
    password: random password set by the partial manifest loader

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/single_partition.yml:

---
volume:
  partitions:
    root:
      filesystem: ext4
      size: 1GiB

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/stable64.yml:

---
system:
  release: stable
  architecture: amd64

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/stable86.yml:

---
system:
  release: stable
  architecture: x86

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/unpartitioned.yml:

---
volume:
  partitions:
    type: none
    root:
      filesystem: ext4
      size: 1GiB

bootstrap-vz-0.9.11+20180121git/tests/system/manifests/unstable64.yml:

---
system:
  release: unstable
  architecture: amd64

bootstrap-vz-0.9.11+20180121git/tests/system/providers/README.rst:

System testing providers are implemented on top of the abstraction that is the testing
harness.

Implementation
--------------

At their most basic level, all they need to implement is the ``boot_image()`` function,
which, when called, boots the image that has been bootstrapped. It should yield something
the test can use to ascertain whether the image has been successfully bootstrapped
(i.e. a reference to the bootlog or an object with various functions to interact with
the booted instance). How this is implemented is up to the individual provider.

A ``prepare_bootstrap()`` function may also be implemented, to ensure that the
bootstrapping process can succeed (e.g. create the AWS S3 bucket into which an image
should be uploaded).

Both functions are generators that yield, so that they may clean up any created
resources once testing is done (or has failed, so remember to wrap ``yield`` in a
``try:.. finally:..``).

Debugging
---------

When developing a system test provider, debugging through multiple invocations of
``tox`` can be cumbersome. A short test script, which sets up logging and invokes a
specific test, can be used instead.

Example:

.. code-block:: python

    #!/usr/bin/env python
    from tests.system.docker_tests import test_stable
    from bootstrapvz.base.main import setup_loggers

    setup_loggers({'--log': '-', '--color': 'default', '--debug': True})

    test_stable()

bootstrap-vz-0.9.11+20180121git/tests/system/providers/__init__.py: (empty file)

bootstrap-vz-0.9.11+20180121git/tests/system/providers/docker/README.rst:

Docker
------

Dependencies
~~~~~~~~~~~~

The host machine running the system tests must have docker installed.
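The generator-based provider contract described in the providers README above (yield the booted instance, clean up in ``finally``) can be sketched as follows. This is not a real bootstrap-vz provider: ``FakeInstance`` and its methods are invented here purely to illustrate the yield/cleanup pattern.

```python
from contextlib import contextmanager


class FakeInstance(object):
    # Stand-in for a booted image; a real provider would wrap an actual VM/container.
    def __init__(self):
        self.running = True

    def get_console_output(self):
        return 'boot log'

    def terminate(self):
        self.running = False


@contextmanager
def boot_image(manifest, build_server, bootstrap_info):
    instance = FakeInstance()  # a real provider boots the bootstrapped image here
    try:
        yield instance  # the test interacts with the booted instance at this point
    finally:
        instance.terminate()  # clean up even when the test raised


with boot_image(None, None, None) as instance:
    print(instance.get_console_output())
# -> boot log
```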
bootstrap-vz-0.9.11+20180121git/tests/system/providers/docker/__init__.py000066400000000000000000000050711323112141500257310ustar00rootroot00000000000000from contextlib import contextmanager import logging log = logging.getLogger(__name__) @contextmanager def boot_image(manifest, build_server, bootstrap_info): image_id = None try: import os from bootstrapvz.common.tools import log_check_call docker_machine = build_server.run_settings.get('docker', {}).get('machine', None) docker_env = os.environ.copy() if docker_machine is not None: cmd = ('eval "$(docker-machine env {machine})" && ' 'echo $DOCKER_HOST && echo $DOCKER_CERT_PATH && echo $DOCKER_TLS_VERIFY' .format(machine=docker_machine)) [docker_host, docker_cert_path, docker_tls] = log_check_call([cmd], shell=True) docker_env['DOCKER_TLS_VERIFY'] = docker_tls docker_env['DOCKER_HOST'] = docker_host docker_env['DOCKER_CERT_PATH'] = docker_cert_path docker_env['DOCKER_MACHINE_NAME'] = docker_machine from bootstrapvz.remote.build_servers.local import LocalBuildServer image_id = bootstrap_info._docker['image_id'] if not isinstance(build_server, LocalBuildServer): import tempfile handle, image_path = tempfile.mkstemp() os.close(handle) remote_image_path = os.path.join('/tmp', image_id) try: log.debug('Saving remote image to file') build_server.remote_command([ 'sudo', 'docker', 'save', '--output=' + remote_image_path, image_id, ]) log.debug('Downloading remote image') build_server.download(remote_image_path, image_path) log.debug('Importing image') log_check_call(['docker', 'load', '--input=' + image_path], env=docker_env) except (Exception, KeyboardInterrupt): raise finally: log.debug('Deleting exported image from build server and locally') build_server.delete(remote_image_path) os.remove(image_path) log.debug('Deleting image from build server') build_server.remote_command(['sudo', 'docker', 'rmi', bootstrap_info._docker['image_id']]) from image import Image with Image(image_id, docker_env) as container: yield 
container finally: if image_id is not None: log.debug('Deleting image') log_check_call(['docker', 'rmi', image_id], env=docker_env) bootstrap-vz-0.9.11+20180121git/tests/system/providers/docker/image.py000066400000000000000000000032361323112141500252550ustar00rootroot00000000000000from bootstrapvz.common.tools import log_check_call import logging log = logging.getLogger(__name__) class Image(object): def __init__(self, image_id, docker_env): self.image_id = image_id self.docker_env = docker_env def __enter__(self): self.container = Container(self.image_id, self.docker_env) self.container.create() try: self.container.start() except Exception: self.container.destroy() raise return self.container def __exit__(self, exc_type, exc_value, traceback): try: self.container.stop() self.container.destroy() except Exception as e: log.exception(e) class Container(object): def __init__(self, image_id, docker_env): self.image_id = image_id self.docker_env = docker_env def create(self): log.debug('Creating container') [self.container_id] = log_check_call(['docker', 'create', '--tty=true', self.image_id], env=self.docker_env) def start(self): log.debug('Starting container') log_check_call(['docker', 'start', self.container_id], env=self.docker_env) def run(self, command): log.debug('Running command in container') return log_check_call(['docker', 'exec', self.container_id] + command, env=self.docker_env) def stop(self): log.debug('Stopping container') log_check_call(['docker', 'stop', self.container_id], env=self.docker_env) def destroy(self): log.debug('Deleting container') log_check_call(['docker', 'rm', self.container_id], env=self.docker_env) del self.container_id bootstrap-vz-0.9.11+20180121git/tests/system/providers/ec2/000077500000000000000000000000001323112141500230175ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/tests/system/providers/ec2/README.rst000066400000000000000000000002161323112141500245050ustar00rootroot00000000000000EC2 --- Dependencies 
~~~~~~~~~~~~ The host machine running the system tests must have the python package ``boto`` installed (``>= 2.14.0``). bootstrap-vz-0.9.11+20180121git/tests/system/providers/ec2/__init__.py000066400000000000000000000126531323112141500251370ustar00rootroot00000000000000from contextlib import contextmanager from tests.system.tools import waituntil import logging log = logging.getLogger(__name__) @contextmanager def prepare_bootstrap(manifest, build_server): if manifest.volume['backing'] == 's3': credentials = {'access-key': build_server.build_settings['ec2-credentials']['access-key'], 'secret-key': build_server.build_settings['ec2-credentials']['secret-key']} from boto.s3 import connect_to_region as s3_connect s3_connection = s3_connect(manifest.image['region'], aws_access_key_id=credentials['access-key'], aws_secret_access_key=credentials['secret-key']) log.debug('Creating S3 bucket') bucket = s3_connection.create_bucket(manifest.image['bucket'], location=manifest.image['region']) try: yield finally: log.debug('Deleting S3 bucket') for item in bucket.list(): bucket.delete_key(item.key) s3_connection.delete_bucket(manifest.image['bucket']) else: yield @contextmanager def boot_image(manifest, build_server, bootstrap_info, instance_type=None): credentials = {'access-key': build_server.run_settings['ec2-credentials']['access-key'], 'secret-key': build_server.run_settings['ec2-credentials']['secret-key']} from boto.ec2 import connect_to_region as ec2_connect ec2_connection = ec2_connect(bootstrap_info._ec2['region'], aws_access_key_id=credentials['access-key'], aws_secret_access_key=credentials['secret-key']) from boto.vpc import connect_to_region as vpc_connect vpc_connection = vpc_connect(bootstrap_info._ec2['region'], aws_access_key_id=credentials['access-key'], aws_secret_access_key=credentials['secret-key']) if manifest.volume['backing'] == 'ebs': from images import EBSImage image = EBSImage(bootstrap_info._ec2['image'], ec2_connection) if 
manifest.volume['backing'] == 's3': from images import S3Image image = S3Image(bootstrap_info._ec2['image'], ec2_connection) try: with run_instance(image, manifest, instance_type, ec2_connection, vpc_connection) as instance: yield instance finally: image.destroy() @contextmanager def run_instance(image, manifest, instance_type, ec2_connection, vpc_connection): with create_env(ec2_connection, vpc_connection) as boot_env: def waituntil_instance_is(state): def instance_has_state(): instance.update() return instance.state == state return waituntil(instance_has_state, timeout=600, interval=3) instance = None try: log.debug('Booting ec2 instance') reservation = image.ami.run(instance_type=instance_type, subnet_id=boot_env['subnet_id']) [instance] = reservation.instances instance.add_tag('Name', 'bootstrap-vz test instance') if not waituntil_instance_is('running'): raise EC2InstanceStartupException('Timeout while booting instance') if not waituntil(lambda: instance.get_console_output().output is not None, timeout=600, interval=3): raise EC2InstanceStartupException('Timeout while fetching console output') from bootstrapvz.common.releases import wheezy if manifest.release <= wheezy: termination_string = 'INIT: Entering runlevel: 2' else: termination_string = 'Debian GNU/Linux' console_output = instance.get_console_output().output if termination_string not in console_output: last_lines = '\n'.join(console_output.split('\n')[-50:]) message = ('The instance did not boot properly.\n' 'Last 50 lines of console output:\n{output}'.format(output=last_lines)) raise EC2InstanceStartupException(message) yield instance finally: if instance is not None: log.debug('Terminating ec2 instance') instance.terminate() if not waituntil_instance_is('terminated'): raise EC2InstanceStartupException('Timeout while terminating instance') # wait a little longer, aws can be a little slow sometimes and think the instance is still running import time time.sleep(15) @contextmanager def 
create_env(ec2_connection, vpc_connection): vpc_cidr = '10.0.0.0/28' subnet_cidr = '10.0.0.0/28' @contextmanager def vpc(): log.debug('Creating VPC') vpc = vpc_connection.create_vpc(vpc_cidr) try: yield vpc finally: log.debug('Deleting VPC') vpc_connection.delete_vpc(vpc.id) @contextmanager def subnet(vpc): log.debug('Creating subnet') subnet = vpc_connection.create_subnet(vpc.id, subnet_cidr) try: yield subnet finally: log.debug('Deleting subnet') vpc_connection.delete_subnet(subnet.id) with vpc() as _vpc: with subnet(_vpc) as _subnet: yield {'subnet_id': _subnet.id} class EC2InstanceStartupException(Exception): pass bootstrap-vz-0.9.11+20180121git/tests/system/providers/ec2/images.py000066400000000000000000000012641323112141500246410ustar00rootroot00000000000000import logging log = logging.getLogger(__name__) class AmazonMachineImage(object): def __init__(self, image_id, ec2_connection): self.ec2_connection = ec2_connection self.ami = self.ec2_connection.get_image(image_id) class EBSImage(AmazonMachineImage): def destroy(self): log.debug('Deleting AMI') self.ami.deregister() for device, block_device_type in self.ami.block_device_mapping.items(): self.ec2_connection.delete_snapshot(block_device_type.snapshot_id) del self.ami class S3Image(AmazonMachineImage): def destroy(self): log.debug('Deleting AMI') self.ami.deregister() del self.ami bootstrap-vz-0.9.11+20180121git/tests/system/providers/virtualbox/000077500000000000000000000000001323112141500245455ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/tests/system/providers/virtualbox/README.rst000066400000000000000000000003271323112141500262360ustar00rootroot00000000000000Virtualbox ---------- Dependencies ~~~~~~~~~~~~ VirtualBox itself is required on the machine that is running the system tests. The same machine also needs to have python package ``vboxapi`` (``>=1.0``) installed. 
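The nested setup/teardown pattern used by ``create_env()`` in the EC2 provider above (create a VPC, then a subnet inside it, and tear both down in reverse order) can be sketched generically. The resource names and the ``log`` parameter here are illustrative, not part of the real provider:

```python
from contextlib import contextmanager


@contextmanager
def resource(name, log):
    # Create the resource on entry, delete it on exit.
    log.append('create ' + name)
    try:
        yield name
    finally:
        log.append('delete ' + name)


@contextmanager
def create_env(log):
    # Inner resources nest inside outer ones, so deletion happens in
    # reverse creation order: the subnet is removed before its VPC.
    with resource('vpc', log) as vpc:
        with resource('subnet-of-' + vpc, log) as subnet:
            yield {'subnet_id': subnet}
```

This mirrors how the EC2 provider guarantees that the subnet is deleted before the VPC even if booting the instance raises.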
bootstrap-vz-0.9.11+20180121git/tests/system/providers/virtualbox/__init__.py000066400000000000000000000031441323112141500266600ustar00rootroot00000000000000from contextlib import contextmanager import logging log = logging.getLogger(__name__) @contextmanager def boot_image(manifest, build_server, bootstrap_info): from bootstrapvz.remote.build_servers.local import LocalBuildServer if isinstance(build_server, LocalBuildServer): image_path = bootstrap_info.volume.image_path else: import tempfile handle, image_path = tempfile.mkstemp() import os os.close(handle) try: build_server.download(bootstrap_info.volume.image_path, image_path) except (Exception, KeyboardInterrupt): os.remove(image_path) raise finally: build_server.delete(bootstrap_info.volume.image_path) from image import VirtualBoxImage image = VirtualBoxImage(image_path) import hashlib image_hash = hashlib.sha1(image_path).hexdigest() instance_name = 'bootstrap-vz-{hash}'.format(hash=image_hash[:8]) try: image.open() try: with run_instance(image, instance_name, manifest) as instance: yield instance finally: image.close() finally: image.destroy() @contextmanager def run_instance(image, instance_name, manifest): from instance import VirtualBoxInstance instance = VirtualBoxInstance(image, instance_name, manifest.system['architecture'], manifest.release) try: instance.create() try: instance.boot() yield instance finally: instance.shutdown() finally: instance.destroy() bootstrap-vz-0.9.11+20180121git/tests/system/providers/virtualbox/image.py000066400000000000000000000017201323112141500262010ustar00rootroot00000000000000import virtualbox import logging log = logging.getLogger(__name__) class VirtualBoxImage(object): def __init__(self, image_path): self.image_path = image_path self.vbox = virtualbox.VirtualBox() def open(self): log.debug('Opening vbox medium `{path}\''.format(path=self.image_path)) self.medium = self.vbox.open_medium(self.image_path, # location virtualbox.library.DeviceType.hard_disk, # device_type 
virtualbox.library.AccessMode.read_only, # access_mode False) # force_new_uuid def close(self): log.debug('Closing vbox medium `{path}\''.format(path=self.image_path)) self.medium.close() def destroy(self): log.debug('Deleting vbox image `{path}\''.format(path=self.image_path)) import os os.remove(self.image_path) del self.image_path bootstrap-vz-0.9.11+20180121git/tests/system/providers/virtualbox/instance.py000066400000000000000000000127611323112141500267320ustar00rootroot00000000000000import virtualbox from contextlib import contextmanager from tests.system.tools import waituntil import logging log = logging.getLogger(__name__) class VirtualBoxInstance(object): cpus = 1 memory = 256 def __init__(self, image, name, arch, release): self.image = image self.name = name self.arch = arch self.release = release self.vbox = virtualbox.VirtualBox() manager = virtualbox.Manager() self.session = manager.get_session() def create(self): log.debug('Creating vbox machine `{name}\''.format(name=self.name)) # create machine os_type = {'x86': 'Debian', 'amd64': 'Debian_64'}.get(self.arch) self.machine = self.vbox.create_machine(settings_file='', name=self.name, groups=[], os_type_id=os_type, flags='') self.machine.cpu_count = self.cpus self.machine.memory_size = self.memory self.machine.save_settings() # save settings, so that we can register it self.vbox.register_machine(self.machine) # attach image log.debug('Attaching SATA storage controller to vbox machine `{name}\''.format(name=self.name)) with lock(self.machine, self.session) as machine: strg_ctrl = machine.add_storage_controller('SATA Controller', virtualbox.library.StorageBus.sata) strg_ctrl.port_count = 1 machine.attach_device(name='SATA Controller', controller_port=0, device=0, type_p=virtualbox.library.DeviceType.hard_disk, medium=self.image.medium) machine.save_settings() # redirect serial port log.debug('Enabling serial port on vbox machine `{name}\''.format(name=self.name)) with lock(self.machine, self.session) as 
machine: serial_port = machine.get_serial_port(0) serial_port.enabled = True import tempfile handle, self.serial_port_path = tempfile.mkstemp() import os os.close(handle) serial_port.path = self.serial_port_path serial_port.host_mode = virtualbox.library.PortMode.host_pipe serial_port.server = True # Create the socket on startup machine.save_settings() def boot(self): log.debug('Booting vbox machine `{name}\''.format(name=self.name)) self.machine.launch_vm_process(self.session, 'headless').wait_for_completion(-1) from tests.system.tools import read_from_socket # Gotta figure out a more reliable way to check when the system is done booting. # Maybe bootstrapped unit test images should have a startup script that issues # a callback to the host. from bootstrapvz.common.releases import wheezy if self.release <= wheezy: termination_string = 'INIT: Entering runlevel: 2' else: termination_string = 'Debian GNU/Linux' self.console_output = read_from_socket(self.serial_port_path, termination_string, 120) def shutdown(self): log.debug('Shutting down vbox machine `{name}\''.format(name=self.name)) self.session.console.power_down().wait_for_completion(-1) if not waituntil(lambda: self.machine.session_state == virtualbox.library.SessionState.unlocked): raise LockingException('Timeout while waiting for the machine to become unlocked') def destroy(self): log.debug('Destroying vbox machine `{name}\''.format(name=self.name)) if hasattr(self, 'machine'): try: log.debug('Detaching SATA storage controller from vbox machine `{name}\''.format(name=self.name)) with lock(self.machine, self.session) as machine: machine.detach_device(name='SATA Controller', controller_port=0, device=0) machine.save_settings() except virtualbox.library.VBoxErrorObjectNotFound: pass log.debug('Unregistering and removing vbox machine `{name}\''.format(name=self.name)) self.machine.unregister(virtualbox.library.CleanupMode.unregister_only) self.machine.remove(delete=True) else: log.debug('vbox machine `{name}\' 
was not created, skipping destruction'.format(name=self.name)) @contextmanager def lock(machine, session): if machine.session_state != virtualbox.library.SessionState.unlocked: msg = ('Acquiring lock on machine failed, state was `{state}\' ' 'instead of `Unlocked\'.'.format(state=str(machine.session_state))) raise LockingException(msg) machine.lock_machine(session, virtualbox.library.LockType.write) yield session.machine if machine.session_state != virtualbox.library.SessionState.locked: if not waituntil(lambda: machine.session_state == virtualbox.library.SessionState.unlocked): msg = ('Error before trying to release lock on machine, state was `{state}\' ' 'instead of `Locked\'.'.format(state=str(machine.session_state))) raise LockingException(msg) session.unlock_machine() if not waituntil(lambda: machine.session_state == virtualbox.library.SessionState.unlocked): msg = ('Timeout while trying to release lock on machine, ' 'last state was `{state}\''.format(state=str(machine.session_state))) raise LockingException(msg) class LockingException(Exception): pass bootstrap-vz-0.9.11+20180121git/tests/system/tools/000077500000000000000000000000001323112141500214715ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/tests/system/tools/__init__.py000066400000000000000000000062721323112141500236110ustar00rootroot00000000000000from contextlib import contextmanager from bootstrapvz.remote import register_deserialization_handlers import logging log = logging.getLogger(__name__) # Register deserialization handlers for objects # that will pass between server and client register_deserialization_handlers() @contextmanager def boot_manifest(manifest_data, boot_vars={}): from bootstrapvz.common.tools import load_data build_servers = load_data('build-servers.yml') from bootstrapvz.remote.build_servers import pick_build_server build_server = pick_build_server(build_servers, manifest_data) manifest_data = build_server.apply_build_settings(manifest_data) from 
bootstrapvz.base.manifest import Manifest manifest = Manifest(data=manifest_data) import importlib provider_module = importlib.import_module('tests.system.providers.' + manifest.provider['name']) prepare_bootstrap = getattr(provider_module, 'prepare_bootstrap', noop) with prepare_bootstrap(manifest, build_server): bootstrap_info = None log.info('Connecting to build server') with build_server.connect() as connection: log.info('Building manifest') bootstrap_info = connection.run(manifest) log.info('Creating and booting instance') with provider_module.boot_image(manifest, build_server, bootstrap_info, **boot_vars) as instance: yield instance def waituntil(predicate, timeout=5, interval=0.05): import time threshhold = time.time() + timeout while time.time() < threshhold: if predicate(): return True time.sleep(interval) return False def read_from_socket(socket_path, termination_string, timeout, read_timeout=0.5): import socket import select import errno console = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) console.connect(socket_path) console.setblocking(0) from timeit import default_timer start = default_timer() output = '' ptr = 0 continue_select = True while continue_select: read_ready, _, _ = select.select([console], [], [], read_timeout) if console in read_ready: while True: try: output += console.recv(1024) if termination_string in output[ptr:]: continue_select = False else: ptr = len(output) - len(termination_string) break except socket.error, e: if e.errno != errno.EWOULDBLOCK: raise Exception(e) continue_select = False if default_timer() - start > timeout: from exceptions import SocketReadTimeout msg = ('Reading from socket `{path}\' timed out after {seconds} seconds.\n' 'Here is the output so far:\n{output}' .format(path=socket_path, seconds=timeout, output=output)) raise SocketReadTimeout(msg) console.close() return output @contextmanager def noop(*args, **kwargs): yield 
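The ``waituntil()`` helper defined above polls a predicate until it returns true or a timeout elapses; it is used throughout the providers to wait for EC2 instance state changes and VirtualBox session locks. A self-contained equivalent (same logic as the source, with the ``threshhold`` spelling corrected):

```python
import time


def waituntil(predicate, timeout=5, interval=0.05):
    # Poll the predicate until it holds or the deadline passes.
    threshold = time.time() + timeout
    while time.time() < threshold:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

Callers typically wrap a state check in a closure, e.g. ``waituntil(lambda: instance.state == 'running', timeout=600, interval=3)``.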
bootstrap-vz-0.9.11+20180121git/tests/system/tools/exceptions.py000066400000000000000000000000571323112141500242260ustar00rootroot00000000000000 class SocketReadTimeout(Exception): pass bootstrap-vz-0.9.11+20180121git/tests/system/virtualbox_tests.py000066400000000000000000000106711323112141500243310ustar00rootroot00000000000000from manifests import merge_manifest_data from tools import boot_manifest partials = {'vdi': '{provider: {name: virtualbox}, volume: {backing: vdi}}', 'vmdk': '{provider: {name: virtualbox}, volume: {backing: vmdk}}', } def test_unpartitioned_extlinux_oldstable(): std_partials = ['base', 'oldstable64', 'extlinux', 'unpartitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_msdos_extlinux_oldstable(): std_partials = ['base', 'oldstable64', 'extlinux', 'msdos', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_gpt_extlinux_oldstable(): std_partials = ['base', 'oldstable64', 'extlinux', 'gpt', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_unpartitioned_extlinux_stable(): std_partials = ['base', 'stable64', 'extlinux', 'unpartitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_msdos_extlinux_stable(): std_partials = ['base', 'stable64', 'extlinux', 'msdos', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, 
custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_gpt_extlinux_stable(): std_partials = ['base', 'stable64', 'extlinux', 'gpt', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_msdos_grub_stable(): std_partials = ['base', 'stable64', 'grub', 'msdos', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_gpt_grub_stable(): std_partials = ['base', 'stable64', 'grub', 'gpt', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_unpartitioned_extlinux_unstable(): std_partials = ['base', 'unstable64', 'extlinux', 'unpartitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_msdos_extlinux_unstable(): std_partials = ['base', 'unstable64', 'extlinux', 'msdos', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_gpt_extlinux_unstable(): std_partials = ['base', 'unstable64', 'extlinux', 'gpt', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_msdos_grub_unstable(): std_partials = ['base', 'unstable64', 'grub', 'msdos', 
'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) def test_gpt_grub_unstable(): std_partials = ['base', 'unstable64', 'grub', 'gpt', 'partitioned', 'root_password'] custom_partials = [partials['vmdk']] manifest_data = merge_manifest_data(std_partials, custom_partials) with boot_manifest(manifest_data) as instance: print(instance.console_output) bootstrap-vz-0.9.11+20180121git/tests/unit/000077500000000000000000000000001323112141500177645ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/tests/unit/README.rst000066400000000000000000000000261323112141500214510ustar00rootroot00000000000000Unit tests ========== bootstrap-vz-0.9.11+20180121git/tests/unit/__init__.py000066400000000000000000000000001323112141500220630ustar00rootroot00000000000000bootstrap-vz-0.9.11+20180121git/tests/unit/bytes_tests.py000066400000000000000000000033521323112141500227110ustar00rootroot00000000000000from nose.tools import eq_ from nose.tools import raises from bootstrapvz.common.bytes import Bytes from bootstrapvz.common.exceptions import UnitError def test_lt(): assert Bytes('1MiB') < Bytes('2MiB') def test_le(): assert Bytes('1MiB') <= Bytes('2MiB') assert Bytes('1MiB') <= Bytes('1MiB') def test_eq(): eq_(Bytes('1MiB'), Bytes('1MiB')) def test_neq(): assert Bytes('15MiB') != Bytes('1MiB') def test_gt(): assert Bytes('2MiB') > Bytes('1MiB') def test_ge(): assert Bytes('2MiB') >= Bytes('1MiB') assert Bytes('2MiB') >= Bytes('2MiB') def test_eq_unit(): eq_(Bytes('1024MiB'), Bytes('1GiB')) def test_add(): eq_(Bytes('2GiB'), Bytes('1GiB') + Bytes('1GiB')) def test_iadd(): b = Bytes('1GiB') b += Bytes('1GiB') eq_(Bytes('2GiB'), b) def test_sub(): eq_(Bytes('1GiB'), Bytes('2GiB') - Bytes('1GiB')) def test_isub(): b = Bytes('2GiB') b -= Bytes('1GiB') eq_(Bytes('1GiB'), b) def test_mul(): eq_(Bytes('2GiB'), Bytes('1GiB') * 2) 
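These unit tests pin down the arithmetic semantics of ``bootstrapvz.common.bytes.Bytes``: quantities parse binary-prefixed unit strings, compare and add across units, and raise ``UnitError`` for dimensionally nonsensical operations (bytes times bytes, bytes modulo a bare int). A simplified, self-contained re-implementation of that behavior — a sketch for illustration, not the real class:

```python
class UnitError(Exception):
    pass


UNITS = {'B': 1, 'KiB': 1024, 'MiB': 1024 ** 2,
         'GiB': 1024 ** 3, 'TiB': 1024 ** 4}


class Bytes(object):
    def __init__(self, qty):
        if isinstance(qty, int):
            self.qty = qty
        else:
            # Split e.g. '1024MiB' into the number and the unit suffix.
            num = qty.rstrip('BKMGTi')
            self.qty = int(num) * UNITS[qty[len(num):]]

    def __int__(self):
        return self.qty

    def __eq__(self, other):
        return self.qty == int(other)

    def __add__(self, other):
        return Bytes(self.qty + int(other))

    def __sub__(self, other):
        return Bytes(self.qty - int(other))

    def __mul__(self, other):
        if isinstance(other, Bytes):
            # bytes * bytes has no meaningful unit
            raise UnitError('Cannot multiply bytes by bytes')
        return Bytes(self.qty * other)

    def __mod__(self, other):
        if not isinstance(other, Bytes):
            # a bare int carries no unit to take the remainder against
            raise UnitError('Can only take modulo against bytes')
        return Bytes(self.qty % int(other))
```

This makes the test expectations above concrete: ``Bytes('1024MiB') == Bytes('1GiB')`` holds because both normalize to the same byte count.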
@raises(UnitError) def test_mul_bytes(): Bytes('1GiB') * Bytes('1GiB') def test_imul(): b = Bytes('1GiB') b *= 2 eq_(Bytes('2GiB'), b) def test_div(): eq_(Bytes('1GiB'), Bytes('2GiB') / 2) def test_div_bytes(): eq_(2, Bytes('2GiB') / Bytes('1GiB')) def test_idiv(): b = Bytes('2GiB') b /= 2 eq_(Bytes('1GiB'), b) def test_mod(): eq_(Bytes('256MiB'), Bytes('1GiB') % Bytes('768MiB')) @raises(UnitError) def test_mod_int(): Bytes('1GiB') % 768 def test_imod(): b = Bytes('1GiB') b %= Bytes('768MiB') eq_(Bytes('256MiB'), b) @raises(UnitError) def test_imod_int(): b = Bytes('1GiB') b %= 5 def test_convert_int(): eq_(pow(1024, 3), int(Bytes('1GiB'))) bootstrap-vz-0.9.11+20180121git/tests/unit/manifests_tests.py000066400000000000000000000021601323112141500235500ustar00rootroot00000000000000 def test_manifest_generator(): """ manifests_tests - test_manifest_generator. Loops through the manifests directory and tests that each file can successfully be loaded and validated. """ from nose.tools import assert_true from bootstrapvz.base.manifest import Manifest def validate_manifest(path): manifest = Manifest(path=path) assert_true(manifest.data) assert_true(manifest.data['name']) assert_true(manifest.data['provider']) assert_true(manifest.data['bootstrapper']) assert_true(manifest.data['volume']) assert_true(manifest.data['system']) import os.path from .. 
import recursive_glob from itertools import chain manifests = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../manifests') manifest_paths = chain(recursive_glob(manifests, '*.yml'), recursive_glob(manifests, '*.json')) for manifest_path in manifest_paths: validate_manifest.description = "Validating %s" % os.path.relpath(manifest_path, manifests) yield validate_manifest, manifest_path bootstrap-vz-0.9.11+20180121git/tests/unit/releases_tests.py000066400000000000000000000020561323112141500233660ustar00rootroot00000000000000from nose.tools import raises from bootstrapvz.common import releases def test_gt(): assert releases.wheezy > releases.squeeze def test_lt(): assert releases.wheezy < releases.stretch def test_eq(): assert releases.wheezy == releases.wheezy def test_neq(): assert releases.wheezy != releases.jessie def test_identity(): assert releases.wheezy is releases.wheezy def test_not_identity(): # "==" tests equality "is" tests identity assert releases.stretch == releases.stable assert releases.stretch is not releases.stable assert releases.stable is releases.stable assert releases.stretch is releases.stretch assert releases.jessie != releases.stable assert releases.jessie is not releases.stable def test_alias(): assert releases.oldstable == releases.jessie assert releases.stable == releases.stretch assert releases.testing == releases.buster assert releases.unstable == releases.sid @raises(releases.UnknownReleaseException) def test_bogus_releasename(): releases.get_release('nemo') bootstrap-vz-0.9.11+20180121git/tests/unit/sectors_tests.py000066400000000000000000000054651323112141500232540ustar00rootroot00000000000000from nose.tools import eq_ from nose.tools import raises from bootstrapvz.common.sectors import Sectors from bootstrapvz.common.bytes import Bytes from bootstrapvz.common.exceptions import UnitError std_secsz = Bytes(512) def test_init_with_int(): secsize = 4096 eq_(Sectors('1MiB', secsize), Sectors(256, secsize)) def test_lt(): 
assert Sectors('1MiB', std_secsz) < Sectors('2MiB', std_secsz) def test_le(): assert Sectors('1MiB', std_secsz) <= Sectors('2MiB', std_secsz) assert Sectors('1MiB', std_secsz) <= Sectors('1MiB', std_secsz) def test_eq(): eq_(Sectors('1MiB', std_secsz), Sectors('1MiB', std_secsz)) def test_neq(): assert Sectors('15MiB', std_secsz) != Sectors('1MiB', std_secsz) def test_gt(): assert Sectors('2MiB', std_secsz) > Sectors('1MiB', std_secsz) def test_ge(): assert Sectors('2MiB', std_secsz) >= Sectors('1MiB', std_secsz) assert Sectors('2MiB', std_secsz) >= Sectors('2MiB', std_secsz) def test_eq_unit(): eq_(Sectors('1024MiB', std_secsz), Sectors('1GiB', std_secsz)) def test_add(): eq_(Sectors('2GiB', std_secsz), Sectors('1GiB', std_secsz) + Sectors('1GiB', std_secsz)) @raises(UnitError) def test_add_with_diff_secsize(): Sectors('1GiB', Bytes(512)) + Sectors('1GiB', Bytes(4096)) def test_iadd(): s = Sectors('1GiB', std_secsz) s += Sectors('1GiB', std_secsz) eq_(Sectors('2GiB', std_secsz), s) def test_sub(): eq_(Sectors('1GiB', std_secsz), Sectors('2GiB', std_secsz) - Sectors('1GiB', std_secsz)) def test_sub_int(): secsize = Bytes('4KiB') eq_(Sectors('1MiB', secsize), Sectors('1028KiB', secsize) - 1) def test_isub(): s = Sectors('2GiB', std_secsz) s -= Sectors('1GiB', std_secsz) eq_(Sectors('1GiB', std_secsz), s) def test_mul(): eq_(Sectors('2GiB', std_secsz), Sectors('1GiB', std_secsz) * 2) @raises(UnitError) def test_mul_bytes(): Sectors('1GiB', std_secsz) * Sectors('1GiB', std_secsz) def test_imul(): s = Sectors('1GiB', std_secsz) s *= 2 eq_(Sectors('2GiB', std_secsz), s) def test_div(): eq_(Sectors('1GiB', std_secsz), Sectors('2GiB', std_secsz) / 2) def test_div_bytes(): eq_(2, Sectors('2GiB', std_secsz) / Sectors('1GiB', std_secsz)) def test_idiv(): s = Sectors('2GiB', std_secsz) s /= 2 eq_(Sectors('1GiB', std_secsz), s) def test_mod(): eq_(Sectors('256MiB', std_secsz), Sectors('1GiB', std_secsz) % Sectors('768MiB', std_secsz)) @raises(UnitError) def test_mod_int(): 
Sectors('1GiB', std_secsz) % 768 def test_imod(): s = Sectors('1GiB', std_secsz) s %= Sectors('768MiB', std_secsz) eq_(Sectors('256MiB', std_secsz), s) @raises(UnitError) def test_imod_int(): s = Sectors('1GiB', std_secsz) s %= 5 def test_convert_int(): secsize = 512 eq_(pow(1024, 3) / secsize, int(Sectors('1GiB', secsize))) bootstrap-vz-0.9.11+20180121git/tests/unit/subprocess.sh000077500000000000000000000011451323112141500225140ustar00rootroot00000000000000#!/bin/bash # Expected input from stdin: # streamNo delay message # Running the following # # (cat <&$stream done wait bootstrap-vz-0.9.11+20180121git/tests/unit/tools_tests.py000066400000000000000000000020571323112141500227240ustar00rootroot00000000000000import os from nose.tools import eq_ from bootstrapvz.common.tools import log_call subprocess_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'subprocess.sh') def setup_logger(): import logging root = logging.getLogger() root.setLevel(logging.NOTSET) import StringIO output = StringIO.StringIO() string_handler = logging.StreamHandler(output) string_handler.setLevel(logging.DEBUG) root.addHandler(string_handler) return output def test_log_call_output_order(): logged = setup_logger() fixture = """ 2 0.0 one\\\\n 1 0.4 two\\\\n 1 0.4 four\\\\n 2 0.4 No, three..\\\\n 1 0.4 three\\\\n """ status, stdout, stderr = log_call([subprocess_path], stdin=fixture) eq_(status, 0) eq_(stderr, ['one', 'No, three..']) eq_(stdout, ['two', 'four', 'three']) expected_order = ['one', 'two', 'four', 'No, three..', 'three', ] eq_(expected_order, logged.getvalue().split("\n")[8:-1]) bootstrap-vz-0.9.11+20180121git/tox.ini000066400000000000000000000015301323112141500171550ustar00rootroot00000000000000[tox] envlist = flake8, yamllint, unit, integration, docs [flake8] ignore = E221,E241,E501 max-line-length = 110 [testenv] basepython = python2.7 [testenv:flake8] deps = flake8 commands = flake8 bootstrapvz/ tests/ --exclude=minify_json.py {posargs} [testenv:unit] deps = 
nose nose-cov commands = nosetests --verbose {posargs:tests/unit} [testenv:integration] deps = nose nose-cov commands = nosetests --verbose {posargs:tests/integration} [testenv:system] deps = nose nose-cov nose-htmloutput pyvbox >= 0.2.0 commands = nosetests --with-html --html-file=system.html --verbose {posargs:tests/system} [testenv:docs] changedir = docs deps = sphinx != 1.5 sphinx_rtd_theme commands = sphinx-build -W -b html -d _build/html/doctrees . _build/html [testenv:yamllint] deps = yamllint commands = yamllint manifests
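With the ``tox.ini`` above, environments can be run selectively with tox's standard ``-e`` flag, and extra arguments after ``--`` are forwarded into the ``{posargs}`` placeholder of the selected environment's command (standard tox behavior; shown as usage illustration):

```shell
# Run only the unit test environment
tox -e unit

# Forward a specific test module to nosetests via {posargs}
tox -e unit -- tests/unit/bytes_tests.py

# Run the linters
tox -e flake8,yamllint
```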